Compare commits

...

93 Commits

Author SHA1 Message Date
Tõnis Tiigi
6db68d0295 Merge pull request #155 from tiborvass/vendor-buildkit
vendor: update buildkit to docker-19.03 (ae10b292)
2019-09-27 10:36:16 -07:00
Tibor Vass
abe8ba769e vendor: update buildkit to docker-19.03 (ae10b292)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-09-27 17:18:25 +00:00
Tõnis Tiigi
96fb17b711 Merge pull request #154 from tiborvass/fix-149
build: fix scoping issue in closure inside loop
2019-09-26 11:32:04 -07:00
Tibor Vass
63e5633d62 build: fix scoping issue in closure inside loop
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-09-26 18:01:29 +00:00
Tibor Vass
299d41660b Merge pull request #153 from tonistiigi/stdin-dockerfile
build: fix stdin dockerfile filename
2019-09-26 10:53:28 -07:00
Tibor Vass
1ec87b7beb Merge pull request #152 from tonistiigi/stream-input
build: use correct in-memory input
2019-09-26 10:45:55 -07:00
Tonis Tiigi
0475107882 build: fix stdin dockerfile filename
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-09-26 09:17:04 -07:00
Tonis Tiigi
75f8d7ebb5 build: use correct in-memory input
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-09-26 09:10:39 -07:00
Tibor Vass
7c97854b6f Merge pull request #144 from droopy4096/master
Add FOSSA checks to Jenkins CI
2019-09-17 14:56:00 -07:00
Dmytro Makovey
5f4d4a87f7 Add FOSSA checks to Jenkins CI
Signed-off-by: Dmytro Makovey <dmytro.makovey@docker.com>
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-09-17 21:27:29 +00:00
Tõnis Tiigi
c1ce7300d5 Merge pull request #146 from gfrancesco/master
README typo
2019-09-17 10:19:34 -07:00
gfrancesco
e118c4d8e9 UPD: Readme typo 2019-09-17 18:13:16 +02:00
Tibor Vass
5fe779703d Merge pull request #134 from tonistiigi/group-merge
bake: merge targets on same groups
2019-09-05 17:15:01 -07:00
Tonis Tiigi
15a5a42eb1 bake: merge targets on same groups
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-08-19 15:48:42 -07:00
Tõnis Tiigi
5b974158f9 Merge pull request #131 from gracenoah/patch-1
Fix some quotes in the readme
2019-08-14 12:16:13 -07:00
gracenoah
1c0a7f14e8 Fix some quotes in the readme 2019-08-13 14:27:10 +02:00
Tibor Vass
7ec8912591 Merge pull request #125 from tiborvass/docs-allow
Document build --allow
2019-08-01 18:18:00 -07:00
Tibor Vass
83da6a3378 docs: crosslink buildkitd-flags and config flags in create
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-08-01 17:56:05 -07:00
Tibor Vass
cad02a4681 docs: document build --allow
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-08-01 17:56:05 -07:00
Tõnis Tiigi
c967f1d570 Merge pull request #124 from tiborvass/update-docs
Update docs
2019-08-01 16:41:26 -07:00
Tibor Vass
be3efc979b docs: add documentation for --buildkitd-flags, --config, --driver-opt on create
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-08-01 16:15:11 -07:00
Tibor Vass
5c5f54c6d6 docs: Update install instructions with Docker CE 19.03
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-08-01 15:23:02 -07:00
Tibor Vass
6f8f04e1f8 Merge pull request #122 from tonistiigi/custom-image
driver: allow setting driver opts
2019-08-01 11:41:49 -07:00
Tonis Tiigi
afd821010d docker-container: allow setting custom buildkit image
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-07-31 22:46:37 -07:00
Tonis Tiigi
bcc882cbf1 docker-container: allow using host network
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-07-31 17:42:49 -07:00
Tonis Tiigi
75b80c277f driver: allow setting driver opts
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-07-31 17:25:25 -07:00
Tibor Vass
096d1befc9 Merge pull request #104 from tonistiigi/entitlements
build: add allowed entitlements
2019-07-31 15:36:13 -07:00
Tibor Vass
2bf6187a88 Merge pull request #121 from tonistiigi/config
driver: allow setting buildkit config file
2019-07-31 15:21:17 -07:00
Tonis Tiigi
8ed8795268 driver: allow setting buildkit config file
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
Co-Authored-By: Tibor Vass <tiborvass@users.noreply.github.com>
2019-07-31 15:08:26 -07:00
Tõnis Tiigi
6e32ea3418 Merge pull request #118 from tiborvass/bake-no-cache-pull
bake: honor --no-cache and --pull
2019-07-31 10:59:59 -07:00
Tibor Vass
8b2171f78a bake: honor --no-cache and --pull
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-07-30 19:39:01 -07:00
Tibor Vass
92f1234aaa Merge pull request #116 from tonistiigi/build-arg-default
build: load default build args from env
2019-07-30 19:20:09 -07:00
Tibor Vass
73645c8348 Merge pull request #117 from tonistiigi/compose-env
bake: replace env in compose files
2019-07-30 19:14:21 -07:00
Tonis Tiigi
662c0768cb bake: replace env in compose files
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-07-30 16:44:05 -07:00
Tonis Tiigi
43150ef849 build: load default build args from env
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-07-30 16:32:36 -07:00
Tibor Vass
3f18b659a0 Merge pull request #102 from tonistiigi/buildkitd-flags
driver: allow configuring buildkitd flags
2019-07-09 17:27:17 -07:00
Tonis Tiigi
6b81b0bed6 build: add allowed entitlements
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-07-08 15:59:53 -07:00
Tonis Tiigi
f0af89a204 driver: allow configuring buildkitd flags
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-07-08 15:29:43 -07:00
Tõnis Tiigi
550c2b9042 Merge pull request #100 from FernandoMiguel/patch-1
add chmod
2019-07-06 12:38:08 -07:00
Fernando Miguel
c8cda08209 add chmod 2019-07-05 12:14:40 +01:00
Tõnis Tiigi
2b03339235 Merge pull request #93 from zelahi/enable-fossa-scan
[TAR-853] ADDED .fossa file for fossa scans
2019-06-17 09:21:16 -07:00
zelahi
6e1fd0eab6 ADDED .fossa file for fossa scans 2019-06-14 10:49:12 -07:00
Tõnis Tiigi
5336e74bd4 Merge pull request #89 from khs1994/master
Fix Dockerfile format
2019-06-05 13:56:29 -07:00
Tõnis Tiigi
afeaed790f Merge pull request #86 from AkihiroSuda/driver-ls
Put driver names to create --help
2019-06-05 13:55:44 -07:00
khs1994
aed531a8a9 Fix Dockerfile format
Signed-off-by: Kang HuaiShuai <khs1994@khs1994.com>
2019-06-04 17:43:39 +08:00
Akihiro Suda
eee78c6c10 Put driver names to create --help
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
2019-06-02 00:02:20 +09:00
Tõnis Tiigi
ab5fe3dec5 Merge pull request #87 from tiborvass/no-build-field
[Carry #79] Change compose file handling to require valid service specifications
2019-05-29 19:22:22 -07:00
Tibor Vass
b741350afd bake: compose parser should only error if there are neither build nor image fields
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-05-29 18:12:30 -07:00
Tõnis Tiigi
8b6dfbd9c8 Merge pull request #85 from tiborvass/license-contributing
Add project files (LICENSE, AUTHORS, MAINTAINERS, Code of Conduct, CONTRIBUTING)
2019-05-24 18:39:53 -07:00
Jack Laxson
4b2666b9d6 Change compose file handling to require valid service specifications
Added the checks and some tests
One of the tests wasn't valid docker-compose.yml, that's been changed.
Bad config throws an error and has a test

Signed-off-by: Jack Laxson <jackjrabbit@gmail.com>
2019-05-24 17:41:48 -07:00
Sebastiaan van Stijn
854f704a2f Add LICENSE file
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-05-24 17:35:34 -07:00
Sebastiaan van Stijn
138b2e7415 Add contributing, code of conduct
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-24 17:34:06 -07:00
Sebastiaan van Stijn
e1f54de9ac Add maintainers and authors
Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2019-05-24 17:33:28 -07:00
Tibor Vass
61a6fdd767 Merge pull request #84 from tiborvass/platform-local
Update README to use --platform=local with Docker 19.03
2019-05-24 17:16:03 -07:00
Tibor Vass
77c23dd85f Update README to use --platform=local with Docker 19.03
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-05-24 23:29:14 +00:00
Tibor Vass
0eb2df54ce Merge pull request #83 from tonistiigi/relative-dockerfile-path
bake: make dockerfile relative to context
2019-05-24 16:18:50 -07:00
Tonis Tiigi
f1fd9a274b bake: make dockerfile relative to context
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-05-24 16:04:01 -07:00
Tibor Vass
4e61674ac8 Merge pull request #81 from tiborvass/experimental
Allow Docker to ship buildx guarded by the experimental flag
2019-05-23 14:42:28 -07:00
Tibor Vass
695034153a cmd/buildx: add empty by default experimental variable
To guard this Docker CLI plugin with docker's experimental CLI config,
buildx needs to be built with `go build -ldflags "-X main.experimental=1"`

Signed-off-by: Tibor Vass <tibor@docker.com>
2019-05-23 21:21:27 +00:00
Tibor Vass
fb9cf7720a Merge pull request #82 from tiborvass/vendor-cli
vendor: update docker/cli (ab688a9a79a1) and docker/docker (3998dffb806f)
2019-05-23 14:20:40 -07:00
Tibor Vass
03ae6f8e54 vendor: update docker/cli (ab688a9a79a1) and docker/docker (3998dffb806f)
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-05-23 20:43:16 +00:00
Tibor Vass
715d38ff96 Merge pull request #75 from tonistiigi/update-buildkit
vendor: update buildkit to f238f1e
2019-05-15 10:39:55 -07:00
Tibor Vass
3045eb1b10 Merge pull request #76 from tonistiigi/build-flags
build: add missing flags
2019-05-15 10:39:05 -07:00
Tonis Tiigi
717a4afae0 build: add missing flags
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-05-14 18:06:42 -07:00
Tonis Tiigi
b68b005f68 vendor: update buildkit to f238f1e
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-05-14 17:59:01 -07:00
Tõnis Tiigi
cb83df7a69 Merge pull request #71 from cpuguy83/docker_bake_file_help
Correct help output for default bake file.
2019-05-08 10:56:37 -07:00
Brian Goff
e23e4a6bdc Correct help output for default bake file.
Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2019-05-08 08:54:00 -07:00
Tõnis Tiigi
84f7e9d02d Merge pull request #70 from tonistiigi/local-platform
platformutil: add local platform
2019-05-06 17:24:41 -07:00
Tõnis Tiigi
dcd8ca9308 Merge pull request #69 from tonistiigi/progress-env
progress: add env config
2019-05-06 17:24:31 -07:00
Tibor Vass
b3fe1a333d Merge pull request #68 from tonistiigi/arm-variant
imagetools: keep arm variant
2019-05-06 17:11:49 -07:00
Tibor Vass
f38dfd2032 Merge pull request #67 from tonistiigi/unfork-cli
dockerfile: unfork cli
2019-05-06 17:07:49 -07:00
Tibor Vass
250558eb3a Merge pull request #66 from tonistiigi/df-update
dockerfile: update to 1.1
2019-05-06 17:05:58 -07:00
Tibor Vass
5d0db8be7f Merge pull request #64 from tonistiigi/skip-target
readme: simplify install instructions
2019-05-06 17:05:10 -07:00
Tonis Tiigi
13b74e0d80 platformutil: add local platform
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-05-06 16:55:36 -07:00
Tonis Tiigi
9b57f9e872 imagetools: keep arm variant
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-05-06 16:50:42 -07:00
Tonis Tiigi
f8e8f17f4e progress: add env config
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-05-06 16:39:09 -07:00
Tõnis Tiigi
361f52ab6c Merge pull request #65 from tiborvass/load-multiarch
Explain single-platform limitation of docker export type
2019-05-06 16:14:05 -07:00
Tonis Tiigi
bffca0b271 dockerfile: unfork cli
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-05-06 16:12:09 -07:00
Tonis Tiigi
9bc85fc3d8 dockerfile: update to 1.1
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-05-06 16:04:23 -07:00
Tibor Vass
707e17d05c Explain single-platform limitation of docker export type
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-05-06 15:33:17 -07:00
Tõnis Tiigi
0066fcc9db Merge pull request #60 from tiborvass/linux-install
Add Linux Nightly Docker install instructions
2019-05-06 14:40:16 -07:00
Tibor Vass
c222726c15 Add Linux Docker install instructions
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-05-06 14:24:22 -07:00
Tonis Tiigi
25ae122153 readme: simplify install instructions
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-05-06 14:20:41 -07:00
Tibor Vass
7339f85248 Merge pull request #61 from northtyphoon/bindu/readme
fix the typo in --cache-to example
2019-05-06 14:15:25 -07:00
Bin Du
c5a19f754f fix the typo in --cache-to example
Signed-off-by: Bin Du <bindu@microsoft.com>
2019-05-05 23:38:39 -07:00
Tibor Vass
bbbbc2ca4b Merge pull request #58 from tiborvass/readme-install
Update install instructions in README
2019-05-01 10:16:49 -07:00
Tibor Vass
93d21f1a86 Update install instructions in README
Signed-off-by: Tibor Vass <tibor@docker.com>
2019-05-01 10:01:45 -07:00
Tõnis Tiigi
509c4b6bd4 Merge pull request #54 from tonistiigi/readme-update
readme: more guide content
2019-04-25 23:48:54 -07:00
Tonis Tiigi
9e4a609346 readme: more guide content
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-04-25 23:27:27 -07:00
Tõnis Tiigi
4c84819a32 Merge pull request #53 from tonistiigi/compose-target
bake: fix parsing target from compose files
2019-04-25 22:53:49 -07:00
Tonis Tiigi
43356bbbbe bake: fix parsing target from compose files
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-04-25 21:36:28 -07:00
Tõnis Tiigi
d5a84b1593 Merge pull request #52 from tonistiigi/title-format
readme: better title formatting
2019-04-25 09:16:01 -07:00
Tonis Tiigi
1c35bc7d91 readme: better title formatting
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2019-04-24 21:35:54 -07:00
538 changed files with 15747 additions and 216130 deletions

14
.fossa.yml Executable file

@@ -0,0 +1,14 @@
# Generated by FOSSA CLI (https://github.com/fossas/fossa-cli)
# Visit https://fossa.com to learn more
version: 2
cli:
server: https://app.fossa.io
fetcher: custom
project: git@github.com:docker/buildx
analyze:
modules:
- name: github.com/docker/buildx/cmd/buildx
type: go
target: github.com/docker/buildx/cmd/buildx
path: cmd/buildx

4
.github/CODE_OF_CONDUCT.md vendored Normal file

@@ -0,0 +1,4 @@
# Code of conduct
- [Moby community guidelines](https://github.com/moby/moby/blob/master/CONTRIBUTING.md#moby-community-guidelines)
- [Docker Code of Conduct](https://github.com/docker/code-of-conduct)

292
.github/CONTRIBUTING.md vendored Normal file

@@ -0,0 +1,292 @@
# Contribute to the Buildx project
This page contains information about reporting issues as well as some tips and
guidelines useful to experienced open source contributors.
## Reporting security issues
The project maintainers take security seriously. If you discover a security
issue, please bring it to their attention right away!
**Please _DO NOT_ file a public issue**, instead send your report privately to
[security@docker.com](mailto:security@docker.com).
Security reports are greatly appreciated and we will publicly thank you for it.
We also like to send gifts&mdash;if you're into schwag, make sure to let
us know. We currently do not offer a paid security bounty program, but are not
ruling it out in the future.
## Reporting other issues
A great way to contribute to the project is to send a detailed report when you
encounter an issue. We always appreciate a well-written, thorough bug report,
and will thank you for it!
Check that [our issue database](https://github.com/docker/buildx/issues)
doesn't already include that problem or suggestion before submitting an issue.
If you find a match, you can use the "subscribe" button to get notified on
updates. Do *not* leave random "+1" or "I have this too" comments, as they
only clutter the discussion and don't help resolve it. However, if you
have ways to reproduce the issue or have additional information that may help
resolve it, please leave a comment.
Include the steps required to reproduce the problem if possible and applicable.
This information will help us review and fix your issue faster. When sending
lengthy log-files, consider posting them as an attachment, instead of posting
inline.
**Do not forget to remove sensitive data from your logfiles before submitting**
(you can replace those parts with "REDACTED").
### Pull requests are always welcome
Not sure if that typo is worth a pull request? Found a bug and know how to fix
it? Do it! We will appreciate it.
If your pull request is not accepted on the first try, don't be discouraged! If
there's a problem with the implementation, hopefully you received feedback on
what to improve.
We're trying very hard to keep Buildx lean and focused. We don't want it to
do everything for everybody. This means that we might decide against
incorporating a new feature. However, there might be a way to implement that
feature *on top of* Buildx.
### Design and cleanup proposals
You can propose new designs for existing features. You can also design
entirely new features. We really appreciate contributors who want to refactor or
otherwise cleanup our project.
### Sign your work
The sign-off is a simple line at the end of the explanation for the patch. Your
signature certifies that you wrote the patch or otherwise have the right to pass
it on as an open-source patch. The rules are pretty simple: if you can certify
the below (from [developercertificate.org](http://developercertificate.org/)):
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
Then you just add a line to every git commit message:
Signed-off-by: Joe Smith <joe.smith@email.com>
**Use your real name** (sorry, no pseudonyms or anonymous contributions).
If you set your `user.name` and `user.email` git configs, you can sign your
commit automatically with `git commit -s`.
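For example, a minimal sketch (the name, email, and commit message are placeholders):
```
$ git config user.name "Joe Smith"
$ git config user.email joe.smith@email.com
$ git commit -s -m "Update install instructions"
```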
### Run the unit- and integration-tests
To enter a demo container environment and experiment, you may run:
```
$ make shell
```
To validate PRs before submitting them you should run:
```
$ make validate-all
```
To generate new vendored files with go modules run:
```
$ make vendor
```
### Conventions
- Fork the repository and make changes on your fork in a feature branch
- Submit tests for your changes. See [run the unit- and integration-tests](#run-the-unit--and-integration-tests)
for details.
- [Sign your work](#sign-your-work)
Write clean code. Universally formatted code promotes ease of writing, reading,
and maintenance. Always run `gofmt -s -w file.go` on each changed file before
committing your changes. Most editors have plug-ins that do this automatically.
Pull request descriptions should be as clear as possible and include a
reference to all the issues that they address. Be sure that the [commit
messages](#commit-messages) also contain the relevant information.
### Successful Changes
When contributing large or high-impact changes, make the effort to coordinate
with the maintainers of the project before submitting a pull request. This
prevents you from doing extra work that may or may not be merged.
Large PRs that are just submitted without any prior communication are unlikely
to be successful.
While pull requests are the methodology for submitting changes to code, changes
are much more likely to be accepted if they are accompanied by additional
engineering work. While we don't define this explicitly, most of these goals
are accomplished through communication of the design goals and subsequent
solutions. Often, it helps to first state the problem before presenting
solutions.
Typically, the best method of accomplishing this is to submit an issue
stating the problem. This issue can include a problem statement and a
checklist with requirements. If solutions are proposed, alternatives should be
listed and eliminated. Even if the criterion for eliminating a solution is
frivolous, say so.
Larger changes typically work best with design documents. These are focused on
providing context to the design at the time the feature was conceived and can
inform future documentation contributions.
### Commit Messages
Commit messages must start with a capitalized and short summary (max. 50 chars)
written in the imperative, followed by an optional, more detailed explanatory
text which is separated from the summary by an empty line.
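As an illustrative sketch only (the subject, body, and sign-off below are made up for this example), a message of that shape might look like:
```
Add --config flag to buildx create

Allow passing a buildkitd configuration file when creating a new
builder instance. The file can be overridden by flags passed via
--buildkitd-flags.

Signed-off-by: Joe Smith <joe.smith@email.com>
```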
Commit messages should follow best practices, including explaining the context
of the problem and how it was solved, including any caveats or follow-up changes
required. They should tell the story of the change and give readers an
understanding of what led to it.
If you're lost about what this even means, please see [How to Write a Git
Commit Message](http://chris.beams.io/posts/git-commit/) for a start.
In practice, the best approach to maintaining a nice commit message is to
use `git add -p` and `git commit --amend` to formulate a solid
changeset. This allows one to piece together a change, as information becomes
available.
If you squash a series of commits, don't just submit that. Re-write the commit
message, as if the series of commits was a single stroke of brilliance.
That said, there is no requirement to have a single commit for a PR, as long as
each commit tells the story. For example, if there is a feature that requires a
package, it might make sense to have the package in a separate commit and then have
a subsequent commit that uses it.
Remember, you're telling part of the story with the commit message. Don't make
your chapter weird.
### Review
Code review comments may be added to your pull request. Discuss, then make the
suggested modifications and push additional commits to your feature branch. Post
a comment after pushing. New commits show up in the pull request automatically,
but the reviewers are notified only when you comment.
Pull requests must be cleanly rebased on top of master without multiple branches
mixed into the PR.
> **Git tip**: If your PR no longer merges cleanly, use `rebase master` in your
> feature branch to update your pull request rather than `merge master`.
Before you make a pull request, squash your commits into logical units of work
using `git rebase -i` and `git push -f`. A logical unit of work is a consistent
set of patches that should be reviewed together: for example, upgrading the
version of a vendored dependency and taking advantage of its now available new
feature constitute two separate units of work. Implementing a new function and
calling it in another file constitute a single logical unit of work. The vast
majority of submissions should have a single commit, so if in doubt, squash
down to one.
- After every commit, [make sure the test suite passes](#run-the-unit--and-integration-tests).
Include documentation changes in the same pull request so that a revert would
remove all traces of the feature or fix.
- Include an issue reference like `closes #XXXX` or `fixes #XXXX` in the PR
description if the pull request closes an issue. Including the reference
automatically closes the issue on merge.
- Do not add yourself to the `AUTHORS` file, as it is regenerated regularly
from the Git history.
- See the [Coding Style](#coding-style) for further guidelines.
### Merge approval
Project maintainers use LGTM (Looks Good To Me) in comments on the code review to
indicate acceptance, or use the GitHub review approval feature.
## Coding Style
Unless explicitly stated, we follow all coding guidelines from the Go
community. While some of these standards may seem arbitrary, they somehow seem
to result in a solid, consistent codebase.
It is possible that the code base does not currently comply with these
guidelines. We are not looking for a massive PR that fixes this, since that
goes against the spirit of the guidelines. All new contributions should make a
best effort to clean up and leave the code base better than they found it.
Obviously, apply your best judgement. Remember, the goal here is to make the
code base easier for humans to navigate and understand. Always keep that in
mind when nudging others to comply.
The rules:
1. All code should be formatted with `gofmt -s`.
2. All code should pass the default levels of
[`golint`](https://github.com/golang/lint).
3. All code should follow the guidelines covered in [Effective
Go](http://golang.org/doc/effective_go.html) and [Go Code Review
Comments](https://github.com/golang/go/wiki/CodeReviewComments).
4. Comment the code. Tell us the why, the history and the context.
5. Document _all_ declarations and methods, even private ones. Declare
expectations, caveats and anything else that may be important. If a type
gets exported, having the comments already there will ensure it's ready.
6. Variable name length should be proportional to its context and no longer.
`noCommaALongVariableNameLikeThisIsNotMoreClearWhenASimpleCommentWouldDo`.
In practice, short methods will have short variable names and globals will
have longer names.
7. No underscores in package names. If you need a compound name, step back,
and re-examine why you need a compound name. If you still think you need a
compound name, lose the underscore.
8. No utils or helpers packages. If a function is not general enough to
warrant its own package, it has not been written generally enough to be a
part of a util package. Just leave it unexported and well-documented.
9. All tests should run with `go test` and outside tooling should not be
required. No, we don't need another unit testing framework. Assertion
packages are acceptable if they provide _real_ incremental value.
10. Even though we call these "rules" above, they are actually just
guidelines. Since you've read all the rules, you now know that.
If you are having trouble getting into the mood of idiomatic Go, we recommend
reading through [Effective Go](https://golang.org/doc/effective_go.html). The
[Go Blog](https://blog.golang.org) is also a great resource.

6
.mailmap Normal file

@@ -0,0 +1,6 @@
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see `hack/generate-authors`.
Tibor Vass <tibor@docker.com>
Tibor Vass <tibor@docker.com> <tiborvass@users.noreply.github.com>
Tõnis Tiigi <tonistiigi@gmail.com>

7
AUTHORS Normal file

@@ -0,0 +1,7 @@
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see `scripts/generate-authors.sh`.
Bin Du <bindu@microsoft.com>
Brian Goff <cpuguy83@gmail.com>
Tibor Vass <tibor@docker.com>
Tõnis Tiigi <tonistiigi@gmail.com>


@@ -1,4 +1,4 @@
# syntax=docker/dockerfile:1.0-experimental
# syntax=docker/dockerfile:1.1-experimental
ARG DOCKERD_VERSION=19.03-rc
ARG CLI_VERSION=19.03
@@ -32,15 +32,15 @@ RUN --mount=target=. --mount=target=/root/.cache,type=cache \
FROM buildx-build AS integration-tests
COPY . .
FROM golang:1.12-alpine AS docker-cli-build
RUN apk add -U git bash coreutils gcc musl-dev
ENV CGO_ENABLED=0
ARG REPO=github.com/tiborvass/cli
ARG BRANCH=cli-plugin-aliases
ARG CLI_VERSION
WORKDIR /go/src/github.com/docker/cli
RUN git clone git://$REPO . && git checkout $BRANCH
RUN ./scripts/build/binary
# FROM golang:1.12-alpine AS docker-cli-build
# RUN apk add -U git bash coreutils gcc musl-dev
# ENV CGO_ENABLED=0
# ARG REPO=github.com/tiborvass/cli
# ARG BRANCH=cli-plugin-aliases
# ARG CLI_VERSION
# WORKDIR /go/src/github.com/docker/cli
# RUN git clone git://$REPO . && git checkout $BRANCH
# RUN ./scripts/build/binary
FROM scratch AS binaries-unix
COPY --from=buildx-build /usr/bin/buildx /
@@ -69,7 +69,7 @@ RUN mkdir -p /usr/local/lib/docker/cli-plugins && ln -s /usr/local/bin/buildx /u
COPY ./hack/demo-env/entrypoint.sh /usr/local/bin
COPY ./hack/demo-env/tmux.conf /root/.tmux.conf
COPY --from=dockerd-release /usr/local/bin /usr/local/bin
COPY --from=docker-cli-build /go/src/github.com/docker/cli/build/docker /usr/local/bin
#COPY --from=docker-cli-build /go/src/github.com/docker/cli/build/docker /usr/local/bin
WORKDIR /work
COPY ./hack/demo-env/examples .

29
Jenkinsfile vendored Normal file

@@ -0,0 +1,29 @@
@Library('jps')
_
pipeline {
agent {
node {
label 'ubuntu-1804-overlay2'
}
}
options {
disableConcurrentBuilds()
}
stages {
stage("FOSSA Analyze") {
steps {
withCredentials([string(credentialsId: 'fossa-api-key', variable: 'FOSSA_API_KEY')]) {
withGithubStatus('FOSSA.scan') {
labelledShell returnStatus: false, returnStdout: true, label: "make fossa-analyze",
script:'make -f Makefile.fossa BRANCH_NAME=${BRANCH_NAME} fossa-analyze'
labelledShell returnStatus: false, returnStdout: true, label: "make fossa-test",
script: 'make -f Makefile.fossa BRANCH_NAME=${BRANCH_NAME} fossa-test'
}
}
}
}
}
}

192
MAINTAINERS Normal file

@@ -0,0 +1,192 @@
# Buildx maintainers file
#
# This file describes the maintainer groups within the project.
# More detail on Moby project governance is available in the
# https://github.com/moby/moby/blob/master/project/GOVERNANCE.md file.
#
# It is structured to be consumable by both humans and programs.
# To extract its contents programmatically, use any TOML-compliant
# parser.
#
[Rules]
[Rules.maintainers]
title = "What is a maintainer?"
text = """
There are different types of maintainers, with different
responsibilities, but all maintainers have 3 things in common:
1) They share responsibility in the project's success.
2) They have made a long-term, recurring time investment to improve
the project.
3) They spend that time doing whatever needs to be done, not
necessarily what is the most interesting or fun.
Maintainers are often under-appreciated, because their work is harder
to appreciate. It's easy to appreciate a really cool and technically
advanced feature. It's harder to appreciate the absence of bugs, the
slow but steady improvement in stability, or the reliability of a
release process. But those things distinguish a good project from a
great one.
"""
[Rules.adding-maintainers]
title = "How are maintainers added?"
text = """
Maintainers are first and foremost contributors that have shown they
are committed to the long term success of a project. Contributors
wanting to become maintainers are expected to be deeply involved in
contributing code, pull request review, and triage of issues in the
project for more than three months.
Just contributing does not make you a maintainer, it is about building
trust with the current maintainers of the project and being a person
that they can depend on and trust to make decisions in the best
interest of the project.
Periodically, the existing maintainers curate a list of contributors
that have shown regular activity on the project over the prior
months. From this list, maintainer candidates are selected.
After a candidate has been announced, the existing maintainers are
given five business days to discuss the candidate, raise objections
and cast their vote. Candidates must be approved by at least 66% of
the current maintainers by adding their vote on the slack
channel. Only maintainers of the repository that the candidate is
proposed for are allowed to vote.
If a candidate is approved, a maintainer will contact the candidate to
invite the candidate to open a pull request that adds the contributor
to the MAINTAINERS file. The candidate becomes a maintainer once the
pull request is merged.
"""
[Rules.stepping-down-policy]
title = "Stepping down policy"
text = """
Life priorities, interests, and passions can change. If you're a
maintainer but feel you must remove yourself from the list, inform
other maintainers that you intend to step down, and if possible, help
find someone to pick up your work. At the very least, ensure your
work can be continued where you left off.
After you've informed other maintainers, create a pull request to
remove yourself from the MAINTAINERS file.
"""
[Rules.inactive-maintainers]
title = "Removal of inactive maintainers"
text = """
Similar to the procedure for adding new maintainers, existing
maintainers can be removed from the list if they do not show
significant activity on the project. Periodically, the maintainers
review the list of maintainers and their activity over the last three
months.
If a maintainer has shown insufficient activity over this period, a
neutral person will contact the maintainer to ask if they want to
continue being a maintainer. If the maintainer decides to step down as
a maintainer, they open a pull request to be removed from the
MAINTAINERS file.
If the maintainer wants to remain a maintainer, but is unable to
perform the required duties they can be removed with a vote of at
least 66% of the current maintainers. The voting period is five
business days. Issues related to a maintainer's performance should be
discussed with them among the other maintainers so that they are not
surprised by a pull request removing them.
"""
[Rules.DCO]
title = "Helping contributors with the DCO"
text = """
The [DCO or `Sign your work`](
https://github.com/moby/buildkit/blob/master/CONTRIBUTING.md#sign-your-work)
requirement is not intended as a roadblock or speed bump.
Some BuildKit contributors are not as familiar with `git`, or have
used a web based editor, and thus asking them to `git commit --amend
-s` is not the best way forward.
In this case, maintainers can update the commits based on clause (c)
of the DCO. The most trivial way for a contributor to allow the
maintainer to do this, is to add a DCO signature in a pull requests's
comment, or a maintainer can simply note that the change is
sufficiently trivial that it does not substantially change the
existing contribution - i.e., a spelling change.
When you add someone's DCO, please also add your own to keep a log.
"""
[Rules."no direct push"]
title = "I'm a maintainer. Should I make pull requests too?"
text = """
Yes. Nobody should ever push to master directly. All changes should be
made through a pull request.
"""
[Rules.meta]
title = "How is this process changed?"
text = "Just like everything else: by making a pull request :)"
[Org]
[Org.Maintainers]
people = [
"tiborvass",
"tonistiigi",
]
[Org.Curators]
# The curators help ensure that incoming issues and pull requests are properly triaged and
# that our various contribution and reviewing processes are respected. With their knowledge of
# the repository activity, they can also guide contributors to relevant material or
# discussions.
#
# They are neither code nor docs reviewers, so they are never expected to merge. They can
# however:
# - close an issue or pull request when it's an exact duplicate
# - close an issue or pull request when it's inappropriate or off-topic
people = [
"thajeztah",
]
[people]
# A reference list of all people associated with the project.
# All other sections should refer to people by their canonical key
# in the people section.
[people.thajeztah]
Name = "Sebastiaan van Stijn"
Email = "github@gone.nl"
GitHub = "thaJeztah"
[people.tiborvass]
Name = "Tibor Vass"
Email = "tibor@docker.com"
GitHub = "tiborvass"
[people.tonistiigi]
Name = "Tõnis Tiigi"
Email = "tonis@docker.com"
GitHub = "tonistiigi"


@@ -25,4 +25,7 @@ validate-all: lint test validate-vendor
vendor:
./hack/update-vendor
.PHONY: vendor lint shell binaries install binaries-cross validate-all
generate-authors:
./hack/generate-authors
.PHONY: vendor lint shell binaries install binaries-cross validate-all generate-authors

18
Makefile.fossa Normal file

@@ -0,0 +1,18 @@
REPO_PATH?=docker/buildx
BUILD_ANALYZER?=docker/fossa-analyzer
FOSSA_OPTS?=--option all-tags:true --option allow-unresolved:true --no-ansi
fossa-analyze:
docker run -i --rm -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
-v $(CURDIR)/$*:/go/src/github.com/$(REPO_PATH) \
-w /go/src/github.com/$(REPO_PATH) \
-e GO111MODULE=on \
$(BUILD_ANALYZER) analyze $(FOSSA_OPTS) --branch $(BRANCH_NAME)
# This command is used to run the fossa test command
fossa-test:
docker run -i --rm -e FOSSA_API_KEY=$(FOSSA_API_KEY) \
-v $(CURDIR)/$*:/go/src/github.com/$(REPO_PATH) \
-w /go/src/github.com/$(REPO_PATH) \
-e GO111MODULE=on \
$(BUILD_ANALYZER) test --debug

238
README.md

@@ -1,6 +1,7 @@
# buildx - Docker CLI plugin for extended build capabilities with BuildKit
# buildx
### Docker CLI plugin for extended build capabilities with BuildKit
## buildx is Tech Preview
_buildx is Tech Preview_
### TL;DR
@@ -18,6 +19,11 @@
- [Building](#building)
+ [with Docker 18.09+](#with-docker-1809)
+ [with buildx or Docker 19.03](#with-buildx-or-docker-1903)
- [Getting started](#getting-started)
* [Building with buildx](#building-with-buildx)
* [Working with builder instances](#working-with-builder-instances)
* [Building multi-platform images](#building-multi-platform-images)
* [High-level build options](#high-level-build-options)
- [Documentation](#documentation)
+ [`buildx build [OPTIONS] PATH | URL | -`](#buildx-build-options-path--url---)
+ [`buildx create [OPTIONS] [CONTEXT|ENDPOINT]`](#buildx-create-options-contextendpoint)
@@ -35,28 +41,21 @@
# Installing
Using `buildx` as a docker CLI plugin requires using Docker 19.03.0 beta. A limited set of functionality works with older versions of Docker when invoking the binary directly.
Using `buildx` as a docker CLI plugin requires using Docker 19.03. A limited set of functionality works with older versions of Docker when invoking the binary directly.
<!---
### Docker CE
### Docker Desktop (Edge)
`buildx` is included with Docker Desktop edge builds since 19.03.0-beta3.
For more information see https://docs.docker.com/docker-for-mac/edge-release-notes/
### Docker CE nightly builds
`buildx` comes bundled with the Docker CE nightly builds.
https://download.docker.com/linux/static/nightly/
https://download.docker.com/mac/static/nightly/
-->
`buildx` comes bundled with Docker CE starting with 19.03, but requires experimental mode to be enabled on the Docker CLI.
To enable it, `"experimental": "enabled"` can be added to the CLI configuration file `~/.docker/config.json`. An alternative is to set the `DOCKER_CLI_EXPERIMENTAL=enabled` environment variable.
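For reference, a minimal sketch of the two approaches described above (the config file may of course contain other keys):
```
$ cat ~/.docker/config.json
{
  "experimental": "enabled"
}
$ # or, for the current shell only:
$ export DOCKER_CLI_EXPERIMENTAL=enabled
```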
### Binary release
Download the latest binary release from https://github.com/docker/buildx/releases/latest and copy it to the `~/.docker/cli-plugins` folder as `docker-buildx`.
Then make it executable:
```sh
chmod a+x ~/.docker/cli-plugins/docker-buildx
```
After installing you can run `docker buildx` to see the new commands.
@@ -71,11 +70,90 @@ $ make install
### with buildx or Docker 19.03
```
$ export DOCKER_BUILDKIT=1
$ # choose a platform that matches your architecture
$ docker build --target=binaries --platform=[darwin,windows,linux,linux/arm64] -o . git://github.com/docker/buildx
$ docker build --platform=local -o . git://github.com/docker/buildx
$ mv buildx ~/.docker/cli-plugins/docker-buildx
```
# Getting started
## Building with buildx
Buildx is a Docker CLI plugin that extends the `docker build` command with full support for the features provided by the [Moby BuildKit](https://github.com/moby/buildkit) builder toolkit. It provides the same user experience as `docker build` with many new features like creating scoped builder instances and building against multiple nodes concurrently.
After installation, buildx can be accessed through the `docker buildx` command. `docker buildx build` is the command for starting a new build.
```
$ docker buildx build .
[+] Building 8.4s (23/32)
=> ...
```
Buildx will always build using the BuildKit engine and does not require `DOCKER_BUILDKIT=1` environment variable for starting builds.
The buildx build command supports the features available for `docker build`, including the new features in Docker 19.03 such as outputs configuration, inline build caching, and specifying the target platform. In addition, buildx supports features not yet available for regular `docker build`, like building manifest lists, distributed caching, and exporting build results to OCI image tarballs.
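For example, a rough sketch of such flags in use (values are illustrative):
```
$ docker buildx build --platform linux/arm64 -o type=local,dest=./out .
$ docker buildx build --cache-to=type=inline -t username/app .
```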
Buildx is designed to be flexible and can be run in different configurations that are exposed through a driver concept. Currently, we support a "docker" driver that uses the BuildKit library bundled into the docker daemon binary, and a "docker-container" driver that automatically launches BuildKit inside a Docker container. We plan to add more drivers in the future, for example, one that would allow running buildx inside an (unprivileged) container.
The user experience of using buildx is very similar across drivers, but some features are not currently supported by the "docker" driver because the BuildKit library bundled into the docker daemon currently uses a different storage component. In contrast, all images built with the "docker" driver are automatically added to the "docker images" view by default, whereas with other drivers the method for outputting an image needs to be selected with `--output`.
## Working with builder instances
By default, buildx will initially use the "docker" driver if it is supported, providing a very similar user experience to the native `docker build`. But using a local shared daemon is only one way to build your applications.
Buildx allows you to create new instances of isolated builders. This can be used for getting a scoped environment for your CI builds that does not change the state of the shared daemon or for isolating the builds for different projects. You can create a new instance for a set of remote nodes, forming a build farm, and quickly switch between them.
New instances can be created with the `docker buildx create` command. This creates a new builder instance with a single node based on your current configuration. To use a remote node, you can specify the `DOCKER_HOST` or a remote context name while creating the new builder. After creating a new instance, you can manage its lifecycle with the `inspect`, `stop` and `rm` commands and list all available builders with `ls`. After creating a new builder you can also append new nodes to it.
To switch between different builders, use `docker buildx use <name>`. After running this command, build commands will automatically use this builder.
Docker 19.03 also features a new `docker context` command that can be used for giving names to remote Docker API endpoints. Buildx integrates with `docker context` so that all of your contexts automatically get a default builder instance. While creating a new builder instance or when adding a node to it, you can also set the context name as the target.
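A minimal sketch of the builder lifecycle described above (the builder name is a placeholder):
```
$ docker buildx create --name mybuilder --use
mybuilder
$ docker buildx inspect --bootstrap
$ docker buildx ls
$ docker buildx use mybuilder
```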
## Building multi-platform images
BuildKit is designed to work well for building for multiple platforms and not only for the architecture and operating system that the user invoking the build happens to run.
When invoking a build, the `--platform` flag can be used to specify the target platform for the build output (e.g. linux/amd64, linux/arm64, darwin/amd64). When the current builder instance is backed by the "docker-container" driver, multiple platforms can be specified together. In this case, a manifest list will be built, containing images for all of the specified architectures. When this image is used in `docker run` or `docker service`, Docker will pick the correct image based on the node's platform.
Multi-platform images can be built using three different strategies, all of which are supported by buildx and Dockerfiles. You can use the QEMU emulation support in the kernel, build on multiple native nodes using the same builder instance, or use a stage in your Dockerfile to cross-compile to different architectures.
QEMU is the easiest way to get started if your node already supports it (e.g. if you are using Docker Desktop). It requires no changes to your Dockerfile and BuildKit will automatically detect the secondary architectures that are available. When BuildKit needs to run a binary for a different architecture it will automatically load it through a binary registered in the binfmt_misc handler.
Using multiple native nodes provides better support for more complicated cases not handled by QEMU and generally has better performance. Additional nodes can be added to the builder instance with the `--append` flag.
```
# assuming contexts node-amd64 and node-arm64 exist in "docker context ls"
$ docker buildx create --use --name mybuild node-amd64
mybuild
$ docker buildx create --append --name mybuild node-arm64
$ docker buildx build --platform linux/amd64,linux/arm64 .
```
Finally, depending on your project, the language that you use may have good support for cross-compilation. In that case, multi-stage builds in Dockerfiles can be effectively used to build binaries for the platform specified with `--platform` using the native architecture of the build node. Build arguments such as `BUILDPLATFORM` and `TARGETPLATFORM` are automatically available inside your Dockerfile and can be leveraged by the processes running as part of your build.
```
FROM --platform=$BUILDPLATFORM golang:alpine AS build
ARG TARGETPLATFORM
ARG BUILDPLATFORM
RUN echo "I am running on $BUILDPLATFORM, building for $TARGETPLATFORM" > /log
FROM alpine
COPY --from=build /log /log
```
## High-level build options
Buildx also aims to provide support for higher-level build concepts that go beyond invoking a single build command. We want to support building all the images in your application together and let users define project-specific reusable build flows that can then be easily invoked by anyone.
BuildKit has great support for efficiently handling multiple concurrent build requests and deduplicating work. While build commands can be combined with general-purpose command runners (e.g. make), these tools generally invoke builds in sequence and therefore can't leverage the full potential of BuildKit parallelization or combine BuildKit's output for the user. For this use case we have added a command called `docker buildx bake`.
Currently, the bake command supports building images from compose files, similar to `compose build` but allowing all the services to be built concurrently as part of a single request.
There is also support for custom build rules from HCL/JSON files allowing better code reuse and different target groups. The design of bake is in very early stages and we are looking for feedback from users.
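As an illustrative sketch of invoking bake (the file and target names follow the HCL example shown further down in this document):
```
$ docker buildx bake -f docker-bake.hcl webapp-release
$ docker buildx bake -f docker-compose.yml
```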
# Documentation
### `buildx build [OPTIONS] PATH | URL | -`
@@ -87,6 +165,7 @@ Options:
| Flag | Description |
| --- | --- |
| --add-host [] | Add a custom host-to-IP mapping (host:ip)
| --allow [] | Allow extra privileged entitlement, e.g. network.host, security.insecure
| --build-arg [] | Set build-time variables
| --cache-from [] | External cache sources (eg. user/app:cache, type=local,src=path/to/dir)
| --cache-to [] | Cache export destinations (eg. user/app:cache, type=local,dest=path/to/dir)
@@ -159,7 +238,9 @@ Attribute key:
##### `docker`
The `docker` export type writes the result image as an Docker image specification tarball https://github.com/moby/moby/blob/master/image/spec/v1.2.md on the client. Tarballs created by this exporter are also OCI compatible.
The `docker` export type writes the single-platform result image as a Docker image specification tarball https://github.com/moby/moby/blob/master/image/spec/v1.2.md on the client. Tarballs created by this exporter are also OCI compatible.
Currently, multi-platform images cannot be exported with the `docker` export type. The most common use case for multi-platform images is to directly push to a registry (see [`registry`](#registry)).
Attribute keys:
@@ -198,15 +279,15 @@ docker buildx build -t tonistiigi/foo -o type=registry
#### `--push`
Shorthand for `--output=type=registry` . Will automatically push the build result to registry.
Shorthand for [`--output=type=registry`](#registry). Will automatically push the build result to registry.
#### `--load`
Shorthand for `--output=type=docker` . Will automatically load the build result to `docker images`.
Shorthand for [`--output=type=docker`](#docker). Will automatically load the single-platform build result to `docker images`.
#### `--cache-from=[NAME|type=TYPE[,KEY=VALUE]]`
Use an external cache source for a build. Supported types are `registry` and `local`. The `registry` source can import cache from a cache manifest or (special) image configuration on the registry. The `local` source can export cache from local files previously exported with `--cache-to`.
Use an external cache source for a build. Supported types are `registry` and `local`. The `registry` source can import cache from a cache manifest or (special) image configuration on the registry. The `local` source can import cache from local files previously exported with `--cache-to`.
If no type is specified, `registry` exporter is used with a specified reference.
@@ -235,9 +316,23 @@ Examples:
docker buildx build --cache-to=user/app:cache .
docker buildx build --cache-to=type=inline .
docker buildx build --cache-to=type=registry,ref=user/app .
docker buildx build --cache-to=type=local,src=path/to/cache .
docker buildx build --cache-to=type=local,dest=path/to/cache .
```
#### `--allow=ENTITLEMENT`
Allow extra privileged entitlement. List of entitlements:
- `network.host` - Allows executions with host networking.
- `security.insecure` - Allows executions without sandbox. See [related Dockerfile extensions](https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/experimental.md#run---securityinsecuresandbox).
For entitlements to be enabled, the `buildkitd` daemon also needs to allow them with `--allow-insecure-entitlement` (see [`create --buildkitd-flags`](#--buildkitd-flags-flags))
Example:
```
$ docker buildx create --use --name insecure-builder --buildkitd-flags '--allow-insecure-entitlement security.insecure'
$ docker buildx build --allow security.insecure .
```
### `buildx create [OPTIONS] [CONTEXT|ENDPOINT]`
@@ -251,21 +346,16 @@ Options:
| Flag | Description |
| --- | --- |
| --append | Append a node to builder instead of changing it
| --driver string | Driver to use (eg. docker-container)
| --leave | Remove a node from builder instead of changing it
| --name string | Builder instance name
| --node string | Create/modify node with given name
| --platform stringArray | Fixed platforms for current node
| --use | Set the current builder instance
#### `--driver DRIVER`
Sets the builder driver to be used. There are two available drivers, each have their own specificities.
- `docker` - Uses the builder that is built into the docker daemon. With this driver, the `--load` flag is implied by default on `buildx build`. However, building multi-platform images or exporting cache is not currently supported.
- `docker-container` - Uses a buildkit container that will be spawned via docker. With this driver, both building multi-platform images and exporting cache are supported. However, images built will not automatically appear in `docker images` (see [`build --load`](#--load)).
| --append | Append a node to builder instead of changing it
| --buildkitd-flags string | Flags for buildkitd daemon
| --config string | BuildKit config file
| --driver string | Driver to use (eg. docker-container)
| --driver-opt stringArray | Options for the driver
| --leave | Remove a node from builder instead of changing it
| --name string | Builder instance name
| --node string | Create/modify node with given name
| --platform stringArray | Fixed platforms for current node
| --use | Set the current builder instance
#### `--append`
@@ -279,6 +369,41 @@ $ docker buildx create --name eager_beaver --append mycontext2
eager_beaver
```
#### `--buildkitd-flags FLAGS`
Adds flags when starting the buildkitd daemon. They take precedence over the configuration file specified by [`--config`](#--config-file). See `buildkitd --help` for the available flags.
Example:
```
--buildkitd-flags '--debug --debugaddr 0.0.0.0:6666'
```
#### `--config FILE`
Specifies the configuration file for the buildkitd daemon to use. The configuration can be overridden by [`--buildkitd-flags`](#--buildkitd-flags-flags). See an [example buildkitd configuration file](https://github.com/moby/buildkit/blob/master/docs/buildkitd.toml.md).
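A hypothetical sketch of such a file, passed with e.g. `docker buildx create --use --config ./buildkitd.toml` (the mirror value is a placeholder; see the linked buildkitd.toml documentation for the authoritative format):
```
# buildkitd.toml (illustrative only)
debug = true
[registry."docker.io"]
  mirrors = ["registry-mirror.example.com"]
```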
#### `--driver DRIVER`
Sets the builder driver to be used. There are two available drivers, each with its own specificities.
- `docker` - Uses the builder that is built into the docker daemon. With this driver, the [`--load`](#--load) flag is implied by default on `buildx build`. However, building multi-platform images or exporting cache is not currently supported.
- `docker-container` - Uses a buildkit container that will be spawned via docker. With this driver, both building multi-platform images and exporting cache are supported. However, images built will not automatically appear in `docker images` (see [`build --load`](#--load)).
#### `--driver-opt OPTIONS`
Passes additional driver-specific options. Details for each driver:
- `docker` - No driver options
- `docker-container`
- `image` - Sets the container image to be used for running buildkit.
- `network` - Sets the network mode for running the buildkit container.
- Example:
```
--driver docker-container --driver-opt image=moby/buildkit:master,network=host
```
#### `--leave`
Changes the action of the command to remove a node from a builder. The builder needs to be specified with `--name`, and the node that is removed is set with `--node`.
@@ -452,23 +577,23 @@ Note: Design of bake command is work in progress, the user experience may change
Example HCL definition:
```
group default {
targets = [db, webapp-dev]
group "default" {
targets = ["db", "webapp-dev"]
}
target webapp-dev {
target "webapp-dev" {
dockerfile = "Dockerfile.webapp"
tags = ["docker.io/username/webapp"]
}
target webapp-release {
inherits = [webapp-dev]
platforms = [linux/amd64, linux/arm64]
target "webapp-release" {
inherits = ["webapp-dev"]
platforms = ["linux/amd64", "linux/arm64"]
}
target db {
target "db" {
dockerfile = "Dockerfile.db"
tags = [docker.io/username/db]
tags = ["docker.io/username/db"]
}
```
@@ -550,20 +675,5 @@ To remove this alias, you can run `docker buildx uninstall`.
# Contributing
To enter a demo container environment and experiment, you may run:
```
$ make shell
```
To validate PRs before submitting them you should run:
```
$ make validate-all
```
To generate new vendored files with go modules run:
```
$ make vendor
```
Want to contribute to Buildx? Awesome! You can find information about
contributing to this project in the [CONTRIBUTING.md](/.github/CONTRIBUTING.md)


@@ -3,10 +3,13 @@ package bake
import (
"context"
"io/ioutil"
"os"
"path"
"strings"
"github.com/docker/buildx/build"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/docker/pkg/urlutil"
"github.com/moby/buildkit/session/auth/authprovider"
"github.com/pkg/errors"
)
@@ -74,7 +77,20 @@ func mergeConfig(c1, c2 Config) Config {
if c1.Group == nil {
c1.Group = map[string]Group{}
}
c1.Group[k] = g
if g1, exists := c1.Group[k]; exists {
nextTarget:
for _, t := range g.Targets {
for _, t2 := range g1.Targets {
if t == t2 {
continue nextTarget
}
}
g1.Targets = append(g1.Targets, t)
}
c1.Group[k] = g1
} else {
c1.Group[k] = g
}
}
for k, t := range c2.Target {
@@ -245,10 +261,10 @@ func (t *Target) normalize() {
t.Outputs = removeDupes(t.Outputs)
}
func TargetsToBuildOpt(m map[string]Target) (map[string]build.Options, error) {
func TargetsToBuildOpt(m map[string]Target, noCache, pull bool) (map[string]build.Options, error) {
m2 := make(map[string]build.Options, len(m))
for k, v := range m {
bo, err := toBuildOpt(v)
bo, err := toBuildOpt(v, noCache, pull)
if err != nil {
return nil, err
}
@@ -257,7 +273,7 @@ func TargetsToBuildOpt(m map[string]Target) (map[string]build.Options, error) {
return m2, nil
}
func toBuildOpt(t Target) (*build.Options, error) {
func toBuildOpt(t Target, noCache, pull bool) (*build.Options, error) {
if v := t.Context; v != nil && *v == "-" {
return nil, errors.Errorf("context from stdin not allowed in bake")
}
@@ -274,6 +290,10 @@ func toBuildOpt(t Target) (*build.Options, error) {
dockerfilePath = *t.Dockerfile
}
if !isRemoteResource(contextPath) && !path.IsAbs(dockerfilePath) {
dockerfilePath = path.Join(contextPath, dockerfilePath)
}
bo := &build.Options{
Inputs: build.Inputs{
ContextPath: contextPath,
@@ -282,6 +302,8 @@ func toBuildOpt(t Target) (*build.Options, error) {
Tags: t.Tags,
BuildArgs: t.Args,
Labels: t.Labels,
NoCache: noCache,
Pull: pull,
}
platforms, err := platformutil.Parse(t.Platforms)
@@ -290,7 +312,7 @@ func toBuildOpt(t Target) (*build.Options, error) {
}
bo.Platforms = platforms
bo.Session = append(bo.Session, authprovider.NewDockerAuthProvider())
bo.Session = append(bo.Session, authprovider.NewDockerAuthProvider(os.Stderr))
secrets, err := build.ParseSecretSpecs(t.Secrets)
if err != nil {
@@ -393,3 +415,7 @@ func removeDupes(s []string) []string {
}
return s[:i]
}
func isRemoteResource(str string) bool {
return urlutil.IsGitURL(str) || urlutil.IsURL(str)
}


@@ -59,11 +59,30 @@ services:
`), 0600)
require.NoError(t, err)
ctx := context.TODO()
fp2 := filepath.Join(tmpdir, "docker-compose2.yml")
err = ioutil.WriteFile(fp2, []byte(`
version: "3"
m, err := ReadTargets(ctx, []string{fp}, []string{"default"}, nil)
services:
newservice:
build: .
webapp:
build:
args:
buildno2: 12
`), 0600)
require.NoError(t, err)
ctx := context.TODO()
m, err := ReadTargets(ctx, []string{fp, fp2}, []string{"default"}, nil)
require.NoError(t, err)
require.Equal(t, 3, len(m))
_, ok := m["newservice"]
require.True(t, ok)
require.Equal(t, "Dockerfile.webapp", *m["webapp"].Dockerfile)
require.Equal(t, ".", *m["webapp"].Context)
require.Equal(t, "1", m["webapp"].Args["buildno"])
require.Equal(t, "12", m["webapp"].Args["buildno2"])
}


@@ -1,6 +1,11 @@
package bake
import (
"fmt"
"os"
"reflect"
"strings"
"github.com/docker/cli/cli/compose/loader"
composetypes "github.com/docker/cli/cli/compose/types"
)
@@ -16,9 +21,22 @@ func parseCompose(dt []byte) (*composetypes.Config, error) {
Config: parsed,
},
},
Environment: envMap(os.Environ()),
})
}
func envMap(env []string) map[string]string {
result := make(map[string]string, len(env))
for _, s := range env {
kv := strings.SplitN(s, "=", 2)
if len(kv) != 2 {
continue
}
result[kv[0]] = kv[1]
}
return result
}
func ParseCompose(dt []byte) (*Config, error) {
cfg, err := parseCompose(dt)
if err != nil {
@@ -26,6 +44,7 @@ func ParseCompose(dt []byte) (*Config, error) {
}
var c Config
var zeroBuildConfig composetypes.BuildConfig
if len(cfg.Services) > 0 {
c.Group = map[string]Group{}
c.Target = map[string]Target{}
@@ -33,7 +52,15 @@ func ParseCompose(dt []byte) (*Config, error) {
var g Group
for _, s := range cfg.Services {
g.Targets = append(g.Targets, s.Name)
if reflect.DeepEqual(s.Build, zeroBuildConfig) {
// if not make sure they're setting an image or it's invalid d-c.yml
if s.Image == "" {
return nil, fmt.Errorf("compose file invalid: service %s has neither an image nor a build context specified. At least one must be provided.", s.Name)
}
continue
}
var contextPathP *string
if s.Build.Context != "" {
contextPath := s.Build.Context
@@ -44,6 +71,7 @@ func ParseCompose(dt []byte) (*Config, error) {
dockerfilePath := s.Build.Dockerfile
dockerfilePathP = &dockerfilePath
}
g.Targets = append(g.Targets, s.Name)
t := Target{
Context: contextPathP,
Dockerfile: dockerfilePathP,
@@ -53,7 +81,8 @@ func ParseCompose(dt []byte) (*Config, error) {
// TODO: add platforms
}
if s.Build.Target != "" {
t.Target = &s.Build.Target
target := s.Build.Target
t.Target = &target
}
if s.Image != "" {
t.Tags = []string{s.Image}
@@ -72,6 +101,8 @@ func toMap(in composetypes.MappingWithEquals) map[string]string {
for k, v := range in {
if v != nil {
m[k] = *v
} else {
m[k] = os.Getenv(k)
}
}
return m


@@ -39,3 +39,79 @@ services:
require.Equal(t, 1, len(c.Target["webapp"].Args))
require.Equal(t, "123", c.Target["webapp"].Args["buildno"])
}
func TestNoBuildOutOfTreeService(t *testing.T) {
var dt = []byte(`
version: "3.7"
services:
external:
image: "verycooldb:1337"
webapp:
build: ./db
`)
c, err := ParseCompose(dt)
require.NoError(t, err)
require.Equal(t, 1, len(c.Group))
}
func TestParseComposeTarget(t *testing.T) {
var dt = []byte(`
version: "3.7"
services:
db:
build:
context: ./db
target: db
webapp:
build:
context: .
target: webapp
`)
c, err := ParseCompose(dt)
require.NoError(t, err)
require.Equal(t, "db", *c.Target["db"].Target)
require.Equal(t, "webapp", *c.Target["webapp"].Target)
}
func TestComposeBuildWithoutContext(t *testing.T) {
var dt = []byte(`
version: "3.7"
services:
db:
build:
target: db
webapp:
build:
context: .
target: webapp
`)
c, err := ParseCompose(dt)
require.NoError(t, err)
require.Equal(t, "db", *c.Target["db"].Target)
require.Equal(t, "webapp", *c.Target["webapp"].Target)
}
func TestBogusCompose(t *testing.T) {
var dt = []byte(`
version: "3.7"
services:
db:
labels:
- "foo"
webapp:
build:
context: .
target: webapp
`)
_, err := ParseCompose(dt)
require.Error(t, err)
require.Contains(t, err.Error(), "has neither an image nor a build context specified. At least one must be provided")
}


@@ -24,6 +24,7 @@ import (
"github.com/moby/buildkit/client"
"github.com/moby/buildkit/session"
"github.com/moby/buildkit/session/upload/uploadprovider"
"github.com/moby/buildkit/util/entitlements"
"github.com/opencontainers/go-digest"
specs "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
@@ -55,6 +56,7 @@ type Options struct {
CacheFrom []client.CacheOptionsEntry
CacheTo []client.CacheOptionsEntry
Allow []entitlements.Entitlement
// DockerTarget
}
@@ -324,11 +326,12 @@ func toSolveOpt(d driver.Driver, multiDriver bool, opt Options, dl dockerLoadCal
}
so := client.SolveOpt{
Frontend: "dockerfile.v0",
FrontendAttrs: map[string]string{},
LocalDirs: map[string]string{},
CacheExports: opt.CacheTo,
CacheImports: opt.CacheFrom,
Frontend: "dockerfile.v0",
FrontendAttrs: map[string]string{},
LocalDirs: map[string]string{},
CacheExports: opt.CacheTo,
CacheImports: opt.CacheFrom,
AllowedEntitlements: opt.Allow,
}
if multiDriver {
@@ -397,7 +400,7 @@ func toSolveOpt(d driver.Driver, multiDriver bool, opt Options, dl dockerLoadCal
return nil, nil, err
}
defers = append(defers, cancel)
opt.Exports[i].Output = w
opt.Exports[i].Output = wrapWriteCloser(w)
}
} else if !d.Features()[driver.DockerExporter] {
return nil, nil, notSupported(d, driver.DockerExporter)
@@ -454,6 +457,7 @@ func toSolveOpt(d driver.Driver, multiDriver bool, opt Options, dl dockerLoadCal
switch opt.NetworkMode {
case "host", "none":
so.FrontendAttrs["force-network-mode"] = opt.NetworkMode
so.AllowedEntitlements = append(so.AllowedEntitlements, entitlements.EntitlementNetworkHost)
case "", "default":
default:
return nil, nil, errors.Errorf("network mode %q not supported by buildkit", opt.NetworkMode)
@@ -537,7 +541,7 @@ func Build(ctx context.Context, drivers []DriverInfo, opt map[string]Options, do
multiTarget := len(opt) > 1
for k, opt := range opt {
err := func() error {
err := func(k string) error {
opt := opt
dps := m[k]
multiDriver := len(m[k]) > 1
@@ -681,7 +685,7 @@ func Build(ctx context.Context, drivers []DriverInfo, opt map[string]Options, do
}
return nil
}()
}(k)
if err != nil {
return nil, err
}
@@ -731,7 +735,7 @@ func LoadInputs(inp Inputs, target *client.SolveOpt) (func(), error) {
return nil, errStdinConflict
}
buf := bufio.NewReader(os.Stdin)
buf := bufio.NewReader(inp.InStream)
magic, err := buf.Peek(archiveHeaderSize * 2)
if err != nil && err != io.EOF {
return nil, errors.Wrap(err, "failed to peek context header from STDIN")
@@ -757,7 +761,7 @@ func LoadInputs(inp Inputs, target *client.SolveOpt) (func(), error) {
target.LocalDirs["context"] = inp.ContextPath
switch inp.DockerfilePath {
case "-":
dockerfileReader = os.Stdin
dockerfileReader = inp.InStream
case "":
dockerfileDir = inp.ContextPath
default:
@@ -780,6 +784,7 @@ func LoadInputs(inp Inputs, target *client.SolveOpt) (func(), error) {
return nil, err
}
toRemove = append(toRemove, dockerfileDir)
dockerfileName = "Dockerfile"
}
if dockerfileName == "" {

build/entitlements.go Normal file

@@ -0,0 +1,21 @@
package build
import (
"github.com/moby/buildkit/util/entitlements"
"github.com/pkg/errors"
)
func ParseEntitlements(in []string) ([]entitlements.Entitlement, error) {
out := make([]entitlements.Entitlement, 0, len(in))
for _, v := range in {
switch v {
case "security.insecure":
out = append(out, entitlements.EntitlementSecurityInsecure)
case "network.host":
out = append(out, entitlements.EntitlementNetworkHost)
default:
return nil, errors.Errorf("invalid entitlement: %v", v)
}
}
return out, nil
}


@@ -2,6 +2,7 @@ package build
import (
"encoding/csv"
"io"
"os"
"strings"
@@ -81,7 +82,7 @@ func ParseOutputs(inp []string) ([]client.ExportEntry, error) {
if _, err := console.ConsoleFromFile(os.Stdout); err == nil {
return nil, errors.Errorf("output file is required for %s exporter. refusing to write to console", out.Type)
}
out.Output = os.Stdout
out.Output = wrapWriteCloser(os.Stdout)
} else if dest != "" {
fi, err := os.Stat(dest)
if err != nil && !os.IsNotExist(err) {
@@ -94,7 +95,7 @@ func ParseOutputs(inp []string) ([]client.ExportEntry, error) {
if err != nil {
return nil, errors.Errorf("failed to open %s", err)
}
out.Output = f
out.Output = wrapWriteCloser(f)
}
delete(out.Attrs, "dest")
case "registry":
@@ -106,3 +107,9 @@ func ParseOutputs(inp []string) ([]client.ExportEntry, error) {
}
return outs, nil
}
func wrapWriteCloser(wc io.WriteCloser) func(map[string]string) (io.WriteCloser, error) {
return func(map[string]string) (io.WriteCloser, error) {
return wc, nil
}
}


@@ -16,6 +16,8 @@ import (
_ "github.com/docker/buildx/driver/docker-container"
)
var experimental string
func main() {
if os.Getenv("DOCKER_CLI_PLUGIN_ORIGINAL_CLI_COMMAND") == "" {
if len(os.Args) < 2 || os.Args[1] != manager.MetadataSubcommandName {
@@ -41,5 +43,6 @@ func main() {
SchemaVersion: "0.1.0",
Vendor: "Docker Inc.",
Version: version.Version,
Experimental: experimental != "",
})
}


@@ -28,7 +28,7 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) error {
return err
}
if len(files) == 0 {
return errors.Errorf("no docker-compose.yml or dockerbuild.hcl found, specify build file with -f/--file")
return errors.Errorf("no docker-compose.yml or docker-bake.hcl found, specify build file with -f/--file")
}
in.files = files
}
@@ -51,7 +51,7 @@ func runBake(dockerCli command.Cli, targets []string, in bakeOptions) error {
return nil
}
bo, err := bake.TargetsToBuildOpt(m)
bo, err := bake.TargetsToBuildOpt(m, in.noCache, in.pull)
if err != nil {
return err
}


@@ -44,6 +44,8 @@ type buildOptions struct {
squash bool
quiet bool
allow []string
// hidden
// untrusted bool
// ulimits *opts.UlimitOpt
@@ -84,8 +86,8 @@ func runBuild(dockerCli command.Cli, in buildOptions) error {
InStream: os.Stdin,
},
Tags: in.tags,
Labels: listToMap(in.labels),
BuildArgs: listToMap(in.buildArgs),
Labels: listToMap(in.labels, false),
BuildArgs: listToMap(in.buildArgs, true),
Pull: in.pull,
NoCache: in.noCache,
Target: in.target,
@@ -100,7 +102,7 @@ func runBuild(dockerCli command.Cli, in buildOptions) error {
}
opts.Platforms = platforms
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider())
opts.Session = append(opts.Session, authprovider.NewDockerAuthProvider(os.Stderr))
secrets, err := build.ParseSecretSpecs(in.secrets)
if err != nil {
@@ -167,6 +169,12 @@ func runBuild(dockerCli command.Cli, in buildOptions) error {
}
opts.CacheTo = cacheExports
allow, err := build.ParseEntitlements(in.allow)
if err != nil {
return err
}
opts.Allow = allow
return buildTargets(ctx, dockerCli, map[string]build.Options{"default": opts}, in.progress)
}
@@ -214,6 +222,8 @@ func buildCmd(dockerCli command.Cli) *cobra.Command {
flags.StringVar(&options.target, "target", "", "Set the target build stage to build.")
flags.StringSliceVar(&options.allow, "allow", []string{}, "Allow extra privileged entitlement, e.g. network.host, security.insecure")
// not implemented
flags.BoolVarP(&options.quiet, "quiet", "q", false, "Suppress the build output and print image ID on success")
flags.StringVar(&options.networkMode, "network", "default", "Set the networking mode for the RUN instructions during build")
@@ -254,6 +264,10 @@ func buildCmd(dockerCli command.Cli) *cobra.Command {
flags.MarkHidden("cgroup-parent")
flags.StringVar(&ignore, "isolation", "", "Container isolation technology")
flags.MarkHidden("isolation")
flags.BoolVar(&ignoreBool, "rm", true, "Remove intermediate containers after a successful build")
flags.MarkHidden("rm")
flags.BoolVar(&ignoreBool, "force-rm", false, "Always remove intermediate containers")
flags.MarkHidden("force-rm")
platformsDefault := []string{}
if v := os.Getenv("DOCKER_DEFAULT_PLATFORM"); v != "" {
@@ -278,12 +292,16 @@ func commonFlags(options *commonOptions, flags *pflag.FlagSet) {
flags.BoolVar(&options.pull, "pull", false, "Always attempt to pull a newer version of the image")
}
func listToMap(values []string) map[string]string {
func listToMap(values []string, defaultEnv bool) map[string]string {
result := make(map[string]string, len(values))
for _, value := range values {
kv := strings.SplitN(value, "=", 2)
if len(kv) == 1 {
result[kv[0]] = ""
if defaultEnv {
result[kv[0]] = os.Getenv(kv[0])
} else {
result[kv[0]] = ""
}
} else {
result[kv[0]] = kv[1]
}


@@ -1,13 +1,16 @@
package commands
import (
"encoding/csv"
"fmt"
"os"
"strings"
"github.com/docker/buildx/driver"
"github.com/docker/buildx/store"
"github.com/docker/cli/cli"
"github.com/docker/cli/cli/command"
"github.com/google/shlex"
"github.com/moby/buildkit/util/appcontext"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -22,6 +25,9 @@ type createOptions struct {
actionAppend bool
actionLeave bool
use bool
flags string
configFile string
driverOpts []string
// upgrade bool // perform upgrade of the driver
}
@@ -107,6 +113,14 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
ng.Driver = driverName
}
var flags []string
if in.flags != "" {
flags, err = shlex.Split(in.flags)
if err != nil {
return errors.Wrap(err, "failed to parse buildkit flags")
}
}
var ep string
if in.actionLeave {
if err := ng.Leave(in.nodeName); err != nil {
@@ -128,7 +142,11 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
return err
}
}
if err := ng.Update(in.nodeName, ep, in.platform, len(args) > 0, in.actionAppend); err != nil {
m, err := csvToMap(in.driverOpts)
if err != nil {
return err
}
if err := ng.Update(in.nodeName, ep, in.platform, len(args) > 0, in.actionAppend, flags, in.configFile, m); err != nil {
return err
}
}
@@ -154,6 +172,11 @@ func runCreate(dockerCli command.Cli, in createOptions, args []string) error {
func createCmd(dockerCli command.Cli) *cobra.Command {
var options createOptions
var drivers []string
for s := range driver.GetFactories() {
drivers = append(drivers, s)
}
cmd := &cobra.Command{
Use: "create [OPTIONS] [CONTEXT|ENDPOINT]",
Short: "Create a new builder instance",
@@ -166,9 +189,12 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
flags := cmd.Flags()
flags.StringVar(&options.name, "name", "", "Builder instance name")
flags.StringVar(&options.driver, "driver", "", "Driver to use (eg. docker-container)")
flags.StringVar(&options.driver, "driver", "", fmt.Sprintf("Driver to use (available: %v)", drivers))
flags.StringVar(&options.nodeName, "node", "", "Create/modify node with given name")
flags.StringVar(&options.flags, "buildkitd-flags", "", "Flags for buildkitd daemon")
flags.StringVar(&options.configFile, "config", "", "BuildKit config file")
flags.StringArrayVar(&options.platform, "platform", []string{}, "Fixed platforms for current node")
flags.StringArrayVar(&options.driverOpts, "driver-opt", []string{}, "Options for the driver")
flags.BoolVar(&options.actionAppend, "append", false, "Append a node to builder instead of changing it")
flags.BoolVar(&options.actionLeave, "leave", false, "Remove a node from builder instead of changing it")
@@ -178,3 +204,22 @@ func createCmd(dockerCli command.Cli) *cobra.Command {
return cmd
}
func csvToMap(in []string) (map[string]string, error) {
m := make(map[string]string, len(in))
for _, s := range in {
csvReader := csv.NewReader(strings.NewReader(s))
fields, err := csvReader.Read()
if err != nil {
return nil, err
}
for _, v := range fields {
p := strings.SplitN(v, "=", 2)
if len(p) != 2 {
return nil, errors.Errorf("invalid value %q, expecting k=v", v)
}
m[p[0]] = p[1]
}
}
return m, nil
}


@@ -114,6 +114,9 @@ func runInspect(dockerCli command.Cli, in inspectOptions, args []string) error {
fmt.Fprintf(w, "Error:\t%s\n", err.Error())
} else {
fmt.Fprintf(w, "Status:\t%s\n", ngi.drivers[i].info.Status)
if len(n.Flags) > 0 {
fmt.Fprintf(w, "Flags:\t%s\n", strings.Join(n.Flags, " "))
}
fmt.Fprintf(w, "Platforms:\t%s\n", strings.Join(platformutil.Format(platformutil.Dedupe(append(n.Platforms, ngi.drivers[i].platforms...))), ", "))
}
}


@@ -43,7 +43,7 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
builders[i] = &nginfo{ng: ng}
}
list, err := dockerCli.ContextStore().ListContexts()
list, err := dockerCli.ContextStore().List()
if err != nil {
return err
}


@@ -36,7 +36,7 @@ func runUse(dockerCli command.Cli, in useOptions, name string) error {
}
return nil
}
list, err := dockerCli.ContextStore().ListContexts()
list, err := dockerCli.ContextStore().List()
if err != nil {
return err
}


@@ -42,7 +42,7 @@ func getCurrentEndpoint(dockerCli command.Cli) (string, error) {
// getDockerEndpoint returns docker endpoint string for given context
func getDockerEndpoint(dockerCli command.Cli, name string) (string, error) {
list, err := dockerCli.ContextStore().ListContexts()
list, err := dockerCli.ContextStore().List()
if err != nil {
return "", err
}
@@ -111,7 +111,7 @@ func getNodeGroup(txn *store.Txn, dockerCli command.Cli, name string) (*store.No
name = dockerCli.CurrentContext()
}
list, err := dockerCli.ContextStore().ListContexts()
list, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
@@ -174,7 +174,7 @@ func driversForNodeGroup(ctx context.Context, dockerCli command.Cli, ng *store.N
// TODO: replace the following line with dockerclient.WithAPIVersionNegotiation option in clientForEndpoint
dockerapi.NegotiateAPIVersion(ctx)
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, f, dockerapi)
d, err := driver.GetDriver(ctx, "buildx_buildkit_"+n.Name, f, dockerapi, n.Flags, n.ConfigFile, n.DriverOpts)
if err != nil {
di.Err = err
return nil
@@ -194,7 +194,7 @@ func driversForNodeGroup(ctx context.Context, dockerCli command.Cli, ng *store.N
// clientForEndpoint returns a docker client for an endpoint
func clientForEndpoint(dockerCli command.Cli, name string) (dockerclient.APIClient, error) {
list, err := dockerCli.ContextStore().ListContexts()
list, err := dockerCli.ContextStore().List()
if err != nil {
return nil, err
}
@@ -251,7 +251,7 @@ func getDefaultDrivers(ctx context.Context, dockerCli command.Cli) ([]build.Driv
return driversForNodeGroup(ctx, dockerCli, ng)
}
d, err := driver.GetDriver(ctx, "buildx_buildkit_default", nil, dockerCli.Client())
d, err := driver.GetDriver(ctx, "buildx_buildkit_default", nil, dockerCli.Client(), nil, "", nil)
if err != nil {
return nil, err
}


@@ -1,6 +1,8 @@
package docker
import (
"archive/tar"
"bytes"
"context"
"io"
"io/ioutil"
@@ -20,11 +22,13 @@ import (
"github.com/pkg/errors"
)
var buildkitImage = "moby/buildkit:master" // TODO: make this verified and configuratble
var defaultBuildkitImage = "moby/buildkit:buildx-stable-1" // TODO: make this verified
type Driver struct {
driver.InitConfig
factory driver.Factory
netMode string
image string
}
func (d *Driver) Bootstrap(ctx context.Context, l progress.Logger) error {
@@ -49,8 +53,12 @@ func (d *Driver) Bootstrap(ctx context.Context, l progress.Logger) error {
}
func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
if err := l.Wrap("pulling image "+buildkitImage, func() error {
rc, err := d.DockerAPI.ImageCreate(ctx, buildkitImage, types.ImageCreateOptions{})
imageName := defaultBuildkitImage
if d.image != "" {
imageName = d.image
}
if err := l.Wrap("pulling image "+imageName, func() error {
rc, err := d.DockerAPI.ImageCreate(ctx, imageName, types.ImageCreateOptions{})
if err != nil {
return err
}
@@ -59,15 +67,34 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
}); err != nil {
return err
}
cfg := &container.Config{
Image: imageName,
}
if d.InitConfig.BuildkitFlags != nil {
cfg.Cmd = d.InitConfig.BuildkitFlags
}
if err := l.Wrap("creating container "+d.Name, func() error {
_, err := d.DockerAPI.ContainerCreate(ctx, &container.Config{
Image: buildkitImage,
}, &container.HostConfig{
hc := &container.HostConfig{
Privileged: true,
}, &network.NetworkingConfig{}, d.Name)
}
if d.netMode != "" {
hc.NetworkMode = container.NetworkMode(d.netMode)
}
_, err := d.DockerAPI.ContainerCreate(ctx, cfg, hc, &network.NetworkingConfig{}, d.Name)
if err != nil {
return err
}
if f := d.InitConfig.ConfigFile; f != "" {
buf, err := readFileToTar(f)
if err != nil {
return err
}
if err := d.DockerAPI.CopyToContainer(ctx, d.Name, "/", buf, dockertypes.CopyToContainerOptions{}); err != nil {
return err
}
}
if err := d.start(ctx, l); err != nil {
return err
}
@@ -239,3 +266,26 @@ type demux struct {
func (d *demux) Read(dt []byte) (int, error) {
return d.Reader.Read(dt)
}
func readFileToTar(fn string) (*bytes.Buffer, error) {
buf := bytes.NewBuffer(nil)
tw := tar.NewWriter(buf)
dt, err := ioutil.ReadFile(fn)
if err != nil {
return nil, err
}
if err := tw.WriteHeader(&tar.Header{
Name: "/etc/buildkit/buildkitd.toml",
Size: int64(len(dt)),
Mode: 0644,
}); err != nil {
return nil, err
}
if _, err := tw.Write(dt); err != nil {
return nil, err
}
if err := tw.Close(); err != nil {
return nil, err
}
return buf, nil
}


@@ -37,8 +37,22 @@ func (f *factory) New(ctx context.Context, cfg driver.InitConfig) (driver.Driver
if cfg.DockerAPI == nil {
return nil, errors.Errorf("%s driver requires docker API access", f.Name())
}
d := &Driver{factory: f, InitConfig: cfg}
for k, v := range cfg.DriverOpts {
switch k {
case "network":
d.netMode = v
if v == "host" {
d.InitConfig.BuildkitFlags = append(d.InitConfig.BuildkitFlags, "--allow-insecure-entitlement=network.host")
}
case "image":
d.image = v
default:
return nil, errors.Errorf("invalid driver option %s for docker-container driver", k)
}
}
return &Driver{factory: f, InitConfig: cfg}, nil
return d, nil
}
func (f *factory) AllowsInstances() bool {


@@ -44,6 +44,9 @@ func (f *factory) New(ctx context.Context, cfg driver.InitConfig) (driver.Driver
if cfg.DockerAPI == nil {
return nil, errors.Errorf("docker driver requires docker API access")
}
if cfg.ConfigFile != "" {
return nil, errors.Errorf("setting config file is not supported for docker driver, use dockerd configuration file")
}
return &Driver{factory: f, InitConfig: cfg}, nil
}


@@ -23,10 +23,11 @@ type BuildkitConfig struct {
type InitConfig struct {
// This object needs updates to be generic for different drivers
Name string
DockerAPI dockerclient.APIClient
BuildkitConfig BuildkitConfig
Meta map[string]interface{}
Name string
DockerAPI dockerclient.APIClient
BuildkitFlags []string
ConfigFile string
DriverOpts map[string]string
}
var drivers map[string]Factory
@@ -71,10 +72,13 @@ func GetFactory(name string, instanceRequired bool) Factory {
return nil
}
func GetDriver(ctx context.Context, name string, f Factory, api dockerclient.APIClient) (Driver, error) {
func GetDriver(ctx context.Context, name string, f Factory, api dockerclient.APIClient, flags []string, config string, do map[string]string) (Driver, error) {
ic := InitConfig{
DockerAPI: api,
Name: name,
DockerAPI: api,
Name: name,
BuildkitFlags: flags,
ConfigFile: config,
DriverOpts: do,
}
if f == nil {
var err error
@@ -85,3 +89,7 @@ func GetDriver(ctx context.Context, name string, f Factory, api dockerclient.API
}
return f.New(ctx, ic)
}
func GetFactories() map[string]Factory {
return drivers
}

go.mod

@@ -14,16 +14,14 @@ require (
github.com/cenkalti/backoff v2.1.1+incompatible // indirect
github.com/cloudflare/cfssl v0.0.0-20181213083726-b94e044bb51e // indirect
github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50
github.com/containerd/containerd v1.3.0-0.20190321141026-ceba56893a76
github.com/containerd/continuity v0.0.0-20181203112020-004b46473808 // indirect
github.com/containerd/containerd v1.3.0-0.20190507210959-7c1e88399ec0
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448 // indirect
github.com/containerd/typeurl v0.0.0-20190228175220-2a93cfde8c20 // indirect
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e // indirect
github.com/denisenkom/go-mssqldb v0.0.0-20190315220205-a8ed825ac853 // indirect
github.com/docker/cli v0.0.0-20190321234815-f40f9c240ab0
github.com/docker/cli v1.14.0-0.20190523191156-ab688a9a79a1
github.com/docker/compose-on-kubernetes v0.4.19-0.20190128150448-356b2919c496 // indirect
github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible
github.com/docker/docker v1.14.0-0.20190410063227-d9d9eccdc862
github.com/docker/docker v1.14.0-0.20190410063227-3998dffb806f3887f804b813069f59bc14a7f3c1
github.com/docker/docker-credential-helpers v0.6.1 // indirect
github.com/docker/go v1.5.1-1.0.20160303222718-d30aec9fd63c // indirect
github.com/docker/go-connections v0.4.0 // indirect
@@ -34,14 +32,11 @@ require (
github.com/go-sql-driver/mysql v1.4.1 // indirect
github.com/gofrs/flock v0.7.0
github.com/gofrs/uuid v3.2.0+incompatible // indirect
github.com/gogo/googleapis v1.1.0 // indirect
github.com/gogo/protobuf v1.2.1 // indirect
github.com/google/btree v1.0.0 // indirect
github.com/google/certificate-transparency-go v1.0.21 // indirect
github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf // indirect
github.com/googleapis/gnostic v0.2.0 // indirect
github.com/google/shlex v0.0.0-20150127133951-6f45313302b9
github.com/gorilla/mux v1.7.0 // indirect
github.com/gregjones/httpcache v0.0.0-20190212212710-3befbb6ad0cc // indirect
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed // indirect
github.com/hashicorp/go-version v1.1.0 // indirect
github.com/hashicorp/hcl v1.0.0
@@ -57,13 +52,12 @@ require (
github.com/mattn/go-sqlite3 v1.10.0 // indirect
github.com/matttproud/golang_protobuf_extensions v1.0.1 // indirect
github.com/miekg/pkcs11 v0.0.0-20190322140431-074fd7a1ed19 // indirect
github.com/moby/buildkit v0.4.1-0.20190410165125-bcd466a748e950507826c30e835b087289aaf384
github.com/moby/buildkit v0.6.2-0.20190921002054-ae10b292fefb
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.1 // indirect
github.com/opencontainers/go-digest v1.0.0-rc1
github.com/opencontainers/image-spec v1.0.1
github.com/opencontainers/runtime-spec v1.0.1 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/pkg/errors v0.8.1
github.com/prometheus/client_golang v0.8.0 // indirect
github.com/prometheus/client_model v0.0.0-20170216185247-6f3806018612 // indirect
@@ -74,7 +68,6 @@ require (
github.com/spf13/pflag v1.0.3
github.com/spf13/viper v1.3.2 // indirect
github.com/stretchr/testify v1.3.0
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2 // indirect
github.com/theupdateframework/notary v0.6.1 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
@@ -83,7 +76,6 @@ require (
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f
golang.org/x/sys v0.0.0-20190322080309-f49334f85ddc // indirect
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 // indirect
google.golang.org/grpc v1.19.1 // indirect
gopkg.in/dancannon/gorethink.v3 v3.0.5 // indirect
gopkg.in/fatih/pool.v2 v2.0.0 // indirect
gopkg.in/gorethink/gorethink.v3 v3.0.5 // indirect
@@ -95,5 +87,3 @@ require (
)
replace github.com/jaguilar/vt100 => github.com/tonistiigi/vt100 v0.0.0-20190402012908-ad4c4a574305
replace github.com/docker/cli => github.com/tiborvass/cli v0.0.0-20190419012645-1ed02c40fe68

go.sum

@@ -5,8 +5,8 @@ github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/go-winio v0.4.12 h1:xAfWHN1IrQ0NJ9TBC0KBZoqLjzDTr1ML+4MywiUOryc=
github.com/Microsoft/go-winio v0.4.12/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/go-winio v0.4.13-0.20190408173621-84b4ab48a507 h1:QeteIkN4WmQcflYv2SINj79dlbN22KqsfZHZruD8JXw=
github.com/Microsoft/go-winio v0.4.13-0.20190408173621-84b4ab48a507/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
github.com/Microsoft/hcsshim v0.8.5/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
github.com/Microsoft/hcsshim v0.8.6 h1:ZfF0+zZeYdzMIVMZHKtDKJvLHj76XCuVae/jNkjj0IA=
github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
@@ -36,26 +36,28 @@ github.com/cloudflare/cfssl v0.0.0-20181213083726-b94e044bb51e/go.mod h1:yMWuSON
github.com/codahale/hdrhistogram v0.0.0-20160425231609-f8ad88b59a58/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/containerd/cgroups v0.0.0-20190226200435-dbea6f2bd416 h1:AaSMvkPaxfZD/OsDVBueAKzY5lnWAqLWgUivNg37WHA=
github.com/containerd/cgroups v0.0.0-20190226200435-dbea6f2bd416/go.mod h1:X9rLEHIqSf/wfK8NsPqxJmeZgW4pcfzdXITDrUSJ6uI=
github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50 h1:WMpHmC6AxwWb9hMqhudkqG7A/p14KiMnl6d3r1iUMjU=
github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
github.com/containerd/containerd v1.2.4/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.0-0.20190321141026-ceba56893a76 h1:+do6jVam4JCI9KPBxTzrJpNpu0ZoMpsexTLAAQYNzaQ=
github.com/containerd/containerd v1.3.0-0.20190321141026-ceba56893a76/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/containerd v1.3.0-0.20190507210959-7c1e88399ec0 h1:enps1EZBEgR8QxwdrpsoSxcsCXWnMKchIQ/0dzC0eKw=
github.com/containerd/containerd v1.3.0-0.20190507210959-7c1e88399ec0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
github.com/containerd/continuity v0.0.0-20181001140422-bd77b46c8352/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20181203112020-004b46473808 h1:4BX8f882bXEDKfWIf0wa8HRvpnBoPszJJXL+TVbBw4M=
github.com/containerd/continuity v0.0.0-20181203112020-004b46473808/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/continuity v0.0.0-20190827140505-75bee3e2ccb6 h1:NmTXa/uVnDyp0TY5MKi197+3HWcnYWfnHGyaFthlnGw=
github.com/containerd/continuity v0.0.0-20190827140505-75bee3e2ccb6/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
github.com/containerd/fifo v0.0.0-20180307165137-3d5202aec260/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448 h1:PUD50EuOMkXVcpBIA/R95d56duJR9VxhwncsFbNnxW4=
github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/go-cni v0.0.0-20190610170741-5a4663dad645/go.mod h1:2wlRxCQdiBY+OcjNg5x8kI+5mEL1fGt25L4IzQHYJsM=
github.com/containerd/go-runc v0.0.0-20190911050354-e029b79d8cda/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
github.com/containerd/ttrpc v0.0.0-20190411181408-699c4e40d1e7 h1:SKDlsIhYxNE1LO0xwuOR+3QWj3zRibVQu5jWIMQmOfU=
github.com/containerd/ttrpc v0.0.0-20190411181408-699c4e40d1e7/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
github.com/containerd/typeurl v0.0.0-20190228175220-2a93cfde8c20 h1:14r0i3IeJj6zkNLigAJiv/TWSR8EY+pxIjv5tFiT+n8=
github.com/containerd/typeurl v0.0.0-20190228175220-2a93cfde8c20/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
github.com/containernetworking/cni v0.6.1-0.20180218032124-142cde0c766c/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20161114122254-48702e0da86b/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e h1:Wf6HqHfScWJN9/ZjdUKyjop4mf3Qdd+1TvvltAvM3m8=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -63,15 +65,18 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/denisenkom/go-mssqldb v0.0.0-20190315220205-a8ed825ac853 h1:tTngnoO/B6HQnJ+pK8tN7kEAhmhIfaJOutqq/A4/JTM=
github.com/denisenkom/go-mssqldb v0.0.0-20190315220205-a8ed825ac853/go.mod h1:xN/JuLBIz4bjkxNmByTiV1IbhfnYb6oo99phBn4Eqhc=
github.com/docker/cli v0.0.0-20190321234815-f40f9c240ab0 h1:E7NTtHfZYV+iu35yZ49AbrxqhMHpiOl3FstDYm38vQ0=
github.com/docker/cli v0.0.0-20190321234815-f40f9c240ab0/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/cli v1.14.0-0.20190523191156-ab688a9a79a1 h1:Va5DmtF6dHEy+IjKQEkOkTVKmGSPysDDSf8Md0w0RXg=
github.com/docker/cli v1.14.0-0.20190523191156-ab688a9a79a1/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/compose-on-kubernetes v0.4.19-0.20190128150448-356b2919c496 h1:90ytrX1dbzL7Uf/hHiuWwvywC+gikHv4hkAy4CwRTbs=
github.com/docker/compose-on-kubernetes v0.4.19-0.20190128150448-356b2919c496/go.mod h1:iT2pYfi580XlpaV4KmK0T6+4/9+XoKmk/fhoDod1emE=
github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible h1:dvc1KSkIYTVjZgHf/CTC2diTYC8PzhaA5sFISRfNVrE=
github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v0.0.0-20180531152204-71cd53e4a197/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v1.14.0-0.20190319215453-e7b5f7dbe98c h1:rZ+3jNsgjvYgdZ0Nrd4Udrv8rneDbWBohAPuXsTsvGU=
github.com/docker/docker v1.14.0-0.20190319215453-e7b5f7dbe98c/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v1.14.0-0.20190410063227-d9d9eccdc862 h1:bPhIuPlkAVUi1VJXZLvgRGPEaTqTQ7TuRGuD9TcZd5o=
github.com/docker/docker v1.14.0-0.20190410063227-d9d9eccdc862/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v1.14.0-0.20190410063227-3998dffb806f3887f804b813069f59bc14a7f3c1 h1:7RxCTE2/CdvPF5C622C/L8bPqbJd+/Sjz22p5GdcjcQ=
github.com/docker/docker v1.14.0-0.20190410063227-3998dffb806f3887f804b813069f59bc14a7f3c1/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.6.0/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
github.com/docker/docker-credential-helpers v0.6.1 h1:Dq4iIfcM7cNtddhLVWe9h4QDjsi4OER3Z8voPu/I52g=
github.com/docker/docker-credential-helpers v0.6.1/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
@@ -86,8 +91,7 @@ github.com/docker/go-metrics v0.0.0-20170502235133-d466d4f6fd96 h1:HVQ/BC7Ze+bcV
github.com/docker/go-metrics v0.0.0-20170502235133-d466d4f6fd96/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI=
github.com/docker/go-units v0.3.1 h1:QAFdsA6jLCnglbqE6mUsHuPcJlntY94DkxHf4deHKIU=
github.com/docker/go-units v0.3.1/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/docker/libnetwork v0.0.0-20180913200009-36d3bed0e9f4 h1:PmuO8T1R9jxvOI+Y6+8YBF+um2L6xp6/F52oDeYTZkk=
github.com/docker/libnetwork v0.0.0-20180913200009-36d3bed0e9f4/go.mod h1:93m0aTqz6z+g32wla4l4WxTrdtvBRmVzYRkYvasA5Z8=
github.com/docker/libnetwork v0.8.0-dev.2.0.20190604151032-3c26b4e7495e/go.mod h1:93m0aTqz6z+g32wla4l4WxTrdtvBRmVzYRkYvasA5Z8=
github.com/docker/libtrust v0.0.0-20150526203908-9cbd2a1374f4 h1:k8TfKGeAcDQFFQOGCQMRN04N4a9YrPlRMMKnzAuvM9Q=
github.com/docker/libtrust v0.0.0-20150526203908-9cbd2a1374f4/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
github.com/erikstmartin/go-testdb v0.0.0-20160219214506-8d10e4a1bae5 h1:Yzb9+7DPaBjB8zlTR87/ElzFsnQfuHnVUVqpZZIcV5Y=
@@ -104,7 +108,6 @@ github.com/gofrs/flock v0.7.0 h1:pGFUjl501gafK9HBt1VGL1KCOd/YhIooID+xgyJCf3g=
github.com/gofrs/flock v0.7.0/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
github.com/gofrs/uuid v3.2.0+incompatible h1:y12jRkkFxsd7GpqdSZ+/KCs/fJbqpEXSGd4+jfEaewE=
github.com/gofrs/uuid v3.2.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
github.com/gogo/googleapis v0.0.0-20180501115203-b23578765ee5/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
github.com/gogo/googleapis v1.1.0 h1:kFkMAZBNAn4j7K0GiZr8cRYzejq68VbheufiV3YuyFI=
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
github.com/gogo/protobuf v1.0.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
@@ -116,8 +119,6 @@ github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfU
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0 h1:P3YflyNX/ehuJFLhxviNdFxQPkGK5cDcApsge1SqnvM=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/google/btree v1.0.0 h1:0udJVsspx3VBr5FwtLhQQtuAsVc79tTq0ocGIPAU6qo=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/certificate-transparency-go v1.0.21 h1:Yf1aXowfZ2nuboBsg7iYGLmwsOARdV86pfH3g95wXmE=
github.com/google/certificate-transparency-go v1.0.21/go.mod h1:QeJfpSbVSfYc7RgB3gJFj9cbuQMMchQxrWXz8Ruopmg=
github.com/google/go-cmp v0.2.0 h1:+dTQ8DZQJz0Mb/HjFlkptS1FeQ4cWSnN941F8aEG4SQ=
@@ -126,13 +127,9 @@ github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf h1:+RRA9JqSOZFfKrOeq
github.com/google/gofuzz v0.0.0-20170612174753-24818f796faf/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
github.com/google/shlex v0.0.0-20150127133951-6f45313302b9 h1:JM174NTeGNJ2m/oLH3UOWOvWQQKd+BoL3hcSCUWFLt0=
github.com/google/shlex v0.0.0-20150127133951-6f45313302b9/go.mod h1:RpwtwJQFrIEPstU94h88MWPXP2ektJZ8cZ0YntAmXiE=
github.com/googleapis/gnostic v0.2.0 h1:l6N3VoaVzTncYYW+9yOz2LJJammFZGBO13sqgEhpy9g=
github.com/googleapis/gnostic v0.2.0/go.mod h1:sJBsCZ4ayReDTBIg8b9dl28c5xFWyhBTVRp3pOg5EKY=
github.com/gorilla/mux v1.7.0 h1:tOSd0UKHQd6urX6ApfOn4XdBMY6Sh1MfxV3kmaazO+U=
github.com/gorilla/mux v1.7.0/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
github.com/gotestyourself/gotestyourself v2.2.0+incompatible/go.mod h1:zZKM6oeNM8k+FRljX1mnzVYeS8wiGgQyvST1/GafPbY=
github.com/gregjones/httpcache v0.0.0-20190212212710-3befbb6ad0cc h1:f8eY6cV/x1x+HLjOp4r72s/31/V2aTUtg5oKRRPf8/Q=
github.com/gregjones/httpcache v0.0.0-20190212212710-3befbb6ad0cc/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645 h1:MJG/KsmcqMwFAkh8mTnAwhyKoB+sTAnY4CACC110tbU=
github.com/grpc-ecosystem/grpc-opentracing v0.0.0-20180507213350-8e809c8a8645/go.mod h1:6iZfnjpejD4L/4DwD7NryNaJyCQdzwWwH2MWhCA90Kw=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed h1:5upAirOpQc1Q53c0bnx2ufif5kANL7bfZWcc6VJWJd8=
@@ -186,8 +183,8 @@ github.com/miekg/pkcs11 v0.0.0-20190322140431-074fd7a1ed19/go.mod h1:WCBAbTOdfhH
github.com/mitchellh/hashstructure v0.0.0-20170609045927-2bca23e0e452/go.mod h1:QjSHrPWS+BGUVBYkbTZWEnOh3G1DutKwClXU/ABz6AQ=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/moby/buildkit v0.4.1-0.20190410165125-bcd466a748e950507826c30e835b087289aaf384 h1:SsOa4L1rQyJlSUm7fBAspkBNHDgmcUU60WOZPGuTTy4=
github.com/moby/buildkit v0.4.1-0.20190410165125-bcd466a748e950507826c30e835b087289aaf384/go.mod h1:ivyIn0/bTW+YAXWeOqMWJ9aHLeac6SKIB4QeGlxzu/Q=
github.com/moby/buildkit v0.6.2-0.20190921002054-ae10b292fefb h1:enyviD1ZOxgo62sGpT2yQY1uTtruq84wYJPjFJwsbH0=
github.com/moby/buildkit v0.6.2-0.20190921002054-ae10b292fefb/go.mod h1:JKVImCzxztxvULr5P6ZiBfA/B2P+ZpR6UHxOXQn4KiU=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
@@ -202,8 +199,8 @@ github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQ
github.com/opencontainers/image-spec v1.0.1 h1:JMemWkRwHx4Zj+fVxWoMCFm/8sYGGrUVojFA6h/TRcI=
github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/opencontainers/runc v1.0.0-rc6/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.1-0.20190307181833-2b18fe1d885e h1:+uPGJuuDl61O9GKN/rLHkUCf597mpxmJI06RqMQX81A=
github.com/opencontainers/runc v1.0.1-0.20190307181833-2b18fe1d885e/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runc v1.0.0-rc8 h1:dDCFes8Hj1r/i5qnypONo5jdOme/8HWZC/aNDyhECt0=
github.com/opencontainers/runc v1.0.0-rc8/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
github.com/opencontainers/runtime-spec v0.0.0-20180909173843-eba862dc2470/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
github.com/opencontainers/runtime-spec v1.0.1 h1:wY4pOY8fBdSIvs9+IDHC55thBuEulhzfSgKeC1yFvzQ=
github.com/opencontainers/runtime-spec v1.0.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
@@ -212,8 +209,6 @@ github.com/opentracing/opentracing-go v0.0.0-20171003133519-1361b9cd60be h1:vn0r
github.com/opentracing/opentracing-go v0.0.0-20171003133519-1361b9cd60be/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
@@ -227,6 +222,7 @@ github.com/prometheus/common v0.0.0-20180518154759-7600349dcfe1 h1:osmNoEW2SCW3L
github.com/prometheus/common v0.0.0-20180518154759-7600349dcfe1/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/procfs v0.0.0-20180612222113-7d6f385de8be h1:MoyXp/VjXUwM0GyDcdwT7Ubea2gxOSHpPaFo3qV+Y2A=
github.com/prometheus/procfs v0.0.0-20180612222113-7d6f385de8be/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/serialx/hashring v0.0.0-20190422032157-8b2912629002/go.mod h1:/yeG0My1xr/u+HZrFQ1tOQQQQrOawfyMUH13ai5brBc=
github.com/sirupsen/logrus v1.0.3/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.3.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.0 h1:yKenngtzGh+cUSSh6GWbxW2abRqhYUSR/t/6+2QqNvE=
@@ -245,19 +241,15 @@ github.com/spf13/viper v1.3.2 h1:VUFqw5KcqRf7i70GOzW7N+Q7+gxVBkSSqiXB12+JQ4M=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2 h1:b6uOv7YOFK0TYG7HtkIgExQo+2RdLuwRft63jn2HWj8=
github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
github.com/theupdateframework/notary v0.6.1 h1:7wshjstgS9x9F5LuB1L5mBI2xNMObWqjz+cjWoom6l0=
github.com/theupdateframework/notary v0.6.1/go.mod h1:MOfgIfmox8s7/7fduvB2xyPPMJCrjRLRizA8OFwpnKY=
github.com/tiborvass/cli v0.0.0-20190419012645-1ed02c40fe68 h1:NYTot8AmowoFRqpczwLpi7JRKJblipjgfXh3XE+hi5o=
github.com/tiborvass/cli v0.0.0-20190419012645-1ed02c40fe68/go.mod h1:S/jEQ1OdFvFXmvGs5oicYe8IJS7pBTP99Z0EtGfRs8s=
github.com/tonistiigi/fsutil v0.0.0-20190327153851-3bbb99cdbd76 h1:eGfgYrNUSD448sa4mxH6nQpyZfN39QH0mLB7QaKIjus=
github.com/tonistiigi/fsutil v0.0.0-20190327153851-3bbb99cdbd76/go.mod h1:pzh7kdwkDRh+Bx8J30uqaKJ1M4QrSH/um8fcIXeM8rc=
github.com/tonistiigi/fsutil v0.0.0-20190819224149-3d2716dd0a4d h1:HJg27yqwTV7vFG9dWPDbUi373o/bmSDYGN9mZgVwdH0=
github.com/tonistiigi/fsutil v0.0.0-20190819224149-3d2716dd0a4d/go.mod h1:pzh7kdwkDRh+Bx8J30uqaKJ1M4QrSH/um8fcIXeM8rc=
github.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea h1:SXhTLE6pb6eld/v/cCndK0AMpt1wiVFb/YYmqB3/QG0=
github.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea/go.mod h1:WPnis/6cRcDZSUvVmezrxJPkiO87ThFYsoUiMwWNDJk=
github.com/tonistiigi/vt100 v0.0.0-20190402012908-ad4c4a574305 h1:y/1cL5AL2oRcfzz8CAHHhR6kDDfIOT0WEyH5k40sccM=
@@ -266,9 +258,7 @@ github.com/uber/jaeger-client-go v0.0.0-20180103221425-e02c85f9069e/go.mod h1:WV
github.com/uber/jaeger-lib v1.2.1/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6+uUTzImX/AauajbLI56U=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/vishvananda/netlink v1.0.0 h1:bqNY2lgheFIu1meHUFSH3d7vG93AFyqg3oGbJCOJgSM=
github.com/vishvananda/netlink v1.0.0/go.mod h1:+SR5DhBJrl6ZM7CoCKvpw5BKroDKQ+PJqOg65H/2ktk=
github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc h1:R83G5ikgLMxrBvLh22JhdfI8K6YXEPHx5P03Uu3DRs4=
github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f h1:J9EGpcZtP0E/raorCMxlFGSTBrsSlaDGf3jU/qvAE2c=
github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
@@ -280,26 +270,22 @@ github.com/xlab/handysort v0.0.0-20150421192137-fb3537ed64a1 h1:j2hhcujLRHAg872R
github.com/xlab/handysort v0.0.0-20150421192137-fb3537ed64a1/go.mod h1:QcJo0QPSfTONNIgpN5RA8prR7fF8nkF6cTWTcNerRO8=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793 h1:u+LnwYTOOW7Ukr/fppxEb1Nwz0AtPflrblfvUudpo+I=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9 h1:mKdxBk7AujPs8kU4m80U72y/zjbZ3UcXC7dClwKbUI0=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190129210102-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190129210102-ccddf3741a0c h1:MWY7h75sb9ioBR+s5Zgq1JYXxhbZvrSP2okwLi3ItmI=
golang.org/x/crypto v0.0.0-20190129210102-ccddf3741a0c/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d h1:g9qWBGx4puODJTMVyoPrpoxPFgVGd+z1DZwjfRu4d0I=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd h1:nTDtHvHSdCn1m6ITfMRqtOd/9+7a3s8RBNOZ3eYZzJA=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2 h1:VklqNMn3ovrHsnt90PveolxSbWFaJdECFbxSq0Mqo2M=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be h1:vEDujvNQGv4jgYKudGeI/+DAX4Jffq6hpD55MmoEvKs=
golang.org/x/net v0.0.0-20190311183353-d8887717615a h1:oWX7TPOiFAMXLq8o0ikBYfCJVlRHBcsciT5bXOrH628=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f h1:wMNYb4v58l5UBM7MYRLPG6ZhfOqbKu7X5eyFl8ZhKvA=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190303122642-d455e41777fc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190322080309-f49334f85ddc h1:4gbWbmmPFp4ySWICouJl6emP0MyS31yy9SrTlAGFT+g=
golang.org/x/sys v0.0.0-20190322080309-f49334f85ddc/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -309,17 +295,14 @@ golang.org/x/time v0.0.0-20161028155119-f51c12702a4d/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
google.golang.org/appengine v1.1.0 h1:igQkv0AAhEIvTEpD5LIpAfav2eeVO9HBTjvKHVJPRSs=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20170523043604-d80a6e20e776/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8 h1:Nw54tB0rB7hY/N0NQvRW8DG4Yk3Q6T9cu9RcFQDu1tc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/grpc v1.12.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.19.1 h1:TrBcJ1yqAl1G++wO39nD/qtgpsW9/1+QGrluyMGEYgM=
google.golang.org/grpc v1.19.1/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1 h1:Hz2g2wirWK7H0qIIhGIqRGTuMwTE8HEKFnDZZ7lm9NU=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=


@@ -7,6 +7,7 @@ services:
image: docker.io/tonistiigi/db
webapp:
build:
context: .
dockerfile: Dockerfile.webapp
args:
buildno: 1

hack/generate-authors Executable file

@@ -0,0 +1,21 @@
#!/usr/bin/env bash
set -eu -o pipefail -x
if [ -x "$(command -v greadlink)" ]; then
# on macOS, GNU readlink is available as greadlink and can be installed through brew install coreutils
cd "$(dirname "$(greadlink -f "$BASH_SOURCE")")/.."
else
cd "$(dirname "$(readlink -f "$BASH_SOURCE")")/.."
fi
# see also ".mailmap" for how email addresses and names are deduplicated
{
cat <<-'EOH'
# This file lists all individuals having contributed content to the repository.
# For how it is generated, see `scripts/generate-authors.sh`.
EOH
echo
git log --format='%aN <%aE>' | LC_ALL=C.UTF-8 sort -uf
} > AUTHORS


@@ -16,9 +16,12 @@ type NodeGroup struct {
}
type Node struct {
Name string
Endpoint string
Platforms []specs.Platform
Name string
Endpoint string
Platforms []specs.Platform
Flags []string
ConfigFile string
DriverOpts map[string]string
}
func (ng *NodeGroup) Leave(name string) error {
@@ -33,7 +36,7 @@ func (ng *NodeGroup) Leave(name string) error {
return nil
}
func (ng *NodeGroup) Update(name, endpoint string, platforms []string, endpointsSet bool, actionAppend bool) error {
func (ng *NodeGroup) Update(name, endpoint string, platforms []string, endpointsSet bool, actionAppend bool, flags []string, configFile string, do map[string]string) error {
i := ng.findNode(name)
if i == -1 && !actionAppend {
if len(ng.Nodes) > 0 {
@@ -55,6 +58,9 @@ func (ng *NodeGroup) Update(name, endpoint string, platforms []string, endpoints
if len(platforms) > 0 {
n.Platforms = pp
}
if flags != nil {
n.Flags = flags
}
ng.Nodes[i] = n
if err := ng.validateDuplicates(endpoint, i); err != nil {
return err
@@ -72,9 +78,12 @@ func (ng *NodeGroup) Update(name, endpoint string, platforms []string, endpoints
}
n := Node{
Name: name,
Endpoint: endpoint,
Platforms: pp,
Name: name,
Endpoint: endpoint,
Platforms: pp,
ConfigFile: configFile,
Flags: flags,
DriverOpts: do,
}
ng.Nodes = append(ng.Nodes, n)


@@ -11,16 +11,16 @@ func TestNodeGroupUpdate(t *testing.T) {
t.Parallel()
ng := &NodeGroup{}
err := ng.Update("foo", "foo0", []string{"linux/amd64"}, true, false)
err := ng.Update("foo", "foo0", []string{"linux/amd64"}, true, false, []string{"--debug"}, "", nil)
require.NoError(t, err)
err = ng.Update("foo1", "foo1", []string{"linux/arm64", "linux/arm/v7"}, true, true)
err = ng.Update("foo1", "foo1", []string{"linux/arm64", "linux/arm/v7"}, true, true, nil, "", nil)
require.NoError(t, err)
require.Equal(t, len(ng.Nodes), 2)
// update
err = ng.Update("foo", "foo2", []string{"linux/amd64", "linux/arm"}, true, false)
err = ng.Update("foo", "foo2", []string{"linux/amd64", "linux/arm"}, true, false, nil, "", nil)
require.NoError(t, err)
require.Equal(t, len(ng.Nodes), 2)
@@ -28,9 +28,11 @@ func TestNodeGroupUpdate(t *testing.T) {
require.Equal(t, []string{"linux/arm64"}, platformutil.Format(ng.Nodes[1].Platforms))
require.Equal(t, "foo2", ng.Nodes[0].Endpoint)
require.Equal(t, []string{"--debug"}, ng.Nodes[0].Flags)
require.Equal(t, []string(nil), ng.Nodes[1].Flags)
// duplicate endpoint
err = ng.Update("foo1", "foo2", nil, true, false)
err = ng.Update("foo1", "foo2", nil, true, false, nil, "", nil)
require.Error(t, err)
require.Contains(t, err.Error(), "duplicate endpoint")


@@ -8,6 +8,7 @@ import (
"github.com/containerd/containerd/content"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/platforms"
"github.com/docker/distribution/reference"
"github.com/opencontainers/go-digest"
"github.com/opencontainers/image-spec/specs-go"
@@ -47,14 +48,11 @@ func (r *Resolver) Combine(ctx context.Context, in string, descs []ocispec.Descr
switch mt {
case images.MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest:
if descs[i].Platform == nil {
cfg, err := r.loadConfig(ctx, in, dt)
p, err := r.loadPlatform(ctx, in, dt)
if err != nil {
return err
}
descs[i].Platform = &ocispec.Platform{
OS: cfg.OS,
Architecture: cfg.Architecture,
}
descs[i].Platform = p
}
case images.MediaTypeDockerSchema1Manifest:
return errors.Errorf("schema1 manifests are not allowed in manifest lists")
@@ -168,7 +166,7 @@ func (r *Resolver) Push(ctx context.Context, ref reference.Named, desc ocispec.D
return err
}
func (r *Resolver) loadConfig(ctx context.Context, in string, dt []byte) (*ocispec.Image, error) {
func (r *Resolver) loadPlatform(ctx context.Context, in string, dt []byte) (*ocispec.Platform, error) {
var manifest ocispec.Manifest
if err := json.Unmarshal(dt, &manifest); err != nil {
return nil, errors.WithStack(err)
@@ -179,12 +177,13 @@ func (r *Resolver) loadConfig(ctx context.Context, in string, dt []byte) (*ocisp
return nil, err
}
var img ocispec.Image
if err := json.Unmarshal(dt, &img); err != nil {
var p ocispec.Platform
if err := json.Unmarshal(dt, &p); err != nil {
return nil, errors.WithStack(err)
}
return &img, nil
p = platforms.Normalize(p)
return &p, nil
}
func detectMediaType(dt []byte) (string, error) {

View File

@@ -22,7 +22,7 @@ func Parse(platformsStr []string) ([]specs.Platform, error) {
out = append(out, p...)
continue
}
p, err := platforms.Parse(s)
p, err := parse(s)
if err != nil {
return nil, err
}
@@ -31,6 +31,13 @@ func Parse(platformsStr []string) ([]specs.Platform, error) {
return out, nil
}
func parse(in string) (specs.Platform, error) {
if strings.EqualFold(in, "local") {
return platforms.DefaultSpec(), nil
}
return platforms.Parse(in)
}
func Dedupe(in []specs.Platform) []specs.Platform {
m := map[string]struct{}{}
out := make([]specs.Platform, 0, len(in))
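
A minimal usage sketch of the new "local" handling, assuming the package is imported from github.com/docker/buildx/util/platformutil (import path assumed) and the program runs on a linux/amd64 host:

package main

import (
    "fmt"
    "log"

    "github.com/docker/buildx/util/platformutil" // import path assumed
)

func main() {
    // "local" now resolves to the host's default platform (platforms.DefaultSpec);
    // every other entry still goes through containerd's platforms.Parse.
    pp, err := platformutil.Parse([]string{"local", "linux/arm64"})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(platformutil.Format(platformutil.Dedupe(pp))) // e.g. [linux/amd64 linux/arm64]
}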

View File

@@ -39,6 +39,10 @@ func NewPrinter(ctx context.Context, out *os.File, mode string) Writer {
done: doneCh,
}
if v := os.Getenv("BUILDKIT_PROGRESS"); v != "" && mode == "auto" {
mode = v
}
go func() {
var c console.Console
if cons, err := console.ConsoleFromFile(out); err == nil && (mode == "auto" || mode == "tty") {
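
In other words, when the caller leaves the mode at "auto", an explicit BUILDKIT_PROGRESS value now wins. A minimal sketch of that behaviour from a caller's point of view (import path assumed):

package main

import (
    "context"
    "os"

    "github.com/docker/buildx/util/progress" // import path assumed
)

func main() {
    // With mode "auto", the printer defers to BUILDKIT_PROGRESS when it is set,
    // so this behaves as if "plain" had been requested explicitly.
    os.Setenv("BUILDKIT_PROGRESS", "plain")
    _ = progress.NewPrinter(context.Background(), os.Stderr, "auto")
}

On the command line, the same effect comes from exporting BUILDKIT_PROGRESS=plain before running a build without --progress.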

View File

@@ -3,10 +3,13 @@
package winio
import (
"context"
"errors"
"fmt"
"io"
"net"
"os"
"runtime"
"syscall"
"time"
"unsafe"
@@ -18,6 +21,48 @@ import (
//sys getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) = GetNamedPipeInfo
//sys getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) = GetNamedPipeHandleStateW
//sys localAlloc(uFlags uint32, length uint32) (ptr uintptr) = LocalAlloc
//sys ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntstatus) = ntdll.NtCreateNamedPipeFile
//sys rtlNtStatusToDosError(status ntstatus) (winerr error) = ntdll.RtlNtStatusToDosErrorNoTeb
//sys rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntstatus) = ntdll.RtlDosPathNameToNtPathName_U
//sys rtlDefaultNpAcl(dacl *uintptr) (status ntstatus) = ntdll.RtlDefaultNpAcl
type ioStatusBlock struct {
Status, Information uintptr
}
type objectAttributes struct {
Length uintptr
RootDirectory uintptr
ObjectName *unicodeString
Attributes uintptr
SecurityDescriptor *securityDescriptor
SecurityQoS uintptr
}
type unicodeString struct {
Length uint16
MaximumLength uint16
Buffer uintptr
}
type securityDescriptor struct {
Revision byte
Sbz1 byte
Control uint16
Owner uintptr
Group uintptr
Sacl uintptr
Dacl uintptr
}
type ntstatus int32
func (status ntstatus) Err() error {
if status >= 0 {
return nil
}
return rtlNtStatusToDosError(status)
}
const (
cERROR_PIPE_BUSY = syscall.Errno(231)
@@ -25,21 +70,20 @@ const (
cERROR_PIPE_CONNECTED = syscall.Errno(535)
cERROR_SEM_TIMEOUT = syscall.Errno(121)
cPIPE_ACCESS_DUPLEX = 0x3
cFILE_FLAG_FIRST_PIPE_INSTANCE = 0x80000
cSECURITY_SQOS_PRESENT = 0x100000
cSECURITY_ANONYMOUS = 0
cPIPE_REJECT_REMOTE_CLIENTS = 0x8
cPIPE_UNLIMITED_INSTANCES = 255
cNMPWAIT_USE_DEFAULT_WAIT = 0
cNMPWAIT_NOWAIT = 1
cSECURITY_SQOS_PRESENT = 0x100000
cSECURITY_ANONYMOUS = 0
cPIPE_TYPE_MESSAGE = 4
cPIPE_READMODE_MESSAGE = 2
cFILE_OPEN = 1
cFILE_CREATE = 2
cFILE_PIPE_MESSAGE_TYPE = 1
cFILE_PIPE_REJECT_REMOTE_CLIENTS = 2
cSE_DACL_PRESENT = 4
)
var (
@@ -137,9 +181,30 @@ func (s pipeAddress) String() string {
return string(s)
}
// tryDialPipe attempts to dial the pipe at `path` until `ctx` cancellation or timeout.
func tryDialPipe(ctx context.Context, path *string) (syscall.Handle, error) {
for {
select {
case <-ctx.Done():
return syscall.Handle(0), ctx.Err()
default:
h, err := createFile(*path, syscall.GENERIC_READ|syscall.GENERIC_WRITE, 0, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED|cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
if err == nil {
return h, nil
}
if err != cERROR_PIPE_BUSY {
return h, &os.PathError{Err: err, Op: "open", Path: *path}
}
// Wait 10 msec and try again. This is a rather simplistic
// view, as we always try each 10 milliseconds.
time.Sleep(time.Millisecond * 10)
}
}
}
// DialPipe connects to a named pipe by path, timing out if the connection
// takes longer than the specified duration. If timeout is nil, then we use
// a default timeout of 5 seconds. (We do not use WaitNamedPipe.)
// a default timeout of 2 seconds. (We do not use WaitNamedPipe.)
func DialPipe(path string, timeout *time.Duration) (net.Conn, error) {
var absTimeout time.Time
if timeout != nil {
@@ -147,23 +212,22 @@ func DialPipe(path string, timeout *time.Duration) (net.Conn, error) {
} else {
absTimeout = time.Now().Add(time.Second * 2)
}
ctx, _ := context.WithDeadline(context.Background(), absTimeout)
conn, err := DialPipeContext(ctx, path)
if err == context.DeadlineExceeded {
return nil, ErrTimeout
}
return conn, err
}
// DialPipeContext attempts to connect to a named pipe by `path` until `ctx`
// cancellation or timeout.
func DialPipeContext(ctx context.Context, path string) (net.Conn, error) {
var err error
var h syscall.Handle
for {
h, err = createFile(path, syscall.GENERIC_READ|syscall.GENERIC_WRITE, 0, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED|cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
if err != cERROR_PIPE_BUSY {
break
}
if time.Now().After(absTimeout) {
return nil, ErrTimeout
}
// Wait 10 msec and try again. This is a rather simplistic
// view, as we always try each 10 milliseconds.
time.Sleep(time.Millisecond * 10)
}
h, err = tryDialPipe(ctx, &path)
if err != nil {
return nil, &os.PathError{Op: "open", Path: path, Err: err}
return nil, err
}
var flags uint32
@@ -194,43 +258,87 @@ type acceptResponse struct {
}
type win32PipeListener struct {
firstHandle syscall.Handle
path string
securityDescriptor []byte
config PipeConfig
acceptCh chan (chan acceptResponse)
closeCh chan int
doneCh chan int
firstHandle syscall.Handle
path string
config PipeConfig
acceptCh chan (chan acceptResponse)
closeCh chan int
doneCh chan int
}
func makeServerPipeHandle(path string, securityDescriptor []byte, c *PipeConfig, first bool) (syscall.Handle, error) {
var flags uint32 = cPIPE_ACCESS_DUPLEX | syscall.FILE_FLAG_OVERLAPPED
if first {
flags |= cFILE_FLAG_FIRST_PIPE_INSTANCE
}
var mode uint32 = cPIPE_REJECT_REMOTE_CLIENTS
if c.MessageMode {
mode |= cPIPE_TYPE_MESSAGE
}
sa := &syscall.SecurityAttributes{}
sa.Length = uint32(unsafe.Sizeof(*sa))
if securityDescriptor != nil {
len := uint32(len(securityDescriptor))
sa.SecurityDescriptor = localAlloc(0, len)
defer localFree(sa.SecurityDescriptor)
copy((*[0xffff]byte)(unsafe.Pointer(sa.SecurityDescriptor))[:], securityDescriptor)
}
h, err := createNamedPipe(path, flags, mode, cPIPE_UNLIMITED_INSTANCES, uint32(c.OutputBufferSize), uint32(c.InputBufferSize), 0, sa)
func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (syscall.Handle, error) {
path16, err := syscall.UTF16FromString(path)
if err != nil {
return 0, &os.PathError{Op: "open", Path: path, Err: err}
}
var oa objectAttributes
oa.Length = unsafe.Sizeof(oa)
var ntPath unicodeString
if err := rtlDosPathNameToNtPathName(&path16[0], &ntPath, 0, 0).Err(); err != nil {
return 0, &os.PathError{Op: "open", Path: path, Err: err}
}
defer localFree(ntPath.Buffer)
oa.ObjectName = &ntPath
// The security descriptor is only needed for the first pipe.
if first {
if sd != nil {
len := uint32(len(sd))
sdb := localAlloc(0, len)
defer localFree(sdb)
copy((*[0xffff]byte)(unsafe.Pointer(sdb))[:], sd)
oa.SecurityDescriptor = (*securityDescriptor)(unsafe.Pointer(sdb))
} else {
// Construct the default named pipe security descriptor.
var dacl uintptr
if err := rtlDefaultNpAcl(&dacl).Err(); err != nil {
return 0, fmt.Errorf("getting default named pipe ACL: %s", err)
}
defer localFree(dacl)
sdb := &securityDescriptor{
Revision: 1,
Control: cSE_DACL_PRESENT,
Dacl: dacl,
}
oa.SecurityDescriptor = sdb
}
}
typ := uint32(cFILE_PIPE_REJECT_REMOTE_CLIENTS)
if c.MessageMode {
typ |= cFILE_PIPE_MESSAGE_TYPE
}
disposition := uint32(cFILE_OPEN)
access := uint32(syscall.GENERIC_READ | syscall.GENERIC_WRITE | syscall.SYNCHRONIZE)
if first {
disposition = cFILE_CREATE
// By not asking for read or write access, the named pipe file system
// will put this pipe into an initially disconnected state, blocking
// client connections until the next call with first == false.
access = syscall.SYNCHRONIZE
}
timeout := int64(-50 * 10000) // 50ms
var (
h syscall.Handle
iosb ioStatusBlock
)
err = ntCreateNamedPipeFile(&h, access, &oa, &iosb, syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE, disposition, 0, typ, 0, 0, 0xffffffff, uint32(c.InputBufferSize), uint32(c.OutputBufferSize), &timeout).Err()
if err != nil {
return 0, &os.PathError{Op: "open", Path: path, Err: err}
}
runtime.KeepAlive(ntPath)
return h, nil
}
func (l *win32PipeListener) makeServerPipe() (*win32File, error) {
h, err := makeServerPipeHandle(l.path, l.securityDescriptor, &l.config, false)
h, err := makeServerPipeHandle(l.path, nil, &l.config, false)
if err != nil {
return nil, err
}
@@ -341,32 +449,13 @@ func ListenPipe(path string, c *PipeConfig) (net.Listener, error) {
if err != nil {
return nil, err
}
// Create a client handle and connect it. This results in the pipe
// instance always existing, so that clients see ERROR_PIPE_BUSY
// rather than ERROR_FILE_NOT_FOUND. This ties the first instance
// up so that no other instances can be used. This would have been
// cleaner if the Win32 API matched CreateFile with ConnectNamedPipe
// instead of CreateNamedPipe. (Apparently created named pipes are
// considered to be in listening state regardless of whether any
// active calls to ConnectNamedPipe are outstanding.)
h2, err := createFile(path, 0, 0, nil, syscall.OPEN_EXISTING, cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
if err != nil {
syscall.Close(h)
return nil, err
}
// Close the client handle. The server side of the instance will
// still be busy, leading to ERROR_PIPE_BUSY instead of
// ERROR_NOT_FOUND, as long as we don't close the server handle,
// or disconnect the client with DisconnectNamedPipe.
syscall.Close(h2)
l := &win32PipeListener{
firstHandle: h,
path: path,
securityDescriptor: sd,
config: *c,
acceptCh: make(chan (chan acceptResponse)),
closeCh: make(chan int),
doneCh: make(chan int),
firstHandle: h,
path: path,
config: *c,
acceptCh: make(chan (chan acceptResponse)),
closeCh: make(chan int),
doneCh: make(chan int),
}
go l.listenerRoutine()
return l, nil
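
The rewritten dialer is easiest to see from the caller's side. A minimal, Windows-only usage sketch of the new DialPipeContext entry point; the pipe name is invented for illustration:

package main

import (
    "context"
    "log"
    "time"

    winio "github.com/Microsoft/go-winio"
)

func main() {
    // Cancellation and timeout now come from the context instead of a *time.Duration.
    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    conn, err := winio.DialPipeContext(ctx, `\\.\pipe\docker_engine`)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    // conn is a net.Conn backed by the named pipe; the duration-based DialPipe now wraps this.
}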

View File

@@ -1,4 +1,4 @@
// MACHINE GENERATED BY 'go generate' COMMAND; DO NOT EDIT
// Code generated by 'go generate'; DO NOT EDIT.
package winio
@@ -38,6 +38,7 @@ func errnoErr(e syscall.Errno) error {
var (
modkernel32 = windows.NewLazySystemDLL("kernel32.dll")
modntdll = windows.NewLazySystemDLL("ntdll.dll")
modadvapi32 = windows.NewLazySystemDLL("advapi32.dll")
procCancelIoEx = modkernel32.NewProc("CancelIoEx")
@@ -47,10 +48,13 @@ var (
procConnectNamedPipe = modkernel32.NewProc("ConnectNamedPipe")
procCreateNamedPipeW = modkernel32.NewProc("CreateNamedPipeW")
procCreateFileW = modkernel32.NewProc("CreateFileW")
procWaitNamedPipeW = modkernel32.NewProc("WaitNamedPipeW")
procGetNamedPipeInfo = modkernel32.NewProc("GetNamedPipeInfo")
procGetNamedPipeHandleStateW = modkernel32.NewProc("GetNamedPipeHandleStateW")
procLocalAlloc = modkernel32.NewProc("LocalAlloc")
procNtCreateNamedPipeFile = modntdll.NewProc("NtCreateNamedPipeFile")
procRtlNtStatusToDosErrorNoTeb = modntdll.NewProc("RtlNtStatusToDosErrorNoTeb")
procRtlDosPathNameToNtPathName_U = modntdll.NewProc("RtlDosPathNameToNtPathName_U")
procRtlDefaultNpAcl = modntdll.NewProc("RtlDefaultNpAcl")
procLookupAccountNameW = modadvapi32.NewProc("LookupAccountNameW")
procConvertSidToStringSidW = modadvapi32.NewProc("ConvertSidToStringSidW")
procConvertStringSecurityDescriptorToSecurityDescriptorW = modadvapi32.NewProc("ConvertStringSecurityDescriptorToSecurityDescriptorW")
@@ -176,27 +180,6 @@ func _createFile(name *uint16, access uint32, mode uint32, sa *syscall.SecurityA
return
}
func waitNamedPipe(name string, timeout uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(name)
if err != nil {
return
}
return _waitNamedPipe(_p0, timeout)
}
func _waitNamedPipe(name *uint16, timeout uint32) (err error) {
r1, _, e1 := syscall.Syscall(procWaitNamedPipeW.Addr(), 2, uintptr(unsafe.Pointer(name)), uintptr(timeout), 0)
if r1 == 0 {
if e1 != 0 {
err = errnoErr(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procGetNamedPipeInfo.Addr(), 5, uintptr(pipe), uintptr(unsafe.Pointer(flags)), uintptr(unsafe.Pointer(outSize)), uintptr(unsafe.Pointer(inSize)), uintptr(unsafe.Pointer(maxInstances)), 0)
if r1 == 0 {
@@ -227,6 +210,32 @@ func localAlloc(uFlags uint32, length uint32) (ptr uintptr) {
return
}
func ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntstatus) {
r0, _, _ := syscall.Syscall15(procNtCreateNamedPipeFile.Addr(), 14, uintptr(unsafe.Pointer(pipe)), uintptr(access), uintptr(unsafe.Pointer(oa)), uintptr(unsafe.Pointer(iosb)), uintptr(share), uintptr(disposition), uintptr(options), uintptr(typ), uintptr(readMode), uintptr(completionMode), uintptr(maxInstances), uintptr(inboundQuota), uintptr(outputQuota), uintptr(unsafe.Pointer(timeout)), 0)
status = ntstatus(r0)
return
}
func rtlNtStatusToDosError(status ntstatus) (winerr error) {
r0, _, _ := syscall.Syscall(procRtlNtStatusToDosErrorNoTeb.Addr(), 1, uintptr(status), 0, 0)
if r0 != 0 {
winerr = syscall.Errno(r0)
}
return
}
func rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntstatus) {
r0, _, _ := syscall.Syscall6(procRtlDosPathNameToNtPathName_U.Addr(), 4, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(ntName)), uintptr(filePart), uintptr(reserved), 0, 0)
status = ntstatus(r0)
return
}
func rtlDefaultNpAcl(dacl *uintptr) (status ntstatus) {
r0, _, _ := syscall.Syscall(procRtlDefaultNpAcl.Addr(), 1, uintptr(unsafe.Pointer(dacl)), 0, 0)
status = ntstatus(r0)
return
}
func lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(accountName)

View File

@@ -0,0 +1,51 @@
package osversion
import (
"fmt"
"golang.org/x/sys/windows"
)
// OSVersion is a wrapper for Windows version information
// https://msdn.microsoft.com/en-us/library/windows/desktop/ms724439(v=vs.85).aspx
type OSVersion struct {
Version uint32
MajorVersion uint8
MinorVersion uint8
Build uint16
}
// https://msdn.microsoft.com/en-us/library/windows/desktop/ms724833(v=vs.85).aspx
type osVersionInfoEx struct {
OSVersionInfoSize uint32
MajorVersion uint32
MinorVersion uint32
BuildNumber uint32
PlatformID uint32
CSDVersion [128]uint16
ServicePackMajor uint16
ServicePackMinor uint16
SuiteMask uint16
ProductType byte
Reserve byte
}
// Get gets the operating system version on Windows.
// The calling application must be manifested to get the correct version information.
func Get() OSVersion {
var err error
osv := OSVersion{}
osv.Version, err = windows.GetVersion()
if err != nil {
// GetVersion never fails.
panic(err)
}
osv.MajorVersion = uint8(osv.Version & 0xFF)
osv.MinorVersion = uint8(osv.Version >> 8 & 0xFF)
osv.Build = uint16(osv.Version >> 16)
return osv
}
func (osv OSVersion) ToString() string {
return fmt.Sprintf("%d.%d.%d", osv.MajorVersion, osv.MinorVersion, osv.Build)
}

View File

@@ -0,0 +1,10 @@
package osversion
const (
// RS2 was a client-only release in case you're asking why it's not in the list.
RS1 = 14393
RS3 = 16299
RS4 = 17134
RS5 = 17763
)
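
A small Windows-only sketch of how the new package is meant to be used, assuming the import path github.com/Microsoft/hcsshim/osversion from containerd's vendor tree:

package main

import (
    "fmt"

    "github.com/Microsoft/hcsshim/osversion" // import path assumed
)

func main() {
    osv := osversion.Get()
    fmt.Printf("Windows %s (build %d)\n", osv.ToString(), osv.Build)

    // Build numbers compare directly against the release constants above.
    if osv.Build >= osversion.RS5 {
        fmt.Println("running on RS5 (17763) or later")
    }
}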

View File

@@ -12,7 +12,7 @@ environment:
GOPATH: C:\gopath
CGO_ENABLED: 1
matrix:
- GO_VERSION: 1.11
- GO_VERSION: 1.12.1
before_build:
- choco install -y mingw --version 5.3.0
@@ -22,10 +22,29 @@ before_build:
- 7z x go%GO_VERSION%.windows-amd64.zip -oC:\ >nul
- go version
- choco install codecov
# Clone hcsshim at the vendored version
- bash.exe -elc "export PATH=/c/tools/mingw64/bin:$PATH;
rm -rf /c/gopath/src/github.com/Microsoft/hcsshim;
git clone -q https://github.com/Microsoft/hcsshim.git /c/gopath/src/github.com/Microsoft/hcsshim;
export HCSSHIM_VERSION=`grep Microsoft/hcsshim vendor.conf | awk '{print $2}'`;
echo Using Microsoft/hcsshim $HCSSHIM_VERSION;
pushd /c/gopath/src/github.com/Microsoft/hcsshim;
git checkout $HCSSHIM_VERSION;
popd"
# Print host version. TODO: Remove this when containerd has a way to get host version
- ps: $psversiontable
build_script:
# Build containerd-shim-runhcs-v1.exe and runhcs.exe from Microsoft/hcsshim
- bash.exe -elc "export PATH=/c/tools/mingw64/bin:$PATH;
export GOBIN=/c/gopath/src/github.com/Microsoft/hcsshim/bin;
mkdir $GOBIN;
pushd /c/gopath/src/github.com/Microsoft/hcsshim/cmd/containerd-shim-runhcs-v1;
go install;
cd ../runhcs;
go install;
ls -al $GOBIN;
popd"
- bash.exe -elc "export PATH=/c/tools/mingw64/bin:/c/gopath/bin:$PATH;
script/setup/install-dev-tools;
mingw32-make.exe check"

View File

@@ -1,6 +1,7 @@
Abhinandan Prativadi <abhi@docker.com> Abhinandan Prativadi <aprativadi@gmail.com>
Abhinandan Prativadi <abhi@docker.com> abhi <abhi@docker.com>
Akihiro Suda <suda.akihiro@lab.ntt.co.jp> Akihiro Suda <suda.kyoto@gmail.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> Akihiro Suda <suda.kyoto@gmail.com>
Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp> Akihiro Suda <suda.akihiro@lab.ntt.co.jp>
Andrei Vagin <avagin@virtuozzo.com> Andrei Vagin <avagin@openvz.org>
Brent Baude <bbaude@redhat.com> baude <bbaude@redhat.com>
Frank Yang <yyb196@gmail.com> frank yang <yyb196@gmail.com>

View File

@@ -74,14 +74,21 @@ script:
- if [ "$GOOS" = "linux" ]; then sudo PATH=$PATH GOPATH=$GOPATH make integration ; fi
# Run the integration suite a second time. See discussion in github.com/containerd/containerd/pull/1759
- if [ "$GOOS" = "linux" ]; then sudo PATH=$PATH GOPATH=$GOPATH TESTFLAGS_PARALLEL=1 make integration ; fi
- if [ "$GOOS" = "linux" ]; then
- |
if [ "$GOOS" = "linux" ]; then
sudo mkdir -p /etc/containerd
sudo bash -c "cat > /etc/containerd/config.toml <<EOF
[plugins.cri.containerd.default_runtime]
runtime_type = \"${TEST_RUNTIME}\"
EOF"
sudo PATH=$PATH containerd -log-level debug &> /tmp/containerd-cri.log &
sudo ctr version ;
sudo PATH=$PATH GOPATH=$GOPATH critest --runtime-endpoint=/var/run/containerd/containerd.sock --parallel=8 ;
TEST_RC=$? ;
test $TEST_RC -ne 0 && cat /tmp/containerd-cri.log ;
sudo pkill containerd ;
test $TEST_RC -eq 0 || /bin/false ;
sudo ctr version
sudo PATH=$PATH GOPATH=$GOPATH critest --runtime-endpoint=/var/run/containerd/containerd.sock --parallel=8
TEST_RC=$?
test $TEST_RC -ne 0 && cat /tmp/containerd-cri.log
sudo pkill containerd
sudo rm -rf /etc/containerd
test $TEST_RC -eq 0 || /bin/false
fi
after_success:

View File

@@ -104,6 +104,7 @@ make generate
> * `no_btrfs`: A build tag disables building the btrfs snapshot driver.
> * `no_cri`: A build tag disables building Kubernetes [CRI](http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html) support into containerd.
> See [here](https://github.com/containerd/cri-containerd#build-tags) for build tags of CRI plugin.
> * `no_devmapper`: A build tag disables building the device mapper snapshot driver.
>
> For example, adding `BUILDTAGS=no_btrfs` to your environment before calling the **binaries**
> Makefile target will disable the btrfs driver within the containerd Go build.
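
Following that example, the newly documented tag works the same way: setting `BUILDTAGS=no_devmapper` in the environment before calling the **binaries** target (e.g. `BUILDTAGS=no_devmapper make binaries`, command form assumed from the wording above) builds containerd without the device mapper snapshot driver.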

View File

@@ -231,6 +231,20 @@ clean: ## clean up binaries
@echo "$(WHALE) $@"
@rm -f $(BINARIES)
clean-test: ## clean up debris from previously failed tests
@echo "$(WHALE) $@"
$(eval containers=$(shell find /run/containerd/runc -mindepth 2 -maxdepth 3 -type d -exec basename {} \;))
$(shell pidof containerd containerd-shim runc | xargs -r -n 1 kill -9)
@( for container in $(containers); do \
grep $$container /proc/self/mountinfo | while read -r mountpoint; do \
umount $$(echo $$mountpoint | awk '{print $$5}'); \
done; \
find /sys/fs/cgroup -name $$container -print0 | xargs -r -0 rmdir; \
done )
@rm -rf /run/containerd/runc/*
@rm -rf /run/containerd/fifo/*
@rm -rf /run/containerd-test/*
install: ## install binaries
@echo "$(WHALE) $@ $(BINARIES)"
@mkdir -p $(DESTDIR)/bin

View File

@@ -35,6 +35,10 @@ plugins = ["grpc", "fieldpath"]
prefixes = ["github.com/containerd/containerd/api/events"]
plugins = ["fieldpath"] # disable grpc for this package
[[overrides]]
prefixes = ["github.com/containerd/containerd/api/services/ttrpc/events/v1"]
plugins = ["ttrpc", "fieldpath"]
[[overrides]]
# enable ttrpc and disable fieldpath and grpc for the shim
prefixes = ["github.com/containerd/containerd/runtime/v1/shim/v1", "github.com/containerd/containerd/runtime/v2/task"]

View File

@@ -1,4 +1,4 @@
![containerd banner](https://raw.githubusercontent.com/cncf/artwork/master/containerd/horizontal/color/containerd-horizontal-color.png)
![containerd banner](https://raw.githubusercontent.com/cncf/artwork/master/projects/containerd/horizontal/color/containerd-horizontal-color.png)
[![GoDoc](https://godoc.org/github.com/containerd/containerd?status.svg)](https://godoc.org/github.com/containerd/containerd)
[![Build Status](https://travis-ci.org/containerd/containerd.svg?branch=master)](https://travis-ci.org/containerd/containerd)

View File

@@ -78,13 +78,16 @@ by `<major>.<minor>`. Releases branches will be in one of several states:
- __*Next*__: The next planned release branch.
- __*Active*__: The release is currently supported and accepting patches.
- __*Extended*__: The release is only accepting security patches.
- __*End of Life*__: The release branch is no longer supported and no new patches will be accepted.
Releases will be supported up to one year after a _minor_ release. This means that
we will accept bug reports and backports to release branches until the end of
life date. If no new _minor_ release has been made, that release will be
considered supported until the next _minor_ is released or one year, whichever
is longer.
considered supported until 6 months after the next _minor_ is released or one year,
whichever is longer. Additionally, releases may have an extended security support
period after the end of the active period to accept security backports. This
timeframe will be decided by maintainers before the end of the active status.
The current state is available in the following table:
@@ -94,9 +97,9 @@ The current state is available in the following table:
| [0.1](https://github.com/containerd/containerd/releases/tag/v0.1.0) | End of Life | Mar 21, 2016 | - |
| [0.2](https://github.com/containerd/containerd/tree/v0.2.x) | End of Life | Apr 21, 2016 | December 5, 2017 |
| [1.0](https://github.com/containerd/containerd/releases/tag/v1.0.3) | End of Life | December 5, 2017 | December 5, 2018 |
| [1.1](https://github.com/containerd/containerd/releases/tag/v1.1.6) | Active | April 23, 2018 | max(April 23, 2019, Kubernetes 1.10 EOL) |
| [1.2](https://github.com/containerd/containerd/releases/tag/v1.2.4) | Active | October 24, 2018 | max(October 24, 2019, release of 1.3.0) |
| [1.3](https://github.com/containerd/containerd/milestone/20) | Next | TBD | max(TBD+1 year, release of 1.4.0) |
| [1.1](https://github.com/containerd/containerd/releases/tag/v1.1.7) | Active | April 23, 2018 | July 23, 2019 (Active), October 23, 2019 (Extended) |
| [1.2](https://github.com/containerd/containerd/releases/tag/v1.2.6) | Active | October 24, 2018 | max(October 24, 2019, release of 1.3.0 + 6 months) |
| [1.3](https://github.com/containerd/containerd/milestone/20) | Next | TBD | max(TBD+1 year, release of 1.4.0 + 6 months) |
Note that branches and release from before 1.0 may not follow these rules.

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,36 +1,20 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: github.com/containerd/containerd/api/services/diff/v1/diff.proto
/*
Package diff is a generated protocol buffer package.
It is generated from these files:
github.com/containerd/containerd/api/services/diff/v1/diff.proto
It has these top-level messages:
ApplyRequest
ApplyResponse
DiffRequest
DiffResponse
*/
package diff
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import containerd_types "github.com/containerd/containerd/api/types"
import containerd_types1 "github.com/containerd/containerd/api/types"
import context "golang.org/x/net/context"
import grpc "google.golang.org/grpc"
import strings "strings"
import reflect "reflect"
import sortkeys "github.com/gogo/protobuf/sortkeys"
import io "io"
import (
context "context"
fmt "fmt"
types "github.com/containerd/containerd/api/types"
proto "github.com/gogo/protobuf/proto"
github_com_gogo_protobuf_sortkeys "github.com/gogo/protobuf/sortkeys"
grpc "google.golang.org/grpc"
io "io"
math "math"
reflect "reflect"
strings "strings"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -45,32 +29,94 @@ const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type ApplyRequest struct {
// Diff is the descriptor of the diff to be extracted
Diff *containerd_types1.Descriptor `protobuf:"bytes,1,opt,name=diff" json:"diff,omitempty"`
Mounts []*containerd_types.Mount `protobuf:"bytes,2,rep,name=mounts" json:"mounts,omitempty"`
Diff *types.Descriptor `protobuf:"bytes,1,opt,name=diff,proto3" json:"diff,omitempty"`
Mounts []*types.Mount `protobuf:"bytes,2,rep,name=mounts,proto3" json:"mounts,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ApplyRequest) Reset() { *m = ApplyRequest{} }
func (*ApplyRequest) ProtoMessage() {}
func (*ApplyRequest) Descriptor() ([]byte, []int) { return fileDescriptorDiff, []int{0} }
func (m *ApplyRequest) Reset() { *m = ApplyRequest{} }
func (*ApplyRequest) ProtoMessage() {}
func (*ApplyRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_3b36a99e6faaa935, []int{0}
}
func (m *ApplyRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ApplyRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ApplyRequest.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ApplyRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_ApplyRequest.Merge(m, src)
}
func (m *ApplyRequest) XXX_Size() int {
return m.Size()
}
func (m *ApplyRequest) XXX_DiscardUnknown() {
xxx_messageInfo_ApplyRequest.DiscardUnknown(m)
}
var xxx_messageInfo_ApplyRequest proto.InternalMessageInfo
type ApplyResponse struct {
// Applied is the descriptor for the object which was applied.
// If the input was a compressed blob then the result will be
// the descriptor for the uncompressed blob.
Applied *containerd_types1.Descriptor `protobuf:"bytes,1,opt,name=applied" json:"applied,omitempty"`
Applied *types.Descriptor `protobuf:"bytes,1,opt,name=applied,proto3" json:"applied,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ApplyResponse) Reset() { *m = ApplyResponse{} }
func (*ApplyResponse) ProtoMessage() {}
func (*ApplyResponse) Descriptor() ([]byte, []int) { return fileDescriptorDiff, []int{1} }
func (m *ApplyResponse) Reset() { *m = ApplyResponse{} }
func (*ApplyResponse) ProtoMessage() {}
func (*ApplyResponse) Descriptor() ([]byte, []int) {
return fileDescriptor_3b36a99e6faaa935, []int{1}
}
func (m *ApplyResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ApplyResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ApplyResponse.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ApplyResponse) XXX_Merge(src proto.Message) {
xxx_messageInfo_ApplyResponse.Merge(m, src)
}
func (m *ApplyResponse) XXX_Size() int {
return m.Size()
}
func (m *ApplyResponse) XXX_DiscardUnknown() {
xxx_messageInfo_ApplyResponse.DiscardUnknown(m)
}
var xxx_messageInfo_ApplyResponse proto.InternalMessageInfo
type DiffRequest struct {
// Left are the mounts which represent the older copy
// in which is the base of the computed changes.
Left []*containerd_types.Mount `protobuf:"bytes,1,rep,name=left" json:"left,omitempty"`
Left []*types.Mount `protobuf:"bytes,1,rep,name=left,proto3" json:"left,omitempty"`
// Right are the mounts which represents the newer copy
// in which changes from the left were made into.
Right []*containerd_types.Mount `protobuf:"bytes,2,rep,name=right" json:"right,omitempty"`
Right []*types.Mount `protobuf:"bytes,2,rep,name=right,proto3" json:"right,omitempty"`
// MediaType is the media type descriptor for the created diff
// object
MediaType string `protobuf:"bytes,3,opt,name=media_type,json=mediaType,proto3" json:"media_type,omitempty"`
@@ -79,29 +125,129 @@ type DiffRequest struct {
Ref string `protobuf:"bytes,4,opt,name=ref,proto3" json:"ref,omitempty"`
// Labels are the labels to apply to the generated content
// on content store commit.
Labels map[string]string `protobuf:"bytes,5,rep,name=labels" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
Labels map[string]string `protobuf:"bytes,5,rep,name=labels,proto3" json:"labels,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *DiffRequest) Reset() { *m = DiffRequest{} }
func (*DiffRequest) ProtoMessage() {}
func (*DiffRequest) Descriptor() ([]byte, []int) { return fileDescriptorDiff, []int{2} }
func (m *DiffRequest) Reset() { *m = DiffRequest{} }
func (*DiffRequest) ProtoMessage() {}
func (*DiffRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_3b36a99e6faaa935, []int{2}
}
func (m *DiffRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *DiffRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_DiffRequest.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *DiffRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_DiffRequest.Merge(m, src)
}
func (m *DiffRequest) XXX_Size() int {
return m.Size()
}
func (m *DiffRequest) XXX_DiscardUnknown() {
xxx_messageInfo_DiffRequest.DiscardUnknown(m)
}
var xxx_messageInfo_DiffRequest proto.InternalMessageInfo
type DiffResponse struct {
// Diff is the descriptor of the diff which can be applied
Diff *containerd_types1.Descriptor `protobuf:"bytes,3,opt,name=diff" json:"diff,omitempty"`
Diff *types.Descriptor `protobuf:"bytes,3,opt,name=diff,proto3" json:"diff,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *DiffResponse) Reset() { *m = DiffResponse{} }
func (*DiffResponse) ProtoMessage() {}
func (*DiffResponse) Descriptor() ([]byte, []int) { return fileDescriptorDiff, []int{3} }
func (m *DiffResponse) Reset() { *m = DiffResponse{} }
func (*DiffResponse) ProtoMessage() {}
func (*DiffResponse) Descriptor() ([]byte, []int) {
return fileDescriptor_3b36a99e6faaa935, []int{3}
}
func (m *DiffResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *DiffResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_DiffResponse.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *DiffResponse) XXX_Merge(src proto.Message) {
xxx_messageInfo_DiffResponse.Merge(m, src)
}
func (m *DiffResponse) XXX_Size() int {
return m.Size()
}
func (m *DiffResponse) XXX_DiscardUnknown() {
xxx_messageInfo_DiffResponse.DiscardUnknown(m)
}
var xxx_messageInfo_DiffResponse proto.InternalMessageInfo
func init() {
proto.RegisterType((*ApplyRequest)(nil), "containerd.services.diff.v1.ApplyRequest")
proto.RegisterType((*ApplyResponse)(nil), "containerd.services.diff.v1.ApplyResponse")
proto.RegisterType((*DiffRequest)(nil), "containerd.services.diff.v1.DiffRequest")
proto.RegisterMapType((map[string]string)(nil), "containerd.services.diff.v1.DiffRequest.LabelsEntry")
proto.RegisterType((*DiffResponse)(nil), "containerd.services.diff.v1.DiffResponse")
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/services/diff/v1/diff.proto", fileDescriptor_3b36a99e6faaa935)
}
var fileDescriptor_3b36a99e6faaa935 = []byte{
// 457 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x53, 0x4f, 0x6f, 0xd3, 0x30,
0x14, 0xaf, 0xfb, 0x0f, 0xf5, 0x75, 0x48, 0xc8, 0x9a, 0x44, 0x14, 0x20, 0xaa, 0x7a, 0xea, 0x40,
0x38, 0xac, 0xa0, 0x09, 0xb6, 0xcb, 0x40, 0x43, 0x5c, 0xc6, 0x25, 0xda, 0x01, 0x81, 0x04, 0x4a,
0x9b, 0x97, 0xce, 0x22, 0x8d, 0xbd, 0xd8, 0xad, 0x94, 0x1b, 0xdf, 0x85, 0x8f, 0xc2, 0x65, 0x47,
0x8e, 0x1c, 0x69, 0x3f, 0x09, 0xb2, 0x93, 0x40, 0x24, 0xa4, 0x12, 0x76, 0xca, 0xcb, 0xf3, 0xef,
0x9f, 0xfd, 0x6c, 0x38, 0x5d, 0x70, 0x7d, 0xb9, 0x9a, 0xb1, 0xb9, 0x58, 0xfa, 0x73, 0x91, 0xea,
0x90, 0xa7, 0x98, 0x45, 0xf5, 0x32, 0x94, 0xdc, 0x57, 0x98, 0xad, 0xf9, 0x1c, 0x95, 0x1f, 0xf1,
0x38, 0xf6, 0xd7, 0x87, 0xf6, 0xcb, 0x64, 0x26, 0xb4, 0xa0, 0xf7, 0xfe, 0x60, 0x59, 0x85, 0x63,
0x76, 0x7d, 0x7d, 0xe8, 0xee, 0x2f, 0xc4, 0x42, 0x58, 0x9c, 0x6f, 0xaa, 0x82, 0xe2, 0x1e, 0x35,
0x32, 0xd5, 0xb9, 0x44, 0xe5, 0x2f, 0xc5, 0x2a, 0xd5, 0x25, 0xef, 0xe4, 0x3f, 0x78, 0x11, 0xaa,
0x79, 0xc6, 0xa5, 0x16, 0x59, 0x41, 0x1e, 0x5f, 0xc1, 0xde, 0x4b, 0x29, 0x93, 0x3c, 0xc0, 0xab,
0x15, 0x2a, 0x4d, 0x9f, 0x40, 0xd7, 0xa4, 0x74, 0xc8, 0x88, 0x4c, 0x86, 0xd3, 0xfb, 0xac, 0xb6,
0x0d, 0xab, 0xc0, 0xce, 0x7e, 0x2b, 0x04, 0x16, 0x49, 0x7d, 0xe8, 0xdb, 0x34, 0xca, 0x69, 0x8f,
0x3a, 0x93, 0xe1, 0xf4, 0xee, 0xdf, 0x9c, 0xb7, 0x66, 0x3d, 0x28, 0x61, 0xe3, 0x37, 0x70, 0xbb,
0xb4, 0x54, 0x52, 0xa4, 0x0a, 0xe9, 0x11, 0xdc, 0x0a, 0xa5, 0x4c, 0x38, 0x46, 0x8d, 0x6c, 0x2b,
0xf0, 0xf8, 0x6b, 0x1b, 0x86, 0x67, 0x3c, 0x8e, 0xab, 0xec, 0x8f, 0xa0, 0x9b, 0x60, 0xac, 0x1d,
0xb2, 0x3b, 0x87, 0x05, 0xd1, 0xc7, 0xd0, 0xcb, 0xf8, 0xe2, 0x52, 0xff, 0x2b, 0x75, 0x81, 0xa2,
0x0f, 0x00, 0x96, 0x18, 0xf1, 0xf0, 0x93, 0x59, 0x73, 0x3a, 0x23, 0x32, 0x19, 0x04, 0x03, 0xdb,
0xb9, 0xc8, 0x25, 0xd2, 0x3b, 0xd0, 0xc9, 0x30, 0x76, 0xba, 0xb6, 0x6f, 0x4a, 0x7a, 0x0e, 0xfd,
0x24, 0x9c, 0x61, 0xa2, 0x9c, 0x9e, 0x35, 0x78, 0xc6, 0x76, 0xdc, 0x08, 0x56, 0xdb, 0x06, 0x3b,
0xb7, 0xb4, 0xd7, 0xa9, 0xce, 0xf2, 0xa0, 0xd4, 0x70, 0x5f, 0xc0, 0xb0, 0xd6, 0x36, 0x76, 0x9f,
0x31, 0xb7, 0xa7, 0x35, 0x08, 0x4c, 0x49, 0xf7, 0xa1, 0xb7, 0x0e, 0x93, 0x15, 0x3a, 0x6d, 0xdb,
0x2b, 0x7e, 0x8e, 0xdb, 0xcf, 0xc9, 0xf8, 0x14, 0xf6, 0x0a, 0xf5, 0xf2, 0xb4, 0xab, 0x09, 0x77,
0x9a, 0x4e, 0x78, 0xfa, 0x8d, 0x40, 0xd7, 0x48, 0xd0, 0x8f, 0xd0, 0xb3, 0x93, 0xa3, 0x07, 0x3b,
0x37, 0x53, 0xbf, 0x50, 0xee, 0xc3, 0x26, 0xd0, 0x32, 0xda, 0x87, 0xd2, 0x67, 0xd2, 0xf4, 0xac,
0xdc, 0x83, 0x06, 0xc8, 0x42, 0xfc, 0xd5, 0xc5, 0xf5, 0xc6, 0x6b, 0xfd, 0xd8, 0x78, 0xad, 0x2f,
0x5b, 0x8f, 0x5c, 0x6f, 0x3d, 0xf2, 0x7d, 0xeb, 0x91, 0x9f, 0x5b, 0x8f, 0xbc, 0x3f, 0xbe, 0xd1,
0x6b, 0x3f, 0x31, 0xdf, 0x77, 0xad, 0x59, 0xdf, 0x3e, 0xa4, 0xa7, 0xbf, 0x02, 0x00, 0x00, 0xff,
0xff, 0x61, 0xd1, 0x6e, 0x9e, 0x34, 0x04, 0x00, 0x00,
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
@@ -110,8 +256,9 @@ var _ grpc.ClientConn
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for Diff service
// DiffClient is the client API for Diff service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type DiffClient interface {
// Apply applies the content associated with the provided digests onto
// the provided mounts. Archive content will be extracted and
@@ -132,7 +279,7 @@ func NewDiffClient(cc *grpc.ClientConn) DiffClient {
func (c *diffClient) Apply(ctx context.Context, in *ApplyRequest, opts ...grpc.CallOption) (*ApplyResponse, error) {
out := new(ApplyResponse)
err := grpc.Invoke(ctx, "/containerd.services.diff.v1.Diff/Apply", in, out, c.cc, opts...)
err := c.cc.Invoke(ctx, "/containerd.services.diff.v1.Diff/Apply", in, out, opts...)
if err != nil {
return nil, err
}
@@ -141,15 +288,14 @@ func (c *diffClient) Apply(ctx context.Context, in *ApplyRequest, opts ...grpc.C
func (c *diffClient) Diff(ctx context.Context, in *DiffRequest, opts ...grpc.CallOption) (*DiffResponse, error) {
out := new(DiffResponse)
err := grpc.Invoke(ctx, "/containerd.services.diff.v1.Diff/Diff", in, out, c.cc, opts...)
err := c.cc.Invoke(ctx, "/containerd.services.diff.v1.Diff/Diff", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for Diff service
// DiffServer is the server API for Diff service.
type DiffServer interface {
// Apply applies the content associated with the provided digests onto
// the provided mounts. Archive content will be extracted and
@@ -254,6 +400,9 @@ func (m *ApplyRequest) MarshalTo(dAtA []byte) (int, error) {
i += n
}
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -282,6 +431,9 @@ func (m *ApplyResponse) MarshalTo(dAtA []byte) (int, error) {
}
i += n2
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -353,6 +505,9 @@ func (m *DiffRequest) MarshalTo(dAtA []byte) (int, error) {
i += copy(dAtA[i:], v)
}
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -381,6 +536,9 @@ func (m *DiffResponse) MarshalTo(dAtA []byte) (int, error) {
}
i += n3
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -394,6 +552,9 @@ func encodeVarintDiff(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *ApplyRequest) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Diff != nil {
@@ -406,20 +567,32 @@ func (m *ApplyRequest) Size() (n int) {
n += 1 + l + sovDiff(uint64(l))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *ApplyResponse) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Applied != nil {
l = m.Applied.Size()
n += 1 + l + sovDiff(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *DiffRequest) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Left) > 0 {
@@ -450,16 +623,25 @@ func (m *DiffRequest) Size() (n int) {
n += mapEntrySize + 1 + sovDiff(uint64(mapEntrySize))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *DiffResponse) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Diff != nil {
l = m.Diff.Size()
n += 1 + l + sovDiff(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -481,8 +663,9 @@ func (this *ApplyRequest) String() string {
return "nil"
}
s := strings.Join([]string{`&ApplyRequest{`,
`Diff:` + strings.Replace(fmt.Sprintf("%v", this.Diff), "Descriptor", "containerd_types1.Descriptor", 1) + `,`,
`Mounts:` + strings.Replace(fmt.Sprintf("%v", this.Mounts), "Mount", "containerd_types.Mount", 1) + `,`,
`Diff:` + strings.Replace(fmt.Sprintf("%v", this.Diff), "Descriptor", "types.Descriptor", 1) + `,`,
`Mounts:` + strings.Replace(fmt.Sprintf("%v", this.Mounts), "Mount", "types.Mount", 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -492,7 +675,8 @@ func (this *ApplyResponse) String() string {
return "nil"
}
s := strings.Join([]string{`&ApplyResponse{`,
`Applied:` + strings.Replace(fmt.Sprintf("%v", this.Applied), "Descriptor", "containerd_types1.Descriptor", 1) + `,`,
`Applied:` + strings.Replace(fmt.Sprintf("%v", this.Applied), "Descriptor", "types.Descriptor", 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -505,18 +689,19 @@ func (this *DiffRequest) String() string {
for k, _ := range this.Labels {
keysForLabels = append(keysForLabels, k)
}
sortkeys.Strings(keysForLabels)
github_com_gogo_protobuf_sortkeys.Strings(keysForLabels)
mapStringForLabels := "map[string]string{"
for _, k := range keysForLabels {
mapStringForLabels += fmt.Sprintf("%v: %v,", k, this.Labels[k])
}
mapStringForLabels += "}"
s := strings.Join([]string{`&DiffRequest{`,
`Left:` + strings.Replace(fmt.Sprintf("%v", this.Left), "Mount", "containerd_types.Mount", 1) + `,`,
`Right:` + strings.Replace(fmt.Sprintf("%v", this.Right), "Mount", "containerd_types.Mount", 1) + `,`,
`Left:` + strings.Replace(fmt.Sprintf("%v", this.Left), "Mount", "types.Mount", 1) + `,`,
`Right:` + strings.Replace(fmt.Sprintf("%v", this.Right), "Mount", "types.Mount", 1) + `,`,
`MediaType:` + fmt.Sprintf("%v", this.MediaType) + `,`,
`Ref:` + fmt.Sprintf("%v", this.Ref) + `,`,
`Labels:` + mapStringForLabels + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -526,7 +711,8 @@ func (this *DiffResponse) String() string {
return "nil"
}
s := strings.Join([]string{`&DiffResponse{`,
`Diff:` + strings.Replace(fmt.Sprintf("%v", this.Diff), "Descriptor", "containerd_types1.Descriptor", 1) + `,`,
`Diff:` + strings.Replace(fmt.Sprintf("%v", this.Diff), "Descriptor", "types.Descriptor", 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -554,7 +740,7 @@ func (m *ApplyRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -582,7 +768,7 @@ func (m *ApplyRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -591,11 +777,14 @@ func (m *ApplyRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDiff
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Diff == nil {
m.Diff = &containerd_types1.Descriptor{}
m.Diff = &types.Descriptor{}
}
if err := m.Diff.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
@@ -615,7 +804,7 @@ func (m *ApplyRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -624,10 +813,13 @@ func (m *ApplyRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDiff
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Mounts = append(m.Mounts, &containerd_types.Mount{})
m.Mounts = append(m.Mounts, &types.Mount{})
if err := m.Mounts[len(m.Mounts)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -641,9 +833,13 @@ func (m *ApplyRequest) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDiff
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDiff
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -668,7 +864,7 @@ func (m *ApplyResponse) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -696,7 +892,7 @@ func (m *ApplyResponse) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -705,11 +901,14 @@ func (m *ApplyResponse) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDiff
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Applied == nil {
m.Applied = &containerd_types1.Descriptor{}
m.Applied = &types.Descriptor{}
}
if err := m.Applied.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
@@ -724,9 +923,13 @@ func (m *ApplyResponse) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDiff
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDiff
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -751,7 +954,7 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -779,7 +982,7 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -788,10 +991,13 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDiff
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Left = append(m.Left, &containerd_types.Mount{})
m.Left = append(m.Left, &types.Mount{})
if err := m.Left[len(m.Left)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -810,7 +1016,7 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -819,10 +1025,13 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDiff
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Right = append(m.Right, &containerd_types.Mount{})
m.Right = append(m.Right, &types.Mount{})
if err := m.Right[len(m.Right)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -841,7 +1050,7 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -851,6 +1060,9 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDiff
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -870,7 +1082,7 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -880,6 +1092,9 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDiff
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -899,7 +1114,7 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -908,6 +1123,9 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDiff
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -928,7 +1146,7 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -945,7 +1163,7 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLenmapkey |= (uint64(b) & 0x7F) << shift
stringLenmapkey |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -955,6 +1173,9 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postStringIndexmapkey := iNdEx + intStringLenmapkey
if postStringIndexmapkey < 0 {
return ErrInvalidLengthDiff
}
if postStringIndexmapkey > l {
return io.ErrUnexpectedEOF
}
@@ -971,7 +1192,7 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLenmapvalue |= (uint64(b) & 0x7F) << shift
stringLenmapvalue |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -981,6 +1202,9 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postStringIndexmapvalue := iNdEx + intStringLenmapvalue
if postStringIndexmapvalue < 0 {
return ErrInvalidLengthDiff
}
if postStringIndexmapvalue > l {
return io.ErrUnexpectedEOF
}
@@ -1012,9 +1236,13 @@ func (m *DiffRequest) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDiff
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDiff
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -1039,7 +1267,7 @@ func (m *DiffResponse) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1067,7 +1295,7 @@ func (m *DiffResponse) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1076,11 +1304,14 @@ func (m *DiffResponse) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDiff
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDiff
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Diff == nil {
m.Diff = &containerd_types1.Descriptor{}
m.Diff = &types.Descriptor{}
}
if err := m.Diff.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
@@ -1095,9 +1326,13 @@ func (m *DiffResponse) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDiff
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDiff
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -1161,10 +1396,13 @@ func skipDiff(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthDiff
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthDiff
}
return iNdEx, nil
case 3:
for {
@@ -1193,6 +1431,9 @@ func skipDiff(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthDiff
}
}
return iNdEx, nil
case 4:
@@ -1211,40 +1452,3 @@ var (
ErrInvalidLengthDiff = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowDiff = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/services/diff/v1/diff.proto", fileDescriptorDiff)
}
var fileDescriptorDiff = []byte{
// 457 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x53, 0x4f, 0x6f, 0xd3, 0x30,
0x14, 0xaf, 0xfb, 0x0f, 0xf5, 0x75, 0x48, 0xc8, 0x9a, 0x44, 0x14, 0x20, 0xaa, 0x7a, 0xea, 0x40,
0x38, 0xac, 0xa0, 0x09, 0xb6, 0xcb, 0x40, 0x43, 0x5c, 0xc6, 0x25, 0xda, 0x01, 0x81, 0x04, 0x4a,
0x9b, 0x97, 0xce, 0x22, 0x8d, 0xbd, 0xd8, 0xad, 0x94, 0x1b, 0xdf, 0x85, 0x8f, 0xc2, 0x65, 0x47,
0x8e, 0x1c, 0x69, 0x3f, 0x09, 0xb2, 0x93, 0x40, 0x24, 0xa4, 0x12, 0x76, 0xca, 0xcb, 0xf3, 0xef,
0x9f, 0xfd, 0x6c, 0x38, 0x5d, 0x70, 0x7d, 0xb9, 0x9a, 0xb1, 0xb9, 0x58, 0xfa, 0x73, 0x91, 0xea,
0x90, 0xa7, 0x98, 0x45, 0xf5, 0x32, 0x94, 0xdc, 0x57, 0x98, 0xad, 0xf9, 0x1c, 0x95, 0x1f, 0xf1,
0x38, 0xf6, 0xd7, 0x87, 0xf6, 0xcb, 0x64, 0x26, 0xb4, 0xa0, 0xf7, 0xfe, 0x60, 0x59, 0x85, 0x63,
0x76, 0x7d, 0x7d, 0xe8, 0xee, 0x2f, 0xc4, 0x42, 0x58, 0x9c, 0x6f, 0xaa, 0x82, 0xe2, 0x1e, 0x35,
0x32, 0xd5, 0xb9, 0x44, 0xe5, 0x2f, 0xc5, 0x2a, 0xd5, 0x25, 0xef, 0xe4, 0x3f, 0x78, 0x11, 0xaa,
0x79, 0xc6, 0xa5, 0x16, 0x59, 0x41, 0x1e, 0x5f, 0xc1, 0xde, 0x4b, 0x29, 0x93, 0x3c, 0xc0, 0xab,
0x15, 0x2a, 0x4d, 0x9f, 0x40, 0xd7, 0xa4, 0x74, 0xc8, 0x88, 0x4c, 0x86, 0xd3, 0xfb, 0xac, 0xb6,
0x0d, 0xab, 0xc0, 0xce, 0x7e, 0x2b, 0x04, 0x16, 0x49, 0x7d, 0xe8, 0xdb, 0x34, 0xca, 0x69, 0x8f,
0x3a, 0x93, 0xe1, 0xf4, 0xee, 0xdf, 0x9c, 0xb7, 0x66, 0x3d, 0x28, 0x61, 0xe3, 0x37, 0x70, 0xbb,
0xb4, 0x54, 0x52, 0xa4, 0x0a, 0xe9, 0x11, 0xdc, 0x0a, 0xa5, 0x4c, 0x38, 0x46, 0x8d, 0x6c, 0x2b,
0xf0, 0xf8, 0x6b, 0x1b, 0x86, 0x67, 0x3c, 0x8e, 0xab, 0xec, 0x8f, 0xa0, 0x9b, 0x60, 0xac, 0x1d,
0xb2, 0x3b, 0x87, 0x05, 0xd1, 0xc7, 0xd0, 0xcb, 0xf8, 0xe2, 0x52, 0xff, 0x2b, 0x75, 0x81, 0xa2,
0x0f, 0x00, 0x96, 0x18, 0xf1, 0xf0, 0x93, 0x59, 0x73, 0x3a, 0x23, 0x32, 0x19, 0x04, 0x03, 0xdb,
0xb9, 0xc8, 0x25, 0xd2, 0x3b, 0xd0, 0xc9, 0x30, 0x76, 0xba, 0xb6, 0x6f, 0x4a, 0x7a, 0x0e, 0xfd,
0x24, 0x9c, 0x61, 0xa2, 0x9c, 0x9e, 0x35, 0x78, 0xc6, 0x76, 0xdc, 0x08, 0x56, 0xdb, 0x06, 0x3b,
0xb7, 0xb4, 0xd7, 0xa9, 0xce, 0xf2, 0xa0, 0xd4, 0x70, 0x5f, 0xc0, 0xb0, 0xd6, 0x36, 0x76, 0x9f,
0x31, 0xb7, 0xa7, 0x35, 0x08, 0x4c, 0x49, 0xf7, 0xa1, 0xb7, 0x0e, 0x93, 0x15, 0x3a, 0x6d, 0xdb,
0x2b, 0x7e, 0x8e, 0xdb, 0xcf, 0xc9, 0xf8, 0x14, 0xf6, 0x0a, 0xf5, 0xf2, 0xb4, 0xab, 0x09, 0x77,
0x9a, 0x4e, 0x78, 0xfa, 0x8d, 0x40, 0xd7, 0x48, 0xd0, 0x8f, 0xd0, 0xb3, 0x93, 0xa3, 0x07, 0x3b,
0x37, 0x53, 0xbf, 0x50, 0xee, 0xc3, 0x26, 0xd0, 0x32, 0xda, 0x87, 0xd2, 0x67, 0xd2, 0xf4, 0xac,
0xdc, 0x83, 0x06, 0xc8, 0x42, 0xfc, 0xd5, 0xc5, 0xf5, 0xc6, 0x6b, 0xfd, 0xd8, 0x78, 0xad, 0x2f,
0x5b, 0x8f, 0x5c, 0x6f, 0x3d, 0xf2, 0x7d, 0xeb, 0x91, 0x9f, 0x5b, 0x8f, 0xbc, 0x3f, 0xbe, 0xd1,
0x6b, 0x3f, 0x31, 0xdf, 0x77, 0xad, 0x59, 0xdf, 0x3e, 0xa4, 0xa7, 0xbf, 0x02, 0x00, 0x00, 0xff,
0xff, 0x61, 0xd1, 0x6e, 0x9e, 0x34, 0x04, 0x00, 0x00,
}

View File

@@ -1,43 +1,22 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: github.com/containerd/containerd/api/services/events/v1/events.proto
/*
Package events is a generated protocol buffer package.
It is generated from these files:
github.com/containerd/containerd/api/services/events/v1/events.proto
It has these top-level messages:
PublishRequest
ForwardRequest
SubscribeRequest
Envelope
*/
package events
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
// skipping weak import containerd_plugin "github.com/containerd/containerd/protobuf/plugin"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import google_protobuf1 "github.com/gogo/protobuf/types"
import google_protobuf2 "github.com/gogo/protobuf/types"
import _ "github.com/gogo/protobuf/types"
import time "time"
import typeurl "github.com/containerd/typeurl"
import context "golang.org/x/net/context"
import grpc "google.golang.org/grpc"
import types "github.com/gogo/protobuf/types"
import strings "strings"
import reflect "reflect"
import io "io"
import (
context "context"
fmt "fmt"
github_com_containerd_typeurl "github.com/containerd/typeurl"
proto "github.com/gogo/protobuf/proto"
github_com_gogo_protobuf_types "github.com/gogo/protobuf/types"
types "github.com/gogo/protobuf/types"
grpc "google.golang.org/grpc"
io "io"
math "math"
reflect "reflect"
strings "strings"
time "time"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -52,40 +31,164 @@ var _ = time.Kitchen
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type PublishRequest struct {
Topic string `protobuf:"bytes,1,opt,name=topic,proto3" json:"topic,omitempty"`
Event *google_protobuf1.Any `protobuf:"bytes,2,opt,name=event" json:"event,omitempty"`
Topic string `protobuf:"bytes,1,opt,name=topic,proto3" json:"topic,omitempty"`
Event *types.Any `protobuf:"bytes,2,opt,name=event,proto3" json:"event,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *PublishRequest) Reset() { *m = PublishRequest{} }
func (*PublishRequest) ProtoMessage() {}
func (*PublishRequest) Descriptor() ([]byte, []int) { return fileDescriptorEvents, []int{0} }
func (m *PublishRequest) Reset() { *m = PublishRequest{} }
func (*PublishRequest) ProtoMessage() {}
func (*PublishRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_43fcd20dc1642376, []int{0}
}
func (m *PublishRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *PublishRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_PublishRequest.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *PublishRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_PublishRequest.Merge(m, src)
}
func (m *PublishRequest) XXX_Size() int {
return m.Size()
}
func (m *PublishRequest) XXX_DiscardUnknown() {
xxx_messageInfo_PublishRequest.DiscardUnknown(m)
}
var xxx_messageInfo_PublishRequest proto.InternalMessageInfo
type ForwardRequest struct {
Envelope *Envelope `protobuf:"bytes,1,opt,name=envelope" json:"envelope,omitempty"`
Envelope *Envelope `protobuf:"bytes,1,opt,name=envelope,proto3" json:"envelope,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ForwardRequest) Reset() { *m = ForwardRequest{} }
func (*ForwardRequest) ProtoMessage() {}
func (*ForwardRequest) Descriptor() ([]byte, []int) { return fileDescriptorEvents, []int{1} }
func (m *ForwardRequest) Reset() { *m = ForwardRequest{} }
func (*ForwardRequest) ProtoMessage() {}
func (*ForwardRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_43fcd20dc1642376, []int{1}
}
func (m *ForwardRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ForwardRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ForwardRequest.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ForwardRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_ForwardRequest.Merge(m, src)
}
func (m *ForwardRequest) XXX_Size() int {
return m.Size()
}
func (m *ForwardRequest) XXX_DiscardUnknown() {
xxx_messageInfo_ForwardRequest.DiscardUnknown(m)
}
var xxx_messageInfo_ForwardRequest proto.InternalMessageInfo
type SubscribeRequest struct {
Filters []string `protobuf:"bytes,1,rep,name=filters" json:"filters,omitempty"`
Filters []string `protobuf:"bytes,1,rep,name=filters,proto3" json:"filters,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *SubscribeRequest) Reset() { *m = SubscribeRequest{} }
func (*SubscribeRequest) ProtoMessage() {}
func (*SubscribeRequest) Descriptor() ([]byte, []int) { return fileDescriptorEvents, []int{2} }
func (m *SubscribeRequest) Reset() { *m = SubscribeRequest{} }
func (*SubscribeRequest) ProtoMessage() {}
func (*SubscribeRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_43fcd20dc1642376, []int{2}
}
func (m *SubscribeRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *SubscribeRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_SubscribeRequest.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *SubscribeRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_SubscribeRequest.Merge(m, src)
}
func (m *SubscribeRequest) XXX_Size() int {
return m.Size()
}
func (m *SubscribeRequest) XXX_DiscardUnknown() {
xxx_messageInfo_SubscribeRequest.DiscardUnknown(m)
}
var xxx_messageInfo_SubscribeRequest proto.InternalMessageInfo
type Envelope struct {
Timestamp time.Time `protobuf:"bytes,1,opt,name=timestamp,stdtime" json:"timestamp"`
Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
Topic string `protobuf:"bytes,3,opt,name=topic,proto3" json:"topic,omitempty"`
Event *google_protobuf1.Any `protobuf:"bytes,4,opt,name=event" json:"event,omitempty"`
Timestamp time.Time `protobuf:"bytes,1,opt,name=timestamp,proto3,stdtime" json:"timestamp"`
Namespace string `protobuf:"bytes,2,opt,name=namespace,proto3" json:"namespace,omitempty"`
Topic string `protobuf:"bytes,3,opt,name=topic,proto3" json:"topic,omitempty"`
Event *types.Any `protobuf:"bytes,4,opt,name=event,proto3" json:"event,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Envelope) Reset() { *m = Envelope{} }
func (*Envelope) ProtoMessage() {}
func (*Envelope) Descriptor() ([]byte, []int) { return fileDescriptorEvents, []int{3} }
func (m *Envelope) Reset() { *m = Envelope{} }
func (*Envelope) ProtoMessage() {}
func (*Envelope) Descriptor() ([]byte, []int) {
return fileDescriptor_43fcd20dc1642376, []int{3}
}
func (m *Envelope) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Envelope) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Envelope.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Envelope) XXX_Merge(src proto.Message) {
xxx_messageInfo_Envelope.Merge(m, src)
}
func (m *Envelope) XXX_Size() int {
return m.Size()
}
func (m *Envelope) XXX_DiscardUnknown() {
xxx_messageInfo_Envelope.DiscardUnknown(m)
}
var xxx_messageInfo_Envelope proto.InternalMessageInfo
func init() {
proto.RegisterType((*PublishRequest)(nil), "containerd.services.events.v1.PublishRequest")
@@ -94,6 +197,44 @@ func init() {
proto.RegisterType((*Envelope)(nil), "containerd.services.events.v1.Envelope")
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/services/events/v1/events.proto", fileDescriptor_43fcd20dc1642376)
}
var fileDescriptor_43fcd20dc1642376 = []byte{
// 466 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x93, 0xcd, 0x8e, 0xd3, 0x30,
0x14, 0x85, 0xeb, 0xf9, 0x6d, 0x3c, 0xd2, 0x08, 0x45, 0x15, 0x2a, 0x01, 0xd2, 0xaa, 0x1b, 0x2a,
0x04, 0x0e, 0x53, 0x76, 0x20, 0x21, 0x28, 0x94, 0xf5, 0x28, 0x80, 0x54, 0xb1, 0x4b, 0xd2, 0xdb,
0xd4, 0x52, 0x62, 0x9b, 0xd8, 0x09, 0x9a, 0xdd, 0x3c, 0x02, 0x1b, 0xde, 0x84, 0x0d, 0x6f, 0xd0,
0x25, 0x4b, 0x56, 0xc0, 0xf4, 0x49, 0x50, 0x13, 0xbb, 0x61, 0x3a, 0x40, 0x10, 0xbb, 0x6b, 0xdf,
0xe3, 0xcf, 0xb9, 0xe7, 0x38, 0xf8, 0x45, 0x4c, 0xd5, 0x22, 0x0f, 0x49, 0xc4, 0x53, 0x2f, 0xe2,
0x4c, 0x05, 0x94, 0x41, 0x36, 0xfb, 0xb5, 0x0c, 0x04, 0xf5, 0x24, 0x64, 0x05, 0x8d, 0x40, 0x7a,
0x50, 0x00, 0x53, 0xd2, 0x2b, 0x4e, 0x74, 0x45, 0x44, 0xc6, 0x15, 0xb7, 0x6f, 0xd7, 0x7a, 0x62,
0xb4, 0x44, 0x2b, 0x8a, 0x13, 0xe7, 0x69, 0xe3, 0x25, 0x25, 0x26, 0xcc, 0xe7, 0x9e, 0x48, 0xf2,
0x98, 0x32, 0x6f, 0x4e, 0x21, 0x99, 0x89, 0x40, 0x2d, 0xaa, 0x0b, 0x9c, 0x4e, 0xcc, 0x63, 0x5e,
0x96, 0xde, 0xba, 0xd2, 0xbb, 0x37, 0x62, 0xce, 0xe3, 0x04, 0xea, 0xd3, 0x01, 0x3b, 0xd3, 0xad,
0x9b, 0xdb, 0x2d, 0x48, 0x85, 0x32, 0xcd, 0xde, 0x76, 0x53, 0xd1, 0x14, 0xa4, 0x0a, 0x52, 0x51,
0x09, 0x06, 0x3e, 0x3e, 0x3e, 0xcd, 0xc3, 0x84, 0xca, 0x85, 0x0f, 0xef, 0x72, 0x90, 0xca, 0xee,
0xe0, 0x7d, 0xc5, 0x05, 0x8d, 0xba, 0xa8, 0x8f, 0x86, 0x96, 0x5f, 0x2d, 0xec, 0xbb, 0x78, 0xbf,
0x9c, 0xb2, 0xbb, 0xd3, 0x47, 0xc3, 0xa3, 0x51, 0x87, 0x54, 0x60, 0x62, 0xc0, 0xe4, 0x19, 0x3b,
0xf3, 0x2b, 0xc9, 0xe0, 0x0d, 0x3e, 0x7e, 0xc9, 0xb3, 0xf7, 0x41, 0x36, 0x33, 0xcc, 0xe7, 0xb8,
0x0d, 0xac, 0x80, 0x84, 0x0b, 0x28, 0xb1, 0x47, 0xa3, 0x3b, 0xe4, 0xaf, 0x46, 0x92, 0x89, 0x96,
0xfb, 0x9b, 0x83, 0x83, 0x7b, 0xf8, 0xda, 0xab, 0x3c, 0x94, 0x51, 0x46, 0x43, 0x30, 0xe0, 0x2e,
0x3e, 0x9c, 0xd3, 0x44, 0x41, 0x26, 0xbb, 0xa8, 0xbf, 0x3b, 0xb4, 0x7c, 0xb3, 0x1c, 0x7c, 0x42,
0xb8, 0x6d, 0x20, 0xf6, 0x18, 0x5b, 0x9b, 0xc1, 0xf5, 0x07, 0x38, 0x57, 0x26, 0x78, 0x6d, 0x14,
0xe3, 0xf6, 0xf2, 0x5b, 0xaf, 0xf5, 0xe1, 0x7b, 0x0f, 0xf9, 0xf5, 0x31, 0xfb, 0x16, 0xb6, 0x58,
0x90, 0x82, 0x14, 0x41, 0x04, 0xa5, 0x0b, 0x96, 0x5f, 0x6f, 0xd4, 0xae, 0xed, 0xfe, 0xd6, 0xb5,
0xbd, 0x46, 0xd7, 0x1e, 0xed, 0x9d, 0x7f, 0xee, 0xa1, 0xd1, 0xc7, 0x1d, 0x7c, 0x30, 0x29, 0x5d,
0xb0, 0x4f, 0xf1, 0xa1, 0x8e, 0xc6, 0xbe, 0xdf, 0xe0, 0xd6, 0xe5, 0x08, 0x9d, 0xeb, 0x57, 0xee,
0x99, 0xac, 0xdf, 0xc4, 0x9a, 0xa8, 0x83, 0x69, 0x24, 0x5e, 0x0e, 0xf0, 0x8f, 0xc4, 0x18, 0x5b,
0x9b, 0x4c, 0x6c, 0xaf, 0x81, 0xb9, 0x9d, 0x9e, 0xf3, 0xaf, 0x8f, 0xe0, 0x01, 0x1a, 0x4f, 0x97,
0x17, 0x6e, 0xeb, 0xeb, 0x85, 0xdb, 0x3a, 0x5f, 0xb9, 0x68, 0xb9, 0x72, 0xd1, 0x97, 0x95, 0x8b,
0x7e, 0xac, 0x5c, 0xf4, 0xf6, 0xc9, 0x7f, 0xfe, 0xd7, 0x8f, 0xab, 0x6a, 0xda, 0x9a, 0xa2, 0xf0,
0xa0, 0x1c, 0xeb, 0xe1, 0xcf, 0x00, 0x00, 0x00, 0xff, 0xff, 0xe6, 0xbf, 0x19, 0xa6, 0x24, 0x04,
0x00, 0x00,
}
// Field returns the value for the given fieldpath as a string, if defined.
// If the value is not defined, the second value will be false.
func (m *Envelope) Field(fieldpath []string) (string, bool) {
@@ -108,7 +249,7 @@ func (m *Envelope) Field(fieldpath []string) (string, bool) {
case "topic":
return string(m.Topic), len(m.Topic) > 0
case "event":
decoded, err := typeurl.UnmarshalAny(m.Event)
decoded, err := github_com_containerd_typeurl.UnmarshalAny(m.Event)
if err != nil {
return "", false
}
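As a hedged aside (illustrative only, not part of the vendored diff), the Field adapter documented in this hunk is how callers such as containerd's event filtering resolve a fieldpath against an envelope; a minimal consumer might look like this:
package fieldexample
import events "github.com/containerd/containerd/api/services/events/v1"
// topicOf resolves the "topic" fieldpath through the generated Field adapter.
// The second result reports whether the field is defined (non-empty here).
func topicOf(env *events.Envelope) (string, bool) {
	return env.Field([]string{"topic"})
}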
@@ -130,20 +271,21 @@ var _ grpc.ClientConn
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for Events service
// EventsClient is the client API for Events service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type EventsClient interface {
// Publish an event to a topic.
//
// The event will be packed into a timestamp envelope with the namespace
// introspected from the context. The envelope will then be dispatched.
Publish(ctx context.Context, in *PublishRequest, opts ...grpc.CallOption) (*google_protobuf2.Empty, error)
Publish(ctx context.Context, in *PublishRequest, opts ...grpc.CallOption) (*types.Empty, error)
// Forward sends an event that has already been packaged into an envelope
// with a timestamp and namespace.
//
// This is useful if earlier timestamping is required or when forwarding on
// behalf of another component, namespace or publisher.
Forward(ctx context.Context, in *ForwardRequest, opts ...grpc.CallOption) (*google_protobuf2.Empty, error)
Forward(ctx context.Context, in *ForwardRequest, opts ...grpc.CallOption) (*types.Empty, error)
// Subscribe to a stream of events, possibly returning only that match any
// of the provided filters.
//
@@ -162,18 +304,18 @@ func NewEventsClient(cc *grpc.ClientConn) EventsClient {
return &eventsClient{cc}
}
func (c *eventsClient) Publish(ctx context.Context, in *PublishRequest, opts ...grpc.CallOption) (*google_protobuf2.Empty, error) {
out := new(google_protobuf2.Empty)
err := grpc.Invoke(ctx, "/containerd.services.events.v1.Events/Publish", in, out, c.cc, opts...)
func (c *eventsClient) Publish(ctx context.Context, in *PublishRequest, opts ...grpc.CallOption) (*types.Empty, error) {
out := new(types.Empty)
err := c.cc.Invoke(ctx, "/containerd.services.events.v1.Events/Publish", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
func (c *eventsClient) Forward(ctx context.Context, in *ForwardRequest, opts ...grpc.CallOption) (*google_protobuf2.Empty, error) {
out := new(google_protobuf2.Empty)
err := grpc.Invoke(ctx, "/containerd.services.events.v1.Events/Forward", in, out, c.cc, opts...)
func (c *eventsClient) Forward(ctx context.Context, in *ForwardRequest, opts ...grpc.CallOption) (*types.Empty, error) {
out := new(types.Empty)
err := c.cc.Invoke(ctx, "/containerd.services.events.v1.Events/Forward", in, out, opts...)
if err != nil {
return nil, err
}
@@ -181,7 +323,7 @@ func (c *eventsClient) Forward(ctx context.Context, in *ForwardRequest, opts ...
}
func (c *eventsClient) Subscribe(ctx context.Context, in *SubscribeRequest, opts ...grpc.CallOption) (Events_SubscribeClient, error) {
stream, err := grpc.NewClientStream(ctx, &_Events_serviceDesc.Streams[0], c.cc, "/containerd.services.events.v1.Events/Subscribe", opts...)
stream, err := c.cc.NewStream(ctx, &_Events_serviceDesc.Streams[0], "/containerd.services.events.v1.Events/Subscribe", opts...)
if err != nil {
return nil, err
}
@@ -212,20 +354,19 @@ func (x *eventsSubscribeClient) Recv() (*Envelope, error) {
return m, nil
}
// Server API for Events service
// EventsServer is the server API for Events service.
type EventsServer interface {
// Publish an event to a topic.
//
// The event will be packed into a timestamp envelope with the namespace
// introspected from the context. The envelope will then be dispatched.
Publish(context.Context, *PublishRequest) (*google_protobuf2.Empty, error)
Publish(context.Context, *PublishRequest) (*types.Empty, error)
// Forward sends an event that has already been packaged into an envelope
// with a timestamp and namespace.
//
// This is useful if earlier timestamping is required or when forwarding on
// behalf of another component, namespace or publisher.
Forward(context.Context, *ForwardRequest) (*google_protobuf2.Empty, error)
Forward(context.Context, *ForwardRequest) (*types.Empty, error)
// Subscribe to a stream of events, possibly returning only that match any
// of the provided filters.
//
@@ -351,6 +492,9 @@ func (m *PublishRequest) MarshalTo(dAtA []byte) (int, error) {
}
i += n1
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -379,6 +523,9 @@ func (m *ForwardRequest) MarshalTo(dAtA []byte) (int, error) {
}
i += n2
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -412,6 +559,9 @@ func (m *SubscribeRequest) MarshalTo(dAtA []byte) (int, error) {
i += copy(dAtA[i:], s)
}
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -432,8 +582,8 @@ func (m *Envelope) MarshalTo(dAtA []byte) (int, error) {
_ = l
dAtA[i] = 0xa
i++
i = encodeVarintEvents(dAtA, i, uint64(types.SizeOfStdTime(m.Timestamp)))
n3, err := types.StdTimeMarshalTo(m.Timestamp, dAtA[i:])
i = encodeVarintEvents(dAtA, i, uint64(github_com_gogo_protobuf_types.SizeOfStdTime(m.Timestamp)))
n3, err := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Timestamp, dAtA[i:])
if err != nil {
return 0, err
}
@@ -460,6 +610,9 @@ func (m *Envelope) MarshalTo(dAtA []byte) (int, error) {
}
i += n4
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -473,6 +626,9 @@ func encodeVarintEvents(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *PublishRequest) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Topic)
@@ -483,20 +639,32 @@ func (m *PublishRequest) Size() (n int) {
l = m.Event.Size()
n += 1 + l + sovEvents(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *ForwardRequest) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Envelope != nil {
l = m.Envelope.Size()
n += 1 + l + sovEvents(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *SubscribeRequest) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Filters) > 0 {
@@ -505,13 +673,19 @@ func (m *SubscribeRequest) Size() (n int) {
n += 1 + l + sovEvents(uint64(l))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *Envelope) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = types.SizeOfStdTime(m.Timestamp)
l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Timestamp)
n += 1 + l + sovEvents(uint64(l))
l = len(m.Namespace)
if l > 0 {
@@ -525,6 +699,9 @@ func (m *Envelope) Size() (n int) {
l = m.Event.Size()
n += 1 + l + sovEvents(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -547,7 +724,8 @@ func (this *PublishRequest) String() string {
}
s := strings.Join([]string{`&PublishRequest{`,
`Topic:` + fmt.Sprintf("%v", this.Topic) + `,`,
`Event:` + strings.Replace(fmt.Sprintf("%v", this.Event), "Any", "google_protobuf1.Any", 1) + `,`,
`Event:` + strings.Replace(fmt.Sprintf("%v", this.Event), "Any", "types.Any", 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -558,6 +736,7 @@ func (this *ForwardRequest) String() string {
}
s := strings.Join([]string{`&ForwardRequest{`,
`Envelope:` + strings.Replace(fmt.Sprintf("%v", this.Envelope), "Envelope", "Envelope", 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -568,6 +747,7 @@ func (this *SubscribeRequest) String() string {
}
s := strings.Join([]string{`&SubscribeRequest{`,
`Filters:` + fmt.Sprintf("%v", this.Filters) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -577,10 +757,11 @@ func (this *Envelope) String() string {
return "nil"
}
s := strings.Join([]string{`&Envelope{`,
`Timestamp:` + strings.Replace(strings.Replace(this.Timestamp.String(), "Timestamp", "google_protobuf3.Timestamp", 1), `&`, ``, 1) + `,`,
`Timestamp:` + strings.Replace(strings.Replace(this.Timestamp.String(), "Timestamp", "types.Timestamp", 1), `&`, ``, 1) + `,`,
`Namespace:` + fmt.Sprintf("%v", this.Namespace) + `,`,
`Topic:` + fmt.Sprintf("%v", this.Topic) + `,`,
`Event:` + strings.Replace(fmt.Sprintf("%v", this.Event), "Any", "google_protobuf1.Any", 1) + `,`,
`Event:` + strings.Replace(fmt.Sprintf("%v", this.Event), "Any", "types.Any", 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -608,7 +789,7 @@ func (m *PublishRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
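The hunks in these Unmarshal methods repeatedly rewrite (uint64(b) & 0x7F) << shift as the equivalent uint64(b&0x7F) << shift and add guards against negative offsets. A standalone sketch of the base-128 varint loop they implement, illustrative only and not part of the vendored file:
package varintexample
import (
	"errors"
	"io"
)
// decodeUvarint mirrors the loop in the generated Unmarshal methods: each byte
// contributes its low 7 bits and the high bit marks continuation.
func decodeUvarint(data []byte) (value uint64, n int, err error) {
	var shift uint
	for i := 0; i < len(data); i++ {
		if shift >= 64 {
			return 0, 0, errors.New("varint overflows a uint64")
		}
		b := data[i]
		value |= uint64(b&0x7F) << shift
		if b < 0x80 { // high bit clear: final byte of this varint
			return value, i + 1, nil
		}
		shift += 7
	}
	return 0, 0, io.ErrUnexpectedEOF
}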
@@ -636,7 +817,7 @@ func (m *PublishRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -646,6 +827,9 @@ func (m *PublishRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthEvents
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthEvents
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -665,7 +849,7 @@ func (m *PublishRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -674,11 +858,14 @@ func (m *PublishRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthEvents
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthEvents
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Event == nil {
m.Event = &google_protobuf1.Any{}
m.Event = &types.Any{}
}
if err := m.Event.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
@@ -693,9 +880,13 @@ func (m *PublishRequest) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthEvents
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthEvents
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -720,7 +911,7 @@ func (m *ForwardRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -748,7 +939,7 @@ func (m *ForwardRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -757,6 +948,9 @@ func (m *ForwardRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthEvents
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthEvents
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -776,9 +970,13 @@ func (m *ForwardRequest) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthEvents
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthEvents
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -803,7 +1001,7 @@ func (m *SubscribeRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -831,7 +1029,7 @@ func (m *SubscribeRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -841,6 +1039,9 @@ func (m *SubscribeRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthEvents
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthEvents
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -855,9 +1056,13 @@ func (m *SubscribeRequest) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthEvents
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthEvents
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -882,7 +1087,7 @@ func (m *Envelope) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -910,7 +1115,7 @@ func (m *Envelope) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -919,10 +1124,13 @@ func (m *Envelope) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthEvents
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthEvents
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := types.StdTimeUnmarshal(&m.Timestamp, dAtA[iNdEx:postIndex]); err != nil {
if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Timestamp, dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -940,7 +1148,7 @@ func (m *Envelope) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -950,6 +1158,9 @@ func (m *Envelope) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthEvents
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthEvents
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -969,7 +1180,7 @@ func (m *Envelope) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -979,6 +1190,9 @@ func (m *Envelope) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthEvents
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthEvents
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -998,7 +1212,7 @@ func (m *Envelope) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1007,11 +1221,14 @@ func (m *Envelope) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthEvents
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthEvents
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Event == nil {
m.Event = &google_protobuf1.Any{}
m.Event = &types.Any{}
}
if err := m.Event.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
@@ -1026,9 +1243,13 @@ func (m *Envelope) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthEvents
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthEvents
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -1092,10 +1313,13 @@ func skipEvents(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthEvents
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthEvents
}
return iNdEx, nil
case 3:
for {
@@ -1124,6 +1348,9 @@ func skipEvents(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthEvents
}
}
return iNdEx, nil
case 4:
@@ -1142,41 +1369,3 @@ var (
ErrInvalidLengthEvents = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowEvents = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/services/events/v1/events.proto", fileDescriptorEvents)
}
var fileDescriptorEvents = []byte{
// 466 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x93, 0xcd, 0x8e, 0xd3, 0x30,
0x14, 0x85, 0xeb, 0xf9, 0x6d, 0x3c, 0xd2, 0x08, 0x45, 0x15, 0x2a, 0x01, 0xd2, 0xaa, 0x1b, 0x2a,
0x04, 0x0e, 0x53, 0x76, 0x20, 0x21, 0x28, 0x94, 0xf5, 0x28, 0x80, 0x54, 0xb1, 0x4b, 0xd2, 0xdb,
0xd4, 0x52, 0x62, 0x9b, 0xd8, 0x09, 0x9a, 0xdd, 0x3c, 0x02, 0x1b, 0xde, 0x84, 0x0d, 0x6f, 0xd0,
0x25, 0x4b, 0x56, 0xc0, 0xf4, 0x49, 0x50, 0x13, 0xbb, 0x61, 0x3a, 0x40, 0x10, 0xbb, 0x6b, 0xdf,
0xe3, 0xcf, 0xb9, 0xe7, 0x38, 0xf8, 0x45, 0x4c, 0xd5, 0x22, 0x0f, 0x49, 0xc4, 0x53, 0x2f, 0xe2,
0x4c, 0x05, 0x94, 0x41, 0x36, 0xfb, 0xb5, 0x0c, 0x04, 0xf5, 0x24, 0x64, 0x05, 0x8d, 0x40, 0x7a,
0x50, 0x00, 0x53, 0xd2, 0x2b, 0x4e, 0x74, 0x45, 0x44, 0xc6, 0x15, 0xb7, 0x6f, 0xd7, 0x7a, 0x62,
0xb4, 0x44, 0x2b, 0x8a, 0x13, 0xe7, 0x69, 0xe3, 0x25, 0x25, 0x26, 0xcc, 0xe7, 0x9e, 0x48, 0xf2,
0x98, 0x32, 0x6f, 0x4e, 0x21, 0x99, 0x89, 0x40, 0x2d, 0xaa, 0x0b, 0x9c, 0x4e, 0xcc, 0x63, 0x5e,
0x96, 0xde, 0xba, 0xd2, 0xbb, 0x37, 0x62, 0xce, 0xe3, 0x04, 0xea, 0xd3, 0x01, 0x3b, 0xd3, 0xad,
0x9b, 0xdb, 0x2d, 0x48, 0x85, 0x32, 0xcd, 0xde, 0x76, 0x53, 0xd1, 0x14, 0xa4, 0x0a, 0x52, 0x51,
0x09, 0x06, 0x3e, 0x3e, 0x3e, 0xcd, 0xc3, 0x84, 0xca, 0x85, 0x0f, 0xef, 0x72, 0x90, 0xca, 0xee,
0xe0, 0x7d, 0xc5, 0x05, 0x8d, 0xba, 0xa8, 0x8f, 0x86, 0x96, 0x5f, 0x2d, 0xec, 0xbb, 0x78, 0xbf,
0x9c, 0xb2, 0xbb, 0xd3, 0x47, 0xc3, 0xa3, 0x51, 0x87, 0x54, 0x60, 0x62, 0xc0, 0xe4, 0x19, 0x3b,
0xf3, 0x2b, 0xc9, 0xe0, 0x0d, 0x3e, 0x7e, 0xc9, 0xb3, 0xf7, 0x41, 0x36, 0x33, 0xcc, 0xe7, 0xb8,
0x0d, 0xac, 0x80, 0x84, 0x0b, 0x28, 0xb1, 0x47, 0xa3, 0x3b, 0xe4, 0xaf, 0x46, 0x92, 0x89, 0x96,
0xfb, 0x9b, 0x83, 0x83, 0x7b, 0xf8, 0xda, 0xab, 0x3c, 0x94, 0x51, 0x46, 0x43, 0x30, 0xe0, 0x2e,
0x3e, 0x9c, 0xd3, 0x44, 0x41, 0x26, 0xbb, 0xa8, 0xbf, 0x3b, 0xb4, 0x7c, 0xb3, 0x1c, 0x7c, 0x42,
0xb8, 0x6d, 0x20, 0xf6, 0x18, 0x5b, 0x9b, 0xc1, 0xf5, 0x07, 0x38, 0x57, 0x26, 0x78, 0x6d, 0x14,
0xe3, 0xf6, 0xf2, 0x5b, 0xaf, 0xf5, 0xe1, 0x7b, 0x0f, 0xf9, 0xf5, 0x31, 0xfb, 0x16, 0xb6, 0x58,
0x90, 0x82, 0x14, 0x41, 0x04, 0xa5, 0x0b, 0x96, 0x5f, 0x6f, 0xd4, 0xae, 0xed, 0xfe, 0xd6, 0xb5,
0xbd, 0x46, 0xd7, 0x1e, 0xed, 0x9d, 0x7f, 0xee, 0xa1, 0xd1, 0xc7, 0x1d, 0x7c, 0x30, 0x29, 0x5d,
0xb0, 0x4f, 0xf1, 0xa1, 0x8e, 0xc6, 0xbe, 0xdf, 0xe0, 0xd6, 0xe5, 0x08, 0x9d, 0xeb, 0x57, 0xee,
0x99, 0xac, 0xdf, 0xc4, 0x9a, 0xa8, 0x83, 0x69, 0x24, 0x5e, 0x0e, 0xf0, 0x8f, 0xc4, 0x18, 0x5b,
0x9b, 0x4c, 0x6c, 0xaf, 0x81, 0xb9, 0x9d, 0x9e, 0xf3, 0xaf, 0x8f, 0xe0, 0x01, 0x1a, 0x4f, 0x97,
0x17, 0x6e, 0xeb, 0xeb, 0x85, 0xdb, 0x3a, 0x5f, 0xb9, 0x68, 0xb9, 0x72, 0xd1, 0x97, 0x95, 0x8b,
0x7e, 0xac, 0x5c, 0xf4, 0xf6, 0xc9, 0x7f, 0xfe, 0xd7, 0x8f, 0xab, 0x6a, 0xda, 0x9a, 0xa2, 0xf0,
0xa0, 0x1c, 0xeb, 0xe1, 0xcf, 0x00, 0x00, 0x00, 0xff, 0xff, 0xe6, 0xbf, 0x19, 0xa6, 0x24, 0x04,
0x00, 0x00,
}
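For orientation, a minimal sketch of driving the regenerated EventsClient above, assuming an existing gRPC connection to containerd; the topic, payload and filter expression are illustrative only:
// Illustrative helper, not part of the vendored code.
package eventsexample
import (
	"context"
	"fmt"

	eventsapi "github.com/containerd/containerd/api/services/events/v1"
	"github.com/gogo/protobuf/types"
	"google.golang.org/grpc"
)
// publishAndTail publishes one event and then tails matching envelopes.
func publishAndTail(ctx context.Context, conn *grpc.ClientConn) error {
	client := eventsapi.NewEventsClient(conn)

	// Publish sends the payload as an Any; the server wraps it in an Envelope
	// whose namespace is introspected from the request context.
	if _, err := client.Publish(ctx, &eventsapi.PublishRequest{
		Topic: "/example/topic",
		Event: &types.Any{TypeUrl: "example.Payload", Value: []byte("hello")},
	}); err != nil {
		return err
	}

	// Subscribe streams Envelopes matching any of the provided filters.
	stream, err := client.Subscribe(ctx, &eventsapi.SubscribeRequest{
		Filters: []string{`topic~="/example/.+"`},
	})
	if err != nil {
		return err
	}
	for {
		env, err := stream.Recv()
		if err != nil {
			return err
		}
		fmt.Printf("%v %s %s\n", env.Timestamp, env.Namespace, env.Topic)
	}
}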

File diff suppressed because it is too large


@@ -1,35 +1,21 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: github.com/containerd/containerd/api/services/introspection/v1/introspection.proto
/*
Package introspection is a generated protocol buffer package.
It is generated from these files:
github.com/containerd/containerd/api/services/introspection/v1/introspection.proto
It has these top-level messages:
Plugin
PluginsRequest
PluginsResponse
*/
package introspection
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import containerd_types "github.com/containerd/containerd/api/types"
import google_rpc "github.com/gogo/googleapis/google/rpc"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import context "golang.org/x/net/context"
import grpc "google.golang.org/grpc"
import strings "strings"
import reflect "reflect"
import sortkeys "github.com/gogo/protobuf/sortkeys"
import io "io"
import (
context "context"
fmt "fmt"
types "github.com/containerd/containerd/api/types"
rpc "github.com/gogo/googleapis/google/rpc"
proto "github.com/gogo/protobuf/proto"
github_com_gogo_protobuf_sortkeys "github.com/gogo/protobuf/sortkeys"
grpc "google.golang.org/grpc"
io "io"
math "math"
reflect "reflect"
strings "strings"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -51,7 +37,7 @@ type Plugin struct {
// ID identifies the plugin uniquely in the system.
ID string `protobuf:"bytes,2,opt,name=id,proto3" json:"id,omitempty"`
// Requires lists the plugin types required by this plugin.
Requires []string `protobuf:"bytes,3,rep,name=requires" json:"requires,omitempty"`
Requires []string `protobuf:"bytes,3,rep,name=requires,proto3" json:"requires,omitempty"`
// Platforms enumerates the platforms this plugin will support.
//
// If values are provided here, the plugin will only be operable under the
@@ -61,30 +47,61 @@ type Plugin struct {
//
// If the plugin prefers certain platforms over others, they should be
// listed from most to least preferred.
Platforms []containerd_types.Platform `protobuf:"bytes,4,rep,name=platforms" json:"platforms"`
Platforms []types.Platform `protobuf:"bytes,4,rep,name=platforms,proto3" json:"platforms"`
// Exports allows plugins to provide values about state or configuration to
// interested parties.
//
// One example is exposing the configured path of a snapshotter plugin.
Exports map[string]string `protobuf:"bytes,5,rep,name=exports" json:"exports,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
Exports map[string]string `protobuf:"bytes,5,rep,name=exports,proto3" json:"exports,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
// Capabilities allows plugins to communicate feature switches to allow
// clients to detect features that may not be on be default or may be
// different from version to version.
//
// Use this sparingly.
Capabilities []string `protobuf:"bytes,6,rep,name=capabilities" json:"capabilities,omitempty"`
Capabilities []string `protobuf:"bytes,6,rep,name=capabilities,proto3" json:"capabilities,omitempty"`
// InitErr will be set if the plugin fails initialization.
//
// This means the plugin may have been registered but a non-terminal error
// was encountered during initialization.
//
// Plugins that have this value set cannot be used.
InitErr *google_rpc.Status `protobuf:"bytes,7,opt,name=init_err,json=initErr" json:"init_err,omitempty"`
InitErr *rpc.Status `protobuf:"bytes,7,opt,name=init_err,json=initErr,proto3" json:"init_err,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Plugin) Reset() { *m = Plugin{} }
func (*Plugin) ProtoMessage() {}
func (*Plugin) Descriptor() ([]byte, []int) { return fileDescriptorIntrospection, []int{0} }
func (m *Plugin) Reset() { *m = Plugin{} }
func (*Plugin) ProtoMessage() {}
func (*Plugin) Descriptor() ([]byte, []int) {
return fileDescriptor_1a14fda866f10715, []int{0}
}
func (m *Plugin) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Plugin) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Plugin.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Plugin) XXX_Merge(src proto.Message) {
xxx_messageInfo_Plugin.Merge(m, src)
}
func (m *Plugin) XXX_Size() int {
return m.Size()
}
func (m *Plugin) XXX_DiscardUnknown() {
xxx_messageInfo_Plugin.DiscardUnknown(m)
}
var xxx_messageInfo_Plugin proto.InternalMessageInfo
type PluginsRequest struct {
// Filters contains one or more filters using the syntax defined in the
@@ -97,27 +114,129 @@ type PluginsRequest struct {
// filters[0] or filters[1] or ... or filters[n-1] or filters[n]
//
// If filters is zero-length or nil, all items will be returned.
Filters []string `protobuf:"bytes,1,rep,name=filters" json:"filters,omitempty"`
Filters []string `protobuf:"bytes,1,rep,name=filters,proto3" json:"filters,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *PluginsRequest) Reset() { *m = PluginsRequest{} }
func (*PluginsRequest) ProtoMessage() {}
func (*PluginsRequest) Descriptor() ([]byte, []int) { return fileDescriptorIntrospection, []int{1} }
func (m *PluginsRequest) Reset() { *m = PluginsRequest{} }
func (*PluginsRequest) ProtoMessage() {}
func (*PluginsRequest) Descriptor() ([]byte, []int) {
return fileDescriptor_1a14fda866f10715, []int{1}
}
func (m *PluginsRequest) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *PluginsRequest) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_PluginsRequest.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *PluginsRequest) XXX_Merge(src proto.Message) {
xxx_messageInfo_PluginsRequest.Merge(m, src)
}
func (m *PluginsRequest) XXX_Size() int {
return m.Size()
}
func (m *PluginsRequest) XXX_DiscardUnknown() {
xxx_messageInfo_PluginsRequest.DiscardUnknown(m)
}
var xxx_messageInfo_PluginsRequest proto.InternalMessageInfo
type PluginsResponse struct {
Plugins []Plugin `protobuf:"bytes,1,rep,name=plugins" json:"plugins"`
Plugins []Plugin `protobuf:"bytes,1,rep,name=plugins,proto3" json:"plugins"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *PluginsResponse) Reset() { *m = PluginsResponse{} }
func (*PluginsResponse) ProtoMessage() {}
func (*PluginsResponse) Descriptor() ([]byte, []int) { return fileDescriptorIntrospection, []int{2} }
func (m *PluginsResponse) Reset() { *m = PluginsResponse{} }
func (*PluginsResponse) ProtoMessage() {}
func (*PluginsResponse) Descriptor() ([]byte, []int) {
return fileDescriptor_1a14fda866f10715, []int{2}
}
func (m *PluginsResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *PluginsResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_PluginsResponse.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *PluginsResponse) XXX_Merge(src proto.Message) {
xxx_messageInfo_PluginsResponse.Merge(m, src)
}
func (m *PluginsResponse) XXX_Size() int {
return m.Size()
}
func (m *PluginsResponse) XXX_DiscardUnknown() {
xxx_messageInfo_PluginsResponse.DiscardUnknown(m)
}
var xxx_messageInfo_PluginsResponse proto.InternalMessageInfo
func init() {
proto.RegisterType((*Plugin)(nil), "containerd.services.introspection.v1.Plugin")
proto.RegisterMapType((map[string]string)(nil), "containerd.services.introspection.v1.Plugin.ExportsEntry")
proto.RegisterType((*PluginsRequest)(nil), "containerd.services.introspection.v1.PluginsRequest")
proto.RegisterType((*PluginsResponse)(nil), "containerd.services.introspection.v1.PluginsResponse")
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/services/introspection/v1/introspection.proto", fileDescriptor_1a14fda866f10715)
}
var fileDescriptor_1a14fda866f10715 = []byte{
// 487 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x53, 0x4d, 0x6f, 0xd3, 0x40,
0x10, 0xcd, 0x3a, 0x69, 0xdc, 0x4c, 0xca, 0x87, 0x56, 0x15, 0x58, 0x3e, 0xb8, 0x51, 0xc4, 0x21,
0x42, 0xb0, 0x56, 0x03, 0x48, 0xb4, 0x48, 0x1c, 0x22, 0x72, 0xa8, 0xd4, 0x43, 0xe5, 0x5e, 0x10,
0x97, 0xca, 0x71, 0x36, 0x66, 0x85, 0xeb, 0xdd, 0xee, 0xae, 0x2d, 0x72, 0xe3, 0xc6, 0x5f, 0xcb,
0x91, 0x23, 0xa7, 0x8a, 0xfa, 0x37, 0xf0, 0x03, 0x90, 0xbd, 0x76, 0x9b, 0xdc, 0x12, 0x71, 0x9b,
0x79, 0x7e, 0x6f, 0xe6, 0xcd, 0x93, 0x17, 0x82, 0x98, 0xe9, 0xaf, 0xd9, 0x8c, 0x44, 0xfc, 0xda,
0x8f, 0x78, 0xaa, 0x43, 0x96, 0x52, 0x39, 0x5f, 0x2f, 0x43, 0xc1, 0x7c, 0x45, 0x65, 0xce, 0x22,
0xaa, 0x7c, 0x96, 0x6a, 0xc9, 0x95, 0xa0, 0x91, 0x66, 0x3c, 0xf5, 0xf3, 0xe3, 0x4d, 0x80, 0x08,
0xc9, 0x35, 0xc7, 0x2f, 0x1e, 0xd4, 0xa4, 0x51, 0x92, 0x4d, 0x62, 0x7e, 0xec, 0x9e, 0x6c, 0xb5,
0x59, 0x2f, 0x05, 0x55, 0xbe, 0x48, 0x42, 0xbd, 0xe0, 0xf2, 0xda, 0x2c, 0x70, 0x9f, 0xc7, 0x9c,
0xc7, 0x09, 0xf5, 0xa5, 0x88, 0x7c, 0xa5, 0x43, 0x9d, 0xa9, 0xfa, 0xc3, 0x61, 0xcc, 0x63, 0x5e,
0x95, 0x7e, 0x59, 0x19, 0x74, 0xf8, 0xd7, 0x82, 0xee, 0x45, 0x92, 0xc5, 0x2c, 0xc5, 0x18, 0x3a,
0xe5, 0x44, 0x07, 0x0d, 0xd0, 0xa8, 0x17, 0x54, 0x35, 0x7e, 0x06, 0x16, 0x9b, 0x3b, 0x56, 0x89,
0x4c, 0xba, 0xc5, 0xed, 0x91, 0x75, 0xf6, 0x29, 0xb0, 0xd8, 0x1c, 0xbb, 0xb0, 0x2f, 0xe9, 0x4d,
0xc6, 0x24, 0x55, 0x4e, 0x7b, 0xd0, 0x1e, 0xf5, 0x82, 0xfb, 0x1e, 0x7f, 0x84, 0x5e, 0xe3, 0x49,
0x39, 0x9d, 0x41, 0x7b, 0xd4, 0x1f, 0xbb, 0x64, 0xed, 0xec, 0xca, 0x36, 0xb9, 0xa8, 0x29, 0x93,
0xce, 0xea, 0xf6, 0xa8, 0x15, 0x3c, 0x48, 0xf0, 0x25, 0xd8, 0xf4, 0xbb, 0xe0, 0x52, 0x2b, 0x67,
0xaf, 0x52, 0x9f, 0x90, 0x6d, 0x42, 0x23, 0xe6, 0x0c, 0x32, 0x35, 0xda, 0x69, 0xaa, 0xe5, 0x32,
0x68, 0x26, 0xe1, 0x21, 0x1c, 0x44, 0xa1, 0x08, 0x67, 0x2c, 0x61, 0x9a, 0x51, 0xe5, 0x74, 0x2b,
0xd3, 0x1b, 0x18, 0x7e, 0x0d, 0xfb, 0x2c, 0x65, 0xfa, 0x8a, 0x4a, 0xe9, 0xd8, 0x03, 0x34, 0xea,
0x8f, 0x31, 0x31, 0x69, 0x12, 0x29, 0x22, 0x72, 0x59, 0xa5, 0x19, 0xd8, 0x25, 0x67, 0x2a, 0xa5,
0x7b, 0x0a, 0x07, 0xeb, 0xbb, 0xf0, 0x53, 0x68, 0x7f, 0xa3, 0xcb, 0x3a, 0xbe, 0xb2, 0xc4, 0x87,
0xb0, 0x97, 0x87, 0x49, 0x46, 0x4d, 0x80, 0x81, 0x69, 0x4e, 0xad, 0xf7, 0x68, 0xf8, 0x12, 0x1e,
0x1b, 0xbb, 0x2a, 0xa0, 0x37, 0x19, 0x55, 0x1a, 0x3b, 0x60, 0x2f, 0x58, 0xa2, 0xa9, 0x54, 0x0e,
0xaa, 0xbc, 0x35, 0xed, 0xf0, 0x0a, 0x9e, 0xdc, 0x73, 0x95, 0xe0, 0xa9, 0xa2, 0xf8, 0x1c, 0x6c,
0x61, 0xa0, 0x8a, 0xdc, 0x1f, 0xbf, 0xda, 0x25, 0xa2, 0x3a, 0xf2, 0x66, 0xc4, 0xf8, 0x27, 0x82,
0x47, 0x67, 0xeb, 0x54, 0x9c, 0x83, 0x5d, 0xaf, 0xc4, 0x6f, 0x77, 0x99, 0xdc, 0x5c, 0xe3, 0xbe,
0xdb, 0x51, 0x65, 0xee, 0x9a, 0x2c, 0x56, 0x77, 0x5e, 0xeb, 0xf7, 0x9d, 0xd7, 0xfa, 0x51, 0x78,
0x68, 0x55, 0x78, 0xe8, 0x57, 0xe1, 0xa1, 0x3f, 0x85, 0x87, 0xbe, 0x9c, 0xff, 0xdf, 0x5b, 0xfc,
0xb0, 0x01, 0x7c, 0xb6, 0x66, 0xdd, 0xea, 0xf7, 0x7f, 0xf3, 0x2f, 0x00, 0x00, 0xff, 0xff, 0xe6,
0x72, 0xde, 0x35, 0xe4, 0x03, 0x00, 0x00,
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
@@ -126,8 +245,9 @@ var _ grpc.ClientConn
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for Introspection service
// IntrospectionClient is the client API for Introspection service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type IntrospectionClient interface {
// Plugins returns a list of plugins in containerd.
//
@@ -146,15 +266,14 @@ func NewIntrospectionClient(cc *grpc.ClientConn) IntrospectionClient {
func (c *introspectionClient) Plugins(ctx context.Context, in *PluginsRequest, opts ...grpc.CallOption) (*PluginsResponse, error) {
out := new(PluginsResponse)
err := grpc.Invoke(ctx, "/containerd.services.introspection.v1.Introspection/Plugins", in, out, c.cc, opts...)
err := c.cc.Invoke(ctx, "/containerd.services.introspection.v1.Introspection/Plugins", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
// Server API for Introspection service
// IntrospectionServer is the server API for Introspection service.
type IntrospectionServer interface {
// Plugins returns a list of plugins in containerd.
//
@@ -294,6 +413,9 @@ func (m *Plugin) MarshalTo(dAtA []byte) (int, error) {
}
i += n1
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -327,6 +449,9 @@ func (m *PluginsRequest) MarshalTo(dAtA []byte) (int, error) {
i += copy(dAtA[i:], s)
}
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -357,6 +482,9 @@ func (m *PluginsResponse) MarshalTo(dAtA []byte) (int, error) {
i += n
}
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -370,6 +498,9 @@ func encodeVarintIntrospection(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *Plugin) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Type)
@@ -410,10 +541,16 @@ func (m *Plugin) Size() (n int) {
l = m.InitErr.Size()
n += 1 + l + sovIntrospection(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *PluginsRequest) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Filters) > 0 {
@@ -422,10 +559,16 @@ func (m *PluginsRequest) Size() (n int) {
n += 1 + l + sovIntrospection(uint64(l))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *PluginsResponse) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if len(m.Plugins) > 0 {
@@ -434,6 +577,9 @@ func (m *PluginsResponse) Size() (n int) {
n += 1 + l + sovIntrospection(uint64(l))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -458,7 +604,7 @@ func (this *Plugin) String() string {
for k, _ := range this.Exports {
keysForExports = append(keysForExports, k)
}
sortkeys.Strings(keysForExports)
github_com_gogo_protobuf_sortkeys.Strings(keysForExports)
mapStringForExports := "map[string]string{"
for _, k := range keysForExports {
mapStringForExports += fmt.Sprintf("%v: %v,", k, this.Exports[k])
@@ -468,10 +614,11 @@ func (this *Plugin) String() string {
`Type:` + fmt.Sprintf("%v", this.Type) + `,`,
`ID:` + fmt.Sprintf("%v", this.ID) + `,`,
`Requires:` + fmt.Sprintf("%v", this.Requires) + `,`,
`Platforms:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.Platforms), "Platform", "containerd_types.Platform", 1), `&`, ``, 1) + `,`,
`Platforms:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.Platforms), "Platform", "types.Platform", 1), `&`, ``, 1) + `,`,
`Exports:` + mapStringForExports + `,`,
`Capabilities:` + fmt.Sprintf("%v", this.Capabilities) + `,`,
`InitErr:` + strings.Replace(fmt.Sprintf("%v", this.InitErr), "Status", "google_rpc.Status", 1) + `,`,
`InitErr:` + strings.Replace(fmt.Sprintf("%v", this.InitErr), "Status", "rpc.Status", 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -482,6 +629,7 @@ func (this *PluginsRequest) String() string {
}
s := strings.Join([]string{`&PluginsRequest{`,
`Filters:` + fmt.Sprintf("%v", this.Filters) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -492,6 +640,7 @@ func (this *PluginsResponse) String() string {
}
s := strings.Join([]string{`&PluginsResponse{`,
`Plugins:` + strings.Replace(strings.Replace(fmt.Sprintf("%v", this.Plugins), "Plugin", "Plugin", 1), `&`, ``, 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -519,7 +668,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -547,7 +696,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -557,6 +706,9 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthIntrospection
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -576,7 +728,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -586,6 +738,9 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthIntrospection
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -605,7 +760,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -615,6 +770,9 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthIntrospection
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -634,7 +792,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -643,10 +801,13 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthIntrospection
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
m.Platforms = append(m.Platforms, containerd_types.Platform{})
m.Platforms = append(m.Platforms, types.Platform{})
if err := m.Platforms[len(m.Platforms)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
}
@@ -665,7 +826,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -674,6 +835,9 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthIntrospection
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -694,7 +858,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -711,7 +875,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLenmapkey |= (uint64(b) & 0x7F) << shift
stringLenmapkey |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -721,6 +885,9 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postStringIndexmapkey := iNdEx + intStringLenmapkey
if postStringIndexmapkey < 0 {
return ErrInvalidLengthIntrospection
}
if postStringIndexmapkey > l {
return io.ErrUnexpectedEOF
}
@@ -737,7 +904,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLenmapvalue |= (uint64(b) & 0x7F) << shift
stringLenmapvalue |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -747,6 +914,9 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postStringIndexmapvalue := iNdEx + intStringLenmapvalue
if postStringIndexmapvalue < 0 {
return ErrInvalidLengthIntrospection
}
if postStringIndexmapvalue > l {
return io.ErrUnexpectedEOF
}
@@ -783,7 +953,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -793,6 +963,9 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthIntrospection
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -812,7 +985,7 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -821,11 +994,14 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthIntrospection
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.InitErr == nil {
m.InitErr = &google_rpc.Status{}
m.InitErr = &rpc.Status{}
}
if err := m.InitErr.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
@@ -840,9 +1016,13 @@ func (m *Plugin) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthIntrospection
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthIntrospection
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -867,7 +1047,7 @@ func (m *PluginsRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -895,7 +1075,7 @@ func (m *PluginsRequest) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -905,6 +1085,9 @@ func (m *PluginsRequest) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthIntrospection
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -919,9 +1102,13 @@ func (m *PluginsRequest) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthIntrospection
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthIntrospection
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -946,7 +1133,7 @@ func (m *PluginsResponse) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -974,7 +1161,7 @@ func (m *PluginsResponse) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -983,6 +1170,9 @@ func (m *PluginsResponse) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthIntrospection
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthIntrospection
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1000,9 +1190,13 @@ func (m *PluginsResponse) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthIntrospection
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthIntrospection
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -1066,10 +1260,13 @@ func skipIntrospection(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthIntrospection
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthIntrospection
}
return iNdEx, nil
case 3:
for {
@@ -1098,6 +1295,9 @@ func skipIntrospection(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthIntrospection
}
}
return iNdEx, nil
case 4:
@@ -1116,42 +1316,3 @@ var (
ErrInvalidLengthIntrospection = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowIntrospection = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/services/introspection/v1/introspection.proto", fileDescriptorIntrospection)
}
var fileDescriptorIntrospection = []byte{
// 487 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xa4, 0x53, 0x4d, 0x6f, 0xd3, 0x40,
0x10, 0xcd, 0x3a, 0x69, 0xdc, 0x4c, 0xca, 0x87, 0x56, 0x15, 0x58, 0x3e, 0xb8, 0x51, 0xc4, 0x21,
0x42, 0xb0, 0x56, 0x03, 0x48, 0xb4, 0x48, 0x1c, 0x22, 0x72, 0xa8, 0xd4, 0x43, 0xe5, 0x5e, 0x10,
0x97, 0xca, 0x71, 0x36, 0x66, 0x85, 0xeb, 0xdd, 0xee, 0xae, 0x2d, 0x72, 0xe3, 0xc6, 0x5f, 0xcb,
0x91, 0x23, 0xa7, 0x8a, 0xfa, 0x37, 0xf0, 0x03, 0x90, 0xbd, 0x76, 0x9b, 0xdc, 0x12, 0x71, 0x9b,
0x79, 0x7e, 0x6f, 0xe6, 0xcd, 0x93, 0x17, 0x82, 0x98, 0xe9, 0xaf, 0xd9, 0x8c, 0x44, 0xfc, 0xda,
0x8f, 0x78, 0xaa, 0x43, 0x96, 0x52, 0x39, 0x5f, 0x2f, 0x43, 0xc1, 0x7c, 0x45, 0x65, 0xce, 0x22,
0xaa, 0x7c, 0x96, 0x6a, 0xc9, 0x95, 0xa0, 0x91, 0x66, 0x3c, 0xf5, 0xf3, 0xe3, 0x4d, 0x80, 0x08,
0xc9, 0x35, 0xc7, 0x2f, 0x1e, 0xd4, 0xa4, 0x51, 0x92, 0x4d, 0x62, 0x7e, 0xec, 0x9e, 0x6c, 0xb5,
0x59, 0x2f, 0x05, 0x55, 0xbe, 0x48, 0x42, 0xbd, 0xe0, 0xf2, 0xda, 0x2c, 0x70, 0x9f, 0xc7, 0x9c,
0xc7, 0x09, 0xf5, 0xa5, 0x88, 0x7c, 0xa5, 0x43, 0x9d, 0xa9, 0xfa, 0xc3, 0x61, 0xcc, 0x63, 0x5e,
0x95, 0x7e, 0x59, 0x19, 0x74, 0xf8, 0xd7, 0x82, 0xee, 0x45, 0x92, 0xc5, 0x2c, 0xc5, 0x18, 0x3a,
0xe5, 0x44, 0x07, 0x0d, 0xd0, 0xa8, 0x17, 0x54, 0x35, 0x7e, 0x06, 0x16, 0x9b, 0x3b, 0x56, 0x89,
0x4c, 0xba, 0xc5, 0xed, 0x91, 0x75, 0xf6, 0x29, 0xb0, 0xd8, 0x1c, 0xbb, 0xb0, 0x2f, 0xe9, 0x4d,
0xc6, 0x24, 0x55, 0x4e, 0x7b, 0xd0, 0x1e, 0xf5, 0x82, 0xfb, 0x1e, 0x7f, 0x84, 0x5e, 0xe3, 0x49,
0x39, 0x9d, 0x41, 0x7b, 0xd4, 0x1f, 0xbb, 0x64, 0xed, 0xec, 0xca, 0x36, 0xb9, 0xa8, 0x29, 0x93,
0xce, 0xea, 0xf6, 0xa8, 0x15, 0x3c, 0x48, 0xf0, 0x25, 0xd8, 0xf4, 0xbb, 0xe0, 0x52, 0x2b, 0x67,
0xaf, 0x52, 0x9f, 0x90, 0x6d, 0x42, 0x23, 0xe6, 0x0c, 0x32, 0x35, 0xda, 0x69, 0xaa, 0xe5, 0x32,
0x68, 0x26, 0xe1, 0x21, 0x1c, 0x44, 0xa1, 0x08, 0x67, 0x2c, 0x61, 0x9a, 0x51, 0xe5, 0x74, 0x2b,
0xd3, 0x1b, 0x18, 0x7e, 0x0d, 0xfb, 0x2c, 0x65, 0xfa, 0x8a, 0x4a, 0xe9, 0xd8, 0x03, 0x34, 0xea,
0x8f, 0x31, 0x31, 0x69, 0x12, 0x29, 0x22, 0x72, 0x59, 0xa5, 0x19, 0xd8, 0x25, 0x67, 0x2a, 0xa5,
0x7b, 0x0a, 0x07, 0xeb, 0xbb, 0xf0, 0x53, 0x68, 0x7f, 0xa3, 0xcb, 0x3a, 0xbe, 0xb2, 0xc4, 0x87,
0xb0, 0x97, 0x87, 0x49, 0x46, 0x4d, 0x80, 0x81, 0x69, 0x4e, 0xad, 0xf7, 0x68, 0xf8, 0x12, 0x1e,
0x1b, 0xbb, 0x2a, 0xa0, 0x37, 0x19, 0x55, 0x1a, 0x3b, 0x60, 0x2f, 0x58, 0xa2, 0xa9, 0x54, 0x0e,
0xaa, 0xbc, 0x35, 0xed, 0xf0, 0x0a, 0x9e, 0xdc, 0x73, 0x95, 0xe0, 0xa9, 0xa2, 0xf8, 0x1c, 0x6c,
0x61, 0xa0, 0x8a, 0xdc, 0x1f, 0xbf, 0xda, 0x25, 0xa2, 0x3a, 0xf2, 0x66, 0xc4, 0xf8, 0x27, 0x82,
0x47, 0x67, 0xeb, 0x54, 0x9c, 0x83, 0x5d, 0xaf, 0xc4, 0x6f, 0x77, 0x99, 0xdc, 0x5c, 0xe3, 0xbe,
0xdb, 0x51, 0x65, 0xee, 0x9a, 0x2c, 0x56, 0x77, 0x5e, 0xeb, 0xf7, 0x9d, 0xd7, 0xfa, 0x51, 0x78,
0x68, 0x55, 0x78, 0xe8, 0x57, 0xe1, 0xa1, 0x3f, 0x85, 0x87, 0xbe, 0x9c, 0xff, 0xdf, 0x5b, 0xfc,
0xb0, 0x01, 0x7c, 0xb6, 0x66, 0xdd, 0xea, 0xf7, 0x7f, 0xf3, 0x2f, 0x00, 0x00, 0xff, 0xff, 0xe6,
0x72, 0xde, 0x35, 0xe4, 0x03, 0x00, 0x00,
}
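Similarly, a hedged sketch of querying the regenerated Introspection service for its plugins; the connection is assumed to exist and the filter expression is illustrative:
// Illustrative helper, not part of the vendored code.
package introspectexample
import (
	"context"
	"fmt"

	introspectionapi "github.com/containerd/containerd/api/services/introspection/v1"
	"google.golang.org/grpc"
)
// listSnapshotters prints every registered snapshotter plugin and whether it
// initialized cleanly. Filters are OR-ed; an empty slice returns all plugins.
func listSnapshotters(ctx context.Context, conn *grpc.ClientConn) error {
	client := introspectionapi.NewIntrospectionClient(conn)
	resp, err := client.Plugins(ctx, &introspectionapi.PluginsRequest{
		Filters: []string{`type=="io.containerd.snapshotter.v1"`},
	})
	if err != nil {
		return err
	}
	for _, p := range resp.Plugins {
		status := "ok"
		if p.InitErr != nil {
			status = p.InitErr.Message
		}
		fmt.Printf("%s/%s: %s\n", p.Type, p.ID, status)
	}
	return nil
}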

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,31 +1,19 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: github.com/containerd/containerd/api/services/version/v1/version.proto
/*
Package version is a generated protocol buffer package.
It is generated from these files:
github.com/containerd/containerd/api/services/version/v1/version.proto
It has these top-level messages:
VersionResponse
*/
package version
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/gogo/protobuf/types"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import context "golang.org/x/net/context"
import grpc "google.golang.org/grpc"
import strings "strings"
import reflect "reflect"
import io "io"
import (
context "context"
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
types "github.com/gogo/protobuf/types"
grpc "google.golang.org/grpc"
io "io"
math "math"
reflect "reflect"
strings "strings"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -39,18 +27,73 @@ var _ = math.Inf
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type VersionResponse struct {
Version string `protobuf:"bytes,1,opt,name=version,proto3" json:"version,omitempty"`
Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"`
Version string `protobuf:"bytes,1,opt,name=version,proto3" json:"version,omitempty"`
Revision string `protobuf:"bytes,2,opt,name=revision,proto3" json:"revision,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *VersionResponse) Reset() { *m = VersionResponse{} }
func (*VersionResponse) ProtoMessage() {}
func (*VersionResponse) Descriptor() ([]byte, []int) { return fileDescriptorVersion, []int{0} }
func (m *VersionResponse) Reset() { *m = VersionResponse{} }
func (*VersionResponse) ProtoMessage() {}
func (*VersionResponse) Descriptor() ([]byte, []int) {
return fileDescriptor_128109001e578ffe, []int{0}
}
func (m *VersionResponse) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *VersionResponse) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_VersionResponse.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *VersionResponse) XXX_Merge(src proto.Message) {
xxx_messageInfo_VersionResponse.Merge(m, src)
}
func (m *VersionResponse) XXX_Size() int {
return m.Size()
}
func (m *VersionResponse) XXX_DiscardUnknown() {
xxx_messageInfo_VersionResponse.DiscardUnknown(m)
}
var xxx_messageInfo_VersionResponse proto.InternalMessageInfo
func init() {
proto.RegisterType((*VersionResponse)(nil), "containerd.services.version.v1.VersionResponse")
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/services/version/v1/version.proto", fileDescriptor_128109001e578ffe)
}
var fileDescriptor_128109001e578ffe = []byte{
// 243 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x72, 0x4b, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xce, 0xcf, 0x2b, 0x49, 0xcc, 0xcc, 0x4b, 0x2d,
0x4a, 0x41, 0x66, 0x26, 0x16, 0x64, 0xea, 0x17, 0xa7, 0x16, 0x95, 0x65, 0x26, 0xa7, 0x16, 0xeb,
0x97, 0xa5, 0x16, 0x15, 0x67, 0xe6, 0xe7, 0xe9, 0x97, 0x19, 0xc2, 0x98, 0x7a, 0x05, 0x45, 0xf9,
0x25, 0xf9, 0x42, 0x72, 0x08, 0x1d, 0x7a, 0x30, 0xd5, 0x7a, 0x30, 0x25, 0x65, 0x86, 0x52, 0xd2,
0xe9, 0xf9, 0xf9, 0xe9, 0x39, 0xa9, 0xfa, 0x60, 0xd5, 0x49, 0xa5, 0x69, 0xfa, 0xa9, 0xb9, 0x05,
0x25, 0x95, 0x10, 0xcd, 0x52, 0x22, 0xe9, 0xf9, 0xe9, 0xf9, 0x60, 0xa6, 0x3e, 0x88, 0x05, 0x11,
0x55, 0x72, 0xe7, 0xe2, 0x0f, 0x83, 0x18, 0x10, 0x94, 0x5a, 0x5c, 0x90, 0x9f, 0x57, 0x9c, 0x2a,
0x24, 0xc1, 0xc5, 0x0e, 0x35, 0x53, 0x82, 0x51, 0x81, 0x51, 0x83, 0x33, 0x08, 0xc6, 0x15, 0x92,
0xe2, 0xe2, 0x28, 0x4a, 0x2d, 0xcb, 0x04, 0x4b, 0x31, 0x81, 0xa5, 0xe0, 0x7c, 0xa3, 0x58, 0x2e,
0x76, 0xa8, 0x41, 0x42, 0x41, 0x08, 0xa6, 0x98, 0x1e, 0xc4, 0x49, 0x7a, 0x30, 0x27, 0xe9, 0xb9,
0x82, 0x9c, 0x24, 0xa5, 0xaf, 0x87, 0xdf, 0x2b, 0x7a, 0x68, 0x8e, 0x72, 0x8a, 0x3a, 0xf1, 0x50,
0x8e, 0xe1, 0xc6, 0x43, 0x39, 0x86, 0x86, 0x47, 0x72, 0x8c, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78,
0x24, 0xc7, 0xf8, 0xe0, 0x91, 0x1c, 0x63, 0x94, 0x03, 0xb9, 0x81, 0x6b, 0x0d, 0x65, 0x46, 0x30,
0x26, 0xb1, 0x81, 0x9d, 0x67, 0x0c, 0x08, 0x00, 0x00, 0xff, 0xff, 0x95, 0x0d, 0x52, 0x23, 0xa9,
0x01, 0x00, 0x00,
}
// Reference imports to suppress errors if they are not otherwise used.
var _ context.Context
var _ grpc.ClientConn
@@ -59,10 +102,11 @@ var _ grpc.ClientConn
// is compatible with the grpc package it is being compiled against.
const _ = grpc.SupportPackageIsVersion4
// Client API for Version service
// VersionClient is the client API for Version service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type VersionClient interface {
Version(ctx context.Context, in *google_protobuf.Empty, opts ...grpc.CallOption) (*VersionResponse, error)
Version(ctx context.Context, in *types.Empty, opts ...grpc.CallOption) (*VersionResponse, error)
}
type versionClient struct {
@@ -73,19 +117,18 @@ func NewVersionClient(cc *grpc.ClientConn) VersionClient {
return &versionClient{cc}
}
func (c *versionClient) Version(ctx context.Context, in *google_protobuf.Empty, opts ...grpc.CallOption) (*VersionResponse, error) {
func (c *versionClient) Version(ctx context.Context, in *types.Empty, opts ...grpc.CallOption) (*VersionResponse, error) {
out := new(VersionResponse)
err := grpc.Invoke(ctx, "/containerd.services.version.v1.Version/Version", in, out, c.cc, opts...)
err := c.cc.Invoke(ctx, "/containerd.services.version.v1.Version/Version", in, out, opts...)
if err != nil {
return nil, err
}
return out, nil
}
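For orientation, a minimal client-side sketch of calling this Version RPC through the regenerated stubs; it is not part of the diff, the endpoint address and the WithInsecure option are placeholders, and it assumes the vendored github.com/containerd/containerd/api/services/version/v1 and github.com/gogo/protobuf/types packages resolve as shown.

package main

import (
	"context"
	"fmt"
	"log"

	version "github.com/containerd/containerd/api/services/version/v1"
	"github.com/gogo/protobuf/types"
	"google.golang.org/grpc"
)

func main() {
	// The address is illustrative; containerd is normally reached over a unix socket.
	conn, err := grpc.Dial("127.0.0.1:10010", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// NewVersionClient and the *types.Empty request match the regenerated signatures above.
	client := version.NewVersionClient(conn)
	resp, err := client.Version(context.Background(), &types.Empty{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Version, resp.Revision)
}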
// Server API for Version service
// VersionServer is the server API for Version service.
type VersionServer interface {
Version(context.Context, *google_protobuf.Empty) (*VersionResponse, error)
Version(context.Context, *types.Empty) (*VersionResponse, error)
}
func RegisterVersionServer(s *grpc.Server, srv VersionServer) {
@@ -93,7 +136,7 @@ func RegisterVersionServer(s *grpc.Server, srv VersionServer) {
}
func _Version_Version_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
in := new(google_protobuf.Empty)
in := new(types.Empty)
if err := dec(in); err != nil {
return nil, err
}
@@ -105,7 +148,7 @@ func _Version_Version_Handler(srv interface{}, ctx context.Context, dec func(int
FullMethod: "/containerd.services.version.v1.Version/Version",
}
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
return srv.(VersionServer).Version(ctx, req.(*google_protobuf.Empty))
return srv.(VersionServer).Version(ctx, req.(*types.Empty))
}
return interceptor(ctx, in, info, handler)
}
@@ -150,6 +193,9 @@ func (m *VersionResponse) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintVersion(dAtA, i, uint64(len(m.Revision)))
i += copy(dAtA[i:], m.Revision)
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -163,6 +209,9 @@ func encodeVarintVersion(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *VersionResponse) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Version)
@@ -173,6 +222,9 @@ func (m *VersionResponse) Size() (n int) {
if l > 0 {
n += 1 + l + sovVersion(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -196,6 +248,7 @@ func (this *VersionResponse) String() string {
s := strings.Join([]string{`&VersionResponse{`,
`Version:` + fmt.Sprintf("%v", this.Version) + `,`,
`Revision:` + fmt.Sprintf("%v", this.Revision) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -223,7 +276,7 @@ func (m *VersionResponse) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -251,7 +304,7 @@ func (m *VersionResponse) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -261,6 +314,9 @@ func (m *VersionResponse) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthVersion
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthVersion
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -280,7 +336,7 @@ func (m *VersionResponse) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -290,6 +346,9 @@ func (m *VersionResponse) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthVersion
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthVersion
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -304,9 +363,13 @@ func (m *VersionResponse) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthVersion
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthVersion
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
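The recurring additions in these unmarshal hunks are defensive length checks: the regenerated code now rejects computed indexes that have wrapped negative. A standalone illustration of the overflow that the postIndex < 0 and (iNdEx + skippy) < 0 guards catch (values are made up; this is not code from the vendored package):

package main

import "fmt"

func main() {
	// A hostile length prefix near the maximum int value, added to the current
	// read offset, wraps to a negative number. A plain "postIndex > len(data)"
	// bounds check would miss that, so the generated code also rejects
	// negative indexes with ErrInvalidLength*.
	iNdEx := 10
	intStringLen := int(^uint(0) >> 1) // maximum int, as if decoded from a malicious varint
	postIndex := iNdEx + intStringLen  // wraps to a negative value
	fmt.Println(postIndex < 0)         // true, so the new guard rejects the input
}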
@@ -370,10 +433,13 @@ func skipVersion(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthVersion
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthVersion
}
return iNdEx, nil
case 3:
for {
@@ -402,6 +468,9 @@ func skipVersion(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthVersion
}
}
return iNdEx, nil
case 4:
@@ -420,27 +489,3 @@ var (
ErrInvalidLengthVersion = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowVersion = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/services/version/v1/version.proto", fileDescriptorVersion)
}
var fileDescriptorVersion = []byte{
// 243 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x72, 0x4b, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xce, 0xcf, 0x2b, 0x49, 0xcc, 0xcc, 0x4b, 0x2d,
0x4a, 0x41, 0x66, 0x26, 0x16, 0x64, 0xea, 0x17, 0xa7, 0x16, 0x95, 0x65, 0x26, 0xa7, 0x16, 0xeb,
0x97, 0xa5, 0x16, 0x15, 0x67, 0xe6, 0xe7, 0xe9, 0x97, 0x19, 0xc2, 0x98, 0x7a, 0x05, 0x45, 0xf9,
0x25, 0xf9, 0x42, 0x72, 0x08, 0x1d, 0x7a, 0x30, 0xd5, 0x7a, 0x30, 0x25, 0x65, 0x86, 0x52, 0xd2,
0xe9, 0xf9, 0xf9, 0xe9, 0x39, 0xa9, 0xfa, 0x60, 0xd5, 0x49, 0xa5, 0x69, 0xfa, 0xa9, 0xb9, 0x05,
0x25, 0x95, 0x10, 0xcd, 0x52, 0x22, 0xe9, 0xf9, 0xe9, 0xf9, 0x60, 0xa6, 0x3e, 0x88, 0x05, 0x11,
0x55, 0x72, 0xe7, 0xe2, 0x0f, 0x83, 0x18, 0x10, 0x94, 0x5a, 0x5c, 0x90, 0x9f, 0x57, 0x9c, 0x2a,
0x24, 0xc1, 0xc5, 0x0e, 0x35, 0x53, 0x82, 0x51, 0x81, 0x51, 0x83, 0x33, 0x08, 0xc6, 0x15, 0x92,
0xe2, 0xe2, 0x28, 0x4a, 0x2d, 0xcb, 0x04, 0x4b, 0x31, 0x81, 0xa5, 0xe0, 0x7c, 0xa3, 0x58, 0x2e,
0x76, 0xa8, 0x41, 0x42, 0x41, 0x08, 0xa6, 0x98, 0x1e, 0xc4, 0x49, 0x7a, 0x30, 0x27, 0xe9, 0xb9,
0x82, 0x9c, 0x24, 0xa5, 0xaf, 0x87, 0xdf, 0x2b, 0x7a, 0x68, 0x8e, 0x72, 0x8a, 0x3a, 0xf1, 0x50,
0x8e, 0xe1, 0xc6, 0x43, 0x39, 0x86, 0x86, 0x47, 0x72, 0x8c, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78,
0x24, 0xc7, 0xf8, 0xe0, 0x91, 0x1c, 0x63, 0x94, 0x03, 0xb9, 0x81, 0x6b, 0x0d, 0x65, 0x46, 0x30,
0x26, 0xb1, 0x81, 0x9d, 0x67, 0x0c, 0x08, 0x00, 0x00, 0xff, 0xff, 0x95, 0x0d, 0x52, 0x23, 0xa9,
0x01, 0x00, 0x00,
}


@@ -1,35 +1,18 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: github.com/containerd/containerd/api/types/descriptor.proto
/*
Package types is a generated protocol buffer package.
It is generated from these files:
github.com/containerd/containerd/api/types/descriptor.proto
github.com/containerd/containerd/api/types/metrics.proto
github.com/containerd/containerd/api/types/mount.proto
github.com/containerd/containerd/api/types/platform.proto
It has these top-level messages:
Descriptor
Metric
Mount
Platform
*/
package types
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import github_com_opencontainers_go_digest "github.com/opencontainers/go-digest"
import strings "strings"
import reflect "reflect"
import io "io"
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
github_com_gogo_protobuf_sortkeys "github.com/gogo/protobuf/sortkeys"
github_com_opencontainers_go_digest "github.com/opencontainers/go-digest"
io "io"
math "math"
reflect "reflect"
strings "strings"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -48,18 +31,80 @@ const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
// oci descriptor found in a manifest.
// See https://godoc.org/github.com/opencontainers/image-spec/specs-go/v1#Descriptor
type Descriptor struct {
MediaType string `protobuf:"bytes,1,opt,name=media_type,json=mediaType,proto3" json:"media_type,omitempty"`
Digest github_com_opencontainers_go_digest.Digest `protobuf:"bytes,2,opt,name=digest,proto3,customtype=github.com/opencontainers/go-digest.Digest" json:"digest"`
Size_ int64 `protobuf:"varint,3,opt,name=size,proto3" json:"size,omitempty"`
MediaType string `protobuf:"bytes,1,opt,name=media_type,json=mediaType,proto3" json:"media_type,omitempty"`
Digest github_com_opencontainers_go_digest.Digest `protobuf:"bytes,2,opt,name=digest,proto3,customtype=github.com/opencontainers/go-digest.Digest" json:"digest"`
Size_ int64 `protobuf:"varint,3,opt,name=size,proto3" json:"size,omitempty"`
Annotations map[string]string `protobuf:"bytes,5,rep,name=annotations,proto3" json:"annotations,omitempty" protobuf_key:"bytes,1,opt,name=key,proto3" protobuf_val:"bytes,2,opt,name=value,proto3"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Descriptor) Reset() { *m = Descriptor{} }
func (*Descriptor) ProtoMessage() {}
func (*Descriptor) Descriptor() ([]byte, []int) { return fileDescriptorDescriptor, []int{0} }
func (m *Descriptor) Reset() { *m = Descriptor{} }
func (*Descriptor) ProtoMessage() {}
func (*Descriptor) Descriptor() ([]byte, []int) {
return fileDescriptor_37f958df3707db9e, []int{0}
}
func (m *Descriptor) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Descriptor) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Descriptor.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Descriptor) XXX_Merge(src proto.Message) {
xxx_messageInfo_Descriptor.Merge(m, src)
}
func (m *Descriptor) XXX_Size() int {
return m.Size()
}
func (m *Descriptor) XXX_DiscardUnknown() {
xxx_messageInfo_Descriptor.DiscardUnknown(m)
}
var xxx_messageInfo_Descriptor proto.InternalMessageInfo
func init() {
proto.RegisterType((*Descriptor)(nil), "containerd.types.Descriptor")
proto.RegisterMapType((map[string]string)(nil), "containerd.types.Descriptor.AnnotationsEntry")
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/types/descriptor.proto", fileDescriptor_37f958df3707db9e)
}
var fileDescriptor_37f958df3707db9e = []byte{
// 311 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xb2, 0x4e, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xce, 0xcf, 0x2b, 0x49, 0xcc, 0xcc, 0x4b, 0x2d,
0x4a, 0x41, 0x66, 0x26, 0x16, 0x64, 0xea, 0x97, 0x54, 0x16, 0xa4, 0x16, 0xeb, 0xa7, 0xa4, 0x16,
0x27, 0x17, 0x65, 0x16, 0x94, 0xe4, 0x17, 0xe9, 0x15, 0x14, 0xe5, 0x97, 0xe4, 0x0b, 0x09, 0x20,
0x94, 0xe9, 0x81, 0x95, 0x48, 0x89, 0xa4, 0xe7, 0xa7, 0xe7, 0x83, 0x25, 0xf5, 0x41, 0x2c, 0x88,
0x3a, 0xa5, 0x39, 0x4c, 0x5c, 0x5c, 0x2e, 0x70, 0xcd, 0x42, 0xb2, 0x5c, 0x5c, 0xb9, 0xa9, 0x29,
0x99, 0x89, 0xf1, 0x20, 0x3d, 0x12, 0x8c, 0x0a, 0x8c, 0x1a, 0x9c, 0x41, 0x9c, 0x60, 0x91, 0x90,
0xca, 0x82, 0x54, 0x21, 0x2f, 0x2e, 0xb6, 0x94, 0xcc, 0xf4, 0xd4, 0xe2, 0x12, 0x09, 0x26, 0x90,
0x94, 0x93, 0xd1, 0x89, 0x7b, 0xf2, 0x0c, 0xb7, 0xee, 0xc9, 0x6b, 0x21, 0x39, 0x35, 0xbf, 0x20,
0x35, 0x0f, 0x6e, 0x79, 0xb1, 0x7e, 0x7a, 0xbe, 0x2e, 0x44, 0x8b, 0x9e, 0x0b, 0x98, 0x0a, 0x82,
0x9a, 0x20, 0x24, 0xc4, 0xc5, 0x52, 0x9c, 0x59, 0x95, 0x2a, 0xc1, 0xac, 0xc0, 0xa8, 0xc1, 0x1c,
0x04, 0x66, 0x0b, 0xf9, 0x73, 0x71, 0x27, 0xe6, 0xe5, 0xe5, 0x97, 0x24, 0x96, 0x64, 0xe6, 0xe7,
0x15, 0x4b, 0xb0, 0x2a, 0x30, 0x6b, 0x70, 0x1b, 0xe9, 0xea, 0xa1, 0xfb, 0x45, 0x0f, 0xe1, 0x62,
0x3d, 0x47, 0x84, 0x7a, 0xd7, 0xbc, 0x92, 0xa2, 0xca, 0x20, 0x64, 0x13, 0xa4, 0xec, 0xb8, 0x04,
0xd0, 0x15, 0x08, 0x09, 0x70, 0x31, 0x67, 0xa7, 0x56, 0x42, 0x3d, 0x07, 0x62, 0x0a, 0x89, 0x70,
0xb1, 0x96, 0x25, 0xe6, 0x94, 0xa6, 0x42, 0x7c, 0x15, 0x04, 0xe1, 0x58, 0x31, 0x59, 0x30, 0x3a,
0x79, 0x9d, 0x78, 0x28, 0xc7, 0x70, 0xe3, 0xa1, 0x1c, 0x43, 0xc3, 0x23, 0x39, 0xc6, 0x13, 0x8f,
0xe4, 0x18, 0x2f, 0x3c, 0x92, 0x63, 0x7c, 0xf0, 0x48, 0x8e, 0x31, 0xca, 0x80, 0xf8, 0xd8, 0xb1,
0x06, 0x93, 0x11, 0x0c, 0x49, 0x6c, 0xe0, 0x30, 0x37, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0x22,
0x8a, 0x20, 0x4a, 0xda, 0x01, 0x00, 0x00,
}
func (m *Descriptor) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -92,6 +137,26 @@ func (m *Descriptor) MarshalTo(dAtA []byte) (int, error) {
i++
i = encodeVarintDescriptor(dAtA, i, uint64(m.Size_))
}
if len(m.Annotations) > 0 {
for k, _ := range m.Annotations {
dAtA[i] = 0x2a
i++
v := m.Annotations[k]
mapSize := 1 + len(k) + sovDescriptor(uint64(len(k))) + 1 + len(v) + sovDescriptor(uint64(len(v)))
i = encodeVarintDescriptor(dAtA, i, uint64(mapSize))
dAtA[i] = 0xa
i++
i = encodeVarintDescriptor(dAtA, i, uint64(len(k)))
i += copy(dAtA[i:], k)
dAtA[i] = 0x12
i++
i = encodeVarintDescriptor(dAtA, i, uint64(len(v)))
i += copy(dAtA[i:], v)
}
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -105,6 +170,9 @@ func encodeVarintDescriptor(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *Descriptor) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.MediaType)
@@ -118,6 +186,17 @@ func (m *Descriptor) Size() (n int) {
if m.Size_ != 0 {
n += 1 + sovDescriptor(uint64(m.Size_))
}
if len(m.Annotations) > 0 {
for k, v := range m.Annotations {
_ = k
_ = v
mapEntrySize := 1 + len(k) + sovDescriptor(uint64(len(k))) + 1 + len(v) + sovDescriptor(uint64(len(v)))
n += mapEntrySize + 1 + sovDescriptor(uint64(mapEntrySize))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -138,10 +217,22 @@ func (this *Descriptor) String() string {
if this == nil {
return "nil"
}
keysForAnnotations := make([]string, 0, len(this.Annotations))
for k, _ := range this.Annotations {
keysForAnnotations = append(keysForAnnotations, k)
}
github_com_gogo_protobuf_sortkeys.Strings(keysForAnnotations)
mapStringForAnnotations := "map[string]string{"
for _, k := range keysForAnnotations {
mapStringForAnnotations += fmt.Sprintf("%v: %v,", k, this.Annotations[k])
}
mapStringForAnnotations += "}"
s := strings.Join([]string{`&Descriptor{`,
`MediaType:` + fmt.Sprintf("%v", this.MediaType) + `,`,
`Digest:` + fmt.Sprintf("%v", this.Digest) + `,`,
`Size_:` + fmt.Sprintf("%v", this.Size_) + `,`,
`Annotations:` + mapStringForAnnotations + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -169,7 +260,7 @@ func (m *Descriptor) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -197,7 +288,7 @@ func (m *Descriptor) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -207,6 +298,9 @@ func (m *Descriptor) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDescriptor
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDescriptor
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -226,7 +320,7 @@ func (m *Descriptor) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -236,6 +330,9 @@ func (m *Descriptor) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthDescriptor
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthDescriptor
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -255,11 +352,138 @@ func (m *Descriptor) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Size_ |= (int64(b) & 0x7F) << shift
m.Size_ |= int64(b&0x7F) << shift
if b < 0x80 {
break
}
}
case 5:
if wireType != 2 {
return fmt.Errorf("proto: wrong wireType = %d for field Annotations", wireType)
}
var msglen int
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDescriptor
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
}
if msglen < 0 {
return ErrInvalidLengthDescriptor
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthDescriptor
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Annotations == nil {
m.Annotations = make(map[string]string)
}
var mapkey string
var mapvalue string
for iNdEx < postIndex {
entryPreIndex := iNdEx
var wire uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDescriptor
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
fieldNum := int32(wire >> 3)
if fieldNum == 1 {
var stringLenmapkey uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDescriptor
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLenmapkey |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLenmapkey := int(stringLenmapkey)
if intStringLenmapkey < 0 {
return ErrInvalidLengthDescriptor
}
postStringIndexmapkey := iNdEx + intStringLenmapkey
if postStringIndexmapkey < 0 {
return ErrInvalidLengthDescriptor
}
if postStringIndexmapkey > l {
return io.ErrUnexpectedEOF
}
mapkey = string(dAtA[iNdEx:postStringIndexmapkey])
iNdEx = postStringIndexmapkey
} else if fieldNum == 2 {
var stringLenmapvalue uint64
for shift := uint(0); ; shift += 7 {
if shift >= 64 {
return ErrIntOverflowDescriptor
}
if iNdEx >= l {
return io.ErrUnexpectedEOF
}
b := dAtA[iNdEx]
iNdEx++
stringLenmapvalue |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
}
intStringLenmapvalue := int(stringLenmapvalue)
if intStringLenmapvalue < 0 {
return ErrInvalidLengthDescriptor
}
postStringIndexmapvalue := iNdEx + intStringLenmapvalue
if postStringIndexmapvalue < 0 {
return ErrInvalidLengthDescriptor
}
if postStringIndexmapvalue > l {
return io.ErrUnexpectedEOF
}
mapvalue = string(dAtA[iNdEx:postStringIndexmapvalue])
iNdEx = postStringIndexmapvalue
} else {
iNdEx = entryPreIndex
skippy, err := skipDescriptor(dAtA[iNdEx:])
if err != nil {
return err
}
if skippy < 0 {
return ErrInvalidLengthDescriptor
}
if (iNdEx + skippy) > postIndex {
return io.ErrUnexpectedEOF
}
iNdEx += skippy
}
}
m.Annotations[mapkey] = mapvalue
iNdEx = postIndex
default:
iNdEx = preIndex
skippy, err := skipDescriptor(dAtA[iNdEx:])
@@ -269,9 +493,13 @@ func (m *Descriptor) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthDescriptor
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthDescriptor
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -335,10 +563,13 @@ func skipDescriptor(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthDescriptor
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthDescriptor
}
return iNdEx, nil
case 3:
for {
@@ -367,6 +598,9 @@ func skipDescriptor(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthDescriptor
}
}
return iNdEx, nil
case 4:
@@ -385,26 +619,3 @@ var (
ErrInvalidLengthDescriptor = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowDescriptor = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/types/descriptor.proto", fileDescriptorDescriptor)
}
var fileDescriptorDescriptor = []byte{
// 234 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xb2, 0x4e, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xce, 0xcf, 0x2b, 0x49, 0xcc, 0xcc, 0x4b, 0x2d,
0x4a, 0x41, 0x66, 0x26, 0x16, 0x64, 0xea, 0x97, 0x54, 0x16, 0xa4, 0x16, 0xeb, 0xa7, 0xa4, 0x16,
0x27, 0x17, 0x65, 0x16, 0x94, 0xe4, 0x17, 0xe9, 0x15, 0x14, 0xe5, 0x97, 0xe4, 0x0b, 0x09, 0x20,
0x94, 0xe9, 0x81, 0x95, 0x48, 0x89, 0xa4, 0xe7, 0xa7, 0xe7, 0x83, 0x25, 0xf5, 0x41, 0x2c, 0x88,
0x3a, 0xa5, 0x6e, 0x46, 0x2e, 0x2e, 0x17, 0xb8, 0x66, 0x21, 0x59, 0x2e, 0xae, 0xdc, 0xd4, 0x94,
0xcc, 0xc4, 0x78, 0x90, 0x1e, 0x09, 0x46, 0x05, 0x46, 0x0d, 0xce, 0x20, 0x4e, 0xb0, 0x48, 0x48,
0x65, 0x41, 0xaa, 0x90, 0x17, 0x17, 0x5b, 0x4a, 0x66, 0x7a, 0x6a, 0x71, 0x89, 0x04, 0x13, 0x48,
0xca, 0xc9, 0xe8, 0xc4, 0x3d, 0x79, 0x86, 0x5b, 0xf7, 0xe4, 0xb5, 0x90, 0x9c, 0x9a, 0x5f, 0x90,
0x9a, 0x07, 0xb7, 0xbc, 0x58, 0x3f, 0x3d, 0x5f, 0x17, 0xa2, 0x45, 0xcf, 0x05, 0x4c, 0x05, 0x41,
0x4d, 0x10, 0x12, 0xe2, 0x62, 0x29, 0xce, 0xac, 0x4a, 0x95, 0x60, 0x56, 0x60, 0xd4, 0x60, 0x0e,
0x02, 0xb3, 0x9d, 0xbc, 0x4e, 0x3c, 0x94, 0x63, 0xb8, 0xf1, 0x50, 0x8e, 0xa1, 0xe1, 0x91, 0x1c,
0xe3, 0x89, 0x47, 0x72, 0x8c, 0x17, 0x1e, 0xc9, 0x31, 0x3e, 0x78, 0x24, 0xc7, 0x18, 0x65, 0x40,
0x7c, 0x60, 0x58, 0x83, 0xc9, 0x08, 0x86, 0x24, 0x36, 0xb0, 0x17, 0x8d, 0x01, 0x01, 0x00, 0x00,
0xff, 0xff, 0xea, 0xac, 0x78, 0x9a, 0x49, 0x01, 0x00, 0x00,
}


@@ -15,4 +15,5 @@ message Descriptor {
string media_type = 1;
string digest = 2 [(gogoproto.customtype) = "github.com/opencontainers/go-digest.Digest", (gogoproto.nullable) = false];
int64 size = 3;
map<string, string> annotations = 5;
}
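The only schema change in this file is the new annotations map (field 5). A rough sketch of what that looks like on the Go side, assuming the vendored github.com/containerd/containerd/api/types and github.com/opencontainers/go-digest packages; the field values are illustrative:

package main

import (
	"fmt"

	"github.com/containerd/containerd/api/types"
	digest "github.com/opencontainers/go-digest"
)

func main() {
	// A descriptor can now carry free-form key/value metadata alongside the digest.
	desc := &types.Descriptor{
		MediaType: "application/vnd.oci.image.manifest.v1+json",
		Digest:    digest.FromString("example content"),
		Size_:     int64(len("example content")),
		Annotations: map[string]string{
			"org.opencontainers.image.ref.name": "latest",
		},
	}
	fmt.Println(desc.String())
}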


@@ -3,22 +3,17 @@
package types
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import google_protobuf1 "github.com/gogo/protobuf/types"
import _ "github.com/gogo/protobuf/types"
import time "time"
import types1 "github.com/gogo/protobuf/types"
import strings "strings"
import reflect "reflect"
import io "io"
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
github_com_gogo_protobuf_types "github.com/gogo/protobuf/types"
types "github.com/gogo/protobuf/types"
io "io"
math "math"
reflect "reflect"
strings "strings"
time "time"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -26,19 +21,82 @@ var _ = fmt.Errorf
var _ = math.Inf
var _ = time.Kitchen
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type Metric struct {
Timestamp time.Time `protobuf:"bytes,1,opt,name=timestamp,stdtime" json:"timestamp"`
ID string `protobuf:"bytes,2,opt,name=id,proto3" json:"id,omitempty"`
Data *google_protobuf1.Any `protobuf:"bytes,3,opt,name=data" json:"data,omitempty"`
Timestamp time.Time `protobuf:"bytes,1,opt,name=timestamp,proto3,stdtime" json:"timestamp"`
ID string `protobuf:"bytes,2,opt,name=id,proto3" json:"id,omitempty"`
Data *types.Any `protobuf:"bytes,3,opt,name=data,proto3" json:"data,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
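Apart from the boilerplate, the notable change here is that Data is now typed against the aliased gogo types.Any import. A small sketch of constructing a Metric under that layout, assuming the vendored packages below; the payload is a placeholder:

package main

import (
	"fmt"
	"time"

	apitypes "github.com/containerd/containerd/api/types"
	gogotypes "github.com/gogo/protobuf/types"
)

func main() {
	// Timestamp uses a plain time.Time thanks to the stdtime option; Data wraps
	// an arbitrary payload as a gogo Any.
	m := &apitypes.Metric{
		Timestamp: time.Now(),
		ID:        "example-container",
		Data:      &gogotypes.Any{TypeUrl: "example.com/example.Payload", Value: []byte{}},
	}
	fmt.Println(m.ID, m.Timestamp.Format(time.RFC3339))
}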
func (m *Metric) Reset() { *m = Metric{} }
func (*Metric) ProtoMessage() {}
func (*Metric) Descriptor() ([]byte, []int) { return fileDescriptorMetrics, []int{0} }
func (m *Metric) Reset() { *m = Metric{} }
func (*Metric) ProtoMessage() {}
func (*Metric) Descriptor() ([]byte, []int) {
return fileDescriptor_8d594d87edf6e6bc, []int{0}
}
func (m *Metric) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Metric) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Metric.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Metric) XXX_Merge(src proto.Message) {
xxx_messageInfo_Metric.Merge(m, src)
}
func (m *Metric) XXX_Size() int {
return m.Size()
}
func (m *Metric) XXX_DiscardUnknown() {
xxx_messageInfo_Metric.DiscardUnknown(m)
}
var xxx_messageInfo_Metric proto.InternalMessageInfo
func init() {
proto.RegisterType((*Metric)(nil), "containerd.types.Metric")
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/types/metrics.proto", fileDescriptor_8d594d87edf6e6bc)
}
var fileDescriptor_8d594d87edf6e6bc = []byte{
// 258 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xb2, 0x48, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xce, 0xcf, 0x2b, 0x49, 0xcc, 0xcc, 0x4b, 0x2d,
0x4a, 0x41, 0x66, 0x26, 0x16, 0x64, 0xea, 0x97, 0x54, 0x16, 0xa4, 0x16, 0xeb, 0xe7, 0xa6, 0x96,
0x14, 0x65, 0x26, 0x17, 0xeb, 0x15, 0x14, 0xe5, 0x97, 0xe4, 0x0b, 0x09, 0x20, 0xd4, 0xe8, 0x81,
0xe5, 0xa5, 0x44, 0xd2, 0xf3, 0xd3, 0xf3, 0xc1, 0x92, 0xfa, 0x20, 0x16, 0x44, 0x9d, 0x94, 0x64,
0x7a, 0x7e, 0x7e, 0x7a, 0x4e, 0xaa, 0x3e, 0x98, 0x97, 0x54, 0x9a, 0xa6, 0x9f, 0x98, 0x57, 0x09,
0x95, 0x92, 0x47, 0x97, 0x2a, 0xc9, 0xcc, 0x4d, 0x2d, 0x2e, 0x49, 0xcc, 0x2d, 0x80, 0x28, 0x50,
0xea, 0x63, 0xe4, 0x62, 0xf3, 0x05, 0xdb, 0x2a, 0xe4, 0xc4, 0xc5, 0x09, 0x97, 0x95, 0x60, 0x54,
0x60, 0xd4, 0xe0, 0x36, 0x92, 0xd2, 0x83, 0xe8, 0xd7, 0x83, 0xe9, 0xd7, 0x0b, 0x81, 0xa9, 0x70,
0xe2, 0x38, 0x71, 0x4f, 0x9e, 0x61, 0xc2, 0x7d, 0x79, 0xc6, 0x20, 0x84, 0x36, 0x21, 0x31, 0x2e,
0xa6, 0xcc, 0x14, 0x09, 0x26, 0x05, 0x46, 0x0d, 0x4e, 0x27, 0xb6, 0x47, 0xf7, 0xe4, 0x99, 0x3c,
0x5d, 0x82, 0x98, 0x32, 0x53, 0x84, 0x34, 0xb8, 0x58, 0x52, 0x12, 0x4b, 0x12, 0x25, 0x98, 0xc1,
0xc6, 0x8a, 0x60, 0x18, 0xeb, 0x98, 0x57, 0x19, 0x04, 0x56, 0xe1, 0xe4, 0x75, 0xe2, 0xa1, 0x1c,
0xc3, 0x8d, 0x87, 0x72, 0x0c, 0x0d, 0x8f, 0xe4, 0x18, 0x4f, 0x3c, 0x92, 0x63, 0xbc, 0xf0, 0x48,
0x8e, 0xf1, 0xc1, 0x23, 0x39, 0xc6, 0x28, 0x03, 0xe2, 0x03, 0xd2, 0x1a, 0x4c, 0x46, 0x30, 0x24,
0xb1, 0x81, 0x6d, 0x30, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0xde, 0x0d, 0x02, 0xfe, 0x85, 0x01,
0x00, 0x00,
}
func (m *Metric) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -56,8 +114,8 @@ func (m *Metric) MarshalTo(dAtA []byte) (int, error) {
_ = l
dAtA[i] = 0xa
i++
i = encodeVarintMetrics(dAtA, i, uint64(types1.SizeOfStdTime(m.Timestamp)))
n1, err := types1.StdTimeMarshalTo(m.Timestamp, dAtA[i:])
i = encodeVarintMetrics(dAtA, i, uint64(github_com_gogo_protobuf_types.SizeOfStdTime(m.Timestamp)))
n1, err := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.Timestamp, dAtA[i:])
if err != nil {
return 0, err
}
@@ -78,6 +136,9 @@ func (m *Metric) MarshalTo(dAtA []byte) (int, error) {
}
i += n2
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -91,9 +152,12 @@ func encodeVarintMetrics(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *Metric) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = types1.SizeOfStdTime(m.Timestamp)
l = github_com_gogo_protobuf_types.SizeOfStdTime(m.Timestamp)
n += 1 + l + sovMetrics(uint64(l))
l = len(m.ID)
if l > 0 {
@@ -103,6 +167,9 @@ func (m *Metric) Size() (n int) {
l = m.Data.Size()
n += 1 + l + sovMetrics(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -124,9 +191,10 @@ func (this *Metric) String() string {
return "nil"
}
s := strings.Join([]string{`&Metric{`,
`Timestamp:` + strings.Replace(strings.Replace(this.Timestamp.String(), "Timestamp", "google_protobuf2.Timestamp", 1), `&`, ``, 1) + `,`,
`Timestamp:` + strings.Replace(strings.Replace(this.Timestamp.String(), "Timestamp", "types.Timestamp", 1), `&`, ``, 1) + `,`,
`ID:` + fmt.Sprintf("%v", this.ID) + `,`,
`Data:` + strings.Replace(fmt.Sprintf("%v", this.Data), "Any", "google_protobuf1.Any", 1) + `,`,
`Data:` + strings.Replace(fmt.Sprintf("%v", this.Data), "Any", "types.Any", 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -154,7 +222,7 @@ func (m *Metric) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -182,7 +250,7 @@ func (m *Metric) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -191,10 +259,13 @@ func (m *Metric) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthMetrics
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthMetrics
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := types1.StdTimeUnmarshal(&m.Timestamp, dAtA[iNdEx:postIndex]); err != nil {
if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.Timestamp, dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -212,7 +283,7 @@ func (m *Metric) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -222,6 +293,9 @@ func (m *Metric) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthMetrics
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthMetrics
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -241,7 +315,7 @@ func (m *Metric) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -250,11 +324,14 @@ func (m *Metric) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthMetrics
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthMetrics
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Data == nil {
m.Data = &google_protobuf1.Any{}
m.Data = &types.Any{}
}
if err := m.Data.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
@@ -269,9 +346,13 @@ func (m *Metric) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthMetrics
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthMetrics
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -335,10 +416,13 @@ func skipMetrics(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthMetrics
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthMetrics
}
return iNdEx, nil
case 3:
for {
@@ -367,6 +451,9 @@ func skipMetrics(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthMetrics
}
}
return iNdEx, nil
case 4:
@@ -385,28 +472,3 @@ var (
ErrInvalidLengthMetrics = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowMetrics = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/types/metrics.proto", fileDescriptorMetrics)
}
var fileDescriptorMetrics = []byte{
// 258 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xb2, 0x48, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xce, 0xcf, 0x2b, 0x49, 0xcc, 0xcc, 0x4b, 0x2d,
0x4a, 0x41, 0x66, 0x26, 0x16, 0x64, 0xea, 0x97, 0x54, 0x16, 0xa4, 0x16, 0xeb, 0xe7, 0xa6, 0x96,
0x14, 0x65, 0x26, 0x17, 0xeb, 0x15, 0x14, 0xe5, 0x97, 0xe4, 0x0b, 0x09, 0x20, 0xd4, 0xe8, 0x81,
0xe5, 0xa5, 0x44, 0xd2, 0xf3, 0xd3, 0xf3, 0xc1, 0x92, 0xfa, 0x20, 0x16, 0x44, 0x9d, 0x94, 0x64,
0x7a, 0x7e, 0x7e, 0x7a, 0x4e, 0xaa, 0x3e, 0x98, 0x97, 0x54, 0x9a, 0xa6, 0x9f, 0x98, 0x57, 0x09,
0x95, 0x92, 0x47, 0x97, 0x2a, 0xc9, 0xcc, 0x4d, 0x2d, 0x2e, 0x49, 0xcc, 0x2d, 0x80, 0x28, 0x50,
0xea, 0x63, 0xe4, 0x62, 0xf3, 0x05, 0xdb, 0x2a, 0xe4, 0xc4, 0xc5, 0x09, 0x97, 0x95, 0x60, 0x54,
0x60, 0xd4, 0xe0, 0x36, 0x92, 0xd2, 0x83, 0xe8, 0xd7, 0x83, 0xe9, 0xd7, 0x0b, 0x81, 0xa9, 0x70,
0xe2, 0x38, 0x71, 0x4f, 0x9e, 0x61, 0xc2, 0x7d, 0x79, 0xc6, 0x20, 0x84, 0x36, 0x21, 0x31, 0x2e,
0xa6, 0xcc, 0x14, 0x09, 0x26, 0x05, 0x46, 0x0d, 0x4e, 0x27, 0xb6, 0x47, 0xf7, 0xe4, 0x99, 0x3c,
0x5d, 0x82, 0x98, 0x32, 0x53, 0x84, 0x34, 0xb8, 0x58, 0x52, 0x12, 0x4b, 0x12, 0x25, 0x98, 0xc1,
0xc6, 0x8a, 0x60, 0x18, 0xeb, 0x98, 0x57, 0x19, 0x04, 0x56, 0xe1, 0xe4, 0x75, 0xe2, 0xa1, 0x1c,
0xc3, 0x8d, 0x87, 0x72, 0x0c, 0x0d, 0x8f, 0xe4, 0x18, 0x4f, 0x3c, 0x92, 0x63, 0xbc, 0xf0, 0x48,
0x8e, 0xf1, 0xc1, 0x23, 0x39, 0xc6, 0x28, 0x03, 0xe2, 0x03, 0xd2, 0x1a, 0x4c, 0x46, 0x30, 0x24,
0xb1, 0x81, 0x6d, 0x30, 0x06, 0x04, 0x00, 0x00, 0xff, 0xff, 0xde, 0x0d, 0x02, 0xfe, 0x85, 0x01,
0x00, 0x00,
}


@@ -3,22 +3,26 @@
package types
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import strings "strings"
import reflect "reflect"
import io "io"
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
reflect "reflect"
strings "strings"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
// Mount describes mounts for a container.
//
// This type is the lingua franca of ContainerD. All services provide mounts
@@ -35,16 +39,69 @@ type Mount struct {
// Target path in container
Target string `protobuf:"bytes,3,opt,name=target,proto3" json:"target,omitempty"`
// Options specifies zero or more fstab style mount options.
Options []string `protobuf:"bytes,4,rep,name=options" json:"options,omitempty"`
Options []string `protobuf:"bytes,4,rep,name=options,proto3" json:"options,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
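Only generated boilerplate changes for Mount; constructing one is unaffected. A minimal sketch, with hypothetical paths and fstab-style options, assuming the vendored github.com/containerd/containerd/api/types package:

package main

import (
	"fmt"

	"github.com/containerd/containerd/api/types"
)

func main() {
	// Field values are hypothetical; Options takes zero or more fstab-style mount options.
	m := &types.Mount{
		Type:    "bind",
		Source:  "/var/lib/example/source",
		Target:  "/data",
		Options: []string{"rbind", "ro"},
	}
	fmt.Println(m.String())
}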
func (m *Mount) Reset() { *m = Mount{} }
func (*Mount) ProtoMessage() {}
func (*Mount) Descriptor() ([]byte, []int) { return fileDescriptorMount, []int{0} }
func (m *Mount) Reset() { *m = Mount{} }
func (*Mount) ProtoMessage() {}
func (*Mount) Descriptor() ([]byte, []int) {
return fileDescriptor_920196890d4a7b9f, []int{0}
}
func (m *Mount) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Mount) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Mount.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Mount) XXX_Merge(src proto.Message) {
xxx_messageInfo_Mount.Merge(m, src)
}
func (m *Mount) XXX_Size() int {
return m.Size()
}
func (m *Mount) XXX_DiscardUnknown() {
xxx_messageInfo_Mount.DiscardUnknown(m)
}
var xxx_messageInfo_Mount proto.InternalMessageInfo
func init() {
proto.RegisterType((*Mount)(nil), "containerd.types.Mount")
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/types/mount.proto", fileDescriptor_920196890d4a7b9f)
}
var fileDescriptor_920196890d4a7b9f = []byte{
// 202 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x32, 0x4b, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xce, 0xcf, 0x2b, 0x49, 0xcc, 0xcc, 0x4b, 0x2d,
0x4a, 0x41, 0x66, 0x26, 0x16, 0x64, 0xea, 0x97, 0x54, 0x16, 0xa4, 0x16, 0xeb, 0xe7, 0xe6, 0x97,
0xe6, 0x95, 0xe8, 0x15, 0x14, 0xe5, 0x97, 0xe4, 0x0b, 0x09, 0x20, 0x54, 0xe8, 0x81, 0x65, 0xa5,
0x44, 0xd2, 0xf3, 0xd3, 0xf3, 0xc1, 0x92, 0xfa, 0x20, 0x16, 0x44, 0x9d, 0x52, 0x2a, 0x17, 0xab,
0x2f, 0x48, 0x9b, 0x90, 0x10, 0x17, 0x0b, 0x48, 0x9d, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0x67, 0x10,
0x98, 0x2d, 0x24, 0xc6, 0xc5, 0x56, 0x9c, 0x5f, 0x5a, 0x94, 0x9c, 0x2a, 0xc1, 0x04, 0x16, 0x85,
0xf2, 0x40, 0xe2, 0x25, 0x89, 0x45, 0xe9, 0xa9, 0x25, 0x12, 0xcc, 0x10, 0x71, 0x08, 0x4f, 0x48,
0x82, 0x8b, 0x3d, 0xbf, 0xa0, 0x24, 0x33, 0x3f, 0xaf, 0x58, 0x82, 0x45, 0x81, 0x59, 0x83, 0x33,
0x08, 0xc6, 0x75, 0xf2, 0x3a, 0xf1, 0x50, 0x8e, 0xe1, 0xc6, 0x43, 0x39, 0x86, 0x86, 0x47, 0x72,
0x8c, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78, 0x24, 0xc7, 0xf8, 0xe0, 0x91, 0x1c, 0x63, 0x94, 0x01,
0xf1, 0x1e, 0xb4, 0x06, 0x93, 0x11, 0x0c, 0x49, 0x6c, 0x60, 0xb7, 0x1b, 0x03, 0x02, 0x00, 0x00,
0xff, 0xff, 0x82, 0x1c, 0x02, 0x18, 0x1d, 0x01, 0x00, 0x00,
}
func (m *Mount) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -93,6 +150,9 @@ func (m *Mount) MarshalTo(dAtA []byte) (int, error) {
i += copy(dAtA[i:], s)
}
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -106,6 +166,9 @@ func encodeVarintMount(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *Mount) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Type)
@@ -126,6 +189,9 @@ func (m *Mount) Size() (n int) {
n += 1 + l + sovMount(uint64(l))
}
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -151,6 +217,7 @@ func (this *Mount) String() string {
`Source:` + fmt.Sprintf("%v", this.Source) + `,`,
`Target:` + fmt.Sprintf("%v", this.Target) + `,`,
`Options:` + fmt.Sprintf("%v", this.Options) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -178,7 +245,7 @@ func (m *Mount) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -206,7 +273,7 @@ func (m *Mount) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -216,6 +283,9 @@ func (m *Mount) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthMount
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthMount
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -235,7 +305,7 @@ func (m *Mount) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -245,6 +315,9 @@ func (m *Mount) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthMount
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthMount
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -264,7 +337,7 @@ func (m *Mount) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -274,6 +347,9 @@ func (m *Mount) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthMount
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthMount
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -293,7 +369,7 @@ func (m *Mount) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -303,6 +379,9 @@ func (m *Mount) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthMount
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthMount
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -317,9 +396,13 @@ func (m *Mount) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthMount
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthMount
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -383,10 +466,13 @@ func skipMount(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthMount
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthMount
}
return iNdEx, nil
case 3:
for {
@@ -415,6 +501,9 @@ func skipMount(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthMount
}
}
return iNdEx, nil
case 4:
@@ -433,24 +522,3 @@ var (
ErrInvalidLengthMount = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowMount = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/types/mount.proto", fileDescriptorMount)
}
var fileDescriptorMount = []byte{
// 202 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0x32, 0x4b, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xce, 0xcf, 0x2b, 0x49, 0xcc, 0xcc, 0x4b, 0x2d,
0x4a, 0x41, 0x66, 0x26, 0x16, 0x64, 0xea, 0x97, 0x54, 0x16, 0xa4, 0x16, 0xeb, 0xe7, 0xe6, 0x97,
0xe6, 0x95, 0xe8, 0x15, 0x14, 0xe5, 0x97, 0xe4, 0x0b, 0x09, 0x20, 0x54, 0xe8, 0x81, 0x65, 0xa5,
0x44, 0xd2, 0xf3, 0xd3, 0xf3, 0xc1, 0x92, 0xfa, 0x20, 0x16, 0x44, 0x9d, 0x52, 0x2a, 0x17, 0xab,
0x2f, 0x48, 0x9b, 0x90, 0x10, 0x17, 0x0b, 0x48, 0x9d, 0x04, 0xa3, 0x02, 0xa3, 0x06, 0x67, 0x10,
0x98, 0x2d, 0x24, 0xc6, 0xc5, 0x56, 0x9c, 0x5f, 0x5a, 0x94, 0x9c, 0x2a, 0xc1, 0x04, 0x16, 0x85,
0xf2, 0x40, 0xe2, 0x25, 0x89, 0x45, 0xe9, 0xa9, 0x25, 0x12, 0xcc, 0x10, 0x71, 0x08, 0x4f, 0x48,
0x82, 0x8b, 0x3d, 0xbf, 0xa0, 0x24, 0x33, 0x3f, 0xaf, 0x58, 0x82, 0x45, 0x81, 0x59, 0x83, 0x33,
0x08, 0xc6, 0x75, 0xf2, 0x3a, 0xf1, 0x50, 0x8e, 0xe1, 0xc6, 0x43, 0x39, 0x86, 0x86, 0x47, 0x72,
0x8c, 0x27, 0x1e, 0xc9, 0x31, 0x5e, 0x78, 0x24, 0xc7, 0xf8, 0xe0, 0x91, 0x1c, 0x63, 0x94, 0x01,
0xf1, 0x1e, 0xb4, 0x06, 0x93, 0x11, 0x0c, 0x49, 0x6c, 0x60, 0xb7, 0x1b, 0x03, 0x02, 0x00, 0x00,
0xff, 0xff, 0x82, 0x1c, 0x02, 0x18, 0x1d, 0x01, 0x00, 0x00,
}


@@ -3,37 +3,94 @@
package types
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import strings "strings"
import reflect "reflect"
import io "io"
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
reflect "reflect"
strings "strings"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
// Platform follows the structure of the OCI platform specification, from
// descriptors.
type Platform struct {
OS string `protobuf:"bytes,1,opt,name=os,proto3" json:"os,omitempty"`
Architecture string `protobuf:"bytes,2,opt,name=architecture,proto3" json:"architecture,omitempty"`
Variant string `protobuf:"bytes,3,opt,name=variant,proto3" json:"variant,omitempty"`
OS string `protobuf:"bytes,1,opt,name=os,proto3" json:"os,omitempty"`
Architecture string `protobuf:"bytes,2,opt,name=architecture,proto3" json:"architecture,omitempty"`
Variant string `protobuf:"bytes,3,opt,name=variant,proto3" json:"variant,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
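Platform mirrors the OCI platform fields, and again only boilerplate changes here. A short sketch with illustrative values, assuming the same vendored types package:

package main

import (
	"fmt"

	"github.com/containerd/containerd/api/types"
)

func main() {
	p := &types.Platform{OS: "linux", Architecture: "arm64", Variant: "v8"}
	fmt.Println(p.String())
}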
func (m *Platform) Reset() { *m = Platform{} }
func (*Platform) ProtoMessage() {}
func (*Platform) Descriptor() ([]byte, []int) { return fileDescriptorPlatform, []int{0} }
func (m *Platform) Reset() { *m = Platform{} }
func (*Platform) ProtoMessage() {}
func (*Platform) Descriptor() ([]byte, []int) {
return fileDescriptor_24ba7a4b83e2367e, []int{0}
}
func (m *Platform) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Platform) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Platform.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Platform) XXX_Merge(src proto.Message) {
xxx_messageInfo_Platform.Merge(m, src)
}
func (m *Platform) XXX_Size() int {
return m.Size()
}
func (m *Platform) XXX_DiscardUnknown() {
xxx_messageInfo_Platform.DiscardUnknown(m)
}
var xxx_messageInfo_Platform proto.InternalMessageInfo
func init() {
proto.RegisterType((*Platform)(nil), "containerd.types.Platform")
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/types/platform.proto", fileDescriptor_24ba7a4b83e2367e)
}
var fileDescriptor_24ba7a4b83e2367e = []byte{
// 205 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xb2, 0x4c, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xce, 0xcf, 0x2b, 0x49, 0xcc, 0xcc, 0x4b, 0x2d,
0x4a, 0x41, 0x66, 0x26, 0x16, 0x64, 0xea, 0x97, 0x54, 0x16, 0xa4, 0x16, 0xeb, 0x17, 0xe4, 0x24,
0x96, 0xa4, 0xe5, 0x17, 0xe5, 0xea, 0x15, 0x14, 0xe5, 0x97, 0xe4, 0x0b, 0x09, 0x20, 0x14, 0xe9,
0x81, 0x15, 0x48, 0x89, 0xa4, 0xe7, 0xa7, 0xe7, 0x83, 0x25, 0xf5, 0x41, 0x2c, 0x88, 0x3a, 0xa5,
0x04, 0x2e, 0x8e, 0x00, 0xa8, 0x4e, 0x21, 0x31, 0x2e, 0xa6, 0xfc, 0x62, 0x09, 0x46, 0x05, 0x46,
0x0d, 0x4e, 0x27, 0xb6, 0x47, 0xf7, 0xe4, 0x99, 0xfc, 0x83, 0x83, 0x98, 0xf2, 0x8b, 0x85, 0x94,
0xb8, 0x78, 0x12, 0x8b, 0x92, 0x33, 0x32, 0x4b, 0x52, 0x93, 0x4b, 0x4a, 0x8b, 0x52, 0x25, 0x98,
0x40, 0x2a, 0x82, 0x50, 0xc4, 0x84, 0x24, 0xb8, 0xd8, 0xcb, 0x12, 0x8b, 0x32, 0x13, 0xf3, 0x4a,
0x24, 0x98, 0xc1, 0xd2, 0x30, 0xae, 0x93, 0xd7, 0x89, 0x87, 0x72, 0x0c, 0x37, 0x1e, 0xca, 0x31,
0x34, 0x3c, 0x92, 0x63, 0x3c, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39, 0xc6, 0x07, 0x8f, 0xe4,
0x18, 0xa3, 0x0c, 0x88, 0xf7, 0x9e, 0x35, 0x98, 0x8c, 0x60, 0x48, 0x62, 0x03, 0x3b, 0xdb, 0x18,
0x10, 0x00, 0x00, 0xff, 0xff, 0x05, 0xaa, 0xda, 0xa1, 0x1b, 0x01, 0x00, 0x00,
}
func (m *Platform) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -67,6 +124,9 @@ func (m *Platform) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintPlatform(dAtA, i, uint64(len(m.Variant)))
i += copy(dAtA[i:], m.Variant)
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -80,6 +140,9 @@ func encodeVarintPlatform(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *Platform) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.OS)
@@ -94,6 +157,9 @@ func (m *Platform) Size() (n int) {
if l > 0 {
n += 1 + l + sovPlatform(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -118,6 +184,7 @@ func (this *Platform) String() string {
`OS:` + fmt.Sprintf("%v", this.OS) + `,`,
`Architecture:` + fmt.Sprintf("%v", this.Architecture) + `,`,
`Variant:` + fmt.Sprintf("%v", this.Variant) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -145,7 +212,7 @@ func (m *Platform) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -173,7 +240,7 @@ func (m *Platform) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -183,6 +250,9 @@ func (m *Platform) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthPlatform
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPlatform
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -202,7 +272,7 @@ func (m *Platform) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -212,6 +282,9 @@ func (m *Platform) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthPlatform
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPlatform
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -231,7 +304,7 @@ func (m *Platform) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -241,6 +314,9 @@ func (m *Platform) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthPlatform
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthPlatform
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -255,9 +331,13 @@ func (m *Platform) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthPlatform
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthPlatform
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -321,10 +401,13 @@ func skipPlatform(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthPlatform
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthPlatform
}
return iNdEx, nil
case 3:
for {
@@ -353,6 +436,9 @@ func skipPlatform(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthPlatform
}
}
return iNdEx, nil
case 4:
@@ -371,24 +457,3 @@ var (
ErrInvalidLengthPlatform = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowPlatform = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/types/platform.proto", fileDescriptorPlatform)
}
var fileDescriptorPlatform = []byte{
// 205 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xe2, 0xb2, 0x4c, 0xcf, 0x2c, 0xc9,
0x28, 0x4d, 0xd2, 0x4b, 0xce, 0xcf, 0xd5, 0x4f, 0xce, 0xcf, 0x2b, 0x49, 0xcc, 0xcc, 0x4b, 0x2d,
0x4a, 0x41, 0x66, 0x26, 0x16, 0x64, 0xea, 0x97, 0x54, 0x16, 0xa4, 0x16, 0xeb, 0x17, 0xe4, 0x24,
0x96, 0xa4, 0xe5, 0x17, 0xe5, 0xea, 0x15, 0x14, 0xe5, 0x97, 0xe4, 0x0b, 0x09, 0x20, 0x14, 0xe9,
0x81, 0x15, 0x48, 0x89, 0xa4, 0xe7, 0xa7, 0xe7, 0x83, 0x25, 0xf5, 0x41, 0x2c, 0x88, 0x3a, 0xa5,
0x04, 0x2e, 0x8e, 0x00, 0xa8, 0x4e, 0x21, 0x31, 0x2e, 0xa6, 0xfc, 0x62, 0x09, 0x46, 0x05, 0x46,
0x0d, 0x4e, 0x27, 0xb6, 0x47, 0xf7, 0xe4, 0x99, 0xfc, 0x83, 0x83, 0x98, 0xf2, 0x8b, 0x85, 0x94,
0xb8, 0x78, 0x12, 0x8b, 0x92, 0x33, 0x32, 0x4b, 0x52, 0x93, 0x4b, 0x4a, 0x8b, 0x52, 0x25, 0x98,
0x40, 0x2a, 0x82, 0x50, 0xc4, 0x84, 0x24, 0xb8, 0xd8, 0xcb, 0x12, 0x8b, 0x32, 0x13, 0xf3, 0x4a,
0x24, 0x98, 0xc1, 0xd2, 0x30, 0xae, 0x93, 0xd7, 0x89, 0x87, 0x72, 0x0c, 0x37, 0x1e, 0xca, 0x31,
0x34, 0x3c, 0x92, 0x63, 0x3c, 0xf1, 0x48, 0x8e, 0xf1, 0xc2, 0x23, 0x39, 0xc6, 0x07, 0x8f, 0xe4,
0x18, 0xa3, 0x0c, 0x88, 0xf7, 0x9e, 0x35, 0x98, 0x8c, 0x60, 0x48, 0x62, 0x03, 0x3b, 0xdb, 0x18,
0x10, 0x00, 0x00, 0xff, 0xff, 0x05, 0xaa, 0xda, 0xa1, 0x1b, 0x01, 0x00, 0x00,
}


@@ -1,34 +1,19 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: github.com/containerd/containerd/api/types/task/task.proto
/*
Package task is a generated protocol buffer package.
It is generated from these files:
github.com/containerd/containerd/api/types/task/task.proto
It has these top-level messages:
Process
ProcessInfo
*/
package task
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import _ "github.com/gogo/protobuf/types"
import google_protobuf2 "github.com/gogo/protobuf/types"
import time "time"
import types "github.com/gogo/protobuf/types"
import strings "strings"
import reflect "reflect"
import io "io"
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
github_com_gogo_protobuf_types "github.com/gogo/protobuf/types"
types "github.com/gogo/protobuf/types"
io "io"
math "math"
reflect "reflect"
strings "strings"
time "time"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -61,6 +46,7 @@ var Status_name = map[int32]string{
4: "PAUSED",
5: "PAUSING",
}
var Status_value = map[string]int32{
"UNKNOWN": 0,
"CREATED": 1,
@@ -73,24 +59,58 @@ var Status_value = map[string]int32{
func (x Status) String() string {
return proto.EnumName(Status_name, int32(x))
}
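As a quick illustration of the Status enum wiring above (not part of the diff), converting a raw value back to its name goes through the generated String method and the Status_name table; it assumes the vendored github.com/containerd/containerd/api/types/task package:

package main

import (
	"fmt"

	task "github.com/containerd/containerd/api/types/task"
)

func main() {
	// 4 maps to "PAUSED" in the Status_name table above; String() calls
	// proto.EnumName as in the generated code.
	st := task.Status(4)
	fmt.Println(st.String())
}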
func (Status) EnumDescriptor() ([]byte, []int) { return fileDescriptorTask, []int{0} }
type Process struct {
ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
ID string `protobuf:"bytes,2,opt,name=id,proto3" json:"id,omitempty"`
Pid uint32 `protobuf:"varint,3,opt,name=pid,proto3" json:"pid,omitempty"`
Status Status `protobuf:"varint,4,opt,name=status,proto3,enum=containerd.v1.types.Status" json:"status,omitempty"`
Stdin string `protobuf:"bytes,5,opt,name=stdin,proto3" json:"stdin,omitempty"`
Stdout string `protobuf:"bytes,6,opt,name=stdout,proto3" json:"stdout,omitempty"`
Stderr string `protobuf:"bytes,7,opt,name=stderr,proto3" json:"stderr,omitempty"`
Terminal bool `protobuf:"varint,8,opt,name=terminal,proto3" json:"terminal,omitempty"`
ExitStatus uint32 `protobuf:"varint,9,opt,name=exit_status,json=exitStatus,proto3" json:"exit_status,omitempty"`
ExitedAt time.Time `protobuf:"bytes,10,opt,name=exited_at,json=exitedAt,stdtime" json:"exited_at"`
func (Status) EnumDescriptor() ([]byte, []int) {
return fileDescriptor_391ef18c8ab0dc16, []int{0}
}
func (m *Process) Reset() { *m = Process{} }
func (*Process) ProtoMessage() {}
func (*Process) Descriptor() ([]byte, []int) { return fileDescriptorTask, []int{0} }
type Process struct {
ContainerID string `protobuf:"bytes,1,opt,name=container_id,json=containerId,proto3" json:"container_id,omitempty"`
ID string `protobuf:"bytes,2,opt,name=id,proto3" json:"id,omitempty"`
Pid uint32 `protobuf:"varint,3,opt,name=pid,proto3" json:"pid,omitempty"`
Status Status `protobuf:"varint,4,opt,name=status,proto3,enum=containerd.v1.types.Status" json:"status,omitempty"`
Stdin string `protobuf:"bytes,5,opt,name=stdin,proto3" json:"stdin,omitempty"`
Stdout string `protobuf:"bytes,6,opt,name=stdout,proto3" json:"stdout,omitempty"`
Stderr string `protobuf:"bytes,7,opt,name=stderr,proto3" json:"stderr,omitempty"`
Terminal bool `protobuf:"varint,8,opt,name=terminal,proto3" json:"terminal,omitempty"`
ExitStatus uint32 `protobuf:"varint,9,opt,name=exit_status,json=exitStatus,proto3" json:"exit_status,omitempty"`
ExitedAt time.Time `protobuf:"bytes,10,opt,name=exited_at,json=exitedAt,proto3,stdtime" json:"exited_at"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Process) Reset() { *m = Process{} }
func (*Process) ProtoMessage() {}
func (*Process) Descriptor() ([]byte, []int) {
return fileDescriptor_391ef18c8ab0dc16, []int{0}
}
func (m *Process) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Process) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Process.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Process) XXX_Merge(src proto.Message) {
xxx_messageInfo_Process.Merge(m, src)
}
func (m *Process) XXX_Size() int {
return m.Size()
}
func (m *Process) XXX_DiscardUnknown() {
xxx_messageInfo_Process.DiscardUnknown(m)
}
var xxx_messageInfo_Process proto.InternalMessageInfo
type ProcessInfo struct {
// PID is the process ID.
@@ -98,18 +118,93 @@ type ProcessInfo struct {
// Info contains additional process information.
//
// Info varies by platform.
Info *google_protobuf2.Any `protobuf:"bytes,2,opt,name=info" json:"info,omitempty"`
Info *types.Any `protobuf:"bytes,2,opt,name=info,proto3" json:"info,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ProcessInfo) Reset() { *m = ProcessInfo{} }
func (*ProcessInfo) ProtoMessage() {}
func (*ProcessInfo) Descriptor() ([]byte, []int) { return fileDescriptorTask, []int{1} }
func (m *ProcessInfo) Reset() { *m = ProcessInfo{} }
func (*ProcessInfo) ProtoMessage() {}
func (*ProcessInfo) Descriptor() ([]byte, []int) {
return fileDescriptor_391ef18c8ab0dc16, []int{1}
}
func (m *ProcessInfo) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ProcessInfo) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ProcessInfo.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ProcessInfo) XXX_Merge(src proto.Message) {
xxx_messageInfo_ProcessInfo.Merge(m, src)
}
func (m *ProcessInfo) XXX_Size() int {
return m.Size()
}
func (m *ProcessInfo) XXX_DiscardUnknown() {
xxx_messageInfo_ProcessInfo.DiscardUnknown(m)
}
var xxx_messageInfo_ProcessInfo proto.InternalMessageInfo
func init() {
proto.RegisterEnum("containerd.v1.types.Status", Status_name, Status_value)
proto.RegisterType((*Process)(nil), "containerd.v1.types.Process")
proto.RegisterType((*ProcessInfo)(nil), "containerd.v1.types.ProcessInfo")
proto.RegisterEnum("containerd.v1.types.Status", Status_name, Status_value)
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/types/task/task.proto", fileDescriptor_391ef18c8ab0dc16)
}
var fileDescriptor_391ef18c8ab0dc16 = []byte{
// 545 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x90, 0x3f, 0x6f, 0xd3, 0x40,
0x18, 0xc6, 0x7d, 0x6e, 0xeb, 0xa6, 0xe7, 0xb6, 0x18, 0x13, 0x55, 0xc6, 0x20, 0xdb, 0xea, 0x64,
0x31, 0xd8, 0x22, 0xdd, 0xd8, 0xf2, 0x4f, 0xc8, 0x42, 0x72, 0x23, 0x27, 0x11, 0x6c, 0x91, 0x13,
0x5f, 0xcc, 0xa9, 0xcd, 0x9d, 0x65, 0x9f, 0x81, 0x6c, 0x8c, 0xa8, 0x13, 0x5f, 0xa0, 0x13, 0x7c,
0x0a, 0x3e, 0x41, 0x46, 0x26, 0xc4, 0x14, 0xa8, 0x3f, 0x09, 0x3a, 0xdb, 0x49, 0x23, 0x60, 0x39,
0xbd, 0xef, 0xf3, 0x7b, 0xee, 0xbd, 0xf7, 0x1e, 0xf8, 0x22, 0xc6, 0xec, 0x6d, 0x3e, 0x75, 0x66,
0x74, 0xe1, 0xce, 0x28, 0x61, 0x21, 0x26, 0x28, 0x8d, 0x76, 0xcb, 0x30, 0xc1, 0x2e, 0x5b, 0x26,
0x28, 0x73, 0x59, 0x98, 0x5d, 0x95, 0x87, 0x93, 0xa4, 0x94, 0x51, 0xf5, 0xd1, 0xbd, 0xcb, 0x79,
0xf7, 0xdc, 0x29, 0x4d, 0x7a, 0x33, 0xa6, 0x31, 0x2d, 0xb9, 0xcb, 0xab, 0xca, 0xaa, 0x9b, 0x31,
0xa5, 0xf1, 0x35, 0x72, 0xcb, 0x6e, 0x9a, 0xcf, 0x5d, 0x86, 0x17, 0x28, 0x63, 0xe1, 0x22, 0xa9,
0x0d, 0x8f, 0xff, 0x36, 0x84, 0x64, 0x59, 0xa1, 0xf3, 0x42, 0x84, 0x87, 0x83, 0x94, 0xce, 0x50,
0x96, 0xa9, 0x2d, 0x78, 0xbc, 0x7d, 0x74, 0x82, 0x23, 0x0d, 0x58, 0xc0, 0x3e, 0xea, 0x3c, 0x28,
0xd6, 0xa6, 0xdc, 0xdd, 0xe8, 0x5e, 0x2f, 0x90, 0xb7, 0x26, 0x2f, 0x52, 0xcf, 0xa0, 0x88, 0x23,
0x4d, 0x2c, 0x9d, 0x52, 0xb1, 0x36, 0x45, 0xaf, 0x17, 0x88, 0x38, 0x52, 0x15, 0xb8, 0x97, 0xe0,
0x48, 0xdb, 0xb3, 0x80, 0x7d, 0x12, 0xf0, 0x52, 0xbd, 0x80, 0x52, 0xc6, 0x42, 0x96, 0x67, 0xda,
0xbe, 0x05, 0xec, 0xd3, 0xd6, 0x13, 0xe7, 0x3f, 0x3f, 0x74, 0x86, 0xa5, 0x25, 0xa8, 0xad, 0x6a,
0x13, 0x1e, 0x64, 0x2c, 0xc2, 0x44, 0x3b, 0xe0, 0x2f, 0x04, 0x55, 0xa3, 0x9e, 0xf1, 0x51, 0x11,
0xcd, 0x99, 0x26, 0x95, 0x72, 0xdd, 0xd5, 0x3a, 0x4a, 0x53, 0xed, 0x70, 0xab, 0xa3, 0x34, 0x55,
0x75, 0xd8, 0x60, 0x28, 0x5d, 0x60, 0x12, 0x5e, 0x6b, 0x0d, 0x0b, 0xd8, 0x8d, 0x60, 0xdb, 0xab,
0x26, 0x94, 0xd1, 0x07, 0xcc, 0x26, 0xf5, 0x6e, 0x47, 0xe5, 0xc2, 0x90, 0x4b, 0xd5, 0x2a, 0x6a,
0x1b, 0x1e, 0xf1, 0x0e, 0x45, 0x93, 0x90, 0x69, 0xd0, 0x02, 0xb6, 0xdc, 0xd2, 0x9d, 0x2a, 0x50,
0x67, 0x13, 0xa8, 0x33, 0xda, 0x24, 0xde, 0x69, 0xac, 0xd6, 0xa6, 0xf0, 0xf9, 0x97, 0x09, 0x82,
0x46, 0x75, 0xad, 0xcd, 0xce, 0x3d, 0x28, 0xd7, 0x19, 0x7b, 0x64, 0x4e, 0x37, 0xd9, 0x80, 0xfb,
0x6c, 0x6c, 0xb8, 0x8f, 0xc9, 0x9c, 0x96, 0x39, 0xca, 0xad, 0xe6, 0x3f, 0xe3, 0xdb, 0x64, 0x19,
0x94, 0x8e, 0x67, 0x3f, 0x00, 0x94, 0xea, 0xc5, 0x0c, 0x78, 0x38, 0xf6, 0x5f, 0xf9, 0x97, 0xaf,
0x7d, 0x45, 0xd0, 0x1f, 0xde, 0xdc, 0x5a, 0x27, 0x15, 0x18, 0x93, 0x2b, 0x42, 0xdf, 0x13, 0xce,
0xbb, 0x41, 0xbf, 0x3d, 0xea, 0xf7, 0x14, 0xb0, 0xcb, 0xbb, 0x29, 0x0a, 0x19, 0x8a, 0x38, 0x0f,
0xc6, 0xbe, 0xef, 0xf9, 0x2f, 0x15, 0x71, 0x97, 0x07, 0x39, 0x21, 0x98, 0xc4, 0x9c, 0x0f, 0x47,
0x97, 0x83, 0x41, 0xbf, 0xa7, 0xec, 0xed, 0xf2, 0x21, 0xa3, 0x49, 0x82, 0x22, 0xf5, 0x29, 0x94,
0x06, 0xed, 0xf1, 0xb0, 0xdf, 0x53, 0xf6, 0x75, 0xe5, 0xe6, 0xd6, 0x3a, 0xae, 0xf0, 0x20, 0xcc,
0xb3, 0x6a, 0x3a, 0xa7, 0x7c, 0xfa, 0xc1, 0xee, 0x6d, 0x8e, 0x31, 0x89, 0xf5, 0xd3, 0x4f, 0x5f,
0x0c, 0xe1, 0xdb, 0x57, 0xa3, 0xfe, 0x4d, 0x47, 0x5b, 0xdd, 0x19, 0xc2, 0xcf, 0x3b, 0x43, 0xf8,
0x58, 0x18, 0x60, 0x55, 0x18, 0xe0, 0x7b, 0x61, 0x80, 0xdf, 0x85, 0x01, 0xde, 0x08, 0x53, 0xa9,
0x0c, 0xe2, 0xe2, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0xc3, 0x32, 0xd2, 0x86, 0x50, 0x03, 0x00,
0x00,
}
func (m *Process) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -182,12 +277,15 @@ func (m *Process) MarshalTo(dAtA []byte) (int, error) {
}
dAtA[i] = 0x52
i++
i = encodeVarintTask(dAtA, i, uint64(types.SizeOfStdTime(m.ExitedAt)))
n1, err := types.StdTimeMarshalTo(m.ExitedAt, dAtA[i:])
i = encodeVarintTask(dAtA, i, uint64(github_com_gogo_protobuf_types.SizeOfStdTime(m.ExitedAt)))
n1, err := github_com_gogo_protobuf_types.StdTimeMarshalTo(m.ExitedAt, dAtA[i:])
if err != nil {
return 0, err
}
i += n1
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -221,6 +319,9 @@ func (m *ProcessInfo) MarshalTo(dAtA []byte) (int, error) {
}
i += n2
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -234,6 +335,9 @@ func encodeVarintTask(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *Process) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.ContainerID)
@@ -268,12 +372,18 @@ func (m *Process) Size() (n int) {
if m.ExitStatus != 0 {
n += 1 + sovTask(uint64(m.ExitStatus))
}
l = types.SizeOfStdTime(m.ExitedAt)
l = github_com_gogo_protobuf_types.SizeOfStdTime(m.ExitedAt)
n += 1 + l + sovTask(uint64(l))
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *ProcessInfo) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Pid != 0 {
@@ -283,6 +393,9 @@ func (m *ProcessInfo) Size() (n int) {
l = m.Info.Size()
n += 1 + l + sovTask(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -313,7 +426,8 @@ func (this *Process) String() string {
`Stderr:` + fmt.Sprintf("%v", this.Stderr) + `,`,
`Terminal:` + fmt.Sprintf("%v", this.Terminal) + `,`,
`ExitStatus:` + fmt.Sprintf("%v", this.ExitStatus) + `,`,
`ExitedAt:` + strings.Replace(strings.Replace(this.ExitedAt.String(), "Timestamp", "google_protobuf1.Timestamp", 1), `&`, ``, 1) + `,`,
`ExitedAt:` + strings.Replace(strings.Replace(this.ExitedAt.String(), "Timestamp", "types.Timestamp", 1), `&`, ``, 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -324,7 +438,8 @@ func (this *ProcessInfo) String() string {
}
s := strings.Join([]string{`&ProcessInfo{`,
`Pid:` + fmt.Sprintf("%v", this.Pid) + `,`,
`Info:` + strings.Replace(fmt.Sprintf("%v", this.Info), "Any", "google_protobuf2.Any", 1) + `,`,
`Info:` + strings.Replace(fmt.Sprintf("%v", this.Info), "Any", "types.Any", 1) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -352,7 +467,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -380,7 +495,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -390,6 +505,9 @@ func (m *Process) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthTask
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTask
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -409,7 +527,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -419,6 +537,9 @@ func (m *Process) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthTask
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTask
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -438,7 +559,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Pid |= (uint32(b) & 0x7F) << shift
m.Pid |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -457,7 +578,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Status |= (Status(b) & 0x7F) << shift
m.Status |= Status(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -476,7 +597,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -486,6 +607,9 @@ func (m *Process) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthTask
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTask
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -505,7 +629,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -515,6 +639,9 @@ func (m *Process) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthTask
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTask
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -534,7 +661,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -544,6 +671,9 @@ func (m *Process) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthTask
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthTask
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -563,7 +693,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -583,7 +713,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.ExitStatus |= (uint32(b) & 0x7F) << shift
m.ExitStatus |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -602,7 +732,7 @@ func (m *Process) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -611,10 +741,13 @@ func (m *Process) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthTask
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTask
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if err := types.StdTimeUnmarshal(&m.ExitedAt, dAtA[iNdEx:postIndex]); err != nil {
if err := github_com_gogo_protobuf_types.StdTimeUnmarshal(&m.ExitedAt, dAtA[iNdEx:postIndex]); err != nil {
return err
}
iNdEx = postIndex
@@ -627,9 +760,13 @@ func (m *Process) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthTask
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthTask
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -654,7 +791,7 @@ func (m *ProcessInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -682,7 +819,7 @@ func (m *ProcessInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.Pid |= (uint32(b) & 0x7F) << shift
m.Pid |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -701,7 +838,7 @@ func (m *ProcessInfo) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
msglen |= (int(b) & 0x7F) << shift
msglen |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -710,11 +847,14 @@ func (m *ProcessInfo) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthTask
}
postIndex := iNdEx + msglen
if postIndex < 0 {
return ErrInvalidLengthTask
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
if m.Info == nil {
m.Info = &google_protobuf2.Any{}
m.Info = &types.Any{}
}
if err := m.Info.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
return err
@@ -729,9 +869,13 @@ func (m *ProcessInfo) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthTask
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthTask
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -795,10 +939,13 @@ func skipTask(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthTask
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthTask
}
return iNdEx, nil
case 3:
for {
@@ -827,6 +974,9 @@ func skipTask(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthTask
}
}
return iNdEx, nil
case 4:
@@ -845,46 +995,3 @@ var (
ErrInvalidLengthTask = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowTask = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/api/types/task/task.proto", fileDescriptorTask)
}
var fileDescriptorTask = []byte{
// 545 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x6c, 0x90, 0x3f, 0x6f, 0xd3, 0x40,
0x18, 0xc6, 0x7d, 0x6e, 0xeb, 0xa6, 0xe7, 0xb6, 0x18, 0x13, 0x55, 0xc6, 0x20, 0xdb, 0xea, 0x64,
0x31, 0xd8, 0x22, 0xdd, 0xd8, 0xf2, 0x4f, 0xc8, 0x42, 0x72, 0x23, 0x27, 0x11, 0x6c, 0x91, 0x13,
0x5f, 0xcc, 0xa9, 0xcd, 0x9d, 0x65, 0x9f, 0x81, 0x6c, 0x8c, 0xa8, 0x13, 0x5f, 0xa0, 0x13, 0x7c,
0x0a, 0x3e, 0x41, 0x46, 0x26, 0xc4, 0x14, 0xa8, 0x3f, 0x09, 0x3a, 0xdb, 0x49, 0x23, 0x60, 0x39,
0xbd, 0xef, 0xf3, 0x7b, 0xee, 0xbd, 0xf7, 0x1e, 0xf8, 0x22, 0xc6, 0xec, 0x6d, 0x3e, 0x75, 0x66,
0x74, 0xe1, 0xce, 0x28, 0x61, 0x21, 0x26, 0x28, 0x8d, 0x76, 0xcb, 0x30, 0xc1, 0x2e, 0x5b, 0x26,
0x28, 0x73, 0x59, 0x98, 0x5d, 0x95, 0x87, 0x93, 0xa4, 0x94, 0x51, 0xf5, 0xd1, 0xbd, 0xcb, 0x79,
0xf7, 0xdc, 0x29, 0x4d, 0x7a, 0x33, 0xa6, 0x31, 0x2d, 0xb9, 0xcb, 0xab, 0xca, 0xaa, 0x9b, 0x31,
0xa5, 0xf1, 0x35, 0x72, 0xcb, 0x6e, 0x9a, 0xcf, 0x5d, 0x86, 0x17, 0x28, 0x63, 0xe1, 0x22, 0xa9,
0x0d, 0x8f, 0xff, 0x36, 0x84, 0x64, 0x59, 0xa1, 0xf3, 0x42, 0x84, 0x87, 0x83, 0x94, 0xce, 0x50,
0x96, 0xa9, 0x2d, 0x78, 0xbc, 0x7d, 0x74, 0x82, 0x23, 0x0d, 0x58, 0xc0, 0x3e, 0xea, 0x3c, 0x28,
0xd6, 0xa6, 0xdc, 0xdd, 0xe8, 0x5e, 0x2f, 0x90, 0xb7, 0x26, 0x2f, 0x52, 0xcf, 0xa0, 0x88, 0x23,
0x4d, 0x2c, 0x9d, 0x52, 0xb1, 0x36, 0x45, 0xaf, 0x17, 0x88, 0x38, 0x52, 0x15, 0xb8, 0x97, 0xe0,
0x48, 0xdb, 0xb3, 0x80, 0x7d, 0x12, 0xf0, 0x52, 0xbd, 0x80, 0x52, 0xc6, 0x42, 0x96, 0x67, 0xda,
0xbe, 0x05, 0xec, 0xd3, 0xd6, 0x13, 0xe7, 0x3f, 0x3f, 0x74, 0x86, 0xa5, 0x25, 0xa8, 0xad, 0x6a,
0x13, 0x1e, 0x64, 0x2c, 0xc2, 0x44, 0x3b, 0xe0, 0x2f, 0x04, 0x55, 0xa3, 0x9e, 0xf1, 0x51, 0x11,
0xcd, 0x99, 0x26, 0x95, 0x72, 0xdd, 0xd5, 0x3a, 0x4a, 0x53, 0xed, 0x70, 0xab, 0xa3, 0x34, 0x55,
0x75, 0xd8, 0x60, 0x28, 0x5d, 0x60, 0x12, 0x5e, 0x6b, 0x0d, 0x0b, 0xd8, 0x8d, 0x60, 0xdb, 0xab,
0x26, 0x94, 0xd1, 0x07, 0xcc, 0x26, 0xf5, 0x6e, 0x47, 0xe5, 0xc2, 0x90, 0x4b, 0xd5, 0x2a, 0x6a,
0x1b, 0x1e, 0xf1, 0x0e, 0x45, 0x93, 0x90, 0x69, 0xd0, 0x02, 0xb6, 0xdc, 0xd2, 0x9d, 0x2a, 0x50,
0x67, 0x13, 0xa8, 0x33, 0xda, 0x24, 0xde, 0x69, 0xac, 0xd6, 0xa6, 0xf0, 0xf9, 0x97, 0x09, 0x82,
0x46, 0x75, 0xad, 0xcd, 0xce, 0x3d, 0x28, 0xd7, 0x19, 0x7b, 0x64, 0x4e, 0x37, 0xd9, 0x80, 0xfb,
0x6c, 0x6c, 0xb8, 0x8f, 0xc9, 0x9c, 0x96, 0x39, 0xca, 0xad, 0xe6, 0x3f, 0xe3, 0xdb, 0x64, 0x19,
0x94, 0x8e, 0x67, 0x3f, 0x00, 0x94, 0xea, 0xc5, 0x0c, 0x78, 0x38, 0xf6, 0x5f, 0xf9, 0x97, 0xaf,
0x7d, 0x45, 0xd0, 0x1f, 0xde, 0xdc, 0x5a, 0x27, 0x15, 0x18, 0x93, 0x2b, 0x42, 0xdf, 0x13, 0xce,
0xbb, 0x41, 0xbf, 0x3d, 0xea, 0xf7, 0x14, 0xb0, 0xcb, 0xbb, 0x29, 0x0a, 0x19, 0x8a, 0x38, 0x0f,
0xc6, 0xbe, 0xef, 0xf9, 0x2f, 0x15, 0x71, 0x97, 0x07, 0x39, 0x21, 0x98, 0xc4, 0x9c, 0x0f, 0x47,
0x97, 0x83, 0x41, 0xbf, 0xa7, 0xec, 0xed, 0xf2, 0x21, 0xa3, 0x49, 0x82, 0x22, 0xf5, 0x29, 0x94,
0x06, 0xed, 0xf1, 0xb0, 0xdf, 0x53, 0xf6, 0x75, 0xe5, 0xe6, 0xd6, 0x3a, 0xae, 0xf0, 0x20, 0xcc,
0xb3, 0x6a, 0x3a, 0xa7, 0x7c, 0xfa, 0xc1, 0xee, 0x6d, 0x8e, 0x31, 0x89, 0xf5, 0xd3, 0x4f, 0x5f,
0x0c, 0xe1, 0xdb, 0x57, 0xa3, 0xfe, 0x4d, 0x47, 0x5b, 0xdd, 0x19, 0xc2, 0xcf, 0x3b, 0x43, 0xf8,
0x58, 0x18, 0x60, 0x55, 0x18, 0xe0, 0x7b, 0x61, 0x80, 0xdf, 0x85, 0x01, 0xde, 0x08, 0x53, 0xa9,
0x0c, 0xe2, 0xe2, 0x4f, 0x00, 0x00, 0x00, 0xff, 0xff, 0xc3, 0x32, 0xd2, 0x86, 0x50, 0x03, 0x00,
0x00,
}

View File

@@ -136,6 +136,20 @@ func New(address string, opts ...ClientOpt) (*Client, error) {
if copts.services == nil && c.conn == nil {
return nil, errors.New("no grpc connection or services is available")
}
// check namespace labels for default runtime
if copts.defaultRuntime == "" && copts.defaultns != "" {
namespaces := c.NamespaceService()
ctx := context.Background()
if labels, err := namespaces.Labels(ctx, copts.defaultns); err == nil {
if defaultRuntime, ok := labels[defaults.DefaultRuntimeNSLabel]; ok {
c.runtime = defaultRuntime
}
} else {
return nil, err
}
}
return c, nil
}
@@ -152,6 +166,20 @@ func NewWithConn(conn *grpc.ClientConn, opts ...ClientOpt) (*Client, error) {
conn: conn,
runtime: fmt.Sprintf("%s.%s", plugin.RuntimePlugin, runtime.GOOS),
}
// check namespace labels for default runtime
if copts.defaultRuntime == "" && copts.defaultns != "" {
namespaces := c.NamespaceService()
ctx := context.Background()
if labels, err := namespaces.Labels(ctx, copts.defaultns); err == nil {
if defaultRuntime, ok := labels[defaults.DefaultRuntimeNSLabel]; ok {
c.runtime = defaultRuntime
}
} else {
return nil, err
}
}
if copts.services != nil {
c.services = *copts.services
}
@@ -594,6 +622,13 @@ func (c *Client) VersionService() versionservice.VersionClient {
return versionservice.NewVersionClient(c.conn)
}
// Conn returns the underlying GRPC connection object
func (c *Client) Conn() *grpc.ClientConn {
c.connMu.Lock()
defer c.connMu.Unlock()
return c.conn
}
// Version of containerd
type Version struct {
// Version number
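The two hunks above teach the client to honor a per-namespace default runtime: when the client was created with a default namespace but no explicit default runtime, New and NewWithConn look up the namespace's containerd.io/defaults/runtime label. A minimal sketch of opting a namespace into a specific runtime, assuming a local containerd socket; the namespace name and runtime string below are illustrative, not part of this changeset:

package main

import (
    "context"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/defaults"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Create the namespace with the default-runtime label (Create fails if
    // the namespace already exists; use SetLabel in that case).
    if err := client.NamespaceService().Create(context.Background(), "buildx-example",
        map[string]string{defaults.DefaultRuntimeNSLabel: "io.containerd.runc.v1"},
    ); err != nil {
        log.Fatal(err)
    }
    // A client later created with containerd.WithDefaultNamespace("buildx-example")
    // and no explicit default runtime picks up that value via the lookup above.
}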

View File

@@ -70,10 +70,11 @@ func WithCheckpointTask(ctx context.Context, client *Client, c *containers.Conta
for _, d := range task.Descriptors {
platformSpec := platforms.DefaultSpec()
index.Manifests = append(index.Manifests, imagespec.Descriptor{
MediaType: d.MediaType,
Size: d.Size_,
Digest: d.Digest,
Platform: &platformSpec,
MediaType: d.MediaType,
Size: d.Size_,
Digest: d.Digest,
Platform: &platformSpec,
Annotations: d.Annotations,
})
}
// save copts

View File

@@ -20,7 +20,9 @@ import (
"context"
"github.com/containerd/containerd/containers"
"github.com/containerd/containerd/defaults"
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/namespaces"
"github.com/containerd/containerd/oci"
"github.com/containerd/containerd/platforms"
"github.com/containerd/containerd/snapshots"
@@ -107,7 +109,7 @@ func WithSnapshotter(name string) NewContainerOpts {
// WithSnapshot uses an existing root filesystem for the container
func WithSnapshot(id string) NewContainerOpts {
return func(ctx context.Context, client *Client, c *containers.Container) error {
setSnapshotterIfEmpty(c)
setSnapshotterIfEmpty(ctx, client, c)
// check that the snapshot exists, if not, fail on creation
if _, err := client.SnapshotService(c.Snapshotter).Mounts(ctx, id); err != nil {
return err
@@ -125,7 +127,7 @@ func WithNewSnapshot(id string, i Image, opts ...snapshots.Opt) NewContainerOpts
if err != nil {
return err
}
setSnapshotterIfEmpty(c)
setSnapshotterIfEmpty(ctx, client, c)
parent := identity.ChainID(diffIDs).String()
if _, err := client.SnapshotService(c.Snapshotter).Prepare(ctx, id, parent, opts...); err != nil {
return err
@@ -155,7 +157,7 @@ func WithNewSnapshotView(id string, i Image, opts ...snapshots.Opt) NewContainer
if err != nil {
return err
}
setSnapshotterIfEmpty(c)
setSnapshotterIfEmpty(ctx, client, c)
parent := identity.ChainID(diffIDs).String()
if _, err := client.SnapshotService(c.Snapshotter).View(ctx, id, parent, opts...); err != nil {
return err
@@ -166,9 +168,18 @@ func WithNewSnapshotView(id string, i Image, opts ...snapshots.Opt) NewContainer
}
}
func setSnapshotterIfEmpty(c *containers.Container) {
func setSnapshotterIfEmpty(ctx context.Context, client *Client, c *containers.Container) {
if c.Snapshotter == "" {
c.Snapshotter = DefaultSnapshotter
defaultSnapshotter := DefaultSnapshotter
namespaceService := client.NamespaceService()
if ns, err := namespaces.NamespaceRequired(ctx); err == nil {
if labels, err := namespaceService.Labels(ctx, ns); err == nil {
if snapshotLabel, ok := labels[defaults.DefaultSnapshotterNSLabel]; ok {
defaultSnapshotter = snapshotLabel
}
}
}
c.Snapshotter = defaultSnapshotter
}
}
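The same label mechanism now drives the default snapshotter: setSnapshotterIfEmpty consults the namespace's containerd.io/defaults/snapshotter label before falling back to DefaultSnapshotter. A short sketch under the same assumptions as the previous example (namespace and snapshotter names are illustrative):

package nsdefaults

import (
    "context"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/defaults"
)

// useNativeSnapshotter labels an existing namespace so that containers
// created in it without an explicit WithSnapshotter option resolve to the
// "native" snapshotter via the lookup above. The context passed to
// NewContainer must carry the namespace (namespaces.WithNamespace) for
// NamespaceRequired to succeed.
func useNativeSnapshotter(ctx context.Context, client *containerd.Client) error {
    return client.NamespaceService().SetLabel(ctx, "buildx-example",
        defaults.DefaultSnapshotterNSLabel, "native")
}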

View File

@@ -50,7 +50,7 @@ func withRemappedSnapshotBase(id string, i Image, uid, gid uint32, readonly bool
return err
}
setSnapshotterIfEmpty(c)
setSnapshotterIfEmpty(ctx, client, c)
var (
snapshotter = client.SnapshotService(c.Snapshotter)

View File

@@ -13,7 +13,7 @@ KillMode=process
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
LimitNOFILE=1048576
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

View File

@@ -33,6 +33,7 @@ import (
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/filters"
"github.com/containerd/containerd/log"
"github.com/sirupsen/logrus"
"github.com/containerd/continuity"
digest "github.com/opencontainers/go-digest"
@@ -477,6 +478,35 @@ func (s *store) Writer(ctx context.Context, opts ...content.WriterOpt) (content.
return w, nil // lock is now held by w.
}
func (s *store) resumeStatus(ref string, total int64, digester digest.Digester) (content.Status, error) {
path, _, data := s.ingestPaths(ref)
status, err := s.status(path)
if err != nil {
return status, errors.Wrap(err, "failed reading status of resume write")
}
if ref != status.Ref {
// NOTE(stevvooe): This is fairly catastrophic. Either we have some
// layout corruption or a hash collision for the ref key.
return status, errors.Wrapf(err, "ref key does not match: %v != %v", ref, status.Ref)
}
if total > 0 && status.Total > 0 && total != status.Total {
return status, errors.Errorf("provided total differs from status: %v != %v", total, status.Total)
}
// TODO(stevvooe): slow slow slow!!, send to goroutine or use resumable hashes
fp, err := os.Open(data)
if err != nil {
return status, err
}
p := bufPool.Get().(*[]byte)
status.Offset, err = io.CopyBuffer(digester.Hash(), fp, *p)
bufPool.Put(p)
fp.Close()
return status, err
}
// writer provides the main implementation of the Writer method. The caller
// must hold the lock correctly and release on error if there is a problem.
func (s *store) writer(ctx context.Context, ref string, total int64, expected digest.Digest) (content.Writer, error) {
@@ -498,45 +528,25 @@ func (s *store) writer(ctx context.Context, ref string, total int64, expected di
updatedAt time.Time
)
foundValidIngest := false
// ensure that the ingest path has been created.
if err := os.Mkdir(path, 0755); err != nil {
if !os.IsExist(err) {
return nil, err
}
status, err := s.status(path)
if err != nil {
return nil, errors.Wrap(err, "failed reading status of resume write")
status, err := s.resumeStatus(ref, total, digester)
if err == nil {
foundValidIngest = true
updatedAt = status.UpdatedAt
startedAt = status.StartedAt
total = status.Total
offset = status.Offset
} else {
logrus.Infof("failed to resume the status from path %s: %s. will recreate them", path, err.Error())
}
}
if ref != status.Ref {
// NOTE(stevvooe): This is fairly catastrophic. Either we have some
// layout corruption or a hash collision for the ref key.
return nil, errors.Wrapf(err, "ref key does not match: %v != %v", ref, status.Ref)
}
if total > 0 && status.Total > 0 && total != status.Total {
return nil, errors.Errorf("provided total differs from status: %v != %v", total, status.Total)
}
// TODO(stevvooe): slow slow slow!!, send to goroutine or use resumable hashes
fp, err := os.Open(data)
if err != nil {
return nil, err
}
p := bufPool.Get().(*[]byte)
offset, err = io.CopyBuffer(digester.Hash(), fp, *p)
bufPool.Put(p)
fp.Close()
if err != nil {
return nil, err
}
updatedAt = status.UpdatedAt
startedAt = status.StartedAt
total = status.Total
} else {
if !foundValidIngest {
startedAt = time.Now()
updatedAt = startedAt
@@ -546,11 +556,11 @@ func (s *store) writer(ctx context.Context, ref string, total int64, expected di
return nil, err
}
if writeTimestampFile(filepath.Join(path, "startedat"), startedAt); err != nil {
if err := writeTimestampFile(filepath.Join(path, "startedat"), startedAt); err != nil {
return nil, err
}
if writeTimestampFile(filepath.Join(path, "updatedat"), startedAt); err != nil {
if err := writeTimestampFile(filepath.Join(path, "updatedat"), startedAt); err != nil {
return nil, err
}
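Beyond extracting resumeStatus, this hunk fixes an easy-to-miss bug: `if writeTimestampFile(...); err != nil` calls the function in the if-initializer but discards its result and tests a stale err from the surrounding scope. A small illustration with hypothetical helpers (doA and doB are not from this changeset):

package main

import (
    "errors"
    "fmt"
)

func doA() error { return nil }
func doB() error { return errors.New("boom") }

func buggy() error {
    err := doA()           // succeeds, err == nil
    if doB(); err != nil { // doB's error is evaluated and thrown away
        return err
    }
    return nil // reports success even though doB failed
}

func fixed() error {
    if err := doB(); err != nil { // test doB's own result, as the diff now does
        return err
    }
    return nil
}

func main() {
    fmt.Println(buggy(), fixed()) // <nil> boom
}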

View File

@@ -74,6 +74,9 @@ func (w *writer) Write(p []byte) (n int, err error) {
}
func (w *writer) Commit(ctx context.Context, size int64, expected digest.Digest, opts ...content.Opt) error {
// Ensure even on error the writer is fully closed
defer unlock(w.ref)
var base content.Info
for _, opt := range opts {
if err := opt(&base); err != nil {
@@ -81,8 +84,6 @@ func (w *writer) Commit(ctx context.Context, size int64, expected digest.Digest,
}
}
// Ensure even on error the writer is fully closed
defer unlock(w.ref)
fp := w.fp
w.fp = nil

View File

@@ -23,4 +23,10 @@ const (
// DefaultMaxSendMsgSize defines the default maximum message size for
// sending protobufs passed over the GRPC API.
DefaultMaxSendMsgSize = 16 << 20
// DefaultRuntimeNSLabel defines the namespace label to check for
// default runtime
DefaultRuntimeNSLabel = "containerd.io/defaults/runtime"
// DefaultSnapshotterNSLabel defines the namespace label to check for
// default snapshotter
DefaultSnapshotterNSLabel = "containerd.io/defaults/snapshotter"
)

View File

@@ -26,10 +26,10 @@ import (
var (
// DefaultRootDir is the default location used by containerd to store
// persistent data
DefaultRootDir = filepath.Join(os.Getenv("programfiles"), "containerd", "root")
DefaultRootDir = filepath.Join(os.Getenv("ProgramData"), "containerd", "root")
// DefaultStateDir is the default location used by containerd to store
// transient data
DefaultStateDir = filepath.Join(os.Getenv("programfiles"), "containerd", "state")
DefaultStateDir = filepath.Join(os.Getenv("ProgramData"), "containerd", "state")
)
const (

View File

@@ -80,17 +80,19 @@ func (r *diffRemote) Compare(ctx context.Context, a, b []mount.Mount, opts ...di
func toDescriptor(d *types.Descriptor) ocispec.Descriptor {
return ocispec.Descriptor{
MediaType: d.MediaType,
Digest: d.Digest,
Size: d.Size_,
MediaType: d.MediaType,
Digest: d.Digest,
Size: d.Size_,
Annotations: d.Annotations,
}
}
func fromDescriptor(d ocispec.Descriptor) *types.Descriptor {
return &types.Descriptor{
MediaType: d.MediaType,
Digest: d.Digest,
Size_: d.Size,
MediaType: d.MediaType,
Digest: d.Digest,
Size_: d.Size,
Annotations: d.Annotations,
}
}

View File

@@ -137,16 +137,18 @@ func imagesFromProto(imagespb []imagesapi.Image) []images.Image {
func descFromProto(desc *types.Descriptor) ocispec.Descriptor {
return ocispec.Descriptor{
MediaType: desc.MediaType,
Size: desc.Size_,
Digest: desc.Digest,
MediaType: desc.MediaType,
Size: desc.Size_,
Digest: desc.Digest,
Annotations: desc.Annotations,
}
}
func descToProto(desc *ocispec.Descriptor) types.Descriptor {
return types.Descriptor{
MediaType: desc.MediaType,
Size_: desc.Size,
Digest: desc.Digest,
MediaType: desc.MediaType,
Size_: desc.Size,
Digest: desc.Digest,
Annotations: desc.Annotations,
}
}

View File

@@ -197,10 +197,7 @@ func onUntarJSON(r io.Reader, j interface{}) error {
if err != nil {
return err
}
if err := json.Unmarshal(b, j); err != nil {
return err
}
return nil
return json.Unmarshal(b, j)
}
func onUntarBlob(ctx context.Context, r io.Reader, store content.Ingester, size int64, ref string) (digest.Digest, error) {

View File

@@ -111,7 +111,18 @@ func unmount(target string, flags int) error {
// UnmountAll repeatedly unmounts the given mount point until there
// are no mounts remaining (EINVAL is returned by mount), which is
// useful for undoing a stack of mounts on the same mount point.
// UnmountAll is a noop when the first argument is an empty string.
// This is done when the containerd client did not specify any rootfs
// mounts (e.g. because the rootfs is managed outside containerd).
// UnmountAll is also a noop when the mount path does not exist.
func UnmountAll(mount string, flags int) error {
if mount == "" {
return nil
}
if _, err := os.Stat(mount); os.IsNotExist(err) {
return nil
}
for {
if err := unmount(mount, flags); err != nil {
// EINVAL is returned if the target is not a
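With the added guards, cleanup code can call UnmountAll unconditionally: an empty target (no rootfs mounts were passed by the client) and a target that no longer exists both return nil instead of an unmount error. A minimal sketch; the wrapper function is illustrative:

package cleanup

import "github.com/containerd/containerd/mount"

// cleanupRootfs tears down whatever is mounted at target. It is safe to
// call with "" or with a directory that was already removed.
func cleanupRootfs(target string) error {
    return mount.UnmountAll(target, 0)
}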

View File

@@ -25,6 +25,8 @@ import (
"os"
"strconv"
"strings"
"github.com/pkg/errors"
)
// Self retrieves a list of mounts for the current running process.
@@ -41,13 +43,15 @@ func Self() ([]Info, error) {
func parseInfoFile(r io.Reader) ([]Info, error) {
s := bufio.NewScanner(r)
out := []Info{}
var err error
for s.Scan() {
if err := s.Err(); err != nil {
if err = s.Err(); err != nil {
return nil, err
}
/*
See http://man7.org/linux/man-pages/man5/proc.5.html
36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue
(1)(2)(3) (4) (5) (6) (7) (8) (9) (10) (11)
(1) mount ID: unique identifier of the mount (may be reused after umount)
@@ -68,7 +72,7 @@ func parseInfoFile(r io.Reader) ([]Info, error) {
numFields := len(fields)
if numFields < 10 {
// should be at least 10 fields
return nil, fmt.Errorf("parsing '%s' failed: not enough fields (%d)", text, numFields)
return nil, errors.Errorf("parsing '%s' failed: not enough fields (%d)", text, numFields)
}
p := Info{}
// ignore any numbers parsing errors, as there should not be any
@@ -76,13 +80,19 @@ func parseInfoFile(r io.Reader) ([]Info, error) {
p.Parent, _ = strconv.Atoi(fields[1])
mm := strings.Split(fields[2], ":")
if len(mm) != 2 {
return nil, fmt.Errorf("parsing '%s' failed: unexpected minor:major pair %s", text, mm)
return nil, errors.Errorf("parsing '%s' failed: unexpected minor:major pair %s", text, mm)
}
p.Major, _ = strconv.Atoi(mm[0])
p.Minor, _ = strconv.Atoi(mm[1])
p.Root = fields[3]
p.Mountpoint = fields[4]
p.Root, err = strconv.Unquote(`"` + fields[3] + `"`)
if err != nil {
return nil, errors.Wrapf(err, "parsing '%s' failed: unable to unquote root field", fields[3])
}
p.Mountpoint, err = strconv.Unquote(`"` + fields[4] + `"`)
if err != nil {
return nil, errors.Wrapf(err, "parsing '%s' failed: unable to unquote mount point field", fields[4])
}
p.Options = fields[5]
// one or more optional fields, when a separator (-)
@@ -101,11 +111,11 @@ func parseInfoFile(r io.Reader) ([]Info, error) {
}
}
if i == numFields {
return nil, fmt.Errorf("parsing '%s' failed: missing separator ('-')", text)
return nil, errors.Errorf("parsing '%s' failed: missing separator ('-')", text)
}
// There should be 3 fields after the separator...
if i+4 > numFields {
return nil, fmt.Errorf("parsing '%s' failed: not enough fields after a separator", text)
return nil, errors.Errorf("parsing '%s' failed: not enough fields after a separator", text)
}
// ... but in Linux <= 3.9 mounting a cifs with spaces in a share name
// (like "//serv/My Documents") _may_ end up having a space in the last field
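The switch from plain assignment to strconv.Unquote decodes the octal escapes /proc/self/mountinfo uses for special characters (a space shows up as \040). Wrapping the raw field in double quotes turns it into a valid Go string literal that the unquoter can decode. A quick, self-contained illustration:

package main

import (
    "fmt"
    "strconv"
)

func main() {
    field := `/mnt/with\040space` // root or mountpoint field as read from mountinfo
    decoded, err := strconv.Unquote(`"` + field + `"`)
    if err != nil {
        panic(err)
    }
    fmt.Println(decoded) // prints: /mnt/with space
}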

View File

@@ -741,7 +741,9 @@ func WithCapabilities(caps []string) SpecOpts {
}
// WithAllCapabilities sets all linux capabilities for the process
var WithAllCapabilities = WithCapabilities(GetAllCapabilities())
var WithAllCapabilities = func(ctx context.Context, client Client, c *containers.Container, s *Spec) error {
return WithCapabilities(GetAllCapabilities())(ctx, client, c, s)
}
// GetAllCapabilities returns all caps up to CAP_LAST_CAP
// or CAP_BLOCK_SUSPEND on RHEL6
@@ -771,11 +773,14 @@ func capsContain(caps []string, s string) bool {
}
func removeCap(caps *[]string, s string) {
for i, c := range *caps {
var newcaps []string
for _, c := range *caps {
if c == s {
*caps = append((*caps)[:i], (*caps)[i+1:]...)
continue
}
newcaps = append(newcaps, c)
}
*caps = newcaps
}
// WithAddedCapabilities adds the provided capabilities
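Two things change in this hunk: WithAllCapabilities is now evaluated lazily (GetAllCapabilities runs when the opt is applied rather than at package init), and removeCap stops deleting from the slice while ranging over it, which could leave an entry behind when the same capability appeared back to back. A self-contained sketch of the new filtering behavior, with the helper copied locally since it is unexported:

package main

import "fmt"

// removeCap mirrors the rewritten helper above: rebuild the slice,
// skipping every occurrence of s.
func removeCap(caps *[]string, s string) {
    var newcaps []string
    for _, c := range *caps {
        if c == s {
            continue
        }
        newcaps = append(newcaps, c)
    }
    *caps = newcaps
}

func main() {
    caps := []string{"CAP_SYS_ADMIN", "CAP_NET_RAW", "CAP_NET_RAW"}
    removeCap(&caps, "CAP_NET_RAW")
    fmt.Println(caps) // [CAP_SYS_ADMIN]; the old in-place deletion could keep one duplicate
}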

View File

@@ -20,6 +20,7 @@ import (
"fmt"
"sync"
"github.com/containerd/ttrpc"
"github.com/pkg/errors"
"google.golang.org/grpc"
)
@@ -123,6 +124,16 @@ type Service interface {
Register(*grpc.Server) error
}
// TTRPCService allows TTRPC services to be registered with the underlying server
type TTRPCService interface {
RegisterTTRPC(*ttrpc.Server) error
}
// TCPService allows GRPC services to be registered with the underlying tcp server
type TCPService interface {
RegisterTCP(*grpc.Server) error
}
var register = struct {
sync.RWMutex
r []*Registration

View File

@@ -52,6 +52,15 @@ type Process interface {
Status(context.Context) (Status, error)
}
// NewExitStatus populates an ExitStatus
func NewExitStatus(code uint32, t time.Time, err error) *ExitStatus {
return &ExitStatus{
code: code,
exitedAt: t,
err: err,
}
}
// ExitStatus encapsulates a process' exit status.
// It is used by `Wait()` to return either a process exit code or an error
type ExitStatus struct {

View File

@@ -18,6 +18,7 @@ package docker
import (
"context"
"encoding/json"
"fmt"
"io"
"io/ioutil"
@@ -28,6 +29,7 @@ import (
"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/images"
"github.com/containerd/containerd/log"
"github.com/docker/distribution/registry/api/errcode"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
"github.com/pkg/errors"
"github.com/sirupsen/logrus"
@@ -101,12 +103,16 @@ func (r dockerFetcher) open(ctx context.Context, u, mediatype string, offset int
// really distinguish between a 206 and a 200. In the case of 200, we
// can discard the bytes, hiding the seek behavior from the
// implementation.
defer resp.Body.Close()
resp.Body.Close()
if resp.StatusCode == http.StatusNotFound {
return nil, errors.Wrapf(errdefs.ErrNotFound, "content at %v not found", u)
}
return nil, errors.Errorf("unexpected status code %v: %v", u, resp.Status)
var registryErr errcode.Errors
if err := json.NewDecoder(resp.Body).Decode(&registryErr); err != nil || registryErr.Len() < 1 {
return nil, errors.Errorf("unexpected status code %v: %v", u, resp.Status)
}
return nil, errors.Errorf("unexpected status code %v: %s - Server message: %s", u, resp.Status, registryErr.Error())
}
if offset > 0 {
cr := resp.Header.Get("content-range")

View File

@@ -88,7 +88,7 @@ func appendDistributionSourceLabel(originLabel, repo string) string {
}
repos = append(repos, repo)
// use emtpy string to present duplicate items
// use empty string to represent duplicate items
for i := 1; i < len(repos); i++ {
tmp, j := repos[i], i-1
for ; j >= 0 && repos[j] >= tmp; j-- {

View File

@@ -18,10 +18,10 @@ package docker
import (
"context"
"io"
"net/http"
"net/url"
"path"
"strconv"
"strings"
"github.com/containerd/containerd/errdefs"
@@ -29,6 +29,7 @@ import (
"github.com/containerd/containerd/log"
"github.com/containerd/containerd/reference"
"github.com/containerd/containerd/remotes"
"github.com/containerd/containerd/remotes/docker/schema1"
"github.com/containerd/containerd/version"
digest "github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
@@ -150,6 +151,32 @@ func NewResolver(options ResolverOptions) remotes.Resolver {
}
}
func getManifestMediaType(resp *http.Response) string {
// Strip encoding data (manifests should always be ascii JSON)
contentType := resp.Header.Get("Content-Type")
if sp := strings.IndexByte(contentType, ';'); sp != -1 {
contentType = contentType[0:sp]
}
// As of Apr 30 2019 the registry.access.redhat.com registry does not specify
// the content type of any data but uses schema1 manifests.
if contentType == "text/plain" {
contentType = images.MediaTypeDockerSchema1Manifest
}
return contentType
}
type countingReader struct {
reader io.Reader
bytesRead int64
}
func (r *countingReader) Read(p []byte) (int, error) {
n, err := r.reader.Read(p)
r.bytesRead += int64(n)
return n, err
}
var _ remotes.Resolver = &dockerResolver{}
func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocispec.Descriptor, error) {
@@ -220,40 +247,56 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
}
return "", ocispec.Descriptor{}, errors.Errorf("unexpected status code %v: %v", u, resp.Status)
}
size := resp.ContentLength
// this is the only point at which we trust the registry. we use the
// content headers to assemble a descriptor for the name. when this becomes
// more robust, we mostly get this information from a secure trust store.
dgstHeader := digest.Digest(resp.Header.Get("Docker-Content-Digest"))
contentType := getManifestMediaType(resp)
if dgstHeader != "" {
if dgstHeader != "" && size != -1 {
if err := dgstHeader.Validate(); err != nil {
return "", ocispec.Descriptor{}, errors.Wrapf(err, "%q in header not a valid digest", dgstHeader)
}
dgst = dgstHeader
}
} else {
log.G(ctx).Debug("no Docker-Content-Digest header, fetching manifest instead")
if dgst == "" {
return "", ocispec.Descriptor{}, errors.Errorf("could not resolve digest for %v", ref)
}
req, err := http.NewRequest(http.MethodGet, u, nil)
if err != nil {
return "", ocispec.Descriptor{}, err
}
req.Header = r.headers
var (
size int64
sizeHeader = resp.Header.Get("Content-Length")
)
resp, err := fetcher.doRequestWithRetries(ctx, req, nil)
if err != nil {
return "", ocispec.Descriptor{}, err
}
defer resp.Body.Close()
size, err = strconv.ParseInt(sizeHeader, 10, 64)
if err != nil {
bodyReader := countingReader{reader: resp.Body}
return "", ocispec.Descriptor{}, errors.Wrapf(err, "invalid size header: %q", sizeHeader)
}
if size < 0 {
return "", ocispec.Descriptor{}, errors.Errorf("%q in header not a valid size", sizeHeader)
contentType = getManifestMediaType(resp)
if contentType == images.MediaTypeDockerSchema1Manifest {
b, err := schema1.ReadStripSignature(&bodyReader)
if err != nil {
return "", ocispec.Descriptor{}, err
}
dgst = digest.FromBytes(b)
} else {
dgst, err = digest.FromReader(&bodyReader)
if err != nil {
return "", ocispec.Descriptor{}, err
}
}
size = bodyReader.bytesRead
}
desc := ocispec.Descriptor{
Digest: dgst,
MediaType: resp.Header.Get("Content-Type"), // need to strip disposition?
MediaType: contentType,
Size: size,
}
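When the HEAD response does not provide both a Docker-Content-Digest header and a content length, the resolver now fetches the manifest and derives digest and size from the body itself, reading it once through countingReader. A minimal sketch of that read-once pattern with a stand-in body; apart from countingReader, the names below are illustrative:

package main

import (
    "fmt"
    "io"
    "strings"

    digest "github.com/opencontainers/go-digest"
)

// countingReader mirrors the helper added above: it forwards reads and
// records how many bytes passed through.
type countingReader struct {
    reader    io.Reader
    bytesRead int64
}

func (r *countingReader) Read(p []byte) (int, error) {
    n, err := r.reader.Read(p)
    r.bytesRead += int64(n)
    return n, err
}

func main() {
    body := strings.NewReader(`{"schemaVersion": 2}`) // stand-in for resp.Body
    cr := &countingReader{reader: body}
    dgst, err := digest.FromReader(cr) // canonical sha256 digest of everything read
    if err != nil {
        panic(err)
    }
    fmt.Println(dgst, cr.bytesRead) // digest plus the size observed while digesting
}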

View File

@@ -227,6 +227,17 @@ func (c *Converter) Convert(ctx context.Context, opts ...ConvertOpt) (ocispec.De
return desc, nil
}
// ReadStripSignature reads in a schema1 manifest and returns a byte array
// with the "signatures" field stripped
func ReadStripSignature(schema1Blob io.Reader) ([]byte, error) {
b, err := ioutil.ReadAll(io.LimitReader(schema1Blob, manifestSizeLimit)) // limit to 8MB
if err != nil {
return nil, err
}
return stripSignature(b)
}
func (c *Converter) fetchManifest(ctx context.Context, desc ocispec.Descriptor) error {
log.G(ctx).Debug("fetch schema 1")
@@ -235,17 +246,12 @@ func (c *Converter) fetchManifest(ctx context.Context, desc ocispec.Descriptor)
return err
}
b, err := ioutil.ReadAll(io.LimitReader(rc, manifestSizeLimit)) // limit to 8MB
b, err := ReadStripSignature(rc)
rc.Close()
if err != nil {
return err
}
b, err = stripSignature(b)
if err != nil {
return err
}
var m manifest
if err := json.Unmarshal(b, &m); err != nil {
return err

View File

@@ -72,9 +72,9 @@ func (fn FetcherFunc) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.Re
// PusherFunc allows package users to implement a Pusher with just a
// function.
type PusherFunc func(ctx context.Context, desc ocispec.Descriptor, r io.Reader) error
type PusherFunc func(ctx context.Context, desc ocispec.Descriptor) (content.Writer, error)
// Push content
func (fn PusherFunc) Push(ctx context.Context, desc ocispec.Descriptor, r io.Reader) error {
return fn(ctx, desc, r)
func (fn PusherFunc) Push(ctx context.Context, desc ocispec.Descriptor) (content.Writer, error) {
return fn(ctx, desc)
}
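PusherFunc now matches the Pusher interface's writer-based flow: instead of being handed a reader, the adapter returns a content.Writer that the caller streams into. A hedged sketch of wrapping a content.Ingester (for example a local content store) in the new signature; the helper name and wiring are illustrative, not from this changeset:

package pushutil

import (
    "context"

    "github.com/containerd/containerd/content"
    "github.com/containerd/containerd/remotes"
    ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// pusherFromIngester adapts an Ingester into a remotes.Pusher using the
// updated PusherFunc signature.
func pusherFromIngester(ingester content.Ingester) remotes.Pusher {
    return remotes.PusherFunc(func(ctx context.Context, desc ocispec.Descriptor) (content.Writer, error) {
        return ingester.Writer(ctx,
            content.WithRef(remotes.MakeRefKey(ctx, desc)),
            content.WithDescriptor(desc))
    })
}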

View File

@@ -1,30 +1,16 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: github.com/containerd/containerd/runtime/linux/runctypes/runc.proto
/*
Package runctypes is a generated protocol buffer package.
It is generated from these files:
github.com/containerd/containerd/runtime/linux/runctypes/runc.proto
It has these top-level messages:
RuncOptions
CreateOptions
CheckpointOptions
ProcessDetails
*/
package runctypes
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import strings "strings"
import reflect "reflect"
import io "io"
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
reflect "reflect"
strings "strings"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -38,59 +24,183 @@ var _ = math.Inf
const _ = proto.GoGoProtoPackageIsVersion2 // please upgrade the proto package
type RuncOptions struct {
Runtime string `protobuf:"bytes,1,opt,name=runtime,proto3" json:"runtime,omitempty"`
RuntimeRoot string `protobuf:"bytes,2,opt,name=runtime_root,json=runtimeRoot,proto3" json:"runtime_root,omitempty"`
CriuPath string `protobuf:"bytes,3,opt,name=criu_path,json=criuPath,proto3" json:"criu_path,omitempty"`
SystemdCgroup bool `protobuf:"varint,4,opt,name=systemd_cgroup,json=systemdCgroup,proto3" json:"systemd_cgroup,omitempty"`
Runtime string `protobuf:"bytes,1,opt,name=runtime,proto3" json:"runtime,omitempty"`
RuntimeRoot string `protobuf:"bytes,2,opt,name=runtime_root,json=runtimeRoot,proto3" json:"runtime_root,omitempty"`
CriuPath string `protobuf:"bytes,3,opt,name=criu_path,json=criuPath,proto3" json:"criu_path,omitempty"`
SystemdCgroup bool `protobuf:"varint,4,opt,name=systemd_cgroup,json=systemdCgroup,proto3" json:"systemd_cgroup,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *RuncOptions) Reset() { *m = RuncOptions{} }
func (*RuncOptions) ProtoMessage() {}
func (*RuncOptions) Descriptor() ([]byte, []int) { return fileDescriptorRunc, []int{0} }
func (m *RuncOptions) Reset() { *m = RuncOptions{} }
func (*RuncOptions) ProtoMessage() {}
func (*RuncOptions) Descriptor() ([]byte, []int) {
return fileDescriptor_d20e2ba8b3cc58b9, []int{0}
}
func (m *RuncOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *RuncOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_RuncOptions.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *RuncOptions) XXX_Merge(src proto.Message) {
xxx_messageInfo_RuncOptions.Merge(m, src)
}
func (m *RuncOptions) XXX_Size() int {
return m.Size()
}
func (m *RuncOptions) XXX_DiscardUnknown() {
xxx_messageInfo_RuncOptions.DiscardUnknown(m)
}
var xxx_messageInfo_RuncOptions proto.InternalMessageInfo
type CreateOptions struct {
NoPivotRoot bool `protobuf:"varint,1,opt,name=no_pivot_root,json=noPivotRoot,proto3" json:"no_pivot_root,omitempty"`
OpenTcp bool `protobuf:"varint,2,opt,name=open_tcp,json=openTcp,proto3" json:"open_tcp,omitempty"`
ExternalUnixSockets bool `protobuf:"varint,3,opt,name=external_unix_sockets,json=externalUnixSockets,proto3" json:"external_unix_sockets,omitempty"`
Terminal bool `protobuf:"varint,4,opt,name=terminal,proto3" json:"terminal,omitempty"`
FileLocks bool `protobuf:"varint,5,opt,name=file_locks,json=fileLocks,proto3" json:"file_locks,omitempty"`
EmptyNamespaces []string `protobuf:"bytes,6,rep,name=empty_namespaces,json=emptyNamespaces" json:"empty_namespaces,omitempty"`
CgroupsMode string `protobuf:"bytes,7,opt,name=cgroups_mode,json=cgroupsMode,proto3" json:"cgroups_mode,omitempty"`
NoNewKeyring bool `protobuf:"varint,8,opt,name=no_new_keyring,json=noNewKeyring,proto3" json:"no_new_keyring,omitempty"`
ShimCgroup string `protobuf:"bytes,9,opt,name=shim_cgroup,json=shimCgroup,proto3" json:"shim_cgroup,omitempty"`
IoUid uint32 `protobuf:"varint,10,opt,name=io_uid,json=ioUid,proto3" json:"io_uid,omitempty"`
IoGid uint32 `protobuf:"varint,11,opt,name=io_gid,json=ioGid,proto3" json:"io_gid,omitempty"`
CriuWorkPath string `protobuf:"bytes,12,opt,name=criu_work_path,json=criuWorkPath,proto3" json:"criu_work_path,omitempty"`
CriuImagePath string `protobuf:"bytes,13,opt,name=criu_image_path,json=criuImagePath,proto3" json:"criu_image_path,omitempty"`
NoPivotRoot bool `protobuf:"varint,1,opt,name=no_pivot_root,json=noPivotRoot,proto3" json:"no_pivot_root,omitempty"`
OpenTcp bool `protobuf:"varint,2,opt,name=open_tcp,json=openTcp,proto3" json:"open_tcp,omitempty"`
ExternalUnixSockets bool `protobuf:"varint,3,opt,name=external_unix_sockets,json=externalUnixSockets,proto3" json:"external_unix_sockets,omitempty"`
Terminal bool `protobuf:"varint,4,opt,name=terminal,proto3" json:"terminal,omitempty"`
FileLocks bool `protobuf:"varint,5,opt,name=file_locks,json=fileLocks,proto3" json:"file_locks,omitempty"`
EmptyNamespaces []string `protobuf:"bytes,6,rep,name=empty_namespaces,json=emptyNamespaces,proto3" json:"empty_namespaces,omitempty"`
CgroupsMode string `protobuf:"bytes,7,opt,name=cgroups_mode,json=cgroupsMode,proto3" json:"cgroups_mode,omitempty"`
NoNewKeyring bool `protobuf:"varint,8,opt,name=no_new_keyring,json=noNewKeyring,proto3" json:"no_new_keyring,omitempty"`
ShimCgroup string `protobuf:"bytes,9,opt,name=shim_cgroup,json=shimCgroup,proto3" json:"shim_cgroup,omitempty"`
IoUid uint32 `protobuf:"varint,10,opt,name=io_uid,json=ioUid,proto3" json:"io_uid,omitempty"`
IoGid uint32 `protobuf:"varint,11,opt,name=io_gid,json=ioGid,proto3" json:"io_gid,omitempty"`
CriuWorkPath string `protobuf:"bytes,12,opt,name=criu_work_path,json=criuWorkPath,proto3" json:"criu_work_path,omitempty"`
CriuImagePath string `protobuf:"bytes,13,opt,name=criu_image_path,json=criuImagePath,proto3" json:"criu_image_path,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *CreateOptions) Reset() { *m = CreateOptions{} }
func (*CreateOptions) ProtoMessage() {}
func (*CreateOptions) Descriptor() ([]byte, []int) { return fileDescriptorRunc, []int{1} }
func (m *CreateOptions) Reset() { *m = CreateOptions{} }
func (*CreateOptions) ProtoMessage() {}
func (*CreateOptions) Descriptor() ([]byte, []int) {
return fileDescriptor_d20e2ba8b3cc58b9, []int{1}
}
func (m *CreateOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *CreateOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_CreateOptions.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *CreateOptions) XXX_Merge(src proto.Message) {
xxx_messageInfo_CreateOptions.Merge(m, src)
}
func (m *CreateOptions) XXX_Size() int {
return m.Size()
}
func (m *CreateOptions) XXX_DiscardUnknown() {
xxx_messageInfo_CreateOptions.DiscardUnknown(m)
}
var xxx_messageInfo_CreateOptions proto.InternalMessageInfo
type CheckpointOptions struct {
Exit bool `protobuf:"varint,1,opt,name=exit,proto3" json:"exit,omitempty"`
OpenTcp bool `protobuf:"varint,2,opt,name=open_tcp,json=openTcp,proto3" json:"open_tcp,omitempty"`
ExternalUnixSockets bool `protobuf:"varint,3,opt,name=external_unix_sockets,json=externalUnixSockets,proto3" json:"external_unix_sockets,omitempty"`
Terminal bool `protobuf:"varint,4,opt,name=terminal,proto3" json:"terminal,omitempty"`
FileLocks bool `protobuf:"varint,5,opt,name=file_locks,json=fileLocks,proto3" json:"file_locks,omitempty"`
EmptyNamespaces []string `protobuf:"bytes,6,rep,name=empty_namespaces,json=emptyNamespaces" json:"empty_namespaces,omitempty"`
CgroupsMode string `protobuf:"bytes,7,opt,name=cgroups_mode,json=cgroupsMode,proto3" json:"cgroups_mode,omitempty"`
WorkPath string `protobuf:"bytes,8,opt,name=work_path,json=workPath,proto3" json:"work_path,omitempty"`
ImagePath string `protobuf:"bytes,9,opt,name=image_path,json=imagePath,proto3" json:"image_path,omitempty"`
Exit bool `protobuf:"varint,1,opt,name=exit,proto3" json:"exit,omitempty"`
OpenTcp bool `protobuf:"varint,2,opt,name=open_tcp,json=openTcp,proto3" json:"open_tcp,omitempty"`
ExternalUnixSockets bool `protobuf:"varint,3,opt,name=external_unix_sockets,json=externalUnixSockets,proto3" json:"external_unix_sockets,omitempty"`
Terminal bool `protobuf:"varint,4,opt,name=terminal,proto3" json:"terminal,omitempty"`
FileLocks bool `protobuf:"varint,5,opt,name=file_locks,json=fileLocks,proto3" json:"file_locks,omitempty"`
EmptyNamespaces []string `protobuf:"bytes,6,rep,name=empty_namespaces,json=emptyNamespaces,proto3" json:"empty_namespaces,omitempty"`
CgroupsMode string `protobuf:"bytes,7,opt,name=cgroups_mode,json=cgroupsMode,proto3" json:"cgroups_mode,omitempty"`
WorkPath string `protobuf:"bytes,8,opt,name=work_path,json=workPath,proto3" json:"work_path,omitempty"`
ImagePath string `protobuf:"bytes,9,opt,name=image_path,json=imagePath,proto3" json:"image_path,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *CheckpointOptions) Reset() { *m = CheckpointOptions{} }
func (*CheckpointOptions) ProtoMessage() {}
func (*CheckpointOptions) Descriptor() ([]byte, []int) { return fileDescriptorRunc, []int{2} }
func (m *CheckpointOptions) Reset() { *m = CheckpointOptions{} }
func (*CheckpointOptions) ProtoMessage() {}
func (*CheckpointOptions) Descriptor() ([]byte, []int) {
return fileDescriptor_d20e2ba8b3cc58b9, []int{2}
}
func (m *CheckpointOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *CheckpointOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_CheckpointOptions.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *CheckpointOptions) XXX_Merge(src proto.Message) {
xxx_messageInfo_CheckpointOptions.Merge(m, src)
}
func (m *CheckpointOptions) XXX_Size() int {
return m.Size()
}
func (m *CheckpointOptions) XXX_DiscardUnknown() {
xxx_messageInfo_CheckpointOptions.DiscardUnknown(m)
}
var xxx_messageInfo_CheckpointOptions proto.InternalMessageInfo
type ProcessDetails struct {
ExecID string `protobuf:"bytes,1,opt,name=exec_id,json=execId,proto3" json:"exec_id,omitempty"`
ExecID string `protobuf:"bytes,1,opt,name=exec_id,json=execId,proto3" json:"exec_id,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ProcessDetails) Reset() { *m = ProcessDetails{} }
func (*ProcessDetails) ProtoMessage() {}
func (*ProcessDetails) Descriptor() ([]byte, []int) { return fileDescriptorRunc, []int{3} }
func (m *ProcessDetails) Reset() { *m = ProcessDetails{} }
func (*ProcessDetails) ProtoMessage() {}
func (*ProcessDetails) Descriptor() ([]byte, []int) {
return fileDescriptor_d20e2ba8b3cc58b9, []int{3}
}
func (m *ProcessDetails) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ProcessDetails) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ProcessDetails.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ProcessDetails) XXX_Merge(src proto.Message) {
xxx_messageInfo_ProcessDetails.Merge(m, src)
}
func (m *ProcessDetails) XXX_Size() int {
return m.Size()
}
func (m *ProcessDetails) XXX_DiscardUnknown() {
xxx_messageInfo_ProcessDetails.DiscardUnknown(m)
}
var xxx_messageInfo_ProcessDetails proto.InternalMessageInfo
func init() {
proto.RegisterType((*RuncOptions)(nil), "containerd.linux.runc.RuncOptions")
@@ -98,6 +208,53 @@ func init() {
proto.RegisterType((*CheckpointOptions)(nil), "containerd.linux.runc.CheckpointOptions")
proto.RegisterType((*ProcessDetails)(nil), "containerd.linux.runc.ProcessDetails")
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/runtime/linux/runctypes/runc.proto", fileDescriptor_d20e2ba8b3cc58b9)
}
var fileDescriptor_d20e2ba8b3cc58b9 = []byte{
// 604 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x94, 0xcf, 0x6e, 0xd3, 0x40,
0x10, 0xc6, 0xeb, 0xfe, 0x49, 0x9c, 0x49, 0xd2, 0xc2, 0x42, 0x25, 0xd3, 0xaa, 0x69, 0x08, 0x7f,
0x14, 0x2e, 0xa9, 0x04, 0xe2, 0xc4, 0xad, 0x29, 0x42, 0x15, 0x50, 0x2a, 0x43, 0x05, 0x42, 0x48,
0x2b, 0x77, 0x3d, 0x24, 0xab, 0xc4, 0x3b, 0x96, 0x77, 0x4d, 0x92, 0x1b, 0x4f, 0xc0, 0x0b, 0xf1,
0x02, 0x3d, 0x21, 0x8e, 0x9c, 0x10, 0xcd, 0x93, 0xa0, 0x5d, 0xc7, 0x69, 0xcf, 0x1c, 0xb9, 0xcd,
0xfc, 0xe6, 0xb3, 0x67, 0xf4, 0x7d, 0xb2, 0xa1, 0x3f, 0x90, 0x66, 0x98, 0x9f, 0xf7, 0x04, 0x25,
0x07, 0x82, 0x94, 0x89, 0xa4, 0xc2, 0x2c, 0xbe, 0x5e, 0x66, 0xb9, 0x32, 0x32, 0xc1, 0x83, 0xb1,
0x54, 0xf9, 0xd4, 0x76, 0xc2, 0xcc, 0x52, 0xd4, 0xae, 0xea, 0xa5, 0x19, 0x19, 0x62, 0xdb, 0x57,
0xf2, 0x9e, 0x93, 0xf5, 0xec, 0x70, 0xe7, 0xf6, 0x80, 0x06, 0xe4, 0x14, 0x07, 0xb6, 0x2a, 0xc4,
0x9d, 0x6f, 0x1e, 0xd4, 0xc3, 0x5c, 0x89, 0x37, 0xa9, 0x91, 0xa4, 0x34, 0x0b, 0xa0, 0xba, 0x58,
0x11, 0x78, 0x6d, 0xaf, 0x5b, 0x0b, 0xcb, 0x96, 0xdd, 0x85, 0xc6, 0xa2, 0xe4, 0x19, 0x91, 0x09,
0x56, 0xdd, 0xb8, 0xbe, 0x60, 0x21, 0x91, 0x61, 0xbb, 0x50, 0x13, 0x99, 0xcc, 0x79, 0x1a, 0x99,
0x61, 0xb0, 0xe6, 0xe6, 0xbe, 0x05, 0xa7, 0x91, 0x19, 0xb2, 0x07, 0xb0, 0xa9, 0x67, 0xda, 0x60,
0x12, 0x73, 0x31, 0xc8, 0x28, 0x4f, 0x83, 0xf5, 0xb6, 0xd7, 0xf5, 0xc3, 0xe6, 0x82, 0xf6, 0x1d,
0xec, 0xfc, 0x58, 0x83, 0x66, 0x3f, 0xc3, 0xc8, 0x60, 0x79, 0x52, 0x07, 0x9a, 0x8a, 0x78, 0x2a,
0xbf, 0x90, 0x29, 0x36, 0x7b, 0xee, 0xb9, 0xba, 0xa2, 0x53, 0xcb, 0xdc, 0xe6, 0x3b, 0xe0, 0x53,
0x8a, 0x8a, 0x1b, 0x91, 0xba, 0xc3, 0xfc, 0xb0, 0x6a, 0xfb, 0x77, 0x22, 0x65, 0x8f, 0x61, 0x1b,
0xa7, 0x06, 0x33, 0x15, 0x8d, 0x79, 0xae, 0xe4, 0x94, 0x6b, 0x12, 0x23, 0x34, 0xda, 0x1d, 0xe8,
0x87, 0xb7, 0xca, 0xe1, 0x99, 0x92, 0xd3, 0xb7, 0xc5, 0x88, 0xed, 0x80, 0x6f, 0x30, 0x4b, 0xa4,
0x8a, 0xc6, 0x8b, 0x2b, 0x97, 0x3d, 0xdb, 0x03, 0xf8, 0x2c, 0xc7, 0xc8, 0xc7, 0x24, 0x46, 0x3a,
0xd8, 0x70, 0xd3, 0x9a, 0x25, 0xaf, 0x2c, 0x60, 0x8f, 0xe0, 0x06, 0x26, 0xa9, 0x99, 0x71, 0x15,
0x25, 0xa8, 0xd3, 0x48, 0xa0, 0x0e, 0x2a, 0xed, 0xb5, 0x6e, 0x2d, 0xdc, 0x72, 0xfc, 0x64, 0x89,
0xad, 0xa3, 0x85, 0x13, 0x9a, 0x27, 0x14, 0x63, 0x50, 0x2d, 0x1c, 0x5d, 0xb0, 0xd7, 0x14, 0x23,
0xbb, 0x0f, 0x9b, 0x8a, 0xb8, 0xc2, 0x09, 0x1f, 0xe1, 0x2c, 0x93, 0x6a, 0x10, 0xf8, 0x6e, 0x61,
0x43, 0xd1, 0x09, 0x4e, 0x5e, 0x16, 0x8c, 0xed, 0x43, 0x5d, 0x0f, 0x65, 0x52, 0xfa, 0x5a, 0x73,
0xef, 0x01, 0x8b, 0x0a, 0x53, 0xd9, 0x36, 0x54, 0x24, 0xf1, 0x5c, 0xc6, 0x01, 0xb4, 0xbd, 0x6e,
0x33, 0xdc, 0x90, 0x74, 0x26, 0xe3, 0x05, 0x1e, 0xc8, 0x38, 0xa8, 0x97, 0xf8, 0x85, 0x8c, 0xed,
0x52, 0x17, 0xe3, 0x84, 0xb2, 0x51, 0x91, 0x65, 0xc3, 0xbd, 0xb1, 0x61, 0xe9, 0x7b, 0xca, 0x46,
0x2e, 0xcf, 0x87, 0xb0, 0xe5, 0x54, 0x32, 0x89, 0x06, 0x58, 0xc8, 0x9a, 0x4e, 0xd6, 0xb4, 0xf8,
0xd8, 0x52, 0xab, 0xeb, 0x7c, 0x5f, 0x85, 0x9b, 0xfd, 0x21, 0x8a, 0x51, 0x4a, 0x52, 0x99, 0x32,
0x54, 0x06, 0xeb, 0x38, 0x95, 0x65, 0x96, 0xae, 0xfe, 0x6f, 0x43, 0xdc, 0x85, 0xda, 0x95, 0x95,
0x7e, 0xf1, 0x59, 0x4c, 0x4a, 0x1b, 0xf7, 0x00, 0xae, 0x39, 0x58, 0x44, 0x57, 0x93, 0x4b, 0xf7,
0x9e, 0xc2, 0xe6, 0x69, 0x46, 0x02, 0xb5, 0x3e, 0x42, 0x13, 0xc9, 0xb1, 0x66, 0xf7, 0xa0, 0x8a,
0x53, 0x14, 0x5c, 0xc6, 0xc5, 0x17, 0x7a, 0x08, 0xf3, 0xdf, 0xfb, 0x95, 0xe7, 0x53, 0x14, 0xc7,
0x47, 0x61, 0xc5, 0x8e, 0x8e, 0xe3, 0xc3, 0x4f, 0x17, 0x97, 0xad, 0x95, 0x5f, 0x97, 0xad, 0x95,
0xaf, 0xf3, 0x96, 0x77, 0x31, 0x6f, 0x79, 0x3f, 0xe7, 0x2d, 0xef, 0xcf, 0xbc, 0xe5, 0x7d, 0x3c,
0xfc, 0xd7, 0x5f, 0xcc, 0xb3, 0x65, 0xf5, 0x61, 0xe5, 0xbc, 0xe2, 0xfe, 0x1e, 0x4f, 0xfe, 0x06,
0x00, 0x00, 0xff, 0xff, 0x7f, 0x24, 0x6f, 0x2e, 0xb1, 0x04, 0x00, 0x00,
}
func (m *RuncOptions) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -141,6 +298,9 @@ func (m *RuncOptions) MarshalTo(dAtA []byte) (int, error) {
}
i++
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -268,6 +428,9 @@ func (m *CreateOptions) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintRunc(dAtA, i, uint64(len(m.CriuImagePath)))
i += copy(dAtA[i:], m.CriuImagePath)
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -369,6 +532,9 @@ func (m *CheckpointOptions) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintRunc(dAtA, i, uint64(len(m.ImagePath)))
i += copy(dAtA[i:], m.ImagePath)
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -393,6 +559,9 @@ func (m *ProcessDetails) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintRunc(dAtA, i, uint64(len(m.ExecID)))
i += copy(dAtA[i:], m.ExecID)
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -406,6 +575,9 @@ func encodeVarintRunc(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *RuncOptions) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.Runtime)
@@ -423,10 +595,16 @@ func (m *RuncOptions) Size() (n int) {
if m.SystemdCgroup {
n += 2
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *CreateOptions) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.NoPivotRoot {
@@ -475,10 +653,16 @@ func (m *CreateOptions) Size() (n int) {
if l > 0 {
n += 1 + l + sovRunc(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *CheckpointOptions) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Exit {
@@ -514,16 +698,25 @@ func (m *CheckpointOptions) Size() (n int) {
if l > 0 {
n += 1 + l + sovRunc(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *ProcessDetails) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.ExecID)
if l > 0 {
n += 1 + l + sovRunc(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -549,6 +742,7 @@ func (this *RuncOptions) String() string {
`RuntimeRoot:` + fmt.Sprintf("%v", this.RuntimeRoot) + `,`,
`CriuPath:` + fmt.Sprintf("%v", this.CriuPath) + `,`,
`SystemdCgroup:` + fmt.Sprintf("%v", this.SystemdCgroup) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -571,6 +765,7 @@ func (this *CreateOptions) String() string {
`IoGid:` + fmt.Sprintf("%v", this.IoGid) + `,`,
`CriuWorkPath:` + fmt.Sprintf("%v", this.CriuWorkPath) + `,`,
`CriuImagePath:` + fmt.Sprintf("%v", this.CriuImagePath) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -589,6 +784,7 @@ func (this *CheckpointOptions) String() string {
`CgroupsMode:` + fmt.Sprintf("%v", this.CgroupsMode) + `,`,
`WorkPath:` + fmt.Sprintf("%v", this.WorkPath) + `,`,
`ImagePath:` + fmt.Sprintf("%v", this.ImagePath) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -599,6 +795,7 @@ func (this *ProcessDetails) String() string {
}
s := strings.Join([]string{`&ProcessDetails{`,
`ExecID:` + fmt.Sprintf("%v", this.ExecID) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -626,7 +823,7 @@ func (m *RuncOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -654,7 +851,7 @@ func (m *RuncOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -664,6 +861,9 @@ func (m *RuncOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -683,7 +883,7 @@ func (m *RuncOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -693,6 +893,9 @@ func (m *RuncOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -712,7 +915,7 @@ func (m *RuncOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -722,6 +925,9 @@ func (m *RuncOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -741,7 +947,7 @@ func (m *RuncOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -756,9 +962,13 @@ func (m *RuncOptions) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthRunc
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthRunc
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -783,7 +993,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -811,7 +1021,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -831,7 +1041,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -851,7 +1061,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -871,7 +1081,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -891,7 +1101,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -911,7 +1121,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -921,6 +1131,9 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -940,7 +1153,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -950,6 +1163,9 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -969,7 +1185,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -989,7 +1205,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -999,6 +1215,9 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1018,7 +1237,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.IoUid |= (uint32(b) & 0x7F) << shift
m.IoUid |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1037,7 +1256,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.IoGid |= (uint32(b) & 0x7F) << shift
m.IoGid |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1056,7 +1275,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1066,6 +1285,9 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1085,7 +1307,7 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1095,6 +1317,9 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1109,9 +1334,13 @@ func (m *CreateOptions) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthRunc
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthRunc
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -1136,7 +1365,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1164,7 +1393,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1184,7 +1413,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1204,7 +1433,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1224,7 +1453,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1244,7 +1473,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1264,7 +1493,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1274,6 +1503,9 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1293,7 +1525,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1303,6 +1535,9 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1322,7 +1557,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1332,6 +1567,9 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1351,7 +1589,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1361,6 +1599,9 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1375,9 +1616,13 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthRunc
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthRunc
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -1402,7 +1647,7 @@ func (m *ProcessDetails) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1430,7 +1675,7 @@ func (m *ProcessDetails) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1440,6 +1685,9 @@ func (m *ProcessDetails) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthRunc
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthRunc
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1454,9 +1702,13 @@ func (m *ProcessDetails) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthRunc
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthRunc
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -1520,10 +1772,13 @@ func skipRunc(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthRunc
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthRunc
}
return iNdEx, nil
case 3:
for {
@@ -1552,6 +1807,9 @@ func skipRunc(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthRunc
}
}
return iNdEx, nil
case 4:
@@ -1570,49 +1828,3 @@ var (
ErrInvalidLengthRunc = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowRunc = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/runtime/linux/runctypes/runc.proto", fileDescriptorRunc)
}
var fileDescriptorRunc = []byte{
// 604 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xdc, 0x94, 0xcf, 0x6e, 0xd3, 0x40,
0x10, 0xc6, 0xeb, 0xfe, 0x49, 0x9c, 0x49, 0xd2, 0xc2, 0x42, 0x25, 0xd3, 0xaa, 0x69, 0x08, 0x7f,
0x14, 0x2e, 0xa9, 0x04, 0xe2, 0xc4, 0xad, 0x29, 0x42, 0x15, 0x50, 0x2a, 0x43, 0x05, 0x42, 0x48,
0x2b, 0x77, 0x3d, 0x24, 0xab, 0xc4, 0x3b, 0x96, 0x77, 0x4d, 0x92, 0x1b, 0x4f, 0xc0, 0x0b, 0xf1,
0x02, 0x3d, 0x21, 0x8e, 0x9c, 0x10, 0xcd, 0x93, 0xa0, 0x5d, 0xc7, 0x69, 0xcf, 0x1c, 0xb9, 0xcd,
0xfc, 0xe6, 0xb3, 0x67, 0xf4, 0x7d, 0xb2, 0xa1, 0x3f, 0x90, 0x66, 0x98, 0x9f, 0xf7, 0x04, 0x25,
0x07, 0x82, 0x94, 0x89, 0xa4, 0xc2, 0x2c, 0xbe, 0x5e, 0x66, 0xb9, 0x32, 0x32, 0xc1, 0x83, 0xb1,
0x54, 0xf9, 0xd4, 0x76, 0xc2, 0xcc, 0x52, 0xd4, 0xae, 0xea, 0xa5, 0x19, 0x19, 0x62, 0xdb, 0x57,
0xf2, 0x9e, 0x93, 0xf5, 0xec, 0x70, 0xe7, 0xf6, 0x80, 0x06, 0xe4, 0x14, 0x07, 0xb6, 0x2a, 0xc4,
0x9d, 0x6f, 0x1e, 0xd4, 0xc3, 0x5c, 0x89, 0x37, 0xa9, 0x91, 0xa4, 0x34, 0x0b, 0xa0, 0xba, 0x58,
0x11, 0x78, 0x6d, 0xaf, 0x5b, 0x0b, 0xcb, 0x96, 0xdd, 0x85, 0xc6, 0xa2, 0xe4, 0x19, 0x91, 0x09,
0x56, 0xdd, 0xb8, 0xbe, 0x60, 0x21, 0x91, 0x61, 0xbb, 0x50, 0x13, 0x99, 0xcc, 0x79, 0x1a, 0x99,
0x61, 0xb0, 0xe6, 0xe6, 0xbe, 0x05, 0xa7, 0x91, 0x19, 0xb2, 0x07, 0xb0, 0xa9, 0x67, 0xda, 0x60,
0x12, 0x73, 0x31, 0xc8, 0x28, 0x4f, 0x83, 0xf5, 0xb6, 0xd7, 0xf5, 0xc3, 0xe6, 0x82, 0xf6, 0x1d,
0xec, 0xfc, 0x58, 0x83, 0x66, 0x3f, 0xc3, 0xc8, 0x60, 0x79, 0x52, 0x07, 0x9a, 0x8a, 0x78, 0x2a,
0xbf, 0x90, 0x29, 0x36, 0x7b, 0xee, 0xb9, 0xba, 0xa2, 0x53, 0xcb, 0xdc, 0xe6, 0x3b, 0xe0, 0x53,
0x8a, 0x8a, 0x1b, 0x91, 0xba, 0xc3, 0xfc, 0xb0, 0x6a, 0xfb, 0x77, 0x22, 0x65, 0x8f, 0x61, 0x1b,
0xa7, 0x06, 0x33, 0x15, 0x8d, 0x79, 0xae, 0xe4, 0x94, 0x6b, 0x12, 0x23, 0x34, 0xda, 0x1d, 0xe8,
0x87, 0xb7, 0xca, 0xe1, 0x99, 0x92, 0xd3, 0xb7, 0xc5, 0x88, 0xed, 0x80, 0x6f, 0x30, 0x4b, 0xa4,
0x8a, 0xc6, 0x8b, 0x2b, 0x97, 0x3d, 0xdb, 0x03, 0xf8, 0x2c, 0xc7, 0xc8, 0xc7, 0x24, 0x46, 0x3a,
0xd8, 0x70, 0xd3, 0x9a, 0x25, 0xaf, 0x2c, 0x60, 0x8f, 0xe0, 0x06, 0x26, 0xa9, 0x99, 0x71, 0x15,
0x25, 0xa8, 0xd3, 0x48, 0xa0, 0x0e, 0x2a, 0xed, 0xb5, 0x6e, 0x2d, 0xdc, 0x72, 0xfc, 0x64, 0x89,
0xad, 0xa3, 0x85, 0x13, 0x9a, 0x27, 0x14, 0x63, 0x50, 0x2d, 0x1c, 0x5d, 0xb0, 0xd7, 0x14, 0x23,
0xbb, 0x0f, 0x9b, 0x8a, 0xb8, 0xc2, 0x09, 0x1f, 0xe1, 0x2c, 0x93, 0x6a, 0x10, 0xf8, 0x6e, 0x61,
0x43, 0xd1, 0x09, 0x4e, 0x5e, 0x16, 0x8c, 0xed, 0x43, 0x5d, 0x0f, 0x65, 0x52, 0xfa, 0x5a, 0x73,
0xef, 0x01, 0x8b, 0x0a, 0x53, 0xd9, 0x36, 0x54, 0x24, 0xf1, 0x5c, 0xc6, 0x01, 0xb4, 0xbd, 0x6e,
0x33, 0xdc, 0x90, 0x74, 0x26, 0xe3, 0x05, 0x1e, 0xc8, 0x38, 0xa8, 0x97, 0xf8, 0x85, 0x8c, 0xed,
0x52, 0x17, 0xe3, 0x84, 0xb2, 0x51, 0x91, 0x65, 0xc3, 0xbd, 0xb1, 0x61, 0xe9, 0x7b, 0xca, 0x46,
0x2e, 0xcf, 0x87, 0xb0, 0xe5, 0x54, 0x32, 0x89, 0x06, 0x58, 0xc8, 0x9a, 0x4e, 0xd6, 0xb4, 0xf8,
0xd8, 0x52, 0xab, 0xeb, 0x7c, 0x5f, 0x85, 0x9b, 0xfd, 0x21, 0x8a, 0x51, 0x4a, 0x52, 0x99, 0x32,
0x54, 0x06, 0xeb, 0x38, 0x95, 0x65, 0x96, 0xae, 0xfe, 0x6f, 0x43, 0xdc, 0x85, 0xda, 0x95, 0x95,
0x7e, 0xf1, 0x59, 0x4c, 0x4a, 0x1b, 0xf7, 0x00, 0xae, 0x39, 0x58, 0x44, 0x57, 0x93, 0x4b, 0xf7,
0x9e, 0xc2, 0xe6, 0x69, 0x46, 0x02, 0xb5, 0x3e, 0x42, 0x13, 0xc9, 0xb1, 0x66, 0xf7, 0xa0, 0x8a,
0x53, 0x14, 0x5c, 0xc6, 0xc5, 0x17, 0x7a, 0x08, 0xf3, 0xdf, 0xfb, 0x95, 0xe7, 0x53, 0x14, 0xc7,
0x47, 0x61, 0xc5, 0x8e, 0x8e, 0xe3, 0xc3, 0x4f, 0x17, 0x97, 0xad, 0x95, 0x5f, 0x97, 0xad, 0x95,
0xaf, 0xf3, 0x96, 0x77, 0x31, 0x6f, 0x79, 0x3f, 0xe7, 0x2d, 0xef, 0xcf, 0xbc, 0xe5, 0x7d, 0x3c,
0xfc, 0xd7, 0x5f, 0xcc, 0xb3, 0x65, 0xf5, 0x61, 0xe5, 0xbc, 0xe2, 0xfe, 0x1e, 0x4f, 0xfe, 0x06,
0x00, 0x00, 0xff, 0xff, 0x7f, 0x24, 0x6f, 0x2e, 0xb1, 0x04, 0x00, 0x00,
}
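The regenerated unmarshalers above all share the same varint decode loop; the regeneration rewrites (uint64(b) & 0x7F) << shift as uint64(b&0x7F) << shift and adds the postIndex < 0 and (iNdEx + skippy) < 0 guards against index overflow. A minimal standalone sketch of that loop (not part of the vendored code; the helper name and sample bytes are illustrative):

package main

import (
	"errors"
	"fmt"
)

// decodeVarint mirrors the loop used by the generated Unmarshal methods:
// mask the continuation byte first, then widen and shift it into place.
func decodeVarint(dAtA []byte) (uint64, int, error) {
	var x uint64
	iNdEx := 0
	for shift := uint(0); ; shift += 7 {
		if shift >= 64 {
			return 0, 0, errors.New("proto: integer overflow")
		}
		if iNdEx >= len(dAtA) {
			return 0, 0, errors.New("unexpected EOF")
		}
		b := dAtA[iNdEx]
		iNdEx++
		x |= uint64(b&0x7F) << shift
		if b < 0x80 {
			break
		}
	}
	return x, iNdEx, nil
}

func main() {
	// 300 is encoded as 0xAC 0x02 in protobuf varint form.
	v, n, err := decodeVarint([]byte{0xAC, 0x02})
	fmt.Println(v, n, err) // 300 2 <nil>
}

Masking before widening and rejecting negative indices after the decoded length is added helps keep a hostile length prefix from wrapping an int index on 32-bit builds.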


@@ -1,29 +1,16 @@
// Code generated by protoc-gen-gogo. DO NOT EDIT.
// source: github.com/containerd/containerd/runtime/v2/runc/options/oci.proto
/*
Package options is a generated protocol buffer package.
It is generated from these files:
github.com/containerd/containerd/runtime/v2/runc/options/oci.proto
It has these top-level messages:
Options
CheckpointOptions
ProcessDetails
*/
package options
import proto "github.com/gogo/protobuf/proto"
import fmt "fmt"
import math "math"
// skipping weak import gogoproto "github.com/gogo/protobuf/gogoproto"
import strings "strings"
import reflect "reflect"
import io "io"
import (
fmt "fmt"
proto "github.com/gogo/protobuf/proto"
io "io"
math "math"
reflect "reflect"
strings "strings"
)
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
@@ -58,12 +45,43 @@ type Options struct {
// criu image path
CriuImagePath string `protobuf:"bytes,10,opt,name=criu_image_path,json=criuImagePath,proto3" json:"criu_image_path,omitempty"`
// criu work path
CriuWorkPath string `protobuf:"bytes,11,opt,name=criu_work_path,json=criuWorkPath,proto3" json:"criu_work_path,omitempty"`
CriuWorkPath string `protobuf:"bytes,11,opt,name=criu_work_path,json=criuWorkPath,proto3" json:"criu_work_path,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *Options) Reset() { *m = Options{} }
func (*Options) ProtoMessage() {}
func (*Options) Descriptor() ([]byte, []int) { return fileDescriptorOci, []int{0} }
func (m *Options) Reset() { *m = Options{} }
func (*Options) ProtoMessage() {}
func (*Options) Descriptor() ([]byte, []int) {
return fileDescriptor_4e5440d739e9a863, []int{0}
}
func (m *Options) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *Options) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_Options.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *Options) XXX_Merge(src proto.Message) {
xxx_messageInfo_Options.Merge(m, src)
}
func (m *Options) XXX_Size() int {
return m.Size()
}
func (m *Options) XXX_DiscardUnknown() {
xxx_messageInfo_Options.DiscardUnknown(m)
}
var xxx_messageInfo_Options proto.InternalMessageInfo
type CheckpointOptions struct {
// exit the container after a checkpoint
@@ -77,33 +95,141 @@ type CheckpointOptions struct {
// allow checkpointing of file locks
FileLocks bool `protobuf:"varint,5,opt,name=file_locks,json=fileLocks,proto3" json:"file_locks,omitempty"`
// restore provided namespaces as empty namespaces
EmptyNamespaces []string `protobuf:"bytes,6,rep,name=empty_namespaces,json=emptyNamespaces" json:"empty_namespaces,omitempty"`
EmptyNamespaces []string `protobuf:"bytes,6,rep,name=empty_namespaces,json=emptyNamespaces,proto3" json:"empty_namespaces,omitempty"`
// set the cgroups mode, soft, full, strict
CgroupsMode string `protobuf:"bytes,7,opt,name=cgroups_mode,json=cgroupsMode,proto3" json:"cgroups_mode,omitempty"`
// checkpoint image path
ImagePath string `protobuf:"bytes,8,opt,name=image_path,json=imagePath,proto3" json:"image_path,omitempty"`
// checkpoint work path
WorkPath string `protobuf:"bytes,9,opt,name=work_path,json=workPath,proto3" json:"work_path,omitempty"`
WorkPath string `protobuf:"bytes,9,opt,name=work_path,json=workPath,proto3" json:"work_path,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *CheckpointOptions) Reset() { *m = CheckpointOptions{} }
func (*CheckpointOptions) ProtoMessage() {}
func (*CheckpointOptions) Descriptor() ([]byte, []int) { return fileDescriptorOci, []int{1} }
func (m *CheckpointOptions) Reset() { *m = CheckpointOptions{} }
func (*CheckpointOptions) ProtoMessage() {}
func (*CheckpointOptions) Descriptor() ([]byte, []int) {
return fileDescriptor_4e5440d739e9a863, []int{1}
}
func (m *CheckpointOptions) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *CheckpointOptions) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_CheckpointOptions.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *CheckpointOptions) XXX_Merge(src proto.Message) {
xxx_messageInfo_CheckpointOptions.Merge(m, src)
}
func (m *CheckpointOptions) XXX_Size() int {
return m.Size()
}
func (m *CheckpointOptions) XXX_DiscardUnknown() {
xxx_messageInfo_CheckpointOptions.DiscardUnknown(m)
}
var xxx_messageInfo_CheckpointOptions proto.InternalMessageInfo
type ProcessDetails struct {
// exec process id if the process is managed by a shim
ExecID string `protobuf:"bytes,1,opt,name=exec_id,json=execId,proto3" json:"exec_id,omitempty"`
ExecID string `protobuf:"bytes,1,opt,name=exec_id,json=execId,proto3" json:"exec_id,omitempty"`
XXX_NoUnkeyedLiteral struct{} `json:"-"`
XXX_unrecognized []byte `json:"-"`
XXX_sizecache int32 `json:"-"`
}
func (m *ProcessDetails) Reset() { *m = ProcessDetails{} }
func (*ProcessDetails) ProtoMessage() {}
func (*ProcessDetails) Descriptor() ([]byte, []int) { return fileDescriptorOci, []int{2} }
func (m *ProcessDetails) Reset() { *m = ProcessDetails{} }
func (*ProcessDetails) ProtoMessage() {}
func (*ProcessDetails) Descriptor() ([]byte, []int) {
return fileDescriptor_4e5440d739e9a863, []int{2}
}
func (m *ProcessDetails) XXX_Unmarshal(b []byte) error {
return m.Unmarshal(b)
}
func (m *ProcessDetails) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
if deterministic {
return xxx_messageInfo_ProcessDetails.Marshal(b, m, deterministic)
} else {
b = b[:cap(b)]
n, err := m.MarshalTo(b)
if err != nil {
return nil, err
}
return b[:n], nil
}
}
func (m *ProcessDetails) XXX_Merge(src proto.Message) {
xxx_messageInfo_ProcessDetails.Merge(m, src)
}
func (m *ProcessDetails) XXX_Size() int {
return m.Size()
}
func (m *ProcessDetails) XXX_DiscardUnknown() {
xxx_messageInfo_ProcessDetails.DiscardUnknown(m)
}
var xxx_messageInfo_ProcessDetails proto.InternalMessageInfo
func init() {
proto.RegisterType((*Options)(nil), "containerd.runc.v1.Options")
proto.RegisterType((*CheckpointOptions)(nil), "containerd.runc.v1.CheckpointOptions")
proto.RegisterType((*ProcessDetails)(nil), "containerd.runc.v1.ProcessDetails")
}
func init() {
proto.RegisterFile("github.com/containerd/containerd/runtime/v2/runc/options/oci.proto", fileDescriptor_4e5440d739e9a863)
}
var fileDescriptor_4e5440d739e9a863 = []byte{
// 587 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x93, 0xcf, 0x6e, 0xd3, 0x40,
0x10, 0x87, 0xeb, 0xfe, 0x49, 0xec, 0x4d, 0x93, 0xc2, 0x42, 0x25, 0xd3, 0x8a, 0x34, 0x94, 0x82,
0xc2, 0x25, 0x11, 0x45, 0x9c, 0xb8, 0xa0, 0xb6, 0x08, 0x55, 0x40, 0xa9, 0x0c, 0x15, 0xa8, 0x97,
0x95, 0xbb, 0x1e, 0x9c, 0x51, 0xe2, 0x1d, 0xcb, 0xbb, 0x69, 0xd2, 0x1b, 0xef, 0xc5, 0x0b, 0xf4,
0xc8, 0x91, 0x13, 0xa2, 0xb9, 0xf1, 0x16, 0x68, 0xd7, 0x4e, 0xdb, 0x33, 0x27, 0xcf, 0x7e, 0xf3,
0xf3, 0x78, 0xfd, 0xad, 0x96, 0xed, 0xa5, 0x68, 0x06, 0xe3, 0xb3, 0x9e, 0xa4, 0xac, 0x2f, 0x49,
0x99, 0x18, 0x15, 0x14, 0xc9, 0xed, 0xb2, 0x18, 0x2b, 0x83, 0x19, 0xf4, 0xcf, 0x77, 0x6d, 0x29,
0xfb, 0x94, 0x1b, 0x24, 0xa5, 0xfb, 0x24, 0xb1, 0x97, 0x17, 0x64, 0x88, 0xf3, 0x9b, 0x74, 0xcf,
0x46, 0x7a, 0xe7, 0xcf, 0x37, 0xee, 0xa7, 0x94, 0x92, 0x6b, 0xf7, 0x6d, 0x55, 0x26, 0xb7, 0xff,
0x2e, 0xb2, 0xfa, 0xc7, 0xf2, 0x7d, 0xbe, 0xcd, 0x9a, 0x8a, 0x44, 0x8e, 0xe7, 0x64, 0x44, 0x41,
0x64, 0x42, 0xaf, 0xe3, 0x75, 0xfd, 0xa8, 0xa1, 0xe8, 0xd8, 0xb2, 0x88, 0xc8, 0xf0, 0x1d, 0xd6,
0x52, 0x24, 0x14, 0x4c, 0xc4, 0x10, 0x2e, 0x0a, 0x54, 0x69, 0xb8, 0xe8, 0x42, 0xab, 0x8a, 0x8e,
0x60, 0xf2, 0xae, 0x64, 0x7c, 0x8b, 0x35, 0xf4, 0x00, 0x33, 0x21, 0xd3, 0x82, 0xc6, 0x79, 0xb8,
0xd4, 0xf1, 0xba, 0x41, 0xc4, 0x2c, 0xda, 0x77, 0x84, 0xaf, 0xb3, 0x1a, 0x92, 0x18, 0x63, 0x12,
0x2e, 0x77, 0xbc, 0x6e, 0x33, 0x5a, 0x41, 0x3a, 0xc1, 0xa4, 0xc2, 0x29, 0x26, 0xe1, 0xca, 0x1c,
0xbf, 0xc5, 0xc4, 0x8e, 0x3b, 0x43, 0x15, 0x17, 0x17, 0x42, 0xc5, 0x19, 0x84, 0xb5, 0x72, 0x5c,
0x89, 0x8e, 0xe2, 0x0c, 0x38, 0x67, 0xcb, 0x6e, 0xc3, 0x75, 0xd7, 0x71, 0x35, 0xdf, 0x64, 0x81,
0x2c, 0x70, 0x2c, 0xf2, 0xd8, 0x0c, 0x42, 0xdf, 0x35, 0x7c, 0x0b, 0x8e, 0x63, 0x33, 0xe0, 0x4f,
0x58, 0x4b, 0x5f, 0x68, 0x03, 0x59, 0x32, 0xdf, 0x63, 0xe0, 0x7e, 0xa3, 0x59, 0xd1, 0x6a, 0x9b,
0x4f, 0xd9, 0x9a, 0x9b, 0x81, 0x59, 0x9c, 0x42, 0x39, 0x89, 0xb9, 0x49, 0x4d, 0x8b, 0x0f, 0x2d,
0x75, 0xe3, 0x76, 0x58, 0xcb, 0xe5, 0x26, 0x54, 0x0c, 0xcb, 0x58, 0xc3, 0xc5, 0x56, 0x2d, 0xfd,
0x42, 0xc5, 0xd0, 0xa6, 0xb6, 0x7f, 0x2c, 0xb2, 0xbb, 0xfb, 0x03, 0x90, 0xc3, 0x9c, 0x50, 0x99,
0xb9, 0x75, 0xce, 0x96, 0x61, 0x8a, 0x73, 0xd9, 0xae, 0xe6, 0x0f, 0x98, 0x4f, 0x39, 0x28, 0x61,
0x64, 0x5e, 0xf9, 0xad, 0xdb, 0xf5, 0x67, 0x99, 0xf3, 0x5d, 0xb6, 0x0e, 0x53, 0x03, 0x85, 0x8a,
0x47, 0x62, 0xac, 0x70, 0x2a, 0x34, 0xc9, 0x21, 0x18, 0xed, 0x24, 0xfb, 0xd1, 0xbd, 0x79, 0xf3,
0x44, 0xe1, 0xf4, 0x53, 0xd9, 0xe2, 0x1b, 0xcc, 0x37, 0x50, 0x64, 0xa8, 0xe2, 0x91, 0xf3, 0xed,
0x47, 0xd7, 0x6b, 0xfe, 0x90, 0xb1, 0x6f, 0x38, 0x02, 0x31, 0x22, 0x39, 0xd4, 0x4e, 0xbb, 0x1f,
0x05, 0x96, 0xbc, 0xb7, 0x80, 0x3f, 0x63, 0x77, 0x20, 0xcb, 0x4d, 0x69, 0x5e, 0xe7, 0xb1, 0x04,
0x1d, 0xd6, 0x3a, 0x4b, 0xdd, 0x20, 0x5a, 0x73, 0xfc, 0xe8, 0x1a, 0xf3, 0x47, 0x6c, 0xb5, 0x74,
0xa9, 0x45, 0x46, 0x09, 0x54, 0x87, 0xd1, 0xa8, 0xd8, 0x07, 0x4a, 0xc0, 0x7e, 0xec, 0x96, 0xca,
0xf2, 0x50, 0x02, 0xbc, 0xd6, 0xb8, 0xc9, 0x82, 0x1b, 0x83, 0x41, 0x79, 0x64, 0x93, 0xb9, 0xbd,
0x97, 0xac, 0x75, 0x5c, 0x90, 0x04, 0xad, 0x0f, 0xc0, 0xc4, 0x38, 0xd2, 0xfc, 0x31, 0xab, 0xc3,
0x14, 0xa4, 0xc0, 0xc4, 0xc9, 0x0b, 0xf6, 0xd8, 0xec, 0xf7, 0x56, 0xed, 0xcd, 0x14, 0xe4, 0xe1,
0x41, 0x54, 0xb3, 0xad, 0xc3, 0x64, 0xef, 0xf4, 0xf2, 0xaa, 0xbd, 0xf0, 0xeb, 0xaa, 0xbd, 0xf0,
0x7d, 0xd6, 0xf6, 0x2e, 0x67, 0x6d, 0xef, 0xe7, 0xac, 0xed, 0xfd, 0x99, 0xb5, 0xbd, 0xd3, 0xd7,
0xff, 0x7b, 0xd1, 0x5e, 0x55, 0xcf, 0xaf, 0x0b, 0x67, 0x35, 0x77, 0x8b, 0x5e, 0xfc, 0x0b, 0x00,
0x00, 0xff, 0xff, 0x90, 0x50, 0x79, 0xf2, 0xb5, 0x03, 0x00, 0x00,
}
func (m *Options) Marshal() (dAtA []byte, err error) {
size := m.Size()
dAtA = make([]byte, size)
@@ -195,6 +321,9 @@ func (m *Options) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintOci(dAtA, i, uint64(len(m.CriuWorkPath)))
i += copy(dAtA[i:], m.CriuWorkPath)
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -296,6 +425,9 @@ func (m *CheckpointOptions) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintOci(dAtA, i, uint64(len(m.WorkPath)))
i += copy(dAtA[i:], m.WorkPath)
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -320,6 +452,9 @@ func (m *ProcessDetails) MarshalTo(dAtA []byte) (int, error) {
i = encodeVarintOci(dAtA, i, uint64(len(m.ExecID)))
i += copy(dAtA[i:], m.ExecID)
}
if m.XXX_unrecognized != nil {
i += copy(dAtA[i:], m.XXX_unrecognized)
}
return i, nil
}
@@ -333,6 +468,9 @@ func encodeVarintOci(dAtA []byte, offset int, v uint64) int {
return offset + 1
}
func (m *Options) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.NoPivotRoot {
@@ -374,10 +512,16 @@ func (m *Options) Size() (n int) {
if l > 0 {
n += 1 + l + sovOci(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *CheckpointOptions) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
if m.Exit {
@@ -413,16 +557,25 @@ func (m *CheckpointOptions) Size() (n int) {
if l > 0 {
n += 1 + l + sovOci(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
func (m *ProcessDetails) Size() (n int) {
if m == nil {
return 0
}
var l int
_ = l
l = len(m.ExecID)
if l > 0 {
n += 1 + l + sovOci(uint64(l))
}
if m.XXX_unrecognized != nil {
n += len(m.XXX_unrecognized)
}
return n
}
@@ -455,6 +608,7 @@ func (this *Options) String() string {
`SystemdCgroup:` + fmt.Sprintf("%v", this.SystemdCgroup) + `,`,
`CriuImagePath:` + fmt.Sprintf("%v", this.CriuImagePath) + `,`,
`CriuWorkPath:` + fmt.Sprintf("%v", this.CriuWorkPath) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -473,6 +627,7 @@ func (this *CheckpointOptions) String() string {
`CgroupsMode:` + fmt.Sprintf("%v", this.CgroupsMode) + `,`,
`ImagePath:` + fmt.Sprintf("%v", this.ImagePath) + `,`,
`WorkPath:` + fmt.Sprintf("%v", this.WorkPath) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -483,6 +638,7 @@ func (this *ProcessDetails) String() string {
}
s := strings.Join([]string{`&ProcessDetails{`,
`ExecID:` + fmt.Sprintf("%v", this.ExecID) + `,`,
`XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
`}`,
}, "")
return s
@@ -510,7 +666,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -538,7 +694,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -558,7 +714,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -578,7 +734,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -588,6 +744,9 @@ func (m *Options) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -607,7 +766,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.IoUid |= (uint32(b) & 0x7F) << shift
m.IoUid |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -626,7 +785,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
m.IoGid |= (uint32(b) & 0x7F) << shift
m.IoGid |= uint32(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -645,7 +804,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -655,6 +814,9 @@ func (m *Options) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -674,7 +836,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -684,6 +846,9 @@ func (m *Options) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -703,7 +868,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -713,6 +878,9 @@ func (m *Options) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -732,7 +900,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -752,7 +920,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -762,6 +930,9 @@ func (m *Options) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -781,7 +952,7 @@ func (m *Options) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -791,6 +962,9 @@ func (m *Options) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -805,9 +979,13 @@ func (m *Options) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthOci
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthOci
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -832,7 +1010,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -860,7 +1038,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -880,7 +1058,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -900,7 +1078,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -920,7 +1098,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -940,7 +1118,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
v |= (int(b) & 0x7F) << shift
v |= int(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -960,7 +1138,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -970,6 +1148,9 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -989,7 +1170,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -999,6 +1180,9 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1018,7 +1202,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1028,6 +1212,9 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1047,7 +1234,7 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1057,6 +1244,9 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1071,9 +1261,13 @@ func (m *CheckpointOptions) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthOci
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthOci
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -1098,7 +1292,7 @@ func (m *ProcessDetails) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
wire |= (uint64(b) & 0x7F) << shift
wire |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1126,7 +1320,7 @@ func (m *ProcessDetails) Unmarshal(dAtA []byte) error {
}
b := dAtA[iNdEx]
iNdEx++
stringLen |= (uint64(b) & 0x7F) << shift
stringLen |= uint64(b&0x7F) << shift
if b < 0x80 {
break
}
@@ -1136,6 +1330,9 @@ func (m *ProcessDetails) Unmarshal(dAtA []byte) error {
return ErrInvalidLengthOci
}
postIndex := iNdEx + intStringLen
if postIndex < 0 {
return ErrInvalidLengthOci
}
if postIndex > l {
return io.ErrUnexpectedEOF
}
@@ -1150,9 +1347,13 @@ func (m *ProcessDetails) Unmarshal(dAtA []byte) error {
if skippy < 0 {
return ErrInvalidLengthOci
}
if (iNdEx + skippy) < 0 {
return ErrInvalidLengthOci
}
if (iNdEx + skippy) > l {
return io.ErrUnexpectedEOF
}
m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
iNdEx += skippy
}
}
@@ -1216,10 +1417,13 @@ func skipOci(dAtA []byte) (n int, err error) {
break
}
}
iNdEx += length
if length < 0 {
return 0, ErrInvalidLengthOci
}
iNdEx += length
if iNdEx < 0 {
return 0, ErrInvalidLengthOci
}
return iNdEx, nil
case 3:
for {
@@ -1248,6 +1452,9 @@ func skipOci(dAtA []byte) (n int, err error) {
return 0, err
}
iNdEx = start + next
if iNdEx < 0 {
return 0, ErrInvalidLengthOci
}
}
return iNdEx, nil
case 4:
@@ -1266,48 +1473,3 @@ var (
ErrInvalidLengthOci = fmt.Errorf("proto: negative length found during unmarshaling")
ErrIntOverflowOci = fmt.Errorf("proto: integer overflow")
)
func init() {
proto.RegisterFile("github.com/containerd/containerd/runtime/v2/runc/options/oci.proto", fileDescriptorOci)
}
var fileDescriptorOci = []byte{
// 587 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x9c, 0x93, 0xcf, 0x6e, 0xd3, 0x40,
0x10, 0x87, 0xeb, 0xfe, 0x49, 0xec, 0x4d, 0x93, 0xc2, 0x42, 0x25, 0xd3, 0x8a, 0x34, 0x94, 0x82,
0xc2, 0x25, 0x11, 0x45, 0x9c, 0xb8, 0xa0, 0xb6, 0x08, 0x55, 0x40, 0xa9, 0x0c, 0x15, 0xa8, 0x97,
0x95, 0xbb, 0x1e, 0x9c, 0x51, 0xe2, 0x1d, 0xcb, 0xbb, 0x69, 0xd2, 0x1b, 0xef, 0xc5, 0x0b, 0xf4,
0xc8, 0x91, 0x13, 0xa2, 0xb9, 0xf1, 0x16, 0x68, 0xd7, 0x4e, 0xdb, 0x33, 0x27, 0xcf, 0x7e, 0xf3,
0xf3, 0x78, 0xfd, 0xad, 0x96, 0xed, 0xa5, 0x68, 0x06, 0xe3, 0xb3, 0x9e, 0xa4, 0xac, 0x2f, 0x49,
0x99, 0x18, 0x15, 0x14, 0xc9, 0xed, 0xb2, 0x18, 0x2b, 0x83, 0x19, 0xf4, 0xcf, 0x77, 0x6d, 0x29,
0xfb, 0x94, 0x1b, 0x24, 0xa5, 0xfb, 0x24, 0xb1, 0x97, 0x17, 0x64, 0x88, 0xf3, 0x9b, 0x74, 0xcf,
0x46, 0x7a, 0xe7, 0xcf, 0x37, 0xee, 0xa7, 0x94, 0x92, 0x6b, 0xf7, 0x6d, 0x55, 0x26, 0xb7, 0xff,
0x2e, 0xb2, 0xfa, 0xc7, 0xf2, 0x7d, 0xbe, 0xcd, 0x9a, 0x8a, 0x44, 0x8e, 0xe7, 0x64, 0x44, 0x41,
0x64, 0x42, 0xaf, 0xe3, 0x75, 0xfd, 0xa8, 0xa1, 0xe8, 0xd8, 0xb2, 0x88, 0xc8, 0xf0, 0x1d, 0xd6,
0x52, 0x24, 0x14, 0x4c, 0xc4, 0x10, 0x2e, 0x0a, 0x54, 0x69, 0xb8, 0xe8, 0x42, 0xab, 0x8a, 0x8e,
0x60, 0xf2, 0xae, 0x64, 0x7c, 0x8b, 0x35, 0xf4, 0x00, 0x33, 0x21, 0xd3, 0x82, 0xc6, 0x79, 0xb8,
0xd4, 0xf1, 0xba, 0x41, 0xc4, 0x2c, 0xda, 0x77, 0x84, 0xaf, 0xb3, 0x1a, 0x92, 0x18, 0x63, 0x12,
0x2e, 0x77, 0xbc, 0x6e, 0x33, 0x5a, 0x41, 0x3a, 0xc1, 0xa4, 0xc2, 0x29, 0x26, 0xe1, 0xca, 0x1c,
0xbf, 0xc5, 0xc4, 0x8e, 0x3b, 0x43, 0x15, 0x17, 0x17, 0x42, 0xc5, 0x19, 0x84, 0xb5, 0x72, 0x5c,
0x89, 0x8e, 0xe2, 0x0c, 0x38, 0x67, 0xcb, 0x6e, 0xc3, 0x75, 0xd7, 0x71, 0x35, 0xdf, 0x64, 0x81,
0x2c, 0x70, 0x2c, 0xf2, 0xd8, 0x0c, 0x42, 0xdf, 0x35, 0x7c, 0x0b, 0x8e, 0x63, 0x33, 0xe0, 0x4f,
0x58, 0x4b, 0x5f, 0x68, 0x03, 0x59, 0x32, 0xdf, 0x63, 0xe0, 0x7e, 0xa3, 0x59, 0xd1, 0x6a, 0x9b,
0x4f, 0xd9, 0x9a, 0x9b, 0x81, 0x59, 0x9c, 0x42, 0x39, 0x89, 0xb9, 0x49, 0x4d, 0x8b, 0x0f, 0x2d,
0x75, 0xe3, 0x76, 0x58, 0xcb, 0xe5, 0x26, 0x54, 0x0c, 0xcb, 0x58, 0xc3, 0xc5, 0x56, 0x2d, 0xfd,
0x42, 0xc5, 0xd0, 0xa6, 0xb6, 0x7f, 0x2c, 0xb2, 0xbb, 0xfb, 0x03, 0x90, 0xc3, 0x9c, 0x50, 0x99,
0xb9, 0x75, 0xce, 0x96, 0x61, 0x8a, 0x73, 0xd9, 0xae, 0xe6, 0x0f, 0x98, 0x4f, 0x39, 0x28, 0x61,
0x64, 0x5e, 0xf9, 0xad, 0xdb, 0xf5, 0x67, 0x99, 0xf3, 0x5d, 0xb6, 0x0e, 0x53, 0x03, 0x85, 0x8a,
0x47, 0x62, 0xac, 0x70, 0x2a, 0x34, 0xc9, 0x21, 0x18, 0xed, 0x24, 0xfb, 0xd1, 0xbd, 0x79, 0xf3,
0x44, 0xe1, 0xf4, 0x53, 0xd9, 0xe2, 0x1b, 0xcc, 0x37, 0x50, 0x64, 0xa8, 0xe2, 0x91, 0xf3, 0xed,
0x47, 0xd7, 0x6b, 0xfe, 0x90, 0xb1, 0x6f, 0x38, 0x02, 0x31, 0x22, 0x39, 0xd4, 0x4e, 0xbb, 0x1f,
0x05, 0x96, 0xbc, 0xb7, 0x80, 0x3f, 0x63, 0x77, 0x20, 0xcb, 0x4d, 0x69, 0x5e, 0xe7, 0xb1, 0x04,
0x1d, 0xd6, 0x3a, 0x4b, 0xdd, 0x20, 0x5a, 0x73, 0xfc, 0xe8, 0x1a, 0xf3, 0x47, 0x6c, 0xb5, 0x74,
0xa9, 0x45, 0x46, 0x09, 0x54, 0x87, 0xd1, 0xa8, 0xd8, 0x07, 0x4a, 0xc0, 0x7e, 0xec, 0x96, 0xca,
0xf2, 0x50, 0x02, 0xbc, 0xd6, 0xb8, 0xc9, 0x82, 0x1b, 0x83, 0x41, 0x79, 0x64, 0x93, 0xb9, 0xbd,
0x97, 0xac, 0x75, 0x5c, 0x90, 0x04, 0xad, 0x0f, 0xc0, 0xc4, 0x38, 0xd2, 0xfc, 0x31, 0xab, 0xc3,
0x14, 0xa4, 0xc0, 0xc4, 0xc9, 0x0b, 0xf6, 0xd8, 0xec, 0xf7, 0x56, 0xed, 0xcd, 0x14, 0xe4, 0xe1,
0x41, 0x54, 0xb3, 0xad, 0xc3, 0x64, 0xef, 0xf4, 0xf2, 0xaa, 0xbd, 0xf0, 0xeb, 0xaa, 0xbd, 0xf0,
0x7d, 0xd6, 0xf6, 0x2e, 0x67, 0x6d, 0xef, 0xe7, 0xac, 0xed, 0xfd, 0x99, 0xb5, 0xbd, 0xd3, 0xd7,
0xff, 0x7b, 0xd1, 0x5e, 0x55, 0xcf, 0xaf, 0x0b, 0x67, 0x35, 0x77, 0x8b, 0x5e, 0xfc, 0x0b, 0x00,
0x00, 0xff, 0xff, 0x90, 0x50, 0x79, 0xf2, 0xb5, 0x03, 0x00, 0x00,
}
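Both regenerated files also thread an XXX_unrecognized byte slice through Size, MarshalTo, String, and Unmarshal, so fields this copy of the schema does not know about survive a decode/re-encode round trip. A simplified stand-in for that pattern (the msg type below is illustrative, not the generated one; single-byte length prefixes are assumed):

package main

import "fmt"

// msg is a simplified stand-in for the generated ProcessDetails type.
type msg struct {
	ExecID           string
	XXX_unrecognized []byte
}

// Size counts the known field plus any captured unknown bytes, matching the
// "n += len(m.XXX_unrecognized)" additions above.
func (m *msg) Size() int {
	n := 0
	if l := len(m.ExecID); l > 0 {
		n += 1 + 1 + l // tag byte + length byte + payload (length < 128 assumed)
	}
	n += len(m.XXX_unrecognized)
	return n
}

// MarshalTo writes the known field, then appends the unknown bytes verbatim,
// matching the "i += copy(dAtA[i:], m.XXX_unrecognized)" additions above.
func (m *msg) MarshalTo(dAtA []byte) int {
	i := 0
	if len(m.ExecID) > 0 {
		dAtA[i] = 0x0A // field 1, wire type 2 (length-delimited)
		i++
		dAtA[i] = byte(len(m.ExecID))
		i++
		i += copy(dAtA[i:], m.ExecID)
	}
	i += copy(dAtA[i:], m.XXX_unrecognized)
	return i
}

func main() {
	m := &msg{ExecID: "exec-1", XXX_unrecognized: []byte{0x10, 0x01}} // unknown field 2 = 1
	buf := make([]byte, m.Size())
	fmt.Printf("% x\n", buf[:m.MarshalTo(buf)]) // unknown bytes come back out untouched
}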


@@ -24,3 +24,8 @@ import "os"
func ForceRemoveAll(path string) error {
	return os.RemoveAll(path)
}
// MkdirAllWithACL is a wrapper for os.MkdirAll on Unix systems.
func MkdirAllWithACL(path string, perm os.FileMode) error {
	return os.MkdirAll(path, perm)
}


@@ -30,6 +30,11 @@ import (
	"github.com/Microsoft/hcsshim"
)
const (
	// SddlAdministratorsLocalSystem is local administrators plus NT AUTHORITY\System
	SddlAdministratorsLocalSystem = "D:P(A;OICI;GA;;;BA)(A;OICI;GA;;;SY)"
)
// MkdirAllWithACL is a wrapper for MkdirAll that creates a directory
// ACL'd for Builtin Administrators and Local System.
func MkdirAllWithACL(path string, perm os.FileMode) error {
@@ -78,7 +83,7 @@ func mkdirall(path string, adminAndLocalSystem bool) error {
	if j > 1 {
		// Create parent
		err = mkdirall(path[0:j-1], false)
		err = mkdirall(path[0:j-1], adminAndLocalSystem)
		if err != nil {
			return err
		}
@@ -112,8 +117,7 @@ func mkdirall(path string, adminAndLocalSystem bool) error {
// and Local System.
func mkdirWithACL(name string) error {
	sa := syscall.SecurityAttributes{Length: 0}
	sddl := "D:P(A;OICI;GA;;;BA)(A;OICI;GA;;;SY)"
	sd, err := winio.SddlToSecurityDescriptor(sddl)
	sd, err := winio.SddlToSecurityDescriptor(SddlAdministratorsLocalSystem)
	if err != nil {
		return &os.PathError{Op: "mkdir", Path: name, Err: err}
	}
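The Windows hunks above promote the SDDL string to the exported SddlAdministratorsLocalSystem constant and reuse it in mkdirWithACL, and they also pass adminAndLocalSystem through when creating parent directories. An annotated copy of the constant for reference (informational sketch only; the breakdown comments are an interpretation of standard SDDL notation, not text from the diff):

package main

import "fmt"

func main() {
	const sddl = "D:P(A;OICI;GA;;;BA)(A;OICI;GA;;;SY)"
	// D:P            -> protected DACL (inherited ACEs from the parent are blocked)
	// A;OICI;GA;;;BA -> allow, object+container inherit, generic all, Builtin Administrators
	// A;OICI;GA;;;SY -> allow, object+container inherit, generic all, NT AUTHORITY\SYSTEM
	fmt.Println(sddl)
}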


@@ -20,8 +20,10 @@ package sys
import (
	"fmt"
	"io/ioutil"
	"os"
	"strconv"
	"strings"
	"github.com/opencontainers/runc/libcontainer/system"
)
@@ -45,3 +47,13 @@ func SetOOMScore(pid, score int) error {
	}
	return nil
}
// GetOOMScoreAdj gets the oom score for a process
func GetOOMScoreAdj(pid int) (int, error) {
	path := fmt.Sprintf("/proc/%d/oom_score_adj", pid)
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(data)))
}
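The new GetOOMScoreAdj reads /proc/<pid>/oom_score_adj and complements the existing SetOOMScore. A self-contained, Linux-only sketch that mirrors it against the current process (the local helper name is illustrative, not an import of the vendored package):

package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"strconv"
	"strings"
)

// getOOMScoreAdj is a local copy of the helper added above.
func getOOMScoreAdj(pid int) (int, error) {
	path := fmt.Sprintf("/proc/%d/oom_score_adj", pid)
	data, err := ioutil.ReadFile(path)
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(strings.TrimSpace(string(data)))
}

func main() {
	score, err := getOOMScoreAdj(os.Getpid())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("oom_score_adj:", score)
}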


@@ -22,3 +22,10 @@ package sys
func SetOOMScore(pid, score int) error {
	return nil
}
// GetOOMScoreAdj gets the oom score for a process
//
// Not implemented on Windows
func GetOOMScoreAdj(pid int) (int, error) {
	return 0, nil
}


@@ -521,6 +521,9 @@ func (t *task) Update(ctx context.Context, opts ...UpdateTaskOpts) error {
}
func (t *task) LoadProcess(ctx context.Context, id string, ioAttach cio.Attach) (Process, error) {
	if id == t.id && ioAttach == nil {
		return t, nil
	}
	response, err := t.client.TaskService().Get(ctx, &tasks.GetRequest{
		ContainerID: t.id,
		ExecID: id,
@@ -582,6 +585,7 @@ func (t *task) checkpointTask(ctx context.Context, index *v1.Index, request *tas
				OS: goruntime.GOOS,
				Architecture: goruntime.GOARCH,
			},
			Annotations: d.Annotations,
		})
	}
	return nil
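The LoadProcess change above short-circuits when the requested id is the task's own id and no IO attachment is given, returning the task itself instead of making a TaskService().Get round trip. A hedged client-side sketch of that call path (assumes a reachable containerd daemon and the standard containerd client packages; the socket path, namespace, and container id are placeholders):

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // placeholder socket path
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "default") // placeholder namespace
	container, err := client.LoadContainer(ctx, "my-container")      // placeholder container id
	if err != nil {
		log.Fatal(err)
	}
	task, err := container.Task(ctx, nil)
	if err != nil {
		log.Fatal(err)
	}
	// With the change above, asking for the task's own id with a nil attach
	// returns the task itself rather than issuing a TaskService().Get call.
	proc, err := task.LoadProcess(ctx, container.ID(), nil)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("loaded process", proc.ID())
}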


@@ -59,9 +59,10 @@ func WithTaskCheckpoint(im Image) NewTaskOpts {
		for _, m := range index.Manifests {
			if m.MediaType == images.MediaTypeContainerd1Checkpoint {
				info.Checkpoint = &types.Descriptor{
					MediaType: m.MediaType,
					Size_: m.Size,
					Digest: m.Digest,
					MediaType: m.MediaType,
					Size_: m.Size,
					Digest: m.Digest,
					Annotations: m.Annotations,
				}
				return nil
			}


@@ -1,52 +1,51 @@
github.com/containerd/go-runc 5a6d9f37cfa36b15efba46dc7ea349fa9b7143c3
github.com/containerd/console c12b1e7919c14469339a5d38f2f8ed9b64a9de23
github.com/containerd/cgroups dbea6f2bd41658b84b00417ceefa416b979cbf10
github.com/containerd/console 0650fd9eeb50bab4fc99dceb9f2e14cf58f36e7f
github.com/containerd/cgroups 4994991857f9b0ae8dc439551e8bebdbb4bf66c1
github.com/containerd/typeurl a93fcdb778cd272c6e9b3028b2f42d813e785d40
github.com/containerd/fifo 3d5202aec260678c48179c56f40e6f38a095738c
github.com/containerd/btrfs 2e1aa0ddf94f91fa282b6ed87c23bf0d64911244
github.com/containerd/btrfs af5082808c833de0e79c1e72eea9fea239364877
github.com/containerd/continuity bd77b46c8352f74eb12c85bdc01f4b90f69d66b4
github.com/coreos/go-systemd 48702e0da86bd25e76cfef347e2adeb434a0d0a6
github.com/docker/go-metrics 4ea375f7759c82740c893fc030bc37088d2ec098
github.com/docker/go-events 9461782956ad83b30282bf90e31fa6a70c255ba9
github.com/docker/go-units v0.3.1
github.com/docker/go-units v0.4.0
github.com/godbus/dbus c7fdd8b5cd55e87b4e1f4e372cdb1db61dd6c66f
github.com/prometheus/client_golang f4fb1b73fb099f396a7f0036bf86aa8def4ed823
github.com/prometheus/client_model 99fa1f4be8e564e8a6b613da7fa6f46c9edafc6c
github.com/prometheus/common 89604d197083d4781071d3c65855d24ecfb0a563
github.com/prometheus/procfs cb4147076ac75738c9a7d279075a253c0cc5acbd
github.com/beorn7/perks 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
github.com/matttproud/golang_protobuf_extensions v1.0.0
github.com/gogo/protobuf v1.0.0
github.com/gogo/googleapis 08a7655d27152912db7aaf4f983275eaf8d128ef
github.com/golang/protobuf v1.1.0
github.com/matttproud/golang_protobuf_extensions v1.0.1
github.com/gogo/protobuf v1.2.1
github.com/gogo/googleapis v1.2.0
github.com/golang/protobuf v1.2.0
github.com/opencontainers/runtime-spec 29686dbc5559d93fb1ef402eeda3e35c38d75af4 # v1.0.1-59-g29686db
github.com/opencontainers/runc 2b18fe1d885ee5083ef9f0838fee39b62d653e30
github.com/opencontainers/runc v1.0.0-rc8
github.com/konsorten/go-windows-terminal-sequences v1.0.1
github.com/sirupsen/logrus v1.3.0
github.com/sirupsen/logrus v1.4.1
github.com/urfave/cli 7bc6a0acffa589f415f88aca16cc1de5ffd66f9c
golang.org/x/net b3756b4b77d7b13260a0a2ec658753cf48922eac
google.golang.org/grpc v1.12.0
github.com/pkg/errors v0.8.0
github.com/pkg/errors v0.8.1
github.com/opencontainers/go-digest c9281466c8b2f606084ac71339773efd177436e7
golang.org/x/sys d455e41777fca6e8a5a79e34a14b8368bc11d9ba https://github.com/golang/sys
github.com/opencontainers/image-spec v1.0.1
golang.org/x/sync 42b317875d0fa942474b76e1b46a6060d720ae6e
github.com/BurntSushi/toml a368813c5e648fee92e5f6c30e3944ff9d5e8895
github.com/BurntSushi/toml v0.3.1
github.com/grpc-ecosystem/go-grpc-prometheus 6b7015e65d366bf3f19b2b2a000a831940f0f7e0
github.com/Microsoft/go-winio v0.4.12
github.com/Microsoft/hcsshim v0.8.5
github.com/Microsoft/go-winio 84b4ab48a50763fe7b3abcef38e5205c12027fac
github.com/Microsoft/hcsshim 8abdbb8205e4192c68b5f84c31197156f31be517
google.golang.org/genproto d80a6e20e776b0b17a324d0ba1ab50a39c8e8944
golang.org/x/text 19e51611da83d6be54ddafce4a4af510cb3e9ea4
github.com/containerd/ttrpc f02858b1457c5ca3aaec3a0803eb0d59f96e41d6
github.com/syndtr/gocapability db04d3cc01c8b54962a58ec7e491717d06cfcc16
gotest.tools v2.1.0
github.com/google/go-cmp v0.1.0
github.com/containerd/ttrpc 699c4e40d1e7416e08bf7019c7ce2e9beced4636
github.com/syndtr/gocapability d98352740cb2c55f81556b63d4a1ec64c5a319c2
gotest.tools v2.3.0
github.com/google/go-cmp v0.2.0
go.etcd.io/bbolt v1.3.2
# cri dependencies
github.com/containerd/cri 4dd6735020f5596dd41738f8c4f5cb07fa804c5e # master
github.com/containerd/go-cni 40bcf8ec8acd7372be1d77031d585d5d8e561c90
github.com/blang/semver v3.1.0
github.com/containerd/cri 2fc62db8146ce66f27b37306ad5fda34207835f3 # master
github.com/containerd/go-cni 891c2a41e18144b2d7921f971d6c9789a68046b2
github.com/containernetworking/cni v0.6.0
github.com/containernetworking/plugins v0.7.0
github.com/davecgh/go-spew v1.1.0
@@ -60,31 +59,27 @@ github.com/hashicorp/go-multierror ed905158d87462226a13fe39ddf685ea65f1c11f
github.com/json-iterator/go 1.1.5
github.com/modern-go/reflect2 1.0.1
github.com/modern-go/concurrent 1.0.3
github.com/opencontainers/runtime-tools v0.6.0
github.com/opencontainers/selinux b6fa367ed7f534f9ba25391cc2d467085dbb445a
github.com/opencontainers/selinux v1.2.2
github.com/seccomp/libseccomp-golang 32f571b70023028bd57d9288c20efbcb237f3ce0
github.com/tchap/go-patricia v2.2.6
github.com/xeipuuv/gojsonpointer 4e3ac2762d5f479393488629ee9370b50873b3a6
github.com/xeipuuv/gojsonreference bd5ef7bd5415a7ac448318e64f11a24cd21e594b
github.com/xeipuuv/gojsonschema 1d523034197ff1f222f6429836dd36a2457a1874
golang.org/x/crypto 49796115aa4b964c318aad4f3084fdb41e9aa067
golang.org/x/crypto 88737f569e3a9c7ab309cdc09a07fe7fc87233c3
golang.org/x/oauth2 a6bd8cefa1811bd24b86f8902872e4e8225f74c4
golang.org/x/time f51c12702a4d776e4c1fa9b0fabab841babae631
gopkg.in/inf.v0 3887ee99ecf07df5b447e9b00d9c0b2adaa9f3e4
gopkg.in/yaml.v2 v2.2.1
k8s.io/api kubernetes-1.13.0
k8s.io/apimachinery kubernetes-1.13.0
k8s.io/apiserver kubernetes-1.13.0
k8s.io/client-go kubernetes-1.13.0
k8s.io/api kubernetes-1.15.0-alpha.0
k8s.io/apimachinery kubernetes-1.15.0-alpha.0
k8s.io/apiserver kubernetes-1.15.0-alpha.0
k8s.io/client-go kubernetes-1.15.0-alpha.0
k8s.io/klog 8139d8cb77af419532b33dfa7dd09fbc5f1d344f
k8s.io/kubernetes v1.13.0
k8s.io/utils 0d26856f57b32ec3398579285e5c8a2bfe8c5243
k8s.io/kubernetes v1.15.0-alpha.0
k8s.io/utils c2654d5206da6b7b6ace12841e8f359bb89b443c
sigs.k8s.io/yaml v1.1.0
# zfs dependencies
github.com/containerd/zfs 9f6ef3b1fe5144bd91fe5855b4eba81bc0d17d03
github.com/mistifyio/go-zfs 166add352731e515512690329794ee593f1aaff2
github.com/pborman/uuid c65b2f87fee37d1c7854c9164a450713c28d50cd
github.com/containerd/zfs 31af176f2ae84fe142ef2655bf7bb2aa618b3b1f
github.com/mistifyio/go-zfs f784269be439d704d3dfa1906f45dd848fed2beb
github.com/google/uuid v1.1.1
# aufs dependencies
github.com/containerd/aufs da3cf16bfbe68ba8f114f1536a05c01528a25434
github.com/containerd/aufs f894a800659b6e11c1a13084abd1712f346e349c
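The last hunk updates vendor.conf, whose lines follow a simple layout: import path, pinned revision (tag or commit), an optional alternate clone URL, and an optional trailing comment. A small illustrative parser for that layout (not a tool from the repository; the sample lines are taken from the hunk above):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const sample = `github.com/gogo/protobuf v1.2.1
github.com/opencontainers/runc v1.0.0-rc8
golang.org/x/sys d455e41777fca6e8a5a79e34a14b8368bc11d9ba https://github.com/golang/sys
# cri dependencies
github.com/containerd/cri 2fc62db8146ce66f27b37306ad5fda34207835f3 # master
`

func main() {
	sc := bufio.NewScanner(strings.NewReader(sample))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue // blank lines and comment-only section headers
		}
		if i := strings.Index(line, "#"); i >= 0 {
			line = strings.TrimSpace(line[:i]) // strip trailing comments such as "# master"
		}
		fields := strings.Fields(line)
		if len(fields) < 2 {
			continue
		}
		pkg, rev := fields[0], fields[1]
		url := ""
		if len(fields) > 2 {
			url = fields[2] // optional alternate clone URL
		}
		fmt.Printf("pkg=%s rev=%s url=%s\n", pkg, rev, url)
	}
}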

Some files were not shown because too many files have changed in this diff.