vendor: update buildkit
Signed-off-by: CrazyMax <crazy-max@users.noreply.github.com>
vendor/github.com/AdaLogics/go-fuzz-headers/LICENSE (generated, vendored, new file, 201 lines)
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
vendor/github.com/AdaLogics/go-fuzz-headers/README.md (generated, vendored, new file, 93 lines)
@@ -0,0 +1,93 @@
# go-fuzz-headers
This repository contains various helper functions for Go fuzzing. It is mostly used in combination with [go-fuzz](https://github.com/dvyukov/go-fuzz), but compatibility with fuzzing in the standard library will also be supported. Any coverage-guided fuzzing engine that provides an array or slice of bytes can be used with go-fuzz-headers.


## Usage
Using go-fuzz-headers is easy. First create a new consumer with the bytes provided by the fuzzing engine:

```go
import (
	fuzz "github.com/AdaLogics/go-fuzz-headers"
)
data := []byte{'R', 'a', 'n', 'd', 'o', 'm'}
f := fuzz.NewConsumer(data)
```

This creates a `Consumer` that consumes the bytes of the input as it uses them to fuzz different types.

After that, `f` can be used to easily create fuzzed instances of different types. Below are some examples:

### Structs
One of the most useful features of go-fuzz-headers is its ability to fill structs with the data provided by the fuzzing engine. This is done with a single line:
```go
type Person struct {
	Name string
	Age  int
}
p := Person{}
// Fill p with values based on the data provided by the fuzzing engine:
err := f.GenerateStruct(&p)
```

This includes nested structs too. In this example, the fuzz Consumer will also insert values in `p.BestFriend`:
```go
type PersonI struct {
	Name       string
	Age        int
	BestFriend PersonII
}
type PersonII struct {
	Name string
	Age  int
}
p := PersonI{}
err := f.GenerateStruct(&p)
```

If the consumer should insert values for unexported fields as well as exported ones, this can be enabled with:

```go
f.AllowUnexportedFields()
```

...and disabled with:

```go
f.DisallowUnexportedFields()
```
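
Putting the pieces above together, here is a minimal sketch of a complete go-fuzz style target. The `Fuzz` entry point, the `Person` type, and `processPerson` are illustrative assumptions for this example, not part of go-fuzz-headers:

```go
package mypackage

import (
	fuzz "github.com/AdaLogics/go-fuzz-headers"
)

// Person is a hypothetical type used only for this example.
type Person struct {
	Name string
	Age  int
}

// processPerson stands in for the code under test.
func processPerson(p Person) {
	_ = p
}

// Fuzz is a go-fuzz style entry point: the engine hands us raw bytes and the
// consumer turns them into a structured value to exercise.
func Fuzz(data []byte) int {
	f := fuzz.NewConsumer(data)
	var p Person
	if err := f.GenerateStruct(&p); err != nil {
		// Not enough bytes to build a Person; discard this input.
		return 0
	}
	processPerson(p)
	return 1
}
```

In go-fuzz, returning 1 gives the input extra corpus priority while 0 is neutral, so inputs that were too short to build the struct are simply treated as uninteresting.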

### Other types

Other useful APIs:

```go
createdString, err := f.GetString()   // Gets a string
createdInt, err := f.GetInt()         // Gets an integer
createdByte, err := f.GetByte()       // Gets a byte
createdBytes, err := f.GetBytes()     // Gets a byte slice
createdBool, err := f.GetBool()       // Gets a boolean
err := f.FuzzMap(target_map)          // Fills a map
createdTarBytes, err := f.TarBytes()  // Gets bytes of a valid tar archive
err := f.CreateFiles(inThisDir)       // Fills inThisDir with files
createdString, err := f.GetStringFrom("anyCharInThisString", ofThisLength) // Gets a string that consists of chars from "anyCharInThisString" and has the exact length "ofThisLength"
```

Most APIs are added as they are needed.
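
As a rough illustration of how a few of these getters compose inside one target (continuing with the `fuzz` import from above; the character set, the length of 8, and `useNameAndPayload` are arbitrary stand-ins for this sketch, not requirements of the API):

```go
func fuzzOtherTypes(data []byte) {
	f := fuzz.NewConsumer(data)

	// Each getter consumes bytes from the input and returns an error once the
	// input runs dry, so an error simply means "stop", not "something broke".
	name, err := f.GetStringFrom("abcdefghijklmnopqrstuvwxyz", 8)
	if err != nil {
		return
	}
	payload, err := f.GetBytes()
	if err != nil {
		return
	}
	ok, err := f.GetBool()
	if err != nil {
		return
	}
	useNameAndPayload(name, payload, ok) // hypothetical code under test
}
```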

## Projects that use go-fuzz-headers
- [runC](https://github.com/opencontainers/runc)
- [Istio](https://github.com/istio/istio)
- [Vitess](https://github.com/vitessio/vitess)
- [Containerd](https://github.com/containerd/containerd)

Feel free to add your own project to the list if you use go-fuzz-headers to fuzz it.

## Status
The project is under development and will be updated regularly.

## References
go-fuzz-headers' approach to fuzzing structs is strongly inspired by [gofuzz](https://github.com/google/gofuzz).
vendor/github.com/AdaLogics/go-fuzz-headers/consumer.go (generated, vendored, new file, 899 lines)
@@ -0,0 +1,899 @@
|
||||
// Copyright 2023 The go-fuzz-headers Authors.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package gofuzzheaders
|
||||
|
||||
import (
|
||||
"archive/tar"
|
||||
"bytes"
|
||||
"encoding/binary"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"math"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"reflect"
|
||||
"strings"
|
||||
"time"
|
||||
"unsafe"
|
||||
|
||||
securejoin "github.com/cyphar/filepath-securejoin"
|
||||
)
|
||||
|
||||
var (
|
||||
MaxTotalLen uint32 = 2000000
|
||||
maxDepth = 100
|
||||
)
|
||||
|
||||
func SetMaxTotalLen(newLen uint32) {
|
||||
MaxTotalLen = newLen
|
||||
}
|
||||
|
||||
type ConsumeFuzzer struct {
|
||||
data []byte
|
||||
dataTotal uint32
|
||||
CommandPart []byte
|
||||
RestOfArray []byte
|
||||
NumberOfCalls int
|
||||
position uint32
|
||||
fuzzUnexportedFields bool
|
||||
curDepth int
|
||||
Funcs map[reflect.Type]reflect.Value
|
||||
}
|
||||
|
||||
func IsDivisibleBy(n int, divisibleby int) bool {
|
||||
return (n % divisibleby) == 0
|
||||
}
|
||||
|
||||
func NewConsumer(fuzzData []byte) *ConsumeFuzzer {
|
||||
return &ConsumeFuzzer{
|
||||
data: fuzzData,
|
||||
dataTotal: uint32(len(fuzzData)),
|
||||
Funcs: make(map[reflect.Type]reflect.Value),
|
||||
curDepth: 0,
|
||||
}
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) Split(minCalls, maxCalls int) error {
|
||||
if f.dataTotal == 0 {
|
||||
return errors.New("could not split")
|
||||
}
|
||||
numberOfCalls := int(f.data[0])
|
||||
if numberOfCalls < minCalls || numberOfCalls > maxCalls {
|
||||
return errors.New("bad number of calls")
|
||||
}
|
||||
if int(f.dataTotal) < numberOfCalls+numberOfCalls+1 {
|
||||
return errors.New("length of data does not match required parameters")
|
||||
}
|
||||
|
||||
// Define part 2 and 3 of the data array
|
||||
commandPart := f.data[1 : numberOfCalls+1]
|
||||
restOfArray := f.data[numberOfCalls+1:]
|
||||
|
||||
// Just a small check. It is necessary
|
||||
if len(commandPart) != numberOfCalls {
|
||||
return errors.New("length of commandPart does not match number of calls")
|
||||
}
|
||||
|
||||
// Check if restOfArray is divisible by numberOfCalls
|
||||
if !IsDivisibleBy(len(restOfArray), numberOfCalls) {
|
||||
return errors.New("length of commandPart does not match number of calls")
|
||||
}
|
||||
f.CommandPart = commandPart
|
||||
f.RestOfArray = restOfArray
|
||||
f.NumberOfCalls = numberOfCalls
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) AllowUnexportedFields() {
|
||||
f.fuzzUnexportedFields = true
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) DisallowUnexportedFields() {
|
||||
f.fuzzUnexportedFields = false
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GenerateStruct(targetStruct interface{}) error {
|
||||
e := reflect.ValueOf(targetStruct).Elem()
|
||||
return f.fuzzStruct(e, false)
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) setCustom(v reflect.Value) error {
|
||||
// First: see if we have a fuzz function for it.
|
||||
doCustom, ok := f.Funcs[v.Type()]
|
||||
if !ok {
|
||||
return fmt.Errorf("could not find a custom function")
|
||||
}
|
||||
|
||||
switch v.Kind() {
|
||||
case reflect.Ptr:
|
||||
if v.IsNil() {
|
||||
if !v.CanSet() {
|
||||
return fmt.Errorf("could not use a custom function")
|
||||
}
|
||||
v.Set(reflect.New(v.Type().Elem()))
|
||||
}
|
||||
case reflect.Map:
|
||||
if v.IsNil() {
|
||||
if !v.CanSet() {
|
||||
return fmt.Errorf("could not use a custom function")
|
||||
}
|
||||
v.Set(reflect.MakeMap(v.Type()))
|
||||
}
|
||||
default:
|
||||
return fmt.Errorf("could not use a custom function")
|
||||
}
|
||||
|
||||
verr := doCustom.Call([]reflect.Value{v, reflect.ValueOf(Continue{
|
||||
F: f,
|
||||
})})
|
||||
|
||||
// check if we return an error
|
||||
if verr[0].IsNil() {
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("could not use a custom function")
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) fuzzStruct(e reflect.Value, customFunctions bool) error {
|
||||
if f.curDepth >= maxDepth {
|
||||
// return err or nil here?
|
||||
return nil
|
||||
}
|
||||
f.curDepth++
|
||||
defer func() { f.curDepth-- }()
|
||||
|
||||
// We check if we should check for custom functions
|
||||
if customFunctions && e.IsValid() && e.CanAddr() {
|
||||
err := f.setCustom(e.Addr())
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
switch e.Kind() {
|
||||
case reflect.Struct:
|
||||
for i := 0; i < e.NumField(); i++ {
|
||||
var v reflect.Value
|
||||
if !e.Field(i).CanSet() {
|
||||
if f.fuzzUnexportedFields {
|
||||
v = reflect.NewAt(e.Field(i).Type(), unsafe.Pointer(e.Field(i).UnsafeAddr())).Elem()
|
||||
}
|
||||
if err := f.fuzzStruct(v, customFunctions); err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
v = e.Field(i)
|
||||
if err := f.fuzzStruct(v, customFunctions); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
case reflect.String:
|
||||
str, err := f.GetString()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if e.CanSet() {
|
||||
e.SetString(str)
|
||||
}
|
||||
case reflect.Slice:
|
||||
var maxElements uint32
|
||||
// Byte slices should not be restricted
|
||||
if e.Type().String() == "[]uint8" {
|
||||
maxElements = 10000000
|
||||
} else {
|
||||
maxElements = 50
|
||||
}
|
||||
|
||||
randQty, err := f.GetUint32()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
numOfElements := randQty % maxElements
|
||||
if (f.dataTotal - f.position) < numOfElements {
|
||||
numOfElements = f.dataTotal - f.position
|
||||
}
|
||||
|
||||
uu := reflect.MakeSlice(e.Type(), int(numOfElements), int(numOfElements))
|
||||
|
||||
for i := 0; i < int(numOfElements); i++ {
|
||||
// If we have more than 10, then we can proceed with that.
|
||||
if err := f.fuzzStruct(uu.Index(i), customFunctions); err != nil {
|
||||
if i >= 10 {
|
||||
if e.CanSet() {
|
||||
e.Set(uu)
|
||||
}
|
||||
return nil
|
||||
} else {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
if e.CanSet() {
|
||||
e.Set(uu)
|
||||
}
|
||||
case reflect.Uint16:
|
||||
newInt, err := f.GetUint16()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if e.CanSet() {
|
||||
e.SetUint(uint64(newInt))
|
||||
}
|
||||
case reflect.Uint32:
|
||||
newInt, err := f.GetUint32()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if e.CanSet() {
|
||||
e.SetUint(uint64(newInt))
|
||||
}
|
||||
case reflect.Uint64:
|
||||
newInt, err := f.GetInt()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if e.CanSet() {
|
||||
e.SetUint(uint64(newInt))
|
||||
}
|
||||
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
|
||||
newInt, err := f.GetInt()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if e.CanSet() {
|
||||
e.SetInt(int64(newInt))
|
||||
}
|
||||
case reflect.Float32:
|
||||
newFloat, err := f.GetFloat32()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if e.CanSet() {
|
||||
e.SetFloat(float64(newFloat))
|
||||
}
|
||||
case reflect.Float64:
|
||||
newFloat, err := f.GetFloat64()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if e.CanSet() {
|
||||
e.SetFloat(float64(newFloat))
|
||||
}
|
||||
case reflect.Map:
|
||||
if e.CanSet() {
|
||||
e.Set(reflect.MakeMap(e.Type()))
|
||||
const maxElements = 50
|
||||
randQty, err := f.GetInt()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
numOfElements := randQty % maxElements
|
||||
for i := 0; i < numOfElements; i++ {
|
||||
key := reflect.New(e.Type().Key()).Elem()
|
||||
if err := f.fuzzStruct(key, customFunctions); err != nil {
|
||||
return err
|
||||
}
|
||||
val := reflect.New(e.Type().Elem()).Elem()
|
||||
if err = f.fuzzStruct(val, customFunctions); err != nil {
|
||||
return err
|
||||
}
|
||||
e.SetMapIndex(key, val)
|
||||
}
|
||||
}
|
||||
case reflect.Ptr:
|
||||
if e.CanSet() {
|
||||
e.Set(reflect.New(e.Type().Elem()))
|
||||
if err := f.fuzzStruct(e.Elem(), customFunctions); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
case reflect.Uint8:
|
||||
b, err := f.GetByte()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if e.CanSet() {
|
||||
e.SetUint(uint64(b))
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetStringArray() (reflect.Value, error) {
|
||||
// The max size of the array:
|
||||
const max uint32 = 20
|
||||
|
||||
arraySize := f.position
|
||||
if arraySize > max {
|
||||
arraySize = max
|
||||
}
|
||||
stringArray := reflect.MakeSlice(reflect.SliceOf(reflect.TypeOf("string")), int(arraySize), int(arraySize))
|
||||
if f.position+arraySize >= f.dataTotal {
|
||||
return stringArray, errors.New("could not make string array")
|
||||
}
|
||||
|
||||
for i := 0; i < int(arraySize); i++ {
|
||||
stringSize := uint32(f.data[f.position])
|
||||
if f.position+stringSize >= f.dataTotal {
|
||||
return stringArray, nil
|
||||
}
|
||||
stringToAppend := string(f.data[f.position : f.position+stringSize])
|
||||
strVal := reflect.ValueOf(stringToAppend)
|
||||
stringArray = reflect.Append(stringArray, strVal)
|
||||
f.position += stringSize
|
||||
}
|
||||
return stringArray, nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetInt() (int, error) {
|
||||
if f.position >= f.dataTotal {
|
||||
return 0, errors.New("not enough bytes to create int")
|
||||
}
|
||||
returnInt := int(f.data[f.position])
|
||||
f.position++
|
||||
return returnInt, nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetByte() (byte, error) {
|
||||
if f.position >= f.dataTotal {
|
||||
return 0x00, errors.New("not enough bytes to get byte")
|
||||
}
|
||||
returnByte := f.data[f.position]
|
||||
f.position++
|
||||
return returnByte, nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetNBytes(numberOfBytes int) ([]byte, error) {
|
||||
if f.position >= f.dataTotal {
|
||||
return nil, errors.New("not enough bytes to get byte")
|
||||
}
|
||||
returnBytes := make([]byte, 0, numberOfBytes)
|
||||
for i := 0; i < numberOfBytes; i++ {
|
||||
newByte, err := f.GetByte()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
returnBytes = append(returnBytes, newByte)
|
||||
}
|
||||
return returnBytes, nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetUint16() (uint16, error) {
|
||||
u16, err := f.GetNBytes(2)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
littleEndian, err := f.GetBool()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if littleEndian {
|
||||
return binary.LittleEndian.Uint16(u16), nil
|
||||
}
|
||||
return binary.BigEndian.Uint16(u16), nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetUint32() (uint32, error) {
|
||||
i, err := f.GetInt()
|
||||
if err != nil {
|
||||
return uint32(0), err
|
||||
}
|
||||
return uint32(i), nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetUint64() (uint64, error) {
|
||||
u64, err := f.GetNBytes(8)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
littleEndian, err := f.GetBool()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if littleEndian {
|
||||
return binary.LittleEndian.Uint64(u64), nil
|
||||
}
|
||||
return binary.BigEndian.Uint64(u64), nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetBytes() ([]byte, error) {
|
||||
if f.position >= f.dataTotal {
|
||||
return nil, errors.New("not enough bytes to create byte array")
|
||||
}
|
||||
length, err := f.GetUint32()
|
||||
if err != nil {
|
||||
return nil, errors.New("not enough bytes to create byte array")
|
||||
}
|
||||
if f.position+length > MaxTotalLen {
|
||||
return nil, errors.New("created too large a string")
|
||||
}
|
||||
byteBegin := f.position - 1
|
||||
if byteBegin >= f.dataTotal {
|
||||
return nil, errors.New("not enough bytes to create byte array")
|
||||
}
|
||||
if length == 0 {
|
||||
return nil, errors.New("zero-length is not supported")
|
||||
}
|
||||
if byteBegin+length >= f.dataTotal {
|
||||
return nil, errors.New("not enough bytes to create byte array")
|
||||
}
|
||||
if byteBegin+length < byteBegin {
|
||||
return nil, errors.New("numbers overflow")
|
||||
}
|
||||
f.position = byteBegin + length
|
||||
return f.data[byteBegin:f.position], nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetString() (string, error) {
|
||||
if f.position >= f.dataTotal {
|
||||
return "nil", errors.New("not enough bytes to create string")
|
||||
}
|
||||
length, err := f.GetUint32()
|
||||
if err != nil {
|
||||
return "nil", errors.New("not enough bytes to create string")
|
||||
}
|
||||
if f.position > MaxTotalLen {
|
||||
return "nil", errors.New("created too large a string")
|
||||
}
|
||||
byteBegin := f.position
|
||||
if byteBegin >= f.dataTotal {
|
||||
return "nil", errors.New("not enough bytes to create string")
|
||||
}
|
||||
if byteBegin+length > f.dataTotal {
|
||||
return "nil", errors.New("not enough bytes to create string")
|
||||
}
|
||||
if byteBegin > byteBegin+length {
|
||||
return "nil", errors.New("numbers overflow")
|
||||
}
|
||||
f.position = byteBegin + length
|
||||
return string(f.data[byteBegin:f.position]), nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetBool() (bool, error) {
|
||||
if f.position >= f.dataTotal {
|
||||
return false, errors.New("not enough bytes to create bool")
|
||||
}
|
||||
if IsDivisibleBy(int(f.data[f.position]), 2) {
|
||||
f.position++
|
||||
return true, nil
|
||||
} else {
|
||||
f.position++
|
||||
return false, nil
|
||||
}
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) FuzzMap(m interface{}) error {
|
||||
return f.GenerateStruct(m)
|
||||
}
|
||||
|
||||
func returnTarBytes(buf []byte) ([]byte, error) {
|
||||
// Count files
|
||||
var fileCounter int
|
||||
tr := tar.NewReader(bytes.NewReader(buf))
|
||||
for {
|
||||
_, err := tr.Next()
|
||||
if err == io.EOF {
|
||||
break
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
fileCounter++
|
||||
}
|
||||
if fileCounter >= 1 {
|
||||
return buf, nil
|
||||
}
|
||||
return nil, fmt.Errorf("not enough files were created\n")
|
||||
}
|
||||
|
||||
func setTarHeaderFormat(hdr *tar.Header, f *ConsumeFuzzer) error {
|
||||
ind, err := f.GetInt()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
switch ind % 4 {
|
||||
case 0:
|
||||
hdr.Format = tar.FormatUnknown
|
||||
case 1:
|
||||
hdr.Format = tar.FormatUSTAR
|
||||
case 2:
|
||||
hdr.Format = tar.FormatPAX
|
||||
case 3:
|
||||
hdr.Format = tar.FormatGNU
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func setTarHeaderTypeflag(hdr *tar.Header, f *ConsumeFuzzer) error {
|
||||
ind, err := f.GetInt()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
switch ind % 13 {
|
||||
case 0:
|
||||
hdr.Typeflag = tar.TypeReg
|
||||
case 1:
|
||||
hdr.Typeflag = tar.TypeLink
|
||||
linkname, err := f.GetString()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
hdr.Linkname = linkname
|
||||
case 2:
|
||||
hdr.Typeflag = tar.TypeSymlink
|
||||
linkname, err := f.GetString()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
hdr.Linkname = linkname
|
||||
case 3:
|
||||
hdr.Typeflag = tar.TypeChar
|
||||
case 4:
|
||||
hdr.Typeflag = tar.TypeBlock
|
||||
case 5:
|
||||
hdr.Typeflag = tar.TypeDir
|
||||
case 6:
|
||||
hdr.Typeflag = tar.TypeFifo
|
||||
case 7:
|
||||
hdr.Typeflag = tar.TypeCont
|
||||
case 8:
|
||||
hdr.Typeflag = tar.TypeXHeader
|
||||
case 9:
|
||||
hdr.Typeflag = tar.TypeXGlobalHeader
|
||||
case 10:
|
||||
hdr.Typeflag = tar.TypeGNUSparse
|
||||
case 11:
|
||||
hdr.Typeflag = tar.TypeGNULongName
|
||||
case 12:
|
||||
hdr.Typeflag = tar.TypeGNULongLink
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func tooSmallFileBody(length uint32) bool {
|
||||
if length < 2 {
|
||||
return true
|
||||
}
|
||||
if length < 4 {
|
||||
return true
|
||||
}
|
||||
if length < 10 {
|
||||
return true
|
||||
}
|
||||
if length < 100 {
|
||||
return true
|
||||
}
|
||||
if length < 500 {
|
||||
return true
|
||||
}
|
||||
if length < 1000 {
|
||||
return true
|
||||
}
|
||||
if length < 2000 {
|
||||
return true
|
||||
}
|
||||
if length < 4000 {
|
||||
return true
|
||||
}
|
||||
if length < 8000 {
|
||||
return true
|
||||
}
|
||||
if length < 16000 {
|
||||
return true
|
||||
}
|
||||
if length < 32000 {
|
||||
return true
|
||||
}
|
||||
if length < 64000 {
|
||||
return true
|
||||
}
|
||||
if length < 128000 {
|
||||
return true
|
||||
}
|
||||
if length < 264000 {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) createTarFileBody() ([]byte, error) {
|
||||
length, err := f.GetUint32()
|
||||
if err != nil {
|
||||
return nil, errors.New("not enough bytes to create byte array")
|
||||
}
|
||||
|
||||
shouldUseLargeFileBody, err := f.GetBool()
|
||||
if err != nil {
|
||||
return nil, errors.New("not enough bytes to check long file body")
|
||||
}
|
||||
|
||||
if shouldUseLargeFileBody && tooSmallFileBody(length) {
|
||||
return nil, errors.New("File body was too small")
|
||||
}
|
||||
|
||||
// A bit of optimization to attempt to create a file body
|
||||
// when we don't have as many bytes left as "length"
|
||||
remainingBytes := f.dataTotal - f.position
|
||||
if remainingBytes == 0 {
|
||||
return nil, errors.New("created too large a string")
|
||||
}
|
||||
if f.position+length > MaxTotalLen {
|
||||
return nil, errors.New("created too large a string")
|
||||
}
|
||||
byteBegin := f.position
|
||||
if byteBegin >= f.dataTotal {
|
||||
return nil, errors.New("not enough bytes to create byte array")
|
||||
}
|
||||
if length == 0 {
|
||||
return nil, errors.New("zero-length is not supported")
|
||||
}
|
||||
if byteBegin+length >= f.dataTotal {
|
||||
return nil, errors.New("not enough bytes to create byte array")
|
||||
}
|
||||
if byteBegin+length < byteBegin {
|
||||
return nil, errors.New("numbers overflow")
|
||||
}
|
||||
f.position = byteBegin + length
|
||||
return f.data[byteBegin:f.position], nil
|
||||
}
|
||||
|
||||
// getTarFileName is similar to GetString(), but creates string based
|
||||
// on the length of f.data to reduce the likelihood of overflowing
|
||||
// f.data.
|
||||
func (f *ConsumeFuzzer) getTarFilename() (string, error) {
|
||||
length, err := f.GetUint32()
|
||||
if err != nil {
|
||||
return "nil", errors.New("not enough bytes to create string")
|
||||
}
|
||||
|
||||
// A bit of optimization to attempt to create a file name
|
||||
// when we don't have as many bytes left as "length"
|
||||
remainingBytes := f.dataTotal - f.position
|
||||
if remainingBytes == 0 {
|
||||
return "nil", errors.New("created too large a string")
|
||||
}
|
||||
if remainingBytes < 50 {
|
||||
length = length % remainingBytes
|
||||
} else if f.dataTotal < 500 {
|
||||
length = length % f.dataTotal
|
||||
}
|
||||
if f.position > MaxTotalLen {
|
||||
return "nil", errors.New("created too large a string")
|
||||
}
|
||||
byteBegin := f.position
|
||||
if byteBegin >= f.dataTotal {
|
||||
return "nil", errors.New("not enough bytes to create string")
|
||||
}
|
||||
if byteBegin+length > f.dataTotal {
|
||||
return "nil", errors.New("not enough bytes to create string")
|
||||
}
|
||||
if byteBegin > byteBegin+length {
|
||||
return "nil", errors.New("numbers overflow")
|
||||
}
|
||||
f.position = byteBegin + length
|
||||
return string(f.data[byteBegin:f.position]), nil
|
||||
}
|
||||
|
||||
// TarBytes returns valid bytes for a tar archive
|
||||
func (f *ConsumeFuzzer) TarBytes() ([]byte, error) {
|
||||
numberOfFiles, err := f.GetInt()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
var buf bytes.Buffer
|
||||
tw := tar.NewWriter(&buf)
|
||||
defer tw.Close()
|
||||
|
||||
const maxNoOfFiles = 1000
|
||||
for i := 0; i < numberOfFiles%maxNoOfFiles; i++ {
|
||||
filename, err := f.getTarFilename()
|
||||
if err != nil {
|
||||
return returnTarBytes(buf.Bytes())
|
||||
}
|
||||
filebody, err := f.createTarFileBody()
|
||||
if err != nil {
|
||||
return returnTarBytes(buf.Bytes())
|
||||
}
|
||||
sec, err := f.GetInt()
|
||||
if err != nil {
|
||||
return returnTarBytes(buf.Bytes())
|
||||
}
|
||||
nsec, err := f.GetInt()
|
||||
if err != nil {
|
||||
return returnTarBytes(buf.Bytes())
|
||||
}
|
||||
|
||||
hdr := &tar.Header{
|
||||
Name: filename,
|
||||
Size: int64(len(filebody)),
|
||||
Mode: 0o600,
|
||||
ModTime: time.Unix(int64(sec), int64(nsec)),
|
||||
}
|
||||
if err := setTarHeaderTypeflag(hdr, f); err != nil {
|
||||
return returnTarBytes(buf.Bytes())
|
||||
}
|
||||
if err := setTarHeaderFormat(hdr, f); err != nil {
|
||||
return returnTarBytes(buf.Bytes())
|
||||
}
|
||||
if err := tw.WriteHeader(hdr); err != nil {
|
||||
return returnTarBytes(buf.Bytes())
|
||||
}
|
||||
if _, err := tw.Write(filebody); err != nil {
|
||||
return returnTarBytes(buf.Bytes())
|
||||
}
|
||||
}
|
||||
return buf.Bytes(), nil
|
||||
}
|
||||
|
||||
// CreateFiles creates pseudo-random files in rootDir.
|
||||
// It creates subdirs and places the files there.
|
||||
// It is the callers responsibility to ensure that
|
||||
// rootDir exists.
|
||||
func (f *ConsumeFuzzer) CreateFiles(rootDir string) error {
|
||||
numberOfFiles, err := f.GetInt()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
maxNumberOfFiles := numberOfFiles % 4000 // This is completely arbitrary
|
||||
if maxNumberOfFiles == 0 {
|
||||
return errors.New("maxNumberOfFiles is nil")
|
||||
}
|
||||
|
||||
var noOfCreatedFiles int
|
||||
for i := 0; i < maxNumberOfFiles; i++ {
|
||||
// The file to create:
|
||||
fileName, err := f.GetString()
|
||||
if err != nil {
|
||||
if noOfCreatedFiles > 0 {
|
||||
// If files have been created, we don't return an error.
|
||||
break
|
||||
} else {
|
||||
return errors.New("could not get fileName")
|
||||
}
|
||||
}
|
||||
fullFilePath, err := securejoin.SecureJoin(rootDir, fileName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Find the subdirectory of the file
|
||||
if subDir := filepath.Dir(fileName); subDir != "" && subDir != "." {
|
||||
// create the dir first; avoid going outside the root dir
|
||||
if strings.Contains(subDir, "../") || (len(subDir) > 0 && subDir[0] == 47) || strings.Contains(subDir, "\\") {
|
||||
continue
|
||||
}
|
||||
dirPath, err := securejoin.SecureJoin(rootDir, subDir)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
if _, err := os.Stat(dirPath); os.IsNotExist(err) {
|
||||
err2 := os.MkdirAll(dirPath, 0o777)
|
||||
if err2 != nil {
|
||||
continue
|
||||
}
|
||||
}
|
||||
fullFilePath, err = securejoin.SecureJoin(dirPath, fileName)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
} else {
|
||||
// Create symlink
|
||||
createSymlink, err := f.GetBool()
|
||||
if err != nil {
|
||||
if noOfCreatedFiles > 0 {
|
||||
break
|
||||
} else {
|
||||
return errors.New("could not create the symlink")
|
||||
}
|
||||
}
|
||||
if createSymlink {
|
||||
symlinkTarget, err := f.GetString()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
err = os.Symlink(symlinkTarget, fullFilePath)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// stop loop here, since a symlink needs no further action
|
||||
noOfCreatedFiles++
|
||||
continue
|
||||
}
|
||||
// We create a normal file
|
||||
fileContents, err := f.GetBytes()
|
||||
if err != nil {
|
||||
if noOfCreatedFiles > 0 {
|
||||
break
|
||||
} else {
|
||||
return errors.New("could not create the file")
|
||||
}
|
||||
}
|
||||
err = os.WriteFile(fullFilePath, fileContents, 0o666)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
noOfCreatedFiles++
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetStringFrom returns a string that can only consist of characters
|
||||
// included in possibleChars. It returns an error if the created string
|
||||
// does not have the specified length.
|
||||
func (f *ConsumeFuzzer) GetStringFrom(possibleChars string, length int) (string, error) {
|
||||
if (f.dataTotal - f.position) < uint32(length) {
|
||||
return "", errors.New("not enough bytes to create a string")
|
||||
}
|
||||
output := make([]byte, 0, length)
|
||||
for i := 0; i < length; i++ {
|
||||
charIndex, err := f.GetInt()
|
||||
if err != nil {
|
||||
return string(output), err
|
||||
}
|
||||
output = append(output, possibleChars[charIndex%len(possibleChars)])
|
||||
}
|
||||
return string(output), nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetRune() ([]rune, error) {
|
||||
stringToConvert, err := f.GetString()
|
||||
if err != nil {
|
||||
return []rune("nil"), err
|
||||
}
|
||||
return []rune(stringToConvert), nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetFloat32() (float32, error) {
|
||||
u32, err := f.GetNBytes(4)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
littleEndian, err := f.GetBool()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if littleEndian {
|
||||
u32LE := binary.LittleEndian.Uint32(u32)
|
||||
return math.Float32frombits(u32LE), nil
|
||||
}
|
||||
u32BE := binary.BigEndian.Uint32(u32)
|
||||
return math.Float32frombits(u32BE), nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) GetFloat64() (float64, error) {
|
||||
u64, err := f.GetNBytes(8)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
littleEndian, err := f.GetBool()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
if littleEndian {
|
||||
u64LE := binary.LittleEndian.Uint64(u64)
|
||||
return math.Float64frombits(u64LE), nil
|
||||
}
|
||||
u64BE := binary.BigEndian.Uint64(u64)
|
||||
return math.Float64frombits(u64BE), nil
|
||||
}
|
||||
|
||||
func (f *ConsumeFuzzer) CreateSlice(targetSlice interface{}) error {
|
||||
return f.GenerateStruct(targetSlice)
|
||||
}
|
vendor/github.com/AdaLogics/go-fuzz-headers/funcs.go (generated, vendored, new file, 62 lines)
@@ -0,0 +1,62 @@
// Copyright 2023 The go-fuzz-headers Authors.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//      http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package gofuzzheaders

import (
	"fmt"
	"reflect"
)

type Continue struct {
	F *ConsumeFuzzer
}

func (f *ConsumeFuzzer) AddFuncs(fuzzFuncs []interface{}) {
	for i := range fuzzFuncs {
		v := reflect.ValueOf(fuzzFuncs[i])
		if v.Kind() != reflect.Func {
			panic("Need only funcs!")
		}
		t := v.Type()
		if t.NumIn() != 2 || t.NumOut() != 1 {
			fmt.Println(t.NumIn(), t.NumOut())

			panic("Need 2 in and 1 out params. In must be the type. Out must be an error")
		}
		argT := t.In(0)
		switch argT.Kind() {
		case reflect.Ptr, reflect.Map:
		default:
			panic("fuzzFunc must take pointer or map type")
		}
		if t.In(1) != reflect.TypeOf(Continue{}) {
			panic("fuzzFunc's second parameter must be type Continue")
		}
		f.Funcs[argT] = v
	}
}

func (f *ConsumeFuzzer) GenerateWithCustom(targetStruct interface{}) error {
	e := reflect.ValueOf(targetStruct).Elem()
	return f.fuzzStruct(e, true)
}

func (c Continue) GenerateStruct(targetStruct interface{}) error {
	return c.F.GenerateStruct(targetStruct)
}

func (c Continue) GenerateStructWithCustom(targetStruct interface{}) error {
	return c.F.GenerateWithCustom(targetStruct)
}
vendor/github.com/AdaLogics/go-fuzz-headers/sql.go (generated, vendored, new file, 556 lines)
@@ -0,0 +1,556 @@
|
||||
// Copyright 2023 The go-fuzz-headers Authors.
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
package gofuzzheaders
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
)
|
||||
|
||||
// returns a keyword by index
|
||||
func getKeyword(f *ConsumeFuzzer) (string, error) {
|
||||
index, err := f.GetInt()
|
||||
if err != nil {
|
||||
return keywords[0], err
|
||||
}
|
||||
for i, k := range keywords {
|
||||
if i == index {
|
||||
return k, nil
|
||||
}
|
||||
}
|
||||
return keywords[0], fmt.Errorf("could not get a kw")
|
||||
}
|
||||
|
||||
// Simple utility function to check if a string
|
||||
// slice contains a string.
|
||||
func containsString(s []string, e string) bool {
|
||||
for _, a := range s {
|
||||
if a == e {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
// These keywords are used specifically for fuzzing Vitess
|
||||
var keywords = []string{
|
||||
"accessible", "action", "add", "after", "against", "algorithm",
|
||||
"all", "alter", "always", "analyze", "and", "as", "asc", "asensitive",
|
||||
"auto_increment", "avg_row_length", "before", "begin", "between",
|
||||
"bigint", "binary", "_binary", "_utf8mb4", "_utf8", "_latin1", "bit",
|
||||
"blob", "bool", "boolean", "both", "by", "call", "cancel", "cascade",
|
||||
"cascaded", "case", "cast", "channel", "change", "char", "character",
|
||||
"charset", "check", "checksum", "coalesce", "code", "collate", "collation",
|
||||
"column", "columns", "comment", "committed", "commit", "compact", "complete",
|
||||
"compressed", "compression", "condition", "connection", "constraint", "continue",
|
||||
"convert", "copy", "cume_dist", "substr", "substring", "create", "cross",
|
||||
"csv", "current_date", "current_time", "current_timestamp", "current_user",
|
||||
"cursor", "data", "database", "databases", "day", "day_hour", "day_microsecond",
|
||||
"day_minute", "day_second", "date", "datetime", "dec", "decimal", "declare",
|
||||
"default", "definer", "delay_key_write", "delayed", "delete", "dense_rank",
|
||||
"desc", "describe", "deterministic", "directory", "disable", "discard",
|
||||
"disk", "distinct", "distinctrow", "div", "double", "do", "drop", "dumpfile",
|
||||
"duplicate", "dynamic", "each", "else", "elseif", "empty", "enable",
|
||||
"enclosed", "encryption", "end", "enforced", "engine", "engines", "enum",
|
||||
"error", "escape", "escaped", "event", "exchange", "exclusive", "exists",
|
||||
"exit", "explain", "expansion", "export", "extended", "extract", "false",
|
||||
"fetch", "fields", "first", "first_value", "fixed", "float", "float4",
|
||||
"float8", "flush", "for", "force", "foreign", "format", "from", "full",
|
||||
"fulltext", "function", "general", "generated", "geometry", "geometrycollection",
|
||||
"get", "global", "gtid_executed", "grant", "group", "grouping", "groups",
|
||||
"group_concat", "having", "header", "high_priority", "hosts", "hour", "hour_microsecond",
|
||||
"hour_minute", "hour_second", "if", "ignore", "import", "in", "index", "indexes",
|
||||
"infile", "inout", "inner", "inplace", "insensitive", "insert", "insert_method",
|
||||
"int", "int1", "int2", "int3", "int4", "int8", "integer", "interval",
|
||||
"into", "io_after_gtids", "is", "isolation", "iterate", "invoker", "join",
|
||||
"json", "json_table", "key", "keys", "keyspaces", "key_block_size", "kill", "lag",
|
||||
"language", "last", "last_value", "last_insert_id", "lateral", "lead", "leading",
|
||||
"leave", "left", "less", "level", "like", "limit", "linear", "lines",
|
||||
"linestring", "load", "local", "localtime", "localtimestamp", "lock", "logs",
|
||||
"long", "longblob", "longtext", "loop", "low_priority", "manifest",
|
||||
"master_bind", "match", "max_rows", "maxvalue", "mediumblob", "mediumint",
|
||||
"mediumtext", "memory", "merge", "microsecond", "middleint", "min_rows", "minute",
|
||||
"minute_microsecond", "minute_second", "mod", "mode", "modify", "modifies",
|
||||
"multilinestring", "multipoint", "multipolygon", "month", "name",
|
||||
"names", "natural", "nchar", "next", "no", "none", "not", "no_write_to_binlog",
|
||||
"nth_value", "ntile", "null", "numeric", "of", "off", "offset", "on",
|
||||
"only", "open", "optimize", "optimizer_costs", "option", "optionally",
|
||||
"or", "order", "out", "outer", "outfile", "over", "overwrite", "pack_keys",
|
||||
"parser", "partition", "partitioning", "password", "percent_rank", "plugins",
|
||||
"point", "polygon", "precision", "primary", "privileges", "processlist",
|
||||
"procedure", "query", "quarter", "range", "rank", "read", "reads", "read_write",
|
||||
"real", "rebuild", "recursive", "redundant", "references", "regexp", "relay",
|
||||
"release", "remove", "rename", "reorganize", "repair", "repeat", "repeatable",
|
||||
"replace", "require", "resignal", "restrict", "return", "retry", "revert",
|
||||
"revoke", "right", "rlike", "rollback", "row", "row_format", "row_number",
|
||||
"rows", "s3", "savepoint", "schema", "schemas", "second", "second_microsecond",
|
||||
"security", "select", "sensitive", "separator", "sequence", "serializable",
|
||||
"session", "set", "share", "shared", "show", "signal", "signed", "slow",
|
||||
"smallint", "spatial", "specific", "sql", "sqlexception", "sqlstate",
|
||||
"sqlwarning", "sql_big_result", "sql_cache", "sql_calc_found_rows",
|
||||
"sql_no_cache", "sql_small_result", "ssl", "start", "starting",
|
||||
"stats_auto_recalc", "stats_persistent", "stats_sample_pages", "status",
|
||||
"storage", "stored", "straight_join", "stream", "system", "vstream",
|
||||
"table", "tables", "tablespace", "temporary", "temptable", "terminated",
|
||||
"text", "than", "then", "time", "timestamp", "timestampadd", "timestampdiff",
|
||||
"tinyblob", "tinyint", "tinytext", "to", "trailing", "transaction", "tree",
|
||||
"traditional", "trigger", "triggers", "true", "truncate", "uncommitted",
|
||||
"undefined", "undo", "union", "unique", "unlock", "unsigned", "update",
|
||||
"upgrade", "usage", "use", "user", "user_resources", "using", "utc_date",
|
||||
"utc_time", "utc_timestamp", "validation", "values", "variables", "varbinary",
|
||||
"varchar", "varcharacter", "varying", "vgtid_executed", "virtual", "vindex",
|
||||
"vindexes", "view", "vitess", "vitess_keyspaces", "vitess_metadata",
|
||||
"vitess_migration", "vitess_migrations", "vitess_replication_status",
|
||||
"vitess_shards", "vitess_tablets", "vschema", "warnings", "when",
|
||||
"where", "while", "window", "with", "without", "work", "write", "xor",
|
||||
"year", "year_month", "zerofill",
|
||||
}
|
||||
|
||||
// Keywords that could get an additional keyword
|
||||
var needCustomString = []string{
|
||||
"DISTINCTROW", "FROM", // Select keywords:
|
||||
"GROUP BY", "HAVING", "WINDOW",
|
||||
"FOR",
|
||||
"ORDER BY", "LIMIT",
|
||||
"INTO", "PARTITION", "AS", // Insert Keywords:
|
||||
"ON DUPLICATE KEY UPDATE",
|
||||
"WHERE", "LIMIT", // Delete keywords
|
||||
"INFILE", "INTO TABLE", "CHARACTER SET", // Load keywords
|
||||
"TERMINATED BY", "ENCLOSED BY",
|
||||
"ESCAPED BY", "STARTING BY",
|
||||
"TERMINATED BY", "STARTING BY",
|
||||
"IGNORE",
|
||||
"VALUE", "VALUES", // Replace tokens
|
||||
"SET", // Update tokens
|
||||
"ENGINE =", // Drop tokens
|
||||
"DEFINER =", "ON SCHEDULE", "RENAME TO", // Alter tokens
|
||||
"COMMENT", "DO", "INITIAL_SIZE = ", "OPTIONS",
|
||||
}
|
||||
|
||||
var alterTableTokens = [][]string{
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
{"CUSTOM_ALTTER_TABLE_OPTIONS"},
|
||||
{"PARTITION_OPTIONS_FOR_ALTER_TABLE"},
|
||||
}
|
||||
|
||||
var alterTokens = [][]string{
|
||||
{
|
||||
"DATABASE", "SCHEMA", "DEFINER = ", "EVENT", "FUNCTION", "INSTANCE",
|
||||
"LOGFILE GROUP", "PROCEDURE", "SERVER",
|
||||
},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
{
|
||||
"ON SCHEDULE", "ON COMPLETION PRESERVE", "ON COMPLETION NOT PRESERVE",
|
||||
"ADD UNDOFILE", "OPTIONS",
|
||||
},
|
||||
{"RENAME TO", "INITIAL_SIZE = "},
|
||||
{"ENABLE", "DISABLE", "DISABLE ON SLAVE", "ENGINE"},
|
||||
{"COMMENT"},
|
||||
{"DO"},
|
||||
}
|
||||
|
||||
var setTokens = [][]string{
|
||||
{"CHARACTER SET", "CHARSET", "CUSTOM_FUZZ_STRING", "NAMES"},
|
||||
{"CUSTOM_FUZZ_STRING", "DEFAULT", "="},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
}
|
||||
|
||||
var dropTokens = [][]string{
|
||||
{"TEMPORARY", "UNDO"},
|
||||
{
|
||||
"DATABASE", "SCHEMA", "EVENT", "INDEX", "LOGFILE GROUP",
|
||||
"PROCEDURE", "FUNCTION", "SERVER", "SPATIAL REFERENCE SYSTEM",
|
||||
"TABLE", "TABLESPACE", "TRIGGER", "VIEW",
|
||||
},
|
||||
{"IF EXISTS"},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
{"ON", "ENGINE = ", "RESTRICT", "CASCADE"},
|
||||
}
|
||||
|
||||
var renameTokens = [][]string{
|
||||
{"TABLE"},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
{"TO"},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
}
|
||||
|
||||
var truncateTokens = [][]string{
|
||||
{"TABLE"},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
}
|
||||
|
||||
var createTokens = [][]string{
|
||||
{"OR REPLACE", "TEMPORARY", "UNDO"}, // For create spatial reference system
|
||||
{
|
||||
"UNIQUE", "FULLTEXT", "SPATIAL", "ALGORITHM = UNDEFINED", "ALGORITHM = MERGE",
|
||||
"ALGORITHM = TEMPTABLE",
|
||||
},
|
||||
{
|
||||
"DATABASE", "SCHEMA", "EVENT", "FUNCTION", "INDEX", "LOGFILE GROUP",
|
||||
"PROCEDURE", "SERVER", "SPATIAL REFERENCE SYSTEM", "TABLE", "TABLESPACE",
|
||||
"TRIGGER", "VIEW",
|
||||
},
|
||||
{"IF NOT EXISTS"},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
}
|
||||
|
||||
/*
|
||||
// For future use.
|
||||
var updateTokens = [][]string{
|
||||
{"LOW_PRIORITY"},
|
||||
{"IGNORE"},
|
||||
{"SET"},
|
||||
{"WHERE"},
|
||||
{"ORDER BY"},
|
||||
{"LIMIT"},
|
||||
}
|
||||
*/
|
||||
|
||||
var replaceTokens = [][]string{
|
||||
{"LOW_PRIORITY", "DELAYED"},
|
||||
{"INTO"},
|
||||
{"PARTITION"},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
{"VALUES", "VALUE"},
|
||||
}
|
||||
|
||||
var loadTokens = [][]string{
|
||||
{"DATA"},
|
||||
{"LOW_PRIORITY", "CONCURRENT", "LOCAL"},
|
||||
{"INFILE"},
|
||||
{"REPLACE", "IGNORE"},
|
||||
{"INTO TABLE"},
|
||||
{"PARTITION"},
|
||||
{"CHARACTER SET"},
|
||||
{"FIELDS", "COLUMNS"},
|
||||
{"TERMINATED BY"},
|
||||
{"OPTIONALLY"},
|
||||
{"ENCLOSED BY"},
|
||||
{"ESCAPED BY"},
|
||||
{"LINES"},
|
||||
{"STARTING BY"},
|
||||
{"TERMINATED BY"},
|
||||
{"IGNORE"},
|
||||
{"LINES", "ROWS"},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
}
|
||||
|
||||
// These Are everything that comes after "INSERT"
|
||||
var insertTokens = [][]string{
|
||||
{"LOW_PRIORITY", "DELAYED", "HIGH_PRIORITY", "IGNORE"},
|
||||
{"INTO"},
|
||||
{"PARTITION"},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
{"AS"},
|
||||
{"ON DUPLICATE KEY UPDATE"},
|
||||
}
|
||||
|
||||
// These are everything that comes after "SELECT"
|
||||
var selectTokens = [][]string{
|
||||
{"*", "CUSTOM_FUZZ_STRING", "DISTINCTROW"},
|
||||
{"HIGH_PRIORITY"},
|
||||
{"STRAIGHT_JOIN"},
|
||||
{"SQL_SMALL_RESULT", "SQL_BIG_RESULT", "SQL_BUFFER_RESULT"},
|
||||
{"SQL_NO_CACHE", "SQL_CALC_FOUND_ROWS"},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
{"FROM"},
|
||||
{"WHERE"},
|
||||
{"GROUP BY"},
|
||||
{"HAVING"},
|
||||
{"WINDOW"},
|
||||
{"ORDER BY"},
|
||||
{"LIMIT"},
|
||||
{"CUSTOM_FUZZ_STRING"},
|
||||
{"FOR"},
|
||||
}
|
||||
|
||||
// These are everything that comes after "DELETE"
|
||||
var deleteTokens = [][]string{
|
||||
{"LOW_PRIORITY", "QUICK", "IGNORE", "FROM", "AS"},
|
||||
{"PARTITION"},
|
||||
{"WHERE"},
|
||||
{"ORDER BY"},
|
||||
{"LIMIT"},
|
||||
}
|
||||
|
||||
var alter_table_options = []string{
|
||||
"ADD", "COLUMN", "FIRST", "AFTER", "INDEX", "KEY", "FULLTEXT", "SPATIAL",
|
||||
"CONSTRAINT", "UNIQUE", "FOREIGN KEY", "CHECK", "ENFORCED", "DROP", "ALTER",
|
||||
"NOT", "INPLACE", "COPY", "SET", "VISIBLE", "INVISIBLE", "DEFAULT", "CHANGE",
|
||||
"CHARACTER SET", "COLLATE", "DISABLE", "ENABLE", "KEYS", "TABLESPACE", "LOCK",
|
||||
"FORCE", "MODIFY", "SHARED", "EXCLUSIVE", "NONE", "ORDER BY", "RENAME COLUMN",
|
||||
"AS", "=", "ASC", "DESC", "WITH", "WITHOUT", "VALIDATION", "ADD PARTITION",
|
||||
"DROP PARTITION", "DISCARD PARTITION", "IMPORT PARTITION", "TRUNCATE PARTITION",
|
||||
"COALESCE PARTITION", "REORGANIZE PARTITION", "EXCHANGE PARTITION",
|
||||
"ANALYZE PARTITION", "CHECK PARTITION", "OPTIMIZE PARTITION", "REBUILD PARTITION",
|
||||
"REPAIR PARTITION", "REMOVE PARTITIONING", "USING", "BTREE", "HASH", "COMMENT",
|
||||
"KEY_BLOCK_SIZE", "WITH PARSER", "AUTOEXTEND_SIZE", "AUTO_INCREMENT", "AVG_ROW_LENGTH",
|
||||
"CHECKSUM", "INSERT_METHOD", "ROW_FORMAT", "DYNAMIC", "FIXED", "COMPRESSED", "REDUNDANT",
|
||||
"COMPACT", "SECONDARY_ENGINE_ATTRIBUTE", "STATS_AUTO_RECALC", "STATS_PERSISTENT",
|
||||
"STATS_SAMPLE_PAGES", "ZLIB", "LZ4", "ENGINE_ATTRIBUTE", "KEY_BLOCK_SIZE", "MAX_ROWS",
|
||||
"MIN_ROWS", "PACK_KEYS", "PASSWORD", "COMPRESSION", "CONNECTION", "DIRECTORY",
|
||||
"DELAY_KEY_WRITE", "ENCRYPTION", "STORAGE", "DISK", "MEMORY", "UNION",
|
||||
}
|
||||
|
||||
// Creates an 'alter table' statement. 'alter table' is an exception
|
||||
// in that it has its own function. The majority of statements
|
||||
// are created by 'createStmt()'.
|
||||
func createAlterTableStmt(f *ConsumeFuzzer) (string, error) {
|
||||
maxArgs, err := f.GetInt()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
maxArgs = maxArgs % 30
|
||||
if maxArgs == 0 {
|
||||
return "", fmt.Errorf("could not create alter table stmt")
|
||||
}
|
||||
|
||||
var stmt strings.Builder
|
||||
stmt.WriteString("ALTER TABLE ")
|
||||
for i := 0; i < maxArgs; i++ {
|
||||
// Calculate if we get existing token or custom string
|
||||
tokenType, err := f.GetInt()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if tokenType%4 == 1 {
|
||||
customString, err := f.GetString()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
stmt.WriteString(" " + customString)
|
||||
} else {
|
||||
tokenIndex, err := f.GetInt()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
stmt.WriteString(" " + alter_table_options[tokenIndex%len(alter_table_options)])
|
||||
}
|
||||
}
|
||||
return stmt.String(), nil
|
||||
}
|
||||
|
||||
func chooseToken(tokens []string, f *ConsumeFuzzer) (string, error) {
|
||||
index, err := f.GetInt()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
var token strings.Builder
|
||||
token.WriteString(tokens[index%len(tokens)])
|
||||
if token.String() == "CUSTOM_FUZZ_STRING" {
|
||||
customFuzzString, err := f.GetString()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return customFuzzString, nil
|
||||
}
|
||||
|
||||
// Check if token requires an argument
|
||||
if containsString(needCustomString, token.String()) {
|
||||
customFuzzString, err := f.GetString()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
token.WriteString(" " + customFuzzString)
|
||||
}
|
||||
return token.String(), nil
|
||||
}
|
||||
|
||||
var stmtTypes = map[string][][]string{
|
||||
"DELETE": deleteTokens,
|
||||
"INSERT": insertTokens,
|
||||
"SELECT": selectTokens,
|
||||
"LOAD": loadTokens,
|
||||
"REPLACE": replaceTokens,
|
||||
"CREATE": createTokens,
|
||||
"DROP": dropTokens,
|
||||
"RENAME": renameTokens,
|
||||
"TRUNCATE": truncateTokens,
|
||||
"SET": setTokens,
|
||||
"ALTER": alterTokens,
|
||||
"ALTER TABLE": alterTableTokens, // ALTER TABLE has its own set of tokens
|
||||
}
|
||||
|
||||
var stmtTypeEnum = map[int]string{
|
||||
0: "DELETE",
|
||||
1: "INSERT",
|
||||
2: "SELECT",
|
||||
3: "LOAD",
|
||||
4: "REPLACE",
|
||||
5: "CREATE",
|
||||
6: "DROP",
|
||||
7: "RENAME",
|
||||
8: "TRUNCATE",
|
||||
9: "SET",
|
||||
10: "ALTER",
|
||||
11: "ALTER TABLE",
|
||||
}
|
||||
|
||||
func createStmt(f *ConsumeFuzzer) (string, error) {
|
||||
stmtIndex, err := f.GetInt()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
stmtIndex = stmtIndex % len(stmtTypes)
|
||||
|
||||
queryType := stmtTypeEnum[stmtIndex]
|
||||
tokens := stmtTypes[queryType]
|
||||
|
||||
// We have custom creator for ALTER TABLE
|
||||
if queryType == "ALTER TABLE" {
|
||||
query, err := createAlterTableStmt(f)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return query, nil
|
||||
}
|
||||
|
||||
// Here we are creating a query that is not
|
||||
// an 'alter table' query. For available
|
||||
// queries, see "stmtTypes"
|
||||
|
||||
// First specify the first query keyword:
|
||||
var query strings.Builder
|
||||
query.WriteString(queryType)
|
||||
|
||||
// Next create the args for the query
|
||||
queryArgs, err := createStmtArgs(tokens, f)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
query.WriteString(" " + queryArgs)
|
||||
return query.String(), nil
|
||||
}
|
||||
|
||||
// Creates the arguments of a statement. In a select statement
|
||||
// that would be everything after "select".
|
||||
func createStmtArgs(tokenslice [][]string, f *ConsumeFuzzer) (string, error) {
|
||||
var query, token strings.Builder
|
||||
|
||||
// We go through the tokens in the tokenslice,
|
||||
// create the respective token and add it to
|
||||
// "query"
|
||||
for _, tokens := range tokenslice {
|
||||
// For extra randomization, the fuzzer can
|
||||
// choose to not include this token.
|
||||
includeThisToken, err := f.GetBool()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if !includeThisToken {
|
||||
continue
|
||||
}
|
||||
|
||||
// There may be several tokens to choose from:
|
||||
if len(tokens) > 1 {
|
||||
chosenToken, err := chooseToken(tokens, f)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
query.WriteString(" " + chosenToken)
|
||||
} else {
|
||||
token.WriteString(tokens[0])
|
||||
|
||||
// In case the token is "CUSTOM_FUZZ_STRING"
|
||||
// we will then create a non-structured string
|
||||
if token.String() == "CUSTOM_FUZZ_STRING" {
|
||||
customFuzzString, err := f.GetString()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
query.WriteString(" " + customFuzzString)
|
||||
continue
|
||||
}
|
||||
|
||||
// Check if token requires an argument.
|
||||
// Tokens that take an argument can be found
|
||||
// in 'needCustomString'. If so, we add a
|
||||
// non-structured string to the token.
|
||||
if containsString(needCustomString, token.String()) {
|
||||
customFuzzString, err := f.GetString()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
token.WriteString(fmt.Sprintf(" %s", customFuzzString))
|
||||
}
|
||||
query.WriteString(fmt.Sprintf(" %s", token.String()))
|
||||
}
|
||||
}
|
||||
return query.String(), nil
|
||||
}
|
||||
|
||||
// Creates a semi-structured query. It creates a string
|
||||
// that is a combination of the keywords and random strings.
|
||||
func createQuery(f *ConsumeFuzzer) (string, error) {
|
||||
queryLen, err := f.GetInt()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
maxLen := queryLen % 60
|
||||
if maxLen == 0 {
|
||||
return "", fmt.Errorf("could not create a query")
|
||||
}
|
||||
var query strings.Builder
|
||||
for i := 0; i < maxLen; i++ {
|
||||
// Get a new token:
|
||||
useKeyword, err := f.GetBool()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if useKeyword {
|
||||
keyword, err := getKeyword(f)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
query.WriteString(" " + keyword)
|
||||
} else {
|
||||
customString, err := f.GetString()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
query.WriteString(" " + customString)
|
||||
}
|
||||
}
|
||||
if query.String() == "" {
|
||||
return "", fmt.Errorf("could not create a query")
|
||||
}
|
||||
return query.String(), nil
|
||||
}
|
||||
|
||||
// GetSQLString is the API that users interact with.
|
||||
//
|
||||
// Usage:
|
||||
//
|
||||
// f := NewConsumer(data)
|
||||
// sqlString, err := f.GetSQLString()
|
||||
func (f *ConsumeFuzzer) GetSQLString() (string, error) {
|
||||
var query string
|
||||
veryStructured, err := f.GetBool()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
if veryStructured {
|
||||
query, err = createStmt(f)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
} else {
|
||||
query, err = createQuery(f)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
}
|
||||
return query, nil
|
||||
}
|
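For reference, here is a minimal sketch of how `GetSQLString` might be wired into a native Go fuzz target. The import alias and the `parseSQL` function are placeholders for whatever SQL parser is actually under test; only `NewConsumer` and `GetSQLString` come from this package.

```go
package sqlfuzz

import (
	"testing"

	fuzz "github.com/AdaLogics/go-fuzz-headers"
)

// parseSQL is a hypothetical stand-in for the parser being fuzzed.
func parseSQL(query string) error { return nil }

func FuzzSQLParser(f *testing.F) {
	f.Fuzz(func(t *testing.T, data []byte) {
		c := fuzz.NewConsumer(data)
		query, err := c.GetSQLString()
		if err != nil {
			// Not enough fuzz data to build a query; skip this input.
			return
		}
		_ = parseSQL(query)
	})
}
```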
1
vendor/github.com/Microsoft/go-winio/.gitattributes
generated
vendored
Normal file
@ -0,0 +1 @@
|
||||
* text=auto eol=lf
|
9
vendor/github.com/Microsoft/go-winio/.gitignore
generated
vendored
@ -1 +1,10 @@
|
||||
.vscode/
|
||||
|
||||
*.exe
|
||||
|
||||
# testing
|
||||
testdata
|
||||
|
||||
# go workspaces
|
||||
go.work
|
||||
go.work.sum
|
||||
|
144
vendor/github.com/Microsoft/go-winio/.golangci.yml
generated
vendored
Normal file
@ -0,0 +1,144 @@
|
||||
run:
|
||||
skip-dirs:
|
||||
- pkg/etw/sample
|
||||
|
||||
linters:
|
||||
enable:
|
||||
# style
|
||||
- containedctx # struct contains a context
|
||||
- dupl # duplicate code
|
||||
- errname # errors are named correctly
|
||||
- goconst # strings that should be constants
|
||||
- godot # comments end in a period
|
||||
- misspell
|
||||
- nolintlint # "//nolint" directives are properly explained
|
||||
- revive # golint replacement
|
||||
- stylecheck # golint replacement, less configurable than revive
|
||||
- unconvert # unnecessary conversions
|
||||
- wastedassign
|
||||
|
||||
# bugs, performance, unused, etc ...
|
||||
- contextcheck # function uses a non-inherited context
|
||||
- errorlint # errors not wrapped for 1.13
|
||||
- exhaustive # check exhaustiveness of enum switch statements
|
||||
- gofmt # files are gofmt'ed
|
||||
- gosec # security
|
||||
- nestif # deeply nested ifs
|
||||
- nilerr # returns nil even with non-nil error
|
||||
- prealloc # slices that can be pre-allocated
|
||||
- structcheck # unused struct fields
|
||||
- unparam # unused function params
|
||||
|
||||
issues:
|
||||
exclude-rules:
|
||||
# err is very often shadowed in nested scopes
|
||||
- linters:
|
||||
- govet
|
||||
text: '^shadow: declaration of "err" shadows declaration'
|
||||
|
||||
# ignore long lines for skip autogen directives
|
||||
- linters:
|
||||
- revive
|
||||
text: "^line-length-limit: "
|
||||
source: "^//(go:generate|sys) "
|
||||
|
||||
# allow unjustified ignores of error checks in defer statements
|
||||
- linters:
|
||||
- nolintlint
|
||||
text: "^directive `//nolint:errcheck` should provide explanation"
|
||||
source: '^\s*defer '
|
||||
|
||||
# allow unjustified ignores of error lints for io.EOF
|
||||
- linters:
|
||||
- nolintlint
|
||||
text: "^directive `//nolint:errorlint` should provide explanation"
|
||||
source: '[=|!]= io.EOF'
|
||||
|
||||
|
||||
linters-settings:
|
||||
govet:
|
||||
enable-all: true
|
||||
disable:
|
||||
# struct order is often for Win32 compat
|
||||
# also, ignore pointer bytes/GC issues for now until performance becomes an issue
|
||||
- fieldalignment
|
||||
check-shadowing: true
|
||||
nolintlint:
|
||||
allow-leading-space: false
|
||||
require-explanation: true
|
||||
require-specific: true
|
||||
revive:
|
||||
# revive is more configurable than static check, so likely the preferred alternative to static-check
|
||||
# (once the perf issue is solved: https://github.com/golangci/golangci-lint/issues/2997)
|
||||
enable-all-rules:
|
||||
true
|
||||
# https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md
|
||||
rules:
|
||||
# rules with required arguments
|
||||
- name: argument-limit
|
||||
disabled: true
|
||||
- name: banned-characters
|
||||
disabled: true
|
||||
- name: cognitive-complexity
|
||||
disabled: true
|
||||
- name: cyclomatic
|
||||
disabled: true
|
||||
- name: file-header
|
||||
disabled: true
|
||||
- name: function-length
|
||||
disabled: true
|
||||
- name: function-result-limit
|
||||
disabled: true
|
||||
- name: max-public-structs
|
||||
disabled: true
|
||||
# generally annoying rules
|
||||
- name: add-constant # complains about any and all strings and integers
|
||||
disabled: true
|
||||
- name: confusing-naming # we frequently use "Foo()" and "foo()" together
|
||||
disabled: true
|
||||
- name: flag-parameter # excessive, and a common idiom we use
|
||||
disabled: true
|
||||
# general config
|
||||
- name: line-length-limit
|
||||
arguments:
|
||||
- 140
|
||||
- name: var-naming
|
||||
arguments:
|
||||
- []
|
||||
- - CID
|
||||
- CRI
|
||||
- CTRD
|
||||
- DACL
|
||||
- DLL
|
||||
- DOS
|
||||
- ETW
|
||||
- FSCTL
|
||||
- GCS
|
||||
- GMSA
|
||||
- HCS
|
||||
- HV
|
||||
- IO
|
||||
- LCOW
|
||||
- LDAP
|
||||
- LPAC
|
||||
- LTSC
|
||||
- MMIO
|
||||
- NT
|
||||
- OCI
|
||||
- PMEM
|
||||
- PWSH
|
||||
- RX
|
||||
- SACl
|
||||
- SID
|
||||
- SMB
|
||||
- TX
|
||||
- VHD
|
||||
- VHDX
|
||||
- VMID
|
||||
- VPCI
|
||||
- WCOW
|
||||
- WIM
|
||||
stylecheck:
|
||||
checks:
|
||||
- "all"
|
||||
- "-ST1003" # use revive's var naming
|
74
vendor/github.com/Microsoft/go-winio/README.md
generated
vendored
@ -13,16 +13,60 @@ Please see the LICENSE file for licensing information.
|
||||
|
||||
## Contributing
|
||||
|
||||
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA)
|
||||
declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
|
||||
This project welcomes contributions and suggestions.
|
||||
Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that
|
||||
you have the right to, and actually do, grant us the rights to use your contribution.
|
||||
For details, visit [Microsoft CLA](https://cla.microsoft.com).
|
||||
|
||||
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR
|
||||
appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
|
||||
When you submit a pull request, a CLA-bot will automatically determine whether you need to
|
||||
provide a CLA and decorate the PR appropriately (e.g., label, comment).
|
||||
Simply follow the instructions provided by the bot.
|
||||
You will only need to do this once across all repos using our CLA.
|
||||
|
||||
We also require that contributors sign their commits using git commit -s or git commit --signoff to certify they either authored the work themselves
|
||||
or otherwise have permission to use it in this project. Please see https://developercertificate.org/ for more info, as well as to make sure that you can
|
||||
attest to the rules listed. Our CI uses the DCO Github app to ensure that all commits in a given PR are signed-off.
|
||||
Additionally, the pull request pipeline requires the following steps to be performed before
|
||||
merging.
|
||||
|
||||
### Code Sign-Off
|
||||
|
||||
We require that contributors sign their commits using [`git commit --signoff`][git-commit-s]
|
||||
to certify they either authored the work themselves or otherwise have permission to use it in this project.
|
||||
|
||||
A range of commits can be signed off using [`git rebase --signoff`][git-rebase-s].
|
||||
|
||||
Please see [the developer certificate](https://developercertificate.org) for more info,
|
||||
as well as to make sure that you can attest to the rules listed.
|
||||
Our CI uses the DCO Github app to ensure that all commits in a given PR are signed-off.
|
||||
|
||||
### Linting
|
||||
|
||||
Code must pass a linting stage, which uses [`golangci-lint`][lint].
|
||||
The linting settings are stored in [`.golangci.yaml`](./.golangci.yaml), and can be run
|
||||
automatically with VSCode by adding the following to your workspace or folder settings:
|
||||
|
||||
```json
|
||||
"go.lintTool": "golangci-lint",
|
||||
"go.lintOnSave": "package",
|
||||
```
|
||||
|
||||
Additional editor [integrations options are also available][lint-ide].
|
||||
|
||||
Alternatively, `golangci-lint` can be [installed locally][lint-install] and run from the repo root:
|
||||
|
||||
```shell
|
||||
# use . or specify a path to only lint a package
|
||||
# to show all lint errors, use flags "--max-issues-per-linter=0 --max-same-issues=0"
|
||||
> golangci-lint run ./...
|
||||
```
|
||||
|
||||
### Go Generate
|
||||
|
||||
The pipeline checks that auto-generated code, via `go generate`, is up to date.
|
||||
|
||||
This can be done for the entire repo:
|
||||
|
||||
```shell
|
||||
> go generate ./...
|
||||
```
|
||||
|
||||
## Code of Conduct
|
||||
|
||||
@ -30,8 +74,16 @@ This project has adopted the [Microsoft Open Source Code of Conduct](https://ope
|
||||
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
|
||||
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
|
||||
|
||||
|
||||
|
||||
## Special Thanks
|
||||
Thanks to natefinch for the inspiration for this library. See https://github.com/natefinch/npipe
|
||||
for another named pipe implementation.
|
||||
|
||||
Thanks to [natefinch][natefinch] for the inspiration for this library.
|
||||
See [npipe](https://github.com/natefinch/npipe) for another named pipe implementation.
|
||||
|
||||
[lint]: https://golangci-lint.run/
|
||||
[lint-ide]: https://golangci-lint.run/usage/integrations/#editor-integration
|
||||
[lint-install]: https://golangci-lint.run/usage/install/#local-installation
|
||||
|
||||
[git-commit-s]: https://git-scm.com/docs/git-commit#Documentation/git-commit.txt--s
|
||||
[git-rebase-s]: https://git-scm.com/docs/git-rebase#Documentation/git-rebase.txt---signoff
|
||||
|
||||
[natefinch]: https://github.com/natefinch
|
||||
|
41
vendor/github.com/Microsoft/go-winio/SECURITY.md
generated
vendored
Normal file
@ -0,0 +1,41 @@
|
||||
<!-- BEGIN MICROSOFT SECURITY.MD V0.0.7 BLOCK -->
|
||||
|
||||
## Security
|
||||
|
||||
Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
|
||||
|
||||
If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
|
||||
|
||||
## Reporting Security Issues
|
||||
|
||||
**Please do not report security vulnerabilities through public GitHub issues.**
|
||||
|
||||
Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
|
||||
|
||||
If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
|
||||
|
||||
You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
|
||||
|
||||
Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
|
||||
|
||||
* Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
|
||||
* Full paths of source file(s) related to the manifestation of the issue
|
||||
* The location of the affected source code (tag/branch/commit or direct URL)
|
||||
* Any special configuration required to reproduce the issue
|
||||
* Step-by-step instructions to reproduce the issue
|
||||
* Proof-of-concept or exploit code (if possible)
|
||||
* Impact of the issue, including how an attacker might exploit the issue
|
||||
|
||||
This information will help us triage your report more quickly.
|
||||
|
||||
If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
|
||||
|
||||
## Preferred Languages
|
||||
|
||||
We prefer all communications to be in English.
|
||||
|
||||
## Policy
|
||||
|
||||
Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
|
||||
|
||||
<!-- END MICROSOFT SECURITY.MD BLOCK -->
|
48
vendor/github.com/Microsoft/go-winio/backup.go
generated
vendored
@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package winio
|
||||
@ -7,11 +8,12 @@ import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"runtime"
|
||||
"syscall"
|
||||
"unicode/utf16"
|
||||
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
//sys backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupRead
|
||||
@ -24,7 +26,7 @@ const (
|
||||
BackupAlternateData
|
||||
BackupLink
|
||||
BackupPropertyData
|
||||
BackupObjectId
|
||||
BackupObjectId //revive:disable-line:var-naming ID, not Id
|
||||
BackupReparseData
|
||||
BackupSparseBlock
|
||||
BackupTxfsData
|
||||
@ -34,14 +36,16 @@ const (
|
||||
StreamSparseAttributes = uint32(8)
|
||||
)
|
||||
|
||||
//nolint:revive // var-naming: ALL_CAPS
|
||||
const (
|
||||
WRITE_DAC = 0x40000
|
||||
WRITE_OWNER = 0x80000
|
||||
ACCESS_SYSTEM_SECURITY = 0x1000000
|
||||
WRITE_DAC = windows.WRITE_DAC
|
||||
WRITE_OWNER = windows.WRITE_OWNER
|
||||
ACCESS_SYSTEM_SECURITY = windows.ACCESS_SYSTEM_SECURITY
|
||||
)
|
||||
|
||||
// BackupHeader represents a backup stream of a file.
|
||||
type BackupHeader struct {
|
||||
//revive:disable-next-line:var-naming ID, not Id
|
||||
Id uint32 // The backup stream ID
|
||||
Attributes uint32 // Stream attributes
|
||||
Size int64 // The size of the stream in bytes
|
||||
@ -49,8 +53,8 @@ type BackupHeader struct {
|
||||
Offset int64 // The offset of the stream in the file (for BackupSparseBlock only).
|
||||
}
|
||||
|
||||
type win32StreamId struct {
|
||||
StreamId uint32
|
||||
type win32StreamID struct {
|
||||
StreamID uint32
|
||||
Attributes uint32
|
||||
Size uint64
|
||||
NameSize uint32
|
||||
@ -71,7 +75,7 @@ func NewBackupStreamReader(r io.Reader) *BackupStreamReader {
|
||||
// Next returns the next backup stream and prepares for calls to Read(). It skips the remainder of the current stream if
|
||||
// it was not completely read.
|
||||
func (r *BackupStreamReader) Next() (*BackupHeader, error) {
|
||||
if r.bytesLeft > 0 {
|
||||
if r.bytesLeft > 0 { //nolint:nestif // todo: flatten this
|
||||
if s, ok := r.r.(io.Seeker); ok {
|
||||
// Make sure Seek on io.SeekCurrent sometimes succeeds
|
||||
// before trying the actual seek.
|
||||
@ -82,16 +86,16 @@ func (r *BackupStreamReader) Next() (*BackupHeader, error) {
|
||||
r.bytesLeft = 0
|
||||
}
|
||||
}
|
||||
if _, err := io.Copy(ioutil.Discard, r); err != nil {
|
||||
if _, err := io.Copy(io.Discard, r); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
var wsi win32StreamId
|
||||
var wsi win32StreamID
|
||||
if err := binary.Read(r.r, binary.LittleEndian, &wsi); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
hdr := &BackupHeader{
|
||||
Id: wsi.StreamId,
|
||||
Id: wsi.StreamID,
|
||||
Attributes: wsi.Attributes,
|
||||
Size: int64(wsi.Size),
|
||||
}
|
||||
@ -102,7 +106,7 @@ func (r *BackupStreamReader) Next() (*BackupHeader, error) {
|
||||
}
|
||||
hdr.Name = syscall.UTF16ToString(name)
|
||||
}
|
||||
if wsi.StreamId == BackupSparseBlock {
|
||||
if wsi.StreamID == BackupSparseBlock {
|
||||
if err := binary.Read(r.r, binary.LittleEndian, &hdr.Offset); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -147,8 +151,8 @@ func (w *BackupStreamWriter) WriteHeader(hdr *BackupHeader) error {
|
||||
return fmt.Errorf("missing %d bytes", w.bytesLeft)
|
||||
}
|
||||
name := utf16.Encode([]rune(hdr.Name))
|
||||
wsi := win32StreamId{
|
||||
StreamId: hdr.Id,
|
||||
wsi := win32StreamID{
|
||||
StreamID: hdr.Id,
|
||||
Attributes: hdr.Attributes,
|
||||
Size: uint64(hdr.Size),
|
||||
NameSize: uint32(len(name) * 2),
|
||||
@ -203,7 +207,7 @@ func (r *BackupFileReader) Read(b []byte) (int, error) {
|
||||
var bytesRead uint32
|
||||
err := backupRead(syscall.Handle(r.f.Fd()), b, &bytesRead, false, r.includeSecurity, &r.ctx)
|
||||
if err != nil {
|
||||
return 0, &os.PathError{"BackupRead", r.f.Name(), err}
|
||||
return 0, &os.PathError{Op: "BackupRead", Path: r.f.Name(), Err: err}
|
||||
}
|
||||
runtime.KeepAlive(r.f)
|
||||
if bytesRead == 0 {
|
||||
@ -216,7 +220,7 @@ func (r *BackupFileReader) Read(b []byte) (int, error) {
|
||||
// the underlying file.
|
||||
func (r *BackupFileReader) Close() error {
|
||||
if r.ctx != 0 {
|
||||
backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx)
|
||||
_ = backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx)
|
||||
runtime.KeepAlive(r.f)
|
||||
r.ctx = 0
|
||||
}
|
||||
@ -242,7 +246,7 @@ func (w *BackupFileWriter) Write(b []byte) (int, error) {
|
||||
var bytesWritten uint32
|
||||
err := backupWrite(syscall.Handle(w.f.Fd()), b, &bytesWritten, false, w.includeSecurity, &w.ctx)
|
||||
if err != nil {
|
||||
return 0, &os.PathError{"BackupWrite", w.f.Name(), err}
|
||||
return 0, &os.PathError{Op: "BackupWrite", Path: w.f.Name(), Err: err}
|
||||
}
|
||||
runtime.KeepAlive(w.f)
|
||||
if int(bytesWritten) != len(b) {
|
||||
@ -255,7 +259,7 @@ func (w *BackupFileWriter) Write(b []byte) (int, error) {
|
||||
// close the underlying file.
|
||||
func (w *BackupFileWriter) Close() error {
|
||||
if w.ctx != 0 {
|
||||
backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx)
|
||||
_ = backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx)
|
||||
runtime.KeepAlive(w.f)
|
||||
w.ctx = 0
|
||||
}
|
||||
@ -271,7 +275,13 @@ func OpenForBackup(path string, access uint32, share uint32, createmode uint32)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
h, err := syscall.CreateFile(&winPath[0], access, share, nil, createmode, syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT, 0)
|
||||
h, err := syscall.CreateFile(&winPath[0],
|
||||
access,
|
||||
share,
|
||||
nil,
|
||||
createmode,
|
||||
syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT,
|
||||
0)
|
||||
if err != nil {
|
||||
err = &os.PathError{Op: "open", Path: path, Err: err}
|
||||
return nil, err
|
||||
|
22
vendor/github.com/Microsoft/go-winio/doc.go
generated
vendored
Normal file
@ -0,0 +1,22 @@
|
||||
// This package provides utilities for efficiently performing Win32 IO operations in Go.
|
||||
// Currently, this package provides support for general IO and management of
|
||||
// - named pipes
|
||||
// - files
|
||||
// - [Hyper-V sockets]
|
||||
//
|
||||
// This code is similar to Go's [net] package, and uses IO completion ports to avoid
|
||||
// blocking IO on system threads, allowing Go to reuse the thread to schedule other goroutines.
|
||||
//
|
||||
// This limits support to Windows Vista and newer operating systems.
|
||||
//
|
||||
// Additionally, this package provides support for:
|
||||
// - creating and managing GUIDs
|
||||
// - writing to [ETW]
|
||||
// - opening and managing VHDs
|
||||
// - parsing [Windows Image files]
|
||||
// - auto-generating Win32 API code
|
||||
//
|
||||
// [Hyper-V sockets]: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/make-integration-service
|
||||
// [ETW]: https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/event-tracing-for-windows--etw-
|
||||
// [Windows Image files]: https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/work-with-windows-images
|
||||
package winio
|
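As a rough illustration of the package doc above, here is a hedged sketch of dialing a named pipe with the `DialPipe` helper exported elsewhere in this module. The pipe name is a placeholder, and the program is Windows-only.

```go
package main

import (
	"fmt"
	"log"
	"time"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	// Placeholder pipe name; replace it with a pipe your server actually creates.
	timeout := 2 * time.Second
	conn, err := winio.DialPipe(`\\.\pipe\example`, &timeout)
	if err != nil {
		log.Fatalf("dial pipe: %v", err)
	}
	defer conn.Close()

	// The returned value is a net.Conn, so the usual read/write APIs apply.
	if _, err := fmt.Fprintln(conn, "hello"); err != nil {
		log.Fatalf("write: %v", err)
	}
}
```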
8
vendor/github.com/Microsoft/go-winio/ea.go
generated
vendored
@ -33,7 +33,7 @@ func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) {
|
||||
err = binary.Read(bytes.NewReader(b), binary.LittleEndian, &info)
|
||||
if err != nil {
|
||||
err = errInvalidEaBuffer
|
||||
return
|
||||
return ea, nb, err
|
||||
}
|
||||
|
||||
nameOffset := fileFullEaInformationSize
|
||||
@ -43,7 +43,7 @@ func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) {
|
||||
nextOffset := int(info.NextEntryOffset)
|
||||
if valueLen+valueOffset > len(b) || nextOffset < 0 || nextOffset > len(b) {
|
||||
err = errInvalidEaBuffer
|
||||
return
|
||||
return ea, nb, err
|
||||
}
|
||||
|
||||
ea.Name = string(b[nameOffset : nameOffset+nameLen])
|
||||
@ -52,7 +52,7 @@ func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) {
|
||||
if info.NextEntryOffset != 0 {
|
||||
nb = b[info.NextEntryOffset:]
|
||||
}
|
||||
return
|
||||
return ea, nb, err
|
||||
}
|
||||
|
||||
// DecodeExtendedAttributes decodes a list of EAs from a FILE_FULL_EA_INFORMATION
|
||||
@ -67,7 +67,7 @@ func DecodeExtendedAttributes(b []byte) (eas []ExtendedAttribute, err error) {
|
||||
eas = append(eas, ea)
|
||||
b = nb
|
||||
}
|
||||
return
|
||||
return eas, err
|
||||
}
|
||||
|
||||
func writeEa(buf *bytes.Buffer, ea *ExtendedAttribute, last bool) error {
|
||||
|
66
vendor/github.com/Microsoft/go-winio/file.go
generated
vendored
@ -11,6 +11,8 @@ import (
|
||||
"sync/atomic"
|
||||
"syscall"
|
||||
"time"
|
||||
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
//sys cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) = CancelIoEx
|
||||
@ -24,6 +26,8 @@ type atomicBool int32
|
||||
func (b *atomicBool) isSet() bool { return atomic.LoadInt32((*int32)(b)) != 0 }
|
||||
func (b *atomicBool) setFalse() { atomic.StoreInt32((*int32)(b), 0) }
|
||||
func (b *atomicBool) setTrue() { atomic.StoreInt32((*int32)(b), 1) }
|
||||
|
||||
//revive:disable-next-line:predeclared Keep "new" to maintain consistency with "atomic" pkg
|
||||
func (b *atomicBool) swap(new bool) bool {
|
||||
var newInt int32
|
||||
if new {
|
||||
@ -32,11 +36,6 @@ func (b *atomicBool) swap(new bool) bool {
|
||||
return atomic.SwapInt32((*int32)(b), newInt) == 1
|
||||
}
|
||||
|
||||
const (
|
||||
cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS = 1
|
||||
cFILE_SKIP_SET_EVENT_ON_HANDLE = 2
|
||||
)
|
||||
|
||||
var (
|
||||
ErrFileClosed = errors.New("file has already been closed")
|
||||
ErrTimeout = &timeoutError{}
|
||||
@ -44,28 +43,28 @@ var (
|
||||
|
||||
type timeoutError struct{}
|
||||
|
||||
func (e *timeoutError) Error() string { return "i/o timeout" }
|
||||
func (e *timeoutError) Timeout() bool { return true }
|
||||
func (e *timeoutError) Temporary() bool { return true }
|
||||
func (*timeoutError) Error() string { return "i/o timeout" }
|
||||
func (*timeoutError) Timeout() bool { return true }
|
||||
func (*timeoutError) Temporary() bool { return true }
|
||||
|
||||
type timeoutChan chan struct{}
|
||||
|
||||
var ioInitOnce sync.Once
|
||||
var ioCompletionPort syscall.Handle
|
||||
|
||||
// ioResult contains the result of an asynchronous IO operation
|
||||
// ioResult contains the result of an asynchronous IO operation.
|
||||
type ioResult struct {
|
||||
bytes uint32
|
||||
err error
|
||||
}
|
||||
|
||||
// ioOperation represents an outstanding asynchronous Win32 IO
|
||||
// ioOperation represents an outstanding asynchronous Win32 IO.
|
||||
type ioOperation struct {
|
||||
o syscall.Overlapped
|
||||
ch chan ioResult
|
||||
}
|
||||
|
||||
func initIo() {
|
||||
func initIO() {
|
||||
h, err := createIoCompletionPort(syscall.InvalidHandle, 0, 0, 0xffffffff)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
@ -94,15 +93,15 @@ type deadlineHandler struct {
|
||||
timedout atomicBool
|
||||
}
|
||||
|
||||
// makeWin32File makes a new win32File from an existing file handle
|
||||
// makeWin32File makes a new win32File from an existing file handle.
|
||||
func makeWin32File(h syscall.Handle) (*win32File, error) {
|
||||
f := &win32File{handle: h}
|
||||
ioInitOnce.Do(initIo)
|
||||
ioInitOnce.Do(initIO)
|
||||
_, err := createIoCompletionPort(h, ioCompletionPort, 0, 0xffffffff)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
err = setFileCompletionNotificationModes(h, cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS|cFILE_SKIP_SET_EVENT_ON_HANDLE)
|
||||
err = setFileCompletionNotificationModes(h, windows.FILE_SKIP_COMPLETION_PORT_ON_SUCCESS|windows.FILE_SKIP_SET_EVENT_ON_HANDLE)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -121,14 +120,14 @@ func MakeOpenFile(h syscall.Handle) (io.ReadWriteCloser, error) {
|
||||
return f, nil
|
||||
}
|
||||
|
||||
// closeHandle closes the resources associated with a Win32 handle
|
||||
// closeHandle closes the resources associated with a Win32 handle.
|
||||
func (f *win32File) closeHandle() {
|
||||
f.wgLock.Lock()
|
||||
// Atomically set that we are closing, releasing the resources only once.
|
||||
if !f.closing.swap(true) {
|
||||
f.wgLock.Unlock()
|
||||
// cancel all IO and wait for it to complete
|
||||
cancelIoEx(f.handle, nil)
|
||||
_ = cancelIoEx(f.handle, nil)
|
||||
f.wg.Wait()
|
||||
// at this point, no new IO can start
|
||||
syscall.Close(f.handle)
|
||||
@ -144,14 +143,14 @@ func (f *win32File) Close() error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// IsClosed checks if the file has been closed
|
||||
// IsClosed checks if the file has been closed.
|
||||
func (f *win32File) IsClosed() bool {
|
||||
return f.closing.isSet()
|
||||
}
|
||||
|
||||
// prepareIo prepares for a new IO operation.
|
||||
// prepareIO prepares for a new IO operation.
|
||||
// The caller must call f.wg.Done() when the IO is finished, prior to Close() returning.
|
||||
func (f *win32File) prepareIo() (*ioOperation, error) {
|
||||
func (f *win32File) prepareIO() (*ioOperation, error) {
|
||||
f.wgLock.RLock()
|
||||
if f.closing.isSet() {
|
||||
f.wgLock.RUnlock()
|
||||
@ -164,7 +163,7 @@ func (f *win32File) prepareIo() (*ioOperation, error) {
|
||||
return c, nil
|
||||
}
|
||||
|
||||
// ioCompletionProcessor processes completed async IOs forever
|
||||
// ioCompletionProcessor processes completed async IOs forever.
|
||||
func ioCompletionProcessor(h syscall.Handle) {
|
||||
for {
|
||||
var bytes uint32
|
||||
@ -178,15 +177,17 @@ func ioCompletionProcessor(h syscall.Handle) {
|
||||
}
|
||||
}
|
||||
|
||||
// asyncIo processes the return value from ReadFile or WriteFile, blocking until
|
||||
// todo: helsaawy - create an asyncIO version that takes a context
|
||||
|
||||
// asyncIO processes the return value from ReadFile or WriteFile, blocking until
|
||||
// the operation has actually completed.
|
||||
func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, err error) (int, error) {
|
||||
if err != syscall.ERROR_IO_PENDING {
|
||||
func (f *win32File) asyncIO(c *ioOperation, d *deadlineHandler, bytes uint32, err error) (int, error) {
|
||||
if err != syscall.ERROR_IO_PENDING { //nolint:errorlint // err is Errno
|
||||
return int(bytes), err
|
||||
}
|
||||
|
||||
if f.closing.isSet() {
|
||||
cancelIoEx(f.handle, &c.o)
|
||||
_ = cancelIoEx(f.handle, &c.o)
|
||||
}
|
||||
|
||||
var timeout timeoutChan
|
||||
@ -200,7 +201,7 @@ func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, er
|
||||
select {
|
||||
case r = <-c.ch:
|
||||
err = r.err
|
||||
if err == syscall.ERROR_OPERATION_ABORTED {
|
||||
if err == syscall.ERROR_OPERATION_ABORTED { //nolint:errorlint // err is Errno
|
||||
if f.closing.isSet() {
|
||||
err = ErrFileClosed
|
||||
}
|
||||
@ -210,10 +211,10 @@ func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, er
|
||||
err = wsaGetOverlappedResult(f.handle, &c.o, &bytes, false, &flags)
|
||||
}
|
||||
case <-timeout:
|
||||
cancelIoEx(f.handle, &c.o)
|
||||
_ = cancelIoEx(f.handle, &c.o)
|
||||
r = <-c.ch
|
||||
err = r.err
|
||||
if err == syscall.ERROR_OPERATION_ABORTED {
|
||||
if err == syscall.ERROR_OPERATION_ABORTED { //nolint:errorlint // err is Errno
|
||||
err = ErrTimeout
|
||||
}
|
||||
}
|
||||
@ -221,13 +222,14 @@ func (f *win32File) asyncIo(c *ioOperation, d *deadlineHandler, bytes uint32, er
|
||||
// runtime.KeepAlive is needed, as c is passed via native
|
||||
// code to ioCompletionProcessor, c must remain alive
|
||||
// until the channel read is complete.
|
||||
// todo: (de)allocate *ioOperation via win32 heap functions, instead of needing to KeepAlive?
|
||||
runtime.KeepAlive(c)
|
||||
return int(r.bytes), err
|
||||
}
|
||||
|
||||
// Read reads from a file handle.
|
||||
func (f *win32File) Read(b []byte) (int, error) {
|
||||
c, err := f.prepareIo()
|
||||
c, err := f.prepareIO()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
@ -239,13 +241,13 @@ func (f *win32File) Read(b []byte) (int, error) {
|
||||
|
||||
var bytes uint32
|
||||
err = syscall.ReadFile(f.handle, b, &bytes, &c.o)
|
||||
n, err := f.asyncIo(c, &f.readDeadline, bytes, err)
|
||||
n, err := f.asyncIO(c, &f.readDeadline, bytes, err)
|
||||
runtime.KeepAlive(b)
|
||||
|
||||
// Handle EOF conditions.
|
||||
if err == nil && n == 0 && len(b) != 0 {
|
||||
return 0, io.EOF
|
||||
} else if err == syscall.ERROR_BROKEN_PIPE {
|
||||
} else if err == syscall.ERROR_BROKEN_PIPE { //nolint:errorlint // err is Errno
|
||||
return 0, io.EOF
|
||||
} else {
|
||||
return n, err
|
||||
@ -254,7 +256,7 @@ func (f *win32File) Read(b []byte) (int, error) {
|
||||
|
||||
// Write writes to a file handle.
|
||||
func (f *win32File) Write(b []byte) (int, error) {
|
||||
c, err := f.prepareIo()
|
||||
c, err := f.prepareIO()
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
@ -266,7 +268,7 @@ func (f *win32File) Write(b []byte) (int, error) {
|
||||
|
||||
var bytes uint32
|
||||
err = syscall.WriteFile(f.handle, b, &bytes, &c.o)
|
||||
n, err := f.asyncIo(c, &f.writeDeadline, bytes, err)
|
||||
n, err := f.asyncIO(c, &f.writeDeadline, bytes, err)
|
||||
runtime.KeepAlive(b)
|
||||
return n, err
|
||||
}
|
||||
|
29
vendor/github.com/Microsoft/go-winio/fileinfo.go
generated
vendored
@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package winio
|
||||
@ -14,13 +15,18 @@ import (
|
||||
type FileBasicInfo struct {
|
||||
CreationTime, LastAccessTime, LastWriteTime, ChangeTime windows.Filetime
|
||||
FileAttributes uint32
|
||||
pad uint32 // padding
|
||||
_ uint32 // padding
|
||||
}
|
||||
|
||||
// GetFileBasicInfo retrieves times and attributes for a file.
|
||||
func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) {
|
||||
bi := &FileBasicInfo{}
|
||||
if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
|
||||
if err := windows.GetFileInformationByHandleEx(
|
||||
windows.Handle(f.Fd()),
|
||||
windows.FileBasicInfo,
|
||||
(*byte)(unsafe.Pointer(bi)),
|
||||
uint32(unsafe.Sizeof(*bi)),
|
||||
); err != nil {
|
||||
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
|
||||
}
|
||||
runtime.KeepAlive(f)
|
||||
@ -29,7 +35,12 @@ func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) {
|
||||
|
||||
// SetFileBasicInfo sets times and attributes for a file.
|
||||
func SetFileBasicInfo(f *os.File, bi *FileBasicInfo) error {
|
||||
if err := windows.SetFileInformationByHandle(windows.Handle(f.Fd()), windows.FileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
|
||||
if err := windows.SetFileInformationByHandle(
|
||||
windows.Handle(f.Fd()),
|
||||
windows.FileBasicInfo,
|
||||
(*byte)(unsafe.Pointer(bi)),
|
||||
uint32(unsafe.Sizeof(*bi)),
|
||||
); err != nil {
|
||||
return &os.PathError{Op: "SetFileInformationByHandle", Path: f.Name(), Err: err}
|
||||
}
|
||||
runtime.KeepAlive(f)
|
||||
@ -48,7 +59,10 @@ type FileStandardInfo struct {
|
||||
// GetFileStandardInfo retrieves standard information for the file.
|
||||
func GetFileStandardInfo(f *os.File) (*FileStandardInfo, error) {
|
||||
si := &FileStandardInfo{}
|
||||
if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileStandardInfo, (*byte)(unsafe.Pointer(si)), uint32(unsafe.Sizeof(*si))); err != nil {
|
||||
if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()),
|
||||
windows.FileStandardInfo,
|
||||
(*byte)(unsafe.Pointer(si)),
|
||||
uint32(unsafe.Sizeof(*si))); err != nil {
|
||||
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
|
||||
}
|
||||
runtime.KeepAlive(f)
|
||||
@ -65,7 +79,12 @@ type FileIDInfo struct {
|
||||
// GetFileID retrieves the unique (volume, file ID) pair for a file.
|
||||
func GetFileID(f *os.File) (*FileIDInfo, error) {
|
||||
fileID := &FileIDInfo{}
|
||||
if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()), windows.FileIdInfo, (*byte)(unsafe.Pointer(fileID)), uint32(unsafe.Sizeof(*fileID))); err != nil {
|
||||
if err := windows.GetFileInformationByHandleEx(
|
||||
windows.Handle(f.Fd()),
|
||||
windows.FileIdInfo,
|
||||
(*byte)(unsafe.Pointer(fileID)),
|
||||
uint32(unsafe.Sizeof(*fileID)),
|
||||
); err != nil {
|
||||
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
|
||||
}
|
||||
runtime.KeepAlive(f)
|
||||
|
345
vendor/github.com/Microsoft/go-winio/hvsock.go
generated
vendored
@ -4,6 +4,8 @@
|
||||
package winio
|
||||
|
||||
import (
|
||||
"context"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net"
|
||||
@ -12,16 +14,87 @@ import (
|
||||
"time"
|
||||
"unsafe"
|
||||
|
||||
"golang.org/x/sys/windows"
|
||||
|
||||
"github.com/Microsoft/go-winio/internal/socket"
|
||||
"github.com/Microsoft/go-winio/pkg/guid"
|
||||
)
|
||||
|
||||
//sys bind(s syscall.Handle, name unsafe.Pointer, namelen int32) (err error) [failretval==socketError] = ws2_32.bind
|
||||
const afHVSock = 34 // AF_HYPERV
|
||||
|
||||
const (
|
||||
afHvSock = 34 // AF_HYPERV
|
||||
// Well known Service and VM IDs
|
||||
//https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/make-integration-service#vmid-wildcards
|
||||
|
||||
socketError = ^uintptr(0)
|
||||
)
|
||||
// HvsockGUIDWildcard is the wildcard VmId for accepting connections from all partitions.
|
||||
func HvsockGUIDWildcard() guid.GUID { // 00000000-0000-0000-0000-000000000000
|
||||
return guid.GUID{}
|
||||
}
|
||||
|
||||
// HvsockGUIDBroadcast is the wildcard VmId for broadcasting sends to all partitions.
|
||||
func HvsockGUIDBroadcast() guid.GUID { //ffffffff-ffff-ffff-ffff-ffffffffffff
|
||||
return guid.GUID{
|
||||
Data1: 0xffffffff,
|
||||
Data2: 0xffff,
|
||||
Data3: 0xffff,
|
||||
Data4: [8]uint8{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
|
||||
}
|
||||
}
|
||||
|
||||
// HvsockGUIDLoopback is the Loopback VmId for accepting connections to the same partition as the connector.
|
||||
func HvsockGUIDLoopback() guid.GUID { // e0e16197-dd56-4a10-9195-5ee7a155a838
|
||||
return guid.GUID{
|
||||
Data1: 0xe0e16197,
|
||||
Data2: 0xdd56,
|
||||
Data3: 0x4a10,
|
||||
Data4: [8]uint8{0x91, 0x95, 0x5e, 0xe7, 0xa1, 0x55, 0xa8, 0x38},
|
||||
}
|
||||
}
|
||||
|
||||
// HvsockGUIDSiloHost is the address of a silo's host partition:
|
||||
// - The silo host of a hosted silo is the utility VM.
|
||||
// - The silo host of a silo on a physical host is the physical host.
|
||||
func HvsockGUIDSiloHost() guid.GUID { // 36bd0c5c-7276-4223-88ba-7d03b654c568
|
||||
return guid.GUID{
|
||||
Data1: 0x36bd0c5c,
|
||||
Data2: 0x7276,
|
||||
Data3: 0x4223,
|
||||
Data4: [8]byte{0x88, 0xba, 0x7d, 0x03, 0xb6, 0x54, 0xc5, 0x68},
|
||||
}
|
||||
}
|
||||
|
||||
// HvsockGUIDChildren is the wildcard VmId for accepting connections from the connector's child partitions.
|
||||
func HvsockGUIDChildren() guid.GUID { // 90db8b89-0d35-4f79-8ce9-49ea0ac8b7cd
|
||||
return guid.GUID{
|
||||
Data1: 0x90db8b89,
|
||||
Data2: 0xd35,
|
||||
Data3: 0x4f79,
|
||||
Data4: [8]uint8{0x8c, 0xe9, 0x49, 0xea, 0xa, 0xc8, 0xb7, 0xcd},
|
||||
}
|
||||
}
|
||||
|
||||
// HvsockGUIDParent is the wildcard VmId for accepting connections from the connector's parent partition.
|
||||
// Listening on this VmId accepts connection from:
|
||||
// - Inside silos: silo host partition.
|
||||
// - Inside hosted silo: host of the VM.
|
||||
// - Inside VM: VM host.
|
||||
// - Physical host: Not supported.
|
||||
func HvsockGUIDParent() guid.GUID { // a42e7cda-d03f-480c-9cc2-a4de20abb878
|
||||
return guid.GUID{
|
||||
Data1: 0xa42e7cda,
|
||||
Data2: 0xd03f,
|
||||
Data3: 0x480c,
|
||||
Data4: [8]uint8{0x9c, 0xc2, 0xa4, 0xde, 0x20, 0xab, 0xb8, 0x78},
|
||||
}
|
||||
}
|
||||
|
||||
// hvsockVsockServiceTemplate is the Service GUID used for the VSOCK protocol.
|
||||
func hvsockVsockServiceTemplate() guid.GUID { // 00000000-facb-11e6-bd58-64006a7986d3
|
||||
return guid.GUID{
|
||||
Data2: 0xfacb,
|
||||
Data3: 0x11e6,
|
||||
Data4: [8]uint8{0xbd, 0x58, 0x64, 0x00, 0x6a, 0x79, 0x86, 0xd3},
|
||||
}
|
||||
}
|
||||
|
||||
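To make the wildcard VmIds above concrete, here is a hedged sketch of a listener that accepts connections from any partition. The service port is a placeholder and `handleConn` is a hypothetical handler; only `HvsockAddr`, `HvsockGUIDWildcard`, `VsockServiceID`, and `ListenHvsock` come from this file.

```go
package main

import (
	"log"
	"net"

	winio "github.com/Microsoft/go-winio"
)

// handleConn is a hypothetical per-connection handler.
func handleConn(c net.Conn) {
	defer c.Close()
}

func main() {
	addr := &winio.HvsockAddr{
		VMID:      winio.HvsockGUIDWildcard(), // accept from all partitions
		ServiceID: winio.VsockServiceID(5000), // placeholder port
	}
	l, err := winio.ListenHvsock(addr)
	if err != nil {
		log.Fatalf("listen hvsock: %v", err)
	}
	defer l.Close()

	for {
		conn, err := l.Accept()
		if err != nil {
			log.Printf("accept: %v", err)
			return
		}
		go handleConn(conn)
	}
}
```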
// An HvsockAddr is an address for a AF_HYPERV socket.
|
||||
type HvsockAddr struct {
|
||||
@ -36,8 +109,10 @@ type rawHvsockAddr struct {
|
||||
ServiceID guid.GUID
|
||||
}
|
||||
|
||||
var _ socket.RawSockaddr = &rawHvsockAddr{}
|
||||
|
||||
// Network returns the address's network name, "hvsock".
|
||||
func (addr *HvsockAddr) Network() string {
|
||||
func (*HvsockAddr) Network() string {
|
||||
return "hvsock"
|
||||
}
|
||||
|
||||
@ -47,14 +122,14 @@ func (addr *HvsockAddr) String() string {
|
||||
|
||||
// VsockServiceID returns an hvsock service ID corresponding to the specified AF_VSOCK port.
|
||||
func VsockServiceID(port uint32) guid.GUID {
|
||||
g, _ := guid.FromString("00000000-facb-11e6-bd58-64006a7986d3")
|
||||
g := hvsockVsockServiceTemplate() // make a copy
|
||||
g.Data1 = port
|
||||
return g
|
||||
}
|
||||
|
||||
func (addr *HvsockAddr) raw() rawHvsockAddr {
|
||||
return rawHvsockAddr{
|
||||
Family: afHvSock,
|
||||
Family: afHVSock,
|
||||
VMID: addr.VMID,
|
||||
ServiceID: addr.ServiceID,
|
||||
}
|
||||
@ -65,20 +140,48 @@ func (addr *HvsockAddr) fromRaw(raw *rawHvsockAddr) {
|
||||
addr.ServiceID = raw.ServiceID
|
||||
}
|
||||
|
||||
// Sockaddr returns a pointer to and the size of this struct.
|
||||
//
|
||||
// Implements the [socket.RawSockaddr] interface, and allows use in
|
||||
// [socket.Bind] and [socket.ConnectEx].
|
||||
func (r *rawHvsockAddr) Sockaddr() (unsafe.Pointer, int32, error) {
|
||||
return unsafe.Pointer(r), int32(unsafe.Sizeof(rawHvsockAddr{})), nil
|
||||
}
|
||||
|
||||
// Sockaddr interface allows use with `sockets.Bind()` and `.ConnectEx()`.
|
||||
func (r *rawHvsockAddr) FromBytes(b []byte) error {
|
||||
n := int(unsafe.Sizeof(rawHvsockAddr{}))
|
||||
|
||||
if len(b) < n {
|
||||
return fmt.Errorf("got %d, want %d: %w", len(b), n, socket.ErrBufferSize)
|
||||
}
|
||||
|
||||
copy(unsafe.Slice((*byte)(unsafe.Pointer(r)), n), b[:n])
|
||||
if r.Family != afHVSock {
|
||||
return fmt.Errorf("got %d, want %d: %w", r.Family, afHVSock, socket.ErrAddrFamily)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// HvsockListener is a socket listener for the AF_HYPERV address family.
|
||||
type HvsockListener struct {
|
||||
sock *win32File
|
||||
addr HvsockAddr
|
||||
}
|
||||
|
||||
var _ net.Listener = &HvsockListener{}
|
||||
|
||||
// HvsockConn is a connected socket of the AF_HYPERV address family.
|
||||
type HvsockConn struct {
|
||||
sock *win32File
|
||||
local, remote HvsockAddr
|
||||
}
|
||||
|
||||
func newHvSocket() (*win32File, error) {
|
||||
fd, err := syscall.Socket(afHvSock, syscall.SOCK_STREAM, 1)
|
||||
var _ net.Conn = &HvsockConn{}
|
||||
|
||||
func newHVSocket() (*win32File, error) {
|
||||
fd, err := syscall.Socket(afHVSock, syscall.SOCK_STREAM, 1)
|
||||
if err != nil {
|
||||
return nil, os.NewSyscallError("socket", err)
|
||||
}
|
||||
@ -94,12 +197,12 @@ func newHvSocket() (*win32File, error) {
|
||||
// ListenHvsock listens for connections on the specified hvsock address.
|
||||
func ListenHvsock(addr *HvsockAddr) (_ *HvsockListener, err error) {
|
||||
l := &HvsockListener{addr: *addr}
|
||||
sock, err := newHvSocket()
|
||||
sock, err := newHVSocket()
|
||||
if err != nil {
|
||||
return nil, l.opErr("listen", err)
|
||||
}
|
||||
sa := addr.raw()
|
||||
err = bind(sock.handle, unsafe.Pointer(&sa), int32(unsafe.Sizeof(sa)))
|
||||
err = socket.Bind(windows.Handle(sock.handle), &sa)
|
||||
if err != nil {
|
||||
return nil, l.opErr("listen", os.NewSyscallError("socket", err))
|
||||
}
|
||||
@ -121,7 +224,7 @@ func (l *HvsockListener) Addr() net.Addr {
|
||||
|
||||
// Accept waits for the next connection and returns it.
|
||||
func (l *HvsockListener) Accept() (_ net.Conn, err error) {
|
||||
sock, err := newHvSocket()
|
||||
sock, err := newHVSocket()
|
||||
if err != nil {
|
||||
return nil, l.opErr("accept", err)
|
||||
}
|
||||
@ -130,27 +233,42 @@ func (l *HvsockListener) Accept() (_ net.Conn, err error) {
|
||||
sock.Close()
|
||||
}
|
||||
}()
|
||||
c, err := l.sock.prepareIo()
|
||||
c, err := l.sock.prepareIO()
|
||||
if err != nil {
|
||||
return nil, l.opErr("accept", err)
|
||||
}
|
||||
defer l.sock.wg.Done()
|
||||
|
||||
// AcceptEx, per documentation, requires an extra 16 bytes per address.
|
||||
//
|
||||
// https://docs.microsoft.com/en-us/windows/win32/api/mswsock/nf-mswsock-acceptex
|
||||
const addrlen = uint32(16 + unsafe.Sizeof(rawHvsockAddr{}))
|
||||
var addrbuf [addrlen * 2]byte
|
||||
|
||||
var bytes uint32
|
||||
err = syscall.AcceptEx(l.sock.handle, sock.handle, &addrbuf[0], 0, addrlen, addrlen, &bytes, &c.o)
|
||||
_, err = l.sock.asyncIo(c, nil, bytes, err)
|
||||
if err != nil {
|
||||
err = syscall.AcceptEx(l.sock.handle, sock.handle, &addrbuf[0], 0 /*rxdatalen*/, addrlen, addrlen, &bytes, &c.o)
|
||||
if _, err = l.sock.asyncIO(c, nil, bytes, err); err != nil {
|
||||
return nil, l.opErr("accept", os.NewSyscallError("acceptex", err))
|
||||
}
|
||||
|
||||
conn := &HvsockConn{
|
||||
sock: sock,
|
||||
}
|
||||
// The local address returned in the AcceptEx buffer is the same as the Listener socket's
|
||||
// address. However, the service GUID reported by GetSockName is different from the Listener's
|
||||
// socket, and is sometimes the same as the local address of the socket that dialed the
|
||||
// address, with the service GUID.Data1 incremented, but at other times is different.
|
||||
// todo: does the local address matter? is the listener's address or the actual address appropriate?
|
||||
conn.local.fromRaw((*rawHvsockAddr)(unsafe.Pointer(&addrbuf[0])))
|
||||
conn.remote.fromRaw((*rawHvsockAddr)(unsafe.Pointer(&addrbuf[addrlen])))
|
||||
|
||||
// initialize the accepted socket and update its properties with those of the listening socket
|
||||
if err = windows.Setsockopt(windows.Handle(sock.handle),
|
||||
windows.SOL_SOCKET, windows.SO_UPDATE_ACCEPT_CONTEXT,
|
||||
(*byte)(unsafe.Pointer(&l.sock.handle)), int32(unsafe.Sizeof(l.sock.handle))); err != nil {
|
||||
return nil, conn.opErr("accept", os.NewSyscallError("setsockopt", err))
|
||||
}
|
||||
|
||||
sock = nil
|
||||
return conn, nil
|
||||
}
|
||||
@ -160,43 +278,171 @@ func (l *HvsockListener) Close() error {
|
||||
return l.sock.Close()
|
||||
}
|
||||
|
||||
/* Need to finish ConnectEx handling
|
||||
func DialHvsock(ctx context.Context, addr *HvsockAddr) (*HvsockConn, error) {
|
||||
sock, err := newHvSocket()
|
||||
// HvsockDialer configures and dials a Hyper-V Socket (ie, [HvsockConn]).
|
||||
type HvsockDialer struct {
|
||||
// Deadline is the time the Dial operation must connect before erroring.
|
||||
Deadline time.Time
|
||||
|
||||
// Retries is the number of additional connects to try if the connection times out, is refused,
|
||||
// or the host is unreachable
|
||||
Retries uint
|
||||
|
||||
// RetryWait is the time to wait after a connection error to retry
|
||||
RetryWait time.Duration
|
||||
|
||||
rt *time.Timer // redial wait timer
|
||||
}
|
||||
|
||||
// Dial the Hyper-V socket at addr.
|
||||
//
|
||||
// See [HvsockDialer.Dial] for more information.
|
||||
func Dial(ctx context.Context, addr *HvsockAddr) (conn *HvsockConn, err error) {
|
||||
return (&HvsockDialer{}).Dial(ctx, addr)
|
||||
}
|
||||
|
||||
// Dial attempts to connect to the Hyper-V socket at addr, and returns a connection if successful.
|
||||
// Will attempt (HvsockDialer).Retries if dialing fails, waiting (HvsockDialer).RetryWait between
|
||||
// retries.
|
||||
//
|
||||
// Dialing can be cancelled either by providing (HvsockDialer).Deadline, or cancelling ctx.
|
||||
func (d *HvsockDialer) Dial(ctx context.Context, addr *HvsockAddr) (conn *HvsockConn, err error) {
|
||||
op := "dial"
|
||||
// create the conn early to use opErr()
|
||||
conn = &HvsockConn{
|
||||
remote: *addr,
|
||||
}
|
||||
|
||||
if !d.Deadline.IsZero() {
|
||||
var cancel context.CancelFunc
|
||||
ctx, cancel = context.WithDeadline(ctx, d.Deadline)
|
||||
defer cancel()
|
||||
}
|
||||
|
||||
// preemptive timeout/cancellation check
|
||||
if err = ctx.Err(); err != nil {
|
||||
return nil, conn.opErr(op, err)
|
||||
}
|
||||
|
||||
sock, err := newHVSocket()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, conn.opErr(op, err)
|
||||
}
|
||||
defer func() {
|
||||
if sock != nil {
|
||||
sock.Close()
|
||||
}
|
||||
}()
|
||||
c, err := sock.prepareIo()
|
||||
|
||||
sa := addr.raw()
|
||||
err = socket.Bind(windows.Handle(sock.handle), &sa)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, conn.opErr(op, os.NewSyscallError("bind", err))
|
||||
}
|
||||
|
||||
c, err := sock.prepareIO()
|
||||
if err != nil {
|
||||
return nil, conn.opErr(op, err)
|
||||
}
|
||||
defer sock.wg.Done()
|
||||
var bytes uint32
|
||||
err = windows.ConnectEx(windows.Handle(sock.handle), sa, nil, 0, &bytes, &c.o)
|
||||
_, err = sock.asyncIo(ctx, c, nil, bytes, err)
|
||||
for i := uint(0); i <= d.Retries; i++ {
|
||||
err = socket.ConnectEx(
|
||||
windows.Handle(sock.handle),
|
||||
&sa,
|
||||
nil, // sendBuf
|
||||
0, // sendDataLen
|
||||
&bytes,
|
||||
(*windows.Overlapped)(unsafe.Pointer(&c.o)))
|
||||
_, err = sock.asyncIO(c, nil, bytes, err)
|
||||
if i < d.Retries && canRedial(err) {
|
||||
if err = d.redialWait(ctx); err == nil {
|
||||
continue
|
||||
}
|
||||
}
|
||||
break
|
||||
}
|
||||
if err != nil {
|
||||
return nil, err
|
||||
return nil, conn.opErr(op, os.NewSyscallError("connectex", err))
|
||||
}
|
||||
conn := &HvsockConn{
|
||||
sock: sock,
|
||||
remote: *addr,
|
||||
|
||||
// update the connection properties, so shutdown can be used
|
||||
if err = windows.Setsockopt(
|
||||
windows.Handle(sock.handle),
|
||||
windows.SOL_SOCKET,
|
||||
windows.SO_UPDATE_CONNECT_CONTEXT,
|
||||
nil, // optvalue
|
||||
0, // optlen
|
||||
); err != nil {
|
||||
return nil, conn.opErr(op, os.NewSyscallError("setsockopt", err))
|
||||
}
|
||||
|
||||
// get the local name
|
||||
var sal rawHvsockAddr
|
||||
err = socket.GetSockName(windows.Handle(sock.handle), &sal)
|
||||
if err != nil {
|
||||
return nil, conn.opErr(op, os.NewSyscallError("getsockname", err))
|
||||
}
|
||||
conn.local.fromRaw(&sal)
|
||||
|
||||
// one last check for timeout, since asyncIO doesn't check the context
|
||||
if err = ctx.Err(); err != nil {
|
||||
return nil, conn.opErr(op, err)
|
||||
}
|
||||
|
||||
conn.sock = sock
|
||||
sock = nil
|
||||
|
||||
return conn, nil
|
||||
}
|
||||
*/
|
||||
|
||||
// redialWait waits before attempting to redial, resetting the timer as appropriate.
|
||||
func (d *HvsockDialer) redialWait(ctx context.Context) (err error) {
|
||||
if d.RetryWait == 0 {
|
||||
return nil
|
||||
}
|
||||
|
||||
if d.rt == nil {
|
||||
d.rt = time.NewTimer(d.RetryWait)
|
||||
} else {
|
||||
// should already be stopped and drained
|
||||
d.rt.Reset(d.RetryWait)
|
||||
}
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
case <-d.rt.C:
|
||||
return nil
|
||||
}
|
||||
|
||||
// stop and drain the timer
|
||||
if !d.rt.Stop() {
|
||||
<-d.rt.C
|
||||
}
|
||||
return ctx.Err()
|
||||
}
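The "should already be stopped and drained" comment above relies on the standard Go idiom for reusing a time.Timer. A small standalone sketch of that pattern, independent of the vendored code and assuming the usual context and time imports:

// waitOrCancel sleeps for d or returns early when ctx is done, leaving t
// stopped and drained so a later Reset starts from a clean state.
func waitOrCancel(ctx context.Context, t *time.Timer, d time.Duration) error {
	t.Reset(d)
	select {
	case <-t.C:
		return nil
	case <-ctx.Done():
		if !t.Stop() {
			<-t.C // drain the value that fired while we were cancelling
		}
		return ctx.Err()
	}
}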
|
||||
|
||||
// assumes error is a plain, unwrapped syscall.Errno provided by direct syscall.
|
||||
func canRedial(err error) bool {
|
||||
//nolint:errorlint // guaranteed to be an Errno
|
||||
switch err {
|
||||
case windows.WSAECONNREFUSED, windows.WSAENETUNREACH, windows.WSAETIMEDOUT,
|
||||
windows.ERROR_CONNECTION_REFUSED, windows.ERROR_CONNECTION_UNAVAIL:
|
||||
return true
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
func (conn *HvsockConn) opErr(op string, err error) error {
|
||||
// translate from "file closed" to "socket closed"
|
||||
if errors.Is(err, ErrFileClosed) {
|
||||
err = socket.ErrSocketClosed
|
||||
}
|
||||
return &net.OpError{Op: op, Net: "hvsock", Source: &conn.local, Addr: &conn.remote, Err: err}
|
||||
}
|
||||
|
||||
func (conn *HvsockConn) Read(b []byte) (int, error) {
|
||||
c, err := conn.sock.prepareIo()
|
||||
c, err := conn.sock.prepareIO()
|
||||
if err != nil {
|
||||
return 0, conn.opErr("read", err)
|
||||
}
|
||||
@ -204,10 +450,11 @@ func (conn *HvsockConn) Read(b []byte) (int, error) {
|
||||
buf := syscall.WSABuf{Buf: &b[0], Len: uint32(len(b))}
|
||||
var flags, bytes uint32
|
||||
err = syscall.WSARecv(conn.sock.handle, &buf, 1, &bytes, &flags, &c.o, nil)
|
||||
n, err := conn.sock.asyncIo(c, &conn.sock.readDeadline, bytes, err)
|
||||
n, err := conn.sock.asyncIO(c, &conn.sock.readDeadline, bytes, err)
|
||||
if err != nil {
|
||||
if _, ok := err.(syscall.Errno); ok {
|
||||
err = os.NewSyscallError("wsarecv", err)
|
||||
var eno windows.Errno
|
||||
if errors.As(err, &eno) {
|
||||
err = os.NewSyscallError("wsarecv", eno)
|
||||
}
|
||||
return 0, conn.opErr("read", err)
|
||||
} else if n == 0 {
|
||||
@ -230,7 +477,7 @@ func (conn *HvsockConn) Write(b []byte) (int, error) {
|
||||
}
|
||||
|
||||
func (conn *HvsockConn) write(b []byte) (int, error) {
|
||||
c, err := conn.sock.prepareIo()
|
||||
c, err := conn.sock.prepareIO()
|
||||
if err != nil {
|
||||
return 0, conn.opErr("write", err)
|
||||
}
|
||||
@ -238,10 +485,11 @@ func (conn *HvsockConn) write(b []byte) (int, error) {
|
||||
buf := syscall.WSABuf{Buf: &b[0], Len: uint32(len(b))}
|
||||
var bytes uint32
|
||||
err = syscall.WSASend(conn.sock.handle, &buf, 1, &bytes, 0, &c.o, nil)
|
||||
n, err := conn.sock.asyncIo(c, &conn.sock.writeDeadline, bytes, err)
|
||||
n, err := conn.sock.asyncIO(c, &conn.sock.writeDeadline, bytes, err)
|
||||
if err != nil {
|
||||
if _, ok := err.(syscall.Errno); ok {
|
||||
err = os.NewSyscallError("wsasend", err)
|
||||
var eno windows.Errno
|
||||
if errors.As(err, &eno) {
|
||||
err = os.NewSyscallError("wsasend", eno)
|
||||
}
|
||||
return 0, conn.opErr("write", err)
|
||||
}
|
||||
@ -257,13 +505,19 @@ func (conn *HvsockConn) IsClosed() bool {
|
||||
return conn.sock.IsClosed()
|
||||
}
|
||||
|
||||
// shutdown disables sending or receiving on a socket.
|
||||
func (conn *HvsockConn) shutdown(how int) error {
|
||||
if conn.IsClosed() {
|
||||
return ErrFileClosed
|
||||
return socket.ErrSocketClosed
|
||||
}
|
||||
|
||||
err := syscall.Shutdown(conn.sock.handle, how)
|
||||
if err != nil {
|
||||
// If the connection was closed, shutdowns fail with "not connected"
|
||||
if errors.Is(err, windows.WSAENOTCONN) ||
|
||||
errors.Is(err, windows.WSAESHUTDOWN) {
|
||||
err = socket.ErrSocketClosed
|
||||
}
|
||||
return os.NewSyscallError("shutdown", err)
|
||||
}
|
||||
return nil
|
||||
@ -273,7 +527,7 @@ func (conn *HvsockConn) shutdown(how int) error {
|
||||
func (conn *HvsockConn) CloseRead() error {
|
||||
err := conn.shutdown(syscall.SHUT_RD)
|
||||
if err != nil {
|
||||
return conn.opErr("close", err)
|
||||
return conn.opErr("closeread", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@ -283,7 +537,7 @@ func (conn *HvsockConn) CloseRead() error {
|
||||
func (conn *HvsockConn) CloseWrite() error {
|
||||
err := conn.shutdown(syscall.SHUT_WR)
|
||||
if err != nil {
|
||||
return conn.opErr("close", err)
|
||||
return conn.opErr("closewrite", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@ -300,8 +554,13 @@ func (conn *HvsockConn) RemoteAddr() net.Addr {
|
||||
|
||||
// SetDeadline implements the net.Conn SetDeadline method.
|
||||
func (conn *HvsockConn) SetDeadline(t time.Time) error {
|
||||
conn.SetReadDeadline(t)
|
||||
conn.SetWriteDeadline(t)
|
||||
// todo: implement `SetDeadline` for `win32File`
|
||||
if err := conn.SetReadDeadline(t); err != nil {
|
||||
return fmt.Errorf("set read deadline: %w", err)
|
||||
}
|
||||
if err := conn.SetWriteDeadline(t); err != nil {
|
||||
return fmt.Errorf("set write deadline: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
|
20 vendor/github.com/Microsoft/go-winio/internal/socket/rawaddr.go generated vendored Normal file
@ -0,0 +1,20 @@
|
||||
package socket
|
||||
|
||||
import (
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
// RawSockaddr allows structs to be used with [Bind] and [ConnectEx]. The
|
||||
// struct must meet the Win32 sockaddr requirements specified here:
|
||||
// https://docs.microsoft.com/en-us/windows/win32/winsock/sockaddr-2
|
||||
//
|
||||
// Specifically, the struct size must be at least as large as an int16 (unsigned short)
|
||||
// for the address family.
|
||||
type RawSockaddr interface {
|
||||
// Sockaddr returns a pointer to the RawSockaddr and its struct size, allowing
|
||||
// for the RawSockaddr's data to be overwritten by syscalls (if necessary).
|
||||
//
|
||||
// It is the caller's responsibility to validate that the values are valid; invalid
|
||||
// pointers or size can cause a panic.
|
||||
Sockaddr() (unsafe.Pointer, int32, error)
|
||||
}
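A hypothetical implementation of this interface, written as if it lived in this internal socket package, to illustrate the contract; the field layout is illustrative, and only the family-first layout is required by Win32:

// exampleSockaddr is a made-up sockaddr; real implementations must mirror the
// exact wire layout expected by the protocol (address family first).
type exampleSockaddr struct {
	Family uint16   // address family, as required by the Win32 sockaddr contract
	Data   [30]byte // protocol-specific addressing bytes
}

func (sa *exampleSockaddr) Sockaddr() (unsafe.Pointer, int32, error) {
	if sa == nil {
		return nil, 0, ErrInvalidPointer
	}
	return unsafe.Pointer(sa), int32(unsafe.Sizeof(*sa)), nil
}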
|
179 vendor/github.com/Microsoft/go-winio/internal/socket/socket.go generated vendored Normal file
@ -0,0 +1,179 @@
|
||||
//go:build windows
|
||||
|
||||
package socket
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"net"
|
||||
"sync"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
|
||||
"github.com/Microsoft/go-winio/pkg/guid"
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go socket.go
|
||||
|
||||
//sys getsockname(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) [failretval==socketError] = ws2_32.getsockname
|
||||
//sys getpeername(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) [failretval==socketError] = ws2_32.getpeername
|
||||
//sys bind(s windows.Handle, name unsafe.Pointer, namelen int32) (err error) [failretval==socketError] = ws2_32.bind
|
||||
|
||||
const socketError = uintptr(^uint32(0))
|
||||
|
||||
var (
|
||||
// todo(helsaawy): create custom error types to store the desired vs actual size and addr family?
|
||||
|
||||
ErrBufferSize = errors.New("buffer size")
|
||||
ErrAddrFamily = errors.New("address family")
|
||||
ErrInvalidPointer = errors.New("invalid pointer")
|
||||
ErrSocketClosed = fmt.Errorf("socket closed: %w", net.ErrClosed)
|
||||
)
|
||||
|
||||
// todo(helsaawy): replace these with generics, ie: GetSockName[S RawSockaddr](s windows.Handle) (S, error)
|
||||
|
||||
// GetSockName writes the local address of socket s to the [RawSockaddr] rsa.
|
||||
// If rsa is not large enough, [windows.WSAEFAULT] is returned.
|
||||
func GetSockName(s windows.Handle, rsa RawSockaddr) error {
|
||||
ptr, l, err := rsa.Sockaddr()
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not retrieve socket pointer and size: %w", err)
|
||||
}
|
||||
|
||||
// although getsockname returns WSAEFAULT if the buffer is too small, it does not set
|
||||
// &l to the correct size, so--apart from doubling the buffer repeatedly--there is no remedy
|
||||
return getsockname(s, ptr, &l)
|
||||
}
|
||||
|
||||
// GetPeerName returns the remote address the socket is connected to.
|
||||
//
|
||||
// See [GetSockName] for more information.
|
||||
func GetPeerName(s windows.Handle, rsa RawSockaddr) error {
|
||||
ptr, l, err := rsa.Sockaddr()
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not retrieve socket pointer and size: %w", err)
|
||||
}
|
||||
|
||||
return getpeername(s, ptr, &l)
|
||||
}
|
||||
|
||||
func Bind(s windows.Handle, rsa RawSockaddr) (err error) {
|
||||
ptr, l, err := rsa.Sockaddr()
|
||||
if err != nil {
|
||||
return fmt.Errorf("could not retrieve socket pointer and size: %w", err)
|
||||
}
|
||||
|
||||
return bind(s, ptr, l)
|
||||
}
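A sketch of how these helpers are meant to be called, reusing the hypothetical exampleSockaddr from the rawaddr.go note above; again written as if inside this package:

// localAddrOf asks Winsock for the local address bound to handle h.
// If exampleSockaddr is too small for the real address, getsockname fails
// with windows.WSAEFAULT and the error is returned unchanged.
func localAddrOf(h windows.Handle) (*exampleSockaddr, error) {
	var sa exampleSockaddr
	if err := GetSockName(h, &sa); err != nil {
		return nil, err
	}
	return &sa, nil
}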
|
||||
|
||||
// "golang.org/x/sys/windows".ConnectEx and .Bind only accept internal implementations of the
|
||||
// sockaddr interface, so they cannot be used with HvsockAddr.
|
||||
// Replicate functionality here from
|
||||
// https://cs.opensource.google/go/x/sys/+/master:windows/syscall_windows.go
|
||||
|
||||
// The function pointers to `AcceptEx`, `ConnectEx` and `GetAcceptExSockaddrs` must be loaded at
|
||||
// runtime via a WSAIoctl call:
|
||||
// https://docs.microsoft.com/en-us/windows/win32/api/Mswsock/nc-mswsock-lpfn_connectex#remarks
|
||||
|
||||
type runtimeFunc struct {
|
||||
id guid.GUID
|
||||
once sync.Once
|
||||
addr uintptr
|
||||
err error
|
||||
}
|
||||
|
||||
func (f *runtimeFunc) Load() error {
|
||||
f.once.Do(func() {
|
||||
var s windows.Handle
|
||||
s, f.err = windows.Socket(windows.AF_INET, windows.SOCK_STREAM, windows.IPPROTO_TCP)
|
||||
if f.err != nil {
|
||||
return
|
||||
}
|
||||
defer windows.CloseHandle(s) //nolint:errcheck
|
||||
|
||||
var n uint32
|
||||
f.err = windows.WSAIoctl(s,
|
||||
windows.SIO_GET_EXTENSION_FUNCTION_POINTER,
|
||||
(*byte)(unsafe.Pointer(&f.id)),
|
||||
uint32(unsafe.Sizeof(f.id)),
|
||||
(*byte)(unsafe.Pointer(&f.addr)),
|
||||
uint32(unsafe.Sizeof(f.addr)),
|
||||
&n,
|
||||
nil, //overlapped
|
||||
0, //completionRoutine
|
||||
)
|
||||
})
|
||||
return f.err
|
||||
}
|
||||
|
||||
var (
|
||||
// todo: add `AcceptEx` and `GetAcceptExSockaddrs`
|
||||
WSAID_CONNECTEX = guid.GUID{ //revive:disable-line:var-naming ALL_CAPS
|
||||
Data1: 0x25a207b9,
|
||||
Data2: 0xddf3,
|
||||
Data3: 0x4660,
|
||||
Data4: [8]byte{0x8e, 0xe9, 0x76, 0xe5, 0x8c, 0x74, 0x06, 0x3e},
|
||||
}
|
||||
|
||||
connectExFunc = runtimeFunc{id: WSAID_CONNECTEX}
|
||||
)
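The todo above could be addressed with the same lazy-loading shape. A sketch with a deliberately fake GUID (the real WSAID_ACCEPTEX value would have to be filled in), shown only to illustrate the runtimeFunc pattern:

// acceptExPlaceholder shows how another Winsock extension function would be
// registered. The GUID literal is a placeholder, not the real WSAID_ACCEPTEX.
var acceptExPlaceholder = runtimeFunc{id: guid.GUID{Data1: 0xdeadbeef}}

func loadAcceptExPlaceholder() (uintptr, error) {
	if err := acceptExPlaceholder.Load(); err != nil {
		return 0, fmt.Errorf("failed to load placeholder extension function: %w", err)
	}
	return acceptExPlaceholder.addr, nil
}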
|
||||
|
||||
func ConnectEx(
|
||||
fd windows.Handle,
|
||||
rsa RawSockaddr,
|
||||
sendBuf *byte,
|
||||
sendDataLen uint32,
|
||||
bytesSent *uint32,
|
||||
overlapped *windows.Overlapped,
|
||||
) error {
|
||||
if err := connectExFunc.Load(); err != nil {
|
||||
return fmt.Errorf("failed to load ConnectEx function pointer: %w", err)
|
||||
}
|
||||
ptr, n, err := rsa.Sockaddr()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
return connectEx(fd, ptr, n, sendBuf, sendDataLen, bytesSent, overlapped)
|
||||
}
|
||||
|
||||
// BOOL LpfnConnectex(
|
||||
// [in] SOCKET s,
|
||||
// [in] const sockaddr *name,
|
||||
// [in] int namelen,
|
||||
// [in, optional] PVOID lpSendBuffer,
|
||||
// [in] DWORD dwSendDataLength,
|
||||
// [out] LPDWORD lpdwBytesSent,
|
||||
// [in] LPOVERLAPPED lpOverlapped
|
||||
// )
|
||||
|
||||
func connectEx(
|
||||
s windows.Handle,
|
||||
name unsafe.Pointer,
|
||||
namelen int32,
|
||||
sendBuf *byte,
|
||||
sendDataLen uint32,
|
||||
bytesSent *uint32,
|
||||
overlapped *windows.Overlapped,
|
||||
) (err error) {
|
||||
// todo: after upgrading to 1.18, switch from syscall.Syscall9 to syscall.SyscallN
|
||||
r1, _, e1 := syscall.Syscall9(connectExFunc.addr,
|
||||
7,
|
||||
uintptr(s),
|
||||
uintptr(name),
|
||||
uintptr(namelen),
|
||||
uintptr(unsafe.Pointer(sendBuf)),
|
||||
uintptr(sendDataLen),
|
||||
uintptr(unsafe.Pointer(bytesSent)),
|
||||
uintptr(unsafe.Pointer(overlapped)),
|
||||
0,
|
||||
0)
|
||||
if r1 == 0 {
|
||||
if e1 != 0 {
|
||||
err = error(e1)
|
||||
} else {
|
||||
err = syscall.EINVAL
|
||||
}
|
||||
}
|
||||
return err
|
||||
}
|
72 vendor/github.com/Microsoft/go-winio/internal/socket/zsyscall_windows.go generated vendored Normal file
@ -0,0 +1,72 @@
|
||||
//go:build windows
|
||||
|
||||
// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT.
|
||||
|
||||
package socket
|
||||
|
||||
import (
|
||||
"syscall"
|
||||
"unsafe"
|
||||
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
var _ unsafe.Pointer
|
||||
|
||||
// Do the interface allocations only once for common
|
||||
// Errno values.
|
||||
const (
|
||||
errnoERROR_IO_PENDING = 997
|
||||
)
|
||||
|
||||
var (
|
||||
errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
|
||||
errERROR_EINVAL error = syscall.EINVAL
|
||||
)
|
||||
|
||||
// errnoErr returns common boxed Errno values, to prevent
|
||||
// allocations at runtime.
|
||||
func errnoErr(e syscall.Errno) error {
|
||||
switch e {
|
||||
case 0:
|
||||
return errERROR_EINVAL
|
||||
case errnoERROR_IO_PENDING:
|
||||
return errERROR_IO_PENDING
|
||||
}
|
||||
// TODO: add more here, after collecting data on the common
|
||||
// error values see on Windows. (perhaps when running
|
||||
// all.bat?)
|
||||
return e
|
||||
}
|
||||
|
||||
var (
|
||||
modws2_32 = windows.NewLazySystemDLL("ws2_32.dll")
|
||||
|
||||
procbind = modws2_32.NewProc("bind")
|
||||
procgetpeername = modws2_32.NewProc("getpeername")
|
||||
procgetsockname = modws2_32.NewProc("getsockname")
|
||||
)
|
||||
|
||||
func bind(s windows.Handle, name unsafe.Pointer, namelen int32) (err error) {
|
||||
r1, _, e1 := syscall.Syscall(procbind.Addr(), 3, uintptr(s), uintptr(name), uintptr(namelen))
|
||||
if r1 == socketError {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func getpeername(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) {
|
||||
r1, _, e1 := syscall.Syscall(procgetpeername.Addr(), 3, uintptr(s), uintptr(name), uintptr(unsafe.Pointer(namelen)))
|
||||
if r1 == socketError {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func getsockname(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) {
|
||||
r1, _, e1 := syscall.Syscall(procgetsockname.Addr(), 3, uintptr(s), uintptr(name), uintptr(unsafe.Pointer(namelen)))
|
||||
if r1 == socketError {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
124 vendor/github.com/Microsoft/go-winio/pipe.go generated vendored
@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package winio
|
||||
@ -13,6 +14,8 @@ import (
|
||||
"syscall"
|
||||
"time"
|
||||
"unsafe"
|
||||
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
//sys connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) = ConnectNamedPipe
|
||||
@ -21,10 +24,10 @@ import (
|
||||
//sys getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) = GetNamedPipeInfo
|
||||
//sys getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) = GetNamedPipeHandleStateW
|
||||
//sys localAlloc(uFlags uint32, length uint32) (ptr uintptr) = LocalAlloc
|
||||
//sys ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntstatus) = ntdll.NtCreateNamedPipeFile
|
||||
//sys rtlNtStatusToDosError(status ntstatus) (winerr error) = ntdll.RtlNtStatusToDosErrorNoTeb
|
||||
//sys rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntstatus) = ntdll.RtlDosPathNameToNtPathName_U
|
||||
//sys rtlDefaultNpAcl(dacl *uintptr) (status ntstatus) = ntdll.RtlDefaultNpAcl
|
||||
//sys ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntStatus) = ntdll.NtCreateNamedPipeFile
|
||||
//sys rtlNtStatusToDosError(status ntStatus) (winerr error) = ntdll.RtlNtStatusToDosErrorNoTeb
|
||||
//sys rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntStatus) = ntdll.RtlDosPathNameToNtPathName_U
|
||||
//sys rtlDefaultNpAcl(dacl *uintptr) (status ntStatus) = ntdll.RtlDefaultNpAcl
|
||||
|
||||
type ioStatusBlock struct {
|
||||
Status, Information uintptr
|
||||
@ -51,45 +54,22 @@ type securityDescriptor struct {
|
||||
Control uint16
|
||||
Owner uintptr
|
||||
Group uintptr
|
||||
Sacl uintptr
|
||||
Dacl uintptr
|
||||
Sacl uintptr //revive:disable-line:var-naming SACL, not Sacl
|
||||
Dacl uintptr //revive:disable-line:var-naming DACL, not Dacl
|
||||
}
|
||||
|
||||
type ntstatus int32
|
||||
type ntStatus int32
|
||||
|
||||
func (status ntstatus) Err() error {
|
||||
func (status ntStatus) Err() error {
|
||||
if status >= 0 {
|
||||
return nil
|
||||
}
|
||||
return rtlNtStatusToDosError(status)
|
||||
}
|
||||
|
||||
const (
|
||||
cERROR_PIPE_BUSY = syscall.Errno(231)
|
||||
cERROR_NO_DATA = syscall.Errno(232)
|
||||
cERROR_PIPE_CONNECTED = syscall.Errno(535)
|
||||
cERROR_SEM_TIMEOUT = syscall.Errno(121)
|
||||
|
||||
cSECURITY_SQOS_PRESENT = 0x100000
|
||||
cSECURITY_ANONYMOUS = 0
|
||||
|
||||
cPIPE_TYPE_MESSAGE = 4
|
||||
|
||||
cPIPE_READMODE_MESSAGE = 2
|
||||
|
||||
cFILE_OPEN = 1
|
||||
cFILE_CREATE = 2
|
||||
|
||||
cFILE_PIPE_MESSAGE_TYPE = 1
|
||||
cFILE_PIPE_REJECT_REMOTE_CLIENTS = 2
|
||||
|
||||
cSE_DACL_PRESENT = 4
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrPipeListenerClosed is returned for pipe operations on listeners that have been closed.
|
||||
// This error should match net.errClosing since docker takes a dependency on its text.
|
||||
ErrPipeListenerClosed = errors.New("use of closed network connection")
|
||||
ErrPipeListenerClosed = net.ErrClosed
|
||||
|
||||
errPipeWriteClosed = errors.New("pipe has been closed for write")
|
||||
)
|
||||
@ -116,9 +96,10 @@ func (f *win32Pipe) RemoteAddr() net.Addr {
|
||||
}
|
||||
|
||||
func (f *win32Pipe) SetDeadline(t time.Time) error {
|
||||
f.SetReadDeadline(t)
|
||||
f.SetWriteDeadline(t)
|
||||
return nil
|
||||
if err := f.SetReadDeadline(t); err != nil {
|
||||
return err
|
||||
}
|
||||
return f.SetWriteDeadline(t)
|
||||
}
|
||||
|
||||
// CloseWrite closes the write side of a message pipe in byte mode.
|
||||
@ -157,14 +138,14 @@ func (f *win32MessageBytePipe) Read(b []byte) (int, error) {
|
||||
return 0, io.EOF
|
||||
}
|
||||
n, err := f.win32File.Read(b)
|
||||
if err == io.EOF {
|
||||
if err == io.EOF { //nolint:errorlint
|
||||
// If this was the result of a zero-byte read, then
|
||||
// it is possible that the read was due to a zero-size
|
||||
// message. Since we are simulating CloseWrite with a
|
||||
// zero-byte message, ensure that all future Read() calls
|
||||
// also return EOF.
|
||||
f.readEOF = true
|
||||
} else if err == syscall.ERROR_MORE_DATA {
|
||||
} else if err == syscall.ERROR_MORE_DATA { //nolint:errorlint // err is Errno
|
||||
// ERROR_MORE_DATA indicates that the pipe's read mode is message mode
|
||||
// and the message still has more bytes. Treat this as a success, since
|
||||
// this package presents all named pipes as byte streams.
|
||||
@ -173,7 +154,7 @@ func (f *win32MessageBytePipe) Read(b []byte) (int, error) {
|
||||
return n, err
|
||||
}
|
||||
|
||||
func (s pipeAddress) Network() string {
|
||||
func (pipeAddress) Network() string {
|
||||
return "pipe"
|
||||
}
|
||||
|
||||
@ -184,16 +165,21 @@ func (s pipeAddress) String() string {
|
||||
// tryDialPipe attempts to dial the pipe at `path` until `ctx` cancellation or timeout.
|
||||
func tryDialPipe(ctx context.Context, path *string, access uint32) (syscall.Handle, error) {
|
||||
for {
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return syscall.Handle(0), ctx.Err()
|
||||
default:
|
||||
h, err := createFile(*path, access, 0, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED|cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
|
||||
h, err := createFile(*path,
|
||||
access,
|
||||
0,
|
||||
nil,
|
||||
syscall.OPEN_EXISTING,
|
||||
windows.FILE_FLAG_OVERLAPPED|windows.SECURITY_SQOS_PRESENT|windows.SECURITY_ANONYMOUS,
|
||||
0)
|
||||
if err == nil {
|
||||
return h, nil
|
||||
}
|
||||
if err != cERROR_PIPE_BUSY {
|
||||
if err != windows.ERROR_PIPE_BUSY { //nolint:errorlint // err is Errno
|
||||
return h, &os.PathError{Err: err, Op: "open", Path: *path}
|
||||
}
|
||||
// Wait 10 msec and try again. This is a rather simplistic
|
||||
@ -213,9 +199,10 @@ func DialPipe(path string, timeout *time.Duration) (net.Conn, error) {
|
||||
} else {
|
||||
absTimeout = time.Now().Add(2 * time.Second)
|
||||
}
|
||||
ctx, _ := context.WithDeadline(context.Background(), absTimeout)
|
||||
ctx, cancel := context.WithDeadline(context.Background(), absTimeout)
|
||||
defer cancel()
|
||||
conn, err := DialPipeContext(ctx, path)
|
||||
if err == context.DeadlineExceeded {
|
||||
if errors.Is(err, context.DeadlineExceeded) {
|
||||
return nil, ErrTimeout
|
||||
}
|
||||
return conn, err
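A client-side sketch for the dial path above, not taken from the vendored code; the pipe name is made up, and the winio and time imports are assumed:

// pipeEcho connects to a named pipe, writes one message, and closes.
// `\\.\pipe\example` is a hypothetical pipe name.
func pipeEcho() error {
	timeout := 2 * time.Second
	conn, err := winio.DialPipe(`\\.\pipe\example`, &timeout)
	if err != nil {
		return err // winio.ErrTimeout if the deadline elapsed while retrying ERROR_PIPE_BUSY
	}
	defer conn.Close()
	_, err = conn.Write([]byte("ping"))
	return err
}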
|
||||
@ -251,7 +238,7 @@ func DialPipeAccess(ctx context.Context, path string, access uint32) (net.Conn,
|
||||
|
||||
// If the pipe is in message mode, return a message byte pipe, which
|
||||
// supports CloseWrite().
|
||||
if flags&cPIPE_TYPE_MESSAGE != 0 {
|
||||
if flags&windows.PIPE_TYPE_MESSAGE != 0 {
|
||||
return &win32MessageBytePipe{
|
||||
win32Pipe: win32Pipe{win32File: f, path: path},
|
||||
}, nil
|
||||
@ -283,7 +270,11 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy
|
||||
oa.Length = unsafe.Sizeof(oa)
|
||||
|
||||
var ntPath unicodeString
|
||||
if err := rtlDosPathNameToNtPathName(&path16[0], &ntPath, 0, 0).Err(); err != nil {
|
||||
if err := rtlDosPathNameToNtPathName(&path16[0],
|
||||
&ntPath,
|
||||
0,
|
||||
0,
|
||||
).Err(); err != nil {
|
||||
return 0, &os.PathError{Op: "open", Path: path, Err: err}
|
||||
}
|
||||
defer localFree(ntPath.Buffer)
|
||||
@ -292,8 +283,8 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy
|
||||
// The security descriptor is only needed for the first pipe.
|
||||
if first {
|
||||
if sd != nil {
|
||||
len := uint32(len(sd))
|
||||
sdb := localAlloc(0, len)
|
||||
l := uint32(len(sd))
|
||||
sdb := localAlloc(0, l)
|
||||
defer localFree(sdb)
|
||||
copy((*[0xffff]byte)(unsafe.Pointer(sdb))[:], sd)
|
||||
oa.SecurityDescriptor = (*securityDescriptor)(unsafe.Pointer(sdb))
|
||||
@ -301,28 +292,28 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy
|
||||
// Construct the default named pipe security descriptor.
|
||||
var dacl uintptr
|
||||
if err := rtlDefaultNpAcl(&dacl).Err(); err != nil {
|
||||
return 0, fmt.Errorf("getting default named pipe ACL: %s", err)
|
||||
return 0, fmt.Errorf("getting default named pipe ACL: %w", err)
|
||||
}
|
||||
defer localFree(dacl)
|
||||
|
||||
sdb := &securityDescriptor{
|
||||
Revision: 1,
|
||||
Control: cSE_DACL_PRESENT,
|
||||
Control: windows.SE_DACL_PRESENT,
|
||||
Dacl: dacl,
|
||||
}
|
||||
oa.SecurityDescriptor = sdb
|
||||
}
|
||||
}
|
||||
|
||||
typ := uint32(cFILE_PIPE_REJECT_REMOTE_CLIENTS)
|
||||
typ := uint32(windows.FILE_PIPE_REJECT_REMOTE_CLIENTS)
|
||||
if c.MessageMode {
|
||||
typ |= cFILE_PIPE_MESSAGE_TYPE
|
||||
typ |= windows.FILE_PIPE_MESSAGE_TYPE
|
||||
}
|
||||
|
||||
disposition := uint32(cFILE_OPEN)
|
||||
disposition := uint32(windows.FILE_OPEN)
|
||||
access := uint32(syscall.GENERIC_READ | syscall.GENERIC_WRITE | syscall.SYNCHRONIZE)
|
||||
if first {
|
||||
disposition = cFILE_CREATE
|
||||
disposition = windows.FILE_CREATE
|
||||
// By not asking for read or write access, the named pipe file system
|
||||
// will put this pipe into an initially disconnected state, blocking
|
||||
// client connections until the next call with first == false.
|
||||
@ -335,7 +326,20 @@ func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (sy
|
||||
h syscall.Handle
|
||||
iosb ioStatusBlock
|
||||
)
|
||||
err = ntCreateNamedPipeFile(&h, access, &oa, &iosb, syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE, disposition, 0, typ, 0, 0, 0xffffffff, uint32(c.InputBufferSize), uint32(c.OutputBufferSize), &timeout).Err()
|
||||
err = ntCreateNamedPipeFile(&h,
|
||||
access,
|
||||
&oa,
|
||||
&iosb,
|
||||
syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE,
|
||||
disposition,
|
||||
0,
|
||||
typ,
|
||||
0,
|
||||
0,
|
||||
0xffffffff,
|
||||
uint32(c.InputBufferSize),
|
||||
uint32(c.OutputBufferSize),
|
||||
&timeout).Err()
|
||||
if err != nil {
|
||||
return 0, &os.PathError{Op: "open", Path: path, Err: err}
|
||||
}
|
||||
@ -380,7 +384,7 @@ func (l *win32PipeListener) makeConnectedServerPipe() (*win32File, error) {
|
||||
p.Close()
|
||||
p = nil
|
||||
err = <-ch
|
||||
if err == nil || err == ErrFileClosed {
|
||||
if err == nil || err == ErrFileClosed { //nolint:errorlint // err is Errno
|
||||
err = ErrPipeListenerClosed
|
||||
}
|
||||
}
|
||||
@ -402,12 +406,12 @@ func (l *win32PipeListener) listenerRoutine() {
|
||||
p, err = l.makeConnectedServerPipe()
|
||||
// If the connection was immediately closed by the client, try
|
||||
// again.
|
||||
if err != cERROR_NO_DATA {
|
||||
if err != windows.ERROR_NO_DATA { //nolint:errorlint // err is Errno
|
||||
break
|
||||
}
|
||||
}
|
||||
responseCh <- acceptResponse{p, err}
|
||||
closed = err == ErrPipeListenerClosed
|
||||
closed = err == ErrPipeListenerClosed //nolint:errorlint // err is Errno
|
||||
}
|
||||
}
|
||||
syscall.Close(l.firstHandle)
|
||||
@ -469,15 +473,15 @@ func ListenPipe(path string, c *PipeConfig) (net.Listener, error) {
|
||||
}
|
||||
|
||||
func connectPipe(p *win32File) error {
|
||||
c, err := p.prepareIo()
|
||||
c, err := p.prepareIO()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer p.wg.Done()
|
||||
|
||||
err = connectNamedPipe(p.handle, &c.o)
|
||||
_, err = p.asyncIo(c, nil, 0, err)
|
||||
if err != nil && err != cERROR_PIPE_CONNECTED {
|
||||
_, err = p.asyncIO(c, nil, 0, err)
|
||||
if err != nil && err != windows.ERROR_PIPE_CONNECTED { //nolint:errorlint // err is Errno
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
|
16
vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go
generated
vendored
16
vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go
generated
vendored
@ -1,5 +1,3 @@
|
||||
// +build windows
|
||||
|
||||
// Package guid provides a GUID type. The backing structure for a GUID is
|
||||
// identical to that used by the golang.org/x/sys/windows GUID type.
|
||||
// There are two main binary encodings used for a GUID, the big-endian encoding,
|
||||
@ -9,24 +7,26 @@ package guid
|
||||
|
||||
import (
|
||||
"crypto/rand"
|
||||
"crypto/sha1"
|
||||
"crypto/sha1" //nolint:gosec // not used for secure application
|
||||
"encoding"
|
||||
"encoding/binary"
|
||||
"fmt"
|
||||
"strconv"
|
||||
)
|
||||
|
||||
//go:generate go run golang.org/x/tools/cmd/stringer -type=Variant -trimprefix=Variant -linecomment
|
||||
|
||||
// Variant specifies which GUID variant (or "type") of the GUID. It determines
|
||||
// how the entirety of the rest of the GUID is interpreted.
|
||||
type Variant uint8
|
||||
|
||||
// The variants specified by RFC 4122.
|
||||
// The variants specified by RFC 4122 section 4.1.1.
|
||||
const (
|
||||
// VariantUnknown specifies a GUID variant which does not conform to one of
|
||||
// the variant encodings specified in RFC 4122.
|
||||
VariantUnknown Variant = iota
|
||||
VariantNCS
|
||||
VariantRFC4122
|
||||
VariantRFC4122 // RFC 4122
|
||||
VariantMicrosoft
|
||||
VariantFuture
|
||||
)
|
||||
@ -36,6 +36,10 @@ const (
|
||||
// hash of an input string.
|
||||
type Version uint8
|
||||
|
||||
func (v Version) String() string {
|
||||
return strconv.FormatUint(uint64(v), 10)
|
||||
}
|
||||
|
||||
var _ = (encoding.TextMarshaler)(GUID{})
|
||||
var _ = (encoding.TextUnmarshaler)(&GUID{})
|
||||
|
||||
@ -61,7 +65,7 @@ func NewV4() (GUID, error) {
|
||||
// big-endian UTF16 stream of bytes. If that is desired, the string can be
|
||||
// encoded as such before being passed to this function.
|
||||
func NewV5(namespace GUID, name []byte) (GUID, error) {
|
||||
b := sha1.New()
|
||||
b := sha1.New() //nolint:gosec // not used for secure application
|
||||
namespaceBytes := namespace.ToArray()
|
||||
b.Write(namespaceBytes[:])
|
||||
b.Write(name)
|
||||
|
1
vendor/github.com/Microsoft/go-winio/pkg/guid/guid_nonwindows.go
generated
vendored
1
vendor/github.com/Microsoft/go-winio/pkg/guid/guid_nonwindows.go
generated
vendored
@ -1,3 +1,4 @@
|
||||
//go:build !windows
|
||||
// +build !windows
|
||||
|
||||
package guid
|
||||
|
3
vendor/github.com/Microsoft/go-winio/pkg/guid/guid_windows.go
generated
vendored
3
vendor/github.com/Microsoft/go-winio/pkg/guid/guid_windows.go
generated
vendored
@ -1,3 +1,6 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package guid
|
||||
|
||||
import "golang.org/x/sys/windows"
|
||||
|
27
vendor/github.com/Microsoft/go-winio/pkg/guid/variant_string.go
generated
vendored
Normal file
27
vendor/github.com/Microsoft/go-winio/pkg/guid/variant_string.go
generated
vendored
Normal file
@ -0,0 +1,27 @@
|
||||
// Code generated by "stringer -type=Variant -trimprefix=Variant -linecomment"; DO NOT EDIT.
|
||||
|
||||
package guid
|
||||
|
||||
import "strconv"
|
||||
|
||||
func _() {
|
||||
// An "invalid array index" compiler error signifies that the constant values have changed.
|
||||
// Re-run the stringer command to generate them again.
|
||||
var x [1]struct{}
|
||||
_ = x[VariantUnknown-0]
|
||||
_ = x[VariantNCS-1]
|
||||
_ = x[VariantRFC4122-2]
|
||||
_ = x[VariantMicrosoft-3]
|
||||
_ = x[VariantFuture-4]
|
||||
}
|
||||
|
||||
const _Variant_name = "UnknownNCSRFC 4122MicrosoftFuture"
|
||||
|
||||
var _Variant_index = [...]uint8{0, 7, 10, 18, 27, 33}
|
||||
|
||||
func (i Variant) String() string {
|
||||
if i >= Variant(len(_Variant_index)-1) {
|
||||
return "Variant(" + strconv.FormatInt(int64(i), 10) + ")"
|
||||
}
|
||||
return _Variant_name[_Variant_index[i]:_Variant_index[i+1]]
|
||||
}
|
32 vendor/github.com/Microsoft/go-winio/privilege.go generated vendored
@ -1,3 +1,4 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package winio
|
||||
@ -24,22 +25,17 @@ import (
|
||||
//sys lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) = advapi32.LookupPrivilegeDisplayNameW
|
||||
|
||||
const (
|
||||
SE_PRIVILEGE_ENABLED = 2
|
||||
//revive:disable-next-line:var-naming ALL_CAPS
|
||||
SE_PRIVILEGE_ENABLED = windows.SE_PRIVILEGE_ENABLED
|
||||
|
||||
ERROR_NOT_ALL_ASSIGNED syscall.Errno = 1300
|
||||
//revive:disable-next-line:var-naming ALL_CAPS
|
||||
ERROR_NOT_ALL_ASSIGNED syscall.Errno = windows.ERROR_NOT_ALL_ASSIGNED
|
||||
|
||||
SeBackupPrivilege = "SeBackupPrivilege"
|
||||
SeRestorePrivilege = "SeRestorePrivilege"
|
||||
SeSecurityPrivilege = "SeSecurityPrivilege"
|
||||
)
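A usage sketch for these privilege names with the package's RunWithPrivileges helper, not part of the vendored code; the protected file path is illustrative and the winio and os imports are assumed:

// backupRead enables SeBackupPrivilege only for the duration of the callback.
func backupRead() error {
	return winio.RunWithPrivileges([]string{winio.SeBackupPrivilege}, func() error {
		f, err := os.Open(`C:\Windows\System32\config\SAM`) // hypothetical protected path
		if err != nil {
			return err
		}
		return f.Close()
	})
}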
|
||||
|
||||
const (
|
||||
securityAnonymous = iota
|
||||
securityIdentification
|
||||
securityImpersonation
|
||||
securityDelegation
|
||||
)
|
||||
|
||||
var (
|
||||
privNames = make(map[string]uint64)
|
||||
privNameMutex sync.Mutex
|
||||
@ -51,11 +47,9 @@ type PrivilegeError struct {
|
||||
}
|
||||
|
||||
func (e *PrivilegeError) Error() string {
|
||||
s := ""
|
||||
s := "Could not enable privilege "
|
||||
if len(e.privileges) > 1 {
|
||||
s = "Could not enable privileges "
|
||||
} else {
|
||||
s = "Could not enable privilege "
|
||||
}
|
||||
for i, p := range e.privileges {
|
||||
if i != 0 {
|
||||
@ -94,7 +88,7 @@ func RunWithPrivileges(names []string, fn func() error) error {
|
||||
}
|
||||
|
||||
func mapPrivileges(names []string) ([]uint64, error) {
|
||||
var privileges []uint64
|
||||
privileges := make([]uint64, 0, len(names))
|
||||
privNameMutex.Lock()
|
||||
defer privNameMutex.Unlock()
|
||||
for _, name := range names {
|
||||
@ -127,7 +121,7 @@ func enableDisableProcessPrivilege(names []string, action uint32) error {
|
||||
return err
|
||||
}
|
||||
|
||||
p, _ := windows.GetCurrentProcess()
|
||||
p := windows.CurrentProcess()
|
||||
var token windows.Token
|
||||
err = windows.OpenProcessToken(p, windows.TOKEN_ADJUST_PRIVILEGES|windows.TOKEN_QUERY, &token)
|
||||
if err != nil {
|
||||
@ -140,10 +134,10 @@ func enableDisableProcessPrivilege(names []string, action uint32) error {
|
||||
|
||||
func adjustPrivileges(token windows.Token, privileges []uint64, action uint32) error {
|
||||
var b bytes.Buffer
|
||||
binary.Write(&b, binary.LittleEndian, uint32(len(privileges)))
|
||||
_ = binary.Write(&b, binary.LittleEndian, uint32(len(privileges)))
|
||||
for _, p := range privileges {
|
||||
binary.Write(&b, binary.LittleEndian, p)
|
||||
binary.Write(&b, binary.LittleEndian, action)
|
||||
_ = binary.Write(&b, binary.LittleEndian, p)
|
||||
_ = binary.Write(&b, binary.LittleEndian, action)
|
||||
}
|
||||
prevState := make([]byte, b.Len())
|
||||
reqSize := uint32(0)
|
||||
@ -151,7 +145,7 @@ func adjustPrivileges(token windows.Token, privileges []uint64, action uint32) e
|
||||
if !success {
|
||||
return err
|
||||
}
|
||||
if err == ERROR_NOT_ALL_ASSIGNED {
|
||||
if err == ERROR_NOT_ALL_ASSIGNED { //nolint:errorlint // err is Errno
|
||||
return &PrivilegeError{privileges}
|
||||
}
|
||||
return nil
|
||||
@ -177,7 +171,7 @@ func getPrivilegeName(luid uint64) string {
|
||||
}
|
||||
|
||||
func newThreadToken() (windows.Token, error) {
|
||||
err := impersonateSelf(securityImpersonation)
|
||||
err := impersonateSelf(windows.SecurityImpersonation)
|
||||
if err != nil {
|
||||
return 0, err
|
||||
}
|
||||
|
11 vendor/github.com/Microsoft/go-winio/reparse.go generated vendored
@ -1,3 +1,6 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package winio
|
||||
|
||||
import (
|
||||
@ -113,16 +116,16 @@ func EncodeReparsePoint(rp *ReparsePoint) []byte {
|
||||
}
|
||||
|
||||
var b bytes.Buffer
|
||||
binary.Write(&b, binary.LittleEndian, &data)
|
||||
_ = binary.Write(&b, binary.LittleEndian, &data)
|
||||
if !rp.IsMountPoint {
|
||||
flags := uint32(0)
|
||||
if relative {
|
||||
flags |= 1
|
||||
}
|
||||
binary.Write(&b, binary.LittleEndian, flags)
|
||||
_ = binary.Write(&b, binary.LittleEndian, flags)
|
||||
}
|
||||
|
||||
binary.Write(&b, binary.LittleEndian, ntTarget16)
|
||||
binary.Write(&b, binary.LittleEndian, target16)
|
||||
_ = binary.Write(&b, binary.LittleEndian, ntTarget16)
|
||||
_ = binary.Write(&b, binary.LittleEndian, target16)
|
||||
return b.Bytes()
|
||||
}
|
||||
|
64 vendor/github.com/Microsoft/go-winio/sd.go generated vendored
@ -1,23 +1,25 @@
|
||||
//go:build windows
|
||||
// +build windows
|
||||
|
||||
package winio
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"syscall"
|
||||
"unsafe"
|
||||
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
|
||||
//sys lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) = advapi32.LookupAccountNameW
|
||||
//sys lookupAccountSid(systemName *uint16, sid *byte, name *uint16, nameSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) = advapi32.LookupAccountSidW
|
||||
//sys convertSidToStringSid(sid *byte, str **uint16) (err error) = advapi32.ConvertSidToStringSidW
|
||||
//sys convertStringSidToSid(str *uint16, sid **byte) (err error) = advapi32.ConvertStringSidToSidW
|
||||
//sys convertStringSecurityDescriptorToSecurityDescriptor(str string, revision uint32, sd *uintptr, size *uint32) (err error) = advapi32.ConvertStringSecurityDescriptorToSecurityDescriptorW
|
||||
//sys convertSecurityDescriptorToStringSecurityDescriptor(sd *byte, revision uint32, secInfo uint32, sddl **uint16, sddlSize *uint32) (err error) = advapi32.ConvertSecurityDescriptorToStringSecurityDescriptorW
|
||||
//sys localFree(mem uintptr) = LocalFree
|
||||
//sys getSecurityDescriptorLength(sd uintptr) (len uint32) = advapi32.GetSecurityDescriptorLength
|
||||
|
||||
const (
|
||||
cERROR_NONE_MAPPED = syscall.Errno(1332)
|
||||
)
|
||||
|
||||
type AccountLookupError struct {
|
||||
Name string
|
||||
Err error
|
||||
@ -28,8 +30,10 @@ func (e *AccountLookupError) Error() string {
|
||||
return "lookup account: empty account name specified"
|
||||
}
|
||||
var s string
|
||||
switch e.Err {
|
||||
case cERROR_NONE_MAPPED:
|
||||
switch {
|
||||
case errors.Is(e.Err, windows.ERROR_INVALID_SID):
|
||||
s = "the security ID structure is invalid"
|
||||
case errors.Is(e.Err, windows.ERROR_NONE_MAPPED):
|
||||
s = "not found"
|
||||
default:
|
||||
s = e.Err.Error()
|
||||
@ -37,6 +41,8 @@ func (e *AccountLookupError) Error() string {
|
||||
return "lookup account " + e.Name + ": " + s
|
||||
}
|
||||
|
||||
func (e *AccountLookupError) Unwrap() error { return e.Err }
|
||||
|
||||
type SddlConversionError struct {
|
||||
Sddl string
|
||||
Err error
|
||||
@ -46,15 +52,19 @@ func (e *SddlConversionError) Error() string {
|
||||
return "convert " + e.Sddl + ": " + e.Err.Error()
|
||||
}
|
||||
|
||||
func (e *SddlConversionError) Unwrap() error { return e.Err }
|
||||
|
||||
// LookupSidByName looks up the SID of an account by name
|
||||
//
|
||||
//revive:disable-next-line:var-naming SID, not Sid
|
||||
func LookupSidByName(name string) (sid string, err error) {
|
||||
if name == "" {
|
||||
return "", &AccountLookupError{name, cERROR_NONE_MAPPED}
|
||||
return "", &AccountLookupError{name, windows.ERROR_NONE_MAPPED}
|
||||
}
|
||||
|
||||
var sidSize, sidNameUse, refDomainSize uint32
|
||||
err = lookupAccountName(nil, name, nil, &sidSize, nil, &refDomainSize, &sidNameUse)
|
||||
if err != nil && err != syscall.ERROR_INSUFFICIENT_BUFFER {
|
||||
if err != nil && err != syscall.ERROR_INSUFFICIENT_BUFFER { //nolint:errorlint // err is Errno
|
||||
return "", &AccountLookupError{name, err}
|
||||
}
|
||||
sidBuffer := make([]byte, sidSize)
|
||||
@ -73,6 +83,42 @@ func LookupSidByName(name string) (sid string, err error) {
|
||||
return sid, nil
|
||||
}
|
||||
|
||||
// LookupNameBySid looks up the name of an account by SID
|
||||
//
|
||||
//revive:disable-next-line:var-naming SID, not Sid
|
||||
func LookupNameBySid(sid string) (name string, err error) {
|
||||
if sid == "" {
|
||||
return "", &AccountLookupError{sid, windows.ERROR_NONE_MAPPED}
|
||||
}
|
||||
|
||||
sidBuffer, err := windows.UTF16PtrFromString(sid)
|
||||
if err != nil {
|
||||
return "", &AccountLookupError{sid, err}
|
||||
}
|
||||
|
||||
var sidPtr *byte
|
||||
if err = convertStringSidToSid(sidBuffer, &sidPtr); err != nil {
|
||||
return "", &AccountLookupError{sid, err}
|
||||
}
|
||||
defer localFree(uintptr(unsafe.Pointer(sidPtr)))
|
||||
|
||||
var nameSize, refDomainSize, sidNameUse uint32
|
||||
err = lookupAccountSid(nil, sidPtr, nil, &nameSize, nil, &refDomainSize, &sidNameUse)
|
||||
if err != nil && err != windows.ERROR_INSUFFICIENT_BUFFER { //nolint:errorlint // err is Errno
|
||||
return "", &AccountLookupError{sid, err}
|
||||
}
|
||||
|
||||
nameBuffer := make([]uint16, nameSize)
|
||||
refDomainBuffer := make([]uint16, refDomainSize)
|
||||
err = lookupAccountSid(nil, sidPtr, &nameBuffer[0], &nameSize, &refDomainBuffer[0], &refDomainSize, &sidNameUse)
|
||||
if err != nil {
|
||||
return "", &AccountLookupError{sid, err}
|
||||
}
|
||||
|
||||
name = windows.UTF16ToString(nameBuffer)
|
||||
return name, nil
|
||||
}
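A round-trip sketch for the two lookups above, not part of the vendored code; the account name is illustrative and the winio import is assumed:

// roundTripSid resolves a well-known account name to a SID string and back.
func roundTripSid() (string, string, error) {
	sid, err := winio.LookupSidByName("Everyone") // hypothetical account name
	if err != nil {
		return "", "", err
	}
	name, err := winio.LookupNameBySid(sid)
	if err != nil {
		return "", "", err
	}
	return sid, name, nil
}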
|
||||
|
||||
func SddlToSecurityDescriptor(sddl string) ([]byte, error) {
|
||||
var sdBuffer uintptr
|
||||
err := convertStringSecurityDescriptorToSecurityDescriptor(sddl, 1, &sdBuffer, nil)
|
||||
@ -87,7 +133,7 @@ func SddlToSecurityDescriptor(sddl string) ([]byte, error) {
|
||||
|
||||
func SecurityDescriptorToSddl(sd []byte) (string, error) {
|
||||
var sddl *uint16
|
||||
// The returned string length seems to including an aribtrary number of terminating NULs.
|
||||
// The returned string length seems to include an arbitrary number of terminating NULs.
|
||||
// Don't use it.
|
||||
err := convertSecurityDescriptorToStringSecurityDescriptor(&sd[0], 1, 0xff, &sddl, nil)
|
||||
if err != nil {
|
||||
|
4 vendor/github.com/Microsoft/go-winio/syscall.go generated vendored
@ -1,3 +1,5 @@
|
||||
//go:build windows
|
||||
|
||||
package winio
|
||||
|
||||
//go:generate go run golang.org/x/sys/windows/mkwinsyscall -output zsyscall_windows.go file.go pipe.go sd.go fileinfo.go privilege.go backup.go hvsock.go
|
||||
//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go ./*.go
|
||||
|
5 vendor/github.com/Microsoft/go-winio/tools.go generated vendored Normal file
@ -0,0 +1,5 @@
|
||||
//go:build tools
|
||||
|
||||
package winio
|
||||
|
||||
import _ "golang.org/x/tools/cmd/stringer"
|
45 vendor/github.com/Microsoft/go-winio/zsyscall_windows.go generated vendored
@ -1,4 +1,6 @@
|
||||
// Code generated by 'go generate'; DO NOT EDIT.
|
||||
//go:build windows
|
||||
|
||||
// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT.
|
||||
|
||||
package winio
|
||||
|
||||
@ -47,9 +49,11 @@ var (
|
||||
procConvertSecurityDescriptorToStringSecurityDescriptorW = modadvapi32.NewProc("ConvertSecurityDescriptorToStringSecurityDescriptorW")
|
||||
procConvertSidToStringSidW = modadvapi32.NewProc("ConvertSidToStringSidW")
|
||||
procConvertStringSecurityDescriptorToSecurityDescriptorW = modadvapi32.NewProc("ConvertStringSecurityDescriptorToSecurityDescriptorW")
|
||||
procConvertStringSidToSidW = modadvapi32.NewProc("ConvertStringSidToSidW")
|
||||
procGetSecurityDescriptorLength = modadvapi32.NewProc("GetSecurityDescriptorLength")
|
||||
procImpersonateSelf = modadvapi32.NewProc("ImpersonateSelf")
|
||||
procLookupAccountNameW = modadvapi32.NewProc("LookupAccountNameW")
|
||||
procLookupAccountSidW = modadvapi32.NewProc("LookupAccountSidW")
|
||||
procLookupPrivilegeDisplayNameW = modadvapi32.NewProc("LookupPrivilegeDisplayNameW")
|
||||
procLookupPrivilegeNameW = modadvapi32.NewProc("LookupPrivilegeNameW")
|
||||
procLookupPrivilegeValueW = modadvapi32.NewProc("LookupPrivilegeValueW")
|
||||
@ -74,7 +78,6 @@ var (
|
||||
procRtlDosPathNameToNtPathName_U = modntdll.NewProc("RtlDosPathNameToNtPathName_U")
|
||||
procRtlNtStatusToDosErrorNoTeb = modntdll.NewProc("RtlNtStatusToDosErrorNoTeb")
|
||||
procWSAGetOverlappedResult = modws2_32.NewProc("WSAGetOverlappedResult")
|
||||
procbind = modws2_32.NewProc("bind")
|
||||
)
|
||||
|
||||
func adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) {
|
||||
@ -123,6 +126,14 @@ func _convertStringSecurityDescriptorToSecurityDescriptor(str *uint16, revision
|
||||
return
|
||||
}
|
||||
|
||||
func convertStringSidToSid(str *uint16, sid **byte) (err error) {
|
||||
r1, _, e1 := syscall.Syscall(procConvertStringSidToSidW.Addr(), 2, uintptr(unsafe.Pointer(str)), uintptr(unsafe.Pointer(sid)), 0)
|
||||
if r1 == 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func getSecurityDescriptorLength(sd uintptr) (len uint32) {
|
||||
r0, _, _ := syscall.Syscall(procGetSecurityDescriptorLength.Addr(), 1, uintptr(sd), 0, 0)
|
||||
len = uint32(r0)
|
||||
@ -154,6 +165,14 @@ func _lookupAccountName(systemName *uint16, accountName *uint16, sid *byte, sidS
|
||||
return
|
||||
}
|
||||
|
||||
func lookupAccountSid(systemName *uint16, sid *byte, name *uint16, nameSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
|
||||
r1, _, e1 := syscall.Syscall9(procLookupAccountSidW.Addr(), 7, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(nameSize)), uintptr(unsafe.Pointer(refDomain)), uintptr(unsafe.Pointer(refDomainSize)), uintptr(unsafe.Pointer(sidNameUse)), 0, 0)
|
||||
if r1 == 0 {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) {
|
||||
var _p0 *uint16
|
||||
_p0, err = syscall.UTF16PtrFromString(systemName)
|
||||
@ -380,25 +399,25 @@ func setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err erro
|
||||
return
|
||||
}
|
||||
|
||||
func ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntstatus) {
|
||||
func ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntStatus) {
|
||||
r0, _, _ := syscall.Syscall15(procNtCreateNamedPipeFile.Addr(), 14, uintptr(unsafe.Pointer(pipe)), uintptr(access), uintptr(unsafe.Pointer(oa)), uintptr(unsafe.Pointer(iosb)), uintptr(share), uintptr(disposition), uintptr(options), uintptr(typ), uintptr(readMode), uintptr(completionMode), uintptr(maxInstances), uintptr(inboundQuota), uintptr(outputQuota), uintptr(unsafe.Pointer(timeout)), 0)
|
||||
status = ntstatus(r0)
|
||||
status = ntStatus(r0)
|
||||
return
|
||||
}
|
||||
|
||||
func rtlDefaultNpAcl(dacl *uintptr) (status ntstatus) {
|
||||
func rtlDefaultNpAcl(dacl *uintptr) (status ntStatus) {
|
||||
r0, _, _ := syscall.Syscall(procRtlDefaultNpAcl.Addr(), 1, uintptr(unsafe.Pointer(dacl)), 0, 0)
|
||||
status = ntstatus(r0)
|
||||
status = ntStatus(r0)
|
||||
return
|
||||
}
|
||||
|
||||
func rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntstatus) {
|
||||
func rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntStatus) {
|
||||
r0, _, _ := syscall.Syscall6(procRtlDosPathNameToNtPathName_U.Addr(), 4, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(ntName)), uintptr(filePart), uintptr(reserved), 0, 0)
|
||||
status = ntstatus(r0)
|
||||
status = ntStatus(r0)
|
||||
return
|
||||
}
|
||||
|
||||
func rtlNtStatusToDosError(status ntstatus) (winerr error) {
|
||||
func rtlNtStatusToDosError(status ntStatus) (winerr error) {
|
||||
r0, _, _ := syscall.Syscall(procRtlNtStatusToDosErrorNoTeb.Addr(), 1, uintptr(status), 0, 0)
|
||||
if r0 != 0 {
|
||||
winerr = syscall.Errno(r0)
|
||||
@ -417,11 +436,3 @@ func wsaGetOverlappedResult(h syscall.Handle, o *syscall.Overlapped, bytes *uint
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func bind(s syscall.Handle, name unsafe.Pointer, namelen int32) (err error) {
|
||||
r1, _, e1 := syscall.Syscall(procbind.Addr(), 3, uintptr(s), uintptr(name), uintptr(namelen))
|
||||
if r1 == socketError {
|
||||
err = errnoErr(e1)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
10 vendor/github.com/cenkalti/backoff/v4/.travis.yml generated vendored
@ -1,10 +0,0 @@
|
||||
language: go
|
||||
go:
|
||||
- 1.13
|
||||
- 1.x
|
||||
- tip
|
||||
before_install:
|
||||
- go get github.com/mattn/goveralls
|
||||
- go get golang.org/x/tools/cmd/cover
|
||||
script:
|
||||
- $HOME/gopath/bin/goveralls -service=travis-ci
|
3 vendor/github.com/cenkalti/backoff/v4/exponential.go generated vendored
@ -147,6 +147,9 @@ func (b *ExponentialBackOff) incrementCurrentInterval() {
|
||||
// Returns a random value from the following interval:
|
||||
// [currentInterval - randomizationFactor * currentInterval, currentInterval + randomizationFactor * currentInterval].
|
||||
func getRandomValueFromInterval(randomizationFactor, random float64, currentInterval time.Duration) time.Duration {
|
||||
if randomizationFactor == 0 {
|
||||
return currentInterval // make sure no randomness is used when randomizationFactor is 0.
|
||||
}
|
||||
var delta = randomizationFactor * float64(currentInterval)
|
||||
var minInterval = float64(currentInterval) - delta
|
||||
var maxInterval = float64(currentInterval) + delta
|
||||
|
50 vendor/github.com/cenkalti/backoff/v4/retry.go generated vendored
@ -5,10 +5,20 @@ import (
|
||||
"time"
|
||||
)
|
||||
|
||||
// An OperationWithData is executed by RetryWithData() or RetryNotifyWithData().
|
||||
// The operation will be retried using a backoff policy if it returns an error.
|
||||
type OperationWithData[T any] func() (T, error)
|
||||
|
||||
// An Operation is executed by Retry() or RetryNotify().
|
||||
// The operation will be retried using a backoff policy if it returns an error.
|
||||
type Operation func() error
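The generic entry points added in this hunk pair an OperationWithData with a backoff policy. A hedged usage sketch, not part of the vendored code; the HTTP fetch is illustrative and the context, fmt, io, net/http, and github.com/cenkalti/backoff/v4 imports are assumed:

// fetchWithBackoff retries a GET with exponential backoff and returns the body.
func fetchWithBackoff(ctx context.Context, url string) ([]byte, error) {
	op := func() ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err // retried according to the backoff policy
		}
		defer resp.Body.Close()
		if resp.StatusCode >= 500 {
			return nil, fmt.Errorf("server error: %s", resp.Status)
		}
		return io.ReadAll(resp.Body)
	}
	return backoff.RetryWithData(op, backoff.WithContext(backoff.NewExponentialBackOff(), ctx))
}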
|
||||
|
||||
func (o Operation) withEmptyData() OperationWithData[struct{}] {
|
||||
return func() (struct{}, error) {
|
||||
return struct{}{}, o()
|
||||
}
|
||||
}
|
||||
|
||||
// Notify is a notify-on-error function. It receives an operation error and
|
||||
// backoff delay if the operation failed (with an error).
|
||||
//
|
||||
@ -28,18 +38,41 @@ func Retry(o Operation, b BackOff) error {
|
||||
return RetryNotify(o, b, nil)
|
||||
}
|
||||
|
||||
// RetryWithData is like Retry but returns data in the response too.
|
||||
func RetryWithData[T any](o OperationWithData[T], b BackOff) (T, error) {
|
||||
return RetryNotifyWithData(o, b, nil)
|
||||
}
|
||||
|
||||
// RetryNotify calls notify function with the error and wait duration
|
||||
// for each failed attempt before sleep.
|
||||
func RetryNotify(operation Operation, b BackOff, notify Notify) error {
|
||||
return RetryNotifyWithTimer(operation, b, notify, nil)
|
||||
}
|
||||
|
||||
// RetryNotifyWithData is like RetryNotify but returns data in the response too.
|
||||
func RetryNotifyWithData[T any](operation OperationWithData[T], b BackOff, notify Notify) (T, error) {
|
||||
return doRetryNotify(operation, b, notify, nil)
|
||||
}
|
||||
|
||||
// RetryNotifyWithTimer calls notify function with the error and wait duration using the given Timer
|
||||
// for each failed attempt before sleep.
|
||||
// A default timer that uses system timer is used when nil is passed.
|
||||
func RetryNotifyWithTimer(operation Operation, b BackOff, notify Notify, t Timer) error {
|
||||
var err error
|
||||
var next time.Duration
|
||||
_, err := doRetryNotify(operation.withEmptyData(), b, notify, t)
|
||||
return err
|
||||
}
|
||||
|
||||
// RetryNotifyWithTimerAndData is like RetryNotifyWithTimer but returns data in the response too.
|
||||
func RetryNotifyWithTimerAndData[T any](operation OperationWithData[T], b BackOff, notify Notify, t Timer) (T, error) {
|
||||
return doRetryNotify(operation, b, notify, t)
|
||||
}
|
||||
|
||||
func doRetryNotify[T any](operation OperationWithData[T], b BackOff, notify Notify, t Timer) (T, error) {
|
||||
var (
|
||||
err error
|
||||
next time.Duration
|
||||
res T
|
||||
)
|
||||
if t == nil {
|
||||
t = &defaultTimer{}
|
||||
}
|
||||
@ -52,21 +85,22 @@ func RetryNotifyWithTimer(operation Operation, b BackOff, notify Notify, t Timer
|
||||
|
||||
b.Reset()
|
||||
for {
|
||||
if err = operation(); err == nil {
|
||||
return nil
|
||||
res, err = operation()
|
||||
if err == nil {
|
||||
return res, nil
|
||||
}
|
||||
|
||||
var permanent *PermanentError
|
||||
if errors.As(err, &permanent) {
|
||||
return permanent.Err
|
||||
return res, permanent.Err
|
||||
}
|
||||
|
||||
if next = b.NextBackOff(); next == Stop {
|
||||
if cerr := ctx.Err(); cerr != nil {
|
||||
return cerr
|
||||
return res, cerr
|
||||
}
|
||||
|
||||
return err
|
||||
return res, err
|
||||
}
|
||||
|
||||
if notify != nil {
|
||||
@ -77,7 +111,7 @@ func RetryNotifyWithTimer(operation Operation, b BackOff, notify Notify, t Timer
|
||||
|
||||
select {
|
||||
case <-ctx.Done():
|
||||
return ctx.Err()
|
||||
return res, ctx.Err()
|
||||
case <-t.C():
|
||||
}
|
||||
}
|
||||
|
6627 vendor/github.com/containerd/containerd/api/services/content/v1/content.pb.go generated vendored
File diff suppressed because it is too large (Load Diff)
48 vendor/github.com/containerd/containerd/api/services/content/v1/content.proto generated vendored
@ -18,7 +18,6 @@ syntax = "proto3";
|
||||
|
||||
package containerd.services.content.v1;
|
||||
|
||||
import weak "gogoproto/gogo.proto";
|
||||
import "google/protobuf/field_mask.proto";
|
||||
import "google/protobuf/timestamp.proto";
|
||||
import "google/protobuf/empty.proto";
|
||||
@ -92,16 +91,16 @@ service Content {
|
||||
|
||||
message Info {
|
||||
// Digest is the hash identity of the blob.
|
||||
string digest = 1 [(gogoproto.customtype) = "github.com/opencontainers/go-digest.Digest", (gogoproto.nullable) = false];
|
||||
string digest = 1;
|
||||
|
||||
// Size is the total number of bytes in the blob.
|
||||
int64 size = 2;
|
||||
|
||||
// CreatedAt provides the time at which the blob was committed.
|
||||
google.protobuf.Timestamp created_at = 3 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
|
||||
google.protobuf.Timestamp created_at = 3;
|
||||
|
||||
// UpdatedAt provides the time the info was last updated.
|
||||
google.protobuf.Timestamp updated_at = 4 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
|
||||
google.protobuf.Timestamp updated_at = 4;
|
||||
|
||||
// Labels are arbitrary data on snapshots.
|
||||
//
|
||||
@ -110,15 +109,15 @@ message Info {
|
||||
}
|
||||
|
||||
message InfoRequest {
|
||||
string digest = 1 [(gogoproto.customtype) = "github.com/opencontainers/go-digest.Digest", (gogoproto.nullable) = false];
|
||||
string digest = 1;
|
||||
}
|
||||
|
||||
message InfoResponse {
|
||||
Info info = 1 [(gogoproto.nullable) = false];
|
||||
Info info = 1;
|
||||
}
|
||||
|
||||
message UpdateRequest {
|
||||
Info info = 1 [(gogoproto.nullable) = false];
|
||||
Info info = 1;
|
||||
|
||||
// UpdateMask specifies which fields to perform the update on. If empty,
|
||||
// the operation applies to all fields.
|
||||
@ -130,7 +129,7 @@ message UpdateRequest {
|
||||
}
|
||||
|
||||
message UpdateResponse {
|
||||
Info info = 1 [(gogoproto.nullable) = false];
|
||||
Info info = 1;
|
||||
}
|
||||
|
||||
message ListContentRequest {
|
||||
@ -141,26 +140,26 @@ message ListContentRequest {
|
||||
// filters. Expanded, containers that match the following will be
|
||||
// returned:
|
||||
//
|
||||
// filters[0] or filters[1] or ... or filters[n-1] or filters[n]
|
||||
// filters[0] or filters[1] or ... or filters[n-1] or filters[n]
|
||||
//
|
||||
// If filters is zero-length or nil, all items will be returned.
|
||||
repeated string filters = 1;
|
||||
}
|
||||
|
||||
message ListContentResponse {
|
||||
repeated Info info = 1 [(gogoproto.nullable) = false];
|
||||
repeated Info info = 1;
|
||||
}
|
||||
|
||||
message DeleteContentRequest {
|
||||
// Digest specifies which content to delete.
|
||||
string digest = 1 [(gogoproto.customtype) = "github.com/opencontainers/go-digest.Digest", (gogoproto.nullable) = false];
|
||||
string digest = 1;
|
||||
}
|
||||
|
||||
// ReadContentRequest defines the fields that make up a request to read a portion of
|
||||
// data from a stored object.
|
||||
message ReadContentRequest {
|
||||
// Digest is the hash identity to read.
|
||||
string digest = 1 [(gogoproto.customtype) = "github.com/opencontainers/go-digest.Digest", (gogoproto.nullable) = false];
|
||||
string digest = 1;
|
||||
|
||||
// Offset specifies the number of bytes from the start at which to begin
|
||||
// the read. If zero or less, the read will be from the start. This uses
|
||||
@ -179,12 +178,12 @@ message ReadContentResponse {
|
||||
}
|
||||
|
||||
message Status {
|
||||
google.protobuf.Timestamp started_at = 1 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
|
||||
google.protobuf.Timestamp updated_at = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
|
||||
google.protobuf.Timestamp started_at = 1;
|
||||
google.protobuf.Timestamp updated_at = 2;
|
||||
string ref = 3;
|
||||
int64 offset = 4;
|
||||
int64 total = 5;
|
||||
string expected = 6 [(gogoproto.customtype) = "github.com/opencontainers/go-digest.Digest", (gogoproto.nullable) = false];
|
||||
string expected = 6;
|
||||
}
|
||||
|
||||
|
||||
@ -201,17 +200,14 @@ message ListStatusesRequest {
|
||||
}
|
||||
|
||||
message ListStatusesResponse {
|
||||
repeated Status statuses = 1 [(gogoproto.nullable) = false];
|
||||
repeated Status statuses = 1;
|
||||
}
|
||||
|
||||
// WriteAction defines the behavior of a WriteRequest.
|
||||
enum WriteAction {
|
||||
option (gogoproto.goproto_enum_prefix) = false;
|
||||
option (gogoproto.enum_customname) = "WriteAction";
|
||||
|
||||
// WriteActionStat instructs the writer to return the current status while
|
||||
// holding the lock on the write.
|
||||
STAT = 0 [(gogoproto.enumvalue_customname) = "WriteActionStat"];
|
||||
STAT = 0;
|
||||
|
||||
// WriteActionWrite sets the action for the write request to write data.
|
||||
//
|
||||
@ -219,7 +215,7 @@ enum WriteAction {
|
||||
// transaction will be left open for further writes.
|
||||
//
|
||||
// This is the default.
|
||||
WRITE = 1 [(gogoproto.enumvalue_customname) = "WriteActionWrite"];
|
||||
WRITE = 1;
|
||||
|
||||
// WriteActionCommit will write any outstanding data in the message and
|
||||
// commit the write, storing it under the digest.
|
||||
@ -228,7 +224,7 @@ enum WriteAction {
|
||||
// commit it.
|
||||
//
|
||||
// This action will always terminate the write.
|
||||
COMMIT = 2 [(gogoproto.enumvalue_customname) = "WriteActionCommit"];
|
||||
COMMIT = 2;
|
||||
}
|
||||
|
||||
// WriteContentRequest writes data to the request ref at offset.
|
||||
@ -269,7 +265,7 @@ message WriteContentRequest {
|
||||
// Only the latest version will be used to check the content against the
|
||||
// digest. It is only required to include it on a single message, before or
|
||||
// with the commit action message.
|
||||
string expected = 4 [(gogoproto.customtype) = "github.com/opencontainers/go-digest.Digest", (gogoproto.nullable) = false];
|
||||
string expected = 4;
|
||||
|
||||
// Offset specifies the number of bytes from the start at which to begin
|
||||
// the write. For most implementations, this means from the start of the
|
||||
@ -304,13 +300,13 @@ message WriteContentResponse {
|
||||
//
|
||||
// This must be set for stat and commit write actions. All other write
|
||||
// actions may omit this.
|
||||
google.protobuf.Timestamp started_at = 2 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
|
||||
google.protobuf.Timestamp started_at = 2;
|
||||
|
||||
// UpdatedAt provides the last time of a successful write.
|
||||
//
|
||||
// This must be set for stat and commit write actions. All other write
|
||||
// actions may omit this.
|
||||
google.protobuf.Timestamp updated_at = 3 [(gogoproto.stdtime) = true, (gogoproto.nullable) = false];
|
||||
google.protobuf.Timestamp updated_at = 3;
|
||||
|
||||
// Offset is the current committed size for the write.
|
||||
int64 offset = 4;
|
||||
@ -326,7 +322,7 @@ message WriteContentResponse {
|
||||
// Digest, if present, includes the digest up to the currently committed
|
||||
// bytes. If action is commit, this field will be set. It is implementation
|
||||
// defined if this is set for other actions.
|
||||
string digest = 6 [(gogoproto.customtype) = "github.com/opencontainers/go-digest.Digest", (gogoproto.nullable) = false];
|
||||
string digest = 6;
|
||||
}
|
||||
|
||||
message AbortRequest {
|
||||
|
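Side note on the proto change above (not part of the diff): with the gogoproto custom types removed, digest fields are plain strings and timestamps are google.protobuf.Timestamp values, so Go callers convert explicitly. A minimal sketch, assuming the go-digest package and containerd's protobuf helpers shown elsewhere in this commit:

package main

import (
	"fmt"
	"time"

	contentapi "github.com/containerd/containerd/api/services/content/v1"
	"github.com/containerd/containerd/protobuf"
	"github.com/opencontainers/go-digest"
)

func main() {
	dgst := digest.FromString("hello")

	// Digest fields are plain strings now, so convert explicitly.
	req := &contentapi.InfoRequest{Digest: dgst.String()}

	// Timestamp fields are google.protobuf.Timestamp; containerd's protobuf
	// helpers convert to and from time.Time.
	ts := protobuf.ToTimestamp(time.Now())

	fmt.Println(req.Digest, protobuf.FromTimestamp(ts))
}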
569 vendor/github.com/containerd/containerd/api/services/content/v1/content_grpc.pb.go generated vendored Normal file
@ -0,0 +1,569 @@
|
||||
// Code generated by protoc-gen-go-grpc. DO NOT EDIT.
|
||||
// versions:
|
||||
// - protoc-gen-go-grpc v1.2.0
|
||||
// - protoc v3.20.1
|
||||
// source: github.com/containerd/containerd/api/services/content/v1/content.proto
|
||||
|
||||
package content
|
||||
|
||||
import (
|
||||
context "context"
|
||||
grpc "google.golang.org/grpc"
|
||||
codes "google.golang.org/grpc/codes"
|
||||
status "google.golang.org/grpc/status"
|
||||
emptypb "google.golang.org/protobuf/types/known/emptypb"
|
||||
)
|
||||
|
||||
// This is a compile-time assertion to ensure that this generated file
|
||||
// is compatible with the grpc package it is being compiled against.
|
||||
// Requires gRPC-Go v1.32.0 or later.
|
||||
const _ = grpc.SupportPackageIsVersion7
|
||||
|
||||
// ContentClient is the client API for Content service.
|
||||
//
|
||||
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
|
||||
type ContentClient interface {
|
||||
// Info returns information about a committed object.
|
||||
//
|
||||
// This call can be used for getting the size of content and checking for
|
||||
// existence.
|
||||
Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error)
|
||||
// Update updates content metadata.
|
||||
//
|
||||
// This call can be used to manage the mutable content labels. The
|
||||
// immutable metadata such as digest, size, and committed at cannot
|
||||
// be updated.
|
||||
Update(ctx context.Context, in *UpdateRequest, opts ...grpc.CallOption) (*UpdateResponse, error)
|
||||
// List streams the entire set of content as Info objects and closes the
|
||||
// stream.
|
||||
//
|
||||
// Typically, this will yield a large response, chunked into messages.
|
||||
// Clients should make provisions to ensure they can handle the entire data
|
||||
// set.
|
||||
List(ctx context.Context, in *ListContentRequest, opts ...grpc.CallOption) (Content_ListClient, error)
|
||||
// Delete will delete the referenced object.
|
||||
Delete(ctx context.Context, in *DeleteContentRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
|
||||
// Read allows one to read an object based on the offset into the content.
|
||||
//
|
||||
// The requested data may be returned in one or more messages.
|
||||
Read(ctx context.Context, in *ReadContentRequest, opts ...grpc.CallOption) (Content_ReadClient, error)
|
||||
// Status returns the status for a single reference.
|
||||
Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (*StatusResponse, error)
|
||||
// ListStatuses returns the status of ongoing object ingestions, started via
|
||||
// Write.
|
||||
//
|
||||
// Only those matching the regular expression will be provided in the
|
||||
// response. If the provided regular expression is empty, all ingestions
|
||||
// will be provided.
|
||||
ListStatuses(ctx context.Context, in *ListStatusesRequest, opts ...grpc.CallOption) (*ListStatusesResponse, error)
|
||||
// Write begins or resumes writes to a resource identified by a unique ref.
|
||||
// Only one active stream may exist at a time for each ref.
|
||||
//
|
||||
// Once a write stream has started, it may only write to a single ref, thus
|
||||
// once a stream is started, the ref may be omitted on subsequent writes.
|
||||
//
|
||||
// For any write transaction represented by a ref, only a single write may
|
||||
// be made to a given offset. If overlapping writes occur, it is an error.
|
||||
// Writes should be sequential and implementations may throw an error if
|
||||
// this is required.
|
||||
//
|
||||
// If expected_digest is set and already part of the content store, the
|
||||
// write will fail.
|
||||
//
|
||||
// When completed, the commit flag should be set to true. If expected size
|
||||
// or digest is set, the content will be validated against those values.
|
||||
Write(ctx context.Context, opts ...grpc.CallOption) (Content_WriteClient, error)
|
||||
// Abort cancels the ongoing write named in the request. Any resources
|
||||
// associated with the write will be collected.
|
||||
Abort(ctx context.Context, in *AbortRequest, opts ...grpc.CallOption) (*emptypb.Empty, error)
|
||||
}
|
||||
|
||||
type contentClient struct {
|
||||
cc grpc.ClientConnInterface
|
||||
}
|
||||
|
||||
func NewContentClient(cc grpc.ClientConnInterface) ContentClient {
|
||||
return &contentClient{cc}
|
||||
}
|
||||
|
||||
func (c *contentClient) Info(ctx context.Context, in *InfoRequest, opts ...grpc.CallOption) (*InfoResponse, error) {
|
||||
out := new(InfoResponse)
|
||||
err := c.cc.Invoke(ctx, "/containerd.services.content.v1.Content/Info", in, out, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *contentClient) Update(ctx context.Context, in *UpdateRequest, opts ...grpc.CallOption) (*UpdateResponse, error) {
|
||||
out := new(UpdateResponse)
|
||||
err := c.cc.Invoke(ctx, "/containerd.services.content.v1.Content/Update", in, out, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *contentClient) List(ctx context.Context, in *ListContentRequest, opts ...grpc.CallOption) (Content_ListClient, error) {
|
||||
stream, err := c.cc.NewStream(ctx, &Content_ServiceDesc.Streams[0], "/containerd.services.content.v1.Content/List", opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
x := &contentListClient{stream}
|
||||
if err := x.ClientStream.SendMsg(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := x.ClientStream.CloseSend(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return x, nil
|
||||
}
|
||||
|
||||
type Content_ListClient interface {
|
||||
Recv() (*ListContentResponse, error)
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
type contentListClient struct {
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
func (x *contentListClient) Recv() (*ListContentResponse, error) {
|
||||
m := new(ListContentResponse)
|
||||
if err := x.ClientStream.RecvMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
func (c *contentClient) Delete(ctx context.Context, in *DeleteContentRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
|
||||
out := new(emptypb.Empty)
|
||||
err := c.cc.Invoke(ctx, "/containerd.services.content.v1.Content/Delete", in, out, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *contentClient) Read(ctx context.Context, in *ReadContentRequest, opts ...grpc.CallOption) (Content_ReadClient, error) {
|
||||
stream, err := c.cc.NewStream(ctx, &Content_ServiceDesc.Streams[1], "/containerd.services.content.v1.Content/Read", opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
x := &contentReadClient{stream}
|
||||
if err := x.ClientStream.SendMsg(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if err := x.ClientStream.CloseSend(); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return x, nil
|
||||
}
|
||||
|
||||
type Content_ReadClient interface {
|
||||
Recv() (*ReadContentResponse, error)
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
type contentReadClient struct {
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
func (x *contentReadClient) Recv() (*ReadContentResponse, error) {
|
||||
m := new(ReadContentResponse)
|
||||
if err := x.ClientStream.RecvMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
func (c *contentClient) Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (*StatusResponse, error) {
|
||||
out := new(StatusResponse)
|
||||
err := c.cc.Invoke(ctx, "/containerd.services.content.v1.Content/Status", in, out, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *contentClient) ListStatuses(ctx context.Context, in *ListStatusesRequest, opts ...grpc.CallOption) (*ListStatusesResponse, error) {
|
||||
out := new(ListStatusesResponse)
|
||||
err := c.cc.Invoke(ctx, "/containerd.services.content.v1.Content/ListStatuses", in, out, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
func (c *contentClient) Write(ctx context.Context, opts ...grpc.CallOption) (Content_WriteClient, error) {
|
||||
stream, err := c.cc.NewStream(ctx, &Content_ServiceDesc.Streams[2], "/containerd.services.content.v1.Content/Write", opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
x := &contentWriteClient{stream}
|
||||
return x, nil
|
||||
}
|
||||
|
||||
type Content_WriteClient interface {
|
||||
Send(*WriteContentRequest) error
|
||||
Recv() (*WriteContentResponse, error)
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
type contentWriteClient struct {
|
||||
grpc.ClientStream
|
||||
}
|
||||
|
||||
func (x *contentWriteClient) Send(m *WriteContentRequest) error {
|
||||
return x.ClientStream.SendMsg(m)
|
||||
}
|
||||
|
||||
func (x *contentWriteClient) Recv() (*WriteContentResponse, error) {
|
||||
m := new(WriteContentResponse)
|
||||
if err := x.ClientStream.RecvMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
func (c *contentClient) Abort(ctx context.Context, in *AbortRequest, opts ...grpc.CallOption) (*emptypb.Empty, error) {
|
||||
out := new(emptypb.Empty)
|
||||
err := c.cc.Invoke(ctx, "/containerd.services.content.v1.Content/Abort", in, out, opts...)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return out, nil
|
||||
}
|
||||
|
||||
// ContentServer is the server API for Content service.
|
||||
// All implementations must embed UnimplementedContentServer
|
||||
// for forward compatibility
|
||||
type ContentServer interface {
|
||||
// Info returns information about a committed object.
|
||||
//
|
||||
// This call can be used for getting the size of content and checking for
|
||||
// existence.
|
||||
Info(context.Context, *InfoRequest) (*InfoResponse, error)
|
||||
// Update updates content metadata.
|
||||
//
|
||||
// This call can be used to manage the mutable content labels. The
|
||||
// immutable metadata such as digest, size, and committed at cannot
|
||||
// be updated.
|
||||
Update(context.Context, *UpdateRequest) (*UpdateResponse, error)
|
||||
// List streams the entire set of content as Info objects and closes the
|
||||
// stream.
|
||||
//
|
||||
// Typically, this will yield a large response, chunked into messages.
|
||||
// Clients should make provisions to ensure they can handle the entire data
|
||||
// set.
|
||||
List(*ListContentRequest, Content_ListServer) error
|
||||
// Delete will delete the referenced object.
|
||||
Delete(context.Context, *DeleteContentRequest) (*emptypb.Empty, error)
|
||||
// Read allows one to read an object based on the offset into the content.
|
||||
//
|
||||
// The requested data may be returned in one or more messages.
|
||||
Read(*ReadContentRequest, Content_ReadServer) error
|
||||
// Status returns the status for a single reference.
|
||||
Status(context.Context, *StatusRequest) (*StatusResponse, error)
|
||||
// ListStatuses returns the status of ongoing object ingestions, started via
|
||||
// Write.
|
||||
//
|
||||
// Only those matching the regular expression will be provided in the
|
||||
// response. If the provided regular expression is empty, all ingestions
|
||||
// will be provided.
|
||||
ListStatuses(context.Context, *ListStatusesRequest) (*ListStatusesResponse, error)
|
||||
// Write begins or resumes writes to a resource identified by a unique ref.
|
||||
// Only one active stream may exist at a time for each ref.
|
||||
//
|
||||
// Once a write stream has started, it may only write to a single ref, thus
|
||||
// once a stream is started, the ref may be omitted on subsequent writes.
|
||||
//
|
||||
// For any write transaction represented by a ref, only a single write may
|
||||
// be made to a given offset. If overlapping writes occur, it is an error.
|
||||
// Writes should be sequential and implementations may throw an error if
|
||||
// this is required.
|
||||
//
|
||||
// If expected_digest is set and already part of the content store, the
|
||||
// write will fail.
|
||||
//
|
||||
// When completed, the commit flag should be set to true. If expected size
|
||||
// or digest is set, the content will be validated against those values.
|
||||
Write(Content_WriteServer) error
|
||||
// Abort cancels the ongoing write named in the request. Any resources
|
||||
// associated with the write will be collected.
|
||||
Abort(context.Context, *AbortRequest) (*emptypb.Empty, error)
|
||||
mustEmbedUnimplementedContentServer()
|
||||
}
|
||||
|
||||
// UnimplementedContentServer must be embedded to have forward compatible implementations.
|
||||
type UnimplementedContentServer struct {
|
||||
}
|
||||
|
||||
func (UnimplementedContentServer) Info(context.Context, *InfoRequest) (*InfoResponse, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method Info not implemented")
|
||||
}
|
||||
func (UnimplementedContentServer) Update(context.Context, *UpdateRequest) (*UpdateResponse, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method Update not implemented")
|
||||
}
|
||||
func (UnimplementedContentServer) List(*ListContentRequest, Content_ListServer) error {
|
||||
return status.Errorf(codes.Unimplemented, "method List not implemented")
|
||||
}
|
||||
func (UnimplementedContentServer) Delete(context.Context, *DeleteContentRequest) (*emptypb.Empty, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method Delete not implemented")
|
||||
}
|
||||
func (UnimplementedContentServer) Read(*ReadContentRequest, Content_ReadServer) error {
|
||||
return status.Errorf(codes.Unimplemented, "method Read not implemented")
|
||||
}
|
||||
func (UnimplementedContentServer) Status(context.Context, *StatusRequest) (*StatusResponse, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method Status not implemented")
|
||||
}
|
||||
func (UnimplementedContentServer) ListStatuses(context.Context, *ListStatusesRequest) (*ListStatusesResponse, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method ListStatuses not implemented")
|
||||
}
|
||||
func (UnimplementedContentServer) Write(Content_WriteServer) error {
|
||||
return status.Errorf(codes.Unimplemented, "method Write not implemented")
|
||||
}
|
||||
func (UnimplementedContentServer) Abort(context.Context, *AbortRequest) (*emptypb.Empty, error) {
|
||||
return nil, status.Errorf(codes.Unimplemented, "method Abort not implemented")
|
||||
}
|
||||
func (UnimplementedContentServer) mustEmbedUnimplementedContentServer() {}
|
||||
|
||||
// UnsafeContentServer may be embedded to opt out of forward compatibility for this service.
|
||||
// Use of this interface is not recommended, as added methods to ContentServer will
|
||||
// result in compilation errors.
|
||||
type UnsafeContentServer interface {
|
||||
mustEmbedUnimplementedContentServer()
|
||||
}
|
||||
|
||||
func RegisterContentServer(s grpc.ServiceRegistrar, srv ContentServer) {
|
||||
s.RegisterService(&Content_ServiceDesc, srv)
|
||||
}
|
||||
|
||||
func _Content_Info_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(InfoRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(ContentServer).Info(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: "/containerd.services.content.v1.Content/Info",
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(ContentServer).Info(ctx, req.(*InfoRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _Content_Update_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(UpdateRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(ContentServer).Update(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: "/containerd.services.content.v1.Content/Update",
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(ContentServer).Update(ctx, req.(*UpdateRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _Content_List_Handler(srv interface{}, stream grpc.ServerStream) error {
|
||||
m := new(ListContentRequest)
|
||||
if err := stream.RecvMsg(m); err != nil {
|
||||
return err
|
||||
}
|
||||
return srv.(ContentServer).List(m, &contentListServer{stream})
|
||||
}
|
||||
|
||||
type Content_ListServer interface {
|
||||
Send(*ListContentResponse) error
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
type contentListServer struct {
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
func (x *contentListServer) Send(m *ListContentResponse) error {
|
||||
return x.ServerStream.SendMsg(m)
|
||||
}
|
||||
|
||||
func _Content_Delete_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(DeleteContentRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(ContentServer).Delete(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: "/containerd.services.content.v1.Content/Delete",
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(ContentServer).Delete(ctx, req.(*DeleteContentRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _Content_Read_Handler(srv interface{}, stream grpc.ServerStream) error {
|
||||
m := new(ReadContentRequest)
|
||||
if err := stream.RecvMsg(m); err != nil {
|
||||
return err
|
||||
}
|
||||
return srv.(ContentServer).Read(m, &contentReadServer{stream})
|
||||
}
|
||||
|
||||
type Content_ReadServer interface {
|
||||
Send(*ReadContentResponse) error
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
type contentReadServer struct {
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
func (x *contentReadServer) Send(m *ReadContentResponse) error {
|
||||
return x.ServerStream.SendMsg(m)
|
||||
}
|
||||
|
||||
func _Content_Status_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(StatusRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(ContentServer).Status(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: "/containerd.services.content.v1.Content/Status",
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(ContentServer).Status(ctx, req.(*StatusRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _Content_ListStatuses_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(ListStatusesRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(ContentServer).ListStatuses(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: "/containerd.services.content.v1.Content/ListStatuses",
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(ContentServer).ListStatuses(ctx, req.(*ListStatusesRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
func _Content_Write_Handler(srv interface{}, stream grpc.ServerStream) error {
|
||||
return srv.(ContentServer).Write(&contentWriteServer{stream})
|
||||
}
|
||||
|
||||
type Content_WriteServer interface {
|
||||
Send(*WriteContentResponse) error
|
||||
Recv() (*WriteContentRequest, error)
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
type contentWriteServer struct {
|
||||
grpc.ServerStream
|
||||
}
|
||||
|
||||
func (x *contentWriteServer) Send(m *WriteContentResponse) error {
|
||||
return x.ServerStream.SendMsg(m)
|
||||
}
|
||||
|
||||
func (x *contentWriteServer) Recv() (*WriteContentRequest, error) {
|
||||
m := new(WriteContentRequest)
|
||||
if err := x.ServerStream.RecvMsg(m); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return m, nil
|
||||
}
|
||||
|
||||
func _Content_Abort_Handler(srv interface{}, ctx context.Context, dec func(interface{}) error, interceptor grpc.UnaryServerInterceptor) (interface{}, error) {
|
||||
in := new(AbortRequest)
|
||||
if err := dec(in); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if interceptor == nil {
|
||||
return srv.(ContentServer).Abort(ctx, in)
|
||||
}
|
||||
info := &grpc.UnaryServerInfo{
|
||||
Server: srv,
|
||||
FullMethod: "/containerd.services.content.v1.Content/Abort",
|
||||
}
|
||||
handler := func(ctx context.Context, req interface{}) (interface{}, error) {
|
||||
return srv.(ContentServer).Abort(ctx, req.(*AbortRequest))
|
||||
}
|
||||
return interceptor(ctx, in, info, handler)
|
||||
}
|
||||
|
||||
// Content_ServiceDesc is the grpc.ServiceDesc for Content service.
|
||||
// It's only intended for direct use with grpc.RegisterService,
|
||||
// and not to be introspected or modified (even as a copy)
|
||||
var Content_ServiceDesc = grpc.ServiceDesc{
|
||||
ServiceName: "containerd.services.content.v1.Content",
|
||||
HandlerType: (*ContentServer)(nil),
|
||||
Methods: []grpc.MethodDesc{
|
||||
{
|
||||
MethodName: "Info",
|
||||
Handler: _Content_Info_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "Update",
|
||||
Handler: _Content_Update_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "Delete",
|
||||
Handler: _Content_Delete_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "Status",
|
||||
Handler: _Content_Status_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "ListStatuses",
|
||||
Handler: _Content_ListStatuses_Handler,
|
||||
},
|
||||
{
|
||||
MethodName: "Abort",
|
||||
Handler: _Content_Abort_Handler,
|
||||
},
|
||||
},
|
||||
Streams: []grpc.StreamDesc{
|
||||
{
|
||||
StreamName: "List",
|
||||
Handler: _Content_List_Handler,
|
||||
ServerStreams: true,
|
||||
},
|
||||
{
|
||||
StreamName: "Read",
|
||||
Handler: _Content_Read_Handler,
|
||||
ServerStreams: true,
|
||||
},
|
||||
{
|
||||
StreamName: "Write",
|
||||
Handler: _Content_Write_Handler,
|
||||
ServerStreams: true,
|
||||
ClientStreams: true,
|
||||
},
|
||||
},
|
||||
Metadata: "github.com/containerd/containerd/api/services/content/v1/content.proto",
|
||||
}
|
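The generated ContentClient above is normally reached through containerd's own client, but it can also be built directly from a gRPC connection. A hedged sketch, assuming a local containerd socket at /run/containerd/containerd.sock and ignoring the namespace metadata a real daemon expects on each call:

package main

import (
	"context"
	"fmt"
	"log"

	contentapi "github.com/containerd/containerd/api/services/content/v1"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the containerd socket (the path is an assumption; adjust as needed).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := contentapi.NewContentClient(conn)

	// Query metadata for a blob by digest (the digest value is illustrative).
	resp, err := client.Info(context.Background(), &contentapi.InfoRequest{
		Digest: "sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.Info.Size)
}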
28 vendor/github.com/containerd/containerd/archive/compression/compression_fuzzer.go generated vendored Normal file
@ -0,0 +1,28 @@
//go:build gofuzz

/*
Copyright The containerd Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package compression

import (
"bytes"
)

func FuzzDecompressStream(data []byte) int {
_, _ = DecompressStream(bytes.NewReader(data))
return 1
}
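The gofuzz-tagged entrypoint above follows the classic go-fuzz convention of returning an int. A rough sketch of driving the same target with Go's native fuzzing instead, assuming it lives in a hypothetical *_test.go file in the compression package:

package compression

import (
	"bytes"
	"testing"
)

// FuzzDecompress feeds arbitrary bytes to DecompressStream, mirroring the
// gofuzz entrypoint above but using the native fuzzing engine.
func FuzzDecompress(f *testing.F) {
	f.Add([]byte("seed"))
	f.Fuzz(func(t *testing.T, data []byte) {
		r, err := DecompressStream(bytes.NewReader(data))
		if err != nil {
			return // invalid input is expected; we only care about panics
		}
		defer r.Close()
	})
}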
69 vendor/github.com/containerd/containerd/content/content.go generated vendored
@ -25,6 +25,26 @@ import (
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
|
||||
// Store combines the methods of content-oriented interfaces into a set that
|
||||
// are commonly provided by complete implementations.
|
||||
//
|
||||
// Overall content lifecycle:
|
||||
// - Ingester is used to initiate a write operation (aka ingestion)
|
||||
// - IngestManager is used to manage (e.g. list, abort) active ingestions
|
||||
// - Once an ingestion is complete (see Writer.Commit), Provider is used to
|
||||
// query a single piece of content by its digest
|
||||
// - Manager is used to manage (e.g. list, delete) previously committed content
|
||||
//
|
||||
// Note that until ingestion is complete, its content is not visible through
|
||||
// Provider or Manager. Once ingestion is complete, it is no longer exposed
|
||||
// through IngestManager.
|
||||
type Store interface {
|
||||
Manager
|
||||
Provider
|
||||
IngestManager
|
||||
Ingester
|
||||
}
|
||||
|
||||
// ReaderAt extends the standard io.ReaderAt interface with reporting of Size and io.Closer
|
||||
type ReaderAt interface {
|
||||
io.ReaderAt
|
||||
@ -42,10 +62,30 @@ type Provider interface {
|
||||
|
||||
// Ingester writes content
|
||||
type Ingester interface {
|
||||
// Some implementations require WithRef to be included in opts.
|
||||
// Writer initiates a writing operation (aka ingestion). A single ingestion
|
||||
// is uniquely identified by its ref, provided using a WithRef option.
|
||||
// Writer can be called multiple times with the same ref to access the same
|
||||
// ingestion.
|
||||
// Once all the data is written, use Writer.Commit to complete the ingestion.
|
||||
Writer(ctx context.Context, opts ...WriterOpt) (Writer, error)
|
||||
}
|
||||
|
||||
// IngestManager provides methods for managing ingestions. An ingestion is a
|
||||
// not-yet-complete writing operation initiated using Ingester and identified
|
||||
// by a ref string.
|
||||
type IngestManager interface {
|
||||
// Status returns the status of the provided ref.
|
||||
Status(ctx context.Context, ref string) (Status, error)
|
||||
|
||||
// ListStatuses returns the status of any active ingestions whose ref match
|
||||
// the provided regular expression. If empty, all active ingestions will be
|
||||
// returned.
|
||||
ListStatuses(ctx context.Context, filters ...string) ([]Status, error)
|
||||
|
||||
// Abort completely cancels the ingest operation targeted by ref.
|
||||
Abort(ctx context.Context, ref string) error
|
||||
}
|
||||
|
||||
// Info holds content specific information
|
||||
//
|
||||
// TODO(stevvooe): Consider a very different name for this struct. Info is way
|
||||
@ -58,7 +98,7 @@ type Info struct {
|
||||
Labels map[string]string
|
||||
}
|
||||
|
||||
// Status of a content operation
|
||||
// Status of a content operation (i.e. an ingestion)
|
||||
type Status struct {
|
||||
Ref string
|
||||
Offset int64
|
||||
@ -94,21 +134,7 @@ type Manager interface {
|
||||
Delete(ctx context.Context, dgst digest.Digest) error
|
||||
}
|
||||
|
||||
// IngestManager provides methods for managing ingests.
|
||||
type IngestManager interface {
|
||||
// Status returns the status of the provided ref.
|
||||
Status(ctx context.Context, ref string) (Status, error)
|
||||
|
||||
// ListStatuses returns the status of any active ingestions whose ref match the
|
||||
// provided regular expression. If empty, all active ingestions will be
|
||||
// returned.
|
||||
ListStatuses(ctx context.Context, filters ...string) ([]Status, error)
|
||||
|
||||
// Abort completely cancels the ingest operation targeted by ref.
|
||||
Abort(ctx context.Context, ref string) error
|
||||
}
|
||||
|
||||
// Writer handles the write of content into a content store
|
||||
// Writer handles writing of content into a content store
|
||||
type Writer interface {
|
||||
// Close closes the writer, if the writer has not been
|
||||
// committed this allows resuming or aborting.
|
||||
@ -131,15 +157,6 @@ type Writer interface {
|
||||
Truncate(size int64) error
|
||||
}
|
||||
|
||||
// Store combines the methods of content-oriented interfaces into a set that
|
||||
// are commonly provided by complete implementations.
|
||||
type Store interface {
|
||||
Manager
|
||||
Provider
|
||||
IngestManager
|
||||
Ingester
|
||||
}
|
||||
|
||||
// Opt is used to alter the mutable properties of content
|
||||
type Opt func(*Info) error
|
||||
|
||||
|
10 vendor/github.com/containerd/containerd/content/helpers.go generated vendored
@ -43,10 +43,16 @@ var bufPool = sync.Pool{
},
}

type reader interface {
Reader() io.Reader
}

// NewReader returns a io.Reader from a ReaderAt
func NewReader(ra ReaderAt) io.Reader {
rd := io.NewSectionReader(ra, 0, ra.Size())
return rd
if rd, ok := ra.(reader); ok {
return rd.Reader()
}
return io.NewSectionReader(ra, 0, ra.Size())
}

// ReadBlob retrieves the entire contents of the blob from the provider.
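The change above lets NewReader use an implementation-provided Reader() when the ReaderAt offers one, instead of always wrapping it in a SectionReader. A small usage sketch, assuming a content.Provider and destination writer supplied by the caller:

package main

import (
	"context"
	"io"

	"github.com/containerd/containerd/content"
	ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

// copyBlob streams a blob through content.NewReader. With the change above,
// NewReader prefers the store's own Reader() (for example the local store's
// sizeReaderAt) and falls back to an io.SectionReader otherwise.
func copyBlob(ctx context.Context, provider content.Provider, desc ocispec.Descriptor, dst io.Writer) error {
	ra, err := provider.ReaderAt(ctx, desc)
	if err != nil {
		return err
	}
	defer ra.Close()

	_, err = io.Copy(dst, content.NewReader(ra))
	return err
}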
76 vendor/github.com/containerd/containerd/content/local/content_local_fuzzer.go generated vendored Normal file
@ -0,0 +1,76 @@
|
||||
//go:build gofuzz
|
||||
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package local
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"context"
|
||||
_ "crypto/sha256"
|
||||
"io"
|
||||
"testing"
|
||||
|
||||
"github.com/opencontainers/go-digest"
|
||||
|
||||
"github.com/containerd/containerd/content"
|
||||
)
|
||||
|
||||
func FuzzContentStoreWriter(data []byte) int {
|
||||
t := &testing.T{}
|
||||
ctx := context.Background()
|
||||
ctx, _, cs, cleanup := contentStoreEnv(t)
|
||||
defer cleanup()
|
||||
|
||||
cw, err := cs.Writer(ctx, content.WithRef("myref"))
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
if err := cw.Close(); err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
// reopen, so we can test things
|
||||
cw, err = cs.Writer(ctx, content.WithRef("myref"))
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
err = checkCopyFuzz(int64(len(data)), cw, bufio.NewReader(io.NopCloser(bytes.NewReader(data))))
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
expected := digest.FromBytes(data)
|
||||
|
||||
if err = cw.Commit(ctx, int64(len(data)), expected); err != nil {
|
||||
return 0
|
||||
}
|
||||
return 1
|
||||
}
|
||||
|
||||
func checkCopyFuzz(size int64, dst io.Writer, src io.Reader) error {
|
||||
nn, err := io.Copy(dst, src)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if nn != size {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
5 vendor/github.com/containerd/containerd/content/local/readerat.go generated vendored
@ -18,6 +18,7 @@ package local

import (
"fmt"
"io"
"os"

"github.com/containerd/containerd/content"
@ -65,3 +66,7 @@ func (ra sizeReaderAt) Size() int64 {
func (ra sizeReaderAt) Close() error {
return ra.fp.Close()
}

func (ra sizeReaderAt) Reader() io.Reader {
return io.LimitReader(ra.fp, ra.size)
}
9 vendor/github.com/containerd/containerd/content/local/store.go generated vendored
@ -34,7 +34,7 @@ import (
"github.com/containerd/containerd/log"
"github.com/sirupsen/logrus"

digest "github.com/opencontainers/go-digest"
"github.com/opencontainers/go-digest"
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
)

@ -262,7 +262,7 @@ func (s *store) Walk(ctx context.Context, fn content.WalkFunc, fs ...string) err
return nil
}

dgst := digest.NewDigestFromHex(alg.String(), filepath.Base(path))
dgst := digest.NewDigestFromEncoded(alg, filepath.Base(path))
if err := dgst.Validate(); err != nil {
// log error but don't report
log.L.WithError(err).WithField("path", path).Error("invalid digest for blob path")
@ -505,6 +505,7 @@ func (s *store) resumeStatus(ref string, total int64, digester digest.Digester)
return status, fmt.Errorf("provided total differs from status: %v != %v", total, status.Total)
}

//nolint:dupword
// TODO(stevvooe): slow slow slow!!, send to goroutine or use resumable hashes
fp, err := os.Open(data)
if err != nil {
@ -628,14 +629,14 @@ func (s *store) blobPath(dgst digest.Digest) (string, error) {
return "", fmt.Errorf("cannot calculate blob path from invalid digest: %v: %w", err, errdefs.ErrInvalidArgument)
}

return filepath.Join(s.root, "blobs", dgst.Algorithm().String(), dgst.Hex()), nil
return filepath.Join(s.root, "blobs", dgst.Algorithm().String(), dgst.Encoded()), nil
}

func (s *store) ingestRoot(ref string) string {
// we take a digest of the ref to keep the ingest paths constant length.
// Note that this is not the current or potential digest of incoming content.
dgst := digest.FromString(ref)
return filepath.Join(s.root, "ingest", dgst.Hex())
return filepath.Join(s.root, "ingest", dgst.Encoded())
}

// ingestPaths are returned. The paths are the following:
1 vendor/github.com/containerd/containerd/content/local/store_bsd.go generated vendored
@ -1,5 +1,4 @@
//go:build darwin || freebsd || netbsd
// +build darwin freebsd netbsd

/*
Copyright The containerd Authors.
1 vendor/github.com/containerd/containerd/content/local/store_openbsd.go generated vendored
@ -1,5 +1,4 @@
//go:build openbsd
// +build openbsd

/*
Copyright The containerd Authors.
1 vendor/github.com/containerd/containerd/content/local/store_unix.go generated vendored
@ -1,5 +1,4 @@
//go:build linux || solaris
// +build linux solaris

/*
Copyright The containerd Authors.
38 vendor/github.com/containerd/containerd/content/local/test_helper.go generated vendored Normal file
@ -0,0 +1,38 @@
/*
Copyright The containerd Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package local

import (
"context"
"testing"

"github.com/containerd/containerd/content"
)

func contentStoreEnv(t testing.TB) (context.Context, string, content.Store, func()) {
tmpdir := t.TempDir()

cs, err := NewStore(tmpdir)
if err != nil {
t.Fatal(err)
}

ctx, cancel := context.WithCancel(context.Background())
return ctx, tmpdir, cs, func() {
cancel()
}
}
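contentStoreEnv above is a small helper shared by the local store's tests and fuzzer. A sketch of how a hypothetical test in the same package might use it (the helper returns a context, the temp dir, the store, and a cleanup func):

package local

import (
	"testing"

	"github.com/containerd/containerd/content"
)

func TestWriterSmoke(t *testing.T) {
	ctx, _, cs, cleanup := contentStoreEnv(t)
	defer cleanup()

	// Open an ingest ref; committing and validation are exercised elsewhere.
	w, err := cs.Writer(ctx, content.WithRef("smoke-test-ref"))
	if err != nil {
		t.Fatal(err)
	}
	if err := w.Close(); err != nil {
		t.Fatal(err)
	}
}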
4 vendor/github.com/containerd/containerd/content/proxy/content_reader.go generated vendored
@ -36,9 +36,9 @@ func (ra *remoteReaderAt) Size() int64 {

func (ra *remoteReaderAt) ReadAt(p []byte, off int64) (n int, err error) {
rr := &contentapi.ReadContentRequest{
Digest: ra.digest,
Digest: ra.digest.String(),
Offset: off,
Size_: int64(len(p)),
Size: int64(len(p)),
}
// we need a child context with cancel, or the eventually called
// grpc.NewStream will leak the goroutine until the whole thing is cleared.
47 vendor/github.com/containerd/containerd/content/proxy/content_store.go generated vendored
@ -23,7 +23,8 @@ import (
|
||||
contentapi "github.com/containerd/containerd/api/services/content/v1"
|
||||
"github.com/containerd/containerd/content"
|
||||
"github.com/containerd/containerd/errdefs"
|
||||
protobuftypes "github.com/gogo/protobuf/types"
|
||||
"github.com/containerd/containerd/protobuf"
|
||||
protobuftypes "github.com/containerd/containerd/protobuf/types"
|
||||
digest "github.com/opencontainers/go-digest"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
@ -42,7 +43,7 @@ func NewContentStore(client contentapi.ContentClient) content.Store {
|
||||
|
||||
func (pcs *proxyContentStore) Info(ctx context.Context, dgst digest.Digest) (content.Info, error) {
|
||||
resp, err := pcs.client.Info(ctx, &contentapi.InfoRequest{
|
||||
Digest: dgst,
|
||||
Digest: dgst.String(),
|
||||
})
|
||||
if err != nil {
|
||||
return content.Info{}, errdefs.FromGRPC(err)
|
||||
@ -81,7 +82,7 @@ func (pcs *proxyContentStore) Walk(ctx context.Context, fn content.WalkFunc, fil
|
||||
|
||||
func (pcs *proxyContentStore) Delete(ctx context.Context, dgst digest.Digest) error {
|
||||
if _, err := pcs.client.Delete(ctx, &contentapi.DeleteContentRequest{
|
||||
Digest: dgst,
|
||||
Digest: dgst.String(),
|
||||
}); err != nil {
|
||||
return errdefs.FromGRPC(err)
|
||||
}
|
||||
@ -115,17 +116,17 @@ func (pcs *proxyContentStore) Status(ctx context.Context, ref string) (content.S
|
||||
status := resp.Status
|
||||
return content.Status{
|
||||
Ref: status.Ref,
|
||||
StartedAt: status.StartedAt,
|
||||
UpdatedAt: status.UpdatedAt,
|
||||
StartedAt: protobuf.FromTimestamp(status.StartedAt),
|
||||
UpdatedAt: protobuf.FromTimestamp(status.UpdatedAt),
|
||||
Offset: status.Offset,
|
||||
Total: status.Total,
|
||||
Expected: status.Expected,
|
||||
Expected: digest.Digest(status.Expected),
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (pcs *proxyContentStore) Update(ctx context.Context, info content.Info, fieldpaths ...string) (content.Info, error) {
|
||||
resp, err := pcs.client.Update(ctx, &contentapi.UpdateRequest{
|
||||
Info: infoToGRPC(info),
|
||||
Info: infoToGRPC(&info),
|
||||
UpdateMask: &protobuftypes.FieldMask{
|
||||
Paths: fieldpaths,
|
||||
},
|
||||
@ -148,11 +149,11 @@ func (pcs *proxyContentStore) ListStatuses(ctx context.Context, filters ...strin
|
||||
for _, status := range resp.Statuses {
|
||||
statuses = append(statuses, content.Status{
|
||||
Ref: status.Ref,
|
||||
StartedAt: status.StartedAt,
|
||||
UpdatedAt: status.UpdatedAt,
|
||||
StartedAt: protobuf.FromTimestamp(status.StartedAt),
|
||||
UpdatedAt: protobuf.FromTimestamp(status.UpdatedAt),
|
||||
Offset: status.Offset,
|
||||
Total: status.Total,
|
||||
Expected: status.Expected,
|
||||
Expected: digest.Digest(status.Expected),
|
||||
})
|
||||
}
|
||||
|
||||
@ -197,10 +198,10 @@ func (pcs *proxyContentStore) negotiate(ctx context.Context, ref string, size in
|
||||
}
|
||||
|
||||
if err := wrclient.Send(&contentapi.WriteContentRequest{
|
||||
Action: contentapi.WriteActionStat,
|
||||
Action: contentapi.WriteAction_STAT,
|
||||
Ref: ref,
|
||||
Total: size,
|
||||
Expected: expected,
|
||||
Expected: expected.String(),
|
||||
}); err != nil {
|
||||
return nil, 0, err
|
||||
}
|
||||
@ -213,22 +214,22 @@ func (pcs *proxyContentStore) negotiate(ctx context.Context, ref string, size in
|
||||
return wrclient, resp.Offset, nil
|
||||
}
|
||||
|
||||
func infoToGRPC(info content.Info) contentapi.Info {
|
||||
return contentapi.Info{
|
||||
Digest: info.Digest,
|
||||
Size_: info.Size,
|
||||
CreatedAt: info.CreatedAt,
|
||||
UpdatedAt: info.UpdatedAt,
|
||||
func infoToGRPC(info *content.Info) *contentapi.Info {
|
||||
return &contentapi.Info{
|
||||
Digest: info.Digest.String(),
|
||||
Size: info.Size,
|
||||
CreatedAt: protobuf.ToTimestamp(info.CreatedAt),
|
||||
UpdatedAt: protobuf.ToTimestamp(info.UpdatedAt),
|
||||
Labels: info.Labels,
|
||||
}
|
||||
}
|
||||
|
||||
func infoFromGRPC(info contentapi.Info) content.Info {
|
||||
func infoFromGRPC(info *contentapi.Info) content.Info {
|
||||
return content.Info{
|
||||
Digest: info.Digest,
|
||||
Size: info.Size_,
|
||||
CreatedAt: info.CreatedAt,
|
||||
UpdatedAt: info.UpdatedAt,
|
||||
Digest: digest.Digest(info.Digest),
|
||||
Size: info.Size,
|
||||
CreatedAt: protobuf.FromTimestamp(info.CreatedAt),
|
||||
UpdatedAt: protobuf.FromTimestamp(info.UpdatedAt),
|
||||
Labels: info.Labels,
|
||||
}
|
||||
}
|
||||
|
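The conversions above (digest/string, protobuf timestamps, pointer Info) stay internal to the proxy store; callers keep using the content.Store interface. A hedged sketch of wrapping a generated ContentClient and looking up a blob, assuming an already established gRPC connection to containerd:

package main

import (
	"context"

	contentapi "github.com/containerd/containerd/api/services/content/v1"
	"github.com/containerd/containerd/content/proxy"
	"github.com/opencontainers/go-digest"
	"google.golang.org/grpc"
)

// blobSize wraps the generated ContentClient in a content.Store and returns
// a blob's size; conn is any established connection to a containerd daemon.
func blobSize(ctx context.Context, conn *grpc.ClientConn, dgst digest.Digest) (int64, error) {
	cs := proxy.NewContentStore(contentapi.NewContentClient(conn))
	info, err := cs.Info(ctx, dgst)
	if err != nil {
		return 0, err
	}
	return info.Size, nil
}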
22 vendor/github.com/containerd/containerd/content/proxy/content_writer.go generated vendored
@ -24,6 +24,7 @@ import (
|
||||
contentapi "github.com/containerd/containerd/api/services/content/v1"
|
||||
"github.com/containerd/containerd/content"
|
||||
"github.com/containerd/containerd/errdefs"
|
||||
"github.com/containerd/containerd/protobuf"
|
||||
digest "github.com/opencontainers/go-digest"
|
||||
)
|
||||
|
||||
@ -45,7 +46,7 @@ func (rw *remoteWriter) send(req *contentapi.WriteContentRequest) (*contentapi.W
|
||||
if err == nil {
|
||||
// try to keep these in sync
|
||||
if resp.Digest != "" {
|
||||
rw.digest = resp.Digest
|
||||
rw.digest = digest.Digest(resp.Digest)
|
||||
}
|
||||
}
|
||||
|
||||
@ -54,7 +55,7 @@ func (rw *remoteWriter) send(req *contentapi.WriteContentRequest) (*contentapi.W
|
||||
|
||||
func (rw *remoteWriter) Status() (content.Status, error) {
|
||||
resp, err := rw.send(&contentapi.WriteContentRequest{
|
||||
Action: contentapi.WriteActionStat,
|
||||
Action: contentapi.WriteAction_STAT,
|
||||
})
|
||||
if err != nil {
|
||||
return content.Status{}, fmt.Errorf("error getting writer status: %w", errdefs.FromGRPC(err))
|
||||
@ -64,8 +65,8 @@ func (rw *remoteWriter) Status() (content.Status, error) {
|
||||
Ref: rw.ref,
|
||||
Offset: resp.Offset,
|
||||
Total: resp.Total,
|
||||
StartedAt: resp.StartedAt,
|
||||
UpdatedAt: resp.UpdatedAt,
|
||||
StartedAt: protobuf.FromTimestamp(resp.StartedAt),
|
||||
UpdatedAt: protobuf.FromTimestamp(resp.UpdatedAt),
|
||||
}, nil
|
||||
}
|
||||
|
||||
@ -77,7 +78,7 @@ func (rw *remoteWriter) Write(p []byte) (n int, err error) {
|
||||
offset := rw.offset
|
||||
|
||||
resp, err := rw.send(&contentapi.WriteContentRequest{
|
||||
Action: contentapi.WriteActionWrite,
|
||||
Action: contentapi.WriteAction_WRITE,
|
||||
Offset: offset,
|
||||
Data: p,
|
||||
})
|
||||
@ -92,7 +93,7 @@ func (rw *remoteWriter) Write(p []byte) (n int, err error) {
|
||||
|
||||
rw.offset += int64(n)
|
||||
if resp.Digest != "" {
|
||||
rw.digest = resp.Digest
|
||||
rw.digest = digest.Digest(resp.Digest)
|
||||
}
|
||||
return
|
||||
}
|
||||
@ -112,10 +113,10 @@ func (rw *remoteWriter) Commit(ctx context.Context, size int64, expected digest.
|
||||
}
|
||||
}
|
||||
resp, err := rw.send(&contentapi.WriteContentRequest{
|
||||
Action: contentapi.WriteActionCommit,
|
||||
Action: contentapi.WriteAction_COMMIT,
|
||||
Total: size,
|
||||
Offset: rw.offset,
|
||||
Expected: expected,
|
||||
Expected: expected.String(),
|
||||
Labels: base.Labels,
|
||||
})
|
||||
if err != nil {
|
||||
@ -126,11 +127,12 @@ func (rw *remoteWriter) Commit(ctx context.Context, size int64, expected digest.
|
||||
return fmt.Errorf("unexpected size: %v != %v", resp.Offset, size)
|
||||
}
|
||||
|
||||
if expected != "" && resp.Digest != expected {
|
||||
actual := digest.Digest(resp.Digest)
|
||||
if expected != "" && actual != expected {
|
||||
return fmt.Errorf("unexpected digest: %v != %v", resp.Digest, expected)
|
||||
}
|
||||
|
||||
rw.digest = resp.Digest
|
||||
rw.digest = actual
|
||||
rw.offset = resp.Offset
|
||||
return nil
|
||||
}
|
||||
|
1 vendor/github.com/containerd/containerd/defaults/defaults_unix.go generated vendored
@ -1,5 +1,4 @@
//go:build !windows && !darwin
// +build !windows,!darwin

/*
Copyright The containerd Authors.
4 vendor/github.com/containerd/containerd/images/image.go generated vendored
@ -138,7 +138,7 @@ type platformManifest struct {
// TODO(stevvooe): This violates the current platform agnostic approach to this
// package by returning a specific manifest type. We'll need to refactor this
// to return a manifest descriptor or decide that we want to bring the API in
// this direction because this abstraction is not needed.`
// this direction because this abstraction is not needed.
func Manifest(ctx context.Context, provider content.Provider, image ocispec.Descriptor, platform platforms.MatchComparer) (ocispec.Manifest, error) {
var (
limit = 1
@ -311,7 +311,7 @@ func Check(ctx context.Context, provider content.Provider, image ocispec.Descrip
return false, nil, nil, nil, fmt.Errorf("failed to check image %v: %w", image.Digest, err)
}

// TODO(stevvooe): It is possible that referenced conponents could have
// TODO(stevvooe): It is possible that referenced components could have
// children, but this is rare. For now, we ignore this and only verify
// that manifest components are present.
required = append([]ocispec.Descriptor{mfst.Config}, mfst.Layers...)
31 vendor/github.com/containerd/containerd/images/mediatypes.go generated vendored
@ -38,7 +38,9 @@ const (
|
||||
MediaTypeDockerSchema2Config = "application/vnd.docker.container.image.v1+json"
|
||||
MediaTypeDockerSchema2Manifest = "application/vnd.docker.distribution.manifest.v2+json"
|
||||
MediaTypeDockerSchema2ManifestList = "application/vnd.docker.distribution.manifest.list.v2+json"
|
||||
|
||||
// Checkpoint/Restore Media Types
|
||||
|
||||
MediaTypeContainerd1Checkpoint = "application/vnd.containerd.container.criu.checkpoint.criu.tar"
|
||||
MediaTypeContainerd1CheckpointPreDump = "application/vnd.containerd.container.criu.checkpoint.predump.tar"
|
||||
MediaTypeContainerd1Resource = "application/vnd.containerd.container.resource.tar"
|
||||
@ -47,9 +49,12 @@ const (
|
||||
MediaTypeContainerd1CheckpointOptions = "application/vnd.containerd.container.checkpoint.options.v1+proto"
|
||||
MediaTypeContainerd1CheckpointRuntimeName = "application/vnd.containerd.container.checkpoint.runtime.name"
|
||||
MediaTypeContainerd1CheckpointRuntimeOptions = "application/vnd.containerd.container.checkpoint.runtime.options+proto"
|
||||
// Legacy Docker schema1 manifest
|
||||
|
||||
// MediaTypeDockerSchema1Manifest is the legacy Docker schema1 manifest
|
||||
MediaTypeDockerSchema1Manifest = "application/vnd.docker.distribution.manifest.v1+prettyjws"
|
||||
// Encypted media types
|
||||
|
||||
// Encrypted media types
|
||||
|
||||
MediaTypeImageLayerEncrypted = ocispec.MediaTypeImageLayer + "+encrypted"
|
||||
MediaTypeImageLayerGzipEncrypted = ocispec.MediaTypeImageLayerGzip + "+encrypted"
|
||||
)
|
||||
@ -93,16 +98,23 @@ func DiffCompression(ctx context.Context, mediaType string) (string, error) {
|
||||
|
||||
// parseMediaTypes splits the media type into the base type and
|
||||
// an array of sorted extensions
|
||||
func parseMediaTypes(mt string) (string, []string) {
|
||||
func parseMediaTypes(mt string) (mediaType string, suffixes []string) {
|
||||
if mt == "" {
|
||||
return "", []string{}
|
||||
}
|
||||
mediaType, ext, ok := strings.Cut(mt, "+")
|
||||
if !ok {
|
||||
return mediaType, []string{}
|
||||
}
|
||||
|
||||
s := strings.Split(mt, "+")
|
||||
ext := s[1:]
|
||||
sort.Strings(ext)
|
||||
|
||||
return s[0], ext
|
||||
// Splitting the extensions following the mediatype "(+)gzip+encrypted".
|
||||
// We expect this to be a limited list, so add an arbitrary limit (50).
|
||||
//
|
||||
// Note that DiffCompression is only using the last element, so perhaps we
|
||||
// should split on the last "+" only.
|
||||
suffixes = strings.SplitN(ext, "+", 50)
|
||||
sort.Strings(suffixes)
|
||||
return mediaType, suffixes
|
||||
}
|
||||
|
||||
// IsNonDistributable returns true if the media type is non-distributable.
|
||||
@ -118,8 +130,7 @@ func IsLayerType(mt string) bool {
|
||||
}
|
||||
|
||||
// Parse Docker media types, strip off any + suffixes first
|
||||
base, _ := parseMediaTypes(mt)
|
||||
switch base {
|
||||
switch base, _ := parseMediaTypes(mt); base {
|
||||
case MediaTypeDockerSchema2Layer, MediaTypeDockerSchema2LayerGzip,
|
||||
MediaTypeDockerSchema2LayerForeign, MediaTypeDockerSchema2LayerForeignGzip:
|
||||
return true
|
||||
|
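parseMediaTypes is unexported, but its suffix handling is visible through exported helpers such as IsLayerType. A quick illustration, assuming the images import path shown in the diff:

package main

import (
	"fmt"

	"github.com/containerd/containerd/images"
)

func main() {
	// A Docker schema2 gzip layer media type is recognised as a layer.
	fmt.Println(images.IsLayerType("application/vnd.docker.image.rootfs.diff.tar.gzip")) // true

	// OCI layer media types are recognised as well, including ones carrying
	// "+" suffixes like the encrypted variants defined above.
	fmt.Println(images.IsLayerType("application/vnd.oci.image.layer.v1.tar+gzip+encrypted")) // true

	// An image config media type is not a layer.
	fmt.Println(images.IsLayerType("application/vnd.oci.image.config.v1+json")) // false
}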
4 vendor/github.com/containerd/containerd/labels/labels.go generated vendored
@ -19,3 +19,7 @@ package labels
// LabelUncompressed is added to compressed layer contents.
// The value is digest of the uncompressed content.
const LabelUncompressed = "containerd.io/uncompressed"

// LabelSharedNamespace is added to a namespace to allow that namespaces
// contents to be shared.
const LabelSharedNamespace = "containerd.io/namespace.shareable"
11 vendor/github.com/containerd/containerd/labels/validate.go generated vendored
@ -24,15 +24,18 @@ import (
|
||||
|
||||
const (
|
||||
maxSize = 4096
|
||||
// maximum length of key portion of error message if len of key + len of value > maxSize
|
||||
keyMaxLen = 64
|
||||
)
|
||||
|
||||
// Validate a label's key and value are under 4096 bytes
|
||||
func Validate(k, v string) error {
|
||||
if (len(k) + len(v)) > maxSize {
|
||||
if len(k) > 10 {
|
||||
k = k[:10]
|
||||
total := len(k) + len(v)
|
||||
if total > maxSize {
|
||||
if len(k) > keyMaxLen {
|
||||
k = k[:keyMaxLen]
|
||||
}
|
||||
return fmt.Errorf("label key and value greater than maximum size (%d bytes), key: %s: %w", maxSize, k, errdefs.ErrInvalidArgument)
|
||||
return fmt.Errorf("label key and value length (%d bytes) greater than maximum size (%d bytes), key: %s: %w", total, maxSize, k, errdefs.ErrInvalidArgument)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
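A small self-contained sketch of the new size check follows, assuming a plain sentinel error in place of containerd's errdefs.ErrInvalidArgument; the constants and the error wording mirror the hunk above. The function name validateLabel is illustrative.

package main

import (
	"errors"
	"fmt"
	"strings"
)

const (
	maxSize   = 4096
	keyMaxLen = 64
)

// errInvalidArgument stands in for errdefs.ErrInvalidArgument.
var errInvalidArgument = errors.New("invalid argument")

// validateLabel reproduces the logic shown above: the combined key+value
// length is reported, and the key is truncated to keyMaxLen bytes before
// being echoed back in the error message.
func validateLabel(k, v string) error {
	total := len(k) + len(v)
	if total > maxSize {
		if len(k) > keyMaxLen {
			k = k[:keyMaxLen]
		}
		return fmt.Errorf("label key and value length (%d bytes) greater than maximum size (%d bytes), key: %s: %w", total, maxSize, k, errInvalidArgument)
	}
	return nil
}

func main() {
	err := validateLabel("containerd.io/example", strings.Repeat("x", 5000))
	fmt.Println(errors.Is(err, errInvalidArgument)) // true
}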
1
vendor/github.com/containerd/containerd/pkg/dialer/dialer_unix.go
generated
vendored
@ -1,5 +1,4 @@
//go:build !windows
// +build !windows

/*
Copyright The containerd Authors.
1
vendor/github.com/containerd/containerd/pkg/seed/seed_other.go
generated
vendored
@ -1,5 +1,4 @@
//go:build !linux
// +build !linux

/*
Copyright The containerd Authors.
1
vendor/github.com/containerd/containerd/pkg/userns/userns_unsupported.go
generated
vendored
@ -1,5 +1,4 @@
//go:build !linux
// +build !linux

/*
Copyright The containerd Authors.
98
vendor/github.com/containerd/containerd/platforms/cpuinfo.go
generated
vendored
@ -17,14 +17,9 @@
package platforms

import (
"bufio"
"fmt"
"os"
"runtime"
"strings"
"sync"

"github.com/containerd/containerd/errdefs"
"github.com/containerd/containerd/log"
)

@ -37,95 +32,12 @@ var cpuVariantOnce sync.Once
func cpuVariant() string {
cpuVariantOnce.Do(func() {
if isArmArch(runtime.GOARCH) {
cpuVariantValue = getCPUVariant()
var err error
cpuVariantValue, err = getCPUVariant()
if err != nil {
log.L.Errorf("Error getCPUVariant for OS %s: %v", runtime.GOOS, err)
}
}
})
return cpuVariantValue
}
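The change above makes getCPUVariant return an error while cpuVariant keeps caching the result with sync.Once (and, in the real code, only runs detection for ARM architectures). Below is a minimal sketch of that cache-and-log shape, with detectVariant standing in for the platform-specific getCPUVariant.

package main

import (
	"fmt"
	"log"
	"runtime"
	"sync"
)

var (
	cachedVariant string
	variantOnce   sync.Once
)

// detectVariant is a stand-in for the platform-specific getCPUVariant
// shown in the diff; it may fail, and the failure is only logged.
func detectVariant() (string, error) {
	return "v8", nil
}

// cpuVariant caches the detected variant; detection errors do not clear
// the cache, they are only reported.
func cpuVariant() string {
	variantOnce.Do(func() {
		var err error
		cachedVariant, err = detectVariant()
		if err != nil {
			log.Printf("error detecting CPU variant for OS %s: %v", runtime.GOOS, err)
		}
	})
	return cachedVariant
}

func main() {
	fmt.Println(cpuVariant())
}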
|
||||
// For Linux, the kernel has already detected the ABI, ISA and Features.
|
||||
// So we don't need to access the ARM registers to detect platform information
|
||||
// by ourselves. We can just parse these information from /proc/cpuinfo
|
||||
func getCPUInfo(pattern string) (info string, err error) {
|
||||
if !isLinuxOS(runtime.GOOS) {
|
||||
return "", fmt.Errorf("getCPUInfo for OS %s: %w", runtime.GOOS, errdefs.ErrNotImplemented)
|
||||
}
|
||||
|
||||
cpuinfo, err := os.Open("/proc/cpuinfo")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer cpuinfo.Close()
|
||||
|
||||
// Start to Parse the Cpuinfo line by line. For SMP SoC, we parse
|
||||
// the first core is enough.
|
||||
scanner := bufio.NewScanner(cpuinfo)
|
||||
for scanner.Scan() {
|
||||
newline := scanner.Text()
|
||||
list := strings.Split(newline, ":")
|
||||
|
||||
if len(list) > 1 && strings.EqualFold(strings.TrimSpace(list[0]), pattern) {
|
||||
return strings.TrimSpace(list[1]), nil
|
||||
}
|
||||
}
|
||||
|
||||
// Check whether the scanner encountered errors
|
||||
err = scanner.Err()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return "", fmt.Errorf("getCPUInfo for pattern: %s: %w", pattern, errdefs.ErrNotFound)
|
||||
}
|
||||
|
||||
func getCPUVariant() string {
|
||||
if runtime.GOOS == "windows" || runtime.GOOS == "darwin" {
|
||||
// Windows/Darwin only supports v7 for ARM32 and v8 for ARM64 and so we can use
|
||||
// runtime.GOARCH to determine the variants
|
||||
var variant string
|
||||
switch runtime.GOARCH {
|
||||
case "arm64":
|
||||
variant = "v8"
|
||||
case "arm":
|
||||
variant = "v7"
|
||||
default:
|
||||
variant = "unknown"
|
||||
}
|
||||
|
||||
return variant
|
||||
}
|
||||
|
||||
variant, err := getCPUInfo("Cpu architecture")
|
||||
if err != nil {
|
||||
log.L.WithError(err).Error("failure getting variant")
|
||||
return ""
|
||||
}
|
||||
|
||||
// handle edge case for Raspberry Pi ARMv6 devices (which due to a kernel quirk, report "CPU architecture: 7")
|
||||
// https://www.raspberrypi.org/forums/viewtopic.php?t=12614
|
||||
if runtime.GOARCH == "arm" && variant == "7" {
|
||||
model, err := getCPUInfo("model name")
|
||||
if err == nil && strings.HasPrefix(strings.ToLower(model), "armv6-compatible") {
|
||||
variant = "6"
|
||||
}
|
||||
}
|
||||
|
||||
switch strings.ToLower(variant) {
|
||||
case "8", "aarch64":
|
||||
variant = "v8"
|
||||
case "7", "7m", "?(12)", "?(13)", "?(14)", "?(15)", "?(16)", "?(17)":
|
||||
variant = "v7"
|
||||
case "6", "6tej":
|
||||
variant = "v6"
|
||||
case "5", "5t", "5te", "5tej":
|
||||
variant = "v5"
|
||||
case "4", "4t":
|
||||
variant = "v4"
|
||||
case "3":
|
||||
variant = "v3"
|
||||
default:
|
||||
variant = "unknown"
|
||||
}
|
||||
|
||||
return variant
|
||||
}
|
||||
|
161
vendor/github.com/containerd/containerd/platforms/cpuinfo_linux.go
generated
vendored
Normal file
161
vendor/github.com/containerd/containerd/platforms/cpuinfo_linux.go
generated
vendored
Normal file
@ -0,0 +1,161 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package platforms
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"fmt"
|
||||
"os"
|
||||
"runtime"
|
||||
"strings"
|
||||
|
||||
"github.com/containerd/containerd/errdefs"
|
||||
"golang.org/x/sys/unix"
|
||||
)
|
||||
|
||||
// getMachineArch retrieves the machine architecture through system call
|
||||
func getMachineArch() (string, error) {
|
||||
var uname unix.Utsname
|
||||
err := unix.Uname(&uname)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
arch := string(uname.Machine[:bytes.IndexByte(uname.Machine[:], 0)])
|
||||
|
||||
return arch, nil
|
||||
}
|
||||
|
||||
// For Linux, the kernel has already detected the ABI, ISA and Features.
|
||||
// So we don't need to access the ARM registers to detect platform information
|
||||
// by ourselves. We can just parse these information from /proc/cpuinfo
|
||||
func getCPUInfo(pattern string) (info string, err error) {
|
||||
|
||||
cpuinfo, err := os.Open("/proc/cpuinfo")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer cpuinfo.Close()
|
||||
|
||||
// Start to Parse the Cpuinfo line by line. For SMP SoC, we parse
|
||||
// the first core is enough.
|
||||
scanner := bufio.NewScanner(cpuinfo)
|
||||
for scanner.Scan() {
|
||||
newline := scanner.Text()
|
||||
list := strings.Split(newline, ":")
|
||||
|
||||
if len(list) > 1 && strings.EqualFold(strings.TrimSpace(list[0]), pattern) {
|
||||
return strings.TrimSpace(list[1]), nil
|
||||
}
|
||||
}
|
||||
|
||||
// Check whether the scanner encountered errors
|
||||
err = scanner.Err()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
|
||||
return "", fmt.Errorf("getCPUInfo for pattern %s: %w", pattern, errdefs.ErrNotFound)
|
||||
}
|
||||
|
||||
// getCPUVariantFromArch get CPU variant from arch through a system call
|
||||
func getCPUVariantFromArch(arch string) (string, error) {
|
||||
|
||||
var variant string
|
||||
|
||||
arch = strings.ToLower(arch)
|
||||
|
||||
if arch == "aarch64" {
|
||||
variant = "8"
|
||||
} else if arch[0:4] == "armv" && len(arch) >= 5 {
|
||||
//Valid arch format is in form of armvXx
|
||||
switch arch[3:5] {
|
||||
case "v8":
|
||||
variant = "8"
|
||||
case "v7":
|
||||
variant = "7"
|
||||
case "v6":
|
||||
variant = "6"
|
||||
case "v5":
|
||||
variant = "5"
|
||||
case "v4":
|
||||
variant = "4"
|
||||
case "v3":
|
||||
variant = "3"
|
||||
default:
|
||||
variant = "unknown"
|
||||
}
|
||||
} else {
|
||||
return "", fmt.Errorf("getCPUVariantFromArch invalid arch: %s, %w", arch, errdefs.ErrInvalidArgument)
|
||||
}
|
||||
return variant, nil
|
||||
}
|
||||
|
||||
// getCPUVariant returns cpu variant for ARM
|
||||
// We first try reading "Cpu architecture" field from /proc/cpuinfo
|
||||
// If we can't find it, then fall back using a system call
|
||||
// This is to cover running ARM in emulated environment on x86 host as this field in /proc/cpuinfo
|
||||
// was not present.
|
||||
func getCPUVariant() (string, error) {
|
||||
|
||||
variant, err := getCPUInfo("Cpu architecture")
|
||||
if err != nil {
|
||||
if errdefs.IsNotFound(err) {
|
||||
//Let's try getting CPU variant from machine architecture
|
||||
arch, err := getMachineArch()
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failure getting machine architecture: %v", err)
|
||||
}
|
||||
|
||||
variant, err = getCPUVariantFromArch(arch)
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failure getting CPU variant from machine architecture: %v", err)
|
||||
}
|
||||
} else {
|
||||
return "", fmt.Errorf("failure getting CPU variant: %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
// handle edge case for Raspberry Pi ARMv6 devices (which due to a kernel quirk, report "CPU architecture: 7")
|
||||
// https://www.raspberrypi.org/forums/viewtopic.php?t=12614
|
||||
if runtime.GOARCH == "arm" && variant == "7" {
|
||||
model, err := getCPUInfo("model name")
|
||||
if err == nil && strings.HasPrefix(strings.ToLower(model), "armv6-compatible") {
|
||||
variant = "6"
|
||||
}
|
||||
}
|
||||
|
||||
switch strings.ToLower(variant) {
|
||||
case "8", "aarch64":
|
||||
variant = "v8"
|
||||
case "7", "7m", "?(12)", "?(13)", "?(14)", "?(15)", "?(16)", "?(17)":
|
||||
variant = "v7"
|
||||
case "6", "6tej":
|
||||
variant = "v6"
|
||||
case "5", "5t", "5te", "5tej":
|
||||
variant = "v5"
|
||||
case "4", "4t":
|
||||
variant = "v4"
|
||||
case "3":
|
||||
variant = "v3"
|
||||
default:
|
||||
variant = "unknown"
|
||||
}
|
||||
|
||||
return variant, nil
|
||||
}
|
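A standalone sketch of the uname-based fallback described by getCPUVariantFromArch above: the machine string reported by uname (for example "armv7l" or "aarch64") is mapped to a bare variant number. The helper name variantFromMachine is illustrative only, and the length check is done before indexing.

package main

import (
	"fmt"
	"strings"
)

// variantFromMachine mirrors the mapping in getCPUVariantFromArch:
// "aarch64" maps to "8", and "armvXx" strings map to the digit after "v".
func variantFromMachine(arch string) (string, error) {
	arch = strings.ToLower(arch)
	if arch == "aarch64" {
		return "8", nil
	}
	if len(arch) >= 5 && strings.HasPrefix(arch, "armv") {
		switch arch[3:5] {
		case "v8":
			return "8", nil
		case "v7":
			return "7", nil
		case "v6":
			return "6", nil
		case "v5":
			return "5", nil
		case "v4":
			return "4", nil
		case "v3":
			return "3", nil
		}
		return "unknown", nil
	}
	return "", fmt.Errorf("invalid arch: %s", arch)
}

func main() {
	v, _ := variantFromMachine("armv7l")
	fmt.Println(v) // 7
}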
59
vendor/github.com/containerd/containerd/platforms/cpuinfo_other.go
generated
vendored
Normal file
59
vendor/github.com/containerd/containerd/platforms/cpuinfo_other.go
generated
vendored
Normal file
@ -0,0 +1,59 @@
|
||||
//go:build !linux
|
||||
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package platforms
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"runtime"
|
||||
|
||||
"github.com/containerd/containerd/errdefs"
|
||||
)
|
||||
|
||||
func getCPUVariant() (string, error) {
|
||||
|
||||
var variant string
|
||||
|
||||
if runtime.GOOS == "windows" || runtime.GOOS == "darwin" {
|
||||
// Windows/Darwin only supports v7 for ARM32 and v8 for ARM64 and so we can use
|
||||
// runtime.GOARCH to determine the variants
|
||||
switch runtime.GOARCH {
|
||||
case "arm64":
|
||||
variant = "v8"
|
||||
case "arm":
|
||||
variant = "v7"
|
||||
default:
|
||||
variant = "unknown"
|
||||
}
|
||||
} else if runtime.GOOS == "freebsd" {
|
||||
// FreeBSD supports ARMv6 and ARMv7 as well as ARMv4 and ARMv5 (though deprecated)
|
||||
// detecting those variants is currently unimplemented
|
||||
switch runtime.GOARCH {
|
||||
case "arm64":
|
||||
variant = "v8"
|
||||
default:
|
||||
variant = "unknown"
|
||||
}
|
||||
|
||||
} else {
|
||||
return "", fmt.Errorf("getCPUVariant for OS %s: %v", runtime.GOOS, errdefs.ErrNotImplemented)
|
||||
|
||||
}
|
||||
|
||||
return variant, nil
|
||||
}
|
7
vendor/github.com/containerd/containerd/platforms/database.go
generated
vendored
7
vendor/github.com/containerd/containerd/platforms/database.go
generated
vendored
@ -21,13 +21,6 @@ import (
|
||||
"strings"
|
||||
)
|
||||
|
||||
// isLinuxOS returns true if the operating system is Linux.
|
||||
//
|
||||
// The OS value should be normalized before calling this function.
|
||||
func isLinuxOS(os string) bool {
|
||||
return os == "linux"
|
||||
}
|
||||
|
||||
// These function are generated from https://golang.org/src/go/build/syslist.go.
|
||||
//
|
||||
// We use switch statements because they are slightly faster than map lookups
|
||||
|
1
vendor/github.com/containerd/containerd/platforms/defaults_darwin.go
generated
vendored
1
vendor/github.com/containerd/containerd/platforms/defaults_darwin.go
generated
vendored
@ -1,5 +1,4 @@
|
||||
//go:build darwin
|
||||
// +build darwin
|
||||
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
43
vendor/github.com/containerd/containerd/platforms/defaults_freebsd.go
generated
vendored
Normal file
43
vendor/github.com/containerd/containerd/platforms/defaults_freebsd.go
generated
vendored
Normal file
@ -0,0 +1,43 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package platforms
|
||||
|
||||
import (
|
||||
"runtime"
|
||||
|
||||
specs "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
|
||||
// DefaultSpec returns the current platform's default platform specification.
|
||||
func DefaultSpec() specs.Platform {
|
||||
return specs.Platform{
|
||||
OS: runtime.GOOS,
|
||||
Architecture: runtime.GOARCH,
|
||||
// The Variant field will be empty if arch != ARM.
|
||||
Variant: cpuVariant(),
|
||||
}
|
||||
}
|
||||
|
||||
// Default returns the default matcher for the platform.
|
||||
func Default() MatchComparer {
|
||||
return Ordered(DefaultSpec(), specs.Platform{
|
||||
OS: "linux",
|
||||
Architecture: runtime.GOARCH,
|
||||
// The Variant field will be empty if arch != ARM.
|
||||
Variant: cpuVariant(),
|
||||
})
|
||||
}
|
3
vendor/github.com/containerd/containerd/platforms/defaults_unix.go
generated
vendored
3
vendor/github.com/containerd/containerd/platforms/defaults_unix.go
generated
vendored
@ -1,5 +1,4 @@
|
||||
//go:build !windows && !darwin
|
||||
// +build !windows,!darwin
|
||||
//go:build !windows && !darwin && !freebsd
|
||||
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
37
vendor/github.com/containerd/containerd/platforms/defaults_windows.go
generated
vendored
37
vendor/github.com/containerd/containerd/platforms/defaults_windows.go
generated
vendored
@ -22,7 +22,6 @@ import (
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
imagespec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
specs "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"golang.org/x/sys/windows"
|
||||
)
|
||||
@ -39,25 +38,28 @@ func DefaultSpec() specs.Platform {
|
||||
}
|
||||
}
|
||||
|
||||
type matchComparer struct {
|
||||
defaults Matcher
|
||||
type windowsmatcher struct {
|
||||
specs.Platform
|
||||
osVersionPrefix string
|
||||
defaultMatcher Matcher
|
||||
}
|
||||
|
||||
// Match matches platform with the same windows major, minor
|
||||
// and build version.
|
||||
func (m matchComparer) Match(p imagespec.Platform) bool {
|
||||
if m.defaults.Match(p) {
|
||||
// TODO(windows): Figure out whether OSVersion is deprecated.
|
||||
return strings.HasPrefix(p.OSVersion, m.osVersionPrefix)
|
||||
func (m windowsmatcher) Match(p specs.Platform) bool {
|
||||
match := m.defaultMatcher.Match(p)
|
||||
|
||||
if match && m.OS == "windows" {
|
||||
return strings.HasPrefix(p.OSVersion, m.osVersionPrefix) && m.defaultMatcher.Match(p)
|
||||
}
|
||||
return false
|
||||
|
||||
return match
|
||||
}
|
||||
|
||||
// Less sorts matched platforms in front of other platforms.
|
||||
// For matched platforms, it puts platforms with larger revision
|
||||
// number in front.
|
||||
func (m matchComparer) Less(p1, p2 imagespec.Platform) bool {
|
||||
func (m windowsmatcher) Less(p1, p2 specs.Platform) bool {
|
||||
m1, m2 := m.Match(p1), m.Match(p2)
|
||||
if m1 && m2 {
|
||||
r1, r2 := revision(p1.OSVersion), revision(p2.OSVersion)
|
||||
@ -78,14 +80,15 @@ func revision(v string) int {
|
||||
return r
|
||||
}
|
||||
|
||||
func prefix(v string) string {
|
||||
parts := strings.Split(v, ".")
|
||||
if len(parts) < 4 {
|
||||
return v
|
||||
}
|
||||
return strings.Join(parts[0:3], ".")
|
||||
}
|
||||
|
||||
// Default returns the current platform's default platform specification.
|
||||
func Default() MatchComparer {
|
||||
major, minor, build := windows.RtlGetNtVersionNumbers()
|
||||
return matchComparer{
|
||||
defaults: Ordered(DefaultSpec(), specs.Platform{
|
||||
OS: "linux",
|
||||
Architecture: runtime.GOARCH,
|
||||
}),
|
||||
osVersionPrefix: fmt.Sprintf("%d.%d.%d", major, minor, build),
|
||||
}
|
||||
return Only(DefaultSpec())
|
||||
}
|
||||
|
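For reference, the prefix helper above keeps only the major.minor.build portion of a Windows OSVersion, which is what windowsmatcher then compares with strings.HasPrefix. A minimal sketch of that trimming, assuming OSVersion strings of the form "10.0.17763.1234":

package main

import (
	"fmt"
	"strings"
)

// osVersionPrefix mirrors prefix() in the hunk above: versions with four or
// more dot-separated parts are trimmed to the first three ("major.minor.build").
func osVersionPrefix(v string) string {
	parts := strings.Split(v, ".")
	if len(parts) < 4 {
		return v
	}
	return strings.Join(parts[0:3], ".")
}

func main() {
	fmt.Println(osVersionPrefix("10.0.17763.1234")) // 10.0.17763
	fmt.Println(osVersionPrefix("10.0.17763"))      // 10.0.17763
}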
11
vendor/github.com/containerd/containerd/platforms/platforms.go
generated
vendored
11
vendor/github.com/containerd/containerd/platforms/platforms.go
generated
vendored
@ -114,14 +114,18 @@ import (
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/containerd/containerd/errdefs"
|
||||
specs "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
|
||||
"github.com/containerd/containerd/errdefs"
|
||||
)
|
||||
|
||||
var (
|
||||
specifierRe = regexp.MustCompile(`^[A-Za-z0-9_-]+$`)
|
||||
)
|
||||
|
||||
// Platform is a type alias for convenience, so there is no need to import image-spec package everywhere.
|
||||
type Platform = specs.Platform
|
||||
|
||||
// Matcher matches platforms specifications, provided by an image or runtime.
|
||||
type Matcher interface {
|
||||
Match(platform specs.Platform) bool
|
||||
@ -136,9 +140,7 @@ type Matcher interface {
|
||||
//
|
||||
// Applications should opt to use `Match` over directly parsing specifiers.
|
||||
func NewMatcher(platform specs.Platform) Matcher {
|
||||
return &matcher{
|
||||
Platform: Normalize(platform),
|
||||
}
|
||||
return newDefaultMatcher(platform)
|
||||
}
|
||||
|
||||
type matcher struct {
|
||||
@ -257,5 +259,6 @@ func Format(platform specs.Platform) string {
|
||||
func Normalize(platform specs.Platform) specs.Platform {
|
||||
platform.OS = normalizeOS(platform.OS)
|
||||
platform.Architecture, platform.Variant = normalizeArch(platform.Architecture, platform.Variant)
|
||||
|
||||
return platform
|
||||
}
|
||||
|
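With this change NewMatcher simply delegates to the per-OS newDefaultMatcher shown below in platforms_other.go and platforms_windows.go. A hedged usage sketch follows (vendored import paths as in this diff; exact matching behavior on Windows additionally involves the OSVersion prefix):

package main

import (
	"fmt"

	"github.com/containerd/containerd/platforms"
	specs "github.com/opencontainers/image-spec/specs-go/v1"
)

func main() {
	// Build a matcher for a platform and test candidates against it.
	m := platforms.NewMatcher(specs.Platform{OS: "linux", Architecture: "amd64"})
	fmt.Println(m.Match(specs.Platform{OS: "linux", Architecture: "amd64"})) // true
	fmt.Println(m.Match(specs.Platform{OS: "linux", Architecture: "arm64"})) // false
}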
30
vendor/github.com/containerd/containerd/platforms/platforms_other.go
generated
vendored
Normal file
30
vendor/github.com/containerd/containerd/platforms/platforms_other.go
generated
vendored
Normal file
@ -0,0 +1,30 @@
|
||||
//go:build !windows
|
||||
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package platforms
|
||||
|
||||
import (
|
||||
specs "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
|
||||
// NewMatcher returns the default Matcher for containerd
|
||||
func newDefaultMatcher(platform specs.Platform) Matcher {
|
||||
return &matcher{
|
||||
Platform: Normalize(platform),
|
||||
}
|
||||
}
|
34
vendor/github.com/containerd/containerd/platforms/platforms_windows.go
generated
vendored
Normal file
34
vendor/github.com/containerd/containerd/platforms/platforms_windows.go
generated
vendored
Normal file
@ -0,0 +1,34 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package platforms
|
||||
|
||||
import (
|
||||
specs "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
|
||||
// NewMatcher returns a Windows matcher that will match on osVersionPrefix if
|
||||
// the platform is Windows otherwise use the default matcher
|
||||
func newDefaultMatcher(platform specs.Platform) Matcher {
|
||||
prefix := prefix(platform.OSVersion)
|
||||
return windowsmatcher{
|
||||
Platform: platform,
|
||||
osVersionPrefix: prefix,
|
||||
defaultMatcher: &matcher{
|
||||
Platform: Normalize(platform),
|
||||
},
|
||||
}
|
||||
}
|
47
vendor/github.com/containerd/containerd/protobuf/any.go
generated
vendored
Normal file
47
vendor/github.com/containerd/containerd/protobuf/any.go
generated
vendored
Normal file
@ -0,0 +1,47 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package protobuf
|
||||
|
||||
import (
|
||||
"github.com/containerd/typeurl"
|
||||
"google.golang.org/protobuf/types/known/anypb"
|
||||
)
|
||||
|
||||
// FromAny converts typeurl.Any to github.com/containerd/containerd/protobuf/types.Any.
|
||||
func FromAny(from typeurl.Any) *anypb.Any {
|
||||
if from == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if pbany, ok := from.(*anypb.Any); ok {
|
||||
return pbany
|
||||
}
|
||||
|
||||
return &anypb.Any{
|
||||
TypeUrl: from.GetTypeUrl(),
|
||||
Value: from.GetValue(),
|
||||
}
|
||||
}
|
||||
|
||||
// FromAny converts an arbitrary interface to github.com/containerd/containerd/protobuf/types.Any.
|
||||
func MarshalAnyToProto(from interface{}) (*anypb.Any, error) {
|
||||
any, err := typeurl.MarshalAny(from)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return FromAny(any), nil
|
||||
}
|
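A small usage sketch of FromAny above: *anypb.Any already satisfies the typeurl.Any interface (GetTypeUrl/GetValue), so it is returned unchanged; other implementations get copied into a fresh anypb.Any. Import paths are the vendored ones shown in this diff.

package main

import (
	"fmt"

	"github.com/containerd/containerd/protobuf"
	"google.golang.org/protobuf/types/known/anypb"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	// Pack a message into an anypb.Any, then run it through FromAny.
	src, err := anypb.New(timestamppb.Now())
	if err != nil {
		panic(err)
	}
	// *anypb.Any short-circuits inside FromAny, so the same pointer comes back.
	out := protobuf.FromAny(src)
	fmt.Println(out == src) // true
	fmt.Println(out.GetTypeUrl())
}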
41
vendor/github.com/containerd/containerd/protobuf/compare.go
generated
vendored
Normal file
41
vendor/github.com/containerd/containerd/protobuf/compare.go
generated
vendored
Normal file
@ -0,0 +1,41 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package protobuf
|
||||
|
||||
import (
|
||||
"github.com/google/go-cmp/cmp"
|
||||
"google.golang.org/protobuf/proto"
|
||||
)
|
||||
|
||||
var Compare = cmp.FilterValues(
|
||||
func(x, y interface{}) bool {
|
||||
_, xok := x.(proto.Message)
|
||||
_, yok := y.(proto.Message)
|
||||
return xok && yok
|
||||
},
|
||||
cmp.Comparer(func(x, y interface{}) bool {
|
||||
vx, ok := x.(proto.Message)
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
vy, ok := y.(proto.Message)
|
||||
if !ok {
|
||||
return false
|
||||
}
|
||||
return proto.Equal(vx, vy)
|
||||
}),
|
||||
)
|
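The Compare option above lets go-cmp route comparisons of proto.Message values through proto.Equal instead of reflecting over unexported fields. A small usage sketch, assuming the vendored import paths:

package main

import (
	"fmt"
	"time"

	"github.com/containerd/containerd/protobuf"
	"github.com/google/go-cmp/cmp"
	"google.golang.org/protobuf/types/known/timestamppb"
)

func main() {
	t := time.Date(2023, 1, 1, 0, 0, 0, 0, time.UTC)
	a := timestamppb.New(t)
	b := timestamppb.New(t)

	// Without the option, cmp would panic on the unexported protobuf fields;
	// protobuf.Compare delegates the comparison to proto.Equal.
	fmt.Println(cmp.Equal(a, b, protobuf.Compare)) // true
}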
36
vendor/github.com/containerd/containerd/protobuf/timestamp.go
generated
vendored
Normal file
36
vendor/github.com/containerd/containerd/protobuf/timestamp.go
generated
vendored
Normal file
@ -0,0 +1,36 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package protobuf
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"google.golang.org/protobuf/types/known/timestamppb"
|
||||
)
|
||||
|
||||
// Once we migrate off from gogo/protobuf, we can use the function below, which don't return any errors.
|
||||
// https://github.com/protocolbuffers/protobuf-go/blob/v1.28.0/types/known/timestamppb/timestamp.pb.go#L200-L208
|
||||
|
||||
// ToTimestamp creates protobuf's Timestamp from time.Time.
|
||||
func ToTimestamp(from time.Time) *timestamppb.Timestamp {
|
||||
return timestamppb.New(from)
|
||||
}
|
||||
|
||||
// FromTimestamp creates time.Time from protobuf's Timestamp.
|
||||
func FromTimestamp(from *timestamppb.Timestamp) time.Time {
|
||||
return from.AsTime()
|
||||
}
|
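A trivial round trip through the helpers above (ToTimestamp and FromTimestamp are thin wrappers over timestamppb.New and Timestamp.AsTime):

package main

import (
	"fmt"
	"time"

	"github.com/containerd/containerd/protobuf"
)

func main() {
	now := time.Now().UTC()
	ts := protobuf.ToTimestamp(now)    // *timestamppb.Timestamp
	back := protobuf.FromTimestamp(ts) // time.Time
	fmt.Println(back.Equal(now))       // true
}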
28
vendor/github.com/containerd/containerd/protobuf/types/types.go
generated
vendored
Normal file
28
vendor/github.com/containerd/containerd/protobuf/types/types.go
generated
vendored
Normal file
@ -0,0 +1,28 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// Package types provides convinient aliases that make google.golang.org/protobuf migration easier.
|
||||
package types
|
||||
|
||||
import (
|
||||
"google.golang.org/genproto/protobuf/field_mask"
|
||||
"google.golang.org/protobuf/types/known/anypb"
|
||||
"google.golang.org/protobuf/types/known/emptypb"
|
||||
)
|
||||
|
||||
type Empty = emptypb.Empty
|
||||
type Any = anypb.Any
|
||||
type FieldMask = field_mask.FieldMask
|
58
vendor/github.com/containerd/containerd/reference/docker/helpers.go
generated
vendored
Normal file
58
vendor/github.com/containerd/containerd/reference/docker/helpers.go
generated
vendored
Normal file
@ -0,0 +1,58 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package docker
|
||||
|
||||
import "path"
|
||||
|
||||
// IsNameOnly returns true if reference only contains a repo name.
|
||||
func IsNameOnly(ref Named) bool {
|
||||
if _, ok := ref.(NamedTagged); ok {
|
||||
return false
|
||||
}
|
||||
if _, ok := ref.(Canonical); ok {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// FamiliarName returns the familiar name string
|
||||
// for the given named, familiarizing if needed.
|
||||
func FamiliarName(ref Named) string {
|
||||
if nn, ok := ref.(normalizedNamed); ok {
|
||||
return nn.Familiar().Name()
|
||||
}
|
||||
return ref.Name()
|
||||
}
|
||||
|
||||
// FamiliarString returns the familiar string representation
|
||||
// for the given reference, familiarizing if needed.
|
||||
func FamiliarString(ref Reference) string {
|
||||
if nn, ok := ref.(normalizedNamed); ok {
|
||||
return nn.Familiar().String()
|
||||
}
|
||||
return ref.String()
|
||||
}
|
||||
|
||||
// FamiliarMatch reports whether ref matches the specified pattern.
|
||||
// See https://godoc.org/path#Match for supported patterns.
|
||||
func FamiliarMatch(pattern string, ref Reference) (bool, error) {
|
||||
matched, err := path.Match(pattern, FamiliarString(ref))
|
||||
if namedRef, isNamed := ref.(Named); isNamed && !matched {
|
||||
matched, _ = path.Match(pattern, FamiliarName(namedRef))
|
||||
}
|
||||
return matched, err
|
||||
}
|
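A usage sketch of the familiar-name helpers above, using the vendored import path; ParseNormalizedNamed comes from normalize.go in the same package (shown next):

package main

import (
	"fmt"

	"github.com/containerd/containerd/reference/docker"
)

func main() {
	ref, err := docker.ParseNormalizedNamed("docker.io/library/redis")
	if err != nil {
		panic(err)
	}
	// The familiar form drops the default "docker.io" domain and "library/" prefix.
	fmt.Println(docker.FamiliarString(ref)) // redis

	// FamiliarMatch applies path.Match-style patterns to the familiar form.
	ok, _ := docker.FamiliarMatch("red*", ref)
	fmt.Println(ok) // true
}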
196
vendor/github.com/containerd/containerd/reference/docker/normalize.go
generated
vendored
Normal file
196
vendor/github.com/containerd/containerd/reference/docker/normalize.go
generated
vendored
Normal file
@ -0,0 +1,196 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package docker
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/opencontainers/go-digest"
|
||||
)
|
||||
|
||||
var (
|
||||
legacyDefaultDomain = "index.docker.io"
|
||||
defaultDomain = "docker.io"
|
||||
officialRepoName = "library"
|
||||
defaultTag = "latest"
|
||||
)
|
||||
|
||||
// normalizedNamed represents a name which has been
|
||||
// normalized and has a familiar form. A familiar name
|
||||
// is what is used in Docker UI. An example normalized
|
||||
// name is "docker.io/library/ubuntu" and corresponding
|
||||
// familiar name of "ubuntu".
|
||||
type normalizedNamed interface {
|
||||
Named
|
||||
Familiar() Named
|
||||
}
|
||||
|
||||
// ParseNormalizedNamed parses a string into a named reference
|
||||
// transforming a familiar name from Docker UI to a fully
|
||||
// qualified reference. If the value may be an identifier
|
||||
// use ParseAnyReference.
|
||||
func ParseNormalizedNamed(s string) (Named, error) {
|
||||
if ok := anchoredIdentifierRegexp.MatchString(s); ok {
|
||||
return nil, fmt.Errorf("invalid repository name (%s), cannot specify 64-byte hexadecimal strings", s)
|
||||
}
|
||||
domain, remainder := splitDockerDomain(s)
|
||||
var remoteName string
|
||||
if tagSep := strings.IndexRune(remainder, ':'); tagSep > -1 {
|
||||
remoteName = remainder[:tagSep]
|
||||
} else {
|
||||
remoteName = remainder
|
||||
}
|
||||
if strings.ToLower(remoteName) != remoteName {
|
||||
return nil, fmt.Errorf("invalid reference format: repository name (%s) must be lowercase", remoteName)
|
||||
}
|
||||
|
||||
ref, err := Parse(domain + "/" + remainder)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
named, isNamed := ref.(Named)
|
||||
if !isNamed {
|
||||
return nil, fmt.Errorf("reference %s has no name", ref.String())
|
||||
}
|
||||
return named, nil
|
||||
}
|
||||
|
||||
// ParseDockerRef normalizes the image reference following the docker convention. This is added
|
||||
// mainly for backward compatibility.
|
||||
// The reference returned can only be either tagged or digested. For reference contains both tag
|
||||
// and digest, the function returns digested reference, e.g. docker.io/library/busybox:latest@
|
||||
// sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa will be returned as
|
||||
// docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa.
|
||||
func ParseDockerRef(ref string) (Named, error) {
|
||||
named, err := ParseNormalizedNamed(ref)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if _, ok := named.(NamedTagged); ok {
|
||||
if canonical, ok := named.(Canonical); ok {
|
||||
// The reference is both tagged and digested, only
|
||||
// return digested.
|
||||
newNamed, err := WithName(canonical.Name())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
newCanonical, err := WithDigest(newNamed, canonical.Digest())
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return newCanonical, nil
|
||||
}
|
||||
}
|
||||
return TagNameOnly(named), nil
|
||||
}
|
||||
|
||||
// splitDockerDomain splits a repository name to domain and remotename string.
|
||||
// If no valid domain is found, the default domain is used. Repository name
|
||||
// needs to be already validated before.
|
||||
func splitDockerDomain(name string) (domain, remainder string) {
|
||||
i := strings.IndexRune(name, '/')
|
||||
if i == -1 || (!strings.ContainsAny(name[:i], ".:") && name[:i] != "localhost" && strings.ToLower(name[:i]) == name[:i]) {
|
||||
domain, remainder = defaultDomain, name
|
||||
} else {
|
||||
domain, remainder = name[:i], name[i+1:]
|
||||
}
|
||||
if domain == legacyDefaultDomain {
|
||||
domain = defaultDomain
|
||||
}
|
||||
if domain == defaultDomain && !strings.ContainsRune(remainder, '/') {
|
||||
remainder = officialRepoName + "/" + remainder
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// familiarizeName returns a shortened version of the name familiar
|
||||
// to the Docker UI. Familiar names have the default domain
|
||||
// "docker.io" and "library/" repository prefix removed.
|
||||
// For example, "docker.io/library/redis" will have the familiar
|
||||
// name "redis" and "docker.io/dmcgowan/myapp" will be "dmcgowan/myapp".
|
||||
// Returns a familiarized named only reference.
|
||||
func familiarizeName(named namedRepository) repository {
|
||||
repo := repository{
|
||||
domain: named.Domain(),
|
||||
path: named.Path(),
|
||||
}
|
||||
|
||||
if repo.domain == defaultDomain {
|
||||
repo.domain = ""
|
||||
// Handle official repositories which have the pattern "library/<official repo name>"
|
||||
if split := strings.Split(repo.path, "/"); len(split) == 2 && split[0] == officialRepoName {
|
||||
repo.path = split[1]
|
||||
}
|
||||
}
|
||||
return repo
|
||||
}
|
||||
|
||||
func (r reference) Familiar() Named {
|
||||
return reference{
|
||||
namedRepository: familiarizeName(r.namedRepository),
|
||||
tag: r.tag,
|
||||
digest: r.digest,
|
||||
}
|
||||
}
|
||||
|
||||
func (r repository) Familiar() Named {
|
||||
return familiarizeName(r)
|
||||
}
|
||||
|
||||
func (t taggedReference) Familiar() Named {
|
||||
return taggedReference{
|
||||
namedRepository: familiarizeName(t.namedRepository),
|
||||
tag: t.tag,
|
||||
}
|
||||
}
|
||||
|
||||
func (c canonicalReference) Familiar() Named {
|
||||
return canonicalReference{
|
||||
namedRepository: familiarizeName(c.namedRepository),
|
||||
digest: c.digest,
|
||||
}
|
||||
}
|
||||
|
||||
// TagNameOnly adds the default tag "latest" to a reference if it only has
|
||||
// a repo name.
|
||||
func TagNameOnly(ref Named) Named {
|
||||
if IsNameOnly(ref) {
|
||||
namedTagged, err := WithTag(ref, defaultTag)
|
||||
if err != nil {
|
||||
// Default tag must be valid, to create a NamedTagged
|
||||
// type with non-validated input the WithTag function
|
||||
// should be used instead
|
||||
panic(err)
|
||||
}
|
||||
return namedTagged
|
||||
}
|
||||
return ref
|
||||
}
|
||||
|
||||
// ParseAnyReference parses a reference string as a possible identifier,
|
||||
// full digest, or familiar name.
|
||||
func ParseAnyReference(ref string) (Reference, error) {
|
||||
if ok := anchoredIdentifierRegexp.MatchString(ref); ok {
|
||||
return digestReference("sha256:" + ref), nil
|
||||
}
|
||||
if dgst, err := digest.Parse(ref); err == nil {
|
||||
return digestReference(dgst), nil
|
||||
}
|
||||
|
||||
return ParseNormalizedNamed(ref)
|
||||
}
|
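A short sketch of the normalization behavior described above: a familiar name gains the default domain, the "library/" prefix, and the "latest" tag, while a reference carrying both a tag and a digest collapses to the digested form (the digest value is the example from the ParseDockerRef doc comment):

package main

import (
	"fmt"

	"github.com/containerd/containerd/reference/docker"
)

func main() {
	// Familiar name -> fully qualified, tagged reference.
	ref, err := docker.ParseDockerRef("ubuntu")
	if err != nil {
		panic(err)
	}
	fmt.Println(ref.String()) // docker.io/library/ubuntu:latest

	// Tag + digest -> only the digest is kept.
	ref, err = docker.ParseDockerRef("busybox:latest@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa")
	if err != nil {
		panic(err)
	}
	fmt.Println(ref.String()) // docker.io/library/busybox@sha256:7cc4b5aefd1d0cadf8d97d4350462ba51c694ebca145b08d7d41b41acc8db5aa
}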
453
vendor/github.com/containerd/containerd/reference/docker/reference.go
generated
vendored
Normal file
453
vendor/github.com/containerd/containerd/reference/docker/reference.go
generated
vendored
Normal file
@ -0,0 +1,453 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// Package docker provides a general type to represent any way of referencing images within the registry.
|
||||
// Its main purpose is to abstract tags and digests (content-addressable hash).
|
||||
//
|
||||
// Grammar
|
||||
//
|
||||
// reference := name [ ":" tag ] [ "@" digest ]
|
||||
// name := [domain '/'] path-component ['/' path-component]*
|
||||
// domain := host [':' port-number]
|
||||
// host := domain-name | IPv4address | \[ IPv6address \] ; rfc3986 appendix-A
|
||||
// domain-name := domain-component ['.' domain-component]*
|
||||
// domain-component := /([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])/
|
||||
// port-number := /[0-9]+/
|
||||
// path-component := alpha-numeric [separator alpha-numeric]*
|
||||
// alpha-numeric := /[a-z0-9]+/
|
||||
// separator := /[_.]|__|[-]*/
|
||||
//
|
||||
// tag := /[\w][\w.-]{0,127}/
|
||||
//
|
||||
// digest := digest-algorithm ":" digest-hex
|
||||
// digest-algorithm := digest-algorithm-component [ digest-algorithm-separator digest-algorithm-component ]*
|
||||
// digest-algorithm-separator := /[+.-_]/
|
||||
// digest-algorithm-component := /[A-Za-z][A-Za-z0-9]*/
|
||||
// digest-hex := /[0-9a-fA-F]{32,}/ ; At least 128 bit digest value
|
||||
//
|
||||
// identifier := /[a-f0-9]{64}/
|
||||
// short-identifier := /[a-f0-9]{6,64}/
|
||||
package docker
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/opencontainers/go-digest"
|
||||
)
|
||||
|
||||
const (
|
||||
// NameTotalLengthMax is the maximum total number of characters in a repository name.
|
||||
NameTotalLengthMax = 255
|
||||
)
|
||||
|
||||
var (
|
||||
// ErrReferenceInvalidFormat represents an error while trying to parse a string as a reference.
|
||||
ErrReferenceInvalidFormat = errors.New("invalid reference format")
|
||||
|
||||
// ErrTagInvalidFormat represents an error while trying to parse a string as a tag.
|
||||
ErrTagInvalidFormat = errors.New("invalid tag format")
|
||||
|
||||
// ErrDigestInvalidFormat represents an error while trying to parse a string as a tag.
|
||||
ErrDigestInvalidFormat = errors.New("invalid digest format")
|
||||
|
||||
// ErrNameContainsUppercase is returned for invalid repository names that contain uppercase characters.
|
||||
ErrNameContainsUppercase = errors.New("repository name must be lowercase")
|
||||
|
||||
// ErrNameEmpty is returned for empty, invalid repository names.
|
||||
ErrNameEmpty = errors.New("repository name must have at least one component")
|
||||
|
||||
// ErrNameTooLong is returned when a repository name is longer than NameTotalLengthMax.
|
||||
ErrNameTooLong = fmt.Errorf("repository name must not be more than %v characters", NameTotalLengthMax)
|
||||
|
||||
// ErrNameNotCanonical is returned when a name is not canonical.
|
||||
ErrNameNotCanonical = errors.New("repository name must be canonical")
|
||||
)
|
||||
|
||||
// Reference is an opaque object reference identifier that may include
|
||||
// modifiers such as a hostname, name, tag, and digest.
|
||||
type Reference interface {
|
||||
// String returns the full reference
|
||||
String() string
|
||||
}
|
||||
|
||||
// Field provides a wrapper type for resolving correct reference types when
|
||||
// working with encoding.
|
||||
type Field struct {
|
||||
reference Reference
|
||||
}
|
||||
|
||||
// AsField wraps a reference in a Field for encoding.
|
||||
func AsField(reference Reference) Field {
|
||||
return Field{reference}
|
||||
}
|
||||
|
||||
// Reference unwraps the reference type from the field to
|
||||
// return the Reference object. This object should be
|
||||
// of the appropriate type to further check for different
|
||||
// reference types.
|
||||
func (f Field) Reference() Reference {
|
||||
return f.reference
|
||||
}
|
||||
|
||||
// MarshalText serializes the field to byte text which
|
||||
// is the string of the reference.
|
||||
func (f Field) MarshalText() (p []byte, err error) {
|
||||
return []byte(f.reference.String()), nil
|
||||
}
|
||||
|
||||
// UnmarshalText parses text bytes by invoking the
|
||||
// reference parser to ensure the appropriately
|
||||
// typed reference object is wrapped by field.
|
||||
func (f *Field) UnmarshalText(p []byte) error {
|
||||
r, err := Parse(string(p))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
f.reference = r
|
||||
return nil
|
||||
}
|
||||
|
||||
// Named is an object with a full name
|
||||
type Named interface {
|
||||
Reference
|
||||
Name() string
|
||||
}
|
||||
|
||||
// Tagged is an object which has a tag
|
||||
type Tagged interface {
|
||||
Reference
|
||||
Tag() string
|
||||
}
|
||||
|
||||
// NamedTagged is an object including a name and tag.
|
||||
type NamedTagged interface {
|
||||
Named
|
||||
Tag() string
|
||||
}
|
||||
|
||||
// Digested is an object which has a digest
|
||||
// in which it can be referenced by
|
||||
type Digested interface {
|
||||
Reference
|
||||
Digest() digest.Digest
|
||||
}
|
||||
|
||||
// Canonical reference is an object with a fully unique
|
||||
// name including a name with domain and digest
|
||||
type Canonical interface {
|
||||
Named
|
||||
Digest() digest.Digest
|
||||
}
|
||||
|
||||
// namedRepository is a reference to a repository with a name.
|
||||
// A namedRepository has both domain and path components.
|
||||
type namedRepository interface {
|
||||
Named
|
||||
Domain() string
|
||||
Path() string
|
||||
}
|
||||
|
||||
// Domain returns the domain part of the Named reference
|
||||
func Domain(named Named) string {
|
||||
if r, ok := named.(namedRepository); ok {
|
||||
return r.Domain()
|
||||
}
|
||||
domain, _ := splitDomain(named.Name())
|
||||
return domain
|
||||
}
|
||||
|
||||
// Path returns the name without the domain part of the Named reference
|
||||
func Path(named Named) (name string) {
|
||||
if r, ok := named.(namedRepository); ok {
|
||||
return r.Path()
|
||||
}
|
||||
_, path := splitDomain(named.Name())
|
||||
return path
|
||||
}
|
||||
|
||||
func splitDomain(name string) (string, string) {
|
||||
match := anchoredNameRegexp.FindStringSubmatch(name)
|
||||
if len(match) != 3 {
|
||||
return "", name
|
||||
}
|
||||
return match[1], match[2]
|
||||
}
|
||||
|
||||
// SplitHostname splits a named reference into a
|
||||
// hostname and name string. If no valid hostname is
|
||||
// found, the hostname is empty and the full value
|
||||
// is returned as name
|
||||
// DEPRECATED: Use Domain or Path
|
||||
func SplitHostname(named Named) (string, string) {
|
||||
if r, ok := named.(namedRepository); ok {
|
||||
return r.Domain(), r.Path()
|
||||
}
|
||||
return splitDomain(named.Name())
|
||||
}
|
||||
|
||||
// Parse parses s and returns a syntactically valid Reference.
|
||||
// If an error was encountered it is returned, along with a nil Reference.
|
||||
// NOTE: Parse will not handle short digests.
|
||||
func Parse(s string) (Reference, error) {
|
||||
matches := ReferenceRegexp.FindStringSubmatch(s)
|
||||
if matches == nil {
|
||||
if s == "" {
|
||||
return nil, ErrNameEmpty
|
||||
}
|
||||
if ReferenceRegexp.FindStringSubmatch(strings.ToLower(s)) != nil {
|
||||
return nil, ErrNameContainsUppercase
|
||||
}
|
||||
return nil, ErrReferenceInvalidFormat
|
||||
}
|
||||
|
||||
if len(matches[1]) > NameTotalLengthMax {
|
||||
return nil, ErrNameTooLong
|
||||
}
|
||||
|
||||
var repo repository
|
||||
|
||||
nameMatch := anchoredNameRegexp.FindStringSubmatch(matches[1])
|
||||
if len(nameMatch) == 3 {
|
||||
repo.domain = nameMatch[1]
|
||||
repo.path = nameMatch[2]
|
||||
} else {
|
||||
repo.domain = ""
|
||||
repo.path = matches[1]
|
||||
}
|
||||
|
||||
ref := reference{
|
||||
namedRepository: repo,
|
||||
tag: matches[2],
|
||||
}
|
||||
if matches[3] != "" {
|
||||
var err error
|
||||
ref.digest, err = digest.Parse(matches[3])
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
||||
r := getBestReferenceType(ref)
|
||||
if r == nil {
|
||||
return nil, ErrNameEmpty
|
||||
}
|
||||
|
||||
return r, nil
|
||||
}
|
||||
|
||||
// ParseNamed parses s and returns a syntactically valid reference implementing
|
||||
// the Named interface. The reference must have a name and be in the canonical
|
||||
// form, otherwise an error is returned.
|
||||
// If an error was encountered it is returned, along with a nil Reference.
|
||||
// NOTE: ParseNamed will not handle short digests.
|
||||
func ParseNamed(s string) (Named, error) {
|
||||
named, err := ParseNormalizedNamed(s)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if named.String() != s {
|
||||
return nil, ErrNameNotCanonical
|
||||
}
|
||||
return named, nil
|
||||
}
|
||||
|
||||
// WithName returns a named object representing the given string. If the input
|
||||
// is invalid ErrReferenceInvalidFormat will be returned.
|
||||
func WithName(name string) (Named, error) {
|
||||
if len(name) > NameTotalLengthMax {
|
||||
return nil, ErrNameTooLong
|
||||
}
|
||||
|
||||
match := anchoredNameRegexp.FindStringSubmatch(name)
|
||||
if match == nil || len(match) != 3 {
|
||||
return nil, ErrReferenceInvalidFormat
|
||||
}
|
||||
return repository{
|
||||
domain: match[1],
|
||||
path: match[2],
|
||||
}, nil
|
||||
}
|
||||
|
||||
// WithTag combines the name from "name" and the tag from "tag" to form a
|
||||
// reference incorporating both the name and the tag.
|
||||
func WithTag(name Named, tag string) (NamedTagged, error) {
|
||||
if !anchoredTagRegexp.MatchString(tag) {
|
||||
return nil, ErrTagInvalidFormat
|
||||
}
|
||||
var repo repository
|
||||
if r, ok := name.(namedRepository); ok {
|
||||
repo.domain = r.Domain()
|
||||
repo.path = r.Path()
|
||||
} else {
|
||||
repo.path = name.Name()
|
||||
}
|
||||
if canonical, ok := name.(Canonical); ok {
|
||||
return reference{
|
||||
namedRepository: repo,
|
||||
tag: tag,
|
||||
digest: canonical.Digest(),
|
||||
}, nil
|
||||
}
|
||||
return taggedReference{
|
||||
namedRepository: repo,
|
||||
tag: tag,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// WithDigest combines the name from "name" and the digest from "digest" to form
|
||||
// a reference incorporating both the name and the digest.
|
||||
func WithDigest(name Named, digest digest.Digest) (Canonical, error) {
|
||||
if !anchoredDigestRegexp.MatchString(digest.String()) {
|
||||
return nil, ErrDigestInvalidFormat
|
||||
}
|
||||
var repo repository
|
||||
if r, ok := name.(namedRepository); ok {
|
||||
repo.domain = r.Domain()
|
||||
repo.path = r.Path()
|
||||
} else {
|
||||
repo.path = name.Name()
|
||||
}
|
||||
if tagged, ok := name.(Tagged); ok {
|
||||
return reference{
|
||||
namedRepository: repo,
|
||||
tag: tagged.Tag(),
|
||||
digest: digest,
|
||||
}, nil
|
||||
}
|
||||
return canonicalReference{
|
||||
namedRepository: repo,
|
||||
digest: digest,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// TrimNamed removes any tag or digest from the named reference.
|
||||
func TrimNamed(ref Named) Named {
|
||||
repo := repository{}
|
||||
if r, ok := ref.(namedRepository); ok {
|
||||
repo.domain, repo.path = r.Domain(), r.Path()
|
||||
} else {
|
||||
repo.domain, repo.path = splitDomain(ref.Name())
|
||||
}
|
||||
return repo
|
||||
}
|
||||
|
||||
func getBestReferenceType(ref reference) Reference {
|
||||
if ref.Name() == "" {
|
||||
// Allow digest only references
|
||||
if ref.digest != "" {
|
||||
return digestReference(ref.digest)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
if ref.tag == "" {
|
||||
if ref.digest != "" {
|
||||
return canonicalReference{
|
||||
namedRepository: ref.namedRepository,
|
||||
digest: ref.digest,
|
||||
}
|
||||
}
|
||||
return ref.namedRepository
|
||||
}
|
||||
if ref.digest == "" {
|
||||
return taggedReference{
|
||||
namedRepository: ref.namedRepository,
|
||||
tag: ref.tag,
|
||||
}
|
||||
}
|
||||
|
||||
return ref
|
||||
}
|
||||
|
||||
type reference struct {
|
||||
namedRepository
|
||||
tag string
|
||||
digest digest.Digest
|
||||
}
|
||||
|
||||
func (r reference) String() string {
|
||||
return r.Name() + ":" + r.tag + "@" + r.digest.String()
|
||||
}
|
||||
|
||||
func (r reference) Tag() string {
|
||||
return r.tag
|
||||
}
|
||||
|
||||
func (r reference) Digest() digest.Digest {
|
||||
return r.digest
|
||||
}
|
||||
|
||||
type repository struct {
|
||||
domain string
|
||||
path string
|
||||
}
|
||||
|
||||
func (r repository) String() string {
|
||||
return r.Name()
|
||||
}
|
||||
|
||||
func (r repository) Name() string {
|
||||
if r.domain == "" {
|
||||
return r.path
|
||||
}
|
||||
return r.domain + "/" + r.path
|
||||
}
|
||||
|
||||
func (r repository) Domain() string {
|
||||
return r.domain
|
||||
}
|
||||
|
||||
func (r repository) Path() string {
|
||||
return r.path
|
||||
}
|
||||
|
||||
type digestReference digest.Digest
|
||||
|
||||
func (d digestReference) String() string {
|
||||
return digest.Digest(d).String()
|
||||
}
|
||||
|
||||
func (d digestReference) Digest() digest.Digest {
|
||||
return digest.Digest(d)
|
||||
}
|
||||
|
||||
type taggedReference struct {
|
||||
namedRepository
|
||||
tag string
|
||||
}
|
||||
|
||||
func (t taggedReference) String() string {
|
||||
return t.Name() + ":" + t.tag
|
||||
}
|
||||
|
||||
func (t taggedReference) Tag() string {
|
||||
return t.tag
|
||||
}
|
||||
|
||||
type canonicalReference struct {
|
||||
namedRepository
|
||||
digest digest.Digest
|
||||
}
|
||||
|
||||
func (c canonicalReference) String() string {
|
||||
return c.Name() + "@" + c.digest.String()
|
||||
}
|
||||
|
||||
func (c canonicalReference) Digest() digest.Digest {
|
||||
return c.digest
|
||||
}
|
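A sketch of building references programmatically with the constructors above; the name and tag values are arbitrary examples, and the import path is the vendored one from this diff:

package main

import (
	"fmt"

	"github.com/containerd/containerd/reference/docker"
)

func main() {
	named, err := docker.WithName("docker.io/library/busybox")
	if err != nil {
		panic(err)
	}
	tagged, err := docker.WithTag(named, "1.36")
	if err != nil {
		panic(err)
	}
	fmt.Println(tagged.String())                             // docker.io/library/busybox:1.36
	fmt.Println(docker.Domain(tagged), docker.Path(tagged))  // docker.io library/busybox
}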
191
vendor/github.com/containerd/containerd/reference/docker/regexp.go
generated
vendored
Normal file
191
vendor/github.com/containerd/containerd/reference/docker/regexp.go
generated
vendored
Normal file
@ -0,0 +1,191 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package docker
|
||||
|
||||
import "regexp"
|
||||
|
||||
var (
|
||||
// alphaNumeric defines the alpha numeric atom, typically a
|
||||
// component of names. This only allows lower case characters and digits.
|
||||
alphaNumeric = `[a-z0-9]+`
|
||||
|
||||
// separator defines the separators allowed to be embedded in name
|
||||
// components. This allow one period, one or two underscore and multiple
|
||||
// dashes. Repeated dashes and underscores are intentionally treated
|
||||
// differently. In order to support valid hostnames as name components,
|
||||
// supporting repeated dash was added. Additionally double underscore is
|
||||
// now allowed as a separator to loosen the restriction for previously
|
||||
// supported names.
|
||||
separator = `(?:[._]|__|[-]*)`
|
||||
|
||||
// nameComponent restricts registry path component names to start
|
||||
// with at least one letter or number, with following parts able to be
|
||||
// separated by one period, one or two underscore and multiple dashes.
|
||||
nameComponent = expression(
|
||||
alphaNumeric,
|
||||
optional(repeated(separator, alphaNumeric)))
|
||||
|
||||
// domainNameComponent restricts the registry domain component of a
|
||||
// repository name to start with a component as defined by DomainRegexp.
|
||||
domainNameComponent = `(?:[a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9-]*[a-zA-Z0-9])`
|
||||
|
||||
// ipv6address are enclosed between square brackets and may be represented
|
||||
// in many ways, see rfc5952. Only IPv6 in compressed or uncompressed format
|
||||
// are allowed, IPv6 zone identifiers (rfc6874) or Special addresses such as
|
||||
// IPv4-Mapped are deliberately excluded.
|
||||
ipv6address = expression(
|
||||
literal(`[`), `(?:[a-fA-F0-9:]+)`, literal(`]`),
|
||||
)
|
||||
|
||||
// domainName defines the structure of potential domain components
|
||||
// that may be part of image names. This is purposely a subset of what is
|
||||
// allowed by DNS to ensure backwards compatibility with Docker image
|
||||
// names. This includes IPv4 addresses on decimal format.
|
||||
domainName = expression(
|
||||
domainNameComponent,
|
||||
optional(repeated(literal(`.`), domainNameComponent)),
|
||||
)
|
||||
|
||||
// host defines the structure of potential domains based on the URI
|
||||
// Host subcomponent on rfc3986. It may be a subset of DNS domain name,
|
||||
// or an IPv4 address in decimal format, or an IPv6 address between square
|
||||
// brackets (excluding zone identifiers as defined by rfc6874 or special
|
||||
// addresses such as IPv4-Mapped).
|
||||
host = `(?:` + domainName + `|` + ipv6address + `)`
|
||||
|
||||
// allowed by the URI Host subcomponent on rfc3986 to ensure backwards
|
||||
// compatibility with Docker image names.
|
||||
domain = expression(
|
||||
host,
|
||||
optional(literal(`:`), `[0-9]+`))
|
||||
|
||||
	// DomainRegexp defines the structure of potential domain components
	// that may be part of image names. This is purposely a subset of what is
	// allowed by DNS to ensure backwards compatibility with Docker image
	// names.
	DomainRegexp = regexp.MustCompile(domain)

	tag = `[\w][\w.-]{0,127}`
	// TagRegexp matches valid tag names. From docker/docker:graph/tags.go.
	TagRegexp = regexp.MustCompile(tag)

	anchoredTag = anchored(tag)
	// anchoredTagRegexp matches valid tag names, anchored at the start and
	// end of the matched string.
	anchoredTagRegexp = regexp.MustCompile(anchoredTag)

	digestPat = `[A-Za-z][A-Za-z0-9]*(?:[-_+.][A-Za-z][A-Za-z0-9]*)*[:][[:xdigit:]]{32,}`
	// DigestRegexp matches valid digests.
	DigestRegexp = regexp.MustCompile(digestPat)

	anchoredDigest = anchored(digestPat)
	// anchoredDigestRegexp matches valid digests, anchored at the start and
	// end of the matched string.
	anchoredDigestRegexp = regexp.MustCompile(anchoredDigest)

	namePat = expression(
		optional(domain, literal(`/`)),
		nameComponent,
		optional(repeated(literal(`/`), nameComponent)))
	// NameRegexp is the format for the name component of references. The
	// regexp has capturing groups for the domain and name part omitting
	// the separating forward slash from either.
	NameRegexp = regexp.MustCompile(namePat)

	anchoredName = anchored(
		optional(capture(domain), literal(`/`)),
		capture(nameComponent,
			optional(repeated(literal(`/`), nameComponent))))
	// anchoredNameRegexp is used to parse a name value, capturing the
	// domain and trailing components.
	anchoredNameRegexp = regexp.MustCompile(anchoredName)

	referencePat = anchored(capture(namePat),
		optional(literal(":"), capture(tag)),
		optional(literal("@"), capture(digestPat)))
	// ReferenceRegexp is the full supported format of a reference. The regexp
	// is anchored and has capturing groups for name, tag, and digest
	// components.
	ReferenceRegexp = regexp.MustCompile(referencePat)

	identifier = `([a-f0-9]{64})`
	// IdentifierRegexp is the format for string identifier used as a
	// content addressable identifier using sha256. These identifiers
	// are like digests without the algorithm, since sha256 is used.
	IdentifierRegexp = regexp.MustCompile(identifier)

	shortIdentifier = `([a-f0-9]{6,64})`
	// ShortIdentifierRegexp is the format used to represent a prefix
	// of an identifier. A prefix may be used to match a sha256 identifier
	// within a list of trusted identifiers.
	ShortIdentifierRegexp = regexp.MustCompile(shortIdentifier)

	anchoredIdentifier = anchored(identifier)
	// anchoredIdentifierRegexp is used to check or match an
	// identifier value, anchored at start and end of string.
	anchoredIdentifierRegexp = regexp.MustCompile(anchoredIdentifier)
)

// literal compiles s into a literal regular expression, escaping any regexp
// reserved characters.
func literal(s string) string {
	re := regexp.MustCompile(regexp.QuoteMeta(s))

	if _, complete := re.LiteralPrefix(); !complete {
		panic("must be a literal")
	}

	return re.String()
}

// expression defines a full expression, where each regular expression must
// follow the previous.
func expression(res ...string) string {
	var s string
	for _, re := range res {
		s += re
	}

	return s
}

// optional wraps the expression in a non-capturing group and makes the
// production optional.
func optional(res ...string) string {
	return group(expression(res...)) + `?`
}

// repeated wraps the regexp in a non-capturing group to get one or more
// matches.
func repeated(res ...string) string {
	return group(expression(res...)) + `+`
}

// group wraps the regexp in a non-capturing group.
func group(res ...string) string {
	return `(?:` + expression(res...) + `)`
}

// capture wraps the expression in a capturing group.
func capture(res ...string) string {
	return `(` + expression(res...) + `)`
}

// anchored anchors the regular expression by adding start and end delimiters.
func anchored(res ...string) string {
	return `^` + expression(res...) + `$`
}
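
For orientation, a minimal sketch of how the exported patterns above can be exercised; the import path follows the vendor directory shown, and the reference string is purely illustrative:

package main

import (
	"fmt"

	refdocker "github.com/containerd/containerd/reference/docker"
)

func main() {
	// Hypothetical reference used only for illustration.
	ref := "registry.example.com:5000/library/ubuntu:22.04"

	// ReferenceRegexp is anchored and captures name, tag, and digest in that order.
	m := refdocker.ReferenceRegexp.FindStringSubmatch(ref)
	if m == nil {
		fmt.Println("not a valid reference")
		return
	}
	fmt.Println("name:", m[1])   // registry.example.com:5000/library/ubuntu
	fmt.Println("tag:", m[2])    // 22.04
	fmt.Println("digest:", m[3]) // empty: this reference carries no digest

	// TagRegexp and DomainRegexp are unanchored building blocks.
	fmt.Println(refdocker.TagRegexp.MatchString("22.04")) // true
}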
73
vendor/github.com/containerd/containerd/reference/docker/sort.go
generated
vendored
Normal file
73
vendor/github.com/containerd/containerd/reference/docker/sort.go
generated
vendored
Normal file
@ -0,0 +1,73 @@
/*
   Copyright The containerd Authors.

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
*/

package docker

import (
	"sort"
)

// Sort sorts string references preferring higher information references.
// The precedence is as follows:
// 1. Name + Tag + Digest
// 2. Name + Tag
// 3. Name + Digest
// 4. Name
// 5. Digest
// 6. Parse error
func Sort(references []string) []string {
	var prefs []Reference
	var bad []string

	for _, ref := range references {
		pref, err := ParseAnyReference(ref)
		if err != nil {
			bad = append(bad, ref)
		} else {
			prefs = append(prefs, pref)
		}
	}
	sort.Slice(prefs, func(a, b int) bool {
		ar := refRank(prefs[a])
		br := refRank(prefs[b])
		if ar == br {
			return prefs[a].String() < prefs[b].String()
		}
		return ar < br
	})
	sort.Strings(bad)
	var refs []string
	for _, pref := range prefs {
		refs = append(refs, pref.String())
	}
	return append(refs, bad...)
}

func refRank(ref Reference) uint8 {
	if _, ok := ref.(Named); ok {
		if _, ok = ref.(Tagged); ok {
			if _, ok = ref.(Digested); ok {
				return 1
			}
			return 2
		}
		if _, ok = ref.(Digested); ok {
			return 3
		}
		return 4
	}
	return 5
}
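
A minimal sketch of the precedence listed above, using hypothetical references and a placeholder digest:

package main

import (
	"fmt"
	"strings"

	refdocker "github.com/containerd/containerd/reference/docker"
)

func main() {
	// Placeholder digest value: 64 hex characters.
	dgst := "sha256:" + strings.Repeat("a", 64)

	// Hypothetical inputs covering several of the ranks listed above.
	refs := []string{
		"not a valid reference!",                 // parse error, sorted last
		"docker.io/library/busybox",              // name only
		"docker.io/library/busybox:1.36",         // name + tag
		"docker.io/library/busybox:1.36@" + dgst, // name + tag + digest
	}
	for _, r := range refdocker.Sort(refs) {
		fmt.Println(r)
	}
	// Expected order: name+tag+digest, then name+tag, then name, then the parse error.
}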
9
vendor/github.com/containerd/containerd/remotes/docker/auth/fetch.go
generated
vendored
9
vendor/github.com/containerd/containerd/remotes/docker/auth/fetch.go
generated
vendored
@ -29,7 +29,6 @@ import (
|
||||
"github.com/containerd/containerd/log"
|
||||
remoteserrors "github.com/containerd/containerd/remotes/errors"
|
||||
"github.com/containerd/containerd/version"
|
||||
"golang.org/x/net/context/ctxhttp"
|
||||
)
|
||||
|
||||
var (
|
||||
@ -115,7 +114,7 @@ func FetchTokenWithOAuth(ctx context.Context, client *http.Client, headers http.
|
||||
form.Set("access_type", "offline")
|
||||
}
|
||||
|
||||
req, err := http.NewRequest("POST", to.Realm, strings.NewReader(form.Encode()))
|
||||
req, err := http.NewRequestWithContext(ctx, "POST", to.Realm, strings.NewReader(form.Encode()))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -127,7 +126,7 @@ func FetchTokenWithOAuth(ctx context.Context, client *http.Client, headers http.
|
||||
req.Header.Set("User-Agent", "containerd/"+version.Version)
|
||||
}
|
||||
|
||||
resp, err := ctxhttp.Do(ctx, client, req)
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -162,7 +161,7 @@ type FetchTokenResponse struct {
|
||||
|
||||
// FetchToken fetches a token using a GET request
|
||||
func FetchToken(ctx context.Context, client *http.Client, headers http.Header, to TokenOptions) (*FetchTokenResponse, error) {
|
||||
req, err := http.NewRequest("GET", to.Realm, nil)
|
||||
req, err := http.NewRequestWithContext(ctx, "GET", to.Realm, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -194,7 +193,7 @@ func FetchToken(ctx context.Context, client *http.Client, headers http.Header, t
|
||||
|
||||
req.URL.RawQuery = reqParams.Encode()
|
||||
|
||||
resp, err := ctxhttp.Do(ctx, client, req)
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
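
The hunks above swap golang.org/x/net/context/ctxhttp for the standard library's context-aware requests; a minimal, containerd-independent sketch of that pattern (the URL is a placeholder):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// The request carries the context, so cancellation and deadlines
	// propagate without a ctxhttp wrapper.
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://registry.example.com/token", nil)
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}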
|
||||
|
7
vendor/github.com/containerd/containerd/remotes/docker/authorizer.go
generated
vendored
7
vendor/github.com/containerd/containerd/remotes/docker/authorizer.go
generated
vendored
@ -45,13 +45,6 @@ type dockerAuthorizer struct {
|
||||
onFetchRefreshToken OnFetchRefreshToken
|
||||
}
|
||||
|
||||
// NewAuthorizer creates a Docker authorizer using the provided function to
|
||||
// get credentials for the token server or basic auth.
|
||||
// Deprecated: Use NewDockerAuthorizer
|
||||
func NewAuthorizer(client *http.Client, f func(string) (string, string, error)) Authorizer {
|
||||
return NewDockerAuthorizer(WithAuthClient(client), WithAuthCreds(f))
|
||||
}
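
A minimal sketch of the replacement constructor named in the deprecation notice above; the credential values are placeholders, and only the options referenced in this hunk are used:

package main

import (
	"net/http"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	// Placeholder credential callback: host -> (username, secret).
	creds := func(host string) (string, string, error) {
		return "user", "secret", nil // hypothetical values
	}
	authorizer := docker.NewDockerAuthorizer(
		docker.WithAuthClient(http.DefaultClient),
		docker.WithAuthCreds(creds),
	)
	_ = authorizer // typically wired into the resolver's registry host configuration
}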
|
||||
|
||||
type authorizerConfig struct {
|
||||
credentials func(string) (string, string, error)
|
||||
client *http.Client
|
||||
|
55
vendor/github.com/containerd/containerd/remotes/docker/converter_fuzz.go
generated
vendored
Normal file
55
vendor/github.com/containerd/containerd/remotes/docker/converter_fuzz.go
generated
vendored
Normal file
@ -0,0 +1,55 @@
|
||||
//go:build gofuzz
|
||||
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package docker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"os"
|
||||
|
||||
fuzz "github.com/AdaLogics/go-fuzz-headers"
|
||||
"github.com/containerd/containerd/content/local"
|
||||
"github.com/containerd/containerd/log"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"github.com/sirupsen/logrus"
|
||||
)
|
||||
|
||||
func FuzzConvertManifest(data []byte) int {
|
||||
ctx := context.Background()
|
||||
|
||||
// Do not log the message below
|
||||
// level=warning msg="do nothing for media type: ..."
|
||||
log.G(ctx).Logger.SetLevel(logrus.PanicLevel)
|
||||
|
||||
f := fuzz.NewConsumer(data)
|
||||
desc := ocispec.Descriptor{}
|
||||
err := f.GenerateStruct(&desc)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
tmpdir, err := os.MkdirTemp("", "fuzzing-")
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
cs, err := local.NewStore(tmpdir)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
_, _ = ConvertManifest(ctx, cs, desc)
|
||||
return 1
|
||||
}
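
The harness above targets the classic go-fuzz driver (gofuzz build tag); a hypothetical sketch of reusing the same entry point from Go's native fuzzing, assuming it is placed in a _test.go file of the same package:

//go:build gofuzz

package docker

import "testing"

// FuzzConvertManifestNative is a hypothetical wrapper that feeds the native
// fuzzing engine's inputs into the go-fuzz style harness above.
func FuzzConvertManifestNative(f *testing.F) {
	f.Add([]byte{})
	f.Fuzz(func(t *testing.T, data []byte) {
		_ = FuzzConvertManifest(data)
	})
}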
|
108
vendor/github.com/containerd/containerd/remotes/docker/fetcher.go
generated
vendored
108
vendor/github.com/containerd/containerd/remotes/docker/fetcher.go
generated
vendored
@ -29,6 +29,7 @@ import (
|
||||
"github.com/containerd/containerd/errdefs"
|
||||
"github.com/containerd/containerd/images"
|
||||
"github.com/containerd/containerd/log"
|
||||
digest "github.com/opencontainers/go-digest"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
|
||||
@ -52,18 +53,17 @@ func (r dockerFetcher) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.R
|
||||
return newHTTPReadSeeker(desc.Size, func(offset int64) (io.ReadCloser, error) {
|
||||
// firstly try fetch via external urls
|
||||
for _, us := range desc.URLs {
|
||||
ctx = log.WithLogger(ctx, log.G(ctx).WithField("url", us))
|
||||
|
||||
u, err := url.Parse(us)
|
||||
if err != nil {
|
||||
log.G(ctx).WithError(err).Debug("failed to parse")
|
||||
log.G(ctx).WithError(err).Debugf("failed to parse %q", us)
|
||||
continue
|
||||
}
|
||||
if u.Scheme != "http" && u.Scheme != "https" {
|
||||
log.G(ctx).Debug("non-http(s) alternative url is unsupported")
|
||||
continue
|
||||
}
|
||||
log.G(ctx).Debug("trying alternative url")
|
||||
ctx = log.WithLogger(ctx, log.G(ctx).WithField("url", u))
|
||||
log.G(ctx).Info("request")
|
||||
|
||||
// Try this first, parse it
|
||||
host := RegistryHost{
|
||||
@ -151,8 +151,106 @@ func (r dockerFetcher) Fetch(ctx context.Context, desc ocispec.Descriptor) (io.R
|
||||
})
|
||||
}
|
||||
|
||||
func (r dockerFetcher) createGetReq(ctx context.Context, host RegistryHost, ps ...string) (*request, int64, error) {
|
||||
headReq := r.request(host, http.MethodHead, ps...)
|
||||
if err := headReq.addNamespace(r.refspec.Hostname()); err != nil {
|
||||
return nil, 0, err
|
||||
}
|
||||
|
||||
headResp, err := headReq.doWithRetries(ctx, nil)
|
||||
if err != nil {
|
||||
return nil, 0, err
|
||||
}
|
||||
if headResp.Body != nil {
|
||||
headResp.Body.Close()
|
||||
}
|
||||
if headResp.StatusCode > 299 {
|
||||
return nil, 0, fmt.Errorf("unexpected HEAD status code %v: %s", headReq.String(), headResp.Status)
|
||||
}
|
||||
|
||||
getReq := r.request(host, http.MethodGet, ps...)
|
||||
if err := getReq.addNamespace(r.refspec.Hostname()); err != nil {
|
||||
return nil, 0, err
|
||||
}
|
||||
return getReq, headResp.ContentLength, nil
|
||||
}
|
||||
|
||||
func (r dockerFetcher) FetchByDigest(ctx context.Context, dgst digest.Digest) (io.ReadCloser, ocispec.Descriptor, error) {
|
||||
var desc ocispec.Descriptor
|
||||
ctx = log.WithLogger(ctx, log.G(ctx).WithField("digest", dgst))
|
||||
|
||||
hosts := r.filterHosts(HostCapabilityPull)
|
||||
if len(hosts) == 0 {
|
||||
return nil, desc, fmt.Errorf("no pull hosts: %w", errdefs.ErrNotFound)
|
||||
}
|
||||
|
||||
ctx, err := ContextWithRepositoryScope(ctx, r.refspec, false)
|
||||
if err != nil {
|
||||
return nil, desc, err
|
||||
}
|
||||
|
||||
var (
|
||||
getReq *request
|
||||
sz int64
|
||||
firstErr error
|
||||
)
|
||||
|
||||
for _, host := range r.hosts {
|
||||
getReq, sz, err = r.createGetReq(ctx, host, "blobs", dgst.String())
|
||||
if err == nil {
|
||||
break
|
||||
}
|
||||
// Store the error for referencing later
|
||||
if firstErr == nil {
|
||||
firstErr = err
|
||||
}
|
||||
}
|
||||
|
||||
if getReq == nil {
|
||||
// Fall back to the "manifests" endpoint
|
||||
for _, host := range r.hosts {
|
||||
getReq, sz, err = r.createGetReq(ctx, host, "manifests", dgst.String())
|
||||
if err == nil {
|
||||
break
|
||||
}
|
||||
// Store the error for referencing later
|
||||
if firstErr == nil {
|
||||
firstErr = err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if getReq == nil {
|
||||
if errdefs.IsNotFound(firstErr) {
|
||||
firstErr = fmt.Errorf("could not fetch content %v from remote: %w", dgst, errdefs.ErrNotFound)
|
||||
}
|
||||
if firstErr == nil {
|
||||
firstErr = fmt.Errorf("could not fetch content %v from remote: (unknown)", dgst)
|
||||
}
|
||||
return nil, desc, firstErr
|
||||
}
|
||||
|
||||
seeker, err := newHTTPReadSeeker(sz, func(offset int64) (io.ReadCloser, error) {
|
||||
return r.open(ctx, getReq, "", offset)
|
||||
})
|
||||
if err != nil {
|
||||
return nil, desc, err
|
||||
}
|
||||
|
||||
desc = ocispec.Descriptor{
|
||||
MediaType: "application/octet-stream",
|
||||
Digest: dgst,
|
||||
Size: sz,
|
||||
}
|
||||
return seeker, desc, nil
|
||||
}
|
||||
|
||||
func (r dockerFetcher) open(ctx context.Context, req *request, mediatype string, offset int64) (_ io.ReadCloser, retErr error) {
|
||||
req.header.Set("Accept", strings.Join([]string{mediatype, `*/*`}, ", "))
|
||||
if mediatype == "" {
|
||||
req.header.Set("Accept", "*/*")
|
||||
} else {
|
||||
req.header.Set("Accept", strings.Join([]string{mediatype, `*/*`}, ", "))
|
||||
}
|
||||
|
||||
if offset > 0 {
|
||||
// Note: "Accept-Ranges: bytes" cannot be trusted as some endpoints
|
||||
|
81
vendor/github.com/containerd/containerd/remotes/docker/fetcher_fuzz.go
generated
vendored
Normal file
81
vendor/github.com/containerd/containerd/remotes/docker/fetcher_fuzz.go
generated
vendored
Normal file
@ -0,0 +1,81 @@
|
||||
//go:build gofuzz
|
||||
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package docker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"net/url"
|
||||
|
||||
refDocker "github.com/containerd/containerd/reference/docker"
|
||||
)
|
||||
|
||||
func FuzzFetcher(data []byte) int {
|
||||
dataLen := len(data)
|
||||
if dataLen == 0 {
|
||||
return -1
|
||||
}
|
||||
|
||||
s := httptest.NewServer(http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
|
||||
rw.Header().Set("content-range", fmt.Sprintf("bytes %d-%d/%d", 0, dataLen-1, dataLen))
|
||||
rw.Header().Set("content-length", fmt.Sprintf("%d", dataLen))
|
||||
rw.Write(data)
|
||||
}))
|
||||
defer s.Close()
|
||||
|
||||
u, err := url.Parse(s.URL)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
f := dockerFetcher{&dockerBase{
|
||||
repository: "nonempty",
|
||||
}}
|
||||
host := RegistryHost{
|
||||
Client: s.Client(),
|
||||
Host: u.Host,
|
||||
Scheme: u.Scheme,
|
||||
Path: u.Path,
|
||||
}
|
||||
|
||||
ctx := context.Background()
|
||||
req := f.request(host, http.MethodGet)
|
||||
rc, err := f.open(ctx, req, "", 0)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
b, err := io.ReadAll(rc)
|
||||
if err != nil {
|
||||
return 0
|
||||
}
|
||||
|
||||
expected := data
|
||||
if len(b) != len(expected) {
|
||||
panic("len of request is not equal to len of expected but should be")
|
||||
}
|
||||
return 1
|
||||
}
|
||||
|
||||
func FuzzParseDockerRef(data []byte) int {
|
||||
_, _ = refDocker.ParseDockerRef(string(data))
|
||||
return 1
|
||||
}
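
For reference, ParseDockerRef (exercised by the fuzzer above) normalizes short names; a minimal sketch with a hypothetical input:

package main

import (
	"fmt"

	refdocker "github.com/containerd/containerd/reference/docker"
)

func main() {
	named, err := refdocker.ParseDockerRef("ubuntu") // hypothetical short reference
	if err != nil {
		panic(err)
	}
	// The short name is normalized to a fully qualified, tagged reference.
	fmt.Println(named.String()) // docker.io/library/ubuntu:latest
}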
|
3
vendor/github.com/containerd/containerd/remotes/docker/pusher.go
generated
vendored
3
vendor/github.com/containerd/containerd/remotes/docker/pusher.go
generated
vendored
@ -191,6 +191,9 @@ func (p dockerPusher) push(ctx context.Context, desc ocispec.Descriptor, ref str
|
||||
if resp == nil {
|
||||
resp, err = req.doWithRetries(ctx, nil)
|
||||
if err != nil {
|
||||
if errors.Is(err, ErrInvalidAuthorization) {
|
||||
return nil, fmt.Errorf("push access denied, repository does not exist or may require authorization: %w", err)
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
|
55
vendor/github.com/containerd/containerd/remotes/docker/resolver.go
generated
vendored
55
vendor/github.com/containerd/containerd/remotes/docker/resolver.go
generated
vendored
@ -32,12 +32,13 @@ import (
|
||||
"github.com/containerd/containerd/log"
|
||||
"github.com/containerd/containerd/reference"
|
||||
"github.com/containerd/containerd/remotes"
|
||||
"github.com/containerd/containerd/remotes/docker/schema1"
|
||||
"github.com/containerd/containerd/remotes/docker/schema1" //nolint:staticcheck // Ignore SA1019. Need to keep deprecated package for compatibility.
|
||||
remoteerrors "github.com/containerd/containerd/remotes/errors"
|
||||
"github.com/containerd/containerd/tracing"
|
||||
"github.com/containerd/containerd/version"
|
||||
digest "github.com/opencontainers/go-digest"
|
||||
"github.com/opencontainers/go-digest"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"github.com/sirupsen/logrus"
|
||||
"golang.org/x/net/context/ctxhttp"
|
||||
)
|
||||
|
||||
var (
|
||||
@ -70,6 +71,9 @@ type Authorizer interface {
|
||||
// unmodified. It may also add an `Authorization` header as
|
||||
// "bearer <some bearer token>"
|
||||
// "basic <base64 encoded credentials>"
|
||||
//
|
||||
// It may return remotes/errors.ErrUnexpectedStatus, which for example,
|
||||
// can be used by the caller to find out the status code returned by the registry.
|
||||
Authorize(context.Context, *http.Request) error
|
||||
|
||||
// AddResponses adds a 401 response for the authorizer to consider when
|
||||
@ -152,7 +156,8 @@ func NewResolver(options ResolverOptions) remotes.Resolver {
|
||||
images.MediaTypeDockerSchema2Manifest,
|
||||
images.MediaTypeDockerSchema2ManifestList,
|
||||
ocispec.MediaTypeImageManifest,
|
||||
ocispec.MediaTypeImageIndex, "*/*"}, ", "))
|
||||
ocispec.MediaTypeImageIndex, "*/*",
|
||||
}, ", "))
|
||||
} else {
|
||||
resolveHeader["Accept"] = options.Headers["Accept"]
|
||||
delete(options.Headers, "Accept")
|
||||
@ -299,11 +304,11 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
|
||||
if resp.StatusCode > 399 {
|
||||
// Set firstErr when encountering the first non-404 status code.
|
||||
if firstErr == nil {
|
||||
firstErr = fmt.Errorf("pulling from host %s failed with status code %v: %v", host.Host, u, resp.Status)
|
||||
firstErr = remoteerrors.NewUnexpectedStatusErr(resp)
|
||||
}
|
||||
continue // try another host
|
||||
}
|
||||
return "", ocispec.Descriptor{}, fmt.Errorf("pulling from host %s failed with unexpected status code %v: %v", host.Host, u, resp.Status)
|
||||
return "", ocispec.Descriptor{}, remoteerrors.NewUnexpectedStatusErr(resp)
|
||||
}
|
||||
size := resp.ContentLength
|
||||
contentType := getManifestMediaType(resp)
|
||||
@ -340,26 +345,31 @@ func (r *dockerResolver) Resolve(ctx context.Context, ref string) (string, ocisp
|
||||
if err != nil {
|
||||
return "", ocispec.Descriptor{}, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
bodyReader := countingReader{reader: resp.Body}
|
||||
|
||||
contentType = getManifestMediaType(resp)
|
||||
if dgst == "" {
|
||||
err = func() error {
|
||||
defer resp.Body.Close()
|
||||
if dgst != "" {
|
||||
_, err = io.Copy(io.Discard, &bodyReader)
|
||||
return err
|
||||
}
|
||||
|
||||
if contentType == images.MediaTypeDockerSchema1Manifest {
|
||||
b, err := schema1.ReadStripSignature(&bodyReader)
|
||||
if err != nil {
|
||||
return "", ocispec.Descriptor{}, err
|
||||
return err
|
||||
}
|
||||
|
||||
dgst = digest.FromBytes(b)
|
||||
} else {
|
||||
dgst, err = digest.FromReader(&bodyReader)
|
||||
if err != nil {
|
||||
return "", ocispec.Descriptor{}, err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
} else if _, err := io.Copy(io.Discard, &bodyReader); err != nil {
|
||||
|
||||
dgst, err = digest.FromReader(&bodyReader)
|
||||
return err
|
||||
}()
|
||||
if err != nil {
|
||||
return "", ocispec.Descriptor{}, err
|
||||
}
|
||||
size = bodyReader.bytesRead
|
||||
@ -525,7 +535,7 @@ type request struct {
|
||||
|
||||
func (r *request) do(ctx context.Context) (*http.Response, error) {
|
||||
u := r.host.Scheme + "://" + r.host.Host + r.path
|
||||
req, err := http.NewRequest(r.method, u, nil)
|
||||
req, err := http.NewRequestWithContext(ctx, r.method, u, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@ -551,7 +561,7 @@ func (r *request) do(ctx context.Context) (*http.Response, error) {
|
||||
return nil, fmt.Errorf("failed to authorize: %w", err)
|
||||
}
|
||||
|
||||
var client = &http.Client{}
|
||||
client := &http.Client{}
|
||||
if r.host.Client != nil {
|
||||
*client = *r.host.Client
|
||||
}
|
||||
@ -566,11 +576,18 @@ func (r *request) do(ctx context.Context) (*http.Response, error) {
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
resp, err := ctxhttp.Do(ctx, client, req)
|
||||
_, httpSpan := tracing.StartSpan(
|
||||
ctx,
|
||||
tracing.Name("remotes.docker.resolver", "HTTPRequest"),
|
||||
tracing.WithHTTPRequest(req),
|
||||
)
|
||||
defer httpSpan.End()
|
||||
resp, err := client.Do(req)
|
||||
if err != nil {
|
||||
httpSpan.SetStatus(err)
|
||||
return nil, fmt.Errorf("failed to do request: %w", err)
|
||||
}
|
||||
httpSpan.SetAttributes(tracing.HTTPStatusCodeAttributes(resp.StatusCode)...)
|
||||
log.G(ctx).WithFields(responseFields(resp)).Debug("fetch response received")
|
||||
return resp, nil
|
||||
}
|
||||
|
12
vendor/github.com/containerd/containerd/remotes/docker/schema1/converter.go
generated
vendored
12
vendor/github.com/containerd/containerd/remotes/docker/schema1/converter.go
generated
vendored
@ -14,6 +14,9 @@
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// Package schema1 provides a converter to fetch an image formatted in Docker Image Manifest v2, Schema 1.
|
||||
//
|
||||
// Deprecated: use images formatted in Docker Image Manifest v2, Schema 2, or OCI Image Spec v1.
|
||||
package schema1
|
||||
|
||||
import (
|
||||
@ -33,6 +36,7 @@ import (
|
||||
"github.com/containerd/containerd/content"
|
||||
"github.com/containerd/containerd/errdefs"
|
||||
"github.com/containerd/containerd/images"
|
||||
"github.com/containerd/containerd/labels"
|
||||
"github.com/containerd/containerd/log"
|
||||
"github.com/containerd/containerd/remotes"
|
||||
digest "github.com/opencontainers/go-digest"
|
||||
@ -363,12 +367,12 @@ func (c *Converter) fetchBlob(ctx context.Context, desc ocispec.Descriptor) erro
|
||||
cinfo := content.Info{
|
||||
Digest: desc.Digest,
|
||||
Labels: map[string]string{
|
||||
"containerd.io/uncompressed": state.diffID.String(),
|
||||
labels.LabelUncompressed: state.diffID.String(),
|
||||
labelDockerSchema1EmptyLayer: strconv.FormatBool(state.empty),
|
||||
},
|
||||
}
|
||||
|
||||
if _, err := c.contentStore.Update(ctx, cinfo, "labels.containerd.io/uncompressed", fmt.Sprintf("labels.%s", labelDockerSchema1EmptyLayer)); err != nil {
|
||||
if _, err := c.contentStore.Update(ctx, cinfo, "labels."+labels.LabelUncompressed, fmt.Sprintf("labels.%s", labelDockerSchema1EmptyLayer)); err != nil {
|
||||
return fmt.Errorf("failed to update uncompressed label: %w", err)
|
||||
}
|
||||
|
||||
@ -387,7 +391,7 @@ func (c *Converter) reuseLabelBlobState(ctx context.Context, desc ocispec.Descri
|
||||
}
|
||||
desc.Size = cinfo.Size
|
||||
|
||||
diffID, ok := cinfo.Labels["containerd.io/uncompressed"]
|
||||
diffID, ok := cinfo.Labels[labels.LabelUncompressed]
|
||||
if !ok {
|
||||
return false, nil
|
||||
}
|
||||
@ -406,7 +410,7 @@ func (c *Converter) reuseLabelBlobState(ctx context.Context, desc ocispec.Descri
|
||||
bState := blobState{empty: isEmpty}
|
||||
|
||||
if bState.diffID, err = digest.Parse(diffID); err != nil {
|
||||
log.G(ctx).WithField("id", desc.Digest).Warnf("failed to parse digest from label containerd.io/uncompressed: %v", diffID)
|
||||
log.G(ctx).WithField("id", desc.Digest).Warnf("failed to parse digest from label %s: %v", labels.LabelUncompressed, diffID)
|
||||
return false, nil
|
||||
}
|
||||
|
||||
|
2
vendor/github.com/containerd/containerd/remotes/errors/errors.go
generated
vendored
2
vendor/github.com/containerd/containerd/remotes/errors/errors.go
generated
vendored
@ -33,7 +33,7 @@ type ErrUnexpectedStatus struct {
|
||||
}
|
||||
|
||||
func (e ErrUnexpectedStatus) Error() string {
|
||||
return fmt.Sprintf("unexpected status: %s", e.Status)
|
||||
return fmt.Sprintf("unexpected status from %s request to %s: %s", e.RequestMethod, e.RequestURL, e.Status)
|
||||
}
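
Callers typically unwrap this error with errors.As to inspect the fields now included in the message; a minimal sketch, where resolveAndReport and its arguments are hypothetical:

package example

import (
	"context"
	"errors"
	"log"

	"github.com/containerd/containerd/remotes"
	remoteserrors "github.com/containerd/containerd/remotes/errors"
)

// resolveAndReport is a hypothetical helper showing how a caller can surface
// the unexpected registry status carried by the error above.
func resolveAndReport(ctx context.Context, resolver remotes.Resolver, ref string) {
	_, _, err := resolver.Resolve(ctx, ref)
	if err != nil {
		var unexpected remoteserrors.ErrUnexpectedStatus
		if errors.As(err, &unexpected) {
			log.Printf("%s %s returned %s", unexpected.RequestMethod, unexpected.RequestURL, unexpected.Status)
			return
		}
		log.Printf("resolve failed: %v", err)
	}
}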
|
||||
|
||||
// NewUnexpectedStatusErr creates an ErrUnexpectedStatus from HTTP response
|
||||
|
56
vendor/github.com/containerd/containerd/remotes/handlers.go
generated
vendored
56
vendor/github.com/containerd/containerd/remotes/handlers.go
generated
vendored
@ -100,20 +100,21 @@ func FetchHandler(ingester content.Ingester, fetcher Fetcher) images.HandlerFunc
|
||||
case images.MediaTypeDockerSchema1Manifest:
|
||||
return nil, fmt.Errorf("%v not supported", desc.MediaType)
|
||||
default:
|
||||
err := fetch(ctx, ingester, fetcher, desc)
|
||||
err := Fetch(ctx, ingester, fetcher, desc)
|
||||
if errdefs.IsAlreadyExists(err) {
|
||||
return nil, nil
|
||||
}
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func fetch(ctx context.Context, ingester content.Ingester, fetcher Fetcher, desc ocispec.Descriptor) error {
|
||||
// Fetch fetches the given digest into the provided ingester
|
||||
func Fetch(ctx context.Context, ingester content.Ingester, fetcher Fetcher, desc ocispec.Descriptor) error {
|
||||
log.G(ctx).Debug("fetch")
|
||||
|
||||
cw, err := content.OpenWriter(ctx, ingester, content.WithRef(MakeRefKey(ctx, desc)), content.WithDescriptor(desc))
|
||||
if err != nil {
|
||||
if errdefs.IsAlreadyExists(err) {
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
defer cw.Close()
|
||||
@ -135,7 +136,7 @@ func fetch(ctx context.Context, ingester content.Ingester, fetcher Fetcher, desc
|
||||
if err != nil && !errdefs.IsAlreadyExists(err) {
|
||||
return fmt.Errorf("failed commit on ref %q: %w", ws.Ref, err)
|
||||
}
|
||||
return nil
|
||||
return err
|
||||
}
|
||||
|
||||
rc, err := fetcher.Fetch(ctx, desc)
|
||||
@ -197,17 +198,25 @@ func push(ctx context.Context, provider content.Provider, pusher Pusher, desc oc
|
||||
//
|
||||
// Base handlers can be provided which will be called before any push specific
|
||||
// handlers.
|
||||
func PushContent(ctx context.Context, pusher Pusher, desc ocispec.Descriptor, store content.Store, limiter *semaphore.Weighted, platform platforms.MatchComparer, wrapper func(h images.Handler) images.Handler) error {
|
||||
//
|
||||
// If the passed in content.Provider is also a content.Manager then this will
|
||||
// also annotate the distribution sources in the manager.
|
||||
func PushContent(ctx context.Context, pusher Pusher, desc ocispec.Descriptor, store content.Provider, limiter *semaphore.Weighted, platform platforms.MatchComparer, wrapper func(h images.Handler) images.Handler) error {
|
||||
|
||||
var m sync.Mutex
|
||||
manifestStack := []ocispec.Descriptor{}
|
||||
manifests := []ocispec.Descriptor{}
|
||||
indexStack := []ocispec.Descriptor{}
|
||||
|
||||
filterHandler := images.HandlerFunc(func(ctx context.Context, desc ocispec.Descriptor) ([]ocispec.Descriptor, error) {
|
||||
switch desc.MediaType {
|
||||
case images.MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest,
|
||||
images.MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
|
||||
case images.MediaTypeDockerSchema2Manifest, ocispec.MediaTypeImageManifest:
|
||||
m.Lock()
|
||||
manifestStack = append(manifestStack, desc)
|
||||
manifests = append(manifests, desc)
|
||||
m.Unlock()
|
||||
return nil, images.ErrStopHandler
|
||||
case images.MediaTypeDockerSchema2ManifestList, ocispec.MediaTypeImageIndex:
|
||||
m.Lock()
|
||||
indexStack = append(indexStack, desc)
|
||||
m.Unlock()
|
||||
return nil, images.ErrStopHandler
|
||||
default:
|
||||
@ -219,13 +228,14 @@ func PushContent(ctx context.Context, pusher Pusher, desc ocispec.Descriptor, st
|
||||
|
||||
platformFilterhandler := images.FilterPlatforms(images.ChildrenHandler(store), platform)
|
||||
|
||||
annotateHandler := annotateDistributionSourceHandler(platformFilterhandler, store)
|
||||
var handler images.Handler
|
||||
if m, ok := store.(content.Manager); ok {
|
||||
annotateHandler := annotateDistributionSourceHandler(platformFilterhandler, m)
|
||||
handler = images.Handlers(annotateHandler, filterHandler, pushHandler)
|
||||
} else {
|
||||
handler = images.Handlers(platformFilterhandler, filterHandler, pushHandler)
|
||||
}
|
||||
|
||||
var handler images.Handler = images.Handlers(
|
||||
annotateHandler,
|
||||
filterHandler,
|
||||
pushHandler,
|
||||
)
|
||||
if wrapper != nil {
|
||||
handler = wrapper(handler)
|
||||
}
|
||||
@ -234,16 +244,18 @@ func PushContent(ctx context.Context, pusher Pusher, desc ocispec.Descriptor, st
|
||||
return err
|
||||
}
|
||||
|
||||
if err := images.Dispatch(ctx, pushHandler, limiter, manifests...); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// Iterate in reverse order as seen, parent always uploaded after child
|
||||
for i := len(manifestStack) - 1; i >= 0; i-- {
|
||||
_, err := pushHandler(ctx, manifestStack[i])
|
||||
for i := len(indexStack) - 1; i >= 0; i-- {
|
||||
err := images.Dispatch(ctx, pushHandler, limiter, indexStack[i])
|
||||
if err != nil {
|
||||
// TODO(estesp): until we have a more complete method for index push, we need to report
|
||||
// missing dependencies in an index/manifest list by sensing the "400 Bad Request"
|
||||
// as a marker for this problem
|
||||
if (manifestStack[i].MediaType == ocispec.MediaTypeImageIndex ||
|
||||
manifestStack[i].MediaType == images.MediaTypeDockerSchema2ManifestList) &&
|
||||
errors.Unwrap(err) != nil && strings.Contains(errors.Unwrap(err).Error(), "400 Bad Request") {
|
||||
if errors.Unwrap(err) != nil && strings.Contains(errors.Unwrap(err).Error(), "400 Bad Request") {
|
||||
return fmt.Errorf("manifest list/index references to blobs and/or manifests are missing in your target registry: %w", err)
|
||||
}
|
||||
return err
|
||||
|
14
vendor/github.com/containerd/containerd/remotes/resolver.go
generated
vendored
14
vendor/github.com/containerd/containerd/remotes/resolver.go
generated
vendored
@ -21,6 +21,7 @@ import (
|
||||
"io"
|
||||
|
||||
"github.com/containerd/containerd/content"
|
||||
"github.com/opencontainers/go-digest"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
)
|
||||
|
||||
@ -50,12 +51,23 @@ type Resolver interface {
|
||||
Pusher(ctx context.Context, ref string) (Pusher, error)
|
||||
}
|
||||
|
||||
// Fetcher fetches content
|
||||
// Fetcher fetches content.
|
||||
// A fetcher implementation may implement the FetcherByDigest interface too.
|
||||
type Fetcher interface {
|
||||
// Fetch the resource identified by the descriptor.
|
||||
Fetch(ctx context.Context, desc ocispec.Descriptor) (io.ReadCloser, error)
|
||||
}
|
||||
|
||||
// FetcherByDigest fetches content by the digest.
|
||||
type FetcherByDigest interface {
|
||||
// FetchByDigest fetches the resource identified by the digest.
|
||||
//
|
||||
// FetcherByDigest usually returns an incomplete descriptor.
|
||||
// Typically, the media type is always set to "application/octet-stream",
|
||||
// and the annotations are unset.
|
||||
FetchByDigest(ctx context.Context, dgst digest.Digest) (io.ReadCloser, ocispec.Descriptor, error)
|
||||
}
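
A minimal sketch of how a caller might use this optional interface; fetchBlob and its arguments are hypothetical:

package example

import (
	"context"
	"fmt"
	"io"

	"github.com/containerd/containerd/remotes"
	"github.com/opencontainers/go-digest"
)

// fetchBlob is a hypothetical helper: it prefers FetchByDigest when the
// fetcher supports it, and reports the (incomplete) descriptor it returns.
func fetchBlob(ctx context.Context, fetcher remotes.Fetcher, dgst digest.Digest) error {
	fbd, ok := fetcher.(remotes.FetcherByDigest)
	if !ok {
		return fmt.Errorf("fetcher does not implement FetcherByDigest")
	}
	rc, desc, err := fbd.FetchByDigest(ctx, dgst)
	if err != nil {
		return err
	}
	defer rc.Close()
	// MediaType is typically "application/octet-stream" and annotations are unset.
	fmt.Println(desc.MediaType, desc.Digest, desc.Size)
	_, err = io.Copy(io.Discard, rc)
	return err
}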
|
||||
|
||||
// Pusher pushes content
|
||||
type Pusher interface {
|
||||
// Push returns a content writer for the given resource identified
|
||||
|
101
vendor/github.com/containerd/containerd/services/content/contentserver/contentserver.go
generated
vendored
101
vendor/github.com/containerd/containerd/services/content/contentserver/contentserver.go
generated
vendored
@ -26,7 +26,8 @@ import (
|
||||
"github.com/containerd/containerd/content"
|
||||
"github.com/containerd/containerd/errdefs"
|
||||
"github.com/containerd/containerd/log"
|
||||
ptypes "github.com/gogo/protobuf/types"
|
||||
"github.com/containerd/containerd/protobuf"
|
||||
ptypes "github.com/containerd/containerd/protobuf/types"
|
||||
digest "github.com/opencontainers/go-digest"
|
||||
ocispec "github.com/opencontainers/image-spec/specs-go/v1"
|
||||
"github.com/sirupsen/logrus"
|
||||
@ -37,6 +38,7 @@ import (
|
||||
|
||||
type service struct {
|
||||
store content.Store
|
||||
api.UnimplementedContentServer
|
||||
}
|
||||
|
||||
var bufPool = sync.Pool{
|
||||
@ -57,11 +59,12 @@ func (s *service) Register(server *grpc.Server) error {
|
||||
}
|
||||
|
||||
func (s *service) Info(ctx context.Context, req *api.InfoRequest) (*api.InfoResponse, error) {
|
||||
if err := req.Digest.Validate(); err != nil {
|
||||
dg, err := digest.Parse(req.Digest)
|
||||
if err != nil {
|
||||
return nil, status.Errorf(codes.InvalidArgument, "%q failed validation", req.Digest)
|
||||
}
|
||||
|
||||
bi, err := s.store.Info(ctx, req.Digest)
|
||||
bi, err := s.store.Info(ctx, dg)
|
||||
if err != nil {
|
||||
return nil, errdefs.ToGRPC(err)
|
||||
}
|
||||
@ -72,7 +75,8 @@ func (s *service) Info(ctx context.Context, req *api.InfoRequest) (*api.InfoResp
|
||||
}
|
||||
|
||||
func (s *service) Update(ctx context.Context, req *api.UpdateRequest) (*api.UpdateResponse, error) {
|
||||
if err := req.Info.Digest.Validate(); err != nil {
|
||||
_, err := digest.Parse(req.Info.Digest)
|
||||
if err != nil {
|
||||
return nil, status.Errorf(codes.InvalidArgument, "%q failed validation", req.Info.Digest)
|
||||
}
|
||||
|
||||
@ -88,8 +92,8 @@ func (s *service) Update(ctx context.Context, req *api.UpdateRequest) (*api.Upda
|
||||
|
||||
func (s *service) List(req *api.ListContentRequest, session api.Content_ListServer) error {
|
||||
var (
|
||||
buffer []api.Info
|
||||
sendBlock = func(block []api.Info) error {
|
||||
buffer []*api.Info
|
||||
sendBlock = func(block []*api.Info) error {
|
||||
// send last block
|
||||
return session.Send(&api.ListContentResponse{
|
||||
Info: block,
|
||||
@ -98,10 +102,10 @@ func (s *service) List(req *api.ListContentRequest, session api.Content_ListServ
|
||||
)
|
||||
|
||||
if err := s.store.Walk(session.Context(), func(info content.Info) error {
|
||||
buffer = append(buffer, api.Info{
|
||||
Digest: info.Digest,
|
||||
Size_: info.Size,
|
||||
CreatedAt: info.CreatedAt,
|
||||
buffer = append(buffer, &api.Info{
|
||||
Digest: info.Digest.String(),
|
||||
Size: info.Size,
|
||||
CreatedAt: protobuf.ToTimestamp(info.CreatedAt),
|
||||
Labels: info.Labels,
|
||||
})
|
||||
|
||||
@ -130,11 +134,12 @@ func (s *service) List(req *api.ListContentRequest, session api.Content_ListServ
|
||||
|
||||
func (s *service) Delete(ctx context.Context, req *api.DeleteContentRequest) (*ptypes.Empty, error) {
|
||||
log.G(ctx).WithField("digest", req.Digest).Debugf("delete content")
|
||||
if err := req.Digest.Validate(); err != nil {
|
||||
dg, err := digest.Parse(req.Digest)
|
||||
if err != nil {
|
||||
return nil, status.Errorf(codes.InvalidArgument, err.Error())
|
||||
}
|
||||
|
||||
if err := s.store.Delete(ctx, req.Digest); err != nil {
|
||||
if err := s.store.Delete(ctx, dg); err != nil {
|
||||
return nil, errdefs.ToGRPC(err)
|
||||
}
|
||||
|
||||
@ -142,16 +147,17 @@ func (s *service) Delete(ctx context.Context, req *api.DeleteContentRequest) (*p
|
||||
}
|
||||
|
||||
func (s *service) Read(req *api.ReadContentRequest, session api.Content_ReadServer) error {
|
||||
if err := req.Digest.Validate(); err != nil {
|
||||
dg, err := digest.Parse(req.Digest)
|
||||
if err != nil {
|
||||
return status.Errorf(codes.InvalidArgument, "%v: %v", req.Digest, err)
|
||||
}
|
||||
|
||||
oi, err := s.store.Info(session.Context(), req.Digest)
|
||||
oi, err := s.store.Info(session.Context(), dg)
|
||||
if err != nil {
|
||||
return errdefs.ToGRPC(err)
|
||||
}
|
||||
|
||||
ra, err := s.store.ReaderAt(session.Context(), ocispec.Descriptor{Digest: req.Digest})
|
||||
ra, err := s.store.ReaderAt(session.Context(), ocispec.Descriptor{Digest: dg})
|
||||
if err != nil {
|
||||
return errdefs.ToGRPC(err)
|
||||
}
|
||||
@ -161,7 +167,7 @@ func (s *service) Read(req *api.ReadContentRequest, session api.Content_ReadServ
|
||||
offset = req.Offset
|
||||
// size is read size, not the expected size of the blob (oi.Size), which the caller might not be aware of.
|
||||
// offset+size can be larger than oi.Size.
|
||||
size = req.Size_
|
||||
size = req.Size
|
||||
|
||||
// TODO(stevvooe): Using the global buffer pool. At 32KB, it is probably
|
||||
// little inefficient for work over a fast network. We can tune this later.
|
||||
@ -216,12 +222,12 @@ func (s *service) Status(ctx context.Context, req *api.StatusRequest) (*api.Stat
|
||||
|
||||
var resp api.StatusResponse
|
||||
resp.Status = &api.Status{
|
||||
StartedAt: status.StartedAt,
|
||||
UpdatedAt: status.UpdatedAt,
|
||||
StartedAt: protobuf.ToTimestamp(status.StartedAt),
|
||||
UpdatedAt: protobuf.ToTimestamp(status.UpdatedAt),
|
||||
Ref: status.Ref,
|
||||
Offset: status.Offset,
|
||||
Total: status.Total,
|
||||
Expected: status.Expected,
|
||||
Expected: status.Expected.String(),
|
||||
}
|
||||
|
||||
return &resp, nil
|
||||
@ -235,13 +241,13 @@ func (s *service) ListStatuses(ctx context.Context, req *api.ListStatusesRequest
|
||||
|
||||
var resp api.ListStatusesResponse
|
||||
for _, status := range statuses {
|
||||
resp.Statuses = append(resp.Statuses, api.Status{
|
||||
StartedAt: status.StartedAt,
|
||||
UpdatedAt: status.UpdatedAt,
|
||||
resp.Statuses = append(resp.Statuses, &api.Status{
|
||||
StartedAt: protobuf.ToTimestamp(status.StartedAt),
|
||||
UpdatedAt: protobuf.ToTimestamp(status.UpdatedAt),
|
||||
Ref: status.Ref,
|
||||
Offset: status.Offset,
|
||||
Total: status.Total,
|
||||
Expected: status.Expected,
|
||||
Expected: status.Expected.String(),
|
||||
})
|
||||
}
|
||||
|
||||
@ -293,7 +299,7 @@ func (s *service) Write(session api.Content_WriteServer) (err error) {
|
||||
"ref": ref,
|
||||
}
|
||||
total = req.Total
|
||||
expected = req.Expected
|
||||
expected = digest.Digest(req.Expected)
|
||||
if total > 0 {
|
||||
fields["total"] = total
|
||||
}
|
||||
@ -341,12 +347,13 @@ func (s *service) Write(session api.Content_WriteServer) (err error) {
|
||||
// Supporting these two paths is quite awkward but it lets both API
|
||||
// users use the same writer style for each with a minimum of overhead.
|
||||
if req.Expected != "" {
|
||||
if expected != "" && expected != req.Expected {
|
||||
log.G(ctx).Debugf("commit digest differs from writer digest: %v != %v", req.Expected, expected)
|
||||
dg := digest.Digest(req.Expected)
|
||||
if expected != "" && expected != dg {
|
||||
log.G(ctx).Debugf("commit digest differs from writer digest: %v != %v", dg, expected)
|
||||
}
|
||||
expected = req.Expected
|
||||
expected = dg
|
||||
|
||||
if _, err := s.store.Info(session.Context(), req.Expected); err == nil {
|
||||
if _, err := s.store.Info(session.Context(), dg); err == nil {
|
||||
if err := wr.Close(); err != nil {
|
||||
log.G(ctx).WithError(err).Error("failed to close writer")
|
||||
}
|
||||
@ -368,12 +375,12 @@ func (s *service) Write(session api.Content_WriteServer) (err error) {
|
||||
}
|
||||
|
||||
switch req.Action {
|
||||
case api.WriteActionStat:
|
||||
msg.Digest = wr.Digest()
|
||||
msg.StartedAt = ws.StartedAt
|
||||
msg.UpdatedAt = ws.UpdatedAt
|
||||
case api.WriteAction_STAT:
|
||||
msg.Digest = wr.Digest().String()
|
||||
msg.StartedAt = protobuf.ToTimestamp(ws.StartedAt)
|
||||
msg.UpdatedAt = protobuf.ToTimestamp(ws.UpdatedAt)
|
||||
msg.Total = total
|
||||
case api.WriteActionWrite, api.WriteActionCommit:
|
||||
case api.WriteAction_WRITE, api.WriteAction_COMMIT:
|
||||
if req.Offset > 0 {
|
||||
// validate the offset if provided
|
||||
if req.Offset != ws.Offset {
|
||||
@ -406,7 +413,7 @@ func (s *service) Write(session api.Content_WriteServer) (err error) {
|
||||
msg.Offset += int64(n)
|
||||
}
|
||||
|
||||
if req.Action == api.WriteActionCommit {
|
||||
if req.Action == api.WriteAction_COMMIT {
|
||||
var opts []content.Opt
|
||||
if req.Labels != nil {
|
||||
opts = append(opts, content.WithLabels(req.Labels))
|
||||
@ -416,14 +423,14 @@ func (s *service) Write(session api.Content_WriteServer) (err error) {
|
||||
}
|
||||
}
|
||||
|
||||
msg.Digest = wr.Digest()
|
||||
msg.Digest = wr.Digest().String()
|
||||
}
|
||||
|
||||
if err := session.Send(&msg); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
if req.Action == api.WriteActionCommit {
|
||||
if req.Action == api.WriteAction_COMMIT {
|
||||
return nil
|
||||
}
|
||||
|
||||
@ -446,22 +453,22 @@ func (s *service) Abort(ctx context.Context, req *api.AbortRequest) (*ptypes.Emp
|
||||
return &ptypes.Empty{}, nil
|
||||
}
|
||||
|
||||
func infoToGRPC(info content.Info) api.Info {
|
||||
return api.Info{
|
||||
Digest: info.Digest,
|
||||
Size_: info.Size,
|
||||
CreatedAt: info.CreatedAt,
|
||||
UpdatedAt: info.UpdatedAt,
|
||||
func infoToGRPC(info content.Info) *api.Info {
|
||||
return &api.Info{
|
||||
Digest: info.Digest.String(),
|
||||
Size: info.Size,
|
||||
CreatedAt: protobuf.ToTimestamp(info.CreatedAt),
|
||||
UpdatedAt: protobuf.ToTimestamp(info.UpdatedAt),
|
||||
Labels: info.Labels,
|
||||
}
|
||||
}
|
||||
|
||||
func infoFromGRPC(info api.Info) content.Info {
|
||||
func infoFromGRPC(info *api.Info) content.Info {
|
||||
return content.Info{
|
||||
Digest: info.Digest,
|
||||
Size: info.Size_,
|
||||
CreatedAt: info.CreatedAt,
|
||||
UpdatedAt: info.UpdatedAt,
|
||||
Digest: digest.Digest(info.Digest),
|
||||
Size: info.Size,
|
||||
CreatedAt: protobuf.FromTimestamp(info.CreatedAt),
|
||||
UpdatedAt: protobuf.FromTimestamp(info.UpdatedAt),
|
||||
Labels: info.Labels,
|
||||
}
|
||||
}
|
||||
|
94
vendor/github.com/containerd/containerd/tracing/helpers.go
generated
vendored
Normal file
94
vendor/github.com/containerd/containerd/tracing/helpers.go
generated
vendored
Normal file
@ -0,0 +1,94 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package tracing
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"go.opentelemetry.io/otel/attribute"
|
||||
)
|
||||
|
||||
const (
|
||||
spanDelimiter = "."
|
||||
)
|
||||
|
||||
func makeSpanName(names ...string) string {
|
||||
return strings.Join(names, spanDelimiter)
|
||||
}
|
||||
|
||||
func any(k string, v interface{}) attribute.KeyValue {
|
||||
if v == nil {
|
||||
return attribute.String(k, "<nil>")
|
||||
}
|
||||
|
||||
switch typed := v.(type) {
|
||||
case bool:
|
||||
return attribute.Bool(k, typed)
|
||||
case []bool:
|
||||
return attribute.BoolSlice(k, typed)
|
||||
case int:
|
||||
return attribute.Int(k, typed)
|
||||
case []int:
|
||||
return attribute.IntSlice(k, typed)
|
||||
case int8:
|
||||
return attribute.Int(k, int(typed))
|
||||
case []int8:
|
||||
ls := make([]int, 0, len(typed))
|
||||
for _, i := range typed {
|
||||
ls = append(ls, int(i))
|
||||
}
|
||||
return attribute.IntSlice(k, ls)
|
||||
case int16:
|
||||
return attribute.Int(k, int(typed))
|
||||
case []int16:
|
||||
ls := make([]int, 0, len(typed))
|
||||
for _, i := range typed {
|
||||
ls = append(ls, int(i))
|
||||
}
|
||||
return attribute.IntSlice(k, ls)
|
||||
case int32:
|
||||
return attribute.Int64(k, int64(typed))
|
||||
case []int32:
|
||||
ls := make([]int64, 0, len(typed))
|
||||
for _, i := range typed {
|
||||
ls = append(ls, int64(i))
|
||||
}
|
||||
return attribute.Int64Slice(k, ls)
|
||||
case int64:
|
||||
return attribute.Int64(k, typed)
|
||||
case []int64:
|
||||
return attribute.Int64Slice(k, typed)
|
||||
case float64:
|
||||
return attribute.Float64(k, typed)
|
||||
case []float64:
|
||||
return attribute.Float64Slice(k, typed)
|
||||
case string:
|
||||
return attribute.String(k, typed)
|
||||
case []string:
|
||||
return attribute.StringSlice(k, typed)
|
||||
}
|
||||
|
||||
if stringer, ok := v.(fmt.Stringer); ok {
|
||||
return attribute.String(k, stringer.String())
|
||||
}
|
||||
if b, err := json.Marshal(v); b != nil && err == nil {
|
||||
return attribute.String(k, string(b))
|
||||
}
|
||||
return attribute.String(k, fmt.Sprintf("%v", v))
|
||||
}
|
66
vendor/github.com/containerd/containerd/tracing/log.go
generated
vendored
Normal file
66
vendor/github.com/containerd/containerd/tracing/log.go
generated
vendored
Normal file
@ -0,0 +1,66 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package tracing
|
||||
|
||||
import (
|
||||
"github.com/sirupsen/logrus"
|
||||
"go.opentelemetry.io/otel/attribute"
|
||||
"go.opentelemetry.io/otel/trace"
|
||||
)
|
||||
|
||||
// NewLogrusHook creates a new logrus hook
|
||||
func NewLogrusHook() *LogrusHook {
|
||||
return &LogrusHook{}
|
||||
}
|
||||
|
||||
// LogrusHook is a logrus hook which adds logrus events to active spans.
|
||||
// If the span is not recording or the span context is invalid, the hook is a no-op.
|
||||
type LogrusHook struct{}
|
||||
|
||||
// Levels returns the logrus levels that this hook is interested in.
|
||||
func (h *LogrusHook) Levels() []logrus.Level {
|
||||
return logrus.AllLevels
|
||||
}
|
||||
|
||||
// Fire is called when a log event occurs.
|
||||
func (h *LogrusHook) Fire(entry *logrus.Entry) error {
|
||||
span := trace.SpanFromContext(entry.Context)
|
||||
if span == nil {
|
||||
return nil
|
||||
}
|
||||
|
||||
if !span.SpanContext().IsValid() || !span.IsRecording() {
|
||||
return nil
|
||||
}
|
||||
|
||||
span.AddEvent(
|
||||
entry.Message,
|
||||
trace.WithAttributes(logrusDataToAttrs(entry.Data)...),
|
||||
trace.WithAttributes(attribute.String("level", entry.Level.String())),
|
||||
trace.WithTimestamp(entry.Time),
|
||||
)
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func logrusDataToAttrs(data logrus.Fields) []attribute.KeyValue {
|
||||
attrs := make([]attribute.KeyValue, 0, len(data))
|
||||
for k, v := range data {
|
||||
attrs = append(attrs, any(k, v))
|
||||
}
|
||||
return attrs
|
||||
}
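
A minimal sketch of wiring the hook above into logrus so entries that carry a recording span context become span events:

package main

import (
	"context"

	"github.com/containerd/containerd/tracing"
	"github.com/sirupsen/logrus"
)

func main() {
	// Register the hook once; it is a no-op for entries without a recording span.
	logrus.AddHook(tracing.NewLogrusHook())

	// The span name here is a placeholder.
	ctx, span := tracing.StartSpan(context.Background(), tracing.Name("example", "Operation"))
	defer span.End()

	// WithContext attaches the span context, so this entry is also added
	// as an event on the active span (when the span is recording).
	logrus.WithContext(ctx).Info("doing work")
}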
|
116
vendor/github.com/containerd/containerd/tracing/tracing.go
generated
vendored
Normal file
116
vendor/github.com/containerd/containerd/tracing/tracing.go
generated
vendored
Normal file
@ -0,0 +1,116 @@
|
||||
/*
|
||||
Copyright The containerd Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package tracing
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/http"
|
||||
|
||||
"go.opentelemetry.io/otel"
|
||||
"go.opentelemetry.io/otel/attribute"
|
||||
"go.opentelemetry.io/otel/codes"
|
||||
semconv "go.opentelemetry.io/otel/semconv/v1.12.0"
|
||||
"go.opentelemetry.io/otel/trace"
|
||||
)
|
||||
|
||||
// StartConfig defines configuration for a new span object.
|
||||
type StartConfig struct {
|
||||
spanOpts []trace.SpanStartOption
|
||||
}
|
||||
|
||||
type SpanOpt func(config *StartConfig)
|
||||
|
||||
// WithHTTPRequest marks span as a HTTP request operation from client to server.
|
||||
// It'll append attributes from the HTTP request object and mark it with `SpanKindClient` type.
|
||||
func WithHTTPRequest(request *http.Request) SpanOpt {
|
||||
return func(config *StartConfig) {
|
||||
config.spanOpts = append(config.spanOpts,
|
||||
trace.WithSpanKind(trace.SpanKindClient), // A client making a request to a server
|
||||
trace.WithAttributes(semconv.HTTPClientAttributesFromHTTPRequest(request)...), // Add HTTP attributes
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
// StartSpan starts child span in a context.
|
||||
func StartSpan(ctx context.Context, opName string, opts ...SpanOpt) (context.Context, *Span) {
|
||||
config := StartConfig{}
|
||||
for _, fn := range opts {
|
||||
fn(&config)
|
||||
}
|
||||
tracer := otel.Tracer("")
|
||||
if parent := trace.SpanFromContext(ctx); parent != nil && parent.SpanContext().IsValid() {
|
||||
tracer = parent.TracerProvider().Tracer("")
|
||||
}
|
||||
ctx, span := tracer.Start(ctx, opName, config.spanOpts...)
|
||||
return ctx, &Span{otelSpan: span}
|
||||
}
|
||||
|
||||
// SpanFromContext returns the current Span from the context.
|
||||
func SpanFromContext(ctx context.Context) *Span {
|
||||
return &Span{
|
||||
otelSpan: trace.SpanFromContext(ctx),
|
||||
}
|
||||
}
|
||||
|
||||
// Span is wrapper around otel trace.Span.
|
||||
// Span is the individual component of a trace. It represents a
|
||||
// single named and timed operation of a workflow that is traced.
|
||||
type Span struct {
|
||||
otelSpan trace.Span
|
||||
}
|
||||
|
||||
// End completes the span.
|
||||
func (s *Span) End() {
|
||||
s.otelSpan.End()
|
||||
}
|
||||
|
||||
// AddEvent adds an event with provided name and options.
|
||||
func (s *Span) AddEvent(name string, options ...trace.EventOption) {
|
||||
s.otelSpan.AddEvent(name, options...)
|
||||
}
|
||||
|
||||
// SetStatus sets the status of the current span.
|
||||
// If an error is encountered, it records the error and sets span status to Error.
|
||||
func (s *Span) SetStatus(err error) {
|
||||
if err != nil {
|
||||
s.otelSpan.RecordError(err)
|
||||
s.otelSpan.SetStatus(codes.Error, err.Error())
|
||||
} else {
|
||||
s.otelSpan.SetStatus(codes.Ok, "")
|
||||
}
|
||||
}
|
||||
|
||||
// SetAttributes sets kv as attributes of the span.
|
||||
func (s *Span) SetAttributes(kv ...attribute.KeyValue) {
|
||||
s.otelSpan.SetAttributes(kv...)
|
||||
}
|
||||
|
||||
// Name sets the span name by joining a list of strings in dot separated format.
|
||||
func Name(names ...string) string {
|
||||
return makeSpanName(names...)
|
||||
}
|
||||
|
||||
// Attribute takes a key value pair and returns attribute.KeyValue type.
|
||||
func Attribute(k string, v interface{}) attribute.KeyValue {
|
||||
return any(k, v)
|
||||
}
|
||||
|
||||
// HTTPStatusCodeAttributes generates attributes of the HTTP namespace as specified by the OpenTelemetry
|
||||
// specification for a span.
|
||||
func HTTPStatusCodeAttributes(code int) []attribute.KeyValue {
|
||||
return semconv.HTTPAttributesFromHTTPStatusCode(code)
|
||||
}
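
A minimal sketch of the client-side usage these helpers enable (the same pattern the resolver hunk earlier adopts); the span name and URL are placeholders:

package example

import (
	"context"
	"net/http"

	"github.com/containerd/containerd/tracing"
)

// doTraced wraps an outgoing HTTP request in a client span.
func doTraced(ctx context.Context, client *http.Client, url string) (*http.Response, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}

	// Start a child span describing the outgoing HTTP request.
	_, span := tracing.StartSpan(ctx,
		tracing.Name("example.client", "HTTPRequest"),
		tracing.WithHTTPRequest(req),
	)
	defer span.End()

	resp, err := client.Do(req)
	if err != nil {
		span.SetStatus(err)
		return nil, err
	}
	span.SetAttributes(tracing.HTTPStatusCodeAttributes(resp.StatusCode)...)
	return resp, nil
}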
|
2
vendor/github.com/containerd/containerd/version/version.go
generated
vendored
2
vendor/github.com/containerd/containerd/version/version.go
generated
vendored
@ -23,7 +23,7 @@ var (
|
||||
Package = "github.com/containerd/containerd"
|
||||
|
||||
// Version holds the complete version number. Filled in at linking time.
|
||||
Version = "1.6.16+unknown"
|
||||
Version = "1.7.0-beta.3+unknown"
|
||||
|
||||
// Revision is filled with the VCS (e.g. git) revision being used to build
|
||||
// the program at linking time.
|
||||
|
1
vendor/github.com/containerd/ttrpc/.gitattributes
generated
vendored
Normal file
1
vendor/github.com/containerd/ttrpc/.gitattributes
generated
vendored
Normal file
@ -0,0 +1 @@
|
||||
*.go text eol=lf
|
2
vendor/github.com/containerd/ttrpc/.gitignore
generated
vendored
2
vendor/github.com/containerd/ttrpc/.gitignore
generated
vendored
@ -1,4 +1,5 @@
|
||||
# Binaries for programs and plugins
|
||||
/bin/
|
||||
*.exe
|
||||
*.dll
|
||||
*.so
|
||||
@ -9,3 +10,4 @@
|
||||
|
||||
# Output of the go coverage tool, specifically when used with LiteIDE
|
||||
*.out
|
||||
coverage.txt
|
||||
|
54
vendor/github.com/containerd/ttrpc/.golangci.yml
generated
vendored
Normal file
@ -0,0 +1,54 @@
linters:
  enable:
    - structcheck
    - varcheck
    - staticcheck
    - unconvert
    - gofmt
    - goimports
    - revive
    - ineffassign
    - vet
    - unused
    - misspell
  disable:
    - errcheck

linters-settings:
  revive:
    ignore-generated-headers: true
    rules:
      - name: blank-imports
      - name: context-as-argument
      - name: context-keys-type
      - name: dot-imports
      - name: error-return
      - name: error-strings
      - name: error-naming
      - name: exported
      - name: if-return
      - name: increment-decrement
      - name: var-naming
        arguments: [["UID", "GID"], []]
      - name: var-declaration
      - name: package-comments
      - name: range
      - name: receiver-naming
      - name: time-naming
      - name: unexported-return
      - name: indent-error-flow
      - name: errorf
      - name: empty-block
      - name: superfluous-else
      - name: unused-parameter
      - name: unreachable-code
      - name: redefines-builtin-id

issues:
  include:
    - EXC0002

run:
  timeout: 8m
  skip-dirs:
    - example
180
vendor/github.com/containerd/ttrpc/Makefile
generated
vendored
Normal file
@ -0,0 +1,180 @@
# Copyright The containerd Authors.

# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at

#     http://www.apache.org/licenses/LICENSE-2.0

# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# Go command to use for build
GO ?= go
INSTALL ?= install

# Root directory of the project (absolute path).
ROOTDIR=$(dir $(abspath $(lastword $(MAKEFILE_LIST))))

WHALE = "🇩"
ONI = "👹"

# Project binaries.
COMMANDS=protoc-gen-go-ttrpc protoc-gen-gogottrpc

ifdef BUILDTAGS
    GO_BUILDTAGS = ${BUILDTAGS}
endif
GO_BUILDTAGS ?=
GO_TAGS=$(if $(GO_BUILDTAGS),-tags "$(strip $(GO_BUILDTAGS))",)

# Project packages.
PACKAGES=$(shell $(GO) list ${GO_TAGS} ./... | grep -v /example)
TESTPACKAGES=$(shell $(GO) list ${GO_TAGS} ./... | grep -v /cmd | grep -v /integration | grep -v /example)
BINPACKAGES=$(addprefix ./cmd/,$(COMMANDS))

#Replaces ":" (*nix), ";" (windows) with newline for easy parsing
GOPATHS=$(shell echo ${GOPATH} | tr ":" "\n" | tr ";" "\n")

TESTFLAGS_RACE=
GO_BUILD_FLAGS=
# See Golang issue re: '-trimpath': https://github.com/golang/go/issues/13809
GO_GCFLAGS=$(shell \
	set -- ${GOPATHS}; \
	echo "-gcflags=-trimpath=$${1}/src"; \
	)

BINARIES=$(addprefix bin/,$(COMMANDS))

# Flags passed to `go test`
TESTFLAGS ?= $(TESTFLAGS_RACE) $(EXTRA_TESTFLAGS)
TESTFLAGS_PARALLEL ?= 8

# Use this to replace `go test` with, for instance, `gotestsum`
GOTEST ?= $(GO) test

.PHONY: clean all AUTHORS build binaries test integration generate protos checkprotos coverage ci check help install vendor install-protobuf install-protobuild
.DEFAULT: default

# Forcibly set the default goal to all, in case an include above brought in a rule definition.
.DEFAULT_GOAL := all

all: binaries

check: proto-fmt ## run all linters
	@echo "$(WHALE) $@"
	GOGC=75 golangci-lint run

ci: check binaries checkprotos coverage # coverage-integration ## to be used by the CI

AUTHORS: .mailmap .git/HEAD
	git log --format='%aN <%aE>' | sort -fu > $@

generate: protos
	@echo "$(WHALE) $@"
	@PATH="${ROOTDIR}/bin:${PATH}" $(GO) generate -x ${PACKAGES}

protos: bin/protoc-gen-gogottrpc bin/protoc-gen-go-ttrpc ## generate protobuf
	@echo "$(WHALE) $@"
	@(PATH="${ROOTDIR}/bin:${PATH}" protobuild --quiet ${PACKAGES})

check-protos: protos ## check if protobufs needs to be generated again
	@echo "$(WHALE) $@"
	@test -z "$$(git status --short | grep ".pb.go" | tee /dev/stderr)" || \
		((git diff | cat) && \
		(echo "$(ONI) please run 'make protos' when making changes to proto files" && false))

check-api-descriptors: protos ## check that protobuf changes aren't present.
	@echo "$(WHALE) $@"
	@test -z "$$(git status --short | grep ".pb.txt" | tee /dev/stderr)" || \
		((git diff $$(find . -name '*.pb.txt') | cat) && \
		(echo "$(ONI) please run 'make protos' when making changes to proto files and check-in the generated descriptor file changes" && false))

proto-fmt: ## check format of proto files
	@echo "$(WHALE) $@"
	@test -z "$$(find . -name '*.proto' -type f -exec grep -Hn -e "^ " {} \; | tee /dev/stderr)" || \
		(echo "$(ONI) please indent proto files with tabs only" && false)
	@test -z "$$(find . -name '*.proto' -type f -exec grep -Hn "Meta meta = " {} \; | grep -v '(gogoproto.nullable) = false' | tee /dev/stderr)" || \
		(echo "$(ONI) meta fields in proto files must have option (gogoproto.nullable) = false" && false)

build: ## build the go packages
	@echo "$(WHALE) $@"
	@$(GO) build ${DEBUG_GO_GCFLAGS} ${GO_GCFLAGS} ${GO_BUILD_FLAGS} ${EXTRA_FLAGS} ${PACKAGES}

test: ## run tests, except integration tests and tests that require root
	@echo "$(WHALE) $@"
	@$(GOTEST) ${TESTFLAGS} ${TESTPACKAGES}

integration: ## run integration tests
	@echo "$(WHALE) $@"
	@cd "${ROOTDIR}/integration" && $(GOTEST) -v ${TESTFLAGS} -parallel ${TESTFLAGS_PARALLEL} .

benchmark: ## run benchmarks tests
	@echo "$(WHALE) $@"
	@$(GO) test ${TESTFLAGS} -bench . -run Benchmark

FORCE:

define BUILD_BINARY
	@echo "$(WHALE) $@"
	@$(GO) build ${DEBUG_GO_GCFLAGS} ${GO_GCFLAGS} ${GO_BUILD_FLAGS} -o $@ ${GO_TAGS} ./$<
endef

# Build a binary from a cmd.
bin/%: cmd/% FORCE
	$(call BUILD_BINARY)

binaries: $(BINARIES) ## build binaries
	@echo "$(WHALE) $@"

clean: ## clean up binaries
	@echo "$(WHALE) $@"
	@rm -f $(BINARIES)

install: ## install binaries
	@echo "$(WHALE) $@ $(BINPACKAGES)"
	@$(GO) install $(BINPACKAGES)

install-protobuf:
	@echo "$(WHALE) $@"
	@script/install-protobuf

install-protobuild:
	@echo "$(WHALE) $@"
	@$(GO) install google.golang.org/protobuf/cmd/protoc-gen-go@v1.27.1
	@$(GO) install github.com/containerd/protobuild@7e5ee24bc1f70e9e289fef15e2631eb3491320bf

coverage: ## generate coverprofiles from the unit tests, except tests that require root
	@echo "$(WHALE) $@"
	@rm -f coverage.txt
	@$(GO) test -i ${TESTFLAGS} ${TESTPACKAGES} 2> /dev/null
	@( for pkg in ${PACKAGES}; do \
		$(GO) test ${TESTFLAGS} \
			-cover \
			-coverprofile=profile.out \
			-covermode=atomic $$pkg || exit; \
		if [ -f profile.out ]; then \
			cat profile.out >> coverage.txt; \
			rm profile.out; \
		fi; \
	done )

vendor: ## ensure all the go.mod/go.sum files are up-to-date
	@echo "$(WHALE) $@"
	@$(GO) mod tidy
	@$(GO) mod verify

verify-vendor: ## verify if all the go.mod/go.sum files are up-to-date
	@echo "$(WHALE) $@"
	@$(GO) mod tidy
	@$(GO) mod verify
	@test -z "$$(git status --short | grep "go.sum" | tee /dev/stderr)" || \
		((git diff | cat) && \
		(echo "$(ONI) make sure to checkin changes after go mod tidy" && false))

help: ## this help
	@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z_-]+:.*?## / {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST) | sort
240
vendor/github.com/containerd/ttrpc/PROTOCOL.md
generated
vendored
Normal file
@ -0,0 +1,240 @@
# Protocol Specification

The ttrpc protocol is a client/server protocol that supports multiple request
streams over a single connection with lightweight framing. The client represents
the process which initiated the underlying connection and the server is the
process which accepted the connection. The protocol is currently defined as
asymmetrical, with clients sending requests and servers sending responses. Both
clients and servers are able to send stream data. The roles are also used in
determining the stream identifiers, with client initiated streams using odd
number identifiers and server initiated streams using even numbers. The protocol
may be extended in the future to support server initiated streams, but that is
not supported in the latest version.

## Purpose

The ttrpc protocol is designed to be lightweight and optimized for low latency
and reliable connections between processes on the same host. The protocol does
not include features for handling unreliable connections such as handshakes,
resets, pings, or flow control. The protocol is designed to make low-overhead
implementations as simple as possible. It is not intended as a suitable
replacement for HTTP2/3 over the network.

## Message Frame

Each Message Frame consists of a 10-byte message header followed
by message data. The data length and stream ID are both big-endian
4-byte unsigned integers. The message type is an unsigned 1-byte
integer. The flags are also an unsigned 1-byte integer and their
use is defined by the message type.

    +---------------------------------------------------------------+
    | Data Length (32)                                               |
    +---------------------------------------------------------------+
    | Stream ID (32)                                                 |
    +---------------+-----------------------------------------------+
    | Msg Type (8)  |
    +---------------+
    | Flags (8)     |
    +---------------+-----------------------------------------------+
    | Data (*)                                                       |
    +---------------------------------------------------------------+

The Data Length field represents the number of bytes in the Data field. The
total frame size will always be Data Length + 10 bytes. The maximum data length
is 4MB and any larger size should be rejected. Due to the maximum data size
being less than 16MB, the first frame byte should always be zero. This first
byte should be considered reserved for future use.

The Stream ID must be odd for client initiated streams and even for server
initiated streams. Server initiated streams are not currently supported.
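To make the framing concrete, the following is a small illustrative Go sketch (not the ttrpc library's own code) that packs and unpacks the 10-byte header described above, using big-endian byte order and enforcing the 4MB limit.

// Illustrative frame-header codec based on the layout described in this
// section; names are for this sketch only, not ttrpc's real identifiers.
package frame

import (
	"encoding/binary"
	"errors"
)

const maxDataLength = 4 << 20 // 4MB; larger frames should be rejected

// header is the fixed 10-byte message header: data length, stream ID,
// message type, and flags, in that order.
type header struct {
	Length   uint32
	StreamID uint32
	Type     uint8
	Flags    uint8
}

// encodeHeader packs the header fields into their wire layout.
func encodeHeader(h header) [10]byte {
	var b [10]byte
	binary.BigEndian.PutUint32(b[0:4], h.Length)
	binary.BigEndian.PutUint32(b[4:8], h.StreamID)
	b[8] = h.Type
	b[9] = h.Flags
	return b
}

// decodeHeader unpacks a wire header and rejects oversized frames.
func decodeHeader(b [10]byte) (header, error) {
	h := header{
		Length:   binary.BigEndian.Uint32(b[0:4]),
		StreamID: binary.BigEndian.Uint32(b[4:8]),
		Type:     b[8],
		Flags:    b[9],
	}
	if h.Length > maxDataLength {
		return header{}, errors.New("frame data length exceeds 4MB limit")
	}
	return h, nil
}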
## Message Types

| Message Type | Name     | Description                      |
|--------------|----------|----------------------------------|
| 0x01         | Request  | Initiates stream                 |
| 0x02         | Response | Final stream data and terminates |
| 0x03         | Data     | Stream data                      |

### Request

The request message is used to initiate a stream and send along request data for
properly routing and handling the stream. The stream may indicate unary, with no
inbound or outbound stream data and only a response expected on the stream. The
request may also indicate the stream is still open for more data and no response
is expected until data is finished. If the remote indicates the stream is closed,
the request may be considered non-unary but without any more stream data sent.
In the case of `remote closed`, the remote still expects to receive a response or
stream data. For compatibility with non-streaming clients, a request with empty
flags indicates a unary request.

#### Request Flags

| Flag | Name            | Description                                      |
|------|-----------------|--------------------------------------------------|
| 0x01 | `remote closed` | Non-unary, but no more data expected from remote |
| 0x02 | `remote open`   | Non-unary, remote is still sending data          |

### Response

The response message is used to end a stream with data, an empty response, or
an error. A response message is the only expected message after a unary request.
A non-unary request does not require a response message if the server is sending
back stream data. A non-unary stream may return a single response message but no
other stream data may follow.

#### Response Flags

No response flags are defined at this time; flags should be empty.

### Data

The data message is used to send data on an already initialized stream. Either
client or server may send data. A data message is not allowed on a unary stream.
A data message should not be sent after indicating `remote closed` to the peer.
The last data message on a stream must set the `remote closed` flag.

The `no data` flag is used to indicate that the data message does not include
any data. This is normally used with the `remote closed` flag to indicate the
stream is now closed without transmitting any data. Since ttrpc normally
transmits a single object per message, a zero length data message may be
interpreted as an empty object. For example, transmitting the number zero as a
protobuf message ends up with a data length of zero, but the message is still
considered data and should be processed.

#### Data Flags

| Flag | Name            | Description                       |
|------|-----------------|-----------------------------------|
| 0x01 | `remote closed` | No more data expected from remote |
| 0x04 | `no data`       | This message does not have data   |

## Streaming

All ttrpc requests use streams to transfer data. Unary streams will only have
two messages sent per stream, a request from a client and a response from the
server. Non-unary streams, however, may send any number of messages from the
client and the server. This makes stream management more complicated than unary
streams since both client and server need to track additional state. To keep
this management as simple as possible, ttrpc minimizes the number of states and
uses two flags instead of control frames. Each stream has two states while a
stream is still alive: `local closed` and `remote closed`. Each peer considers
local and remote from their own perspective and sets flags from the other peer's
perspective. For example, if a client sends a data frame with the
`remote closed` flag, that is indicating that the client is now `local closed`
and the server will be `remote closed`. A unary operation does not need to send
these flags since each received message always indicates `remote closed`. Once a
peer is both `local closed` and `remote closed`, the stream is considered
finished and may be cleaned up.

Due to the asymmetric nature of the current protocol, a client should
always be in the `local closed` state before `remote closed` and a server should
always be in the `remote closed` state before `local closed`. This happens
because the client is always initiating requests and a client always expects a
final response back from a server to indicate the initiated request has been
fulfilled. This may mean the server sends a final empty response to finish a
stream even after it has already completed sending data before the client.
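As a rough illustration of the bookkeeping this implies (an assumption-laden sketch, not ttrpc's actual implementation), a peer could track each live stream with two booleans and reclaim the stream once both are set:

// Minimal per-stream state tracking sketch; the flag constants mirror the
// tables above, but the identifiers are illustrative, not ttrpc's own.
package stream

const (
	flagRemoteClosed = 0x01
	flagRemoteOpen   = 0x02
)

type streamState struct {
	localClosed  bool // we will send no more data on this stream
	remoteClosed bool // the peer told us it will send no more data
}

// recvFlags updates state from a received frame: a peer setting
// `remote closed` declares itself local closed, which makes the stream
// remote closed from our perspective.
func (s *streamState) recvFlags(flags uint8) {
	if flags&flagRemoteClosed != 0 {
		s.remoteClosed = true
	}
}

// sendFlags records that we set `remote closed` on an outgoing frame,
// i.e. we are now local closed.
func (s *streamState) sendFlags(flags uint8) {
	if flags&flagRemoteClosed != 0 {
		s.localClosed = true
	}
}

// finished reports whether the stream may be cleaned up.
func (s *streamState) finished() bool {
	return s.localClosed && s.remoteClosed
}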
### Unary State Diagram

    +--------+ +--------+
    | Client | | Server |
    +---+----+ +----+---+
    | +---------+ |
    local >---------------+ Request +--------------------> remote
    closed | +---------+ | closed
    | |
    | +----------+ |
    finished <--------------+ Response +--------------------< finished
    | +----------+ |
    | |

### Non-Unary State Diagrams

RC: `remote closed` flag
RO: `remote open` flag

    +--------+ +--------+
    | Client | | Server |
    +---+----+ +----+---+
    | +--------------+ |
    >-------------+ Request [RO] +----------------->
    | +--------------+ |
    | |
    | +------+ |
    >-----------------+ Data +--------------------->
    | +------+ |
    | |
    | +-----------+ |
    local >---------------+ Data [RC] +------------------> remote
    closed | +-----------+ | closed
    | |
    | +----------+ |
    finished <--------------+ Response +--------------------< finished
    | +----------+ |
    | |

    +--------+ +--------+
    | Client | | Server |
    +---+----+ +----+---+
    | +--------------+ |
    local >-------------+ Request [RC] +-----------------> remote
    closed | +--------------+ | closed
    | |
    | +------+ |
    <-----------------+ Data +---------------------<
    | +------+ |
    | |
    | +-----------+ |
    finished <---------------+ Data [RC] +------------------< finished
    | +-----------+ |
    | |

    +--------+ +--------+
    | Client | | Server |
    +---+----+ +----+---+
    | +--------------+ |
    >-------------+ Request [RO] +----------------->
    | +--------------+ |
    | |
    | +------+ |
    >-----------------+ Data +--------------------->
    | +------+ |
    | |
    | +------+ |
    <-----------------+ Data +---------------------<
    | +------+ |
    | |
    | +------+ |
    >-----------------+ Data +--------------------->
    | +------+ |
    | |
    | +-----------+ |
    local >---------------+ Data [RC] +------------------> remote
    closed | +-----------+ | closed
    | |
    | +------+ |
    <-----------------+ Data +---------------------<
    | +------+ |
    | |
    | +-----------+ |
    finished <---------------+ Data [RC] +------------------< finished
    | +-----------+ |
    | |

## RPC

While this protocol is defined primarily to support Remote Procedure Calls, the
protocol does not define the request and response types beyond the messages
defined in the protocol. The implementation provides a default protobuf
definition of request and response which may be used for cross language rpc.
All implementations should at least define a request type which supports
routing by procedure name and a response type which supports call status.
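As a loose illustration of that guidance, the Go types below sketch one possible request/response envelope; the names and fields are assumptions for illustration, not ttrpc's generated definitions.

// Illustrative envelope types in the spirit of the RPC section above.
package rpcsketch

// Status carries the call status a response must be able to report.
type Status struct {
	Code    int32  // call status code, 0 meaning OK
	Message string // human-readable status detail
}

// Request carries enough information to route a call by procedure name.
type Request struct {
	Service string // fully qualified service name
	Method  string // procedure name on that service
	Payload []byte // serialized argument, e.g. an encoded protobuf message
}

// Response pairs the call status with the serialized result.
type Response struct {
	Status  Status // call status reported by the server
	Payload []byte // serialized result, empty on error
}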
## Version History

| Version | Features            |
|---------|---------------------|
| 1.0     | Unary requests only |
| 1.2     | Streaming support   |
25
vendor/github.com/containerd/ttrpc/Protobuild.toml
generated
vendored
Normal file
@ -0,0 +1,25 @@
version = "2"
generators = ["go"]

# Control protoc include paths. Below are usually some good defaults, but feel
# free to try it without them if it works for your project.
[includes]
  # Include paths that will be added before all others. Typically, you want to
  # treat the root of the project as an include, but this may not be necessary.
  before = ["."]

  # Paths that will be added untouched to the end of the includes. We use
  # `/usr/local/include` to pickup the common install location of protobuf.
  # This is the default.
  after = ["/usr/local/include"]

# This section maps protobuf imports to Go packages. These will become
# `-M` directives in the call to the go protobuf generator.
[packages]
  "google/protobuf/any.proto" = "github.com/gogo/protobuf/types"
  "proto/status.proto" = "google.golang.org/genproto/googleapis/rpc/status"

[[overrides]]
# enable ttrpc and disable fieldpath and grpc for the shim
prefixes = ["github.com/containerd/ttrpc/integration/streaming"]
generators = ["go", "go-ttrpc"]
5
vendor/github.com/containerd/ttrpc/README.md
generated
vendored
@ -20,6 +20,10 @@ Please note that while this project supports generating either end of the
protocol, the generated service definitions will be incompatible with regular
GRPC services, as they do not speak the same protocol.

# Protocol

See the [protocol specification](./PROTOCOL.md).

# Usage

Create a gogo vanity binary (see
@ -43,7 +47,6 @@ directly, if required.

TODO:

- [ ] Document protocol layout
- [ ] Add testing under concurrent load to ensure
- [ ] Verify connection error handling
Some files were not shown because too many files have changed in this diff.