Compare commits

..

37 Commits

Author SHA1 Message Date
Tõnis Tiigi
fb7b670b76 Merge pull request #338 from tonistiigi/load-no-warn
build: avoid warning if default load disabled
2020-08-21 19:36:07 -07:00
Tõnis Tiigi
9ac5b075cf Merge pull request #323 from saulshanabrook/patch-1
Increase ls and inspect timeouts
2020-08-21 12:44:33 -07:00
Tonis Tiigi
0124b6b9c9 build: avoid warning if default load disabled
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-08-21 12:43:49 -07:00
Tõnis Tiigi
691a1479fc Merge pull request #351 from tonistiigi/multi-iidfile
build: remove warning for multi-platform iidfile
2020-08-21 12:29:47 -07:00
Tonis Tiigi
c9d69b082b build: remove warning for multi-platform iidfile
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-08-17 22:34:45 -07:00
Tõnis Tiigi
e24e04be57 Merge pull request #332 from tonistiigi/bootstrap-logs
docker-container: show logs on bootstrap error
2020-07-29 19:33:08 -07:00
Tõnis Tiigi
38f9241316 Merge pull request #337 from tonistiigi/cacheonly-exporter
build: support cacheonly exporter
2020-07-29 19:32:54 -07:00
Tonis Tiigi
3862ff269b build: add opt-out from default load behavior
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-07-27 22:38:06 -07:00
Tonis Tiigi
9e5321eab8 build: support cacheonly exporter
cacheonly is supported by moby, so add support to buildx
as well so that the same flags can be used

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-07-27 22:33:44 -07:00
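Note: cacheonly is not a real exporter; buildx strips it before configuring outputs so a build can opt out of the default export while still populating the build cache. Below is a minimal sketch of that filtering step, using a local ExportEntry stand-in rather than buildkit's client package:

```go
package main

import "fmt"

// ExportEntry is a local stand-in for buildkit's client.ExportEntry,
// kept minimal for this sketch.
type ExportEntry struct {
	Type  string
	Attrs map[string]string
}

// dropCacheOnly removes the fake "cacheonly" exporter so that only real
// exporters are configured, mirroring the filtering in the build diff below.
func dropCacheOnly(in []ExportEntry) []ExportEntry {
	out := make([]ExportEntry, 0, len(in))
	for _, e := range in {
		if e.Type != "cacheonly" {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	exports := []ExportEntry{{Type: "cacheonly"}, {Type: "image"}}
	fmt.Println(dropCacheOnly(exports)) // only the image exporter remains
}
```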
Tibor Vass
f2cf7cf281 Merge pull request #333 from tonistiigi/dockerd-log
demo-env: correct dockerd logging
2020-07-27 17:44:06 +02:00
Tonis Tiigi
8961d3573e hack: correct dockerd logging
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-07-26 00:13:02 -07:00
Tonis Tiigi
26570d05c1 docker-container: increase bootstrap timeout
The previous value was only 2 seconds.

Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-07-26 00:11:33 -07:00
Tonis Tiigi
8627f668f2 docker-container: show logs on bootstrap error
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-07-26 00:11:27 -07:00
Tõnis Tiigi
d8ca46066d Merge pull request #321 from errordeveloper/fix-320
bake: ensure `--builder` is wired from root options
2020-07-19 15:58:33 -07:00
Saul Shanabrook
c00c5a89e5 Increase inspect timeout from 5 to 20 seconds
Signed-off-by: Saul Shanabrook <s.shanabrook@gmail.com>
2020-07-16 11:56:40 -04:00
Saul Shanabrook
14b7936c3b Increase ls timeout from 7 to 20 seconds
Signed-off-by: Saul Shanabrook <s.shanabrook@gmail.com>
2020-07-16 11:56:40 -04:00
Ilya Dmitrichenko
40b41ac6e4 bake: ensure --builder is wired from root options
Signed-off-by: Ilya Dmitrichenko <errordeveloper@gmail.com>
2020-07-08 12:03:15 +01:00
Tõnis Tiigi
fd6de6b6ae Merge pull request #281 from tonistiigi/load-fix
build: improve error checking on load
2020-07-07 09:06:34 -07:00
Tõnis Tiigi
f3111bcbef Merge pull request #312 from donhui/master
README.md: update the content which does not display in markdown
2020-06-20 17:41:31 -07:00
Donghui Wang
e6be472831 update the content which not display in markdown
Signed-off-by: Donghui Wang <977675308@qq.com>
2020-06-19 17:30:49 +08:00
Tibor Vass
e5217f26e2 Merge pull request #296 from tonistiigi/seed-fix
cmd: seed math rand
2020-05-18 12:03:57 -07:00
Tonis Tiigi
7f7acf7837 cmd: seed math rand
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-05-17 16:21:11 -07:00
Tonis Tiigi
baae4b2e71 build: improve error checking on load
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-05-08 17:46:28 -07:00
Tõnis Tiigi
42448c5f37 Merge pull request #280 from vanstee/hcl-json-support
Support parsing json config with hcl v2
2020-05-08 09:25:25 -07:00
Tõnis Tiigi
fc7875675c Merge pull request #277 from cpuguy83/moar_hcl_stdlib_funcs
Update go-cty to pull in more stdlib funcs.
2020-05-08 09:23:47 -07:00
Patrick Van Stee
355261e49e Parse bake config as hcl falling back to json
Signed-off-by: Patrick Van Stee <patrick@vanstee.me>
2020-05-07 23:53:49 -04:00
Patrick Van Stee
44c840b31d Add test of parsing a json bake config
Signed-off-by: Patrick Van Stee <patrick@vanstee.me>
2020-05-07 23:53:49 -04:00
Patrick Van Stee
1bc068a583 Fix json keys for groups and targets
Signed-off-by: Patrick Van Stee <patrick@vanstee.me>
2020-05-07 23:53:49 -04:00
Patrick Van Stee
340686a383 Support parsing json config with hcl v2
Signed-off-by: Patrick Van Stee <patrick@vanstee.me>
2020-05-07 23:53:49 -04:00
Brian Goff
1ad87c6ba6 Update go-cty to pull in more stdlib funcs.
I needed "split" specifically so I could do something like:

```hcl
variable PLATFORMS {
  default = "linux/amd64"
}

target foo {
  platforms = split(",", "${PLATFORMS}")
  # other stuff
}
```

Where the existing "csvdecode" does not work for this because it parses
the string into a list of objects instead of a list of strings.

I went ahead and just added all the available new functions.

Signed-off-by: Brian Goff <cpuguy83@gmail.com>
2020-05-07 16:05:17 -07:00
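For illustration, a minimal sketch of calling one of the newly registered functions directly through go-cty's function API (the same stdlib.SplitFunc that the bake function table now exposes), showing that the result is a list of strings rather than csvdecode's list of objects:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	// split(",", "linux/amd64,linux/arm64") evaluated via stdlib.SplitFunc,
	// the function the updated table makes available to bake files.
	v, err := stdlib.SplitFunc.Call([]cty.Value{
		cty.StringVal(","),
		cty.StringVal("linux/amd64,linux/arm64"),
	})
	if err != nil {
		panic(err)
	}
	// v is a list of strings: ["linux/amd64", "linux/arm64"]
	for _, elem := range v.AsValueSlice() {
		fmt.Println(elem.AsString())
	}
}
```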
Tõnis Tiigi
eadf5eddbc Merge pull request #270 from thaJeztah/prune_force_shorthand
Add -f shorthand flag for prune --force
2020-05-07 11:41:32 -07:00
Sebastiaan van Stijn
f4f58003fb Add -f shorthand flag for prune --force
The docker builder prune command has a shorthand `-f` flag for `--force`:

    docker builder prune --help

    Usage:	docker builder prune

    Remove build cache

    Options:
      -a, --all                  Remove all unused build cache, not just dangling ones
          --filter filter        Provide filter values (e.g. 'until=24h')
      -f, --force                Do not prompt for confirmation
          --keep-storage bytes   Amount of disk space to keep for cache

Given that `buildx` can be used as a drop-in replacement for the native build
commands, it should match the UI, and also have a shorthand flag.

This patch also updates the flag's description to be in line with the docker command line.

With this patch applied:

    buildx prune --help
    Remove build cache

    Usage:
      buildx prune [flags]

    Flags:
      -a, --all                  Remove all unused images, not just dangling ones
          --filter filter        Provide filter values (e.g. 'until=24h')
      -f, --force                Do not prompt for confirmation
      -h, --help                 help for prune
          --keep-storage bytes   Amount of disk space to keep for cache
          --verbose              Provide a more verbose output

    Global Flags:
          --builder string   Override the configured builder instance

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
2020-05-02 21:29:23 +02:00
Tõnis Tiigi
bda4882a65 Merge pull request #268 from tiborvass/fix-tristate
Fix --pull and --no-cache behavior
2020-04-30 15:08:21 -07:00
Tibor Vass
77ddee9314 bake: fix pull and no-cache overrides
Signed-off-by: Tibor Vass <tibor@docker.com>
2020-04-30 14:05:21 -07:00
Tonis Tiigi
c9676c79d1 bake: fix hcl tags
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-04-30 13:41:49 -07:00
Tonis Tiigi
18095ee87b bake: reset no-cache and pull if not set
Signed-off-by: Tonis Tiigi <tonistiigi@gmail.com>
2020-04-30 13:01:45 -07:00
Tibor Vass
c4d07f67e3 commands: check if flag is set instead of using flagutil.Tristate
Fixes --pull and --no-cache without argument

Signed-off-by: Tibor Vass <tibor@docker.com>
2020-04-30 12:25:41 -07:00
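A minimal sketch of the pattern this change moves to (not buildx's actual command wiring): register --pull as a plain boolean flag and use Changed to tell "not passed" apart from "explicitly false", so bake can keep a nil pointer for unset values:

```go
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	flags := pflag.NewFlagSet("build", pflag.ContinueOnError)
	pull := flags.Bool("pull", false, "Always attempt to pull a newer version of the image")

	// A bare --pull now simply means true; no tristate flag type is needed.
	if err := flags.Parse([]string{"--pull"}); err != nil {
		panic(err)
	}

	// If the flag was never passed, reset the pointer to nil so that file
	// values and overrides are not clobbered by the default.
	if !flags.Lookup("pull").Changed {
		pull = nil
	}
	fmt.Println(pull != nil && *pull) // true here, since --pull was passed
}
```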
42 changed files with 2507 additions and 13284 deletions

View File

@@ -180,7 +180,7 @@ Options:
| --pull | Always attempt to pull a newer version of the image
| --push | Shorthand for --output=type=registry
| --secret [] | Secret file to expose to the build: id=mysecret,src=/local/secret
| --ssh [] | SSH agent socket or keys to expose to the build (format: default|<id>[=<socket>|<key>[,<key>]])
| --ssh [] | SSH agent socket or keys to expose to the build (format: default\|&#60;id&#62;[=&#60;socket&#62;\|&#60;key&#62;[,&#60;key&#62;]])
| --tag [] | Name and optionally a tag in the 'name:tag' format
| --target string | Set the target build stage to build.

View File

@@ -72,8 +72,8 @@ func ParseFile(fn string) (*Config, error) {
type Config struct {
Variables []*Variable `json:"-" hcl:"variable,block"`
Groups []*Group `json:"groups" hcl:"group,block"`
Targets []*Target `json:"targets" hcl:"target,block"`
Groups []*Group `json:"group" hcl:"group,block"`
Targets []*Target `json:"target" hcl:"target,block"`
Remain hcl.Body `json:"-" hcl:",remain"`
}
@@ -227,13 +227,13 @@ func (c Config) newOverrides(v []string) (map[string]*Target, error) {
if err != nil {
return nil, errors.Errorf("invalid value %s for boolean key no-cache", parts[1])
}
t.NoCache = noCache
t.NoCache = &noCache
case "pull":
pull, err := strconv.ParseBool(parts[1])
if err != nil {
return nil, errors.Errorf("invalid value %s for boolean key pull", parts[1])
}
t.Pull = pull
t.Pull = &pull
default:
return nil, errors.Errorf("unknown key: %s", keys[1])
}
@@ -348,8 +348,8 @@ type Target struct {
SSH []string `json:"ssh,omitempty" hcl:"ssh,optional"`
Platforms []string `json:"platforms,omitempty" hcl:"platforms,optional"`
Outputs []string `json:"output,omitempty" hcl:"output,optional"`
Pull bool `json:"pull,omitempty": hcl:"pull,optional"`
NoCache bool `json:"no-cache,omitempty": hcl:"no-cache,optional"`
Pull *bool `json:"pull,omitempty" hcl:"pull,optional"`
NoCache *bool `json:"no-cache,omitempty" hcl:"no-cache,optional"`
// IMPORTANT: if you add more fields here, do not forget to update newOverrides and README.
}
@@ -396,6 +396,15 @@ func toBuildOpt(t *Target) (*build.Options, error) {
dockerfilePath = path.Join(contextPath, dockerfilePath)
}
noCache := false
if t.NoCache != nil {
noCache = *t.NoCache
}
pull := false
if t.Pull != nil {
pull = *t.Pull
}
bo := &build.Options{
Inputs: build.Inputs{
ContextPath: contextPath,
@@ -404,8 +413,8 @@ func toBuildOpt(t *Target) (*build.Options, error) {
Tags: t.Tags,
BuildArgs: t.Args,
Labels: t.Labels,
NoCache: t.NoCache,
Pull: t.Pull,
NoCache: noCache,
Pull: pull,
}
platforms, err := platformutil.Parse(t.Platforms)
@@ -500,6 +509,12 @@ func merge(t1, t2 *Target) *Target {
if t2.Outputs != nil { // no merge
t1.Outputs = t2.Outputs
}
if t2.Pull != nil {
t1.Pull = t2.Pull
}
if t2.NoCache != nil {
t1.NoCache = t2.NoCache
}
t1.Inherits = append(t1.Inherits, t2.Inherits...)
return t1
}

View File

@@ -23,6 +23,7 @@ target "webDEP" {
VAR_INHERITED = "webDEP"
VAR_BOTH = "webDEP"
}
no-cache = true
}
target "webapp" {
@@ -44,6 +45,8 @@ target "webapp" {
require.Equal(t, "Dockerfile.webapp", *m["webapp"].Dockerfile)
require.Equal(t, ".", *m["webapp"].Context)
require.Equal(t, "webDEP", m["webapp"].Args["VAR_INHERITED"])
require.Equal(t, true, *m["webapp"].NoCache)
require.Nil(t, m["webapp"].Pull)
})
t.Run("InvalidTargetOverrides", func(t *testing.T) {
@@ -106,6 +109,18 @@ target "webapp" {
require.Equal(t, "foo", *m["webapp"].Context)
})
t.Run("NoCacheOverride", func(t *testing.T) {
m, err := ReadTargets(ctx, []string{fp}, []string{"webapp"}, []string{"webapp.no-cache=false"})
require.NoError(t, err)
require.Equal(t, false, *m["webapp"].NoCache)
})
t.Run("PullOverride", func(t *testing.T) {
m, err := ReadTargets(ctx, []string{fp}, []string{"webapp"}, []string{"webapp.pull=false"})
require.NoError(t, err)
require.Equal(t, false, *m["webapp"].Pull)
})
t.Run("PatternOverride", func(t *testing.T) {
// same check for two cases
multiTargetCheck := func(t *testing.T, m map[string]*Target, err error) {

View File

@@ -8,6 +8,7 @@ import (
"github.com/hashicorp/hcl/v2/ext/userfunc"
"github.com/hashicorp/hcl/v2/hclsimple"
"github.com/hashicorp/hcl/v2/hclsyntax"
"github.com/hashicorp/hcl/v2/json"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/function"
"github.com/zclconf/go-cty/cty/function/stdlib"
@@ -22,26 +23,41 @@ var (
"and": stdlib.AndFunc,
"byteslen": stdlib.BytesLenFunc,
"bytesslice": stdlib.BytesSliceFunc,
"chomp": stdlib.ChompFunc,
"chunklist": stdlib.ChunklistFunc,
"ceil": stdlib.CeilFunc,
"csvdecode": stdlib.CSVDecodeFunc,
"coalesce": stdlib.CoalesceFunc,
"coalescelist": stdlib.CoalesceListFunc,
"concat": stdlib.ConcatFunc,
"contains": stdlib.ContainsFunc,
"distinct": stdlib.DistinctFunc,
"divide": stdlib.DivideFunc,
"element": stdlib.ElementFunc,
"equal": stdlib.EqualFunc,
"flatten": stdlib.FlattenFunc,
"floor": stdlib.FloorFunc,
"formatdate": stdlib.FormatDateFunc,
"format": stdlib.FormatFunc,
"formatlist": stdlib.FormatListFunc,
"greaterthan": stdlib.GreaterThanFunc,
"greaterthanorequalto": stdlib.GreaterThanOrEqualToFunc,
"hasindex": stdlib.HasIndexFunc,
"indent": stdlib.IndentFunc,
"index": stdlib.IndexFunc,
"int": stdlib.IntFunc,
"jsondecode": stdlib.JSONDecodeFunc,
"jsonencode": stdlib.JSONEncodeFunc,
"keys": stdlib.KeysFunc,
"join": stdlib.JoinFunc,
"length": stdlib.LengthFunc,
"lessthan": stdlib.LessThanFunc,
"lessthanorequalto": stdlib.LessThanOrEqualToFunc,
"log": stdlib.LogFunc,
"lookup": stdlib.LookupFunc,
"lower": stdlib.LowerFunc,
"max": stdlib.MaxFunc,
"merge": stdlib.MergeFunc,
"min": stdlib.MinFunc,
"modulo": stdlib.ModuloFunc,
"multiply": stdlib.MultiplyFunc,
@@ -49,19 +65,34 @@ var (
"notequal": stdlib.NotEqualFunc,
"not": stdlib.NotFunc,
"or": stdlib.OrFunc,
"parseint": stdlib.ParseIntFunc,
"pow": stdlib.PowFunc,
"range": stdlib.RangeFunc,
"regexall": stdlib.RegexAllFunc,
"regex": stdlib.RegexFunc,
"reverse": stdlib.ReverseFunc,
"reverselist": stdlib.ReverseListFunc,
"sethaselement": stdlib.SetHasElementFunc,
"setintersection": stdlib.SetIntersectionFunc,
"setsubtract": stdlib.SetSubtractFunc,
"setsymmetricdifference": stdlib.SetSymmetricDifferenceFunc,
"setunion": stdlib.SetUnionFunc,
"signum": stdlib.SignumFunc,
"slice": stdlib.SliceFunc,
"sort": stdlib.SortFunc,
"split": stdlib.SplitFunc,
"strlen": stdlib.StrlenFunc,
"substr": stdlib.SubstrFunc,
"subtract": stdlib.SubtractFunc,
"timeadd": stdlib.TimeAddFunc,
"title": stdlib.TitleFunc,
"trim": stdlib.TrimFunc,
"trimprefix": stdlib.TrimPrefixFunc,
"trimspace": stdlib.TrimSpaceFunc,
"trimsuffix": stdlib.TrimSuffixFunc,
"upper": stdlib.UpperFunc,
"values": stdlib.ValuesFunc,
"zipmap": stdlib.ZipmapFunc,
}
)
@@ -73,10 +104,20 @@ type staticConfig struct {
}
func ParseHCL(dt []byte, fn string) (*Config, error) {
// Decode user defined functions.
file, diags := hclsyntax.ParseConfig(dt, fn, hcl.Pos{Line: 1, Column: 1})
if diags.HasErrors() {
return nil, diags
// Decode user defined functions, first parsing as hcl and falling back to
// json, returning errors based on the file suffix.
file, hcldiags := hclsyntax.ParseConfig(dt, fn, hcl.Pos{Line: 1, Column: 1})
if hcldiags.HasErrors() {
var jsondiags hcl.Diagnostics
file, jsondiags = json.Parse(dt, fn)
if jsondiags.HasErrors() {
fnl := strings.ToLower(fn)
if strings.HasSuffix(fnl, ".json") {
return nil, jsondiags
} else {
return nil, hcldiags
}
}
}
userFunctions, _, diags := userfunc.DecodeUserFunctions(file.Body, "function", func() *hcl.EvalContext {

View File

@@ -68,6 +68,66 @@ func TestParseHCL(t *testing.T) {
require.Equal(t, map[string]string{"IAMCROSS": "true"}, c.Targets[3].Args)
})
t.Run("BasicInJSON", func(t *testing.T) {
dt := []byte(`
{
"group": {
"default": {
"targets": ["db", "webapp"]
}
},
"target": {
"db": {
"context": "./db",
"tags": ["docker.io/tonistiigi/db"]
},
"webapp": {
"context": "./dir",
"dockerfile": "Dockerfile-alternate",
"args": {
"buildno": "123"
}
},
"cross": {
"platforms": [
"linux/amd64",
"linux/arm64"
]
},
"webapp-plus": {
"inherits": ["webapp", "cross"],
"args": {
"IAMCROSS": "true"
}
}
}
}
`)
c, err := ParseHCL(dt, "docker-bake.json")
require.NoError(t, err)
require.Equal(t, 1, len(c.Groups))
require.Equal(t, "default", c.Groups[0].Name)
require.Equal(t, []string{"db", "webapp"}, c.Groups[0].Targets)
require.Equal(t, 4, len(c.Targets))
require.Equal(t, c.Targets[0].Name, "db")
require.Equal(t, "./db", *c.Targets[0].Context)
require.Equal(t, c.Targets[1].Name, "webapp")
require.Equal(t, 1, len(c.Targets[1].Args))
require.Equal(t, "123", c.Targets[1].Args["buildno"])
require.Equal(t, c.Targets[2].Name, "cross")
require.Equal(t, 2, len(c.Targets[2].Platforms))
require.Equal(t, []string{"linux/amd64", "linux/arm64"}, c.Targets[2].Platforms)
require.Equal(t, c.Targets[3].Name, "webapp-plus")
require.Equal(t, 1, len(c.Targets[3].Args))
require.Equal(t, map[string]string{"IAMCROSS": "true"}, c.Targets[3].Args)
})
t.Run("WithFunctions", func(t *testing.T) {
dt := []byte(`
group "default" {

View File

@@ -300,9 +300,6 @@ func toSolveOpt(d driver.Driver, multiDriver bool, opt Options, dl dockerLoadCal
}()
if opt.ImageIDFile != "" {
if multiDriver || len(opt.Platforms) != 0 {
return nil, nil, errors.Errorf("image ID file cannot be specified when building for multiple platforms")
}
// Avoid leaving a stale file if we eventually fail
if err := os.Remove(opt.ImageIDFile); err != nil && !os.IsNotExist(err) {
return nil, nil, errors.Wrap(err, "removing image ID file")
@@ -347,7 +344,7 @@ func toSolveOpt(d driver.Driver, multiDriver bool, opt Options, dl dockerLoadCal
case 1:
// valid
case 0:
if isDefaultMobyDriver {
if isDefaultMobyDriver && !noDefaultLoad() {
// backwards compat for docker driver only:
// this ensures the build results in a docker image.
opt.Exports = []client.ExportEntry{{Type: "image", Attrs: map[string]string{}}}
@@ -382,6 +379,15 @@ func toSolveOpt(d driver.Driver, multiDriver bool, opt Options, dl dockerLoadCal
}
}
// cacheonly is a fake exporter to opt out of default behaviors
exports := make([]client.ExportEntry, 0, len(opt.Exports))
for _, e := range opt.Exports {
if e.Type != "cacheonly" {
exports = append(exports, e)
}
}
opt.Exports = exports
// set up exporters
for i, e := range opt.Exports {
if (e.Type == "local" || e.Type == "tar") && opt.ImageIDFile != "" {
@@ -491,7 +497,7 @@ func Build(ctx context.Context, drivers []DriverInfo, opt map[string]Options, do
}
}
if noMobyDriver != nil {
if noMobyDriver != nil && !noDefaultLoad() {
for _, opt := range opt {
if len(opt.Exports) == 0 {
logrus.Warnf("No output specified for %s driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load", noMobyDriver.Factory().Name())
@@ -817,33 +823,50 @@ func newDockerLoader(ctx context.Context, d DockerAPI, name string, mw *progress
}
pr, pw := io.Pipe()
started := make(chan struct{})
w := &waitingWriter{
done := make(chan struct{})
ctx, cancel := context.WithCancel(ctx)
var w *waitingWriter
w = &waitingWriter{
PipeWriter: pw,
f: func() {
resp, err := c.ImageLoad(ctx, pr, false)
defer close(done)
if err != nil {
pr.CloseWithError(err)
w.mu.Lock()
w.err = err
w.mu.Unlock()
return
}
prog := mw.WithPrefix("", false)
close(started)
progress.FromReader(prog, "importing to docker", resp.Body)
},
started: started,
done: done,
cancel: cancel,
}
return w, func() {
pr.Close()
}, nil
}
func noDefaultLoad() bool {
v := os.Getenv("BUILDX_NO_DEFAULT_LOAD")
b, err := strconv.ParseBool(v)
if err != nil {
logrus.Warnf("invalid non-bool value for BUILDX_NO_DEFAULT_LOAD: %s", v)
}
return b
}
type waitingWriter struct {
*io.PipeWriter
f func()
once sync.Once
mu sync.Mutex
err error
started chan struct{}
f func()
once sync.Once
mu sync.Mutex
err error
done chan struct{}
cancel func()
}
func (w *waitingWriter) Write(dt []byte) (int, error) {
@@ -855,6 +878,11 @@ func (w *waitingWriter) Write(dt []byte) (int, error) {
func (w *waitingWriter) Close() error {
err := w.PipeWriter.Close()
<-w.started
<-w.done
if err == nil {
w.mu.Lock()
defer w.mu.Unlock()
return w.err
}
return err
}

View File

@@ -4,6 +4,7 @@ import (
"fmt"
"os"
"github.com/containerd/containerd/pkg/seed"
"github.com/docker/buildx/commands"
"github.com/docker/buildx/version"
"github.com/docker/cli/cli-plugins/manager"
@@ -24,6 +25,10 @@ import (
var experimental string
func init() {
seed.WithTimeAndRand()
}
func main() {
if os.Getenv("DOCKER_CLI_PLUGIN_ORIGINAL_CLI_COMMAND") == "" {
if len(os.Args) < 2 || os.Args[1] != manager.MetadataSubcommandName {

View File

@@ -107,6 +107,14 @@ func bakeCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
Aliases: []string{"f"},
Short: "Build from a file",
RunE: func(cmd *cobra.Command, args []string) error {
// reset to nil to avoid override is unset
if !cmd.Flags().Lookup("no-cache").Changed {
options.noCache = nil
}
if !cmd.Flags().Lookup("pull").Changed {
options.pull = nil
}
options.commonOptions.builder = rootOpts.builder
return runBake(dockerCli, args, options)
},
}

View File

@@ -7,7 +7,6 @@ import (
"strings"
"github.com/docker/buildx/build"
"github.com/docker/buildx/util/flagutil"
"github.com/docker/buildx/util/platformutil"
"github.com/docker/buildx/util/progress"
"github.com/docker/cli/cli"
@@ -305,9 +304,9 @@ func buildCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
}
func commonBuildFlags(options *commonOptions, flags *pflag.FlagSet) {
flags.Var(flagutil.Tristate(options.noCache), "no-cache", "Do not use cache when building the image")
options.noCache = flags.Bool("no-cache", false, "Do not use cache when building the image")
flags.StringVar(&options.progress, "progress", "auto", "Set type of progress output (auto, plain, tty). Use plain to show container output")
flags.Var(flagutil.Tristate(options.pull), "pull", "Always attempt to pull a newer version of the image")
options.pull = flags.Bool("pull", false, "Always attempt to pull a newer version of the image")
}
func listToMap(values []string, defaultEnv bool) map[string]string {

View File

@@ -74,7 +74,7 @@ func runInspect(dockerCli command.Cli, in inspectOptions) error {
ngi := &nginfo{ng: ng}
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
timeoutCtx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
err = loadNodeGroupData(timeoutCtx, dockerCli, ngi)

View File

@@ -30,7 +30,7 @@ func runLs(dockerCli command.Cli, in lsOptions) error {
}
defer release()
ctx, cancel := context.WithTimeout(ctx, 7*time.Second)
ctx, cancel := context.WithTimeout(ctx, 20*time.Second)
defer cancel()
ll, err := txn.List()

View File

@@ -143,7 +143,7 @@ func pruneCmd(dockerCli command.Cli, rootOpts *rootOptions) *cobra.Command {
flags.Var(&options.filter, "filter", "Provide filter values (e.g. 'until=24h')")
flags.Var(&options.keepStorage, "keep-storage", "Amount of disk space to keep for cache")
flags.BoolVar(&options.verbose, "verbose", false, "Provide a more verbose output")
flags.BoolVar(&options.force, "force", false, "Skip the warning messages")
flags.BoolVarP(&options.force, "force", "f", false, "Do not prompt for confirmation")
return cmd
}

View File

@@ -44,7 +44,7 @@ func (d *Driver) Bootstrap(ctx context.Context, l progress.Logger) error {
if err := d.start(ctx, sub); err != nil {
return err
}
if err := d.wait(ctx); err != nil {
if err := d.wait(ctx, sub); err != nil {
return err
}
return nil
@@ -106,7 +106,7 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
if err := d.start(ctx, l); err != nil {
return err
}
if err := d.wait(ctx); err != nil {
if err := d.wait(ctx, l); err != nil {
return err
}
return nil
@@ -116,17 +116,28 @@ func (d *Driver) create(ctx context.Context, l progress.SubLogger) error {
return nil
}
func (d *Driver) wait(ctx context.Context) error {
try := 0
func (d *Driver) wait(ctx context.Context, l progress.SubLogger) error {
try := 1
for {
if err := d.run(ctx, []string{"buildctl", "debug", "workers"}); err != nil {
if try > 10 {
bufStdout := &bytes.Buffer{}
bufStderr := &bytes.Buffer{}
if err := d.run(ctx, []string{"buildctl", "debug", "workers"}, bufStdout, bufStderr); err != nil {
if try > 15 {
if err != nil {
d.copyLogs(context.TODO(), l)
if bufStdout.Len() != 0 {
l.Log(1, bufStdout.Bytes())
}
if bufStderr.Len() != 0 {
l.Log(2, bufStderr.Bytes())
}
}
return err
}
select {
case <-ctx.Done():
return ctx.Err()
case <-time.After(time.Duration(100+try*20) * time.Millisecond):
case <-time.After(time.Duration(try*120) * time.Millisecond):
try++
continue
}
@@ -135,6 +146,21 @@ func (d *Driver) wait(ctx context.Context) error {
}
}
func (d *Driver) copyLogs(ctx context.Context, l progress.SubLogger) error {
rc, err := d.DockerAPI.ContainerLogs(ctx, d.Name, types.ContainerLogsOptions{
ShowStdout: true, ShowStderr: true,
})
if err != nil {
return err
}
stdout := &logWriter{logger: l, stream: 1}
stderr := &logWriter{logger: l, stream: 2}
if _, err := stdcopy.StdCopy(stdout, stderr, rc); err != nil {
return err
}
return rc.Close()
}
func (d *Driver) exec(ctx context.Context, cmd []string) (string, net.Conn, error) {
execConfig := types.ExecConfig{
Cmd: cmd,
@@ -159,12 +185,12 @@ func (d *Driver) exec(ctx context.Context, cmd []string) (string, net.Conn, erro
return execID, resp.Conn, nil
}
func (d *Driver) run(ctx context.Context, cmd []string) error {
func (d *Driver) run(ctx context.Context, cmd []string, stdout, stderr io.Writer) (err error) {
id, conn, err := d.exec(ctx, cmd)
if err != nil {
return err
}
if _, err := io.Copy(ioutil.Discard, conn); err != nil {
if _, err := stdcopy.StdCopy(stdout, stderr, conn); err != nil {
return err
}
conn.Close()
@@ -259,7 +285,7 @@ func (d *Driver) Features() map[driver.Feature]bool {
func demuxConn(c net.Conn) net.Conn {
pr, pw := io.Pipe()
// TODO: rewrite parser with Reader() to avoid goroutine switch
go stdcopy.StdCopy(pw, os.Stdout, c)
go stdcopy.StdCopy(pw, os.Stderr, c)
return &demux{
Conn: c,
Reader: pr,
@@ -297,3 +323,13 @@ func readFileToTar(fn string) (*bytes.Buffer, error) {
}
return buf, nil
}
type logWriter struct {
logger progress.SubLogger
stream int
}
func (l *logWriter) Write(dt []byte) (int, error) {
l.logger.Log(l.stream, dt)
return len(dt), nil
}
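The run and copyLogs changes above both rely on docker's stdcopy stream format. A self-contained sketch of that demultiplexing, building the multiplexed buffer locally instead of reading from a real container:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/docker/docker/pkg/stdcopy"
)

func main() {
	// Docker multiplexes container stdout and stderr into one stream;
	// build such a stream locally for the sketch.
	var muxed bytes.Buffer
	fmt.Fprintln(stdcopy.NewStdWriter(&muxed, stdcopy.Stdout), "buildkitd worker ready")
	fmt.Fprintln(stdcopy.NewStdWriter(&muxed, stdcopy.Stderr), "buildkitd warning")

	// stdcopy.StdCopy splits the stream back into separate writers, which is
	// what feeds the per-stream logWriter values in the driver.
	var stdout, stderr bytes.Buffer
	if _, err := stdcopy.StdCopy(&stdout, &stderr, &muxed); err != nil {
		panic(err)
	}
	fmt.Print("stdout: ", stdout.String())
	fmt.Print("stderr: ", stderr.String())
}
```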

go.mod (2 changed lines)
View File

@@ -55,7 +55,7 @@ require (
github.com/theupdateframework/notary v0.6.1 // indirect
github.com/tonistiigi/units v0.0.0-20180711220420-6950e57a87ea
github.com/xlab/handysort v0.0.0-20150421192137-fb3537ed64a1 // indirect
github.com/zclconf/go-cty v1.2.0
github.com/zclconf/go-cty v1.4.0
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e
gopkg.in/dancannon/gorethink.v3 v3.0.5 // indirect
gopkg.in/fatih/pool.v2 v2.0.0 // indirect

go.sum (2 changed lines)
View File

@@ -462,6 +462,8 @@ github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPS
github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg=
github.com/zclconf/go-cty v1.2.0 h1:sPHsy7ADcIZQP3vILvTjrh74ZA175TFP5vqiNK1UmlI=
github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8=
github.com/zclconf/go-cty v1.4.0 h1:+q+tmgyUB94HIdH/uVTIi/+kt3pt4sHwEZAcTyLoGsQ=
github.com/zclconf/go-cty v1.4.0/go.mod h1:nHzOclRkoj++EU9ZjSrZvRG0BXIWt8c7loYc0qXAFGQ=
go.etcd.io/bbolt v1.3.3 h1:MUGmc65QhB3pIlaQ5bB4LwqSj6GIonVJXpZiaKNyaKk=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=

View File

@@ -10,7 +10,7 @@ if [ -n "$TMUX_ENTRYPOINT" ]; then
tmux new-window
tmux a -t demo
else
( $dockerdCmd 2>/var/log/dockerd.log & )
( $dockerdCmd &>/var/log/dockerd.log & )
exec ash
fi

View File

@@ -1,36 +0,0 @@
package flagutil
import "strconv"
type tristate struct {
opt *bool
}
// Tristate is a tri-state boolean flag type.
// It can be set, but not unset.
func Tristate(opt *bool) tristate {
return tristate{opt}
}
func (t tristate) Type() string {
return "tristate"
}
func (t tristate) String() string {
if t.opt == nil {
return "(unset)"
}
if *t.opt {
return "true"
}
return "false"
}
func (t tristate) Set(s string) error {
b, err := strconv.ParseBool(s)
if err != nil {
return err
}
t.opt = &b
return nil
}

View File

@@ -1,95 +0,0 @@
Copyright (c) 2017 Martin Atkins
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
---------
Unicode table generation programs are under a separate copyright and license:
Copyright (c) 2014 Couchbase, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file
except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the
License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
either express or implied. See the License for the specific language governing permissions
and limitations under the License.
---------
Grapheme break data is provided as part of the Unicode character database,
copright 2016 Unicode, Inc, which is provided with the following license:
Unicode Data Files include all data files under the directories
http://www.unicode.org/Public/, http://www.unicode.org/reports/,
http://www.unicode.org/cldr/data/, http://source.icu-project.org/repos/icu/, and
http://www.unicode.org/utility/trac/browser/.
Unicode Data Files do not include PDF online code charts under the
directory http://www.unicode.org/Public/.
Software includes any source code published in the Unicode Standard
or under the directories
http://www.unicode.org/Public/, http://www.unicode.org/reports/,
http://www.unicode.org/cldr/data/, http://source.icu-project.org/repos/icu/, and
http://www.unicode.org/utility/trac/browser/.
NOTICE TO USER: Carefully read the following legal agreement.
BY DOWNLOADING, INSTALLING, COPYING OR OTHERWISE USING UNICODE INC.'S
DATA FILES ("DATA FILES"), AND/OR SOFTWARE ("SOFTWARE"),
YOU UNEQUIVOCALLY ACCEPT, AND AGREE TO BE BOUND BY, ALL OF THE
TERMS AND CONDITIONS OF THIS AGREEMENT.
IF YOU DO NOT AGREE, DO NOT DOWNLOAD, INSTALL, COPY, DISTRIBUTE OR USE
THE DATA FILES OR SOFTWARE.
COPYRIGHT AND PERMISSION NOTICE
Copyright © 1991-2017 Unicode, Inc. All rights reserved.
Distributed under the Terms of Use in http://www.unicode.org/copyright.html.
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Unicode data files and any associated documentation
(the "Data Files") or Unicode software and any associated documentation
(the "Software") to deal in the Data Files or Software
without restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, and/or sell copies of
the Data Files or Software, and to permit persons to whom the Data Files
or Software are furnished to do so, provided that either
(a) this copyright and permission notice appear with all copies
of the Data Files or Software, or
(b) this copyright and permission notice appear in associated
Documentation.
THE DATA FILES AND SOFTWARE ARE PROVIDED "AS IS", WITHOUT WARRANTY OF
ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT OF THIRD PARTY RIGHTS.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS
NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL
DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE,
DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER
TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THE DATA FILES OR SOFTWARE.
Except as contained in this notice, the name of a copyright holder
shall not be used in advertising or otherwise to promote the sale,
use or other dealings in these Data Files or Software without prior
written authorization of the copyright holder.

View File

@@ -1,30 +0,0 @@
package textseg
import (
"bufio"
"bytes"
)
// AllTokens is a utility that uses a bufio.SplitFunc to produce a slice of
// all of the recognized tokens in the given buffer.
func AllTokens(buf []byte, splitFunc bufio.SplitFunc) ([][]byte, error) {
scanner := bufio.NewScanner(bytes.NewReader(buf))
scanner.Split(splitFunc)
var ret [][]byte
for scanner.Scan() {
ret = append(ret, scanner.Bytes())
}
return ret, scanner.Err()
}
// TokenCount is a utility that uses a bufio.SplitFunc to count the number of
// recognized tokens in the given buffer.
func TokenCount(buf []byte, splitFunc bufio.SplitFunc) (int, error) {
scanner := bufio.NewScanner(bytes.NewReader(buf))
scanner.Split(splitFunc)
var ret int
for scanner.Scan() {
ret++
}
return ret, scanner.Err()
}

View File

@@ -1,7 +0,0 @@
package textseg
//go:generate go run make_tables.go -output tables.go
//go:generate go run make_test_tables.go -output tables_test.go
//go:generate ruby unicode2ragel.rb --url=http://www.unicode.org/Public/9.0.0/ucd/auxiliary/GraphemeBreakProperty.txt -m GraphemeCluster -p "Prepend,CR,LF,Control,Extend,Regional_Indicator,SpacingMark,L,V,T,LV,LVT,E_Base,E_Modifier,ZWJ,Glue_After_Zwj,E_Base_GAZ" -o grapheme_clusters_table.rl
//go:generate ragel -Z grapheme_clusters.rl
//go:generate gofmt -w grapheme_clusters.go

File diff suppressed because it is too large

View File

@@ -1,132 +0,0 @@
package textseg
import (
"errors"
"unicode/utf8"
)
// Generated from grapheme_clusters.rl. DO NOT EDIT
%%{
# (except you are actually in grapheme_clusters.rl here, so edit away!)
machine graphclust;
write data;
}%%
var Error = errors.New("invalid UTF8 text")
// ScanGraphemeClusters is a split function for bufio.Scanner that splits
// on grapheme cluster boundaries.
func ScanGraphemeClusters(data []byte, atEOF bool) (int, []byte, error) {
if len(data) == 0 {
return 0, nil, nil
}
// Ragel state
cs := 0 // Current State
p := 0 // "Pointer" into data
pe := len(data) // End-of-data "pointer"
ts := 0
te := 0
act := 0
eof := pe
// Make Go compiler happy
_ = ts
_ = te
_ = act
_ = eof
startPos := 0
endPos := 0
%%{
include GraphemeCluster "grapheme_clusters_table.rl";
action start {
startPos = p
}
action end {
endPos = p
}
action emit {
return endPos+1, data[startPos:endPos+1], nil
}
ZWJGlue = ZWJ (Glue_After_Zwj | E_Base_GAZ Extend* E_Modifier?)?;
AnyExtender = Extend | ZWJGlue | SpacingMark;
Extension = AnyExtender*;
ReplacementChar = (0xEF 0xBF 0xBD);
CRLFSeq = CR LF;
ControlSeq = Control | ReplacementChar;
HangulSeq = (
L+ (((LV? V+ | LVT) T*)?|LV?) |
LV V* T* |
V+ T* |
LVT T* |
T+
) Extension;
EmojiSeq = (E_Base | E_Base_GAZ) Extend* E_Modifier? Extension;
ZWJSeq = ZWJGlue Extension;
EmojiFlagSeq = Regional_Indicator Regional_Indicator? Extension;
UTF8Cont = 0x80 .. 0xBF;
AnyUTF8 = (
0x00..0x7F |
0xC0..0xDF . UTF8Cont |
0xE0..0xEF . UTF8Cont . UTF8Cont |
0xF0..0xF7 . UTF8Cont . UTF8Cont . UTF8Cont
);
# OtherSeq is any character that isn't at the start of one of the extended sequences above, followed by extension
OtherSeq = (AnyUTF8 - (CR|LF|Control|ReplacementChar|L|LV|V|LVT|T|E_Base|E_Base_GAZ|ZWJ|Regional_Indicator|Prepend)) Extension;
# PrependSeq is prepend followed by any of the other patterns above, except control characters which explicitly break
PrependSeq = Prepend+ (HangulSeq|EmojiSeq|ZWJSeq|EmojiFlagSeq|OtherSeq)?;
CRLFTok = CRLFSeq >start @end;
ControlTok = ControlSeq >start @end;
HangulTok = HangulSeq >start @end;
EmojiTok = EmojiSeq >start @end;
ZWJTok = ZWJSeq >start @end;
EmojiFlagTok = EmojiFlagSeq >start @end;
OtherTok = OtherSeq >start @end;
PrependTok = PrependSeq >start @end;
main := |*
CRLFTok => emit;
ControlTok => emit;
HangulTok => emit;
EmojiTok => emit;
ZWJTok => emit;
EmojiFlagTok => emit;
PrependTok => emit;
OtherTok => emit;
# any single valid UTF-8 character would also be valid per spec,
# but we'll handle that separately after the loop so we can deal
# with requesting more bytes if we're not at EOF.
*|;
write init;
write exec;
}%%
// If we fall out here then we were unable to complete a sequence.
// If we weren't able to complete a sequence then either we've
// reached the end of a partial buffer (so there's more data to come)
// or we have an isolated symbol that would normally be part of a
// grapheme cluster but has appeared in isolation here.
if !atEOF {
// Request more
return 0, nil, nil
}
// Just take the first UTF-8 sequence and return that.
_, seqLen := utf8.DecodeRune(data)
return seqLen, data[:seqLen], nil
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,335 +0,0 @@
#!/usr/bin/env ruby
#
# This scripted has been updated to accept more command-line arguments:
#
# -u, --url URL to process
# -m, --machine Machine name
# -p, --properties Properties to add to the machine
# -o, --output Write output to file
#
# Updated by: Marty Schoch <marty.schoch@gmail.com>
#
# This script uses the unicode spec to generate a Ragel state machine
# that recognizes unicode alphanumeric characters. It generates 5
# character classes: uupper, ulower, ualpha, udigit, and ualnum.
# Currently supported encodings are UTF-8 [default] and UCS-4.
#
# Usage: unicode2ragel.rb [options]
# -e, --encoding [ucs4 | utf8] Data encoding
# -h, --help Show this message
#
# This script was originally written as part of the Ferret search
# engine library.
#
# Author: Rakan El-Khalil <rakan@well.com>
require 'optparse'
require 'open-uri'
ENCODINGS = [ :utf8, :ucs4 ]
ALPHTYPES = { :utf8 => "byte", :ucs4 => "rune" }
DEFAULT_CHART_URL = "http://www.unicode.org/Public/5.1.0/ucd/DerivedCoreProperties.txt"
DEFAULT_MACHINE_NAME= "WChar"
###
# Display vars & default option
TOTAL_WIDTH = 80
RANGE_WIDTH = 23
@encoding = :utf8
@chart_url = DEFAULT_CHART_URL
machine_name = DEFAULT_MACHINE_NAME
properties = []
@output = $stdout
###
# Option parsing
cli_opts = OptionParser.new do |opts|
opts.on("-e", "--encoding [ucs4 | utf8]", "Data encoding") do |o|
@encoding = o.downcase.to_sym
end
opts.on("-h", "--help", "Show this message") do
puts opts
exit
end
opts.on("-u", "--url URL", "URL to process") do |o|
@chart_url = o
end
opts.on("-m", "--machine MACHINE_NAME", "Machine name") do |o|
machine_name = o
end
opts.on("-p", "--properties x,y,z", Array, "Properties to add to machine") do |o|
properties = o
end
opts.on("-o", "--output FILE", "output file") do |o|
@output = File.new(o, "w+")
end
end
cli_opts.parse(ARGV)
unless ENCODINGS.member? @encoding
puts "Invalid encoding: #{@encoding}"
puts cli_opts
exit
end
##
# Downloads the document at url and yields every alpha line's hex
# range and description.
def each_alpha( url, property )
open( url ) do |file|
file.each_line do |line|
next if line =~ /^#/;
next if line !~ /; #{property} #/;
range, description = line.split(/;/)
range.strip!
description.gsub!(/.*#/, '').strip!
if range =~ /\.\./
start, stop = range.split '..'
else start = stop = range
end
yield start.hex .. stop.hex, description
end
end
end
###
# Formats to hex at minimum width
def to_hex( n )
r = "%0X" % n
r = "0#{r}" unless (r.length % 2).zero?
r
end
###
# UCS4 is just a straight hex conversion of the unicode codepoint.
def to_ucs4( range )
rangestr = "0x" + to_hex(range.begin)
rangestr << "..0x" + to_hex(range.end) if range.begin != range.end
[ rangestr ]
end
##
# 0x00 - 0x7f -> 0zzzzzzz[7]
# 0x80 - 0x7ff -> 110yyyyy[5] 10zzzzzz[6]
# 0x800 - 0xffff -> 1110xxxx[4] 10yyyyyy[6] 10zzzzzz[6]
# 0x010000 - 0x10ffff -> 11110www[3] 10xxxxxx[6] 10yyyyyy[6] 10zzzzzz[6]
UTF8_BOUNDARIES = [0x7f, 0x7ff, 0xffff, 0x10ffff]
def to_utf8_enc( n )
r = 0
if n <= 0x7f
r = n
elsif n <= 0x7ff
y = 0xc0 | (n >> 6)
z = 0x80 | (n & 0x3f)
r = y << 8 | z
elsif n <= 0xffff
x = 0xe0 | (n >> 12)
y = 0x80 | (n >> 6) & 0x3f
z = 0x80 | n & 0x3f
r = x << 16 | y << 8 | z
elsif n <= 0x10ffff
w = 0xf0 | (n >> 18)
x = 0x80 | (n >> 12) & 0x3f
y = 0x80 | (n >> 6) & 0x3f
z = 0x80 | n & 0x3f
r = w << 24 | x << 16 | y << 8 | z
end
to_hex(r)
end
def from_utf8_enc( n )
n = n.hex
r = 0
if n <= 0x7f
r = n
elsif n <= 0xdfff
y = (n >> 8) & 0x1f
z = n & 0x3f
r = y << 6 | z
elsif n <= 0xefffff
x = (n >> 16) & 0x0f
y = (n >> 8) & 0x3f
z = n & 0x3f
r = x << 10 | y << 6 | z
elsif n <= 0xf7ffffff
w = (n >> 24) & 0x07
x = (n >> 16) & 0x3f
y = (n >> 8) & 0x3f
z = n & 0x3f
r = w << 18 | x << 12 | y << 6 | z
end
r
end
###
# Given a range, splits it up into ranges that can be continuously
# encoded into utf8. Eg: 0x00 .. 0xff => [0x00..0x7f, 0x80..0xff]
# This is not strictly needed since the current [5.1] unicode standard
# doesn't have ranges that straddle utf8 boundaries. This is included
# for completeness as there is no telling if that will ever change.
def utf8_ranges( range )
ranges = []
UTF8_BOUNDARIES.each do |max|
if range.begin <= max
if range.end <= max
ranges << range
return ranges
end
ranges << (range.begin .. max)
range = (max + 1) .. range.end
end
end
ranges
end
def build_range( start, stop )
size = start.size/2
left = size - 1
return [""] if size < 1
a = start[0..1]
b = stop[0..1]
###
# Shared prefix
if a == b
return build_range(start[2..-1], stop[2..-1]).map do |elt|
"0x#{a} " + elt
end
end
###
# Unshared prefix, end of run
return ["0x#{a}..0x#{b} "] if left.zero?
###
# Unshared prefix, not end of run
# Range can be 0x123456..0x56789A
# Which is equivalent to:
# 0x123456 .. 0x12FFFF
# 0x130000 .. 0x55FFFF
# 0x560000 .. 0x56789A
ret = []
ret << build_range(start, a + "FF" * left)
###
# Only generate middle range if need be.
if a.hex+1 != b.hex
max = to_hex(b.hex - 1)
max = "FF" if b == "FF"
ret << "0x#{to_hex(a.hex+1)}..0x#{max} " + "0x00..0xFF " * left
end
###
# Don't generate last range if it is covered by first range
ret << build_range(b + "00" * left, stop) unless b == "FF"
ret.flatten!
end
def to_utf8( range )
utf8_ranges( range ).map do |r|
begin_enc = to_utf8_enc(r.begin)
end_enc = to_utf8_enc(r.end)
build_range begin_enc, end_enc
end.flatten!
end
##
# Perform a 3-way comparison of the number of codepoints advertised by
# the unicode spec for the given range, the originally parsed range,
# and the resulting utf8 encoded range.
def count_codepoints( code )
code.split(' ').inject(1) do |acc, elt|
if elt =~ /0x(.+)\.\.0x(.+)/
if @encoding == :utf8
acc * (from_utf8_enc($2) - from_utf8_enc($1) + 1)
else
acc * ($2.hex - $1.hex + 1)
end
else
acc
end
end
end
def is_valid?( range, desc, codes )
spec_count = 1
spec_count = $1.to_i if desc =~ /\[(\d+)\]/
range_count = range.end - range.begin + 1
sum = codes.inject(0) { |acc, elt| acc + count_codepoints(elt) }
sum == spec_count and sum == range_count
end
##
# Generate the state maching to stdout
def generate_machine( name, property )
pipe = " "
@output.puts " #{name} = "
each_alpha( @chart_url, property ) do |range, desc|
codes = (@encoding == :ucs4) ? to_ucs4(range) : to_utf8(range)
#raise "Invalid encoding of range #{range}: #{codes.inspect}" unless
# is_valid? range, desc, codes
range_width = codes.map { |a| a.size }.max
range_width = RANGE_WIDTH if range_width < RANGE_WIDTH
desc_width = TOTAL_WIDTH - RANGE_WIDTH - 11
desc_width -= (range_width - RANGE_WIDTH) if range_width > RANGE_WIDTH
if desc.size > desc_width
desc = desc[0..desc_width - 4] + "..."
end
codes.each_with_index do |r, idx|
desc = "" unless idx.zero?
code = "%-#{range_width}s" % r
@output.puts " #{pipe} #{code} ##{desc}"
pipe = "|"
end
end
@output.puts " ;"
@output.puts ""
end
@output.puts <<EOF
# The following Ragel file was autogenerated with #{$0}
# from: #{@chart_url}
#
# It defines #{properties}.
#
# To use this, make sure that your alphtype is set to #{ALPHTYPES[@encoding]},
# and that your input is in #{@encoding}.
%%{
machine #{machine_name};
EOF
properties.each { |x| generate_machine( x, x ) }
@output.puts <<EOF
}%%
EOF

View File

@@ -1,19 +0,0 @@
package textseg
import "unicode/utf8"
// ScanGraphemeClusters is a split function for bufio.Scanner that splits
// on UTF8 sequence boundaries.
//
// This is included largely for completeness, since this behavior is already
// built in to Go when ranging over a string.
func ScanUTF8Sequences(data []byte, atEOF bool) (int, []byte, error) {
if len(data) == 0 {
return 0, nil, nil
}
r, seqLen := utf8.DecodeRune(data)
if r == utf8.RuneError && !atEOF {
return 0, nil, nil
}
return seqLen, data[:seqLen], nil
}

View File

@@ -0,0 +1,38 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package seed
import (
"math/rand"
"time"
)
// WithTimeAndRand seeds the global math rand generator with nanoseconds
// XOR'ed with a crypto component if available for uniqueness.
func WithTimeAndRand() {
var (
b [4]byte
u int64
)
tryReadRandom(b[:])
// Set higher 32 bits, bottom 32 will be set with nanos
u |= (int64(b[0]) << 56) | (int64(b[1]) << 48) | (int64(b[2]) << 40) | (int64(b[3]) << 32)
rand.Seed(u ^ time.Now().UnixNano())
}

View File

@@ -0,0 +1,24 @@
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package seed
import "golang.org/x/sys/unix"
func tryReadRandom(p []byte) {
// Ignore errors, just decreases uniqueness of seed
unix.Getrandom(p, unix.GRND_NONBLOCK)
}

View File

@@ -0,0 +1,28 @@
// +build !linux
/*
Copyright The containerd Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package seed
import (
"crypto/rand"
"io"
)
func tryReadRandom(p []byte) {
io.ReadFull(rand.Reader, p)
}

View File

@@ -138,6 +138,15 @@ func getConversionKnown(in cty.Type, out cty.Type, unsafe bool) conversion {
outEty := out.ElementType()
return conversionObjectToMap(in, outEty, unsafe)
case out.IsObjectType() && in.IsMapType():
if !unsafe {
// Converting a map to an object is an "unsafe" conversion,
// because we don't know if all the map keys will correspond to
// object attributes.
return nil
}
return conversionMapToObject(in, out, unsafe)
case in.IsCapsuleType() || out.IsCapsuleType():
if !unsafe {
// Capsule types can only participate in "unsafe" conversions,

View File

@@ -15,18 +15,18 @@ func conversionCollectionToList(ety cty.Type, conv conversion) conversion {
return func(val cty.Value, path cty.Path) (cty.Value, error) {
elems := make([]cty.Value, 0, val.LengthInt())
i := int64(0)
path = append(path, nil)
elemPath := append(path.Copy(), nil)
it := val.ElementIterator()
for it.Next() {
_, val := it.Element()
var err error
path[len(path)-1] = cty.IndexStep{
elemPath[len(elemPath)-1] = cty.IndexStep{
Key: cty.NumberIntVal(i),
}
if conv != nil {
val, err = conv(val, path)
val, err = conv(val, elemPath)
if err != nil {
return cty.NilVal, err
}
@@ -37,6 +37,9 @@ func conversionCollectionToList(ety cty.Type, conv conversion) conversion {
}
if len(elems) == 0 {
if ety == cty.DynamicPseudoType {
ety = val.Type().ElementType()
}
return cty.ListValEmpty(ety), nil
}
@@ -55,18 +58,18 @@ func conversionCollectionToSet(ety cty.Type, conv conversion) conversion {
return func(val cty.Value, path cty.Path) (cty.Value, error) {
elems := make([]cty.Value, 0, val.LengthInt())
i := int64(0)
path = append(path, nil)
elemPath := append(path.Copy(), nil)
it := val.ElementIterator()
for it.Next() {
_, val := it.Element()
var err error
path[len(path)-1] = cty.IndexStep{
elemPath[len(elemPath)-1] = cty.IndexStep{
Key: cty.NumberIntVal(i),
}
if conv != nil {
val, err = conv(val, path)
val, err = conv(val, elemPath)
if err != nil {
return cty.NilVal, err
}
@@ -77,6 +80,11 @@ func conversionCollectionToSet(ety cty.Type, conv conversion) conversion {
}
if len(elems) == 0 {
// Prefer a concrete type over a dynamic type when returning an
// empty set
if ety == cty.DynamicPseudoType {
ety = val.Type().ElementType()
}
return cty.SetValEmpty(ety), nil
}
@@ -93,13 +101,13 @@ func conversionCollectionToSet(ety cty.Type, conv conversion) conversion {
func conversionCollectionToMap(ety cty.Type, conv conversion) conversion {
return func(val cty.Value, path cty.Path) (cty.Value, error) {
elems := make(map[string]cty.Value, 0)
path = append(path, nil)
elemPath := append(path.Copy(), nil)
it := val.ElementIterator()
for it.Next() {
key, val := it.Element()
var err error
path[len(path)-1] = cty.IndexStep{
elemPath[len(elemPath)-1] = cty.IndexStep{
Key: key,
}
@@ -107,11 +115,11 @@ func conversionCollectionToMap(ety cty.Type, conv conversion) conversion {
if err != nil {
// Should never happen, because keys can only be numbers or
// strings and both can convert to string.
return cty.DynamicVal, path.NewErrorf("cannot convert key type %s to string for map", key.Type().FriendlyName())
return cty.DynamicVal, elemPath.NewErrorf("cannot convert key type %s to string for map", key.Type().FriendlyName())
}
if conv != nil {
val, err = conv(val, path)
val, err = conv(val, elemPath)
if err != nil {
return cty.NilVal, err
}
@@ -121,9 +129,25 @@ func conversionCollectionToMap(ety cty.Type, conv conversion) conversion {
}
if len(elems) == 0 {
// Prefer a concrete type over a dynamic type when returning an
// empty map
if ety == cty.DynamicPseudoType {
ety = val.Type().ElementType()
}
return cty.MapValEmpty(ety), nil
}
if ety.IsCollectionType() || ety.IsObjectType() {
var err error
if elems, err = conversionUnifyCollectionElements(elems, path, false); err != nil {
return cty.NilVal, err
}
}
if err := conversionCheckMapElementTypes(elems, path); err != nil {
return cty.NilVal, err
}
return cty.MapVal(elems), nil
}
}
@@ -171,20 +195,20 @@ func conversionTupleToSet(tupleType cty.Type, listEty cty.Type, unsafe bool) con
// element conversions in elemConvs
return func(val cty.Value, path cty.Path) (cty.Value, error) {
elems := make([]cty.Value, 0, len(elemConvs))
path = append(path, nil)
elemPath := append(path.Copy(), nil)
i := int64(0)
it := val.ElementIterator()
for it.Next() {
_, val := it.Element()
var err error
path[len(path)-1] = cty.IndexStep{
elemPath[len(elemPath)-1] = cty.IndexStep{
Key: cty.NumberIntVal(i),
}
conv := elemConvs[i]
if conv != nil {
val, err = conv(val, path)
val, err = conv(val, elemPath)
if err != nil {
return cty.NilVal, err
}
@@ -241,20 +265,20 @@ func conversionTupleToList(tupleType cty.Type, listEty cty.Type, unsafe bool) co
// element conversions in elemConvs
return func(val cty.Value, path cty.Path) (cty.Value, error) {
elems := make([]cty.Value, 0, len(elemConvs))
path = append(path, nil)
elemPath := append(path.Copy(), nil)
i := int64(0)
it := val.ElementIterator()
for it.Next() {
_, val := it.Element()
var err error
path[len(path)-1] = cty.IndexStep{
elemPath[len(elemPath)-1] = cty.IndexStep{
Key: cty.NumberIntVal(i),
}
conv := elemConvs[i]
if conv != nil {
val, err = conv(val, path)
val, err = conv(val, elemPath)
if err != nil {
return cty.NilVal, err
}
@@ -315,19 +339,19 @@ func conversionObjectToMap(objectType cty.Type, mapEty cty.Type, unsafe bool) co
// element conversions in elemConvs
return func(val cty.Value, path cty.Path) (cty.Value, error) {
elems := make(map[string]cty.Value, len(elemConvs))
path = append(path, nil)
elemPath := append(path.Copy(), nil)
it := val.ElementIterator()
for it.Next() {
name, val := it.Element()
var err error
path[len(path)-1] = cty.IndexStep{
elemPath[len(elemPath)-1] = cty.IndexStep{
Key: name,
}
conv := elemConvs[name.AsString()]
if conv != nil {
val, err = conv(val, path)
val, err = conv(val, elemPath)
if err != nil {
return cty.NilVal, err
}
@@ -335,6 +359,130 @@ func conversionObjectToMap(objectType cty.Type, mapEty cty.Type, unsafe bool) co
elems[name.AsString()] = val
}
if mapEty.IsCollectionType() || mapEty.IsObjectType() {
var err error
if elems, err = conversionUnifyCollectionElements(elems, path, unsafe); err != nil {
return cty.NilVal, err
}
}
if err := conversionCheckMapElementTypes(elems, path); err != nil {
return cty.NilVal, err
}
return cty.MapVal(elems), nil
}
}
// conversionMapToObject returns a conversion that will take a value of the
// given map type and return an object of the given type. The object attribute
// types must all be compatible with the map element type.
//
// Will panic if the given mapType and objType are not maps and objects
// respectively.
func conversionMapToObject(mapType cty.Type, objType cty.Type, unsafe bool) conversion {
objectAtys := objType.AttributeTypes()
mapEty := mapType.ElementType()
elemConvs := make(map[string]conversion, len(objectAtys))
for name, objectAty := range objectAtys {
if objectAty.Equals(mapEty) {
// no conversion required
continue
}
elemConvs[name] = getConversion(mapEty, objectAty, unsafe)
if elemConvs[name] == nil {
// If any of our element conversions are impossible, then the our
// whole conversion is impossible.
return nil
}
}
// If we fall out here then a conversion is possible, using the
// element conversions in elemConvs
return func(val cty.Value, path cty.Path) (cty.Value, error) {
elems := make(map[string]cty.Value, len(elemConvs))
elemPath := append(path.Copy(), nil)
it := val.ElementIterator()
for it.Next() {
name, val := it.Element()
// if there is no corresponding attribute, we skip this key
if _, ok := objectAtys[name.AsString()]; !ok {
continue
}
var err error
elemPath[len(elemPath)-1] = cty.IndexStep{
Key: name,
}
conv := elemConvs[name.AsString()]
if conv != nil {
val, err = conv(val, elemPath)
if err != nil {
return cty.NilVal, err
}
}
elems[name.AsString()] = val
}
return cty.ObjectVal(elems), nil
}
}
func conversionUnifyCollectionElements(elems map[string]cty.Value, path cty.Path, unsafe bool) (map[string]cty.Value, error) {
elemTypes := make([]cty.Type, 0, len(elems))
for _, elem := range elems {
elemTypes = append(elemTypes, elem.Type())
}
unifiedType, _ := unify(elemTypes, unsafe)
if unifiedType == cty.NilType {
}
unifiedElems := make(map[string]cty.Value)
elemPath := append(path.Copy(), nil)
for name, elem := range elems {
if elem.Type().Equals(unifiedType) {
unifiedElems[name] = elem
continue
}
conv := getConversion(elem.Type(), unifiedType, unsafe)
if conv == nil {
}
elemPath[len(elemPath)-1] = cty.IndexStep{
Key: cty.StringVal(name),
}
val, err := conv(elem, elemPath)
if err != nil {
return nil, err
}
unifiedElems[name] = val
}
return unifiedElems, nil
}
func conversionCheckMapElementTypes(elems map[string]cty.Value, path cty.Path) error {
elementType := cty.NilType
elemPath := append(path.Copy(), nil)
for name, elem := range elems {
if elementType == cty.NilType {
elementType = elem.Type()
continue
}
if !elementType.Equals(elem.Type()) {
elemPath[len(elemPath)-1] = cty.IndexStep{
Key: cty.StringVal(name),
}
return elemPath.NewErrorf("%s is required", elementType.FriendlyName())
}
}
return nil
}

View File

@@ -28,11 +28,14 @@ func unify(types []cty.Type, unsafe bool) (cty.Type, []Conversion) {
// a subset of that type, which would be a much less useful conversion for
// unification purposes.
{
mapCt := 0
objectCt := 0
tupleCt := 0
dynamicCt := 0
for _, ty := range types {
switch {
case ty.IsMapType():
mapCt++
case ty.IsObjectType():
objectCt++
case ty.IsTupleType():
@@ -44,6 +47,8 @@ func unify(types []cty.Type, unsafe bool) (cty.Type, []Conversion) {
}
}
switch {
case mapCt > 0 && (mapCt+dynamicCt) == len(types):
return unifyMapTypes(types, unsafe, dynamicCt > 0)
case objectCt > 0 && (objectCt+dynamicCt) == len(types):
return unifyObjectTypes(types, unsafe, dynamicCt > 0)
case tupleCt > 0 && (tupleCt+dynamicCt) == len(types):
@@ -95,6 +100,44 @@ Preferences:
return cty.NilType, nil
}
func unifyMapTypes(types []cty.Type, unsafe bool, hasDynamic bool) (cty.Type, []Conversion) {
// If we had any dynamic types in the input here then we can't predict
// what path we'll take through here once these become known types, so
// we'll conservatively produce DynamicVal for these.
if hasDynamic {
return unifyAllAsDynamic(types)
}
elemTypes := make([]cty.Type, 0, len(types))
for _, ty := range types {
elemTypes = append(elemTypes, ty.ElementType())
}
retElemType, _ := unify(elemTypes, unsafe)
if retElemType == cty.NilType {
return cty.NilType, nil
}
retTy := cty.Map(retElemType)
conversions := make([]Conversion, len(types))
for i, ty := range types {
if ty.Equals(retTy) {
continue
}
if unsafe {
conversions[i] = GetConversionUnsafe(ty, retTy)
} else {
conversions[i] = GetConversion(ty, retTy)
}
if conversions[i] == nil {
// Shouldn't be reachable, since we were able to unify
return cty.NilType, nil
}
}
return retTy, conversions
}
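A minimal sketch of how the new map case surfaces through the public convert.Unify entry point (hypothetical example, not part of this changeset):

package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func main() {
	types := []cty.Type{
		cty.Map(cty.String),
		cty.Map(cty.Number),
	}

	// The element types are unified first; number converts safely to
	// string, so the unified type is map(string).
	unified, convs := convert.Unify(types)
	fmt.Println(unified.FriendlyName(), len(convs))
}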
func unifyObjectTypes(types []cty.Type, unsafe bool, hasDynamic bool) (cty.Type, []Conversion) {
// If we had any dynamic types in the input here then we can't predict
// what path we'll take through here once these become known types, so

File diff suppressed because it is too large.


@@ -0,0 +1,87 @@
package stdlib
import (
"strconv"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"
"github.com/zclconf/go-cty/cty/function"
)
// MakeToFunc constructs a "to..." function, like "tostring", which converts
// its argument to a specific type or type kind.
//
// The given type wantTy can be any type constraint that cty's "convert" package
// would accept. In particular, this means that you can pass
// cty.List(cty.DynamicPseudoType) to mean "list of any single type", which
// will then cause cty to attempt to unify all of the element types when given
// a tuple.
func MakeToFunc(wantTy cty.Type) function.Function {
return function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "v",
// We use DynamicPseudoType rather than wantTy here so that
// all values will pass through the function API verbatim and
// we can handle the conversion logic within the Type and
// Impl functions. This allows us to customize the error
// messages to be more appropriate for an explicit type
// conversion, whereas the cty function system produces
// messages aimed at _implicit_ type conversions.
Type: cty.DynamicPseudoType,
AllowNull: true,
},
},
Type: func(args []cty.Value) (cty.Type, error) {
gotTy := args[0].Type()
if gotTy.Equals(wantTy) {
return wantTy, nil
}
conv := convert.GetConversionUnsafe(args[0].Type(), wantTy)
if conv == nil {
// We'll use some specialized errors for some trickier cases,
// but most we can handle in a simple way.
switch {
case gotTy.IsTupleType() && wantTy.IsTupleType():
return cty.NilType, function.NewArgErrorf(0, "incompatible tuple type for conversion: %s", convert.MismatchMessage(gotTy, wantTy))
case gotTy.IsObjectType() && wantTy.IsObjectType():
return cty.NilType, function.NewArgErrorf(0, "incompatible object type for conversion: %s", convert.MismatchMessage(gotTy, wantTy))
default:
return cty.NilType, function.NewArgErrorf(0, "cannot convert %s to %s", gotTy.FriendlyName(), wantTy.FriendlyNameForConstraint())
}
}
// If a conversion is available then everything is fine.
return wantTy, nil
},
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
// We didn't set "AllowUnknown" on our argument, so it is guaranteed
// to be known here but may still be null.
ret, err := convert.Convert(args[0], retType)
if err != nil {
// Because we used GetConversionUnsafe above, conversion can
// still potentially fail in here. For example, if the user
// asks to convert the string "a" to bool then we'll
// optimistically permit it during type checking but fail here
// once we note that the value isn't either "true" or "false".
gotTy := args[0].Type()
switch {
case gotTy == cty.String && wantTy == cty.Bool:
what := "string"
if !args[0].IsNull() {
what = strconv.Quote(args[0].AsString())
}
return cty.NilVal, function.NewArgErrorf(0, `cannot convert %s to bool; only the strings "true" or "false" are allowed`, what)
case gotTy == cty.String && wantTy == cty.Number:
what := "string"
if !args[0].IsNull() {
what = strconv.Quote(args[0].AsString())
}
return cty.NilVal, function.NewArgErrorf(0, `cannot convert %s to number; given string must be a decimal representation of a number`, what)
default:
return cty.NilVal, function.NewArgErrorf(0, "cannot convert %s to %s", gotTy.FriendlyName(), wantTy.FriendlyNameForConstraint())
}
}
return ret, nil
},
})
}
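A short usage sketch for MakeToFunc (hypothetical, not part of this changeset), showing both a successful explicit conversion and one of the customized error messages:

package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	// Build a "tostring"-style function and call it through the cty
	// function API.
	toString := stdlib.MakeToFunc(cty.String)
	v, err := toString.Call([]cty.Value{cty.NumberIntVal(5)})
	fmt.Println(v, err) // v is cty.StringVal("5") when err is nil

	// A conversion that type-checks optimistically but fails at call time,
	// producing the specialized string-to-bool error message.
	toBool := stdlib.MakeToFunc(cty.Bool)
	_, err = toBool.Call([]cty.Value{cty.StringVal("a")})
	fmt.Println(err)
}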


@@ -203,6 +203,33 @@ var FormatDateFunc = function.New(&function.Spec{
},
})
// TimeAddFunc is a function that adds a duration to a timestamp, returning a new timestamp.
var TimeAddFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "timestamp",
Type: cty.String,
},
{
Name: "duration",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
ts, err := parseTimestamp(args[0].AsString())
if err != nil {
return cty.UnknownVal(cty.String), err
}
duration, err := time.ParseDuration(args[1].AsString())
if err != nil {
return cty.UnknownVal(cty.String), err
}
return cty.StringVal(ts.Add(duration).Format(time.RFC3339)), nil
},
})
// FormatDate reformats a timestamp given in RFC3339 syntax into another time
// syntax defined by a given format string.
//
@@ -383,3 +410,20 @@ func splitDateFormat(data []byte, atEOF bool) (advance int, token []byte, err er
func startsDateFormatVerb(b byte) bool {
return (b >= 'a' && b <= 'z') || (b >= 'A' && b <= 'Z')
}
// TimeAdd adds a duration to a timestamp, returning a new timestamp.
//
// In the HCL language, timestamps are conventionally represented as
// strings using RFC 3339 "Date and Time format" syntax. TimeAdd requires
// the timestamp argument to be a string conforming to this syntax.
//
// `duration` is a string representation of a time difference, consisting of
// sequences of number and unit pairs, like `"1.5h"` or `"1h30m"`. The accepted
// units are `"ns"`, `"us"` (or `"µs"`), `"ms"`, `"s"`, `"m"`, and `"h"`. The first
// number may be negative to indicate a negative duration, like `"-2h5m"`.
//
// The result is a string, also in RFC 3339 format, representing the result
// of adding the given duration to the given timestamp.
func TimeAdd(timestamp cty.Value, duration cty.Value) (cty.Value, error) {
return TimeAddFunc.Call([]cty.Value{timestamp, duration})
}
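A quick sketch of calling the new TimeAdd wrapper (hypothetical, not part of this changeset):

package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	ts := cty.StringVal("2020-08-21T00:00:00Z")
	dur := cty.StringVal("1h30m")

	v, err := stdlib.TimeAdd(ts, dur)
	if err != nil {
		panic(err)
	}
	fmt.Println(v.AsString()) // 2020-08-21T01:30:00Z
}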


@@ -6,7 +6,7 @@ import (
"math/big"
"strings"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/apparentlymart/go-textseg/v12/textseg"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/convert"


@@ -2,10 +2,12 @@ package stdlib
import (
"fmt"
"math"
"math/big"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/function"
"github.com/zclconf/go-cty/cty/gocty"
)
var AbsoluteFunc = function.New(&function.Spec{
@@ -358,6 +360,188 @@ var IntFunc = function.New(&function.Spec{
},
})
// CeilFunc is a function that returns the closest whole number greater
// than or equal to the given value.
var CeilFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "num",
Type: cty.Number,
},
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
var val float64
if err := gocty.FromCtyValue(args[0], &val); err != nil {
return cty.UnknownVal(cty.String), err
}
if math.IsInf(val, 0) {
return cty.NumberFloatVal(val), nil
}
return cty.NumberIntVal(int64(math.Ceil(val))), nil
},
})
// FloorFunc is a function that returns the closest whole number lesser
// than or equal to the given value.
var FloorFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "num",
Type: cty.Number,
},
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
var val float64
if err := gocty.FromCtyValue(args[0], &val); err != nil {
return cty.UnknownVal(cty.String), err
}
if math.IsInf(val, 0) {
return cty.NumberFloatVal(val), nil
}
return cty.NumberIntVal(int64(math.Floor(val))), nil
},
})
// LogFunc is a function that returns the logarithm of a given number in a given base.
var LogFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "num",
Type: cty.Number,
},
{
Name: "base",
Type: cty.Number,
},
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
var num float64
if err := gocty.FromCtyValue(args[0], &num); err != nil {
return cty.UnknownVal(cty.String), err
}
var base float64
if err := gocty.FromCtyValue(args[1], &base); err != nil {
return cty.UnknownVal(cty.String), err
}
return cty.NumberFloatVal(math.Log(num) / math.Log(base)), nil
},
})
// PowFunc is a function that returns the given number raised to the given power.
var PowFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "num",
Type: cty.Number,
},
{
Name: "power",
Type: cty.Number,
},
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
var num float64
if err := gocty.FromCtyValue(args[0], &num); err != nil {
return cty.UnknownVal(cty.String), err
}
var power float64
if err := gocty.FromCtyValue(args[1], &power); err != nil {
return cty.UnknownVal(cty.String), err
}
return cty.NumberFloatVal(math.Pow(num, power)), nil
},
})
// SignumFunc is a function that determines the sign of a number, returning a
// number between -1 and 1 to represent the sign.
var SignumFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "num",
Type: cty.Number,
},
},
Type: function.StaticReturnType(cty.Number),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
var num int
if err := gocty.FromCtyValue(args[0], &num); err != nil {
return cty.UnknownVal(cty.String), err
}
switch {
case num < 0:
return cty.NumberIntVal(-1), nil
case num > 0:
return cty.NumberIntVal(+1), nil
default:
return cty.NumberIntVal(0), nil
}
},
})
// ParseIntFunc is a function that parses a string argument and returns an integer of the specified base.
var ParseIntFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "number",
Type: cty.DynamicPseudoType,
},
{
Name: "base",
Type: cty.Number,
},
},
Type: func(args []cty.Value) (cty.Type, error) {
if !args[0].Type().Equals(cty.String) {
return cty.Number, function.NewArgErrorf(0, "first argument must be a string, not %s", args[0].Type().FriendlyName())
}
return cty.Number, nil
},
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
var numstr string
var base int
var err error
if err = gocty.FromCtyValue(args[0], &numstr); err != nil {
return cty.UnknownVal(cty.String), function.NewArgError(0, err)
}
if err = gocty.FromCtyValue(args[1], &base); err != nil {
return cty.UnknownVal(cty.Number), function.NewArgError(1, err)
}
if base < 2 || base > 62 {
return cty.UnknownVal(cty.Number), function.NewArgErrorf(
1,
"base must be a whole number between 2 and 62 inclusive",
)
}
num, ok := (&big.Int{}).SetString(numstr, base)
if !ok {
return cty.UnknownVal(cty.Number), function.NewArgErrorf(
0,
"cannot parse %q as a base %d integer",
numstr,
base,
)
}
parsedNum := cty.NumberVal((&big.Float{}).SetInt(num))
return parsedNum, nil
},
})
// Absolute returns the magnitude of the given number, without its sign.
// That is, it turns negative values into positive values.
func Absolute(num cty.Value) (cty.Value, error) {
@@ -436,3 +620,34 @@ func Int(num cty.Value) (cty.Value, error) {
}
return IntFunc.Call([]cty.Value{num})
}
// Ceil returns the closest whole number greater than or equal to the given value.
func Ceil(num cty.Value) (cty.Value, error) {
return CeilFunc.Call([]cty.Value{num})
}
// Floor returns the closest whole number lesser than or equal to the given value.
func Floor(num cty.Value) (cty.Value, error) {
return FloorFunc.Call([]cty.Value{num})
}
// Log returns the logarithm of a given number in a given base.
func Log(num, base cty.Value) (cty.Value, error) {
return LogFunc.Call([]cty.Value{num, base})
}
// Pow returns the given number raised to the given power.
func Pow(num, power cty.Value) (cty.Value, error) {
return PowFunc.Call([]cty.Value{num, power})
}
// Signum determines the sign of a number, returning a number between -1 and
// 1 to represent the sign.
func Signum(num cty.Value) (cty.Value, error) {
return SignumFunc.Call([]cty.Value{num})
}
// ParseInt parses a string argument and returns an integer of the specified base.
func ParseInt(num cty.Value, base cty.Value) (cty.Value, error) {
return ParseIntFunc.Call([]cty.Value{num, base})
}
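A brief sketch exercising the new math wrappers (hypothetical, not part of this changeset); the trailing comments describe the resulting values:

package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	ceil, _ := stdlib.Ceil(cty.NumberFloatVal(1.2))                 // 2
	floor, _ := stdlib.Floor(cty.NumberFloatVal(1.8))               // 1
	logv, _ := stdlib.Log(cty.NumberIntVal(8), cty.NumberIntVal(2)) // ≈ 3
	pow, _ := stdlib.Pow(cty.NumberIntVal(2), cty.NumberIntVal(10)) // 1024
	sign, _ := stdlib.Signum(cty.NumberIntVal(-5))                  // -1
	parsed, _ := stdlib.ParseInt(cty.StringVal("ff"), cty.NumberIntVal(16)) // 255

	fmt.Println(ceil, floor, logv, pow, sign, parsed)
}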


@@ -1,9 +1,13 @@
package stdlib
import (
"fmt"
"regexp"
"sort"
"strings"
"github.com/apparentlymart/go-textseg/textseg"
"github.com/apparentlymart/go-textseg/v12/textseg"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/function"
"github.com/zclconf/go-cty/cty/gocty"
@@ -140,8 +144,14 @@ var SubstrFunc = function.New(&function.Spec{
}
offset += totalLen
} else if length == 0 {
// Short circuit here, after error checks, because if a
// string of length 0 has been requested it will always
// be the empty string
return cty.StringVal(""), nil
}
sub := in
pos := 0
var i int
@@ -187,6 +197,252 @@ var SubstrFunc = function.New(&function.Spec{
},
})
var JoinFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "separator",
Type: cty.String,
},
},
VarParam: &function.Parameter{
Name: "lists",
Type: cty.List(cty.String),
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
sep := args[0].AsString()
listVals := args[1:]
if len(listVals) < 1 {
return cty.UnknownVal(cty.String), fmt.Errorf("at least one list is required")
}
l := 0
for _, list := range listVals {
if !list.IsWhollyKnown() {
return cty.UnknownVal(cty.String), nil
}
l += list.LengthInt()
}
items := make([]string, 0, l)
for ai, list := range listVals {
ei := 0
for it := list.ElementIterator(); it.Next(); {
_, val := it.Element()
if val.IsNull() {
if len(listVals) > 1 {
return cty.UnknownVal(cty.String), function.NewArgErrorf(ai+1, "element %d of list %d is null; cannot concatenate null values", ei, ai+1)
}
return cty.UnknownVal(cty.String), function.NewArgErrorf(ai+1, "element %d is null; cannot concatenate null values", ei)
}
items = append(items, val.AsString())
ei++
}
}
return cty.StringVal(strings.Join(items, sep)), nil
},
})
var SortFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "list",
Type: cty.List(cty.String),
},
},
Type: function.StaticReturnType(cty.List(cty.String)),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
listVal := args[0]
if !listVal.IsWhollyKnown() {
// If some of the element values aren't known yet then we
// can't yet predict the order of the result.
return cty.UnknownVal(retType), nil
}
if listVal.LengthInt() == 0 { // Easy path
return listVal, nil
}
list := make([]string, 0, listVal.LengthInt())
for it := listVal.ElementIterator(); it.Next(); {
iv, v := it.Element()
if v.IsNull() {
return cty.UnknownVal(retType), fmt.Errorf("given list element %s is null; a null string cannot be sorted", iv.AsBigFloat().String())
}
list = append(list, v.AsString())
}
sort.Strings(list)
retVals := make([]cty.Value, len(list))
for i, s := range list {
retVals[i] = cty.StringVal(s)
}
return cty.ListVal(retVals), nil
},
})
var SplitFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "separator",
Type: cty.String,
},
{
Name: "str",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.List(cty.String)),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
sep := args[0].AsString()
str := args[1].AsString()
elems := strings.Split(str, sep)
elemVals := make([]cty.Value, len(elems))
for i, s := range elems {
elemVals[i] = cty.StringVal(s)
}
if len(elemVals) == 0 {
return cty.ListValEmpty(cty.String), nil
}
return cty.ListVal(elemVals), nil
},
})
// ChompFunc is a function that removes newline characters at the end of a
// string.
var ChompFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "str",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
newlines := regexp.MustCompile(`(?:\r\n?|\n)*\z`)
return cty.StringVal(newlines.ReplaceAllString(args[0].AsString(), "")), nil
},
})
// IndentFunc is a function that adds a given number of spaces to the
// beginnings of all but the first line in a given multi-line string.
var IndentFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "spaces",
Type: cty.Number,
},
{
Name: "str",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
var spaces int
if err := gocty.FromCtyValue(args[0], &spaces); err != nil {
return cty.UnknownVal(cty.String), err
}
data := args[1].AsString()
pad := strings.Repeat(" ", spaces)
return cty.StringVal(strings.Replace(data, "\n", "\n"+pad, -1)), nil
},
})
// TitleFunc is a function that converts the first letter of each word in the
// given string to uppercase.
var TitleFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "str",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
return cty.StringVal(strings.Title(args[0].AsString())), nil
},
})
// TrimSpaceFunc is a function that removes any space characters from the start
// and end of the given string.
var TrimSpaceFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "str",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
return cty.StringVal(strings.TrimSpace(args[0].AsString())), nil
},
})
// TrimFunc is a function that removes the specified characters from the start
// and end of the given string.
var TrimFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "str",
Type: cty.String,
},
{
Name: "cutset",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
str := args[0].AsString()
cutset := args[1].AsString()
return cty.StringVal(strings.Trim(str, cutset)), nil
},
})
// TrimPrefixFunc is a function that removes the specified prefix from the
// start of the given string.
var TrimPrefixFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "str",
Type: cty.String,
},
{
Name: "prefix",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
str := args[0].AsString()
prefix := args[1].AsString()
return cty.StringVal(strings.TrimPrefix(str, prefix)), nil
},
})
// TrimSuffixFunc is a function that removes the specified suffix from the
// end of the given string.
var TrimSuffixFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "str",
Type: cty.String,
},
{
Name: "suffix",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
str := args[0].AsString()
cutset := args[1].AsString()
return cty.StringVal(strings.TrimSuffix(str, cutset)), nil
},
})
// Upper is a Function that converts a given string to uppercase.
func Upper(str cty.Value) (cty.Value, error) {
return UpperFunc.Call([]cty.Value{str})
@@ -232,3 +488,60 @@ func Strlen(str cty.Value) (cty.Value, error) {
func Substr(str cty.Value, offset cty.Value, length cty.Value) (cty.Value, error) {
return SubstrFunc.Call([]cty.Value{str, offset, length})
}
// Join concatenates together the string elements of one or more lists with a
// given separator.
func Join(sep cty.Value, lists ...cty.Value) (cty.Value, error) {
args := make([]cty.Value, len(lists)+1)
args[0] = sep
copy(args[1:], lists)
return JoinFunc.Call(args)
}
// Sort re-orders the elements of a given list of strings so that they are
// in ascending lexicographical order.
func Sort(list cty.Value) (cty.Value, error) {
return SortFunc.Call([]cty.Value{list})
}
// Split divides a given string by a given separator, returning a list of
// strings containing the characters between the separator sequences.
func Split(sep, str cty.Value) (cty.Value, error) {
return SplitFunc.Call([]cty.Value{sep, str})
}
// Chomp removes newline characters at the end of a string.
func Chomp(str cty.Value) (cty.Value, error) {
return ChompFunc.Call([]cty.Value{str})
}
// Indent adds a given number of spaces to the beginnings of all but the first
// line in a given multi-line string.
func Indent(spaces, str cty.Value) (cty.Value, error) {
return IndentFunc.Call([]cty.Value{spaces, str})
}
// Title converts the first letter of each word in the given string to uppercase.
func Title(str cty.Value) (cty.Value, error) {
return TitleFunc.Call([]cty.Value{str})
}
// TrimSpace removes any space characters from the start and end of the given string.
func TrimSpace(str cty.Value) (cty.Value, error) {
return TrimSpaceFunc.Call([]cty.Value{str})
}
// Trim removes the specified characters from the start and end of the given string.
func Trim(str, cutset cty.Value) (cty.Value, error) {
return TrimFunc.Call([]cty.Value{str, cutset})
}
// TrimPrefix removes the specified prefix from the start of the given string.
func TrimPrefix(str, prefix cty.Value) (cty.Value, error) {
return TrimPrefixFunc.Call([]cty.Value{str, prefix})
}
// TrimSuffix removes the specified suffix from the end of the given string.
func TrimSuffix(str, suffix cty.Value) (cty.Value, error) {
return TrimSuffixFunc.Call([]cty.Value{str, suffix})
}
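A short sketch of the new string wrappers in action (hypothetical, not part of this changeset); the trailing comments describe the resulting values:

package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	list := cty.ListVal([]cty.Value{
		cty.StringVal("c"), cty.StringVal("a"), cty.StringVal("b"),
	})

	sorted, _ := stdlib.Sort(list)                       // ["a","b","c"]
	joined, _ := stdlib.Join(cty.StringVal(","), sorted) // "a,b,c"
	split, _ := stdlib.Split(cty.StringVal(","), joined) // ["a","b","c"]
	title, _ := stdlib.Title(cty.StringVal("docker buildx"))                    // "Docker Buildx"
	trimmed, _ := stdlib.TrimPrefix(cty.StringVal("v0.4.2"), cty.StringVal("v")) // "0.4.2"

	fmt.Println(sorted, joined, split, title, trimmed)
}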


@@ -0,0 +1,80 @@
package stdlib
import (
"regexp"
"strings"
"github.com/zclconf/go-cty/cty"
"github.com/zclconf/go-cty/cty/function"
)
// ReplaceFunc is a function that searches a given string for another given
// substring, and replaces each occurrence with a given replacement string.
// The substr argument is a simple string.
var ReplaceFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "str",
Type: cty.String,
},
{
Name: "substr",
Type: cty.String,
},
{
Name: "replace",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
str := args[0].AsString()
substr := args[1].AsString()
replace := args[2].AsString()
return cty.StringVal(strings.Replace(str, substr, replace, -1)), nil
},
})
// RegexReplaceFunc is a function that searches a given string for another
// given substring, and replaces each occurrence with a given replacement
// string. The substr argument must be a valid regular expression.
var RegexReplaceFunc = function.New(&function.Spec{
Params: []function.Parameter{
{
Name: "str",
Type: cty.String,
},
{
Name: "substr",
Type: cty.String,
},
{
Name: "replace",
Type: cty.String,
},
},
Type: function.StaticReturnType(cty.String),
Impl: func(args []cty.Value, retType cty.Type) (ret cty.Value, err error) {
str := args[0].AsString()
substr := args[1].AsString()
replace := args[2].AsString()
re, err := regexp.Compile(substr)
if err != nil {
return cty.UnknownVal(cty.String), err
}
return cty.StringVal(re.ReplaceAllString(str, replace)), nil
},
})
// Replace searches a given string for another given substring,
// and replaces all occurrences with a given replacement string.
func Replace(str, substr, replace cty.Value) (cty.Value, error) {
return ReplaceFunc.Call([]cty.Value{str, substr, replace})
}
// RegexReplace searches a given string for another given substring, and
// replaces each occurrence with a given replacement string. The substr
// argument must be a valid regular expression.
func RegexReplace(str, substr, replace cty.Value) (cty.Value, error) {
return RegexReplaceFunc.Call([]cty.Value{str, substr, replace})
}
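And a sketch of the two replace helpers (hypothetical, not part of this changeset); RegexReplace compiles its pattern at call time, so an invalid expression is reported as an error:

package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	v, _ := stdlib.Replace(
		cty.StringVal("hello world"),
		cty.StringVal("world"),
		cty.StringVal("buildx"),
	)
	fmt.Println(v.AsString()) // hello buildx

	re, err := stdlib.RegexReplace(
		cty.StringVal("release-1.2.3"),
		cty.StringVal(`\d+\.\d+\.\d+`),
		cty.StringVal("x.y.z"),
	)
	fmt.Println(re, err) // re is cty.StringVal("release-x.y.z") when err is nil
}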


@@ -200,7 +200,7 @@ func (val Value) Unmark() (Value, ValueMarks) {
func (val Value) UnmarkDeep() (Value, ValueMarks) {
marks := make(ValueMarks)
ret, _ := Transform(val, func(_ Path, v Value) (Value, error) {
unmarkedV, valueMarks := val.Unmark()
unmarkedV, valueMarks := v.Unmark()
for m, s := range valueMarks {
marks[m] = s
}
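The one-line change above makes UnmarkDeep collect marks from each transformed element v rather than repeatedly unmarking the outer value. A rough sketch of the behavior this affects (hypothetical, not part of this changeset):

package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A list whose element carries a mark (e.g. "sensitive").
	val := cty.ListVal([]cty.Value{
		cty.StringVal("secret").Mark("sensitive"),
	})

	// UnmarkDeep should strip marks at every level and report them all;
	// previously the nested element's mark was never collected.
	unmarked, marks := val.UnmarkDeep()
	fmt.Println(unmarked, marks)
}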


@@ -51,11 +51,31 @@ func (p Path) Index(v Value) Path {
return ret
}
// IndexInt is a typed convenience method for Index.
func (p Path) IndexInt(v int) Path {
return p.Index(NumberIntVal(int64(v)))
}
// IndexString is a typed convenience method for Index.
func (p Path) IndexString(v string) Path {
return p.Index(StringVal(v))
}
// IndexPath is a convenience method to start a new Path with an IndexStep.
func IndexPath(v Value) Path {
return Path{}.Index(v)
}
// IndexIntPath is a typed convenience method for IndexPath.
func IndexIntPath(v int) Path {
return IndexPath(NumberIntVal(int64(v)))
}
// IndexStringPath is a typed convenience method for IndexPath.
func IndexStringPath(v string) Path {
return IndexPath(StringVal(v))
}
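A tiny sketch of the new typed path helpers (hypothetical, not part of this changeset):

package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Equivalent to cty.IndexPath(cty.NumberIntVal(0)).GetAttr("name"),
	// but without wrapping the index values by hand.
	p1 := cty.IndexIntPath(0).GetAttr("name")
	p2 := cty.IndexStringPath("build").IndexInt(2)

	fmt.Println(p1, p2)
}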
// GetAttr returns a new Path that is the receiver with a GetAttrStep appended
// to the end.
//

vendor/modules.txt

@@ -30,8 +30,6 @@ github.com/agext/levenshtein
# github.com/agl/ed25519 v0.0.0-20170116200512-5312a6153412
github.com/agl/ed25519
github.com/agl/ed25519/edwards25519
# github.com/apparentlymart/go-textseg v1.0.0
github.com/apparentlymart/go-textseg/textseg
# github.com/apparentlymart/go-textseg/v12 v12.0.0
github.com/apparentlymart/go-textseg/v12/textseg
# github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973
@@ -51,6 +49,7 @@ github.com/containerd/containerd/filters
github.com/containerd/containerd/images
github.com/containerd/containerd/labels
github.com/containerd/containerd/log
github.com/containerd/containerd/pkg/seed
github.com/containerd/containerd/platforms
github.com/containerd/containerd/reference
github.com/containerd/containerd/remotes
@@ -342,7 +341,7 @@ github.com/xeipuuv/gojsonpointer
github.com/xeipuuv/gojsonreference
# github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f
github.com/xeipuuv/gojsonschema
# github.com/zclconf/go-cty v1.2.0
# github.com/zclconf/go-cty v1.4.0
github.com/zclconf/go-cty/cty
github.com/zclconf/go-cty/cty/convert
github.com/zclconf/go-cty/cty/function