Mirror of https://gitea.com/Lydanne/buildx.git (synced 2025-07-09 21:17:09 +08:00)
vendor/github.com/hashicorp/hcl/v2/CHANGELOG.md (generated, vendored, new file)
@@ -0,0 +1,66 @@
# HCL Changelog

## v2.4.0 (Apr 13, 2020)

### Enhancements

* The Unicode data tables that HCL uses to produce user-perceived "column" positions in diagnostics and other source ranges are now updated to Unicode 12.0.0, which will cause HCL to produce more accurate column numbers for combining characters introduced to Unicode since Unicode 9.0.0.

### Bugs Fixed

* json: Fix panic when parsing malformed JSON. ([#358](https://github.com/hashicorp/hcl/pull/358))

## v2.3.0 (Jan 3, 2020)

### Enhancements

* ext/tryfunc: Optional functions `try` and `can` to include in your `hcl.EvalContext` when evaluating expressions, which allow users to make decisions based on the success of expressions. ([#330](https://github.com/hashicorp/hcl/pull/330))
* ext/typeexpr: Now has an optional function `convert` which you can include in your `hcl.EvalContext` when evaluating expressions, allowing users to convert values to specific type constraints using the type constraint expression syntax. ([#330](https://github.com/hashicorp/hcl/pull/330))
* ext/typeexpr: A new `cty` capsule type `typeexpr.TypeConstraintType` which, when used as either a type constraint for a function parameter or as a type constraint for a `hcldec` attribute specification, will cause the given expression to be interpreted as a type constraint expression rather than a value expression. ([#330](https://github.com/hashicorp/hcl/pull/330))
* ext/customdecode: An optional extension that allows overriding the static decoding behavior for expressions either in function arguments or `hcldec` attribute specifications. ([#330](https://github.com/hashicorp/hcl/pull/330))
* ext/customdecode: New `cty` capsule types `customdecode.ExpressionType` and `customdecode.ExpressionClosureType` which, when used as either a type constraint for a function parameter or as a type constraint for a `hcldec` attribute specification, will cause the given expression (and, for the closure type, also the `hcl.EvalContext` it was evaluated in) to be captured for later analysis, rather than immediately evaluated. ([#330](https://github.com/hashicorp/hcl/pull/330))

## v2.2.0 (Dec 11, 2019)

### Enhancements

* hcldec: Attribute evaluation (as part of `AttrSpec` or `BlockAttrsSpec`) now captures expression evaluation metadata in any errors it produces during type conversions, allowing for better feedback in calling applications that are able to make use of this metadata when printing diagnostic messages. ([#329](https://github.com/hashicorp/hcl/pull/329))

### Bugs Fixed

* hclsyntax: `IndexExpr`, `SplatExpr`, and `RelativeTraversalExpr` will now report a source range that covers all of their child expression nodes. Previously they would report only the operator part, such as `["foo"]`, `[*]`, or `.foo`, which was problematic for callers using source ranges for code analysis. ([#328](https://github.com/hashicorp/hcl/pull/328))
* hclwrite: Parser will no longer panic when the input includes index, splat, or relative traversal syntax. ([#328](https://github.com/hashicorp/hcl/pull/328))

## v2.1.0 (Nov 19, 2019)

### Enhancements

* gohcl: When decoding into a struct value with some fields already populated, those values will be retained if not explicitly overwritten in the given HCL body, with similar overriding/merging behavior as `json.Unmarshal` in the Go standard library.
* hclwrite: New interface to set the expression for an attribute to be a raw token sequence, with no special processing. This has some caveats, so if you intend to use it please refer to the godoc comments. ([#320](https://github.com/hashicorp/hcl/pull/320))

### Bugs Fixed

* hclwrite: The `Body.Blocks` method was returning the blocks in an undefined order, rather than preserving the order of declaration in the source input. ([#313](https://github.com/hashicorp/hcl/pull/313))
* hclwrite: The `TokensForTraversal` function (and thus in turn the `Body.SetAttributeTraversal` method) was not correctly handling index steps in traversals, and was thus producing invalid results. ([#319](https://github.com/hashicorp/hcl/pull/319))

## v2.0.0 (Oct 2, 2019)

Initial release of HCL 2, which is a new implementation combining the HCL 1
language with the HIL expression language to produce a single language
supporting both nested configuration structures and arbitrary expressions.

HCL 2 has an entirely new Go library API and so is _not_ a drop-in upgrade
relative to HCL 1. It's possible to import both versions of HCL into a single
program using Go's _semantic import versioning_ mechanism:

```
import (
    hcl1 "github.com/hashicorp/hcl"
    hcl2 "github.com/hashicorp/hcl/v2"
)
```

---

Prior to v2.0.0 there was not a curated changelog. Consult the git history
from the latest v1.x.x tag for information on the changes to HCL 1.
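The v2.3.0 entries above describe opt-in extension functions that the calling application must wire into its own `hcl.EvalContext`. The following is a minimal sketch of that wiring, assuming the `tryfunc.TryFunc`/`tryfunc.CanFunc` exports from `ext/tryfunc`; the `settings` variable and file name are illustrative only.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/ext/tryfunc"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

func main() {
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"settings": cty.ObjectVal(map[string]cty.Value{
				"name": cty.StringVal("example"),
			}),
		},
		Functions: map[string]function.Function{
			"try": tryfunc.TryFunc,
			"can": tryfunc.CanFunc,
		},
	}

	// try returns the first argument that evaluates without an error.
	src := `try(settings.missing, settings.name, "fallback")`
	expr, diags := hclsyntax.ParseExpression([]byte(src), "example.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags)
	}
	val, moreDiags := expr.Value(ctx)
	if moreDiags.HasErrors() {
		panic(moreDiags)
	}
	fmt.Println(val.AsString()) // "example"
}
```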
vendor/github.com/hashicorp/hcl/v2/LICENSE (generated, vendored, new file)
@@ -0,0 +1,353 @@
|
||||
Mozilla Public License, version 2.0
|
||||
|
||||
1. Definitions
|
||||
|
||||
1.1. “Contributor”
|
||||
|
||||
means each individual or legal entity that creates, contributes to the
|
||||
creation of, or owns Covered Software.
|
||||
|
||||
1.2. “Contributor Version”
|
||||
|
||||
means the combination of the Contributions of others (if any) used by a
|
||||
Contributor and that particular Contributor’s Contribution.
|
||||
|
||||
1.3. “Contribution”
|
||||
|
||||
means Covered Software of a particular Contributor.
|
||||
|
||||
1.4. “Covered Software”
|
||||
|
||||
means Source Code Form to which the initial Contributor has attached the
|
||||
notice in Exhibit A, the Executable Form of such Source Code Form, and
|
||||
Modifications of such Source Code Form, in each case including portions
|
||||
thereof.
|
||||
|
||||
1.5. “Incompatible With Secondary Licenses”
|
||||
means
|
||||
|
||||
a. that the initial Contributor has attached the notice described in
|
||||
Exhibit B to the Covered Software; or
|
||||
|
||||
b. that the Covered Software was made available under the terms of version
|
||||
1.1 or earlier of the License, but not also under the terms of a
|
||||
Secondary License.
|
||||
|
||||
1.6. “Executable Form”
|
||||
|
||||
means any form of the work other than Source Code Form.
|
||||
|
||||
1.7. “Larger Work”
|
||||
|
||||
means a work that combines Covered Software with other material, in a separate
|
||||
file or files, that is not Covered Software.
|
||||
|
||||
1.8. “License”
|
||||
|
||||
means this document.
|
||||
|
||||
1.9. “Licensable”
|
||||
|
||||
means having the right to grant, to the maximum extent possible, whether at the
|
||||
time of the initial grant or subsequently, any and all of the rights conveyed by
|
||||
this License.
|
||||
|
||||
1.10. “Modifications”
|
||||
|
||||
means any of the following:
|
||||
|
||||
a. any file in Source Code Form that results from an addition to, deletion
|
||||
from, or modification of the contents of Covered Software; or
|
||||
|
||||
b. any new file in Source Code Form that contains any Covered Software.
|
||||
|
||||
1.11. “Patent Claims” of a Contributor
|
||||
|
||||
means any patent claim(s), including without limitation, method, process,
|
||||
and apparatus claims, in any patent Licensable by such Contributor that
|
||||
would be infringed, but for the grant of the License, by the making,
|
||||
using, selling, offering for sale, having made, import, or transfer of
|
||||
either its Contributions or its Contributor Version.
|
||||
|
||||
1.12. “Secondary License”
|
||||
|
||||
means either the GNU General Public License, Version 2.0, the GNU Lesser
|
||||
General Public License, Version 2.1, the GNU Affero General Public
|
||||
License, Version 3.0, or any later versions of those licenses.
|
||||
|
||||
1.13. “Source Code Form”
|
||||
|
||||
means the form of the work preferred for making modifications.
|
||||
|
||||
1.14. “You” (or “Your”)
|
||||
|
||||
means an individual or a legal entity exercising rights under this
|
||||
License. For legal entities, “You” includes any entity that controls, is
|
||||
controlled by, or is under common control with You. For purposes of this
|
||||
definition, “control” means (a) the power, direct or indirect, to cause
|
||||
the direction or management of such entity, whether by contract or
|
||||
otherwise, or (b) ownership of more than fifty percent (50%) of the
|
||||
outstanding shares or beneficial ownership of such entity.
|
||||
|
||||
|
||||
2. License Grants and Conditions
|
||||
|
||||
2.1. Grants
|
||||
|
||||
Each Contributor hereby grants You a world-wide, royalty-free,
|
||||
non-exclusive license:
|
||||
|
||||
a. under intellectual property rights (other than patent or trademark)
|
||||
Licensable by such Contributor to use, reproduce, make available,
|
||||
modify, display, perform, distribute, and otherwise exploit its
|
||||
Contributions, either on an unmodified basis, with Modifications, or as
|
||||
part of a Larger Work; and
|
||||
|
||||
b. under Patent Claims of such Contributor to make, use, sell, offer for
|
||||
sale, have made, import, and otherwise transfer either its Contributions
|
||||
or its Contributor Version.
|
||||
|
||||
2.2. Effective Date
|
||||
|
||||
The licenses granted in Section 2.1 with respect to any Contribution become
|
||||
effective for each Contribution on the date the Contributor first distributes
|
||||
such Contribution.
|
||||
|
||||
2.3. Limitations on Grant Scope
|
||||
|
||||
The licenses granted in this Section 2 are the only rights granted under this
|
||||
License. No additional rights or licenses will be implied from the distribution
|
||||
or licensing of Covered Software under this License. Notwithstanding Section
|
||||
2.1(b) above, no patent license is granted by a Contributor:
|
||||
|
||||
a. for any code that a Contributor has removed from Covered Software; or
|
||||
|
||||
b. for infringements caused by: (i) Your and any other third party’s
|
||||
modifications of Covered Software, or (ii) the combination of its
|
||||
Contributions with other software (except as part of its Contributor
|
||||
Version); or
|
||||
|
||||
c. under Patent Claims infringed by Covered Software in the absence of its
|
||||
Contributions.
|
||||
|
||||
This License does not grant any rights in the trademarks, service marks, or
|
||||
logos of any Contributor (except as may be necessary to comply with the
|
||||
notice requirements in Section 3.4).
|
||||
|
||||
2.4. Subsequent Licenses
|
||||
|
||||
No Contributor makes additional grants as a result of Your choice to
|
||||
distribute the Covered Software under a subsequent version of this License
|
||||
(see Section 10.2) or under the terms of a Secondary License (if permitted
|
||||
under the terms of Section 3.3).
|
||||
|
||||
2.5. Representation
|
||||
|
||||
Each Contributor represents that the Contributor believes its Contributions
|
||||
are its original creation(s) or it has sufficient rights to grant the
|
||||
rights to its Contributions conveyed by this License.
|
||||
|
||||
2.6. Fair Use
|
||||
|
||||
This License is not intended to limit any rights You have under applicable
|
||||
copyright doctrines of fair use, fair dealing, or other equivalents.
|
||||
|
||||
2.7. Conditions
|
||||
|
||||
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in
|
||||
Section 2.1.
|
||||
|
||||
|
||||
3. Responsibilities
|
||||
|
||||
3.1. Distribution of Source Form
|
||||
|
||||
All distribution of Covered Software in Source Code Form, including any
|
||||
Modifications that You create or to which You contribute, must be under the
|
||||
terms of this License. You must inform recipients that the Source Code Form
|
||||
of the Covered Software is governed by the terms of this License, and how
|
||||
they can obtain a copy of this License. You may not attempt to alter or
|
||||
restrict the recipients’ rights in the Source Code Form.
|
||||
|
||||
3.2. Distribution of Executable Form
|
||||
|
||||
If You distribute Covered Software in Executable Form then:
|
||||
|
||||
a. such Covered Software must also be made available in Source Code Form,
|
||||
as described in Section 3.1, and You must inform recipients of the
|
||||
Executable Form how they can obtain a copy of such Source Code Form by
|
||||
reasonable means in a timely manner, at a charge no more than the cost
|
||||
of distribution to the recipient; and
|
||||
|
||||
b. You may distribute such Executable Form under the terms of this License,
|
||||
or sublicense it under different terms, provided that the license for
|
||||
the Executable Form does not attempt to limit or alter the recipients’
|
||||
rights in the Source Code Form under this License.
|
||||
|
||||
3.3. Distribution of a Larger Work
|
||||
|
||||
You may create and distribute a Larger Work under terms of Your choice,
|
||||
provided that You also comply with the requirements of this License for the
|
||||
Covered Software. If the Larger Work is a combination of Covered Software
|
||||
with a work governed by one or more Secondary Licenses, and the Covered
|
||||
Software is not Incompatible With Secondary Licenses, this License permits
|
||||
You to additionally distribute such Covered Software under the terms of
|
||||
such Secondary License(s), so that the recipient of the Larger Work may, at
|
||||
their option, further distribute the Covered Software under the terms of
|
||||
either this License or such Secondary License(s).
|
||||
|
||||
3.4. Notices
|
||||
|
||||
You may not remove or alter the substance of any license notices (including
|
||||
copyright notices, patent notices, disclaimers of warranty, or limitations
|
||||
of liability) contained within the Source Code Form of the Covered
|
||||
Software, except that You may alter any license notices to the extent
|
||||
required to remedy known factual inaccuracies.
|
||||
|
||||
3.5. Application of Additional Terms
|
||||
|
||||
You may choose to offer, and to charge a fee for, warranty, support,
|
||||
indemnity or liability obligations to one or more recipients of Covered
|
||||
Software. However, You may do so only on Your own behalf, and not on behalf
|
||||
of any Contributor. You must make it absolutely clear that any such
|
||||
warranty, support, indemnity, or liability obligation is offered by You
|
||||
alone, and You hereby agree to indemnify every Contributor for any
|
||||
liability incurred by such Contributor as a result of warranty, support,
|
||||
indemnity or liability terms You offer. You may include additional
|
||||
disclaimers of warranty and limitations of liability specific to any
|
||||
jurisdiction.
|
||||
|
||||
4. Inability to Comply Due to Statute or Regulation
|
||||
|
||||
If it is impossible for You to comply with any of the terms of this License
|
||||
with respect to some or all of the Covered Software due to statute, judicial
|
||||
order, or regulation then You must: (a) comply with the terms of this License
|
||||
to the maximum extent possible; and (b) describe the limitations and the code
|
||||
they affect. Such description must be placed in a text file included with all
|
||||
distributions of the Covered Software under this License. Except to the
|
||||
extent prohibited by statute or regulation, such description must be
|
||||
sufficiently detailed for a recipient of ordinary skill to be able to
|
||||
understand it.
|
||||
|
||||
5. Termination
|
||||
|
||||
5.1. The rights granted under this License will terminate automatically if You
|
||||
fail to comply with any of its terms. However, if You become compliant,
|
||||
then the rights granted under this License from a particular Contributor
|
||||
are reinstated (a) provisionally, unless and until such Contributor
|
||||
explicitly and finally terminates Your grants, and (b) on an ongoing basis,
|
||||
if such Contributor fails to notify You of the non-compliance by some
|
||||
reasonable means prior to 60 days after You have come back into compliance.
|
||||
Moreover, Your grants from a particular Contributor are reinstated on an
|
||||
ongoing basis if such Contributor notifies You of the non-compliance by
|
||||
some reasonable means, this is the first time You have received notice of
|
||||
non-compliance with this License from such Contributor, and You become
|
||||
compliant prior to 30 days after Your receipt of the notice.
|
||||
|
||||
5.2. If You initiate litigation against any entity by asserting a patent
|
||||
infringement claim (excluding declaratory judgment actions, counter-claims,
|
||||
and cross-claims) alleging that a Contributor Version directly or
|
||||
indirectly infringes any patent, then the rights granted to You by any and
|
||||
all Contributors for the Covered Software under Section 2.1 of this License
|
||||
shall terminate.
|
||||
|
||||
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user
|
||||
license agreements (excluding distributors and resellers) which have been
|
||||
validly granted by You or Your distributors under this License prior to
|
||||
termination shall survive termination.
|
||||
|
||||
6. Disclaimer of Warranty
|
||||
|
||||
Covered Software is provided under this License on an “as is” basis, without
|
||||
warranty of any kind, either expressed, implied, or statutory, including,
|
||||
without limitation, warranties that the Covered Software is free of defects,
|
||||
merchantable, fit for a particular purpose or non-infringing. The entire
|
||||
risk as to the quality and performance of the Covered Software is with You.
|
||||
Should any Covered Software prove defective in any respect, You (not any
|
||||
Contributor) assume the cost of any necessary servicing, repair, or
|
||||
correction. This disclaimer of warranty constitutes an essential part of this
|
||||
License. No use of any Covered Software is authorized under this License
|
||||
except under this disclaimer.
|
||||
|
||||
7. Limitation of Liability
|
||||
|
||||
Under no circumstances and under no legal theory, whether tort (including
|
||||
negligence), contract, or otherwise, shall any Contributor, or anyone who
|
||||
distributes Covered Software as permitted above, be liable to You for any
|
||||
direct, indirect, special, incidental, or consequential damages of any
|
||||
character including, without limitation, damages for lost profits, loss of
|
||||
goodwill, work stoppage, computer failure or malfunction, or any and all
|
||||
other commercial damages or losses, even if such party shall have been
|
||||
informed of the possibility of such damages. This limitation of liability
|
||||
shall not apply to liability for death or personal injury resulting from such
|
||||
party’s negligence to the extent applicable law prohibits such limitation.
|
||||
Some jurisdictions do not allow the exclusion or limitation of incidental or
|
||||
consequential damages, so this exclusion and limitation may not apply to You.
|
||||
|
||||
8. Litigation
|
||||
|
||||
Any litigation relating to this License may be brought only in the courts of
|
||||
a jurisdiction where the defendant maintains its principal place of business
|
||||
and such litigation shall be governed by laws of that jurisdiction, without
|
||||
reference to its conflict-of-law provisions. Nothing in this Section shall
|
||||
prevent a party’s ability to bring cross-claims or counter-claims.
|
||||
|
||||
9. Miscellaneous
|
||||
|
||||
This License represents the complete agreement concerning the subject matter
|
||||
hereof. If any provision of this License is held to be unenforceable, such
|
||||
provision shall be reformed only to the extent necessary to make it
|
||||
enforceable. Any law or regulation which provides that the language of a
|
||||
contract shall be construed against the drafter shall not be used to construe
|
||||
this License against a Contributor.
|
||||
|
||||
|
||||
10. Versions of the License
|
||||
|
||||
10.1. New Versions
|
||||
|
||||
Mozilla Foundation is the license steward. Except as provided in Section
|
||||
10.3, no one other than the license steward has the right to modify or
|
||||
publish new versions of this License. Each version will be given a
|
||||
distinguishing version number.
|
||||
|
||||
10.2. Effect of New Versions
|
||||
|
||||
You may distribute the Covered Software under the terms of the version of
|
||||
the License under which You originally received the Covered Software, or
|
||||
under the terms of any subsequent version published by the license
|
||||
steward.
|
||||
|
||||
10.3. Modified Versions
|
||||
|
||||
If you create software not governed by this License, and you want to
|
||||
create a new license for such software, you may create and use a modified
|
||||
version of this License if you rename the license and remove any
|
||||
references to the name of the license steward (except to note that such
|
||||
modified license differs from this License).
|
||||
|
||||
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
|
||||
If You choose to distribute Source Code Form that is Incompatible With
|
||||
Secondary Licenses under the terms of this version of the License, the
|
||||
notice described in Exhibit B of this License must be attached.
|
||||
|
||||
Exhibit A - Source Code Form License Notice
|
||||
|
||||
This Source Code Form is subject to the
|
||||
terms of the Mozilla Public License, v.
|
||||
2.0. If a copy of the MPL was not
|
||||
distributed with this file, You can
|
||||
obtain one at
|
||||
http://mozilla.org/MPL/2.0/.
|
||||
|
||||
If it is not possible or desirable to put the notice in a particular file, then
|
||||
You may include the notice in a location (such as a LICENSE file in a relevant
|
||||
directory) where a recipient would be likely to look for such a notice.
|
||||
|
||||
You may add additional accurate notices of copyright ownership.
|
||||
|
||||
Exhibit B - “Incompatible With Secondary Licenses” Notice
|
||||
|
||||
This Source Code Form is “Incompatible
|
||||
With Secondary Licenses”, as defined by
|
||||
the Mozilla Public License, v. 2.0.
|
vendor/github.com/hashicorp/hcl/v2/README.md (generated, vendored, new file)
@@ -0,0 +1,205 @@
# HCL

HCL is a toolkit for creating structured configuration languages that are
both human- and machine-friendly, for use with command-line tools.
Although intended to be generally useful, it is primarily targeted
towards devops tools, servers, etc.

> **NOTE:** This is major version 2 of HCL, whose Go API is incompatible with
> major version 1. Both versions are available for selection in Go Modules
> projects. HCL 2 _cannot_ be imported from Go projects that are not using Go Modules. For more information, see
> [our version selection guide](https://github.com/hashicorp/hcl/wiki/Version-Selection).

HCL has both a _native syntax_, intended to be pleasant to read and write for
humans, and a JSON-based variant that is easier for machines to generate
and parse.

The HCL native syntax is inspired by [libucl](https://github.com/vstakhov/libucl),
[nginx configuration](http://nginx.org/en/docs/beginners_guide.html#conf_structure),
and others.

It includes an expression syntax that allows basic inline computation and,
with support from the calling application, use of variables and functions
for more dynamic configuration languages.

HCL provides a set of constructs that can be used by a calling application to
construct a configuration language. The application defines which attribute
names and nested block types are expected, and HCL parses the configuration
file, verifies that it conforms to the expected structure, and returns
high-level objects that the application can use for further processing.

```go
package main

import (
	"log"
	"github.com/hashicorp/hcl/v2/hclsimple"
)

type Config struct {
	LogLevel string `hcl:"log_level"`
}

func main() {
	var config Config
	err := hclsimple.DecodeFile("config.hcl", nil, &config)
	if err != nil {
		log.Fatalf("Failed to load configuration: %s", err)
	}
	log.Printf("Configuration is %#v", config)
}
```

A lower-level API is available for applications that need more control over
the parsing, decoding, and evaluation of configuration. For more information,
see [the package documentation](https://pkg.go.dev/github.com/hashicorp/hcl/v2).
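Below is a hedged sketch of what that lower-level flow can look like, parsing with `hclparse`, decoding with `gohcl`, and rendering any diagnostics with source snippets; the `config.hcl` file name and `Config` struct are the same hypothetical ones used above.

```go
package main

import (
	"log"
	"os"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/gohcl"
	"github.com/hashicorp/hcl/v2/hclparse"
)

type Config struct {
	LogLevel string `hcl:"log_level"`
}

func main() {
	parser := hclparse.NewParser()
	// The parser's file map feeds the diagnostic writer, so errors can
	// quote the offending source lines.
	wr := hcl.NewDiagnosticTextWriter(os.Stderr, parser.Files(), 78, true)

	file, diags := parser.ParseHCLFile("config.hcl")
	if diags.HasErrors() {
		wr.WriteDiagnostics(diags)
		os.Exit(1)
	}

	var config Config
	if diags := gohcl.DecodeBody(file.Body, nil, &config); diags.HasErrors() {
		wr.WriteDiagnostics(diags)
		os.Exit(1)
	}
	log.Printf("Configuration is %#v", config)
}
```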
## Why?

Newcomers to HCL often ask: why not JSON, YAML, etc?

Whereas JSON and YAML are formats for serializing data structures, HCL is
a syntax and API specifically designed for building structured configuration
formats.

HCL attempts to strike a compromise between generic serialization formats
such as JSON and configuration formats built around full programming languages
such as Ruby. HCL syntax is designed to be easily read and written by humans,
and allows _declarative_ logic to permit its use in more complex applications.

HCL is intended as a base syntax for configuration formats built
around key-value pairs and hierarchical blocks whose structure is well-defined
by the calling application, and this definition of the configuration structure
allows for better error messages and more convenient definition within the
calling application.

It can't be denied that JSON is very convenient as a _lingua franca_
for interoperability between different pieces of software. Because of this,
HCL defines a common configuration model that can be parsed from either its
native syntax or from a well-defined equivalent JSON structure. This allows
configuration to be provided as a mixture of human-authored configuration
files in the native syntax and machine-generated files in JSON.

## Information Model and Syntax

HCL is built around two primary concepts: _attributes_ and _blocks_. In
native syntax, a configuration file for a hypothetical application might look
something like this:

```hcl
io_mode = "async"

service "http" "web_proxy" {
  listen_addr = "127.0.0.1:8080"

  process "main" {
    command = ["/usr/local/bin/awesome-app", "server"]
  }

  process "mgmt" {
    command = ["/usr/local/bin/awesome-app", "mgmt"]
  }
}
```

The JSON equivalent of this configuration is the following:

```json
{
  "io_mode": "async",
  "service": {
    "http": {
      "web_proxy": {
        "listen_addr": "127.0.0.1:8080",
        "process": {
          "main": {
            "command": ["/usr/local/bin/awesome-app", "server"]
          },
          "mgmt": {
            "command": ["/usr/local/bin/awesome-app", "mgmt"]
          }
        }
      }
    }
  }
}
```

Regardless of which syntax is used, the API within the calling application
is the same. It can either work directly with the low-level attributes and
blocks, for more advanced use-cases, or it can use one of the _decoder_
packages to declaratively extract into either Go structs or dynamic value
structures.

Attribute values can be expressions as well as just literal values:

```hcl
# Arithmetic with literals and application-provided variables
sum = 1 + addend

# String interpolation and templates
message = "Hello, ${name}!"

# Application-provided functions
shouty_message = upper(message)
```

Although JSON syntax doesn't permit direct use of expressions, the interpolation
syntax allows use of arbitrary expressions within JSON strings:

```json
{
  "sum": "${1 + addend}",
  "message": "Hello, ${name}!",
  "shouty_message": "${upper(message)}"
}
```
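The README does not show the Go side of this; as a rough sketch (not part of the upstream document), an application supplies those variables and functions through an `hcl.EvalContext`, here borrowing `stdlib.UpperFunc` from the cty standard library to serve as `upper`:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"name": cty.StringVal("world"),
		},
		Functions: map[string]function.Function{
			"upper": stdlib.UpperFunc,
		},
	}

	expr, diags := hclsyntax.ParseExpression(
		[]byte(`upper("Hello, ${name}!")`), "inline.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags)
	}

	val, moreDiags := expr.Value(ctx)
	if moreDiags.HasErrors() {
		panic(moreDiags)
	}
	fmt.Println(val.AsString()) // HELLO, WORLD!
}
```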
For more information, see the detailed specifications:

* [Syntax-agnostic Information Model](spec.md)
* [HCL Native Syntax](hclsyntax/spec.md)
* [JSON Representation](json/spec.md)

## Changes in 2.0

Version 2.0 of HCL combines the features of HCL 1.0 with those of the
interpolation language HIL to produce a single configuration language that
supports arbitrary expressions.

This new version has a completely new parser and Go API, with no direct
migration path. Although the syntax is similar, the implementation takes some
very different approaches to improve on some "rough edges" that existed with
the original implementation and to allow for more robust error handling.

It's possible to import both HCL 1 and HCL 2 into the same program using Go's
_semantic import versioning_ mechanism:

```go
import (
	hcl1 "github.com/hashicorp/hcl"
	hcl2 "github.com/hashicorp/hcl/v2"
)
```

## Acknowledgements

HCL was heavily inspired by [libucl](https://github.com/vstakhov/libucl),
by [Vsevolod Stakhov](https://github.com/vstakhov).

HCL and HIL originate in [HashiCorp Terraform](https://terraform.io/),
with the original parsers for each written by
[Mitchell Hashimoto](https://github.com/mitchellh).

The original HCL parser was ported to pure Go (from yacc) by
[Fatih Arslan](https://github.com/fatih). The structure-related portions of
the new native syntax parser build on that work.

The original HIL parser was ported to pure Go (from yacc) by
[Martin Atkins](https://github.com/apparentlymart). The expression-related
portions of the new native syntax parser build on that work.

HCL 2, which merged the original HCL and HIL languages into this single new
language, builds on design and prototyping work by
[Martin Atkins](https://github.com/apparentlymart) in
[zcl](https://github.com/zclconf/go-zcl).
vendor/github.com/hashicorp/hcl/v2/appveyor.yml (generated, vendored, new file)
@@ -0,0 +1,13 @@
build: off

clone_folder: c:\gopath\src\github.com\hashicorp\hcl

environment:
  GOPATH: c:\gopath
  GO111MODULE: on
  GOPROXY: https://goproxy.io

stack: go 1.12

test_script:
  - go test ./...
vendor/github.com/hashicorp/hcl/v2/diagnostic.go (generated, vendored, new file)
@@ -0,0 +1,143 @@
package hcl

import (
	"fmt"
)

// DiagnosticSeverity represents the severity of a diagnostic.
type DiagnosticSeverity int

const (
	// DiagInvalid is the invalid zero value of DiagnosticSeverity
	DiagInvalid DiagnosticSeverity = iota

	// DiagError indicates that the problem reported by a diagnostic prevents
	// further progress in parsing and/or evaluating the subject.
	DiagError

	// DiagWarning indicates that the problem reported by a diagnostic warrants
	// user attention but does not prevent further progress. It is most
	// commonly used for showing deprecation notices.
	DiagWarning
)

// Diagnostic represents information to be presented to a user about an
// error or anomaly in parsing or evaluating configuration.
type Diagnostic struct {
	Severity DiagnosticSeverity

	// Summary and Detail contain the English-language description of the
	// problem. Summary is a terse description of the general problem and
	// detail is a more elaborate, often-multi-sentence description of
	// the problem and what might be done to solve it.
	Summary string
	Detail  string

	// Subject and Context are both source ranges relating to the diagnostic.
	//
	// Subject is a tight range referring to exactly the construct that
	// is problematic, while Context is an optional broader range (which should
	// fully contain Subject) that ought to be shown around Subject when
	// generating isolated source-code snippets in diagnostic messages.
	// If Context is nil, the Subject is also the Context.
	//
	// Some diagnostics have no source ranges at all. If Context is set then
	// Subject should always also be set.
	Subject *Range
	Context *Range

	// For diagnostics that occur when evaluating an expression, Expression
	// may refer to that expression and EvalContext may point to the
	// EvalContext that was active when evaluating it. This may allow for the
	// inclusion of additional useful information when rendering a diagnostic
	// message to the user.
	//
	// It is not always possible to select a single EvalContext for a
	// diagnostic, and so in some cases this field may be nil even when an
	// expression causes a problem.
	//
	// EvalContexts form a tree, so the given EvalContext may refer to a parent
	// which in turn refers to another parent, etc. For a full picture of all
	// of the active variables and functions the caller must walk up this
	// chain, preferring definitions that are "closer" to the expression in
	// case of colliding names.
	Expression  Expression
	EvalContext *EvalContext
}

// Diagnostics is a list of Diagnostic instances.
type Diagnostics []*Diagnostic

// error implementation, so that diagnostics can be returned via APIs
// that normally deal in vanilla Go errors.
//
// This presents only minimal context about the error, for compatibility
// with usual expectations about how errors will present as strings.
func (d *Diagnostic) Error() string {
	return fmt.Sprintf("%s: %s; %s", d.Subject, d.Summary, d.Detail)
}

// error implementation, so that sets of diagnostics can be returned via
// APIs that normally deal in vanilla Go errors.
func (d Diagnostics) Error() string {
	count := len(d)
	switch {
	case count == 0:
		return "no diagnostics"
	case count == 1:
		return d[0].Error()
	default:
		return fmt.Sprintf("%s, and %d other diagnostic(s)", d[0].Error(), count-1)
	}
}

// Append appends a new error to a Diagnostics and returns the whole Diagnostics.
//
// This is provided as a convenience for returning from a function that
// collects and then returns a set of diagnostics:
//
//     return nil, diags.Append(&hcl.Diagnostic{ ... })
//
// Note that this modifies the array underlying the diagnostics slice, so
// must be used carefully within a single codepath. It is incorrect (and rude)
// to extend a diagnostics created by a different subsystem.
func (d Diagnostics) Append(diag *Diagnostic) Diagnostics {
	return append(d, diag)
}

// Extend concatenates the given Diagnostics with the receiver and returns
// the whole new Diagnostics.
//
// This is similar to Append but accepts multiple diagnostics to add. It has
// all the same caveats and constraints.
func (d Diagnostics) Extend(diags Diagnostics) Diagnostics {
	return append(d, diags...)
}

// HasErrors returns true if the receiver contains any diagnostics of
// severity DiagError.
func (d Diagnostics) HasErrors() bool {
	for _, diag := range d {
		if diag.Severity == DiagError {
			return true
		}
	}
	return false
}

func (d Diagnostics) Errs() []error {
	var errs []error
	for _, diag := range d {
		if diag.Severity == DiagError {
			errs = append(errs, diag)
		}
	}

	return errs
}

// A DiagnosticWriter emits diagnostics somehow.
type DiagnosticWriter interface {
	WriteDiagnostic(*Diagnostic) error
	WriteDiagnostics(Diagnostics) error
}
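As a short, hypothetical sketch of how a calling application typically builds and reports these values (the port-validation rule and source range below are invented for illustration):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
)

// validatePort returns zero or more diagnostics rather than a Go error, so
// the caller can keep collecting problems before reporting them all at once.
func validatePort(port int, rng hcl.Range) hcl.Diagnostics {
	var diags hcl.Diagnostics
	if port < 1 || port > 65535 {
		diags = diags.Append(&hcl.Diagnostic{
			Severity: hcl.DiagError,
			Summary:  "Invalid port number",
			Detail:   fmt.Sprintf("Port must be between 1 and 65535, not %d.", port),
			Subject:  &rng,
		})
	}
	return diags
}

func main() {
	rng := hcl.Range{
		Filename: "config.hcl",
		Start:    hcl.Pos{Line: 3, Column: 8, Byte: 40},
		End:      hcl.Pos{Line: 3, Column: 13, Byte: 45},
	}

	diags := validatePort(70000, rng)
	if diags.HasErrors() {
		// Diagnostics implements error, so it can also cross APIs that only
		// understand vanilla Go errors.
		fmt.Println(diags.Error())
	}
}
```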
vendor/github.com/hashicorp/hcl/v2/diagnostic_text.go (generated, vendored, new file)
@@ -0,0 +1,311 @@
|
||||
package hcl
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"sort"
|
||||
|
||||
wordwrap "github.com/mitchellh/go-wordwrap"
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
type diagnosticTextWriter struct {
|
||||
files map[string]*File
|
||||
wr io.Writer
|
||||
width uint
|
||||
color bool
|
||||
}
|
||||
|
||||
// NewDiagnosticTextWriter creates a DiagnosticWriter that writes diagnostics
|
||||
// to the given writer as formatted text.
|
||||
//
|
||||
// It is designed to produce text appropriate to print in a monospaced font
|
||||
// in a terminal of a particular width, or optionally with no width limit.
|
||||
//
|
||||
// The given width may be zero to disable word-wrapping of the detail text
|
||||
// and truncation of source code snippets.
|
||||
//
|
||||
// If color is set to true, the output will include VT100 escape sequences to
|
||||
// color-code the severity indicators. It is suggested to turn this off if
|
||||
// the target writer is not a terminal.
|
||||
func NewDiagnosticTextWriter(wr io.Writer, files map[string]*File, width uint, color bool) DiagnosticWriter {
|
||||
return &diagnosticTextWriter{
|
||||
files: files,
|
||||
wr: wr,
|
||||
width: width,
|
||||
color: color,
|
||||
}
|
||||
}
|
||||
|
||||
func (w *diagnosticTextWriter) WriteDiagnostic(diag *Diagnostic) error {
|
||||
if diag == nil {
|
||||
return errors.New("nil diagnostic")
|
||||
}
|
||||
|
||||
var colorCode, highlightCode, resetCode string
|
||||
if w.color {
|
||||
switch diag.Severity {
|
||||
case DiagError:
|
||||
colorCode = "\x1b[31m"
|
||||
case DiagWarning:
|
||||
colorCode = "\x1b[33m"
|
||||
}
|
||||
resetCode = "\x1b[0m"
|
||||
highlightCode = "\x1b[1;4m"
|
||||
}
|
||||
|
||||
var severityStr string
|
||||
switch diag.Severity {
|
||||
case DiagError:
|
||||
severityStr = "Error"
|
||||
case DiagWarning:
|
||||
severityStr = "Warning"
|
||||
default:
|
||||
// should never happen
|
||||
severityStr = "???????"
|
||||
}
|
||||
|
||||
fmt.Fprintf(w.wr, "%s%s%s: %s\n\n", colorCode, severityStr, resetCode, diag.Summary)
|
||||
|
||||
if diag.Subject != nil {
|
||||
snipRange := *diag.Subject
|
||||
highlightRange := snipRange
|
||||
if diag.Context != nil {
|
||||
// Show enough of the source code to include both the subject
|
||||
// and context ranges, which overlap in all reasonable
|
||||
// situations.
|
||||
snipRange = RangeOver(snipRange, *diag.Context)
|
||||
}
|
||||
// We can't illustrate an empty range, so we'll turn such ranges into
|
||||
// single-character ranges, which might not be totally valid (may point
|
||||
// off the end of a line, or off the end of the file) but are good
|
||||
// enough for the bounds checks we do below.
|
||||
if snipRange.Empty() {
|
||||
snipRange.End.Byte++
|
||||
snipRange.End.Column++
|
||||
}
|
||||
if highlightRange.Empty() {
|
||||
highlightRange.End.Byte++
|
||||
highlightRange.End.Column++
|
||||
}
|
||||
|
||||
file := w.files[diag.Subject.Filename]
|
||||
if file == nil || file.Bytes == nil {
|
||||
fmt.Fprintf(w.wr, " on %s line %d:\n (source code not available)\n\n", diag.Subject.Filename, diag.Subject.Start.Line)
|
||||
} else {
|
||||
|
||||
var contextLine string
|
||||
if diag.Subject != nil {
|
||||
contextLine = contextString(file, diag.Subject.Start.Byte)
|
||||
if contextLine != "" {
|
||||
contextLine = ", in " + contextLine
|
||||
}
|
||||
}
|
||||
|
||||
fmt.Fprintf(w.wr, " on %s line %d%s:\n", diag.Subject.Filename, diag.Subject.Start.Line, contextLine)
|
||||
|
||||
src := file.Bytes
|
||||
sc := NewRangeScanner(src, diag.Subject.Filename, bufio.ScanLines)
|
||||
|
||||
for sc.Scan() {
|
||||
lineRange := sc.Range()
|
||||
if !lineRange.Overlaps(snipRange) {
|
||||
continue
|
||||
}
|
||||
|
||||
beforeRange, highlightedRange, afterRange := lineRange.PartitionAround(highlightRange)
|
||||
if highlightedRange.Empty() {
|
||||
fmt.Fprintf(w.wr, "%4d: %s\n", lineRange.Start.Line, sc.Bytes())
|
||||
} else {
|
||||
before := beforeRange.SliceBytes(src)
|
||||
highlighted := highlightedRange.SliceBytes(src)
|
||||
after := afterRange.SliceBytes(src)
|
||||
fmt.Fprintf(
|
||||
w.wr, "%4d: %s%s%s%s%s\n",
|
||||
lineRange.Start.Line,
|
||||
before,
|
||||
highlightCode, highlighted, resetCode,
|
||||
after,
|
||||
)
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
w.wr.Write([]byte{'\n'})
|
||||
}
|
||||
|
||||
if diag.Expression != nil && diag.EvalContext != nil {
|
||||
// We will attempt to render the values for any variables
|
||||
// referenced in the given expression as additional context, for
|
||||
// situations where the same expression is evaluated multiple
|
||||
// times in different scopes.
|
||||
expr := diag.Expression
|
||||
ctx := diag.EvalContext
|
||||
|
||||
vars := expr.Variables()
|
||||
stmts := make([]string, 0, len(vars))
|
||||
seen := make(map[string]struct{}, len(vars))
|
||||
for _, traversal := range vars {
|
||||
val, diags := traversal.TraverseAbs(ctx)
|
||||
if diags.HasErrors() {
|
||||
// Skip anything that generates errors, since we probably
|
||||
// already have the same error in our diagnostics set
|
||||
// already.
|
||||
continue
|
||||
}
|
||||
|
||||
traversalStr := w.traversalStr(traversal)
|
||||
if _, exists := seen[traversalStr]; exists {
|
||||
continue // don't show duplicates when the same variable is referenced multiple times
|
||||
}
|
||||
switch {
|
||||
case !val.IsKnown():
|
||||
// Can't say anything about this yet, then.
|
||||
continue
|
||||
case val.IsNull():
|
||||
stmts = append(stmts, fmt.Sprintf("%s set to null", traversalStr))
|
||||
default:
|
||||
stmts = append(stmts, fmt.Sprintf("%s as %s", traversalStr, w.valueStr(val)))
|
||||
}
|
||||
seen[traversalStr] = struct{}{}
|
||||
}
|
||||
|
||||
sort.Strings(stmts) // FIXME: Should maybe use a traversal-aware sort that can sort numeric indexes properly?
|
||||
last := len(stmts) - 1
|
||||
|
||||
for i, stmt := range stmts {
|
||||
switch i {
|
||||
case 0:
|
||||
w.wr.Write([]byte{'w', 'i', 't', 'h', ' '})
|
||||
default:
|
||||
w.wr.Write([]byte{' ', ' ', ' ', ' ', ' '})
|
||||
}
|
||||
w.wr.Write([]byte(stmt))
|
||||
switch i {
|
||||
case last:
|
||||
w.wr.Write([]byte{'.', '\n', '\n'})
|
||||
default:
|
||||
w.wr.Write([]byte{',', '\n'})
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if diag.Detail != "" {
|
||||
detail := diag.Detail
|
||||
if w.width != 0 {
|
||||
detail = wordwrap.WrapString(detail, w.width)
|
||||
}
|
||||
fmt.Fprintf(w.wr, "%s\n\n", detail)
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (w *diagnosticTextWriter) WriteDiagnostics(diags Diagnostics) error {
|
||||
for _, diag := range diags {
|
||||
err := w.WriteDiagnostic(diag)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (w *diagnosticTextWriter) traversalStr(traversal Traversal) string {
|
||||
// This is a specialized subset of traversal rendering tailored to
|
||||
// producing helpful contextual messages in diagnostics. It is not
|
||||
// comprehensive nor intended to be used for other purposes.
|
||||
|
||||
var buf bytes.Buffer
|
||||
for _, step := range traversal {
|
||||
switch tStep := step.(type) {
|
||||
case TraverseRoot:
|
||||
buf.WriteString(tStep.Name)
|
||||
case TraverseAttr:
|
||||
buf.WriteByte('.')
|
||||
buf.WriteString(tStep.Name)
|
||||
case TraverseIndex:
|
||||
buf.WriteByte('[')
|
||||
if keyTy := tStep.Key.Type(); keyTy.IsPrimitiveType() {
|
||||
buf.WriteString(w.valueStr(tStep.Key))
|
||||
} else {
|
||||
// We'll just use a placeholder for more complex values,
|
||||
// since otherwise our result could grow ridiculously long.
|
||||
buf.WriteString("...")
|
||||
}
|
||||
buf.WriteByte(']')
|
||||
}
|
||||
}
|
||||
return buf.String()
|
||||
}
|
||||
|
||||
func (w *diagnosticTextWriter) valueStr(val cty.Value) string {
|
||||
// This is a specialized subset of value rendering tailored to producing
|
||||
// helpful but concise messages in diagnostics. It is not comprehensive
|
||||
// nor intended to be used for other purposes.
|
||||
|
||||
ty := val.Type()
|
||||
switch {
|
||||
case val.IsNull():
|
||||
return "null"
|
||||
case !val.IsKnown():
|
||||
// Should never happen here because we should filter before we get
|
||||
// in here, but we'll do something reasonable rather than panic.
|
||||
return "(not yet known)"
|
||||
case ty == cty.Bool:
|
||||
if val.True() {
|
||||
return "true"
|
||||
}
|
||||
return "false"
|
||||
case ty == cty.Number:
|
||||
bf := val.AsBigFloat()
|
||||
return bf.Text('g', 10)
|
||||
case ty == cty.String:
|
||||
// Go string syntax is not exactly the same as HCL native string syntax,
|
||||
// but we'll accept the minor edge-cases where this is different here
|
||||
// for now, just to get something reasonable here.
|
||||
return fmt.Sprintf("%q", val.AsString())
|
||||
case ty.IsCollectionType() || ty.IsTupleType():
|
||||
l := val.LengthInt()
|
||||
switch l {
|
||||
case 0:
|
||||
return "empty " + ty.FriendlyName()
|
||||
case 1:
|
||||
return ty.FriendlyName() + " with 1 element"
|
||||
default:
|
||||
return fmt.Sprintf("%s with %d elements", ty.FriendlyName(), l)
|
||||
}
|
||||
case ty.IsObjectType():
|
||||
atys := ty.AttributeTypes()
|
||||
l := len(atys)
|
||||
switch l {
|
||||
case 0:
|
||||
return "object with no attributes"
|
||||
case 1:
|
||||
var name string
|
||||
for k := range atys {
|
||||
name = k
|
||||
}
|
||||
return fmt.Sprintf("object with 1 attribute %q", name)
|
||||
default:
|
||||
return fmt.Sprintf("object with %d attributes", l)
|
||||
}
|
||||
default:
|
||||
return ty.FriendlyName()
|
||||
}
|
||||
}
|
||||
|
||||
func contextString(file *File, offset int) string {
|
||||
type contextStringer interface {
|
||||
ContextString(offset int) string
|
||||
}
|
||||
|
||||
if cser, ok := file.Nav.(contextStringer); ok {
|
||||
return cser.ContextString(offset)
|
||||
}
|
||||
return ""
|
||||
}
|
vendor/github.com/hashicorp/hcl/v2/didyoumean.go (generated, vendored, new file)
@@ -0,0 +1,24 @@
package hcl

import (
	"github.com/agext/levenshtein"
)

// nameSuggestion tries to find a name from the given slice of suggested names
// that is close to the given name and returns it if found. If no suggestion
// is close enough, returns the empty string.
//
// The suggestions are tried in order, so earlier suggestions take precedence
// if the given string is similar to two or more suggestions.
//
// This function is intended to be used with a relatively-small number of
// suggestions. It's not optimized for hundreds or thousands of them.
func nameSuggestion(given string, suggestions []string) string {
	for _, suggestion := range suggestions {
		dist := levenshtein.Distance(given, suggestion, nil)
		if dist < 3 { // threshold determined experimentally
			return suggestion
		}
	}
	return ""
}
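Since nameSuggestion is unexported, an application that wants the same "did you mean" behavior in its own diagnostics carries its own copy of this loop; a hypothetical sketch:

```go
package config

import (
	"fmt"

	"github.com/agext/levenshtein"
	"github.com/hashicorp/hcl/v2"
)

// suggestName mirrors the unexported helper above: it returns the closest
// known name, or "" if nothing is within the distance threshold.
func suggestName(given string, known []string) string {
	for _, candidate := range known {
		if levenshtein.Distance(given, candidate, nil) < 3 {
			return candidate
		}
	}
	return ""
}

// unknownAttrDiag builds a diagnostic for an unrecognized attribute name,
// adding a suggestion when one is close enough.
func unknownAttrDiag(name string, known []string, rng hcl.Range) *hcl.Diagnostic {
	detail := fmt.Sprintf("An attribute named %q is not expected here.", name)
	if suggestion := suggestName(name, known); suggestion != "" {
		detail += fmt.Sprintf(" Did you mean %q?", suggestion)
	}
	return &hcl.Diagnostic{
		Severity: hcl.DiagError,
		Summary:  "Unsupported attribute",
		Detail:   detail,
		Subject:  &rng,
	}
}
```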
vendor/github.com/hashicorp/hcl/v2/doc.go (generated, vendored, new file)
@@ -0,0 +1,34 @@
// Package hcl contains the main modelling types and general utility functions
// for HCL.
//
// For a simple entry point into HCL, see the package in the subdirectory
// "hclsimple", which has an opinionated function Decode that can decode HCL
// configurations in either native HCL syntax or JSON syntax into a Go struct
// type:
//
//     package main
//
//     import (
//         "log"
//         "github.com/hashicorp/hcl/v2/hclsimple"
//     )
//
//     type Config struct {
//         LogLevel string `hcl:"log_level"`
//     }
//
//     func main() {
//         var config Config
//         err := hclsimple.DecodeFile("config.hcl", nil, &config)
//         if err != nil {
//             log.Fatalf("Failed to load configuration: %s", err)
//         }
//         log.Printf("Configuration is %#v", config)
//     }
//
// If your application needs more control over the evaluation of the
// configuration, you can use the functions in the subdirectories hclparse,
// gohcl, hcldec, etc. Splitting the handling of configuration into multiple
// phases allows for advanced patterns such as allowing expressions in one
// part of the configuration to refer to data defined in another part.
package hcl
vendor/github.com/hashicorp/hcl/v2/eval_context.go (generated, vendored, new file)
@@ -0,0 +1,25 @@
package hcl

import (
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

// An EvalContext provides the variables and functions that should be used
// to evaluate an expression.
type EvalContext struct {
	Variables map[string]cty.Value
	Functions map[string]function.Function
	parent    *EvalContext
}

// NewChild returns a new EvalContext that is a child of the receiver.
func (ctx *EvalContext) NewChild() *EvalContext {
	return &EvalContext{parent: ctx}
}

// Parent returns the parent of the receiver, or nil if the receiver has
// no parent.
func (ctx *EvalContext) Parent() *EvalContext {
	return ctx.parent
}
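A brief sketch of how the parent/child chain is typically used: the application keeps global variables in a root context and layers block-local names on top with NewChild, letting lookups fall back to the parent. The names below are invented for illustration.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	root := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"env": cty.StringVal("production"),
		},
	}

	// Block-local variables; "env" is still visible via the parent.
	child := root.NewChild()
	child.Variables = map[string]cty.Value{
		"count": cty.NumberIntVal(3),
	}

	expr, diags := hclsyntax.ParseExpression([]byte(`"${env}-${count}"`), "inline.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		panic(diags)
	}
	val, moreDiags := expr.Value(child)
	if moreDiags.HasErrors() {
		panic(moreDiags)
	}
	fmt.Println(val.AsString()) // production-3
}
```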
vendor/github.com/hashicorp/hcl/v2/expr_call.go (generated, vendored, new file)
@@ -0,0 +1,46 @@
package hcl

// ExprCall tests if the given expression is a function call and,
// if so, extracts the function name and the expressions that represent
// the arguments. If the given expression is not statically a function call,
// error diagnostics are returned.
//
// A particular Expression implementation can support this function by
// offering a method called ExprCall that takes no arguments and returns
// *StaticCall. This method should return nil if a static call cannot
// be extracted. Alternatively, an implementation can support
// UnwrapExpression to delegate handling of this function to a wrapped
// Expression object.
func ExprCall(expr Expression) (*StaticCall, Diagnostics) {
	type exprCall interface {
		ExprCall() *StaticCall
	}

	physExpr := UnwrapExpressionUntil(expr, func(expr Expression) bool {
		_, supported := expr.(exprCall)
		return supported
	})

	if exC, supported := physExpr.(exprCall); supported {
		if call := exC.ExprCall(); call != nil {
			return call, nil
		}
	}
	return nil, Diagnostics{
		&Diagnostic{
			Severity: DiagError,
			Summary:  "Invalid expression",
			Detail:   "A static function call is required.",
			Subject:  expr.StartRange().Ptr(),
		},
	}
}

// StaticCall represents a function call that was extracted statically from
// an expression using ExprCall.
type StaticCall struct {
	Name      string
	NameRange Range
	Arguments []Expression
	ArgsRange Range
}
vendor/github.com/hashicorp/hcl/v2/expr_list.go (generated, vendored, new file)
@@ -0,0 +1,37 @@
package hcl

// ExprList tests if the given expression is a static list construct and,
// if so, extracts the expressions that represent the list elements.
// If the given expression is not a static list, error diagnostics are
// returned.
//
// A particular Expression implementation can support this function by
// offering a method called ExprList that takes no arguments and returns
// []Expression. This method should return nil if a static list cannot
// be extracted. Alternatively, an implementation can support
// UnwrapExpression to delegate handling of this function to a wrapped
// Expression object.
func ExprList(expr Expression) ([]Expression, Diagnostics) {
	type exprList interface {
		ExprList() []Expression
	}

	physExpr := UnwrapExpressionUntil(expr, func(expr Expression) bool {
		_, supported := expr.(exprList)
		return supported
	})

	if exL, supported := physExpr.(exprList); supported {
		if list := exL.ExprList(); list != nil {
			return list, nil
		}
	}
	return nil, Diagnostics{
		&Diagnostic{
			Severity: DiagError,
			Summary:  "Invalid expression",
			Detail:   "A static list expression is required.",
			Subject:  expr.StartRange().Ptr(),
		},
	}
}
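ExprCall, ExprList, and ExprMap all follow this same pattern of static inspection before evaluation. A hypothetical sketch of a caller using ExprList so that every element keeps its own source range for error reporting:

```go
package config

import (
	"github.com/hashicorp/hcl/v2"
	"github.com/zclconf/go-cty/cty"
)

// decodeStringList statically requires a list constructor such as
// ["a", "b"] and then evaluates each element separately.
func decodeStringList(attr *hcl.Attribute, ctx *hcl.EvalContext) ([]string, hcl.Diagnostics) {
	exprs, diags := hcl.ExprList(attr.Expr)
	if diags.HasErrors() {
		return nil, diags
	}

	result := make([]string, 0, len(exprs))
	for _, expr := range exprs {
		val, moreDiags := expr.Value(ctx)
		diags = append(diags, moreDiags...)
		if moreDiags.HasErrors() {
			continue
		}
		if val.IsNull() || !val.Type().Equals(cty.String) {
			diags = diags.Append(&hcl.Diagnostic{
				Severity: hcl.DiagError,
				Summary:  "Invalid element",
				Detail:   "A non-null string is required.",
				Subject:  expr.Range().Ptr(),
			})
			continue
		}
		result = append(result, val.AsString())
	}
	return result, diags
}
```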
vendor/github.com/hashicorp/hcl/v2/expr_map.go (generated, vendored, new file)
@@ -0,0 +1,44 @@
package hcl

// ExprMap tests if the given expression is a static map construct and,
// if so, extracts the expressions that represent the map elements.
// If the given expression is not a static map, error diagnostics are
// returned.
//
// A particular Expression implementation can support this function by
// offering a method called ExprMap that takes no arguments and returns
// []KeyValuePair. This method should return nil if a static map cannot
// be extracted. Alternatively, an implementation can support
// UnwrapExpression to delegate handling of this function to a wrapped
// Expression object.
func ExprMap(expr Expression) ([]KeyValuePair, Diagnostics) {
	type exprMap interface {
		ExprMap() []KeyValuePair
	}

	physExpr := UnwrapExpressionUntil(expr, func(expr Expression) bool {
		_, supported := expr.(exprMap)
		return supported
	})

	if exM, supported := physExpr.(exprMap); supported {
		if pairs := exM.ExprMap(); pairs != nil {
			return pairs, nil
		}
	}
	return nil, Diagnostics{
		&Diagnostic{
			Severity: DiagError,
			Summary:  "Invalid expression",
			Detail:   "A static map expression is required.",
			Subject:  expr.StartRange().Ptr(),
		},
	}
}

// KeyValuePair represents a pair of expressions that serve as a single item
// within a map or object definition construct.
type KeyValuePair struct {
	Key   Expression
	Value Expression
}
vendor/github.com/hashicorp/hcl/v2/expr_unwrap.go (generated, vendored, new file)
@@ -0,0 +1,68 @@
package hcl

type unwrapExpression interface {
	UnwrapExpression() Expression
}

// UnwrapExpression removes any "wrapper" expressions from the given expression,
// to recover the representation of the physical expression given in source
// code.
//
// Sometimes wrapping expressions are used to modify expression behavior, e.g.
// in extensions that need to make some local variables available to certain
// sub-trees of the configuration. This can make it difficult to reliably
// type-assert on the physical AST types used by the underlying syntax.
//
// Unwrapping an expression may modify its behavior by stripping away any
// additional constraints or capabilities being applied to the Value and
// Variables methods, so this function should generally only be used prior
// to operations that concern themselves with the static syntax of the input
// configuration, and not with the effective value of the expression.
//
// Wrapper expression types must support unwrapping by implementing a method
// called UnwrapExpression that takes no arguments and returns the embedded
// Expression. Implementations of this method should peel away only one level
// of wrapping, if multiple are present. This method may return nil to
// indicate _dynamically_ that no wrapped expression is available, for
// expression types that might only behave as wrappers in certain cases.
func UnwrapExpression(expr Expression) Expression {
	for {
		unwrap, wrapped := expr.(unwrapExpression)
		if !wrapped {
			return expr
		}
		innerExpr := unwrap.UnwrapExpression()
		if innerExpr == nil {
			return expr
		}
		expr = innerExpr
	}
}

// UnwrapExpressionUntil is similar to UnwrapExpression except it gives the
// caller an opportunity to test each level of unwrapping to see if a
// particular expression is accepted.
//
// This could be used, for example, to unwrap until a particular other
// interface is satisfied, regardless of which wrapping level it is
// satisfied at.
//
// The given callback function must return false to continue unwrapping, or
// true to accept and return the proposed expression. If the callback
// function rejects even the final, physical expression then the result of
// this function is nil.
func UnwrapExpressionUntil(expr Expression, until func(Expression) bool) Expression {
	for {
		if until(expr) {
			return expr
		}
		unwrap, wrapped := expr.(unwrapExpression)
		if !wrapped {
			return nil
		}
		expr = unwrap.UnwrapExpression()
		if expr == nil {
			return nil
		}
	}
}
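
UnwrapExpression relies on wrapper expression types opting in via an UnwrapExpression method. The following sketch (a hypothetical wrapper type, not part of the vendored sources) shows one way a wrapper can participate, assuming the hclsyntax parser from this module:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

// annotatedExpr is a hypothetical wrapper that adds no behavior of its own;
// it exists only to show the UnwrapExpression contract.
type annotatedExpr struct {
	hcl.Expression
	note string
}

// UnwrapExpression peels away exactly one level of wrapping, as the
// documentation above requires.
func (e annotatedExpr) UnwrapExpression() hcl.Expression {
	return e.Expression
}

func main() {
	inner, _ := hclsyntax.ParseExpression([]byte(`"hello"`), "example.hcl", hcl.InitialPos)
	wrapped := annotatedExpr{Expression: inner, note: "wrapped for illustration"}

	phys := hcl.UnwrapExpression(wrapped)
	fmt.Printf("%T\n", phys) // the underlying hclsyntax expression type
}
```
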
23
vendor/github.com/hashicorp/hcl/v2/go.mod
generated
vendored
Normal file
@ -0,0 +1,23 @@
module github.com/hashicorp/hcl/v2

go 1.12

require (
	github.com/agext/levenshtein v1.2.1
	github.com/apparentlymart/go-dump v0.0.0-20180507223929-23540a00eaa3
	github.com/apparentlymart/go-textseg/v12 v12.0.0
	github.com/davecgh/go-spew v1.1.1
	github.com/go-test/deep v1.0.3
	github.com/google/go-cmp v0.3.1
	github.com/kr/pretty v0.1.0
	github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348
	github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/sergi/go-diff v1.0.0
	github.com/spf13/pflag v1.0.2
	github.com/stretchr/testify v1.2.2 // indirect
	github.com/zclconf/go-cty v1.2.0
	golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734
	golang.org/x/sys v0.0.0-20190502175342-a43fa875dd82 // indirect
	golang.org/x/text v0.3.2 // indirect
)
53
vendor/github.com/hashicorp/hcl/v2/go.sum
generated
vendored
Normal file
@ -0,0 +1,53 @@
|
||||
github.com/agext/levenshtein v1.2.1 h1:QmvMAjj2aEICytGiWzmxoE0x2KZvE0fvmqMOfy2tjT8=
|
||||
github.com/agext/levenshtein v1.2.1/go.mod h1:JEDfjyjHDjOF/1e4FlBE/PkbqA9OfWu2ki2W0IB5558=
|
||||
github.com/apparentlymart/go-dump v0.0.0-20180507223929-23540a00eaa3 h1:ZSTrOEhiM5J5RFxEaFvMZVEAM1KvT1YzbEOwB2EAGjA=
|
||||
github.com/apparentlymart/go-dump v0.0.0-20180507223929-23540a00eaa3/go.mod h1:oL81AME2rN47vu18xqj1S1jPIPuN7afo62yKTNn3XMM=
|
||||
github.com/apparentlymart/go-textseg v1.0.0 h1:rRmlIsPEEhUTIKQb7T++Nz/A5Q6C9IuX2wFoYVvnCs0=
|
||||
github.com/apparentlymart/go-textseg v1.0.0/go.mod h1:z96Txxhf3xSFMPmb5X/1W05FF/Nj9VFpLOpjS5yuumk=
|
||||
github.com/apparentlymart/go-textseg/v12 v12.0.0 h1:bNEQyAGak9tojivJNkoqWErVCQbjdL7GzRt3F8NvfJ0=
|
||||
github.com/apparentlymart/go-textseg/v12 v12.0.0/go.mod h1:S/4uRK2UtaQttw1GenVJEynmyUenKwP++x/+DdGV/Ec=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/go-test/deep v1.0.3 h1:ZrJSEWsXzPOxaZnFteGEfooLba+ju3FYIbOrS+rQd68=
|
||||
github.com/go-test/deep v1.0.3/go.mod h1:wGDj63lr65AM2AQyKZd/NYHGb0R+1RLqB8NKt3aSFNA=
|
||||
github.com/golang/protobuf v1.1.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
|
||||
github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
|
||||
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
|
||||
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
|
||||
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
|
||||
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
|
||||
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
|
||||
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
|
||||
github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348 h1:MtvEpTB6LX3vkb4ax0b5D2DHbNAUsen0Gx5wZoq3lV4=
|
||||
github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
|
||||
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7 h1:DpOJ2HYzCv8LZP15IdmG+YdwD2luVPHITV96TkirNBM=
|
||||
github.com/mitchellh/go-wordwrap v0.0.0-20150314170334-ad45545899c7/go.mod h1:ZXFpozHsX6DPmq2I0TCekCxypsnAUbP2oI0UX1GXzOo=
|
||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/sergi/go-diff v1.0.0 h1:Kpca3qRNrduNnOQeazBd0ysaKrUJiIuISHxogkT9RPQ=
|
||||
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
|
||||
github.com/spf13/pflag v1.0.2 h1:Fy0orTDgHdbnzHcsOgfCN4LtHf0ec3wwtiwJqwvf3Gc=
|
||||
github.com/spf13/pflag v1.0.2/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
|
||||
github.com/stretchr/testify v1.2.2 h1:bSDNvY7ZPG5RlJ8otE/7V6gMiyenm9RtJ7IUVIAoJ1w=
|
||||
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
|
||||
github.com/vmihailenco/msgpack v3.3.3+incompatible/go.mod h1:fy3FlTQTDXWkZ7Bh6AcGMlsjHatGryHQYUTf1ShIgkk=
|
||||
github.com/zclconf/go-cty v1.2.0 h1:sPHsy7ADcIZQP3vILvTjrh74ZA175TFP5vqiNK1UmlI=
|
||||
github.com/zclconf/go-cty v1.2.0/go.mod h1:hOPWgoHbaTUnI5k4D2ld+GRpFJSCe6bCM7m1q/N4PQ8=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734 h1:p/H982KKEjUnLJkM3tt/LemDnOc1GiZL5FCVlORJ5zo=
|
||||
golang.org/x/crypto v0.0.0-20190426145343-a29dc8fdc734/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/net v0.0.0-20180811021610-c39426892332/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20190502175342-a43fa875dd82 h1:vsphBvatvfbhlb4PO1BYSr9dzugGxJ/SQHoNufZJq1w=
|
||||
golang.org/x/sys v0.0.0-20190502175342-a43fa875dd82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/text v0.3.0 h1:g61tztE5qeGQ89tm6NTjjM9VPIm088od1l6aSorWRWg=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
|
||||
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
|
||||
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
226
vendor/github.com/hashicorp/hcl/v2/merged.go
generated
vendored
Normal file
@ -0,0 +1,226 @@
|
||||
package hcl
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
)
|
||||
|
||||
// MergeFiles combines the given files to produce a single body that contains
|
||||
// configuration from all of the given files.
|
||||
//
|
||||
// The ordering of the given files decides the order in which contained
|
||||
// elements will be returned. If any top-level attributes are defined with
|
||||
// the same name across multiple files, a diagnostic will be produced from
|
||||
// the Content and PartialContent methods describing this error in a
|
||||
// user-friendly way.
|
||||
func MergeFiles(files []*File) Body {
|
||||
var bodies []Body
|
||||
for _, file := range files {
|
||||
bodies = append(bodies, file.Body)
|
||||
}
|
||||
return MergeBodies(bodies)
|
||||
}
|
||||
|
||||
// MergeBodies is like MergeFiles except it deals directly with bodies, rather
|
||||
// than with entire files.
|
||||
func MergeBodies(bodies []Body) Body {
|
||||
if len(bodies) == 0 {
|
||||
// Swap out for our singleton empty body, to reduce the number of
|
||||
// empty slices we have hanging around.
|
||||
return emptyBody
|
||||
}
|
||||
|
||||
// If any of the given bodies are already merged bodies, we'll unpack
|
||||
// to flatten to a single mergedBodies, since that's conceptually simpler.
|
||||
// This also, as a side-effect, eliminates any empty bodies, since
|
||||
// empties are merged bodies with no inner bodies.
|
||||
var newLen int
|
||||
var flatten bool
|
||||
for _, body := range bodies {
|
||||
if children, merged := body.(mergedBodies); merged {
|
||||
newLen += len(children)
|
||||
flatten = true
|
||||
} else {
|
||||
newLen++
|
||||
}
|
||||
}
|
||||
|
||||
if !flatten { // not just newLen == len, because we might have mergedBodies with single bodies inside
|
||||
return mergedBodies(bodies)
|
||||
}
|
||||
|
||||
if newLen == 0 {
|
||||
// Don't allocate a new empty when we already have one
|
||||
return emptyBody
|
||||
}
|
||||
|
||||
new := make([]Body, 0, newLen)
|
||||
for _, body := range bodies {
|
||||
if children, merged := body.(mergedBodies); merged {
|
||||
new = append(new, children...)
|
||||
} else {
|
||||
new = append(new, body)
|
||||
}
|
||||
}
|
||||
return mergedBodies(new)
|
||||
}
|
||||
|
||||
var emptyBody = mergedBodies([]Body{})
|
||||
|
||||
// EmptyBody returns a body with no content. This body can be used as a
|
||||
// placeholder when a body is required but no body content is available.
|
||||
func EmptyBody() Body {
|
||||
return emptyBody
|
||||
}
|
||||
|
||||
type mergedBodies []Body
|
||||
|
||||
// Content returns the content produced by applying the given schema to all
|
||||
// of the merged bodies and merging the result.
|
||||
//
|
||||
// Although required attributes _are_ supported, they should be used sparingly
|
||||
// with merged bodies since in this case there is no contextual information
|
||||
// with which to return good diagnostics. Applications working with merged
|
||||
// bodies may wish to mark all attributes as optional and then check for
|
||||
// required attributes afterwards, to produce better diagnostics.
|
||||
func (mb mergedBodies) Content(schema *BodySchema) (*BodyContent, Diagnostics) {
|
||||
// the returned body will always be empty in this case, because mergedContent
|
||||
// will only ever call Content on the child bodies.
|
||||
content, _, diags := mb.mergedContent(schema, false)
|
||||
return content, diags
|
||||
}
|
||||
|
||||
func (mb mergedBodies) PartialContent(schema *BodySchema) (*BodyContent, Body, Diagnostics) {
|
||||
return mb.mergedContent(schema, true)
|
||||
}
|
||||
|
||||
func (mb mergedBodies) JustAttributes() (Attributes, Diagnostics) {
|
||||
attrs := make(map[string]*Attribute)
|
||||
var diags Diagnostics
|
||||
|
||||
for _, body := range mb {
|
||||
thisAttrs, thisDiags := body.JustAttributes()
|
||||
|
||||
if len(thisDiags) != 0 {
|
||||
diags = append(diags, thisDiags...)
|
||||
}
|
||||
|
||||
if thisAttrs != nil {
|
||||
for name, attr := range thisAttrs {
|
||||
if existing := attrs[name]; existing != nil {
|
||||
diags = diags.Append(&Diagnostic{
|
||||
Severity: DiagError,
|
||||
Summary: "Duplicate argument",
|
||||
Detail: fmt.Sprintf(
|
||||
"Argument %q was already set at %s",
|
||||
name, existing.NameRange.String(),
|
||||
),
|
||||
Subject: &attr.NameRange,
|
||||
})
|
||||
continue
|
||||
}
|
||||
|
||||
attrs[name] = attr
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return attrs, diags
|
||||
}
|
||||
|
||||
func (mb mergedBodies) MissingItemRange() Range {
|
||||
if len(mb) == 0 {
|
||||
// Nothing useful to return here, so we'll return some garbage.
|
||||
return Range{
|
||||
Filename: "<empty>",
|
||||
}
|
||||
}
|
||||
|
||||
// arbitrarily use the first body's missing item range
|
||||
return mb[0].MissingItemRange()
|
||||
}
|
||||
|
||||
func (mb mergedBodies) mergedContent(schema *BodySchema, partial bool) (*BodyContent, Body, Diagnostics) {
|
||||
// We need to produce a new schema with none of the attributes marked as
|
||||
// required, since _any one_ of our bodies can contribute an attribute value.
|
||||
// We'll separately check that all required attributes are present at
|
||||
// the end.
|
||||
mergedSchema := &BodySchema{
|
||||
Blocks: schema.Blocks,
|
||||
}
|
||||
for _, attrS := range schema.Attributes {
|
||||
mergedAttrS := attrS
|
||||
mergedAttrS.Required = false
|
||||
mergedSchema.Attributes = append(mergedSchema.Attributes, mergedAttrS)
|
||||
}
|
||||
|
||||
var mergedLeftovers []Body
|
||||
content := &BodyContent{
|
||||
Attributes: map[string]*Attribute{},
|
||||
}
|
||||
|
||||
var diags Diagnostics
|
||||
for _, body := range mb {
|
||||
var thisContent *BodyContent
|
||||
var thisLeftovers Body
|
||||
var thisDiags Diagnostics
|
||||
|
||||
if partial {
|
||||
thisContent, thisLeftovers, thisDiags = body.PartialContent(mergedSchema)
|
||||
} else {
|
||||
thisContent, thisDiags = body.Content(mergedSchema)
|
||||
}
|
||||
|
||||
if thisLeftovers != nil {
|
||||
mergedLeftovers = append(mergedLeftovers, thisLeftovers)
|
||||
}
|
||||
if len(thisDiags) != 0 {
|
||||
diags = append(diags, thisDiags...)
|
||||
}
|
||||
|
||||
if thisContent.Attributes != nil {
|
||||
for name, attr := range thisContent.Attributes {
|
||||
if existing := content.Attributes[name]; existing != nil {
|
||||
diags = diags.Append(&Diagnostic{
|
||||
Severity: DiagError,
|
||||
Summary: "Duplicate argument",
|
||||
Detail: fmt.Sprintf(
|
||||
"Argument %q was already set at %s",
|
||||
name, existing.NameRange.String(),
|
||||
),
|
||||
Subject: &attr.NameRange,
|
||||
})
|
||||
continue
|
||||
}
|
||||
content.Attributes[name] = attr
|
||||
}
|
||||
}
|
||||
|
||||
if len(thisContent.Blocks) != 0 {
|
||||
content.Blocks = append(content.Blocks, thisContent.Blocks...)
|
||||
}
|
||||
}
|
||||
|
||||
// Finally, we check for required attributes.
|
||||
for _, attrS := range schema.Attributes {
|
||||
if !attrS.Required {
|
||||
continue
|
||||
}
|
||||
|
||||
if content.Attributes[attrS.Name] == nil {
|
||||
// We don't have any context here to produce a good diagnostic,
|
||||
// which is why we warn in the Content docstring to minimize the
|
||||
// use of required attributes on merged bodies.
|
||||
diags = diags.Append(&Diagnostic{
|
||||
Severity: DiagError,
|
||||
Summary: "Missing required argument",
|
||||
Detail: fmt.Sprintf(
|
||||
"The argument %q is required, but was not set.",
|
||||
attrS.Name,
|
||||
),
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
leftoverBody := MergeBodies(mergedLeftovers)
|
||||
return content, leftoverBody, diags
|
||||
}
|
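
MergeFiles and MergeBodies are intended for applications that let configuration be split across several files. A minimal sketch of that flow, with made-up file names and attribute names, might look like this (attributes are left optional in the schema, per the guidance in the Content doc comment above):

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	a, aDiags := hclsyntax.ParseConfig([]byte("name = \"web\"\n"), "a.hcl", hcl.InitialPos)
	b, bDiags := hclsyntax.ParseConfig([]byte("port = 8080\n"), "b.hcl", hcl.InitialPos)
	if aDiags.HasErrors() || bDiags.HasErrors() {
		log.Fatal(aDiags, bDiags)
	}

	merged := hcl.MergeFiles([]*hcl.File{a, b})

	// Attributes are left optional; either file may contribute a value.
	schema := &hcl.BodySchema{
		Attributes: []hcl.AttributeSchema{
			{Name: "name"},
			{Name: "port"},
		},
	}

	content, diags := merged.Content(schema)
	if diags.HasErrors() {
		log.Fatal(diags)
	}
	for name, attr := range content.Attributes {
		v, _ := attr.Expr.Value(nil)
		fmt.Println(name, v.GoString())
	}
}
```
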
288
vendor/github.com/hashicorp/hcl/v2/ops.go
generated
vendored
Normal file
@ -0,0 +1,288 @@
|
||||
package hcl
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math/big"
|
||||
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
"github.com/zclconf/go-cty/cty/convert"
|
||||
)
|
||||
|
||||
// Index is a helper function that performs the same operation as the index
|
||||
// operator in the HCL expression language. That is, the result is the
|
||||
// same as it would be for collection[key] in a configuration expression.
|
||||
//
|
||||
// This is exported so that applications can perform indexing in a manner
|
||||
// consistent with how the language does it, including handling of null and
|
||||
// unknown values, etc.
|
||||
//
|
||||
// Diagnostics are produced if the given combination of values is not valid.
|
||||
// Therefore a pointer to a source range must be provided to use in diagnostics,
|
||||
// though nil can be provided if the calling application is going to
|
||||
// ignore the subject of the returned diagnostics anyway.
|
||||
func Index(collection, key cty.Value, srcRange *Range) (cty.Value, Diagnostics) {
|
||||
if collection.IsNull() {
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Attempt to index null value",
|
||||
Detail: "This value is null, so it does not have any indices.",
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
if key.IsNull() {
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Invalid index",
|
||||
Detail: "Can't use a null value as an indexing key.",
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
ty := collection.Type()
|
||||
kty := key.Type()
|
||||
if kty == cty.DynamicPseudoType || ty == cty.DynamicPseudoType {
|
||||
return cty.DynamicVal, nil
|
||||
}
|
||||
|
||||
switch {
|
||||
|
||||
case ty.IsListType() || ty.IsTupleType() || ty.IsMapType():
|
||||
var wantType cty.Type
|
||||
switch {
|
||||
case ty.IsListType() || ty.IsTupleType():
|
||||
wantType = cty.Number
|
||||
case ty.IsMapType():
|
||||
wantType = cty.String
|
||||
default:
|
||||
// should never happen
|
||||
panic("don't know what key type we want")
|
||||
}
|
||||
|
||||
key, keyErr := convert.Convert(key, wantType)
|
||||
if keyErr != nil {
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Invalid index",
|
||||
Detail: fmt.Sprintf(
|
||||
"The given key does not identify an element in this collection value: %s.",
|
||||
keyErr.Error(),
|
||||
),
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
has := collection.HasIndex(key)
|
||||
if !has.IsKnown() {
|
||||
if ty.IsTupleType() {
|
||||
return cty.DynamicVal, nil
|
||||
} else {
|
||||
return cty.UnknownVal(ty.ElementType()), nil
|
||||
}
|
||||
}
|
||||
if has.False() {
|
||||
// We have a more specialized error message for the situation of
|
||||
// using a fractional number to index into a sequence, because
|
||||
// that will tend to happen if the user is trying to use division
|
||||
// to calculate an index and not realizing that HCL does float
|
||||
// division rather than integer division.
|
||||
if (ty.IsListType() || ty.IsTupleType()) && key.Type().Equals(cty.Number) {
|
||||
if key.IsKnown() && !key.IsNull() {
|
||||
bf := key.AsBigFloat()
|
||||
if _, acc := bf.Int(nil); acc != big.Exact {
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Invalid index",
|
||||
Detail: fmt.Sprintf("The given key does not identify an element in this collection value: indexing a sequence requires a whole number, but the given index (%g) has a fractional part.", bf),
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Invalid index",
|
||||
Detail: "The given key does not identify an element in this collection value.",
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
return collection.Index(key), nil
|
||||
|
||||
case ty.IsObjectType():
|
||||
key, keyErr := convert.Convert(key, cty.String)
|
||||
if keyErr != nil {
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Invalid index",
|
||||
Detail: fmt.Sprintf(
|
||||
"The given key does not identify an element in this collection value: %s.",
|
||||
keyErr.Error(),
|
||||
),
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
if !collection.IsKnown() {
|
||||
return cty.DynamicVal, nil
|
||||
}
|
||||
if !key.IsKnown() {
|
||||
return cty.DynamicVal, nil
|
||||
}
|
||||
|
||||
attrName := key.AsString()
|
||||
|
||||
if !ty.HasAttribute(attrName) {
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Invalid index",
|
||||
Detail: "The given key does not identify an element in this collection value.",
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
return collection.GetAttr(attrName), nil
|
||||
|
||||
default:
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Invalid index",
|
||||
Detail: "This value does not have any indices.",
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// GetAttr is a helper function that performs the same operation as the
|
||||
// attribute access in the HCL expression language. That is, the result is the
|
||||
// same as it would be for obj.attr in a configuration expression.
|
||||
//
|
||||
// This is exported so that applications can access attributes in a manner
|
||||
// consistent with how the language does it, including handling of null and
|
||||
// unknown values, etc.
|
||||
//
|
||||
// Diagnostics are produced if the given combination of values is not valid.
|
||||
// Therefore a pointer to a source range must be provided to use in diagnostics,
|
||||
// though nil can be provided if the calling application is going to
|
||||
// ignore the subject of the returned diagnostics anyway.
|
||||
func GetAttr(obj cty.Value, attrName string, srcRange *Range) (cty.Value, Diagnostics) {
|
||||
if obj.IsNull() {
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Attempt to get attribute from null value",
|
||||
Detail: "This value is null, so it does not have any attributes.",
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
ty := obj.Type()
|
||||
switch {
|
||||
case ty.IsObjectType():
|
||||
if !ty.HasAttribute(attrName) {
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Unsupported attribute",
|
||||
Detail: fmt.Sprintf("This object does not have an attribute named %q.", attrName),
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
if !obj.IsKnown() {
|
||||
return cty.UnknownVal(ty.AttributeType(attrName)), nil
|
||||
}
|
||||
|
||||
return obj.GetAttr(attrName), nil
|
||||
case ty.IsMapType():
|
||||
if !obj.IsKnown() {
|
||||
return cty.UnknownVal(ty.ElementType()), nil
|
||||
}
|
||||
|
||||
idx := cty.StringVal(attrName)
|
||||
if obj.HasIndex(idx).False() {
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Missing map element",
|
||||
Detail: fmt.Sprintf("This map does not have an element with the key %q.", attrName),
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
return obj.Index(idx), nil
|
||||
case ty == cty.DynamicPseudoType:
|
||||
return cty.DynamicVal, nil
|
||||
default:
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Unsupported attribute",
|
||||
Detail: "This value does not have any attributes.",
|
||||
Subject: srcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
// ApplyPath is a helper function that applies a cty.Path to a value using the
|
||||
// indexing and attribute access operations from HCL.
|
||||
//
|
||||
// This is similar to calling the path's own Apply method, but ApplyPath uses
|
||||
// the more relaxed typing rules that apply to these operations in HCL, rather
|
||||
// than cty's relatively-strict rules. ApplyPath is implemented in terms of
|
||||
// Index and GetAttr, and so it has the same behavior for individual steps
|
||||
// but will stop and return any errors returned by intermediate steps.
|
||||
//
|
||||
// Diagnostics are produced if the given path cannot be applied to the given
|
||||
// value. Therefore a pointer to a source range must be provided to use in
|
||||
// diagnostics, though nil can be provided if the calling application is going
|
||||
// to ignore the subject of the returned diagnostics anyway.
|
||||
func ApplyPath(val cty.Value, path cty.Path, srcRange *Range) (cty.Value, Diagnostics) {
|
||||
var diags Diagnostics
|
||||
|
||||
for _, step := range path {
|
||||
var stepDiags Diagnostics
|
||||
switch ts := step.(type) {
|
||||
case cty.IndexStep:
|
||||
val, stepDiags = Index(val, ts.Key, srcRange)
|
||||
case cty.GetAttrStep:
|
||||
val, stepDiags = GetAttr(val, ts.Name, srcRange)
|
||||
default:
|
||||
// Should never happen because the above are all of the step types.
|
||||
diags = diags.Append(&Diagnostic{
|
||||
Severity: DiagError,
|
||||
Summary: "Invalid path step",
|
||||
Detail: fmt.Sprintf("Go type %T is not a valid path step. This is a bug in this program.", step),
|
||||
Subject: srcRange,
|
||||
})
|
||||
return cty.DynamicVal, diags
|
||||
}
|
||||
|
||||
diags = append(diags, stepDiags...)
|
||||
if stepDiags.HasErrors() {
|
||||
return cty.DynamicVal, diags
|
||||
}
|
||||
}
|
||||
|
||||
return val, diags
|
||||
}
|
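
Index and GetAttr mirror the language's collection[key] and obj.attr operations for values an application already holds. A small sketch, using go-cty values constructed by hand and passing nil source ranges, could look like:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/zclconf/go-cty/cty"
)

func main() {
	list := cty.TupleVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")})
	elem, diags := hcl.Index(list, cty.NumberIntVal(1), nil) // nil: no source range for diagnostics
	fmt.Println(elem.AsString(), diags.HasErrors())          // b false

	obj := cty.ObjectVal(map[string]cty.Value{"name": cty.StringVal("web")})
	attr, _ := hcl.GetAttr(obj, "name", nil)
	fmt.Println(attr.AsString()) // web
}
```
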
275
vendor/github.com/hashicorp/hcl/v2/pos.go
generated
vendored
Normal file
@ -0,0 +1,275 @@
|
||||
package hcl
|
||||
|
||||
import "fmt"
|
||||
|
||||
// Pos represents a single position in a source file, by addressing the
|
||||
// start byte of a unicode character encoded in UTF-8.
|
||||
//
|
||||
// Pos is generally used only in the context of a Range, which then defines
|
||||
// which source file the position is within.
|
||||
type Pos struct {
|
||||
// Line is the source code line where this position points. Lines are
|
||||
// counted starting at 1 and incremented for each newline character
|
||||
// encountered.
|
||||
Line int
|
||||
|
||||
// Column is the source code column where this position points, in
|
||||
// unicode characters, with counting starting at 1.
|
||||
//
|
||||
// Column counts characters as they appear visually, so for example a
|
||||
// latin letter with a combining diacritic mark counts as one character.
|
||||
// This is intended for rendering visual markers against source code in
|
||||
// contexts where these diacritics would be rendered in a single character
|
||||
// cell. Technically speaking, Column is counting grapheme clusters as
|
||||
// used in unicode normalization.
|
||||
Column int
|
||||
|
||||
// Byte is the byte offset into the file where the indicated character
|
||||
// begins. This is a zero-based offset to the first byte of the first
|
||||
// UTF-8 codepoint sequence in the character, and thus gives a position
|
||||
// that can be resolved _without_ awareness of Unicode characters.
|
||||
Byte int
|
||||
}
|
||||
|
||||
// InitialPos is a suitable position to use to mark the start of a file.
|
||||
var InitialPos = Pos{Byte: 0, Line: 1, Column: 1}
|
||||
|
||||
// Range represents a span of characters between two positions in a source
|
||||
// file.
|
||||
//
|
||||
// This struct is usually used by value in types that represent AST nodes,
|
||||
// but by pointer in types that refer to the positions of other objects,
|
||||
// such as in diagnostics.
|
||||
type Range struct {
|
||||
// Filename is the name of the file into which this range's positions
|
||||
// point.
|
||||
Filename string
|
||||
|
||||
// Start and End represent the bounds of this range. Start is inclusive
|
||||
// and End is exclusive.
|
||||
Start, End Pos
|
||||
}
|
||||
|
||||
// RangeBetween returns a new range that spans from the beginning of the
|
||||
// start range to the end of the end range.
|
||||
//
|
||||
// The result is meaningless if the two ranges do not belong to the same
|
||||
// source file or if the end range appears before the start range.
|
||||
func RangeBetween(start, end Range) Range {
|
||||
return Range{
|
||||
Filename: start.Filename,
|
||||
Start: start.Start,
|
||||
End: end.End,
|
||||
}
|
||||
}
|
||||
|
||||
// RangeOver returns a new range that covers both of the given ranges and
|
||||
// possibly additional content between them if the two ranges do not overlap.
|
||||
//
|
||||
// If either range is empty then it is ignored. The result is empty if both
|
||||
// given ranges are empty.
|
||||
//
|
||||
// The result is meaningless if the two ranges do not belong to the same
|
||||
// source file.
|
||||
func RangeOver(a, b Range) Range {
|
||||
if a.Empty() {
|
||||
return b
|
||||
}
|
||||
if b.Empty() {
|
||||
return a
|
||||
}
|
||||
|
||||
var start, end Pos
|
||||
if a.Start.Byte < b.Start.Byte {
|
||||
start = a.Start
|
||||
} else {
|
||||
start = b.Start
|
||||
}
|
||||
if a.End.Byte > b.End.Byte {
|
||||
end = a.End
|
||||
} else {
|
||||
end = b.End
|
||||
}
|
||||
return Range{
|
||||
Filename: a.Filename,
|
||||
Start: start,
|
||||
End: end,
|
||||
}
|
||||
}
|
||||
|
||||
// ContainsPos returns true if and only if the given position is contained within
|
||||
// the receiving range.
|
||||
//
|
||||
// In the unlikely case that the line/column information disagree with the byte
|
||||
// offset information in the given position or receiving range, the byte
|
||||
// offsets are given priority.
|
||||
func (r Range) ContainsPos(pos Pos) bool {
|
||||
return r.ContainsOffset(pos.Byte)
|
||||
}
|
||||
|
||||
// ContainsOffset returns true if and only if the given byte offset is within
|
||||
// the receiving Range.
|
||||
func (r Range) ContainsOffset(offset int) bool {
|
||||
return offset >= r.Start.Byte && offset < r.End.Byte
|
||||
}
|
||||
|
||||
// Ptr returns a pointer to a copy of the receiver. This is a convenience when
|
||||
// using ranges in places where pointers are required, such as in Diagnostic, but
|
||||
// the range in question is returned from a method. Go would otherwise not
|
||||
// allow one to take the address of a function call.
|
||||
func (r Range) Ptr() *Range {
|
||||
return &r
|
||||
}
|
||||
|
||||
// String returns a compact string representation of the receiver.
|
||||
// Callers should generally prefer to present a range more visually,
|
||||
// e.g. via markers directly on the relevant portion of source code.
|
||||
func (r Range) String() string {
|
||||
if r.Start.Line == r.End.Line {
|
||||
return fmt.Sprintf(
|
||||
"%s:%d,%d-%d",
|
||||
r.Filename,
|
||||
r.Start.Line, r.Start.Column,
|
||||
r.End.Column,
|
||||
)
|
||||
} else {
|
||||
return fmt.Sprintf(
|
||||
"%s:%d,%d-%d,%d",
|
||||
r.Filename,
|
||||
r.Start.Line, r.Start.Column,
|
||||
r.End.Line, r.End.Column,
|
||||
)
|
||||
}
|
||||
}
|
||||
|
||||
func (r Range) Empty() bool {
|
||||
return r.Start.Byte == r.End.Byte
|
||||
}
|
||||
|
||||
// CanSliceBytes returns true if SliceBytes could return an accurate
|
||||
// sub-slice of the given slice.
|
||||
//
|
||||
// This effectively tests whether the start and end offsets of the range
|
||||
// are within the bounds of the slice, and thus whether SliceBytes can be
|
||||
// trusted to produce an accurate start and end position within that slice.
|
||||
func (r Range) CanSliceBytes(b []byte) bool {
|
||||
switch {
|
||||
case r.Start.Byte < 0 || r.Start.Byte > len(b):
|
||||
return false
|
||||
case r.End.Byte < 0 || r.End.Byte > len(b):
|
||||
return false
|
||||
case r.End.Byte < r.Start.Byte:
|
||||
return false
|
||||
default:
|
||||
return true
|
||||
}
|
||||
}
|
||||
|
||||
// SliceBytes returns a sub-slice of the given slice that is covered by the
|
||||
// receiving range, assuming that the given slice is the source code of the
|
||||
// file indicated by r.Filename.
|
||||
//
|
||||
// If the receiver refers to any byte offsets that are outside of the slice
|
||||
// then the result is constrained to the overlapping portion only, to avoid
|
||||
// a panic. Use CanSliceBytes to determine if the result is guaranteed to
|
||||
// be an accurate span of the requested range.
|
||||
func (r Range) SliceBytes(b []byte) []byte {
|
||||
start := r.Start.Byte
|
||||
end := r.End.Byte
|
||||
if start < 0 {
|
||||
start = 0
|
||||
} else if start > len(b) {
|
||||
start = len(b)
|
||||
}
|
||||
if end < 0 {
|
||||
end = 0
|
||||
} else if end > len(b) {
|
||||
end = len(b)
|
||||
}
|
||||
if end < start {
|
||||
end = start
|
||||
}
|
||||
return b[start:end]
|
||||
}
|
||||
|
||||
// Overlaps returns true if the receiver and the other given range share any
|
||||
// characters in common.
|
||||
func (r Range) Overlaps(other Range) bool {
|
||||
switch {
|
||||
case r.Filename != other.Filename:
|
||||
// If the ranges are in different files then they can't possibly overlap
|
||||
return false
|
||||
case r.Empty() || other.Empty():
|
||||
// Empty ranges can never overlap
|
||||
return false
|
||||
case r.ContainsOffset(other.Start.Byte) || r.ContainsOffset(other.End.Byte):
|
||||
return true
|
||||
case other.ContainsOffset(r.Start.Byte) || other.ContainsOffset(r.End.Byte):
|
||||
return true
|
||||
default:
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
// Overlap finds a range that is either identical to or a sub-range of both
|
||||
// the receiver and the other given range. It returns an empty range
|
||||
// within the receiver if there is no overlap between the two ranges.
|
||||
//
|
||||
// A non-empty result is either identical to or a subset of the receiver.
|
||||
func (r Range) Overlap(other Range) Range {
|
||||
if !r.Overlaps(other) {
|
||||
// Start == End indicates an empty range
|
||||
return Range{
|
||||
Filename: r.Filename,
|
||||
Start: r.Start,
|
||||
End: r.Start,
|
||||
}
|
||||
}
|
||||
|
||||
var start, end Pos
|
||||
if r.Start.Byte > other.Start.Byte {
|
||||
start = r.Start
|
||||
} else {
|
||||
start = other.Start
|
||||
}
|
||||
if r.End.Byte < other.End.Byte {
|
||||
end = r.End
|
||||
} else {
|
||||
end = other.End
|
||||
}
|
||||
|
||||
return Range{
|
||||
Filename: r.Filename,
|
||||
Start: start,
|
||||
End: end,
|
||||
}
|
||||
}
|
||||
|
||||
// PartitionAround finds the portion of the given range that overlaps with
|
||||
// the receiver and returns three ranges: the portion of the receiver that
|
||||
// precedes the overlap, the overlap itself, and then the portion of the
|
||||
// receiver that comes after the overlap.
|
||||
//
|
||||
// If the two ranges do not overlap then all three returned ranges are empty.
|
||||
//
|
||||
// If the given range aligns with or extends beyond either extent of the
|
||||
// receiver then the corresponding outer range will be empty.
|
||||
func (r Range) PartitionAround(other Range) (before, overlap, after Range) {
|
||||
overlap = r.Overlap(other)
|
||||
if overlap.Empty() {
|
||||
return overlap, overlap, overlap
|
||||
}
|
||||
|
||||
before = Range{
|
||||
Filename: r.Filename,
|
||||
Start: r.Start,
|
||||
End: overlap.Start,
|
||||
}
|
||||
after = Range{
|
||||
Filename: r.Filename,
|
||||
Start: overlap.End,
|
||||
End: r.End,
|
||||
}
|
||||
|
||||
return before, overlap, after
|
||||
}
|
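
RangeOver, Overlaps, and ContainsOffset are plain value operations, so they can be exercised directly with hand-built positions. A small sketch (byte offsets chosen arbitrarily for illustration):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
)

func main() {
	a := hcl.Range{
		Filename: "example.hcl",
		Start:    hcl.Pos{Line: 1, Column: 1, Byte: 0},
		End:      hcl.Pos{Line: 1, Column: 6, Byte: 5},
	}
	b := hcl.Range{
		Filename: "example.hcl",
		Start:    hcl.Pos{Line: 1, Column: 10, Byte: 9},
		End:      hcl.Pos{Line: 1, Column: 13, Byte: 12},
	}

	over := hcl.RangeOver(a, b) // covers both ranges and the gap between them
	fmt.Println(over.String())  // example.hcl:1,1-13
	fmt.Println(over.ContainsOffset(7), a.Overlaps(b)) // true false
}
```
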
152
vendor/github.com/hashicorp/hcl/v2/pos_scanner.go
generated
vendored
Normal file
@ -0,0 +1,152 @@
|
||||
package hcl
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
|
||||
"github.com/apparentlymart/go-textseg/v12/textseg"
|
||||
)
|
||||
|
||||
// RangeScanner is a helper that will scan over a buffer using a bufio.SplitFunc
|
||||
// and visit a source range for each token matched.
|
||||
//
|
||||
// For example, this can be used with bufio.ScanLines to find the source range
|
||||
// for each line in the file, skipping over the actual newline characters, which
|
||||
// may be useful when printing source code snippets as part of diagnostic
|
||||
// messages.
|
||||
//
|
||||
// The line and column information in the returned ranges is produced by
|
||||
// counting newline characters and grapheme clusters respectively, which
|
||||
// mimics the behavior we expect from a parser when producing ranges.
|
||||
type RangeScanner struct {
|
||||
filename string
|
||||
b []byte
|
||||
cb bufio.SplitFunc
|
||||
|
||||
pos Pos // position of next byte to process in b
|
||||
cur Range // latest range
|
||||
tok []byte // slice of b that is covered by cur
|
||||
err error // error from last scan, if any
|
||||
}
|
||||
|
||||
// NewRangeScanner creates a new RangeScanner for the given buffer, producing
|
||||
// ranges for the given filename.
|
||||
//
|
||||
// Since ranges have grapheme-cluster granularity rather than byte granularity,
|
||||
// the scanner will produce incorrect results if the given SplitFunc creates
|
||||
// tokens between grapheme cluster boundaries. In particular, it is incorrect
|
||||
// to use RangeScanner with bufio.ScanRunes because it will produce tokens
|
||||
// around individual UTF-8 sequences, which will split any multi-sequence
|
||||
// grapheme clusters.
|
||||
func NewRangeScanner(b []byte, filename string, cb bufio.SplitFunc) *RangeScanner {
|
||||
return NewRangeScannerFragment(b, filename, InitialPos, cb)
|
||||
}
|
||||
|
||||
// NewRangeScannerFragment is like NewRangeScanner but the ranges it produces
|
||||
// will be offset by the given starting position, which is appropriate for
|
||||
// sub-slices of a file, whereas NewRangeScanner assumes it is scanning an
|
||||
// entire file.
|
||||
func NewRangeScannerFragment(b []byte, filename string, start Pos, cb bufio.SplitFunc) *RangeScanner {
|
||||
return &RangeScanner{
|
||||
filename: filename,
|
||||
b: b,
|
||||
cb: cb,
|
||||
pos: start,
|
||||
}
|
||||
}
|
||||
|
||||
func (sc *RangeScanner) Scan() bool {
|
||||
if sc.pos.Byte >= len(sc.b) || sc.err != nil {
|
||||
// All done
|
||||
return false
|
||||
}
|
||||
|
||||
// Since we're operating on an in-memory buffer, we always pass the whole
|
||||
// remainder of the buffer to our SplitFunc and set isEOF to let it know
|
||||
// that it has the whole thing.
|
||||
advance, token, err := sc.cb(sc.b[sc.pos.Byte:], true)
|
||||
|
||||
// Since we are setting isEOF to true this should never happen, but
|
||||
// if it does we will just abort and assume the SplitFunc is misbehaving.
|
||||
if advance == 0 && token == nil && err == nil {
|
||||
return false
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
sc.err = err
|
||||
sc.cur = Range{
|
||||
Filename: sc.filename,
|
||||
Start: sc.pos,
|
||||
End: sc.pos,
|
||||
}
|
||||
sc.tok = nil
|
||||
return false
|
||||
}
|
||||
|
||||
sc.tok = token
|
||||
start := sc.pos
|
||||
end := sc.pos
|
||||
new := sc.pos
|
||||
|
||||
// adv is similar to token but it also includes any subsequent characters
|
||||
// we're being asked to skip over by the SplitFunc.
|
||||
// adv is a slice covering any additional bytes we are skipping over, based
|
||||
// on what the SplitFunc told us to do with advance.
|
||||
adv := sc.b[sc.pos.Byte : sc.pos.Byte+advance]
|
||||
|
||||
// We now need to scan over our token to count the grapheme clusters
|
||||
// so we can correctly advance Column, and count the newlines so we
|
||||
// can correctly advance Line.
|
||||
advR := bytes.NewReader(adv)
|
||||
gsc := bufio.NewScanner(advR)
|
||||
advanced := 0
|
||||
gsc.Split(textseg.ScanGraphemeClusters)
|
||||
for gsc.Scan() {
|
||||
gr := gsc.Bytes()
|
||||
new.Byte += len(gr)
|
||||
new.Column++
|
||||
|
||||
// We rely here on the fact that \r\n is considered a grapheme cluster
|
||||
// and so we don't need to worry about miscounting additional lines
|
||||
// on files with Windows-style line endings.
|
||||
if len(gr) != 0 && (gr[0] == '\r' || gr[0] == '\n') {
|
||||
new.Column = 1
|
||||
new.Line++
|
||||
}
|
||||
|
||||
if advanced < len(token) {
|
||||
// If we've not yet found the end of our token then we'll
|
||||
// also push our "end" marker along.
|
||||
// (if advance > len(token) then we'll stop moving "end" early
|
||||
// so that the caller only sees the range covered by token.)
|
||||
end = new
|
||||
}
|
||||
advanced += len(gr)
|
||||
}
|
||||
|
||||
sc.cur = Range{
|
||||
Filename: sc.filename,
|
||||
Start: start,
|
||||
End: end,
|
||||
}
|
||||
sc.pos = new
|
||||
return true
|
||||
}
|
||||
|
||||
// Range returns a range that covers the latest token obtained after a call
|
||||
// to Scan returns true.
|
||||
func (sc *RangeScanner) Range() Range {
|
||||
return sc.cur
|
||||
}
|
||||
|
||||
// Bytes returns the slice of the input buffer that is covered by the range
|
||||
// that would be returned by Range.
|
||||
func (sc *RangeScanner) Bytes() []byte {
|
||||
return sc.tok
|
||||
}
|
||||
|
||||
// Err can be called after Scan returns false to determine if the latest read
|
||||
// resulted in an error, and obtain that error if so.
|
||||
func (sc *RangeScanner) Err() error {
|
||||
return sc.err
|
||||
}
|
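
As the doc comment above suggests, RangeScanner pairs naturally with bufio.ScanLines to recover a source range per line. A minimal sketch, with a made-up buffer and file name:

```go
package main

import (
	"bufio"
	"fmt"

	"github.com/hashicorp/hcl/v2"
)

func main() {
	src := []byte("first = 1\nsecond = 2\n")
	sc := hcl.NewRangeScanner(src, "example.hcl", bufio.ScanLines)
	for sc.Scan() {
		fmt.Printf("%s: %q\n", sc.Range(), sc.Bytes())
	}
	if err := sc.Err(); err != nil {
		fmt.Println("scan error:", err)
	}
}
```
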
21
vendor/github.com/hashicorp/hcl/v2/schema.go
generated
vendored
Normal file
@ -0,0 +1,21 @@
package hcl

// BlockHeaderSchema represents the shape of a block header, and is
// used for matching blocks within bodies.
type BlockHeaderSchema struct {
	Type       string
	LabelNames []string
}

// AttributeSchema represents the requirements for an attribute, and is used
// for matching attributes within bodies.
type AttributeSchema struct {
	Name     string
	Required bool
}

// BodySchema represents the desired shallow structure of a body.
type BodySchema struct {
	Attributes []AttributeSchema
	Blocks     []BlockHeaderSchema
}
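
These schema types drive the Content and PartialContent methods on a body. A minimal sketch (made-up configuration, using the hclsyntax parser from this module) of declaring a schema with one required attribute and one labelled block type:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	src := []byte("version = \"1.0\"\n\nservice \"web\" {\n  port = 80\n}\n")
	f, diags := hclsyntax.ParseConfig(src, "example.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		log.Fatal(diags)
	}

	schema := &hcl.BodySchema{
		Attributes: []hcl.AttributeSchema{{Name: "version", Required: true}},
		Blocks:     []hcl.BlockHeaderSchema{{Type: "service", LabelNames: []string{"name"}}},
	}

	// PartialContent leaves anything not named in the schema in remain.
	content, remain, diags := f.Body.PartialContent(schema)
	if diags.HasErrors() {
		log.Fatal(diags)
	}
	for _, block := range content.Blocks {
		fmt.Println(block.Type, block.Labels) // service [web]
	}
	_ = remain
}
```
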
691
vendor/github.com/hashicorp/hcl/v2/spec.md
generated
vendored
Normal file
@ -0,0 +1,691 @@
|
||||
# HCL Syntax-Agnostic Information Model
|
||||
|
||||
This is the specification for the general information model (abstract types and
|
||||
semantics) for HCL. HCL is a system for defining configuration languages for
|
||||
applications. The HCL information model is designed to support multiple
|
||||
concrete syntaxes for configuration, each with a mapping to the model defined
|
||||
in this specification.
|
||||
|
||||
The two primary syntaxes intended for use in conjunction with this model are
|
||||
[the HCL native syntax](./hclsyntax/spec.md) and [the JSON syntax](./json/spec.md).
|
||||
In principle other syntaxes are possible as long as either their language model
|
||||
is sufficiently rich to express the concepts described in this specification
|
||||
or the language targets a well-defined subset of the specification.
|
||||
|
||||
## Structural Elements
|
||||
|
||||
The primary structural element is the _body_, which is a container representing
|
||||
a set of zero or more _attributes_ and a set of zero or more _blocks_.
|
||||
|
||||
A _configuration file_ is the top-level object, and will usually be produced
|
||||
by reading a file from disk and parsing it as a particular syntax. A
|
||||
configuration file has its own _body_, representing the top-level attributes
|
||||
and blocks.
|
||||
|
||||
An _attribute_ is a name and value pair associated with a body. Attribute names
|
||||
are unique within a given body. Attribute values are provided as _expressions_,
|
||||
which are discussed in detail in a later section.
|
||||
|
||||
A _block_ is a nested structure that has a _type name_, zero or more string
|
||||
_labels_ (e.g. identifiers), and a nested body.
|
||||
|
||||
Together the structural elements create a hierarchical data structure, with
|
||||
attributes intended to represent the direct properties of a particular object
|
||||
in the calling application, and blocks intended to represent child objects
|
||||
of a particular object.
|
||||
|
||||
## Body Content
|
||||
|
||||
To support the expression of the HCL concepts in languages whose information
|
||||
model is a subset of HCL's, such as JSON, a _body_ is an opaque container
|
||||
whose content can only be accessed by providing information on the expected
|
||||
structure of the content.
|
||||
|
||||
The specification for each syntax must describe how its physical constructs
|
||||
are mapped on to body content given a schema. For syntaxes that have
|
||||
first-class syntax distinguishing attributes and bodies this can be relatively
|
||||
straightforward, while more detailed mapping rules may be required in syntaxes
|
||||
where the representation of attributes vs. blocks is ambiguous.
|
||||
|
||||
### Schema-driven Processing
|
||||
|
||||
Schema-driven processing is the primary way to access body content.
|
||||
A _body schema_ is a description of what is expected within a particular body,
|
||||
which can then be used to extract the _body content_, which then provides
|
||||
access to the specific attributes and blocks requested.
|
||||
|
||||
A _body schema_ consists of a list of _attribute schemata_ and
|
||||
_block header schemata_:
|
||||
|
||||
- An _attribute schema_ provides the name of an attribute and whether its
|
||||
presence is required.
|
||||
|
||||
- A _block header schema_ provides a block type name and the semantic names
|
||||
assigned to each of the labels of that block type, if any.
|
||||
|
||||
Within a schema, it is an error to request the same attribute name twice or
|
||||
to request a block type whose name is also an attribute name. While this can
|
||||
in principle be supported in some syntaxes, in other syntaxes the attribute
|
||||
and block namespaces are combined and so an attribute cannot coexist with
|
||||
a block whose type name is identical to the attribute name.
|
||||
|
||||
The result of applying a body schema to a body is _body content_, which
|
||||
consists of an _attribute map_ and a _block sequence_:
|
||||
|
||||
- The _attribute map_ is a map data structure whose keys are attribute names
|
||||
and whose values are _expressions_ that represent the corresponding attribute
|
||||
values.
|
||||
|
||||
- The _block sequence_ is an ordered sequence of blocks, with each specifying
|
||||
a block _type name_, the sequence of _labels_ specified for the block,
|
||||
and the body object (not body _content_) representing the block's own body.
|
||||
|
||||
After obtaining _body content_, the calling application may continue processing
|
||||
by evaluating attribute expressions and/or recursively applying further
|
||||
schema-driven processing to the child block bodies.
|
||||
|
||||
**Note:** The _body schema_ is intentionally minimal, to reduce the set of
|
||||
mapping rules that must be defined for each syntax. Higher-level utility
|
||||
libraries may be provided to assist in the construction of a schema and
|
||||
perform additional processing, such as automatically evaluating attribute
|
||||
expressions and assigning their result values into a data structure, or
|
||||
recursively applying a schema to child blocks. Such utilities are not part of
|
||||
this core specification and will vary depending on the capabilities and idiom
|
||||
of the implementation language.
|
||||
|
||||
### _Dynamic Attributes_ Processing
|
||||
|
||||
The _schema-driven_ processing model is useful when the expected structure
|
||||
of a body is known a priori by the calling application. Some blocks are
|
||||
instead more free-form, such as a user-provided set of arbitrary key/value
|
||||
pairs.
|
||||
|
||||
The alternative _dynamic attributes_ processing mode allows for this more
|
||||
ad-hoc approach. Processing in this mode behaves as if a schema had been
|
||||
constructed without any _block header schemata_ and with an attribute
|
||||
schema for each distinct key provided within the physical representation
|
||||
of the body.
|
||||
|
||||
The means by which _distinct keys_ are identified is dependent on the
|
||||
physical syntax; this processing mode assumes that the syntax has a way
|
||||
to enumerate keys provided by the author and identify expressions that
|
||||
correspond with those keys, but does not define the means by which this is
|
||||
done.
|
||||
|
||||
The result of _dynamic attributes_ processing is an _attribute map_ as
|
||||
defined in the previous section. No _block sequence_ is produced in this
|
||||
processing mode.
|
||||
|
||||
### Partial Processing of Body Content
|
||||
|
||||
Under _schema-driven processing_, by default the given schema is assumed
|
||||
to be exhaustive, such that any attribute or block not matched by schema
|
||||
elements is considered an error. This allows feedback about unsupported
|
||||
attributes and blocks (such as typos) to be provided.
|
||||
|
||||
An alternative is _partial processing_, where any additional elements within
|
||||
the body are not considered an error.
|
||||
|
||||
Under partial processing, the result is both body content as described
|
||||
above _and_ a new body that represents any body elements that remain after
|
||||
the schema has been processed.
|
||||
|
||||
Specifically:
|
||||
|
||||
- Any attribute whose name is specified in the schema is returned in body
|
||||
content and elided from the new body.
|
||||
|
||||
- Any block whose type is specified in the schema is returned in body content
|
||||
and elided from the new body.
|
||||
|
||||
- Any attribute or block _not_ meeting the above conditions is placed into
|
||||
the new body, unmodified.
|
||||
|
||||
The new body can then be recursively processed using any of the body
|
||||
processing models. This facility allows different subsets of body content
|
||||
to be processed by different parts of the calling application.
|
||||
|
||||
Processing a body in two steps — first partial processing of a source body,
|
||||
then exhaustive processing of the returned body — is equivalent to single-step
|
||||
processing with a schema that is the union of the schemata used
|
||||
across the two steps.
|
||||
|
||||
## Expressions
|
||||
|
||||
Attribute values are represented by _expressions_. Depending on the concrete
|
||||
syntax in use, an expression may just be a literal value or it may describe
|
||||
a computation in terms of literal values, variables, and functions.
|
||||
|
||||
Each syntax defines its own representation of expressions. For syntaxes based
|
||||
in languages that do not have any non-literal expression syntax, it is
|
||||
recommended to embed the template language from
|
||||
[the native syntax](./hclsyntax/spec.md) e.g. as a post-processing step on
|
||||
string literals.
|
||||
|
||||
### Expression Evaluation
|
||||
|
||||
In order to obtain a concrete value, each expression must be _evaluated_.
|
||||
Evaluation is performed in terms of an evaluation context, which
|
||||
consists of the following:
|
||||
|
||||
- An _evaluation mode_, which is defined below.
|
||||
- A _variable scope_, which provides a set of named variables for use in
|
||||
expressions.
|
||||
- A _function table_, which provides a set of named functions for use in
|
||||
expressions.
|
||||
|
||||
The _evaluation mode_ allows for two different interpretations of an
|
||||
expression:
|
||||
|
||||
- In _literal-only mode_, variables and functions are not available and it
|
||||
is assumed that the calling application's intent is to treat the attribute
|
||||
value as a literal.
|
||||
|
||||
- In _full expression mode_, variables and functions are defined and it is
|
||||
assumed that the calling application wishes to provide a full expression
|
||||
language for definition of the attribute value.
|
||||
|
||||
The actual behavior of these two modes depends on the syntax in use. For
|
||||
languages with first-class expression syntax, these two modes may be considered
|
||||
equivalent, with _literal-only mode_ simply not defining any variables or
|
||||
functions. For languages that embed arbitrary expressions via string templates,
|
||||
_literal-only mode_ may disable such processing, allowing literal strings to
|
||||
pass through without interpretation as templates.
|
||||
|
||||
Since literal-only mode does not support variables and functions, it is an
|
||||
error for the calling application to enable this mode and yet provide a
|
||||
variable scope and/or function table.
|
||||
|
||||
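
A minimal sketch (not part of this specification) of full expression mode in the Go implementation, supplying a variable scope and a function table through `hcl.EvalContext`; the go-cty stdlib function used here is only an example:

```go
package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
	"github.com/zclconf/go-cty/cty/function/stdlib"
)

func main() {
	expr, diags := hclsyntax.ParseExpression([]byte("upper(greeting)"), "example.hcl", hcl.InitialPos)
	if diags.HasErrors() {
		log.Fatal(diags)
	}

	// Variable scope and function table for full expression mode.
	ctx := &hcl.EvalContext{
		Variables: map[string]cty.Value{
			"greeting": cty.StringVal("hello"),
		},
		Functions: map[string]function.Function{
			"upper": stdlib.UpperFunc,
		},
	}

	v, diags := expr.Value(ctx)
	if diags.HasErrors() {
		log.Fatal(diags)
	}
	fmt.Println(v.AsString()) // HELLO
}
```
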
## Values and Value Types
|
||||
|
||||
The result of expression evaluation is a _value_. Each value has a _type_,
|
||||
which is dynamically determined during evaluation. The _variable scope_ in
|
||||
the evaluation context is a map from variable name to value, using the same
|
||||
definition of value.
|
||||
|
||||
The type system for HCL values is intended to be of a level of abstraction
|
||||
suitable for configuration of various applications. A well-defined,
|
||||
implementation-language-agnostic type system is defined to allow for
|
||||
consistent processing of configuration across many implementation languages.
|
||||
Concrete implementations may provide additional functionality to lower
|
||||
HCL values and types to corresponding native language types, which may then
|
||||
impose additional constraints on the values outside of the scope of this
|
||||
specification.
|
||||
|
||||
Two values are _equal_ if and only if they have identical types and their
|
||||
values are equal according to the rules of their shared type.
|
||||
|
||||
### Primitive Types
|
||||
|
||||
The primitive types are _string_, _bool_, and _number_.
|
||||
|
||||
A _string_ is a sequence of unicode characters. Two strings are equal if
|
||||
NFC normalization ([UAX#15](http://unicode.org/reports/tr15/))
|
||||
of each string produces two identical sequences of characters.
|
||||
NFC normalization ensures that, for example, a precomposed combination of a
|
||||
latin letter and a diacritic compares equal with the letter followed by
|
||||
a combining diacritic.
|
||||
|
||||
The _bool_ type has only two non-null values: _true_ and _false_. Two bool
|
||||
values are equal if and only if they are either both true or both false.
|
||||
|
||||
A _number_ is an arbitrary-precision floating point value. An implementation
|
||||
_must_ make the full-precision values available to the calling application
|
||||
for interpretation into any suitable number representation. An implementation
|
||||
may in practice implement numbers with limited precision so long as the
|
||||
following constraints are met:
|
||||
|
||||
- Integers are represented with at least 256 bits.
|
||||
- Non-integer numbers are represented as floating point values with a
|
||||
mantissa of at least 256 bits and a signed binary exponent of at least
|
||||
16 bits.
|
||||
- An error is produced if an integer value given in source cannot be
|
||||
represented precisely.
|
||||
- An error is produced if a non-integer value cannot be represented due to
|
||||
overflow.
|
||||
- A non-integer number is rounded to the nearest possible value when a
|
||||
value is of too high a precision to be represented.
|
||||
|
||||
The _number_ type also requires representation of both positive and negative
|
||||
infinity. A "not a number" (NaN) value is _not_ provided nor used.
|
||||
|
||||
Two number values are equal if they are numerically equal to the precision
|
||||
associated with the number. Positive infinity and negative infinity are
|
||||
equal to themselves but not to each other. Positive infinity is greater than
|
||||
any other number value, and negative infinity is less than any other number
|
||||
value.
|
||||
|
||||
Some syntaxes may be unable to represent numeric literals of arbitrary
|
||||
precision. This must be defined in the syntax specification as part of its
|
||||
description of mapping numeric literals to HCL values.
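
As a non-normative illustration, the Go implementation represents these values
with the go-cty library (imported by the Go sources later in this change). A
minimal sketch of the primitive equality rules described above:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Strings are NFC-normalized, so a precomposed "é" equals "e" followed
	// by a combining acute accent.
	a := cty.StringVal("caf\u00e9")
	b := cty.StringVal("cafe\u0301")
	fmt.Println(a.Equals(b).True()) // true

	// Bool has exactly two non-null values.
	fmt.Println(cty.True.Equals(cty.BoolVal(true)).True()) // true

	// Numbers are arbitrary precision; this integer does not fit in int64
	// but is still represented exactly.
	n := cty.MustParseNumberVal("9223372036854775808")
	fmt.Println(n.Equals(cty.MustParseNumberVal("9223372036854775808")).True()) // true

	// Positive and negative infinity are values; NaN is not provided.
	fmt.Println(cty.PositiveInfinity.Equals(cty.NegativeInfinity).True()) // false
}
```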
|
||||
|
||||
### Structural Types
|
||||
|
||||
_Structural types_ are types that are constructed by combining other types.
|
||||
Each distinct combination of other types is itself a distinct type. There
|
||||
are two structural type _kinds_:
|
||||
|
||||
- _Object types_ are constructed of a set of named attributes, each of which
|
||||
has a type. Attribute names are always strings. (_Object_ attributes are a
|
||||
distinct idea from _body_ attributes, though calling applications
|
||||
may choose to blur the distinction by use of common naming schemes.)
|
||||
- _Tuple types_ are constructed of a sequence of elements, each of which
|
||||
has a type.
|
||||
|
||||
Values of structural types are compared for equality in terms of their
|
||||
attributes or elements. A structural type value is equal to another if and
|
||||
only if all of the corresponding attributes or elements are equal.
|
||||
|
||||
Two structural types are identical if they are of the same kind and
|
||||
have attributes or elements with identical types.
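
A non-normative go-cty sketch of structural values and structural type
identity as described above:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// An object value; its type is defined by its attribute names and types.
	obj := cty.ObjectVal(map[string]cty.Value{
		"name": cty.StringVal("example"),
		"port": cty.NumberIntVal(8080),
	})
	fmt.Println(obj.Type().IsObjectType(), obj.Type().HasAttribute("port")) // true true

	// A tuple value; its type records each element type in sequence.
	tup := cty.TupleVal([]cty.Value{cty.StringVal("a"), cty.NumberIntVal(1)})
	fmt.Println(tup.Type().IsTupleType()) // true

	// Two structural types are identical only when the corresponding
	// attribute or element types are identical.
	same := cty.Object(map[string]cty.Type{"name": cty.String, "port": cty.Number})
	fmt.Println(obj.Type().Equals(same)) // true
}
```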
|
||||
|
||||
### Collection Types
|
||||
|
||||
_Collection types_ are types that combine together an arbitrary number of
|
||||
values of some other single type. There are three collection type _kinds_:
|
||||
|
||||
- _List types_ represent ordered sequences of values of their element type.
|
||||
- _Map types_ represent values of their element type accessed via string keys.
|
||||
- _Set types_ represent unordered sets of distinct values of their element type.
|
||||
|
||||
For each of these kinds and each distinct element type there is a distinct
|
||||
collection type. For example, "list of string" is a distinct type from
|
||||
"set of string", and "list of number" is a distinct type from "list of string".
|
||||
|
||||
Values of collection types are compared for equality in terms of their
|
||||
elements. A collection type value is equal to another if and only if both
|
||||
have the same number of elements and their corresponding elements are equal.
|
||||
|
||||
Two collection types are identical if they are of the same kind and have
|
||||
the same element type.
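
A non-normative go-cty sketch of collection types and their equality rules:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// "list of string" and "set of string" are distinct types even though
	// they share an element type.
	list := cty.ListVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")})
	set := cty.SetVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")})
	fmt.Println(list.Type().Equals(set.Type())) // false

	// Map values are accessed via string keys; the element type is shared.
	m := cty.MapVal(map[string]cty.Value{"x": cty.NumberIntVal(1)})
	fmt.Println(m.Type().ElementType().Equals(cty.Number)) // true

	// Collection values are equal only if all corresponding elements are equal.
	fmt.Println(list.Equals(cty.ListVal([]cty.Value{
		cty.StringVal("a"), cty.StringVal("b"),
	})).True()) // true
}
```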
|
||||
|
||||
### Null values
|
||||
|
||||
Each type has a null value. The null value of a type represents the absence
|
||||
of a value, but with type information retained to allow for type checking.
|
||||
|
||||
Null values are used primarily to represent the conditional absence of a
|
||||
body attribute. In a syntax with a conditional operator, one of the result
|
||||
values of that conditional may be null to indicate that the attribute should be
|
||||
considered not present in that case.
|
||||
|
||||
Calling applications _should_ consider an attribute with a null value as
|
||||
equivalent to the value not being present at all.
|
||||
|
||||
A null value of a particular type is equal to itself.
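
A non-normative go-cty sketch showing that a null value retains its type:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// A null value represents the absence of a value but keeps type
	// information for type checking.
	nullStr := cty.NullVal(cty.String)
	fmt.Println(nullStr.IsNull())                   // true
	fmt.Println(nullStr.Type().Equals(cty.String))  // true

	// A null value of a particular type is equal to itself.
	fmt.Println(nullStr.Equals(cty.NullVal(cty.String)).True()) // true
}
```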
|
||||
|
||||
### Unknown Values and the Dynamic Pseudo-type
|
||||
|
||||
An _unknown value_ is a placeholder for a value that is not yet known.
|
||||
Operations on unknown values themselves return unknown values that have a
|
||||
type appropriate to the operation. For example, adding together two unknown
|
||||
numbers yields an unknown number, while comparing two unknown values of any
|
||||
type for equality yields an unknown bool.
|
||||
|
||||
Each type has a distinct unknown value. For example, an unknown _number_ is
|
||||
a distinct value from an unknown _string_.
|
||||
|
||||
_The dynamic pseudo-type_ is a placeholder for a type that is not yet known.
|
||||
The only values of this type are its null value and its unknown value. It is
|
||||
referred to as a _pseudo-type_ because it should not be considered a type in
|
||||
its own right, but rather as a placeholder for a type yet to be established.
|
||||
The unknown value of the dynamic pseudo-type is referred to as _the dynamic
|
||||
value_.
|
||||
|
||||
Operations on values of the dynamic pseudo-type behave as if it is a value
|
||||
of the expected type, optimistically assuming that once the value and type
|
||||
are known they will be valid for the operation. For example, adding together
|
||||
a number and the dynamic value produces an unknown number.
|
||||
|
||||
Unknown values and the dynamic pseudo-type can be used as a mechanism for
|
||||
partial type checking and semantic checking: by evaluating an expression with
|
||||
all variables set to an unknown value, the expression can be evaluated to
|
||||
produce an unknown value of a given type, or produce an error if any operation
|
||||
is provably invalid with only type information.
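
A non-normative go-cty sketch of unknown-value propagation and the dynamic
value:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
)

func main() {
	// Adding an unknown number to a known number yields an unknown number.
	sum := cty.UnknownVal(cty.Number).Add(cty.NumberIntVal(2))
	fmt.Println(sum.IsKnown())                 // false
	fmt.Println(sum.Type().Equals(cty.Number)) // true

	// Comparing any value with an unknown yields an unknown bool.
	eq := cty.StringVal("a").Equals(cty.UnknownVal(cty.String))
	fmt.Println(eq.IsKnown()) // false

	// The dynamic value is the unknown value of the dynamic pseudo-type;
	// operations optimistically assume it will be valid once known.
	fmt.Println(cty.DynamicVal.Add(cty.NumberIntVal(1)).IsKnown()) // false
}
```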
|
||||
|
||||
Unknown values and the dynamic pseudo-type must never be returned from
|
||||
operations unless at least one operand is unknown or dynamic. Calling
|
||||
applications are guaranteed that unless the global scope includes unknown
|
||||
values, or the function table includes functions that return unknown values,
|
||||
no expression will evaluate to an unknown value. The calling application is
|
||||
thus in total control over the use and meaning of unknown values.
|
||||
|
||||
The dynamic pseudo-type is identical only to itself.
|
||||
|
||||
### Capsule Types
|
||||
|
||||
A _capsule type_ is a custom type defined by the calling application. A value
|
||||
of a capsule type is considered opaque to HCL, but may be accepted
|
||||
by functions provided by the calling application.
|
||||
|
||||
A particular capsule type is identical only to itself. The equality of two
|
||||
values of the same capsule type is defined by the calling application. No
|
||||
other operations are supported for values of capsule types.
|
||||
|
||||
Support for capsule types in an HCL implementation is optional. Capsule types
|
||||
are intended to allow calling applications to pass through values that are
|
||||
not part of the standard type system. For example, an application that
|
||||
deals with raw binary data may define a capsule type representing a byte
|
||||
array, and provide functions that produce or operate on byte arrays.
|
||||
|
||||
### Type Specifications
|
||||
|
||||
In certain situations it is necessary to define expectations about the
|
||||
type of a value. Whereas two _types_ have a commutative _identity_ relationship,
|
||||
a type has a non-commutative _matches_ relationship with a _type specification_.
|
||||
A type specification is, in practice, just a different interpretation of a
|
||||
type such that:
|
||||
|
||||
- Any type _matches_ any type that it is identical to.
|
||||
|
||||
- Any type _matches_ the dynamic pseudo-type.
|
||||
|
||||
For example, given a type specification "list of dynamic pseudo-type", the
|
||||
concrete types "list of string" and "list of map" match, but the
|
||||
type "set of string" does not.
|
||||
|
||||
## Functions and Function Calls
|
||||
|
||||
The evaluation context used to evaluate an expression includes a function
|
||||
table, which represents an application-defined set of named functions
|
||||
available for use in expressions.
|
||||
|
||||
Each syntax defines whether function calls are supported and how they are
|
||||
physically represented in source code, but the semantics of function calls are
|
||||
defined here to ensure consistent results across syntaxes and to allow
|
||||
applications to provide functions that are interoperable with all syntaxes.
|
||||
|
||||
A _function_ is defined from the following elements:
|
||||
|
||||
- Zero or more _positional parameters_, each with a name used for documentation,
|
||||
a type specification for expected argument values, and a flag for whether
|
||||
each of null values, unknown values, and values of the dynamic pseudo-type
|
||||
are accepted.
|
||||
|
||||
- Zero or one _variadic parameter_, with the same structure as the _positional_
|
||||
parameters, which if present collects any additional arguments provided at
|
||||
the function call site.
|
||||
|
||||
- A _result type definition_, which specifies the value type returned for each
|
||||
valid sequence of argument values.
|
||||
|
||||
- A _result value definition_, which specifies the value returned for each
|
||||
valid sequence of argument values.
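
As a non-normative illustration of the elements listed above, the Go
implementation expresses functions with the go-cty `function` package (the
same value library used elsewhere in this change). A minimal sketch with one
positional parameter, a variadic parameter, a static result type, and a result
value definition:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/function"
)

func main() {
	// A "sum" function: one positional number parameter plus a variadic
	// number parameter that collects any further arguments.
	sum := function.New(&function.Spec{
		Params: []function.Parameter{
			{Name: "first", Type: cty.Number},
		},
		VarParam: &function.Parameter{Name: "rest", Type: cty.Number},
		Type:     function.StaticReturnType(cty.Number), // result type definition
		Impl: func(args []cty.Value, retType cty.Type) (cty.Value, error) {
			// Result value definition: add all arguments together.
			total := cty.Zero
			for _, arg := range args {
				total = total.Add(arg)
			}
			return total, nil
		},
	})

	// Calling the function maps arguments onto parameters and, after
	// semantic checking, produces the result value.
	result, err := sum.Call([]cty.Value{
		cty.NumberIntVal(1), cty.NumberIntVal(2), cty.NumberIntVal(3),
	})
	fmt.Println(result.AsBigFloat().String(), err) // 6 <nil>
}
```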
|
||||
|
||||
A _function call_, regardless of source syntax, consists of a sequence of
|
||||
argument values. The argument values are each mapped to a corresponding
|
||||
parameter as follows:
|
||||
|
||||
- For each of the function's positional parameters in sequence, take the next
|
||||
argument. If there are no more arguments, the call is erroneous.
|
||||
|
||||
- If the function has a variadic parameter, take all remaining arguments that
|
||||
were not yet assigned to a positional parameter and collect them into
|
||||
a sequence of variadic arguments that each correspond to the variadic
|
||||
parameter.
|
||||
|
||||
- If the function has _no_ variadic parameter, it is an error if any arguments
|
||||
remain after taking one argument for each positional parameter.
|
||||
|
||||
After mapping each argument to a parameter, semantic checking proceeds
|
||||
for each argument:
|
||||
|
||||
- If the argument value corresponding to a parameter does not match the
|
||||
parameter's type specification, the call is erroneous.
|
||||
|
||||
- If the argument value corresponding to a parameter is null and the parameter
|
||||
is not specified as accepting nulls, the call is erroneous.
|
||||
|
||||
- If the argument value corresponding to a parameter is the dynamic value
|
||||
and the parameter is not specified as accepting values of the dynamic
|
||||
pseudo-type, the call is valid but its _result type_ is forced to be the
|
||||
dynamic pseudo-type.
|
||||
|
||||
- If neither of the above conditions holds for any argument, the call is
|
||||
valid and the function's value type definition is used to determine the
|
||||
call's _result type_. A function _may_ vary its result type depending on
|
||||
the argument _values_ as well as the argument _types_; for example, a
|
||||
function that decodes a JSON value will return a different result type
|
||||
depending on the data structure described by the given JSON source code.
|
||||
|
||||
If semantic checking succeeds without error, the call is _executed_:
|
||||
|
||||
- For each argument, if its value is unknown and its corresponding parameter
|
||||
is not specified as accepting unknowns, the _result value_ is forced to be an
|
||||
unknown value of the result type.
|
||||
|
||||
- If the previous condition does not apply, the function's result value
|
||||
definition is used to determine the call's _result value_.
|
||||
|
||||
The result of a function call expression is either an error, if one of the
|
||||
erroneous conditions above applies, or the _result value_.
|
||||
|
||||
## Type Conversions and Unification
|
||||
|
||||
Values given in configuration may not always match the expectations of the
|
||||
operations applied to them or to the calling application. In such situations,
|
||||
automatic type conversion is attempted as a convenience to the user.
|
||||
|
||||
Along with conversions to a _specified_ type, it is sometimes necessary to
|
||||
ensure that a selection of values are all of the _same_ type, without any
|
||||
constraint on which type that is. This is the process of _type unification_,
|
||||
which attempts to find the most general type that all of the given types can
|
||||
be converted to.
|
||||
|
||||
Both type conversions and unification are defined in the syntax-agnostic
|
||||
model to ensure consistency of behavior between syntaxes.
|
||||
|
||||
Type conversions are broadly characterized into two categories: _safe_ and
|
||||
_unsafe_. A conversion is "safe" if any distinct value of the source type
|
||||
has a corresponding distinct value in the target type. A conversion is
|
||||
"unsafe" if either the target type values are _not_ distinct (information
|
||||
may be lost in conversion) or if some values of the source type do not have
|
||||
any corresponding value in the target type. An unsafe conversion may result
|
||||
in an error.
|
||||
|
||||
A given type can always be converted to itself, which is a no-op.
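
A non-normative sketch using go-cty's `convert` package (which the Go
implementation uses for these rules) to show a safe conversion, a successful
unsafe conversion, and a failing unsafe conversion:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func main() {
	// Safe: every number has a distinct string representation.
	s, err := convert.Convert(cty.NumberFloatVal(1.5), cty.String)
	fmt.Println(s.AsString(), err) // 1.5 <nil>

	// Unsafe but successful: this string happens to describe a number.
	n, err := convert.Convert(cty.StringVal("12"), cty.Number)
	fmt.Println(n.AsBigFloat().String(), err) // 12 <nil>

	// Unsafe and failing: not every string corresponds to a number.
	_, err = convert.Convert(cty.StringVal("not a number"), cty.Number)
	fmt.Println(err != nil) // true
}
```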
|
||||
|
||||
### Conversion of Null Values
|
||||
|
||||
All null values are safely convertable to a null value of any other type,
|
||||
regardless of other type-specific rules specified in the sections below.
|
||||
|
||||
### Conversion to and from the Dynamic Pseudo-type
|
||||
|
||||
Conversion _from_ the dynamic pseudo-type _to_ any other type always succeeds,
|
||||
producing an unknown value of the target type.
|
||||
|
||||
Conversion of any value _to_ the dynamic pseudo-type is a no-op. The result
|
||||
is the input value, verbatim. This is the only situation where the conversion
|
||||
result value is not of the given target type.
|
||||
|
||||
### Primitive Type Conversions
|
||||
|
||||
Bidirectional conversions are available between the string and number types,
|
||||
and between the string and boolean types.
|
||||
|
||||
The bool value true corresponds to the string containing the characters "true",
|
||||
while the bool value false corresponds to the string containing the characters
|
||||
"false". Conversion from bool to string is safe, while the converse is
|
||||
unsafe. The strings "1" and "0" are alternative string representations
|
||||
of true and false respectively. It is an error to convert a string other than
|
||||
the four in this paragraph to type bool.
|
||||
|
||||
A number value is converted to string by translating its integer portion
|
||||
into a sequence of decimal digits (`0` through `9`), and then if it has a
|
||||
non-zero fractional part, a period `.` followed by a sequence of decimal
|
||||
digits representing its fractional part. No exponent portion is included.
|
||||
The number is converted at its full precision. Conversion from number to
|
||||
string is safe.
|
||||
|
||||
A string is converted to a number value by reversing the above mapping.
|
||||
No exponent portion is allowed. Conversion from string to number is unsafe.
|
||||
It is an error to convert a string that does not comply with the expected
|
||||
syntax to type number.
|
||||
|
||||
No direct conversion is available between the bool and number types.
|
||||
|
||||
### Collection and Structural Type Conversions
|
||||
|
||||
Conversion from set types to list types is _safe_, as long as their
|
||||
element types are safely convertable. If the element types are _unsafely_
|
||||
convertable, then the collection conversion is also unsafe. Each set element
|
||||
becomes a corresponding list element, in an undefined order. Although no
|
||||
particular ordering is required, implementations _should_ produce list
|
||||
elements in a consistent order for a given input set, as a convenience
|
||||
to calling applications.
|
||||
|
||||
Conversion from list types to set types is _unsafe_, as long as their element
|
||||
types are convertable. Each distinct list item becomes a distinct set item.
|
||||
If two list items are equal, one of the two is lost in the conversion.
|
||||
|
||||
Conversion from tuple types to list types is permitted if all of the
|
||||
tuple element types are convertable to the target list element type.
|
||||
The safety of the conversion depends on the safety of each of the element
|
||||
conversions. Each element in turn is converted to the list element type,
|
||||
producing a list of identical length.
|
||||
|
||||
Conversion from tuple types to set types is permitted, behaving as if the
|
||||
tuple type was first converted to a list of the same element type and then
|
||||
that list converted to the target set type.
|
||||
|
||||
Conversion from object types to map types is permitted if all of the object
|
||||
attribute types are convertable to the target map element type. The safety
|
||||
of the conversion depends on the safety of each of the attribute conversions.
|
||||
Each attribute in turn is converted to the map element type, and map element
|
||||
keys are set to the name of each corresponding object attribute.
|
||||
|
||||
Conversion from list and set types to tuple types is permitted, following
|
||||
the opposite steps as the converse conversions. Such conversions are _unsafe_.
|
||||
It is an error to convert a list or set to a tuple type whose number of
|
||||
elements does not match the list or set length.
|
||||
|
||||
Conversion from map types to object types is permitted if each map key
|
||||
corresponds to an attribute in the target object type. It is an error to
|
||||
convert from a map value whose set of keys does not exactly match the target
|
||||
type's attributes. The conversion takes the opposite steps of the converse
|
||||
conversion.
|
||||
|
||||
Conversion from one object type to another is permitted as long as the
|
||||
common attribute names have convertable types. Any attribute present in the
|
||||
target type but not in the source type is populated with a null value of
|
||||
the appropriate type.
|
||||
|
||||
Conversion from one tuple type to another is permitted as long as the
|
||||
tuples have the same length and the elements have convertable types.
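
A non-normative go-cty `convert` sketch of the tuple-to-list and object-to-map
conversions described above:

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func main() {
	// A tuple of (string, string) converts to list of string because every
	// element type is convertable to the list element type.
	tup := cty.TupleVal([]cty.Value{cty.StringVal("a"), cty.StringVal("b")})
	list, err := convert.Convert(tup, cty.List(cty.String))
	fmt.Println(list.LengthInt(), err) // 2 <nil>

	// An object converts to map of string; attribute names become map keys.
	obj := cty.ObjectVal(map[string]cty.Value{
		"x": cty.StringVal("1"),
		"y": cty.StringVal("2"),
	})
	m, err := convert.Convert(obj, cty.Map(cty.String))
	fmt.Println(m.Type().IsMapType(), err) // true <nil>
}
```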
|
||||
|
||||
### Type Unification
|
||||
|
||||
Type unification is an operation that takes a list of types and attempts
|
||||
to find a single type to which they can all be converted. Since some
|
||||
type pairs have bidirectional conversions, preference is given to _safe_
|
||||
conversions. In technical terms, all possible types are arranged into
|
||||
a lattice, from which a most general supertype is selected where possible.
|
||||
|
||||
The type resulting from type unification may be one of the input types, or
|
||||
it may be an entirely new type produced by combination of two or more
|
||||
input types.
|
||||
|
||||
The following rules do not guarantee a valid result. In addition to these
|
||||
rules, unification fails if any of the given types are not convertable
|
||||
(per the above rules) to the selected result type.
|
||||
|
||||
The following unification rules apply transitively. That is, if a rule is
|
||||
defined from A to B, and one from B to C, then A can unify to C.
|
||||
|
||||
Number and bool types both unify with string by preferring string.
|
||||
|
||||
Two collection types of the same kind unify according to the unification
|
||||
of their element types.
|
||||
|
||||
List and set types unify by preferring the list type.
|
||||
|
||||
Map and object types unify by preferring the object type.
|
||||
|
||||
List, set and tuple types unify by preferring the tuple type.
|
||||
|
||||
The dynamic pseudo-type unifies with any other type by selecting that other
|
||||
type. The dynamic pseudo-type is the result type only if _all_ input types
|
||||
are the dynamic pseudo-type.
|
||||
|
||||
Two object types unify by constructing a new type whose attributes are
|
||||
the union of those of the two input types. Any common attributes themselves
|
||||
have their types unified.
|
||||
|
||||
Two tuple types of the same length unify by constructing a new type of the
|
||||
same length whose elements are the unification of the corresponding elements
|
||||
in the two input types.
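
A non-normative sketch of these rules using go-cty's `convert.Unify`, which
returns the unified type along with one conversion per input (nil where no
conversion is needed):

```go
package main

import (
	"fmt"

	"github.com/zclconf/go-cty/cty"
	"github.com/zclconf/go-cty/cty/convert"
)

func main() {
	// Number and string unify by preferring string, because the
	// number-to-string conversion is safe while the reverse is not.
	ty, convs := convert.Unify([]cty.Type{cty.Number, cty.String})
	fmt.Println(ty.FriendlyName()) // string

	// Apply the returned conversion for the number input to bring it to
	// the unified type.
	n, _ := convs[0](cty.NumberIntVal(5))
	fmt.Println(n.AsString()) // 5

	// Bool also unifies with string by preferring string.
	ty2, _ := convert.Unify([]cty.Type{cty.Bool, cty.String})
	fmt.Println(ty2.FriendlyName()) // string
}
```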
|
||||
|
||||
## Static Analysis
|
||||
|
||||
In most applications, full expression evaluation is sufficient for understanding
|
||||
the provided configuration. However, some specialized applications require more
|
||||
direct access to the physical structures in the expressions, which can for
|
||||
example allow the construction of new language constructs in terms of the
|
||||
existing syntax elements.
|
||||
|
||||
Since static analysis analyzes the physical structure of configuration, the
|
||||
details will vary depending on syntax. Each syntax must decide which of its
|
||||
physical structures corresponds to the following analyses, producing error
|
||||
diagnostics if they are applied to inappropriate expressions.
|
||||
|
||||
The following are the required static analysis functions:
|
||||
|
||||
- **Static List**: Require list/tuple construction syntax to be used and
|
||||
return a list of expressions for each of the elements given.
|
||||
|
||||
- **Static Map**: Require map/object construction syntax to be used and
|
||||
return a list of key/value pairs -- both expressions -- for each of
|
||||
the elements given. The usual constraint that a map key must be a string
|
||||
must not apply to this analysis, thus allowing applications to interpret
|
||||
arbitrary keys as they see fit.
|
||||
|
||||
- **Static Call**: Require function call syntax to be used and return an
|
||||
object describing the called function name and a list of expressions
|
||||
representing each of the call arguments.
|
||||
|
||||
- **Static Traversal**: Require a reference to a symbol in the variable
|
||||
scope and return a description of the path from the root scope to the
|
||||
accessed attribute or index.
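
As a non-normative illustration, the Go implementation in this module exposes
the static call and static traversal analyses as `hcl.ExprCall` and
`hcl.AbsTraversalForExpr` (the latter appears later in this change), with
`hcl.ExprList` and `hcl.ExprMap` covering the other two. A minimal sketch of
the analyses listed above, assuming the `hclsyntax` parser from the same
module and using illustrative input strings:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/hcl/v2"
	"github.com/hashicorp/hcl/v2/hclsyntax"
)

func main() {
	pos := hcl.Pos{Line: 1, Column: 1}

	// Static Call: require function call syntax and inspect the call
	// without evaluating its arguments.
	callExpr, _ := hclsyntax.ParseExpression([]byte(`format("x-%d", count)`), "example.hcl", pos)
	call, diags := hcl.ExprCall(callExpr)
	if !diags.HasErrors() {
		fmt.Println(call.Name, len(call.Arguments)) // format 2
	}

	// Static Traversal: require a variable reference and describe the path
	// from the root scope to the accessed attribute.
	refExpr, _ := hclsyntax.ParseExpression([]byte(`aws_instance.web.id`), "example.hcl", pos)
	traversal, diags := hcl.AbsTraversalForExpr(refExpr)
	if !diags.HasErrors() {
		fmt.Println(traversal.RootName(), len(traversal)) // aws_instance 3
	}
}
```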
|
||||
|
||||
The intent of a calling application using these features is to require a more
|
||||
rigid interpretation of the configuration than in expression evaluation.
|
||||
Syntax implementations should make use of the extra contextual information
|
||||
provided in order to make an intuitive mapping onto the constructs of the
|
||||
underlying syntax, possibly interpreting the expression slightly differently
|
||||
than it would be interpreted in normal evaluation.
|
||||
|
||||
Each syntax must define which of its expression elements each of the analyses
|
||||
above applies to, and how those analyses behave given those expression elements.
|
||||
|
||||
## Implementation Considerations
|
||||
|
||||
Implementations of this specification are free to adopt any strategy that
|
||||
produces behavior consistent with the specification. This non-normative
|
||||
section describes some possible implementation strategies that are consistent
|
||||
with the goals of this specification.
|
||||
|
||||
### Language-agnosticism
|
||||
|
||||
The language-agnosticism of this specification assumes that certain behaviors
|
||||
are implemented separately for each syntax:
|
||||
|
||||
- Matching of a body schema with the physical elements of a body in the
|
||||
source language, to determine correspondence between physical constructs
|
||||
and schema elements.
|
||||
|
||||
- Implementing the _dynamic attributes_ body processing mode by either
|
||||
interpreting all physical constructs as attributes or producing an error
|
||||
if non-attribute constructs are present.
|
||||
|
||||
- Providing an evaluation function for all possible expressions that produces
|
||||
a value given an evaluation context.
|
||||
|
||||
- Providing the static analysis functionality described above in a manner that
|
||||
makes sense within the convention of the syntax.
|
||||
|
||||
The suggested implementation strategy is to use an implementation language's
|
||||
closest concept to an _abstract type_, _virtual type_ or _interface type_
|
||||
to represent both Body and Expression. Each language-specific implementation
|
||||
can then provide an implementation of each of these types wrapping AST nodes
|
||||
or other physical constructs from the language parser.
|
40
vendor/github.com/hashicorp/hcl/v2/static_expr.go
generated
vendored
Normal file
@ -0,0 +1,40 @@
|
||||
package hcl
|
||||
|
||||
import (
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
type staticExpr struct {
|
||||
val cty.Value
|
||||
rng Range
|
||||
}
|
||||
|
||||
// StaticExpr returns an Expression that always evaluates to the given value.
|
||||
//
|
||||
// This is useful to substitute default values for expressions that are
|
||||
// not explicitly given in configuration and thus would otherwise have no
|
||||
// Expression to return.
|
||||
//
|
||||
// Since expressions are expected to have a source range, the caller must
|
||||
// provide one. Ideally this should be a real source range, but it can
|
||||
// be a synthetic one (with an empty-string filename) if no suitable range
|
||||
// is available.
|
||||
func StaticExpr(val cty.Value, rng Range) Expression {
|
||||
return staticExpr{val, rng}
|
||||
}
|
||||
|
||||
func (e staticExpr) Value(ctx *EvalContext) (cty.Value, Diagnostics) {
|
||||
return e.val, nil
|
||||
}
|
||||
|
||||
func (e staticExpr) Variables() []Traversal {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (e staticExpr) Range() Range {
|
||||
return e.rng
|
||||
}
|
||||
|
||||
func (e staticExpr) StartRange() Range {
|
||||
return e.rng
|
||||
}
|
151
vendor/github.com/hashicorp/hcl/v2/structure.go
generated
vendored
Normal file
@ -0,0 +1,151 @@
|
||||
package hcl
|
||||
|
||||
import (
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
// File is the top-level node that results from parsing an HCL file.
|
||||
type File struct {
|
||||
Body Body
|
||||
Bytes []byte
|
||||
|
||||
// Nav is used to integrate with the "hcled" editor integration package,
|
||||
// and with diagnostic information formatters. It is not for direct use
|
||||
// by a calling application.
|
||||
Nav interface{}
|
||||
}
|
||||
|
||||
// Block represents a nested block within a Body.
|
||||
type Block struct {
|
||||
Type string
|
||||
Labels []string
|
||||
Body Body
|
||||
|
||||
DefRange Range // Range that can be considered the "definition" for seeking in an editor
|
||||
TypeRange Range // Range for the block type declaration specifically.
|
||||
LabelRanges []Range // Ranges for the label values specifically.
|
||||
}
|
||||
|
||||
// Blocks is a sequence of Block.
|
||||
type Blocks []*Block
|
||||
|
||||
// Attributes is a set of attributes keyed by their names.
|
||||
type Attributes map[string]*Attribute
|
||||
|
||||
// Body is a container for attributes and blocks. It serves as the primary
|
||||
// unit of hierarchical structure within configuration.
|
||||
//
|
||||
// The content of a body cannot be meaningfully interpreted without a schema,
|
||||
// so Body represents the raw body content and has methods that allow the
|
||||
// content to be extracted in terms of a given schema.
|
||||
type Body interface {
|
||||
// Content verifies that the entire body content conforms to the given
|
||||
// schema and then returns it, and/or returns diagnostics. The returned
|
||||
// body content is valid if non-nil, regardless of whether Diagnostics
|
||||
// are provided, but diagnostics should still be eventually shown to
|
||||
// the user.
|
||||
Content(schema *BodySchema) (*BodyContent, Diagnostics)
|
||||
|
||||
// PartialContent is like Content except that it permits the configuration
|
||||
// to contain additional blocks or attributes not specified in the
|
||||
// schema. If any are present, the returned Body is non-nil and contains
|
||||
// the remaining items from the body that were not selected by the schema.
|
||||
PartialContent(schema *BodySchema) (*BodyContent, Body, Diagnostics)
|
||||
|
||||
// JustAttributes attempts to interpret all of the contents of the body
|
||||
// as attributes, allowing for the contents to be accessed without a priori
|
||||
// knowledge of the structure.
|
||||
//
|
||||
// The behavior of this method depends on the body's source language.
|
||||
// Some languages, like JSON, can't distinguish between attributes and
|
||||
// blocks without schema hints, but for languages that _can_, error
|
||||
// diagnostics will be generated if any blocks are present in the body.
|
||||
//
|
||||
// Diagnostics may be produced for other reasons too, such as duplicate
|
||||
// declarations of the same attribute.
|
||||
JustAttributes() (Attributes, Diagnostics)
|
||||
|
||||
// MissingItemRange returns a range that represents where a missing item
|
||||
// might hypothetically be inserted. This is used when producing
|
||||
// diagnostics about missing required attributes or blocks. Not all bodies
|
||||
// will have an obvious single insertion point, so the result here may
|
||||
// be rather arbitrary.
|
||||
MissingItemRange() Range
|
||||
}
|
||||
|
||||
// BodyContent is the result of applying a BodySchema to a Body.
|
||||
type BodyContent struct {
|
||||
Attributes Attributes
|
||||
Blocks Blocks
|
||||
|
||||
MissingItemRange Range
|
||||
}
|
||||
|
||||
// Attribute represents an attribute from within a body.
|
||||
type Attribute struct {
|
||||
Name string
|
||||
Expr Expression
|
||||
|
||||
Range Range
|
||||
NameRange Range
|
||||
}
|
||||
|
||||
// Expression is a literal value or an expression provided in the
|
||||
// configuration, which can be evaluated within a scope to produce a value.
|
||||
type Expression interface {
|
||||
// Value returns the value resulting from evaluating the expression
|
||||
// in the given evaluation context.
|
||||
//
|
||||
// The context may be nil, in which case the expression may contain
|
||||
// only constants and diagnostics will be produced for any non-constant
|
||||
// sub-expressions. (The exact definition of this depends on the source
|
||||
// language.)
|
||||
//
|
||||
// The context may instead be set but have either its Variables or
|
||||
// Functions maps set to nil, in which case only use of these features
|
||||
// will return diagnostics.
|
||||
//
|
||||
// Different diagnostics are provided depending on whether the given
|
||||
// context maps are nil or empty. In the former case, the message
|
||||
// tells the user that variables/functions are not permitted at all,
|
||||
// while in the latter case usage will produce a "not found" error for
|
||||
// the specific symbol in question.
|
||||
Value(ctx *EvalContext) (cty.Value, Diagnostics)
|
||||
|
||||
// Variables returns a list of variables referenced in the receiving
|
||||
// expression. These are expressed as absolute Traversals, so may include
|
||||
// additional information about how the variable is used, such as
|
||||
// attribute lookups, which the calling application can potentially use
|
||||
// to only selectively populate the scope.
|
||||
Variables() []Traversal
|
||||
|
||||
Range() Range
|
||||
StartRange() Range
|
||||
}
|
||||
|
||||
// OfType filters the receiving block sequence by block type name,
|
||||
// returning a new block sequence including only the blocks of the
|
||||
// requested type.
|
||||
func (els Blocks) OfType(typeName string) Blocks {
|
||||
ret := make(Blocks, 0)
|
||||
for _, el := range els {
|
||||
if el.Type == typeName {
|
||||
ret = append(ret, el)
|
||||
}
|
||||
}
|
||||
return ret
|
||||
}
|
||||
|
||||
// ByType transforms the receiving block sequence into a map from type
|
||||
// name to block sequences of only that type.
|
||||
func (els Blocks) ByType() map[string]Blocks {
|
||||
ret := make(map[string]Blocks)
|
||||
for _, el := range els {
|
||||
ty := el.Type
|
||||
if ret[ty] == nil {
|
||||
ret[ty] = make(Blocks, 0, 1)
|
||||
}
|
||||
ret[ty] = append(ret[ty], el)
|
||||
}
|
||||
return ret
|
||||
}
|
117
vendor/github.com/hashicorp/hcl/v2/structure_at_pos.go
generated
vendored
Normal file
@ -0,0 +1,117 @@
|
||||
package hcl
|
||||
|
||||
// -----------------------------------------------------------------------------
|
||||
// The methods in this file all have the general pattern of making a best-effort
|
||||
// to find one or more constructs that contain a given source position.
|
||||
//
|
||||
// These all operate by delegating to an optional method of the same name and
|
||||
// signature on the file's root body, allowing each syntax to potentially
|
||||
// provide its own implementations of these. For syntaxes that don't implement
|
||||
// them, the result is always nil.
|
||||
// -----------------------------------------------------------------------------
|
||||
|
||||
// BlocksAtPos attempts to find all of the blocks that contain the given
|
||||
// position, ordered so that the outermost block is first and the innermost
|
||||
// block is last. This is a best-effort method that may not be able to produce
|
||||
// a complete result for all positions or for all HCL syntaxes.
|
||||
//
|
||||
// If the returned slice is non-empty, the first element is guaranteed to
|
||||
// represent the same block as would be the result of OutermostBlockAtPos and
|
||||
// the last element the result of InnermostBlockAtPos. However, the
|
||||
// implementation may return two different objects describing the same block,
|
||||
// so comparison by pointer identity is not possible.
|
||||
//
|
||||
// The result is nil if no blocks at all contain the given position.
|
||||
func (f *File) BlocksAtPos(pos Pos) []*Block {
|
||||
// The root body of the file must implement this interface in order
|
||||
// to support BlocksAtPos.
|
||||
type Interface interface {
|
||||
BlocksAtPos(pos Pos) []*Block
|
||||
}
|
||||
|
||||
impl, ok := f.Body.(Interface)
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return impl.BlocksAtPos(pos)
|
||||
}
|
||||
|
||||
// OutermostBlockAtPos attempts to find a top-level block in the receiving file
|
||||
// that contains the given position. This is a best-effort method that may not
|
||||
// be able to produce a result for all positions or for all HCL syntaxes.
|
||||
//
|
||||
// The result is nil if no single block could be selected for any reason.
|
||||
func (f *File) OutermostBlockAtPos(pos Pos) *Block {
|
||||
// The root body of the file must implement this interface in order
|
||||
// to support OutermostBlockAtPos.
|
||||
type Interface interface {
|
||||
OutermostBlockAtPos(pos Pos) *Block
|
||||
}
|
||||
|
||||
impl, ok := f.Body.(Interface)
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return impl.OutermostBlockAtPos(pos)
|
||||
}
|
||||
|
||||
// InnermostBlockAtPos attempts to find the most deeply-nested block in the
|
||||
// receiving file that contains the given position. This is a best-effort
|
||||
// method that may not be able to produce a result for all positions or for
|
||||
// all HCL syntaxes.
|
||||
//
|
||||
// The result is nil if no single block could be selected for any reason.
|
||||
func (f *File) InnermostBlockAtPos(pos Pos) *Block {
|
||||
// The root body of the file must implement this interface in order
|
||||
// to support InnermostBlockAtPos.
|
||||
type Interface interface {
|
||||
InnermostBlockAtPos(pos Pos) *Block
|
||||
}
|
||||
|
||||
impl, ok := f.Body.(Interface)
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return impl.InnermostBlockAtPos(pos)
|
||||
}
|
||||
|
||||
// OutermostExprAtPos attempts to find an expression in the receiving file
|
||||
// that contains the given position. This is a best-effort method that may not
|
||||
// be able to produce a result for all positions or for all HCL syntaxes.
|
||||
//
|
||||
// Since expressions are often nested inside one another, this method returns
|
||||
// the outermost "root" expression that is not contained by any other.
|
||||
//
|
||||
// The result is nil if no single expression could be selected for any reason.
|
||||
func (f *File) OutermostExprAtPos(pos Pos) Expression {
|
||||
// The root body of the file must implement this interface in order
|
||||
// to support OutermostExprAtPos.
|
||||
type Interface interface {
|
||||
OutermostExprAtPos(pos Pos) Expression
|
||||
}
|
||||
|
||||
impl, ok := f.Body.(Interface)
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return impl.OutermostExprAtPos(pos)
|
||||
}
|
||||
|
||||
// AttributeAtPos attempts to find an attribute definition in the receiving
|
||||
// file that contains the given position. This is a best-effort method that may
|
||||
// not be able to produce a result for all positions or for all HCL syntaxes.
|
||||
//
|
||||
// The result is nil if no single attribute could be selected for any reason.
|
||||
func (f *File) AttributeAtPos(pos Pos) *Attribute {
|
||||
// The root body of the file must implement this interface in order
|
||||
// to support AttributeAtPos.
|
||||
type Interface interface {
|
||||
AttributeAtPos(pos Pos) *Attribute
|
||||
}
|
||||
|
||||
impl, ok := f.Body.(Interface)
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
return impl.AttributeAtPos(pos)
|
||||
}
|
293
vendor/github.com/hashicorp/hcl/v2/traversal.go
generated
vendored
Normal file
@ -0,0 +1,293 @@
|
||||
package hcl
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/zclconf/go-cty/cty"
|
||||
)
|
||||
|
||||
// A Traversal is a description of traversing through a value through a
|
||||
// series of operations such as attribute lookup, index lookup, etc.
|
||||
//
|
||||
// It is used to look up values in scopes, for example.
|
||||
//
|
||||
// The traversal operations are implementations of interface Traverser.
|
||||
// This is a closed set of implementations, so the interface cannot be
|
||||
// implemented from outside this package.
|
||||
//
|
||||
// A traversal can be absolute (its first value is a symbol name) or relative
|
||||
// (starts from an existing value).
|
||||
type Traversal []Traverser
|
||||
|
||||
// TraversalJoin appends a relative traversal to an absolute traversal to
|
||||
// produce a new absolute traversal.
|
||||
func TraversalJoin(abs Traversal, rel Traversal) Traversal {
|
||||
if abs.IsRelative() {
|
||||
panic("first argument to TraversalJoin must be absolute")
|
||||
}
|
||||
if !rel.IsRelative() {
|
||||
panic("second argument to TraversalJoin must be relative")
|
||||
}
|
||||
|
||||
ret := make(Traversal, len(abs)+len(rel))
|
||||
copy(ret, abs)
|
||||
copy(ret[len(abs):], rel)
|
||||
return ret
|
||||
}
|
||||
|
||||
// TraverseRel applies the receiving traversal to the given value, returning
|
||||
// the resulting value. This is supported only for relative traversals,
|
||||
// and will panic if applied to an absolute traversal.
|
||||
func (t Traversal) TraverseRel(val cty.Value) (cty.Value, Diagnostics) {
|
||||
if !t.IsRelative() {
|
||||
panic("can't use TraverseRel on an absolute traversal")
|
||||
}
|
||||
|
||||
current := val
|
||||
var diags Diagnostics
|
||||
for _, tr := range t {
|
||||
var newDiags Diagnostics
|
||||
current, newDiags = tr.TraversalStep(current)
|
||||
diags = append(diags, newDiags...)
|
||||
if newDiags.HasErrors() {
|
||||
return cty.DynamicVal, diags
|
||||
}
|
||||
}
|
||||
return current, diags
|
||||
}
|
||||
|
||||
// TraverseAbs applies the receiving traversal to the given eval context,
|
||||
// returning the resulting value. This is supported only for absolute
|
||||
// traversals, and will panic if applied to a relative traversal.
|
||||
func (t Traversal) TraverseAbs(ctx *EvalContext) (cty.Value, Diagnostics) {
|
||||
if t.IsRelative() {
|
||||
panic("can't use TraverseAbs on a relative traversal")
|
||||
}
|
||||
|
||||
split := t.SimpleSplit()
|
||||
root := split.Abs[0].(TraverseRoot)
|
||||
name := root.Name
|
||||
|
||||
thisCtx := ctx
|
||||
hasNonNil := false
|
||||
for thisCtx != nil {
|
||||
if thisCtx.Variables == nil {
|
||||
thisCtx = thisCtx.parent
|
||||
continue
|
||||
}
|
||||
hasNonNil = true
|
||||
val, exists := thisCtx.Variables[name]
|
||||
if exists {
|
||||
return split.Rel.TraverseRel(val)
|
||||
}
|
||||
thisCtx = thisCtx.parent
|
||||
}
|
||||
|
||||
if !hasNonNil {
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Variables not allowed",
|
||||
Detail: "Variables may not be used here.",
|
||||
Subject: &root.SrcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
suggestions := make([]string, 0, len(ctx.Variables))
|
||||
thisCtx = ctx
|
||||
for thisCtx != nil {
|
||||
for k := range thisCtx.Variables {
|
||||
suggestions = append(suggestions, k)
|
||||
}
|
||||
thisCtx = thisCtx.parent
|
||||
}
|
||||
suggestion := nameSuggestion(name, suggestions)
|
||||
if suggestion != "" {
|
||||
suggestion = fmt.Sprintf(" Did you mean %q?", suggestion)
|
||||
}
|
||||
|
||||
return cty.DynamicVal, Diagnostics{
|
||||
{
|
||||
Severity: DiagError,
|
||||
Summary: "Unknown variable",
|
||||
Detail: fmt.Sprintf("There is no variable named %q.%s", name, suggestion),
|
||||
Subject: &root.SrcRange,
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// IsRelative returns true if the receiver is a relative traversal, or false
|
||||
// otherwise.
|
||||
func (t Traversal) IsRelative() bool {
|
||||
if len(t) == 0 {
|
||||
return true
|
||||
}
|
||||
if _, firstIsRoot := t[0].(TraverseRoot); firstIsRoot {
|
||||
return false
|
||||
}
|
||||
return true
|
||||
}
|
||||
|
||||
// SimpleSplit returns a TraversalSplit where the name lookup is the absolute
|
||||
// part and the remainder is the relative part. Supported only for
|
||||
// absolute traversals, and will panic if applied to a relative traversal.
|
||||
//
|
||||
// This can be used by applications that have a relatively-simple variable
|
||||
// namespace where only the top-level is directly populated in the scope, with
|
||||
// everything else handled by relative lookups from those initial values.
|
||||
func (t Traversal) SimpleSplit() TraversalSplit {
|
||||
if t.IsRelative() {
|
||||
panic("can't use SimpleSplit on a relative traversal")
|
||||
}
|
||||
return TraversalSplit{
|
||||
Abs: t[0:1],
|
||||
Rel: t[1:],
|
||||
}
|
||||
}
|
||||
|
||||
// RootName returns the root name for an absolute traversal. Will panic if
|
||||
// called on a relative traversal.
|
||||
func (t Traversal) RootName() string {
|
||||
if t.IsRelative() {
|
||||
panic("can't use RootName on a relative traversal")
|
||||
|
||||
}
|
||||
return t[0].(TraverseRoot).Name
|
||||
}
|
||||
|
||||
// SourceRange returns the source range for the traversal.
|
||||
func (t Traversal) SourceRange() Range {
|
||||
if len(t) == 0 {
|
||||
// Nothing useful to return here, but we'll return something
|
||||
// that's correctly-typed at least.
|
||||
return Range{}
|
||||
}
|
||||
|
||||
return RangeBetween(t[0].SourceRange(), t[len(t)-1].SourceRange())
|
||||
}
|
||||
|
||||
// TraversalSplit represents a pair of traversals, the first of which is
|
||||
// an absolute traversal and the second of which is relative to the first.
|
||||
//
|
||||
// This is used by calling applications that only populate prefixes of the
|
||||
// traversals in the scope, with Abs representing the part coming from the
|
||||
// scope and Rel representing the remaining steps once that part is
|
||||
// retrieved.
|
||||
type TraversalSplit struct {
|
||||
Abs Traversal
|
||||
Rel Traversal
|
||||
}
|
||||
|
||||
// TraverseAbs traverses from a scope to the value resulting from the
|
||||
// absolute traversal.
|
||||
func (t TraversalSplit) TraverseAbs(ctx *EvalContext) (cty.Value, Diagnostics) {
|
||||
return t.Abs.TraverseAbs(ctx)
|
||||
}
|
||||
|
||||
// TraverseRel traverses from a given value, assumed to be the result of
|
||||
// TraverseAbs on some scope, to a final result for the entire split traversal.
|
||||
func (t TraversalSplit) TraverseRel(val cty.Value) (cty.Value, Diagnostics) {
|
||||
return t.Rel.TraverseRel(val)
|
||||
}
|
||||
|
||||
// Traverse is a convenience function to apply TraverseAbs followed by
|
||||
// TraverseRel.
|
||||
func (t TraversalSplit) Traverse(ctx *EvalContext) (cty.Value, Diagnostics) {
|
||||
v1, diags := t.TraverseAbs(ctx)
|
||||
if diags.HasErrors() {
|
||||
return cty.DynamicVal, diags
|
||||
}
|
||||
v2, newDiags := t.TraverseRel(v1)
|
||||
diags = append(diags, newDiags...)
|
||||
return v2, diags
|
||||
}
|
||||
|
||||
// Join concatenates together the Abs and Rel parts to produce a single
|
||||
// absolute traversal.
|
||||
func (t TraversalSplit) Join() Traversal {
|
||||
return TraversalJoin(t.Abs, t.Rel)
|
||||
}
|
||||
|
||||
// RootName returns the root name for the absolute part of the split.
|
||||
func (t TraversalSplit) RootName() string {
|
||||
return t.Abs.RootName()
|
||||
}
|
||||
|
||||
// A Traverser is a step within a Traversal.
|
||||
type Traverser interface {
|
||||
TraversalStep(cty.Value) (cty.Value, Diagnostics)
|
||||
SourceRange() Range
|
||||
isTraverserSigil() isTraverser
|
||||
}
|
||||
|
||||
// Embed this in a struct to declare it as a Traverser
|
||||
type isTraverser struct {
|
||||
}
|
||||
|
||||
func (tr isTraverser) isTraverserSigil() isTraverser {
|
||||
return isTraverser{}
|
||||
}
|
||||
|
||||
// TraverseRoot looks up a root name in a scope. It is used as the first step
|
||||
// of an absolute Traversal, and cannot itself be traversed directly.
|
||||
type TraverseRoot struct {
|
||||
isTraverser
|
||||
Name string
|
||||
SrcRange Range
|
||||
}
|
||||
|
||||
// TraversalStep on a TraverseRoot immediately panics, because absolute
|
||||
// traversals cannot be directly traversed.
|
||||
func (tn TraverseRoot) TraversalStep(cty.Value) (cty.Value, Diagnostics) {
|
||||
panic("Cannot traverse an absolute traversal")
|
||||
}
|
||||
|
||||
func (tn TraverseRoot) SourceRange() Range {
|
||||
return tn.SrcRange
|
||||
}
|
||||
|
||||
// TraverseAttr looks up an attribute in its initial value.
|
||||
type TraverseAttr struct {
|
||||
isTraverser
|
||||
Name string
|
||||
SrcRange Range
|
||||
}
|
||||
|
||||
func (tn TraverseAttr) TraversalStep(val cty.Value) (cty.Value, Diagnostics) {
|
||||
return GetAttr(val, tn.Name, &tn.SrcRange)
|
||||
}
|
||||
|
||||
func (tn TraverseAttr) SourceRange() Range {
|
||||
return tn.SrcRange
|
||||
}
|
||||
|
||||
// TraverseIndex applies the index operation to its initial value.
|
||||
type TraverseIndex struct {
|
||||
isTraverser
|
||||
Key cty.Value
|
||||
SrcRange Range
|
||||
}
|
||||
|
||||
func (tn TraverseIndex) TraversalStep(val cty.Value) (cty.Value, Diagnostics) {
|
||||
return Index(val, tn.Key, &tn.SrcRange)
|
||||
}
|
||||
|
||||
func (tn TraverseIndex) SourceRange() Range {
|
||||
return tn.SrcRange
|
||||
}
|
||||
|
||||
// TraverseSplat applies the splat operation to its initial value.
|
||||
type TraverseSplat struct {
|
||||
isTraverser
|
||||
Each Traversal
|
||||
SrcRange Range
|
||||
}
|
||||
|
||||
func (tn TraverseSplat) TraversalStep(val cty.Value) (cty.Value, Diagnostics) {
|
||||
panic("TraverseSplat not yet implemented")
|
||||
}
|
||||
|
||||
func (tn TraverseSplat) SourceRange() Range {
|
||||
return tn.SrcRange
|
||||
}
|
124
vendor/github.com/hashicorp/hcl/v2/traversal_for_expr.go
generated
vendored
Normal file
@ -0,0 +1,124 @@
|
||||
package hcl
|
||||
|
||||
// AbsTraversalForExpr attempts to interpret the given expression as
|
||||
// an absolute traversal, or returns error diagnostic(s) if that is
|
||||
// not possible for the given expression.
|
||||
//
|
||||
// A particular Expression implementation can support this function by
|
||||
// offering a method called AsTraversal that takes no arguments and
|
||||
// returns either a valid absolute traversal or nil to indicate that
|
||||
// no traversal is possible. Alternatively, an implementation can support
|
||||
// UnwrapExpression to delegate handling of this function to a wrapped
|
||||
// Expression object.
|
||||
//
|
||||
// In most cases the calling application is interested in the value
|
||||
// that results from an expression, but in rarer cases the application
|
||||
// needs to see the name of the variable and subsequent
|
||||
// attributes/indexes itself, for example to allow users to give references
|
||||
// to the variables themselves rather than to their values. An implementer
|
||||
// of this function should at least support attribute and index steps.
|
||||
func AbsTraversalForExpr(expr Expression) (Traversal, Diagnostics) {
|
||||
type asTraversal interface {
|
||||
AsTraversal() Traversal
|
||||
}
|
||||
|
||||
physExpr := UnwrapExpressionUntil(expr, func(expr Expression) bool {
|
||||
_, supported := expr.(asTraversal)
|
||||
return supported
|
||||
})
|
||||
|
||||
if asT, supported := physExpr.(asTraversal); supported {
|
||||
if traversal := asT.AsTraversal(); traversal != nil {
|
||||
return traversal, nil
|
||||
}
|
||||
}
|
||||
return nil, Diagnostics{
|
||||
&Diagnostic{
|
||||
Severity: DiagError,
|
||||
Summary: "Invalid expression",
|
||||
Detail: "A single static variable reference is required: only attribute access and indexing with constant keys. No calculations, function calls, template expressions, etc are allowed here.",
|
||||
Subject: expr.Range().Ptr(),
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// RelTraversalForExpr is similar to AbsTraversalForExpr but it returns
|
||||
// a relative traversal instead. Due to the nature of HCL expressions, the
|
||||
// first element of the returned traversal is always a TraverseAttr, and
|
||||
// then it will be followed by zero or more other expressions.
|
||||
//
|
||||
// Any expression accepted by AbsTraversalForExpr is also accepted by
|
||||
// RelTraversalForExpr.
|
||||
func RelTraversalForExpr(expr Expression) (Traversal, Diagnostics) {
|
||||
traversal, diags := AbsTraversalForExpr(expr)
|
||||
if len(traversal) > 0 {
|
||||
ret := make(Traversal, len(traversal))
|
||||
copy(ret, traversal)
|
||||
root := traversal[0].(TraverseRoot)
|
||||
ret[0] = TraverseAttr{
|
||||
Name: root.Name,
|
||||
SrcRange: root.SrcRange,
|
||||
}
|
||||
return ret, diags
|
||||
}
|
||||
return traversal, diags
|
||||
}
|
||||
|
||||
// ExprAsKeyword attempts to interpret the given expression as a static keyword,
|
||||
// returning the keyword string if possible, and the empty string if not.
|
||||
//
|
||||
// A static keyword, for the sake of this function, is a single identifier.
|
||||
// For example, the following attribute has an expression that would produce
|
||||
// the keyword "foo":
|
||||
//
|
||||
// example = foo
|
||||
//
|
||||
// This function is a variant of AbsTraversalForExpr, which uses the same
|
||||
// interface on the given expression. This helper constrains the result
|
||||
// further by requiring only a single root identifier.
|
||||
//
|
||||
// This function is intended to be used with the following idiom, to recognize
|
||||
// situations where one of a fixed set of keywords is required and arbitrary
|
||||
// expressions are not allowed:
|
||||
//
|
||||
// switch hcl.ExprAsKeyword(expr) {
|
||||
// case "allow":
|
||||
// // (take suitable action for keyword "allow")
|
||||
// case "deny":
|
||||
// // (take suitable action for keyword "deny")
|
||||
// default:
|
||||
// diags = append(diags, &hcl.Diagnostic{
|
||||
// // ... "invalid keyword" diagnostic message ...
|
||||
// })
|
||||
// }
|
||||
//
|
||||
// The above approach will generate the same message for both the use of an
|
||||
// unrecognized keyword and for not using a keyword at all, which is usually
|
||||
// reasonable if the message specifies that the given value must be a keyword
|
||||
// from that fixed list.
|
||||
//
|
||||
// Note that in the native syntax the keywords "true", "false", and "null" are
|
||||
// recognized as literal values during parsing and so these reserved words
|
||||
// cannot be accepted as keywords by this function.
|
||||
//
|
||||
// Since interpreting an expression as a keyword bypasses usual expression
|
||||
// evaluation, it should be used sparingly for situations where e.g. one of
|
||||
// a fixed set of keywords is used in a structural way in a special attribute
|
||||
// to affect the further processing of a block.
|
||||
func ExprAsKeyword(expr Expression) string {
|
||||
type asTraversal interface {
|
||||
AsTraversal() Traversal
|
||||
}
|
||||
|
||||
physExpr := UnwrapExpressionUntil(expr, func(expr Expression) bool {
|
||||
_, supported := expr.(asTraversal)
|
||||
return supported
|
||||
})
|
||||
|
||||
if asT, supported := physExpr.(asTraversal); supported {
|
||||
if traversal := asT.AsTraversal(); len(traversal) == 1 {
|
||||
return traversal.RootName()
|
||||
}
|
||||
}
|
||||
return ""
|
||||
}
|