Package monkit is a flexible code instrumenting and data collection library. I'm going to try and sell you as fast as I can on this library.

Example usage

We've got tools that capture distribution information (including quantiles) about int64, float64, and bool types. We have tools that capture data about events (we've got meters for deltas, rates, etc). We have rich tools for capturing information about tasks and functions, and literally anything that can generate a name and a number. Almost just as importantly, the amount of boilerplate and code you have to write to get these features is very minimal. Data that's hard to measure probably won't get measured.

This data can be collected and sent to Graphite (http://graphite.wikidot.com/) or any other time-series database. Here's a selection of live stats from one of our storage nodes:

This library generates call graphs of your live process for you. These call graphs aren't created through sampling. They're full pictures of all of the interesting functions you've annotated, along with quantile information about their successes, failures, how often they panic, return an error (if so instrumented), how many are currently running, etc. The data can be returned in dot format, in json, in text, and can be about just the functions that are currently executing, or all the functions the monitoring system has ever seen. Here's another example of one of our production nodes: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/callgraph2.png

This library generates trace graphs of your live process for you directly, without requiring standing up some tracing system such as Zipkin (though you can do that too). Inspired by Google's Dapper (http://research.google.com/pubs/pub36356.html) and Twitter's Zipkin (http://zipkin.io), we have process-internal trace graphs, triggerable by a number of different methods. You get this trace information for free whenever you use Go contexts (https://blog.golang.org/context) and function monitoring. The output formats are svg and json. Additionally, the library supports trace observation plugins, and we've written a plugin that sends this data to Zipkin (http://github.com/spacemonkeygo/monkit-zipkin). https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/trace.png

Before our crazy Go rewrite of everything (https://www.spacemonkey.com/blog/posts/go-space-monkey) (and before we had even seen Google's Dapper paper), we were a Python shop, and all of our "interesting" functions were decorated with a helper that collected timing information and sent it to Graphite. When we transliterated to Go, we wanted to preserve that functionality, so the first version of our monitoring package was born. Over time it started to get janky, especially as we found Zipkin and started adding tracing functionality to it. We rewrote all of our Go code to use Google contexts, and then realized we could get call graph information. We decided a refactor and then an all-out rethinking of our monitoring package was best, and so now we have this library.

Sometimes you really want callstack contextual information without having to pass arguments through everything on the call stack. In other languages, many people implement this with thread-local storage. Example: let's say you have written a big system that responds to user requests. All of your libraries log using your log library.
During initial development everything is easy to debug, since there's low user load, but now you've scaled and there's OVER TEN USERS and it's kind of hard to tell what log lines were caused by what. Wouldn't it be nice to add request ids to all of the log lines kicked off by that request? Then you could grep for all log lines caused by a specific request id. Geez, it would suck to have to pass all contextual debugging information through all of your callsites.

Google solved this problem by always passing a context.Context interface through from call to call. A Context is basically just a mapping of arbitrary keys to arbitrary values that users can add new values for. This way if you decide to add a request context, you can add it to your Context and then all callsites that descend from that place will have the new data in their contexts. It is admittedly very verbose to add contexts to every function call. Painfully so. I hope to write more about it in the future, but Google also wrote up their thoughts about it (https://blog.golang.org/context), which you can go read. For now, just swallow your disgust and let's keep moving.

Let's make a super simple Varnish (https://www.varnish-cache.org/) clone. Open up gedit! (Okay just kidding, open whatever text editor you want.) For this motivating program, we won't even add the caching, though there are comments for where to add it if you'd like. For now, let's just make a barebones system that will proxy HTTP requests. We'll call it VLite, but maybe we should call it VReallyLite. Run and build this and open localhost:8080 in your browser. If you use the default proxy target, it should inform you that the world hasn't been destroyed yet.

The first thing you'll want to do is add the small amount of boilerplate to make the instrumentation we're going to add to your process observable later. Import the basic monkit packages, and then register environmental statistics and kick off a goroutine in your main method to serve debug requests. Rebuild, and then check out localhost:9000/stats (or localhost:9000/stats/json, if you prefer) in your browser!

Remember what I said about Google's contexts (https://blog.golang.org/context)? It might seem a bit overkill for such a small project, but it's time to add them. To help out here, I've created a library that constructs contexts for you for incoming HTTP requests. Nothing that's about to happen requires my webhelp library (https://godoc.org/github.com/jtolds/webhelp), but here is the code now refactored to receive and pass contexts through our two per-request calls. You can create a new context for a request however you want. One reason to use something like webhelp is that the cancelation feature of Contexts is hooked up to the HTTP request getting canceled.

Let's start to get statistics about how many requests we receive! First, this package (main) will need to get a monitoring Scope. Add a global definition right after all your imports, much like you'd create a logger with many logging libraries. Now, make the error return value of HandleHTTP named (so, (err error)), and add a deferred task line as the very first instruction of HandleHTTP. Let's also add the same line (albeit modified for the lack of error) to Proxy, replacing &err with nil. You should now have something like the sketch below. We'll unpack what's going on here, but for now: for this new funcs dataset, if you want a graph, you can download a dot graph at localhost:9000/funcs/dot and json information from localhost:9000/funcs/json.
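The code snippets for these steps are elided above; the following is a minimal sketch of the whole wired-up program, assuming monkit v2's import paths and simplified handler wiring (the original text routes requests with webhelp instead):

	package main

	import (
		"context"
		"log"
		"net/http"

		monkit "gopkg.in/spacemonkeygo/monkit.v2"
		"gopkg.in/spacemonkeygo/monkit.v2/environment"
		"gopkg.in/spacemonkeygo/monkit.v2/present"
	)

	// Package-level monitoring scope, much like a package-level logger.
	var mon = monkit.Package()

	// HandleHTTP has a named error return so the deferred task can observe it.
	func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
		defer mon.Task()(&ctx)(&err)
		Proxy(ctx, w, r)
		return nil
	}

	// Proxy returns no error, so nil is passed in place of &err.
	func Proxy(ctx context.Context, w http.ResponseWriter, r *http.Request) {
		defer mon.Task()(&ctx)(nil)
		// ... proxy the request here (and add caching, if you like) ...
	}

	func main() {
		environment.Register(monkit.Default)
		go http.ListenAndServe("localhost:9000", present.HTTP(monkit.Default))

		// Simplified wiring; see the webhelp library for context plumbing
		// that also hooks up cancelation.
		http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			_ = HandleHTTP(r.Context(), w, r)
		})
		log.Fatal(http.ListenAndServe("localhost:8080", nil))
	}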
You should see something like: with a similar report for the Proxy method, or a graph like: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/handlehttp.png

This data reports the overall callgraph of execution for known traces, along with how many of each function are currently running, the most running concurrently (the highwater), how many were successful along with quantile timing information, how many errors there were (with quantile timing information if applicable), and how many panics there were. Since the Proxy method isn't capturing a returned err value, and since HandleHTTP always returns nil, this example won't ever have failures. If you're wondering about the success count being higher than you expected, keep in mind your browser probably requested a favicon.ico. Cool, eh?

How it works

The deferred line, defer mon.Task()(&ctx)(&err), is an interesting line of code - there are three function calls. If you look at the Go spec, all of the function calls will run at the time the function starts except for the very last one.

The first function call, mon.Task(), creates or looks up a wrapper around a Func. You could get this yourself by requesting mon.Func() inside of the appropriate function, or mon.FuncNamed(). Both mon.Task() and mon.Func() inspect runtime.Caller to determine the name of the function. Because this is a heavy operation, you can actually store the result of mon.Task() and reuse it: instead of calling mon.Task() inside the deferred line every time, store the task in a package-level variable and defer that stored task instead, which is more performant every time after the first time - runtime.Caller only gets called once. Careful! Don't use the same myFuncMon in different functions unless you want to screw up your statistics!

The second function call starts all the various stop watches and bookkeeping to keep track of the function. It also mutates the context pointer it's given to extend the context with information about what current span (in Zipkin parlance) is active. Notably, you *can* pass nil for the context if you really don't want a context. You just lose callgraph information. The last function call stops all the stop watches and makes a note of any observed errors or panics (it repanics after observing them).

Turns out, we don't even need to change our program anymore to get rich tracing information! Open your browser and go to localhost:9000/trace/svg?regex=HandleHTTP. It won't load, and in fact, it's waiting for you to open another tab and refresh localhost:8080 again. Once you retrigger the actual application behavior, the trace regex will capture a trace starting on the first function that matches the supplied regex, and return an svg. Go back to your first tab, and you should see a relatively uninteresting but super promising svg.

Let's make the trace more interesting. Add a time.Sleep call to your HandleHTTP method, rebuild, and restart. Load localhost:8080, then start a new request to your trace URL, then reload localhost:8080 again. Flip back to your trace, and you should see that the Proxy method only takes a portion of the time of HandleHTTP! https://cdn.rawgit.com/spacemonkeygo/monkit/master/images/trace.svg

There are multiple ways to select a trace. You can select by regex using the preselect method (default), which first evaluates the regex on all known functions for sanity checking. Sometimes, however, the function you want to trace may not yet be known to monkit, in which case you'll want to turn preselection off. You may have a bad regex, or you may be in this case if you get the error "Bad Request: regex preselect matches 0 functions."
Another way to select a trace is by providing a trace id, which we'll get to next! Make sure to check out what the addition of the time.Sleep call did to the other reports. It's easy to write plugins for monkit! Check out our first one that exports data to Zipkin (http://zipkin.io/)'s Scribe API: https://github.com/spacemonkeygo/monkit-zipkin We plan to have more (for HTrace, OpenTracing, etc, etc), soon!
This package reads and writes pickled data. The format is the same as the Python "pickle" module. Protocols 0, 1, and 2 are implemented. These are the versions written by the Python 2.x series. Python 3 defines newer protocol versions, but can write the older protocol versions so they are readable by this package.

To read data, see stalecucumber.Unpickle. To write data, see stalecucumber.NewPickler.

Read a pickled string or unicode object

Read a pickled integer

Read a pickled list of numbers into a structure

Read a pickled dictionary into a structure

Pickle a struct

You can pickle recursive objects like so

Python's pickler is intelligent enough not to emit an infinite data structure when a recursive object is pickled. I recommend against pickling recursive objects in the first place, but this library handles unpickling them without a problem. The result of unpickling the above is a map[interface{}]interface{} with a key "a" that contains a reference to itself. Attempting to unpack the result of the above Python code into a structure with UnpackInto would either fail or recurse forever.

The Python Pickle module can pickle most Python objects. By default, some Python objects such as the set type and bytearray type are automatically supported by this library. To support unpickling custom Python objects, you need to implement a resolver. A resolver meets the PythonResolver interface, which is just this function

The module and name together identify the class. So if you have a class called "Foo" in the module "bar" the first argument would be "bar" and the second would be "Foo". You can pass in your custom resolver by calling

The third argument of the Resolve function is originally a Python tuple, so it is a slice of anything. For most user defined objects this is just a Python dictionary. However, if a Python object implements the __reduce__ method it could be anything. If your resolver can't identify the type named by module & name, just return stalecucumber.ErrUnresolvablePythonGlobal. Otherwise convert the args into whatever you want and return that as the value from the function with nil for the error.

To avoid reimplementing the same logic over and over, you can chain resolvers together. You can use your resolver in addition to the default resolver by doing the following

If the version of Python you are using supports protocol version 1 or 2, you should always specify that protocol version. By default the "pickle" and "cPickle" modules in Python write using protocol 0. Protocol 0 requires much more space to represent the same values and is much slower to parse.

The pickle format is incredibly flexible and as a result has some features that are impractical or unimportant when implementing a reader in another language. Each set of opcodes is listed below by protocol version, along with its impact.

Protocol 0

This opcode is used to reference concrete definitions of objects between a pickler and an unpickler by an ID number. The pickle protocol doesn't define what a persistent ID means. This opcode is unlikely to ever be supported by this package.

Protocol 1

This opcode is used in recreating pickled Python objects. That is currently not supported by this package. This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes. This opcode is equivalent to PERSID in protocol 0 and won't be supported for the same reason.

Protocol 2

This opcode is used in recreating pickled Python objects. That is currently not supported by this package.
This opcode will be supported in a future revision to this package that allows the unpickling of instances of Python classes.

These opcodes allow using a registry of popular objects that are pickled by name, typically classes. It is envisioned that through a global negotiation and registration process, third parties can set up a mapping between ints and object names. These opcodes are unlikely to ever be supported by this package.
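The example code referenced above is elided; the following is a minimal sketch of unpickling with this package, including a custom resolver. The Person struct, the FooResolver, and the hand-built pickle bytes are illustrative assumptions, not part of the library:

	package main

	import (
		"bytes"
		"fmt"

		"github.com/hydrogen18/stalecucumber"
	)

	// Person is a hypothetical target for a pickled dictionary
	// such as {"Name": "Ann", "Age": 23}.
	type Person struct {
		Name string
		Age  int64
	}

	// FooResolver is a hypothetical resolver for a Python class "bar.Foo".
	type FooResolver struct{}

	func (FooResolver) Resolve(module string, name string, args []interface{}) (interface{}, error) {
		if module == "bar" && name == "Foo" {
			// Convert args (originally a Python tuple) into whatever you want.
			return map[string]interface{}{"class": "Foo", "args": args}, nil
		}
		// Let another resolver in the chain (or the default one) try.
		return nil, stalecucumber.ErrUnresolvablePythonGlobal
	}

	func main() {
		// Protocol 2 pickle of the string "foo", i.e. pickle.dumps("foo", 2).
		s, err := stalecucumber.String(stalecucumber.Unpickle(bytes.NewReader([]byte("\x80\x02U\x03foo."))))
		fmt.Println(s, err) // foo <nil>

		// Protocol 2 pickle of the dictionary {"Name": "Ann", "Age": 23},
		// unpacked into a struct.
		dict := []byte("\x80\x02}(U\x04NameU\x03AnnU\x03AgeK\x17u.")
		var p Person
		err = stalecucumber.UnpackInto(&p).From(stalecucumber.Unpickle(bytes.NewReader(dict)))
		fmt.Println(p, err) // {Ann 23} <nil>
	}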
Package gdal provides a wrapper for GDAL, the Geospatial Data Abstraction Library. This C/C++ library provides access to a large number of geospatial raster data formats. It also contains a wrapper for the related OGR Simple Feature Library, which provides similar functionality for vector formats.

Some less frequently used functions are not yet implemented. The majority of these involve style tables, asynchronous I/O, and GCPs. The documentation is fairly limited, but the functionality fairly closely matches that of the C++ API. This wrapper has most recently been tested on Windows 7, using the MinGW32_x64 compiler and GDAL version 1.11.

A simple program to create a georeferenced blank 256x256 GeoTIFF:

More examples can be found in the ./examples subdirectory.
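The example program itself is elided above; a hedged sketch of what it might look like follows. The import path, geotransform values, and EPSG code are assumptions, and exact method signatures may vary between versions of the bindings:

	package main

	import (
		"log"

		"github.com/lukeroth/gdal" // assumed import path for this wrapper
	)

	func main() {
		driver, err := gdal.GetDriverByName("GTiff")
		if err != nil {
			log.Fatal(err)
		}

		// 256x256, a single band of bytes, no creation options.
		ds := driver.Create("blank.tif", 256, 256, 1, gdal.Byte, nil)
		defer ds.Close()

		// Georeference: origin, pixel size, no rotation (arbitrary values).
		ds.SetGeoTransform([6]float64{444720, 30, 0, 3751320, 0, -30})

		// Assign a projection via an EPSG code (26711 is just an example);
		// error handling elided for brevity.
		srs := gdal.CreateSpatialReference("")
		srs.FromEPSG(26711)
		wkt, _ := srs.ToWKT()
		ds.SetProjection(wkt)
	}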
Package xmlpath implements a strict subset of the XPath specification for the Go language. The XPath specification is available at: Path expressions supported by this package are in the following format, with all components being optional: At the moment, xmlpath is compatible with the XPath specification to the following extent: For example, assuming the following document: The following examples are valid path expressions, and the first match has the indicated value: To run an expression, compile it, and then apply the compiled path to any number of context nodes, from one or more parsed xml documents:
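For instance, a minimal sketch of that compile-and-apply flow, assuming the gopkg.in/xmlpath.v2 import path (the document and path here are illustrative):

	package main

	import (
		"bytes"
		"fmt"

		"gopkg.in/xmlpath.v2"
	)

	func main() {
		doc := []byte(`<library><book><isbn>0262033844</isbn><title>CLRS</title></book></library>`)

		// Compile the path once; apply it to any number of parsed documents.
		path := xmlpath.MustCompile("/library/book/isbn")
		root, err := xmlpath.Parse(bytes.NewReader(doc))
		if err != nil {
			panic(err)
		}
		if value, ok := path.String(root); ok {
			fmt.Println("found:", value)
		}
	}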
Package goreadme generates a readme markdown file from Go doc. The package can be used as a command line tool and as a Github action, described below.

Github actions can be configured to update the README file automatically every time it is needed. Below is an example in which, every time a new change is pushed to the main branch, the action is triggered, generates a new README file, and, if there is a change, commits and pushes it to the main branch. In pull requests that affect the README content, if the `GITHUB_TOKEN` is given, the action will post a comment on the pull request with the changes that will be made to the README file. To use this with Github actions, add the following content to `.github/workflows/goreadme.yml`. See ./action.yml for all available input options.

Use as a command line tool

Both Go doc and readme files are important. The Go doc is used by users of your library, and the README file welcomes users to your library. They share common content, which is usually duplicated from the doc to the readme or vice versa once the library is ready. The problem is that keeping documentation updated is important, and hard enough - keeping both updated is twice as hard.

The formatting of the README.md is done by the go doc parser. This makes the result README.md a bit more limited. Currently, `goreadme` supports the formatting as explained in (godoc page) https://blog.golang.org/godoc-documenting-go-code, or (here) https://pkg.go.dev/github.com/fluhus/godoc-tricks. Meaning:

* A header is a single line that is separated from a paragraph above.

* Code block is recognized by indentation as Go code.

* Inline code is marked with `backticks`.

* URLs will just automatically be converted to links: https://github.com/posener/goreadme

Additionally, the syntax was extended to include some more markdown features while keeping the Go doc readable:

* Bulleted and numbered lists are possible when each bullet item is followed by an empty line.

* Diff blocks are automatically detected when each line in a code block starts with a `' '`, `'-'` or `'+'`:

* A repository file can be linked when providing a path that starts with `./`: ./goreadme.go.

* A link can have a link text by prefixing it with parenthesised text: (goreadme page) https://github.com/posener/goreadme.

* A link to a repository file can also have a link text: (goreadme main file) ./goreadme.go.

* An image can be added by prefixing a link to an image with `(image/<image title>)`: (image/title of image) https://github.githubassets.com/images/icons/emoji/unicode/1f44c.png

goreadme is tested against the test cases in the ./testdata directory. It generates readme files for all the packages in that directory and asserts that the result readme matches the existing one. When modifying goreadme behavior, there is no need to manually change these readme files. It is possible to run `WRITE_READMES=1 go test ./...`, which regenerates them, and check that the changes match what is expected (optionally using `git diff`).
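As a quick illustration of these conventions, a doc comment like the sketch below (the package and all names in it are hypothetical) renders to README headers, code blocks, and links:

	// Package example demonstrates the doc conventions listed above.
	//
	// Headers Are Single Lines
	//
	// A line separated from the paragraph above it becomes a header.
	// Inline code is marked with `backticks`, URLs become links
	// automatically: https://github.com/posener/goreadme, a repository
	// file can be linked with a relative path: ./goreadme.go, and a link
	// can carry text: (goreadme page) https://github.com/posener/goreadme.
	//
	// Code blocks are recognized by indentation:
	//
	//	fmt.Println("this renders as a code block in the README")
	package example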
Package gcfg reads "INI-style" text-based configuration files with "name=value" pairs grouped into sections (gcfg files). This package is still a work in progress; see the sections below for planned changes.

The syntax is based on that used by git config: http://git-scm.com/docs/git-config#_syntax . There are some (planned) differences compared to the git config format:

The functions in this package read values into a user-defined struct. Each section corresponds to a struct field in the config struct, and each variable in a section corresponds to a data field in the section struct. The mapping of each section or variable name to fields is done either based on the "gcfg" struct tag or by matching the name of the section or variable, ignoring case. In the latter case, hyphens '-' in section and variable names correspond to underscores '_' in field names. Fields must be exported; to use a section or variable name starting with a letter that is neither upper- nor lower-case, prefix the field name with 'X'. (See https://code.google.com/p/go/issues/detail?id=5763#c4 .)

For sections with subsections, the corresponding field in config must be a map, rather than a struct, with string keys and pointer-to-struct values. Values for subsection variables are stored in the map with the subsection name used as the map key. (Note that unlike section and variable names, subsection names are case sensitive.) When using a map, and there is a section with the same section name but without a subsection name, its values are stored with the empty string used as the key. It is possible to provide default values for subsections in the section "default-<sectionname>" (or by setting values in the corresponding struct field "Default_<sectionname>").

The functions in this package panic if config is not a pointer to a struct, or when a field is not of a suitable type (either a struct or a map with string keys and pointer-to-struct values).

The section structs in the config struct may contain single-valued or multi-valued variables. Variables of unnamed slice type (that is, a type starting with `[]`) are treated as multi-valued; all others (including named slice types) are treated as single-valued variables.

Single-valued variables are handled based on the type as follows. Unnamed pointer types (that is, types starting with `*`) are dereferenced, and if necessary, a new instance is allocated. For types implementing the encoding.TextUnmarshaler interface, the UnmarshalText method is used to set the value. Implementing this method is the recommended way for parsing user-defined types. For fields of string kind, the value string is assigned to the field, after unquoting and unescaping as needed. For fields of bool kind, the field is set to true if the value is "true", "yes", "on" or "1", and set to false if the value is "false", "no", "off" or "0", ignoring case. In addition, single-valued bool fields can be specified with a "blank" value (variable name without equals sign and value); in such case the value is set to true.

Predefined integer types [u]int(|8|16|32|64) and big.Int are parsed as decimal or hexadecimal (if having '0x' prefix). (This is to prevent unintuitively handling zero-padded numbers as octal.) Other types having [u]int* as the underlying type, such as os.FileMode and uintptr, allow decimal, hexadecimal, or octal values.
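Pulling the mapping rules above together, a minimal usage sketch; the Config struct and the file content are illustrative assumptions:

	package main

	import (
		"fmt"

		"gopkg.in/gcfg.v1"
	)

	// Profile is the value type for a section with subsections.
	type Profile struct {
		Path string
	}

	// Config mirrors an INI-style file; names match sections and
	// variables, ignoring case.
	type Config struct {
		Section struct {
			Name    string
			Flag    bool
			Numbers []int // unnamed slice type: a multi-valued variable
		}
		// Map keys are the (case sensitive) subsection names.
		Profile map[string]*Profile
	}

	// "flag" has a blank value, which sets the bool field to true.
	const example = `
	[section]
	name = value
	flag
	numbers = 1
	numbers = 2

	[profile "dev"]
	path = /tmp
	`

	func main() {
		var cfg Config
		if err := gcfg.ReadStringInto(&cfg, example); err != nil {
			panic(err)
		}
		fmt.Println(cfg.Section.Name, cfg.Section.Flag, cfg.Section.Numbers, cfg.Profile["dev"].Path)
	}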
Parsing mode for integer types can be overridden using the struct tag option ",int=mode" where mode is a combination of the 'd', 'h', and 'o' characters (each standing for decimal, hexadecimal, and octal, respectively).

All other types are parsed using fmt.Sscanf with the "%v" verb.

For multi-valued variables, each individual value is parsed as above and appended to the slice. If the first value is specified as a "blank" value (variable name without equals sign and value), a new slice is allocated; that is, any values previously set in the slice will be ignored.

The types subpackage provides helpers for parsing "enum-like" and integer types.

There are 3 types of errors:

Programmer errors trigger panics. These should be fixed by the programmer before releasing code that uses gcfg.

Data errors cause gcfg to return a non-nil error value. This includes the case when there are extra unknown key-value definitions in the configuration data (extra data). However, on some occasions it is desirable to be able to proceed in situations when the only data error is that of extra data. These errors are handled at a different (warning) priority and can be filtered out programmatically. To ignore extra data warnings, wrap the gcfg.Read*Into invocation into a call to gcfg.FatalOnly.

The following is a list of changes under consideration:
Package ql implements a pure Go embedded SQL database engine. Builder results available at

QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).

2020-12-10: The database/sql driver now supports the url parameter removeemptywal=N, which has the same semantics as passing RemoveEmptyWAL = N != 0 to OpenFile options.

2020-11-09: Add IF NOT EXISTS support for the INSERT INTO statement. Add IsDuplicateUniqueIndexError function.

2018-11-04: Back end file format V2 is now released. To use the new format for newly created databases, set the FileFormat field in *Options passed to OpenFile to value 2, or use the driver named "ql2" instead of "ql".

- Both the old and new drivers will properly open and use, read and write, the old (V1) or new (V2) file format of an existing database.

- V1 format has a record size limit of ~64 kB. V2 format record size limit is math.MaxInt32.

- V1 format uncommitted transaction size is limited by memory resources. V2 format uncommitted transaction is limited by free disk space.

- A direct consequence of the previous is that small transactions perform better using V1 format and big transactions perform better using V2 format.

- V2 format uses substantially less memory.

2018-08-02: Release v1.2.0 adds initial support for Go modules.

2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom.

2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation.

2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (modernc.org/lldb).

2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.

2016-04-03: Release v1.0.3 fixes a data race.

2016-03-23: Release v1.0.2 vendors gitlab.com/cznic/exp/lldb and github.com/camlistore/go4/lock.

2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.

2016-03-05: The current version has been tagged v1.0.0.

2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.

2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before, plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste-ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`.

2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful while having a UNIQUE clause to check for semantically duplicate rows before they get added to the table, or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.
2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default.

2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN.

2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom)

2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section.

2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries, as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".

2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19; the discussed synchronization principles involved are the same as for QL, modulo minor details.

Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is, strictly speaking, no longer required. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction.

2014-09-21: id() now optionally accepts a single argument - a table name.

2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate.

2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota)

2014-06-05: RecordSet interface extended by new methods FirstRow and Rows.

2014-06-02: Indices on id() are now used by SELECT statements.

2014-05-07: Introduction of Marshal, Schema, Unmarshal.

2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX.

2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry.

2014-04-11: Introduction of LIMIT, OFFSET.

2014-04-10: Introduction of query rewriting.

2014-04-07: Introduction of indices.

QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}.
For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option):

The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified.

QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters.

Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement.

Implementation restriction: A byte order mark is disallowed anywhere in QL statements.

The following terms are used to denote specific character classes

The underscore character _ (U+005F) is considered a letter.

Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters.

Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest.

Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token.

The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.)

Identifiers name entities such as tables or record set columns. There are two kinds of identifiers, normal identifiers and quoted identifiers. A normal identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example

A quoted identifier is a string of any characters between guillemets «». Quoted identifiers allow QL key words or phrases with spaces to be used as identifiers. The guillemets were chosen because QL already uses double quotes, single quotes, and backticks for other quoting purposes.

	«TRANSACTION» «duration» «lovely stories»

No identifiers are predeclared, however note that no keyword can be used as a normal identifier. Identifiers starting with two underscores are used for meta data virtual table names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example

The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive.
The following character sequences represent operators, delimiters, and other special tokens

Operators consisting of more than one character are referred to by names in the rest of the documentation

An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example

A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example

An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example

A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats.

The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4.

Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base.

Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves.

After a backslash, certain single-character escapes represent special values

All other sequences starting with a backslash are illegal inside rune literals. For example

A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals.

Raw string literals are character sequences between back quotes ``. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value.
Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example

These examples all represent the same string

If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal.

Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example

Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive).

Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type.

A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive.

Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G.

A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool.

A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years.

A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are

The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment.

A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len.

A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time.

The following functions are implicitly declared

An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression.
A qualified identifier is an identifier qualified with a table/record set name prefix. For example

Primary expressions are the operands for unary and binary expressions. For example

A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply

- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.

- A constant index must be non-negative and representable by a value of type int.

- A constant index must be in range if the string s is a literal.

- If x is out of range at run time, a run-time error occurs.

- s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal.

For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand

The indices low and high are in range if 0 <= low <= high <= len(s), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL.

Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error.

Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone.

Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL.

Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to

The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt.
The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types:

Expressions of the form are equivalent, including NULL handling, to

The types of involved expressions must be ordered as defined in "Comparison operators".

Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false.

Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR).

Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar.

Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; + and - also apply to times. All other arithmetic operators apply to integers only.

	+    sum                    integers, rationals, floats, complex values, strings
	-    difference             integers, rationals, floats, complex values, times
	*    product                integers, rationals, floats, complex values
	/    quotient               integers, rationals, floats, complex values
	%    remainder              integers
	&    bitwise AND            integers
	|    bitwise OR             integers
	^    bitwise XOR            integers
	&^   bit clear (AND NOT)    integers
	<<   left shift             integer << unsigned integer
	>>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands.

A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other producing a value of type duration.

For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships, with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation

The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.
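The division and shift rules in the last two paragraphs mirror Go's own integer semantics, so a small Go snippet can make them concrete (a sketch illustrating the rules, not QL itself):

	package main

	import "fmt"

	func main() {
		x, y := -7, 2

		// Truncated division: the quotient rounds toward zero and x == q*y + r.
		q, r := x/y, x%y
		fmt.Println(q, r, x == q*y+r) // -3 -1 true

		// x >> 1 halves like x/2, but truncates toward negative infinity.
		fmt.Println(x/2, x>>1) // -3 -4

		// The most negative value divided by -1 yields itself, with r == 0.
		m := int8(-128)
		fmt.Println(m/-1, m%-1) // -128 0
	}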
For integer operands, the unary operators +, -, and ^ are defined as follows

For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific.

Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL.

For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on "wrap around".

For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program.

Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa.

The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered. These terms and the result of the comparisons are defined as follows

- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.

- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).

- Integer values are comparable and ordered, in the usual way. Note that durations are integers.

- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.

- Rational values are comparable and ordered, in the usual way.

- String and Blob values are comparable and ordered, lexically byte-wise.

- Time values are comparable and ordered.

Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string.

Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values

Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T.

A constant value x can be converted to type T in any of these cases:

- x is representable by a value of type T.

- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.

- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.

Converting a constant yields a typed constant as result.

A non-constant value x can be converted to type T in any of these cases:

- x has type T.

- x's type and T are both integer or floating point types.

- x's type and T are both complex types.

- x is an integer, except bigint or duration, and T is a string type.
Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x.

A conversion of NULL to any type yields NULL.

For the conversion of non-constant numeric values, the following rules apply

1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.

2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).

3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.

1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".

2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.

3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.

4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.

5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.

6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).

7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.

8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.

9. Converting a string value to a duration yields a duration represented by the string.
A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

10. Converting a time value to a string returns the time formatted using the format string

When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c.

Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x.

Statements control execution. The empty statement does nothing.

Alter table statements modify existing tables. With the ADD clause they add a new column to the table. The column must not exist. With the DROP clause they remove an existing column from a table. The column must exist and it must not be the only (last) column of the table. IOW, there cannot be a table with no columns. For example

When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine.

Begin transaction statements introduce a new transaction level. Every transaction level must eventually be balanced by exactly one of the COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure, no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted. Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example

A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction

A database is effectively read only outside of a transaction. Statements not requiring a transaction

The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example

Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. An index name must not be the same as the name of any existing table, and it also cannot be the same as any column name of the table the index is on. For example

Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists.

A simple index consists of only one expression, which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or whose single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types.

Note: Blob-like types are blob, bigint, bigrat, time and duration.

Create table statements create new tables.
Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case, the id() of the record can be indexed. An index name must not be the same as the name of any existing table, and it also cannot be the same as any column name of the table the index is on. For example:

Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present.

The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists.

A simple index consists of only one expression, which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression, or whose single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non-blob-like types. Note: blob-like types are blob, bigint, bigrat, time and duration.

Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example:

The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists.

The optional constraint clause has two forms. The first one is found in many SQL dialects; this form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true, the validation succeeded. If the value of the expression is false or NULL, the validation fails. If the value of the expression is not of type bool, an error occurs.

The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name:

When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows:

1. The new values of the affected columns are set, and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2.

2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value.

3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4.

4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made.

Delete from statements remove rows from a table, which must exist. For example: If the WHERE clause is not present, then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement.

Drop index statements remove indices from the DB. The index must exist. For example: The optional IF EXISTS clause makes the statement a no operation if the index does not exist.

Drop table statements remove tables from the DB. The table must exist. For example: The optional IF EXISTS clause makes the statement a no operation if the table does not exist.

Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case, the select statement is fully evaluated before the insertion of any rows is performed, which makes it possible to insert values calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted, then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present, then the number of values per row must be the same as the number of column names; all other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type, or the value must be NULL. If there exists a unique index that would make the insert statement fail, the optional IF NOT EXISTS clause turns the insert statement into a no-op in such a case. For example:

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause, then those are processed on a per-row basis.
The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines. The explanation may aid in understanding how a statement/query would be executed and whether indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted into the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement.

The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level, then the effects on the DB are as if the transaction never happened. For example: The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;', but it can be a more complex operation.

Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause. For example:

If a Recordset is a nested, parenthesized SelectStmt, then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers not used as a type in a conversion, or as a function name in the Call clause, denote names of (other) fields, whose values should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example:

The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed.

The optional JOIN clause (for example, a LEFT JOIN of a and b on expr) is mostly equal to the corresponding cross join, except that the rows from a which, when they appear in the cross join, never make expr evaluate to true are combined with a virtual row from b, containing all NULLs, and added to the result set. For the RIGHT JOIN variant, the discussed rules are used for rows from b not satisfying expr == true, and the virtual, all-NULL row "comes" from a. The FULL JOIN adds the respective rows which would otherwise be provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10].

Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause.
Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (a NULL result instead of a boolean value). Below, T denotes a non-NULL value of any QL type. NULL collates before any non-NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal).

The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non-NULL value of non-bool type. Another form of the WHERE clause is an existence predicate of a parenthesized select statement. The EXISTS form evaluates to true if the parenthesized SELECT statement produces a non-empty record set. The NOT EXISTS form evaluates to true if the parenthesized SELECT statement produces an empty record set. The parenthesized SELECT statement is evaluated only once (TODO issue #159).

The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example: Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets.

The optional OFFSET clause allows ignoring the first N records. For example: The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration.

The optional LIMIT clause allows ignoring all but the first N records. For example: The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration.

The LIMIT and OFFSET clauses can be combined. For example: Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed.

The order of evaluation of a SELECT statement is as follows:

1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).

2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)

3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.

4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).

5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).

6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).

7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).

8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.

9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist. For example:

Update statements change values of fields in rows of a table.
For example:

Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause, then those are processed on a per-row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake, table-wise unique, but meaningless and unstable record IDs. Do not apply the built-in id() to any system table.

The table __Table lists all tables in the DB. The schema is: The Schema column returns the statement to (re)create table Name. This table is virtual.

The table __Column lists all columns of all tables in the DB. The schema is: The Ordinal column defines the 1-based index of the column in the record. This table is virtual.

The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL, or which have a constraint expression defined, or which have a default expression defined. The schema is: It's possible to obtain a consolidated recordset for all properties of all DB columns using: The Name column is the column name in TableName.

The table __Index lists all indices in the DB. The schema is: The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual.

Built-in functions are predeclared.

The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type.

The built-in function coalesce takes at least one argument and returns the first of its arguments which is not NULL. If all arguments are NULL, this function returns NULL. This is useful for providing defaults for NULL values in a select query.

The built-in function contains returns true if substr is within s. If any argument to contains is NULL, the result is NULL.

The built-in aggregate function count returns how many times an expression has a non-NULL value, or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example:

Date returns the time corresponding to its arguments in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL, the result is NULL.

The built-in function day returns the day of the month specified by t. If the argument to day is NULL, the result is NULL.
The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL, the result is NULL.

NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns, and in fact formatTime directly calls t.String(). A call may therefore return one representation on a machine in the CET time zone, but another on a machine in the ACDT zone. The time value is in both cases the same, so its ordering and comparing is correct. Only the display value can differ.

The built-in functions formatFloat and formatInt format numbers to strings using Go's number formatting functions in the `strconv` package. For these functions, only the first argument is mandatory; the default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned.

The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL, the result is NULL.

The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL, the result is NULL.

The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL, the result is NULL.

The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL, the result is NULL.

The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique, automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example: If id() without arguments is called for a row which is not a table record, then the result value is NULL. For example: If id() has one argument, it must be the name of a table in a cross join. For example:

The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL, the result is NULL.

The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example:

The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. The column values must be of an ordered type. For example:

The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL, the result is NULL.

The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL, the result is NULL.
The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL, the result is NULL.

The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL, the result is NULL.

The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL, the result is NULL.

The built-in function now returns the current local time.

The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but is otherwise ignored.

In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset.

When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL, the result is NULL.

The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL, the result is NULL.

The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL, the result is NULL.

The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL, the result is NULL.

The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type.

The built-in function timeIn returns t with the location information set to loc. For a discussion of the loc argument, please see date(). If any argument to timeIn is NULL, the result is NULL.

The built-in function weekday returns the day of the week specified by t (Sunday == 0, Monday == 1, ...). If the argument to weekday is NULL, the result is NULL.

The built-in function year returns the year in which t occurs. If the argument to year is NULL, the result is NULL.
The built-in function yearDay returns the day of the year specified by t, in the range [1, 365] for non-leap years, and [1, 366] in leap years. If the argument to yearDay is NULL, the result is NULL.

Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The types of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, or imag functions is NULL, the result is NULL.

For the numeric types, the following sizes are guaranteed:

Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5].

Links from the above documentation

This section is not part of the specification.

WARNING: The implementation of indices is new and it surely needs more time to become mature.

Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator, both operands must be of comparable types. For all other operators, both operands must be of ordered types. The constant expression is a compile-time constant expression. Some constant folding is still a TODO. A Parameter is a QL parameter ($1 etc.).

Consider tables t and u, both with an indexed field f. A WHERE expression combining conditions on both does not comply with the simple cases detected above; however, such a query is now automatically rewritten to a form which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering.

Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, older QL versions cannot work with such a DB anymore.

Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example: Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
Meeus implements algorithms from the book "Astronomical Algorithms" by Jean Meeus. It follows the second edition, copyright 1998, with corrections as of August 10, 2009. It requires Go 1.1 or later.

Jean Meeus's book has long been respected as a broad-reaching source of astronomical algorithms, and many code libraries have been based on it. This library will be distinct in several respects, I hope. First of all, it is in the Go language, a programming language new enough that it well postdates the book itself. Go has many advantages for a large and diverse library such as this, and it is a fine language for scientific computations. I hope that a Go implementation will prove relevant for some time in the future.

Next, this library attempts fairly comprehensive coverage of the book. Each chapter of the book is addressed, and in the very few cases where there seems to be no code from the chapter that is applicable in Go, similar and more appropriate techniques are at least discussed in documentation. If meaningful, examples are given as well. While this library attempts fairly comprehensive coverage of the book, it does not attempt to present a complete, well rounded, and polished astronomy library. Such a production-quality astronomy library would likely include some updated routines and data, routines and data from other sources, and would fill in various holes of functionality which Meeus elects to gloss over. Such a library could certainly be derived from this one, but it is beyond the scope of what is attempted here. Thus, this library should represent a solid foundation for the development of a broad range of astronomy software. Much software should be able to use this library directly. Some software will need routines from additional sources. When the API of this library begins to present friction with that of other code, it may be time to fork or otherwise derive a new library from this one. Please feel free to do this, respecting of course the MIT license under which this software is offered.

By Go convention, each package is in its own subdirectory. The "subdirectories" list of this documentation page lists all packages of the library. Each package also corresponds to exactly one chapter of the book. The package documentation heading references the chapter number, and a cross reference of chapter numbers and package names is given below.

Within a chapter of the book, Meeus presents explanatory text, numbered formulas, numbered examples, and other exercises. Within a package of this library, there are library functions and other codified definitions; there are Go examples which appear in documentation and which are also evaluated and verified to produce correct output by the go test feature; and there is test code which is neither part of the API nor the documentation but which is verified by the go test feature. The "API", or choice of functions to implement in Go, covers many of Meeus's numbered formulas and covers the algorithms needed to work most of the numbered examples. The correspondence is not one-to-one; the code is often "refactored" into functions that seem more idiomatic to Go. This is set as the limit of the API, however, and thus the limit of the functionality offered by this library.

Each numbered example in the book is also translated to a Go example function. This typically shows how to use the implemented API to compute the results of the example. As the go test feature validates these results, the examples also serve as baseline tests of the correctness of the API code.
Relevant "exercises" from the book are also often implemented as Go examples. A few packages remain incomplete. A package is considered complete if it implements all major formulas and algorithms and if it implements all numbered examples. For incomplete packages, the package documentation will describe the ways in which it is incomplete and typically give reasons for the incompleteness. In addition to the chapter packages, there is a package called "base". This contains a few definitions that are provided by Meeus but are of such general use that they really don't belong with any one chapter. The much greater bulk of base however, are functions which Meeus does not explicitly provide, but again are of general use. The nature of these functions is as helper subroutines or IO subroutines. The functions do not offer additional astronomy algorithms beyond those provided by Meeus. To more closely follow the book's use of Greek letters and other symbols, Unicode is freely used in the source code. Recognizing that these symbols are awkard to enter in many environments however, they are avoided for exported symbols that comprise the library API. The function Coord.EclToEq for example, returns (α, δ float64) but of course you can assign these return values to whatever variables you like. The struct Coord.Equatorial on the other hand, has exported fields RA and Dec. ASCII is used in this case to simplify using these symbols in your code. Some identifiers use the prime symbol (ʹ). That's Unicode U+02B9, not the ASCII '. Go uses ASCII ' for raw strings and does not allow it in identifiers. U+02B9 on the other hand is Unicode category Lm, and is perfectly valid in Go identifiers. An earler version of this library used the Go type float64 for most parameters and return values. This allowed terse, efficient code but required careful attention to the scaling or units used. Go defined types are now used for Time, RA, HourAngle, and general Angle quantities in the interest of making units and coversions more clear. These types are defined in the external package github.com/soniakeys/unit. An earlier version of this library included routines for formatting sexagesimal quantities. These have been moved to the external package github.com/soniakeys/sexagesimal and use of this package is now restricted to examples and tests. Meeus packages and the sexagesimal package both depend on the unit package. Meeus packages do not depend on sexagesimal, although the Meeus tests do. .
Package protocol implements the Language Server Protocol specification in Go. This package contains the structs that map directly to the wire format of the Language Server Protocol. It is a literal transcription, with unmodified comments, and only the changes required to make it Go code.

- Names are uppercased to export them.
- All fields have JSON tags added to correct the names.
- Fields marked with a ? are also marked as "omitempty".
- Fields that are "|| null" are made pointers.
- Fields that are "string | number" are left as string.
- Fields that are type "number" are made float64.
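As an illustration of these conventions, here is a hypothetical pair of types (not actual types from the specification) transcribed according to the rules above:

    // Position and ExampleItem are hypothetical types illustrating the
    // transcription rules; they are not part of the protocol package.
    type Position struct {
        Line      float64 `json:"line"`      // "number" becomes float64
        Character float64 `json:"character"` // "number" becomes float64
    }

    type ExampleItem struct {
        Name     string    `json:"name"`               // name uppercased; JSON tag restores the wire name
        Detail   string    `json:"detail,omitempty"`   // a "detail?" field gains omitempty
        Position *Position `json:"position,omitempty"` // a "Position | null" field becomes a pointer
        Version  string    `json:"version"`            // a "string | number" field is left as string
    }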
Package tds is a pure Go Sybase ASE/IQ/RS driver for the database/sql package. This is a beta release; this driver has yet to be battle tested on production workloads. Version 1.0 will be released when this driver is production ready.

Requirements

Package installation is done via go-get: It implements most of the database/sql functionalities. To connect to a Sybase instance, import the package and use the regular database/sql APIs: The connection string is pretty standard and uses the URL format: The most common parameters are: Less frequently used ones are:

Most of the database/sql APIs are implemented, with a major one missing: named parameters. Please use the question mark '?' as a placeholder for parameters:

Almost all of the Sybase ASE datatypes are supported, with the exception of lob locators. The type mapping between the server and the Go data types is as follows: decimal/numeric/money/smallmoney data can be given as parameters using any of the Go numerical types. However, one should never use float64 if loss of precision cannot be tolerated. To implement precise floating point numbers, this driver provides a "Num" datatype, which is a wrapper around big.Rat. It implements the value.Scanner interface; you can thus instantiate it this way: To access the underlying big.Rat: Num also implements the Stringer interface to pretty print its value. Please refer to the tds.Num godoc for more information.

This driver assumes by default that the client uses utf8 strings and will ask the server to convert back and forth to/from this charset. If utf8 charset conversion is not supported on the server, and if the charset conversion is not supported by golang.org/x/text/encoding, you can add client-side character set conversion with the following code: You will have to handle it yourself and use a charset supported by the server.

One can set a custom error callback to process server errors before the regular error processing routine. This allows handling showplan messages and print statements. The following demonstrates how to handle showplan and print messages:

As of now the driver does not support bulk insert and named parameters. Password encryption only works for Sybase ASE > 15.5. You can use stmt_test.go and session_test.go for sample usage, as follows:

Credits
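A minimal connection-and-query sketch, assuming the driver registers itself under the name "tds" and accepts a URL-format DSN as described above; the import path, host, port, and credentials are placeholder assumptions:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/thda/tds" // assumed import path for the driver
    )

    func main() {
        // URL-format DSN as described above (placeholder values).
        db, err := sql.Open("tds", "tds://user:pass@host:5000/database")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Named parameters are not supported; use '?' placeholders.
        var name string
        err = db.QueryRow("select name from sysobjects where id = ?", 1).Scan(&name)
        if err != nil {
            log.Fatal(err)
        }
        log.Println(name)
    }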
Package couchdb is a driver for connecting with a CouchDB server over HTTP. Use the `couch` driver name when using this driver. The DSN should be a full URL, likely with login credentials: The CouchDB driver generally interprets kivik.Options keys and values as URL query parameters. Values of the following types will be converted to their appropriate string representation when URL-encoded: Passing any other type will return an error. The only exceptions to the above rule are: The CouchDB driver supports a number of authentication methods. For most uses, you don't need to worry about authentication at all--just include authentication credentials in your connection DSN: This will use Cookie authentication by default. To use one of the explicit authentication mechanisms, you'll need to use kivik's Authenticate method. For example: Normally, to include an attachment in a CouchDB document, it must be base-64 encoded, which leads to increased network traffic and higher CPU load. CouchDB also supports the option to upload multiple attachments in a single request using the 'multipart/related' content type. See http://docs.couchdb.org/en/stable/api/document/common.html#creating-multiple-attachments As an experimental feature, this is now supported by the Kivik CouchDB driver as well. To take advantage of this capability, the `doc` argument to the Put() method must be either: With this in place, the CouchDB driver will switch to `multipart/related` mode, sending each attachment in binary format, rather than base-64 encoding it. To function properly, each attachment must have an accurate Size value. If the Size value is unset, the entire attachment may be read to determine its size prior to sending it over the network, leading to delays and unnecessary I/O and CPU usage. The simplest way to ensure efficiency is to use the NewAttachment() method, provided by this package. See the documentation on that method for proper usage. Example: To disable the `multipart/related` capabilities entirely, you may pass the `NoMultipartPut` option, with any value. This will fall back to the default of inline base-64 encoding the attachments. Example: If you find yourself wanting to disable this feature, due to bugs or performance, please consider filing a bug report against Kivik as well, so we can look for a solution that will allow using this optimization.
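A minimal sketch of connecting and storing a document, assuming kivik's v2-style API; the import paths, database name, and credentials are placeholder assumptions:

    package main

    import (
        "context"
        "log"

        "github.com/go-kivik/kivik"     // assumed import path
        _ "github.com/go-kivik/couchdb" // registers the "couch" driver
    )

    func main() {
        ctx := context.TODO()
        client, err := kivik.New("couch", "http://admin:abc123@localhost:5984/")
        if err != nil {
            log.Fatal(err)
        }
        db := client.DB(ctx, "exampledb")
        // Put stores a document; the driver decides between inline base-64
        // attachments and multipart/related mode, as described above.
        rev, err := db.Put(ctx, "cow", map[string]interface{}{"feet": 4})
        if err != nil {
            log.Fatal(err)
        }
        log.Println("new revision:", rev)
    }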
Package zap provides fast, structured, leveled logging. For applications that log in the hot path, reflection-based serialization and string formatting are prohibitively expensive - they're CPU-intensive and make many small allocations. Put differently, using json.Marshal and fmt.Fprintf to log tons of interface{} makes your application slow. Zap takes a different approach. It includes a reflection-free, zero-allocation JSON encoder, and the base Logger strives to avoid serialization overhead and allocations wherever possible. By building the high-level SugaredLogger on that foundation, zap lets users choose when they need to count every allocation and when they'd prefer a more familiar, loosely typed API. In contexts where performance is nice, but not critical, use the SugaredLogger. It's 4-10x faster than other structured logging packages and supports both structured and printf-style logging. Like log15 and go-kit, the SugaredLogger's structured logging APIs are loosely typed and accept a variadic number of key-value pairs. (For more advanced use cases, they also accept strongly typed fields - see the SugaredLogger.With documentation for details.) By default, loggers are unbuffered. However, since zap's low-level APIs allow buffering, calling Sync before letting your process exit is a good habit. In the rare contexts where every microsecond and every allocation matter, use the Logger. It's even faster than the SugaredLogger and allocates far less, but it only supports strongly-typed, structured logging. Choosing between the Logger and SugaredLogger doesn't need to be an application-wide decision: converting between the two is simple and inexpensive. The simplest way to build a Logger is to use zap's opinionated presets: NewExample, NewProduction, and NewDevelopment. These presets build a logger with a single function call: Presets are fine for small projects, but larger projects and organizations naturally require a bit more customization. For most users, zap's Config struct strikes the right balance between flexibility and convenience. See the package-level BasicConfiguration example for sample code. More unusual configurations (splitting output between files, sending logs to a message queue, etc.) are possible, but require direct use of go.uber.org/zap/zapcore. See the package-level AdvancedConfiguration example for sample code. The zap package itself is a relatively thin wrapper around the interfaces in go.uber.org/zap/zapcore. Extending zap to support a new encoding (e.g., BSON), a new log sink (e.g., Kafka), or something more exotic (perhaps an exception aggregation service, like Sentry or Rollbar) typically requires implementing the zapcore.Encoder, zapcore.WriteSyncer, or zapcore.Core interfaces. See the zapcore documentation for details. Similarly, package authors can use the high-performance Encoder and Core implementations in the zapcore package to build their own loggers. An FAQ covering everything from installation errors to design decisions is available at https://github.com/uber-go/zap/blob/master/FAQ.md.
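A brief sketch of the preset-based workflow described above; the message and fields are placeholders:

    package main

    import "go.uber.org/zap"

    func main() {
        // An opinionated preset suitable for production use.
        logger, err := zap.NewProduction()
        if err != nil {
            panic(err)
        }
        defer logger.Sync() // flush any buffered entries before exiting

        // Strongly typed, allocation-conscious API.
        logger.Info("failed to fetch URL",
            zap.String("url", "https://example.com"),
            zap.Int("attempt", 3),
        )

        // Loosely typed SugaredLogger built on the same core.
        sugar := logger.Sugar()
        sugar.Infow("failed to fetch URL", "url", "https://example.com", "attempt", 3)
    }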
Package nanolog is a package to speed up your logging. The format string is inspired by the full-fledged fmt.Fprintf function. The codes are unique to this package, so normal fmt documentation is not applicable. The format string is similar to fmt in that it uses the percent sign (a.k.a. the modulo operator) to signify the start of a format code. The reader is greedy, meaning that the parser will attempt to read as much as it can for a code before it stops. E.g. if you have a generic int in the middle of your format string immediately followed by the number 1 and a space ("%i1 "), the parser may complain, saying that it encountered an invalid code. To fix this, use curly braces after the percent sign to surround the code: "%{i}1 ". Kinds and their corresponding format codes:

The file format has two categories of data: log line records and log entry records. The differentiation is done with the entryType, which is prefixed on to the record. The log line records are formatted as follows: The log entry records are formatted as follows:

The data is serialized as follows:

Bool: 1 byte
  False: 0 or True: 1

String: 4 + len(string) bytes
  Length: 4 bytes - little endian uint32
  String bytes: Length bytes

int family:
  int: 8 bytes - int64 as little endian uint64
  int8: 1 byte
  int16: 2 bytes - int16 as little endian uint16
  int32: 4 bytes - int32 as little endian uint32
  int64: 8 bytes - int64 as little endian uint64

uint family:
  uint: 8 bytes - little endian uint64
  uint8: 1 byte
  uint16: 2 bytes - little endian uint16
  uint32: 4 bytes - little endian uint32
  uint64: 8 bytes - little endian uint64

float32: 4 bytes as little endian uint32 from float32 bits

float64: 8 bytes as little endian uint64 from float64 bits

complex64:
  Real: 4 bytes as little endian uint32 from float32 bits
  Imaginary: 4 bytes as little endian uint32 from float32 bits

complex128:
  Real: 8 bytes as little endian uint64 from float64 bits
  Imaginary: 8 bytes as little endian uint64 from float64 bits
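To make the serialization rules concrete, here is a small Go sketch (standard library only, not nanolog's internal code) that encodes a string and a float64 exactly as described above:

    package main

    import (
        "bytes"
        "encoding/binary"
        "fmt"
        "math"
    )

    func main() {
        var buf bytes.Buffer

        // String: 4-byte little-endian length, then the raw bytes.
        s := "hi"
        var lenBuf [4]byte
        binary.LittleEndian.PutUint32(lenBuf[:], uint32(len(s)))
        buf.Write(lenBuf[:])
        buf.WriteString(s)

        // float64: 8 bytes, little-endian uint64 from the float64 bits.
        var f64Buf [8]byte
        binary.LittleEndian.PutUint64(f64Buf[:], math.Float64bits(3.14))
        buf.Write(f64Buf[:])

        fmt.Printf("% x\n", buf.Bytes())
    }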
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type.

Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons. We built an internationalized application and needed to know the field and what validation failed so we could provide a localized error. Doing things this way is actually the way the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator only returns nil or ValidationErrors as type error; so, in your code, all you need to do is check if the error returned is not nil, and if it's not, type cast it to type ValidationErrors, like so: err.(validator.ValidationErrors).

Custom functions can be added. Example:

Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-field validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield", it could be multiple levels down. Example:

Multiple validators on a field will process in the order defined. Example:

Bad validator definitions are not handled by the library. Example:

Baked-in Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed, you should implement your own custom validator.

Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C.

Pipe ("|") is the separator of 'or' validation tags. If you wish to have a pipe included within the parameter, i.e. excludesall=|, you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C.

Here is a list of the current built-in validators:

Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -)

This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba).

When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself.

Same as the structonly tag except that any struct level validations will not run.

This is a special tag without a validation function attached. It is used when a field is a Pointer, Interface or Invalid and you wish to validate that it exists. Example: if you want to ensure a bool exists, define the bool as a pointer and use exists; it will ensure there is a value. You couldn't use required, as it would fail when the bool was false.
exists will fail if the value is a Pointer, Interface or Invalid and is nil.

Allows conditional validation: for example, if a field is not set with a value (determined by the "required" validator), then other validation such as min or max won't run, but if a value is set, validation will run.

This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported; each level you wish to dive will require another dive tag. Example #1 Example #2

This validates that the value is not the data type's default zero value. For numbers, it ensures the value is not zero. For strings, it ensures the value is not "". For slices, maps, pointers, interfaces, channels and functions, it ensures the value is not nil.

For numbers, len will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, it validates the number of items.

For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, it validates the number of items.

For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, it validates the number of items.

For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, it validates the number of items.

For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, it validates the number of items.

For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps, it validates the number of items. Example #1 Example #2 (time.Time) For time.Time, this ensures the time value is greater than time.Now.UTC().

Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time, this ensures the time value is greater than or equal to time.Now.UTC().

For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps, it validates the number of items. Example #1 Example #2 (time.Time) For time.Time, this ensures the time value is less than time.Now.UTC().

Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time, this ensures the time value is less than or equal to time.Now.UTC().

This will validate the field value against another field's value either within a struct or a passed-in field. Example #1: Example #2: Field Equals Another Field (relative): This does the same as eqfield except that it validates the field provided relative to the top level struct.

This will validate the field value against another field's value either within a struct or a passed-in field. Examples: Field Does Not Equal Another Field (relative): This does the same as nefield except that it validates the field provided relative to the top level struct.

Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field.
Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct.

Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct.

Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct.

Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct.

This validates that a string value contains alpha characters only.

This validates that a string value contains alphanumeric characters only.

This validates that a string value contains a basic numeric value. Basic excludes exponents, etc.

This validates that a string value contains a valid hexadecimal number.

This validates that a string value contains a valid hex color, including the hashtag (#).

This validates that a string value contains a valid rgb color.

This validates that a string value contains a valid rgba color.

This validates that a string value contains a valid hsl color.

This validates that a string value contains a valid hsla color.

This validates that a string value contains a valid email. This may not conform to all possibilities of any RFC standard, but neither does any email provider accept all possibilities.

This validates that a string value contains a valid url. This will accept any url the golang request uri accepts, but it must contain a scheme, for example http:// or rtmp://.

This validates that a string value contains a valid uri. This will accept any uri the golang request uri accepts.

This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid, you can use this with the omitempty tag.

This validates that a string value contains the substring value.

This validates that a string value contains any Unicode code points in the substring value.

This validates that a string value contains the supplied rune value.

This validates that a string value does not contain the substring value.

This validates that a string value does not contain any Unicode code points in the substring value.

This validates that a string value does not contain the supplied rune value.

This validates that a string value contains a valid isbn10 or isbn13 value.

This validates that a string value contains a valid isbn10 value.

This validates that a string value contains a valid isbn13 value.

This validates that a string value contains a valid UUID.

This validates that a string value contains a valid version 3 UUID.

This validates that a string value contains a valid version 4 UUID.

This validates that a string value contains a valid version 5 UUID.
This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true.

This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true.

This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true.

This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64.

This validates that a string value contains a valid latitude.

This validates that a string value contains a valid longitude.

This validates that a string value contains a valid U.S. Social Security Number.

This validates that a string value contains a valid IP Address.

This validates that a string value contains a valid v4 IP Address.

This validates that a string value contains a valid v6 IP Address.

This validates that a string value contains a valid CIDR Address.

This validates that a string value contains a valid v4 CIDR Address.

This validates that a string value contains a valid v6 CIDR Address.

This validates that a string value contains a valid, resolvable TCP Address.

This validates that a string value contains a valid, resolvable v4 TCP Address.

This validates that a string value contains a valid, resolvable v6 TCP Address.

This validates that a string value contains a valid, resolvable UDP Address.

This validates that a string value contains a valid, resolvable v4 UDP Address.

This validates that a string value contains a valid, resolvable v6 UDP Address.

This validates that a string value contains a valid, resolvable IP Address.

This validates that a string value contains a valid, resolvable v4 IP Address.

This validates that a string value contains a valid, resolvable v6 IP Address.

This validates that a string value contains a valid Unix Address.

This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types:

NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built-in alias tags:

Validator notes: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
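A short usage sketch tying the pieces together, assuming a v9-style API where validator.New takes no arguments (earlier versions take a config argument); the struct and field names are placeholders:

    package main

    import (
        "fmt"

        "gopkg.in/go-playground/validator.v9" // assumed import path
    )

    // User demonstrates a few of the built-in validators described above.
    type User struct {
        Name  string `validate:"required"`
        Email string `validate:"required,email"`
        Age   int    `validate:"gte=0,lte=130"`
    }

    func main() {
        validate := validator.New()

        err := validate.Struct(User{Name: "", Email: "not-an-email", Age: 200})
        if err != nil {
            // The library only returns nil or ValidationErrors; build your
            // own (possibly localized) messages from the type-cast value.
            errs := err.(validator.ValidationErrors)
            fmt.Println(errs)
        }
    }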
Package passwordreset implements creation and verification of secure tokens useful for implementation of a "reset forgotten password" feature in web applications.

This package generates and verifies signed one-time tokens that can be embedded in a link sent to users when they initiate the password reset procedure. When a user changes their password, or when the expiry time passes, the token becomes invalid.

Secure token format: expiration time || login || signature, where expiration time is the number of seconds since the Unix epoch (UTC) indicating when this token must expire (4 bytes, big-endian, uint32); login is a byte string of arbitrary length (at least 1 byte, not null-terminated); and signature is 32 bytes of HMAC-SHA256(expiration_time || login, k), where k = HMAC-SHA256(expiration_time || login, userkey), where userkey = HMAC-SHA256(password value, secret key), where password value is any piece of information derived from the user's password which will change once the user changes their password (for example, a hash of the password), and secret key is an application-specific secret key. The password value is used to make tokens one-time; that is, once a user changes their password, the token which they used to do a reset becomes invalid.

Usage example: Your application must have a strong secret key for password reset purposes. This key will be used to generate and verify password reset tokens. (If you already have a secret key, for example, for the authcookie package, it's better not to reuse it; just use a different one.)

Create a function that will query your users database and return some password-related value for the given login. A password-related value means some value that will change once a user changes their password, for example: a password hash, a random salt used to generate it, or the time of password creation. This value, mixed with the app-specific secret key, will be used as a key for the password reset token, and thus it will be kept secret.

When a user initiates a password reset (by entering their login, and maybe answering a secret question), generate a reset token: Send a link with this token to the user by email, for example: https://www.example.com/reset?token=Talo3mRjaGVzdITUAGOXYZwCMq7EtHfYH4ILcBgKaoWXDHTJOIlBUfcr

Once a user clicks this link, read the token from it, then verify this token by passing it to the VerifyToken function along with the getPasswordHash function and the app-specific secret key. If verification succeeded, allow changing the password for the returned login.
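A sketch of the flow described above; the exact function signatures shown here are assumptions based on this description, so consult the package's godoc before relying on them:

    package main

    import (
        "time"

        "github.com/dchest/passwordreset" // assumed import path
    )

    // getPasswordHash would query your users database; here it is a stub
    // standing in for a real password-related value lookup.
    func getPasswordHash(login string) ([]byte, error) {
        return []byte("stored-password-hash-for-" + login), nil
    }

    func main() {
        secret := []byte("your-app-specific-secret-key")

        // When the user initiates a reset, generate a one-time token
        // (signature assumed: login, duration, password value, secret key).
        pwdval, _ := getPasswordHash("alice")
        token := passwordreset.NewToken("alice", 12*time.Hour, pwdval, secret)
        _ = token // email a link containing this token to the user

        // When the user follows the link, verify the token.
        login, err := passwordreset.VerifyToken(token, getPasswordHash, secret)
        if err == nil {
            _ = login // verification succeeded: allow changing this login's password
        }
    }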
Package stl implements functions to read, write, and transform files in the Stereolithography/Surface Tessellation Language (.stl) file format used in 3D modelling. The format specification was taken from http://www.ennex.com/~fabbers/StL.asp, found via http://en.wikipedia.org/wiki/STL_%28file_format%29. While STL stores the data in single-precision 32-bit floating point numbers, the stl package does all calculations beyond simple addition in double-precision 64-bit (float64). Usage Example Everything that operates on a model is defined as a method of Solid. Note that the STL format has two variants: a human-readable ASCII variant, and a more compact and precise binary variant, which is preferable. The Solid.BinaryHeader field and the Triangle.Attributes fields will be empty after reading, as these are not part of the ASCII format. The Solid.Name field is read from the first line after "solid ". It is not checked against the name at the end of the file after "endsolid ". The stl package will also not cope with Unicode byte order marks, which some text editors might automatically place at the beginning of a file. The Solid.BinaryHeader field is filled with all 80 bytes of header data. Then, ReadFile will try to fill solid.Name with an ASCII string read from the header data from the first byte until a \0 or a non-ASCII character is detected. As always when you do linear transformations on floating point numbers, you get numerical errors. So you should expect a vertex rotated by 360° not to end up at exactly the original coordinates, but instead just very close to them. As the error is usually far smaller than the available precision of 3D printing applications, this is not an issue in most cases. You can implement the Writer interface to write directly into your own data structures. This way you can use the CopyFile and CopyAll functions.
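For instance, reading a model, transforming it, and writing it back might look like this (a sketch; ReadFile is named in the docs above, while the Scale and WriteFile methods and the import path are assumptions):

    package main

    import (
        "log"

        "github.com/hschendel/stl" // import path assumed
    )

    func main() {
        // ReadFile detects whether the file is ASCII or binary.
        solid, err := stl.ReadFile("input.stl")
        if err != nil {
            log.Fatal(err)
        }

        // Everything that operates on a model is a method of Solid.
        solid.Scale(25.4) // e.g. convert from inches to millimeters

        if err := solid.WriteFile("output.stl"); err != nil {
            log.Fatal(err)
        }
    }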
Package utter implements a deep pretty printer for Go data structures to aid data snapshotting. A quick overview of the additional features utter provides over the built-in printing facilities for Go data types is as follows: The approach utter allows for dumping Go data structures is less flexible than its parent tool. It has just a: This section demonstrates how to quickly get started with utter. See the sections below for further details on formatting and configuration options. To dump a variable with full newlines, indentation, type, and pointer information, use Dump, Fdump, or Sdump: Configuration of utter is handled by fields in the ConfigState type. For convenience, all of the top-level functions use a global state available via the utter.Config global. It is also possible to create a ConfigState instance that provides methods equivalent to the top-level functions. This allows concurrent configuration options. See the ConfigState documentation for more details. The following configuration options are available: Indent String to use for each indentation level for Dump functions. It is a single space by default. A popular alternative is "\t". NumericWidth NumericWidth specifies the number of columns to use when dumping a numeric slice or array (including bool). Zero specifies all entries on one line. StringWidth StringWidth specifies the number of columns to use when dumping a string slice or array. Zero specifies all entries on one line. BytesWidth Number of byte columns to use when dumping byte slices and arrays. CommentBytes Specifies whether ASCII comment annotations are attached to byte slice and array dumps. CommentPointers CommentPointers specifies whether pointer information will be added as comments. IgnoreUnexported Specifies that unexported fields should be ignored. ElideType ElideType specifies that type information defined by context should not be printed in a dump. SortKeys Specifies that map keys should be sorted before being printed. Use this to have a more deterministic, diffable output. Note that only native types (bool, int, uint, floats, uintptr and string) are supported, with other types sorted according to the reflect.Value.String() output, which guarantees display stability. Natural map order is used by default. Simply call utter.Dump with a list of variables you want to dump: You may also call utter.Fdump if you would prefer to output to an arbitrary io.Writer. For example, to dump to standard error: A third option is to call utter.Sdump to get the formatted output as a string: See the Dump example for details on the setup of the types and variables being shown here. Byte (and uint8) arrays and slices are displayed uniquely, similar to the hexdump -C command, as shown.
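A short sketch of the calls described above (the import path is an assumption):

    package main

    import "github.com/kortschak/utter" // import path assumed

    type point struct{ X, Y int }

    func main() {
        // Dump to stdout with full type and pointer information.
        utter.Dump(&point{X: 1, Y: 2})

        // A local ConfigState keeps settings away from the global state;
        // SortKeys makes map output deterministic and diffable.
        cfg := utter.ConfigState{Indent: "\t", SortKeys: true}
        cfg.Dump(map[string]int{"b": 2, "a": 1})
    }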
Package amt provides a reference implementation of the IPLD AMT (Array Mapped Trie) used in the Filecoin blockchain. The AMT algorithm is similar to a HAMT https://en.wikipedia.org/wiki/Hash_array_mapped_trie but instead presents an array-like interface where the indexes themselves form the mapping to nodes in the trie structure. An AMT is suitable for storing sparse array data, as a minimum amount of intermediate nodes are required to address a small number of entries even when their indexes span a large distance. An AMT is also a suitable means of storing non-sparse array data, with a small amount of storage and algorithmic overhead required to handle a mapping that assumes some elements within any range of data may not be present. The AMT algorithm produces a tree-like graph, with a single root node addressing a collection of child nodes that connect downward toward the leaf nodes, which store the actual entries. No terminal entries are stored in intermediate elements of the tree, unlike in a HAMT. We can divide up the AMT tree structure into "levels" or "heights", where a height of zero contains the terminal elements and the maximum height of the tree contains the single root node. Intermediate nodes are used to span across the range of indexes. Any AMT instance uses a fixed "width" that is consistent across the tree's nodes. An AMT's "bitWidth" dictates the width, or maximum branching factor (arity), of the AMT's nodes by determining how many bits of the original index are used to determine the index at any given level. A bitWidth of 3 (the default for this implementation) can generate indexes in the range of 0 to (2^3)-1=7, i.e. a "width" of 8. In practice, this means that an AMT with a bitWidth of 3 has a branching factor of _between 1 and 8_ for any node in the structure. Considering the minimal case: a minimal AMT contains a single node which serves as both the root and the leaf node and can hold zero or more elements (an empty AMT is possible, although a special case, and consists of a zero-length root). This minimal AMT can store array indexes from 0 to width-1 (8 for the default bitWidth of 3) without requiring the addition of additional nodes. Attempts to add indexes beyond width-1 will result in additional nodes being added to form a tree structure in order to address the new elements. The minimal AMT node is said to have a height of 0. Every node in an AMT has a height that indicates its distance from the leaf nodes. All leaf nodes have a height of 0. The height of the root node dictates the overall height of the entire AMT. In the case of the minimal AMT, this is 0. Elements are stored in a compacted form within nodes; they are "position-mapped" by a bitmap field that is stored with the node. The bitmap is a simple byte array, where each bit represents an element of the data that can be stored in the node. With a width of 8, the bitmap is a single byte and up to 8 elements can be stored in the node. The data array of a node _only stores elements that are present in that node_, so the array is commonly shorter than the maximum width. An empty AMT is a special case where the single node can have zero elements, and therefore a zero-length data array and a bitmap of `0x00`. In all other cases, the data array must have between 1 and width elements. Determining the position of an index within the data array requires counting the number of set bits within the bitmap up to the element we are concerned with.
If the bitmap has bits 2, 4 and 6 set, we can see that only 3 of the bits are set, so our data array should hold 3 elements. To address index 4, we know that the first element will be index 2 and therefore the second will hold index 4. This format allows us to store only the elements that are set in the node. Overflowing beyond the single-node AMT by adding an index beyond width-1 requires an increase in height in order to address all elements. If an element in the range of width to (width*2)-1 is added, a single additional height is required, which will result in a new root node that is used to address two consecutive leaf nodes. Because we have an arity of up to width at any node, the addition of indexes in the range of 0 to (width^2)-1 will still require only the addition of a single additional height above the leaf nodes, i.e. height 1. From the width of an AMT we can derive the maximum range of indexes that can be contained by an AMT at any given `height` with the formula width^(height+1)-1. e.g. an AMT with a width of 8 and a height of 2 can address indexes 0 to 8^(2+1)-1=511. Incrementing the height multiplies the range of indexes that can be contained within that structure by the width. Nodes above height 0 (non-leaf nodes) do not contain terminal elements; instead, their data array contains links to child nodes. The index compaction using the bitmap is the same as for leaf nodes, so each non-leaf node only stores as many links as it has child nodes. Because additional height is required to address larger indexes, even a single-element AMT will require more than one node where the index is greater than the width of the AMT. For a width of 8, indexes 8 to 63 require a height of 1, indexes 64 to 511 require a height of 2, indexes 512 to 4095 require a height of 3, etc. Retrieving elements from the AMT requires extracting only the portion of the requested index that is required at each height to determine the position in the data array to navigate into. When traversing through the tree, we only need to select from indexes 0 to width-1. To do this, we take log2(width) bits from the index to form a number that is between 0 and width-1. e.g. for a width of 8, we only need 3 bits to form a number between 0 and 7, so we only consume 3 bits per level of the AMT as we traverse. A simple method to calculate this at any height in the AMT (assuming a bitWidth of 3, i.e. a width of 8) is: 1. Calculate the maximum number of nodes (not entries) that may be present in a sub-tree rooted at the current height. width^height provides this number. e.g. at height 0, only 1 node can be present, but at height 3, we may have a tree of up to 512 nodes (storing up to 8^(3+1)=4096 entries). 2. Divide the index by this number to find the index for this height. e.g. an index of 3 at height 0 will be 3/1=3, and an index of 20 at height 1 will be 20/8=2. 3. If we are at height 0, the element we want is at the data index, position-mapped via the bitmap. 4. If we are above height 0, we need to navigate to the child element at the index we calculated, position-mapped via the bitmap. When traversing to the child, we discard the upper portion of the index that we no longer need. This can be achieved by a mod operation against the number-of-nodes value. e.g. an index of 20 at height 1 requires navigation to the element at position 2; when moving to that element (which is at height 0), we truncate the index with 20%8=4, and at height 0 this index will be the index in our data array (position-mapped via the bitmap).
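The arithmetic in these steps can be sketched as standalone Go (an illustration of the calculation only, not this package's API; bitWidth of 3, width of 8):

    package main

    import (
        "fmt"
        "math/bits"
    )

    const width = 8 // bitWidth of 3

    // childIndex returns the node-local index (0 to width-1) for the given
    // height, plus the remainder used when descending into that child.
    func childIndex(index, height uint64) (pos, rest uint64) {
        n := uint64(1)
        for i := uint64(0); i < height; i++ {
            n *= width // width^height
        }
        return index / n, index % n
    }

    // dataIndex position-maps a node-local index into the compacted data
    // array by counting the set bits of the bitmap below that position.
    func dataIndex(bitmap byte, pos uint) int {
        mask := byte(1)<<pos - 1 // bits strictly below pos
        return bits.OnesCount8(bitmap & mask)
    }

    func main() {
        // Index 20 at height 1: element at position 2, remainder 20%8=4.
        pos, rest := childIndex(20, 1)
        fmt.Println(pos, rest) // 2 4

        // Bitmap with bits 2, 4 and 6 set: index 4 is the second element.
        fmt.Println(dataIndex(0b01010100, 4)) // 1
    }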
In this way, each sub-tree root consumes a small slice, log2(width) bits long, of the original index. Adding new elements to an AMT may require up to 3 steps: 1. Increasing the height to accommodate a new index if the current height is not sufficient to address it. Increasing the height requires turning the current root node into an intermediate node and adding a new root which links to the old (repeated until the required height is reached). 2. Adding any missing intermediate and leaf nodes that are required to address the new index. Depending on the density of existing indexes, this may require the addition of up to height-1 new nodes to connect the root to the required leaf. Sparse indexes will mean large gaps in the tree that will need filling to address new, equally sparse, indexes. 3. Setting the element at the leaf node in the appropriate position in the data array and setting the appropriate bit in the bitmap. Removing elements requires a reversal of this process. Any empty node (other than the case of a completely empty AMT) must be removed, and its parent should have the corresponding child link removed. This removal may recurse up the tree to remove many unnecessary intermediate nodes. The root node may also be removed if the current height is no longer necessary to contain the range of indexes still in the AMT. This can be easily determined by checking whether _only_ the first bit of the root's bitmap is set, meaning only the left-most child is present, in which case that child becomes the new root node (repeated until the new root has more than the first bit set, or a height of 0 is reached, the single-node case). See https://github.com/ipld/specs/blob/master/data-structures/hashmap.md for a description of a HAMT algorithm, and https://github.com/ipld/specs/blob/master/data-structures/vector.md for a description of an algorithm similar to an AMT that doesn't support internal node compression and therefore doesn't support sparse arrays. Unlike a HAMT, the AMT algorithm doesn't benefit from the randomness introduced by a hash algorithm. Therefore, where user input can influence indexes, larger-than-necessary tree structures may present risks, in addition to the challenge imposed by having a strict upper limit on the indexes addressable by the AMT. A width of 8, using 64-bit integers for indexing, allows for a tree height of up to 64/log2(8)=21 (i.e. a width of 8 has a bitWidth of 3, dividing the 64 bits of the uint into 21 separate per-height indexes). Careful placement of indexes could create extremely sub-optimal forms with large heights connecting leaf nodes that are sparsely packed. The overhead of the large number of intermediate nodes required to connect leaf nodes in AMTs that contain high indexes can be abused to create perverse forms that contain large numbers of nodes to store a minimal number of elements. Minimal nodes will be created where indexes are all in the lower range. The optimal case for an AMT is contiguous index values starting from zero. As larger indexes are introduced that span beyond the current maximum, more nodes are required to address both the new elements _and_ the existing lower-index nodes. Consider a case where a width=8 AMT is only addressing indexes less than 8 and therefore requires a single height. The introduction of a single index within 8 of the maximum 64-bit unsigned integer range will require the new root to have a height of 21, plus enough connecting nodes between it and both the existing elements and the new upper index.
This pattern of behavior may be acceptable if there is significant density of entries under a particular maximum index. There is a direct relationship between the sparseness of index values and the number of nodes required to address the entries. This should be the key consideration when determining whether an AMT is a suitable data structure for a given application.
Package ogdl is used to process OGDL, the Ordered Graph Data Language. OGDL is a textual format to write trees or graphs of text, where indentation and spaces define the structure. Here is an example: The language is simple, both in its textual representation and in its number of productions (the specification rules), allowing for compact implementations. OGDL character streams are normally formed by Unicode characters, encoded as UTF-8 strings, but any encoding that is ASCII transparent is compatible with the specification. See the full spec at http://ogdl.org. To install this package just do: If we have a text file 'config.ogdl' containing: then, will print If the timeout parameter is not present, the default value (60) will be assigned to 'to'. The default value is optional, but be aware that Int64() will return 0 in case the parameter doesn't exist. The configuration file can be written in a more concise way: The package includes a template processor. It takes an arbitrary input stream with some variables in it, and produces an output stream with the variables resolved out of a Graph object which acts as context. For example (given the previous config file): string(b) is then: Some rules are followed:
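As a rough sketch of the configuration access described above (Get and Int64 are mentioned in this documentation, but their exact signatures and the import path are assumptions):

    package main

    import (
        "fmt"

        "github.com/rveen/ogdl" // import path assumed
    )

    func main() {
        // Parse the configuration file into a Graph (function name assumed).
        g := ogdl.FromFile("config.ogdl")

        // Navigate by path; 60 is the default used when 'timeout' is absent.
        to := g.Get("timeout").Int64(60)
        fmt.Println(to)
    }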
Package ovirtclient provides a human-friendly Go client for the oVirt Engine. It provides an abstraction layer for the oVirt API, as well as a mocking facility for testing purposes. This documentation contains two parts. This introduction explains setting up the client with the credentials. The API doc contains the individual API calls. When reading the API doc, start with the Client interface: it contains all components of the API. The individual APIs, their documentation, and examples are located in subinterfaces, such as DiskClient. There are several ways to create a client instance. The most basic way is to use the New() function as follows: The mock client simulates the oVirt engine behavior in-memory without needing an actual running engine. This is a good way to provide a testing facility. It can be created using the NewMock method: That's it! However, to make it really useful, you will need the test helper, which can set up test fixtures. The test helper can work in two ways: either it sets up test fixtures in the mock client, or it sets up a live connection and identifies a usable storage domain, cluster, etc. for testing purposes. The ovirtclient.NewMockTestHelper() function can be used to create a test helper with a mock client in the backend: The easiest way to set up the test helper for a live connection is by using environment variables. To do that, you can use the ovirtclient.NewLiveTestHelperFromEnv() function: This function will inspect environment variables to determine if a connection to a live oVirt engine can be established. The following environment variables are supported: URL of the oVirt engine API. Mandatory. The username for the oVirt engine. Mandatory. The password for the oVirt engine. Mandatory. A file containing the CA certificate in PEM format. Provide the CA certificate in PEM format directly. Disable certificate verification if set. Not recommended. The cluster to use for testing. Will be automatically chosen if not provided. ID of the blank template. Will be automatically chosen if not provided. Storage domain to use for testing. Will be automatically chosen if not provided. VNIC profile to use for testing. Will be automatically chosen if not provided. You can also create the test helper manually: This library provides extensive logging. Each API interaction is logged on the debug level, and other messages are added on other levels. In order to provide logging, this library uses the go-ovirt-client-log (https://github.com/oVirt/go-ovirt-client-log) interface definition. As long as your logger implements this interface, you will be able to receive log messages. The logging library also provides a few built-in loggers. For example, you can log via the default Go log interface: Or, you can also log in tests: You can also disable logging: Finally, we also provide an adapter library for klog here: https://github.com/oVirt/go-ovirt-client-log-klog Modern-day oVirt engines run secured with TLS. This means that the client needs a way to verify the certificate the server is presenting. This is controlled by the tls parameter of the New() function. You can implement your own source by implementing the TLSProvider interface, but the package also includes a ready-to-use provider. Create the provider using the TLS() function: This provider has several functions. The easiest to set up is using the system trust root for certificates. However, this won't work on Windows: Now you need to add your oVirt engine certificate to your system trust root.
If you don't want to, or can't, add the certificate to the system trust root, you can also directly provide it to the client. Finally, you can also disable certificate verification. Do we need to say that this is a very, very bad idea? The configured tls variable can then be passed to the New() function to create an oVirt client. This library attempts to retry API calls that can be retried if possible. Each function has a sensible retry policy. However, you may want to customize the retries by passing one or more retry flags. The following retry flags are supported: This strategy will stop retries when the context parameter is canceled. This strategy adds a wait time after each attempt, which is increased by the given factor on each try. The default is a backoff with a factor of 2. This strategy will cancel retries if the error in question is a permanent error. This is enabled by default. This strategy will abort retries if a maximum number of tries is reached. On complex calls, the retries are counted per underlying API call. This strategy will abort retries if a certain time has elapsed for the higher-level call. This strategy will abort retries if a certain underlying API call takes longer than the specified duration.
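Putting the TLS provider and client construction together (a sketch; the exact New() parameter list is an assumption based on the description above):

    package main

    import (
        ovirtclient "github.com/oVirt/go-ovirt-client"
        ovirtclientlog "github.com/oVirt/go-ovirt-client-log/v2" // version assumed
    )

    func main() {
        // Use the system trust root; swap in a file- or memory-based source
        // (or, unwisely, disabled verification) as needed.
        tls := ovirtclient.TLS().CACertsFromSystem()

        logger := ovirtclientlog.NewNOOPLogger() // or a Go-log/test logger

        // Parameter order assumed: URL, username, password, TLS provider,
        // logger, extra settings (such as retry flags).
        client, err := ovirtclient.New(
            "https://engine.example.com/ovirt-engine/api",
            "admin@internal",
            "secret",
            tls,
            logger,
            nil,
        )
        if err != nil {
            panic(err)
        }
        _ = client
    }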
Package quamina provides instances that support adding Patterns and then presenting Events, generating a report of which Patterns match the Event. Patterns and Events are both represented as JSON objects, although a Flattener interface is provided by which structured objects in formats other than JSON can be matched by quamina. Quamina instances match Events quickly and with a latency that is not strongly affected by the number of Patterns that have been added.
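A minimal sketch (assuming the quamina API with New, AddPattern, and MatchesForEvent):

    package main

    import (
        "fmt"

        "github.com/timbray/quamina" // import path assumed
    )

    func main() {
        q, err := quamina.New()
        if err != nil {
            panic(err)
        }

        // Patterns and Events are both JSON objects.
        if err := q.AddPattern("go-fans", `{"language": ["go"]}`); err != nil {
            panic(err)
        }

        matches, err := q.MatchesForEvent([]byte(`{"language": "go", "stars": 42}`))
        if err != nil {
            panic(err)
        }
        fmt.Println(matches) // [go-fans]
    }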
Package zappy implements the zappy block-based compression format. It aims for a combination of good speed and reasonable compression. Zappy is a format-incompatible, API-compatible fork of snappy[1]. The C++ snappy implementation is at [2]. The snappy compression is pretty good. Yet it has one problem built into its format definition[3]: the maximum length of a copy "instruction" is 64 bytes. For some specific usage patterns with long runs of repeated data, it turns out the compression is suboptimal. For example, a 1:1000 "sparseness" 64kB bit index with only a few set bits is compressed to about 3kB (about 1000 of the 64B-copy, 3-byte "instructions"). Zappy uses a much less complicated format than snappy. Each encoded block begins with the uvarint-encoded[4] length of the decoded data, followed by a sequence of chunks. Chunks begin and end on byte boundaries. The chunk starts with a varint-encoded number N: The compression rate is roughly the same as snappy's for the reference data set: Zappy has better RLE handling (1/1000+1 non-zero bytes in each index): When compiled with CGO_ENABLED=1, zappy is now faster than Go snappy. Old=Go snappy, new=zappy: The package builds with CGO_ENABLED=0 as well, but the performance is worse. If a constraint 'purego' appears in the build constraints [5], then a pure Go version is built regardless of the $CGO_ENABLED value. ... referenced from the above documentation.
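Since the API is compatible with the Go snappy package, usage can be sketched as follows (Encode/Decode signatures assumed to mirror snappy's, and the import path assumed):

    package main

    import (
        "bytes"
        "fmt"

        "github.com/cznic/zappy" // import path assumed
    )

    func main() {
        // A sparse 64kB index: long runs of zeros with a few set bytes.
        data := make([]byte, 64*1024)
        data[100], data[40000] = 1, 1

        enc, err := zappy.Encode(nil, data) // nil: let Encode allocate
        if err != nil {
            panic(err)
        }
        fmt.Println(len(enc)) // long runs collapse into a few long copy chunks

        dec, err := zappy.Decode(nil, enc)
        if err != nil {
            panic(err)
        }
        fmt.Println(bytes.Equal(dec, data)) // true
    }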
Package xmlpath implements a strict subset of the XPath specification for the Go language. The XPath specification is available at: Path expressions supported by this package are in the following format, with all components being optional: At the moment, xmlpath is compatible with the XPath specification to the following extent: For example, assuming the following document: The following examples are valid path expressions, and the first match has the indicated value: To run an expression, compile it, and then apply the compiled path to any number of context nodes, from one or more parsed XML documents:
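For example (a sketch using MustCompile, Parse, and the path's String method; the import path is an assumption):

    package main

    import (
        "fmt"
        "strings"

        "gopkg.in/xmlpath.v2" // import path assumed
    )

    func main() {
        doc := `<library><book><isbn>0836217462</isbn></book></library>`

        // Compile once; a compiled path can be applied to many documents.
        path := xmlpath.MustCompile("/library/book/isbn")

        root, err := xmlpath.Parse(strings.NewReader(doc))
        if err != nil {
            panic(err)
        }
        if value, ok := path.String(root); ok {
            fmt.Println("found:", value) // found: 0836217462
        }
    }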
Package uuid implements a fast representation of UUIDs (Universally Unique Identifiers) and integrates with JSON and SQL drivers. This package supports reading multiple formats of UUIDs, including but not limited to: The parsing speed of UUIDs in this package is achieved in several ways: A lookup table converts hexadecimal digits to bytes. Scanning and parsing are done in place without allocating anything. Resulting bytes are written to the UUID as it is parsed. On parse errors, this will leave the UUID only partially populated with data from the input string, leaving the rest of the UUID unmodified. This package simply ignores non-hexadecimal characters when scanning. This can cause some odd representations of hexadecimal data to be parsed as valid UUIDs, and longer strings like these will parse successfully: However, the hexadecimal digits MUST come in pairs, and the total number of bytes represented by them MUST equal 16, or a parse error will be generated. For example, invalid UUIDs like these will not parse: All string-creating functions will generate UUIDs in the canonical format of:
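The lookup-table parsing technique itself can be illustrated independently of this package's API (a standalone sketch, not the package's actual code):

    package main

    import "fmt"

    // hexTable maps an ASCII byte to its hex value, or 0xff for non-hex bytes.
    var hexTable = func() (t [256]byte) {
        for i := range t {
            t[i] = 0xff
        }
        for c := byte('0'); c <= '9'; c++ {
            t[c] = c - '0'
        }
        for c := byte('a'); c <= 'f'; c++ {
            t[c] = c - 'a' + 10
        }
        for c := byte('A'); c <= 'F'; c++ {
            t[c] = c - 'A' + 10
        }
        return t
    }()

    // parseUUID scans s in place, ignoring non-hex bytes, and writes bytes
    // into u as they are parsed. Hex digits must come in adjacent pairs and
    // must represent exactly 16 bytes in total.
    func parseUUID(s string, u *[16]byte) error {
        n := 0
        for i := 0; i < len(s); i++ {
            hi := hexTable[s[i]]
            if hi == 0xff {
                continue // skip separators such as '-' or a "urn:uuid:" prefix
            }
            if i+1 == len(s) || hexTable[s[i+1]] == 0xff || n == 16 {
                return fmt.Errorf("uuid: invalid input")
            }
            u[n] = hi<<4 | hexTable[s[i+1]]
            n++
            i++ // consume the second digit of the pair
        }
        if n != 16 {
            return fmt.Errorf("uuid: expected 16 bytes, got %d", n)
        }
        return nil
    }

    func main() {
        var u [16]byte
        err := parseUUID("urn:uuid:6ba7b810-9dad-11d1-80b4-00c04fd430c8", &u)
        fmt.Println(u, err)
    }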
Package protocol implements the Language Server Protocol specification in Go. This package contains the structs that map directly to the wire format of the Language Server Protocol. It is a literal transcription, with unmodified comments, and only the changes required to make it Go code. - Names are uppercased to export them. - All fields have JSON tags added to correct the names. - Fields marked with a ? are also marked as "omitempty". - Fields that are "|| null" are made pointers. - Fields that are string or number are left as string. - Fields that are type "number" are made float64.
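To illustrate the transcription rules on a hypothetical specification snippet (this is not one of the actual protocol types):

    package protocol

    // In the specification:
    //
    //     interface Example {
    //         label: string;
    //         detail?: string;
    //         parent: Example | null;
    //         weight: number;
    //     }
    //
    // transcribes to:
    type Example struct {
        Label  string   `json:"label"`
        Detail string   `json:"detail,omitempty"` // "?" fields get omitempty
        Parent *Example `json:"parent"`           // "|| null" fields become pointers
        Weight float64  `json:"weight"`           // "number" fields become float64
    }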
Package json implements semantic processing of JSON as specified in RFC 8259. JSON is a simple data interchange format that can represent primitive data types such as booleans, strings, and numbers, in addition to structured data types such as objects and arrays. Marshal and Unmarshal encode and decode Go values to/from JSON text contained within a []byte. MarshalWrite and UnmarshalRead operate on JSON text by writing to or reading from an io.Writer or io.Reader. MarshalEncode and UnmarshalDecode operate on JSON text by encoding to or decoding from a jsontext.Encoder or jsontext.Decoder. Options may be passed to each of the marshal or unmarshal functions to configure the semantic behavior of marshaling and unmarshaling (i.e., alter how JSON data is understood as Go data and vice versa). jsontext.Options may also be passed to the marshal or unmarshal functions to configure the syntactic behavior of encoding or decoding. The data types of JSON are mapped to/from the data types of Go based on the closest logical equivalent between the two type systems. For example, a JSON boolean corresponds with a Go bool, a JSON string corresponds with a Go string, a JSON number corresponds with a Go int, uint or float, a JSON array corresponds with a Go slice or array, and a JSON object corresponds with a Go struct or map. See the documentation on Marshal and Unmarshal for a comprehensive list of how the JSON and Go type systems correspond. Arbitrary Go types can customize their JSON representation by implementing MarshalerV1, MarshalerV2, UnmarshalerV1, or UnmarshalerV2. This provides authors of Go types with control over how their types are serialized as JSON. Alternatively, users can implement functions that match MarshalFuncV1, MarshalFuncV2, UnmarshalFuncV1, or UnmarshalFuncV2 to specify the JSON representation for arbitrary types. This provides callers of JSON functionality with control over how any arbitrary type is serialized as JSON. A Go struct is naturally represented as a JSON object, where each Go struct field corresponds with a JSON object member. When marshaling, all Go struct fields are recursively encoded in depth-first order as JSON object members except those that are ignored or omitted. When unmarshaling, JSON object members are recursively decoded into the corresponding Go struct fields. Object members that do not match any struct fields, also known as “unknown members”, are ignored by default or rejected if RejectUnknownMembers is specified. The representation of each struct field can be customized in the "json" struct field tag, where the tag is a comma separated list of options. As a special case, if the entire tag is `json:"-"`, then the field is ignored with regard to its JSON representation. The first option is the JSON object name override for the Go struct field. If the name is not specified, then the Go struct field name is used as the JSON object name. JSON names containing commas or quotes, or names identical to "" or "-", can be specified using a single-quoted string literal, where the syntax is identical to the Go grammar for a double-quoted string literal, but instead uses single quotes as the delimiters. By default, unmarshaling uses case-sensitive matching to identify the Go struct field associated with a JSON object name. 
After the name, the following tag options are supported: omitzero: When marshaling, the "omitzero" option specifies that the struct field should be omitted if the field value is zero, as determined by the "IsZero() bool" method if present, otherwise based on whether the field is the zero Go value. This option has no effect when unmarshaling. omitempty: When marshaling, the "omitempty" option specifies that the struct field should be omitted if the field value would have been encoded as a JSON null, empty string, empty object, or empty array. This option has no effect when unmarshaling. string: The "string" option specifies that StringifyNumbers be set when marshaling or unmarshaling a struct field value. This causes numeric types to be encoded as a JSON number within a JSON string, and to be decoded from either a JSON number or a JSON string containing a JSON number. This extra level of encoding is often necessary since many JSON parsers cannot precisely represent 64-bit integers. nocase: When unmarshaling, the "nocase" option specifies that if the JSON object name does not exactly match the JSON name for any of the struct fields, then it attempts to match the struct field using a case-insensitive match that also ignores dashes and underscores. If multiple fields match, the first declared field in breadth-first order takes precedence. This takes precedence even if MatchCaseInsensitiveNames is set to false. This cannot be specified together with the "strictcase" option. strictcase: When unmarshaling, the "strictcase" option specifies that the JSON object name must exactly match the JSON name for the struct field. This takes precedence even if MatchCaseInsensitiveNames is set to true. This cannot be specified together with the "nocase" option. inline: The "inline" option specifies that the JSON representable content of this field type is to be promoted as if it were specified in the parent struct. It is the JSON equivalent of Go struct embedding. A Go embedded field is implicitly inlined unless an explicit JSON name is specified. The inlined field must be a Go struct (that does not implement any JSON methods), jsontext.Value, map[string]T, or an unnamed pointer to such types. When marshaling, inlined fields from a pointer type are omitted if the pointer is nil. Inlined fields of type jsontext.Value and map[string]T are called “inlined fallbacks” as they can represent all possible JSON object members not directly handled by the parent struct. Only one inlined fallback field may be specified in a struct, while many non-fallback fields may be specified. This option must not be specified with any other option (including the JSON name). unknown: The "unknown" option is a specialized variant of the inlined fallback to indicate that this Go struct field contains any number of unknown JSON object members. The field type must be a jsontext.Value, map[string]T, or an unnamed pointer to such types. If DiscardUnknownMembers is specified when marshaling, the contents of this field are ignored. If RejectUnknownMembers is specified when unmarshaling, any unknown object members are rejected regardless of whether an inlined fallback with the "unknown" option exists. This option must not be specified with any other option (including the JSON name). format: The "format" option specifies a format flag used to specialize the formatting of the field value.
The option is a key-value pair specified as "format:value" where the value must be either a literal consisting of letters and numbers (e.g., "format:RFC3339") or a single-quoted string literal (e.g., "format:'2006-01-02'"). The interpretation of the format flag is determined by the struct field type. The "omitzero" and "omitempty" options are mostly semantically identical. The former is defined in terms of the Go type system, while the latter is defined in terms of the JSON type system. Consequently, they behave differently in some circumstances. For example, only a nil slice or map is omitted under "omitzero", while an empty slice or map is omitted under "omitempty" regardless of nilness. The "omitzero" option is useful for types that have a well-defined zero value (e.g., net/netip.Addr) or that have an IsZero method (e.g., time.Time.IsZero). Every Go struct corresponds to a list of JSON representable fields which is constructed by performing a breadth-first search over all struct fields (excluding unexported or ignored fields), where the search recursively descends into inlined structs. The set of non-inlined fields in a struct must have unique JSON names. If multiple fields all have the same JSON name, then the one at shallowest depth takes precedence and the other fields at deeper depths are excluded from the list of JSON representable fields. If multiple fields at the shallowest depth have the same JSON name, but exactly one is explicitly tagged with a JSON name, then that field takes precedence and all others are excluded from the list. This is analogous to Go visibility rules for struct field selection with embedded struct types. Marshaling or unmarshaling a non-empty struct without any JSON representable fields results in a SemanticError. Unexported fields must not have any `json` tags except for `json:"-"`. Unmarshal matches JSON object names with Go struct fields using a case-sensitive match, but can be configured to use a case-insensitive match with the "nocase" option. This permits unmarshaling from inputs that use naming conventions such as camelCase, snake_case, or kebab-case. By default, JSON object names for Go struct fields are derived from the Go field name, but may be specified in the `json` tag. Due to JSON's heritage in JavaScript, the most common naming convention used for JSON object names is camelCase. The "format" tag option can be used to alter the formatting of certain types. JSON objects can be inlined within a parent object similar to how Go structs can be embedded within a parent struct. The inlining rules are similar to those of Go embedding, but operate upon the JSON namespace. Go struct fields can be omitted from the output depending on either the input Go value or the output JSON encoding of the value. The "omitzero" option omits a field if it is the zero Go value or implements an "IsZero() bool" method that reports true. The "omitempty" option omits a field if it encodes as an empty JSON value, which we define as a JSON null or empty JSON string, object, or array. In many cases, the behavior of "omitzero" and "omitempty" are equivalent. If both provide the desired effect, then using "omitzero" is preferred. The exact order of JSON object members can be preserved through the use of a specialized type that implements MarshalerV2 and UnmarshalerV2. Some Go types have a custom JSON representation where the implementation is delegated to some external package. Consequently, the "json" package will not know how to use that external implementation.
For example, the google.golang.org/protobuf/encoding/protojson package implements JSON for all google.golang.org/protobuf/proto.Message types. WithMarshalers and WithUnmarshalers can be used to configure "json" and "protojson" to cooperate. When implementing HTTP endpoints, it is common to be operating with an io.Reader and an io.Writer. The MarshalWrite and UnmarshalRead functions assist in operating on such input/output types. UnmarshalRead reads the entirety of the io.Reader to ensure that io.EOF is encountered without any unexpected bytes after the top-level JSON value. If a type implements encoding.TextMarshaler and/or encoding.TextUnmarshaler, then the MarshalText and UnmarshalText methods are used to encode/decode the value to/from a JSON string. Due to version skew, the set of JSON object members known at compile time may differ from the set of members encountered at execution time. As such, it may be useful to have finer-grain handling of unknown members. This package supports preserving, rejecting, or discarding such members.
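Tying the tag options together in a single sketch (the struct and field names are hypothetical; the module path for jsontext is assumed):

    package example

    import (
        "time"

        "github.com/go-json-experiment/json/jsontext" // module path assumed
    )

    type Movie struct {
        // JSON name override for the Go field name.
        Title string `json:"title"`

        // Omitted when it would encode as an empty JSON string.
        Tagline string `json:"tagline,omitempty"`

        // Omitted when it is the zero time.Time value.
        Created time.Time `json:"created,omitzero"`

        // 64-bit integers are often carried as JSON strings for precision.
        BoxOffice int64 `json:"boxOffice,string"`

        // Format flag interpreted by the field type (time.Time here).
        Released time.Time `json:"released,format:RFC3339"`

        // Inlined fallback collecting unknown object members.
        Unknown jsontext.Value `json:",unknown"`
    }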
Package gcfg reads "INI-style" text-based configuration files with "name=value" pairs grouped into sections (gcfg files). This package is still a work in progress; see the sections below for planned changes. The syntax is based on that used by git config: http://git-scm.com/docs/git-config#_syntax . There are some (planned) differences compared to the git config format: The functions in this package read values into a user-defined struct. Each section corresponds to a struct field in the config struct, and each variable in a section corresponds to a data field in the section struct. The mapping of each section or variable name to fields is done either based on the "gcfg" struct tag or by matching the name of the section or variable, ignoring case. In the latter case, hyphens '-' in section and variable names correspond to underscores '_' in field names. Fields must be exported; to use a section or variable name starting with a letter that is neither upper- nor lower-case, prefix the field name with 'X'. (See https://code.google.com/p/go/issues/detail?id=5763#c4 .) For sections with subsections, the corresponding field in config must be a map, rather than a struct, with string keys and pointer-to-struct values. Values for subsection variables are stored in the map with the subsection name used as the map key. (Note that unlike section and variable names, subsection names are case sensitive.) When using a map, and there is a section with the same section name but without a subsection name, its values are stored with the empty string used as the key. It is possible to provide default values for subsections in the section "default-<sectionname>" (or by setting values in the corresponding struct field "Default_<sectionname>"). The functions in this package panic if config is not a pointer to a struct, or when a field is not of a suitable type (either a struct or a map with string keys and pointer-to-struct values). The section structs in the config struct may contain single-valued or multi-valued variables. Variables of unnamed slice type (that is, a type starting with `[]`) are treated as multi-valued; all others (including named slice types) are treated as single-valued variables. Single-valued variables are handled based on the type as follows. Unnamed pointer types (that is, types starting with `*`) are dereferenced, and if necessary, a new instance is allocated. For types implementing the encoding.TextUnmarshaler interface, the UnmarshalText method is used to set the value. Implementing this method is the recommended way for parsing user-defined types. For fields of string kind, the value string is assigned to the field, after unquoting and unescaping as needed. For fields of bool kind, the field is set to true if the value is "true", "yes", "on" or "1", and set to false if the value is "false", "no", "off" or "0", ignoring case. In addition, single-valued bool fields can be specified with a "blank" value (variable name without equals sign and value); in such case the value is set to true. Predefined integer types [u]int(|8|16|32|64) and big.Int are parsed as decimal or hexadecimal (if having '0x' prefix). (This is to prevent unintuitively handling zero-padded numbers as octal.) Other types having [u]int* as the underlying type, such as os.FileMode and uintptr, allow decimal, hexadecimal, or octal values.
Parsing mode for integer types can be overridden using the struct tag option ",int=mode" where mode is a combination of the 'd', 'h', and 'o' characters (each standing for decimal, hexadecimal, and octal, respectively.) All other types are parsed using fmt.Sscanf with the "%v" verb. For multi-valued variables, each individual value is parsed as above and appended to the slice. If the first value is specified as a "blank" value (variable name without equals sign and value), a new slice is allocated; that is, any values previously set in the slice will be ignored. The types subpackage provides helpers for parsing "enum-like" and integer types. There are 3 types of errors: Programmer errors trigger panics. These should be fixed by the programmer before releasing code that uses gcfg. Data errors cause gcfg to return a non-nil error value. This includes the case when there are extra unknown key-value definitions in the configuration data (extra data). However, on some occasions it is desirable to be able to proceed in situations when the only data error is that of extra data. These errors are handled at a different (warning) priority and can be filtered out programmatically. To ignore extra data warnings, wrap the gcfg.Read*Into invocation in a call to gcfg.FatalOnly. The following is a list of changes under consideration:
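A compact sketch of the struct mapping (ReadStringInto is one of the package's Read*Into functions; the config content and import path are assumptions):

    package main

    import (
        "fmt"

        "gopkg.in/gcfg.v1" // import path assumed
    )

    type Config struct {
        Section struct {
            Name  string
            Multi []string // unnamed slice type: multi-valued variable
        }
    }

    func main() {
        cfgText := "[section]\nname = value\nmulti = a\nmulti = b\n"

        var cfg Config
        if err := gcfg.ReadStringInto(&cfg, cfgText); err != nil {
            panic(err)
        }
        fmt.Println(cfg.Section.Name, cfg.Section.Multi) // value [a b]
    }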
Package jsonschema provides json-schema compilation and validation. Features: The schema is compiled against the version specified in the "$schema" property. If the "$schema" property is missing, the latest draft currently implemented by this library is used. You can force a specific draft to be used when "$schema" is missing, as follows: You can also validate a Go value using the schema.ValidateInterface(interface{}) method, but the argument should not be a user-defined struct. This package supports loading json-schema from a file path and file URL. To load json-schema from an HTTP URL, add the following import: You can validate YAML documents; see https://play.golang.org/p/sJy1qY7dXgA Example_fromString shows how to load a schema from a string. Example_fromStrings shows how to load a schema from more than one string. Example_userDefinedContent shows how to define "hex" contentEncoding and "application/xml" contentMediaType. Example_userDefinedFormat shows how to define an 'odd-number' format. Example_userDefinedLoader shows how to define a custom schema loader; we implement a "map" protocol which serves schemas from a Go map variable.
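For instance (a sketch; method names follow the docs above, but exact signatures differ between versions of this package, so treat them and the import path as assumptions):

    package main

    import (
        "strings"

        "github.com/santhosh-tekuri/jsonschema" // import path assumed
    )

    func main() {
        compiler := jsonschema.NewCompiler()

        // Register a schema resource from a string.
        err := compiler.AddResource("schema.json", strings.NewReader(
            `{"type": "object", "required": ["name"]}`))
        if err != nil {
            panic(err)
        }

        schema, err := compiler.Compile("schema.json")
        if err != nil {
            panic(err)
        }

        // Validate a JSON document supplied as an io.Reader.
        if err := schema.Validate(strings.NewReader(`{"name": "x"}`)); err != nil {
            panic(err)
        }
    }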
Package dosjson provides primitives for working with the Decred JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives to ease this process. In addition, it also provides some additional features such as custom command registration, command categorization, and reflection-based help generation. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with asynchronous transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. Based upon the discussion above, it should be easy to see how the types of this package map into the required parts of the protocol. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two-step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID. Unmarshalling a received Response object is also a two-step process: As above, this approach is used since it provides the caller with access to the fields in the response, such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the NewCmd function, which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, run-time, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. The command handling of this package is built around the concept of registered commands.
This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use. These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications, as well as the provided expected result types, to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempts to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dosjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
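A hedged sketch of both directions follows. NewGetBlockCountCmd is a hypothetical New<Foo>Cmd constructor, and the exact MarshalCmd signature and Response field layout (Result as json.RawMessage, Error implementing the error interface) are assumptions to verify against the package; imports assumed are encoding/json and the dosjson package itself.

    func queryBlockCount() (int64, error) {
        // Marshal a command built with a static New<Foo>Cmd constructor so
        // parameter mistakes are caught at compile time.
        cmd := dosjson.NewGetBlockCountCmd() // hypothetical constructor
        reqBytes, err := dosjson.MarshalCmd(1, cmd)
        if err != nil {
            return 0, err
        }
        _ = reqBytes // send these bytes across the wire...

        // ...and suppose this response arrives back:
        raw := []byte(`{"result":420000,"error":null,"id":1}`)

        // Step 1: unmarshal into the general Response type to reach the ID
        // and Error fields alongside the raw result (field names assumed).
        var resp dosjson.Response
        if err := json.Unmarshal(raw, &resp); err != nil {
            return 0, err
        }
        if resp.Error != nil {
            return 0, resp.Error
        }

        // Step 2: unmarshal the result field into the concrete expected type.
        var blockCount int64
        if err := json.Unmarshal(resp.Result, &blockCount); err != nil {
            return 0, err
        }
        return blockCount, nil
    }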
Package CloudForest implements ensembles of decision trees for machine learning in pure Go (golang to search engines). It allows for a number of related algorithms for classification, regression, feature selection and structure analysis on heterogeneous numerical/categorical data with missing values. These include: Breiman and Cutler's Random Forest for Classification and Regression Adaptive Boosting (AdaBoost) Classification Gradient Boosting Tree Regression Entropy and Cost driven classification L1 regression Feature selection with artificial contrasts Proximity and model structure analysis Roughly balanced bagging for unbalanced classification The API hasn't stabilized yet and may change rapidly. Tests and benchmarks have been performed only on embargoed data sets and cannot yet be released. Library documentation is in code and can be viewed with godoc or live at: http://godoc.org/github.com/ryanbressler/CloudForest Documentation of command line utilities and file formats can be found in README.md, which can be viewed formatted on github: http://github.com/ryanbressler/CloudForest Pull requests and bug reports are welcome. CloudForest was created by Ryan Bressler and is being developed in the Shumelivich Lab at the Institute for Systems Biology for use on genomic/biomedical data with partial support from The Cancer Genome Atlas and the Inova Translational Medicine Institute. CloudForest is intended to provide fast, comprehensible building blocks that can be used to implement ensembles of decision trees. CloudForest is written in Go to allow a data scientist to develop and scale new models and analysis quickly instead of having to modify complex legacy code. Data structures and file formats are chosen with use in multi-threaded and cluster environments in mind. Go's support for function types is used to provide an interface to run code as data is percolated through a tree. This method is flexible enough that it can extend the tree being analyzed. Growing a decision tree using Breiman and Cutler's method can be done in an anonymous function/closure passed to a tree's root node's Recurse method: This allows a researcher to include whatever additional analysis they need (importance scores, proximity etc) in tree growth. The same Recurse method can also be used to analyze existing forests to tabulate scores or extract structure. Utilities like leafcount and errorrate use this method to tabulate data about the tree in collection objects. Decision trees are grown with the goal of reducing "Impurity", which is usually defined as Gini Impurity for categorical targets or mean squared error for numerical targets. CloudForest grows trees against the Target interface, which allows for alternative definitions of impurity. CloudForest includes several alternative targets: Additional targets can be stacked on top of these targets to add boosting functionality: Repeatedly splitting the data and searching for the best split at each node of a decision tree are the most computationally intensive parts of decision tree learning and CloudForest includes optimized code to perform these tasks. Go's slices are used extensively in CloudForest to make it simple to interact with optimized code. Many previous implementations of Random Forest have avoided reallocation by reordering data in place and keeping track of start and end indexes. In Go, slices pointing at the same underlying arrays make this sort of optimization transparent.
For example, a function like: can return left and right slices that point to the same underlying array as the original slice of cases, but these slices should not have their values changed (a standalone illustration of this slice behavior follows these notes). Functions used while searching for the best split also accept pointers to reusable slices and structs to maximize speed by keeping memory allocations to a minimum. BestSplitAllocs contains pointers to these items and its use can be seen in functions like: For categorical predictors, BestSplit will also attempt to intelligently choose between 4 different implementations depending on user input and the number of categories. These include exhaustive, random, and iterative searches for the best combination of categories implemented with bitwise operations against int and big.Int. See BestCatSplit, BestCatSplitIter, BestCatSplitBig and BestCatSplitIterBig. All numerical predictors are handled by BestNumSplit, which relies on Go's sorting package. Training a Random Forest is an inherently parallel process and CloudForest is designed to allow parallel implementations that can tackle large problems while keeping memory usage low by writing and using data structures directly to/from disk. Trees can be grown in separate goroutines. The growforest utility provides an example of this that uses goroutines and channels to grow trees in parallel and write trees to disk as they are finished by the "worker" goroutines. The few summary statistics like mean impurity decrease per feature (importance) can be calculated using thread-safe data structures like RunningMean. Trees can also be grown on separate machines. The .sf stochastic forest format allows several small forests to be combined by concatenation, and the ForestReader and ForestWriter structs allow these forests to be accessed tree by tree (or even node by node) from disk. For data sets that are too big to fit in memory on a single machine, Tree.Grow and FeatureMatrix.BestSplitter can be reimplemented to load candidate features from disk, a distributed database, etc. By default CloudForest uses a fast heuristic for missing values. When proposing a split on a feature with missing data, the missing cases are removed and the impurity value is corrected to use three way impurity, which reduces the bias towards features with lots of missing data: Missing values in the target variable are left out of impurity calculations. This has provided generally good results at a fraction of the computational cost of imputing data. Optionally, feature.ImputeMissing or featurematrix.ImputeMissing can be called before forest growth to impute missing values to the feature mean/mode, which Breiman [2] suggests as a fast method for imputing values. This forest could also be analyzed for proximity (using leafcount or tree.GetLeaves) to do the more accurate proximity weighted imputation Breiman describes. Experimental support is provided for 3 way splitting, which splits missing cases onto a third branch. [2] This has so far yielded mixed results in testing. At some point in the future support may be added for local imputing of missing values during tree growth as described in [3] [1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#missing1 [2] https://code.google.com/p/rf-ace/ [3] http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aoas/1223908043&page=record In CloudForest data is stored using the FeatureMatrix struct which contains Features.
The Feature struct implements storage and methods for both categorical and numerical data, calculations of impurity, etc., and the search for the best split. The Target interface abstracts the methods of Feature that are needed for a feature to be predictable. This allows for the implementation of alternative types of regression and classification. Trees are built from Nodes and Splitters and stored within a Forest. Tree has a Grow method that implements Breiman and Cutler's method (see the extract above) for growing a tree. A GrowForest method is also provided that implements the rest of the method, including sampling cases, but it may be faster to grow the forest to disk as in the growforest utility. Prediction and voting are done using Tree.Vote and CatBallotBox and NumBallotBox, which implement the VoteTallyer interface.
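Returning to the shared-backing-array point made above, a small standalone illustration (plain Go, not CloudForest code):

    package main

    import "fmt"

    func main() {
        cases := []int{10, 11, 12, 13, 14}

        // "Split" the cases without reallocating: both halves are views
        // into the same underlying array as the original slice.
        left, right := cases[:2], cases[2:]
        fmt.Println(left, right) // [10 11] [12 13 14]

        // Because the backing array is shared, a write through one view is
        // visible through the others -- which is why split functions hand
        // back slices that must be treated as read-only.
        left[0] = -1
        fmt.Println(cases[0]) // -1
    }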
Package protobuf implements Protocol Buffers reflectively, using Go types to define message formats. This approach provides convenience similar to Gob encoding, but with a widely-used and language-neutral wire format. For general information on Protocol Buffers see http://protobuf.googlecode.com. In contrast with goprotobuf, this package does not require users to write or compile .proto files; you just define the message formats you want as Go struct types. Consider, for example, this message format definition from the Protocol Buffers overview: The following Go type and const definitions express exactly the same format, for purposes of encoding and decoding with this protobuf package: To encode a message, you simply call the Encode() function with a pointer to the struct you wish to encode, and Encode() returns a []byte slice containing the protobuf-encoded struct: To decode an encoded message, simply call Decode() on the byte-slice: If you want to interoperate with code in other languages using the same message formats, you may of course still end up writing .proto files for the code in those other languages. However, defining message formats with native Go types enables these types to be tailored to the code using them without affecting wire compatibility, such as by attaching useful methods to these struct types. The translation between a Go struct definition and a basic Protocol Buffers message format definition is straightforward; the rules are as follows. A message definition in a .proto file translates to a Go struct, whose fields are implicitly assigned consecutive numbers starting from 1. If you need to leave gaps in the field number sequence (e.g., to delete an obsolete field without breaking wire compatibility), then you can skip that field number using a blank Go field, like this: A 'required' protobuf field translates to a plain field of a corresponding type in the Go struct. The following table summarizes the correspondence between .proto definition types and Go field types: An 'optional' protobuf field is expressed as a pointer field in Go. Encode() will transmit the field only if the pointer is non-nil. Decode() will instantiate the pointed-to type and fill in the pointer if the field is present in the message being decoded, leaving the pointer unmodified (usually nil) if the field is not present. A 'repeated' protobuf field translates to a slice field in Go. Slices of primitive bool, integer, and float types are encoded and decoded in packed format, as if the [packed=true] option had been declared for the field in the .proto file. For flexibility and convenience, struct fields may have interface types, which this package interprets as having dynamic types to be bound at runtime. Encode() follows the interface's implicit pointer and uses reflection to determine the referred-to object's actual type for encoding. Decode() takes an optional map of interface types to constructor functions, which it uses to instantiate concrete types for interfaces while decoding. Furthermore, if the instantiated types support the Encoding interface, Encode() and Decode() will invoke the methods of that interface, allowing objects to implement their own custom encoding/decoding methods. This package does not try to support all possible protobuf formats. It currently does not support nonzero default value declarations for enums, the legacy unpacked formats for repeated numeric fields, messages with extremely sparse field numbering, or other more exotic features like extensions or oneof.
If you need to interoperate with existing protobuf code using these features, then you should probably use goprotobuf, at least for those particular message formats. Another downside of this reflective approach to protobuf implementation is that reflective code is generally less efficient than statically generated code, such as gogoprotobuf produces. If we decide we want the convenience of format definitions in Go with the runtime performance of static code generation, we could in principle achieve that by adding a "Go-format" message format compiler frontend to goprotobuf or gogoprotobuf, but we leave this as an exercise for the reader. This example defines, encodes, and decodes a Person message format equivalent to the example used in the Protocol Buffers overview.
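The inline code for this example did not survive extraction, so here is a hedged, self-contained reconstruction. It assumes the package-level Encode/Decode signatures described above and an import path of github.com/dedis/protobuf; the blank field illustrates the field-number-skipping rule, and all type names are illustrative.

    package main

    import (
        "fmt"

        "github.com/dedis/protobuf" // assumed import path
    )

    type PhoneType uint32 // enum

    const (
        MobilePhone PhoneType = iota
        HomePhone
        WorkPhone
    )

    type PhoneNumber struct {
        Number string     // required field 1
        Type   *PhoneType // optional field 2 (pointer => optional)
    }

    type Person struct {
        Name  string        // required field 1
        _     struct{}      // blank field: skips field number 2 (e.g. a deleted field)
        Id    int32         // required field 3
        Email *string       // optional field 4: sent only when non-nil
        Phone []PhoneNumber // repeated field 5
    }

    func main() {
        email := "alice@example.com"
        p := Person{Name: "Alice", Id: 123, Email: &email,
            Phone: []PhoneNumber{{Number: "555-1212"}}}

        buf, err := protobuf.Encode(&p) // encode to the protobuf wire format
        if err != nil {
            panic(err)
        }

        var q Person
        if err := protobuf.Decode(buf, &q); err != nil { // decode it back
            panic(err)
        }
        fmt.Println(q.Name, q.Id, *q.Email, q.Phone[0].Number)
    }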
Package uuid can be used to generate and parse universally unique identifiers, a standardized format in the form of a 128-bit number. http://tools.ietf.org/html/rfc4122
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples Doing things this way is actually the way the standard library does, see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator only returns InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary; most of the time it isn't), then type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-field validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter, i.e. excludesall=|, you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba). When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Allows conditional validation: for example, if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow.
Multidimensional nesting is also supported; each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys'; please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported; each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC().
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another field's value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
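A minimal sketch of the Start/End date pattern these cross-field tags keep referring to (struct illustrative):

    type Booking struct {
        Start time.Time `validate:"required"`
        // End must be strictly after Start; gtfield compares against
        // another field on the same struct.
        End time.Time `validate:"required,gtfield=Start"`
    }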
This validates that a string value contains ASCII alpha characters only. This validates that a string value contains ASCII alphanumeric characters only. This validates that a string value contains unicode alpha characters only. This validates that a string value contains unicode alphanumeric characters only. This validates that a string value contains a basic numeric value (basic excludes exponents etc...); for integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#). This validates that a string value contains a valid rgb color. This validates that a string value contains a valid rgba color. This validates that a string value contains a valid hsl color. This validates that a string value contains a valid hsla color. This validates that a string value contains a valid email. This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url. This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp://. This validates that a string value contains a valid uri. This will accept any uri the golang request uri accepts. This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC 4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the three formats P2PKH, P2SH and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28 This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value. This validates that a string value ends with the supplied string value. This validates that a string value contains a valid isbn10 or isbn13 value.
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64. This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag, including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format. This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function.
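A handful of the string and network validators above in tag form (fields illustrative):

    type Server struct {
        Email   string `validate:"required,email"`
        Website string `validate:"omitempty,url"`      // must include a scheme, e.g. https://
        Theme   string `validate:"omitempty,hexcolor"` // e.g. #36c
        Addr    string `validate:"ip"`                 // v4 or v6
        Network string `validate:"cidr"`               // e.g. 10.0.0.0/8
        Host    string `validate:"hostname_rfc1123"`
    }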
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided; this is by design: bad code like that should not make it to production.
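Tying the notes above together, a hedged sketch of registering an alias and a custom (non standard style) rule, then handling the documented error types; the User struct and the notblank body are illustrative, and the snippet assumes imports of fmt, strings and the validator package:

    validate := validator.New()

    // An alias expands one tag into several baked-in tags.
    validate.RegisterAlias("iscolor", "hexcolor|rgb|rgba|hsl|hsla")

    // A custom rule is registered just like the non standard validators are.
    _ = validate.RegisterValidation("notblank", func(fl validator.FieldLevel) bool {
        return len(strings.TrimSpace(fl.Field().String())) > 0
    })

    type User struct {
        Name  string `validate:"notblank"`
        Email string `validate:"required,email"`
    }

    if err := validate.Struct(User{}); err != nil {
        // Bad validator input comes back as *validator.InvalidValidationError.
        if _, ok := err.(*validator.InvalidValidationError); ok {
            panic(err)
        }
        // Everything else is ValidationErrors, a slice of FieldError.
        for _, fe := range err.(validator.ValidationErrors) {
            fmt.Println(fe.Field(), "failed on the", fe.Tag(), "tag")
        }
    }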
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/master/_examples Validator is designed to be thread-safe and used as a singleton instance. It caches information about your struct and validations, in essence only parsing your validation tags once per struct type. Using multiple instances neglects the benefit of caching. The non-thread-safe functions are explicitly marked as such in the documentation. Doing things this way is actually the way the standard library does, see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator only returns InvalidValidationError for bad validation input, nil, or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary; most of the time it isn't), then type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-field validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter, i.e. excludesall=|, you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba). When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run.
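Given the singleton guidance at the top of this package's notes, a typical setup looks like the sketch below (WithRequiredStructEnabled is the option referenced later in this doc; the is-awesome tag is illustrative):

    // A single package-level instance: tag parsing is cached per struct
    // type, so reusing it keeps validation cheap.
    var validate = validator.New(validator.WithRequiredStructEnabled())

    func init() {
        // Custom validation functions are registered once on the singleton.
        _ = validate.RegisterValidation("is-awesome", func(fl validator.FieldLevel) bool {
            return fl.Field().String() == "awesome"
        })
    }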
Allows conditional validation: for example, if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. Allows skipping validation when the value is nil (same as omitempty, but only for nil values). This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported; each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys'; please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported; each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For booleans ensures value is not false. For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value when using WithRequiredStructEnabled. The field under validation must be present and not empty only if all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must not be present or must be empty only if all the other specified fields are equal to the value following the specified field.
For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must not be present or must be empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, len will ensure that the value is equal to the duration given in the parameter. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, max will ensure that the value is less than or equal to the duration given in the parameter. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, min will ensure that the value is greater than or equal to the duration given in the parameter. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, eq will ensure that the value is equal to the duration given in the parameter. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, ne will ensure that the value is not equal to the duration given in the parameter. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. To match strings with spaces in them, include the target string between single quotes. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gt will ensure that the value is greater than the duration given in the parameter. Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gte will ensure that the value is greater than or equal to the duration given in the parameter. For numbers, this will ensure that the value is less than the parameter given.
For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lt will ensure that the value is less than the duration given in the parameter. Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lte will ensure that the value is less than or equal to the duration given in the parameter. This will validate the field value against another field's value either within a struct or passed in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or passed in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
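Drawing together the conditional-requirement tags and the time.Duration behavior described above (fields and values illustrative):

    type Order struct {
        ShippingMethod string `validate:"required,oneof=pickup delivery"`

        // Required only when ShippingMethod == "delivery".
        Address string `validate:"required_if=ShippingMethod delivery"`

        // Must be empty when ShippingMethod == "pickup".
        Courier string `validate:"excluded_if=ShippingMethod pickup"`

        // Duration parameters are written like time.ParseDuration input.
        Timeout time.Duration `validate:"gt=0,lte=1h"`

        Retries int `validate:"min=0,max=10"`
    }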
This validates that a string value contains ASCII alpha characters only. This validates that a string value contains ASCII alphanumeric characters only. This validates that a string value contains unicode alpha characters only. This validates that a string value contains unicode alphanumeric characters only. This validates that a string value can successfully be parsed into a boolean with strconv.ParseBool. This validates that a string value contains number values only. For integers or floats it returns true. This validates that a string value contains a basic numeric value (basic excludes exponents etc...); for integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#). This validates that a string value contains only lowercase characters. An empty string is not a valid lowercase string. This validates that a string value contains only uppercase characters. An empty string is not a valid uppercase string. This validates that a string value contains a valid rgb color. This validates that a string value contains a valid rgba color. This validates that a string value contains a valid hsl color. This validates that a string value contains a valid hsla color. This validates that a string value contains a valid E.164 phone number https://en.wikipedia.org/wiki/E.164 (ex. +1123456789). This validates that a string value contains a valid email. This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value is valid JSON. This validates that a string value is a valid JWT. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid file path, that the file exists on the machine, and that it is an image. This is done using os.Stat and github.com/gabriel-vasile/mimetype. This validates that a string value contains a valid file path but does not validate the existence of that file. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url. This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp://. This validates that a string value contains a valid uri. This will accept any uri the golang request uri accepts. This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base32 value. Although an empty string is valid base32, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC 4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value, but without = padding, according to the RFC 4648 spec, section 3.2.
Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the three formats P2PKH, P2SH and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value. This validates that a string value ends with the supplied string value. This validates that a string value does not start with the supplied string value. This validates that a string value does not end with the supplied string value. This validates that a string value contains a valid isbn10 or isbn13 value. This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains a valid ULID value. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64. This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address.
This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag, including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format. This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid directory but does not validate the existence of that directory. This is done using os.Stat, which is a platform independent function. It is safest to suffix the string with os.PathSeparator if the directory may not exist at the time of validation. This validates that a string value contains a valid DNS hostname and port that can be used to validate fields typically passed to sockets and connections. This validates that a string value is a valid datetime based on the supplied datetime format. The supplied format must match the official Go time format layout as documented in https://golang.org/pkg/time/ This validates that a string value is a valid country code based on the iso3166-1 alpha-2 standard. See: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on the iso3166-1 alpha-3 standard. See: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on the iso3166-1 alpha-numeric standard. See: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid BCP 47 language tag, as parsed by language.Parse. More information on https://pkg.go.dev/golang.org/x/text/language BIC (SWIFT code) This validates that a string value is a valid Business Identifier Code (SWIFT code), defined in ISO 9362. More information on https://www.iso.org/standard/60390.html This validates that a string value is a valid DNS label, as defined in RFC 1035. More information on https://datatracker.ietf.org/doc/html/rfc1035 This validates that a string value is a valid time zone based on the time zone database present on the system.
Although the empty value and the Local value are allowed by the time.LoadLocation golang function, they are not allowed by this validator. More information on https://golang.org/pkg/time/#LoadLocation This validates that a string value is a valid semver version, as defined in Semantic Versioning 2.0.0. More information on https://semver.org/ This validates that a string value is a valid CVE id, as defined by CVE MITRE. More information on https://cve.mitre.org/ This validates that a string value contains a valid credit card number using the Luhn algorithm. This validates that a string or (u)int value contains a valid checksum using the Luhn algorithm. This validates that a string is a valid 24 character hexadecimal string or valid connection string. Example: This validates that a string value contains a valid cron expression. This validates that a string is valid for use with SpiceDb for the indicated purpose. If no purpose is given, a purpose of 'id' is assumed. Alias Validators and Tags NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided; this is by design: bad code like that should not make it to production.
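A few of the newer validators above in tag form (fields illustrative):

    type Event struct {
        When     string `validate:"datetime=2006-01-02"` // Go reference-layout format
        Country  string `validate:"iso3166_1_alpha2"`    // e.g. "US"
        Language string `validate:"bcp47_language_tag"`  // e.g. "en-US"
        TZ       string `validate:"timezone"`            // e.g. "America/New_York"
        Version  string `validate:"semver"`              // e.g. "1.2.3"
    }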
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/v9/_examples Doing things this way is actually how the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator returns only InvalidValidationError for bad validation input, nil or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary, most of the time it isn't) and type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-field validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter (i.e. excludesall=|) you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba). When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run.
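To make the error-handling flow described above concrete, here is a minimal sketch using the v9 import path. The struct and its fields are invented for illustration:

    package main

    import (
        "fmt"

        validator "gopkg.in/go-playground/validator.v9"
    )

    type User struct {
        Name  string `validate:"required"`
        Email string `validate:"required,email"`
    }

    func main() {
        validate := validator.New()

        err := validate.Struct(User{Email: "not-an-email"})
        if err != nil {
            // Only bad input (e.g. a nil value) yields an
            // InvalidValidationError; most code can skip this check.
            if _, ok := err.(*validator.InvalidValidationError); ok {
                fmt.Println(err)
                return
            }
            // Everything else is a ValidationErrors ([]FieldError).
            for _, fe := range err.(validator.ValidationErrors) {
                fmt.Println(fe.Namespace(), "failed on the", fe.Tag(), "tag")
            }
        }
    }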
Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported, each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC().
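As a hedged illustration of dive and keys/endkeys from the description above (the struct and field names are invented, and a *validator.Validate instance is assumed as in the earlier sketch):

    type Settings struct {
        // dive: each element of Emails must itself be a valid email.
        Emails []string `validate:"required,dive,email"`

        // Between keys and endkeys the tags apply to the map keys
        // (alphanum); the tags after endkeys apply to the values.
        Scores map[string]int `validate:"dive,keys,alphanum,endkeys,gte=0,lte=100"`
    }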
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another field's value either within a struct or passed-in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or passed-in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
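For instance, the Start/End date scenario mentioned above might be written like this (a sketch with invented names; the time package must be imported):

    type Event struct {
        Start time.Time `validate:"required"`
        // gtfield finds Start on the same struct, one level up.
        End time.Time `validate:"required,gtfield=Start"`
    }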
This validates that a string value contains ASCII alpha characters only This validates that a string value contains ASCII alphanumeric characters only This validates that a string value contains unicode alpha characters only This validates that a string value contains unicode alphanumeric characters only This validates that a string value contains a basic numeric value. Basic excludes exponents etc.; for integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the supported formats (P2PKH, P2SH) and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28 This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value This validates that a string value ends with the supplied string value This validates that a string value contains a valid isbn10 or isbn13 value.
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function.
Alias Validators and Tags NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually, like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
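As a sketch of alias registration (RegisterAlias is the documented hook; the alias name here is invented):

    validate := validator.New()

    // Wherever the alias appears in a tag, it expands to the
    // list of tags on the right.
    validate.RegisterAlias("fav_color", "hexcolor|rgb|rgba|hsl|hsla")

    type Theme struct {
        Accent string `validate:"fav_color"`
    }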
IronFunctions daemon. Refer to the detailed documentation at https://github.com/iron-io/functions/tree/master/docs Environment Variables: DB_URL The database URL to use, in URL format. See [Databases](databases/README.md) for more information. Default: BoltDB in current working directory `bolt.db`. MQ_URL The message queue to use, in URL format. See [Message Queues](mqs/README.md) for more information. Default: BoltDB in current working directory `queue.db`. API_URL The primary IronFunctions API URL that this instance will talk to. In a production environment, this would be your load balancer URL. PORT Sets the port to run on. Default: `8080`. NUM_ASYNC The number of async runners in the functions process. Default: `1`. LOG_LEVEL Set to `DEBUG` to enable debugging. Default: `INFO`.
Package beanstalk provides a beanstalk client. The Producer is used to put jobs into tubes. It provides a connection pool: Putting a job in a tube is done by calling Put, which will select a random connection for its operation: If a Put operation fails on a connection, another connection in the pool will be selected for a retry. The Consumer is used to reserve jobs from tubes. It provides a connection pool: The ratio of Multiply and NumGoroutines is important. Multiply determines the size of the connection pool and NumGoroutines determines how many reserved jobs you have in-flight. If you have a limited number of connections, but a high number of reserved jobs in-flight, your TCP connection pool might experience congestion and your processing speed will suffer. Although the ratio depends on the speed by which jobs are processed, a good rule of thumb is 1:10. Reserving jobs from the tubes specified in NewConsumer is done by calling Receive, which will reserve jobs on any of the connections in the pool: If the context passed to Receive is cancelled, Receive will finish processing the jobs it has reserved before returning. When Receive offers a job, the goroutine is responsible for processing that job and finishing it up. A job can either be deleted, released or buried: If the Producer and Consumer abstractions are too high-level, then Conn provides the lower-level abstraction of a single connection to a beanstalk server: The Config structure offers hooks for info and error logs that allow hooking into a custom log solution. NewProducer, NewConsumer and Dial take a URI or a list of URIs as their first argument, which can be specified in various formats. In the above examples the beanstalk server was referenced by the host:port notation. This package also supports URI formats like beanstalk:// for a plaintext connection, and beanstalks:// or tls:// for encrypted connections. In the case of encrypted connections, if no port has been specified it will default to port 11400, as opposed to the default port of 11300.
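A hedged sketch of the Producer/Consumer flow described above, against a local server. The exact signatures (the PutParams fields and the shape of the Receive callback) are assumptions drawn from this description, so check the package documentation before relying on them:

    ctx := context.Background()

    producer, err := beanstalk.NewProducer([]string{"localhost:11300"}, beanstalk.Config{})
    if err != nil {
        log.Fatal(err)
    }
    // Put selects a random connection from the pool.
    if _, err := producer.Put(ctx, "emails", []byte(`{"to":"a@example.com"}`), beanstalk.PutParams{}); err != nil {
        log.Print(err)
    }

    // One connection feeding ten goroutines: the 1:10 rule of thumb.
    consumer, err := beanstalk.NewConsumer([]string{"localhost:11300"}, []string{"emails"},
        beanstalk.Config{Multiply: 1, NumGoroutines: 10})
    if err != nil {
        log.Fatal(err)
    }
    consumer.Receive(ctx, func(ctx context.Context, job *beanstalk.Job) {
        // Process the job, then delete, release or bury it.
        if err := job.Delete(ctx); err != nil {
            log.Print(err)
        }
    })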
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples at https://github.com/go-playground/validator/tree/master/_examples Validator is designed to be thread-safe and used as a singleton instance. It caches information about your struct and validations, in essence only parsing your validation tags once per struct type. Using multiple instances negates the benefit of caching. The non-thread-safe functions are explicitly marked as such in the documentation. Doing things this way is actually how the standard library does it; see the file.Open method here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator returns only InvalidValidationError for bad validation input, nil or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary, most of the time it isn't) and type cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-field validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter (i.e. excludesall=|) you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba). When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run.
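A minimal sketch of registering a custom validation function, per the hook described above (the tag name and rule are invented; fmt and log imports assumed):

    validate := validator.New()

    // The function reports whether the field passes the custom rule.
    err := validate.RegisterValidation("is-awesome", func(fl validator.FieldLevel) bool {
        return fl.Field().String() == "awesome"
    })
    if err != nil {
        log.Fatal(err)
    }

    type Thing struct {
        Name string `validate:"is-awesome"`
    }
    fmt.Println(validate.Struct(Thing{Name: "awesome"})) // <nil>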
Allows conditional validation, for example if a field is not set with a value (Determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. Allows skipping validation if the value is nil (same as omitempty, but only for nil values). This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow. Multidimensional nesting is also supported, each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported, each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For booleans ensures value is not false. For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value when using WithRequiredStructEnabled. The field under validation must be present and not empty only if all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Example: The field under validation must not be present, or must be empty, only if all the other specified fields are equal to the value following the specified field.
For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: The field under validation must not be present or empty unless all the other specified fields are equal to the value following the specified field. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. For structs ensures value is not the zero value. Examples: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, len will ensure that the value is equal to the duration given in the parameter. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, max will ensure that the value is less than or equal to the duration given in the parameter. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, min will ensure that the value is greater than or equal to the duration given in the parameter. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, eq will ensure that the value is equal to the duration given in the parameter. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. Example #1 Example #2 (time.Duration) For time.Duration, ne will ensure that the value is not equal to the duration given in the parameter. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. To match strings with spaces in them, include the target string between single quotes. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gt will ensure that the value is greater than the duration given in the parameter. Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, gte will ensure that the value is greater than or equal to the duration given in the parameter. For numbers, this will ensure that the value is less than the parameter given.
For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lt will ensure that the value is less than the duration given in the parameter. Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). Example #3 (time.Duration) For time.Duration, lte will ensure that the value is less than or equal to the duration given in the parameter. This will validate the field value against another field's value either within a struct or passed-in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or passed-in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers, time.Duration and time.Time types, this will validate the field value against another field's value either within a struct or passed-in field. Usage examples are for validation of a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
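To illustrate the time.Duration-aware comparisons described above, a small sketch (field names invented; the parameters use Go duration syntax):

    type JobSpec struct {
        // min/max parse their parameters as time.Duration values.
        Timeout time.Duration `validate:"min=1s,max=1m"`

        // Cross-field with durations: Grace must exceed Timeout.
        Grace time.Duration `validate:"gtfield=Timeout"`
    }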
This validates that a string value contains ASCII alpha characters only This validates that a string value contains ASCII alphanumeric characters only This validates that a string value contains unicode alpha characters only This validates that a string value contains unicode alphanumeric characters only This validates that a string value can successfully be parsed into a boolean with strconv.ParseBool This validates that a string value contains number values only. For integers or floats it returns true. This validates that a string value contains a basic numeric value. Basic excludes exponents etc.; for integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#) This validates that a string value contains only lowercase characters. An empty string is not a valid lowercase string. This validates that a string value contains only uppercase characters. An empty string is not a valid uppercase string. This validates that a string value contains a valid rgb color This validates that a string value contains a valid rgba color This validates that a string value contains a valid hsl color This validates that a string value contains a valid hsla color This validates that a string value contains a valid E.164 Phone number https://en.wikipedia.org/wiki/E.164 (ex. +1123456789) This validates that a string value contains a valid email This may not conform to all possibilities of any rfc standard, but neither does any email provider accept all possibilities. This validates that a string value is valid JSON This validates that a string value is a valid JWT This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid file path and that the file exists on the machine and is an image. This is done using os.Stat and github.com/gabriel-vasile/mimetype This validates that a string value contains a valid file path but does not validate the existence of that file. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid url This will accept any url the golang request uri accepts but must contain a scheme, for example http:// or rtmp:// This validates that a string value contains a valid uri This will accept any uri the golang request uri accepts This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base32 value. Although an empty string is valid base32 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 value. Although an empty string is valid base64 this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value, but without = padding, according to the RFC4648 spec, section 3.2.
Although an empty string is a valid base64 URL safe value, this will report an empty string as an error, if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the three formats P2PKH, P2SH and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value This validates that a string value ends with the supplied string value This validates that a string value does not start with the supplied string value This validates that a string value does not end with the supplied string value This validates that a string value contains a valid isbn10 or isbn13 value. This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains a valid ULID value. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64 This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. 
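Many of the string-format rules above are applied purely through tags; for example (a sketch with invented field names):

    type Signup struct {
        Email   string `validate:"required,email"`
        ID      string `validate:"uuid4"`
        Phone   string `validate:"omitempty,e164"`
        Website string `validate:"omitempty,url"`
        Wallet  string `validate:"omitempty,btc_addr"`
    }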
This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform independent function. This validates that a string value contains a valid directory but does not validate the existence of that directory. This is done using os.Stat, which is a platform independent function. It is safest to suffix the string with os.PathSeparator if the directory may not exist at the time of validation. This validates that a string value contains a valid DNS hostname and port that can be used to validate fields typically passed to sockets and connections. This validates that a string value is a valid datetime based on the supplied datetime format. The supplied format must match the official Go time format layout as documented in https://golang.org/pkg/time/ This validates that a string value is a valid country code based on the iso3166-1 alpha-2 standard. See: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on the iso3166-1 alpha-3 standard. See: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid country code based on the iso3166-1 alpha-numeric standard. See: https://www.iso.org/iso-3166-country-codes.html This validates that a string value is a valid BCP 47 language tag, as parsed by language.Parse. More information at https://pkg.go.dev/golang.org/x/text/language BIC (SWIFT code) This validates that a string value is a valid Business Identifier Code (SWIFT code), defined in ISO 9362. More information at https://www.iso.org/standard/60390.html This validates that a string value is a valid DNS RFC 1035 label, defined in RFC 1035. More information at https://datatracker.ietf.org/doc/html/rfc1035 This validates that a string value is a valid time zone based on the time zone database present on the system.
Although the empty value and the Local value are allowed by Go's time.LoadLocation function, they are not allowed by this validator. More information at https://golang.org/pkg/time/#LoadLocation This validates that a string value is a valid semver version, as defined in Semantic Versioning 2.0.0. More information at https://semver.org/ This validates that a string value is a valid CVE id, as defined by CVE MITRE. More information at https://cve.mitre.org/ This validates that a string value contains a valid credit card number using the Luhn algorithm. This validates that a string or (u)int value contains a valid checksum using the Luhn algorithm. This validates that a string is a valid 24 character hexadecimal string or valid connection string. Example: This validates that a string value contains a valid cron expression. This validates that a string is valid for use with SpiceDb for the indicated purpose. If no purpose is given, a purpose of 'id' is assumed. Alias Validators and Tags NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non standard validator must be registered manually, like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non standard validators: This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
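For the non standard validators mentioned above, registration follows the same pattern as custom functions. A sketch using the NotBlank rule from the non-standard subpackage, assuming the v10 module layout:

    import (
        "github.com/go-playground/validator/v10"
        "github.com/go-playground/validator/v10/non-standard/validators"
    )

    func newValidator() *validator.Validate {
        validate := validator.New()
        // "notblank" rejects values that are empty after trimming whitespace.
        _ = validate.RegisterValidation("notblank", validators.NotBlank)
        return validate
    }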
Package qs encodes structs into url.Values. The package exports the `NewEncoder()` function to create an encoder; use the `WithTagAlias()` option to register a custom tag alias (the default is `qs`). The Encoder has `Values()` and `Encode()` functions to encode structs into url.Values. Supported data types: Example Ignoring Fields Omitempty By default, the package encodes time.Time values in RFC3339 format. Include the `"second"` or `"millis"` option to signal that the field should be encoded as seconds or milliseconds. Slices and arrays default to encoding into multiple URL values of the same value name. Include the `comma` option to signal that the field should be encoded as a single comma-delimited value. Include the `bracket` option to signal that the multiple URL values should have "[]" appended to the value name. The `index` option will append an index number with brackets to the value name. All nested structs are encoded, including the parent value name with brackets for scoping. Custom types can implement `EncodeParam` to encode themselves into a query param, and `IsZero` to report whether the value is zero, which determines whether it should be omitted when encoding. Limitation
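Returning to the encoding options above, a hedged sketch (the struct and values are invented; the tag options follow the names used in this description):

    type ListQuery struct {
        Tags  []string  `qs:"tags,comma"`     // one comma-delimited value
        Page  int       `qs:"page,omitempty"` // dropped when zero
        Since time.Time `qs:"since,second"`   // encoded as seconds
    }

    encoder := qs.NewEncoder()
    values, err := encoder.Values(&ListQuery{
        Tags:  []string{"go", "http"},
        Page:  2,
        Since: time.Now(),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(values.Encode()) // e.g. page=2&since=...&tags=go%2Chttp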
Package goproperties implements read operations on .properties files. .properties is a file extension for files mainly used in Java-related technologies to store the configurable parameters of an application. They can also be used for storing strings for internationalization and localization; these are known as Property Resource Bundles. Each parameter is stored as a pair of strings, one storing the name of the parameter (called the key), and the other storing the value. Each line in a .properties file normally stores a single property. Several formats are possible for each line, including key=value, key = value, key:value, and key value. .properties files can use the number sign (#) or the exclamation mark (!) as the first non-blank character in a line to denote that all text following it is a comment. The backslash is used to escape a character. An example of a properties file is provided at the end of this description. In that example, website would be a key, and its corresponding value would be http://en.wikipedia.org/. While the number sign and the exclamation mark mark text as comments, they have no effect when part of a property. Thus, the key message has the value Welcome to Wikipedia! and not Welcome to Wikipedia. Note also that all of the whitespace in front of Wikipedia! is excluded completely. The encoding of a .properties file is ISO-8859-1, also known as Latin-1. All non-Latin-1 characters must be entered by using Unicode escape characters, e.g. \uHHHH where HHHH is a hexadecimal index of the character in the Unicode character set. This allows for using .properties files as resource bundles for localization. A non-Latin-1 text file can be converted to a correct .properties file by using the native2ascii tool that is shipped with the JDK or by using a tool, such as po2prop, that manages the transformation from a bilingual localization format into .properties escaping. From Wikipedia, the free encyclopedia http://en.wikipedia.org/wiki/.properties
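The sample file the paragraph above refers to looks like this (the classic Wikipedia sample, with the website and message keys and the escaped line continuation):

    # You are reading the ".properties" entry.
    ! The exclamation mark can also mark text as comments.
    website = http://en.wikipedia.org/
    language = English
    # The backslash below tells the application to
    # continue reading the value onto the next line.
    message = Welcome to \
              Wikipedia!
    # Unicode
    tab : \u0009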
Package gofpdf implements a PDF document generator with high level support for text, drawing and images. - UTF-8 support - Choice of measurement unit, page format and margins - Page header and footer management - Automatic page breaks, line breaks, and text justification - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images - Colors, gradients and alpha channel transparency - Outline bookmarks - Internal and external links - TrueType, Type1 and encoding support - Page compression - Lines, Bézier curves, arcs, and ellipses - Rotation, scaling, skewing, translation, and mirroring - Clipping - Document protection - Layers - Templates - Barcodes - Charting facility - Import PDFs as templates gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you. If you currently use the $GOPATH scheme, install the package with the following command. To test the installation, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. 
Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory.
Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). To add a TrueType UTF-8 encoded font, use AddUTF8Font() or AddUTF8FontFromBytes(). Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode.

In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example: “./makefont --embed --enc=../font/cp1252.map calligra.ttf”. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method; a brief sketch follows.
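This sketch shows both routes, with assumed file names: the Noto file comes from its distributor, calligra.json is the definition produced by the makefont run above, and ./font is an assumed directory holding them.

    package main

    import (
        "fmt"

        "github.com/jung-kurt/gofpdf" // assumed import path
    )

    func main() {
        // The fourth argument names the directory holding font files.
        pdf := gofpdf.New("P", "mm", "A4", "./font")

        // UTF-8 TrueType font: no makefont step is needed.
        pdf.AddUTF8Font("noto", "", "NotoSansSC-Regular.ttf")

        // Non-UTF-8 font: load the JSON definition produced by makefont.
        pdf.AddFont("calligra", "", "calligra.json")

        pdf.AddPage()
        pdf.SetFont("noto", "", 14)
        pdf.RTL() // text added from here on runs right-to-left
        pdf.Cell(40, 10, "...")
        pdf.LTR()
        pdf.SetFont("calligra", "", 14)
        pdf.Cell(40, 10, "Hello")

        if err := pdf.OutputFileAndClose("fonts.pdf"); err != nil {
            fmt.Println(err)
        }
    }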
Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts.

The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode.

gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory.

Here are guidelines for making submissions. Your change should

- be compatible with the MIT License
- be properly documented
- be formatted with go fmt
- include an example in fpdf_test.go if appropriate
- conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings
- not diminish test coverage

Pull requests are the preferred means of accepting your changes.

gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below.

This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, and enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage. Benoit KUGLER contributed support for rectangles with corners of unequal radius, for modification times, and for file attachments and annotations.

Planned enhancements include:

- Remove all legacy code page font support; use UTF-8 exclusively
- Improve test coverage as reported by the coverage tool

This example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
Multiprecision decimal numbers. For floating-point formatting only; not general purpose. The only operations are assign and (binary) left/right shift. Binary floating point can be done precisely in multiprecision decimal because 2 divides 10; decimal floating point cannot be done precisely in multiprecision binary.

Package decimal implements an arbitrary precision fixed-point decimal. The zero value of a Decimal is 0, as you would expect, so a Decimal field can be used in a struct without explicit initialization. The best way to create a new Decimal is with decimal.NewFromString; both uses are sketched below. NOTE: this can "only" represent numbers with a maximum of 2^31 digits after the decimal point.
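A minimal sketch, assuming the conventional github.com/shopspring/decimal import path; the struct and field names are illustrative.

    package main

    import (
        "fmt"

        "github.com/shopspring/decimal" // assumed import path
    )

    // Invoice holds a Decimal as an ordinary struct field; the zero
    // value is 0 and needs no initialization.
    type Invoice struct {
        Total decimal.Decimal
    }

    func main() {
        // NewFromString parses the textual form exactly, with no
        // binary-float rounding on the way in.
        d, err := decimal.NewFromString("-123.4567")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(d.String()) // -123.4567

        var inv Invoice
        fmt.Println(inv.Total.String()) // 0
    }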