Package embd provides a hardware abstraction layer for doing embedded programming on supported platforms like the Raspberry Pi, BeagleBone Black and CHIP. Most of the examples below will work without change (i.e. the same binary) on all supported platforms.

== Overall structure

It's best to think of the top-level embd package as a switchboard that doesn't implement anything on its own but rather relies on sub-packages for hosts, drivers and devices, and stitches them together. The exports in the top-level package serve a number of different purposes, which can be confusing at first:

- it defines a number of driver interfaces, such as GPIODriver; this is the interface that the driver for each specific platform must implement and is not something of concern to the typical user.
- it defines the main low-level hardware interface types: analog pins, digital pins, interrupt pins, I2C buses, SPI buses, PWM pins and LEDs. Each type has a New function to instantiate one of these pins or buses.
- it defines a number of InitXXX functions that initialize the various drivers; however, these are called by the corresponding NewXXX functions and can therefore be ignored.
- it defines a number of top-level convenience functions, such as DigitalWrite, that can be called as one-liners instead of first instantiating a DigitalPin and then writing to it.

To get started, a host driver needs to be registered with the top-level embd package. This is most easily accomplished by doing an "underscore import" on one of the sub-packages of embd/host, e.g., `import _ "github.com/kidoman/embd/host/chip"`. An `Init()` function in the host driver registers all the individual drivers with embd.

After registering the host driver, the next step might be to instantiate a GPIO pin using `NewDigitalPin` or an I2C bus using `NewI2CBus`. Such a pin or bus can be used directly, but often it is passed into the initializer of a sensor, controller or other user-level driver which provides a high-level interface to some device. For example, the New function for the BMP180 type in the `embd/sensor/bmp180` package takes an I2CBus as argument, which it uses to reach the sensor.

== Samples

This section shows a few choice samples; more are available in the samples folder.

Use the LED driver to toggle LEDs on the BBB:

Even shorter while prototyping:

BBB + PWM:

Control GPIO pins on the Raspberry Pi / BeagleBone Black:

Could also do:

Or read data from the Bosch BMP085 barometric sensor:

Even find out the heading from the LSM303 magnetometer:

The above two examples depend on I2C and therefore will work without change on almost all platforms.
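As a minimal sketch of the GPIO sample (assuming embd's InitGPIO/NewDigitalPin API and the host/all convenience import; error handling on the pin writes is elided):

    package main

    import (
        "time"

        "github.com/kidoman/embd"
        _ "github.com/kidoman/embd/host/all" // registers the host driver via its Init()
    )

    func main() {
        // InitGPIO is called explicitly here, though NewDigitalPin would call it too.
        if err := embd.InitGPIO(); err != nil {
            panic(err)
        }
        defer embd.CloseGPIO()

        pin, err := embd.NewDigitalPin(10)
        if err != nil {
            panic(err)
        }
        defer pin.Close()

        // Blink once: drive the pin high, wait, then low.
        pin.SetDirection(embd.Out)
        pin.Write(embd.High)
        time.Sleep(time.Second)
        pin.Write(embd.Low)
    }

The one-liner convenience variant of the write is embd.DigitalWrite(10, embd.High), which instantiates the pin for you.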
Package monkit is a flexible code instrumenting and data collection library. I'm going to try and sell you as fast as I can on this library. Example usage We've got tools that capture distribution information (including quantiles) about int64, float64, and bool types. We have tools that capture data about events (we've got meters for deltas, rates, etc). We have rich tools for capturing information about tasks and functions, and literally anything that can generate a name and a number. Almost just as importantly, the amount of boilerplate and code you have to write to get these features is very minimal. Data that's hard to measure probably won't get measured. This data can be collected and sent to Graphite (http://graphite.wikidot.com/) or any other time-series database. Here's a selection of live stats from one of our storage nodes: This library generates call graphs of your live process for you. These call graphs aren't created through sampling. They're full pictures of all of the interesting functions you've annotated, along with quantile information about their successes, failures, how often they panic, return an error (if so instrumented), how many are currently running, etc. The data can be returned in dot format, in json, in text, and can be about just the functions that are currently executing, or all the functions the monitoring system has ever seen. Here's another example of one of our production nodes: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/callgraph2.png This library generates trace graphs of your live process for you directly, without requiring standing up some tracing system such as Zipkin (though you can do that too). Inspired by Google's Dapper (http://research.google.com/pubs/pub36356.html) and Twitter's Zipkin (http://zipkin.io), we have process-internal trace graphs, triggerable by a number of different methods. You get this trace information for free whenever you use Go contexts (https://blog.golang.org/context) and function monitoring. The output formats are svg and json. Additionally, the library supports trace observation plugins, and we've written a plugin that sends this data to Zipkin (http://github.com/spacemonkeygo/monkit-zipkin). https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/trace.png Before our crazy Go rewrite of everything (https://www.spacemonkey.com/blog/posts/go-space-monkey) (and before we had even seen Google's Dapper paper), we were a Python shop, and all of our "interesting" functions were decorated with a helper that collected timing information and sent it to Graphite. When we transliterated to Go, we wanted to preserve that functionality, so the first version of our monitoring package was born. Over time it started to get janky, especially as we found Zipkin and started adding tracing functionality to it. We rewrote all of our Go code to use Google contexts, and then realized we could get call graph information. We decided a refactor and then an all-out rethinking of our monitoring package was best, and so now we have this library. Sometimes you really want callstack contextual information without having to pass arguments through everything on the call stack. In other languages, many people implement this with thread-local storage. Example: let's say you have written a big system that responds to user requests. All of your libraries log using your log library. 
During initial development everything is easy to debug, since there's low user load, but now you've scaled and there's OVER TEN USERS and it's kind of hard to tell what log lines were caused by what. Wouldn't it be nice to add request ids to all of the log lines kicked off by that request? Then you could grep for all log lines caused by a specific request id. Geez, it would suck to have to pass all contextual debugging information through all of your callsites. Google solved this problem by always passing a context.Context interface through from call to call. A Context is basically just a mapping of arbitrary keys to arbitrary values that users can add new values for. This way if you decide to add a request context, you can add it to your Context and then all callsites that descend from that place will have the new data in their contexts. It is admittedly very verbose to add contexts to every function call. Painfully so. I hope to write more about it in the future, but Google also wrote up their thoughts about it (https://blog.golang.org/context), which you can go read. For now, just swallow your disgust and let's keep moving. Let's make a super simple Varnish (https://www.varnish-cache.org/) clone. Open up gedit! (Okay just kidding, open whatever text editor you want.) For this motivating program, we won't even add the caching, though there are comments for where to add it if you'd like. For now, let's just make a barebones system that will proxy HTTP requests. We'll call it VLite, but maybe we should call it VReallyLite. Run and build this and open localhost:8080 in your browser. If you use the default proxy target, it should inform you that the world hasn't been destroyed yet. The first thing you'll want to do is add the small amount of boilerplate to make the instrumentation we're going to add to your process observable later. Import the basic monkit packages: and then register environmental statistics and kick off a goroutine in your main method to serve debug requests: Rebuild, and then check out localhost:9000/stats (or localhost:9000/stats/json, if you prefer) in your browser! Remember what I said about Google's contexts (https://blog.golang.org/context)? It might seem a bit overkill for such a small project, but it's time to add them. To help out here, I've created a library that constructs contexts for you for incoming HTTP requests. Nothing that's about to happen requires my webhelp library (https://godoc.org/github.com/jtolds/webhelp), but here is the code now refactored to receive and pass contexts through our two per-request calls. You can create a new context for a request however you want. One reason to use something like webhelp is that the cancelation feature of Contexts is hooked up to the HTTP request getting canceled. Let's start to get statistics about how many requests we receive! First, this package (main) will need to get a monitoring Scope. Add this global definition right after all your imports, much like you'd create a logger with many logging libraries: Now, make the error return value of HandleHTTP named (so, (err error)), and add this defer line as the very first instruction of HandleHTTP: Let's also add the same line (albeit modified for the lack of error) to Proxy, replacing &err with nil: You should now have something like the consolidated sketch below. We'll unpack what's going on here, but for now: For this new funcs dataset, if you want a graph, you can download a dot graph at localhost:9000/funcs/dot and json information from localhost:9000/funcs/json.
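Consolidating the snippets described above, a sketch of the instrumented VLite program (import paths per the monkit v2 godoc; the proxying bodies are abbreviated):

    package main

    import (
        "context"
        "net/http"

        monkit "gopkg.in/spacemonkeygo/monkit.v2"
        "gopkg.in/spacemonkeygo/monkit.v2/environment"
        "gopkg.in/spacemonkeygo/monkit.v2/present"
    )

    // the package-level monitoring Scope, created much like a logger
    var mon = monkit.Package()

    func main() {
        // register environmental statistics and serve debug requests on :9000
        environment.Register(monkit.Default)
        go http.ListenAndServe("localhost:9000", present.HTTP(monkit.Default))
        // ... VLite's own listener and proxy setup go here ...
    }

    func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
        defer mon.Task()(&ctx)(&err) // the instrumentation line unpacked below
        Proxy(ctx, w, r)
        return nil
    }

    func Proxy(ctx context.Context, w http.ResponseWriter, r *http.Request) {
        defer mon.Task()(&ctx)(nil) // no error return value to capture here
        // ... proxy the request; add caching here if you'd like ...
    }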
You should see something like: with a similar report for the Proxy method, or a graph like: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/handlehttp.png This data reports the overall callgraph of execution for known traces, along with how many of each function are currently running, the most running concurrently (the highwater), how many were successful along with quantile timing information, how many errors there were (with quantile timing information if applicable), and how many panics there were. Since the Proxy method isn't capturing a returned err value, and since HandleHTTP always returns nil, this example won't ever have failures. If you're wondering about the success count being higher than you expected, keep in mind your browser probably requested a favicon.ico. Cool, eh? How it works: the instrumentation line is an interesting line of code - there are three function calls. If you look at the Go spec, all of the function calls will run at the time the function starts except for the very last one. The first function call, mon.Task(), creates or looks up a wrapper around a Func. You could get this yourself by requesting mon.Func() inside of the appropriate function, or mon.FuncNamed(). Both mon.Task() and mon.Func() inspect runtime.Caller to determine the name of the function. Because this is a heavy operation, you can actually store the result of mon.Task() and reuse it if you prefer, which is more performant every time after the first time, since runtime.Caller only gets called once (a sketch appears below). Careful! Don't use the same myFuncMon in different functions unless you want to screw up your statistics! The second function call starts all the various stop watches and bookkeeping to keep track of the function. It also mutates the context pointer it's given to extend the context with information about what current span (in Zipkin parlance) is active. Notably, you *can* pass nil for the context if you really don't want a context. You just lose callgraph information. The last function call stops all the stop watches and makes a note of any observed errors or panics (it repanics after observing them). Turns out, we don't even need to change our program anymore to get rich tracing information! Open your browser and go to localhost:9000/trace/svg?regex=HandleHTTP. It won't load, and in fact, it's waiting for you to open another tab and refresh localhost:8080 again. Once you retrigger the actual application behavior, the trace regex will capture a trace starting on the first function that matches the supplied regex, and return an svg. Go back to your first tab, and you should see a relatively uninteresting but super promising svg. Let's make the trace more interesting. Add a time.Sleep call to your HandleHTTP method, rebuild, and restart. Load localhost:8080, then start a new request to your trace URL, then reload localhost:8080 again. Flip back to your trace, and you should see that the Proxy method only takes a portion of the time of HandleHTTP! https://cdn.rawgit.com/spacemonkeygo/monkit/master/images/trace.svg There are multiple ways to select a trace. You can select by regex using the preselect method (default), which first evaluates the regex on all known functions for sanity checking. Sometimes, however, the function you want to trace may not yet be known to monkit, in which case you'll want to turn preselection off. You may have a bad regex, or you may be in this case if you get the error "Bad Request: regex preselect matches 0 functions."
Another way to select a trace is by providing a trace id, which we'll get to next! Make sure to check out what the addition of the time.Sleep call did to the other reports. It's easy to write plugins for monkit! Check out our first one that exports data to Zipkin (http://zipkin.io/)'s Scribe API: https://github.com/spacemonkeygo/monkit-zipkin We plan to have more (for HTrace, OpenTracing, etc, etc), soon!
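As promised, a sketch of the stored-Task pattern, continuing the earlier program (same file and mon scope; myFuncMon is the name used in the text above):

    // Stored once at package level; runtime.Caller runs only the first
    // time the stored Task is used, not on every call to HandleHTTP.
    var myFuncMon = mon.Task()

    func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
        defer myFuncMon(&ctx)(&err)
        // ... handler body as before ...
        return nil
    }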
esc embeds files into Go programs and provides http.FileSystem interfaces to them. It adds all named files, or files recursively under named directories, at the path specified. The output file provides an http.FileSystem interface with zero dependencies on packages outside the standard library. Usage: The flags are: After producing an output file, the assets may be accessed with the FS() function, which takes a flag to use local assets instead (for local development). FS(Must)?(Byte|String) returns an asset as a (byte slice|string). FSMust(Byte|String) panics if the asset is not found. esc can be invoked by go generate: Embedded assets can be served with HTTP using http.FileServer. Assuming you have a directory structure similar to the following: Where main.go contains:
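As a sketch of such a main.go (assuming the generated FS(useLocal bool) accessor and a static/ asset directory; the go:generate line mirrors the usage described above):

    //go:generate esc -o static.go -pkg main static

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // FS(false) serves the embedded assets; FS(true) reads from the
        // local filesystem instead, which is handy during development.
        http.Handle("/", http.FileServer(FS(false)))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }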
BSD 3-Clause License Copyright (c) 2017-2022, Gerasimos (Makis) Maropoulos (kataras2006@hotmail.com) All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Package golog provides an easy to use foundation for your logging operations. Source code and other details for the project are available at GitHub. Current version: 0.1.12. The only requirement is the Go Programming Language. Example code: Golog has a default, package-level initialized instance for you; however, you can choose to create and use a logger instance for a specific part of your application. Example Code: Golog sets colors to levels when its `Printer.Output` is actually a compatible terminal which can render colors; otherwise it will downgrade itself to a white foreground. Golog has functions to print a formatted log too. Example Code: Golog takes a simple `io.Writer` as its underlying Printer's Output. Example Code: You can even override the default line breaker, "\n", by using the `golog#NewLine` function at startup. Example Code: Golog is a leveled logger, therefore you can set a level and print whenever the print level is valid against the level that has been set. Available built-in levels are: Below you'll learn a way to add a custom level or modify an existing level. The default colorful text (or raw text for unsupported outputs) for levels can be overridden by using the `golog#ErrorText, golog#WarnText, golog#InfoText and golog#DebugText` functions. Example Code: The way golog gives you the power to add or modify existing levels is via Level Metadata. Example Code: The logger's level can be changed by passing one of the level constants to the `Level` field or by passing its string representation to the `SetLevel` function. Example Code: Transitioning from your favorite, but deprecated, logger is easy. Golog offers two basic interfaces, the `ExternalLogger` and the `StdLogger`, that can be directly used as arguments to the `Install` function in order to adapt an external logger. Outline: Example Code: Example Code: You should have a basic idea of the golog package by now; we've just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the below links: Examples:
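As a quick taste of the leveled printing described above, a sketch (function names per the golog README; colors appear only when the output is a terminal):

    package main

    import "github.com/kataras/golog"

    func main() {
        // allow "debug" and above to be printed
        golog.SetLevel("debug")

        golog.Println("This is a raw message, no levels, no colors.")
        golog.Info("This is an info message, with colors (if the output is a terminal)")
        golog.Warn("This is a warning message")
        golog.Error("This is an error message")
        golog.Debug("This is a debug message")
    }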
Package channels provides a collection of helper functions, interfaces and implementations for working with and extending the capabilities of golang's existing channels. The main interface of interest is Channel, though sub-interfaces are also provided for cases where the full Channel interface cannot be met (for example, InChannel for write-only channels). For integration with native typed golang channels, functions Wrap and Unwrap are provided which do the appropriate type conversions. The NativeChannel, NativeInChannel and NativeOutChannel type definitions are also provided for use with native channels which already carry values of type interface{}. The heart of the package consists of several distinct implementations of the Channel interface, including channels backed by special buffers (resizable, infinite, ring buffers, etc) and other useful types. A "black hole" channel for discarding unwanted values (similar in purpose to ioutil.Discard or /dev/null) rounds out the set. Helper functions for operating on Channels include Pipe and Tee (which behave much like their Unix namesakes), as well as Multiplex and Distribute. "Weak" versions of these functions also exist, which do not close their output channel(s) on completion. Due to limitations of Go's type system, importing this library directly is often not practical for production code. It serves equally well, however, as a reference guide and template for implementing many common idioms; if you use it in this way I would appreciate the inclusion of some sort of credit in the resulting code. Warning: several types in this package provide so-called "infinite" buffers. Be *very* careful using these, as no buffer is truly infinite - if such a buffer grows too large your program will run out of memory and crash. Caveat emptor.
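For illustration, a sketch that pushes values through an "infinite" buffer (NewInfiniteChannel and the In/Out/Close accessors as in the package's godoc; treat the details as approximate):

    package main

    import (
        "fmt"

        "github.com/eapache/channels"
    )

    func main() {
        // An "infinite" channel: writes never block (see the warning above).
        ch := channels.NewInfiniteChannel()

        for i := 0; i < 3; i++ {
            ch.In() <- i // write side
        }
        ch.Close()

        // The read side drains the buffer; Out() closes once it is empty.
        for v := range ch.Out() {
            fmt.Println(v)
        }
    }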
Package skipper provides an HTTP routing library with flexible configuration as well as runtime updates of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters; however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request and forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request, providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then matching the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the routes' definitions. Such definitions may have the following conditions:

- method
- path (optionally with wildcards)
- path regular expressions
- host regular expressions
- headers
- header regular expressions

It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response.
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation. Each route has one of the following backends: HTTP endpoint, shunt, loopback or dynamic. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect. A dynamic route means that the final target will be defined in a filter. One of the filters in the chain must set the target backend URL explicitly. Route definitions consist of the following:

- request matching conditions (predicates)
- filter chain (optional)
- backend

The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation. Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth. Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients:

- Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ .
- Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper.
- etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package.
- static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates.

Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package. Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests.
For details, see: https://godoc.org/github.com/zalando/skipper/circuit. Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins, however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created, that can be referenced in eskip definitions with the name 'Random': To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case more aggressive scale is needed, it is possible to setup Skipper in a cascade model, with multiple Skipper instances for specific route segments.
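To make the custom filter discussion concrete, a sketch of hellofilter.go (following the Spec and Filter interfaces of the filters package; the exact example shipped with the repository may differ):

    package main

    import "github.com/zalando/skipper/filters"

    type helloSpec struct{}

    type helloFilter struct{ who string }

    // Name is how the filter is referenced in eskip route definitions.
    func (s *helloSpec) Name() string { return "hello" }

    // CreateFilter builds a concrete filter instance from the route's arguments.
    func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
        if len(args) != 1 {
            return nil, filters.ErrInvalidFilterParameters
        }
        who, ok := args[0].(string)
        if !ok {
            return nil, filters.ErrInvalidFilterParameters
        }
        return &helloFilter{who: who}, nil
    }

    func (f *helloFilter) Request(ctx filters.FilterContext) {}

    // Response sets the 'X-Hello' header on every response of the route.
    func (f *helloFilter) Response(ctx filters.FilterContext) {
        ctx.Response().Header.Set("X-Hello", "Hello, "+f.who+"!")
    }

A route using it might read: hello: Path("/hello") -> hello("world") -> "https://www.example.org";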
Command pigeon generates parsers in Go from a PEG grammar. From Wikipedia [0]: Its features and syntax are inspired by the PEG.js project [1], while the implementation is loosely based on [2]. Formal presentation of the PEG theory by Bryan Ford is also an important reference [3]. An introductory blog post can be found at [4]. The pigeon tool must be called with PEG input as defined by the accepted PEG syntax below. The grammar may be provided by a file or read from stdin. The generated parser is written to stdout by default. The following options can be specified: If the code blocks in the grammar (see below, section "Code block") are golint- and go vet-compliant, then the resulting generated code will also be golint- and go vet-compliant. The generated code doesn't use any third-party dependency unless code blocks in the grammar require such a dependency. The accepted syntax for the grammar is formally defined in the grammar/pigeon.peg file, using the PEG syntax. What follows is an informal description of this syntax. Identifiers, whitespace, comments and literals follow the same notation as the Go language, as defined in the language specification (http://golang.org/ref/spec#Source_code_representation): The grammar must be Unicode text encoded in UTF-8. New lines are identified by the \n character (U+000A). Space (U+0020), horizontal tabs (U+0009) and carriage returns (U+000D) are considered whitespace and are ignored except to separate tokens. A PEG grammar consists of a set of rules. A rule is an identifier followed by a rule definition operator and an expression. An optional display name - a string literal used in error messages instead of the rule identifier - can be specified after the rule identifier. E.g.: The rule definition operator can be any one of those: A rule is defined by an expression. The following sections describe the various expression types. Expressions can be grouped by using parentheses, and a rule can be referenced by its identifier in place of an expression. The choice expression is a list of expressions that will be tested in the order they are defined. The first one that matches will be used. Expressions are separated by the forward slash character "/". E.g.: Because the first match is used, it is important to think about the order of expressions. For example, in this rule, "<=" would never be used because the "<" expression comes first: The sequence expression is a list of expressions that must all match in that same order for the sequence expression to be considered a match. Expressions are separated by whitespace. E.g.: A labeled expression consists of an identifier followed by a colon ":" and an expression. A labeled expression introduces a variable named with the label that can be referenced in the code blocks in the same scope. The variable will have the value of the expression that follows the colon. E.g.: The variable is typed as an empty interface, and the underlying type depends on the following: For terminals (character and string literals, character classes and the any matcher), the value is []byte. E.g.: For predicates (& and !), the value is always nil. E.g.: For a sequence, the value is a slice of empty interfaces, one for each expression value in the sequence. The underlying types of each value in the slice follow the same rules described here, recursively. E.g.: For a repetition (+ and *), the value is a slice of empty interfaces, one for each repetition. 
The underlying types of each value in the slice follow the same rules described here, recursively. E.g.: For a choice expression, the value is that of the matching choice. E.g.: For the optional expression (?), the value is nil or the value of the expression. E.g.: Of course, the type of the value can be anything once an action code block is used. E.g.: An expression prefixed with the ampersand "&" is the "and" predicate expression: it is considered a match if the following expression is a match, but it does not consume any input. An expression prefixed with the exclamation point "!" is the "not" predicate expression: it is considered a match if the following expression is not a match, but it does not consume any input. E.g.: The expression following the & and ! operators can be a code block. In that case, the code block must return a bool and an error. The operator's semantic is the same: & is a match if the code block returns true, ! is a match if the code block returns false. The code block has access to any labeled value defined in its scope. E.g.: An expression followed by "*", "?" or "+" is a match if the expression occurs zero or more times ("*"), zero or one time ("?") or one or more times ("+"), respectively. The match is greedy: it will match as many times as possible. E.g.: A literal matcher tries to match the input against a single character or a string literal. The literal may be a single-quoted single character, a double-quoted string or a backtick-quoted raw string. The same rules as in Go apply regarding the allowed characters and escapes. The literal may be followed by a lowercase "i" (outside the ending quote) to indicate that the match is case-insensitive. E.g.: A character class matcher tries to match the input against a class of characters inside square brackets "[...]". Inside the brackets, characters represent themselves and the same escapes as in string literals are available, except that the single- and double-quote escape is not valid; instead, the closing square bracket "]" must be escaped to be used. Character ranges can be specified using the "[a-z]" notation. Unicode classes can be specified using the "[\pL]" notation, where L is a single-letter Unicode class of characters, or using the "[\p{Class}]" notation where Class is a valid Unicode class (e.g. "Latin"). As with string literals, a lowercase "i" may follow the matcher (outside the ending square bracket) to indicate that the match is case-insensitive. A "^" as the first character inside the square brackets indicates that the match is inverted (it is a match if the input does not match the character class matcher). E.g.: The any matcher is represented by the dot ".". It matches any character except the end of file, thus the "!." expression is used to indicate "match the end of file". E.g.: Code blocks can be added to generate custom Go code. There are four kinds of code blocks: the initializer, the action, the predicate and the state change. All code blocks appear inside curly braces "{...}". The initializer must appear first in the grammar, before any rule. It is copied as-is (minus the wrapping curly braces) at the top of the generated parser. It may contain function declarations, types, variables, etc. just like any Go file. Every symbol declared here will be available to all other code blocks. Although the initializer is optional in a valid grammar, it is usually required to generate a valid Go source code file (for the package clause). E.g.: Action code blocks are code blocks declared after an expression in a rule.
Those code blocks are turned into a method on the "*current" type in the generated source code. The method receives any labeled expression's value as argument (as any) and must return two values, the first being the value of the expression (an any), and the second an error. If a non-nil error is returned, it is added to the list of errors that the parser will return. E.g.: Predicate code blocks are code blocks declared immediately after the and "&" or the not "!" operators. Like action code blocks, predicate code blocks are turned into a method on the "*current" type in the generated source code. The method receives any labeled expression's value as argument (as any) and must return two values, the first being a bool and the second an error. If a non-nil error is returned, it is added to the list of errors that the parser will return. E.g.: State change code blocks are code blocks starting with "#". In contrast to action and predicate code blocks, state change code blocks are allowed to modify values in the global "state" store (see below). State change code blocks are turned into a method on the "*current" type in the generated source code. The method is passed any labeled expression's value as an argument (of type any) and must return a value of type error. If a non-nil error is returned, it is added to the list of errors that the parser will return; note that the parser does NOT backtrack if a non-nil error is returned. E.g.: The "*current" type is a struct that provides four useful fields that can be accessed in action, state change, and predicate code blocks: "pos", "text", "state" and "globalStore". The "pos" field indicates the current position of the parser in the source input. It is itself a struct with three fields: "line", "col" and "offset". Line is a 1-based line number, col is a 1-based column number that counts runes from the start of the line, and offset is a 0-based byte offset. The "text" field is the slice of bytes of the current match. It is empty in a predicate code block. The "state" field is a global store, with backtrack support, of type "map[string]any". The values in the store are tied to the parser's backtracking: in particular, if a rule fails to match, all updates to the state that occurred in the process of matching the rule are rolled back. For a key-value store that is not tied to the parser's backtracking, see the "globalStore". The values in the "state" store are available for read access in action and predicate code blocks; any changes made to the "state" store there will be reverted once the action or predicate code block is finished running. To update values in the "state", use state change code blocks ("#{}"). IMPORTANT: The "globalStore" field is a global store of type "map[string]any", which allows storing arbitrary values that are available in action and predicate code blocks for read as well as write access. Note that the global store is completely independent from the backtrack mechanism of PEG and is therefore not reset to its old state during backtracking. The initialization of the global store may be achieved by using the GlobalStore function (http://godoc.org/github.com/mna/pigeon/test/predicates#GlobalStore). Be aware that all keys starting with "_pigeon" are reserved for internal use of pigeon and should not be used nor modified. Those keys are treated as internal implementation details and therefore there are no guarantees regarding their API stability.
With the option -support-left-recursion, pigeon supports left recursion. E.g.: Indirect recursion is supported as well: The implementation is based on the [Left-recursive PEG Grammars][9] article that links to the [Left Recursion in Parsing Expression Grammars][10] and [Packrat Parsers Can Support Left Recursion][11] papers. References: pigeon supports an extension of the classical PEG syntax called failure labels, proposed by Maidl et al. in their paper "Error Reporting in Parsing Expression Grammars" [7]. The syntax used for the introduced expressions is borrowed from their lpeglabel [8] implementation. This extension makes it possible to signal different kinds of errors and to specify which recovery pattern should handle a given label. With labeled failures it is possible to distinguish between an ordinary failure and an error. Usually, an ordinary failure is produced when the matching of a character fails, and this failure is caught by ordered choice. An error (a non-ordinary failure), in turn, is produced by the throw operator and may be caught by the recovery operator. In pigeon, the recovery expression consists of the regular expression, the recovery expression and a set of labels to be matched. First, the regular expression is tried. If this fails with one of the provided labels, the recovery expression is tried. If this fails as well, the error is propagated. E.g.: To signal a failure condition, the throw expression is used. E.g.: For concrete examples of how to use throw and recover, have a look at the examples "labeled_failures" and "thrownrecover" in the "test" folder. The implementation of the throw and recover operators works as follows: the failure recovery expression adds the recovery expression for every failure label to the recovery stack and runs the regular expression. The throw expression checks the recovery stack in reverse order for the provided failure label. If the label is found, the respective recovery expression is run. If this expression is successful, the parser continues the processing of the input. If the recovery expression is not successful, the parsing fails and the parser starts to backtrack. If throw and recover expressions are used together with global state, it is the responsibility of the author of the grammar to reset the global state to a valid state during the recovery operation. The parser generated by pigeon exports a few symbols so that it can be used as a package with public functions to parse input text. The exported API is: See the godoc page of the generated parser for the test/predicates grammar for an example documentation page of the exported API: http://godoc.org/github.com/mna/pigeon/test/predicates. Like the grammar used to generate the parser, the input text must be UTF-8-encoded Unicode. The start rule of the parser is the first rule in the PEG grammar used to generate the parser. A call to any of the Parse* functions returns the value generated by executing the grammar on the provided input text, and an optional error. Typically, the grammar should generate some kind of abstract syntax tree (AST), but for simple grammars it may evaluate the result immediately, such as in the examples/calculator example. There are no constraints imposed on the author of the grammar; it can return whatever is needed. When the parser returns a non-nil error, the error is always of type errList, which is defined as a slice of errors ([]error). Each error in the list is of type *parserError. This is a struct that has an "Inner" field that can be used to access the original error.
So if a code block returns some well-known error like: The original error can be accessed this way: By default the parser will continue after an error is returned and will accumulate all errors found during parsing. If the grammar reaches a point where it shouldn't continue, a panic statement can be used to terminate parsing. The panic will be caught at the top-level of the Parse* call and will be converted into a *parserError like any error, and an errList will still be returned to the caller. The divide-by-zero error in the examples/calculator grammar leverages this feature (no special code is needed to handle division by zero; if it happens, the runtime panics and it is recovered and returned as a parsing error). Providing good error reporting in a parser is not a trivial task. Part of it is provided by the pigeon tool, by offering features such as filename, position, expected literals and rule name in the error message, but an important part of good error reporting needs to be done by the grammar author. For example, many programming languages use double-quotes for string literals. Usually, if the opening quote is found, the closing quote is expected, and if none is found, there won't be any other rule that will match; there's no need to backtrack and try other choices, an error should be added to the list and the match should be consumed. In order to do this, the grammar can look something like this: This is just one example, but it illustrates the idea that error reporting needs to be thought out when designing the grammar. Because the above mentioned error types (errList and parserError) are not exported, additional steps have to be taken if the generated parser is used as a library package in other packages (e.g. if the same parser is used in multiple command line tools). One possible implementation for exported errors (based on interfaces) and customized error reporting (caret-style formatting of the position where the parsing failed) is available in the json example and its command line tool: http://godoc.org/github.com/mna/pigeon/examples/json Generated parsers have user-provided code mixed with pigeon code in the same package, so there is no package boundary in the resulting code to prevent access to unexported symbols. What is meant to be implementation details in pigeon is also available to user code - which doesn't mean it should be used. For this reason, it is important to precisely define what is intended to be the supported API of pigeon, the parts that will be stable in future versions. The "stability" of the version 1.0 API attempts to make a similar guarantee as the Go 1 compatibility [5]. The following lists what parts of the current pigeon code fall under that guarantee (features may be added in the future): The pigeon command-line flags and arguments: those will not be removed and will maintain the same semantics. The explicitly exported API generated by pigeon. See [6] for the documentation of this API on a generated parser. The PEG syntax, as documented above. The code blocks (except the initializer) will always be generated as methods on the *current type, and this type is guaranteed to have the fields pos (type position) and text (type []byte). There are no guarantees on other fields and methods of this type. The position type will always have the fields line, col and offset, all defined as int. There are no guarantees on other fields and methods of this type.
The type of the error value returned by the Parse* functions, when not nil, will always be errList defined as a []error. There are no guarantees on methods of this type, other than the fact it implements the error interface. Individual errors in the errList will always be of type *parserError, and this type is guaranteed to have an Inner field that contains the original error value. There are no guarantees on other fields and methods of this type. The above guarantee is given to the version 1.0 (https://github.com/mna/pigeon/releases/tag/v1.0.0) of pigeon, which has entered maintenance mode (bug fixes only). The current master branch includes the development toward a future version 2.0, which intends to further improve pigeon. While the given API stability should be maintained as far as it makes sense, breaking changes may be necessary to be able to improve pigeon. The new version 2.0 API has not yet stabilized and therefore changes to the API may occur at any time. References:
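Returning to the exported API described above, a sketch of driving a generated parser and unwrapping its errors (this code must sit in the same package as the generated parser, since errList and parserError are unexported; the input string is illustrative):

    package main

    import (
        "fmt"
        "log"
    )

    func main() {
        // Parse is generated by pigeon; the start rule is the grammar's first rule.
        got, err := Parse("example", []byte("some input"))
        if err != nil {
            // the returned error is always an errList ([]error) of *parserError values
            for _, e := range err.(errList) {
                pe := e.(*parserError)
                fmt.Println("inner error:", pe.Inner)
            }
            log.Fatal(err)
        }
        fmt.Println(got)
    }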
Package dicom provides a set of tools to read, write, and generally work with DICOM (https://dicom.nema.org/) medical image files in Go. dicom.Parse and dicom.Write provide the core functionality to read and write DICOM Datasets. This package provides Go data structures that represent DICOM concepts (for example, dicom.Dataset and dicom.Element). These structures will pretty-print by default and are JSON serializable out of the box. This package provides some advanced functionality as well, including: streaming image frames to an output channel, reading elements one-by-one (like an iterator pattern), flat iteration over nested elements in a Dataset, and more. General usage is simple. Check out the package examples below and some function specific examples. It may also be helpful to take a look at the example cmd/dicomutil program, which is a CLI built around this library to save out image frames from DICOMs and print out metadata to STDOUT.
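A minimal sketch of parsing a file (assuming the suyashkumar/dicom API; the file name is illustrative):

    package main

    import (
        "fmt"

        "github.com/suyashkumar/dicom"
        "github.com/suyashkumar/dicom/pkg/tag"
    )

    func main() {
        // Parse a DICOM file into a Dataset (no frame streaming, so a nil channel).
        dataset, err := dicom.ParseFile("image.dcm", nil)
        if err != nil {
            panic(err)
        }

        // Elements can be looked up by tag; Datasets also pretty-print and
        // serialize to JSON out of the box.
        if el, err := dataset.FindElementByTag(tag.PatientName); err == nil {
            fmt.Println("PatientName:", el)
        }
    }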
Simple live charts for memory consumption and GC pauses. To use debugcharts, link this package into your program: If your application is not already running an http server, you need to start one. Add "net/http" and "log" to your imports and the following code to your main function: Then go look at charts:
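A sketch of the wiring (linking the package registers its handlers via init; the /debug/charts path is the conventional one, assumed here):

    package main

    import (
        "log"
        "net/http"

        _ "github.com/mkevac/debugcharts" // linking the package registers its handlers
    )

    func main() {
        // Start an HTTP server if the application doesn't already run one,
        // then browse to http://localhost:8080/debug/charts
        go func() {
            log.Println(http.ListenAndServe("localhost:8080", nil))
        }()

        select {} // stand-in for the rest of the program
    }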
go-ipld-prime is a series of Go interfaces for manipulating IPLD data. See https://ipld.io/ for more information about the basics of "What is IPLD?". Here in the godoc, the first couple of types to look at should be: These types provide a generic description of the data model. A Node is a piece of IPLD data which can be inspected. A NodeAssembler is used to create Nodes. (A NodeBuilder is just like a NodeAssembler, but it also allocates memory, whereas a NodeAssembler just fills up memory; using these carefully allows construction of very efficient code.) Different NodePrototypes can be used to describe Nodes which follow certain logical rules (e.g., we use these as part of implementing Schemas), and can also be used so that programs can use different memory layouts for different data (which can be useful for constructing efficient programs when data has known shape for which we can use specific or compacted memory layouts). If working with linked data (data which is split into multiple trees of Nodes, loaded separately, and connected by some kind of "link" reference), the next types you should look at are: The most typical use of LinkSystem is to use the linking/cid package to get a LinkSystem that works with CIDs: ... and then assign the StorageWriteOpener and StorageReadOpener fields in order to control where data is stored to and read from. Methods on the LinkSystem then provide the functions typically used to get data in and out of Nodes so you can work with it. This root package gathers some of the most important ease-of-use functions all in one place, but mostly aliases out to features originally found in other, more specific sub-packages. (If you're interested in keeping your binary sizes small, and don't use some of the features of this library, you'll probably want to look into using the relevant sub-packages directly.) Particularly interesting subpackages include: Example_createDataAndMarshal shows how you can feed data into a NodeBuilder, and also how to then hand that to an Encoder. Often you'll encode implicitly through a LinkSystem.Store call instead, but you can do it directly, too. Example_goValueWithSchema shows how to combine a Go value with an IPLD schema, which can then be used as an IPLD node. For more examples and documentation, see the node/bindnode package. Example_unmarshalData shows how you can use a Decoder and a NodeBuilder (or NodePrototype) together to do unmarshalling. Often you'll do this implicitly through a LinkSystem.Load call instead, but you can do it directly, too.
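Echoing the Example_createDataAndMarshal pattern, a sketch (subpackage paths as in recent go-ipld-prime versions; errors elided for brevity):

    package main

    import (
        "os"

        "github.com/ipld/go-ipld-prime/codec/dagjson"
        "github.com/ipld/go-ipld-prime/node/basicnode"
    )

    func main() {
        // A NodeBuilder allocates memory; its NodeAssemblers fill it in.
        nb := basicnode.Prototype.Map.NewBuilder()
        ma, _ := nb.BeginMap(1)
        ma.AssembleKey().AssignString("hello")
        ma.AssembleValue().AssignString("world")
        ma.Finish()
        n := nb.Build()

        // Hand the finished Node to an Encoder directly; a LinkSystem.Store
        // call would do this implicitly.
        dagjson.Encode(n, os.Stdout)
    }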
Package grip provides a flexible logging package for basic Go programs. Drawing inspiration from Go and Python's standard library logging, as well as systemd's journal service and other logging systems, grip provides a number of very powerful logging abstractions in one high-level package. The central type of the grip package is the Journaler type, instances of which provide distinct log capturing systems. For ease, following the Go standard library, the grip package provides parallel public methods that use an internal "standard" Journaler instance in the grip package, which has some defaults configured and may be sufficient for many use cases. The send.Sender interface provides a way of changing the logging backend, and the send package provides a number of alternate implementations of logging systems, including: systemd's journal, logging to standard output, logging to a file, and generic syslog support. The message.Composer interface is the representation of all messages. Composers are implemented to provide a raw structured form as well as a string representation for more conventional logging output. Furthermore, they are intended to be easy to produce and to defer more expensive processing until they're being logged, to prevent expensive operations producing messages that are below threshold. Logging helpers exist for the following levels: These methods accept both strings (message content) and types that implement the message.Composer interface. Composer types make it possible to delay generating a message unless the logger is over the logging threshold. Use this to avoid expensive serialization operations for suppressed logging operations. All levels also have additional methods with `ln` and `f` appended to the end of the method name which allow Println() and Printf() style functionality. You must pass printf/println-style arguments to these methods. The conditional logging methods take two arguments: a boolean and a message argument. Messages can be strings, objects that implement the message.Composer interface, or errors. If the condition boolean is true, the threshold level is met, and the message to log is not an empty string, then it logs the resolved message. Use conditional logging methods to potentially suppress log messages based on situations orthogonal to log level, with "log sometimes" or "log rarely" semantics. Combine with Composers to avoid expensive message-building operations. The MultiCatcher type makes it possible to collect errors from a group of operations and then aggregate them as a single error.
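A hedged sketch of common calls (the level helpers, their f/ln variants, and the conditional *When form, assuming the package-level API of github.com/mongodb/grip):

    package main

    import "github.com/mongodb/grip"

    func main() {
        debug := false

        // leveled helpers on the package-level "standard" Journaler
        grip.Info("service started")
        grip.Warningf("cache miss rate at %d%%", 42)
        grip.Errorln("connection", "refused")

        // conditional logging: only logs when the boolean is true
        grip.InfoWhen(debug, "verbose diagnostics enabled")
    }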
Package skylark provides a Skylark interpreter. Skylark values are represented by the Value interface. The following built-in Value types are known to the evaluator: Client applications may define new data types that satisfy at least the Value interface. Such types may provide additional operations by implementing any of these optional interfaces: Client applications may also define domain-specific functions in Go and make them available to Skylark programs. Use NewBuiltin to construct a built-in value that wraps a Go function. The implementation of the Go function may use UnpackArgs to make sense of the positional and keyword arguments provided by the caller. Skylark's None value is not equal to Go's nil, but nil may be assigned to a Skylark Value. Be careful to avoid allowing Go nil values to leak into Skylark data structures. The Compare operation requires two arguments of the same type, but this constraint cannot be expressed in Go's type system. (This is the classic "binary method problem".) So, each Value type's CompareSameType method is a partial function that compares a value only against others of the same type. Use the package's standalone Compare (or Equal) function to compare an arbitrary pair of values. To parse and evaluate a Skylark source file, use ExecFile. The Eval function evaluates a single expression. All evaluator functions require a Thread parameter which defines the "thread-local storage" of a Skylark thread and may be used to plumb application state through Skylark code and into callbacks. When evaluation fails it returns an EvalError from which the application may obtain a backtrace of active Skylark calls.
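A sketch of embedding the interpreter (ExecFile signature as in the google/skylark godoc; the predeclared name and file name are illustrative):

    package main

    import (
        "fmt"
        "log"

        "github.com/google/skylark"
    )

    func main() {
        const src = `greeting = "hello, " + name`

        thread := &skylark.Thread{}
        globals := skylark.StringDict{
            "name": skylark.String("world"), // application state plumbed in
        }

        // ExecFile parses and evaluates a Skylark file (here, source from a string).
        if err := skylark.ExecFile(thread, "example.sky", src, globals); err != nil {
            log.Fatal(err) // an *EvalError carries a Skylark backtrace
        }
        fmt.Println(globals["greeting"]) // "hello, world"
    }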
Package update provides functionality to implement secure, self-updating Go programs (or other single-file targets). For complete updating solutions please see Equinox (https://equinox.io) and go-tuf (https://github.com/flynn/go-tuf). This example shows how to update a program remotely from a URL. Go binaries can often be large. It can be advantageous to only ship a binary patch to a client instead of the complete program text of a new version. This example shows how to update a program with a bsdiff binary patch. Other patch formats may be applied by implementing the Patcher interface. Updating executable code on a computer can be a dangerous operation unless you take the appropriate steps to guarantee the authenticity of the new code. While checksum verification is important, it should always be combined with signature verification (next section) to guarantee that the code came from a trusted party. selfupdate validates SHA256 checksums by default, but this is pluggable via the Hash property on the Options struct. This example shows how to guarantee that the newly-updated binary is verified to have an appropriate checksum (that was otherwise retrieved via a secure channel) specified as a hex string. Cryptographic verification of new code from an update is an extremely important way to guarantee the security and integrity of your updates. Verification is performed by validating the signature of a hash of the new file. This means nothing changes if you apply your update with a patch. This example shows how to add signature verification to your updates. To make all of this work an application distributor must first create a public/private key pair and embed the public key into their application. When they issue a new release, the issuer must sign the new executable file with the private key and distribute the signature along with the update. In order to update a Go application with selfupdate, you must distribute it as a single executable. This is often easy, but some applications require static assets (like HTML and CSS asset files or TLS certificates). In order to update applications like these, you'll want to make sure to embed those asset files into the distributed binary with a tool like go-bindata (my favorite): https://github.com/jteeuwen/go-bindata Mechanisms and protocols for determining whether an update should be applied and, if so, which one, are out of scope for this package. Please consult go-tuf (https://github.com/flynn/go-tuf) or Equinox (https://equinox.io) for more complete solutions. selfupdate only works for self-updating applications that are distributed as a single binary, i.e. applications that do not have additional assets or dependency files. Updating applications that are distributed as multiple on-disk files is out of scope, although this may change in future versions of this library.
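A sketch of the remote-URL update mentioned at the top (update.Apply and the Options struct per the package godoc; the URL is illustrative):

    package main

    import (
        "log"
        "net/http"

        "github.com/inconshreveable/go-update"
    )

    // doUpdate fetches a new binary from url and applies it over the running
    // executable. The Options zero value means SHA256 checksum hashing and no
    // signature verification; set Checksum/Signature/PublicKey for real use.
    func doUpdate(url string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        return update.Apply(resp.Body, update.Options{})
    }

    func main() {
        if err := doUpdate("https://example.com/myapp/latest"); err != nil {
            log.Fatal(err)
        }
    }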
Package pointer implements Andersen's analysis, an inclusion-based pointer analysis algorithm first described in (Andersen, 1994). A pointer analysis relates every pointer expression in a whole program to the set of memory locations to which it might point. This information can be used to construct a call graph of the program that precisely represents the destinations of dynamic function and method calls. It can also be used to determine, for example, which pairs of channel operations operate on the same channel. The package allows the client to request a set of expressions of interest for which the points-to information will be returned once the analysis is complete. In addition, the client may request that a callgraph be constructed. The example program in example_test.go demonstrates both of these features. Clients should not request more information than they need since it may increase the cost of the analysis significantly. Our algorithm is INCLUSION-BASED: the points-to sets for x and y will be related by pts(y) ⊇ pts(x) if the program contains the statement y = x. It is FLOW-INSENSITIVE: it ignores all control flow constructs and the order of statements in a program. It is therefore a "MAY ALIAS" analysis: its facts are of the form "P may/may not point to L", not "P must point to L". It is FIELD-SENSITIVE: it builds separate points-to sets for distinct fields, such as x and y in struct { x, y *int }. It is mostly CONTEXT-INSENSITIVE: most functions are analyzed once, so values can flow in at one call to the function and return out at another. Only some smaller functions are analyzed with consideration of their calling context. It has a CONTEXT-SENSITIVE HEAP: objects are named by both allocation site and context, so the objects returned by two distinct calls to f: are distinguished up to the limits of the calling context. It is a WHOLE PROGRAM analysis: it requires SSA-form IR for the complete Go program and summaries for native code. See the (Hind, PASTE'01) survey paper for an explanation of these terms. The analysis is fully sound when invoked on pure Go programs that do not use reflection or unsafe.Pointer conversions. In other words, if there is any possible execution of the program in which pointer P may point to object O, the analysis will report that fact. By default, the "reflect" library is ignored by the analysis, as if all its functions were no-ops, but if the client enables the Reflection flag, the analysis will make a reasonable attempt to model the effects of calls into this library. However, this comes at a significant performance cost, and not all features of that library are yet implemented. In addition, some simplifying approximations must be made to ensure that the analysis terminates; for example, reflection can be used to construct an infinite set of types and values of those types, but the analysis arbitrarily bounds the depth of such types. Most but not all reflection operations are supported. In particular, addressable reflect.Values are not yet implemented, so operations such as (reflect.Value).Set have no analytic effect. The pointer analysis makes no attempt to understand aliasing between the operand x and result y of an unsafe.Pointer conversion: It is as if the conversion allocated an entirely new object: The analysis cannot model the aliasing effects of functions written in languages other than Go, such as runtime intrinsics in C or assembly, or code accessed via cgo. The result is as if such functions are no-ops.
However, various important intrinsics are understood by the analysis, along with built-ins such as append. The analysis currently provides no way for users to specify the aliasing effects of native code. ------------------------------------------------------------------------ The remaining documentation is intended for package maintainers and pointer analysis specialists. Maintainers should have a solid understanding of the referenced papers (especially those by H&L and PKH) before making significant changes. The implementation is similar to that described in (Pearce et al, PASTE'04). Unlike many algorithms which interleave constraint generation and solving, constructing the callgraph as they go, this implementation for the most part observes a phase ordering (generation before solving), with only simple (copy) constraints being generated during solving. (The exception is reflection, which creates various constraints during solving as new types flow to reflect.Value operations.) This improves the traction of presolver optimisations, but imposes certain restrictions, e.g. potential context sensitivity is limited since all variants must be created a priori. A type is said to be "pointer-like" if it is a reference to an object. Pointer-like types include pointers and also interfaces, maps, channels, functions and slices. We occasionally use C's x->f notation to distinguish the case where x is a struct pointer from x.f where x is a struct value. Pointer analysis literature (and our comments) often uses the notation dst=*src+offset to mean something different than what it means in Go. It means: for each node index p in pts(src), the node index p+offset is in pts(dst). Similarly *dst+offset=src is used for store constraints and dst=src+offset for offset-address constraints. Nodes are the key data structure of the analysis, and have a dual role: they represent both constraint variables (equivalence classes of pointers) and members of points-to sets (things that can be pointed at, i.e. "labels"). Nodes are naturally numbered. The numbering enables compact representations of sets of nodes such as bitvectors (or BDDs); and the ordering enables a very cheap way to group related nodes together. For example, passing n parameters consists of generating n parallel constraints from caller+i to callee+i for 0<=i<n. The zero nodeid means "not a pointer". For simplicity, we generate flow constraints even for non-pointer types such as int. The pointer equivalence (PE) presolver optimization detects which variables cannot point to anything; this includes not only all variables of non-pointer types (such as int) but also variables of pointer-like types if they are always nil, or are parameters to a function that is never called. Each node represents a scalar part of a value or object. Aggregate types (structs, tuples, arrays) are recursively flattened out into a sequential list of scalar component types, and all the elements of an array are represented by a single node. (The flattening of a basic type is a list containing a single node.) Nodes are connected into a graph with various kinds of labelled edges: simple edges (or copy constraints) represent value flow. Complex edges (load, store, etc) trigger the creation of new simple edges during the solving phase. Conceptually, an "object" is a contiguous sequence of nodes denoting an addressable location: something that a pointer can point to.
The first node of an object has a non-nil obj field containing information about the allocation: its size, context, and ssa.Value. Objects include: Many objects have no Go types. For example, the func, map and chan type kinds in Go are all varieties of pointers, but their respective objects are actual functions (executable code), maps (hash tables), and channels (synchronized queues). Given the way we model interfaces, they too are pointers to "tagged" objects with no Go type. And an *ssa.Global denotes the address of a global variable, but the object for a Global is the actual data. So, the type of an ssa.Value that creates an object is "off by one indirection": a pointer to the object. The individual nodes of an object are sometimes referred to as "labels". For uniformity, all objects have a non-zero number of fields, even those of the empty type struct{}. (All arrays are treated as if of length 1, so there are no empty arrays. The empty tuple is never address-taken, so is never an object.) A tagged object has the following layout: The T node's typ field is the dynamic type of the "payload": the value v which follows, flattened out. The T node's obj has the otTagged flag. Tagged objects are needed when generalizing across types: interfaces, reflect.Values, reflect.Types. Each of these three types is modelled as a pointer that exclusively points to tagged objects. Tagged objects may be indirect (obj.flags ⊇ {otIndirect}) meaning that the value v is not of type T but *T; this is used only for reflect.Values that represent lvalues. (These are not implemented yet.) Variables of the following "scalar" types may be represented by a single node: basic types, pointers, channels, maps, slices, 'func' pointers, interfaces. Pointers: Nothing to say here, oddly. Basic types (bool, string, numbers, unsafe.Pointer): Currently all fields in the flattening of a type, including non-pointer basic types such as int, are represented in objects and values. Though non-pointer nodes within values are uninteresting, non-pointer nodes in objects may be useful (if address-taken) because they permit the analysis to deduce, in this example, that p points to s.x. If we ignored such object fields, we could only say that p points somewhere within s. All other basic types are ignored. Expressions of these types have zero nodeid, and fields of these types within aggregate types are omitted. unsafe.Pointers are not modelled as pointers, so a conversion of an unsafe.Pointer to *T is (unsoundly) treated equivalent to new(T). Channels: An expression of type 'chan T' is a kind of pointer that points exclusively to channel objects, i.e. objects created by MakeChan (or reflection). 'chan T' is treated like *T. *ssa.MakeChan is treated as equivalent to new(T). *ssa.Send and receive (*ssa.UnOp(ARROW)) are equivalent to store and load, respectively. Maps: An expression of type 'map[K]V' is a kind of pointer that points exclusively to map objects, i.e. objects created by MakeMap (or reflection). map[K]V is treated like *M where M = struct{k K; v V}. *ssa.MakeMap is equivalent to new(M). *ssa.MapUpdate is equivalent to *y=x where *y and x have type M. *ssa.Lookup is equivalent to y=x.v where x has type *M. Slices: A slice []T, which dynamically resembles a struct{array *T, len, cap int}, is treated as if it were just a *T pointer; the len and cap fields are ignored. *ssa.MakeSlice is treated like new([1]T): an allocation of a new array. *ssa.Index on a slice is equivalent to a load.
*ssa.IndexAddr on a slice returns the address of the sole element of the slice, i.e. the same address. *ssa.Slice is treated as a simple copy. Functions: An expression of type 'func...' is a kind of pointer that points exclusively to function objects. A function object has the following layout: There may be multiple function objects for the same *ssa.Function due to context-sensitive treatment of some functions. The first node is the function's identity node. Associated with every callsite is a special "targets" variable, whose pts() contains the identity node of each function to which the call may dispatch. Identity words are not otherwise used during the analysis, but we construct the call graph from the pts() solution for such nodes. The following block of contiguous nodes represents the flattened-out types of the parameters ("P-block") and results ("R-block") of the function object. The treatment of free variables of closures (*ssa.FreeVar) is like that of global variables; it is not context-sensitive. *ssa.MakeClosure instructions create copy edges to Captures. A Go value of type 'func' (i.e. a pointer to one or more functions) is a pointer whose pts() contains function objects. The valueNode() for an *ssa.Function returns a singleton for that function. Interfaces: An expression of type 'interface{...}' is a kind of pointer that points exclusively to tagged objects. All tagged objects pointed to by an interface are direct (the otIndirect flag is clear) and concrete (the tag type T is not itself an interface type). The associated ssa.Value for an interface's tagged objects may be an *ssa.MakeInterface instruction, or nil if the tagged object was created by an intrinsic (e.g. reflection). Constructing an interface value causes generation of constraints for all of the concrete type's methods; we can't tell a priori which ones may be called. TypeAssert y = x.(T) is implemented by a dynamic constraint triggered by each tagged object O added to pts(x): a typeFilter constraint if T is an interface type, or an untag constraint if T is a concrete type. A typeFilter tests whether O.typ implements T; if so, O is added to pts(y). An untagFilter tests whether O.typ is assignable to T, and if so, a copy edge O.v -> y is added. ChangeInterface is a simple copy because the representation of tagged objects is independent of the interface type (in contrast to the "method tables" approach used by the gc runtime). y := Invoke x.m(...) is implemented by allocating contiguous P/R blocks for the callsite and adding a dynamic rule triggered by each tagged object added to pts(x). The rule adds param/results copy edges to/from each discovered concrete method. (Q. Why do we model an interface as a pointer to a pair of type and value, rather than as a pair of a pointer to type and a pointer to value? A. Control-flow joins would merge interfaces ({T1}, {V1}) and ({T2}, {V2}) to make ({T1,T2}, {V1,V2}), leading to the infeasible and type-unsafe combination (T1,V2). Treating the value and its concrete type as inseparable makes the analysis type-safe.) Type parameters: Type parameters are not directly supported by the analysis. Calls to generic functions will be left as if they had empty bodies. Users of the package are expected to use the ssa.InstantiateGenerics builder mode when building code that uses or depends on code containing generics. reflect.Value: A reflect.Value is modelled very similarly to an interface{}, i.e. as a pointer exclusively to tagged objects, but with two generalizations. 1.
a reflect.Value that represents an lvalue points to an indirect (obj.flags ⊇ {otIndirect}) tagged object, which has a similar layout to a tagged object except that the value is a pointer to the dynamic type. Indirect tagged objects preserve the correct aliasing so that mutations made by (reflect.Value).Set can be observed. Indirect objects only arise when an lvalue is derived from an rvalue by indirection, e.g. the following code: Whether indirect or not, the concrete type of the tagged object corresponds to the user-visible dynamic type, and the existence of a pointer is an implementation detail. (NB: indirect tagged objects are not yet implemented) 2. The dynamic type tag of a tagged object pointed to by a reflect.Value may be an interface type; it need not be concrete. This arises in code such as this: pts(eface) is a singleton containing an interface{}-tagged object. That tagged object's payload is an interface{} value, i.e. the pts of the payload contains only concrete-tagged objects, although in this example it's the zero interface{} value, so its pts is empty. reflect.Type: Just as in the real "reflect" library, we represent a reflect.Type as an interface whose sole implementation is the concrete type, *reflect.rtype. (This choice is forced on us by go/types: clients cannot fabricate types with arbitrary method sets.) rtype instances are canonical: there is at most one per dynamic type. (rtypes are in fact large structs but since identity is all that matters, we represent them by a single node.) The payload of each *rtype-tagged object is an *rtype pointer that points to exactly one such canonical rtype object. We exploit this by setting the node.typ of the payload to the dynamic type, not '*rtype'. This saves us an indirection in each resolution rule. As an optimisation, *rtype-tagged objects are canonicalized too. Aggregate types: Aggregate types are treated as if all directly contained aggregates are recursively flattened out. Structs: The nodes of a struct consist of a special 'identity' node (whose type is that of the struct itself), followed by the nodes for all the struct's fields, recursively flattened out. A pointer to the struct is a pointer to its identity node. That node allows us to distinguish a pointer to a struct from a pointer to its first field. Field offsets are logical field offsets (plus one for the identity node), so the sizes of the fields can be ignored by the analysis. (The identity node is non-traditional but enables the distinction described above, which is valuable for code comprehension tools. Typical pointer analyses for C, whose purpose is compiler optimization, must soundly model unsafe.Pointer (void*) conversions, and this requires fidelity to the actual memory layout using physical field offsets.) *ssa.Field y = x.f creates a simple edge to y from x's node at f's offset. *ssa.FieldAddr y = &x->f requires a dynamic closure rule to create simple edges for each struct discovered in pts(x). Arrays: We model an array by an identity node (whose type is that of the array itself) followed by a node representing all the elements of the array; the analysis does not distinguish elements with different indices. Effectively, an array is treated like struct{elem T}, a load y=x[i] like y=x.elem, and a store x[i]=y like x.elem=y; the index i is ignored. A pointer to an array is a pointer to its identity node. (A slice is also a pointer to an array's identity node.)
The identity node allows us to distinguish a pointer to an array from a pointer to one of its elements, but it is rather costly because it introduces more offset constraints into the system. Furthermore, sound treatment of unsafe.Pointer would require us to dispense with this node. Arrays may be allocated by Alloc, by make([]T), by calls to append, and via reflection. Tuples (T, ...): Tuples are treated like structs with naturally numbered fields. *ssa.Extract is analogous to *ssa.Field. However, tuples have no identity field since by construction, they cannot be address-taken. There are three kinds of function call: Cases 1 and 2 apply equally to methods and standalone functions. Static calls: A static call consists of three steps: A static function call is little more than two struct value copies between the P/R blocks of caller and callee: Context sensitivity: Static calls (alone) may be treated context sensitively, i.e. each callsite may cause a distinct re-analysis of the callee, improving precision. Our current context-sensitivity policy treats all intrinsics and getter/setter methods in this manner since such functions are small and seem like an obvious source of spurious confluences, though this has not yet been evaluated. Dynamic function calls: Dynamic calls work in a similar manner except that the creation of copy edges occurs dynamically, in a similar fashion to a pair of struct copies in which the callee is indirect: (Recall that the function object's P- and R-blocks are contiguous.) Interface method invocation: For invoke-mode calls, we create a params/results block for the callsite and attach a dynamic closure rule to the interface. For each new tagged object that flows to the interface, we look up the concrete method, find its function object, and connect its P/R blocks to the callsite's P/R blocks, adding copy edges to the graph during solving. Recording call targets: The analysis notifies its clients of each callsite it encounters, passing a CallSite interface. Among other things, the CallSite contains a synthetic constraint variable ("targets") whose points-to solution includes the set of all function objects to which the call may dispatch. It is via this mechanism that the callgraph is made available. Clients may also elect to be notified of callgraph edges directly; internally this just iterates all "targets" variables' pts(·)s. We implement Hash-Value Numbering (HVN), a pre-solver constraint optimization described in Hardekopf & Lin, SAS'07. This is documented in more detail in hvn.go. We intend to add its cousins HR and HU in future. The solver is currently a naive Andersen-style implementation; it does not perform online cycle detection, though we plan to add solver optimisations such as Hybrid- and Lazy-Cycle Detection from (Hardekopf & Lin, PLDI'07). It uses difference propagation (Pearce et al, SQC'04) to avoid redundant re-triggering of closure rules for values already seen. Points-to sets are represented using sparse bit vectors (similar to those used in LLVM and gcc), which are more space- and time-efficient than sets based on Go's built-in map type or dense bit vectors. Nodes are permuted prior to solving so that object nodes (which may appear in points-to sets) are lower numbered than non-object (var) nodes. This improves the density of the set over which the PTSs range, and thus the efficiency of the representation. Partly thanks to avoiding map iteration, the execution of the solver is 100% deterministic, a great help during debugging. Andersen, L. O. 1994.
Program analysis and specialization for the C programming language. Ph.D. dissertation. DIKU, University of Copenhagen. David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Efficient field-sensitive pointer analysis for C. In Proceedings of the 5th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering (PASTE '04). ACM, New York, NY, USA, 37-42. http://doi.acm.org/10.1145/996821.996835 David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Online Cycle Detection and Difference Propagation: Applications to Pointer Analysis. Software Quality Control 12, 4 (December 2004), 311-337. http://dx.doi.org/10.1023/B:SQJO.0000039791.93071.a2 David Grove and Craig Chambers. 2001. A framework for call graph construction algorithms. ACM Trans. Program. Lang. Syst. 23, 6 (November 2001), 685-746. http://doi.acm.org/10.1145/506315.506316 Ben Hardekopf and Calvin Lin. 2007. The ant and the grasshopper: fast and accurate pointer analysis for millions of lines of code. In Proceedings of the 2007 ACM SIGPLAN conference on Programming language design and implementation (PLDI '07). ACM, New York, NY, USA, 290-299. http://doi.acm.org/10.1145/1250734.1250767 Ben Hardekopf and Calvin Lin. 2007. Exploiting pointer and location equivalence to optimize pointer analysis. In Proceedings of the 14th international conference on Static Analysis (SAS'07), Hanne Riis Nielson and Gilberto Filé (Eds.). Springer-Verlag, Berlin, Heidelberg, 265-280. Atanas Rountev and Satish Chandra. 2000. Off-line variable substitution for scaling points-to analysis. In Proceedings of the ACM SIGPLAN 2000 conference on Programming language design and implementation (PLDI '00). ACM, New York, NY, USA, 47-56. DOI=10.1145/349299.349310 http://doi.acm.org/10.1145/349299.349310 This program demonstrates how to use the pointer analysis to obtain a conservative call-graph of a Go program. It also shows how to compute the points-to set of a variable, in this case, (C).f's ch parameter.
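A hedged sketch of such a program, assuming the golang.org/x/tools loader and SSA packages (the load pattern and package path are illustrative):

    package main

    import (
        "fmt"
        "log"

        "golang.org/x/tools/go/packages"
        "golang.org/x/tools/go/pointer"
        "golang.org/x/tools/go/ssa"
        "golang.org/x/tools/go/ssa/ssautil"
    )

    func main() {
        // Load the whole program with syntax and type information.
        pkgs, err := packages.Load(&packages.Config{Mode: packages.LoadAllSyntax}, "./...")
        if err != nil {
            log.Fatal(err)
        }
        // Build SSA-form IR, which the analysis requires.
        prog, ssaPkgs := ssautil.AllPackages(pkgs, ssa.InstantiateGenerics)
        prog.Build()

        result, err := pointer.Analyze(&pointer.Config{
            Mains:          ssautil.MainPackages(ssaPkgs),
            BuildCallGraph: true, // request the conservative call graph
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("call graph nodes:", len(result.CallGraph.Nodes))
    }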
Package grpcreplay supports the capture and replay of gRPC calls. Its main goal is to improve testing. Once you capture the calls of a test that runs against a real service, you have an "automatic mock" that can be replayed against the same test, yielding a unit test that is fast and flake-free. To record a sequence of gRPC calls to a file, create a Recorder and pass its DialOptions to grpc.Dial: It is essential to close the Recorder when the interaction is finished. There is also a NewRecorderWriter function for capturing to an arbitrary io.Writer. To replay a captured file, create a Replayer and ask it for a (fake) connection. We don't actually have to dial a server. (Since we're reading the file and not writing it, we don't have to be as careful about the error returned from Close). A test might use random or time-sensitive values, for instance to create unique resources for isolation from other tests. The test therefore has initial values, such as the current time, or a random seed, that differ from run to run. You must record this initial state and re-establish it on replay. To record the initial state, serialize it into a []byte and pass it as the second argument to NewRecorder: On replay, get the bytes from Replayer.Initial: Recorders and replayers have support for running callbacks before messages are written to or read from the replay file. A Recorder has a BeforeFunc that can modify a request or response before it is written to the replay file. The actual RPCs sent to the service during recording remain unaltered; only what is saved in the replay file can be changed. A Replayer has a BeforeFunc that can modify a request before it is sent for matching. Example uses for these callbacks include customized logging, or scrubbing data before RPCs are written to the replay file. If requests are modified by the callbacks during recording, it is important to perform the same modifications to the requests when replaying, or RPC matching on replay will fail. A common way to analyze and modify the various messages is to use a type switch. A nondeterministic program may invoke RPCs in a different order each time it is run. The order in which RPCs are called during recording may differ from the order during replay. The replayer matches incoming to recorded requests by method name and request contents, so nondeterminism is only a concern for identical requests that result in different responses. A nondeterministic program whose behavior differs depending on the order of such RPCs probably has a race condition: since both the recorded sequence of RPCs and the sequence during replay are valid orderings, the program should behave the same under both. The same is not true of streaming RPCs. The replayer matches streams only by method name, since it has no other information at the time the stream is opened. Two streams with the same method name that are started concurrently may replay in the wrong order. Besides the differences in replay mentioned above, other differences may cause issues for some programs. We list them here. The Replayer delivers a response to an RPC immediately, without waiting for other incoming RPCs. This can violate causality. For example, in a Pub/Sub program where one goroutine publishes and another subscribes, during replay the Subscribe call may finish before the Publish call begins. For streaming RPCs, the Replayer delivers the result of Send and Recv calls in the order they were recorded. No attempt is made to match message contents. 
At present, this package does not record or replay stream headers and trailers, or the result of the CloseSend method.
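A hedged sketch of the record and replay wiring described above (the file name and address are invented; exact signatures may differ slightly by version):

    // Recording: capture the RPCs made over conn into service.replay.
    rec, err := grpcreplay.NewRecorder("service.replay", nil) // nil: no initial state
    if err != nil {
        log.Fatal(err)
    }
    conn, err := grpc.Dial(addr, rec.DialOptions()...)
    // ... run the test against the real service ...
    if err := rec.Close(); err != nil { // essential: flushes the replay file
        log.Fatal(err)
    }

    // Replay: no server is needed; the Replayer hands out a fake connection.
    rep, err := grpcreplay.NewReplayer("service.replay")
    if err != nil {
        log.Fatal(err)
    }
    defer rep.Close()
    conn, err = rep.Connection()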
Package demangle defines functions that demangle GCC/LLVM C++ and Rust symbol names. This package recognizes names that were mangled according to the C++ ABI defined at http://codesourcery.com/cxx-abi/ and the Rust ABI defined at https://rust-lang.github.io/rfcs/2603-rust-symbol-name-mangling-v0.html Most programs will want to call Filter or ToString.
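For example, a hedged sketch using those two entry points (the mangled name is just a sample):

    import (
        "fmt"

        "github.com/ianlancetaylor/demangle"
    )

    func main() {
        mangled := "_ZNSt6vectorIiSaIiEE9push_backEOi" // a sample C++ mangled name
        // ToString reports an error for names it cannot demangle.
        if s, err := demangle.ToString(mangled); err == nil {
            fmt.Println(s)
        }
        // Filter returns its input unchanged when demangling fails.
        fmt.Println(demangle.Filter(mangled))
    }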
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross-Field and Cross-Struct validation for nested structs and has the ability to dive into arrays and maps of any type. See more examples: https://github.com/go-playground/validator/tree/v9/_examples Doing things this way is actually how the standard library does it; see the os.Open function here: The authors return type "error" to avoid the issue discussed in the following, where err is always != nil: Validator returns only InvalidValidationError for bad validation input, or nil or ValidationErrors as type error; so, in your code all you need to do is check if the error returned is not nil, and if it's not, check if the error is InvalidValidationError (if necessary; most of the time it isn't) and type-cast it to type ValidationErrors like so: err.(validator.ValidationErrors). Custom Validation functions can be added; see the registration sketch at the end of these notes. Example: Cross-Field Validation can be done via the following tags: If, however, some custom cross-field validation is required, it can be done using a custom validation. Why not just have cross-fields validation tags (i.e. only eqcsfield and not eqfield)? The reason is efficiency. If you want to check a field within the same struct, "eqfield" only has to find the field on the same struct (1 level). But, if we used "eqcsfield" it could be multiple levels down. Example: Multiple validators on a field will process in the order defined. Example: Bad Validator definitions are not handled by the library. Example: Baked In Cross-Field validation only compares fields on the same struct. If Cross-Field + Cross-Struct validation is needed you should implement your own custom validator. Comma (",") is the default separator of validation tags. If you wish to have a comma included within the parameter (i.e. excludesall=,) you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the above will become excludesall=0x2C. Pipe ("|") is the 'or' validation tag separator. If you wish to have a pipe included within the parameter, i.e. excludesall=|, you will need to use the UTF-8 hex representation 0x7C, which is replaced in the code as a pipe, so the above will become excludesall=0x7C. Here is a list of the current built in validators: Tells the validation to skip this struct field; this is particularly handy in ignoring embedded structs from being validated. (Usage: -) This is the 'or' operator allowing multiple validators to be used and accepted. (Usage: rgb|rgba) <-- this would allow either rgb or rgba colors to be accepted. This can also be combined with 'and', for example (Usage: omitempty,rgb|rgba). When a field that is a nested struct is encountered and contains this flag, any validation on the nested struct will be run, but none of the nested struct fields will be validated. This is useful if inside of your program you know the struct will be valid, but need to verify it has been assigned. NOTE: only "required" and "omitempty" can be used on a struct itself. Same as structonly tag except that any struct level validations will not run. Allows conditional validation, for example if a field is not set with a value (determined by the "required" validator) then other validation such as min or max won't run, but if a value is set validation will run. This tells the validator to dive into a slice, array or map and validate that level of the slice, array or map with the validation tags that follow.
Multidimensional nesting is also supported; each level you wish to dive will require another dive tag. dive has some sub-tags, 'keys' & 'endkeys', please see the Keys & EndKeys section just below. Example #1 Example #2 Keys & EndKeys These are to be used together directly after the dive tag and tell the validator that anything between 'keys' and 'endkeys' applies to the keys of a map and not the values; think of it like the 'dive' tag, but for map keys instead of values. Multidimensional nesting is also supported; each level you wish to validate will require another 'keys' and 'endkeys' tag. These tags are only valid for maps. (A struct-tag sketch for dive, keys and gtfield appears after this list of validators.) Example #1 Example #2 This validates that the value is not the data type's default zero value. For numbers ensures value is not zero. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. The field under validation must be present and not empty only if any of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only if all of the other specified fields are present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: The field under validation must be present and not empty only when any of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Examples: The field under validation must be present and not empty only when all of the other specified fields are not present. For strings ensures value is not "". For slices, maps, pointers, interfaces, channels and functions ensures the value is not nil. Example: This validates that the value is the default value and is almost the opposite of required. For numbers, length will ensure that the value is equal to the parameter given. For strings, it checks that the string length is exactly that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, max will ensure that the value is less than or equal to the parameter given. For strings, it checks that the string length is at most that number of characters. For slices, arrays, and maps, validates the number of items. For numbers, min will ensure that the value is greater than or equal to the parameter given. For strings, it checks that the string length is at least that number of characters. For slices, arrays, and maps, validates the number of items. For strings & numbers, eq will ensure that the value is equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings & numbers, ne will ensure that the value is not equal to the parameter given. For slices, arrays, and maps, validates the number of items. For strings, ints, and uints, oneof will ensure that the value is one of the values in the parameter. The parameter should be a list of values separated by whitespace. Values may be strings or numbers. For numbers, this will ensure that the value is greater than the parameter given. For strings, it checks that the string length is greater than that number of characters. For slices, arrays and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than time.Now.UTC().
Same as 'min' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is greater than or equal to time.Now.UTC(). For numbers, this will ensure that the value is less than the parameter given. For strings, it checks that the string length is less than that number of characters. For slices, arrays, and maps it validates the number of items. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than time.Now.UTC(). Same as 'max' above. Kept both to make terminology with 'len' easier. Example #1 Example #2 (time.Time) For time.Time ensures the time value is less than or equal to time.Now.UTC(). This will validate the field value against another field's value either within a struct or a passed-in field. Example #1: Example #2: Field Equals Another Field (relative) This does the same as eqfield except that it validates the field provided relative to the top level struct. This will validate the field value against another field's value either within a struct or a passed-in field. Examples: Field Does Not Equal Another Field (relative) This does the same as nefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validating a Start and End date (see the struct sketch after this list of validators): Example #1: Example #2: This does the same as gtfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validating a Start and End date: Example #1: Example #2: This does the same as gtefield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validating a Start and End date: Example #1: Example #2: This does the same as ltfield except that it validates the field provided relative to the top level struct. Only valid for Numbers and time.Time types, this will validate the field value against another field's value either within a struct or a passed-in field. Usage examples are for validating a Start and End date: Example #1: Example #2: This does the same as ltefield except that it validates the field provided relative to the top level struct. This does the same as contains except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. This does the same as excludes except for struct fields. It should only be used with string types. See the behavior of reflect.Value.String() for behavior on other types. For arrays & slices, unique will ensure that there are no duplicates. For maps, unique will ensure that there are no duplicate values. For slices of struct, unique will ensure that there are no duplicate values in a field of the struct specified via a parameter.
This validates that a string value contains ASCII alpha characters only. This validates that a string value contains ASCII alphanumeric characters only. This validates that a string value contains unicode alpha characters only. This validates that a string value contains unicode alphanumeric characters only. This validates that a string value contains a basic numeric value; basic excludes exponents etc. For integers or floats it returns true. This validates that a string value contains a valid hexadecimal. This validates that a string value contains a valid hex color including hashtag (#). This validates that a string value contains a valid rgb color. This validates that a string value contains a valid rgba color. This validates that a string value contains a valid hsl color. This validates that a string value contains a valid hsla color. This validates that a string value contains a valid email. This may not conform to all possibilities of any RFC standard, but neither does any email provider accept all possibilities. This validates that a string value contains a valid file path and that the file exists on the machine. This is done using os.Stat, which is a platform-independent function. This validates that a string value contains a valid url. This will accept any url the golang request uri accepts but it must contain a scheme, for example http:// or rtmp://. This validates that a string value contains a valid uri. This will accept any uri the golang request uri accepts. This validates that a string value contains a valid URN according to the RFC 2141 spec. This validates that a string value contains a valid base64 value. Although an empty string is valid base64, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid base64 URL safe value according to the RFC4648 spec. Although an empty string is a valid base64 URL safe value, this will report an empty string as an error; if you wish to accept an empty string as valid you can use this with the omitempty tag. This validates that a string value contains a valid bitcoin address. The format of the string is checked to ensure it matches one of the formats P2PKH or P2SH, and performs checksum validation. Bitcoin Bech32 Address (segwit) This validates that a string value contains a valid bitcoin Bech32 address as defined by bip-0173 (https://github.com/bitcoin/bips/blob/master/bip-0173.mediawiki) Special thanks to Pieter Wuille for providing reference implementations. This validates that a string value contains a valid ethereum address. The format of the string is checked to ensure it matches the standard Ethereum address format. Full validation is blocked by https://github.com/golang/crypto/pull/28 This validates that a string value contains the substring value. This validates that a string value contains any Unicode code points in the substring value. This validates that a string value contains the supplied rune value. This validates that a string value does not contain the substring value. This validates that a string value does not contain any Unicode code points in the substring value. This validates that a string value does not contain the supplied rune value. This validates that a string value starts with the supplied string value. This validates that a string value ends with the supplied string value. This validates that a string value contains a valid isbn10 or isbn13 value.
This validates that a string value contains a valid isbn10 value. This validates that a string value contains a valid isbn13 value. This validates that a string value contains a valid UUID. Uppercase UUID values will not pass - use `uuid_rfc4122` instead. This validates that a string value contains a valid version 3 UUID. Uppercase UUID values will not pass - use `uuid3_rfc4122` instead. This validates that a string value contains a valid version 4 UUID. Uppercase UUID values will not pass - use `uuid4_rfc4122` instead. This validates that a string value contains a valid version 5 UUID. Uppercase UUID values will not pass - use `uuid5_rfc4122` instead. This validates that a string value contains only ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains only printable ASCII characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains one or more multibyte characters. NOTE: if the string is blank, this validates as true. This validates that a string value contains a valid DataURI. NOTE: this will also validate that the data portion is valid base64. This validates that a string value contains a valid latitude. This validates that a string value contains a valid longitude. This validates that a string value contains a valid U.S. Social Security Number. This validates that a string value contains a valid IP Address. This validates that a string value contains a valid v4 IP Address. This validates that a string value contains a valid v6 IP Address. This validates that a string value contains a valid CIDR Address. This validates that a string value contains a valid v4 CIDR Address. This validates that a string value contains a valid v6 CIDR Address. This validates that a string value contains a valid resolvable TCP Address. This validates that a string value contains a valid resolvable v4 TCP Address. This validates that a string value contains a valid resolvable v6 TCP Address. This validates that a string value contains a valid resolvable UDP Address. This validates that a string value contains a valid resolvable v4 UDP Address. This validates that a string value contains a valid resolvable v6 UDP Address. This validates that a string value contains a valid resolvable IP Address. This validates that a string value contains a valid resolvable v4 IP Address. This validates that a string value contains a valid resolvable v6 IP Address. This validates that a string value contains a valid Unix Address. This validates that a string value contains a valid MAC Address. Note: See Go's ParseMAC for accepted formats and types: This validates that a string value is a valid Hostname according to RFC 952 https://tools.ietf.org/html/rfc952 This validates that a string value is a valid Hostname according to RFC 1123 https://tools.ietf.org/html/rfc1123 Fully Qualified Domain Name (FQDN) This validates that a string value contains a valid FQDN. This validates that a string value appears to be an HTML element tag including those described at https://developer.mozilla.org/en-US/docs/Web/HTML/Element This validates that a string value is a proper character reference in decimal or hexadecimal format. This validates that a string value is percent-encoded (URL encoded) according to https://tools.ietf.org/html/rfc3986#section-2.1 This validates that a string value contains a valid directory and that it exists on the machine. This is done using os.Stat, which is a platform-independent function.
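To make some of the tags above concrete, here is a hedged sketch of struct tags combining dive, keys/endkeys and gtfield (the type and field names are invented for illustration):

    type Booking struct {
        // End must be strictly after Start: gtfield compares against another field.
        Start time.Time `validate:"required"`
        End   time.Time `validate:"required,gtfield=Start"`
        // dive: each element of the slice is validated with the tags that follow.
        Tags []string `validate:"dive,min=1"`
        // keys ... endkeys applies to the map's keys; the trailing tags apply to its values.
        Labels map[string]string `validate:"dive,keys,alpha,endkeys,required"`
    }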
NOTE: When returning an error, the tag returned in "FieldError" will be the alias tag unless the dive tag is part of the alias. Everything after the dive tag is not reported as the alias tag. Also, the "ActualTag" in the before case will be the actual tag within the alias that failed. Here is a list of the current built in alias tags: Validator notes: A collection of validation rules that are frequently needed but are more complex than the ones found in the baked in validators. A non-standard validator must be registered manually like you would with your own custom validation functions. Example of registration and use: Here is a list of the current non-standard validators: This package panics when bad input is provided; this is by design: bad code like that should not make it to production.
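As a hedged sketch of both registration paths mentioned above, a custom function and a non-standard validator (the "is-awesome" tag is invented, and the v9 import layout is assumed):

    import (
        validator "gopkg.in/go-playground/validator.v9"
        "gopkg.in/go-playground/validator.v9/non-standard/validators"
    )

    func newValidate() (*validator.Validate, error) {
        v := validator.New()
        // A custom validation function: it reports whether the field passes.
        if err := v.RegisterValidation("is-awesome", func(fl validator.FieldLevel) bool {
            return fl.Field().String() == "awesome"
        }); err != nil {
            return nil, err
        }
        // A non-standard validator is registered the same way.
        if err := v.RegisterValidation("notblank", validators.NotBlank); err != nil {
            return nil, err
        }
        return v, nil
    }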
Package aw is a "plug-and-play" workflow development library/framework for Alfred 3 & 4 (https://www.alfredapp.com/). It requires Go 1.13 or later. It provides everything you need to create a polished and blazing-fast Alfred frontend for your project. As of AwGo 0.26, all applicable features of Alfred 4.1 are supported. The main features are: AwGo is an opinionated framework that expects to be used in a certain way in order to eliminate boilerplate. It *will* panic if not run in a valid, minimally Alfred-like environment. At a minimum the following environment variables should be set to meaningful values: NOTE: AwGo is currently in development. The API *will* change and should not be considered stable until v1.0. Until then, be sure to pin a version using go modules or similar. Be sure to also check out the _examples/ subdirectory, which contains some simple, but complete, workflows that demonstrate the features of AwGo and useful workflow idioms. Typically, you'd call your program's main entry point via Workflow.Run(). This way, the library will rescue any panic, log the stack trace and show an error message to the user in Alfred. In the Script box (Language = "/bin/bash"): To generate results for Alfred to show in a Script Filter, use the feedback API of Workflow: You can set workflow variables (via feedback) with Workflow.Var, Item.Var and Modifier.Var. See Workflow.SendFeedback for more documentation. Alfred requires a different JSON format if you wish to set workflow variables. Use the ArgVars (named for its equivalent element in Alfred) struct to generate output from Run Script actions. Be sure to set TextErrors to true to prevent Workflow from generating Alfred JSON if it catches a panic: See ArgVars for more information. New() creates a *Workflow using the default values and workflow settings read from environment variables set by Alfred. You can change defaults by passing one or more Options to New(). If you do not want to use Alfred's environment variables, or they aren't set (i.e. you're not running the code in Alfred), use NewFromEnv() with a custom Env implementation. A Workflow can be re-configured later using its Configure() method. See the documentation for Option for more information on configuring a Workflow. AwGo can check for and install new versions of your workflow. Subpackage update provides an implementation of the Updater interface and sources to load updates from GitHub or Gitea releases, or from the URL of an Alfred `metadata.json` file. See subpackage update and _examples/update. AwGo can filter Script Filter feedback using a Sublime Text-like fuzzy matching algorithm. Workflow.Filter() sorts feedback Items against the provided query, removing those that do not match. See _examples/fuzzy for a basic demonstration, and _examples/bookmarks for a demonstration of implementing fuzzy.Sortable on your own structs and customising the fuzzy sort settings. Fuzzy matching is done by package https://godoc.org/go.deanishe.net/fuzzy AwGo automatically configures the default log package to write to STDERR (Alfred's debugger) and a log file in the workflow's cache directory. The log file is necessary because background processes aren't connected to Alfred, so their output is only visible in the log. It is rotated when it exceeds 1 MiB in size. One previous log is kept. AwGo detects when Alfred's debugger is open (Workflow.Debug() returns true) and in this case prepends filename:linenumber: to log messages. 
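A minimal sketch of that entry-point pattern, using the API named above (New, NewItem, SendFeedback, Run):

    var wf *aw.Workflow

    func init() {
        wf = aw.New() // reads the Alfred environment variables described above
    }

    func run() {
        // Emit one result for Alfred's Script Filter.
        wf.NewItem("Hello from AwGo").Valid(true)
        wf.SendFeedback()
    }

    func main() {
        wf.Run(run) // rescues panics and shows them as errors in Alfred
    }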
The Config struct (which is included in Workflow as Workflow.Config) provides an interface to the workflow's settings from the Workflow Environment Variables panel (see https://www.alfredapp.com/help/workflows/advanced/variables/#environment). Alfred exports these settings as environment variables, and you can read them ad-hoc with the Config.Get*() methods, and save values back to Alfred/info.plist with Config.Set(). Using Config.To() and Config.From(), you can "bind" your own structs to the settings in Alfred: See the documentation for Config.To and Config.From for more information, and _examples/settings for a demo workflow based on the API. The Alfred struct provides methods for the rest of Alfred's AppleScript API. Amongst other things, you can use it to tell Alfred to open, to search for a query, to browse/action files & directories, or to run External Triggers. See documentation of the Alfred struct for more information. AwGo provides a basic, but useful, API for loading and saving data. In addition to reading/writing bytes and marshalling/unmarshalling to/from JSON, the API can auto-refresh expired cache data. See Cache and Session for the API documentation. Workflow has three caches tied to different directories: These all share (almost) the same API. The difference is in when the data go away. Data saved with Session are deleted after the user closes Alfred or starts using a different workflow. The Cache directory is in a system cache directory, so may be deleted by the system or "system maintenance" tools. The Data directory lives with Alfred's application data and would not normally be deleted. Subpackage util provides several functions for running script files and snippets of AppleScript/JavaScript code. See util for documentation and examples. AwGo offers a simple API to start/stop background processes via Workflow's RunInBackground(), IsRunning() and Kill() methods. This is useful for running checks for updates and other jobs that hit the network or take a significant amount of time to complete, allowing you to keep your Script Filters extremely responsive. See _examples/update and _examples/workflows for demonstrations of this API.
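As a hedged sketch of the Config.To binding described above (the struct fields and variable names are invented):

    type settings struct {
        Hostname string `env:"HOSTNAME"`
        Port     int    `env:"PORT"`
    }

    var s settings
    // Populate s from the workflow's configuration environment variables.
    if err := wf.Config.To(&s); err != nil {
        wf.FatalError(err)
    }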
Package stomp provides operations that allow communication with a message broker that supports the STOMP protocol. STOMP is the Streaming Text-Oriented Messaging Protocol. See http://stomp.github.com/ for more details. This package provides support for all STOMP protocol features in the STOMP protocol specifications, versions 1.0, 1.1 and 1.2. These features include protocol negotiation, heart-beating, value encoding, and graceful shutdown. Connecting to a STOMP server is achieved using the stomp.Dial function, or the stomp.Connect function. See the examples section for a summary of how to use these functions. Both functions return a stomp.Conn object for subsequent interaction with the STOMP server. Once a connection (stomp.Conn) is created, it can be used to send messages to the STOMP server, or create subscriptions for receiving messages from the STOMP server. Transactions can be created to send multiple messages and/or acknowledge multiple received messages from the server in one atomic transaction. The examples section has examples of using subscriptions and transactions. The client program can instruct the stomp.Conn to gracefully disconnect from the STOMP server using the Disconnect method. This will perform a graceful shutdown sequence as specified in the STOMP specification. Source code and other details for the project are available at GitHub:
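A hedged sketch of a connect/send/subscribe round trip (the broker address and destination are invented; the examples section has the canonical versions):

    conn, err := stomp.Dial("tcp", "localhost:61613")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Disconnect() // graceful shutdown per the STOMP specification

    if err := conn.Send("/queue/test", "text/plain", []byte("hello")); err != nil {
        log.Fatal(err)
    }

    sub, err := conn.Subscribe("/queue/test", stomp.AckAuto)
    if err != nil {
        log.Fatal(err)
    }
    msg := <-sub.C // receive one message from the subscription channel
    fmt.Println(string(msg.Body))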
Package bugsnag captures errors in real-time and reports them to BugSnag (http://bugsnag.com). Using bugsnag-go is a three-step process. 1. As early as possible in your program configure the notifier with your APIKey. This sets up handling of panics that would otherwise crash your app. 2. Add bugsnag to places that already catch panics. For example, you should add it to the HTTP server when you call ListenAndServe: If that's not possible, you can also wrap each HTTP handler manually: 3. To notify BugSnag of an error that is not a panic, pass it to bugsnag.Notify. This will also log the error message using the configured Logger. For detailed integration instructions see https://docs.bugsnag.com/platforms/go. The only required configuration is the BugSnag API key which can be obtained by clicking "Project Settings" on the top of your BugSnag dashboard after signing up. We also recommend you set the ReleaseStage, AppType, and AppVersion if these make sense for your deployment workflow. If you need to attach extra data to BugSnag events, you can do that using the rawData mechanism. Most of the functions that send errors to BugSnag allow you to pass in any number of interface{} values as rawData. The rawData can consist of the Severity, Context, User or MetaData types listed below, and there is also builtin support for *http.Requests. If you want to add custom tabs to your bugsnag dashboard you can pass any value in as rawData, and then process it into the event's metadata using a bugsnag.OnBeforeNotify() hook. If necessary you can pass Configuration in as rawData, or modify the Configuration object passed into OnBeforeNotify hooks. Configuration passed in this way only affects the current notification.
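A hedged sketch of the three steps (the API key, port, and doSomething are placeholders):

    // Step 1: configure the notifier as early as possible.
    bugsnag.Configure(bugsnag.Configuration{
        APIKey:       "YOUR-API-KEY-HERE",
        ReleaseStage: "production",
    })

    // Step 3: report a non-panic error explicitly (doSomething is a stand-in).
    if err := doSomething(); err != nil {
        bugsnag.Notify(err)
    }

    // Step 2: wrap the HTTP server so panics in handlers are reported.
    http.ListenAndServe(":8080", bugsnag.Handler(nil))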
Package graphql-go-tools is a library for creating GraphQL services using the Go programming language. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools. Source: https://graphql.org This library is intended to be a set of low-level building blocks for writing high-performance and secure GraphQL applications. Use cases could range from writing layer seven GraphQL proxies, firewalls, caches, etc. You would usually not use this library to write a GraphQL server yourself but to build tools for the GraphQL ecosystem. To achieve this goal the library has zero dependencies at its core functionality. It has a full implementation of the GraphQL AST and supports lexing, parsing, validation, normalization, introspection, query planning as well as query execution. With the execution package it's possible to write a fully functional GraphQL server that is capable of mediating between various protocols and formats. In its current state you can use the following DataSources to resolve fields: - Static data (embed static data into a schema to extend a field in a simple way) - HTTP JSON APIs (combine multiple RESTful APIs into one single GraphQL Endpoint, nesting is possible) - GraphQL APIs (you can combine multiple GraphQL APIs into one single GraphQL Endpoint, nesting is possible) - Webassembly/WASM Lambdas (e.g. resolve a field using a Rust lambda) If you're looking for a ready to use solution that has all this functionality packaged as a Gateway have a look at: https://wundergraph.com Created by Jens Neuse
Package ora implements an Oracle database driver. ### Golang Oracle Database Driver ### #### TL;DR; just use it #### Call stored procedure with OUT parameters: An Oracle database may be accessed through the database/sql (http://golang.org/pkg/database/sql) package or through the ora package directly. database/sql offers connection pooling, thread safety, a consistent API to multiple database technologies and a common set of Go types. The ora package offers additional features including pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. The ora package is written with the Oracle Call Interface (OCI) C-language libraries provided by Oracle. The OCI libraries are a standard for client application communication and driver communication with Oracle databases. The ora package has been verified to work with: * Oracle Standard 11g (11.2.0.4.0), Linux x86_64 (RHEL6) * Oracle Enterprise 12c (12.1.0.1.0), Windows 8.1 and AMD64. --- * [Installation](https://github.com/rana/ora#installation) * [Data Types](https://github.com/rana/ora#data-types) * [SQL Placeholder Syntax](https://github.com/rana/ora#sql-placeholder-syntax) * [Working With The Sql Package](https://github.com/rana/ora#working-with-the-sql-package) * [Working With The Oracle Package Directly](https://github.com/rana/ora#working-with-the-oracle-package-directly) * [Logging](https://github.com/rana/ora#logging) * [Test Database Setup](https://github.com/rana/ora#test-database-setup) * [Limitations](https://github.com/rana/ora#limitations) * [License](https://github.com/rana/ora#license) * [API Reference](http://godoc.org/github.com/rana/ora#pkg-index) * [Examples](./examples) --- Minimum requirements are Go 1.3 with CGO enabled, a GCC C compiler, and Oracle 11g (11.2.0.4.0) or Oracle Instant Client (11.2.0.4.0). Install Oracle or Oracle Instant Client. Copy the [oci8.pc](contrib/oci8.pc) from the `contrib` folder (or the one for your system, maybe tailored to your specific locations) to a folder in `$PKG_CONFIG_PATH` or a system folder, such as The ora package has no external Go dependencies and is available on GitHub and gopkg.in: *WARNING*: If you have Oracle Instant Client 11.2, you'll need to add "-lnnz11" to the list of linked libs! Otherwise, you may encounter "undefined reference to `nzosSCSP_SetCertSelectionParams' " errors. Oracle Instant Client 12.1 does not need this. The ora package supports all built-in Oracle data types. The supported Oracle built-in data types are NUMBER, BINARY_DOUBLE, BINARY_FLOAT, FLOAT, DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, CHAR, NCHAR, VARCHAR, VARCHAR2, NVARCHAR2, LONG, CLOB, NCLOB, BLOB, LONG RAW, RAW, ROWID and BFILE. SYS_REFCURSOR is also supported. Oracle does not provide a built-in boolean type. Oracle provides a single-byte character type. A common practice is to define two single-byte characters which represent true and false. The ora package adopts this approach. The ora package associates a Go bool value to a Go rune and sends and receives the rune to a CHAR(1 BYTE) column or CHAR(1 CHAR) column. The default false rune is zero '0'. The default true rune is one '1'. The bool rune association may be configured or disabled when directly using the ora package but not with the database/sql package.
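As a hedged sketch of the stored-procedure OUT parameter call mentioned at the top of this package's documentation, using the ora-direct API (the PL/SQL block and procedure name are invented; a pointer argument is bound as an OUT parameter):

    var id int64
    if _, err := ses.PrepAndExe("BEGIN my_pkg.next_id(:1); END;", &id); err != nil {
        log.Fatal(err)
    }
    fmt.Println("out value:", id)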
Within a SQL string a placeholder may be specified to indicate where a Go variable is placed. The SQL placeholder is an Oracle identifier, from 1 to 30 characters, prefixed with a colon (:). For example: Placeholders within a SQL statement are bound by position. The actual name is not used by the ora package driver, e.g., placeholder names :c1, :1, or :xyz are treated equally. The `database/sql` package provides a LastInsertId method to return the last inserted row's id. Oracle does not provide such functionality, but if you append `... RETURNING col /*LastInsertId*/` to your SQL, then it will be presented as LastInsertId. Note that you have to mark your `RETURNING` part with `/*LastInsertId*/` (case insensitive) to allow ora to return the last column as `LastInsertId()`. That column must fit in `int64`, though! You may access an Oracle database through the database/sql package. The database/sql package offers a consistent API across different databases, connection pooling, thread safety and a set of common Go types. database/sql makes working with Oracle straightforward. The ora package implements interfaces in the database/sql/driver package, enabling database/sql to communicate with an Oracle database. Using database/sql ensures you never have to call the ora package directly. When using database/sql, the mapping between Go types and Oracle types may be changed slightly. The database/sql package has strict expectations on Go return types. The Go-to-Oracle type mapping for database/sql is: The "ora" driver is automatically registered for use with sql.Open, but you can call ora.SetCfg to set the configuration options used, including statement configuration and Rset configuration. When configuring the driver for use with database/sql, keep in mind that database/sql has strict Go type-to-Oracle type mapping expectations. The ora package allows programming with pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. When working with the ora package directly, the API is slightly different from database/sql. When using the ora package directly, the mapping between Go types and Oracle types may be changed. The Go-to-Oracle type mapping for the ora package is: An example of using the ora package directly: Pointers may be used to capture out-bound values from a SQL statement such as an insert or stored procedure call. For example, a numeric pointer captures an identity value: A string pointer captures an out parameter from a stored procedure: Slices may be used to insert multiple records with a single insert statement: The ora package provides nullable Go types to support DML operations such as insert and select. The nullable Go types provided by the ora package are Int64, Int32, Int16, Int8, Uint64, Uint32, Uint16, Uint8, Float64, Float32, Time, IntervalYM, IntervalDS, String, Bool, Binary and Bfile. For example, you may insert nullable Strings and select nullable Strings: The `Stmt.Prep` method is variadic, accepting zero or more `GoColumnType` which define a Go return type for a select-list column. For example, a Prep call can be configured to return an int64 and a nullable Int64 from the same column: Go numerics of various sizes are supported in DML operations. The ora package supports int64, int32, int16, int8, uint64, uint32, uint16, uint8, float64 and float32.
For example, you may insert a uint16 and select numerics of various sizes: If a non-nullable type is defined for a nullable column returning null, the Go type's zero value is returned. GoColumnTypes defined by the ora package are: When Stmt.Prep doesn't receive a GoColumnType, or receives an incorrect GoColumnType, the default value defined in RsetCfg is used. EnvCfg, SrvCfg, SesCfg, StmtCfg and RsetCfg are the main configuration structs. EnvCfg configures aspects of an Env. SrvCfg configures aspects of a Srv. SesCfg configures aspects of a Ses. StmtCfg configures aspects of a Stmt. RsetCfg configures aspects of an Rset. StmtCfg and RsetCfg have the most options to configure. RsetCfg defines the default mapping between an Oracle select-list column and a Go type. StmtCfg may be set in an EnvCfg, SrvCfg, SesCfg and StmtCfg. RsetCfg may be set in a Stmt. EnvCfg.StmtCfg, SrvCfg.StmtCfg and SesCfg.StmtCfg may optionally be specified to configure a statement. If StmtCfg isn't specified, default values are applied. EnvCfg.StmtCfg, SrvCfg.StmtCfg and SesCfg.StmtCfg cascade to new descendent structs. When ora.OpenEnv() is called, a specified EnvCfg is used or a default EnvCfg is created. Creating a Srv with env.OpenSrv() will use SrvCfg.StmtCfg if it is specified; otherwise, EnvCfg.StmtCfg is copied by value to SrvCfg.StmtCfg. Creating a Ses with srv.OpenSes() will use SesCfg.StmtCfg if it is specified; otherwise, SrvCfg.StmtCfg is copied by value to SesCfg.StmtCfg. Creating a Stmt with ses.Prep() will use SesCfg.StmtCfg if it is specified; otherwise, a new StmtCfg with default values is set on the Stmt. Call Stmt.Cfg() to change a Stmt's configuration. An Env may contain multiple Srv. A Srv may contain multiple Ses. A Ses may contain multiple Stmt. A Stmt may contain multiple Rset. Setting a RsetCfg on a StmtCfg does not cascade through descendent structs. Configuration of Stmt.Cfg takes effect prior to calls to Stmt.Exe and Stmt.Qry; consequently, any updates to Stmt.Cfg after a call to Stmt.Exe or Stmt.Qry are not observed. One configuration scenario may be to set a server's select statements to return nullable Go types by default: Another scenario may be to configure the runes mapped to bool values: Oracle-specific types offered by the ora package are ora.Rset, ora.IntervalYM, ora.IntervalDS, ora.Raw, ora.Lob and ora.Bfile. ora.Rset represents an Oracle SYS_REFCURSOR. ora.IntervalYM represents an Oracle INTERVAL YEAR TO MONTH. ora.IntervalDS represents an Oracle INTERVAL DAY TO SECOND. ora.Raw represents an Oracle RAW or LONG RAW. ora.Lob may represent an Oracle BLOB or Oracle CLOB. And ora.Bfile represents an Oracle BFILE. ROWID columns are returned as strings and don't have a unique Go type. #### LOBs The default for SELECTing [BC]LOB columns is a safe Bin or S, which means all the contents of the LOB are slurped into memory and returned as a []byte or string. The DefaultLOBFetchLen means LOBs are prefetched only in a minimal way, to minimize extra memory usage; you can override this using `stmt.SetCfg(stmt.Cfg().SetLOBFetchLen(100))`. If you want more control, you can use ora.L in Prep, Qry or `ses.SetCfg(ses.Cfg().SetBlob(ora.L))`. But keep in mind that Oracle restricts the use of LOBs: it is forbidden to do ANYTHING while reading the LOB! No other query, no exec, no close of the Rset - even *advancing* to the next record in the result set is forbidden! Failing to adhere to these rules results in "Invalid handle" and ORA-03127 errors.
You cannot start reading another LOB until you have finished reading the previous LOB - not even in the same row! Failing this results in ORA-24804! For examples, see [z_lob_test.go](z_lob_test.go). #### Rset Rset is used to obtain Go values from a SQL select statement. Methods Rset.Next, Rset.NextRow, and Rset.Len are available. Fields Rset.Row, Rset.Err, Rset.Index, and Rset.ColumnNames are also available. The Next method attempts to load data from an Oracle buffer into Row, returning true when successful. When no data is available, or if an error occurs, Next returns false, setting Row to nil. Any error in Next is assigned to Err. Calling Next increments Index, and method Len returns the total number of rows processed. The NextRow method is convenient for returning a single row. NextRow calls Next and returns Row. ColumnNames returns the names of columns defined by the SQL select statement. Rset has two usages. Rset may be returned from Stmt.Qry when prepared with a SQL select statement: Or, *Rset may be passed to Stmt.Exe when prepared with a stored procedure accepting an OUT SYS_REFCURSOR parameter: Stored procedures with multiple OUT SYS_REFCURSOR parameters enable a single Exe call to obtain multiple Rsets: The types of values assigned to Row may be configured in StmtCfg.Rset. For configuration to take effect, assign StmtCfg.Rset prior to calling Stmt.Qry or Stmt.Exe. Rset prefetching may be controlled by StmtCfg.PrefetchRowCount and StmtCfg.PrefetchMemorySize. PrefetchRowCount works in coordination with PrefetchMemorySize. When PrefetchRowCount is set to zero, only PrefetchMemorySize is used; otherwise, the minimum of PrefetchRowCount and PrefetchMemorySize is used. The default uses a PrefetchMemorySize of 134MB. Opening and closing Rsets is managed internally. Rset does not have an Open method or Close method. IntervalYM may be inserted and selected: IntervalDS may be inserted and selected: Transactions on an Oracle server are supported. DML statements auto-commit unless a transaction has started: Ses.PrepAndExe, Ses.PrepAndQry, Ses.Ins, Ses.Upd, and Ses.Sel are convenient one-line methods. Ses.PrepAndExe offers a convenient one-line call to Ses.Prep and Stmt.Exe. Ses.PrepAndQry offers a convenient one-line call to Ses.Prep and Stmt.Qry. Ses.Ins composes, prepares and executes a SQL INSERT statement. Ses.Ins is useful when you have to create and maintain a simple INSERT statement with a long list of columns. As table columns are added and dropped over the lifetime of a table, Ses.Ins is easy to read and revise. Ses.Upd composes, prepares and executes a SQL UPDATE statement. Ses.Upd is useful when you have to create and maintain a simple UPDATE statement with a long list of columns. As table columns are added and dropped over the lifetime of a table, Ses.Upd is easy to read and revise. Ses.Sel composes, prepares and queries a SQL SELECT statement. Ses.Sel is useful when you have to create and maintain a simple SELECT statement with a long list of columns that have non-default GoColumnTypes. As table columns are added and dropped over the lifetime of a table, Ses.Sel is easy to read and revise. The Ses.Ping method checks whether the client's connection to an Oracle server is valid. A call to Ping requires an open Ses. Ping will return a nil error when the connection is fine: The Srv.Version method is available to obtain the Oracle server version.
A call to Version requires an open Ses: Further code examples are available in the [example file](https://github.com/rana/ora/blob/master/z_example_test.go), test files and [samples folder](https://github.com/rana/ora/tree/master/samples). The ora package provides a simple ora.Logger interface for logging. Logging is disabled by default. Specify one of three optional built-in logging packages to enable logging, or use your own logging package. ora.Cfg().Log offers various options to enable or disable logging of specific ora driver methods. For example: To use the standard Go log package: which produces a sample log of: Messages are prefixed with 'ORA I' for information or 'ORA E' for an error. The log package is configured to write to os.Stderr by default. Use the ora/lg.Std type to configure an alternative io.Writer. To use the glog package: which produces a sample log of: To use the log15 package: which produces a sample log of: See https://github.com/rana/ora/tree/master/samples/lg15/main.go for sample code which uses the log15 package. Tests are available and require some setup. Setup varies depending on whether the Oracle server is configured as a container database or non-container database. It's simpler to set up a non-container database. An example for each setup is explained. Non-container test database setup steps: Container test database setup steps: Some helpful SQL maintenance statements: Run the tests. database/sql method Stmt.QueryRow is not supported. Go 1.6 introduced stricter cgo (calling C from Go) rules and runtime checks. This is good, as the possibility of C code corrupting Go code is almost completely eliminated, but it also means a severe increase in call overhead. [Sometimes](https://groups.google.com/forum/#!topic/golang-nuts/ccMkPG6Bi5k) this can be 22x the Go 1.5.3 call time! So if you need performance more than correctness, start your programs with the "GODEBUG=cgocheck=0" environment setting. Copyright 2017 Rana Ian, Tamás Gulácsi. All rights reserved. Use of this source code is governed by The MIT License found in the accompanying LICENSE file.
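To recap direct usage of the ora package from this section, here is a sketch. The Env/Srv/Ses chain follows the abstractions described above, but the SrvCfg and SesCfg field names, the connection details, and the SQL are assumptions for illustration only; the Rset fields (Next, Row, Err) follow the Rset section.

```go
package main

import (
	"fmt"
	"log"

	ora "gopkg.in/rana/ora.v4" // import path is an assumption
)

func main() {
	// Env -> Srv -> Ses is the abstraction chain described above.
	env, err := ora.OpenEnv()
	if err != nil {
		log.Fatal(err)
	}
	defer env.Close()

	srv, err := env.OpenSrv(&ora.SrvCfg{Dblink: "localhost:1521/orcl"}) // field name and value are assumptions
	if err != nil {
		log.Fatal(err)
	}
	defer srv.Close()

	ses, err := srv.OpenSes(&ora.SesCfg{Username: "user", Password: "password"}) // assumptions
	if err != nil {
		log.Fatal(err)
	}
	defer ses.Close()

	stmt, err := ses.Prep("SELECT id, name FROM people") // SQL is illustrative
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	rset, err := stmt.Qry()
	if err != nil {
		log.Fatal(err)
	}
	// Next loads each row into rset.Row; any error lands in rset.Err.
	for rset.Next() {
		fmt.Println(rset.Row[0], rset.Row[1])
	}
	if rset.Err != nil {
		log.Fatal(rset.Err)
	}
}
```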
Package getopt (v1) provides traditional getopt processing for implementing commands that use traditional command lines. The standard Go flag package cannot be used to write a program that parses flags the way ls or ssh does, for example. A new version of this package (v2) (whose package name is also getopt) is available as: Getopt supports functionality found in both the standard BSD getopt as well as (one of the many versions of) the GNU getopt_long. Being a Go package, this package makes common usage easy, but still enables more controlled usage if needed. Typical usage: If you don't want the program to exit on error, use getopt.Getopt: Support is provided for both short (-f) and long (--flag) options. A single option may have both a short and a long name. Each option may be a flag or a value. A value takes an argument. Declaring no long names causes this package to process arguments like the traditional BSD getopt. Short flags may be combined into a single parameter. For example, "-a -b -c" may also be expressed "-abc". Long flags must stand on their own: "--alpha --beta". Values require an argument. For short options the argument may either immediately follow the short name or be the next argument. Only one short value may be combined with short flags in a single argument; the short value must come after all short flags. For example, if f is a flag and v is a value, then: For the long value option val: Values with an optional value only set the value if the value is part of the same argument. In any event, the option count is increased and the option is marked as seen. There is no convenience function defined for making the value optional. The SetOptional method must be called on the actual Option. Parsing continues until the first non-option or "--" is encountered. The short name "-" can be used, but it either is specified as "-" or as part of a group of options, for example "-f-". If there are no long options specified then "--f" could also be used. If "-" is not declared as an option then the single "-" will also terminate the option processing, but unlike "--", the "-" will be part of the remaining arguments. Normally the parsing is performed by calling the Parse function. If it is important to see the order of the options then the Getopt function should be used. The standard Parse function does the equivalent of: When calling Getopt it is the responsibility of the caller to print any errors. Normally the default option set, CommandLine, is used. Other option sets may be created with New. After parsing, the set's Args will contain the non-option arguments. If an error is encountered then Args will begin with the argument that caused the error. It is valid to call a set's Parse a second time to amend the current set of flags or values. As an example: If called with set to { "prog", "-a", "cmd", "-b", "arg" } then both a and b would be set, cmd would be set to "cmd", and opts.Args() would return { "arg" }. Unless an option type explicitly prohibits it, an option may appear more than once in the arguments. The last value provided to the option is the value. For each option type there are an unfortunately large number of ways, 8, to initialize the option. This number is derived from three attributes: The first two variations provide four signatures: Foo can actually be expressed in terms of FooLong: Normally Foo is used, unless long options are needed. Setting short to 0 creates only a long option.
The difference between Foo and FooVar is that you pass a pointer, p, to the location of the value to FooVar. The initial value of *p is the default value of the option. Foo is actually a wrapper around FooVar: The third variation provides a top-level function and a method on a Set: The top-level function is simply: To simplify documentation, typically only the main top-level function is fully documented. The others have documentation only when there is something special about them. All non-flag options are created with a "valuehelp" as the last parameter. Valuehelp should be 0, 1, or 2 strings. The first string, if provided, is the usage message for the option. The second string, if provided, is the name to use for the value when displaying the usage; if not provided, the term "value" is assumed. The usage message for the option created this way is:
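A sketch of typical v1 usage follows; the option names, defaults, and valuehelp strings are illustrative, and the constructors shown (Bool, StringLong) follow the naming scheme described above.

```go
package main

import (
	"fmt"

	"github.com/pborman/getopt" // v1
)

func main() {
	// An option may have a short name, a long name, or both.
	helpFlag := getopt.Bool('?', "display help")
	// StringLong takes a default value, then valuehelp: usage message
	// and the name shown for the value in the usage output.
	name := getopt.StringLong("name", 'n', "world", "who to greet", "NAME")

	getopt.Parse() // parses os.Args, exiting on error

	if *helpFlag {
		getopt.Usage()
		return
	}
	fmt.Println("hello,", *name)
	fmt.Println("remaining arguments:", getopt.Args())
}
```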
Package conf provides support for using environment variables and command line arguments for configuration. It is compatible with the GNU extensions to the POSIX recommendations for command-line options. See http://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html There are no hard bindings for this package. This package takes a struct value and parses it for both the environment and flags. It supports several tags to customize the flag options. The field name and any parent struct name will be used for the long form of the command name unless the name is overridden. As an example, using this config struct: The following usage information would be output, which you can display. Usage: conf.test [options] [arguments] OPTIONS There is an API called Parse that can process a config struct with environment variable and command line flag overrides. There is also YAML support using the yaml package that is part of this module. There is a WithParse function that takes a slice of bytes containing the YAML, or WithParseReader, which takes any concrete value that knows how to Read. Additionally, if the config struct has a field of the slice type conf.Args then it will be populated with any remaining arguments from the command line after flags have been processed. For example, a program with a config struct like this: If that program is executed from the command line like this: Then the cfg.Args field will contain the string values ["serve", "http"]. The Args type has a method Num for convenient access to these arguments, such as this: You can add a version with a description by adding the Version type to your config type and setting these values at run time for display.
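A sketch of the Parse API with struct tags and a conf.Args field; the tag values, prefix, and the v3 import path are assumptions for illustration.

```go
package main

import (
	"errors"
	"fmt"
	"log"

	"github.com/ardanlabs/conf/v3" // import path/version is an assumption
)

type config struct {
	Web struct {
		Host string `conf:"default:localhost:3000"` // tag values are illustrative
	}
	Args conf.Args // remaining command line arguments land here
}

func main() {
	var cfg config
	// Environment variables use the prefix, e.g. MYAPP_WEB_HOST.
	help, err := conf.Parse("MYAPP", &cfg)
	if err != nil {
		if errors.Is(err, conf.ErrHelpWanted) {
			fmt.Println(help) // display the generated usage information
			return
		}
		log.Fatal(err)
	}
	fmt.Println("serving on", cfg.Web.Host)
	fmt.Println("first argument:", cfg.Args.Num(0))
}
```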
Package template implements data-driven templates for generating textual output. To generate HTML output, see package html/template, which has the same interface as this package but automatically secures HTML output against certain attacks. Templates are executed by applying them to a data structure. Annotations in the template refer to elements of the data structure (typically a field of a struct or a key in a map) to control execution and derive values to be displayed. Execution of the template walks the structure and sets the cursor, represented by a period '.' and called "dot", to the value at the current location in the structure as execution proceeds. The input text for a template is UTF-8-encoded text in any format. "Actions"--data evaluations or control structures--are delimited by "{{" and "}}"; all text outside actions is copied to the output unchanged. Actions may not span newlines, although comments can. Once parsed, a template may be executed safely in parallel. Here is a trivial example that prints "17 items are made of wool". More intricate examples appear below. Here is the list of actions. "Arguments" and "pipelines" are evaluations of data, defined in detail below. An argument is a simple value, denoted by one of the following. Arguments may evaluate to any type; if they are pointers the implementation automatically indirects to the base type when required. If an evaluation yields a function value, such as a function-valued field of a struct, the function is not invoked automatically, but it can be used as a truth value for an if action and the like. To invoke it, use the call function, defined below. A pipeline is a possibly chained sequence of "commands". A command is a simple value (argument) or a function or method call, possibly with multiple arguments: A pipeline may be "chained" by separating a sequence of commands with pipeline characters '|'. In a chained pipeline, the result of each command is passed as the last argument of the following command. The output of the final command in the pipeline is the value of the pipeline. The output of a command will be either one value or two values, the second of which has type error. If that second value is present and evaluates to non-nil, execution terminates and the error is returned to the caller of Execute. A pipeline inside an action may initialize a variable to capture the result. The initialization has syntax where $variable is the name of the variable. An action that declares a variable produces no output. If a "range" action initializes a variable, the variable is set to the successive elements of the iteration. Also, a "range" may declare two variables, separated by a comma: in which case $index and $element are set to the successive values of the array/slice index or map key and element, respectively. Note that if there is only one variable, it is assigned the element; this is opposite to the convention in Go range clauses. A variable's scope extends to the "end" action of the control structure ("if", "with", or "range") in which it is declared, or to the end of the template if there is no such control structure. A template invocation does not inherit variables from the point of its invocation. When execution begins, $ is set to the data argument passed to Execute, that is, to the starting value of dot. Here are some example one-line templates demonstrating pipelines and variables.
All produce the quoted word "output": During execution functions are found in two function maps: first in the template, then in the global function map. By default, no functions are defined in the template but the Funcs method can be used to add them. Predefined global functions are named as follows. The boolean functions take any zero value to be false and a non-zero value to be true. There is also a set of binary comparison operators defined as functions: For simpler multi-way equality tests, eq (only) accepts two or more arguments and compares the second and subsequent to the first, returning in effect (Unlike with || in Go, however, eq is a function call and all the arguments will be evaluated.) The comparison functions work on basic types only (or named basic types, such as "type Celsius float32"). They implement the Go rules for comparison of values, except that size and exact type are ignored, so any integer value, signed or unsigned, may be compared with any other integer value. (The arithmetic value is compared, not the bit pattern, so all negative integers are less than all unsigned integers.) However, as usual, one may not compare an int with a float32 and so on. Each template is named by a string specified when it is created. Also, each template is associated with zero or more other templates that it may invoke by name; such associations are transitive and form a name space of templates. A template may use a template invocation to instantiate another associated template; see the explanation of the "template" action above. The name must be that of a template associated with the template that contains the invocation. When parsing a template, another template may be defined and associated with the template being parsed. Template definitions must appear at the top level of the template, much like global variables in a Go program. The syntax of such definitions is to surround each template declaration with a "define" and "end" action. The define action names the template being created by providing a string constant. Here is a simple example: This defines two templates, T1 and T2, and a third T3 that invokes the other two when it is executed. Finally it invokes T3. If executed this template will produce the text By construction, a template may reside in only one association. If it's necessary to have a template addressable from multiple associations, the template definition must be parsed multiple times to create distinct *Template values, or must be copied with the Clone or AddParseTree method. Parse may be called multiple times to assemble the various associated templates; see the ParseFiles and ParseGlob functions and methods for simple ways to parse related templates stored in files. A template may be executed directly or through ExecuteTemplate, which executes an associated template identified by name. To invoke our example above, we might write, or to invoke a particular template explicitly by name,
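A sketch pulling these pieces together: the trivial Inventory example mentioned earlier, the T1/T2/T3 template association, and an explicit ExecuteTemplate call by name.

```go
package main

import (
	"os"
	"text/template"
)

type Inventory struct {
	Material string
	Count    uint
}

func main() {
	// The trivial example: prints "17 items are made of wool".
	sweaters := Inventory{"wool", 17}
	tmpl := template.Must(template.New("test").Parse(
		"{{.Count}} items are made of {{.Material}}\n"))
	if err := tmpl.Execute(os.Stdout, sweaters); err != nil {
		panic(err)
	}

	// Define T1 and T2, and a third template T3 that invokes the other two.
	const text = `{{define "T1"}}ONE{{end}}` +
		`{{define "T2"}}TWO{{end}}` +
		`{{define "T3"}}{{template "T1"}} {{template "T2"}}{{end}}` +
		`{{template "T3"}}`
	assoc := template.Must(template.New("example").Parse(text))
	// Executing the outer template produces the text "ONE TWO".
	if err := assoc.Execute(os.Stdout, nil); err != nil {
		panic(err)
	}
	// Or invoke an associated template explicitly by name:
	if err := assoc.ExecuteTemplate(os.Stdout, "T2", nil); err != nil {
		panic(err)
	}
}
```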
Package zenity provides cross-platform access to simple dialogs that interact graphically with the user. It is inspired by, and closely follows the API of, the zenity program, which it uses to provide the functionality on various Unixes. See: https://help.gnome.org/users/zenity/stable/ This package does not require cgo, and it does not impose any threading or initialization requirements.
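A minimal sketch of a dialog call; the Question and Title helpers and the ErrCanceled sentinel are assumptions based on the zenity-style API of github.com/ncruces/zenity, not confirmed from this document.

```go
package main

import (
	"errors"
	"fmt"
	"log"

	"github.com/ncruces/zenity" // import path is an assumption
)

func main() {
	// Ask a yes/no question in a native dialog; names are assumptions.
	err := zenity.Question("Proceed with the operation?", zenity.Title("Confirm"))
	switch {
	case err == nil:
		fmt.Println("confirmed")
	case errors.Is(err, zenity.ErrCanceled):
		fmt.Println("canceled by user")
	default:
		log.Fatal(err)
	}
}
```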
Package cgosymbolizer provides a cgo symbolizer based on libbacktrace. This will be used to provide a symbolic backtrace of cgo functions. This package does not export any symbols. To use it, add an underscore import of this package somewhere in your program.
Package goncurses is a new curses (ncurses) library for the Go programming language. It implements all the ncurses extension libraries: form, menu and panel. Minimal operation would consist of initializing the display: It is important to always call End() before your program exits. If you fail to do so, the terminal will not perform properly and will either need to be reset or restarted completely. CAUTION: Calls to ncurses functions are normally not atomic nor reentrant, and therefore extreme care should be taken to ensure ncurses functions are not called concurrently. Specifically, never write data to the same window concurrently, nor accept input and send output to the same window, as both alter the underlying C data structures in an unsafe manner. Ideally, you should structure your program to ensure all ncurses-related calls happen in a single goroutine. This is probably most easily achieved via channels and Go's built-in select. Alternatively, or additionally, you can use a mutex to protect any calls in multiple goroutines from happening concurrently. Failure to do so will result in unpredictable and undefined behaviour in your program. The examples directory contains demonstrations of many of the capabilities goncurses can provide.
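A sketch of the minimal init/teardown cycle described above; the import path and the Print/GetChar calls are assumptions based on the goncurses API.

```go
package main

import (
	"log"

	gc "github.com/rthornton128/goncurses" // import path is an assumption
)

func main() {
	stdscr, err := gc.Init()
	if err != nil {
		log.Fatal(err)
	}
	// Always restore the terminal before exiting.
	defer gc.End()

	stdscr.Print("Hello from goncurses! Press any key to exit.")
	stdscr.Refresh()
	stdscr.GetChar() // wait for a key press
}
```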
Package grip provides a flexible logging package for basic Go programs. Drawing inspiration from Go and Python's standard library logging, as well as systemd's journal service and other logging systems, grip provides a number of very powerful logging abstractions in one high-level package. The central type of the grip package is the Journaler type, instances of which provide a distinct log capturing system. For ease, following the Go standard library, the grip package provides parallel public methods that use an internal "standard" Journaler instance in the grip package, which has some defaults configured and may be sufficient for many use cases. The send.Sender interface provides a way of changing the logging backend, and the send package provides a number of alternate implementations of logging systems, including: systemd's journal, logging to standard output, logging to a file, and generic syslog support. The message.Composer interface is the representation of all messages. They are implemented to provide a raw structured form as well as a string representation for more conventional logging output. Furthermore, they are intended to be easy to produce and to defer more expensive processing until they're being logged, to prevent expensive operations producing messages that are below threshold. Logging helpers exist for the following levels: These methods accept both strings (message content) and types that implement the message.MessageComposer interface. Composer types make it possible to delay generating a message unless the logger is over the logging threshold. Use this to avoid expensive serialization operations for suppressed logging operations. All levels also have additional methods with `ln` and `f` appended to the end of the method name, which provide Println() and Printf() style functionality. You must pass printf/println-style arguments to these methods. The conditional logging methods take two arguments: a Boolean and a message argument. Messages can be strings, objects that implement the MessageComposer interface, or errors. If the condition boolean is true, the threshold level is met, and the message to log is not an empty string, then it logs the resolved message. Use conditional logging methods to potentially suppress log messages based on situations orthogonal to log level, with "log sometimes" or "log rarely" semantics. Combine with MessageComposers to avoid expensive message-building operations. The MultiCatcher type makes it possible to collect errors from a group of operations and then aggregate them as a single error.
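A sketch of the leveled, printf-style, and conditional helpers described above; the exact method names (Info, Errorf, ErrorWhen) and the import path are assumptions.

```go
package main

import (
	"errors"

	"github.com/mongodb/grip" // import path is an assumption
)

func main() {
	grip.Info("service starting")                          // plain leveled message
	grip.Errorf("request failed: %v", errors.New("boom"))  // printf-style `f` variant
	grip.ErrorWhen(true, "condition met, logging message") // conditional logging (name is an assumption)
}
```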
Golex is a lex/flex-like (not fully POSIX lex compatible) utility. It renders .l formatted data (http://flex.sourceforge.net/manual/Format.html#Format) to Go source code. The .l data can come from a file named in a command line argument. If no non-option arguments are given, golex reads stdin. Options: To get the latest golex version: Please see http://godoc.org/github.com/cznic/golex/lex. 2014-11-18: Golex now supports %yym - a hook which can be used to mark an accepting state. Consider for example this .l file: Execution and output: 2014-11-15: Golex's output is now gofmt'ed, if possible. Missing/differing functionality of the current renderer (compared to flex): Further limitations on the .l source are listed in the cznic/lex package godocs. A simple golex program example (make example1 && ./example1):
During initial development everything is easy to debug, since there's low user load, but now you've scaled and there's OVER TEN USERS and it's kind of hard to tell what log lines were caused by what. Wouldn't it be nice to add request ids to all of the log lines kicked off by that request? Then you could grep for all log lines caused by a specific request id. Geez, it would suck to have to pass all contextual debugging information through all of your callsites. Google solved this problem by always passing a context.Context interface through from call to call. A Context is basically just a mapping of arbitrary keys to arbitrary values that users can add new values for. This way, if you decide to add a request context, you can add it to your Context and then all callsites that descend from that place will have the new data in their contexts. It is admittedly very verbose to add contexts to every function call. Painfully so. I hope to write more about it in the future, but Google also wrote up their thoughts about it (https://blog.golang.org/context), which you can go read. For now, just swallow your disgust and let's keep moving. Let's make a super simple Varnish (https://www.varnish-cache.org/) clone. Open up gedit! (Okay just kidding, open whatever text editor you want.) For this motivating program, we won't even add the caching, though there are comments for where to add it if you'd like. For now, let's just make a barebones system that will proxy HTTP requests. We'll call it VLite, but maybe we should call it VReallyLite. Run and build this and open localhost:8080 in your browser. If you use the default proxy target, it should inform you that the world hasn't been destroyed yet. The first thing you'll want to do is add the small amount of boilerplate to make the instrumentation we're going to add to your process observable later. Import the basic monkit packages: and then register environmental statistics and kick off a goroutine in your main method to serve debug requests: Rebuild, and then check out localhost:9000/stats (or localhost:9000/stats/json, if you prefer) in your browser! Remember what I said about Google's contexts (https://blog.golang.org/context)? It might seem a bit overkill for such a small project, but it's time to add them. To help out here, I've created a library that constructs contexts for you for incoming HTTP requests. Nothing that's about to happen requires my webhelp library (https://godoc.org/github.com/jtolds/webhelp), but here is the code now refactored to receive and pass contexts through our two per-request calls. You can create a new context for a request however you want. One reason to use something like webhelp is that the cancellation feature of Contexts is hooked up to the HTTP request getting canceled. Let's start to get statistics about how many requests we receive! First, this package (main) will need to get a monitoring Scope. Add this global definition right after all your imports, much like you'd create a logger with many logging libraries: Now, make the error return value of HandleHTTP named (so, (err error)), and add this defer line as the very first instruction of HandleHTTP: Let's also add the same line (albeit modified for the lack of error) to Proxy, replacing &err with nil: You should now have something like the sketch below. We'll unpack what's going on here, but for now: For this new funcs dataset, if you want a graph, you can download a dot graph at localhost:9000/funcs/dot and json information from localhost:9000/funcs/json.
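Putting the steps above together, here is a sketch of the instrumented skeleton. The webhelp wiring is omitted, the import paths are assumptions, and HandleHTTP/Proxy follow the walkthrough; the mon.Task() defer pattern is the one described in the surrounding text.

```go
package main

import (
	"context"
	"net/http"

	monkit "gopkg.in/spacemonkeygo/monkit.v2" // import paths are assumptions
	"gopkg.in/spacemonkeygo/monkit.v2/environment"
	"gopkg.in/spacemonkeygo/monkit.v2/present"
)

// Package-level monitoring scope, much like a package-level logger.
var mon = monkit.Package()

// HandleHTTP has a named error return so the deferred task can observe it.
func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
	defer mon.Task()(&ctx)(&err)
	return Proxy(ctx, w, r)
}

func Proxy(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
	defer mon.Task()(&ctx)(nil) // no error to observe here, so pass nil
	// ... proxy the request to the target, add caching here if you like ...
	return nil
}

func main() {
	// Register environmental statistics and serve debug endpoints
	// such as /stats and /funcs/dot.
	environment.Register(monkit.Default)
	go http.ListenAndServe("localhost:9000", present.HTTP(monkit.Default))

	// ... start the VLite proxy itself on localhost:8080 ...
}
```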
You should see something like: with a similar report for the Proxy method, or a graph like: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/handlehttp.png This data reports the overall callgraph of execution for known traces, along with how many of each function are currently running, the most running concurrently (the highwater), how many were successful along with quantile timing information, how many errors there were (with quantile timing information if applicable), and how many panics there were. Since the Proxy method isn't capturing a returned err value, and since HandleHTTP always returns nil, this example won't ever have failures. If you're wondering about the success count being higher than you expected, keep in mind your browser probably requested a favicon.ico. Cool, eh? The defer line added above is an interesting line of code - there are three function calls. If you look at the Go spec, all of the function calls will run at the time the function starts except for the very last one. The first function call, mon.Task(), creates or looks up a wrapper around a Func. You could get this yourself by requesting mon.Func() inside of the appropriate function, or mon.FuncNamed(). Both mon.Task() and mon.Func() inspect runtime.Caller to determine the name of the function. Because this is a heavy operation, you can actually store the result of mon.Task() and reuse it if you prefer; the stored form (see the sketch at the end of this section) is more performant every time after the first time, since runtime.Caller only gets called once. Careful! Don't use the same myFuncMon in different functions unless you want to screw up your statistics! The second function call starts all the various stop watches and bookkeeping to keep track of the function. It also mutates the context pointer it's given to extend the context with information about what current span (in Zipkin parlance) is active. Notably, you *can* pass nil for the context if you really don't want a context. You just lose callgraph information. The last function call stops all the stop watches and makes a note of any observed errors or panics (it repanics after observing them). Turns out, we don't even need to change our program anymore to get rich tracing information! Open your browser and go to localhost:9000/trace/svg?regex=HandleHTTP. It won't load, and in fact, it's waiting for you to open another tab and refresh localhost:8080 again. Once you retrigger the actual application behavior, the trace regex will capture a trace starting on the first function that matches the supplied regex, and return an svg. Go back to your first tab, and you should see a relatively uninteresting but super promising svg. Let's make the trace more interesting. Add a time.Sleep call to your HandleHTTP method, rebuild, and restart. Load localhost:8080, then start a new request to your trace URL, then reload localhost:8080 again. Flip back to your trace, and you should see that the Proxy method only takes a portion of the time of HandleHTTP! https://cdn.rawgit.com/spacemonkeygo/monkit/master/images/trace.svg There are multiple ways to select a trace. You can select by regex using the preselect method (default), which first evaluates the regex on all known functions for sanity checking. Sometimes, however, the function you want to trace may not yet be known to monkit, in which case you'll want to turn preselection off. You may have a bad regex, or you may be in this case if you get the error "Bad Request: regex preselect matches 0 functions."
Another way to select a trace is by providing a trace id, which we'll get to next! Make sure to check out what the addition of the time.Sleep call did to the other reports. It's easy to write plugins for monkit! Check out our first one that exports data to Zipkin (http://zipkin.io/)'s Scribe API: https://github.com/spacemonkeygo/monkit-zipkin We plan to have more (for HTrace, OpenTracing, etc, etc), soon!
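For reference, the stored-task pattern mentioned earlier looks roughly like this; the myFuncMon name follows the text above, and the import path is an assumption.

```go
package main

import (
	"context"

	monkit "gopkg.in/spacemonkeygo/monkit.v2" // import path is an assumption
)

var (
	mon = monkit.Package()

	// Calling mon.Task() once here means runtime.Caller runs only once.
	myFuncMon = mon.Task()
)

// MyFunc reuses the stored task. Don't share myFuncMon across
// different functions, or the statistics will be attributed wrongly.
func MyFunc(ctx context.Context) (err error) {
	defer myFuncMon(&ctx)(&err)
	// ... do work ...
	return nil
}
```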
Package qml offers graphical QML application support for the Go language. This package is in an alpha stage, and still in heavy development. APIs may change, and things may break. At this time contributors and developers that are interested in tracking the development closely are encouraged to use it. If you'd prefer a more stable release, please hold on a bit and subscribe to the mailing list for news. It's in a pretty good state, so it shall not take too long. See http://github.com/go-qml/qml for details. The qml package enables Go programs to display and manipulate graphical content using Qt's QML framework. QML uses a declarative language to express structure and style, and supports JavaScript for in-place manipulation of the described content. When using the Go qml package, such QML content can also interact with Go values, making use of its exported fields and methods, and even explicitly creating new instances of registered Go types. A simple Go application that integrates with QML may perform the following steps for offering a graphical interface: Some of these topics are covered below, and may also be observed in practice in the following examples: The following logic demonstrates loading a QML file into a window: Any QML object may be manipulated by Go via the Object interface. That interface is implemented both by dynamic QML values obtained from a running engine, and by Go types in the qml package that represent QML values, such as Window, Context, and Engine. For example, the following logic creates a window and prints its width whenever it's made visible: Information about the methods, properties, and signals that are available for QML objects may be obtained in the Qt documentation. As a reference, the "visibleChanged" signal and the "width" property used in the example above are described at: When in doubt about what type is being manipulated, the Object.TypeName method provides the type name of the underlying value. The simplest way of making a Go value available to QML code is setting it as a variable of the engine's root context, as in: This logic would enable the following QML code to successfully run: While registering an individual Go value as described above is a quick way to get started, it is also fairly limited. For more flexibility, a Go type may be registered so that QML code can natively create new instances in an arbitrary position of the structure. This may be achieved via the RegisterType function, as the following example demonstrates: With this logic in place, QML code can create new instances of Person by itself: Independently from the mechanism used to publish a Go value to QML code, its methods and fields are available to QML logic as methods and properties of the respective QML object representing it. As required by QML, though, the Go method and field names are lowercased according to the following scheme when being accessed from QML: While QML code can directly read and write exported fields of Go values, as described above, a Go type can also intercept writes to specific fields by declaring a setter method according to common Go conventions. This is often useful for updating the internal state or the visible content of a Go-defined type. For example: In the example above, whenever QML code attempts to update the Person.Name field via any means (direct assignment, object declarations, etc) the SetName method is invoked with the provided value instead. A setter method may also be used in conjunction with a getter method rather than a real type field.
A method is only considered a getter in the presence of the respective setter, and according to common Go conventions it must not have the Get prefix. Inside QML logic, the getter and setter pair is seen as a single object property. Custom types implemented in Go may have displayable content by defining a Paint method such as: A simple example is available at: Resource files (qml code, images, etc) may be packed into the Go qml application binary to simplify its handling and distribution. This is done with the genqrc tool: The following blog post provides more details:
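A sketch covering two of the pieces above: loading a QML file into a window and publishing a Go value whose SetName setter intercepts QML writes. The gopkg.in/qml.v1 import path, the file name, and the normalization inside SetName are assumptions.

```go
package main

import (
	"log"
	"strings"

	"gopkg.in/qml.v1" // import path is an assumption
)

// Person is published to QML; writes to Name go through SetName.
type Person struct {
	Name string
}

// SetName intercepts all QML writes to the Name field (direct
// assignment, object declarations, etc.) before storing the value.
func (p *Person) SetName(name string) {
	p.Name = strings.TrimSpace(name) // illustrative normalization
}

func main() {
	if err := qml.Run(run); err != nil {
		log.Fatal(err)
	}
}

func run() error {
	engine := qml.NewEngine()

	// Publish a Go value as a root-context variable named "person".
	person := &Person{Name: "gopher"}
	engine.Context().SetVar("person", person)

	component, err := engine.LoadFile("hello.qml") // file name is illustrative
	if err != nil {
		return err
	}
	win := component.CreateWindow(nil)
	win.Show()
	win.Wait()
	return nil
}
```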
Package watchdog runs a singleton memory watchdog in the process, which watches memory utilization and forces Go GC in accordance with a user-defined policy. There are three kinds of watchdogs: The watchdog's behaviour is controlled by the policy, a pluggable function that determines when to trigger GC based on the current utilization. This library ships with two policies: You can easily write a custom policy tailored to the allocation patterns of your program. The recommended way to set up the watchdog is as follows, in descending order of precedence. This logic assumes that the library supports setting a heap limit through an environment variable (e.g. MYAPP_HEAP_MAX) or config key.
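A sketch of that setup logic. MYAPP_HEAP_MAX is the illustrative variable from the text; the HeapDriven constructor, its (err, stopFn) return order, and NewAdaptivePolicy are assumptions based on the github.com/raulk/go-watchdog API and are not confirmed by this document.

```go
package main

import (
	"log"
	"os"
	"strconv"

	watchdog "github.com/raulk/go-watchdog" // assumed API below
)

func main() {
	// A heap limit from the environment takes precedence; a config key
	// lookup could be substituted here as a fallback.
	limit, _ := strconv.ParseUint(os.Getenv("MYAPP_HEAP_MAX"), 10, 64)
	if limit == 0 {
		return // no limit configured; leave the watchdog off
	}
	// HeapDriven and NewAdaptivePolicy are assumptions; note the
	// unusual (err, stopFn) return order in this library.
	err, stop := watchdog.HeapDriven(limit, 20, watchdog.NewAdaptivePolicy(0.5))
	if err != nil {
		log.Printf("failed to start watchdog: %s", err)
		return
	}
	defer stop()

	// ... run the application ...
}
```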
<h1 align="center">IrisAdmin</h1>

[![Build Status](https://app.travis-ci.com/snowlyg/iris-admin.svg?branch=master)](https://app.travis-ci.com/snowlyg/iris-admin) [![LICENSE](https://img.shields.io/github/license/snowlyg/iris-admin)](https://github.com/snowlyg/iris-admin/blob/master/LICENSE) [![go doc](https://godoc.org/github.com/snowlyg/iris-admin?status.svg)](https://godoc.org/github.com/snowlyg/iris-admin) [![go report](https://goreportcard.com/badge/github.com/snowlyg/iris-admin)](https://goreportcard.com/badge/github.com/snowlyg/iris-admin) [![Build Status](https://codecov.io/gh/snowlyg/iris-admin/branch/master/graph/badge.svg)](https://codecov.io/gh/snowlyg/iris-admin)

[简体中文](./README.md) | English

#### Project url

[GITHUB](https://github.com/snowlyg/iris-admin) | [GITEE](https://gitee.com/snowlyg/iris-admin)

> This project is just for learning golang; suggestions are welcome!

#### Documentation

- [IRIS-ADMIN-DOC](https://doc.snowlyg.com)
- [IRIS V12 document for chinese](https://github.com/snowlyg/iris/wiki)
- [godoc](https://pkg.go.dev/github.com/snowlyg/iris-admin?utm_source=godoc)

[![Gitter](https://badges.gitter.im/iris-go-tenancy/community.svg)](https://gitter.im/iris-go-tenancy/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) [![Join the chat at https://gitter.im/iris-go-tenancy/iris-admin](https://badges.gitter.im/iris-go-tenancy/iris-admin.svg)](https://gitter.im/iris-go-tenancy/iris-admin?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

#### BLOG

- [REST API with iris-go web framework](https://blog.snowlyg.com/iris-go-api-1/)
- [How to use iris-go with casbin](https://blog.snowlyg.com/iris-go-api-2/)

---

#### Getting started

- Get the master package; note that you must use the `master` version:

```sh
go get github.com/snowlyg/iris-admin@master
```

#### Program introduction

##### The project consists of multiple plugins, each with different functions:

- [viper_server]: configuration management (the original snippet showed a `cache` package with a `CONFIG Redis` variable and a `getViperConfig` helper rendering config keys such as `db`, `addr`, `password` and `pool-size`)
- [zap_server]
- [database]
- [casbin]
- [cache]
- [operation]
- [cron_server]
- [web]:

```go
// WebFunc web framework
// - GetTestClient test client
// - GetTestLogin test for login
// - AddWebStatic add web static path
// - AddUploadStatic add upload static path
// - Run start
```

- [mongodb]

#### Initialize database

##### Simple

- Use gorm's `AutoMigrate()` function to migrate the database automatically.

##### Custom migrate tools

- Use the `gormigrate` third-party package. It's helpful for database migration and program development.
- For details, see [iris-admin-cmd](https://github.com/snowlyg/iris-admin-example/blob/main/iris/cmd/main.go).

---

- Add a main.go file.

#### Run project

- When you first run `go run main.go`, some config files will be created in the `config` directory, and `rbac_model.conf` will be created in your project root directory.

```sh
go run main.go
```

#### Module

- You can use the [iris-admin-rbac](https://github.com/snowlyg/iris-admin-rbac) package to quickly add RBAC functionality to your project.
- You can use AddModule() to add other modules.

#### Default static file path

- A static file access path is built in by default.
- Static files are uploaded to the `/static/upload` directory.
- You can set the config key `static-path` to change the default directory:
```yaml
system:
  static-path: static/upload # illustrative value
```

#### Use with a front-end framework, e.g. vue

- By default, you must build vue into the `dist` directory.
- Naturally, you can set the config key `web-path` to change the default directory.

#### Example

- [iris](https://github.com/snowlyg/iris-admin-example/tree/main/iris)
- [gin](https://github.com/snowlyg/iris-admin-example/tree/main/gin)

#### RBAC

- [iris-admin-rbac](https://github.com/snowlyg/iris-admin-rbac)

#### Unit test and documentation

- Before starting unit tests, you need to set two system environment variables, `mysqlPwd` and `mysqlAddr`, which will be used when running the test instances.
- The [helper/tests](https://github.com/snowlyg/helper/tree/main/tests) package is used by the unit tests; it's a simple package based on [httpexpect/v2](https://github.com/gavv/httpexpect).
- [example for unit test (iris)](https://github.com/snowlyg/iris-admin-rbac/tree/main/iris/perm/tests)
- [example for unit test (gin)](https://github.com/snowlyg/iris-admin-rbac/tree/main/gin/authority/test)

Before creating an HTTP API unit test, you need to create a base test file named `main_test.go`. This file has several unit test steps:

***We suggest using MySQL in Docker; otherwise, if a test fails, a lot of test data will be left behind.***

- 1. Create the database before tests start and drop it when tests finish.
- 2. Create tables and seed test data in one step.
- 3. Use `PartyFunc` and `SeedFunc` to customize things for your test model.

The content looks like this:

***main_test.go***

```go
package test

var TestServer *web_gin.WebServer
var TestClient *httptest.Client
```

***index_test.go***

```go
package test
```

## 🔋 JetBrains OS licenses

<a href="https://www.jetbrains.com/?from=iris-admin" target="_blank"><img src="https://raw.githubusercontent.com/panjf2000/illustrations/master/jetbrains/jetbrains-variant-4.png" width="230" align="middle"/></a>

## ☕️ Buy me a coffee

> Please be sure to leave your name, GitHub account or other social media accounts when you donate by the following means, so that I can add it to the list of donors as a token of my appreciation.

- [为爱发电](https://afdian.net/@snowlyg/plan)
- [donating](https://paypal.me/snowlyg?country.x=C2&locale.x=zh_XC)
Package goebpf provides a simple and convenient interface to the Linux eBPF system. Extended Berkeley Packet Filter (eBPF) is a highly flexible and efficient virtual machine in the Linux kernel that allows executing bytecode at various hook points in a safe manner. It is actually close to kernel modules, which can provide the same functionality, but without the cost of a kernel panic if something goes wrong. The library is intended to simplify work with eBPF programs. It takes care of low-level routine implementation to make it easy to load/run/manage eBPF programs. Currently supported functionality: - Read / parse clang/llvm compiled binaries for eBPF programs / maps - Create / load eBPF programs / eBPF maps into the kernel - Provides a simple interface to interact with eBPF maps - Has mock versions of eBPF objects (program, map, etc.) in order to make writing unit tests simple. eXpress Data Path (XDP) provides bare-metal, high-performance, programmable packet processing at the closest possible point to the network driver. That makes it ideal for speed without compromising programmability. Key benefits include the following: - It does not require any specialized hardware (the program works in the kernel's "VM") - It does not require kernel bypass - It does not replace the TCP/IP stack Consider a very simple and highly effective way to DROP all packets from a given source IPv4 address: XDP program (written in C): Once compiled, it can be used by goebpf in the following way (see the sketch after this overview): Perf Events (originally Performance Counters for Linux) is a powerful kernel instrument for tracing, profiling and a lot of other cases like general events to user space. Usually it is implemented using the special eBPF map type "BPF_MAP_TYPE_PERF_EVENT_ARRAY" as a container to send events into. A simple example could be to log all TCP SYN packets into user space from an XDP program: There are currently two types of supported probes: kprobes and kretprobes (also called return probes). A kprobe can be inserted on virtually any instruction in the kernel. A return probe fires when a specified function returns. For example, you can trigger eBPF code to run when a kernel function starts by attaching the program to a "kprobe" event. Because it runs in the kernel, eBPF code is extremely high performance. A simple example could be to log all process execution events into user space from a Kprobe program:
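A sketch of loading and attaching a compiled XDP program with goebpf (github.com/dropbox/goebpf); the ELF file name, the program name "firewall", and the interface "eth0" are illustrative assumptions.

```go
package main

import (
	"log"

	"github.com/dropbox/goebpf"
)

func main() {
	// Parse an ELF produced by clang/llvm containing eBPF programs and maps.
	bpf := goebpf.NewDefaultEbpfSystem()
	if err := bpf.LoadElf("xdp_drop.elf"); err != nil { // file name is illustrative
		log.Fatal(err)
	}

	// Look up the XDP program by name and load it into the kernel.
	xdp := bpf.GetProgramByName("firewall") // name is illustrative
	if xdp == nil {
		log.Fatal("program not found in ELF")
	}
	if err := xdp.Load(); err != nil {
		log.Fatal(err)
	}

	// Attach to a network interface; detach when we exit.
	if err := xdp.Attach("eth0"); err != nil {
		log.Fatal(err)
	}
	defer xdp.Detach()

	select {} // keep processing packets
}
```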
Package vala is a simple, extensible library to make argument validation in Go palatable. This package uses the fluent programming style to provide simultaneously more robust and more terse parameter validation. Notice how checks can be tiered. Vala is also extensible. As long as a function conforms to the Checker specification, you can pass it into the Validate method:
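A minimal sketch of the fluent style; BeginValidation, Validate, CheckAndPanic, and the NotNil/StringNotEmpty checkers are assumptions based on the vala README and are not confirmed by this document.

```go
package main

import (
	"github.com/kat-co/vala" // import path is an assumption
)

type Widget struct{}

// doWork validates its arguments up front; all names below are assumptions.
func doWork(w *Widget, name string) {
	vala.BeginValidation().Validate(
		vala.NotNil(w, "w"),
		vala.StringNotEmpty(name, "name"),
	).CheckAndPanic()

	// ... proceed knowing the arguments are valid ...
}

func main() {
	doWork(&Widget{}, "gopher")
}
```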
Package securejoin implements a set of helpers to make it easier to write Go code that is safe against symlink-related escape attacks. The primary idea is to let you resolve a path within a rootfs directory as if the rootfs were a chroot. securejoin has two APIs, a "legacy" API and a "modern" API. The legacy API is SecureJoin and SecureJoinVFS. These methods are **not** safe against race conditions where an attacker changes the filesystem after (or during) the SecureJoin operation. The new API is made up of OpenInRoot and MkdirAll (and derived functions). These are safe against racing attackers and have several other protections that are not provided by the legacy API. There are many more operations that most programs expect to be able to do safely, but we do not provide explicit support for them because we want to encourage users to switch to [libpathrs](https://github.com/openSUSE/libpathrs), which is a cross-language next-generation library that is entirely designed around operating on paths safely. securejoin has been used by several container runtimes (Docker, runc, Kubernetes, etc.) for quite a few years as a de-facto standard for operating on container filesystem paths "safely". However, most users still use the legacy API, which is unsafe against various attacks (there is a fairly long history of CVEs in dependent projects as a result). Users should switch to the modern API as soon as possible (or even better, switch to libpathrs). This project was initially intended to be included in the Go standard library, but [it was rejected](https://go.dev/issue/20126). There is now a [new Go proposal](https://go.dev/issue/67002) for a safe path resolution API that shares some of the goals of filepath-securejoin. However, that design is intended to work like `openat2(RESOLVE_BENEATH)`, which does not fit the use case of container runtimes and most system tools.
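A sketch contrasting the two APIs; the rootfs path and file names are illustrative, and the import path assumes github.com/cyphar/filepath-securejoin.

```go
package main

import (
	"fmt"
	"log"

	securejoin "github.com/cyphar/filepath-securejoin"
)

func main() {
	// Legacy API: resolve a path lexically within the rootfs, as if it
	// were a chroot. Not safe against a racing attacker.
	path, err := securejoin.SecureJoin("/var/lib/rootfs", "../../etc/passwd")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(path) // stays inside /var/lib/rootfs

	// Modern API: open the path safely even against racing attackers.
	f, err := securejoin.OpenInRoot("/var/lib/rootfs", "etc/passwd")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
}
```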
Package bindata converts any file into manageable Go source code. Useful for embedding binary data into a Go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but giving a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but using the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag, `-prefix`.
The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag, `-prefix`. This accepts a [regular expression](https://github.com/google/re2/wiki/Syntax) string, which will be used to match a portion of the map keys and function names that should be stripped out. For example, without the `-prefix` flag an input file such as `path/to/foo.html` keeps its full path as the map key; running with `-prefix "path/to/"` strips that portion, leaving just `foo.html`. With the optional Tags field, you can specify any Go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool. When you want to embed big files or plenty of files, the generated output can be really big (perhaps over 3 MB). Even if the generated file is never meant to be read, you may still need to run analysis tools or an editor on it, and both can become slow with such a file. Generating big files can be avoided with the `-split` command line option. In that case the given output is a directory path; the tool will generate one source file per embedded file, plus a common file named `common.go` which contains shared parts such as the API.
Package getopt (v2) provides traditional getopt processing for implementing commands that use traditional command lines. The standard Go flag package cannot be used to write a program that parses flags the way ls or ssh does, for example. Version 2 of this package has a simplified API. See the github.com/pborman/options package for a simple structure based interface to this package. Getopt supports functionality found in both the standard BSD getopt as well as (one of the many versions of) the GNU getopt_long. Being a Go package, this package makes common usage easy, but still enables more controlled usage if needed. Typical usage is shown in the sketch following this overview. If you don't want the program to exit on error, use getopt.Getopt instead of getopt.Parse. Support is provided for both short (-f) and long (--flag) options. A single option may have both a short and a long name. Each option may be a flag or a value. A value takes an argument. Declaring no long names causes this package to process arguments like the traditional BSD getopt. Short flags may be combined into a single parameter. For example, "-a -b -c" may also be expressed "-abc". Long flags must stand on their own: "--alpha --beta". Values require an argument. For short options the argument may either immediately follow the short name or be the next argument. Only one short value may be combined with short flags in a single argument; the short value must come after all short flags. For example, if f is a flag and v is a value, then "-fv argument" and "-f -v argument" are equivalent; for the long value option val, "--val argument" and "--val=argument" are equivalent. Values with an optional value only set the value if the value is part of the same argument. In any event, the option count is increased and the option is marked as seen. There is no convenience function defined for making the value optional; the SetOptional method must be called on the actual Option. Parsing continues until the first non-option or "--" is encountered. The short name "-" can be used, but it either is specified as "-" or as part of a group of options, for example "-f-". If there are no long options specified then "--f" could also be used. If "-" is not declared as an option then the single "-" will also terminate the option processing, but unlike "--", the "-" will be part of the remaining arguments. Normally the parsing is performed by calling the Parse function. If it is important to see the order of the options then the Getopt function should be used. The standard Parse function does the equivalent of calling Getopt and, on error, printing the error, displaying the usage, and exiting the program. When calling Getopt it is the responsibility of the caller to print any errors. Normally the default option set, CommandLine, is used. Other option sets may be created with New. After parsing, the set's Args will contain the non-option arguments. If an error is encountered then Args will begin with the argument that caused the error. It is valid to call a set's Parse a second time to amend flags or values. As an example, if called with the arguments { "prog", "-a", "cmd", "-b", "arg" }, then both a and b would be set, cmd would be set to "cmd", and opts.Args() would return { "arg" }. Unless an option type explicitly prohibits it, an option may appear more than once in the arguments. The last value provided to the option is the value. An option marked as mandatory and not seen when parsing will cause an error to be reported such as: "program: --name is a mandatory option". An option is marked mandatory by using the Mandatory method. Mandatory options have (required) appended to their help message.
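A hedged sketch of the typical usage mentioned above, using the default CommandLine set (the flag names are illustrative):

```go
package main

import (
	"fmt"

	"github.com/pborman/getopt/v2"
)

var (
	verbose = getopt.BoolLong("verbose", 'v', "be verbose")
	name    = getopt.StringLong("name", 'n', "world", "who to greet")
	count   = getopt.IntLong("count", 'c', 1, "number of greetings")
)

func main() {
	getopt.Parse() // prints the error, shows usage, and exits on failure

	for i := 0; i < *count; i++ {
		if *verbose {
			fmt.Printf("greeting %d of %d\n", i+1, *count)
		}
		fmt.Println("hello,", *name)
	}
	fmt.Println("remaining arguments:", getopt.Args())
}
```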
Options can be marked as part of a mutually exclusive group. When two or more options in a mutually exclusive group are seen while parsing, an error such as "program: options -a and -b are mutually exclusive" will be reported. Mutually exclusive groups are declared using the SetGroup method. A set can have multiple mutually exclusive groups. Mutually exclusive groups are identified with their group name in {}'s appended to their help message. The Flag and FlagLong functions support most standard Go types; for the full list, see the description of FlagLong. There are also helper routines to allow single line flag declarations. These types are: Bool, Counter, Duration, Enum, Int16, Int32, Int64, Int, List, Signed, String, Uint16, Uint32, Uint64, Uint, and Unsigned. Each comes in a short and long flavor, e.g., Bool and BoolLong, and includes functions to set the flags on the standard command line or for a specific Set of flags. Except for the Counter, Enum, Signed and Unsigned types, all of these types can be declared using Flag and FlagLong by passing in a pointer to the appropriate type. A pointer to any type that implements the Value interface may be passed to Flag or FlagLong. All non-flag options are created with a "valuehelp" as the last parameter. Valuehelp should be 0, 1, or 2 strings. The first string, if provided, is the usage message for the option. The second string, if provided, is the name to use for the value when displaying the usage. If not provided, the term "value" is assumed. For example, an option created with StringLong("output", 'o', "", "write output to file", "file") would show its argument as "file" rather than the default "value" in the usage message.
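A hedged sketch of mandatory options and mutually exclusive groups (the option names and the "volume" group label are illustrative):

```go
package main

import (
	"fmt"

	"github.com/pborman/getopt/v2"
)

func main() {
	var (
		name    string
		verbose bool
		quiet   bool
	)

	// FlagLong takes a pointer to a supported type and returns the
	// Option, so modifiers can be chained onto the declaration.
	getopt.FlagLong(&name, "name", 'n', "name of the user").Mandatory()
	getopt.FlagLong(&verbose, "verbose", 'v', "be verbose").SetGroup("volume")
	getopt.FlagLong(&quiet, "quiet", 'q', "be quiet").SetGroup("volume")

	// Errors if --name is missing or if both -v and -q are given.
	getopt.Parse()

	fmt.Println(name, verbose, quiet)
}
```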
Package muxado implements a general purpose stream-multiplexing protocol. muxado allows client applications to multiplex any io.ReadWriteCloser (like a net.Conn) into multiple, independent full-duplex streams. muxado is a useful protocol for any two communicating processes. It is an excellent base protocol for implementing lightweight RPC. It eliminates the need for custom async/pipelining code to support multiple simultaneous in-flight requests between peers. For the same reason, it also eliminates the need to build connection pools for your clients. It enables servers to initiate streams to clients without any NAT traversal. muxado can also yield performance improvements (especially latency) for protocols that require rapidly opening many concurrent connections. A sketch of a client which responds to simple JSON requests from a server appears at the end of this overview. Maybe the client wants to make a request to the server instead of just responding; this is easy as well: the client opens a stream of its own in the same fashion. muxado defines the following terms for clarity of the documentation: a "Transport" is an underlying stream (typically TCP) that is multiplexed by sending frames between muxado peers over this transport; a "Stream" is any of the full-duplex byte-streams multiplexed over the transport; a "Session" is two peers running the muxado protocol over a single transport. muxado's design is influenced heavily by the framing layer of HTTP2 and SPDY. However, instead of being specialized for a higher-level protocol, muxado is designed in a protocol agnostic way with simplicity and speed in mind. More advanced features are left to higher-level libraries and protocols. muxado's API is designed to make it seamless to integrate into existing Go programs. muxado.Session implements the net.Listener interface and muxado.Stream implements net.Conn. muxado ships with two wrappers that add commonly used functionality. The first is a TypedStreamSession which allows a client application to open streams with a type identifier so that the remote peer can identify the protocol that will be communicated on that stream. The second wrapper is a simple Heartbeat which issues a callback to the application informing it of round-trip latency and heartbeat failure.
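A minimal sketch of such a client, assuming the `github.com/inconshreveable/muxado` import path and a two-argument Client constructor (check your version's exact signature; the address and request/response shapes are illustrative):

```go
package main

import (
	"encoding/json"
	"net"

	"github.com/inconshreveable/muxado"
)

func main() {
	conn, err := net.Dial("tcp", "example.com:1234")
	if err != nil {
		panic(err)
	}

	// Wrap the transport: one side runs Client, the other runs Server.
	sess := muxado.Client(conn, nil)

	// Because Session implements net.Listener, the client can accept
	// streams that the server initiates and answer each JSON request.
	for {
		stream, err := sess.Accept()
		if err != nil {
			panic(err)
		}
		go func(s net.Conn) {
			defer s.Close()
			var req map[string]interface{}
			if err := json.NewDecoder(s).Decode(&req); err != nil {
				return
			}
			json.NewEncoder(s).Encode(map[string]string{"status": "ok"})
		}(stream)
	}
}
```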
Package retryablehttp provides a familiar HTTP client interface with automatic retries and exponential backoff. It is a thin wrapper over the standard net/http client library and exposes nearly the same public API. This makes retryablehttp very easy to drop into existing programs. retryablehttp performs automatic retries under certain conditions. Mainly, if an error is returned by the client (connection errors, etc.), or if a 500-range response is received, then a retry is invoked. Otherwise, the response is returned and left to the caller to interpret. Requests which take a request body should provide a non-nil function parameter. The best choice is to provide either a function satisfying ReaderFunc, which provides multiple io.Readers in an efficient manner, a *bytes.Buffer (the underlying raw byte slice will be used), or a raw byte slice. As it is a reference type, and we will wrap it as needed by readers, we can efficiently re-use the request body without needing to copy it. If an io.Reader (such as a *bytes.Reader) is provided, the full body will be read prior to the first request, and will be efficiently re-used for any retries. A ReadSeeker can be used, but some users have observed occasional data races between the net/http library and the Seek functionality of some implementations of ReadSeeker, so it should be avoided if possible.
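A minimal sketch of drop-in usage, assuming the `github.com/hashicorp/go-retryablehttp` import path (the URL and retry settings are illustrative):

```go
package main

import (
	"fmt"
	"time"

	retryablehttp "github.com/hashicorp/go-retryablehttp"
)

func main() {
	client := retryablehttp.NewClient()
	client.RetryMax = 5
	client.RetryWaitMin = 200 * time.Millisecond

	// Get retries automatically on connection errors and 5xx responses.
	resp, err := client.Get("https://example.com/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```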
Package goresilience is a framework/library of utilities to improve the resilience of programs easily. The library is based on the `goresilience.Runner` interface; these runners can be chained using the decorator pattern (like the standard library's `http.Handler` interface). This makes the library extensible, flexible and clean to use. The runners can be chained as if they were middlewares acting on the whole execution of the `goresilience.Func`. The simplest usage is a single runner, retry with the default settings; this makes the `goresilience.Func` be executed and retried N times if it fails. More than one `goresilience.Runner` can also be chained to create a very resilient execution of the `goresilience.Func`, for example a runner that retries and also times out, with the timeout configured. All the execution through the runners can be measured using Prometheus metrics. Finally, when the result is not needed, there is no need to use an inline function, and objects can also be used to pass parameters and obtain results.
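A minimal sketch of the single-runner case, assuming the `github.com/slok/goresilience/retry` package (the work inside the function is illustrative):

```go
package main

import (
	"context"
	"fmt"

	"github.com/slok/goresilience/retry"
)

func main() {
	// A runner that retries the wrapped goresilience.Func with the
	// default settings.
	runner := retry.New(retry.Config{})

	err := runner.Run(context.TODO(), func(ctx context.Context) error {
		// Do the work that may fail transiently.
		fmt.Println("trying...")
		return nil
	})
	if err != nil {
		fmt.Println("execution failed:", err)
	}
}
```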
Package well provides a framework that helps implementation of commands having these features: Better logging: by using the github.com/cybozu-go/log package, logs can be structured in JSON or logfmt format, and HTTP servers log accesses automatically. Graceful exit: the framework provides functions to manage goroutines and a network server implementation that can be shut down gracefully. Signal handlers: the framework installs SIGINT/SIGTERM signal handlers for graceful exit, and a SIGUSR1 signal handler to reopen log files. Environment: Environment is the heart of the framework. It provides a base context.Context that will be canceled before the program stops, and methods to manage goroutines. To make the framework easy to use, it provides a default instance of Environment and functions that work with it. The most basic usage of the framework is sketched below; an HTTP server that stops gracefully follows the same pattern.
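A minimal sketch of the most basic usage, using the default Environment (the one-second task is illustrative):

```go
package main

import (
	"context"
	"time"

	"github.com/cybozu-go/well"
)

func main() {
	// Run a goroutine managed by the default Environment. Its context
	// is canceled on SIGINT/SIGTERM or when well.Stop is called.
	well.Go(func(ctx context.Context) error {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(time.Second):
			return nil
		}
	})

	well.Stop()        // declare that no more goroutines will be added
	err := well.Wait() // wait for all managed goroutines to finish
	if err != nil && !well.IsSignaled(err) {
		panic(err)
	}
}
```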
Package packit provides primitives for implementing a Cloud Native Buildpack according to the specification: https://github.com/buildpacks/spec/blob/main/buildpack.md. According to the specification, the buildpack interface is composed of both a detect and a build phase. Each of these phases has a corresponding set of packit primitives that enable developers to easily implement a buildpack. The purpose of the detect phase is for buildpacks to declare dependencies that are provided or required for the buildpack to execute. Implementing the detect phase can be achieved by calling the Detect function and providing a DetectFunc callback to be invoked during that phase, for example a simple detect phase that provides the "yarn" dependency and requires the "node" dependency. The purpose of the build phase is to perform the operation of providing whatever dependencies were declared in the detect phase for the given application code, for example adding "yarn" as a dependency to the application source code. Implementing the build phase can be achieved by calling the Build function and providing a BuildFunc callback to be invoked during that phase. Buildpacks can be created with a single entrypoint executable using the packit.Run function. Here, you can combine both the Detect and Build phases, and Run will ensure that the correct phase is called when the matching executable is invoked by the Cloud Native Buildpack Lifecycle. Below is an example that combines a simple detect and build into a single main program. These examples show the very basics of what a buildpack implementation using packit might entail. For more details, please consult the documentation of the types and functions declared herein.
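A hedged sketch combining the detect and build phases described above, assuming the v1 `github.com/paketo-buildpacks/packit` import path (v2 lives under `packit/v2`); the build phase is left empty here:

```go
package main

import "github.com/paketo-buildpacks/packit"

// detect provides "yarn" and requires "node", as in the description above.
func detect(packit.DetectContext) (packit.DetectResult, error) {
	return packit.DetectResult{
		Plan: packit.BuildPlan{
			Provides: []packit.BuildPlanProvision{{Name: "yarn"}},
			Requires: []packit.BuildPlanRequirement{{Name: "node"}},
		},
	}, nil
}

// build would create layers contributing the declared dependencies; it
// returns an empty result in this sketch.
func build(packit.BuildContext) (packit.BuildResult, error) {
	return packit.BuildResult{}, nil
}

func main() {
	// Run dispatches to detect or build depending on which executable
	// name the CNB lifecycle invoked.
	packit.Run(detect, build)
}
```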
Package wasmtime is a WebAssembly runtime for Go powered by Wasmtime. This package provides everything necessary to compile and execute WebAssembly modules as part of a Go program. Wasmtime is a JIT compiler written in Rust, and can be found at https://github.com/bytecodealliance/wasmtime. This package is a binding to the C API provided by Wasmtime. The API of this Go package is intended to mirror the Rust API (https://docs.rs/wasmtime) relatively closely, so if you find something is under-documented here then you may have luck consulting the Rust documentation as well. As always, though, feel free to file any issues at https://github.com/bytecodealliance/wasmtime-go/issues/new. It's also worth pointing out that the authors of this package up to this point primarily work in Rust, so if you've got suggestions on how to make this package more idiomatic for Go, we'd love to hear your thoughts! Below is an example of a wasm module which calculates the GCD of two numbers; instantiating a small wasm module which imports functionality from the host, then calling into wasm which calls back into the host, works along the same lines.
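A sketch of the GCD example, assuming the unversioned `github.com/bytecodealliance/wasmtime-go` import path (recent releases are versioned, e.g. `/v25`):

```go
package main

import (
	"fmt"

	"github.com/bytecodealliance/wasmtime-go"
)

func main() {
	engine := wasmtime.NewEngine()
	store := wasmtime.NewStore(engine)

	// Euclid's algorithm: while b != 0 { t = b; b = a % b; a = t }.
	wat := `(module
	  (func (export "gcd") (param $a i32) (param $b i32) (result i32)
	    (local $t i32)
	    (block $done
	      (loop $top
	        (br_if $done (i32.eqz (local.get $b)))
	        (local.set $t (local.get $b))
	        (local.set $b (i32.rem_u (local.get $a) (local.get $b)))
	        (local.set $a (local.get $t))
	        (br $top)))
	    (local.get $a)))`

	wasm, err := wasmtime.Wat2Wasm(wat)
	if err != nil {
		panic(err)
	}
	module, err := wasmtime.NewModule(engine, wasm)
	if err != nil {
		panic(err)
	}
	instance, err := wasmtime.NewInstance(store, module, nil)
	if err != nil {
		panic(err)
	}

	gcd := instance.GetFunc(store, "gcd")
	result, err := gcd.Call(store, 27, 6)
	if err != nil {
		panic(err)
	}
	fmt.Println("gcd(27, 6) =", result) // gcd(27, 6) = 3
}
```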
Package gdal provides a wrapper for GDAL, the Geospatial Data Abstraction Library. This C/C++ library provides access to a large number of geospatial raster data formats. It also contains a wrapper for the related OGR Simple Feature Library, which provides similar functionality for vector formats. Some less frequently used functions are not yet implemented. The majority of these involve style tables, asynchronous I/O, and GCPs. The documentation is fairly limited, but the functionality fairly closely matches that of the C++ API. This wrapper has most recently been tested on Windows 7, using the MinGW32_x64 compiler and GDAL version 1.11. A simple program to create a georeferenced blank 256x256 GeoTIFF is sketched below. More examples can be found in the ./examples subdirectory.
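A hedged sketch of that program, assuming the `github.com/lukeroth/gdal` binding (exact method signatures may differ between versions; the geotransform values and EPSG code are illustrative):

```go
package main

import "github.com/lukeroth/gdal"

func main() {
	driver, err := gdal.GetDriverByName("GTiff")
	if err != nil {
		panic(err)
	}

	// Create a blank single-band 256x256 byte raster.
	ds := driver.Create("blank.tif", 256, 256, 1, gdal.Byte, nil)
	defer ds.Close()

	// Georeference it: origin, pixel size, and rotation terms.
	ds.SetGeoTransform([6]float64{444720, 30, 0, 3751320, 0, -30})

	// Assign a spatial reference (UTM zone 11N in this sketch).
	sr := gdal.CreateSpatialReference("")
	sr.FromEPSG(32611)
	wkt, err := sr.ToWKT()
	if err != nil {
		panic(err)
	}
	ds.SetProjection(wkt)
}
```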
Package wmenu creates menus for CLI programs. It uses wlog for its interface with the command line. It uses os.Stdin, os.Stdout, and os.Stderr with concurrency by default. wmenu allows you to change the color of the different parts of the menu. This package also creates its own error structure so you can type assert it if you need to. wmenu will validate all responses before calling any function. It will also figure out which function should be called so you don't have to.
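A minimal sketch, assuming the `github.com/dixonwille/wmenu` import path (current releases are versioned, e.g. `/v5`) and its Option/Action signatures:

```go
package main

import (
	"fmt"

	"github.com/dixonwille/wmenu"
)

func main() {
	menu := wmenu.NewMenu("What would you like to do?")

	// The Action callback receives the options the user selected.
	menu.Action(func(opts []wmenu.Opt) error {
		fmt.Println("you chose:", opts[0].Text)
		return nil
	})

	// Title, associated value, whether it is the default, and an
	// optional per-option function.
	menu.Option("Say hello", nil, true, nil)
	menu.Option("Quit", nil, false, nil)

	if err := menu.Run(); err != nil {
		panic(err)
	}
}
```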
Package rpm implements the rpm package file format. For more information about the rpm file format, see: http://ftp.rpm.org/max-rpm/s1-rpm-file-format-rpm-file-format.html Packages are composed of two headers: the Signature header and the "Header" header. Each contains key-value pairs called tags. Tags map an integer key to a value whose data type will be one of the TagType types. Tag values can be decoded with the appropriate Tag method for the data type. Many known tags are available as Package methods. For example, RPMTAG_NAME and RPMTAG_BUILDTIME are available as Package.Name and Package.BuildTime respectively. Tags can be retrieved and decoded from the Signature or Header headers directly using Header.GetTag and their tag identifier. Header.GetTag and all Tag methods will return a zero value if the header or the tag do not exist, or if the tag has a different data type. You may enumerate all tags in a header with Header.Tags. In the rpm ecosystem, package versions are compared using EVR: epoch, version, release. Versions may be compared using the Compare function. Packages may be sorted using the PackageSlice type, which implements sort.Interface. Packages are sorted lexically by name ascending and then by version descending. Version is evaluated first by epoch, then by version string, then by release. The Sort function is provided for your convenience. Packages may be validated using MD5Check or GPGCheck. See the example for each function. The payload of an rpm package is typically archived in cpio format and compressed with xz. To decompress and unarchive an rpm payload, the reader that read the rpm package headers will be positioned at the beginning of the payload and can be reused with the appropriate Go packages for the rpm payload format. You can check the archive format with Package.PayloadFormat and the compression algorithm with Package.PayloadCompression. For the cpio archive format, the following package is recommended: https://github.com/cavaliergopher/cpio For xz compression, the following package is recommended: https://github.com/ulikunitz/xz See README.md for a working example of extracting files from a cpio/xz rpm package using these packages. See cmd/rpmdump and cmd/rpminfo for example programs that emulate tools from the rpm ecosystem.
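A minimal sketch of reading a package's headers and checking its payload format, assuming the `github.com/cavaliergopher/rpm/v2` import path (the file name is illustrative):

```go
package main

import (
	"fmt"
	"os"

	"github.com/cavaliergopher/rpm/v2"
)

func main() {
	f, err := os.Open("golang.rpm")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Read the Signature and Header headers; f is left positioned at
	// the start of the (typically cpio/xz) payload, ready to be handed
	// to the appropriate archive and compression packages.
	pkg, err := rpm.Read(f)
	if err != nil {
		panic(err)
	}

	fmt.Println(pkg.Name(), pkg.Version(), pkg.Release())
	fmt.Println("payload:", pkg.PayloadFormat(), pkg.PayloadCompression())
}
```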
Package sqlite provides a Go interface to SQLite 3. The semantics of this package are deliberately close to the SQLite3 C API. See the official C API introduction for an overview of the basics. An SQLite connection is represented by a *Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (e.g. by using zombiezen.com/go/sqlite/sqlitex.NewPool to create a *zombiezen.com/go/sqlite/sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database. This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache. The implementation automatically handles shared cache locking; see the documentation on Stmt.Step for details. The optional SQLite 3 extensions compiled in are: session, FTS5, RTree, JSON1, and GeoPoly. This is not a database/sql driver. For helper functions to make it easier to execute statements, see the zombiezen.com/go/sqlite/sqlitex package. Statements are prepared with the Conn.Prepare and Conn.PrepareTransient methods. When using Conn.Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can call Prepare with the same query string repeatedly: after all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement. SQLite transactions can be managed manually with this package by directly executing BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE / ROLLBACK statements, but there are also helper functions available in zombiezen.com/go/sqlite/sqlitex. For simple schema migration needs, see the zombiezen.com/go/sqlite/sqlitemigration package. Use Conn.CreateFunction to register Go functions for use as SQL functions. The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or more commonly, a large number of moderate-sized blobs concurrently.) See Conn.OpenBlob for more details. Every connection can have a done channel associated with it using the Conn.SetInterrupt method. This is typically the channel returned by a context.Context.Done method. As database connections are long-lived, the Conn.SetInterrupt method can be called multiple times to reset the associated lifetime. The package examples include using a Pool to execute SQL in a concurrent HTTP handler, both with the sqlitex helpers and with the lower-level SQLite statement API.
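A minimal sketch of opening a connection and running statements via the sqlitex helpers (the file name and schema are illustrative):

```go
package main

import (
	"fmt"

	"zombiezen.com/go/sqlite"
	"zombiezen.com/go/sqlite/sqlitex"
)

func main() {
	conn, err := sqlite.OpenConn("example.db", sqlite.OpenReadWrite, sqlite.OpenCreate)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// ExecuteTransient is for one-off statements that should not be
	// cached on the connection.
	if err := sqlitex.ExecuteTransient(conn,
		`CREATE TABLE IF NOT EXISTS t (x INTEGER);`, nil); err != nil {
		panic(err)
	}

	// Execute prepares (and caches, keyed by query string) a statement
	// and invokes ResultFunc for each result row.
	err = sqlitex.Execute(conn, `SELECT x FROM t;`, &sqlitex.ExecOptions{
		ResultFunc: func(stmt *sqlite.Stmt) error {
			fmt.Println(stmt.ColumnInt64(0))
			return nil
		},
	})
	if err != nil {
		panic(err)
	}
}
```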