A Cypher fuzzer generating complex, semantically and syntactically valid queries. Dinkel provides an easily extensible framework for targeting any Cypher implementation. Additionally, dinkel supports different fuzzing techniques, targeting both exception and logic bugs. This allows for easy and thorough testing of any Cypher implementation with little setup required. Dinkel achieves query complexity and validity by keeping track of the query context and database state during generation. This information is then used during the stateful generation of a query's clauses, allowing for complex data dependencies within a query.
Package noleak complements the Ginkgo/Gomega testing and matcher framework with matchers for goroutine leakage detection. To start with, Goroutines returns information about all (non-dead) goroutines at a particular moment. This is useful to capture a known correct snapshot and then later take a new snapshot and compare the two snapshots for leaked goroutines. Next, the HaveLeaked matcher filters out well-known and expected "non-leaky" goroutines from an actual list of goroutines (passed from Eventually or Expect), hopefully ending up with an empty list of leaked goroutines. If there are still goroutines left after filtering, then HaveLeaked() will succeed ... which usually is actually considered to be a failure. So this can rather be declared to be "suckcess", because no one wants leaked goroutines. A typical pattern to detect goroutines leaked in individual tests is as follows (a sketch appears at the end of this description): Using Eventually instead of Expect ensures that temporary goroutines are given some time to finally wind down. Gomega's default values apply: the 1s timeout and 10ms polling interval. Please note that the form is the same as the slightly longer, but also more expressive variant: Depending on your tests and the dependencies used, you might need to identify additional goroutines as not being leaks. The noleak package comes with the following predefined goroutine "filter" matchers that can be specified as arguments to HaveLeaked(...): In addition, you can use any other GomegaMatcher, as long as it can work on a (single) goroutine.Goroutine. For instance, Gomega's HaveField and WithTransform matchers are good foundations for writing project-specific noleak matchers. By default, when noleak's HaveLeaked matcher finds one or more leaked goroutines, it dumps the goroutine backtraces in a condensed format that uses only a single line per call instead of two lines. Moreover, the backtraces abbreviate the source file location in the form of package/source.go:lineno: By setting noleak.ReportFilenameWithPath=true, the leaky goroutine backtraces will show full path names for each source file: noleak has been heavily inspired by the Goroutine leak detector github.com/uber-go/goleak. That's definitely a fine piece of work! But then why another goroutine leak package? After a deep analysis of Uber's goleak we decided against crunching goleak somehow half-assed into the Gomega TDD matcher ecosystem. In particular, reusing and wrapping the existing Uber implementation would have become very awkward: goleak.Find combines all the different elements of getting actual goroutine information, filtering it, arriving at a leak conclusion, and even retrying multiple times, all in just one single exported function. Unfortunately, goleak makes gathering information about all goroutines an internal matter, so we cannot reuse such functionality elsewhere. Users of the Gomega ecosystem are already experienced in arriving at conclusions and retrying temporarily failing expectations: Gomega does it in the form of Eventually().ShouldNot(), and (without the retrying aspect) with Expect().NotTo(). So what is missing is only a goroutine leak detector in the form of the HaveLeaked matcher, as well as the ability to specify goroutine filters in order to sort out the non-leaking (and therefore expected) goroutines, using a few filter criteria. That is, a few new goroutine-related matchers. In this architecture, even existing Gomega matchers can optionally be (re)used as the need arises. https://github.com/onsi/gomega and https://github.com/onsi/ginkgo.
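A rough sketch of that per-test pattern, assuming dot-imported Ginkgo, Gomega and noleak identifiers and the goroutine.Goroutine type mentioned above:

	var ignoreGood []goroutine.Goroutine

	BeforeEach(func() {
		// Snapshot the goroutines that are known and expected to exist.
		ignoreGood = Goroutines()
	})

	AfterEach(func() {
		// Eventually (rather than Expect) gives short-lived goroutines
		// some time to wind down before leaks are reported.
		Eventually(Goroutines).ShouldNot(HaveLeaked(ignoreGood))
	})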
Package httpexpect helps with end-to-end HTTP and REST API testing. See the example directory: There are two common ways to test an API with httpexpect: The second approach works only if the server is a Go module and its handler can be imported in tests. Concrete behaviour is determined by the Client implementation passed to the Config struct. If you're using http.Client, set its Transport field (http.RoundTripper) to one of the following: Note that the http handler can usually be obtained from the http framework you're using. E.g., the echo framework provides either an http.Handler or a fasthttp.RequestHandler. You can also provide your own implementation of RequestFactory (creates http.Request) or Client (gets http.Request and returns http.Response). If you're starting the server from tests, it's very handy to use net/http/httptest. Whenever values are checked for equality in httpexpect, they are converted to "canonical form": This is equivalent to successively calling json.Marshal() and json.Unmarshal() on the value, and it is currently implemented that way. When some check fails, a failure is reported. If non-fatal failures are used (see the Reporter interface), execution continues and the instance that was checked is marked as failed. If a specific instance is marked as failed, all subsequent checks are ignored for this instance and for any child instances retrieved after the failure. Example:
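For orientation, a minimal sketch of the first approach (testing a server over HTTP with net/http/httptest); it assumes httpexpect v2's Default constructor, and handler() is a hypothetical function returning the http.Handler under test:

	import (
		"net/http"
		"net/http/httptest"
		"testing"

		"github.com/gavv/httpexpect/v2"
	)

	func TestUsersEndpoint(t *testing.T) {
		// Serve the handler under test on an ephemeral port.
		srv := httptest.NewServer(handler()) // handler() is hypothetical
		defer srv.Close()

		e := httpexpect.Default(t, srv.URL)

		e.GET("/users").
			Expect().
			Status(http.StatusOK).
			JSON().Array().NotEmpty()
	}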
Package skogul is a framework for receiving, processing and forwarding data, typically metric data or event-oriented data, at high throughput. It is designed to be as agnostic as possible with regards to how it transmits data and how it receives it, and the processors in between need not worry about how the data got there or how it will be treated in the next chain. This means you can use Skogul to receive data on an influxdb-like line-based TCP interface and send it on to postgres - or influxdb - without having to write explicit support, just set up the chain. The guiding principles of Skogul are: - Make as few assumptions as possible about how data is received - Be stupid fast End users should only need to worry about the cmd/skogul tool, which comes fully equipped with self-contained documentation. Adding new logic to Skogul should also be fairly easy. New developers should focus on understanding two things: 1. The skogul.Container data structure - which is the heart of Skogul. 2. The relationship from receiver to handler to sender. The Container is documented in this very package. Receivers are where data originates within Skogul. The typical Receiver will receive data from the outside world, e.g. by other tools posting data to an HTTP endpoint. Receivers can also be used to "create" data, either test data or, for example, log data. When skogul starts, it will start all receivers that are configured. Handlers determine what is done with the data once received. They are responsible for parsing raw data and optionally transforming it. This is the only place where it is allowed to _modify_ data. Today, the only transformer is the "templater", which allows a collection of metrics which share certain attributes (e.g.: all collected at the same time and from the same machine) to provide these shared attributes in a template which the "templater" transformer then applies to all metrics. Other examples of transformations that make sense are: - Adding a metadata field - Removing a metadata field - Removing all but a specific set of fields - Converting nested metrics to multiple metrics or flattening them Once a handler has done its deed, it sends the Container to the sender, and this is where "the fun begins", so to speak. Senders consist of just a data structure that implements the Send() interface, as sketched below. They are not allowed to change the container, but besides that, they can do "whatever". The most obvious example is to send the container to a suitable storage system - e.g., a time series database. So if you want to add support for a new time series database in Skogul, you will write a sender. In addition to that, many senders serve only to add internal logic and pass data on to other senders. Each sender should only do one specific thing. For example, if you want to write data both to InfluxDB and MySQL, you need three senders: the "MySQL" and "InfluxDB" senders, and the "dupe" sender, which just takes a list of other senders and sends whatever it receives on to all of them. Today, Senders and Receivers both have an identical "Auto"-system, found in auto.go of the relevant directories. This is how the individual implementations are made discoverable to the configuration system, and how documentation is provided. Documentation for the settings of a sender/receiver is handled as struct tags. Once more parsers/transformers are added, they will likely also use a similar system.
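To make the sender side concrete, here is a hedged sketch of a trivial custom sender, assuming the Send() interface described above receives a *skogul.Container (with a Metrics slice) and returns an error:

	// debugSender is a hypothetical sender that only counts the metrics it
	// receives. Senders must not modify the container they are given.
	type debugSender struct{}

	func (d *debugSender) Send(c *skogul.Container) error {
		log.Printf("received a container with %d metrics", len(c.Metrics))
		return nil
	}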
Gomega is the Ginkgo BDD-style testing framework's preferred matcher library. The godoc documentation describes Gomega's API. More comprehensive documentation (with examples!) is available at http://onsi.github.io/gomega/ Gomega on Github: http://github.com/onsi/gomega Learn more about Ginkgo online: http://onsi.github.io/ginkgo Ginkgo on Github: http://github.com/onsi/ginkgo Gomega is MIT-Licensed
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use Open to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name formats): where all parameters must be escaped or use `Config` and `DSN` to construct a DSN string. The following example opens a database handle with the Snowflake account myaccount where the username is jsmith, password is mypassword, database is mydb, schema is testschema, and warehouse is mywh: The following connection parameters are supported: account <string>: Specifies the name of your Snowflake account, where string is the name assigned to your account by Snowflake. In the URL you received from Snowflake, your account name is the first segment in the domain (e.g. abc123 in https://abc123.snowflakecomputing.com). This parameter is optional if your account is specified after the @ character. If you are not on us-west-2 region or AWS deployment, then append the region after the account name, e.g. “<account>.<region>”. If you are not on AWS deployment, then append not only the region, but also the platform, e.g., “<account>.<region>.<platform>”. Account, region, and platform should be separated by a period (“.”), as shown above. If you are using a global url, then append connection group and "global", e.g., "account-<connection_group>.global". Account and connection group are separated by a dash ("-"), as shown above. region <string>: DEPRECATED. You may specify a region, such as “eu-central-1”, with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using MFA for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. 
client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, such that the connection session will never expire. Care should be taken in using this option, as it opens up access forever as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check run in fail-closed mode. validateDefaultParameters: true by default. Set to false to disable checks on the existence of and privileges for the Database, Schema, Warehouse and Role when setting up the connection. All other parameters are taken as session parameters. For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. `no_proxy=.amazonaws.com` means that AWS S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's builtin logger is a NOP; no output is generated. This is intentional, so that applications that use the same set of logger parameters do not conflict with glog, which is incorporated into the driver's logging framework. In order to enable debug logging for the driver, add the build tag sfdebug to the go tool command lines, for example: For tests, run the test command with the tag along with the glog parameters. For example, the following command will send all activity logs to standard error. Likewise, if you build your application with the tag, you may specify the same set of glog parameters. To get the logs for a specific module, use the -vmodule option. For example, to retrieve the driver.go and connection.go module logs: Note: If your request retrieves no logs, call db.Close() or glog.Flush() to flush the glog buffer. Note: The logger may be changed in the future for better logging. Currently, if the application uses the same parameters as glog, you cannot collect both application and driver logs at the same time. From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command with Ctrl+C, add an os.Interrupt trap in a context and pass it to methods that can take a context parameter, e.g., QueryContext, ExecContext. See cmd/selectmany.go for the full example. Queries return SQL column type information in the ColumnType type. The DatabaseTypeName method returns the following strings representing Snowflake data types: Go's database/sql package limits Go's data types to the following for binding and fetching: Fetching data isn't an issue, since the database data type is provided along with the data, so the Go Snowflake Driver can translate Snowflake data types to Go native data types. When the client binds data to send to the server, however, the driver cannot determine the date/timestamp data types to associate with binding parameters. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type with the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type (a sketch appears at the end of this description).
The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake doesn't support the name-based Location types, e.g., America/Los_Angeles. For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. This is required to shift workloads from the Snowflake database to the clients for scale. The download is performed asynchronously by a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current one. The application may change the number of result set chunk downloaders if required. Note that this doesn't help reduce the memory footprint by itself. Consider the Custom JSON Decoder. Experimental: Custom JSON Decoder for parsing Result Set The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on a Travis Ubuntu box show a five times smaller memory footprint while being four times slower. Be cautious when using this option. (Private Preview) JWT authentication ** Not recommended for production use until GA JWT tokens are now supported when compiling with Go 1.10 or higher. A binary compiled with a lower Go version will return an error at runtime when users try to use the JWT authentication feature. To enable this feature, one can construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use the Config structure, specifying: The <your_private_key> should be a base64 URL-encoded PKCS #8 RSA private key string. One way to encode a byte slice in base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, one can alter the public key with the SQL command: The <your_public_key> should be a base64 standard-encoded PKI public key string. One way to encode a byte slice in base64 standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, one can run the following commands in a shell script: GET and PUT operations are unsupported.
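Tying the pieces above together, a hedged sketch that opens a handle from a DSN and then uses the binding parameter flag described earlier to bind a time.Time as TIMESTAMP_NTZ; the account, user and table names are made up, and the exported flag name is an assumption based on the driver's DataType values:

	package main

	import (
		"database/sql"
		"log"
		"time"

		sf "github.com/snowflakedb/gosnowflake"
	)

	func main() {
		// DSN shape: username[:password]@account/database/schema?param=value
		db, err := sql.Open("snowflake",
			"jsmith:mypassword@myaccount/mydb/testschema?warehouse=mywh")
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// The flag value tells the driver that the following time.Time
		// argument should be bound as TIMESTAMP_NTZ rather than DATE or TIME.
		_, err = db.Exec(
			"INSERT INTO events(created_at) VALUES(?)",
			sf.DataTypeTimestampNtz, time.Now(),
		)
		if err != nil {
			log.Fatal(err)
		}
	}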
Package tea provides a framework for building rich terminal user interfaces based on the paradigms of The Elm Architecture. It's well-suited for simple and complex terminal applications, either inline, full-window, or a mix of both. It's been battle-tested in several large projects and is production-ready. A tutorial is available at https://github.com/charmbracelet/bubbletea/tree/master/tutorials Example programs can be found at https://github.com/charmbracelet/bubbletea/tree/master/examples
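A minimal sketch of the Elm-style Model/Update/View triad the framework is built around; the model itself is made up for illustration:

	package main

	import (
		"log"

		tea "github.com/charmbracelet/bubbletea"
	)

	// model holds all application state.
	type model struct{}

	func (m model) Init() tea.Cmd { return nil }

	// Update reacts to messages such as key presses, ticks and I/O results.
	func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
		if _, ok := msg.(tea.KeyMsg); ok {
			return m, tea.Quit // quit on any key press
		}
		return m, nil
	}

	// View renders the current state as a string.
	func (m model) View() string { return "press any key to quit\n" }

	func main() {
		if _, err := tea.NewProgram(model{}).Run(); err != nil {
			log.Fatal(err)
		}
	}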
Package go-ftw is a Framework for Testing Web Application Firewalls. It is derived from the work done with the pytest plugin `ftw`.
Package main is an application package for the Findy Agency Service. Please note that the whole Findy Agency is still under construction, and there are many missing features for full production use. There are also some refactoring candidates for restructuring the internal package layout. However, findy-agent has been tested through an extended period of pilot and development use, where it has proven to be stable. The current focus of the project is to offer an efficient and straightforward multi-tenant agency with Aries-compatible agent protocols. You can use the agency and related Go packages roughly for four purposes: 1. As a service agency for multiple Edge Agents at the same time by implementing corresponding Cloud Agents for them. Allocated CAs implement Aries agent-to-agent protocols and interoperability. 2. As a CLI tool for setting up Edge Agent wallets, creating schemas and credential definitions in the wallet and writing them to the ledger. You can use findy-agent's own CLI for most of the needed tasks, but for better usability we recommend using the findy CLI. 3. As an admin tool to monitor and maintain the agency. 4. As a framework to implement Service Agents like issuers and verifiers. There are Go helpers to onboard EAs to the agency, and the Client, which hides the connections to the agency. The agency's compilation includes command and flag sets to operate it with minimal dependencies on other repos or utilities. The offered command set is minimal, but it offers everything needed to set up and maintain an agency. There is a separate CLI UI in another repo, which includes an extended command set with auto-completion scripts. The whole codebase is still heavily under construction, but the main principles are ready and ok. Documentation is very minimal and partially missing. findy-agent can be used as a service, as a framework and as a CLI tool. It's structured into the following sub-packages:
Ctx represents the context of an HTTP request and response. It provides access to the request, response, headers, query parameters, body, and other necessary attributes for handling HTTP requests. Fields: Package quick provides a high-performance, minimalistic web framework for building web applications in Go. This file defines constants for HTTP methods and status codes, as well as a utility function to return human-readable descriptions for status codes. These definitions ensure consistent use of HTTP standards throughout the framework. 🚀 Quick is a flexible and extensible route manager for the Go language. It aims to be fast and performant, and 100% net/http compatible. Quick is a project under constant development and is open for collaboration; everyone is welcome to contribute. 😍 Package quick provides a high-performance, lightweight web framework for building modern HTTP applications in Go. It is designed for speed, efficiency, and simplicity. Features: - Middleware support for request/response processing. - Optimized routing with low overhead. - Built-in support for JSON, XML, and form parsing. - Efficient request handling using sync.Pool for memory optimization. - Customizable response handling with structured output. Quick is ideal for building RESTful APIs, microservices, and high-performance web applications. Package quick provides a high-performance HTTP framework for building web applications in Go. Quick is designed to be lightweight and efficient, offering a simplified API for handling HTTP requests, file uploads, middleware, and routing. Features: Qtest is an advanced HTTP testing function designed to facilitate route validation in the Quick framework. It allows you to test simulated HTTP requests using httptest, supporting: The Qtest function receives a QuickTestOptions structure containing the request parameters, executes the call and returns a QtestReturn object, which provides methods for analyzing and validating the result.
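As a rough, hedged sketch of what a minimal Quick application might look like (the New, Get, Listen and Ctx method names are assumptions based on the description above and may differ between versions):

	package main

	import "github.com/jeffotoni/quick"

	func main() {
		q := quick.New()

		// Register a route; the handler receives the *quick.Ctx described above.
		q.Get("/v1/ping", func(c *quick.Ctx) error {
			return c.Status(200).String("pong")
		})

		q.Listen("0.0.0.0:8080")
	}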
Package zaptest implements test helpers that facilitate using a zap.Logger in unit tests. This package is useful when running unit tests with components that use the github.com/uber-go/zap logging framework. In unit tests we usually want to suppress any logging output as long as the tests are not failing or they are started in a verbose mode. Package zaptest is also compatible with the github.com/onsi/ginkgo BDD testing framework. Have a look at the zaptest unit tests to see usage examples.
Package tstr provides a testing framework for Go programs that simplifies testing with test dependencies.
Package assert provides a set of comprehensive testing tools for use with the normal Go testing system. The following is a complete example using assert in a standard test function: If you assert many times, use the format below: Assertions allow you to easily write test code, and are global funcs in the `assert` package. All assertion functions take, as the first argument, the `*testing.T` object provided by the testing framework. This allows the assertion funcs to write the failings and other details to the correct place. Every assertion function also takes an optional string message as the final argument, allowing custom error messages to be appended to the message the assertion method outputs.
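A short sketch of both calling conventions just described, assuming the testify-compatible API:

	import (
		"testing"

		"github.com/stretchr/testify/assert"
	)

	func TestSomething(t *testing.T) {
		// Global form: the *testing.T object is always the first argument,
		// and the trailing string is the optional custom message.
		assert.Equal(t, 123, 123, "they should be equal")

		// When asserting many times, bind an instance to t once.
		a := assert.New(t)
		obj := map[string]int{"answer": 42}
		a.Equal(123, 123, "they should be equal")
		a.NotNil(obj, "object should not be nil")
	}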
Package cgi implements the common gateway interface (CGI) for Caddy 2, a modern, full-featured, easy-to-use web server. It has been forked from the fantastic work of Kurt Jung who wrote that plugin for Caddy 1. This plugin lets you generate dynamic content on your website by means of command line scripts. To collect information about the inbound HTTP request, your script examines certain environment variables such as PATH_INFO and QUERY_STRING. Then, to return a dynamically generated web page to the client, your script simply writes content to standard output. In the case of POST requests, your script reads additional inbound content from standard input. The advantage of CGI is that you do not need to fuss with server startup and persistence, long term memory management, sockets, and crash recovery. Your script is called when a request matches one of the patterns that you specify in your Caddyfile. As soon as your script completes its response, it terminates. This simplicity makes CGI a perfect complement to the straightforward operation and configuration of Caddy. The benefits of Caddy, including HTTPS by default, basic access authentication, and lots of middleware options, extend easily to your CGI scripts. CGI has some disadvantages. For one, Caddy needs to start a new process for each request. This can adversely impact performance and, if resources are shared between CGI applications, may require the use of some interprocess synchronization mechanism such as a file lock. Your server's responsiveness could in some circumstances be affected, such as when your web server is hit with very high demand, when your script's dependencies require a long startup, or when concurrently running scripts take a long time to respond. However, in many cases, such as using a pre-compiled CGI application like fossil or a Lua script, the impact will generally be insignificant. Another restriction of CGI is that scripts will be run with the same permissions as Caddy itself. This can sometimes be less than ideal, for example when your script needs to read or write files associated with a different owner. Serving dynamic content exposes your server to more potential threats than serving static pages. There are a number of considerations of which you should be aware when using CGI applications. CGI scripts should be located outside of Caddy's document root. Otherwise, an inadvertent misconfiguration could result in Caddy delivering the script as an ordinary static resource. At best, this could merely confuse the site visitor. At worst, it could expose sensitive internal information that should not leave the server. Mistrust the contents of PATH_INFO, QUERY_STRING and standard input. Most of the environment variables available to your CGI program are inherently safe because they originate with Caddy and cannot be modified by external users. This is not the case with PATH_INFO, QUERY_STRING and, in the case of POST actions, the contents of standard input. Be sure to validate and sanitize all inbound content. If you use a CGI library or framework to process your scripts, make sure you understand its limitations. An error in a CGI application is generally handled within the application itself and reported in the headers it returns. Your CGI application can be executed directly or indirectly. In the direct case, the application can be a compiled native executable or it can be a shell script that contains as its first line a shebang that identifies the interpreter to which the file's name should be passed.
Caddy must have permission to execute the application. On POSIX systems this will mean making sure the application's ownership and permission bits are set appropriately; on Windows, this may involve properly setting up the filename extension association. In the indirect case, the name of the CGI script is passed to an interpreter such as lua, perl or python. This module needs to be installed (obviously). Refer to the Caddy documentation on how to build Caddy with plugins/modules. The basic cgi directive lets you add a handler in the current caddy router location with a given script and optional arguments. The matcher is a default caddy matcher that is used to restrict the scope of this directive. The directive can be repeated any reasonable number of times. Here is the basic syntax: For example: When a request such as https://example.com/report or https://example.com/report/weekly arrives, the cgi middleware will detect the match and invoke the script named /usr/local/cgi-bin/report. The current working directory will be the same as Caddy itself. Here, it is assumed that the script is self-contained, for example a pre-compiled CGI application or a shell script. Here is an example of a standalone script, similar to one used in the cgi plugin's test suite: The environment variables PATH_INFO and QUERY_STRING are populated and passed to the script automatically. There are a number of other standard CGI variables included that are described below. If you need to pass any special environment variables or allow any environment variables that are part of Caddy's process to pass to your script, you will need to use the advanced directive syntax described below. Beware that in Caddy v2 it is (currently) not possible to separate the path left of the matcher from the full URL. Therefore if you require your CGI program to know the SCRIPT_NAME, make sure to pass that explicitly: In order to specify custom environment variables, pass along one or more environment variables known to Caddy, or specify more than one match pattern for a given rule, you will need to use the advanced directive syntax. That looks like this: For example, The script_name subdirective helps the cgi module to separate the path to the script from the (virtual) path afterwards (which shall be passed to the script). env can be used to define a list of key=value environment variable pairs that shall be passed to the script. pass_env can be used to define a list of environment variables of the Caddy process that shall be passed to the script. If your CGI application runs properly at the command line but fails to run from Caddy, it is possible that certain environment variables may be missing. For example, the ruby gem loader evidently requires the HOME environment variable to be set; you can do this with the subdirective pass_env HOME. Another class of problematic applications requires the COMPUTERNAME variable. The pass_all_env subdirective instructs Caddy to pass each environment variable it knows about to the CGI executable. This addresses a common frustration that is caused when an executable requires an environment variable and fails without a descriptive error message when the variable cannot be found. These applications often run fine from the command prompt but fail when invoked with CGI. The risk with this subdirective is that a lot of server information is shared with the CGI executable. Use this subdirective only with CGI applications that you trust not to leak this information.
buffer_limit is used when an http request has Transfer-Encoding: chunked. The Go CGI Handler refuses to handle these kinds of requests, see https://github.com/golang/go/issues/5613. In order to work around this, the chunked request is buffered by Caddy and sent to the CGI application as a whole with the correct CONTENT_LENGTH set. The buffer_limit setting marks a threshold between buffering in memory and using a temporary file. Every request body smaller than the buffer_limit is buffered in-memory. It accepts all formats supported by go-humanize. Default: 4MiB. (An example of this is git push if the objects to push are larger than the http.postBuffer) With the unbuffered_output subdirective it is possible to instruct the CGI handler to flush output from the CGI script as soon as possible. By default, the output is buffered into chunks before being written, in order to optimize network usage and allow the Content-Length to be determined. When unbuffered, bytes will be written as soon as possible. This will also force the response to be written in chunked encoding. If you run into unexpected results with the CGI plugin, you are able to examine the environment in which your CGI application runs. To enter inspection mode, add the subdirective inspect to your CGI configuration block. This is a development option that should not be used in production. When in inspection mode, the plugin will respond to matching requests with a page that displays variables of interest. In particular, it will show the replacement value of {match} and the environment variables to which your CGI application has access. For example, consider this example CGI block: When you request a matching URL, for example, the Caddy server will deliver a text page similar to the following. The CGI application (in this case, wapptclsh) will not be called. This information can be used to diagnose problems with how a CGI application is called. To return to operation mode, remove or comment out the inspect subdirective. In this example, the Caddyfile looks like this: Note that a request for /show gets mapped to a script named /usr/local/cgi-bin/report/gen. There is no need for any element of the script name to match any element of the match pattern. The contents of /usr/local/cgi-bin/report/gen are: The purpose of this script is to show how request information gets communicated to a CGI script. Note that POST data must be read from standard input. In this particular case, posted data gets stored in the variable POST_DATA. Your script may use a different method to read POST content. Secondly, the SCRIPT_EXEC variable is not a CGI standard. It is provided by this middleware and contains the entire command line, including all arguments, with which the CGI script was executed. When a browser requests the response looks like When a client makes a POST request, such as with the following command, the response looks the same except for the following lines: This small example demonstrates how to write a CGI program in Go; a sketch appears at the end of this description. The use of a bytes.Buffer makes it easy to report the content length in the CGI header. When this program is compiled and installed as /usr/local/bin/servertime, the following directive in your Caddy file will make it available: The module is written in a way that it expects the scripts you want it to execute to actually exist. A non-existing or non-executable file is considered a setup error and will yield an HTTP 500.
If you want to make sure only existing scripts are executed, use a more specific matcher, as explained in the Caddy docs. Example: When calling a URL like /cgi/foo/bar.pl, it will check whether the local file ./app/foo/bar.pl exists, and only then will it proceed with calling the CGI.
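Along the lines of the servertime example referenced above, a hedged sketch of such a CGI program in Go; buffering the body first makes the Content-Length trivial to report:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Build the body in a buffer so its length is known up front.
		var body bytes.Buffer
		fmt.Fprintf(&body, "server time: %s\n", time.Now().Format(time.RFC1123))

		// CGI headers go to standard output, followed by a blank line and the body.
		fmt.Printf("Content-Type: text/plain\n")
		fmt.Printf("Content-Length: %d\n\n", body.Len())
		body.WriteTo(os.Stdout)
	}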
Package micro is a pluggable framework for microservices
Package toml provides facilities for decoding and encoding TOML configuration files via reflection. There is also support for delaying decoding with the Primitive type, and querying the set of keys in a TOML document with the MetaData type. The specification implemented: https://github.com/toml-lang/toml The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify whether a file is a valid TOML document. It can also be used to print the type of each key in a TOML document. There are two important types of tests used for this package. The first is contained inside '*_test.go' files and uses the standard Go unit testing framework. These tests are primarily devoted to holistically testing the decoder and encoder. The second type of testing is used to verify the implementation's adherence to the TOML specification. These tests have been factored into their own project: https://github.com/BurntSushi/toml-test The reason the tests are in a separate project is so that they can be used by any implementation of TOML. Namely, it is language agnostic. Example StrictDecoding shows how to detect whether there are keys in the TOML document that weren't decoded into the value given. This is useful for returning an error to the user if they've included extraneous fields in their configuration. Example UnmarshalTOML shows how to implement a struct type that knows how to unmarshal itself. The struct must take full responsibility for mapping the values passed into the struct. The method may be used with interfaces in a struct in cases where the actual type is not known until the data is examined. Example Unmarshaler shows how to decode TOML strings into your own custom data type.
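A small sketch of basic decoding with this package; the config structure and TOML snippet are made up, and MetaData.Undecoded is the hook for the strict-decoding check described above:

	import (
		"fmt"
		"log"

		"github.com/BurntSushi/toml"
	)

	type config struct {
		Title string
		Ports []int
	}

	func example() {
		const blob = "title = \"example\"\nports = [8001, 8002]\n"

		var conf config
		md, err := toml.Decode(blob, &conf)
		if err != nil {
			log.Fatal(err)
		}
		// Undecoded reports keys present in the document but not mapped into conf.
		fmt.Println(conf.Title, conf.Ports, md.Undecoded())
	}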
Package sandwich is a middleware framework for Go that lets you write testable web servers. Sandwich allows writing robust middleware handlers that are easily tested: Sandwich provides a basic PAT-style router. Here's a simple complete program using sandwich: Sandwich automatically calls your middleware with the necessary arguments to run them based on the types they require. These types can be provided by previous middleware or directly during the initial setup. For example, you can use this to provide your database to all handlers: Set(...) and SetAs(...) are excellent alternatives to using global values, plus they keep your functions easy to test! In many cases you want to initialize a value based on the request, for example extracting the user login: This starts to show off the real power of sandwich. For each request, the following occurs: This allows you to write small, independently testable functions and let sandwich chain them together for you. Sandwich works hard to ensure that you don't get annoying run-time errors: it's structured such that it must always be possible to call your functions when the middleware is initialized rather than when the http handler is being executed, so you don't get surprised while your server is running. When a handler returns an error, sandwich aborts the middleware chain, looks for the most recently registered error handler and calls that. Error handlers may accept any types that have been provided so far in the middleware stack, as well as the error type. They must not have any return values. Sandwich also allows registering handlers to run during AND after the middleware (and error handling) stack has completed. This is especially useful for handlers such as logging or gzip wrappers. Once the before handler is run, the 'after' handlers are queued to run and will be run regardless of whether an error aborts any subsequent middleware handlers. Typically this is done with the first function creating and initializing some state to pass to the deferred handler. For example, the logging handlers are: and are added to the chain using: In this case, `Wrap` executes NewLogEntry during middleware processing, which returns a *LogEntry that is provided to downstream handlers, including the deferred Commit handler -- in this case a method expression (https://golang.org/ref/spec#Method_expressions) that takes the *LogEntry as its value receiver. Unfortunately, providing interfaces is a little tricky. Since interfaces in Go are only used for static typing, the encapsulation isn't passed to functions that accept interface{}, like Set(). This means that if you have an interface and a concrete implementation, such as: You cannot provide this to handlers directly via the Set() call. Instead, you have to either use SetAs() or a dedicated middleware function: It's a bit silly, but that's how it is.