Package websocket implements the WebSocket protocol defined in RFC 6455.

The Conn type represents a WebSocket connection. A server application calls the Upgrader.Upgrade method from an HTTP request handler to get a *Conn.
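A minimal sketch of such a handler (the upgrader settings and names here are illustrative, not prescriptive):

	var upgrader = websocket.Upgrader{
		ReadBufferSize:  1024,
		WriteBufferSize: 1024,
	}

	func handler(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			log.Println(err)
			return
		}
		// Use conn to send and receive messages.
		_ = conn
	}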
Call the connection's WriteMessage and ReadMessage methods to send and receive messages as a slice of bytes. This snippet shows how to echo messages using these methods:
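A sketch of that echo loop (assuming conn is the *Conn obtained above):

	for {
		messageType, p, err := conn.ReadMessage()
		if err != nil {
			log.Println(err)
			return
		}
		if err := conn.WriteMessage(messageType, p); err != nil {
			log.Println(err)
			return
		}
	}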
In the snippet above, p is a []byte and messageType is an int with value websocket.BinaryMessage or websocket.TextMessage.

An application can also send and receive messages using the io.WriteCloser and io.Reader interfaces. To send a message, call the connection NextWriter method to get an io.WriteCloser, write the message to the writer, and close the writer when done. To receive a message, call the connection NextReader method to get an io.Reader and read until io.EOF is returned. This snippet shows how to echo messages using the NextWriter and NextReader methods:
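A sketch of the same echo loop built on NextReader and NextWriter (again assuming conn is the open *Conn):

	for {
		messageType, r, err := conn.NextReader()
		if err != nil {
			return
		}
		w, err := conn.NextWriter(messageType)
		if err != nil {
			return
		}
		if _, err := io.Copy(w, r); err != nil {
			return
		}
		if err := w.Close(); err != nil {
			return
		}
	}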
The WebSocket protocol distinguishes between text and binary data messages. Text messages are interpreted as UTF-8 encoded text. The interpretation of binary messages is left to the application. This package uses the TextMessage and BinaryMessage integer constants to identify the two data message types. The ReadMessage and NextReader methods return the type of the received message. The messageType argument to the WriteMessage and NextWriter methods specifies the type of a sent message. It is the application's responsibility to ensure that text messages are valid UTF-8 encoded text.
The WebSocket protocol defines three types of control messages: close, ping and pong. Call the connection WriteControl, WriteMessage or NextWriter methods to send a control message to the peer.

Connections handle received close messages by calling the handler function set with the SetCloseHandler method and by returning a *CloseError from the NextReader, ReadMessage or the message Read method. The default close handler sends a close message to the peer.

Connections handle received ping messages by calling the handler function set with the SetPingHandler method. The default ping handler sends a pong message to the peer.

Connections handle received pong messages by calling the handler function set with the SetPongHandler method. The default pong handler does nothing. If an application sends ping messages, then the application should set a pong handler to receive the corresponding pong.

The control message handler functions are called from the NextReader, ReadMessage and message reader Read methods. The default close and ping handlers can block these methods for a short time when the handler writes to the connection.

The application must read the connection to process close, ping and pong messages sent from the peer. If the application is not otherwise interested in messages from the peer, then the application should start a goroutine to read and discard messages from the peer. A simple example is:
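A sketch of such a read-and-discard goroutine:

	func readLoop(c *websocket.Conn) {
		for {
			if _, _, err := c.NextReader(); err != nil {
				c.Close()
				break
			}
		}
	}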
Connections support one concurrent reader and one concurrent writer. Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently. The Close and WriteControl methods can be called concurrently with all other methods.
Web browsers allow JavaScript applications to open a WebSocket connection to any host. It's up to the server to enforce an origin policy using the Origin request header sent by the browser.

The Upgrader calls the function specified in the CheckOrigin field to check the origin. If the CheckOrigin function returns false, then the Upgrade method fails the WebSocket handshake with HTTP status 403. If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail the handshake if the Origin request header is present and the Origin host is not equal to the Host request header.

The deprecated package-level Upgrade function does not perform origin checking. The application is responsible for checking the Origin header before calling the Upgrade function.
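For illustration, a CheckOrigin function that admits only a single known origin might look like this (the allowed origin is, of course, application-specific):

	upgrader.CheckOrigin = func(r *http.Request) bool {
		return r.Header.Get("Origin") == "https://example.com"
	}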
Connections buffer network input and output to reduce the number of system calls when reading or writing messages. Write buffers are also used for constructing WebSocket frames. See RFC 6455, Section 5 for a discussion of message framing. A WebSocket frame header is written to the network each time a write buffer is flushed to the network. Decreasing the size of the write buffer can increase the amount of framing overhead on the connection.

The buffer sizes in bytes are specified by the ReadBufferSize and WriteBufferSize fields in the Dialer and Upgrader. The Dialer uses a default size of 4096 when a buffer size field is set to zero. The Upgrader reuses buffers created by the HTTP server when a buffer size field is set to zero. The HTTP server buffers have a size of 4096 at the time of this writing.

The buffer sizes do not limit the size of a message that can be read or written by a connection.

Buffers are held for the lifetime of the connection by default. If the Dialer or Upgrader WriteBufferPool field is set, then a connection holds the write buffer only when writing a message.

Applications should tune the buffer sizes to balance memory use and performance. Increasing the buffer size uses more memory, but can reduce the number of system calls to read or write the network. In the case of writing, increasing the buffer size can reduce the number of frame headers written to the network.

Some guidelines for setting buffer parameters are:

Limit the buffer sizes to the maximum expected message size. Buffers larger than the largest message do not provide any benefit.

Depending on the distribution of message sizes, setting the buffer size to a value less than the maximum expected message size can greatly reduce memory use with a small impact on performance. Here's an example: if 99% of the messages are smaller than 256 bytes and the maximum message size is 512 bytes, then a buffer size of 256 bytes will result in 1.01 more system calls than a buffer size of 512 bytes. The memory savings is 50%.

A write buffer pool is useful when the application has a modest number of writes over a large number of connections. When buffers are pooled, a larger buffer size has a reduced impact on total memory use and has the benefit of reducing system calls and frame overhead.
Per message compression extensions (RFC 7692) are experimentally supported by this package in a limited capacity. Setting the EnableCompression option to true in Dialer or Upgrader will attempt to negotiate per message deflate support. If compression was successfully negotiated with the connection's peer, any message received in compressed form will be automatically decompressed. All Read methods will return uncompressed bytes.
Per message compression of messages written to a connection can be enabled or disabled by calling the corresponding Conn method:
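A sketch of toggling write compression on an open connection:

	conn.EnableWriteCompression(true)  // compress subsequently written messages
	conn.EnableWriteCompression(false) // stop compressing written messages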
Currently this package does not support compression with "context takeover". This means that messages must be compressed and decompressed in isolation, without retaining sliding window or dictionary state across messages. For more details refer to RFC 7692.

Use of compression is experimental and may result in decreased performance.
Package antlr implements the Go version of the ANTLR 4 runtime.

ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build parse trees and also generates a listener interface (or visitor) that makes it easy to respond to the recognition of phrases of interest.

ANTLR supports the generation of code in a number of target languages, and the generated code is supported by a runtime library, written specifically to support the generated code in the target language. This library is the runtime for the Go target.

To generate code for the Go target, it is generally recommended to place the source grammar files in a package of their own, and use the `.sh` script method of generating code, using the go generate directive. In that same directory it is usual, though not required, to place the antlr tool that should be used to generate the code. That does mean that the antlr tool JAR file will be checked in to your source code control though, so you are free to use any other way of specifying the version of the ANTLR tool to use, such as aliasing in `.zshrc` or equivalent, or a profile in your IDE, or configuration in your CI system.

Make sure that the package statement in your grammar file(s) reflects the Go package they exist in. The generate.go file carries the go generate directive, and the generate.sh script it invokes runs the ANTLR tool with your chosen options, depending on whether you want visitors or listeners or any other ANTLR options (a sketch follows below). From the command line at the root of your package "myproject" you can then simply issue the command go generate ./...

Copyright (c) 2012-2022 The ANTLR Project. All rights reserved. Use of this file is governed by the BSD 3-clause license, which can be found in the LICENSE.txt file in the project root.
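For illustration, a minimal generate.go along the lines described above might look like this (the package name, script name, and layout are assumptions, not requirements):

	// Package parser holds the grammar files and the code generated from them.
	package parser

	// The generate.sh script in this directory is expected to invoke the ANTLR
	// tool JAR with -Dlanguage=Go plus whatever visitor/listener options you need.
	//go:generate ./generate.sh

Running go generate ./... from the module root then regenerates the lexer and parser sources next to the grammar.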
Package cgi implements the common gateway interface (CGI) for Caddy, a modern, full-featured, easy-to-use web server. This plugin lets you generate dynamic content on your website by means of command line scripts. To collect information about the inbound HTTP request, your script examines certain environment variables such as PATH_INFO and QUERY_STRING. Then, to return a dynamically generated web page to the client, your script simply writes content to standard output. In the case of POST requests, your script reads additional inbound content from standard input.

The advantage of CGI is that you do not need to fuss with server startup and persistence, long term memory management, sockets, and crash recovery. Your script is called when a request matches one of the patterns that you specify in your Caddyfile. As soon as your script completes its response, it terminates. This simplicity makes CGI a perfect complement to the straightforward operation and configuration of Caddy. The benefits of Caddy, including HTTPS by default, basic access authentication, and lots of middleware options, extend easily to your CGI scripts.

CGI has some disadvantages. For one, Caddy needs to start a new process for each request. This can adversely impact performance and, if resources are shared between CGI applications, may require the use of some interprocess synchronization mechanism such as a file lock. Your server’s responsiveness could in some circumstances be affected, such as when your web server is hit with very high demand, when your script’s dependencies require a long startup, or when concurrently running scripts take a long time to respond. However, in many cases, such as using a pre-compiled CGI application like fossil or a Lua script, the impact will generally be insignificant. Another restriction of CGI is that scripts will be run with the same permissions as Caddy itself. This can sometimes be less than ideal, for example when your script needs to read or write files associated with a different owner.

Serving dynamic content exposes your server to more potential threats than serving static pages. There are a number of considerations of which you should be aware when using CGI applications.

CGI SCRIPTS SHOULD BE LOCATED OUTSIDE OF CADDY’S DOCUMENT ROOT. Otherwise, an inadvertent misconfiguration could result in Caddy delivering the script as an ordinary static resource. At best, this could merely confuse the site visitor. At worst, it could expose sensitive internal information that should not leave the server.

MISTRUST THE CONTENTS OF PATH_INFO, QUERY_STRING AND STANDARD INPUT. Most of the environment variables available to your CGI program are inherently safe because they originate with Caddy and cannot be modified by external users. This is not the case with PATH_INFO, QUERY_STRING and, in the case of POST actions, the contents of standard input. Be sure to validate and sanitize all inbound content. If you use a CGI library or framework to process your scripts, make sure you understand its limitations.

An error in a CGI application is generally handled within the application itself and reported in the headers it returns. Additionally, if the Caddy errors directive is enabled, any content the application writes to its standard error stream will be written to the error log. This can be useful to diagnose problems with the execution of the CGI application.

Your CGI application can be executed directly or indirectly.
In the direct case, the application can be a compiled native executable or it can be a shell script that contains as its first line a shebang that identifies the interpreter to which the file’s name should be passed. Caddy must have permission to execute the application. On Posix systems this will mean making sure the application’s ownership and permission bits are set appropriately; on Windows, this may involve properly setting up the filename extension association. In the indirect case, the name of the CGI script is passed to an interpreter such as lua, perl or python.

The basic cgi directive lets you associate a single pattern with a particular script, and can be repeated any reasonable number of times (a sketch of the syntax appears at the end of this section). When a request such as https://example.com/report or https://example.com/report/weekly arrives, the cgi middleware will detect the match and invoke the script named /usr/local/cgi-bin/report. The current working directory will be the same as Caddy’s. Here, it is assumed that the script is self-contained, for example a pre-compiled CGI application or a shell script similar to one used in the cgi plugin’s test suite.

The environment variables PATH_INFO and QUERY_STRING are populated and passed to the script automatically. There are a number of other standard CGI variables included that are described below. If you need to pass any special environment variables or allow any environment variables that are part of Caddy’s process to pass to your script, you will need to use the advanced directive syntax described below.

The values used for the script name and its arguments are subject to placeholder replacement. In addition to the standard Caddy placeholders such as {method} and {host}, the following placeholder substitutions are made:

- {.} is replaced with Caddy’s current working directory
- {match} is replaced with the portion of the request that satisfies the match directive
- {root} is replaced with Caddy’s specified root directory

You can include glob wildcards in your matches. An asterisk represents a sequence of zero or more non-slash characters and a question mark represents a single non-slash character; these wildcards can be used multiple times in a match expression. See the documentation for path.Match in the Go standard library for more details about glob matching. With a glob pattern such as /report/*.lua, the cgi middleware will match requests such as https://example.com/report/weekly.lua and https://example.com/report/report.lua/weekly but not https://example.com/report.lua. The asterisk expands to any character sequence within a directory. Note that the portion of the request that follows the match is not included on the command line when the script is executed; that information is conveyed to the script by means of environment variables. In this example, the Lua interpreter is invoked directly from Caddy, so the Lua script does not need the shebang that would be needed in a standalone script. This method facilitates the use of CGI on the Windows platform.

In order to specify custom environment variables, pass along one or more environment variables known to Caddy, or specify more than one match pattern for a given rule, you will need to use the advanced directive syntax, also sketched below. With the advanced syntax, the exec subdirective must appear exactly once and the match subdirective must appear at least once.
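A sketch of both directive forms, reconstructed from the descriptions above (treat the exact spelling as illustrative; consult the plugin’s README for the definitive syntax):

	# basic form: cgi match exec [args...]
	cgi /report /usr/local/cgi-bin/report

	# glob matching, handing the matched portion to an interpreter
	cgi /report/*.lua /usr/bin/lua {match}

	# advanced form
	cgi {
		match /report/*.lua
		exec /usr/bin/lua {match}
		env DB=/usr/local/share/app/app.db
		pass_env HOME
	}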
The env, pass_env, empty_env, and except subdirectives can appear any reasonable number of times; pass_all_env and dir may appear once.

The dir subdirective specifies the CGI executable’s working directory. If it is not specified, Caddy’s current working directory is used.

The except subdirective uses the same pattern matching logic that is used with the match subdirective, except that the request must match a rule fully; no request path prefix matching is performed. Any request that matches a match pattern is then checked with the patterns in except, if any. If any matches are made with the except pattern, the request is rejected and passed along to subsequent handlers. This is a convenient way to have static file resources served properly rather than being confused as CGI applications.

The empty_env subdirective is used to pass one or more empty environment variables. Some CGI scripts may expect the server to pass certain empty variables rather than leaving them unset. This subdirective allows you to deal with those situations.

The values associated with environment variable keys are all subject to placeholder substitution, just as with the script name and arguments.

If your CGI application runs properly at the command line but fails to run from Caddy, it is possible that certain environment variables may be missing. For example, the Ruby gem loader evidently requires the HOME environment variable to be set; you can do this with the subdirective pass_env HOME. Another class of problematic applications requires the COMPUTERNAME variable.

The pass_all_env subdirective instructs Caddy to pass each environment variable it knows about to the CGI executable. This addresses a common frustration that is caused when an executable requires an environment variable and fails without a descriptive error message when the variable cannot be found. These applications often run fine from the command prompt but fail when invoked with CGI. The risk with this subdirective is that a lot of server information is shared with the CGI executable. Use this subdirective only with CGI applications that you trust not to leak this information.

If you protect your CGI application with the Caddy JWT middleware, your program will have access to the token’s payload claims by means of environment variables. All values are conveyed as strings, so some conversion may be necessary in your program. No placeholder substitutions are made on these values.

If you run into unexpected results with the CGI plugin, you are able to examine the environment in which your CGI application runs. To enter inspection mode, add the subdirective inspect to your CGI configuration block. This is a development option that should not be used in production. When in inspection mode, the plugin will respond to matching requests with a page that displays variables of interest. In particular, it will show the replacement value of {match} and the environment variables to which your CGI application has access. The CGI application itself (wapptclsh in the original example) will not be called. This information can be used to diagnose problems with how a CGI application is called. To return to operation mode, remove or comment out the inspect subdirective.
In this example, the Caddyfile maps a request for /show to a script named /usr/local/cgi-bin/report/gen. There is no need for any element of the script name to match any element of the match pattern. The purpose of the script is to show how request information gets communicated to a CGI script. Note that POST data must be read from standard input; in this particular case, posted data gets stored in the variable POST_DATA. Your script may use a different method to read POST content. Secondly, the SCRIPT_EXEC variable is not a CGI standard. It is provided by this middleware and contains the entire command line, including all arguments, with which the CGI script was executed. When a browser requests the resource, the response echoes these variables; when a client makes a POST request, the response looks the same except for the lines that reflect the posted content.

The fossil distributed software management tool is a native executable that supports interaction as a CGI application. In this example, /usr/bin/fossil is the executable and /home/quixote/projects.fossil is the fossil repository. To configure Caddy to serve it, use a cgi directive along these lines in your Caddyfile. In your /usr/local/cgi-bin directory, make a file named projects containing the single line "repository: /home/quixote/projects.fossil"; the fossil documentation calls this a command file. When fossil is invoked after a request to /projects, it examines the relevant environment variables and responds as a CGI application. If you protect /projects with basic HTTP authentication, you may wish to enable the ALLOW REMOTE_USER AUTHENTICATION option when setting up fossil. This lets fossil dispense with its own authentication, assuming it has an account for the user.

The agedu utility can be used to identify unused files that are taking up space on your storage media. Like fossil, it can be used in different modes including CGI. First, use it from the command line to generate an index of a directory. Then, in your Caddyfile, include a directive that references the generated index. You will want to protect the /agedu resource with some sort of access control, for example HTTP Basic Authentication.

This small example demonstrates how to write a CGI program in Go. The use of a bytes.Buffer makes it easy to report the content length in the CGI header. When this program is compiled and installed as /usr/local/bin/servertime, a directive in your Caddyfile will make it available; a sketch of such a program appears at the end of this section.

The cgit application provides an attractive and useful web interface to git repositories. Here is how to run it with Caddy. After compiling cgit, you can place the executable somewhere out of Caddy’s document root. In this example, it is located in /usr/local/cgi-bin. A sample configuration file is included in the project’s cgitrc.5.txt file. You can use it as a starting point for your configuration. The default location for this file is /etc/cgitrc, but in this example the location /home/quixote/caddy/cgitrc is used. Note that changing the location of this file from its default will necessitate the inclusion of the environment variable CGIT_CONFIG in the Caddyfile cgi directive. When you edit the repository stanzas in this file, be sure each repo.path item refers to the .git directory within a working checkout. Also, you will likely want to change cgit’s cache directory from its default in /var/cache (generally accessible only to root) to a location writeable by Caddy.
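A sketch of such a Go CGI program (the exact output is illustrative; the idea is that building the body in a bytes.Buffer first makes the Content-Length known before the header is written):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Build the body first so its length is known for the header.
		var buf bytes.Buffer
		fmt.Fprintf(&buf, "The server time is %s\n", time.Now().Format(time.RFC1123))

		// Emit the CGI header followed by a blank line, then the body.
		fmt.Printf("Content-Type: text/plain\n")
		fmt.Printf("Content-Length: %d\n\n", buf.Len())
		buf.WriteTo(os.Stdout)
	}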
In this example, cgitrc sets the cache directory to a location writeable by Caddy; you may need to create the cgit subdirectory it names. There are some static cgit resources (namely, cgit.css, favicon.ico, and cgit.png) that will be accessed from Caddy’s document tree. For this example, these files are placed in a directory named cgit-resource and referenced from the cgitrc file. Additionally, you will likely need to tweak the various file viewer filters such as source-filter and about-filter based on your system. A Caddyfile directive will then allow you to access the cgit application at /cgit.

Feeling reckless? You can run PHP in CGI mode. In general, FastCGI is the preferred method to run PHP if your application has many pages or a fair amount of database activity. But for small PHP programs that are seldom used, CGI can work fine. You’ll need the php-cgi interpreter for your platform. This may involve downloading the executable or downloading and then compiling the source code. For this example, assume the interpreter is installed as /usr/local/bin/php-cgi. Additionally, because of the way PHP operates in CGI mode, you will need an intermediate wrapper script; one that works in Posix environments can be reused for multiple cgi directives. In this example, it is installed as /usr/local/cgi-bin/phpwrap. The argument following -c is your initialization file for PHP; in this example, it is named /home/quixote/.config/php/php-cgi.ini.

Two PHP files are used for this example: /usr/local/cgi-bin/sample/min.php and /usr/local/cgi-bin/sample/action.php. A directive in your Caddyfile will make the application available at sample/min.php.

This example demonstrates printing a CGI rule.
Package pq is a pure Go Postgres driver for the database/sql package.

In most cases clients will use the database/sql package instead of using this package directly; a connection sketch appears at the end of this section. You can also connect to a database using a URL.

Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, special connection parameters including dbname, user, password, host, port, sslmode and connect_timeout are supported. Valid values for sslmode are disable, require, verify-ca and verify-full. See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters.

Use single quotes for values that contain whitespace; a backslash escapes the next character in a value. Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value.

In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html.

Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. The pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html is supported, but on Windows PGPASSFILE must be specified explicitly.

database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers ($1, $2, and so on). The same marker can be reused for the same parameter. pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call, as in the sketch below; for more details on RETURNING, see the Postgres documentation. For additional instructions on querying see the documentation for the database/sql package.

Parameters pass through driver.DefaultParameterConverter before they are handled by this package. When the binary_parameters connection option is enabled, []byte values are sent directly to the backend as data in binary format.

This package returns the following types for values from the PostgreSQL backend:

- integer types smallint, integer, and bigint are returned as int64
- floating-point types real and double precision are returned as float64
- character types char, varchar, and text are returned as string
- temporal types date, time, timetz, timestamp, and timestamptz are returned as time.Time
- the boolean type is returned as bool
- the bytea type is returned as []byte

All other types are returned directly from the backend as []byte values in text format.

pq may return errors of type *pq.Error, which can be interrogated for error details; see the pq.Error type for details.
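Sketches of the connection and RETURNING patterns described above (credentials, DSNs, and table names are placeholders):

	import (
		"database/sql"

		_ "github.com/lib/pq"
	)

	func example() error {
		// Keyword/value connection string; a URL form such as
		// "postgres://user:password@localhost/mydb?sslmode=verify-full"
		// works as well.
		db, err := sql.Open("postgres", "user=pqgotest dbname=pqgotest sslmode=verify-full")
		if err != nil {
			return err
		}
		defer db.Close()

		// LastInsertId is not supported; use RETURNING instead.
		var id int64
		return db.QueryRow(
			"INSERT INTO users(name) VALUES($1) RETURNING id", "alice",
		).Scan(&id)
	}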
You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. A usage sketch appears at the end of this section.

PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism.

To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener.

A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in.

The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at https://godoc.org/github.com/lib/pq/example/listen.

If you need support for Kerberos authentication, import the separate kerberos module that pq provides from your main package. It lives in its own module so that users who don't need Kerberos don't have to download unnecessary dependencies. When imported, additional connection string parameters are supported.
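A sketch of the bulk-import pattern described above (table and column names are placeholders):

	txn, err := db.Begin()
	if err != nil {
		return err
	}
	stmt, err := txn.Prepare(pq.CopyIn("users", "name", "age"))
	if err != nil {
		return err
	}
	for _, u := range users {
		if _, err := stmt.Exec(u.Name, u.Age); err != nil {
			return err
		}
	}
	// Flush buffered data, then close the statement and commit.
	if _, err := stmt.Exec(); err != nil {
		return err
	}
	if err := stmt.Close(); err != nil {
		return err
	}
	return txn.Commit()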
Package s2prot is a decoder/parser of Blizzard's StarCraft II replay file format (*.SC2Replay).

s2prot processes the "raw" data that can be decoded from replay files using an MPQ parser such as https://github.com/icza/mpq. The package is safe for concurrent use.

The package s2prot/rep provides enumerations and types to model data structures of StarCraft II replays (*.SC2Replay) decoded by the s2prot package. These provide a higher level overview and are much easier to use. Example code for this high-level usage can be found in https://github.com/icza/s2prot/blob/master/_example/rep.go: open and parse a replay, and all the info from the replay is available for printing. Tip: the Struct type defines a String() method which returns a nicely formatted JSON representation; this is what most types are "made of".

Example code for low-level usage can be found in https://github.com/icza/s2prot/blob/master/_example/s2prot.go. To use s2prot directly, we need an MPQ parser to get content from a replay. The replay header (which is the MPQ User Data) can be decoded by s2prot.DecodeHeader(). The base build is part of the replay header and can be used to obtain the proper instance of Protocol, which can then be used to decode all other info in the replay, for example the Details, from which the map name can be printed. Tip: we can of course print the whole decoded header, which is a Struct; this yields a JSON text similar to the one produced by the high-level usage above.

Related resources:

- s2protocol: Blizzard's reference implementation in python: https://github.com/Blizzard/s2protocol
- s2protocol implementation of the Scelight project: https://github.com/icza/scelight/tree/master/src-app/hu/scelight/sc2/rep/s2prot
- Replay model of the Scelight project: https://github.com/icza/scelight/tree/master/src-app/hu/scelight/sc2/rep/model
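A minimal sketch of the high-level usage described above (method names follow the s2prot/rep package; the replay file name is a placeholder):

	package main

	import (
		"fmt"

		"github.com/icza/s2prot/rep"
	)

	func main() {
		r, err := rep.NewFromFile("sample.SC2Replay")
		if err != nil {
			fmt.Printf("Failed to open replay: %v\n", err)
			return
		}
		defer r.Close()

		fmt.Println("Version:", r.Header.VersionString())
		fmt.Println("Map:", r.Details.Title())
	}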
Package gocql implements a fast and robust Cassandra driver for the Go programming language.

Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration; the port can be specified as part of an address. It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address and not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than one IP address then the driver may connect multiple times to the same host, and will not mark the node as being down or up from events. You can then customize more options (see ClusterConfig).

The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead.

When ready, create a session from the configuration. Don't forget to Close the session once you are done with it.

The CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided for username/password authentication.

It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config.

To route queries to the local DC first, use DCAwareRoundRobinPolicy, naming the datacenter you want to primarily connect to as configured in the database (for example, dc1). The driver can also route queries to nodes that hold data replicas based on partition key (preferring the local DC) via TokenAwareHostPolicy. Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters, i.e. bound as placeholders rather than written literally into the statement text.

The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts into two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy.

Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
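A sketch of the configuration steps described above (addresses, keyspace, and credentials are placeholders):

	cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
	cluster.Keyspace = "example"
	cluster.Consistency = gocql.Quorum
	cluster.Authenticator = gocql.PasswordAuthenticator{
		Username: "user",
		Password: "password",
	}
	// Prefer the local datacenter, with token-aware routing on top.
	cluster.PoolConfig.HostSelectionPolicy =
		gocql.TokenAwareHostPolicy(gocql.DCAwareRoundRobinPolicy("dc1"))

	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()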
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) while the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of a string. See Example_nulls for a full example. The driver reuses the backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices to other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop (a sketch follows below): The driver supports paging of results with automatic prefetch; see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages of a result. You might want to sign/encrypt the paging state when exposing it externally, since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement; you might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a PageSize will negatively affect performance; a value below 100 is probably too low.
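Returning to the scanner-loop pattern above, a sketch (table and column names are made up):

    // readAll scans every row of a hypothetical table, declaring the
    // destination slice inside the loop so each row gets fresh backing
    // memory that is safe to retain.
    func readAll(session *gocql.Session) ([][]byte, error) {
        var saved [][]byte
        scanner := session.Query(`SELECT data FROM mytable`).Iter().Scanner()
        for scanner.Next() {
            var data []byte // new slice every iteration
            if err := scanner.Scan(&data); err != nil {
                return nil, err
            }
            saved = append(saved, data)
        }
        // Err closes the iterator and reports any error seen while iterating.
        return saved, scanner.Err()
    }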
While Cassandra currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of the individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements that update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how these are executed: a BEGIN BATCH statement passed to Query.Exec is prepared as a whole in a single statement, while Session.ExecuteBatch prepares the individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking a query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying nor speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay, even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still running. The two parallel executions of the query race to return a result; the first result received will be returned.
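As a sketch of the batch API described above (the events table and its columns are hypothetical):

    // insertPair writes two rows atomically using a logged batch. Both rows
    // share the same partition key, keeping the batch single-partition.
    func insertPair(session *gocql.Session, pk string, a, b int) error {
        batch := session.NewBatch(gocql.LoggedBatch)
        batch.Query(`INSERT INTO events (pk, seq, val) VALUES (?, ?, ?)`, pk, 1, a)
        batch.Query(`INSERT INTO events (pk, seq, val) VALUES (?, ?, ?)`, pk, 2, b)
        return session.ExecuteBatch(batch)
    }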
UDTs can be (un)marshaled from/to a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that can be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero values when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
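For instance, a hypothetical CQL type coordinates with fields x and y could map to a struct via cql tags:

    // Coordinates maps the fields of a hypothetical UDT
    // CREATE TYPE coordinates (x int, y int).
    type Coordinates struct {
        X int `cql:"x"`
        Y int `cql:"y"`
    }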
Package blackfriday is a markdown processor. It translates plain text with simple formatting rules into an AST, which can then be further processed to HTML (provided by Blackfriday itself) or other formats (provided by the community). The simplest way to invoke Blackfriday is to call the Run function. It will take a text input and produce a text output in HTML (or other format). A slightly more sophisticated way to use Blackfriday is to create a Markdown processor and to call Parse, which returns a syntax tree for the input document. You can leverage Blackfriday's parsing for content extraction from markdown documents. You can assign a custom renderer and set various options to the Markdown processor. If you're interested in calling Blackfriday from the command line, see https://github.com/russross/blackfriday-tool. Blackfriday includes an algorithm for creating sanitized anchor names corresponding to a given input text. This algorithm is used to create anchors for headings when the AutoHeadingIDs extension is enabled. The algorithm is specified below, so that other packages can create compatible anchor names and links to those anchors. The algorithm iterates over the input text, interpreted as UTF-8, one Unicode code point (rune) at a time. All runes that are letters (category L) or numbers (category N) are considered valid characters. They are mapped to lower case, and included in the output. All other runes are considered invalid characters. Invalid characters that precede the first valid character, as well as invalid characters that follow the last valid character, are dropped completely. All other sequences of invalid characters between two valid characters are replaced with a single dash character '-'. SanitizedAnchorName exposes this functionality, and can be used to create compatible links to the anchor names generated by blackfriday. This algorithm is also implemented in a small standalone package at github.com/shurcooL/sanitized_anchor_name. It can be useful for clients that want a small package and don't need the full functionality of blackfriday.
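A small sketch of both entry points (assuming the v2 module path):

    package main

    import (
        "fmt"

        "github.com/russross/blackfriday/v2"
    )

    func main() {
        // Run converts markdown to HTML using the default settings.
        html := blackfriday.Run([]byte("# Section\n\nSome *text*.\n"))
        fmt.Printf("%s", html)

        // SanitizedAnchorName implements the anchor algorithm described above.
        fmt.Println(blackfriday.SanitizedAnchorName("This is a header")) // "this-is-a-header"
    }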
Package ldk is an LDK (loop development kit) for plugins for the Sidekick project. The LDK is built with go-plugin (https://github.com/hashicorp/go-plugin), a HashiCorp plugin system used in several of their projects. Plugins developed with this library are executed by Sidekick as separate processes. This ensures that crashes or instability in a plugin will not destabilize the Sidekick process. Communication between Sidekick and the plugin is first initialized over stdio and then performed using gRPC (https://grpc.io/). On macOS and Linux the gRPC communication is sent over a unix domain socket, and on Windows over a local TCP socket. In order for Sidekick to use a plugin, it must be compiled. Sidekick does not compile or interpret source code at runtime. A consequence of this is that plugins need to be compiled for each operating system they want to support. Controllers receive events and use them to generate relevant whispers. Controllers choose which events they want to use and which they want to ignore. Writing a Controller plugin boils down to writing an implementation for the Controller interface. Start() - The Controller should wait to start operating until this is called. The provided `ControllerHost` should be stored in memory for continued use. Stop() - The Controller should stop operating when this is called. OnEvent() - The Controller can use this to handle events that are broadcast by Sensors. Controllers do not need to emit whispers in a 1:1 relationship with events. Controllers may not use events at all. Controllers may only use some events. Controllers may keep a history of events and only emit whispers when several conditions are met. The Controller lifecycle:

1. Sidekick executes the plugin process.
2. Sidekick calls `Start`, sending the host connection information to the plugin. This connection information is used to create the `ControllerHost`. The `ControllerHost` interface allows the plugin to emit whispers.
3. On the Controller wanting to emit a whisper, the Controller calls the `EmitWhisper` method on the host interface.
4. On a Sensor event, Sidekick calls `OnEvent`, passing the event from the Sensor to the Controller. These events can be ignored or used at the Controller's choice.
5. On the user disabling the Controller, Sidekick calls `Stop`, then sends `sigterm` to the process.
6. On Sidekick shutdown, Sidekick calls `Stop`, then sends `sigterm` to the process.

We recommend using this repo as a starting point when creating a new controller: https://github.com/open-olive/sidekick-controller-examplego

A Sensor is a type of plugin that generates events. Events can be as simple as a chunk of text but allow for complicated information. Sensors do not choose which controllers get their events. They simply emit the events. The decision about which events to use is left to the Controller. Writing a Sensor plugin boils down to writing an implementation for the Sensor interface. Start() - The Sensor should wait to start operating until this is called. The provided `SensorHost` should be stored in memory for continued use. Stop() - The Sensor should stop operating when this is called. OnEvent() - The Sensor can use this to handle events from the Sidekick UI. Many aptitudes will not care about UI events, and in that case the function should just return `nil`. The Sensor lifecycle:

1. Sidekick executes the plugin process.
2. Sidekick calls `Start`, sending the host connection information to the plugin. This connection information is used to create the `SensorHost`. The `SensorHost` interface allows the plugin to emit events.
3. On the Sensor wanting to emit an event, the Sensor calls the `EmitEvent` method on the host interface.
4. On a Sidekick UI event, Sidekick calls `OnEvent`, passing the event to the Sensor. These events can be ignored or used at the Sensor's choice.
5. On the user disabling the Sensor, Sidekick calls `Stop`, then sends `sigterm` to the process.
6. On Sidekick shutdown, Sidekick calls `Stop`, then sends `sigterm` to the process.
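The shape of a Controller implementation, sketched below; the method set follows the description above, but the exact ldk type names and signatures (ControllerHost, the event type, EmitWhisper) should be taken from the LDK source rather than from this sketch:

    // ExampleController is illustrative only; the signatures are assumptions.
    type ExampleController struct {
        host ldk.ControllerHost // stored on Start for later whisper emission
    }

    func (c *ExampleController) Start(host ldk.ControllerHost) error {
        c.host = host // do not operate before Start is called
        return nil
    }

    func (c *ExampleController) Stop() error {
        return nil // cease all activity; sigterm follows
    }

    func (c *ExampleController) OnEvent(event ldk.Event) error {
        // Ignore events, use some, or keep a history and only whisper when
        // several conditions are met. A real controller would call
        // c.host.EmitWhisper(...) here.
        return nil
    }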
Package metrics is a telemetry client designed for Uber's software networking team. It prioritizes performance on the hot path and integration with both push- and pull-based collection systems. Like Prometheus and Tally, it supports metrics tagged with arbitrary key-value pairs. Like Prometheus, but unlike Tally, metric names should be relatively long and descriptive - generally speaking, metrics from the same process shouldn't share names. (See the documentation for the Root struct below for a longer explanation of the uniqueness rules.) For example, prefer "grpc_successes_by_procedure" over "successes", since "successes" is common and vague. Where relevant, metric names should indicate their unit of measurement (e.g., "grpc_success_latency_ms"). Counters represent monotonically increasing values, like a car's odometer. Gauges represent point-in-time readings, like a car's speedometer. Both counters and gauges expose not only write operations (set, add, increment, etc.), but also atomic reads. This makes them easy to integrate directly into your business logic: you can use them anywhere you'd otherwise use a 64-bit atomic integer. This package doesn't support analogs of Tally's timer or Prometheus's summary, because they can't be accurately aggregated at query time. Instead, it approximates distributions of values with histograms. These require more up-front work to set up, but are typically more accurate and flexible when queried. See https://prometheus.io/docs/practices/histograms/ for a more detailed discussion of the trade-offs involved. Plain counters, gauges, and histograms have a fixed set of tags. However, it's common to encounter situations where a subset of a metric's tags vary constantly. For example, you might want to track the latency of your database queries by table: you know the database cluster, application name, and hostname at process startup, but you need to specify the table name with each query. To model these situations, this package uses vectors. Each vector is a local cache of metrics, so accessing them is quite fast. Within a vector, all metrics share a common set of constant tags and a list of variable tags. In our database query example, the constant tags are cluster, application, and hostname, and the only variable tag is table name. Usage examples are included in the documentation for each vector type. This package integrates with StatsD- and M3-based collection systems by periodically pushing differential updates. (Users can integrate with other push-based systems by implementing the push.Target interface.) It integrates with pull-based collectors by exposing an HTTP handler that supports Prometheus's text and protocol buffer exposition formats. Examples of both push and pull integration are included in the documentation for the root struct's Push and ServeHTTP methods. If you're unfamiliar with Tally and Prometheus, you may want to consult their documentation:
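To make the "usable as a 64-bit atomic integer" point concrete, here is a conceptual sketch of the counter semantics (not this package's API):

    import "sync/atomic"

    // counter mimics the semantics described above: monotonically increasing,
    // with atomic writes and atomic reads.
    type counter struct{ v int64 }

    func (c *counter) Inc() int64        { return atomic.AddInt64(&c.v, 1) }
    func (c *counter) Add(n int64) int64 { return atomic.AddInt64(&c.v, n) }
    func (c *counter) Load() int64       { return atomic.LoadInt64(&c.v) }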
Package prose is a repository of packages related to text processing, including tokenization, part-of-speech tagging, and named-entity extraction.
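A short sketch of typical usage (assuming the jdkato/prose v2 API with NewDocument, Tokens and Entities):

    package main

    import (
        "fmt"
        "log"

        "github.com/jdkato/prose/v2"
    )

    func main() {
        doc, err := prose.NewDocument("Go is a language designed at Google.")
        if err != nil {
            log.Fatal(err)
        }
        for _, tok := range doc.Tokens() {
            fmt.Println(tok.Text, tok.Tag) // token text and part-of-speech tag
        }
        for _, ent := range doc.Entities() {
            fmt.Println(ent.Text, ent.Label) // named entity and its label
        }
    }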
Package testscript provides support for defining filesystem-based tests by creating scripts in a directory. To invoke the tests, call testscript.Run. For example: A testscript directory holds test scripts *.txt run during 'go test'. Each script defines a subtest; the exact set of allowable commands in a script is defined by the parameters passed to the Run function. To run a specific script foo.txt, pass -run 'TestName/^foo$' to 'go test', where TestName is the name of the test that Run is called from. To define an executable command (or several) that can be run as part of the script, call RunMain with the functions that implement the command's functionality. The command functions will be called in a separate process, so they are free to mutate global variables without polluting the top-level test binary. In general, script files should have short names: a few words, not whole sentences. The first word should be the general category of behavior being tested, often the name of a subcommand to be tested or a concept (vendor, pattern). Each script is a text archive (go doc github.com/rogpeppe/testscript/txtar). The script begins with an actual command script to run, followed by the content of zero or more supporting files to create in the script's temporary file system before it starts executing. As an example: Each script runs in a fresh temporary work directory tree, available to scripts as $WORK. Scripts also have access to these other environment variables: The environment variable $exe (lowercase) is an empty string on most systems, ".exe" on Windows. The script's supporting files are unpacked relative to $WORK and then the script begins execution in that directory as well. Thus the example above runs in $WORK with $WORK/hello.txt containing the listed contents. The lines at the top of the script are a sequence of commands to be executed by a small script engine in the testscript package (not the system shell). The script stops and the overall test fails if any particular command fails. Each line is parsed into a sequence of space-separated command words, with environment variable expansion and # marking an end-of-line comment. Adding single quotes around text keeps spaces in that text from being treated as word separators and also disables environment variable expansion. Inside a single-quoted block of text, a repeated single quote indicates a literal single quote, as in: A line beginning with # is a comment and conventionally explains what is being done or tested at the start of a new phase in the script. A special form of environment variable syntax can be used to quote regexp metacharacters inside environment variables. The "@R" suffix is special, and indicates that the variable should be quoted. The command prefix ! indicates that the command on the rest of the line (typically go or a matching predicate) must fail, not succeed. Only certain commands support this prefix. They are indicated below by [!] in the synopsis. The command prefix [cond] indicates that the command on the rest of the line should only run when the condition is satisfied. The predefined conditions are: A condition can be negated: [!short] means to run the rest of the line when testing.Short() is false. Additional conditions can be added by passing a function to Params.Condition. The predefined commands include:

- chmod mode file
- [!] exec program [args...] [&]
  Run the given executable program with the arguments. It must (or must not) succeed. Note that 'exec' does not terminate the script (unlike in Unix shells).
If the last token is '&', the program executes in the background. The standard output and standard error of the previous command are cleared, but the output of the background process is buffered — and checking of its exit status is delayed — until the next call to 'wait', 'skip', or 'stop' or the end of the test. At the end of the test, any remaining background processes are terminated using os.Interrupt (if supported) or os.Kill. Standard input can be provided using the stdin command; this will be cleared after exec has been called. When TestScript runs a script and the script fails, by default TestScript shows the execution of the most recent phase of the script (since the last # comment) and only shows the # comments for earlier phases. For example, here is a multi-phase script with a bug in it (TODO: make this example less go-command specific): The bug is that the final phase installs p11 instead of p1. The test failure looks like: Note that the commands in earlier phases have been hidden, so that the relevant commands are more easily found, and the elapsed time for a completed phase is shown next to the phase heading. To see the entire execution, use "go test -v", which also adds an initial environment dump to the beginning of the log. Note also that in reported output, the actual name of the per-script temporary directory has been consistently replaced with the literal string $WORK. If Params.TestWork is true, each test logs the name of its $WORK directory and other environment variable settings, and also leaves that directory behind when it exits, for manual debugging of failing tests:
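Putting the pieces together: a typical wiring of testscript.Run (the testdata directory name is conventional), followed by a minimal script file of the kind described above:

    func TestScripts(t *testing.T) {
        testscript.Run(t, testscript.Params{
            Dir: "testdata", // every testdata/*.txt script becomes a subtest
        })
    }

A script such as testdata/hello.txt might contain:

    # hello world
    exec cat hello.text
    stdout 'hello world\n'
    ! stderr .

    -- hello.text --
    hello world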
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be).

2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom.
2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation.
2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (github.com/cznic/lldb).
2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file.
2016-04-03: Release v1.0.3 fixes a data race.
2016-03-23: Release v1.0.2 vendors github.com/cznic/exp/lldb and github.com/camlistore/go4/lock.
2016-03-17: Release v1.0.1 adjusts for the latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change.
2016-03-05: The current version has been tagged v1.0.0.
2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument.
2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before, plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but none exists, the plan includes hints in the form of copy/paste-ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when, in your opinion, it should. Bonus points for including the output of `ql 'explain <query>'`.
2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful together with a UNIQUE clause, to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.
2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in the new ColumnInfo fields NotNull, Constraint and Default.
2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN.
2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.
2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom)
2015-02-16: The IN predicate now accepts a SELECT statement. See the updated "Predicates" section.
2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea not to use any name appearing in, for example, [7] in your queries, as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".
2015-01-12: ACID guarantees were tightened at the cost of performance in some cases.
The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19; the synchronization principles discussed there are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is, strictly speaking, no longer required. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction.

2014-09-21: id() now optionally accepts a single argument - a table name.
2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate.
2014-08-08: The built-in functions max and min now also accept time values. Thanks opennota! (https://github.com/opennota)
2014-06-05: The RecordSet interface was extended by the new methods FirstRow and Rows.
2014-06-02: Indices on id() are now used by SELECT statements.
2014-05-07: Introduction of Marshal, Schema, Unmarshal.
2014-04-15: Added an optional IF NOT EXISTS clause to CREATE INDEX and an optional IF EXISTS clause to DROP INDEX.
2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry.
2014-04-11: Introduction of LIMIT, OFFSET.
2014-04-10: Introduction of query rewriting.
2014-04-07: Introduction of indices.

QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes `. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements. The following terms are used to denote specific character classes: The underscore character _ (U+005F) is considered a letter.
Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals, and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement, or the last QL statement in a list of statements, can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example No identifiers are predeclared; however, note that no keyword can be used as an identifier. Identifiers starting with two underscores are used for metadata virtual table names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive. The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4.
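The literal forms mirror Go's; a few illustrative examples (these particular values are ours, not the spec's):

    42        // decimal integer literal
    0600      // octal integer literal (leading 0)
    0xBadFace // hexadecimal integer literal
    72.40     // floating-point literal
    1e6       // floating-point literal, exponent form
    2.71828i  // imaginary literal
    'a'       // rune literal, U+0061
    'ä'       // rune literal, U+00E4, two UTF-8 bytes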
Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points, so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes `. Within the quotes, any character is legal except a back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value. Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive.
Note: The blob type is exchanged between the back end and the API as []byte. On 32-bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expressions are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply:

- The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range.
- A constant index must be non-negative and representable by a value of type int.
- A constant index must be in range if the string s is a literal.
- If x is out of range at run time, a run-time error occurs.
- s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal.

For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand. The indices low and high are in range if 0 <= low <= high <= len(a), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. Given an identifier f denoting a predeclared function, a call f(a1, a2, … an) calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order.
After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If either of the expressions is NULL, the result is NULL. Predicates are special-form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of the involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from the result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource-limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of the involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; + and - also apply to times. All other arithmetic operators apply to integers only.
    +    sum                    integers, rationals, floats, complex values, strings
    -    difference             integers, rationals, floats, complex values, times
    *    product                integers, rationals, floats, complex values
    /    quotient               integers, rationals, floats, complex values
    %    remainder              integers

    &    bitwise AND            integers
    |    bitwise OR             integers
    ^    bitwise XOR            integers
    &^   bit clear (AND NOT)    integers

    <<   left shift             integer << unsigned integer
    >>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator: string addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other, producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships, with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation. The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity. For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on "wrap around". For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow, but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered.
Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered. These terms and the result of the comparisons are defined as follows:

- Boolean values are comparable. Two boolean values are equal if they are either both true or both false.
- Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v).
- Integer values are comparable and ordered, in the usual way. Note that durations are integers.
- Floating point values are comparable and ordered, as defined by the IEEE-754 standard.
- Rational values are comparable and ordered, in the usual way.
- String values are comparable and ordered, lexically byte-wise.
- Time values are comparable and ordered.

Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string.

Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values

Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T.

A constant value x can be converted to type T in any of these cases:

- x is representable by a value of type T.
- x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value.
- x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case.

Converting a constant yields a typed constant as result.

A non-constant value x can be converted to type T in any of these cases:

- x has type T.
- x's type and T are both integer or floating point types.
- x's type and T are both complex types.
- x is an integer, except bigint or duration, and T is a string type.

Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL.

For the conversion of non-constant numeric values, the following rules apply:

1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow.

2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero).

3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not.

In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent.
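A short Go sketch of rule 1, matching the example in the text:

    package main

    import "fmt"

    func main() {
        v := uint16(0x10F0)
        // int8(v) keeps only the low byte (0xF0 == -16), which is then
        // sign extended when widened to uint32.
        fmt.Printf("%#x\n", uint32(int8(v))) // 0xfffffff0
    }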
For conversions to and from a string type, the following rules apply:

1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD".

2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob.

3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string.

4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer.

5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format.

6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1).

7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format.

8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit.

9. Converting a string value to a duration yields a duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

10. Converting a time value to a string returns the time formatted using the format string

When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, the function calls and the evaluation of c happen in the order h(), i(), j(), c.

Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x.

Statements control execution. The empty statement does nothing.

Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. In other words, there cannot be a table with no columns. For example

When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine.

Begin transaction statements introduce a new transaction level. Every transaction level must eventually be balanced by exactly one of the COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure, no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted.
Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example

A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction

A database is effectively read only outside of a transaction. Statements not requiring a transaction

The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example

Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. The index name must not be the same as the name of any existing table, and it also cannot be the same as any column name of the table the index is on. For example

Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists.

A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or whose single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration.

Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example

The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists.

The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs.

The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value.

Note that the constraint and/or default expressions may refer to other columns by name:

When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows:

1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2.

2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value.

3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4.

4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made.
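A minimal runnable sketch of the above through database/sql, assuming the cznic/ql driver (the underscore import of github.com/cznic/ql/driver registers drivers named "ql" and "ql-mem"); the table name, constraint expression, and default value are illustrative only:

    package main

    import (
        "database/sql"

        _ "github.com/cznic/ql/driver" // assumed: registers "ql" and "ql-mem"
    )

    func main() {
        db, err := sql.Open("ql-mem", "mem.db") // throwaway in-memory DB
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // All mutations, DDL included, must happen inside a transaction.
        tx, err := db.Begin() // BEGIN TRANSACTION
        if err != nil {
            panic(err)
        }
        // A constraint expression followed by a DEFAULT clause, as
        // described above; both the table and the rule are made up.
        _, err = tx.Exec(`
            CREATE TABLE department (
                DepartmentName string DepartmentName != "" DEFAULT "unknown",
            );
        `)
        if err != nil {
            tx.Rollback()
            panic(err)
        }
        if err := tx.Commit(); err != nil { // balances the BEGIN
            panic(err)
        }
    }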
Delete from statements remove rows from a table, which must exist. For example

If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement.

Drop index statements remove indices from the DB. The index must exist. For example

The optional IF EXISTS clause makes the statement a no operation if the index does not exist.

Drop table statements remove tables from the DB. The table must exist. For example

The optional IF EXISTS clause makes the statement a no operation if the table does not exist.

Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, which allows inserting values calculated from the same table the rows are being inserted into.

If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. For example

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

Explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines:

The explanation may aid in understanding how a statement/query would be executed and whether indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copied and pasted into the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement.

If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement.

The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example

The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation.
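A hedged sketch of that pattern as QL statements, here wrapped in a Go string constant (the tmp table and its contents are invented for illustration):

    // The SELECT's record set is returned to the client even though the
    // enclosing transaction, and with it the tmp table, is rolled back.
    const rollbackExample = `
    BEGIN TRANSACTION;
        CREATE TABLE tmp (i int);
        INSERT INTO tmp VALUES (42);
        SELECT * FROM tmp;
    ROLLBACK;
    `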
Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause. For example

If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions.

A field is a named expression. Identifiers not used as a type in a conversion or as a function name in the Call clause denote names of (other) fields, whose values are used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example

The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed.

The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never caused expr to evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would otherwise be provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough discussion of OUTER JOINs please see the Wikipedia article at [10].

Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators".

Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal).

The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type.

The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example

Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets.

The optional OFFSET clause allows ignoring the first N records. For example

The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration.

The optional LIMIT clause allows ignoring all but the first N records. For example

The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration.

The LIMIT and OFFSET clauses can be combined. For example

Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed.
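A sketch of such a combined query, again as a QL statement in a Go constant (the table t and column c are assumptions):

    // Skips the first 3 records and returns at most the next 5,
    // i.e. records 4-8 of the ordered result.
    const limitOffsetExample = `
    SELECT c FROM t ORDER BY c LIMIT 5 OFFSET 3;
    `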
A SELECT statement is evaluated in the following order:

1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements).

2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...)

3. If present, the WHERE clause is evaluated on the result set of the previous evaluation.

4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s).

5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s).

6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s).

7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s).

8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations.

9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations.

Truncate table statements remove all records from a table. The table must exist. For example

Update statements change values of fields in rows of a table. For example

Note: The SET clause is optional.

If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation.

To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table.

The table __Table lists all tables in the DB. The schema is

The Schema column returns the statement to (re)create table Name. This table is virtual.

The table __Column lists all columns of all tables in the DB. The schema is

The Ordinal column defines the 1-based index of the column in the record. This table is virtual.

The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is

It's possible to obtain a consolidated recordset for all properties of all DB columns using

The Name column is the column name in TableName.

The table __Index lists all indices in the DB. The schema is

The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual.
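A short sketch of reading the meta data tables through database/sql, reusing a *sql.DB opened with the assumed cznic/ql driver as in the earlier transaction example (database/sql and fmt imports assumed; reads do not require a transaction):

    // listSchemas prints every table in the DB together with the statement
    // that (re)creates it, by querying the virtual __Table system table.
    func listSchemas(db *sql.DB) error {
        rows, err := db.Query("SELECT Name, Schema FROM __Table;")
        if err != nil {
            return err
        }
        defer rows.Close()
        for rows.Next() {
            var name, schema string
            if err := rows.Scan(&name, &schema); err != nil {
                return err
            }
            fmt.Printf("%s: %s\n", name, schema)
        }
        return rows.Err()
    }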
Built-in functions are predeclared.

The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type.

The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL.

The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example

Date returns the time corresponding to its arguments in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1.

A daylight saving time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which.

A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL.

The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.

The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL.

NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns, and in fact formatTime directly calls t.String(). returns on a machine in the CET time zone, but may return on a machine in the ACDT zone. The time value is in both cases the same, so its ordering and comparing is correct. Only the display value can differ.

The built-in functions formatFloat and formatInt format numbers to strings using Go's number formatting functions in the `strconv` package. For both functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. returns returns returns

Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned.

The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL.

The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL.

The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL.

The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL.

The built-in function id takes zero or one argument. If no argument is provided, id() returns a table-unique, automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example

If id() without arguments is called for a row which is not a table record then the result value is NULL. For example

If id() has one argument it must be the name of a table in a cross join.
For example

The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL.

The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example

The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example

The column values must be of an ordered type.

The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL.

The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.

The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL.

The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL.

The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL.

The built-in function now returns the current local time.

The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string.

Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored.

In the absence of a time zone indicator, parseTime returns a time in UTC.

When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset.

When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset.

If any argument to parseTime is NULL the result is NULL.

The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL.
The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL.

The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL.

The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type.

The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL.

The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL.

The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.

The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL.

Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The types of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, or imag functions is NULL the result is NULL.

For the numeric types, the following sizes are guaranteed

Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5].

Links from the above documentation

This section is not part of the specification.

WARNING: The implementation of indices is new and it surely needs more time to become mature.

Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.).

Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. However, such a query is now automatically rewritten to which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering.
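A hedged sketch of the recognized patterns, as QL statements in a Go constant (the table t, column f, and index name are assumptions):

    // An index on t.f lets the WHERE patterns below use the index: an
    // indexed column compared, via a relational operator, to a constant
    // expression or to a QL parameter.
    const indexUse = `
    BEGIN TRANSACTION;
        CREATE INDEX tf ON t (f);
    COMMIT;

    SELECT * FROM t WHERE f == 42;
    SELECT * FROM t WHERE f < $1;
    `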
Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, older QL versions can no longer work with such a DB.

Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example

Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid the benchmarks being killed after the default time limit (10 minutes).
Package awk implements AWK-style processing of input streams. The awk package can be considered a shallow EDSL (embedded domain-specific language) for Go that facilitates text processing. It aims to implement the core semantics provided by AWK, a pattern scanning and processing language defined as part of the POSIX 1003.1 standard (http://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html) and therefore part of all standard Linux/Unix distributions.

AWK's forte is simple transformations of tabular data. For example, the following is a complete AWK program that reads an entire file from the standard input device, splits each line into whitespace-separated columns, and outputs all lines in which the fifth column is an odd number:

Here's a typical Go analogue of that one-line AWK program:

The goal of the awk package is to emulate AWK's simplicity while simultaneously taking advantage of Go's speed, safety, and flexibility. With the awk package, the preceding code reduces to the following:

While not a one-liner like the original AWK program, the above is conceptually close to it. The AppendStmt method defines a script in terms of patterns and actions exactly as in the AWK program. The Run method then runs the script on an input stream, which can be any io.Reader.

For those programmers unfamiliar with AWK, an AWK program consists of a sequence of pattern/action pairs. Each pattern that matches a given line causes the corresponding action to be performed. AWK programs tend to be terse because AWK implicitly reads the input file, splits it into records (default: newline-terminated lines), and splits each record into fields (default: whitespace-separated columns), saving the programmer from having to express such operations explicitly. Furthermore, AWK provides a default pattern, which matches every record, and a default action, which outputs a record unmodified. The awk package attempts to mimic those semantics in Go.

Basic usage consists of three steps:

1. Script allocation (awk.NewScript)
2. Script definition (Script.AppendStmt)
3. Script execution (Script.Run)

In Step 2, AppendStmt is called once for each pattern/action pair that is to be appended to the script. The same script can be applied to multiple input streams by re-executing Step 3. Actions to be executed on every run of Step 3 can be supplied by assigning the script's Begin and End fields. The Begin action is typically used to initialize script state by calling methods such as SetRS and SetFS and assigning user-defined data to the script's State field (what would be global variables in AWK). The End action is typically used to store or report final results.

To mimic AWK's dynamic type system, the awk package provides the Value and ValueArray types. Value represents a scalar that can be coerced without error to a string, an int, or a float64. ValueArray represents a—possibly multidimensional—associative array of Values.

Both patterns and actions can access the current record's fields via the script's F method, which takes a 1-based index and returns the corresponding field as a Value. An index of 0 returns the entire record as a Value.
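A minimal sketch of those three steps, assuming the package's import path is github.com/spakin/awk: print every line whose fifth whitespace-separated field is odd. A nil action falls back to the default action of printing the record unmodified.

    package main

    import (
        "os"

        "github.com/spakin/awk" // assumed import path
    )

    func main() {
        s := awk.NewScript()                    // 1. allocation
        s.AppendStmt(func(s *awk.Script) bool { // 2. definition
            return s.F(5).Int()%2 == 1 // pattern: fifth field is odd
        }, nil) // nil action: print the matching record
        if err := s.Run(os.Stdin); err != nil { // 3. execution
            panic(err)
        }
    }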
The following AWK features and GNU AWK extensions are currently supported by the awk package:

• the basic pattern/action structure of an AWK script, including BEGIN and END rules and range patterns
• control over record separation (RS), including regular expressions and null strings (implying blank lines as separators)
• control over field separation (FS), including regular expressions and null strings (implying single-character fields)
• fixed-width fields (FIELDWIDTHS)
• fields defined by a regular expression (FPAT)
• control over case-sensitive vs. case-insensitive comparisons (IGNORECASE)
• control over the number conversion format (CONVFMT)
• automatic enumeration of records (NR) and fields (NF)
• "weak typing"
• multidimensional associative arrays
• premature termination of record processing (next) and script processing (exit)
• explicit record reading (getline) from either the current stream or a specified stream
• maintenance of regular-expression status variables (RT, RSTART, and RLENGTH)

For more information about AWK and its features, see the awk(1) manual page on any Linux/Unix system (available online from, e.g., http://linux.die.net/man/1/awk) or read the book, "The AWK Programming Language" by Aho, Kernighan, and Weinberger.

A number of examples ported from the POSIX 1003.1 standard document (http://pubs.opengroup.org/onlinepubs/9699919799/utilities/awk.html) are presented below.
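As a further sketch of the Begin hook described earlier (the comma separator and the field index are arbitrary choices, and the import path is again assumed): initialize the script to split comma-separated records, then print the second field of every record.

    package main

    import (
        "fmt"
        "os"

        "github.com/spakin/awk" // assumed import path
    )

    func main() {
        s := awk.NewScript()
        // Begin runs once per Run, before any input is read.
        s.Begin = func(s *awk.Script) { s.SetFS(",") }
        // A nil pattern matches every record.
        s.AppendStmt(nil, func(s *awk.Script) {
            fmt.Println(s.F(2).String())
        })
        if err := s.Run(os.Stdin); err != nil {
            panic(err)
        }
    }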
Package ldk is an LDK (loop development kit) for plugins for the Sidekick project.

The LDK is built with go-plugin (https://github.com/hashicorp/go-plugin), a HashiCorp plugin system used in several of their projects. Plugins developed with this library are executed by Sidekick as separate processes. This ensures that crashes or instability in a plugin will not destabilize the Sidekick process. Communication between Sidekick and the plugin is first initialized over stdio and then performed using gRPC (https://grpc.io/). On macOS and Linux the gRPC communication is sent over a Unix domain socket, and on Windows over a local TCP socket.

In order for Sidekick to use a plugin, it must be compiled. Sidekick does not compile or interpret source code at runtime. A consequence of this is that plugins need to be compiled for each operating system they want to support.

Controllers receive events and use them to generate relevant whispers. Controllers choose which events they want to use and which they want to ignore. Writing a Controller plugin boils down to writing an implementation for the Controller interface.

Start() - The Controller should wait to start operating until this is called. The provided `ControllerHost` should be stored in memory for continued use.

Stop() - The Controller should stop operating when this is called.

OnEvent() - The Controller can use this to handle events that are broadcast by Sensors. Controllers do not need to emit whispers in a 1:1 relationship with events. Controllers may not use events at all. Controllers may only use some events. Controllers may keep a history of events and only emit whispers when several conditions are met.

1. Sidekick executes the plugin process.
2. Sidekick calls `Start`, sending the host connection information to the plugin. This connection information is used to create the `ControllerHost`. The `ControllerHost` interface allows the plugin to emit whispers.
3. On the Controller wanting to emit a whisper, the Controller calls the `EmitWhisper` method on the host interface.
4. On a Sensor event, Sidekick calls `OnEvent`, passing the event from the Sensor to the Controller. These events can be ignored or used at the Controller's choice.
5. On the user disabling the Controller, Sidekick calls `Stop` then sends `sigterm` to the process.
6. On Sidekick shutdown, Sidekick calls `Stop` then sends `sigterm` to the process.

We recommend using this repo as a starting point when creating a new controller: https://github.com/open-olive/sidekick-controller-examplego

A Sensor is a type of plugin that generates events. Events can be as simple as a chunk of text but allow for more complicated information. Sensors do not choose which Controllers get their events. They simply emit the events. The decision about which events to use is left to the Controller.

Writing a Sensor plugin boils down to writing an implementation for the Sensor interface.

Start() - The Sensor should wait to start operating until this is called. The provided `SensorHost` should be stored in memory for continued use.

Stop() - The Sensor should stop operating when this is called.

OnEvent() - The Sensor can use this to handle events from the Sidekick UI. Many aptitudes will not care about UI events, and in that case the function should just return `nil`.

1. Sidekick executes the plugin process.
2. Sidekick calls `Start`, sending the host connection information to the plugin. This connection information is used to create the `SensorHost`. The `SensorHost` interface allows the plugin to emit events.
3. On the Sensor wanting to emit an event, the Sensor calls the `EmitEvent` method on the host interface.
4. On a Sidekick UI event, Sidekick calls `OnEvent`, passing the event to the Sensor. These events can be ignored or used at the Sensor's choice.
5. On the user disabling the Sensor, Sidekick calls `Stop` then sends `sigterm` to the process.
6. On Sidekick shutdown, Sidekick calls `Stop` then sends `sigterm` to the process.
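Purely as an illustration of the Controller shape described above - the type and method signatures in this sketch are assumptions, not the ldk package's actual declarations:

    package main

    // Hypothetical stand-ins for the ldk types; the real interface and its
    // signatures live in the ldk package.
    type Event struct{ Text string }

    type ControllerHost interface {
        EmitWhisper(content string) error // hypothetical signature
    }

    // MyController keeps a small history of events and only emits a
    // whisper once a condition is met, as the text above allows.
    type MyController struct {
        host ControllerHost
        seen int
    }

    // Start stores the host for continued use; work begins only here.
    func (c *MyController) Start(host ControllerHost) error {
        c.host = host
        return nil
    }

    // Stop halts all work started by Start.
    func (c *MyController) Stop() error { return nil }

    // OnEvent may use or ignore any event; here it whispers after every
    // third non-empty event.
    func (c *MyController) OnEvent(e Event) error {
        if e.Text == "" {
            return nil // ignored
        }
        c.seen++
        if c.seen%3 != 0 {
            return nil
        }
        return c.host.EmitWhisper("saw three events, latest: " + e.Text)
    }

    func main() {}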
Command mox is a modern, secure, full-featured, open source mail server for low-maintenance self-hosted email.

Mox is started with the "serve" subcommand, but mox also has many other subcommands. Many of those commands talk to a running mox instance, through the ctl file in the data directory. Specify the configuration file (that holds the path to the data directory) through the -config flag or the MOXCONF environment variable.

Commands that don't talk to a running mox instance are often for testing/debugging email functionality. For example, for parsing an email message, or looking up SPF/DKIM/DMARC records.

Below is the usage information as printed by the command when started without any parameters, followed by the help and usage information for each command.

Start mox, serving SMTP/IMAP/HTTPS. Incoming email is accepted over SMTP. Email can be retrieved by users using IMAP. HTTP listeners are started for the admin/account web interfaces, and for automated TLS configuration. Missing essential TLS certificates are immediately requested, other TLS certificates are requested on demand. Only implemented on unix systems, not Windows.

Quickstart generates configuration files and prints instructions to quickly set up a mox instance. Quickstart writes configuration files, and prints initial admin and account passwords and DNS records you should create. If you run it on Linux it writes a systemd service file and prints commands to enable and start mox as a service. The user or uid is optional, defaults to "mox", and is the user or uid/gid mox will run as after initialization.

Quickstart assumes mox will run on the machine you run quickstart on and uses its host name and public IPs. On many systems the hostname is not a fully qualified domain name, but only the first DNS "label", e.g. "mail" in case of "mail.example.org". If so, quickstart does a reverse DNS lookup to find the hostname, and as a fallback uses the label plus the domain of the email address you specified. Use the flag -hostname to explicitly specify the hostname mox will run on.

Mox is by far easiest to operate if you let it listen on port 443 (HTTPS) and 80 (HTTP). TLS will be fully automatic with ACME with Let's Encrypt. You can run mox along with an existing webserver, but because of MTA-STS and autoconfig, you'll need to forward HTTPS traffic for two domains to mox. Run "mox quickstart -existing-webserver ..." to generate configuration files and instructions for configuring mox along with an existing webserver. But please first consider configuring mox on port 443. It can itself serve domains with HTTP/HTTPS, including with automatic TLS with ACME, is easily configured through both configuration files and the admin web interface, and can act as a reverse proxy (and static file server for that matter), so you can forward traffic to your existing backend applications. Look for "WebHandlers:" in the output of "mox config describe-domains" and see the output of "mox config example webhandlers".

Shut mox down, giving connections a maximum of 3 seconds to stop before closing them. While shutting down, new IMAP and SMTP connections will get a status response indicating temporary unavailability. Existing connections will get a 3 second period to finish their transaction and shut down. Under normal circumstances, only IMAP has long-living connections, with the IDLE command to get notified of new mail deliveries.

Set a new password for an account. The password is read from stdin.
Secrets derived from the password, but not the password itself, are stored in the account database. The stored secrets are for authentication with: scram-sha-256, scram-sha-1, cram-md5, plain text (bcrypt hash). The parameter is an account name, as configured under Accounts in domains.conf and as present in the data/accounts/ directory, not a configured email address for an account.

Set a new admin password, for the web interface. The password is read from stdin. Its bcrypt hash is stored in a file named "adminpasswd" in the configuration directory.

Print the log levels, or set a new default log level, or a level for the given package. By default, a single log level applies to all logging in mox. But for each "pkg", an overriding log level can be configured. Examples of packages: smtpserver, smtpclient, queue, imapserver, spf, dkim, dmarc, junk, message, etc. Specify a pkg and an empty level to clear the configured level for a package. Valid labels: error, info, debug, trace, traceauth, tracedata.

List hold rules for the delivery queue. Messages submitted to the queue that match a hold rule will be marked as on hold and not scheduled for delivery.

Add a hold rule for the delivery queue. Add a hold rule to mark matching newly submitted messages as on hold. Set the matching rules with the flags. Don't specify any flags to match all submitted messages.

Remove a hold rule for the delivery queue. Remove a hold rule by its id.

List matching messages in the delivery queue. Prints the message with its ID, last and next delivery attempts, and last error.

Mark matching messages on hold. Messages that are on hold are not delivered until marked as off hold again, or otherwise handled by the admin.

Mark matching messages off hold. Once off hold, messages can be delivered according to their current next delivery attempt. See the "queue schedule" command.

Change the next delivery attempt for matching messages. The next delivery attempt is adjusted by the duration parameter. If the -now flag is set, the new delivery attempt is set to the duration added to the current time, instead of added to the current scheduled time. Schedule immediate delivery with "mox queue schedule -now 0".

Set the transport for matching messages. By default, the routing rules determine how a message is delivered. The default and common case is direct delivery with SMTP. Messages can get a previously configured transport assigned to use for delivery, e.g. using submission to another mail server or with connections over a SOCKS proxy.

Set TLS requirements for delivery of matching messages. Value "yes" is handled as if the RequireTLS extension was specified during submission. Value "no" is handled as if the message has a header "TLS-Required: No". This header is not added by the queue. If messages without this header are relayed through other mail servers they will apply their own default TLS policy. Value "default" is the default behaviour, currently for unverified opportunistic TLS.

Fail delivery of matching messages, delivering DSNs. Failing a message is handled similar to how delivery is given up after all delivery attempts failed. The DSN (delivery status notification) message contains a line saying the message was canceled by the admin.

Remove matching messages from the queue. Dangerous operation, this completely removes the message. If you want to store the message, use "queue dump" before removing.

Dump a message from the queue. The message is printed to stdout and is in standard internet mail format.
List matching messages in the retired queue. Prints messages with their ID and results.

Print a message from the retired queue. Prints a JSON representation of the information from the retired queue.

Print addresses in the suppression list.

Add an address to the suppression list for an account.

Remove an address from the suppression list for an account.

Check if an address is present in the suppression list, for any or a specific account.

List matching webhooks in the queue. Prints a list of webhooks, their IDs and basic information.

Change the next delivery attempt for matching webhooks. The next delivery attempt is adjusted by the duration parameter. If the -now flag is set, the new delivery attempt is set to the duration added to the current time, instead of added to the current scheduled time. Schedule immediate delivery with "mox queue schedule -now 0".

Fail delivery of matching webhooks.

Print details of a webhook from the queue. The webhook is printed to stdout as JSON.

List matching webhooks in the retired queue. Prints a list of retired webhooks, their IDs and basic information.

Print details of a webhook from the retired queue. The retired webhook is printed to stdout as JSON.

Import a maildir into an account. The mbox/maildir archive is accessed and imported by the running mox process, so it must have access to the archive files. The default suggested systemd service file isolates mox from most of the file system, with only the "data/" directory accessible, so you may want to put the mbox/maildir archive files in a directory like "data/import/" to make it available to mox.

By default, messages will train the junk filter based on their flags and, if the "automatic junk flags" configuration is set, based on mailbox naming. If the destination mailbox is the Sent mailbox, the recipients of the messages are added to the message metadata, causing later incoming messages from these recipients to be accepted, unless other reputation signals prevent that.

Users can also import mailboxes/messages through the account web page by uploading a zip or tgz file with mbox and/or maildirs. Messages are imported even if already present. Importing messages twice will result in duplicate messages.

Mailbox flags, like "seen", "answered", will be imported. An optional dovecot-keywords file can specify additional flags, like Forwarded/Junk/NotJunk.

Import an mbox into an account. Using mbox is not recommended, maildir is a better defined format. The mbox/maildir archive is accessed and imported by the running mox process, so it must have access to the archive files. The default suggested systemd service file isolates mox from most of the file system, with only the "data/" directory accessible, so you may want to put the mbox/maildir archive files in a directory like "data/import/" to make it available to mox.

By default, messages will train the junk filter based on their flags and, if the "automatic junk flags" configuration is set, based on mailbox naming. If the destination mailbox is the Sent mailbox, the recipients of the messages are added to the message metadata, causing later incoming messages from these recipients to be accepted, unless other reputation signals prevent that.

Users can also import mailboxes/messages through the account web page by uploading a zip or tgz file with mbox and/or maildirs. Messages are imported even if already present. Importing messages twice will result in duplicate messages.

Export one or all mailboxes from an account in maildir format. Export bypasses a running mox instance.
It opens the account mailbox/message database file directly. This may block if a running mox instance also has the database open, e.g. for IMAP connections. To export from a running instance, use the accounts web page or webmail.

Export messages from one or all mailboxes in an account in mbox format. Using mbox is not recommended. Maildir is a better format. Export bypasses a running mox instance. It opens the account mailbox/message database file directly. This may block if a running mox instance also has the database open, e.g. for IMAP connections. To export from a running instance, use the accounts web page or webmail. For mbox export, "mboxrd" is used where message lines starting with the magic "From " string are escaped by prepending a >. All ">*From " are escaped, otherwise reconstructing the original could lose a ">".

Start a local SMTP/IMAP server that accepts all messages, useful when testing/developing software that sends email. Localserve starts mox with a configuration suitable for local email-related software development/testing. It listens for SMTP/Submission(s), IMAP(s) and HTTP(s), on the regular port numbers + 1000.

Data is stored in the system user's configuration directory under "mox-localserve", e.g. $HOME/.config/mox-localserve/ on linux, but can be overridden with the -dir flag. If the directory does not yet exist, it is automatically initialized with configuration files, an account with email address mox@localhost and password moxmoxmox, and a newly generated self-signed TLS certificate.

Incoming messages are delivered as normal, falling back to accepting and delivering to the mox account for unknown addresses. Submitted messages are added to the queue, which delivers by ignoring the destination servers, always connecting to itself instead.

Recipient addresses with the following localpart suffixes are handled specially:

- "temperror": fail with a temporary error code
- "permerror": fail with a permanent error code
- [45][0-9][0-9]: fail with the specific error code
- "timeout": no response (for an hour)

If the localpart begins with "mailfrom" or "rcptto", the error is returned during those commands instead of during "data".

Prints help about matching commands. If multiple commands match, they are listed along with the first line of their help text. If a single command matches, its usage and full help text is printed.

Creates a backup of the data directory. Backup creates consistent snapshots of the databases and message files and copies other files in the data directory. Empty directories are not copied. These files can then be stored elsewhere for long-term storage, or used to fall back to should an upgrade fail. Simply copying files in the data directory while mox is running can result in unusable database files.

Message files never change (they are read-only, though they can be removed) and are hard-linked so they don't consume additional space. If hardlinking fails, for example when the backup destination directory is on a different file system, a regular copy is made. Using a destination directory like "data/tmp/backup" increases the odds hardlinking succeeds: the default systemd service file specifically mounts the data directory, causing attempts to hardlink outside it to fail with an error about cross-device linking.

All files in the data directory that aren't recognized (i.e. other than known database files, message files, an acme directory, the "tmp" directory, etc), are stored, but with a warning.
Remove files in the destination directory before doing another backup. The backup command will not overwrite files, but print and return errors. Exit code 0 indicates the backup was successful. A clean successful backup does not print any output, but may print warnings. Use the -verbose flag for details, including timing.

To restore a backup, first shut down mox, move away the old data directory and move an earlier backed up directory in its place, run "mox verifydata", possibly with the "-fix" option, and restart mox. After the restore, you may also want to run "mox bumpuidvalidity" for each account for which messages in a mailbox changed, to force IMAP clients to synchronize mailbox state.

Before upgrading, to check if the upgrade will likely succeed, first make a backup, then use the new mox binary to run "mox verifydata" on the backup. This can change the backup files (e.g. upgrade database files, move away unrecognized message files), so you should make a new backup before actually upgrading.

Verify the contents of a data directory, typically of a backup. Verifydata checks all database files to see if they are valid BoltDB/bstore databases. It checks that all messages in the database have a corresponding on-disk message file and that there are no unrecognized files. If option -fix is specified, unrecognized message files are moved away. This may be needed after a restore, because messages enqueued or delivered in the future may get those message sequence numbers assigned and writing the message file would fail. Consistency of message/mailbox UID, UIDNEXT and UIDVALIDITY is verified as well.

Because verifydata opens the database files, schema upgrades may automatically be applied. This can happen if you use a new mox release. It is useful to run "mox verifydata" with a new binary before attempting an upgrade, but only on a copy of the database files, as made with "mox backup". Before upgrading, make a new backup again, since "mox verifydata" may have upgraded the database files, possibly making them no longer readable by the previous version.

Print licenses of mox source code and dependencies.

Parses and validates the configuration files. If valid, the command exits with status 0. If not valid, all errors encountered are printed.

Check the DNS records with the configuration for the domain, and print any errors/warnings.

Prints annotated DNS records as a zone file that should be created for the domain. The zone file can be imported into existing DNS software. You should review the DNS records, especially if your domain previously/currently has email configured.

Prints an annotated empty configuration for use as domains.conf. The domains configuration file contains the domains and their configuration, and accounts and their configuration. This includes the configured email addresses. The mox admin web interface, and the mox command line interface, can make changes to this file. Mox automatically reloads this file when it changes. Like the static configuration, the example domains.conf printed by this command needs modifications to make it valid.

Prints an annotated empty configuration for use as mox.conf. The static configuration file cannot be reloaded while mox is running. Mox has to be restarted for changes to the static configuration file to take effect. This configuration file needs modifications to make it valid. For example, it may contain unfinished list items.

Add an account with an email address and reload the configuration. Email can be delivered to this address/account.
A password has to be configured explicitly, see the setaccountpassword command. Remove an account and reload the configuration. Email addresses for this account will also be removed, and incoming email for these addresses will be rejected. All data for the account will be removed. Adds an address to an account and reloads the configuration. If the address starts with a @ (i.e. a missing localpart), this is a catchall address for the domain. Remove an address and reload the configuration. Incoming email for this address will be rejected after the address has been removed. Adds a new domain to the configuration and reloads the configuration. The account is used for the postmaster mailboxes of the domain, including for DMARC and TLS reporting. Localpart is the "username" at the domain for this account. It must be set if and only if the account does not yet exist. Remove a domain from the configuration and reload the configuration. This is a dangerous operation. Incoming email delivery for this domain will be rejected. List aliases for domain. Print settings and members of alias. Add new alias with one or more addresses. Update alias configuration. Remove alias. Add addresses to alias. Remove addresses from alias. Describe configuration for mox when invoked as sendmail. Prints a systemd unit service file for mox. This is the same file as generated using quickstart. If the systemd service file has changed with a newer version of mox, use this command to generate an up-to-date version. Ensure host private keys exist for TLS listeners with ACME. In mox.conf, each listener can have TLS configured. Long-lived private key files can be specified, which will be used when requesting ACME certificates. Configuring these private keys makes it feasible to publish DANE TLSA records for the corresponding public keys in DNS, protected with DNSSEC, allowing TLS certificate verification without depending on a list of Certificate Authorities (CAs). Previous versions of mox did not pre-generate private keys for use with ACME certificates, but would generate private keys on-demand. By explicitly configuring private keys, they will not change automatically with new certificates, and the DNS TLSA records stay valid. This command looks for listeners in mox.conf with TLS with ACME configured. For each missing host private key (of type rsa-2048 and ecdsa-p256) a key is written to config/hostkeys/. If a certificate exists in the ACME "cache", its private key is copied. Otherwise a new private key is generated. Snippets for manually updating/editing mox.conf are printed. After running this command, and updating mox.conf, run "mox config dnsrecords" for a domain and create the TLSA DNS records it suggests to enable DANE. List available config examples, or print a specific example. Check if a newer version of mox is available. A single DNS TXT lookup to _updates.xmox.nl tells if a new version is available. If so, a changelog is fetched from https://updates.xmox.nl, and the individual entries verified with a builtin public key. The changelog is printed. Turn an ID from a Received header into a cid, for looking up in logs. A cid is essentially a connection counter initialized when mox starts. Each log line contains a cid. Received headers added by mox contain a unique ID that can be decrypted to a cid by the admin of a mox instance only. Print the configuration for email clients for a domain.
Sending email is typically not done on SMTP port 25, but on submission ports 465 (with TLS) and 587 (without initial TLS, but usually added to the connection with STARTTLS). For IMAP, the port with TLS is 993 and without is 143. Without TLS/STARTTLS, passwords are sent in clear text, which should only be configured over otherwise secured connections, like a VPN. Dial the address using TLS with certificate verification using DANE. Data is copied between connection and stdin/stdout until either side closes the connection. Connect to the MX server for the domain using STARTTLS verified with DANE. If no destination host is specified, regular delivery logic is used to find the hosts to attempt delivery to. This involves following CNAMEs for the domain, looking up MX records, and possibly falling back to the domain name itself as host. If a destination host is specified, that is the only candidate host considered for dialing. With a list of destinations gathered, each is dialed until a successful SMTP session verified with DANE has been initialized, including EHLO and STARTTLS commands. Once connected, data is copied between connection and stdin/stdout, until either side closes the connection. This command follows the same logic as delivery attempts made from the queue, sharing most of its code. Print the TLSA record for the given certificate/key and parameters. Valid values:

- usage: pkix-ta (0), pkix-ee (1), dane-ta (2), dane-ee (3)
- selector: cert (0), spki (1)
- matchtype: full (0), sha2-256 (1), sha2-512 (2)

Common DANE TLSA record parameters are: dane-ee spki sha2-256, or 3 1 1, followed by a sha2-256 hash of the DER-encoded "SPKI" (subject public key info) from the certificate. An example DNS zone file entry: The first usable information from the pem file is used to compose the TLSA record. In case of selector "cert", a certificate is required. Otherwise the "subject public key info" (spki) of the first certificate or public or private key (pkcs#8, pkcs#1 or ec private key) is used. Lookup DNS name of given type. Lookup always prints whether the response was DNSSEC-protected. Examples:

    mox dns lookup ptr 1.1.1.1
    mox dns lookup mx xmox.nl
    mox dns lookup txt _dmarc.xmox.nl.
    mox dns lookup tlsa _25._tcp.xmox.nl

Generate a new ed25519 key for use with DKIM. Ed25519 keys are much smaller than RSA keys of comparable cryptographic strength. This is convenient because of maximum DNS message sizes. At the time of writing, not many mail servers appear to support ed25519 DKIM keys though, so it is recommended to sign messages with both RSA and ed25519 keys. Generate a new 2048 bit RSA private key for use with DKIM. The generated file is in PEM format, and has a comment noting it is generated for use with DKIM, by mox. Lookup and print the DKIM record for the selector at the domain. Print a DKIM DNS TXT record with the public key derived from the private key read from stdin. The DNS should be configured as a TXT record at $selector._domainkey.$domain. Verify the DKIM signatures in a message and print the results. The message is parsed, and the DKIM-Signature headers are validated. Validation of older messages may fail because the DNS records have been removed or changed by now, or because the signature header may have specified an expiration time that has passed. Sign a message, adding DKIM-Signature headers based on the domain in the From header. The message is parsed, the domain looked up in the configuration files, and DKIM-Signature headers generated.
The message is printed with the DKIM-Signature headers prepended. Lookup the DMARC policy for the domain, a DNS TXT record at _dmarc.<domain>, validate and print it. Parse a DMARC report from an email message, and print its extracted details. DMARC reports are periodically mailed, if requested in the DMARC DNS record of a domain. Reports are sent by mail servers that received messages with our domain in a From header. This may or may not be legitimate email. DMARC reports contain summaries of evaluations of DMARC and DKIM/SPF, which can help understand email deliverability problems. Parse an email message and evaluate it against the DMARC policy of the domain in the From-header. mailfromaddress and helodomain are used for SPF validation. If both are empty, SPF validation is skipped. mailfromaddress should be the address used as MAIL FROM in the SMTP session. For DSN messages, that address may be empty. The helo domain was specified at the beginning of the SMTP transaction that delivered the message. These values can be found in message headers. For each reporting address in the domain's DMARC record, check if it has opted into receiving reports (if needed). A DMARC record can request reports about DMARC evaluations to be sent to an email/http address. If the organizational domain of the DMARC record and that of the report destination address do not match, the destination address must opt in to receiving DMARC reports by creating a DMARC record at <dmarcdomain>._report._dmarc.<reportdestdomain>. Test if IP is in the DNS blocklist of the zone, e.g. bl.spamcop.net. If the IP is in the blocklist, an explanation is printed. This is typically a URL with more information. Check the health of the DNS blocklist represented by zone, e.g. bl.spamcop.net. The health of a DNS blocklist can be checked by querying for 127.0.0.1 and 127.0.0.2. The second must be present and the first must not be. Lookup the MTA-STS record and policy for the domain. MTA-STS is a mechanism for a domain to specify if it requires TLS connections for delivering email. If a domain has a valid MTA-STS DNS TXT record at _mta-sts.<domain> it signals it implements MTA-STS. A policy can then be fetched at https://mta-sts.<domain>/.well-known/mta-sts.txt. The policy specifies the mode (enforce, testing, none), which MX servers support TLS and should be used, and how long the policy can be cached. Recreate and retrain the junk filter for the account. Useful after having made changes to the junk filter configuration, or if the implementation has changed. Sendmail is a drop-in replacement for /usr/sbin/sendmail to deliver emails sent by unix processes like cron. If invoked as "sendmail", it will act as sendmail for sending messages. Its intention is to let processes like cron send emails. Messages are submitted to an actual mail server over SMTP. The destination mail server and credentials are configured in /etc/moxsubmit.conf, see mox config describe-sendmail. The From message header is rewritten to the configured address. When the addressee appears to be a local user (an address without @), the message is sent to the configured default address. If submitting an email fails, it is added to a directory moxsubmit.failures in the user's home directory. Most flags are ignored to fake compatibility with other sendmail implementations. A single recipient or the -t flag with a To-header is required. With the -t flag, Cc and Bcc headers are not handled specially, so Bcc is not removed and the addresses do not receive the email.
/etc/moxsubmit.conf should be group-readable and not readable by others and this binary should be setgid that group: Check the status of IP for the policy published in DNS for the domain. IPs may be allowed to send for a domain, or disallowed, and several shades in between. If not allowed, an explanation may be provided by the policy. If so, the explanation is printed. The SPF mechanism that matched (if any) is also printed. Lookup the SPF record for the domain and print it. Parse the record as an SPF record. If valid, nothing is printed. Lookup the TLSRPT record for the domain. A TLSRPT record typically contains an email address where reports about TLS connectivity should be sent. Mail servers attempting delivery to our domain should attempt to use TLS. TLSRPT lets them report how many connections successfully used TLS, and what kind of errors occurred otherwise. Parse and print the TLSRPT in the message. The report is printed in formatted JSON. Prints this mox version. Lists available methods, prints request/response parameters for a method, or calls a method with a request read from standard input. List available examples, or print a specific example. Change the IMAP UID validity of the mailbox, causing IMAP clients to refetch messages. This can be useful after manually repairing metadata about the account/mailbox. Opens the account database file directly. Ensure mox does not have the account open, or is not running. Reassign UIDs in one mailbox or all mailboxes in an account and bump UID validity, causing IMAP clients to refetch messages. Opens the account database file directly. Ensure mox does not have the account open, or is not running. Fix inconsistent UIDVALIDITY and UIDNEXT in messages/mailboxes/account. The next UID to use for a message in a mailbox should always be higher than any existing message UID in the mailbox. If it is not, the mailbox UIDNEXT is updated. Each mailbox has a UIDVALIDITY sequence number, which should always be lower than the per-account next UIDVALIDITY to use. If it is not, the account next UIDVALIDITY is updated. Opens the account database file directly. Ensure mox does not have the account open, or is not running. Ensure message sizes in the database match the sum of the message prefix length and on-disk file size. Messages with an inconsistent size are also parsed again. If an inconsistency is found, you should probably also run "mox bumpuidvalidity" on the mailboxes or entire account to force IMAP clients to refetch messages. Parse all messages in the account or all accounts again. Can be useful after upgrading mox with improved message parsing. Messages are parsed in batches, so other access to the mailboxes/messages is not blocked while reparsing all messages. Ensure messages in the database have a pre-parsed MIME form in the database. Recalculate message counts for all mailboxes in the account, and the total message size for quota. When a message is added to/removed from a mailbox, or when message flags change, the total, unread, unseen and deleted message counts are updated, along with the total size of the mailbox and the total message size for the account. In case of a bug in this accounting, the numbers could become incorrect. This command will find, fix and print them. Parse a message, print a JSON representation. Reassign message threads. For all accounts, or optionally only the specified account. Threading for all messages in an account is first reset, and the new base subject and normalized message-id saved with the message.
Then all messages are evaluated and matched against their parents/ancestors. Messages are matched based on the References header, with a fall-back to an In-Reply-To header, and if neither is present/valid, based only on the base subject. A References header typically points to multiple previous messages in a hierarchy, from oldest ancestor to most recent parent. An In-Reply-To header would have only the message-id of the parent message. A message is only linked to a parent/ancestor if their base subject is the same. This ensures unrelated replies, with a new subject, are placed in their own thread. The base subject is lower-cased, has whitespace collapsed to a single space, and some components removed: a leading "Re:", "Fwd:", "Fw:", or bracketed tag (that mailing lists often add, e.g. "[listname]"), a trailing "(fwd)", or an enclosing "[fwd: ...]". Messages are linked to all their ancestors. If an intermediate parent/ancestor message is deleted in the future, the message can still be linked to the earlier ancestors. If the direct parent wasn't available while matching, this is stored as the message having a "missing link" to its stored ancestors.
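To make the normalization concrete, here is a small illustrative Go sketch of a base-subject function following the rules described above. It is not mox's actual implementation; the function name and the exact regular expressions are assumptions made for illustration only.

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // baseSubject sketches the normalization described above: lower case,
    // collapse whitespace, strip leading "Re:"/"Fwd:"/"Fw:" and bracketed
    // tags, a trailing "(fwd)", and an enclosing "[fwd: ...]".
    var (
        leading  = regexp.MustCompile(`^(?:(?:re|fwd?):\s*|\[[^\]]*\]\s*)+`)
        trailing = regexp.MustCompile(`\s*\(fwd\)$`)
    )

    func baseSubject(s string) string {
        s = strings.ToLower(strings.Join(strings.Fields(s), " "))
        for {
            prev := s
            if strings.HasPrefix(s, "[fwd: ") && strings.HasSuffix(s, "]") {
                s = strings.TrimSuffix(strings.TrimPrefix(s, "[fwd: "), "]")
            }
            s = leading.ReplaceAllString(s, "")
            s = trailing.ReplaceAllString(s, "")
            if s == prev {
                return s
            }
        }
    }

    func main() {
        fmt.Println(baseSubject("Re: [mox-dev] Fwd: hello   world (fwd)"))
        // prints: hello world
    }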
Package types implements concrete types for the dcrwallet JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here, however it is also instructive to read the documentation for the dcrjson package (https://pkg.go.dev/github.com/decred/dcrd/dcrjson/v3). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation. To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function, which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and include full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, run-time, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
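As a rough sketch of the flow just described, the dynamic dcrjson.NewCmd approach and MarshalCmd might be combined as follows. The "getbalance" method name and the exact MarshalCmd parameter list (RPC version, id, command) are assumptions for illustration; consult the dcrjson documentation for the authoritative signatures.

    package main

    import (
        "fmt"
        "log"

        "github.com/decred/dcrd/dcrjson/v3"
    )

    func main() {
        // Build a command dynamically by method name; parameters are
        // checked at run time against the registered command struct.
        cmd, err := dcrjson.NewCmd("getbalance")
        if err != nil {
            log.Fatal(err)
        }

        // Marshal the command into a complete JSON-RPC request ready to
        // be sent across the wire. The parameter list here is an assumed
        // (rpcVersion, id, cmd) ordering.
        b, err := dcrjson.MarshalCmd("1.0", 1, cmd)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(b))
    }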
Package aw is a "plug-and-play" workflow development library/framework for Alfred 3 & 4 (https://www.alfredapp.com/). It requires Go 1.13 or later. It provides everything you need to create a polished and blazing-fast Alfred frontend for your project. As of AwGo 0.26, all applicable features of Alfred 4.1 are supported. The main features are: AwGo is an opinionated framework that expects to be used in a certain way in order to eliminate boilerplate. It *will* panic if not run in a valid, minimally Alfred-like environment. At a minimum the following environment variables should be set to meaningful values: NOTE: AwGo is currently in development. The API *will* change and should not be considered stable until v1.0. Until then, be sure to pin a version using go modules or similar. Be sure to also check out the _examples/ subdirectory, which contains some simple, but complete, workflows that demonstrate the features of AwGo and useful workflow idioms. Typically, you'd call your program's main entry point via Workflow.Run(). This way, the library will rescue any panic, log the stack trace and show an error message to the user in Alfred. In the Script box (Language = "/bin/bash"): To generate results for Alfred to show in a Script Filter, use the feedback API of Workflow: You can set workflow variables (via feedback) with Workflow.Var, Item.Var and Modifier.Var. See Workflow.SendFeedback for more documentation. Alfred requires a different JSON format if you wish to set workflow variables. Use the ArgVars (named for its equivalent element in Alfred) struct to generate output from Run Script actions. Be sure to set TextErrors to true to prevent Workflow from generating Alfred JSON if it catches a panic: See ArgVars for more information. New() creates a *Workflow using the default values and workflow settings read from environment variables set by Alfred. You can change defaults by passing one or more Options to New(). If you do not want to use Alfred's environment variables, or they aren't set (i.e. you're not running the code in Alfred), use NewFromEnv() with a custom Env implementation. A Workflow can be re-configured later using its Configure() method. See the documentation for Option for more information on configuring a Workflow. AwGo can check for and install new versions of your workflow. Subpackage update provides an implementation of the Updater interface and sources to load updates from GitHub or Gitea releases, or from the URL of an Alfred `metadata.json` file. See subpackage update and _examples/update. AwGo can filter Script Filter feedback using a Sublime Text-like fuzzy matching algorithm. Workflow.Filter() sorts feedback Items against the provided query, removing those that do not match. See _examples/fuzzy for a basic demonstration, and _examples/bookmarks for a demonstration of implementing fuzzy.Sortable on your own structs and customising the fuzzy sort settings. Fuzzy matching is done by package https://godoc.org/go.deanishe.net/fuzzy AwGo automatically configures the default log package to write to STDERR (Alfred's debugger) and a log file in the workflow's cache directory. The log file is necessary because background processes aren't connected to Alfred, so their output is only visible in the log. It is rotated when it exceeds 1 MiB in size. One previous log is kept. AwGo detects when Alfred's debugger is open (Workflow.Debug() returns true) and in this case prepends filename:linenumber: to log messages. 
The Config struct (which is included in Workflow as Workflow.Config) provides an interface to the workflow's settings from the Workflow Environment Variables panel (see https://www.alfredapp.com/help/workflows/advanced/variables/#environment). Alfred exports these settings as environment variables, and you can read them ad-hoc with the Config.Get*() methods, and save values back to Alfred/info.plist with Config.Set(). Using Config.To() and Config.From(), you can "bind" your own structs to the settings in Alfred: See the documentation for Config.To and Config.From for more information, and _examples/settings for a demo workflow based on the API. The Alfred struct provides methods for the rest of Alfred's AppleScript API. Amongst other things, you can use it to tell Alfred to open, to search for a query, to browse/action files & directories, or to run External Triggers. See documentation of the Alfred struct for more information. AwGo provides a basic, but useful, API for loading and saving data. In addition to reading/writing bytes and marshalling/unmarshalling to/from JSON, the API can auto-refresh expired cache data. See Cache and Session for the API documentation. Workflow has three caches tied to different directories: These all share (almost) the same API. The difference is in when the data go away. Data saved with Session are deleted after the user closes Alfred or starts using a different workflow. The Cache directory is in a system cache directory, so may be deleted by the system or "system maintenance" tools. The Data directory lives with Alfred's application data and would not normally be deleted. Subpackage util provides several functions for running script files and snippets of AppleScript/JavaScript code. See util for documentation and examples. AwGo offers a simple API to start/stop background processes via Workflow's RunInBackground(), IsRunning() and Kill() methods. This is useful for running checks for updates and other jobs that hit the network or take a significant amount of time to complete, allowing you to keep your Script Filters extremely responsive. See _examples/update and _examples/workflows for demonstrations of this API.
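To tie these pieces together, a minimal Script Filter might look like the sketch below. The item title and argument are invented; New, NewItem, SendFeedback and Run are the entry points described above.

    package main

    import aw "github.com/deanishe/awgo"

    var wf *aw.Workflow

    func init() {
        // Create a Workflow using default settings read from Alfred's
        // environment variables.
        wf = aw.New()
    }

    func run() {
        // Add an item for Alfred to display in the Script Filter.
        wf.NewItem("Hello, Alfred!").
            Subtitle("A minimal AwGo workflow").
            Arg("hello").
            Valid(true)

        // Send the items to Alfred as JSON on STDOUT.
        wf.SendFeedback()
    }

    func main() {
        // Run rescues panics, logs the stack trace and shows an error
        // message to the user in Alfred.
        wf.Run(run)
    }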
Package aw is a "plug-and-play" workflow development library/framework for Alfred 3 & 4 (https://www.alfredapp.com/). It requires Go 1.13 or later. It provides everything you need to create a polished and blazing-fast Alfred frontend for your project. As of AwGo 0.26, all applicable features of Alfred 4.1 are supported. The main features are: AwGo is an opinionated framework that expects to be used in a certain way in order to eliminate boilerplate. It *will* panic if not run in a valid, minimally Alfred-like environment. At a minimum the following environment variables should be set to meaningful values: NOTE: AwGo is currently in development. The API *will* change and should not be considered stable until v1.0. Until then, be sure to pin a version using go modules or similar. Be sure to also check out the _examples/ subdirectory, which contains some simple, but complete, workflows that demonstrate the features of AwGo and useful workflow idioms. Typically, you'd call your program's main entry point via Workflow.Run(). This way, the library will rescue any panic, log the stack trace and show an error message to the user in Alfred. In the Script box (Language = "/bin/bash"): To generate results for Alfred to show in a Script Filter, use the feedback API of Workflow: You can set workflow variables (via feedback) with Workflow.Var, Item.Var and Modifier.Var. See Workflow.SendFeedback for more documentation. Alfred requires a different JSON format if you wish to set workflow variables. Use the ArgVars (named for its equivalent element in Alfred) struct to generate output from Run Script actions. Be sure to set TextErrors to true to prevent Workflow from generating Alfred JSON if it catches a panic: See ArgVars for more information. New() creates a *Workflow using the default values and workflow settings read from environment variables set by Alfred. You can change defaults by passing one or more Options to New(). If you do not want to use Alfred's environment variables, or they aren't set (i.e. you're not running the code in Alfred), use NewFromEnv() with a custom Env implementation. A Workflow can be re-configured later using its Configure() method. See the documentation for Option for more information on configuring a Workflow. AwGo can check for and install new versions of your workflow. Subpackage update provides an implementation of the Updater interface and sources to load updates from GitHub or Gitea releases, or from the URL of an Alfred `metadata.json` file. See subpackage update and _examples/update. AwGo can filter Script Filter feedback using a Sublime Text-like fuzzy matching algorithm. Workflow.Filter() sorts feedback Items against the provided query, removing those that do not match. See _examples/fuzzy for a basic demonstration, and _examples/bookmarks for a demonstration of implementing fuzzy.Sortable on your own structs and customising the fuzzy sort settings. Fuzzy matching is done by package https://godoc.org/go.deanishe.net/fuzzy AwGo automatically configures the default log package to write to STDERR (Alfred's debugger) and a log file in the workflow's cache directory. The log file is necessary because background processes aren't connected to Alfred, so their output is only visible in the log. It is rotated when it exceeds 1 MiB in size. One previous log is kept. AwGo detects when Alfred's debugger is open (Workflow.Debug() returns true) and in this case prepends filename:linenumber: to log messages. 
The Config struct (which is included in Workflow as Workflow.Config) provides an interface to the workflow's settings from the Workflow Environment Variables panel (see https://www.alfredapp.com/help/workflows/advanced/variables/#environment). Alfred exports these settings as environment variables, and you can read them ad-hoc with the Config.Get*() methods, and save values back to Alfred/info.plist with Config.Set(). Using Config.To() and Config.From(), you can "bind" your own structs to the settings in Alfred: See the documentation for Config.To and Config.From for more information, and _examples/settings for a demo workflow based on the API. The Alfred struct provides methods for the rest of Alfred's AppleScript API. Amongst other things, you can use it to tell Alfred to open, to search for a query, to browse/action files & directories, or to run External Triggers. See documentation of the Alfred struct for more information. AwGo provides a basic, but useful, API for loading and saving data. In addition to reading/writing bytes and marshalling/unmarshalling to/from JSON, the API can auto-refresh expired cache data. See Cache and Session for the API documentation. Workflow has three caches tied to different directories: These all share (almost) the same API. The difference is in when the data go away. Data saved with Session are deleted after the user closes Alfred or starts using a different workflow. The Cache directory is in a system cache directory, so may be deleted by the system or "system maintenance" tools. The Data directory lives with Alfred's application data and would not normally be deleted. Subpackage util provides several functions for running script files and snippets of AppleScript/JavaScript code. See util for documentation and examples. AwGo offers a simple API to start/stop background processes via Workflow's RunInBackground(), IsRunning() and Kill() methods. This is useful for running checks for updates and other jobs that hit the network or take a significant amount of time to complete, allowing you to keep your Script Filters extremely responsive. See _examples/update and _examples/workflows for demonstrations of this API.
Package websocket implements the WebSocket protocol defined in RFC 6455. The Conn type represents a WebSocket connection. A server application uses the Upgrade function from an Upgrader object with an HTTP request handler to get a pointer to a Conn: Call the connection's WriteMessage and ReadMessage methods to send and receive messages as a slice of bytes. This snippet of code shows how to echo messages using these methods: In the above snippet of code, p is a []byte and messageType is an int with value websocket.BinaryMessage or websocket.TextMessage. An application can also send and receive messages using the io.WriteCloser and io.Reader interfaces. To send a message, call the connection NextWriter method to get an io.WriteCloser, write the message to the writer and close the writer when done. To receive a message, call the connection NextReader method to get an io.Reader and read until io.EOF is returned. This snippet shows how to echo messages using the NextWriter and NextReader methods: The WebSocket protocol distinguishes between text and binary data messages. Text messages are interpreted as UTF-8 encoded text. The interpretation of binary messages is left to the application. This package uses the TextMessage and BinaryMessage integer constants to identify the two data message types. The ReadMessage and NextReader methods return the type of the received message. The messageType argument to the WriteMessage and NextWriter methods specifies the type of a sent message. It is the application's responsibility to ensure that text messages are valid UTF-8 encoded text. The WebSocket protocol defines three types of control messages: close, ping and pong. Call the connection WriteControl, WriteMessage or NextWriter methods to send a control message to the peer. Connections handle received close messages by sending a close message to the peer and returning a *CloseError from the NextReader, ReadMessage or the message Read method. Connections handle received ping and pong messages by invoking callback functions set with the SetPingHandler and SetPongHandler methods. The callback functions are called from the NextReader, ReadMessage and the message Read methods. The default ping handler sends a pong to the peer. The application's reading goroutine can block for a short time while the handler writes the pong data to the connection. The application must read the connection to process ping, pong and close messages sent from the peer. If the application is not otherwise interested in messages from the peer, then the application should start a goroutine to read and discard messages from the peer. A simple example is: Connections support one concurrent reader and one concurrent writer. Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently. The Close and WriteControl methods can be called concurrently with all other methods. Web browsers allow Javascript applications to open a WebSocket connection to any host. It's up to the server to enforce an origin policy using the Origin request header sent by the browser. The Upgrader calls the function specified in the CheckOrigin field to check the origin. If the CheckOrigin function returns false, then the Upgrade method fails the WebSocket handshake with HTTP status 403.
If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail the handshake if the Origin request header is present and not equal to the Host request header. An application can allow connections from any origin by specifying a function that always returns true: The deprecated Upgrade function does not enforce an origin policy. It's the application's responsibility to check the Origin header before calling Upgrade.
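For instance, the always-true origin function mentioned above can be plugged into an Upgrader like this (a sketch; the route and listen address are arbitrary, and allowing all origins should be combined with the application's own origin or CSRF checks):

    package main

    import (
        "log"
        "net/http"

        "github.com/gorilla/websocket"
    )

    // Allow connections from any origin by overriding the default check.
    var upgrader = websocket.Upgrader{
        CheckOrigin: func(r *http.Request) bool { return true },
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            log.Println("upgrade:", err)
            return
        }
        defer conn.Close()
        // ... use conn ...
    }

    func main() {
        http.HandleFunc("/ws", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }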
Package main is the UBNT edgeos-dnsmasq-blacklist dnsmasq DNS blacklisting and redirection package. View the software license here (https://github.com/britannic/blacklist/blob/master/LICENSE.txt). Project links: latest version (https://github.com/britannic/blacklist), Go documentation (https://godoc.org/github.com/britannic/blacklist), build status (https://travis-ci.org/britannic/blacklist), test coverage (https://coveralls.io/github/britannic/blacklist?branch=master), Go Report Card (https://goreportcard.com/report/github.com/britannic/blacklist). Follow the conversation @ community.ubnt.com (https://community.ubnt.com/t5/EdgeRouter/DNS-Adblocking-amp-Blacklisting-dnsmasq-Configuration/td-p/2215008/jump-to/first-unread-message). Please show your thanks by donating to the project using Square Cash (https://cash.me/$HelmRockSecurity/) or PayPal (https://www.paypal.me/helmrocksecurity/). We greatly appreciate any and all donations - Thank you! Funds go to maintaining development servers and networks. Note: This is 3rd party software and isn't supported or endorsed by Ubiquiti Networks® • Overview (#overview) • Donate (#donations-and-sponsorship) • Copyright (#copyright) • Licenses (#licenses) • Latest Version (#latest-version) • Change Log (https://github.com/britannic/blacklist/blob/master/CHANGELOG.md) • Features (#features) • Compatibility (#compatibility) • Installation (#installation) • Using apt-get (#apt-get-installation---erlite-3-erpoe-5-er-x-er-x-sfp--unifi-gateway-3) • Using dpkg (#dpkg-installation---best-for-disk-space-constrained-routers) • Upgrade (#upgrade) • Removal (#removal) • Frequently Asked Questions (#frequently-asked-questions) • Can I donate to the project?
(#donations-and-sponsorship) • Does the install backup my blacklist configuration before deleting it? (#does-the-install-backup-my-blacklist-configuration-before-deleting-it) • Does update-dnsmasq run automatically? (#does-update-dnsmasq-run-automatically) • How do I add or delete sources? (#how-do-i-add-or-delete-sources) • How do I back up my blacklist configuration and restore it later? (#how-do-i-back-up-my-blacklist-configuration-and-restore-it-later) • How do I configure dnsmasq? (#how-do-i-configure-dnsmasq) • How do I configure local file sources instead of internet based ones? (#how-do-i-configure-local-file-sources-instead-of-internet-based-ones) • How do I disable/enable dnsmasq blacklisting? (#how-do-i-disableenable-dnsmasq-blacklisting) • How do I exclude or include a host or a domain? (#how-do-i-exclude-or-include-a-host-or-a-domain) • How do I globally exclude or include hosts or domains? (#how-do-i-globally-exclude-or-include-hosts-or-a-domains) • How do I use the command line switches? (#how-do-i-use-the-command-line-switches) • How can I keep my USG configuration after an upgrade, provision or reboot? (#how-do-can-keep-my-usg-configuration-after-an-upgrade-provision-or-reboot) • How does whitelisting work? (#how-does-whitelisting-work) • What is the difference between blocking domains and hosts? (#what-is-the-difference-between-blocking-domains-and-hosts) • Which blacklist sources are installed by default? (#which-blacklist-sources-are-installed-by-default) EdgeMax dnsmasq DNS blacklisting and redirection is inspired by the users at EdgeMAX Community (https://community.ubnt.com/t5/EdgeMAX/bd-p/EdgeMAX/) [Top] (#contents) • Copyright © 2019 Helm Rock Consulting (https://www.helmrock.com/) [Top] (#contents) Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. The views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of the FreeBSD Project. [Top] (#contents) Latest version (https://github.com/britannic/blacklist/releases/latest) Release v1.1.6.2 (April 24, 2018) • Code refactor • Global whitelist and blacklist configuration files now have their own prefix: "roots" i.e.
[Top] (#contents) • See changelog (https://github.com/britannic/blacklist/blob/master/CHANGELOG.md) for details. [Top] (#contents) • Adds DNS blacklisting integration to the EdgeRouter configuration • Generates configuration files used directly by dnsmasq to redirect dns lookups • Integrated with the EdgeMax OS CLI • Any FQDN in the blacklist will force dnsmasq to return the configured dns redirect IP address [Top] (#contents) • edgeos-dnsmasq-blacklist has been tested on the EdgeRouter ERLite-3, ERPoe-5, ER-X and UniFi Security Gateway USG-3 routers • EdgeMAX versions: v1.9.7+hotfix.4-v1.10.1, UniFi: v4.4.12-v4.4.18 • Integration could be adapted to work on VyOS and Vyatta derived ports, since EdgeOS is a fork and port of Vyatta 6.3 [Top] (#contents) • Using apt-get (#apt-get-installation---erlite-3-erpoe-5-er-x-er-x-sfp--unifi-gateway-3) - works for all routers • Using dpkg (#dpkg-installation---best-for-disk-space-constrained-routers) - best for disk space constrained routers [Top] (#contents) apt-get Installation - ERLite-3, ERPoe-5, ER-X, ER-X-SFP & UniFi-Gateway-3 • Add the blacklist Debian package repository using the router's CLI shell • Add the GPG signing key • Update the system repositories and install edgeos-dnsmasq-blacklist [Top] (#contents) dpkg Installation - best for disk space constrained routers EdgeRouter ERLite-3, ERPoe-5 & UniFi-Gateway-3 [Top] (#contents) EdgeRouter ER-X & ER-X-SFP • Ensure the router has enough space by removing unnecessary files • Now download and install the edgeos-dnsmasq-blacklist package [Top] (#contents) • If the repository is set up and you are using apt-get: • Note, if you are using dpkg, it cannot upgrade packages, so follow these instructions (#dpkg-installation---best-for-disk-space-constrained-routers) and the previous package version will be automatically removed before the new package version is installed [Top] (#contents) EdgeMAX - All Platforms [Top] (#contents) How do I disable/enable dnsmasq blacklisting? • Use these CLI configure commands: • Disable: • Enable: [Top] (#contents) Does the install backup my blacklist configuration before deleting it? • If a blacklist configuration already exists, the install routine will automatically back it up to /config/user-data/blacklist.$(date +'%FT%H%M%S').cmds [Top] (#contents) How do I back up my blacklist configuration and restore it later? • Use the following commands (make a note of the file name): • After installing the latest version, you can merge your backed up configuration: • If you prefer to delete the default configuration and restore your previous configuration, run these commands: [Top] (#contents) Which blacklist sources are installed by default? • You can use this command in the CLI shell to view the current sources after installation, or view the log to see previous downloads: [Top] (#contents) How do I configure local file sources instead of internet based ones? • Use these commands to configure a local file source • File contents example for /config/user-data/blist.hosts.src: [Top] (#contents) How can I keep my USG configuration after an upgrade, provision or reboot?
• Follow these instructions (https://britannic.github.io/install-edgeos-packages/) on how to automatically install edgeos-dnsmasq-blacklist • Create a config.gateway.json file following these instructions (https://help.ubnt.com/hc/en-us/articles/215458888-UniFi-How-to-further-customize-USG-configuration-with-config-gateway-json) • Here's a sample config.gateway.json (https://raw.githubusercontent.com/britannic/blacklist/master/config.gateway.json) [Top] (#contents) How do I add or delete sources? • Using the CLI configure command, to delete domains and hosts sources: • To add a source, first check it can serve a text list and also note the prefix (if any) before the hosts or domains, e.g. http://www.malwaredomainlist.com/ (http://www.malwaredomainlist.com/) has this format: • So the prefix is "127.0.0.1 " • Here's how to create the source in the CLI: [Top] (#contents) How do I globally exclude or include hosts or domains? • Use these example commands to globally include or exclude blacklisted entries: [Top] (#contents) How do I exclude or include a host or a domain? • Use these example commands to include or exclude blacklisted entries: [Top] (#contents) How does whitelisting work? • dnsmasq will whitelist any entries in the configuration file domains and hosts (servers) that have a hash in place of an IP address (the "#" forces dnsmasq to forward the DNS request to the router's configured nameservers) • i.e. servers (hosts) • i.e. domains [Top] (#contents) Does update-dnsmasq run automatically? • Yes, a scheduled task is created and run daily at midnight; a random start delay ensures other routers in the same time zone won't overload the source servers. • The random start delay window is configured in seconds using this command - this example sets the start delay between 1-10800 seconds (0-3 hours): • It can be reconfigured using these CLI configuration commands: • For example, to change the execution interval to every 6 hours, use this command: • In daily use, no additional interaction with update-dnsmasq is required. By default, cron will run update-dnsmasq at midnight each day to download the blacklist sources and update the dnsmasq configuration files in /etc/dnsmasq.d. dnsmasq will automatically be reloaded after the configuration file update is completed. [Top] (#contents) How do I use the command line switches? • update-dnsmasq has the following commandline switches available: [Top] (#contents) How do I configure dnsmasq? • dnsmasq may need to be configured to ensure blacklisting works correctly • Here is an example using the EdgeOS configuration shell [Top] (#contents) What is the difference between blocking domains and hosts? • The difference lies in the order of update-dnsmasq's processing algorithm. Domains are processed first and take precedence over hosts, so a blacklisted domain will force update-dnsmasq's source parser to exclude subsequent hosts from the same domain. This reduces dnsmasq's list of lookups, since it will automatically redirect hosts for a blacklisted domain; a sketch of this rule follows below. [Top] (#contents)
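As an illustration of the precedence rule above, the following hypothetical Go sketch (not the actual update-dnsmasq source) drops host entries already covered by a blacklisted domain:

    package main

    import (
        "fmt"
        "strings"
    )

    // pruneHosts removes hosts whose parent domain is blacklisted, since
    // dnsmasq already redirects every host under a blacklisted domain.
    func pruneHosts(domains map[string]bool, hosts []string) []string {
        var kept []string
    outer:
        for _, h := range hosts {
            labels := strings.Split(h, ".")
            for i := range labels {
                if domains[strings.Join(labels[i:], ".")] {
                    continue outer // covered by a blacklisted domain
                }
            }
            kept = append(kept, h)
        }
        return kept
    }

    func main() {
        domains := map[string]bool{"adsrvr.org": true}
        hosts := []string{"ads.adsrvr.org", "tracker.example.net"}
        fmt.Println(pruneHosts(domains, hosts)) // [tracker.example.net]
    }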
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides infrastructure and primitives to ease this process. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire:

Request Objects

    {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]}

NOTE: Notifications are the same format except the id field is null.

Response Objects

    {"result":SOMETHING,"error":null,"id":"SOMEID"}
    {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"}

For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command, such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, run-time, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. External packages can and should implement types implementing Command for use with MarshalCmd/ParseParams. The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use.
These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
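The example referred to above might look roughly like the following sketch. It assumes, as in btcjson-style packages, that Response exposes Result as a json.RawMessage alongside the Error field, and uses a made-up getblockcount reply:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"

        "github.com/decred/dcrd/dcrjson/v3"
    )

    func main() {
        // A hypothetical response to a getblockcount request.
        raw := []byte(`{"result":350001,"error":null,"id":1}`)

        // Step 1: unmarshal into the generic Response type, which gives
        // access to the ID and Error fields alongside the raw result.
        var resp dcrjson.Response
        if err := json.Unmarshal(raw, &resp); err != nil {
            log.Fatal(err)
        }
        if resp.Error != nil {
            log.Fatal(resp.Error)
        }

        // Step 2: unmarshal the raw result into the concrete type
        // expected for this method.
        var count int64
        if err := json.Unmarshal(resp.Result, &count); err != nil {
            log.Fatal(err)
        }
        fmt.Println("block count:", count)
    }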
Package websocket implements the WebSocket protocol defined in RFC 6455. The Conn type represents a WebSocket connection. A server application calls the Upgrader.Upgrade method from an HTTP request handler to get a *Conn: Call the connection's WriteMessage and ReadMessage methods to send and receive messages as a slice of bytes. This snippet of code shows how to echo messages using these methods: In above snippet of code, p is a []byte and messageType is an int with value websocket.BinaryMessage or websocket.TextMessage. An application can also send and receive messages using the io.WriteCloser and io.Reader interfaces. To send a message, call the connection NextWriter method to get an io.WriteCloser, write the message to the writer and close the writer when done. To receive a message, call the connection NextReader method to get an io.Reader and read until io.EOF is returned. This snippet shows how to echo messages using the NextWriter and NextReader methods: The WebSocket protocol distinguishes between text and binary data messages. Text messages are interpreted as UTF-8 encoded text. The interpretation of binary messages is left to the application. This package uses the TextMessage and BinaryMessage integer constants to identify the two data message types. The ReadMessage and NextReader methods return the type of the received message. The messageType argument to the WriteMessage and NextWriter methods specifies the type of a sent message. It is the application's responsibility to ensure that text messages are valid UTF-8 encoded text. The WebSocket protocol defines three types of control messages: close, ping and pong. Call the connection WriteControl, WriteMessage or NextWriter methods to send a control message to the peer. Connections handle received close messages by calling the handler function set with the SetCloseHandler method and by returning a *CloseError from the NextReader, ReadMessage or the message Read method. The default close handler sends a close message to the peer. Connections handle received ping messages by calling the handler function set with the SetPingHandler method. The default ping handler sends a pong message to the peer. Connections handle received pong messages by calling the handler function set with the SetPongHandler method. The default pong handler does nothing. If an application sends ping messages, then the application should set a pong handler to receive the corresponding pong. The control message handler functions are called from the NextReader, ReadMessage and message reader Read methods. The default close and ping handlers can block these methods for a short time when the handler writes to the connection. The application must read the connection to process close, ping and pong messages sent from the peer. If the application is not otherwise interested in messages from the peer, then the application should start a goroutine to read and discard messages from the peer. A simple example is: Connections support one concurrent reader and one concurrent writer. Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently. The Close and WriteControl methods can be called concurrently with all other methods. 
Web browsers allow Javascript applications to open a WebSocket connection to any host. It's up to the server to enforce an origin policy using the Origin request header sent by the browser. The Upgrader calls the function specified in the CheckOrigin field to check the origin. If the CheckOrigin function returns false, then the Upgrade method fails the WebSocket handshake with HTTP status 403. If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail the handshake if the Origin request header is present and the Origin host is not equal to the Host request header. The deprecated package-level Upgrade function does not perform origin checking. The application is responsible for checking the Origin header before calling the Upgrade function. Connections buffer network input and output to reduce the number of system calls when reading or writing messages. Write buffers are also used for constructing WebSocket frames. See RFC 6455, Section 5 for a discussion of message framing. A WebSocket frame header is written to the network each time a write buffer is flushed to the network. Decreasing the size of the write buffer can increase the amount of framing overhead on the connection. The buffer sizes in bytes are specified by the ReadBufferSize and WriteBufferSize fields in the Dialer and Upgrader. The Dialer uses a default size of 4096 when a buffer size field is set to zero. The Upgrader reuses buffers created by the HTTP server when a buffer size field is set to zero. The HTTP server buffers have a size of 4096 at the time of this writing. The buffer sizes do not limit the size of a message that can be read or written by a connection. Buffers are held for the lifetime of the connection by default. If the Dialer or Upgrader WriteBufferPool field is set, then a connection holds the write buffer only when writing a message. Applications should tune the buffer sizes to balance memory use and performance. Increasing the buffer size uses more memory, but can reduce the number of system calls to read or write the network. In the case of writing, increasing the buffer size can reduce the number of frame headers written to the network. Some guidelines for setting buffer parameters are: Limit the buffer sizes to the maximum expected message size. Buffers larger than the largest message do not provide any benefit. Depending on the distribution of message sizes, setting the buffer size to a value less than the maximum expected message size can greatly reduce memory use with a small impact on performance. Here's an example: If 99% of the messages are smaller than 256 bytes and the maximum message size is 512 bytes, then a buffer size of 256 bytes will result in about 1.01 times as many system calls as a buffer size of 512 bytes. The memory savings is 50%. A write buffer pool is useful when the application has a modest number of writes over a large number of connections. When buffers are pooled, a larger buffer size has a reduced impact on total memory use and has the benefit of reducing system calls and frame overhead. Per message compression extensions (RFC 7692) are experimentally supported by this package in a limited capacity. Setting the EnableCompression option to true in Dialer or Upgrader will attempt to negotiate per message deflate support. If compression was successfully negotiated with the connection's peer, any message received in compressed form will be automatically decompressed. All Read methods will return uncompressed bytes.
Package blackfriday is a markdown processor. It translates plain text with simple formatting rules into an AST, which can then be further processed to HTML (provided by Blackfriday itself) or other formats (provided by the community). The simplest way to invoke Blackfriday is to call the Run function. It will take a text input and produce a text output in HTML (or other format). A slightly more sophisticated way to use Blackfriday is to create a Markdown processor and to call Parse, which returns a syntax tree for the input document. You can leverage Blackfriday's parsing for content extraction from markdown documents. You can assign a custom renderer and set various options to the Markdown processor. If you're interested in calling Blackfriday from command line, see https://github.com/russross/blackfriday-tool. Blackfriday includes an algorithm for creating sanitized anchor names corresponding to a given input text. This algorithm is used to create anchors for headings when the AutoHeadingIDs extension is enabled. The algorithm is specified below, so that other packages can create compatible anchor names and links to those anchors. The algorithm iterates over the input text, interpreted as UTF-8, one Unicode code point (rune) at a time. All runes that are letters (category L) or numbers (category N) are considered valid characters. They are mapped to lower case, and included in the output. All other runes are considered invalid characters. Invalid characters that precede the first valid character, as well as invalid characters that follow the last valid character, are dropped completely. All other sequences of invalid characters between two valid characters are replaced with a single dash character '-'. SanitizedAnchorName exposes this functionality, and can be used to create compatible links to the anchor names generated by blackfriday. This algorithm is also implemented in a small standalone package at github.com/shurcooL/sanitized_anchor_name. It can be useful for clients that want a small package and don't need the full functionality of blackfriday.
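As a brief illustration (assuming the v2 module path github.com/russross/blackfriday/v2), Run converts markdown to HTML and SanitizedAnchorName applies the anchor algorithm described above:

    package main

    import (
        "fmt"

        "github.com/russross/blackfriday/v2"
    )

    func main() {
        md := []byte("# My Heading, Part 2!\n\nSome *markdown* text.")
        fmt.Println(string(blackfriday.Run(md))) // HTML output

        // Letters and numbers are lowercased; runs of other characters
        // between them collapse to a single dash.
        fmt.Println(blackfriday.SanitizedAnchorName("My Heading, Part 2!"))
        // Output: my-heading-part-2
    }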
Package dcrjson provides infrastructure for working with Decred JSON-RPC APIs. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides infrastructure and primitives to ease this process. This information is not necessary in order to use this package, but it does provide some intuition into what the marshalling and unmarshalling that is discussed below is doing under the hood. As defined by the JSON-RPC spec, there are effectively two forms of messages on the wire: Request Objects {"jsonrpc":"1.0","id":"SOMEID","method":"SOMEMETHOD","params":[SOMEPARAMS]} NOTE: Notifications are the same format except the id field is null. Response Objects {"result":SOMETHING,"error":null,"id":"SOMEID"} {"result":null,"error":{"code":SOMEINT,"message":SOMESTRING},"id":"SOMEID"} For requests, the params field can vary in what it contains depending on the method (a.k.a. command) being sent. Each parameter can be as simple as an int or a complex structure containing many nested fields. The id field is used to identify a request and will be included in the associated response. When working with streamed RPC transports, such as websockets, spontaneous notifications are also possible. As indicated, they are the same as a request object, except they have the id field set to null. Therefore, servers will ignore requests with the id field set to null, while clients can choose to consume or ignore them. Unfortunately, the original Bitcoin JSON-RPC API (and hence anything compatible with it) doesn't always follow the spec and will sometimes return an error string in the result field with a null error for certain commands. However, for the most part, the error field will be set as described on failure. To simplify the marshalling of the requests and responses, the MarshalCmd and MarshalResponse functions are provided. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides the NewCmd function which takes a method (command) name and variable arguments. The function includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, run-time checks, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. External packages can and should implement types implementing Command for use with MarshalCmd/ParseParams. The command handling of this package is built around the concept of registered commands. This is true for the wide variety of commands already provided by the package, but it also means callers can easily provide custom commands with all of the same functionality as the built-in commands. Use the RegisterCmd function for this purpose. A list of all registered methods can be obtained with the RegisteredCmdMethods function. All registered commands are registered with flags that identify information such as whether the command applies to a chain server, wallet server, or is a notification along with the method name to use.
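To make the two wire forms concrete, here is a self-contained sketch that decodes a raw message using only encoding/json; the local struct mirrors the wire format quoted above and is not a type from this package:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // rpcRequest is an illustrative view of the Request wire form.
    type rpcRequest struct {
        Jsonrpc string            `json:"jsonrpc"`
        ID      interface{}       `json:"id"`
        Method  string            `json:"method"`
        Params  []json.RawMessage `json:"params"`
    }

    func main() {
        raw := []byte(`{"jsonrpc":"1.0","id":null,"method":"ping","params":[]}`)
        var req rpcRequest
        if err := json.Unmarshal(raw, &req); err != nil {
            panic(err)
        }
        if req.ID == nil {
            fmt.Println("notification:", req.Method) // servers ignore these
        }
    }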
These flags can be obtained with the MethodUsageFlags function, and the method can be obtained with the CmdMethod function. To facilitate providing consistent help to users of the RPC server, this package exposes the GenerateHelp function, which uses reflection on registered commands or notifications to generate the final help text. In addition, the MethodUsageText function is provided to generate consistent one-line usage for registered commands and notifications using reflection. There are two distinct types of errors supported by this package: The first category of errors (type Error) typically indicates a programmer error and can be avoided by properly using the API. Errors of this type will be returned from the various functions available in this package. They identify issues such as unsupported field types, attempts to register malformed commands, and attempting to create a new command with an improper number of parameters. The specific reason for the error can be detected by type asserting it to a *dcrjson.Error and accessing the ErrorCode field. The second category of errors (type RPCError), on the other hand, are useful for returning errors to RPC clients. Consequently, they are used in the previously described Response type. This example demonstrates how to unmarshal a JSON-RPC response and then unmarshal the result field in the response to a concrete type.
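A hedged sketch of telling the first error category apart: the type assertion and the ErrorCode field follow the description above, while the import path and the exact NewCmd call are assumptions for illustration:

    package main

    import (
        "fmt"

        "github.com/decred/dcrd/dcrjson/v4" // import path is an assumption
    )

    func main() {
        // NewCmd on a method that was never registered should fail with
        // the first error category (type Error).
        _, err := dcrjson.NewCmd("bogusmethod")
        if jerr, ok := err.(*dcrjson.Error); ok {
            fmt.Println("dcrjson error code:", jerr.ErrorCode)
        }
    }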
Package dataset includes the operations needed for processing collections of JSON documents and their attachments. Authors R. S. Doiel, <rsdoiel@library.caltech.edu> and Tom Morrel, <tmorrell@library.caltech.edu> Copyright (c) 2021, Caltech All rights not granted herein are expressly reserved by Caltech. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. Package dataset provides a common approach for storing JSON object documents on local disc.
It is intended as a single user system for intermediate processing of JSON content for analysis or batch processing. It is not a database management system (if you need a JSON database system I would suggest looking at CouchDB, MongoDB and Redis as a starting point). The approach dataset takes is to store JSON documents in a pairtree structure under the collection folder. The keys are the JSON document names. JSON documents (and possibly their attachments) are then stored based on that assignment in the pairtree. Conversely, the collection.json document is used to find and retrieve documents from the collection. The layout of the metadata is as follows: + Collection - a directory A key feature of dataset is to be Posix shell friendly. This has led to storing the JSON documents in a directory structure that standard Posix tooling can traverse. It has also meant that the JSON documents themselves remain on "disc" as plain text. This has facilitated integration with many other applications, programming languages and systems. Attachments are non-JSON documents explicitly "attached" that share the same pairtree path but are placed in a subdirectory called "_". If the document name is "Jane.Doe.json" and the attachment is photo.jpg, the JSON document is "pairtree/Ja/ne/.D/oe/Jane.Doe.json" and the photo is in "pairtree/Ja/ne/.D/oe/_/photo.jpg". Additional operations besides storing and reading JSON documents are also supported. These include creating lists (arrays) of JSON documents from a list of keys, listing keys in the collection, counting documents in the collection, and indexing and searching by indexes. The primary use case driving the development of dataset is harvesting API content for library systems (e.g. EPrints, Invenio, ArchivesSpace, ORCID, CrossRef, OCLC). The harvesting needed to be done in such a way as to leverage existing Posix tooling (e.g. grep, sed, etc.) for processing and analysis. Initial use case: Caltech Library has many repository, catalog and record management systems (e.g. EPrints, Invenio, ArchivesSpace, Islandora). It is common practice to harvest data from these systems for analysis or processing. Harvested records typically come in XML or JSON format. JSON has proven a flexible way for working with the data, and in our more modern tools it is the common format we use to move data around. We needed a way to standardize how we stored these JSON records for intermediate processing to allow us to use the growing ecosystem of JSON related tooling available under Posix/Unix compatible systems.
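The pair-splitting itself is easy to sketch; this standalone helper is for illustration only and is not the package's own implementation:

    package main

    import (
        "fmt"
        "path"
    )

    // pairParts splits a key into two-character chunks, following the
    // pairtree scheme described above: "Jane.Doe" -> Ja, ne, .D, oe.
    func pairParts(key string) []string {
        r := []rune(key)
        var parts []string
        for i := 0; i < len(r); i += 2 {
            end := i + 2
            if end > len(r) {
                end = len(r)
            }
            parts = append(parts, string(r[i:end]))
        }
        return parts
    }

    func main() {
        p := path.Join(pairParts("Jane.Doe")...)
        fmt.Println(path.Join("pairtree", p, "Jane.Doe.json"))
        // pairtree/Ja/ne/.D/oe/Jane.Doe.json
        fmt.Println(path.Join("pairtree", p, "_", "photo.jpg"))
        // pairtree/Ja/ne/.D/oe/_/photo.jpg
    }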
Package textrank is an implementation of the TextRank algorithm in Go with extendable features (automatic summarization, phrase extraction). It supports multithreading via goroutines. The package is under The MIT Licence. If only there were a program that could continuously rank the words, phrases and sentences of book-sized texts on multiple threads, that was open to modification through objects, that was written in a simple, secure, static language, and that was very well documented... Now, here it is. - Find the most important phrases. - Find the most important words. - Find the most important N sentences. - Importance by phrase weights. - Importance by word occurrence. - Find the first N sentences, starting from the Xth sentence. - Find sentences by phrase chains ordered by position in the text. - Access to the whole ranked data. - Support for more languages. - The weighting algorithm can be modified by implementing the Algorithm interface. - The parser can be modified by implementing the Parser interface. - Multi-thread support. Find the most important phrases: This is the most basic and simplest usage of textrank. All possible pre-defined finder queries: After ranking, the graph contains a lot of valuable data, and the textrank package provides functions to retrieve that data from the graph. The GetRank function allows access to the graph, and every piece of data can be retrieved from this structure. Adding text continuously: It is possible to add more text after other texts have already been added. The Ranking function can merge these multiple texts and recalculate the weights and all related data. Using a different algorithm to rank text: Two algorithms are implemented; it is possible to write a custom algorithm by implementing the Algorithm interface and to use it instead of the defaults. Using multiple graphs: Graph IDs exist because it is possible to run multiple independent text ranking processes. Using different non-English languages: English is used by default but it is possible to add any language. To use other languages a stop word list is required, which you can find at https://github.com/stopwords-iso Asynchronous usage with goroutines: It is thread safe. Independent graphs can receive texts at the same time and can be extended with more text, also at the same time.
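A hedged sketch of the basic phrase-finding flow; the import path and the constructor and finder names below are assumptions about the package's API rather than a verified example:

    package main

    import (
        "fmt"

        "github.com/DavidBelicza/TextRank" // import path is an assumption
    )

    func main() {
        tr := textrank.NewTextRank()
        // Default language, parsing rule and ranking algorithm; each can
        // be swapped via the interfaces described above.
        lang := textrank.NewDefaultLanguage()
        rule := textrank.NewDefaultRule()
        algo := textrank.NewDefaultAlgorithm()

        tr.Populate("Some long input text to rank.", lang, rule)
        tr.Ranking(algo)

        for _, phrase := range textrank.FindPhrases(tr) {
            fmt.Println(phrase)
        }
    }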
Package types implements concrete types for the dcrwallet JSON-RPC API. When communicating via the JSON-RPC protocol, all of the commands need to be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here, however it is also instructive to read the documentation for the dcrjson package (https://godoc.org/github.com/decred/dcrd/dcrjson). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation. To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, run-time checks, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
Package tview implements rich widgets for terminal-based user interfaces. The widgets provided with this package are useful for data exploration and data entry. The package implements the following widgets: The package also provides Application which is used to poll the event queue and draw widgets on screen. The following is a very basic example showing a box with the title "Hello, world!": First, we create a box primitive with a border and a title. Then we create an application, set the box as its root primitive, and run the event loop. The application exits when the application's Stop() function is called or when Ctrl-C is pressed. If we have a primitive which consumes key presses, we call the application's SetFocus() function to redirect all key presses to that primitive. Most primitives then offer ways to install handlers that allow you to react to any actions performed on them. You will find more demos in the "demos" subdirectory. It also contains a presentation (written using tview) which gives an overview of the different widgets and how they can be used. Throughout this package, colors are specified using the tcell.Color type. Functions such as tcell.GetColor(), tcell.NewHexColor(), and tcell.NewRGBColor() can be used to create colors from W3C color names or RGB values. Almost all strings which are displayed can contain color tags. Color tags are W3C color names or six hexadecimal digits following a hash tag, wrapped in square brackets. Examples: A color tag changes the color of the characters following that color tag. This applies to almost everything from box titles, list text, form item labels, to table cells. In a TextView, this functionality has to be switched on explicitly. See the TextView documentation for more information. Color tags may contain not just the foreground (text) color but also the background color and additional flags. In fact, the full definition of a color tag is as follows: Each of the three fields can be left blank and trailing fields can be omitted. (Empty square brackets "[]", however, are not considered color tags.) Colors that are not specified will be left unchanged. A field with just a dash ("-") means "reset to default". You can specify the following flags (some flags may not be supported by your terminal): Examples: In the rare event that you want to display a string such as "[red]" or "[#00ff1a]" without applying its effect, you need to put an opening square bracket before the closing square bracket. Note that the text inside the brackets will be matched less strictly than region or color tags. I.e. any character that may be used in color or region tags will be recognized. Examples: You can use the Escape() function to insert brackets automatically where needed. When primitives are instantiated, they are initialized with colors taken from the global Styles variable. You may change this variable to adapt the look and feel of the primitives to your preferred style. This package supports Unicode characters including wide characters. Many functions in this package are not thread-safe. For many applications, this may not be an issue: If your code makes changes in response to key events, it will execute in the main goroutine and thus will not cause any race conditions. If you access your primitives from other goroutines, however, you will need to synchronize execution.
The easiest way to do this is to call Application.QueueUpdate() or Application.QueueUpdateDraw() (see the function documentation for details): One exception to this is the io.Writer interface implemented by TextView. You can safely write to a TextView from any goroutine. See the TextView documentation for details. You can also call Application.Draw() from any goroutine without having to wrap it in QueueUpdate(). And, as mentioned above, key event callbacks are executed in the main goroutine and thus should not use QueueUpdate() as that may lead to deadlocks. All widgets listed above contain the Box type. All of Box's functions are therefore available for all widgets, too. All widgets also implement the Primitive interface. There is also the Focusable interface which is used to override functions in subclassing types. The tview package is based on https://github.com/gdamore/tcell. It uses types and constants from that package (e.g. colors and keyboard values). This package does not process mouse input (yet).
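Putting the pieces together, here is a minimal runnable sketch: the basic bordered box from the example mentioned above, plus a queued update from another goroutine:

    package main

    import (
        "time"

        "github.com/rivo/tview"
    )

    func main() {
        box := tview.NewBox().SetBorder(true).SetTitle("Hello, world!")
        app := tview.NewApplication()

        // Updates from other goroutines must be synchronized with the
        // event loop via QueueUpdateDraw.
        go func() {
            time.Sleep(2 * time.Second)
            app.QueueUpdateDraw(func() {
                box.SetTitle("Updated safely")
            })
        }()

        if err := app.SetRoot(box, true).Run(); err != nil {
            panic(err)
        }
    }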
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. 
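To make the RETURNING advice concrete, a small sketch; the table, column and connection parameters are placeholders, and the import path assumes the lib/pq lineage of this driver:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq" // adjust to the fork you actually use
    )

    func main() {
        db, err := sql.Open("postgres", "user=pqgotest dbname=pqgotest sslmode=verify-full")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // LastInsertId is unsupported; fetch the new id with RETURNING.
        var id int
        err = db.QueryRow(
            "INSERT INTO users(name) VALUES($1) RETURNING id", "alice",
        ).Scan(&id)
        if err != nil {
            log.Fatal(err)
        }
        log.Println("inserted row id:", id)
    }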
To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at http://godoc.org/github.com/flynn/pq/listen_example.
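A condensed sketch of Listener usage under the same assumptions; the connection string, channel name and reconnect intervals are placeholders:

    package main

    import (
        "log"
        "time"

        "github.com/lib/pq" // adjust to the fork you actually use
    )

    func main() {
        conninfo := "dbname=pqgotest sslmode=disable"
        listener := pq.NewListener(conninfo, 10*time.Second, time.Minute, nil)
        if err := listener.Listen("events"); err != nil {
            log.Fatal(err)
        }
        for n := range listener.Notify {
            if n == nil {
                // Nil is sent after a reconnect; notifications may have
                // been lost, so re-check state if needed.
                log.Println("connection re-established")
                continue
            }
            log.Printf("notification on %q: %s", n.Channel, n.Extra)
        }
    }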
Package types implements concrete types for marshalling to and from the dcrd JSON-RPC commands, return values, and notifications. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here, however it is also instructive to read the documentation for the dcrjson package (https://pkg.go.dev/github.com/Decred-Next/dcrnd/dcrjson/version4/v8). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation. To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and includes full checking to ensure the parameters are accurate according to the provided method; however, these checks are, obviously, run-time checks, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
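A hedged sketch of the two approaches; NewGetBlockCountCmd stands in for the New<Foo>Cmd pattern described above, and both import paths are assumptions:

    package main

    import (
        "fmt"

        "github.com/decred/dcrd/dcrjson/v4"           // assumed path
        "github.com/decred/dcrd/rpc/jsonrpc/types/v4" // assumed path
    )

    func main() {
        // Preferred: static constructor, checked at compile time.
        cmd := types.NewGetBlockCountCmd()
        fmt.Printf("%T\n", cmd)

        // Dynamic: parameters are validated only at run time.
        if _, err := dcrjson.NewCmd("getblockcount"); err != nil {
            fmt.Println("runtime check failed:", err)
        }
    }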
Package hugot provides a simple interface for building extensible chat bots in an idiomatic go style. It is heavily influenced by net/http, and uses an internal message format that is compatible with Slack messages. Note: This package requires go1.7 Adapters are used to integrate with external chat systems. Currently the following adapters exist: Examples of using these adapters can be found in github.com/tcolgate/hugot/cmd Handlers process messages. There are several built-in handler types: - Plain Handlers will execute for every message sent to them. - Background handlers are started when a bot is started. They do not receive messages but can send them. They are intended to implement long lived background tasks that react to external inputs. - WebHook handlers can be used to implement web hooks by adding the bot to a http.ServeMux. A URL is built from the name of the handler. In addition to these basic handlers some more complex handlers are supplied. - Hears Handlers will execute for any message which matches a given regular expression. - Command Handlers act as command line tools. Messages are processed as a command line; quoted text is handled as a single argument. The passed message can be used as a flag.FlagSet. - A Mux. The Mux will multiplex messages across a set of handlers. In addition, a top level "help" Command handler is added to provide help on usage of the various handlers added to the Mux. WARNING: The API is still subject to change.
Package ogdl is used to process OGDL, the Ordered Graph Data Language. OGDL is a textual format to write trees or graphs of text, where indentation and spaces define the structure. Here is an example: The language is simple, both in its textual representation and in its number of productions (the specification rules), allowing for compact implementations. OGDL character streams are normally formed by Unicode characters, and encoded as UTF-8 strings, but any encoding that is ASCII transparent is compatible with the specification. See the full spec at http://ogdl.org. To install this package just do: If we have a text file 'config.g' containing: then, will print If the timeout parameter was not present, then the default value (60) will be assigned to 'to'. The default value is optional, but be aware that Int64() will return 0 in case the parameter doesn't exist. The configuration file can be written in a more concise way: The package includes a template processor. It takes an arbitrary input stream with some variables in it, and produces an output stream with the variables resolved out of a Graph object which acts as context. For example (given the previous config file): string(b) is then: Some rules are followed:
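Based only on the description above, a hedged sketch of reading that configuration; the import path, FromFile, Get and the optional default argument to Int64 are assumptions about the API:

    package main

    import (
        "fmt"

        "github.com/rveen/ogdl" // import path is an assumption
    )

    func main() {
        g := ogdl.FromFile("config.g")
        // 60 is the optional default used when 'timeout' is absent;
        // without it, Int64() would return 0 for a missing parameter.
        to := g.Get("timeout").Int64(60)
        fmt.Println("timeout:", to)
    }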
Package rst implements tools and methods to expose resources in a RESTful web service. The idea behind rst is to have endpoints and resources implement interfaces to support HTTP features. Endpoints can implement Getter, Poster, Patcher, Putter or Deleter to respectively allow the HEAD/GET, POST, PATCH, PUT, and DELETE HTTP methods. Resources can implement Ranger to support partial GET requests, Marshaler to customize the process with which they are encoded, or http.Handler to have complete control over the ResponseWriter. With these interfaces, the complexity behind dealing with all the headers and status codes of the HTTP protocol is abstracted to let you focus on returning a resource or an error. A resource must implement the rst.Resource interface. For that, you can either wrap an rst.Envelope around an existing type, or define a new type and implement the methods of the interface yourself. Using a rst.Envelope: Using a struct: An endpoint is an access point to a resource in your service. You can either define an endpoint by defining handlers for different methods sharing the same pattern, or by submitting a type that implements Getter, Poster, Patcher, Putter, Deleter and/or Preflighter. Using rst.Mux: Using a struct: In the following example, PersonEP implements Getter and is therefore able to handle GET requests. Routing of requests in rst is powered by Gorilla mux (https://github.com/gorilla/mux). Only URL patterns are available for now. Optional regular expressions are supported. rst supports JSON, XML and text encoding of resources using the encoders in Go's standard library. It negotiates the right encoding format based on the content of the Accept header in the request, calls the appropriate marshaler, and inserts the result in a response with the right status code and headers. You can implement the Marshaler interface if you want to add support for another format, or for more control over the encoding process of a specific resource. rst compresses the payload of responses using the supported algorithm detected in the request's Accept-Encoding header. Payloads under the size defined in the CompressionThreshold const are not compressed. Both Gzip and Flate are supported. OPTIONS requests are implicitly supported by all endpoints. The ETag, Last-Modified and Vary headers are automatically set. rst responds with 304 NOT MODIFIED when an appropriate If-Modified-Since or If-None-Match header is found in the request. The Expires header is also automatically inserted with the duration returned by Resource.TTL(). A resource can implement the Ranger interface to gain the ability to return partial responses with status code 206 PARTIAL CONTENT and the Content-Range header automatically inserted. The Ranger.Range method will be called when a valid Range header is found in an incoming GET request. The Accept-Range header will be inserted automatically. The supported range units and the range extent will be validated for you. Note that the If-Range conditional header is supported as well. rst can add the headers required to serve cross-origin (CORS) requests for you. You can choose between two provided policies (DefaultAccessControl and PermissiveAccessControl), or define your own. Support can be disabled by passing nil. Preflighted requests are also supported. However, you can customize the responses returned by preflight OPTIONS requests if you implement the Preflighter interface in your endpoint.
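A hedged sketch of the PersonEP idea mentioned above; the Getter signature shown here and the lookup helper are assumptions for illustration, not the package's verified API:

    // PersonEP implements Getter, so the endpoint can answer GET
    // (and HEAD) requests. Signature details are assumed.
    type PersonEP struct{}

    func (ep *PersonEP) Get(vars rst.RouteVars, r *http.Request) (rst.Resource, error) {
        resource, err := database.Find(vars.Get("id")) // hypothetical lookup
        if err != nil {
            return nil, err
        }
        return resource, nil
    }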
Package iris provides a beautifully expressive and easy to use foundation for your next website, API, or distributed app. Source code and other details for the project are available at GitHub: 8.5.9 Final The only requirement is the Go Programming Language, at least version 1.8, but 1.9 is highly recommended. Iris takes advantage of the vendor directory feature wisely: https://docs.google.com/document/d/1Bz5-UB7g2uPBdOx-rw5t9MxJwkfpx90cqG9AFL0JAYo. You get truly reproducible builds, as this method guards against upstream renames and deletes. A simple copy-paste and `go get ./...` to resolve two dependencies: https://github.com/kataras/golog and https://github.com/iris-contrib/httpexpect will work forever, even for older versions; the newest version can be retrieved by `go get`, but this file contains documentation for an older version of Iris. Follow the instructions below: 1. install the Go Programming Language: https://golang.org/dl 2. clear your previous `$GOPATH/src/github.com/kataras/iris` folder or create a new one 3. download the Iris v8.5.9 (final): https://github.com/kataras/iris/archive/v8.zip 4. extract the contents of the `iris-v8` folder that's inside the downloaded zip file to your `$GOPATH/src/github.com/kataras/iris` 5. navigate to your `$GOPATH/src/github.com/kataras/iris` folder if you're not already there and open a terminal/command prompt, execute the command: `go get ./...` and you're ready to GO:) Example code: You can start the server(s) listening to any type of `net.Listener` or even `http.Server` instance. The method for initialization of the server should be passed at the end, via the `Run` function. Below you'll see some useful examples: UNIX and BSD hosts can take advantage of the reuse port feature. Example code: That's all with listening; you have the full control when you need it. Let's continue by learning how to catch CONTROL+C/COMMAND+C or the unix kill command and shut down the server gracefully. In order to manually manage what to do when the app is interrupted, we have to disable the default behavior with the option `WithoutInterruptHandler` and register a new interrupt handler (globally, across all possible hosts). Example code: Access to all hosts that serve your application can be provided by the `Application#Hosts` field, after the `Run` method. But the most common scenario is that you may need access to the host before the `Run` method; there are two ways to gain access to the host supervisor, read below. The first way is to use `app.NewHost` to create a new host and use one of its `Serve` or `Listen` functions to start the application via the `iris#Raw` Runner. Note that this way needs an extra import of the `net/http` package. Example Code: The second, and probably easier, way is to use the `host.Configurator`. Note that this method requires an extra import statement of "github.com/kataras/iris/core/host" when using go < 1.9; if you're targeting go1.9 then you can use the `iris#Supervisor` and omit the extra host import. All common `Runners` we saw earlier (`iris#Addr, iris#Listener, iris#Server, iris#TLS, iris#AutoTLS`) accept a variadic argument of `host.Configurator`; they are just `func(*host.Supervisor)`. Therefore the `Application` gives you the right to modify the auto-created host supervisor through these. Example Code: Read more about listening and graceful shutdown by navigating to: All HTTP methods are supported, and developers can also register handlers for the same paths for different methods.
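For instance, the simplest runner shown throughout these examples boils down to something like the following minimal sketch (the handler body and port are illustrative):

    package main

    import "github.com/kataras/iris"

    func main() {
        app := iris.New()

        app.Get("/", func(ctx iris.Context) {
            ctx.WriteString("Hello from Iris")
        })

        // Run blocks and serves using the Addr runner; swap in
        // iris.Listener, iris.Server, iris.TLS or iris.AutoTLS as needed.
        app.Run(iris.Addr(":8080"))
    }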
The first parameter is the HTTP method, the second parameter is the request path of the route, and the third, variadic parameter should contain one or more iris.Handler, executed in registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, iris provides functions for all HTTP methods. The first parameter is the request path of the route, and the second, variadic parameter should contain one or more iris.Handler, executed in registered order when a user requests that specific resource path from the server. Example code: A set of routes that are grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes; developers can declare an unlimited number of (nested) groups. Example code: iris developers are able to register their own handlers for HTTP statuses like 404 not found, 500 internal server error and so on. Example code: With the help of iris's expressionist router you can build any form of API you desire, with safety. Example code: Iris has first-class support for the MVC pattern; you will not find this stuff anywhere else in the Go world. Example Code: Iris web framework supports Request data, Models, Persistence Data and Binding with the fastest possible execution. Characteristics: All HTTP methods are supported; for example, if you want to serve `GET` then the controller should have a function named `Get()`, and you can define more than one method function to serve in the same Controller struct. Persistence data inside your Controller struct (share data between requests) via the `iris:"persistence"` tag on the field, or bind using `app.Controller("/", new(myController), theBindValue)`. Models inside your Controller struct (set at the method function and rendered by the view) via the `iris:"model"` tag on the field, i.e. User UserModel `iris:"model" name:"user"`; the view will recognise it as `{{.user}}`. If the `name` tag is missing then it takes the field's name, in this case "User". Access to the request path and its parameters via the `Path and Params` fields. Access to the template file that should be rendered via the `Tmpl` field. Access to the template data that should be rendered inside the template file via the `Data` field. Access to the template layout via the `Layout` field. Access to the low-level `iris.Context` via the `Ctx` field. Get the relative request path by using the controller's name via `RelPath()`. Get the relative template path directory by using the controller's name via `RelTmpl()`. Flow as you used to; `Controllers` can be registered to any `Party`, including Subdomains, and the Party's begin and done handlers work as expected. Optional `BeginRequest(ctx)` function to perform any initialization before the method execution, useful to call middlewares or when many methods use the same collection of data. Optional `EndRequest(ctx)` function to perform any finalization after any method executed. Inheritance, recursively; see for example our `mvc.SessionController/iris.SessionController`: it has the `mvc.Controller/iris.Controller` as an embedded field and adds its logic to its `BeginRequest`. Source file: https://github.com/kataras/iris/blob/v8/mvc/session_controller.go. Read access to the current route via the `Route` field. Support for more than one input argument (mapped to dynamic request path parameters).
Register one or more relative paths and be able to get path parameters, i.e Response via output arguments, optionally, i.e Where `any` means everything, from custom structs to standard language types. `Result` is an interface which contains only one function, Dispatch(ctx iris.Context); `Get` (and Post, Put, Delete...) is the HTTP method function. Iris has a very powerful and blazing fast MVC support; you can return any value of any type from a method function and it will be sent to the client as expected. * if `string` then it's the body. * if `string` is the second output argument then it's the content type. * if `int` then it's the status code. * if `bool` is false then it throws a 404 not found http error by skipping everything else. * if `error` and not nil then (any type) the response will be omitted and the error's text with a 400 bad request will be rendered instead. * if `(int, error)` and the error is not nil then the response result will be the error's text with the status code as `int`. * if `custom struct` or `interface{}` or `slice` or `map` then it will be rendered as json, unless a `string` content type follows. * if `mvc.Result` then it executes its `Dispatch` function, so good design patterns can be used to split the model's logic where needed. The example below is not intended to be used in production, but it's a good showcase of some of the return types we saw before; Another good example, with a typical folder structure that many developers are used to working with, can be found at: https://github.com/kataras/iris/tree/v8/_examples/mvc/overview. By creating components that are independent of one another, developers are able to reuse components quickly and easily in other applications. The same (or similar) view for one application can be refactored for another application with different data because the view is simply handling how the data is being displayed to the user. If you're new to back-end web development read about the MVC architectural pattern first; a good start is the wikipedia article: https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller. Follow the examples at: https://github.com/kataras/iris/tree/v8/_examples/#mvc In the previous example, we saw static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter and custom http errors; now it's time to see wildcard parameters and macros. iris, like the net/http std package, registers a route's handlers by a Handler; iris' type of handler is just a func(ctx iris.Context), where context comes from github.com/kataras/iris/context. Iris has the easiest and the most powerful routing process you have ever met. At the same time, iris has its own interpreter (yes, like a programming language) for the route's path syntax and its dynamic path parameters' parsing and evaluation; we call them "macros" for short. How? It calculates its needs and, if no special regexp is needed, it just registers the route with the low-level path syntax; otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found on that type then the "string" type's functions are used. i.e: Besides the fact that iris provides the basic types and some default "macro funcs", you are able to register your own too! Register a named path parameter function: at the func(argument ...)
you can have any standard type; it will be validated before the server starts, so don't worry about performance here. The only thing that runs at serve time is the returned func(paramValue string) bool. Example Code: A path parameter name should contain only alphabetical letters; symbols (including '_') and numbers are NOT allowed. If a route fails to be registered, the app will panic without any warning if you didn't catch the second return value (error) on .Handle/.Get.... Last, do not confuse ctx.Values() with ctx.Params(). Path parameters' values go to ctx.Params(), while the context's local storage, which can be used to communicate between handlers and middleware(s), goes to ctx.Values(); path parameters and the rest of any custom values are separated for your own good. Run Static Files Example code: More examples can be found here: https://github.com/kataras/iris/tree/v8/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: iris is able to wrap and convert any external, third-party Handler you used to use in your web application. Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Iris supports 5 template engines out-of-the-box; developers can still use any external golang template engine, as `context/context#ResponseWriter()` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/jteeuwen/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set on each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/kataras/iris/tree/v8/_examples/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode, as they don't need to restart their app on every template edit. Example code: Note: In case you're wondering, the code behind the view engines derives from the "github.com/kataras/iris/view" package; access to the engines' variables can be granted by the "github.com/kataras/iris" package too. Each one of these template engines has different options located here: https://github.com/kataras/iris/tree/v8/view . This example will show how to store and access data from a session. You don't need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, they will first have to visit /login to get a valid session cookie, which logs them in. Additionally they can visit /logout to revoke their access to our secret message. Example code: Running the example: Sessions persistence can be achieved using one (or more) `sessiondb`. Example Code: More examples: In this example we will create a small chat application using websockets in the browser. Example Server Code: Example Client (javascript) Code: Running the example: But you should have a basic idea of the framework by now; we just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the links below: Examples: Middleware: Home Page:
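A compact sketch of the /login, /secret and /logout flow described above, assuming the github.com/kataras/iris/sessions subpackage (the Config.Cookie field and GetBooleanDefault follow its documented style but should be verified against your version):

    package main

    import (
        "github.com/kataras/iris"
        "github.com/kataras/iris/sessions"
    )

    func main() {
        app := iris.New()
        sess := sessions.New(sessions.Config{Cookie: "sessionid"})

        app.Get("/login", func(ctx iris.Context) {
            // Mark this session as authenticated.
            sess.Start(ctx).Set("authenticated", true)
        })

        app.Get("/secret", func(ctx iris.Context) {
            if !sess.Start(ctx).GetBooleanDefault("authenticated", false) {
                ctx.StatusCode(iris.StatusForbidden)
                return
            }
            ctx.WriteString("The secret message")
        })

        app.Get("/logout", func(ctx iris.Context) {
            // Revoke access.
            sess.Start(ctx).Set("authenticated", false)
        })

        app.Run(iris.Addr(":8080"))
    }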
Package iris provides a beautifully expressive and easy to use foundation for your next website, API, or distributed app. Source code and other details for the project are available at GitHub: 11.1.1 The only requirement is the Go Programming Language, at least version 1.8, but 1.11.1 and above is highly recommended. Example code: You can start the server(s) listening to any type of `net.Listener` or even `http.Server` instance. The method for initialization of the server should be passed at the end, via the `Run` function. Below you'll see some useful examples: UNIX and BSD hosts can take advantage of the reuse port feature. Example code: That's all with listening; you have the full control when you need it. Let's continue by learning how to catch CONTROL+C/COMMAND+C or the unix kill command and shut down the server gracefully. In order to manually manage what to do when the app is interrupted, we have to disable the default behavior with the option `WithoutInterruptHandler` and register a new interrupt handler (globally, across all possible hosts). Example code: Access to all hosts that serve your application can be provided by the `Application#Hosts` field, after the `Run` method. But the most common scenario is that you may need access to the host before the `Run` method; there are two ways to gain access to the host supervisor, read below. The first way is to use `app.NewHost` to create a new host and use one of its `Serve` or `Listen` functions to start the application via the `iris#Raw` Runner. Note that this way needs an extra import of the `net/http` package. Example Code: The second, and probably easier, way is to use the `host.Configurator`. Note that this method requires an extra import statement of "github.com/kataras/iris/core/host" when using go < 1.9; if you're targeting go1.9 then you can use the `iris#Supervisor` and omit the extra host import. All common `Runners` we saw earlier (`iris#Addr, iris#Listener, iris#Server, iris#TLS, iris#AutoTLS`) accept a variadic argument of `host.Configurator`; they are just `func(*host.Supervisor)`. Therefore the `Application` gives you the right to modify the auto-created host supervisor through these. Example Code: Read more about listening and graceful shutdown by navigating to: All HTTP methods are supported, and developers can also register handlers for the same paths for different methods. The first parameter is the HTTP method, the second parameter is the request path of the route, and the third, variadic parameter should contain one or more iris.Handler, executed in registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, iris provides functions for all HTTP methods. The first parameter is the request path of the route, and the second, variadic parameter should contain one or more iris.Handler, executed in registered order when a user requests that specific resource path from the server. Example code: A set of routes that are grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes; developers can declare an unlimited number of (nested) groups. Example code: iris developers are able to register their own handlers for HTTP statuses like 404 not found, 500 internal server error and so on. Example code: With the help of iris's expressionist router you can build any form of API you desire, with safety.
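Combining a party and a typed dynamic path parameter, a minimal sketch (the paths are illustrative; the uint64 macro and the Params().GetUint64 getter follow v11's documented macro types and should be verified for your version):

    package main

    import "github.com/kataras/iris"

    func main() {
        app := iris.New()

        // Routes registered on v1 share the /api/v1 prefix
        // and any party-level middleware.
        v1 := app.Party("/api/v1")

        // {id:uint64} is a typed dynamic path parameter (a "macro").
        v1.Get("/users/{id:uint64}", func(ctx iris.Context) {
            id, _ := ctx.Params().GetUint64("id")
            ctx.Writef("user #%d", id)
        })

        app.Run(iris.Addr(":8080"))
    }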
Example code: In the previous example, we saw static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter and custom http errors; now it's time to see wildcard parameters and macros. iris, like the net/http std package, registers a route's handlers by a Handler; iris' type of handler is just a func(ctx iris.Context), where context comes from github.com/kataras/iris/context. Iris has the easiest and the most powerful routing process you have ever met. At the same time, iris has its own interpreter (yes, like a programming language) for the route's path syntax and its dynamic path parameters' parsing and evaluation; we call them "macros" for short. How? It calculates its needs and, if no special regexp is needed, it just registers the route with the low-level path syntax; otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found on that type then the "string" type's functions are used. i.e: Besides the fact that iris provides the basic types and some default "macro funcs", you are able to register your own too! Register a named path parameter function: at the func(argument ...) you can have any standard type; it will be validated before the server starts, so don't worry about performance here. The only thing that runs at serve time is the returned func(paramValue string) bool. Example Code: Last, do not confuse ctx.Values() with ctx.Params(). Path parameters' values go to ctx.Params(), while the context's local storage, which can be used to communicate between handlers and middleware(s), goes to ctx.Values(); path parameters and the rest of any custom values are separated for your own good. Run Static Files Example code: More examples can be found here: https://github.com/kataras/iris/tree/master/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: iris is able to wrap and convert any external, third-party Handler you used to use in your web application. Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Iris supports 5 template engines out-of-the-box; developers can still use any external golang template engine, as `context/context#ResponseWriter()` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/shuLhan/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set on each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/kataras/iris/tree/master/_examples/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode, as they don't need to restart their app on every template edit. Example code: Note: In case you're wondering, the code behind the view engines derives from the "github.com/kataras/iris/view" package; access to the engines' variables can be granted by the "github.com/kataras/iris" package too.
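For example, registering the standard html/template-based engine with auto-reload, per the description above (the ./views directory and index.html are illustrative):

    package main

    import "github.com/kataras/iris"

    func main() {
        app := iris.New()

        // Parse ./views/*.html with the std html/template engine;
        // Reload(true) re-parses templates on every request (dev mode).
        app.RegisterView(iris.HTML("./views", ".html").Reload(true))

        app.Get("/", func(ctx iris.Context) {
            ctx.ViewData("message", "Hello world!")
            ctx.View("index.html")
        })

        app.Run(iris.Addr(":8080"))
    }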
Each one of these template engines has different options located here: https://github.com/kataras/iris/tree/master/view . This example will show how to store and access data from a session. You don't need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, they will first have to visit /login to get a valid session cookie, which logs them in. Additionally they can visit /logout to revoke their access to our secret message. Example code: Running the example: Sessions persistence can be achieved using one (or more) `sessiondb`. Example Code: More examples: In this example we will create a small chat application using websockets in the browser. Example Server Code: Example Client (javascript) Code: Running the example: Iris has first-class support for the MVC pattern; you will not find this stuff anywhere else in the Go world. Example Code: // GetUserBy serves // Method: GET // Resource: http://localhost:8080/user/{username:string} // By is a reserved "keyword" to tell the framework that you're going to // bind path parameters in the function's input arguments, and it also // helps to have "Get" and "GetBy" in the same controller. // // func (c *ExampleController) GetUserBy(username string) mvc.Result { // return mvc.View{ // Name: "user/username.html", // Data: username, // } // } You can use more than one; the factory will make sure that the correct HTTP methods are registered for each route of this controller. Uncomment these if you want: Iris web framework supports Request data, Models, Persistence Data and Binding with the fastest possible execution. Characteristics: All HTTP methods are supported; for example, if you want to serve `GET` then the controller should have a function named `Get()`, and you can define more than one method function to serve in the same Controller. Register a custom controller struct's methods as handlers with custom paths (even with regex parameterized paths) via the `BeforeActivation` custom event callback, per-controller. Example: Persistence data inside your Controller struct (share data between requests) by defining services in the Dependencies or by having a `Singleton` controller scope. Share the dependencies between controllers or register them on a parent MVC Application, with the ability to modify dependencies per-controller via the `BeforeActivation` optional event callback inside a Controller, i.e Access to the `Context` as a controller's field (no manual binding is needed), i.e. `Ctx iris.Context`, or via a method's input argument, i.e Models inside your Controller struct (set at the method function and rendered by the view). You can return models from a controller's method or set a field in the request lifecycle and return that field to another method in the same request lifecycle. Flow as you used to; the mvc application has its own `Router`, which is a type of `iris/router.Party`, the standard iris API. `Controllers` can be registered to any `Party`, including Subdomains, and the Party's begin and done handlers work as expected. Optional `BeginRequest(ctx)` function to perform any initialization before the method execution, useful to call middlewares or when many methods use the same collection of data. Optional `EndRequest(ctx)` function to perform any finalization after any method executed. Session dynamic dependency via the manager's `Start` to the MVC Application, i.e Inheritance, recursively.
Access to the dynamic path parameters via the controller's methods' input arguments; no binding is needed. When you use Iris' default syntax to parse handlers from a controller, you need to suffix the methods with the `By` word; an uppercase letter starts a new sub path. Example: Register one or more relative paths and be able to get path parameters, i.e Response via output arguments, optionally, i.e Where `any` means everything, from custom structs to standard language types. `Result` is an interface which contains only one function, Dispatch(ctx iris.Context); `Get` (and Post, Put, Delete...) is the HTTP method function. Iris has a very powerful and blazing fast MVC support; you can return any value of any type from a method function and it will be sent to the client as expected. * if `string` then it's the body. * if `string` is the second output argument then it's the content type. * if `int` then it's the status code. * if `bool` is false then it throws a 404 not found http error by skipping everything else. * if `error` and not nil then (any type) the response will be omitted and the error's text with a 400 bad request will be rendered instead. * if `(int, error)` and the error is not nil then the response result will be the error's text with the status code as `int`. * if `custom struct` or `interface{}` or `slice` or `map` then it will be rendered as json, unless a `string` content type follows. * if `mvc.Result` then it executes its `Dispatch` function, so good design patterns can be used to split the model's logic where needed. Examples with good patterns to follow (not intended for production use, of course) can be found at: https://github.com/kataras/iris/tree/master/_examples/#mvc. By creating components that are independent of one another, developers are able to reuse components quickly and easily in other applications. The same (or similar) view for one application can be refactored for another application with different data because the view is simply handling how the data is being displayed to the user. If you're new to back-end web development read about the MVC architectural pattern first; a good start is the wikipedia article: https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller. But you should have a basic idea of the framework by now; we just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the links below: Examples: Middleware: Home Page: Book (in-progress):
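Putting the MVC conventions above together, a minimal sketch (the /greet path and controller are illustrative):

    package main

    import (
        "github.com/kataras/iris"
        "github.com/kataras/iris/mvc"
    )

    type GreetController struct{}

    // Get serves GET /greet; the returned string becomes the body.
    func (c *GreetController) Get() string {
        return "Hello from MVC"
    }

    // GetBy serves GET /greet/{name}; the path parameter binds to 'name'.
    func (c *GreetController) GetBy(name string) string {
        return "Hello, " + name
    }

    func main() {
        app := iris.New()
        mvc.New(app.Party("/greet")).Handle(new(GreetController))
        app.Run(iris.Addr(":8080"))
    }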
Package tk9.0 is a CGo-free, cross-platform GUI toolkit for Go. It is similar to Tkinter for Python. Also available in _examples/hello.go To execute the above program on any supported target issue something like The CGO_ENABLED=0 is optional and here it only demonstrates that the program can be built without CGo. Consider this program in _examples/debugging.go: Execute the program using the tags as indicated, then close the window or click the Hello button. With the tk.dmesg tag the package initialization prints the debug messages path. So we can view it, for example, like this: 18876 was the process PID in this particular run. Using the tags allows inspecting the Tcl/Tk code executed during the lifetime of the process. These combinations of GOOS and GOARCH are currently supported Specific to FreeBSD: When building with cross-compiling or CGO_ENABLED=0, add the following argument to `go` so that these symbols are defined by making fakecgo the Cgo. Builder results are available at modern-c.appspot.com. At the moment the package is an MVP that allows building at least some simple, yet useful programs. The full Tk API is not yet usable. Please report needed, but non-exposed Tk features at the issue tracker, thanks. Providing feedback about the missing building blocks, bugs and your user experience is invaluable in helping this package eventually reach version 1. See also RERO. The ErrorMode variable selects the behaviour on errors for certain functions that do not return error. When ErrorMode is PanicOnError, the default, errors will panic, providing a stack trace. When ErrorMode is CollectErrors, errors will be recorded using errors.Join in the Error variable. Even if a function does not return error, it is still possible to handle errors in the usual way when needed, except that Error is now a static variable. That's a problem in the general case, but less so in this package, which must be used from a single goroutine only, as documented elsewhere. This is obviously a compromise enabling a way to check for errors and, at the same time, the ability to write concise code like: There are altogether four different places where the call to the Button function can produce errors: in addition to the call itself, each of its three arguments can independently fail as well. Checking each of them separately is not always necessary in GUI code. But the explicit option in the first example is still available when needed. Package initialization is done lazily. This saves noticeable additional startup time and avoids screen flicker in hybrid programs that use the GUI only on demand. Early package initialization can be enforced by Initialize. Initialization will fail if a Unix process starts on a machine with no X server or the process is started in a way that it has no access to the X server. On the other hand, this package may work on Unix machines with no X server if the process is started remotely using '$ ssh -X foo@bar' and X forwarding is enabled/supported. The Darwin port uses the macOS GUI API and does not use X11. Zero or more options can be specified when creating a widget. For example or Tcl/Tk uses widget pathnames, image and font names explicitly set by user code. This package generates those names automatically and they are not directly needed in code that uses this package. There is, for example, a Tcl/Tk 'text' widget and a '-text' option. This package exports the widget as type 'TextWidget', its constructor as function 'Text' and the option as function 'Txt'.
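In the spirit of the _examples/hello.go program referenced above, a minimal sketch (the padding options are illustrative):

    package main

    import . "modernc.org/tk9.0"

    func main() {
        // A themed button whose handler destroys the application window.
        Pack(TButton(Txt("Hello"), Command(func() { Destroy(App) })),
            Ipadx(10), Ipady(5), Padx(20), Pady(10))
        App.Wait()
    }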
The complete list is: This package should be used from the same goroutine that initialized the package. Package initialization performs a runtime.LockOSThread, meaning func main() will start executing locked on the same OS thread. The Command() and similar options expect an argument that must be one of: - An EventHandler or a function literal of the same signature. - A func(). This can be used when the handler does not need the associated Event instance. When passing an argument of type time.Duration to a function accepting 'any', the duration is converted to an integer number of milliseconds. When passing an argument of type []byte to a function accepting 'any', the byte slice is converted to an encoding/base64 encoded string. When passing an argument of type []FileType to a function accepting 'any', the slice is converted to the representation the Tcl/Tk -filetypes option expects. At least some minimal knowledge of reading Tcl/Tk code is probably required for using this package and/or its related documentation. However, you will not need to write any Tcl code and you do not need to care about the grammar of Tcl words/string literals and how it differs from Go. There are several Tcl/Tk tutorials available, for example at tutorialspoint. Merge requests for known issues are always welcome. Please send merge requests for new features/APIs after filing and discussing the additions/changes at the issue tracker first. Most of the documentation is generated directly from the Tcl/Tk documentation and may not be entirely correct for the Go package. Those parts hopefully still serve as a quick/offline Tcl/Tk reference. Parts of the documentation are copied and/or modified from the tcl.tk site; see the LICENSE-TCLTK file for details. Parts of the documentation are copied and/or modified from the tkinter.ttk site. You can support the maintenance and further development of this package at jnml's LiberaPay (using PayPal).
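Tying the pieces together, a sketch of the CollectErrors mode discussed earlier, with a Command handler in the bare func() form (the widget and label text are illustrative):

    package main

    import (
        "log"

        . "modernc.org/tk9.0"
    )

    func main() {
        // Collect errors via errors.Join into the package-level Error
        // variable instead of panicking (PanicOnError is the default).
        ErrorMode = CollectErrors

        // The bare func() form of Command: no Event instance is needed.
        Pack(TButton(Txt("Quit"), Command(func() { Destroy(App) })))

        if Error != nil {
            log.Fatal(Error) // report anything that failed above
        }
        App.Wait()
    }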
"Checkbutton.indicator" style element options: "Combobox.downarrow" style element options: "Menubutton.indicator" style element options: "Radiobutton.indicator" style element options: "Spinbox.downarrow" style element options: "Spinbox.uparrow" style element options: "Treeitem.indicator" style element options: "arrow" style element options: "border" style element options: "downarrow" style element options: "field" style element options: "leftarrow" style element options: "rightarrow" style element options: "slider" style element options: "thumb" style element options: "uparrow" style element options: "alt" theme style list Style map: -foreground {disabled #a3a3a3} -background {disabled #d9d9d9 active #ececec} -embossed {disabled 1} Layout: ComboboxPopdownFrame.border -sticky nswe Layout: Treeheading.cell -sticky nswe Treeheading.border -sticky nswe -children {Treeheading.padding -sticky nswe -children {Treeheading.image -side right -sticky {} Treeheading.text -sticky we}} Layout: Treeitem.padding -sticky nswe -children {Treeitem.indicator -side left -sticky {} Treeitem.image -side left -sticky {} Treeitem.text -sticky nswe} Layout: Treeitem.separator -sticky nswe Layout: Button.border -sticky nswe -border 1 -children {Button.focus -sticky nswe -children {Button.padding -sticky nswe -children {Button.label -sticky nswe}}} Style map: -highlightcolor {alternate black} -relief { {pressed !disabled} sunken {active !disabled} raised } Layout: Checkbutton.padding -sticky nswe -children {Checkbutton.indicator -side left -sticky {} Checkbutton.focus -side left -sticky w -children {Checkbutton.label -sticky nswe}} Style map: -indicatorcolor {pressed #d9d9d9 alternate #aaaaaa disabled #d9d9d9} Layout: Combobox.field -sticky nswe -children {Combobox.downarrow -side right -sticky ns Combobox.padding -sticky nswe -children {Combobox.textarea -sticky nswe}} Style map: -fieldbackground {readonly #d9d9d9 disabled #d9d9d9} -arrowcolor {disabled #a3a3a3} Layout: Entry.field -sticky nswe -border 1 -children {Entry.padding -sticky nswe -children {Entry.textarea -sticky nswe}} Style map: -fieldbackground {readonly #d9d9d9 disabled #d9d9d9} Layout: Labelframe.border -sticky nswe Layout: Menubutton.border -sticky nswe -children {Menubutton.focus -sticky nswe -children {Menubutton.indicator -side right -sticky {} Menubutton.padding -sticky we -children {Menubutton.label -side left -sticky {}}}} Layout: Notebook.client -sticky nswe Layout: Notebook.tab -sticky nswe -children {Notebook.padding -side top -sticky nswe -children {Notebook.focus -side top -sticky nswe -children {Notebook.label -side top -sticky {}}}} Style map: -expand {selected {1.5p 1.5p 0.75p 0}} -background {selected #d9d9d9} - Layout: Radiobutton.padding -sticky nswe -children {Radiobutton.indicator -side left -sticky {} Radiobutton.focus -side left -sticky {} -children {Radiobutton.label -sticky nswe}} Style map: -indicatorcolor {pressed #d9d9d9 alternate #aaaaaa disabled #d9d9d9} - - Layout: Spinbox.field -side top -sticky we -children {null -side right -sticky {} -children {Spinbox.uparrow -side top -sticky e Spinbox.downarrow -side bottom -sticky e} Spinbox.padding -sticky nswe -children {Spinbox.textarea -sticky nswe}} Style map: -fieldbackground {readonly #d9d9d9 disabled #d9d9d9} -arrowcolor {disabled #a3a3a3} Layout: Notebook.tab -sticky nswe -children {Notebook.padding -side top -sticky nswe -children {Notebook.focus -side top -sticky nswe -children {Notebook.label -side top -sticky {}}}} Layout: Toolbutton.border -sticky nswe -children 
{Toolbutton.focus -sticky nswe -children {Toolbutton.padding -sticky nswe -children {Toolbutton.label -sticky nswe}}} Style map: -relief {disabled flat selected sunken pressed sunken active raised} -background {pressed #c3c3c3 active #ececec} Layout: Treeview.field -sticky nswe -border 1 -children {Treeview.padding -sticky nswe -children {Treeview.treearea -sticky nswe}} Style map: -foreground {disabled #a3a3a3 selected #ffffff} -background {disabled #d9d9d9 selected #4a6984} Layout: Treeitem.separator -sticky nswe "Button.button" style element options: "Checkbutton.button" style element options: "Combobox.button" style element options: "DisclosureButton.button" style element options: "Entry.field" style element options: "GradientButton.button" style element options: "HelpButton.button" style element options: "Horizontal.Scrollbar.leftarrow" style element options: "Horizontal.Scrollbar.rightarrow" style element options: "Horizontal.Scrollbar.thumb" style element options: "Horizontal.Scrollbar.trough" style element options: "InlineButton.button" style element options: "Labelframe.border" style element options: "Menubutton.button" style element options: "Notebook.client" style element options: "Notebook.tab" style element options: "Progressbar.track" style element options: "Radiobutton.button" style element options: "RecessedButton.button" style element options: "RoundedRectButton.button" style element options: "Scale.slider" style element options: "Scale.trough" style element options: "Searchbox.field" style element options: "SidebarButton.button" style element options: "Spinbox.downarrow" style element options: "Spinbox.field" style element options: "Spinbox.uparrow" style element options: "Toolbar.background" style element options: "Toolbutton.border" style element options: "Treeheading.cell" style element options: "Treeitem.indicator" style element options: "Treeview.treearea" style element options: "Vertical.Scrollbar.downarrow" style element options: "Vertical.Scrollbar.thumb" style element options: "Vertical.Scrollbar.trough" style element options: "Vertical.Scrollbar.uparrow" style element options: "background" style element options: "field" style element options: "fill" style element options: "hseparator" style element options: "separator" style element options: "sizegrip" style element options: "vseparator" style element options: "aqua" theme style list Style map: -selectforeground { background systemSelectedTextColor !focus systemSelectedTextColor} -foreground { disabled systemDisabledControlTextColor background systemLabelColor} -selectbackground { background systemSelectedTextBackgroundColor !focus systemSelectedTextBackgroundColor} Layout: DisclosureButton.button -sticky nswe Layout: GradientButton.button -sticky nswe -children {Button.padding -sticky nswe -children {Button.label -sticky nswe}} Layout: Treeheading.cell -sticky nswe Treeheading.image -side right -sticky {} Treeheading.text -side top -sticky {} Layout: HelpButton.button -sticky nswe Layout: Horizontal.Scrollbar.trough -sticky we -children {Horizontal.Scrollbar.thumb -sticky nswe Horizontal.Scrollbar.rightarrow -side right -sticky {} Horizontal.Scrollbar.leftarrow -side right -sticky {}} Layout: Button.padding -sticky nswe -children {Button.label -sticky nswe} Style map: -foreground { pressed systemLabelColor !pressed systemSecondaryLabelColor } Layout: InlineButton.button -sticky nswe -children {Button.padding -sticky nswe -children {Button.label -sticky nswe}} Style map: -foreground { disabled 
systemWindowBackgroundColor } Layout: Treeitem.padding -sticky nswe -children {Treeitem.indicator -side left -sticky {} Treeitem.image -side left -sticky {} Treeitem.text -side left -sticky {}} Layout: Label.fill -sticky nswe -children {Label.text -sticky nswe} Layout: RecessedButton.button -sticky nswe -children {Button.padding -sticky nswe -children {Button.label -sticky nswe}} Style map: -font { selected RecessedFont active RecessedFont pressed RecessedFont } -foreground { {disabled selected} systemWindowBackgroundColor3 {disabled !selected} systemDisabledControlTextColor selected systemTextBackgroundColor active white pressed white } Layout: RoundedRectButton.button -sticky nswe -children {Button.padding -sticky nswe -children {Button.label -sticky nswe}} Layout: Searchbox.field -sticky nswe -border 1 -children {Entry.padding -sticky nswe -children {Entry.textarea -sticky nswe}} Layout: SidebarButton.button -sticky nswe -children {Button.padding -sticky nswe -children {Button.label -sticky nswe}} Style map: -foreground { {disabled selected} systemWindowBackgroundColor3 {disabled !selected} systemDisabledControlTextColor selected systemTextColor active systemTextColor pressed systemTextColor } Layout: Button.button -sticky nswe -children {Button.padding -sticky nswe -children {Button.label -sticky nswe}} Style map: -foreground { pressed white {alternate !pressed !background} white disabled systemDisabledControlTextColor} Layout: Checkbutton.button -sticky nswe -children {Checkbutton.padding -sticky nswe -children {Checkbutton.label -side left -sticky {}}} Layout: Combobox.button -sticky nswe -children {Combobox.padding -sticky nswe -children {Combobox.textarea -sticky nswe}} Style map: -foreground { disabled systemDisabledControlTextColor } -selectbackground { !focus systemUnemphasizedSelectedTextBackgroundColor } Layout: Entry.field -sticky nswe -border 1 -children {Entry.padding -sticky nswe -children {Entry.textarea -sticky nswe}} Style map: -foreground { disabled systemDisabledControlTextColor } -selectbackground { !focus systemUnemphasizedSelectedTextBackgroundColor } Layout: Labelframe.border -sticky nswe Layout: Label.fill -sticky nswe -children {Label.text -sticky nswe} Layout: Menubutton.button -sticky nswe -children {Menubutton.padding -sticky nswe -children {Menubutton.label -side left -sticky {}}} Layout: Notebook.client -sticky nswe Layout: Notebook.tab -sticky nswe -children {Notebook.padding -sticky nswe -children {Notebook.label -sticky nswe}} Style map: -foreground { {background !selected} systemControlTextColor {background selected} black {!background selected} systemSelectedTabTextColor disabled systemDisabledControlTextColor} Layout: Progressbar.track -sticky nswe Layout: Radiobutton.button -sticky nswe -children {Radiobutton.padding -sticky nswe -children {Radiobutton.label -side left -sticky {}}} - Layout: Spinbox.buttons -side right -sticky {} -children {Spinbox.uparrow -side top -sticky e Spinbox.downarrow -side bottom -sticky e} Spinbox.field -sticky we -children {Spinbox.textarea -sticky we} Style map: -foreground { disabled systemDisabledControlTextColor } -selectbackground { !focus systemUnemphasizedSelectedTextBackgroundColor } Layout: Notebook.tab -sticky nswe -children {Notebook.padding -sticky nswe -children {Notebook.label -sticky nswe}} Layout: Toolbar.background -sticky nswe Layout: Toolbutton.border -sticky nswe -children {Toolbutton.focus -sticky nswe -children {Toolbutton.padding -sticky nswe -children {Toolbutton.label -sticky nswe}}} Layout: 
Treeview.field -sticky nswe -children {Treeview.padding -sticky nswe -children {Treeview.treearea -sticky nswe}} Style map: -background { selected systemSelectedTextBackgroundColor } Layout: Vertical.Scrollbar.trough -sticky ns -children {Vertical.Scrollbar.thumb -sticky nswe Vertical.Scrollbar.downarrow -side bottom -sticky {} Vertical.Scrollbar.uparrow -side bottom -sticky {}} "Checkbutton.indicator" style element options: "Combobox.field" style element options: "Radiobutton.indicator" style element options: "Spinbox.downarrow" style element options: "Spinbox.uparrow" style element options: "arrow" style element options: "bar" style element options: "border" style element options: "client" style element options: "downarrow" style element options: "field" style element options: "hgrip" style element options: "leftarrow" style element options: "pbar" style element options: "rightarrow" style element options: "slider" style element options: "tab" style element options: "thumb" style element options: "trough" style element options: "uparrow" style element options: "vgrip" style element options: "clam" theme style list Style map: -selectforeground {!focus white} -foreground {disabled #999999} -selectbackground {!focus #9e9a91} -background {disabled #dcdad5 active #eeebe7} Layout: ComboboxPopdownFrame.border -sticky nswe Layout: Treeheading.cell -sticky nswe Treeheading.border -sticky nswe -children {Treeheading.padding -sticky nswe -children {Treeheading.image -side right -sticky {} Treeheading.text -sticky we}} Layout: Sash.hsash -sticky nswe -children {Sash.hgrip -sticky nswe} Layout: Treeitem.padding -sticky nswe -children {Treeitem.indicator -side left -sticky {} Treeitem.image -side left -sticky {} Treeitem.text -sticky nswe} - Layout: Treeitem.separator -sticky nswe Layout: Button.border -sticky nswe -border 1 -children {Button.focus -sticky nswe -children {Button.padding -sticky nswe -children {Button.label -sticky nswe}}} Style map: -lightcolor {pressed #bab5ab} -background {disabled #dcdad5 pressed #bab5ab active #eeebe7} -bordercolor {alternate #000000} -darkcolor {pressed #bab5ab} Layout: Checkbutton.padding -sticky nswe -children {Checkbutton.indicator -side left -sticky {} Checkbutton.focus -side left -sticky w -children {Checkbutton.label -sticky nswe}} Style map: -indicatorbackground {pressed #dcdad5 {!disabled alternate} #5895bc {disabled alternate} #a0a0a0 disabled #dcdad5} Layout: Combobox.downarrow -side right -sticky ns Combobox.field -sticky nswe -children {Combobox.padding -sticky nswe -children {Combobox.textarea -sticky nswe}} Style map: -foreground {{readonly focus} #ffffff} -fieldbackground {{readonly focus} #4a6984 readonly #dcdad5} -background {active #eeebe7 pressed #eeebe7} -bordercolor {focus #4a6984} -arrowcolor {disabled #999999} Layout: Entry.field -sticky nswe -border 1 -children {Entry.padding -sticky nswe -children {Entry.textarea -sticky nswe}} Style map: -lightcolor {focus #6f9dc6} -background {readonly #dcdad5} -bordercolor {focus #4a6984} Layout: Labelframe.border -sticky nswe Layout: Menubutton.border -sticky nswe -children {Menubutton.focus -sticky nswe -children {Menubutton.indicator -side right -sticky {} Menubutton.padding -sticky we -children {Menubutton.label -side left -sticky {}}}} Layout: Notebook.tab -sticky nswe -children {Notebook.padding -side top -sticky nswe -children {Notebook.focus -side top -sticky nswe -children {Notebook.label -side top -sticky {}}}} Style map: -lightcolor {selected #eeebe7 {} #cfcdc8} -padding {selected {4.5p 3p 
Package antlr implements the Go version of the ANTLR 4 runtime. ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build parse trees and also generates a listener interface (or visitor) that makes it easy to respond to the recognition of phrases of interest.

At version 4.11.x and prior, the Go runtime was not properly versioned for go modules. After that point, the runtime source code to be imported was held in the `runtime/Go/antlr/v4` directory, and the go.mod file was updated to reflect the version of ANTLR4 that it is compatible with (i.e. it uses the /v4 path). However, this was found to be problematic: with the runtime embedded so far underneath the root of the repo, the `go get` and related commands could not properly resolve the location of the Go runtime source code. This meant that the reference to the runtime in your `go.mod` file would refer to the correct source code but would not list a release tag such as @4.13.1, which was confusing, to say the least.

As of 4.13.0, the runtime is available as a go module in its own repo and can be imported as `github.com/antlr4-go/antlr` (the go get command should also be used with this path). See the main documentation for the ANTLR4 project for more information, which is available at ANTLR docs. The documentation for using the Go runtime is available at Go runtime docs. This means that if you are using the source code without modules, you should also use the source code in the new repo. We highly recommend that you use go modules in any case, as they are now idiomatic for Go. I am aware that this change will prove Hyrum's Law, but I am prepared to live with it for the common good.

Go runtime author: Jim Idle jimi@idle.ws

ANTLR supports the generation of code in a number of target languages, and the generated code is supported by a runtime library written specifically to support the generated code in each target language. This library is the runtime for the Go target. To generate code for the Go target, it is generally recommended to place the source grammar files in a package of their own and to use the `.sh` script method of generating code, driven by the go generate directive. It is usual, though not required, to place the antlr tool that should be used to generate the code in that same directory. That does mean that the antlr tool JAR file will be checked in to your source code control; you are, of course, free to specify the version of the ANTLR tool in any other way, such as an alias in `.zshrc` or equivalent, a profile in your IDE, or configuration in your CI system. Checking in the JAR does, however, make it easy to reproduce the build exactly as it was at any point in its history.

Here is a general/recommended template for an ANTLR-based recognizer in Go (a sketch follows below). Make sure that the package statement in your grammar file(s) reflects the Go package the generated code will exist in. The generate.go file then carries the go:generate directive, and the generate.sh file invokes the ANTLR tool, with the exact flags depending on whether you want visitors or listeners or any other ANTLR options.
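The template files themselves were elided above. As a rough illustration only — the package name, file names, grammar name, and tool invocation here are assumptions, not the runtime's official template — a generate.go file of the kind described might look like this:

	// Package parser holds the ANTLR-generated sources for an example
	// grammar. The grammar files (*.g4) live alongside this file, and
	// running go generate invokes the generate.sh script kept in the same
	// directory. That script typically runs the checked-in ANTLR tool,
	// along the lines of:
	//
	//	java -jar ./antlr-4.13.1-complete.jar -Dlanguage=Go -visitor -o . MyGrammar.g4
	//
	// with -visitor and/or -listener chosen to taste.
	package parser

	//go:generate ./generate.sh

With a layout like this, running go generate in the package (or go generate ./... from the module root) regenerates the recognizer in place.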
Note that another option here is to generate the code into a separate package of its own, such as parsing. From the command line at the root of your source package (the location of go.mod) you can then simply issue the go generate command, which will generate the code for the parser and place it in the parsing package. You can then use the generated code by importing the parsing package. There are no hard and fast rules on this; it is just a recommendation, and you can generate the code in any way and to anywhere you like.

Copyright (c) 2012-2023 The ANTLR Project. All rights reserved. Use of this file is governed by the BSD 3-clause license, which can be found in the LICENSE.txt file in the project root.
Package cgi implements the Common Gateway Interface (CGI) for Caddy 2, a modern, full-featured, easy-to-use web server. It has been forked from the fantastic work of Kurt Jung, who wrote that plugin for Caddy 1.

This plugin lets you generate dynamic content on your website by means of command line scripts. To collect information about the inbound HTTP request, your script examines certain environment variables such as PATH_INFO and QUERY_STRING. Then, to return a dynamically generated web page to the client, your script simply writes content to standard output. In the case of POST requests, your script reads additional inbound content from standard input.

The advantage of CGI is that you do not need to fuss with server startup and persistence, long-term memory management, sockets, and crash recovery. Your script is called when a request matches one of the patterns that you specify in your Caddyfile. As soon as your script completes its response, it terminates. This simplicity makes CGI a perfect complement to the straightforward operation and configuration of Caddy. The benefits of Caddy, including HTTPS by default, basic access authentication, and lots of middleware options, extend easily to your CGI scripts.

CGI has some disadvantages. For one, Caddy needs to start a new process for each request. This can adversely impact performance and, if resources are shared between CGI applications, may require the use of some interprocess synchronization mechanism such as a file lock. Your server's responsiveness could in some circumstances be affected, such as when your web server is hit with very high demand, when your script's dependencies require a long startup, or when concurrently running scripts take a long time to respond. However, in many cases, such as using a pre-compiled CGI application like fossil or a Lua script, the impact will generally be insignificant. Another restriction of CGI is that scripts will be run with the same permissions as Caddy itself. This can sometimes be less than ideal, for example when your script needs to read or write files associated with a different owner.

Serving dynamic content exposes your server to more potential threats than serving static pages, and there are a number of considerations you should be aware of when using CGI applications. CGI scripts should be located outside of Caddy's document root; otherwise, an inadvertent misconfiguration could result in Caddy delivering the script as an ordinary static resource. At best, this could merely confuse the site visitor. At worst, it could expose sensitive internal information that should not leave the server. Mistrust the contents of PATH_INFO, QUERY_STRING and standard input. Most of the environment variables available to your CGI program are inherently safe because they originate with Caddy and cannot be modified by external users. This is not the case with PATH_INFO, QUERY_STRING and, in the case of POST actions, the contents of standard input. Be sure to validate and sanitize all inbound content. If you use a CGI library or framework to process your scripts, make sure you understand its limitations.

An error in a CGI application is generally handled within the application itself and reported in the headers it returns. Your CGI application can be executed directly or indirectly. In the direct case, the application can be a compiled native executable, or it can be a shell script whose first line is a shebang identifying the interpreter to which the file's name should be passed.
Caddy must have permission to execute the application. On POSIX systems this means making sure the application's ownership and permission bits are set appropriately; on Windows, it may involve properly setting up the filename extension association. In the indirect case, the name of the CGI script is passed to an interpreter such as lua, perl or python.

To use this plugin:

- The module needs to be installed (obviously).
- The directive needs to be registered in the Caddyfile.

The basic cgi directive lets you add a handler in the current Caddy router location with a given script and optional arguments. The matcher is a default Caddy matcher that is used to restrict the scope of this directive. The directive can be repeated any reasonable number of times. The basic syntax, together with an example, appears in the sketch below.

When a request such as https://example.com/report or https://example.com/report/weekly arrives, the cgi middleware will detect the match and invoke the script named /usr/local/cgi-bin/report. The current working directory will be the same as Caddy's. Here, it is assumed that the script is self-contained, for example a pre-compiled CGI application or a shell script similar to one used in the cgi plugin's test suite.

The environment variables PATH_INFO and QUERY_STRING are populated and passed to the script automatically. There are a number of other standard CGI variables, described below, that are included as well. If you need to pass any special environment variables, or to allow environment variables that are part of Caddy's process to reach your script, you will need to use the advanced directive syntax described below.

Beware that in Caddy v2 it is (currently) not possible to separate the path left of the matcher from the full URL. Therefore, if you require your CGI program to know the SCRIPT_NAME, make sure to pass that explicitly via script_name.

In order to specify custom environment variables, pass along one or more environment variables known to Caddy, or specify more than one match pattern for a given rule, you will need to use the advanced directive syntax, which is also shown in the sketch below. The script_name subdirective helps the cgi module separate the path to the script from the (virtual) path that follows it, which is the part passed to the script. env can be used to define a list of key=value environment variable pairs that shall be passed to the script. pass_env can be used to define a list of environment variables of the Caddy process that shall be passed to the script.

If your CGI application runs properly at the command line but fails to run from Caddy, it is possible that certain environment variables are missing. For example, the Ruby gem loader evidently requires the HOME environment variable to be set; you can do this with the subdirective pass_env HOME. Another class of problematic applications requires the COMPUTERNAME variable.

The pass_all_env subdirective instructs Caddy to pass every environment variable it knows about to the CGI executable. This addresses a common frustration that arises when an executable requires an environment variable and fails without a descriptive error message when the variable cannot be found. These applications often run fine from the command prompt but fail when invoked with CGI. The risk with this subdirective is that a lot of server information is shared with the CGI executable; use it only with CGI applications that you trust not to leak this information.
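The directive syntax itself did not survive extraction above. The following sketch reconstructs its general shape from the subdirectives described in this documentation; it is illustrative rather than authoritative, the matcher, paths, and values are invented for the example, and the plugin's README should be consulted for the exact grammar:

	# Basic form: a matcher, the script to execute, and optional arguments.
	cgi /report* /usr/local/cgi-bin/report

	# Advanced form: a block of the subdirectives described above.
	cgi /report* /usr/local/cgi-bin/report {
		script_name /report
		env SEASON=summer
		pass_env HOME
		pass_all_env
		buffer_limit 4MiB
		unbuffered_output
		inspect
	}

With the basic form, a request for https://example.com/report/weekly would invoke /usr/local/cgi-bin/report with PATH_INFO and QUERY_STRING populated as described above.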
buffer_limit is used when an HTTP request has Transfer-Encoding: chunked. The Go CGI handler refuses to handle these kinds of requests (see https://github.com/golang/go/issues/5613), so to work around this, the chunked request is buffered by Caddy and sent to the CGI application as a whole with the correct CONTENT_LENGTH set. The buffer_limit setting marks the threshold between buffering in memory and using a temporary file: every request body smaller than buffer_limit is buffered in memory. The setting accepts all formats supported by go-humanize. Default: 4MiB. (An example of such a request is git push when the objects to push are larger than http.postBuffer.)

With the unbuffered_output subdirective it is possible to instruct the CGI handler to flush output from the CGI script as soon as possible. By default, the output is buffered into chunks before being written, in order to optimize network usage and to allow the Content-Length to be determined. When unbuffered, bytes are written as soon as possible; this also forces the response to be written in chunked encoding.

If you run into unexpected results with the CGI plugin, you can examine the environment in which your CGI application runs. To enter inspection mode, add the subdirective inspect to your CGI configuration block. This is a development option that should not be used in production. When in inspection mode, the plugin responds to matching requests with a page that displays variables of interest. In particular, it shows the replacement value of {match} and the environment variables to which your CGI application has access. For example, consider this example CGI block: When you request a matching URL, the Caddy server will deliver a text page similar to the following. The CGI application (in this case, wapptclsh) will not be called. This information can be used to diagnose problems with how a CGI application is called. To return to operation mode, remove or comment out the inspect subdirective.

In this example, the Caddyfile looks like this: Note that a request for /show gets mapped to a script named /usr/local/cgi-bin/report/gen. There is no need for any element of the script name to match any element of the match pattern. The contents of /usr/local/cgi-bin/report/gen are: The purpose of this script is to show how request information gets communicated to a CGI script. Note that POST data must be read from standard input; in this particular case, posted data gets stored in the variable POST_DATA, though your script may use a different method to read POST content. Secondly, the SCRIPT_EXEC variable is not a CGI standard; it is provided by this middleware and contains the entire command line, including all arguments, with which the CGI script was executed. When a browser requests the page, the response looks like this: When a client makes a POST request, such as with the following command, the response looks the same except for the following lines:

This small example demonstrates how to write a CGI program in Go (a sketch follows below). The use of a bytes.Buffer makes it easy to report the content length in the CGI header. When this program is compiled and installed as /usr/local/bin/servertime, a corresponding directive in your Caddyfile will make it available.

The module is written in a way that expects the scripts you want it to execute to actually exist. A non-existing or non-executable file is considered a setup error and will yield an HTTP 500.
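The Go program itself was elided above; here is a minimal sketch consistent with the description. The exact output is an assumption — only the use of bytes.Buffer to report the content length is taken from the text:

	// servertime is a small CGI program. It assembles the response body in
	// a bytes.Buffer first so that an accurate Content-Length header can be
	// emitted before the body is written to standard output.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"time"
	)

	func main() {
		var body bytes.Buffer
		fmt.Fprintf(&body, "server time: %s\n", time.Now().Format(time.RFC1123))

		// A CGI response is headers, a blank line, then the body.
		fmt.Printf("Content-Type: text/plain\n")
		fmt.Printf("Content-Length: %d\n\n", body.Len())
		os.Stdout.Write(body.Bytes())
	}

Wired up with a directive along the lines of cgi /servertime /usr/local/bin/servertime (again an assumption about your Caddyfile), the program is executed on each matching request.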
If you want to make sure that only existing scripts are executed, use a more specific matcher, as explained in the Caddy docs; a sketch follows below. With such a matcher in place, a request for a URL like /cgi/foo/bar.pl causes Caddy to check whether the local file ./app/foo/bar.pl exists, and only then to proceed with calling the CGI script.
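As an illustration — a sketch only, with an invented matcher name, and the exact placeholder wiring will depend on your configuration — such a matcher could combine path_regexp with Caddy's standard file matcher:

	@script {
		path_regexp script ^/cgi(/.*)$
		file {
			root ./app
			try_files {re.script.1}
		}
	}
	cgi @script ./app{re.script.1}

Because both conditions of the named matcher must hold, the handler only runs when the requested path begins with /cgi and the corresponding file under ./app actually exists on disk.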