Package capn is a capnproto library for Go. See http://kentonv.github.io/capnproto/

capnpc-go provides the compiler backend for capnp; after installing it to $PATH, capnp files can be compiled with capnp compile -ogo *.capnp.

capnpc-go requires two annotations for all types: these are the Package and Import annotations found in go.capnp. Package is needed to know what package to place at the head of the generated file and what Go name to use when referring to the type from another package. Import should be the fully qualified import path and is used to generate import statements from other packages and to detect when two types are in the same package. Typically these are added as file annotations. For example:

In capnproto, the unit of communication is a message. A message consists of one or more segments to allow easier allocation, but ideally and typically you just make one segment per message. Logically, a message is organized as a tree of objects, with the root always being a struct (as opposed to a list or primitive).

Here is an example of writing a new message. We use the demo schema aircraft.capnp from the aircraftlib directory. You may wish to read the schema before reading this example. << Example moved to its own file: see write_test.go >>

In summary, when you make a new message, you should first make a new segment, then create the root struct in that segment, and only then add your other (contained) objects. This is because, as the spec says, all objects are values with pointer semantics that point into the data in a message or segment.

Messages can be read from or written to a stream, either uncompressed or using the capnproto compression.

In this library a *Segment is taken to refer both to a specific segment and to the containing message. This reduces the number of types generic code needs to deal with and allows objects to be created in the same segment as their outer object (thus reducing the number of far pointers).

Most getters/setters in the library don't return an error. Instead, a get that fails due to an invalid pointer, an out-of-bounds access, etc. will return the default value, and an invalid set will be a no-op. If you really need to know whether a set succeeds, errors are provided by the lower-level Object methods.

Since Go doesn't have any templating, lists are created for the basic types and one level of named types. Lists of the basic types (e.g. List(UInt8), List(Text), etc.) are all provided in this library. Lists of user-named types are created with the user types (e.g. a user struct Foo will create a Foo_List type). capnp schemas that use deeper lists (e.g. List(List(UInt8))) will use PointerList, and the user will have to use the Object.ToList* functions to cast to the correct type.

For adding documentation comments to the generated code, there's the doc annotation. This annotation adds the comment to a struct, enum or field so that godoc will pick it up. For example:

capnpc-go will generate the following for structs: For each group a typedef is created with a different method set covering just the group's fields. That way the following may be used to access a field in a group: Note that group accessors just cast the type and so have no overhead.

Named unions are treated as a group with an inner unnamed union. Unnamed unions generate an enum Type_Which and a corresponding Which() function. Which() should be checked before using the getters, and the default case must always be handled. Setters for single values will set the union discriminator as well as the value.
For voids in unions, there is a void setter that just sets the discriminator. For example:

For groups in unions, there is a group setter that just sets the discriminator. It must be called before the group getter can be used to set values. For example:

capnpc-go generates enum values in all caps. For example, in the capnp file:

In the generated capnp.go file:

In addition, an enum.String() function is generated that will convert the constants to a string for debugging or logging purposes.

By default, the enum name is used as the tag value, but the tags can be customized with a $Go.tag or $Go.notag annotation. For example:

In the generated Go file:
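A rough sketch of the write path described above (hedged: it assumes the capn package plus a generated aircraftlib package "air", and that the demo schema yields a Zdate struct with year/month/day fields, as used in write_test.go):

    // Hedged sketch; the generated names (air.NewRootZdate, SetYear, ...)
    // are assumptions based on the aircraftlib demo schema.
    seg := capn.NewBuffer(nil)   // a fresh segment, i.e. a new message
    d := air.NewRootZdate(seg)   // the root struct must be created first
    d.SetYear(2004)
    d.SetMonth(12)
    d.SetDay(7)
    var buf bytes.Buffer
    seg.WriteTo(&buf)            // stream out the message, uncompressed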
Package cae implements PHP-like Compression and Archive Extensions.
Package basex provides fast base encoding / decoding of any given alphabet using bitcoin style leading zero compression. It is a Go port of https://github.com/cryptocoinjs/base-x
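The underlying technique can be sketched in plain Go; this is illustrative only, not the package's API (encodeBaseX is a hypothetical helper):

    // Illustrative base-X encoding with bitcoin-style leading-zero
    // compression: each leading zero byte becomes alphabet[0], and the
    // rest is repeated divmod over a big integer.
    func encodeBaseX(src []byte, alphabet string) string {
        zeros := 0
        for zeros < len(src) && src[zeros] == 0 {
            zeros++ // count leading zero bytes
        }
        base := big.NewInt(int64(len(alphabet)))
        n := new(big.Int).SetBytes(src)
        mod := new(big.Int)
        var out []byte
        for n.Sign() > 0 {
            n.DivMod(n, base, mod) // peel off one base-X digit at a time
            out = append(out, alphabet[mod.Int64()])
        }
        for i := 0; i < zeros; i++ {
            out = append(out, alphabet[0]) // restore leading zeros
        }
        for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
            out[i], out[j] = out[j], out[i] // digits were produced least-significant first
        }
        return string(out)
    }

For example, with the base58 alphabet, encodeBaseX([]byte{0, 0, 1, 2}, base58Alphabet) yields "115T": the two leading zero bytes survive as "11" rather than vanishing into the big-integer conversion.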
Package xz supports the compression and decompression of xz files. It supports version 1.0.4 of the specification without the non-LZMA2 filters. See http://tukaani.org/xz/xz-file-format-1.0.4.txt
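A minimal round-trip sketch (hedged: assumes the github.com/ulikunitz/xz import path and its NewWriter/NewReader constructors, both of which return an error):

    // Compress into a buffer, then decompress it again.
    var buf bytes.Buffer
    w, err := xz.NewWriter(&buf)
    if err != nil {
        log.Fatal(err)
    }
    if _, err := io.WriteString(w, "hello, xz\n"); err != nil {
        log.Fatal(err)
    }
    if err := w.Close(); err != nil { // Close flushes the xz stream
        log.Fatal(err)
    }
    r, err := xz.NewReader(&buf)
    if err != nil {
        log.Fatal(err)
    }
    io.Copy(os.Stdout, r) // prints "hello, xz"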
Package archive contains traversal utilities for ZIP and tar archives with gzip, bzip2, xz, and LZ4 compression.
Package metrics contains the serializer compression component for metrics.
Package httpgzip provides net/http-like primitives that use gzip compression when serving HTTP requests.
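The general idea can be sketched as a plain net/http middleware (illustrative only, using just the standard library; gzipHandler and gzipResponseWriter are hypothetical names, not this package's API):

    // gzipResponseWriter routes the body through a gzip.Writer while
    // keeping the original ResponseWriter for headers and status.
    type gzipResponseWriter struct {
        http.ResponseWriter
        io.Writer
    }

    func (g gzipResponseWriter) Write(b []byte) (int, error) { return g.Writer.Write(b) }

    // gzipHandler wraps a handler and gzips the response when the
    // client advertises support via Accept-Encoding.
    func gzipHandler(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
                next.ServeHTTP(w, r) // client did not ask for gzip
                return
            }
            w.Header().Set("Content-Encoding", "gzip")
            gz := gzip.NewWriter(w)
            defer gz.Close()
            next.ServeHTTP(gzipResponseWriter{ResponseWriter: w, Writer: gz}, r)
        })
    }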
Package logs contains the serializer compression component for logs.
Package govcr records and replays HTTP interactions for offline unit / behavioural / integration tests, thereby acting as an HTTP mock. This project was inspired by php-vcr, which is a PHP port of VCR for Ruby. For usage and more information, please refer to the project's README at: https://github.com/seborama/govcr

Example_simpleVCR shows how to use govcr in the simplest case, when the default http.Client suffices. Example2 shows the use of a VCR with a custom Client; here, the app executes a GET request. Another example shows a simple use of a Long Play cassette (i.e. compressed). Example_number7BodyInjection shows how bodies can be rewritten: we take a varying ID from the request URL, neutralize it, and also change the ID in the body of the response.
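A hedged usage sketch based on the README (NewVCR and the Client field follow the v1-style API; the cassette name and URL are illustrative):

    // Requests go through vcr.Client; they are recorded to the cassette
    // on first run and replayed from it afterwards.
    vcr := govcr.NewVCR("MyCassette1", nil)
    resp, err := vcr.Client.Get("http://example.com/foo")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp.Status)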
Package kprom provides prometheus plug-in metrics for a kgo client. This package tracks the following metrics under the following names, all metrics being counter vecs: The above metrics can be expanded considerably with options in this package, allowing timings, uncompressed and compressed bytes, and different labels. This can be used in a client like so: More examples are linked in the main project readme: https://github.com/twmb/franz-go/#metrics--logging By default, metrics are installed under a new prometheus registry, but this can be overridden with the Registry option. Note that seed brokers use broker IDs prefixed with "seed_", with the number corresponding to which seed it is.
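A hedged wiring sketch (kprom.NewMetrics, the Handler method, kgo.WithHooks and kgo.SeedBrokers are the documented entry points; the namespace and broker address are illustrative):

    // Hook the metrics into the client, then expose them over HTTP.
    m := kprom.NewMetrics("kgo")
    cl, err := kgo.NewClient(
        kgo.WithHooks(m),
        kgo.SeedBrokers("localhost:9092"),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer cl.Close()
    http.Handle("/metrics", m.Handler())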
bindata converts any file into manageable Go source code. Useful for embedding binary data into a Go program. The file data is optionally gzip compressed before being converted to a raw byte slice.

The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call.

When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change.

This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of JavaScript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets.

The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue.

The default behavior is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples. This would be the default mode, using an extra memcopy but giving a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but using the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error.

The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behavior of the program is to use compression.

The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag, `-prefix`.
This accepts a portion of a path name, which should be stripped from the map keys and function names. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get:

With the optional Tags field, you can specify any Go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line at the beginning of the output file and must follow the build tags syntax specified by the go tool.
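A hedged sketch of driving Translate() programmatically (the field names follow the documented Config struct; the package name, paths and tags here are illustrative):

    // Embed everything under static/ into assets/bindata.go,
    // stripping the "static/" prefix from the generated names.
    cfg := bindata.NewConfig()
    cfg.Package = "assets"
    cfg.Output = "assets/bindata.go"
    cfg.Prefix = "static/"
    cfg.Tags = "embed"
    cfg.Input = []bindata.InputConfig{{Path: "static", Recursive: true}}
    if err := bindata.Translate(cfg); err != nil {
        log.Fatal(err)
    }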
Package httpgzip is a simple wrapper around http.FileServer that looks for a compressed version of a file and serves it if the client requested compressed content.
Package websocket implements the WebSocket protocol defined in RFC 6455.

The Conn type represents a WebSocket connection. A server application uses the Upgrade function from an Upgrader object with an HTTP request handler to get a pointer to a Conn:

Call the connection's WriteMessage and ReadMessage methods to send and receive messages as a slice of bytes. This snippet of code shows how to echo messages using these methods:

In the above snippet of code, p is a []byte and messageType is an int with value websocket.BinaryMessage or websocket.TextMessage.

An application can also send and receive messages using the io.WriteCloser and io.Reader interfaces. To send a message, call the connection's NextWriter method to get an io.WriteCloser, write the message to the writer, and close the writer when done. To receive a message, call the connection's NextReader method to get an io.Reader and read until io.EOF is returned. This snippet shows how to echo messages using the NextWriter and NextReader methods:

The WebSocket protocol distinguishes between text and binary data messages. Text messages are interpreted as UTF-8 encoded text. The interpretation of binary messages is left to the application. This package uses the TextMessage and BinaryMessage integer constants to identify the two data message types. The ReadMessage and NextReader methods return the type of the received message. The messageType argument to the WriteMessage and NextWriter methods specifies the type of a sent message. It is the application's responsibility to ensure that text messages are valid UTF-8 encoded text.

The WebSocket protocol defines three types of control messages: close, ping and pong. Call the connection's WriteControl, WriteMessage or NextWriter methods to send a control message to the peer. Connections handle received close messages by sending a close message to the peer and returning a *CloseError from the NextReader, ReadMessage or the message Read method. Connections handle received ping and pong messages by invoking callback functions set with the SetPingHandler and SetPongHandler methods. The callback functions are called from the NextReader, ReadMessage and message Read methods. The default ping handler sends a pong to the peer. The application's reading goroutine can block for a short time while the handler writes the pong data to the connection.

The application must read the connection to process ping, pong and close messages sent from the peer. If the application is not otherwise interested in messages from the peer, then the application should start a goroutine to read and discard messages from the peer. A simple example is:

Connections support one concurrent reader and one concurrent writer. Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently. The Close and WriteControl methods can be called concurrently with all other methods.

Web browsers allow JavaScript applications to open a WebSocket connection to any host. It's up to the server to enforce an origin policy using the Origin request header sent by the browser. The Upgrader calls the function specified in the CheckOrigin field to check the origin. If the CheckOrigin function returns false, then the Upgrade method fails the WebSocket handshake with HTTP status 403.
If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail the handshake if the Origin request header is present and not equal to the Host request header. An application can allow connections from any origin by specifying a function that always returns true:

The deprecated Upgrade function does not enforce an origin policy. It's the application's responsibility to check the Origin header before calling Upgrade.

Per-message compression extensions (RFC 7692) are experimentally supported by this package in a limited capacity. Setting the EnableCompression option to true in the Dialer or Upgrader will attempt to negotiate per-message deflate support. If compression was successfully negotiated with the connection's peer, any message received in compressed form will be automatically decompressed. All Read methods will return uncompressed bytes. Per-message compression of messages written to a connection can be enabled or disabled by calling the corresponding Conn method:

Currently this package does not support compression with "context takeover". This means that messages must be compressed and decompressed in isolation, without retaining sliding-window or dictionary state across messages. For more details refer to RFC 7692.

Use of compression is experimental and may result in decreased performance.
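For instance, an echo handler built from the documented Upgrader and Conn methods looks like this (a sketch with minimal error handling):

    var upgrader = websocket.Upgrader{} // default options; set CheckOrigin as needed

    func echo(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            log.Print("upgrade:", err)
            return
        }
        defer conn.Close()
        for {
            mt, p, err := conn.ReadMessage() // mt is TextMessage or BinaryMessage
            if err != nil {
                log.Print("read:", err)
                return
            }
            if err := conn.WriteMessage(mt, p); err != nil {
                log.Print("write:", err)
                return
            }
        }
    }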
Package smaz is an implementation of the smaz library (https://github.com/antirez/smaz) for compressing small strings.
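A hedged usage sketch (assuming a Compress/Decompress pair as in the common Go port, where Decompress returns an error for corrupt input):

    // Round-trip a short string through smaz compression.
    compressed := smaz.Compress([]byte("the end"))
    original, err := smaz.Decompress(compressed)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s\n", original) // "the end"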
Package mafsa implements Minimal Acyclic Finite State Automata (MA-FSA) in a space-optimized way as described by Daciuk, Mihov, Watson, and Watson in their paper, "Incremental Construction of Minimal Acyclic Finite-State Automata" (2000). It also implements Minimal Perfect Hashing (MPH) as described by Lucchesi and Kowaltowski in their paper, "Applications of Finite Automata Representing Large Vocabularies" (1992).

Unscientifically speaking, this package lets you store large amounts of strings (with Unicode) in memory so that membership queries, prefix lookups, and fuzzy searches are fast. And because minimal perfect hashing is included, you can associate each entry in the tree with more data used by your application. See the README or the end of this documentation for a brief tutorial.

MA-FSA structures are a specific type of Deterministic Acyclic Finite State Automaton (DAFSA) which fold equivalent state transitions into each other starting from the suffix of each entry. Typical construction algorithms involve building out the entire tree first, then minimizing the completed tree. However, the method described in the paper above allows the tree to be minimized after every word insertion, provided the insertions are performed in lexicographical order, which drastically reduces memory usage compared to regular prefix trees ("tries").

The goal of this package is to provide a simple, useful, and correct implementation of MA-FSA. Though more complex algorithms exist for removal of items and unordered insertion, these features may be outside the scope of this package.

Membership queries are on the order of O(n), where n is the length of the input string, so basically O(1). It is advisable to keep n small since long entries without much in common, especially in the beginning or end of the string, will quickly overrun the optimizations that are available. In those cases, n-gram implementations might be preferable, though these will use more CPU.

This package provides two kinds of MA-FSA implementations. One, the BuildTree, facilitates the construction of an optimized tree and allows ordered insertions. The other, MinTree, is effectively read-only but uses significantly less memory and is ideal for production environments where only reads will be occurring. Usually your build process will be separate from your production application, which will make heavy use of reading the structure.

To use this package, create a BuildTree and insert your items in lexicographical order: The tree is now compressed to a minimum number of nodes and is ready to be saved. In your production application, then, you can read the file into a MinTree directly: The mt variable is a *MinTree which has the same data as the original BuildTree, but without all the extra "scaffolding" that was required for adding new elements. The package provides some basic read mechanisms.
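A hedged end-to-end sketch of that flow (New, Insert, Finish, Save, Load and Contains follow the package README; the file name is illustrative):

    // Build phase: insert in lexicographical order, then minimize.
    bt := mafsa.New()
    bt.Insert("cities") // insertions must be in lexicographical order
    bt.Insert("city")
    bt.Finish() // finalize minimization; the tree is ready to be saved
    bt.Save("words.mafsa")

    // Production phase: load the compact read-only form.
    mt, err := mafsa.Load("words.mafsa")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(mt.Contains("city")) // true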
Package gzipimpl implements the compression component interface.
Package zstdimpl implements the compression component interface.
Package compression provides compression for trace payloads.