Package extract allows you to easily extract archives in zip, tar.gz, or tar.bz2 format. Most of the time you'll just need to call the proper function with a Reader and a destination: ``` Sometimes you'll want a bit more control over the files, such as extracting a subfolder of the archive. In these cases you can specify a renamer func that will change the path of every file: ``` If you don't know which archive you're dealing with (life really is always a surprise), you can use Archive, which will infer the type of archive from its first bytes.
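A minimal sketch of the renamer-based extraction described above, assuming the commonly used import path github.com/codeclysm/extract/v3 (adjust to your module's version):

```
package main

import (
	"context"
	"log"
	"os"
	"strings"

	"github.com/codeclysm/extract/v3" // import path is an assumption; adjust to your version
)

func main() {
	f, err := os.Open("archive.tar.gz")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Extract only the "docs/" subfolder by stripping that prefix from every path.
	renamer := func(path string) string {
		return strings.TrimPrefix(path, "docs/")
	}

	// Archive sniffs the format (zip, tar.gz, tar.bz2) from the first bytes.
	if err := extract.Archive(context.Background(), f, "/tmp/out", renamer); err != nil {
		log.Fatal(err)
	}
}
```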
Package couchdb is a driver for connecting with a CouchDB server over HTTP. Use the `couch` driver name when using this driver. The DSN should be a full URL, likely with login credentials: The CouchDB driver generally interprets kivik.Options keys and values as URL query parameters. Values of the following types will be converted to their appropriate string representation when URL-encoded: Passing any other type will return an error. The only exceptions to the above rule are: The CouchDB driver supports a number of authentication methods. For most uses, you don't need to worry about authentication at all--just include authentication credentials in your connection DSN: This will use Cookie authentication by default. To use one of the explicit authentication mechanisms, you'll need to use kivik's Authenticate method. For example: Normally, to include an attachment in a CouchDB document, it must be base-64 encoded, which leads to increased network traffic and higher CPU load. CouchDB also supports the option to upload multiple attachments in a single request using the 'multipart/related' content type. See http://docs.couchdb.org/en/stable/api/document/common.html#creating-multiple-attachments As an experimental feature, this is now supported by the Kivik CouchDB driver as well. To take advantage of this capability, the `doc` argument to the Put() method must be either: With this in place, the CouchDB driver will switch to `multipart/related` mode, sending each attachment in binary format rather than base-64 encoding it. To function properly, each attachment must have an accurate Size value. If the Size value is unset, the entire attachment may be read to determine its size prior to sending it over the network, leading to delays and unnecessary I/O and CPU usage. The simplest way to ensure efficiency is to use the NewAttachment() method provided by this package. See the documentation on that method for proper usage. Example: To disable the `multipart/related` capabilities entirely, you may pass the `NoMultipartPut` option with any value. This will fall back to the default of inline base-64 encoding the attachments. Example: If you find yourself wanting to disable this feature, due to bugs or performance, please consider filing a bug report against Kivik as well, so we can look for a solution that will allow using this optimization.
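A minimal connection sketch, assuming kivik v3 import paths (v4 moves the driver and drops the context argument from DB()); the DSN, database name, and document are placeholders:

```
package main

import (
	"context"
	"fmt"
	"log"

	_ "github.com/go-kivik/couchdb/v3" // registers the "couch" driver (v3 path assumed)
	"github.com/go-kivik/kivik/v3"
)

func main() {
	// Credentials go straight into the DSN; Cookie auth is used by default.
	client, err := kivik.New("couch", "http://admin:secret@localhost:5984/")
	if err != nil {
		log.Fatal(err)
	}

	db := client.DB(context.TODO(), "mydb")
	rev, err := db.Put(context.TODO(), "doc-id", map[string]interface{}{"greeting": "hello"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created revision", rev)
}
```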
Package zap provides fast, structured, leveled logging. For applications that log in the hot path, reflection-based serialization and string formatting are prohibitively expensive - they're CPU-intensive and make many small allocations. Put differently, using json.Marshal and fmt.Fprintf to log tons of interface{} makes your application slow. Zap takes a different approach. It includes a reflection-free, zero-allocation JSON encoder, and the base Logger strives to avoid serialization overhead and allocations wherever possible. By building the high-level SugaredLogger on that foundation, zap lets users choose when they need to count every allocation and when they'd prefer a more familiar, loosely typed API. In contexts where performance is nice, but not critical, use the SugaredLogger. It's 4-10x faster than other structured logging packages and supports both structured and printf-style logging. Like log15 and go-kit, the SugaredLogger's structured logging APIs are loosely typed and accept a variadic number of key-value pairs. (For more advanced use cases, they also accept strongly typed fields - see the SugaredLogger.With documentation for details.) By default, loggers are unbuffered. However, since zap's low-level APIs allow buffering, calling Sync before letting your process exit is a good habit. In the rare contexts where every microsecond and every allocation matter, use the Logger. It's even faster than the SugaredLogger and allocates far less, but it only supports strongly-typed, structured logging. Choosing between the Logger and SugaredLogger doesn't need to be an application-wide decision: converting between the two is simple and inexpensive. The simplest way to build a Logger is to use zap's opinionated presets: NewExample, NewProduction, and NewDevelopment. These presets build a logger with a single function call: Presets are fine for small projects, but larger projects and organizations naturally require a bit more customization. For most users, zap's Config struct strikes the right balance between flexibility and convenience. See the package-level BasicConfiguration example for sample code. More unusual configurations (splitting output between files, sending logs to a message queue, etc.) are possible, but require direct use of go.uber.org/zap/zapcore. See the package-level AdvancedConfiguration example for sample code. The zap package itself is a relatively thin wrapper around the interfaces in go.uber.org/zap/zapcore. Extending zap to support a new encoding (e.g., BSON), a new log sink (e.g., Kafka), or something more exotic (perhaps an exception aggregation service, like Sentry or Rollbar) typically requires implementing the zapcore.Encoder, zapcore.WriteSyncer, or zapcore.Core interfaces. See the zapcore documentation for details. Similarly, package authors can use the high-performance Encoder and Core implementations in the zapcore package to build their own loggers. An FAQ covering everything from installation errors to design decisions is available at https://github.com/uber-go/zap/blob/master/FAQ.md.
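A short sketch of the presets and the two APIs described above; the field names and messages are illustrative:

```
package main

import "go.uber.org/zap"

func main() {
	// Production preset: JSON output, sampling, Info level and above.
	logger, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	defer logger.Sync() // flush any buffered entries before exit

	// Strongly typed, allocation-conscious API.
	logger.Info("request handled",
		zap.String("path", "/api/items"),
		zap.Int("status", 200),
	)

	// Loosely typed, printf-friendly API built on the same core.
	sugar := logger.Sugar()
	sugar.Infow("request handled", "path", "/api/items", "status", 200)
	sugar.Infof("handled %s in %d ms", "/api/items", 42)
}
```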
Demonstrates how to access differ information
Demonstrates how to append cols/rows/sheets
Demonstrates how to copy information in a sheet
Demonstrates how to delete information
Demonstrates how to create/open/save XLSX files
Demonstrates how to add style formatting
Demonstrates how to get/set the value of a cell
Demonstrates how to insert cols/rows
Demonstrates how to iterate
Demonstrates how to set options of rows/cols/sheets
Demonstrates how to open a sheet in streaming mode
Demonstrates how to update information
Demonstrates how to walk cells using a callback
Package ical implements the iCalendar file format. iCalendar is defined in RFC 5545.
Package extract allows you to easily extract archives in zip, tar.gz, or tar.bz2 format. Most of the time you'll just need to call the proper function with a buffer and a destination: ``` Sometimes you'll want a bit more control over the files, such as extracting a subfolder of the archive. In these cases you can specify a renamer func that will change the path of every file: ``` If you don't know which archive you're dealing with (life really is always a surprise), you can use Archive, which will infer the type of archive from its first bytes.
bindata converts any file into manageable Go source code. Useful for embedding binary data into a Go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of JavaScript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behavior is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but giving a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but using the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behavior of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag, `-prefix`. 
This accepts a portion of a path name, which should be stripped from the map keys and function names. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get: With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line at the beginning of the output file and must follow the build tags syntax specified by the go tool.
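A hedged sketch of how the generated assets are consumed: it assumes `go-bindata` (or a Translate() call) has already produced a bindata.go in the same package, and the asset name "data/index.html" is hypothetical:

```
package main

import (
	"fmt"
	"log"
)

func main() {
	// Asset returns the (optionally gzip-decompressed) file contents.
	// With the Debug/-debug option the same call reads the file from disk instead.
	contents, err := Asset("data/index.html")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("loaded %d bytes\n", len(contents))

	// AssetNames lists every key in the generated _bindata map.
	for _, name := range AssetNames() {
		fmt.Println(name)
	}
}
```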
Package nanolog is a package to speed up your logging. The format string is inspired by the full-fledged fmt.Fprintf function. The codes are unique to this package, so normal fmt documentation is not applicable. The format string is similar to fmt in that it uses the percent sign (a.k.a. the modulo operator) to signify the start of a format code. The reader is greedy, meaning that the parser will attempt to read as much as it can for a code before it stops. E.g. if you have a generic int in the middle of your format string immediately followed by the number 1 and a space ("%i1 "), the parser may complain saying that it encountered an invalid code. To fix this, use curly braces after the percent sign to surround the code: "%{i}1 ". Kinds and their corresponding format codes: The file format has two categories of data: The differentiation is done with the entryType, which is prefixed on to the record. The log line records are formatted as follows: The log entry records are formatted as follows: The data is serialized as follows:

	Bool:       1 byte (false: 0, true: 1)
	String:     4 + len(string) bytes
	            Length: 4 bytes - little endian uint32
	            String bytes: Length bytes
	int family:
	            int:    8 bytes - int64 as little endian uint64
	            int8:   1 byte
	            int16:  2 bytes - int16 as little endian uint16
	            int32:  4 bytes - int32 as little endian uint32
	            int64:  8 bytes - int64 as little endian uint64
	uint family:
	            uint:   8 bytes - little endian uint64
	            uint8:  1 byte
	            uint16: 2 bytes - little endian uint16
	            uint32: 4 bytes - little endian uint32
	            uint64: 8 bytes - little endian uint64
	float32:    4 bytes as little endian uint32 from float32 bits
	float64:    8 bytes as little endian uint64 from float64 bits
	complex64:  Real:    4 bytes as little endian uint32 from float32 bits
	            Complex: 4 bytes as little endian uint32 from float32 bits
	complex128: Real:    8 bytes as little endian uint64 from float64 bits
	            Complex: 8 bytes as little endian uint64 from float64 bits
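A rough usage sketch; the helper names AddLogger, SetWriter, Log, and Flush are assumptions recalled from the package and should be checked against its documentation:

```
package main

import (
	"log"
	"os"

	"github.com/ScottMansfield/nanolog" // import path assumed
)

// AddLogger registers a format string once and returns a handle for it.
// (Function names here are assumptions; verify against the package docs.)
var h = nanolog.AddLogger("worker %{i} processed %{i} items")

func main() {
	out, err := os.Create("app.nanolog")
	if err != nil {
		log.Fatal(err)
	}
	if err := nanolog.SetWriter(out); err != nil {
		log.Fatal(err)
	}

	// Arguments must match the kinds declared in the format string.
	if err := nanolog.Log(h, 7, 42); err != nil {
		log.Fatal(err)
	}

	if err := nanolog.Flush(); err != nil {
		log.Fatal(err)
	}
}
```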
Package icns implements an encoder for Apple's `.icns` file format. Reference: "https://en.wikipedia.org/wiki/Apple_Icon_Image_format". icns files allow for high resolution icons to make your apps look sexy. The most common ways to generate icns files are 1. use `iconutil`, which is a Mac-native CLI utility, or 2. use tools that wrap `ImageMagick`, which adds a large dependency to your project for such a simple use case. With this library you can use pure Go to create icns files from any source image, given that you can decode it into an `image.Image`, without any heavyweight dependencies or subprocessing required. You can also use this library to create icns files on Windows and Linux. A small CLI app, `icnsify`, is provided to allow you to create icns files using this library from the command line. It supports piping, which is something `iconutil` does not do, making it substantially easier to wrap. Note: All icons within the icns are sized for high-DPI retina screens, using the appropriate icns OSTypes.
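A minimal encoding sketch, assuming the import path github.com/jackmordaunt/icns and an Encode(io.Writer, image.Image) function; check the package docs for the exact signature:

```
package main

import (
	"image/png"
	"log"
	"os"

	"github.com/jackmordaunt/icns" // import path assumed
)

func main() {
	src, err := os.Open("icon.png")
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	img, err := png.Decode(src)
	if err != nil {
		log.Fatal(err)
	}

	dst, err := os.Create("icon.icns")
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	// Encode resizes the source image to the retina OSTypes and writes the icns container.
	if err := icns.Encode(dst, img); err != nil {
		log.Fatal(err)
	}
}
```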
Package sox is a set of bindings for the libSoX sound library. LibSoX is a library of sound sample file format readers/writers and sound effects processors. It is mainly developed for use by SoX but is useful for any sound application.
Package goconf is a simple, straightforward Go configuration library. Configuration is defined via structs and tags. Values are loaded from files. Any file format can be used. Simple configuration setup using TOML files
Package stl implements functions to read, write, and transform files in the Stereolithography/Surface Tessellation Language (.stl) file format used in 3D modelling. The format specification was taken from http://www.ennex.com/~fabbers/StL.asp, found at http://en.wikipedia.org/wiki/STL_%28file_format%29. While STL stores the data in single precision 32 bit floating point numbers, the stl package does all calculations beyond simple addition in double precision 64 bit (float64). Usage Example Everything that operates on a model is defined as a method of Solid. Note that the STL format has two variants: a human-readable ASCII variant, and a more compact and precise binary variant, which is preferable. The Solid.BinaryHeader field and the Triangle.Attributes fields will be empty after reading, as these are not part of the ASCII format. The Solid.Name field is read from the first line after "solid ". It is not checked against the name at the end of the file after "endsolid ". The stl package will also not cope with Unicode byte order marks, which some text editors might automatically place at the beginning of a file. The Solid.BinaryHeader field is filled with all 80 bytes of header data. Then, ReadFile will try to fill solid.Name with an ASCII string read from the header data from the first byte until a \0 or a non-ASCII character is detected. As always when you do linear transformations on floating point numbers, you get numerical errors. So you should expect a vertex being rotated by 360° not to end up at exactly the original coordinates, but instead just very close to them. As the error is usually far smaller than the available precision of 3D printing applications, this is not an issue in most cases. You can implement the Writer interface to directly write into your own data structures. This way you can use the CopyFile and CopyAll functions.
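A brief sketch of the read-transform-write flow; ReadFile and WriteFile follow the prose above, while the import path and the Scale method are assumptions to verify against the package docs:

```
package main

import (
	"log"

	"github.com/hschendel/stl" // import path assumed
)

func main() {
	// ReadFile detects the ASCII or binary variant automatically.
	solid, err := stl.ReadFile("part.stl")
	if err != nil {
		log.Fatal(err)
	}

	// Transformations are methods on Solid; Scale is used here as an
	// illustrative example of such a method.
	solid.Scale(0.5)

	if err := solid.WriteFile("part-scaled.stl"); err != nil {
		log.Fatal(err)
	}
}
```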
Package cronowriter provides a simple file writer that writes messages to a path built from a format specification. The file path is constructed based on the current date and time, like cronolog. Installation The format specifications can be found here: https://github.com/lestrrat/go-strftime#supported-conversion-specifications
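A minimal sketch, assuming the import path github.com/utahta/go-cronowriter and its MustNew constructor; the log path is illustrative:

```
package main

import (
	"log"

	cronowriter "github.com/utahta/go-cronowriter" // import path and MustNew are assumptions
)

func main() {
	// The path is formatted with strftime-style specifiers; a new file is
	// opened automatically when the formatted path changes (e.g. at midnight).
	w := cronowriter.MustNew("/var/log/myapp/access.%Y%m%d.log")

	log.SetOutput(w)
	log.Println("hello from cronowriter")
}
```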
Package goics is a toolkit for encoding and decoding ics/iCal/iCalendar files. This is a work in progress that will try to incorporate as many exceptions and variants of the format as possible. It is a toolkit because the user has to define how they need the data. The idea is built around something similar to the consumer/provider pattern. Say we want to decode a stream of vevents from a .ics file into a custom type Events. Our custom type will need to implement the ICalConsumer interface, where the type picks up data from the format. The decoding process will be something like this: I have chosen this model because this format is a pain and also because I don't like the reflect package very much. For encoding objects to the iCal format, something similar has to be done: the object emitting elements for the encoder will have to implement ICalEmiter, returning a Component structure to be encoded. This has also been done because every package could need to encode values and keys its own way. Just for encoding time, I found more than three types of lines. The Componenter is an interface that every component that can be encoded to iCal implements. Properties have to be stored as strings; the conversion from the original type to its string representation must be done in the emitter. There are some helpers for date conversion, and in the future I will add more, for encoding params in the string, and also for handling lists and recurrent events. A simple example, not functional, used for testing:
Package ogdl is used to process OGDL, the Ordered Graph Data Language. OGDL is a textual format for writing trees or graphs of text, where indentation and spaces define the structure. Here is an example: The language is simple, both in its textual representation and in its number of productions (the specification rules), allowing for compact implementations. OGDL character streams are normally formed by Unicode characters and encoded as UTF-8 strings, but any encoding that is ASCII transparent is compatible with the specification. See the full spec at http://ogdl.org. To install this package just do: If we have a text file 'config.ogdl' containing: then, will print If the timeout parameter were not present, the default value (60) would be assigned to 'to'. The default value is optional, but be aware that Int64() will return 0 in case the parameter doesn't exist. The configuration file can be written more concisely: The package includes a template processor. It takes an arbitrary input stream with some variables in it, and produces an output stream with the variables resolved against a Graph object which acts as context. For example (given the previous config file): string(b) is then: Some rules are followed:
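A rough sketch of reading the configuration described above; the import path and the exact FromFile/Get/Int64 signatures are assumptions based on the prose:

```
package main

import (
	"fmt"

	"github.com/rveen/ogdl" // import path assumed
)

func main() {
	// config.ogdl might contain, for example:
	//
	//   eth0
	//     timeout 20
	//
	// NOTE: FromFile, Get, and Int64 are used here as the prose describes them;
	// verify their exact signatures against the package documentation.
	g := ogdl.FromFile("config.ogdl")

	// Int64 takes an optional default, used when the path does not exist;
	// without a default it returns 0 for a missing parameter.
	to := g.Get("eth0.timeout").Int64(60)

	fmt.Println("timeout:", to)
}
```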
Package flagsfiller makes Go's flag package pleasant to use by mapping the fields of a given struct into flags in a FlagSet. A FlagSetFiller is created with the New constructor, passing it any desired FillerOptions. With that, call Fill, passing it a flag.FlagSet, such as flag.CommandLine, and your struct to be mapped. Even a simple struct with no special changes can be used, such as: After calling Parse on the flag.FlagSet, the corresponding fields of the mapped struct will be populated with values passed from the command line. For an even quicker start, flagsfiller provides a convenience Parse function that does the same as the snippet above in one call: By default, the flags are named by taking the field name and performing a word-wise conversion to kebab-case. For example, the field named "MyMultiWordField" becomes the flag named "my-multi-word-field". The naming strategy can be changed by passing a custom Renamer using the WithFieldRenamer option in the constructor. Additional aliases, such as short names, can be declared with the `aliases` tag as a comma-separated list: FlagSetFiller supports nested structs and computes the flag names by prefixing the field name of the struct to the names of the fields it contains. For example, the following maps to the flags named remote-host, remote-auth-username, and remote-auth-password: To declare a flag's usage add a `usage:""` tag to the field, such as: Since flag.UnquoteUsage normally uses back quotes to locate the argument placeholder name but struct tags also use back quotes, flagsfiller will instead use [square brackets] to define the placeholder name, such as: results in the rendered output: To declare the default value of a flag, you can either set a field's value before passing the struct to process, such as: or add a `default:""` tag to the field. Be sure to provide a valid string that can be converted into the field's type. For example, FlagSetFiller also includes support for []string fields. Repetition of the argument appends to the slice and/or an argument value can contain a comma or newline separated list of values. For example: results in a three element slice. The default tag's value is provided as a comma-separated list, such as FlagSetFiller also includes support for map[string]string fields. Each argument entry is a key=value and/or repetition of the arguments adds to the map or multiple entries can be comma or newline separated in a single argument value. For example: results in a map with three entries. The default tag's value is provided as a comma-separated list of key=value entries, such as FlagSetFiller also supports the following field types: - net.IP: format used by net.ParseIP() - net.IPNet: format used by net.ParseCIDR() - net.HardwareAddr (MAC addr): format used by net.ParseMAC() - time.Time: format is the layout string used by time.Parse(); the default layout is time.DateTime, which can be overridden with the field tag "layout" - slog.Level: parsed as specified by https://pkg.go.dev/log/slog#Level.UnmarshalText, such as "info" To activate the setting of flag values from environment variables, pass the WithEnv option to flagsfiller.New or flagsfiller.Parse. That option takes a prefix that will be prepended to the resolved field name and then the whole thing is converted to SCREAMING_SNAKE_CASE. The environment variable name will be automatically included in the flag usage along with the standard inclusion of the default value. 
For example, using the option WithEnv("App") along with the following field declaration would render the following usage: To override the naming of a flag, the field can be declared with the tag `flag:"name"`, where the given name will be used exactly as the flag name. An empty string for the name indicates the field should be ignored and no flag is declared. For example, Environment variable naming and processing can be overridden with the `env:"name"` tag, where the given name will be used exactly as the mapped environment variable name. If the WithEnv or WithEnvRenamer options were enabled, a field can be excluded from environment variable mapping by giving an empty string. Conversely, environment variable mapping can be enabled per field with `env:"name"` even when the flagsfiller-wide option was not included. For example, Fields of any type that implements the encoding.TextUnmarshaler interface are also supported.
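A small end-to-end sketch combining the tags and options described above; the struct fields, defaults, and prefix are illustrative:

```
package main

import (
	"flag"
	"fmt"
	"log"

	"github.com/itzg/go-flagsfiller"
)

type Config struct {
	Host        string   `default:"localhost" usage:"the remote [host] to contact"`
	Port        int      `default:"8080"`
	Tags        []string `usage:"repeat the flag or comma-separate values to build a list"`
	AccessToken string   `aliases:"t" env:"ACCESS_TOKEN"`
}

func main() {
	var config Config

	// Fill maps the struct fields onto kebab-case flags in flag.CommandLine,
	// with environment variable fallback enabled via WithEnv.
	filler := flagsfiller.New(flagsfiller.WithEnv("App"))
	if err := filler.Fill(flag.CommandLine, &config); err != nil {
		log.Fatal(err)
	}
	flag.Parse()

	fmt.Printf("%+v\n", config)
}
```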
Package mux implements a request router and dispatcher. The name mux stands for "HTTP request multiplexer". Like the standard http.ServeMux, mux.Router matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are: Let's start registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: Groups can be used inside patterns, as long as they are non-capturing (?:re). For example: The names are used to create a map of route variables which can be retrieved calling mux.Vars(): And this is all you need to know about the basic usage. More advanced options are explained below. Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables: There are several other matchers that can be added. To match path prefixes: ...or HTTP methods: ...or URL schemes: ...or header values: ...or query values: ...or to use a custom matcher function: ...and finally, it is possible to combine several matchers in a single route: Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting". For example, let's say we have several URLs that should only match when the host is "www.example.com". Create a route for that host and get a "subrouter" from it: Then register routes in the subrouter: The three URL paths we registered above will only be tested if the domain is "www.example.com", because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route. Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register its paths relatively to a given subrouter. There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as base for their paths: Note that the path provided to PathPrefix() represents a "wildcard": calling PathPrefix("/static/").Handler(...) means that the handler will be passed any request that matches "/static/*". This makes it easy to serve static files with mux: Now let's see how to build registered URLs. Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name calling Name() on a route. For example: To build a URL, get the route and call the URL() method, passing a sequence of key/value pairs for the route variables. For the previous route, we would do: ...and the result will be a url.URL with the following path: This also works for host and query value variables: All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match. Regex support also exists for matching Headers within a route. 
For example, we could do: ...and the route will match both requests with a Content-Type of `application/json` as well as `application/text` There's also a way to build only the URL host or path for a route: use the methods URLHost() or URLPath() instead. For the previous route, we would do: And if you use subrouters, host and path defined separately can be built as well:
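A compact sketch tying the pieces above together: routes with variables, a host-restricted subrouter serving static files, and URL reversing (paths and handlers are illustrative):

```
package main

import (
	"fmt"
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()

	// Path variables, with an optional regexp pattern.
	r.HandleFunc("/products/{key}", func(w http.ResponseWriter, req *http.Request) {
		vars := mux.Vars(req)
		fmt.Fprintf(w, "product: %s\n", vars["key"])
	}).Methods("GET").Name("product")

	r.HandleFunc("/articles/{category}/{id:[0-9]+}", func(w http.ResponseWriter, req *http.Request) {
		vars := mux.Vars(req)
		fmt.Fprintf(w, "category=%s id=%s\n", vars["category"], vars["id"])
	})

	// Subrouter restricted to a host, serving static files under a prefix.
	s := r.Host("www.example.com").Subrouter()
	s.PathPrefix("/static/").Handler(http.StripPrefix("/static/", http.FileServer(http.Dir("./static"))))

	// Reversing a named route.
	u, _ := r.Get("product").URL("key", "42")
	fmt.Println(u.Path) // /products/42

	http.ListenAndServe(":8080", r)
}
```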
gcov2lcov - convert golang coverage files to the lcov format. Copyright (c) 2019 Jan Delgado Copyright (c) 2019 Richard S Allinson Credits: This tool is based on covfmt (https://github.com/ricallinson/covfmt) and uses some parts of goveralls (https://github.com/mattn/goveralls).
Package go-liblzma is a wrapper for liblzma and the XZ file format.
Package seelog implements logging functionality with flexible dispatching, filtering, and formatting. To create a logger, use one of the following constructors: Example: The "defer" line is important because if you are using asynchronous logger behavior, without this line you may end up losing some messages when you close your application, because they are processed in another non-blocking goroutine. To avoid that, you explicitly defer flushing all messages before closing. A logger created using one of the LoggerFrom* funcs can be used directly by calling one of the main log funcs. Example: Having loggers as variables is convenient if you are writing your own package with internal logging or if you have several loggers with different options. But for most standalone apps it is more convenient to use package level funcs and vars. There is a package level var 'Current' made for it. You can replace it with another logger using 'ReplaceLogger' and then use package level funcs: The last lines do the same as In this example the 'Current' logger was replaced using a 'ReplaceLogger' call and became equal to the 'logger' variable created from config. This way you are able to use package level funcs instead of passing the logger variable. Seelog's main point is to configure the logger via config files and not in code. The configuration is read by the LoggerFrom* funcs. These funcs read xml configuration from different sources and try to create a logger using it. All the configuration features are covered in detail in the official wiki: https://github.com/cihub/seelog/wiki. There are many sections covering different aspects of seelog, but the most important for understanding configs are: After you understand these concepts, check the 'Reference' section on the main wiki page to get the up-to-date list of dispatchers, receivers, formats, and logger types. Here is an example config with all these features: This config represents a logger with an adaptive timeout between log messages (check the logger types reference) which logs to console, all.log, and errors.log depending on the log level. Its output formats also depend on log level. This logger will only use log level 'debug' and higher (minlevel is set) for all files with names that don't start with 'test'. For files starting with 'test' this logger prohibits all levels below 'error'. Although configuration using code is not recommended, it is sometimes needed and it is possible to do with seelog. Basically, what you need to do to get started is to create constraints, exceptions and a dispatcher tree (same as with config). Most of the New* functions in this package are used to provide such capabilities. Here is an example of configuration in code that demonstrates an async loop logger that logs to a simple split dispatcher with a console receiver using a specified format and is filtered using top-level min-max constraints and one exception for the 'main.go' file. So, this is basically a demonstration of configuration of most of the features: To learn seelog features faster you should check the examples package: https://github.com/cihub/seelog-examples It contains many example configs and use cases.
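A minimal sketch of the config-file workflow described above; seelog.xml stands in for your own configuration:

```
package main

import (
	"log"

	seelog "github.com/cihub/seelog"
)

func main() {
	logger, err := seelog.LoggerFromConfigAsFile("seelog.xml")
	if err != nil {
		log.Fatal(err)
	}

	// Replace the package-level 'Current' logger so the package-level funcs use this config.
	seelog.ReplaceLogger(logger)
	defer seelog.Flush() // important with asynchronous logger types

	seelog.Infof("service started on %s", ":8080")
	seelog.Debug("only shown if minlevel allows it")
}
```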
Package parquet is a library for working with parquet files. For an overview of Parquet's qualities as a storage format, see this blog post: https://blog.twitter.com/engineering/en_us/a/2013/dremel-made-simple-with-parquet Or see the Parquet documentation: https://parquet.apache.org/docs/
zebrapack is a code generation tool for creating methods to serialize and de-serialize Go data structures to and from ZebraPack (a schema-based serialization format that is derived from MessagePack2). This package is targeted at the `go generate` tool. To use it, include the following directive in a go source file with types requiring source generation: The go generate tool should set the proper environment variables for the generator to execute without any command-line flags. However, the following options are supported, if you need them (See zebrapack -h): For more information, please read README.md, and the wiki at github.com/glycerine/zebrapack
greenpack is a code generation tool for creating methods to serialize and de-serialize Go data structures to and from Greenpack (a schema-based serialization format that is derived from MessagePack2). This package is targeted at the `go generate` tool. To use it, include the following directive in a go source file with types requiring source generation: The go generate tool should set the proper environment variables for the generator to execute without any command-line flags. However, the following options are supported, if you need them (See greenpack -h): For more information, please read README.md, and the wiki at github.com/glycerine/greenpack
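A hedged sketch of the `go generate` directive on a hypothetical type; the directive name is inferred from the tool name, and the zid field-numbering tags are an assumption to verify against the README (the zebrapack directive above works analogously):

```
//go:generate greenpack

package model

// Event is a hypothetical type; running `go generate` produces the
// serialization methods in an accompanying generated file.
// The zid tags assign stable field numbers; treat the exact tag layout
// as an assumption and check the greenpack README.
type Event struct {
	Name    string `zid:"0"`
	Payload []byte `zid:"1"`
	Seq     int64  `zid:"2"`
}
```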
Package lctime provides a way to format dates and times using strftime directives. More importantly, it does so in a locale-aware fashion. This allows developers to format time based on a user's locale. An initial locale is loaded at import time. It's determined by the first, non-empty locale identifier from these environment variables. If all of the previous variables are empty, then POSIX will be the initial locale used. All locales use UTF-8 character encoding. The formats used are loosely based on glibc locale files. These are the supported strftime directives. They're loosely based on The Open Group Base Specifications Issue 7, http://pubs.opengroup.org/onlinepubs/9699919799/functions/strftime.html.
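A short sketch, assuming SetLocale and Strftime with these signatures; the locale and format are illustrative:

```
package main

import (
	"fmt"
	"time"

	"github.com/variadico/lctime" // import path assumed
)

func main() {
	// Switch from the initial locale (taken from the environment) to Spanish.
	if err := lctime.SetLocale("es_ES"); err != nil {
		fmt.Println(err)
		return
	}

	// Strftime formats the time using the directives listed above.
	t := time.Date(2024, 3, 1, 0, 0, 0, 0, time.UTC)
	fmt.Println(lctime.Strftime("%A, %d de %B de %Y", t))
}
```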
Package ovirtclient provides a human-friendly Go client for the oVirt Engine. It provides an abstraction layer for the oVirt API, as well as a mocking facility for testing purposes. This documentation contains two parts. This introduction explains setting up the client with the credentials. The API doc contains the individual API calls. When reading the API doc, start with the Client interface: it contains all components of the API. The individual APIs, their documentation, and examples are located in subinterfaces, such as DiskClient. There are several ways to create a client instance. The most basic way is to use the New() function as follows: The mock client simulates the oVirt engine behavior in-memory without needing an actual running engine. This is a good way to provide a testing facility. It can be created using the NewMock method: That's it! However, to make it really useful, you will need the test helper which can set up test fixtures. The test helper can work in two ways: Either it sets up test fixtures in the mock client, or it sets up a live connection and identifies a usable storage domain, cluster, etc. for testing purposes. The ovirtclient.NewMockTestHelper() function can be used to create a test helper with a mock client in the backend: The easiest way to set up the test helper for a live connection is by using environment variables. To do that, you can use the ovirtclient.NewLiveTestHelperFromEnv() function: This function will inspect environment variables to determine if a connection to a live oVirt engine can be established. The following environment variables are supported: URL of the oVirt engine API. Mandatory. The username for the oVirt engine. Mandatory. The password for the oVirt engine. Mandatory. A file containing the CA certificate in PEM format. Provide the CA certificate in PEM format directly. Disable certificate verification if set. Not recommended. The cluster to use for testing. Will be automatically chosen if not provided. ID of the blank template. Will be automatically chosen if not provided. Storage domain to use for testing. Will be automatically chosen if not provided. VNIC profile to use for testing. Will be automatically chosen if not provided. You can also create the test helper manually: This library provides extensive logging. Each API interaction is logged on the debug level, and other messages are added on other levels. In order to provide logging, this library uses the go-ovirt-client-log (https://github.com/oVirt/go-ovirt-client-log) interface definition. As long as your logger implements this interface, you will be able to receive log messages. The logging library also provides a few built-in loggers. For example, you can log via the default Go log interface: Or, you can also log in tests: You can also disable logging: Finally, we also provide an adapter library for klog here: https://github.com/oVirt/go-ovirt-client-log-klog Modern-day oVirt engines run secured with TLS. This means that the client needs a way to verify the certificate the server is presenting. This is controlled by the tls parameter of the New() function. You can implement your own source by implementing the TLSProvider interface, but the package also includes a ready-to-use provider. Create the provider using the TLS() function: This provider has several functions. The easiest to set up is using the system trust root for certificates. However, this won't work on Windows: Now you need to add your oVirt engine certificate to your system trust root. 
If you don't want to, or can't, add the certificate to the system trust root, you can also directly provide it to the client. Finally, you can also disable certificate verification. Do we need to say that this is a very, very bad idea? The configured tls variable can then be passed to the New() function to create an oVirt client. This library attempts to retry API calls that can be retried if possible. Each function has a sensible retry policy. However, you may want to customize the retries by passing one or more retry flags. The following retry flags are supported: This strategy will stop retries when the context parameter is canceled. This strategy adds a wait time after each attempt, which is increased by the given factor on each try. The default is a backoff with a factor of 2. This strategy will cancel retries if the error in question is a permanent error. This is enabled by default. This strategy will abort retries if a maximum number of tries is reached. On complex calls the retries are counted per underlying API call. This strategy will abort retries if a certain time has elapsed for the higher level call. This strategy will abort retries if a certain underlying API call takes longer than the specified duration.
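A hedged sketch of building a TLS provider and a client as described above; the CACertsFromSystem/CACertsFromFile/Insecure method names follow the prose, while New()'s trailing parameters (logger, extra settings) are assumptions to verify against the package docs:

```
package main

import (
	"log"

	ovirtclient "github.com/ovirt/go-ovirt-client" // import path assumed
)

func main() {
	// Use the system trust root (won't work on Windows, as noted above).
	tls := ovirtclient.TLS().CACertsFromSystem()

	// Alternatives, as described above:
	//   tls = ovirtclient.TLS().CACertsFromFile("/path/to/ca.pem")
	//   tls = ovirtclient.TLS().Insecure() // strongly discouraged

	// NOTE: the exact parameter list of New() is an assumption; check the docs.
	client, err := ovirtclient.New(
		"https://engine.example.com/ovirt-engine/api",
		"admin@internal",
		"secret",
		tls,
		nil, // logger
		nil, // extra settings
	)
	if err != nil {
		log.Fatal(err)
	}
	_ = client
}
```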
Package glog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides functions Info, Warning, Error, Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags. Basic examples: See the documentation for the V function for an explanation of these examples: Log output is buffered and written periodically using Flush. Programs should call Flush before exiting to guarantee all log output is written. By default, all log statements write to files in a temporary directory. This package provides several flags that modify this behavior. As a result, flag.Parse must be called before any logging is done.
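A minimal sketch of the flags-then-log flow described above; the messages and verbosity level are illustrative:

```
package main

import (
	"flag"

	"github.com/golang/glog"
)

func main() {
	// glog registers its flags (-v, -vmodule, -logtostderr, -log_dir, ...)
	// on the default FlagSet, so Parse must run before any logging.
	flag.Parse()
	defer glog.Flush() // output is buffered; flush before exit

	glog.Info("starting up")
	glog.Warningf("cache miss rate is %.2f", 0.42)
	glog.Errorf("failed to reach %s", "backend")

	// Verbose logging, enabled with -v=2 or -vmodule=main=2.
	if glog.V(2) {
		glog.Info("verbose detail only shown at -v=2 or higher")
	}
}
```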
Package opc implements the ISO/IEC 29500-2, also known as the "Open Packaging Convention". The Open Packaging specification describes an abstract model and physical format conventions for the use of XML, Unicode, ZIP, and other openly available technologies and specifications to organize the content and resources of a document within a package. The OPC is the foundation technology for many new file formats: .docx, .pptx, .xlsx, .3mf, .dwfx, ...
Package cmd provides a framework that helps implementation of commands having these features: Better logging: By using the github.com/cybozu-go/log package, logs can be structured in JSON or logfmt format. HTTP servers log accesses automatically. Graceful exit: The framework provides functions to manage goroutines and a network server implementation that can be shut down gracefully. Signal handlers: The framework installs SIGINT/SIGTERM signal handlers for graceful exit, and a SIGUSR1 signal handler to reopen log files. Environment is the heart of the framework. It provides a base context.Context that will be canceled before the program stops, and methods to manage goroutines. To make the framework easy to use, it provides an instance of Environment as the default, and functions to work with it. The most basic usage of the framework. An HTTP server that stops gracefully.
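A rough sketch of managing goroutines with the default Environment; Go, Stop, and Wait follow the prose above (newer releases of this framework live under github.com/cybozu-go/well):

```
package main

import (
	"context"
	"log"
	"time"

	"github.com/cybozu-go/cmd"
)

func main() {
	// Go runs a goroutine managed by the default Environment. Its context is
	// canceled when the program receives SIGINT/SIGTERM or Stop/Cancel is called.
	cmd.Go(func(ctx context.Context) error {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return nil
			case <-ticker.C:
				// periodic work
			}
		}
	})

	// Declare that no more goroutines will be added, then wait for a graceful exit.
	cmd.Stop()
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}
```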
Command gopages generates static files for Go documentation, formatted with godoc. To install gopages, run the following command: Generate documentation for your module by running gopages without any flags. A 'go.mod' file must be present in the current directory. NOTE: Install gopages with Go v1.19 or higher to generate documentation with improved formatting. Usage of gopages:
Package dataexchange provides the API client, operations, and parameter types for AWS Data Exchange. AWS Data Exchange is a service that makes it easy for AWS customers to exchange data in the cloud. You can use the AWS Data Exchange APIs to create, update, manage, and access file-based data sets in the AWS Cloud. As a subscriber, you can view and access the data sets that you have an entitlement to through a subscription. You can use the APIs to download or copy your entitled data sets to Amazon Simple Storage Service (Amazon S3) for use across a variety of AWS analytics and machine learning services. As a provider, you can create and manage your data sets that you would like to publish to a product. Being able to package and provide your data sets into products requires a few steps to determine eligibility. For more information, visit the AWS Data Exchange User Guide. A data set is a collection of data that can be changed or updated over time. Data sets can be updated using revisions, which represent a new version or incremental change to a data set. A revision contains one or more assets. An asset in AWS Data Exchange is a piece of data that can be stored as an Amazon S3 object, Redshift datashare, API Gateway API, AWS Lake Formation data permission, or Amazon S3 data access. The asset can be a structured data file, an image file, or some other data file. Jobs are asynchronous import or export operations used to create or copy assets.
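A minimal sketch using the standard aws-sdk-go-v2 client pattern; ListDataSets is an illustrative operation, and the output field names should be checked against the generated types:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/dataexchange"
)

func main() {
	ctx := context.Background()

	// Load region and credentials from the environment or shared config.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client := dataexchange.NewFromConfig(cfg)

	// List the data sets you own or are entitled to.
	out, err := client.ListDataSets(ctx, &dataexchange.ListDataSetsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, ds := range out.DataSets {
		fmt.Println(*ds.Id, *ds.Name) // pointer fields are assumptions about the generated types
	}
}
```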
Package zim implements reading support for the ZIM File Format.
Package gcfg reads "INI-style" text-based configuration files with "name=value" pairs grouped into sections (gcfg files). This package is still a work in progress; see the sections below for planned changes. The syntax is based on that used by git config: http://git-scm.com/docs/git-config#_syntax . There are some (planned) differences compared to the git config format: The functions in this package read values into a user-defined struct. Each section corresponds to a struct field in the config struct, and each variable in a section corresponds to a data field in the section struct. The mapping of each section or variable name to fields is done either based on the "gcfg" struct tag or by matching the name of the section or variable, ignoring case. In the latter case, hyphens '-' in section and variable names correspond to underscores '_' in field names. Fields must be exported; to use a section or variable name starting with a letter that is neither upper- nor lower-case, prefix the field name with 'X'. (See https://code.google.com/p/go/issues/detail?id=5763#c4 .) For sections with subsections, the corresponding field in config must be a map, rather than a struct, with string keys and pointer-to-struct values. Values for subsection variables are stored in the map with the subsection name used as the map key. (Note that unlike section and variable names, subsection names are case sensitive.) When using a map, and there is a section with the same section name but without a subsection name, its values are stored with the empty string used as the key. It is possible to provide default values for subsections in the section "default-<sectionname>" (or by setting values in the corresponding struct field "Default_<sectionname>"). The functions in this package panic if config is not a pointer to a struct, or when a field is not of a suitable type (either a struct or a map with string keys and pointer-to-struct values). The section structs in the config struct may contain single-valued or multi-valued variables. Variables of unnamed slice type (that is, a type starting with `[]`) are treated as multi-value; all others (including named slice types) are treated as single-valued variables. Single-valued variables are handled based on the type as follows. Unnamed pointer types (that is, types starting with `*`) are dereferenced, and if necessary, a new instance is allocated. For types implementing the encoding.TextUnmarshaler interface, the UnmarshalText method is used to set the value. Implementing this method is the recommended way for parsing user-defined types. For fields of string kind, the value string is assigned to the field, after unquoting and unescaping as needed. For fields of bool kind, the field is set to true if the value is "true", "yes", "on" or "1", and set to false if the value is "false", "no", "off" or "0", ignoring case. In addition, single-valued bool fields can be specified with a "blank" value (variable name without equals sign and value); in such case the value is set to true. Predefined integer types [u]int(|8|16|32|64) and big.Int are parsed as decimal or hexadecimal (if having '0x' prefix). (This is to prevent unintuitively handling zero-padded numbers as octal.) Other types having [u]int* as the underlying type, such as os.FileMode and uintptr allow decimal, hexadecimal, or octal values. 
Parsing mode for integer types can be overridden using the struct tag option ",int=mode" where mode is a combination of the 'd', 'h', and 'o' characters (each standing for decimal, hexadecimal, and octal, respectively.) All other types are parsed using fmt.Sscanf with the "%v" verb. For multi-valued variables, each individual value is parsed as above and appended to the slice. If the first value is specified as a "blank" value (variable name without equals sign and value), a new slice is allocated; that is, any values previously set in the slice will be ignored. The types subpackage provides helpers for parsing "enum-like" and integer types. There are 3 types of errors: Programmer errors trigger panics. These should be fixed by the programmer before releasing code that uses gcfg. Data errors cause gcfg to return a non-nil error value. This includes the case when there are extra unknown key-value definitions in the configuration data (extra data). However, on some occasions it is desirable to be able to proceed in situations when the only data error is that of extra data. These errors are handled at a different (warning) priority and can be filtered out programmatically. To ignore extra data warnings, wrap the gcfg.Read*Into invocation into a call to gcfg.FatalOnly. The following is a list of changes under consideration:
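A small sketch of reading sections and subsections into a struct, as described above; the config layout and field names are illustrative:

```
package main

import (
	"fmt"
	"log"

	"gopkg.in/gcfg.v1" // also published as github.com/go-gcfg/gcfg
)

// Matches a config file such as:
//
//	[section]
//	name = value
//
//	[profile "alpha"]
//	key = 1
type Config struct {
	Section struct {
		Name    string
		Flag    bool
		Retries int
	}
	// Subsections require a map with string keys and pointer-to-struct values.
	Profile map[string]*struct {
		Key int
	}
}

func main() {
	var cfg Config
	if err := gcfg.ReadFileInto(&cfg, "example.gcfg"); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%+v\n", cfg)

	// To tolerate extra, unknown keys, wrap the call with FatalOnly:
	//   err := gcfg.FatalOnly(gcfg.ReadFileInto(&cfg, "example.gcfg"))
}
```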
Package pkcs12 implements some of PKCS#12 (also known as P12 or PFX). It is intended for decoding P12/PFX files for use with the crypto/tls package, and for encoding P12/PFX files for use by legacy applications which do not support newer formats. Since PKCS#12 uses weak encryption primitives, it SHOULD NOT be used for new applications. This package is forked from golang.org/x/crypto/pkcs12, which is frozen. The implementation is distilled from https://tools.ietf.org/html/rfc7292 and referenced documents.
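A minimal decoding sketch for use with crypto/tls; the fork's import path is an assumption (the frozen original lives at golang.org/x/crypto/pkcs12), and the file name and password are placeholders:

```
package main

import (
	"crypto/tls"
	"log"
	"os"

	"software.sslmate.com/src/go-pkcs12" // import path assumed
)

func main() {
	pfxData, err := os.ReadFile("client.p12")
	if err != nil {
		log.Fatal(err)
	}

	// Decode extracts the private key and certificate from the PFX data.
	privateKey, certificate, err := pkcs12.Decode(pfxData, "password")
	if err != nil {
		log.Fatal(err)
	}

	cert := tls.Certificate{
		Certificate: [][]byte{certificate.Raw},
		PrivateKey:  privateKey,
	}
	_ = cert // use in tls.Config{Certificates: []tls.Certificate{cert}}
}
```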
Package redirects provides Netlify style _redirects file format parsing.
Evernote2md is a CLI tool to convert Evernote notes exported in *.enex format to a directory with markdown files. Usage: If outputDir is not specified, the current working directory is used.
Golex is a lex/flex-like (not fully POSIX lex compatible) utility. It renders .l formatted data (https://westes.github.io/flex/manual/Format.html#Format) to Go source code. The .l data can come from a file named in a command line argument. If no non-opt args are given, golex reads stdin. Options: To get the latest golex version: Please see http://godoc.org/modernc.org/golex/lex. 2014-11-18: Golex now supports %yym - a hook which can be used to mark an accepting state. Consider for example this .l file: Execution and output: 2014-11-15: Golex's output is now gofmt'ed, if possible. Missing/differing functionality of the current renderer (compared to flex): Further limitations on the .l source are listed in the cznic/lex package godocs. A simple golex program example (make example1 && ./example1):
The `cbt` CLI is a command-line interface that lets you interact with Cloud Bigtable. See the [cbt CLI overview](https://cloud.google.com/bigtable/docs/cbt-overview) to learn how to install the `cbt` CLI. Before you use the `cbt` CLI, you should be familiar with the [Bigtable overview](https://cloud.google.com/bigtable/docs/overview). The examples on this page use [sample data](https://cloud.google.com/bigtable/docs/using-filters#data) similar to data that you might store in Bigtable. Usage: The commands are: The options are: Example: cbt -instance=my-instance ls Use "cbt help \<command>" for more information about a command. Preview features are not currently available to most Cloud Bigtable customers. Alpha features might be changed in backward-incompatible ways and are not recommended for production use. They are not subject to any SLA or deprecation policy. Syntax rules for the Bash shell apply to the `cbt` CLI. This means, for example, that you must put quotes around values that contain spaces or operators. It also means that if a value is arbitrary bytes, you need to prefix it with a dollar sign and use single quotes. Example: cbt -project my-project -instance my-instance lookup my-table $'\224\257\312W\365:\205d\333\2471\315\' For convenience, you can add values for the -project, -instance, -creds, -admin-endpoint and -data-endpoint options to your ~/.cbtrc file in the following format: All values are optional and can be overridden at the command prompt. Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Usage: Set value of a cell (write) Usage: Add a value to an aggregate cell (write) Usage: Usage: Usage: Usage: Usage: Usage:
Package dailyrotate provides a file that is rotated daily (at midnight in a specified location). You provide a pattern for a file path. That pattern will be formatted with time.Format to generate a real path. It should be unique for each day, e.g. 2006-01-02.txt. You Write to a file and the code takes care of closing the existing file and opening a new file when we're crossing daily boundaries. For a full tutorial see https://presstige.io/p/dailyrotate-a-Go-library-for-rotating-files-daily-6fb283a94e604f879e0d97a5f788dee6
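A rough sketch, assuming the import path github.com/kjk/dailyrotate and a NewFile(pathFormat, onClose) constructor; both are assumptions to verify against the tutorial linked above:

```
package main

import (
	"log"

	"github.com/kjk/dailyrotate" // import path assumed
)

func main() {
	// The path pattern is run through time.Format, so it must be unique per day.
	// The onClose callback signature is an assumption.
	f, err := dailyrotate.NewFile("logs/2006-01-02.txt", func(path string, didRotate bool) {
		if didRotate {
			log.Printf("rotated and closed %s", path)
		}
	})
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Writes go to today's file; crossing midnight closes it and opens a new one.
	if _, err := f.Write([]byte("hello\n")); err != nil {
		log.Fatal(err)
	}
}
```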
Package gitfs is a complete solution for static files in Go code. When Go code uses non-Go files, they are not packaged into the binary. The common approach to the problem, as implemented by go-bindata (https://github.com/kevinburke/go-bindata), is to convert all the required static files into Go code, which is eventually compiled into the binary. This library takes a different approach, in which the static files are not required to be "binary-packed", and are not even required to be in the same repository as the Go code. This package enables loading static content from a remote git repository, packing it into the binary if desired, or loading it from a local path during development. The transition from remote repository, to binary-packed content, to local content is completely smooth. *The API is simple and minimalistic*. The `New` method returns a (sub)tree of a Git repository, represented by the standard `http.FileSystem` interface. This object enables anything that is possible to do with a regular filesystem, such as opening a file or listing a directory. Additionally, the ./fsutil package provides enhancements over the `http.FileSystem` object (they can work with any object that implements the interface) such as loading Go templates in the standard way, walking over the filesystem, and applying glob patterns on a filesystem. Supported features: * Loading of specific version/tag/branch. * For debug purposes, the files can be loaded from a local path instead of the remote repository. * Files are loaded lazily by default or they can be preloaded if required. * Files can be packed to the Go binary using a command line tool. * This project is using the standard `http.FileSystem` interface. * In ./fsutil there are some general useful tools around the `http.FileSystem` interface. To create a filesystem using the `New` function, provide the Git project with the pattern: `github.com/<owner>/<repo>(/<path>)?(@<ref>)?`. If no `path` is specified, the root of the project will be used. `ref` can be any git branch using `heads/<branch name>` or any git tag using `tags/<tag>`. If the tag is of Semver format, the `tags/` prefix is not required. If no `ref` is specified, the default branch will be used. In the following example, the repository `github.com/x/y` at tag v1.2.3 and internal path "static" is loaded: The variable `fs` implements the `http.FileSystem` interface. Reading a file from the repository can be done using the `Open` method. This function accepts a path, relative to the root of the defined filesystem. The `fs` variable can be used in anything that accepts the standard interface. For example, it can be used for serving static content using the standard library: When used with a private GitHub repository, the GitHub API calls should be instrumented with the appropriate credentials. The credentials can be passed by providing an HTTP client. For example, to use a GitHub token from the environment variable `GITHUB_TOKEN`: For quick development workflows, it is easier and faster to use local static content and not remote content that was pushed to a remote repository. This is enabled by the `OptLocal` option. To use this option only in local development and not in a production system, it can be used as follows: In this example, we stored the value for `OptLocal` in an environment variable. As a result, when running the program with `LOCAL_DEBUG=.` local files will be used, while running without it will result in using the remote files. 
(the value of the environment variable should point to any directory within the github project). Using gitfs does not mean that files are required to be remotely fetched. When binary packing of the files is needed, a command line tool can pack them for you. To get the tool run: `go get github.com/posener/gitfs/cmd/gitfs`. Run the tool with `gitfs <patterns>`. This generates a `gitfs.go` file in the current directory that contains all the used filesystems' data. This will cause all `gitfs.New` calls to automatically use the packed data, instead of fetching the data at runtime. By default, a test will also be generated with the code. This test fails when the local files are modified without updating the binary content. Use binary packing with `go generate`: To generate all filesystems used by a project, add `//go:generate gitfs ./...` in the root of the project. To generate only a specific filesystem, add `//go:generate gitfs $GOFILE` in the file where it is being used. An interesting anecdote is that the gitfs command uses itself to generate its own templates. File exclusion can be done by including only specific files matching a glob pattern, using the `OptGlob` option. This affects local loading of files, remote loading, and binary packing (and may reduce binary size). For example: The ./fsutil package is a collection of useful functions that can work with any `http.FileSystem` implementation. For example, here we will use a function that loads Go templates from the filesystem. With gitfs you can open a remote git repository and load any file, including non-Go files. In this example, the README.md file of a remote repository is loaded.
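As a sketch of serving such a filesystem with the standard library; the repository path and listen address are illustrative, and `gitfs.New` is assumed to behave as described above:

```
package main

import (
	"context"
	"log"
	"net/http"

	"github.com/posener/gitfs"
)

func main() {
	ctx := context.Background()
	// Illustrative repository path; any project/path pattern works.
	fs, err := gitfs.New(ctx, "github.com/x/y/static")
	if err != nil {
		log.Fatal(err)
	}
	// Serve the loaded filesystem with net/http.
	http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(fs)))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```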
Package CloudForest implements ensembles of decision trees for machine learning in pure Go (golang to search engines). It allows for a number of related algorithms for classification, regression, feature selection and structure analysis on heterogeneous numerical/categorical data with missing values. These include: Breiman and Cutler's Random Forest for Classification and Regression Adaptive Boosting (AdaBoost) Classification Gradient Boosting Tree Regression Entropy and Cost driven classification L1 regression Feature selection with artificial contrasts Proximity and model structure analysis Roughly balanced bagging for unbalanced classification The API hasn't stabilized yet and may change rapidly. Tests and benchmarks have been performed only on embargoed data sets and cannot yet be released. Library documentation is in code and can be viewed with godoc or live at: http://godoc.org/github.com/ryanbressler/CloudForest Documentation of command line utilities and file formats can be found in README.md, which can be viewed formatted on github: http://github.com/ryanbressler/CloudForest Pull requests and bug reports are welcome. CloudForest was created by Ryan Bressler and is being developed in the Shumelivich Lab at the Institute for Systems Biology for use on genomic/biomedical data with partial support from The Cancer Genome Atlas and the Inova Translational Medicine Institute. CloudForest is intended to provide fast, comprehensible building blocks that can be used to implement ensembles of decision trees. CloudForest is written in Go to allow a data scientist to develop and scale new models and analysis quickly instead of having to modify complex legacy code. Data structures and file formats are chosen with use in multi-threaded and cluster environments in mind. Go's support for function types is used to provide an interface to run code as data is percolated through a tree. This method is flexible enough that it can extend the tree being analyzed. Growing a decision tree using Breiman and Cutler's method can be done in an anonymous function/closure passed to a tree's root node's Recurse method: This allows a researcher to include whatever additional analysis they need (importance scores, proximity etc) in tree growth. The same Recurse method can also be used to analyze existing forests to tabulate scores or extract structure. Utilities like leafcount and errorrate use this method to tabulate data about the tree in collection objects. Decision trees are grown with the goal of reducing "Impurity", which is usually defined as Gini Impurity for categorical targets or mean squared error for numerical targets. CloudForest grows trees against the Target interface, which allows for alternative definitions of impurity. CloudForest includes several alternative targets: Additional targets can be stacked on top of these targets to add boosting functionality: Repeatedly splitting the data and searching for the best split at each node of a decision tree are the most computationally intensive parts of decision tree learning, and CloudForest includes optimized code to perform these tasks. Go's slices are used extensively in CloudForest to make it simple to interact with optimized code. Many previous implementations of Random Forest have avoided reallocation by reordering data in place and keeping track of start and end indexes. In Go, slices pointing at the same underlying arrays make this sort of optimization transparent.
For example, a function like: can return left and right slices that point to the same underlying array as the original slice of cases, but these slices should not have their values changed. Functions used while searching for the best split also accept pointers to reusable slices and structs to maximize speed by keeping memory allocations to a minimum. BestSplitAllocs contains pointers to these items and its use can be seen in functions like: For categorical predictors, BestSplit will also attempt to intelligently choose between 4 different implementations depending on user input and the number of categories. These include exhaustive, random, and iterative searches for the best combination of categories implemented with bitwise operations against int and big.Int. See BestCatSplit, BestCatSplitIter, BestCatSplitBig and BestCatSplitIterBig. All numerical predictors are handled by BestNumSplit which relies on go's sorting package. Training a Random Forest is an inherently parallel process and CloudForest is designed to allow parallel implementations that can tackle large problems while keeping memory usage low by writing and using data structures directly to/from disk. Trees can be grown in separate goroutines. The growforest utility provides an example of this that uses goroutines and channels to grow trees in parallel and write trees to disk as they are finished by the "worker" goroutines. The few summary statistics, like mean impurity decrease per feature (importance), can be calculated using thread safe data structures like RunningMean. Trees can also be grown on separate machines. The .sf stochastic forest format allows several small forests to be combined by concatenation, and the ForestReader and ForestWriter structs allow these forests to be accessed tree by tree (or even node by node) from disk. For data sets that are too big to fit in memory on a single machine, Tree.Grow and FeatureMatrix.BestSplitter can be reimplemented to load candidate features from disk, a distributed database, etc. By default CloudForest uses a fast heuristic for missing values. When proposing a split on a feature with missing data the missing cases are removed and the impurity value is corrected to use three way impurity, which reduces the bias towards features with lots of missing data: Missing values in the target variable are left out of impurity calculations. This provides generally good results at a fraction of the computational cost of imputing data. Optionally, feature.ImputeMissing or featurematrix.ImputeMissing can be called before forest growth to impute missing values to the feature mean/mode, which Breiman [2] suggests as a fast method for imputing values. This forest could also be analyzed for proximity (using leafcount or tree.GetLeaves) to do the more accurate proximity weighted imputation Breiman describes. Experimental support is provided for 3 way splitting, which splits missing cases onto a third branch. [2] This has so far yielded mixed results in testing. At some point in the future support may be added for local imputing of missing values during tree growth as described in [3] [1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#missing1 [2] https://code.google.com/p/rf-ace/ [3] http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aoas/1223908043&page=record In CloudForest data is stored using the FeatureMatrix struct which contains Features.
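The slice-based optimization can be illustrated with plain Go; this is a generic sketch of in-place partitioning, not CloudForest's actual splitting code:

```
package main

import "fmt"

// splitCases partitions the case indexes in place so that cases for which
// goLeft returns true come first. The returned left and right slices share
// the original backing array, so no new memory is allocated.
func splitCases(cases []int, goLeft func(c int) bool) (left, right []int) {
	n := 0
	for i, c := range cases {
		if goLeft(c) {
			cases[i], cases[n] = cases[n], cases[i]
			n++
		}
	}
	return cases[:n], cases[n:]
}

func main() {
	cases := []int{0, 1, 2, 3, 4, 5}
	left, right := splitCases(cases, func(c int) bool { return c%2 == 0 })
	fmt.Println(left, right) // [0 2 4] [3 1 5] -- both are views into cases
}
```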
The Feature struct implements storage and methods for both categorical and numerical data, calculations of impurity etc., and the search for the best split. The Target interface abstracts the methods of Feature that are needed for a feature to be predictable. This allows for the implementation of alternative types of regression and classification. Trees are built from Nodes and Splitters and stored within a Forest. Tree has a Grow method that implements Breiman and Cutler's method (see the extract above) for growing a tree. A GrowForest method is also provided that implements the rest of the method, including sampling cases, but it may be faster to grow the forest to disk as in the growforest utility. Prediction and voting are done using Tree.Vote and CatBallotBox and NumBallotBox, which implement the VoteTallyer interface.
Package protobuf implements Protocol Buffers reflectively, using Go types to define message formats. This approach provides convenience similar to Gob encoding, but with a widely-used and language-neutral wire format. For general information on Protocol Buffers see http://protobuf.googlecode.com. In contrast with goprotobuf, this package does not require users to write or compile .proto files; you just define the message formats you want as Go struct types. Consider, for example, this message format definition from the Protocol Buffers overview: The following Go type and const definitions express exactly the same format, for purposes of encoding and decoding with this protobuf package: To encode a message, you simply call the Encode() function with a pointer to the struct you wish to encode, and Encode() returns a []byte slice containing the protobuf-encoded struct: To decode an encoded message, simply call Decode() on the byte-slice: If you want to interoperate with code in other languages using the same message formats, you may of course still end up writing .proto files for the code in those other languages. However, defining message formats with native Go types enables these types to be tailored to the code using them without affecting wire compatibility, such as by attaching useful methods to these struct types. The translation between a Go struct definition and a basic Protocol Buffers message format definition is straightforward; the rules are as follows. A message definition in a .proto file translates to a Go struct, whose fields are implicitly assigned consecutive numbers starting from 1. If you need to leave gaps in the field number sequence (e.g., to delete an obsolete field without breaking wire compatibility), then you can skip that field number using a blank Go field, like this: A 'required' protobuf field translates to a plain field of a corresponding type in the Go struct. The following table summarizes the correspondence between .proto definition types and Go field types: An 'optional' protobuf field is expressed as a pointer field in Go. Encode() will transmit the field only if the pointer is non-nil. Decode() will instantiate the pointed-to type and fill in the pointer if the field is present in the message being decoded, leaving the pointer unmodified (usually nil) if the field is not present. A 'repeated' protobuf field translates to a slice field in Go. Slices of primitive bool, integer, and float types are encoded and decoded in packed format, as if the [packed=true] option was declared for the field in the .proto file. For flexibility and convenience, struct fields may have interface types, which this package interprets as having dynamic types to be bound at runtime. Encode() follows the interface's implicit pointer and uses reflection to determine the referred-to object's actual type for encoding. Decode() takes an optional map of interface types to constructor functions, which it uses to instantiate concrete types for interfaces while decoding. Furthermore, if the instantiated types support the Encoding interface, Encode() and Decode() will invoke the methods of that interface, allowing objects to implement their own custom encoding/decoding methods. This package does not try to support all possible protobuf formats. It currently does not support nonzero default value declarations for enums, the legacy unpacked formats for repeated numeric fields, messages with extremely sparse field numbering, or other more exotic features like extensions or oneof.
If you need to interoperate with existing protobuf code using these features, then you should probably use goprotobuf, at least for those particular message formats. Another downside of this reflective approach to protobuf implementation is that reflective code is generally less efficient than the statically generated code produced by, for example, gogoprotobuf. If we decide we want the convenience of format definitions in Go with the runtime performance of static code generation, we could in principle achieve that by adding a "Go-format" message format compiler frontend to goprotobuf or gogoprotobuf - but we leave this as an exercise for the reader. This example defines, encodes, and decodes a Person message format equivalent to the example used in the Protocol Buffers overview.
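A hedged sketch of such a Person round trip follows; the import path is assumed and may differ in your setup, and the struct is a simplified version of the overview example, relying only on the required/optional/repeated mapping and the Encode/Decode behavior described above:

```
package main

import (
	"fmt"
	"log"

	"go.dedis.ch/protobuf" // import path assumed; adjust to your module
)

// Person mirrors the Protocol Buffers overview example: required fields
// become plain fields, optional fields become pointers, repeated fields
// become slices. Field numbers are assigned implicitly, starting at 1.
type Person struct {
	Name  string   // required string name  = 1
	Id    int32    // required int32  id    = 2
	Email *string  // optional string email = 3
	Phone []string // repeated string phone = 4 (simplified here)
}

func main() {
	email := "alice@example.com"
	p := Person{Name: "Alice", Id: 123, Email: &email, Phone: []string{"555-1212"}}

	// Encode takes a pointer to the struct and returns the wire bytes.
	buf, err := protobuf.Encode(&p)
	if err != nil {
		log.Fatal(err)
	}

	// Decode fills in a struct from the wire bytes.
	var q Person
	if err := protobuf.Decode(buf, &q); err != nil {
		log.Fatal(err)
	}
	fmt.Println(q.Name, q.Id, *q.Email, q.Phone)
}
```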
Package flume is a logging package, built on top of zap. It provides structured and leveled logs, like zap/logrus/etc. It adds global, runtime re-configuration of all loggers, via an internal logger registry. There are two interaction points with flume: code that generates logs, and code that configures logging output. Code which generates logs needs to create named logger instances and call log functions on them, like Info() and Debug(). But by default, all these logs will be silently discarded. Flume does not output log entries unless explicitly told to do so. This ensures libraries can freely use flume internally, without polluting the stdout of the programs importing the library. The Logger type is a small interface. Libraries should allow replacement of their Logger instances so importers can entirely replace flume if they wish. Alternately, importers can use flume to configure the library's log output, and/or redirect it into the overall program's log stream. This package does not offer package-level log functions, so you need to create a logger instance first: A common pattern is to create a single, package-wide logger, named after the package: Then, write some logs: Logs have a message, then matched pairs of key/value properties. Child loggers can be created and pre-seeded with a set of properties: Expensive log events can be avoided by explicitly checking the level: Loggers can be bound to context.Context, which is convenient for carrying per-transaction loggers (pre-seeded with transaction-specific context) through layers of request processing code: The standard Logger interface only supports 3 levels of log: DBG, INF, and ERR. This is inspired by this article: https://dave.cheney.net/2015/11/05/lets-talk-about-logging. However, you can create instances of DeprecatedLogger instead, which support more levels. There are several package-level functions which reconfigure logging output. They control which levels are discarded, which fields are included in each log entry, how those fields are rendered, and how the overall log entry is rendered (JSON, LTSV, colorized, etc). To configure logging settings from environment variables, call the configuration function from main(): This reads the log configuration from the environment variable "FLUME" (the default, which can be overridden). The value is JSON, e.g.: The properties of the config string: "level": ERR, INF, or DBG. The default level for all loggers. "levels": A string configuring log levels for specific loggers, overriding the default level. See note below for syntax. "development": true or false. In development mode, the defaults for the other settings change to be more suitable for developers at a terminal (colorized, multiline, human readable, etc). See note below for exact defaults. "addCaller": true or false. Adds call site information to log entries (file and line). "encoding": json, ltsv, term, or term-color. Configures how log entries are encoded in the output. "term" and "term-color" are multi-line, human-friendly formats, intended for terminal output. "encoderConfig": a JSON object which configures advanced encoding settings, like how timestamps are formatted. See docs for go.uber.org/zap/zapcore/EncoderConfig "messageKey": the label of the message property of the log entry. If empty, message is omitted. "levelKey": the label of the level property of the log entry. If empty, level is omitted. "timeKey": the label of the timestamp of the log entry. If empty, timestamp is omitted.
"nameKey": the label of the logger name in the log entry. If empty, logger name is omitted. "callerKey": the label of the logger name in the log entry. If empty, logger name is omitted. "lineEnding": the end of each log output line. "levelEncoder": capital, capitalColor, color, lower, or abbr. Controls how the log entry level is rendered. "abbr" renders 3-letter abbreviations, like ERR and INF. "timeEncoder": iso8601, millis, nanos, unix, or justtime. Controls how timestamps are rendered. "millis", "nanos", and "unix" are since UNIX epoch. "unix" is in floating point seconds. "justtime" omits the date, and just prints the time in the format "15:04:05.000". "durationEncoder": string, nanos, or seconds. Controls how time.Duration values are rendered. "callerEncoder": full or short. Controls how the call site is rendered. "full" includes the entire package path, "short" only includes the last folder of the package. Defaults: These defaults are only applied if one of the configuration functions is called, like ConfigFromEnv(), ConfigString(), Configure(), or LevelsString(). Initially, all loggers are configured to discard everything, following flume's opinion that log packages should be silent unless spoken too. Ancillary to this: library packages should *not* call these functions, or configure logging levels or output in anyway. Only program entry points, like main() or test code, should configure logging. Libraries should just create loggers and log to them. Development mode: if "development"=true, the defaults for the rest of the settings change, equivalent to: The "levels" value is a list of key=value pairs, configuring the level of individual named loggers. If the key is "*", it sets the default level. If "level" and "levels" both configure the default level, "levels" wins. Examples: Most usages of flume will use its package functions. The package functions delegate to an internal instance of Factory, which a the logger registry. You can create and manage your own instance of Factory, which will be an isolated set of Loggers. tl;dr The implementation is a wrapper around zap. zap does levels, structured logs, and is very fast. zap doesn't do centralized, global configuration, so this package adds that by maintaining an internal registry of all loggers, and using the sync.atomic stuff to swap out levels and writers in a thread safe way.
Package tracelog : logcalls.go provides formatting functions. Package tracelog implements a logging system to trace all aspects of your code. This is great for task-oriented programs. Based on the Go log standard library. It provides 4 destinations with logging levels, plus you can attach a file for persistent writes. A log clean process is provided to maintain disk space. There is also email support to send email alerts.
Package mux implements a request router and dispatcher. The name mux stands for "HTTP request multiplexer". Like the standard http.ServeMux, mux.Router matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are: Let's start registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: Groups can be used inside patterns, as long as they are non-capturing (?:re). For example: The names are used to create a map of route variables which can be retrieved calling mux.Vars(): Note that if any capturing groups are present, mux will panic() during parsing. To prevent this, convert any capturing groups to non-capturing, e.g. change "/{sort:(asc|desc)}" to "/{sort:(?:asc|desc)}". This is a change from prior versions which behaved unpredictably when capturing groups were present. And this is all you need to know about the basic usage. More advanced options are explained below. Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables: There are several other matchers that can be added. To match path prefixes: ...or HTTP methods: ...or URL schemes: ...or header values: ...or query values: ...or to use a custom matcher function: ...and finally, it is possible to combine several matchers in a single route: Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting". For example, let's say we have several URLs that should only match when the host is "www.example.com". Create a route for that host and get a "subrouter" from it: Then register routes in the subrouter: The three URL paths we registered above will only be tested if the domain is "www.example.com", because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route. Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register its paths relatively to a given subrouter. There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as base for their paths: Note that the path provided to PathPrefix() represents a "wildcard": calling PathPrefix("/static/").Handler(...) means that the handler will be passed any request that matches "/static/*". This makes it easy to serve static files with mux: Now let's see how to build registered URLs. Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name calling Name() on a route. For example: To build a URL, get the route and call the URL() method, passing a sequence of key/value pairs for the route variables. 
For the previous route, we would do: ...and the result will be a url.URL with the following path: This also works for host and query value variables: All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match. Regex support also exists for matching Headers within a route. For example, we could do: ...and the route will match both requests with a Content-Type of `application/json` as well as `application/text` There's also a way to build only the URL host or path for a route: use the methods URLHost() or URLPath() instead. For the previous route, we would do: And if you use subrouters, host and path defined separately can be built as well: Mux supports the addition of middlewares to a Router, which are executed in the order they are added if a match is found, including its subrouters. Middlewares are (typically) small pieces of code which take one request, do something with it, and pass it down to another middleware or the final handler. Some common use cases for middleware are request logging, header manipulation, or ResponseWriter hijacking. Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed to it, and then calls the handler passed as parameter to the MiddlewareFunc (closures can access variables from the context where they are created). A very basic middleware which logs the URI of the request being handled could be written as: Middlewares can be added to a router using `Router.Use()`: A more complex authentication middleware, which maps session token to users, could be written as: Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to.
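A condensed sketch tying together several of the features described above (path variables, a named route with URL building, a path prefix for static files, and a logging middleware); the handler bodies, directory, and listen address are illustrative:

```
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/gorilla/mux"
)

func articleHandler(w http.ResponseWriter, r *http.Request) {
	vars := mux.Vars(r) // route variables extracted from the matched path
	fmt.Fprintf(w, "category=%s id=%s\n", vars["category"], vars["id"])
}

// loggingMiddleware logs the request URI and passes control to the next handler.
func loggingMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		log.Println(r.RequestURI)
		next.ServeHTTP(w, r)
	})
}

func main() {
	r := mux.NewRouter()

	// Path variables use {name} or {name:pattern}.
	r.HandleFunc("/articles/{category}/{id:[0-9]+}", articleHandler).
		Methods("GET").
		Name("article")

	// Build ("reverse") a URL for the named route.
	if u, err := r.Get("article").URL("category", "tech", "id", "42"); err == nil {
		log.Println("reversed:", u.String()) // /articles/tech/42
	}

	// Serve static files under a path prefix.
	r.PathPrefix("/static/").Handler(
		http.StripPrefix("/static/", http.FileServer(http.Dir("./static"))))

	r.Use(loggingMiddleware)

	log.Fatal(http.ListenAndServe(":8080", r))
}
```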