Package enmime implements a MIME encoding and decoding library. It's built on top of Go's included mime/multipart support where possible, but is geared towards parsing MIME encoded emails. The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called an Envelope, provides an intuitive way to interact with a MIME message. Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. The content of a Part is available as a slice of bytes via the Content field. If the part was encoded in quoted-printable or base64, it is decoded prior to being placed in Content. If the Part contains text in a character set other than utf-8, enmime will attempt to convert it to utf-8. To locate a particular Part, pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria. ReadEnvelope returns an Envelope struct. Behind the scenes a Part tree is constructed, and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be down-converted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts, are also available. Every MIME Part has its own headers, accessible via the Part.Header field. The raw headers for an Envelope are available in Root.Header. Envelope also provides helper methods to fetch headers: GetHeader(key) will return the RFC 2047 decoded value of the specified header. AddressList(key) will convert the specified address header into a slice of net/mail.Address values. enmime attempts to be tolerant of poorly encoded MIME messages. In situations where parsing is not possible, the ReadEnvelope and ReadParts functions will return a hard error. If enmime is able to continue parsing the message, it will add an entry to the Errors slice on the relevant Part. After parsing is complete, all Part errors will be appended to the Envelope Errors slice. The Error* constants can be used to identify a specific class of error. Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments. enmime is open source software released under the MIT License. The latest version can be found at https://github.com/zond/enmime
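A minimal sketch of the Envelope layer described above (the import path is taken from the URL in this doc; field names such as Text, Attachments, FileName and ContentType are assumptions based on the description rather than a verified API):

```go
package main

import (
	"fmt"
	"os"

	"github.com/zond/enmime" // path taken from the doc; the upstream project may live elsewhere
)

func main() {
	f, err := os.Open("message.eml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Parse the whole message into an Envelope (the Part tree is built behind the scenes).
	env, err := enmime.ReadEnvelope(f)
	if err != nil {
		panic(err) // hard error: parsing was not possible at all
	}

	fmt.Println("Subject:", env.GetHeader("Subject")) // RFC 2047 decoded
	fmt.Println("Text:", env.Text)

	// Assumed field names for attachments and for non-fatal parse errors.
	for _, att := range env.Attachments {
		fmt.Println("attachment:", att.FileName, att.ContentType, len(att.Content))
	}
	for _, e := range env.Errors {
		fmt.Println("parse warning:", e)
	}
}
```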
Package redisc implements a redis cluster client on top of the redigo client package. It supports all commands that can be executed on a redis cluster, including pub-sub, scripts and read-only connections to read data from replicas. See http://redis.io/topics/cluster-spec for details. The package defines two main types: Cluster and Conn. Both are described in more detail below, but the Cluster manages the mapping of keys (or more exactly, hash slots computed from keys) to a group of nodes that form a redis cluster, and a Conn manages a connection to this cluster. The package is designed such that for simple uses, or when keys have been carefully named to play well with a redis cluster, a Cluster value can be used as a drop-in replacement for a redis.Pool from the redigo package. Similarly, the Conn type implements redigo's redis.Conn interface, so the API to execute commands is the same - in fact the redisc package uses the redigo package as its only third-party dependency. When more control is needed, the package offers some extra behaviour specific to working with a redis cluster: Slot and SplitBySlot functions to compute the slot for a given key and to split a list of keys into groups of keys from the same slot, so that each group can safely be handled using the same connection. *Conn.Bind (or the BindConn package-level helper function) to explicitly specify the keys that will be used with the connection so that the right node is selected, instead of relying on the automatic detection based on the first parameter of the command. *Conn.ReadOnly (or the ReadOnlyConn package-level helper function) to mark a connection as read-only, allowing commands to be served by a replica instead of the master. RetryConn to wrap a connection into one that automatically follows redirections when the cluster moves slots around. Helper functions to deal with cluster-specific errors. The Cluster type manages a redis cluster and offers an interface compatible with redigo's redis.Pool: Along with some additional methods specific to a cluster: If the CreatePool function field is set, then a redis.Pool is created to manage connections to each of the cluster's nodes. A call to Get returns a connection from that pool. The Dial method, on the other hand, guarantees that the returned connection will not be managed by a pool, even if CreatePool is set. It calls redigo's redis.Dial function to create the unpooled connection, passing along any DialOptions set on the cluster. If the cluster's CreatePool field is nil, Get behaves the same as Dial. The Refresh method refreshes the cluster's internal mapping of hash slots to nodes. It should typically be called only once, after the cluster is created and before it is used, so that the first connections already benefit from smart routing. It is automatically kept up-to-date based on the redis MOVED responses afterwards. A cluster must be closed once it is no longer used to release its resources. The connection returned from Get or Dial is a redigo redis.Conn interface, with a concrete type of *Conn. In addition to the interface's required methods, *Conn adds the following methods: The returned connection is not yet connected to any node; it is "bound" to a specific node only when a call to Do, Send, Receive or Bind is made. For Do, Send and Receive, the node selection is implicit: it uses the first parameter of the command, and computes the hash slot assuming that first parameter is a key. It then binds the connection to the node corresponding to that slot. 
If there are no parameters for the command, or if there is no command (e.g. in a call to Receive), a random node is selected. Bind is explicit: it gives control to the caller over which node to select by specifying a list of keys that the caller wishes to handle with the connection. All keys must belong to the same slot, and the connection must not already be bound to a node, otherwise an error is returned. On success, the connection is bound to the node holding the slot of the specified key(s). Because the connection is returned as a redis.Conn interface, a type assertion must be used to access the underlying *Conn and to be able to call Bind: The BindConn package-level function is provided as a helper for this common use-case. The ReadOnly method marks the connection as read-only, meaning that it will attempt to connect to a replica instead of the master node for its slot. Once bound to a node, the READONLY redis command is sent automatically, so it doesn't have to be sent explicitly before use. ReadOnly must be called before the connection is bound to a node, otherwise an error is returned. For the same reason as for Bind, a type assertion must be used to call ReadOnly on a *Conn, so a package-level helper function is also provided, ReadOnlyConn. There is no ReadWrite method, because it can be sent as a normal redis command and will essentially end that connection (all commands will now return MOVED errors). If the connection was wrapped in a RetryConn call, then it will automatically follow the redirection to the master node (see the Redirections section). The connection must be closed after use, to release the underlying resources. The redis cluster may return MOVED and ASK errors when the node that received the command doesn't currently hold the slot corresponding to the key. The package cannot reliably handle those redirections automatically because the redirection error may be returned for a pipeline of commands, some of which may have succeeded. However, a connection can be wrapped by a call to RetryConn, which returns a redis.Conn interface where only calls to Do, Close and Err can succeed. That means pipelining is not supported, and only a single command can be executed at a time, but it will automatically handle MOVED and ASK replies, as well as TRYAGAIN errors. Note that even if RetryConn is not used, the cluster always updates its mapping of slots to nodes automatically by keeping track of MOVED replies. The concurrency model is similar to that of the redigo package: Cluster methods are safe to call concurrently (like redis.Pool). Connections do not support concurrent calls to write methods (Send, Flush) or concurrent calls to the read method (Receive). Connections do allow a concurrent reader and writer. Because the Do method combines the functionality of Send, Flush and Receive, it cannot be called concurrently with other methods. The Bind and ReadOnly methods are safe to call concurrently, but there is little point in doing so, since both will fail if the connection is already bound. Create and use a cluster.
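A rough sketch of creating and using a cluster along the lines described above (the import paths, pool settings and RetryConn arguments are assumptions; the type, field and function names come from this doc):

```go
package main

import (
	"fmt"
	"time"

	"github.com/gomodule/redigo/redis"
	"github.com/mna/redisc"
)

func createPool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
	return &redis.Pool{
		MaxIdle:     5,
		IdleTimeout: time.Minute,
		Dial: func() (redis.Conn, error) {
			return redis.Dial("tcp", addr, opts...)
		},
	}, nil
}

func main() {
	cluster := &redisc.Cluster{
		StartupNodes: []string{"127.0.0.1:7000", "127.0.0.1:7001"},
		DialOptions:  []redis.DialOption{redis.DialConnectTimeout(5 * time.Second)},
		CreatePool:   createPool,
	}
	defer cluster.Close()

	// Prime the slot-to-node mapping so the first connections benefit from smart routing.
	if err := cluster.Refresh(); err != nil {
		panic(err)
	}

	conn := cluster.Get()
	defer conn.Close()

	// Wrap the connection so MOVED/ASK redirections are followed automatically.
	rc, err := redisc.RetryConn(conn, 3, 100*time.Millisecond)
	if err != nil {
		panic(err)
	}
	v, err := redis.String(rc.Do("GET", "some-key"))
	fmt.Println(v, err)
}
```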
Package bindata converts any file into manageable Go source code. Useful for embedding binary data into a go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples (approximations of both are sketched at the end of this section): This would be the default mode, using an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but uses the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`. 
This accepts a [regular expression](https://github.com/google/re2/wiki/Syntax) string, which will be used to match a portion of the map keys and function names that should be stripped out. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get: With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool. When you want to embed big files or plenty of files, then the generated output is really big (perhaps over 3 MB). Even if the generated file shouldn't be read, you will probably still use analysis tools or an editor, which can become slow with such a file. Generating big files can be avoided with the `-split` command line option. In that case, the given output is a directory path: the tool will generate one source file per file to embed, and it will generate a common file named `common.go` which contains common parts such as the API.
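The two generated-code examples referred to above are not reproduced here; the following is a hand-written approximation of what the two modes look like (the variable and function names are invented for illustration and the embedded bytes are truncated):

```go
package assets

import (
	"reflect"
	"unsafe"
)

// Default mode: the data lives in an ordinary byte slice, so returning it is
// safe and needs no reflect/unsafe, at the cost of the slice being writable
// and of at least one copy having been made when the literal was built.
var _fooData = []byte("\x1f\x8b\x08\x00...")

func foo() []byte {
	return _fooData
}

// NoMemCopy mode: the data is stored as a string (kept in .rodata) and its
// backing array is reinterpreted as a byte slice without copying. Writing to
// the returned slice generates a runtime error.
var _barData = "\x1f\x8b\x08\x00..."

func bar() []byte {
	sh := (*reflect.StringHeader)(unsafe.Pointer(&_barData))
	var b []byte
	bh := (*reflect.SliceHeader)(unsafe.Pointer(&b))
	bh.Data = sh.Data
	bh.Len = sh.Len
	bh.Cap = sh.Len
	return b
}
```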
Package qringbuf provides a concurrency-friendly, zero-copy abstraction of io.ReadAtLeast(…) over a pre-allocated ring-buffer, populated asynchronously by a standalone goroutine. It is primarily designed for processing a series of consecutive sub-streams from a single io.Reader, each sub-stream in turn comprised of variable-length records. The buffer object DOES NOT ASSUME exclusive ownership of the supplied io.Reader, never reads more than instructed by an argument to StartFill(…), and exposes a standard sync.Mutex interface allowing all operations to be paused when exclusive access to the underlying Reader is desired. In all cases below the background "collector" goroutine reading from the enclosed someIoReader into the ring buffer is guaranteed to: In code the basic usage looks roughly like this (error/flow handling elided): In addition one can operate over individual (sub)regions with "fearless concurrency": The specific technical guarantees made by an object of this package are: Unlike io.ReadAtLeast(…), errors from the underlying reader are always made available on NextRegion(…). As with the standard io.Read(…) semantics, an error can be returned together with a result. One should always check whether the *Region return value is nil first, before processing the error. See the documentation of io.Read(…) for an extended discussion. Changes of the NextRegion(…) "emitter" and collector positions are protected by a mutex on the qringbuf object. Calls modifying the buffer state will block until this lock can be obtained. The same mutex is exposed as part of the API, so one can pause the collector if a direct read and/or skip on the underlying io.Reader is needed. The *Region.{Reserve/Release}() functionality does not use the mutex, ensuring that an asynchronous Release() call can not be affected by the current state of the buffer. Reservation tracking is implemented as an atomically modified list of reservation counts, one int32 per SectorSize bytes of the buffer. The reservation system explicitly allows "recursive locking": you can hold an arbitrary number of reservations over a sector by repeatedly creating SubRegion(…) objects. Care must be taken to release every single reservation obtained previously, otherwise the collector will remain blocked forever. What follows is an illustration of a contrived lifecycle of a hypothetical qringbuf object initialized with: Note that for brevity THE DIAGRAMS BELOW ARE DECIDEDLY NOT REPRESENTATIVE of a typical lifecycle. Normally BufferSize is an order of magnitude larger than MinRegion and MaxCopy, and the time spent waiting and copying data is insignificant in relation to all other possible states. Also outstanding async reservations typically trail the emitter very closely, so after a wrap the collector is virtually never blocked, contrary to what is depicted below. Instead the diagrams merely demonstrate the choices this library makes dealing with the "tricky parts" of maintaining the illusion of an arbitrary stream of contiguous bytes.
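A very rough sketch of the basic loop alluded to above, with error and flow handling elided as in the original; the constructor name, Config fields and method names are assumptions reconstructed from the prose, not a verified API:

```go
package main

import (
	"io"
	"os"

	qringbuf "github.com/ipfs/go-qringbuf" // assumed import path
)

func process(b []byte) { _ = b } // placeholder record-processing step

func main() {
	f, _ := os.Open("stream.bin")
	defer f.Close()

	// Assumed constructor and configuration knobs matching the prose above.
	qrb, _ := qringbuf.NewFromReader(f, qringbuf.Config{
		BufferSize: 2 * 1024 * 1024,
		MinRegion:  4096,
		MinRead:    2048,
		MaxCopy:    4096,
	})

	// Ask the collector to start filling, then walk contiguous regions.
	_ = qrb.StartFill(0)
	for {
		reg, err := qrb.NextRegion(0)
		if reg == nil {
			// Always check the *Region for nil before acting on the error.
			if err != io.EOF {
				// handle err
			}
			break
		}
		process(reg.Bytes())
	}
}
```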
Package podcast generates a fully compliant iTunes and RSS 2.0 podcast feed for GoLang using a simple API. Full documentation with detailed examples is located at https://godoc.org/github.com/eduncan911/podcast. To use, `go get` and `import` the package like your typical GoLang library. The API exposes a number of method receivers on structs that implement the logic required to comply with the specifications and ensure a compliant feed. A number of overrides occur to help with iTunes visibility of your episodes. Notably, the `Podcast.AddItem` function performs most of the heavy lifting by taking the `Item` input and performing validation, overrides and duplicate setters through the feed. Full detailed Examples of the API are at https://godoc.org/github.com/eduncan911/podcast. In no way are you restricted in having full control over your feeds. You may choose to skip the API methods and instead use the structs directly. The fields have been grouped by RSS 2.0 and iTunes fields. iTunes specific fields are all prefixed with the letter `I`. RSS 2.0: https://cyber.harvard.edu/rss/rss.html Podcasts: https://help.apple.com/itc/podcasts_connect/#/itca5b22233 The 1.x branch is now mostly in maintenance mode, open to PRs. This means no further features are planned for the 1.x branch. With the success of 6 iTunes-accepted podcasts I have published with this library, and with the feedback from the community, the 1.x releases are now considered stable. The 2.x branch's primary focus is to allow for bi-directional marshalling. Currently, the 1.x branch only supports marshalling out to a serialized feed; an attempt to unmarshal a serialized feed back into a Podcast will error or not work correctly. Note that while the 2.x branch is targeted to remain backwards compatible, this is only true if you use the public API funcs to set parameters. Several of the underlying public fields are being removed in order to accommodate the marshalling of serialized data. Therefore, this release is denoted version 2.x. We use the SemVer versioning schema. You can rest assured that pulling 1.x branches will remain backwards compatible now and into the future. However, the new 2.x branch, while keeping the same API, is expected to break code that bypasses the API methods and uses the underlying public properties instead. Release history: 1.3.2, 1.3.1, 1.3.0, 1.2.1, 1.2.0, 1.1.0, 1.0.0
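A short sketch of building a feed with the API described above (the constructor and method names follow the project's published examples as I recall them; the enclosure constant and exact field names are assumptions):

```go
package main

import (
	"os"
	"time"

	"github.com/eduncan911/podcast"
)

func main() {
	now := time.Now()
	p := podcast.New("Example Show", "https://example.com", "A show about examples", &now, &now)
	p.AddAuthor("Jane Doe", "jane@example.com")

	item := podcast.Item{
		Title:       "Episode 1",
		Link:        "https://example.com/ep1",
		Description: "The first episode.",
		PubDate:     &now,
	}
	// AddEnclosure attaches the audio file; AddItem performs the validation
	// and iTunes overrides described above.
	item.AddEnclosure("https://example.com/ep1.mp3", podcast.MP3, 1234567)
	if _, err := p.AddItem(item); err != nil {
		panic(err)
	}

	// Write the RSS 2.0 / iTunes XML to stdout.
	if err := p.Encode(os.Stdout); err != nil {
		panic(err)
	}
}
```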
Package dosa is the DOSA - Declarative Object Storage Abstraction. DOSA (https://github.com/uber-go/dosa/wiki) is a storage framework that provides a declarative object storage abstraction for applications in Golang and (soon) Java. DOSA is designed to relieve common headaches developers face while building stateful, database-dependent services. If you'd like to start by writing a small DOSA-enabled program, check out the getting started guide (https://github.com/uber-go/dosa/wiki/Getting-Started-Guide). DOSA is a storage library that supports:
• methods to store and retrieve go structs
• struct annotations to describe queries against data
• tools to create and/or migrate database schemas
• implementations that serialize requests to remote stateless servers
This project is released under the MIT License (LICENSE.txt).
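As a hedged illustration of the declarative, annotation-driven style mentioned in the list above, an entity might be declared roughly like this (the exact tag syntax and field types are assumptions based on the wiki, not verified):

```go
package records

import (
	"time"

	"github.com/uber-go/dosa"
)

// UserRecord illustrates the declarative style: the struct tag on the
// embedded dosa.Entity names the primary key, and exported fields become
// columns that can be stored, retrieved and queried.
type UserRecord struct {
	dosa.Entity `dosa:"primaryKey=UUID"`
	UUID        dosa.UUID
	Name        string
	Email       string
	CreatedAt   time.Time
}
```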
Package codegurusecurity provides the API client, operations, and parameter types for Amazon CodeGuru Security. Amazon CodeGuru Security is in preview release and is subject to change. This section provides documentation for the Amazon CodeGuru Security API operations. CodeGuru Security is a service that uses program analysis and machine learning to detect security policy violations and vulnerabilities, and recommends ways to address these security risks. By proactively detecting and providing recommendations for addressing security risks, CodeGuru Security improves the overall security of your application code. For more information about CodeGuru Security, see the Amazon CodeGuru Security User Guide.
Package mediapackagev2 provides the API client, operations, and parameter types for AWS Elemental MediaPackage v2. This guide is intended for creating AWS Elemental MediaPackage resources in MediaPackage Version 2 (v2) starting from May 2023. To get started with MediaPackage v2, create your MediaPackage resources. There isn't an automated process to migrate your resources from MediaPackage v1 to MediaPackage v2. The names of the entities that you use to access this API, like URLs and ARNs, all have the versioning information added, like "v2", to distinguish from the prior version. If you used MediaPackage prior to this release, you can't use the MediaPackage v2 CLI or the MediaPackage v2 API to access any MediaPackage v1 resources. If you created resources in MediaPackage v1, use video on demand (VOD) workflows, and aren't looking to migrate to MediaPackage v2 yet, see the MediaPackage v1 Live API Reference. This is the AWS Elemental MediaPackage v2 Live REST API Reference. It describes all the MediaPackage API operations for live content in detail, and provides sample requests, responses, and errors for the supported web services protocols. We assume that you have the IAM permissions that you need to use MediaPackage via the REST API. We also assume that you are familiar with the features and operations of MediaPackage, as described in the AWS Elemental MediaPackage User Guide.
Package ngrok makes it easy to work with the ngrok API from Go. The package is fully code generated and should always be up to date with the latest ngrok API. Full documentation of the ngrok API can be found at: https://ngrok.com/docs/api This package follows the best practices outlined for Go modules. All releases are tagged and any breaking changes will be reflected as a new major version. You should only import this package for production applications by pointing at a stable tagged version. The following example code demonstrates typical initialization and usage of the package to make an API call: API client configuration and all of the datatypes exchanged by the API are defined in this base package. There are subpackages for every API service and a Client type defined in those packages with methods to interact with that API service. It's usually easiest to find the subpackage of the service you want to work with and begin consulting the documentation there. It is recommended to construct the service-specific clients once at initialization time. The ClientConfig object in the root package supports functional options for configuration. The most common option to use is `WithHTTPClient()` which allows the caller to specify a different net/http.Client object. This allows the caller full customization over the transport if needed for use with proxies, custom TLS setups, observability and tracing, etc. Some arguments to methods in the ngrok API are optional and must be meaningfully distinguished from zero values, especially in Update() methods. This allows the API to distinguish between choosing not to update a value vs. setting it to zero or the empty string. For these arguments, ngrok follows the industry standard practice of using pointers to the primitive types and providing convenience functions like ngrok.String() and ngrok.Bool() for the caller to wrap literals as pointer values. For example: All List methods in the ngrok API are paged. This package abstracts that problem away from you by returning an iterator from any List API call. As you advance the iterator it will transparently fetch new pages of values for you behind the scenes. Note that the context supplied to the initial List() call will be used for all subsequent page fetches so it must be long enough to work through the entire list. Here's an example of paging through all of the TLS certificates on your account. Note that you must check for an error after Next() returns false to determine if the iterator failed to fetch the next page of results. All errors returned by the ngrok API are returned as structured payloads for easy error handling. Most non-networking errors returned by API calls in this package will be an ngrok.Error type. The ngrok.Error type exposes important metadata that will help you handle errors. Specifically it includes the HTTP status code of any failed operation as well as an error code value that uniquely identifies the failure condition. There are two helper functions that will make error handling easy: IsNotFound and IsErrorCode. IsNotFound helps identify the common case of accessing an API resource that no longer exists: IsErrorCode helps you identify specific ngrok errors by their unique ngrok error code. 
All ngrok error codes are documented at https://ngrok.com/docs/errors. To check for a specific error condition, you would structure your code like the following example: All ngrok datatypes in this package define String() and GoString() methods so that they can be formatted into strings in helpful representations. The GoString() method is defined to pretty-print an object for debugging purposes with the "%#v" formatting verb.
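A hedged sketch tying the pieces above together: constructing the shared configuration, iterating a paged List call, and handling structured errors. The module version in the import paths, the tls_certificates subpackage layout and the Item().ID field are assumptions; the overall shape follows the iterator and IsNotFound behaviour described above:

```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/ngrok/ngrok-api-go/v5"
	"github.com/ngrok/ngrok-api-go/v5/tls_certificates"
)

func main() {
	ctx := context.Background()

	// Construct the shared configuration once; WithHTTPClient is optional.
	cfg := ngrok.NewClientConfig("<api-token>", ngrok.WithHTTPClient(http.DefaultClient))

	// Service-specific client for the TLS certificates API.
	certs := tls_certificates.NewClient(cfg)

	// List() returns an iterator that fetches new pages transparently.
	iter := certs.List(nil)
	for iter.Next(ctx) {
		fmt.Println(iter.Item().ID)
	}
	// Check for an error after Next() returns false.
	if err := iter.Err(); err != nil {
		if ngrok.IsNotFound(err) {
			fmt.Println("resource no longer exists")
			return
		}
		panic(err)
	}
}
```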
Package stm provides Software Transactional Memory operations for Go. This is an alternative to the standard way of writing concurrent code (channels and mutexes). STM makes it easy to perform arbitrarily complex operations in an atomic fashion. One of its primary advantages over traditional locking is that STM transactions are composable, whereas locking functions are not -- the composition will either deadlock or release the lock between functions (making it non-atomic). To begin, create an STM object that wraps the data you want to access concurrently. You can then use the Atomically method to atomically read and/or write the data. This code atomically decrements x: An important part of STM transactions is retrying. At any point during the transaction, you can call tx.Retry(), which will abort the transaction, but not cancel it entirely. The call to Atomically will block until another call to Atomically finishes, at which point the transaction will be rerun. Specifically, one of the values read by the transaction (via tx.Get) must be updated before the transaction will be rerun. As an example, this code will try to decrement x, but will block as long as x is zero: Internally, tx.Retry simply calls panic(stm.Retry). Panicking with any other value will cancel the transaction; no values will be changed. However, it is the responsibility of the caller to catch such panics. Multiple transactions can be composed using Select. If the first transaction calls Retry, the next transaction will be run, and so on. If all of the transactions call Retry, the call will block and the entire selection will be retried. For example, this code implements the "decrement-if-nonzero" transaction above, but for two values. It will first try to decrement x, then y, and block if both values are zero. An important caveat: transactions must be idempotent (they should have the same effect every time they are invoked). This is because a transaction may be retried several times before successfully completing, meaning its side effects may execute more than once. This will almost certainly cause incorrect behavior. One common way to get around this is to build up a list of impure operations inside the transaction, and then perform them after the transaction completes. The stm API tries to mimic that of Haskell's Control.Concurrent.STM, but this is not entirely possible due to Go's type system; we are forced to use interface{} and type assertions. Furthermore, Haskell can enforce at compile time that STM variables are not modified outside the STM monad. This is not possible in Go, so be especially careful when using pointers in your STM code. Remember: modifying a pointer is a side effect!
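A compact sketch of the decrement-if-nonzero transaction described above. The import path is an assumption, and whether Atomically is a method or a package-level function (and its exact signature) varies between stm packages and versions; the NewVar/Atomically/Get/Set/Retry names follow the terms used in this doc:

```go
package main

import (
	"fmt"

	"github.com/lukechampine/stm" // assumed import path
)

func main() {
	// Wrap the shared value in an STM variable.
	x := stm.NewVar(10)

	// Atomically decrement x, blocking (via Retry) while it is zero.
	stm.Atomically(func(tx *stm.Tx) {
		cur := tx.Get(x).(int)
		if cur == 0 {
			tx.Retry() // abort and re-run once a value read here changes
		}
		tx.Set(x, cur-1)
	})

	// Read the result back in its own (trivially idempotent) transaction.
	var after int
	stm.Atomically(func(tx *stm.Tx) {
		after = tx.Get(x).(int)
	})
	fmt.Println(after) // 9
}
```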
Package mathutil provides utilities supplementing the standard 'math' and 'math/rand' packages.
2020-12-20: v1.2.1 fixes MulOverflowInt64.
2020-12-19: Added {Add,Sub,Mul}OverflowInt{8,16,32,64}.
2018-10-21: Added BinaryLog.
2018-04-25: New functions for determining Max/Min of nullable values.
2017-10-14: New variadic functions for Max/Min.
2016-10-10: New functions QuadPolyDiscriminant and QuadPolyFactors.
2013-12-13: The following functions have been REMOVED.
2013-05-13: The following functions are now DEPRECATED. These functions will be REMOVED with Go release 1.1+1.
2013-01-21: The following functions have been REMOVED. They are now replaced by untyped constants; additionally, one more untyped constant was added. This change breaks any existing code depending on the above removed functions. They should not have been published in the first place; that was unfortunate. Instead, defining such architecture and/or implementation specific integer limits and bit widths as untyped constants improves performance and allows for static dead code elimination if it depends on these values. Thanks to minux for pointing it out in the mail list (https://groups.google.com/d/msg/golang-nuts/tlPpLW6aJw8/NT3mpToH-a4J).
2012-12-12: The following functions will be DEPRECATED with Go release 1.0.3+1 and REMOVED with Go release 1.0.3+2, b/c of http://code.google.com/p/go/source/detail?r=954a79ee3ea8
Package gofpdf implements a PDF document generator with high level support for text, drawing and images.
- UTF-8 support
- Choice of measurement unit, page format and margins
- Page header and footer management
- Automatic page breaks, line breaks, and text justification
- Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images
- Colors, gradients and alpha channel transparency
- Outline bookmarks
- Internal and external links
- TrueType, Type1 and encoding support
- Page compression
- Lines, Bézier curves, arcs, and ellipses
- Rotation, scaling, skewing, translation, and mirroring
- Clipping
- Document protection
- Layers
- Templates
- Barcodes
- Charting facility
- Import PDFs as templates
gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. 
Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions. 
Your change should:
- be compatible with the MIT License
- be properly documented
- be formatted with go fmt
- include an example in fpdf_test.go if appropriate
- conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings
- not diminish test coverage
Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoek and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage. 
Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations. Roadmap:
- Remove all legacy code page font support; use UTF-8 exclusively
- Improve test coverage as reported by the coverage tool.
Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
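A minimal sketch of the kind of simple document described above, assuming the jung-kurt/gofpdf import path; error handling is deferred to the final output call, as explained earlier:

```go
package main

import "github.com/jung-kurt/gofpdf"

func main() {
	// Portrait orientation, millimeters, A4; the font directory is empty
	// because only a core font (Arial, a synonym for Helvetica) is used.
	pdf := gofpdf.New("P", "mm", "A4", "")
	pdf.AddPage()
	pdf.SetFont("Arial", "B", 16)
	pdf.Cell(40, 10, "Hello, world")

	// Any error that occurred along the way surfaces here.
	if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
		panic(err)
	}
}
```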
Package gochrome aims to be a complete Chrome DevTools Protocol Viewer implementation. Versioned packages are available. Currently the only version is `tot` or Tip-of-Tree. Stable versions will be made available in the future. This is beta software and hasn't been well exercised in real-world applications. See https://chromedevtools.github.io/devtools-protocol/ The Chrome DevTools Protocol allows for tools to instrument, inspect, debug and profile Chromium, Chrome and other Blink-based browsers. Many existing projects currently use the protocol. The Chrome DevTools uses this protocol and the team maintains its API. Instrumentation is divided into a number of domains (DOM, Debugger, Network etc.). Each domain defines a number of commands it supports and events it generates. Both commands and events are serialized JSON objects of a fixed structure. You can either debug over the wire using the raw messages as they are described in the corresponding domain documentation, or use the extension JavaScript API. The latest (tip-of-tree) protocol (tot) changes frequently and can break at any time. However, it captures the full capabilities of the Protocol, whereas the stable release is a subset. There is no backwards compatibility support guaranteed for the capabilities it introduces. Basics: using DevTools as a protocol client. The Developer Tools front-end can attach to a remotely running Chrome instance for debugging. For this scenario to work, you should start your host Chrome instance with the remote-debugging-port command line switch: Then you can start a separate client Chrome instance, using a distinct user profile: Now you can navigate to the given port from your client and attach to any of the discovered tabs for debugging: http://localhost:9222 You will find the Developer Tools interface identical to the embedded one and here is why: In this scenario, you can substitute the Developer Tools front-end with your own implementation. Instead of navigating to the HTML page at http://localhost:9222, your application can discover available pages by requesting: http://localhost:9222/json and getting a JSON object with information about inspectable pages along with the WebSocket addresses that you could use in order to start instrumenting them. Remote debugging is especially useful when debugging remote instances of the browser or attaching to the embedded devices. Blink port owners are responsible for exposing debugging connections to the external users. This is especially handy to understand how the DevTools frontend makes use of the protocol. First, run Chrome with the debugging port open: Then, select the Chromium Projects item in the Inspectable Pages list. Now that DevTools is up and fullscreen, open DevTools to inspect it. Cmd-R in the new inspector to make the first restart. Now head to Network Panel, filter by Websocket, select the connection and click the Frames tab. Now you can easily see the frames of WebSocket activity as you use the first instance of the DevTools. To allow chrome extensions to interact with the protocol, we introduced the chrome.debugger extension API that exposes this JSON message transport interface. As a result, you can not only attach to the remotely running Chrome instance, but also instrument it from its own extension. Chrome Debugger Extension API provides a higher level API where command domain, name and body are provided explicitly in the `sendCommand` call. 
This API hides request ids and handles binding of the request with its response, hence allowing `sendCommand` to report the result in the callback function call. One can also use this API in combination with the other Extension APIs. If you are developing a Web-based IDE, you should implement an extension that exposes debugging capabilities to your page and your IDE will be able to open pages with the target application, set breakpoints there, evaluate expressions in console, live edit JavaScript and CSS, display live DOM, network interaction and any other aspect that Developer Tools is instrumenting today. Opening embedded Developer Tools will terminate the remote connection and thus detach the extension. https://chromedevtools.github.io/devtools-protocol/#simultaneous The canonical protocol definitions live in the Chromium source tree: (browser_protocol.json and js_protocol.json). They are maintained manually by the DevTools engineering team. These files are mirrored (hourly) on GitHub in the devtools-protocol repo. The declarative protocol definitions are used across tools. Within Chromium, a binding layer is created for the Chrome DevTools to interact with, and separately the protocol is used for Chrome Headless’s C++ interface. What’s the protocol_externs file? It’s created via generate_protocol_externs.py and useful for tools using closure compiler. The TypeScript story is here. Not yet. See bugger-daemon’s third-party docs. See also the endpoints implementation in Chromium. /json/protocol was added in Chrome 60. The endpoint is exposed as webSocketDebuggerUrl in /json/version. Note the browser in the URL, rather than page. If Chrome was launched with --remote-debugging-port=0 and chose an open port, the browser endpoint is written to both stderr and the DevToolsActivePort file in the browser profile folder. Yes, as of Chrome 63! See Multi-client remote debugging support. Upon disconnection, the outgoing client will receive a detached event. For example: View the enum of possible reasons. (For reference: the original patch). After disconnection, some apps have chosen to pause their state and offer a reconnect button.
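As a small, library-free illustration of the discovery step described above, the following queries the /json endpoint of a Chrome instance started with --remote-debugging-port=9222 and prints each inspectable page with its WebSocket address (the struct mirrors a few of the JSON keys that endpoint returns):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// target holds a subset of the fields returned for each inspectable page.
type target struct {
	Title                string `json:"title"`
	URL                  string `json:"url"`
	WebSocketDebuggerURL string `json:"webSocketDebuggerUrl"`
}

func main() {
	resp, err := http.Get("http://localhost:9222/json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var targets []target
	if err := json.NewDecoder(resp.Body).Decode(&targets); err != nil {
		panic(err)
	}
	for _, t := range targets {
		fmt.Printf("%s\n  %s\n  %s\n", t.Title, t.URL, t.WebSocketDebuggerURL)
	}
}
```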
Package uinput is a pure go package that provides access to the userland input device driver uinput on linux systems. Virtual keyboard devices as well as virtual mouse input devices may be created using this package. The keycodes and other event definitions, that are available and can be used to trigger input events, are part of this package ("Key1" for number 1, for example).
In order to use the virtual keyboard, you will need to follow these three steps:
1. Initialize the device. Example: vk, err := CreateKeyboard("/dev/uinput", "Virtual Keyboard")
2. Send button events to the device.
   Example (print a single D): err = vk.KeyPress(uinput.KeyD)
   Example (keep moving right by holding down right arrow key): err = vk.KeyDown(uinput.KeyRight)
   Example (stop moving right by releasing the right arrow key): err = vk.KeyUp(uinput.KeyRight)
3. Close the device. Example: err = vk.Close()
A virtual mouse input device is just as easy to create and use:
1. Initialize the device. Example: vm, err := CreateMouse("/dev/uinput", "DangerMouse")
2. Move the cursor around and issue click events.
   Example (move mouse right): err = vm.MoveRight(42)
   Example (move mouse left): err = vm.MoveLeft(42)
   Example (move mouse up): err = vm.MoveUp(42)
   Example (move mouse down): err = vm.MoveDown(42)
   Example (trigger a left click): err = vm.LeftClick()
   Example (trigger a right click): err = vm.RightClick()
3. Close the device. Example: err = vm.Close()
If you'd like to use absolute input events (move the cursor to specific positions on screen), use the touch pad. Note that you'll need to specify the size of the screen area you want to use when you initialize the device. Here are a few examples of how to use the virtual touch pad:
1. Initialize the device. Example: vt, err := CreateTouchPad("/dev/uinput", "DontTouchThis", 0, 1024, 0, 768)
2. Move the cursor around and issue click events.
   Example (move cursor to the top left corner of the screen): err = vt.MoveTo(0, 0)
   Example (move cursor to the position x: 100, y: 250): err = vt.MoveTo(100, 250)
   Example (trigger a left click): err = vt.LeftClick()
   Example (trigger a right click): err = vt.RightClick()
3. Close the device. Example: err = vt.Close()
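Putting the keyboard steps together, a minimal program might look like this (the import path is an assumption, and the exact CreateKeyboard signature, for example whether the device name is a string or a byte slice, varies between versions of the package):

```go
package main

import "github.com/bendahl/uinput" // assumed import path

func main() {
	// Create the virtual keyboard (requires write access to /dev/uinput).
	vk, err := uinput.CreateKeyboard("/dev/uinput", "Virtual Keyboard")
	if err != nil {
		panic(err)
	}
	defer vk.Close()

	// Print a single D.
	if err := vk.KeyPress(uinput.KeyD); err != nil {
		panic(err)
	}

	// Hold and then release the right arrow key.
	if err := vk.KeyDown(uinput.KeyRight); err != nil {
		panic(err)
	}
	if err := vk.KeyUp(uinput.KeyRight); err != nil {
		panic(err)
	}
}
```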
bindata converts any file into manageable Go source code. Useful for embedding binary data into a go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but uses the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`. 
This accepts a portion of a path name, which should be stripped off from the map keys and function names. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get: With the optional Tags field, you can specify any Go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line at the beginning of the output file and must follow the build tags syntax specified by the go tool.
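For reference, the two generated-code shapes contrasted above look roughly like the following sketch. It is illustrative only: the identifiers assetSafe, assetNoCopy and _assetData are made up, and go-bindata actually names the emitted functions after the embedded files:

    package assets

    import (
        "reflect"
        "unsafe"
    )

    // assetSafe illustrates the default mode: the data lives in a writable
    // byte-slice literal, costing at least one extra copy of the data in
    // memory, but with no dependency on reflect or unsafe.
    func assetSafe() []byte {
        return []byte{0x1f, 0x8b, 0x08 /* ... */}
    }

    // _assetData lives in the read-only .rodata section of the binary.
    var _assetData = "\x1f\x8b\x08"

    // assetNoCopy illustrates the NoMemCopy mode: the string's backing array
    // is re-interpreted as a []byte without copying. Writing to the returned
    // slice triggers a runtime fault.
    func assetNoCopy() []byte {
        var b []byte
        hdr := (*reflect.SliceHeader)(unsafe.Pointer(&b))
        hdr.Data = (*reflect.StringHeader)(unsafe.Pointer(&_assetData)).Data
        hdr.Len = len(_assetData)
        hdr.Cap = len(_assetData)
        return b
    }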
Package selfupdate provides functionality to implement secure, self-updating Go programs (or other single-file targets). For complete updating solutions please see Equinox (https://equinox.io) and go-tuf (https://github.com/flynn/go-tuf). This example shows how to update a program remotely from a URL. Go binaries can often be large. It can be advantageous to only ship a binary patch to a client instead of the complete program text of a new version. This example shows how to update a program with a bsdiff binary patch. Other patch formats may be applied by implementing the Patcher interface. Updating executable code on a computer can be a dangerous operation unless you take the appropriate steps to guarantee the authenticity of the new code. While checksum verification is important, it should always be combined with signature verification (next section) to guarantee that the code came from a trusted party. selfupdate validates SHA256 checksums by default, but this is pluggable via the Hash property on the Options struct. This example shows how to guarantee that the newly-updated binary is verified to have an appropriate checksum (that was otherwise retrieved via a secure channel) specified as a hex string. Cryptographic verification of new code from an update is an extremely important way to guarantee the security and integrity of your updates. Verification is performed by validating the signature of a hash of the new file. This means nothing changes if you apply your update with a patch. This example shows how to add signature verification to your updates. To make all of this work an application distributor must first create a public/private key pair and embed the public key into their application. When they issue a new release, the issuer must sign the new executable file with the private key and distribute the signature along with the update. In order to update a Go application with self-update, you must distribute it as a single executable. This is often easy, but some applications require static assets (like HTML and CSS asset files or TLS certificates). In order to update applications like these, you'll want to make sure to embed those asset files into the distributed binary with a tool like go-bindata (my favorite): https://github.com/jteeuwen/go-bindata Mechanisms and protocols for determining whether an update should be applied and, if so, which one, are out of scope for this package. Please consult go-tuf (https://github.com/flynn/go-tuf) or Equinox (https://equinox.io) for more complete solutions. selfupdate only works for self-updating applications that are distributed as a single binary, i.e. applications that do not have additional assets or dependency files. Updating applications that are distributed as multiple on-disk files is out of scope, although this may change in future versions of this library.
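As a rough sketch of the URL-based update flow described above, assuming an Apply(io.Reader, Options) entry point in the style of go-update; the import path is hypothetical and the exact exported names should be checked against this package's documentation:

    package updater

    import (
        "net/http"

        selfupdate "example.com/selfupdate" // hypothetical import path
    )

    // doUpdate downloads a new binary from url and applies it over the
    // currently running executable.
    func doUpdate(url string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        // Options{} accepts the defaults; the Hash field and the Patcher
        // interface described above plug in via its fields when checksum
        // verification, signatures or binary patches are needed.
        return selfupdate.Apply(resp.Body, selfupdate.Options{})
    }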
Package mem implements a memory allocator and deallocator. It currently uses mmap on Unix and VirtualAlloc on Windows to request pages of memory from the operating system, and munmap and VirtualFree to release pages of memory to the operating system. The allocator uses a first-fit algorithm on a singly-linked free list of blocks. Blocks are divided into sets called arenas, which correspond to the chunks of memory mapped from the operating system. When all of the blocks in an arena are freed, the arena is unmapped.
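As a generic illustration of a first-fit scan over a singly-linked free list (this is not the package's actual implementation or its identifiers, just a sketch of the algorithm named above):

    package memsketch

    // block is a node in a singly-linked free list.
    type block struct {
        size int    // usable bytes in this block
        next *block // next free block, or nil
    }

    // firstFit scans the free list from the head and returns the first block
    // large enough for the request, unlinking it from the list. It returns
    // nil when no block fits, in which case an allocator would map a new arena.
    func firstFit(head **block, size int) *block {
        for prev, cur := head, *head; cur != nil; prev, cur = &cur.next, cur.next {
            if cur.size >= size {
                *prev = cur.next // unlink the block from the free list
                cur.next = nil
                return cur
            }
        }
        return nil
    }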
Package sccp provides encoding and decoding of the Signalling Connection Control Part (SCCP) used in the SS7/SIGTRAN protocol stack. This is still an experimental project, currently in a very early stage of development. Any part of the implementation (including exported APIs) may change before it is released as v1.0.0.
Package retag provides the ability to change the tags of struct fields at runtime without copying the data. It may be helpful in the following cases: Features: The package requires Go 1.7+. The package is still experimental and subject to change, and may be broken by a future release of Go.
Package statping is a server monitoring application that includes a status page server. Visit the Statping repo at https://github.com/hunterlong/statping to get a full understanding of what this application can do. Statping is available for Mac, Linux and Windows (64-bit). You can download the tar.gz file or use a couple of other methods. Download the latest release at https://github.com/hunterlong/statping/releases/latest or view below. If you're on Windows, download the zip file from the latest releases link. Statping can be built in many ways; the best way is to use Docker! Enjoy Statping and report any issues you might be having on GitHub: https://github.com/hunterlong
Package ngrok makes it easy to work with the ngrok API from Go. The package is fully code generated and should always be up to date with the latest ngrok API. Full documentation of the ngrok API can be found at: https://ngrok.com/docs/api This package follows the best practices outlined for Go modules. All releases are tagged and any breaking changes will be reflected as a new major version. You should only import this package for production applications by pointing at a stable tagged version. The following example code demonstrates typical initialization and usage of the package to make an API call: API client configuration and all of the datatypes exchanged by the API are defined in this base package. There are subpackages for every API service and a Client type defined in those packages with methods to interact with that API service. It's usually easiest to find the subpackage of the service you want to work with and begin consulting the documentation there. It is recommended to construct the service-specific clients once at initialization time. The ClientConfig object in the root package supports functional options for configuration. The most common option to use is `WithHTTPClient()` which allows the caller to specify a different net/http.Client object. This allows the caller full customization over the transport if needed for use with proxies, custom TLS setups, observability and tracing, etc. Some arguments to methods in the ngrok API are optional and must be meaningfully distinguished from zero values, especially in Update() methods. This allows the API to distinguish between choosing not to update a value vs. setting it to zero or the empty string. For these arguments, ngrok follows the industry standard practice of using pointers to the primitive types and providing convenience functions like ngrok.String() and ngrok.Bool() for the caller to wrap literals as pointer values. For example: All List methods in the ngrok API are paged. This package abstracts that problem away from you by returning an iterator from any List API call. As you advance the iterator it will transparently fetch new pages of values for you behind the scenes. Note that the context supplied to the initial List() call will be used for all subsequent page fetches so it must be long enough to work through the entire list. Here's an example of paging through all of the TLS certificates on your account. Note that you must check for an error after Next() returns false to determine if the iterator failed to fetch the next page of results. All errors returned by the ngrok API are returned as structured payloads for easy error handling. Most non-networking errors returned by API calls in this package will be an ngrok.Error type. The ngrok.Error type exposes important metadata that will help you handle errors. Specifically it includes the HTTP status code of any failed operation as well as an error code value that uniquely identifies the failure condition. There are two helper functions that will make error handling easy: IsNotFound and IsErrorCode. IsNotFound helps identify the common case of accessing an API resource that no longer exists: IsErrorCode helps you identify specific ngrok errors by their unique ngrok error code.
All ngrok error codes are documented at https://ngrok.com/docs/errors To check for a specific error condition, you would structure your code like the following example: All ngrok datatypes in this package define String() and GoString() methods so that they can be formatted into strings in helpful representations. The GoString() method is defined to pretty-print an object for debugging purposes with the "%#v" formatting verb.
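Tying the pieces above together, a paging-and-error-handling sketch could look like the following. The tls_certificates subpackage, the NewClient and List constructors, and the iterator method names are assumptions inferred from the description; consult the generated per-service documentation for the authoritative signatures:

    package main

    import (
        "context"
        "log"

        "github.com/ngrok/ngrok-api-go/v5"                  // assumed import path
        "github.com/ngrok/ngrok-api-go/v5/tls_certificates" // assumed service subpackage
    )

    func main() {
        ctx := context.Background()

        // Build the shared configuration once, then a per-service client.
        cfg := ngrok.NewClientConfig("your-api-key")
        certs := tls_certificates.NewClient(cfg)

        // List returns an iterator; Next transparently fetches further pages.
        iter := certs.List(nil)
        for iter.Next(ctx) {
            log.Println(iter.Item())
        }

        // Always check for an error once Next returns false.
        if err := iter.Err(); err != nil {
            if ngrok.IsNotFound(err) {
                log.Println("resource no longer exists")
                return
            }
            log.Fatal(err)
        }
    }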
Package pgx is a PostgreSQL database driver. pgx provides lower level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL specific features. Import github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver. pgx implements Query and Scan in the familiar database/sql style. pgx also implements QueryRow in the same style as database/sql. Use Exec to execute a query that does not return a result set. Connection pool usage is explicit and configurable. In pgx, a connection can be created and managed directly, or a connection pool with a configurable maximum number of connections can be used. The connection pool offers an after connect hook that allows every connection to be automatically set up before being made available in the connection pool. It delegates methods such as QueryRow to an automatically checked out and released connection so you can avoid manually acquiring and releasing connections when you do not need that level of control. pgx maps all common base types directly between Go and PostgreSQL. In particular: pgx can map nulls in two ways. The first is package pgtype, which provides types that have a data field and a status field. They work in a similar fashion to database/sql. The second is to use a pointer to a pointer. pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array type. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice an error will occur. The pgtype package includes many more array types for PostgreSQL types that do not directly map to native Go types. pgx includes built-in support to marshal and unmarshal between Go types and the PostgreSQL JSON and JSONB types. pgx encodes net.IPNet to and from the PostgreSQL inet and cidr types. In addition, as a convenience pgx will encode from a net.IP; it will assume a /32 netmask for IPv4 and a /128 for IPv6. pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. In addition, pgx uses the github.com/jackc/pgx/pgtype library to support more types. See the documentation for that library for instructions on how to implement custom types. See example_custom_type_test.go for an example of a custom type for the PostgreSQL point type. pgx also includes support for custom types implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces. If pgx cannot natively encode a type and that type is a renamed type (e.g. type MyTime time.Time) pgx will attempt to encode the underlying type. While this is usually desired behavior, it can produce surprising behavior if one of the underlying type and the renamed type implements database/sql interfaces and the other implements pgx interfaces. It is recommended that this situation be avoided by implementing pgx interfaces on the renamed type. []byte passed as arguments to Query, QueryRow, and Exec are passed unmodified to PostgreSQL. Transactions are started by calling Begin or BeginEx. The BeginEx variant can create a transaction with a specified isolation level. Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a CopyFromSource interface. If the data is already in a [][]interface{} use CopyFromRows to wrap it in a CopyFromSource interface.
Or implement CopyFromSource to avoid buffering the entire data set in memory. CopyFrom can be faster than an insert with as few as 5 rows. pgx can listen to the PostgreSQL notification system with the WaitForNotification function. It takes a maximum time to wait for a notification. The pgx ConnConfig struct has a TLSConfig field. If this field is nil, then TLS will be disabled. If it is present, then it will be used to configure the TLS connection. This allows total configuration of the TLS connection. pgx has never explicitly supported Postgres < 9.6's `ssl_renegotiation` option. As of v3.3.0, it doesn't send `ssl_renegotiation: 0` either to support Redshift (https://github.com/jackc/pgx/pull/476). If you need TLS Renegotiation, consider supplying `ConnConfig.TLSConfig` with a non-zero `Renegotiation` value and if it's not the default on your server, set `ssl_renegotiation` via `ConnConfig.RuntimeParams`. pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. Set LogLevel to control logging verbosity. Adapters for github.com/inconshreveable/log15, github.com/sirupsen/logrus, and the testing log are provided in the log directory.
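To make the Query/Scan style concrete, here is a small sketch of connecting and reading rows with pgx v3. The connection parameters and the table are placeholders, and error handling is abbreviated:

    package main

    import (
        "log"

        "github.com/jackc/pgx"
    )

    func main() {
        // Connect directly; a pgx.ConnPool could be used instead.
        conn, err := pgx.Connect(pgx.ConnConfig{Host: "localhost", Database: "example", User: "postgres"})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Query and Scan work in the familiar database/sql style.
        rows, err := conn.Query("select id, name from widgets where weight > $1", 10)
        if err != nil {
            log.Fatal(err)
        }
        for rows.Next() {
            var id int64
            var name string
            if err := rows.Scan(&id, &name); err != nil {
                log.Fatal(err)
            }
            log.Println(id, name)
        }
        if rows.Err() != nil {
            log.Fatal(rows.Err())
        }
    }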
Package tcl is a CGo-free port of the Tool Command Language (Tcl). Tcl is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more. A separate Tcl shell is in the gotclsh directory. All tests pass on supported platforms. Some tests pass on experimental platforms. 2020-09-13 v1.4.0 supports linux/{amd64,386,arm,arm64}. The arm and arm64 ports fail the http tests. 2020-09-03 v1.2.0 is now completely CGo-free. 2020-08-04: beta2 released for linux/amd64 only. Support for threads, sockets and fork is not yet implemented. Some tests still crash; those are disabled at the moment.
Package sqlite is an in-process implementation of a self-contained, serverless, zero-configuration, transactional SQL database engine. (Work In Progress) 2017-06-10 Windows/Intel no longer uses the VM (thanks Steffen Butzer). 2017-06-05 Linux/Intel no longer uses the VM (cznic/virtual). To access an SQLite database, do something like the following: This is an experimental, pre-alpha, technology preview package. The alpha release is due when the C runtime support of SQLite in cznic/crt is complete. See http://modernc.org/ccir. To add a newly supported os/arch combination to this package, try running 'go generate'. See https://sqlite.org/docs.html
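Purely as a hypothetical sketch, assuming the package registers a database/sql driver named "sqlite" (which may not match this pre-alpha release's actual API), usage could resemble:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/cznic/sqlite" // assumed import path and driver registration
    )

    func main() {
        // Open a database file through the standard database/sql interface.
        db, err := sql.Open("sqlite", "example.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec("create table if not exists t(i int)"); err != nil {
            log.Fatal(err)
        }
    }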
Package qml offers graphical QML application support for the Go language. This package is in an alpha stage, and still in heavy development. APIs may change, and things may break. At this time contributors and developers that are interested in tracking the development closely are encouraged to use it. If you'd prefer a more stable release, please hold on a bit and subscribe to the mailing list for news. It's in a pretty good state, so it shall not take too long. See http://github.com/go-qml/qml for details. The qml package enables Go programs to display and manipulate graphical content using Qt's QML framework. QML uses a declarative language to express structure and style, and supports JavaScript for in-place manipulation of the described content. When using the Go qml package, such QML content can also interact with Go values, making use of its exported fields and methods, and even explicitly creating new instances of registered Go types. A simple Go application that integrates with QML may perform the following steps to offer a graphical interface: Some of these topics are covered below, and may also be observed in practice in the following examples: The following logic demonstrates loading a QML file into a window: Any QML object may be manipulated by Go via the Object interface. That interface is implemented both by dynamic QML values obtained from a running engine, and by Go types in the qml package that represent QML values, such as Window, Context, and Engine. For example, the following logic creates a window and prints its width whenever it's made visible: Information about the methods, properties, and signals that are available for QML objects may be obtained in the Qt documentation. As a reference, the "visibleChanged" signal and the "width" property used in the example above are described at: When in doubt about what type is being manipulated, the Object.TypeName method provides the type name of the underlying value. The simplest way of making a Go value available to QML code is setting it as a variable of the engine's root context, as in: This logic would enable the following QML code to successfully run: While registering an individual Go value as described above is a quick way to get started, it is also fairly limited. For more flexibility, a Go type may be registered so that QML code can natively create new instances in an arbitrary position of the structure. This may be achieved via the RegisterType function, as the following example demonstrates: With this logic in place, QML code can create new instances of Person by itself: Independently from the mechanism used to publish a Go value to QML code, its methods and fields are available to QML logic as methods and properties of the respective QML object representing it. As required by QML, though, the Go method and field names are lowercased according to the following scheme when being accessed from QML: While QML code can directly read and write exported fields of Go values, as described above, a Go type can also intercept writes to specific fields by declaring a setter method according to common Go conventions. This is often useful for updating the internal state or the visible content of a Go-defined type. For example: In the example above, whenever QML code attempts to update the Person.Name field via any means (direct assignment, object declarations, etc.) the SetName method is invoked with the provided value instead. A setter method may also be used in conjunction with a getter method rather than a real type field.
A method is only considered a getter in the presence of the respective setter, and according to common Go conventions it must not have the Get prefix. Inside QML logic, the getter and setter pair is seen as a single object property. Custom types implemented in Go may have displayable content by defining a Paint method such as: A simple example is available at: Resource files (qml code, images, etc) may be packed into the Go qml application binary to simplify its handling and distribution. This is done with the genqrc tool: The following blog post provides more details:
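For orientation, a minimal program that loads a QML file into a window and publishes a Go value to the root context might look like the following sketch; the gopkg.in/qml.v1 import path and the hello.qml file name are assumptions:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/qml.v1" // assumed import path
    )

    type Person struct {
        Name string
    }

    func main() {
        if err := qml.Run(run); err != nil {
            fmt.Fprintf(os.Stderr, "error: %v\n", err)
            os.Exit(1)
        }
    }

    func run() error {
        engine := qml.NewEngine()

        // Publish a Go value as a variable of the engine's root context.
        person := &Person{Name: "Ada"}
        engine.Context().SetVar("person", person)

        // Load a QML file and show it in a window.
        component, err := engine.LoadFile("hello.qml")
        if err != nil {
            return err
        }
        win := component.CreateWindow(nil)
        win.Show()
        win.Wait()
        return nil
    }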
Copying: All nodes (since they implement the Node interface) also implement the NodeCopier interface, which provides the ShallowCopy() function. A shallow copy returns a new node with all the same properties, but no children. On the other hand, there is a DeepCopy function which returns a new node with all recursive children also copied. This ensures that the new returned node can be manipulated without affecting the original node or any of its children. Dates in GEDCOM files can be very complex as they can cater for many scenarios: 1. Incomplete, like "Dec 1943" 2. Anchored, like "Aft. 3 Sep 2003" or "Before 1923" 3. Ranges, like "Bet. 4 Apr 1823 and 8 Apr 1823" 4. Phrases, like "(Foo Bar)" This package provides a very rich API for dealing with all kinds of dates in a meaningful and sensible way. Some notable features include: 1. All dates, even those that specify a specific day, have a minimum and maximum value that are their true bounds. This is especially important for larger date ranges like the whole month of "Jun 1945". 2. Upper and lower bounds of dates can be converted to the native Go time.Time object. 3. There is a Years function that provides a convenient way to normalise a date range into a number for easier distance and comparison measurements. 4. Algorithms for calculating the similarity of dates on a configurable parabola. Decoding a GEDCOM stream: If you are reading from a file you can use NewDocumentFromGEDCOMFile: Package gedcom contains functionality for encoding, decoding, traversing, manipulating and comparing GEDCOM data. You can download the latest binaries for macOS, Windows and Linux on the Releases page: https://github.com/elliotchance/gedcom/releases This will not require you to install Go or any other dependencies. If you wish to build it from source you must install the dependencies with: On top of the raw document is a powerful API that takes care of the complex traversing of the Document. Here is a simple example: Some of the nodes in a GEDCOM file have been replaced with more function-rich types, such as names, dates, families and more. Encoding a Document: If you need the GEDCOM data as a string you can simply use fmt.Stringer: The Filter function recursively removes or manipulates nodes with a FilterFunction: Some examples of Filter functions include BlacklistTagFilter, OfficialTagFilter, SimpleNameFilter and WhitelistTagFilter. There are several functions available that handle different kinds of merging: - MergeNodes(left, right Node) Node: returns a new node that merges children from both nodes. - MergeNodeSlices(left, right Nodes, mergeFn MergeFunction) Nodes: merges two slices based on the mergeFn. This allows more advanced merging when dealing with slices of nodes. - MergeDocuments(left, right *Document, mergeFn MergeFunction) *Document: creates a new document with their respective nodes merged. You can use IndividualBySurroundingSimilarityMergeFunction with this to merge individuals, rather than just appending them all. The MergeFunction is a type that can be received in some of the merging functions. The closure determines if two nodes should be merged and what the result would be. Alternatively it can also describe when two nodes should not be merged. You may certainly create your own MergeFunction, but there are some that are already included: - IndividualBySurroundingSimilarityMergeFunction creates a MergeFunction that will merge individuals if their surrounding similarity is at least minimumSimilarity.
- EqualityMergeFunction is a MergeFunction that will return a merged node if the nodes are considered equal (with Equals). Node.Equals performs a shallow comparison between two nodes. The implementation is different depending on the types of nodes being compared. You should see the specific documentation for the Node. Equality is not to be confused with the Is function seen on some of the nodes, such as Date.Is. The Is function is used to compare exact raw values in nodes. DeepEqual tests if left and right are recursively equal. CompareNodes recursively compares two nodes. For example: Produces a *NodeDiff that can be rendered with the String method:
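A small decoding sketch along the lines described above; the import path github.com/elliotchance/gedcom follows the release URL given earlier, while the Individuals and Name accessors are assumptions about the higher-level API:

    package main

    import (
        "log"

        "github.com/elliotchance/gedcom"
    )

    func main() {
        // Load and decode a GEDCOM file into a Document.
        document, err := gedcom.NewDocumentFromGEDCOMFile("family.ged")
        if err != nil {
            log.Fatal(err)
        }

        // Walk the individuals via the higher-level API and print their names.
        for _, individual := range document.Individuals() {
            log.Println(individual.Name())
        }
    }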
Package env maps environment variables to struct fields and vice versa. It is heavily based on github.com/caarlos0/env, but has different semantics, and also allows the dumping of a struct to environment variables, not just populating a struct from environment variables. Read environment variables with the Get* functions: Populate a struct from the environment by passing it to Bind(): Use tags to specify a variable name or ignore a field: Dump a struct to a map[string]string by passing it to Dump(): Add `env:"..."` tags to your struct fields to bind them to specific environment variables or ignore them. `env:"-"` tells Bind() to ignore the field: Add `env:"VARNAME"` to bind a field to the variable VARNAME: Variables are retrieved via implementors of the Env interface, which Bind() accepts as a second, optional parameter. So you can pass a custom Env implementation to Bind() to populate structs from a source other than environment variables. See _examples/docopt to see a custom Env implementation used to populate a struct from docopt command-line options. You can also customise the map keys used when dumping a struct by passing VarNameFunc to Dump(). This library is released under the MIT Licence.
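A brief sketch of the tag-based binding and dumping flow described above. The struct and variable names are illustrative, the import path is a guess, and the exact Bind and Dump signatures (in particular their error returns) should be checked against the package documentation:

    package main

    import (
        "fmt"
        "log"

        env "github.com/deanishe/env" // assumed import path
    )

    // config binds HOSTNAME and PORT; Secret is ignored by Bind and Dump.
    type config struct {
        Hostname string `env:"HOSTNAME"`
        Port     int    `env:"PORT"`
        Secret   string `env:"-"`
    }

    func main() {
        var c config
        // Populate the struct from the environment.
        if err := env.Bind(&c); err != nil {
            log.Fatal(err)
        }

        // Dump the struct back to a map[string]string.
        vars, err := env.Dump(c)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(vars)
    }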
Package semver handles the parsing and formatting of Semantic Version strings. Create a new semantic version by providing major, minor, and patch versions: The resulting version has no release or build metadata. To extend a version with release or build metadata, use: To format the version as a string in standard notation, use: To parse an existing semantic version string: If you have a partial version string, with some of the parts not specified or a "v" prefix, use Clean to normalize it: A V is comparable, and can be used as a map key; however, the rules of semantic version comparison mean that equivalent semantic versions may not be structurally equal. In particular, build metadata are not considered in the comparison of order or equivalence of versions. Use V.Equiv to check whether versions are semantically equivalent. If using V values as map keys, consider using V.Key.
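As an illustrative sketch of the workflow above: only the V type and the Equiv and Key methods are named in the text, so the New, Parse, and Clean call shapes (including their return values and the import path) are assumptions inferred from the description:

    package main

    import (
        "fmt"
        "log"

        semver "example.com/semver" // hypothetical import path
    )

    func main() {
        // Construct a version from major, minor, and patch numbers.
        v := semver.New(1, 4, 2)
        fmt.Println(v) // "1.4.2" in standard notation

        // Parse an existing semantic version string with build metadata.
        w, err := semver.Parse("1.4.2+build.7")
        if err != nil {
            log.Fatal(err)
        }

        // Build metadata is ignored for ordering and equivalence, so v and w
        // are semantically equivalent even if not structurally equal.
        fmt.Println(v.Equiv(w))

        // Normalize a partial or "v"-prefixed string; Clean is assumed here
        // to return the normalized form.
        fmt.Println(semver.Clean("v1.4"))
    }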