Command u-root builds CPIO archives with the given files and Go commands.
Package gocron: a Golang job scheduling package. An in-process scheduler for periodic jobs that uses the builder pattern for configuration. It lets you run Golang functions periodically at pre-determined intervals using a simple, human-friendly syntax. Inspired by the Ruby module clockwork <https://github.com/tomykaira/clockwork> and the Python package schedule <https://github.com/dbader/schedule>. See also http://adam.heroku.com/past/2010/4/13/rethinking_cron/ and http://adam.heroku.com/past/2010/6/30/replace_cron_with_clockwork/. Copyright 2014 Jason Lyu (jasonlvhit@gmail.com). All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
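A minimal sketch of the builder-pattern scheduling described above, assuming the classic jasonlvhit/gocron API (Every/Day/At/Do and Start); exact names may differ between versions:

```go
package main

import (
	"fmt"

	"github.com/jasonlvhit/gocron"
)

func task() {
	fmt.Println("running periodic task")
}

func main() {
	// Run task every second, and every day at 10:30.
	gocron.Every(1).Second().Do(task)
	gocron.Every(1).Day().At("10:30").Do(task)

	// Start returns a channel; receiving from it blocks until the
	// scheduler is stopped.
	<-gocron.Start()
}
```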
Package logging implements a logging infrastructure for Go. It supports different logging backends like syslog, file and memory. Multiple backends can be utilized with different log levels per backend and logger.
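A short sketch of the per-backend level configuration mentioned above, assuming the op/go-logging API (MustGetLogger, NewLogBackend, AddModuleLevel, SetBackend); names are an assumption, not taken from this document:

```go
package main

import (
	"os"

	logging "github.com/op/go-logging"
)

var log = logging.MustGetLogger("example")

func main() {
	// One backend writing to stderr, wrapped with a formatter and a
	// per-backend level filter.
	backend := logging.NewLogBackend(os.Stderr, "", 0)
	format := logging.MustStringFormatter(`%{time:15:04:05} %{level:.4s} %{message}`)
	formatted := logging.NewBackendFormatter(backend, format)
	leveled := logging.AddModuleLevel(formatted)
	leveled.SetLevel(logging.WARNING, "")
	logging.SetBackend(leveled)

	log.Warning("something unusual happened")
	log.Debug("this line is filtered out by the WARNING level")
}
```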
Package fsnotify implements file system notification.
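A minimal watcher sketch, assuming the fsnotify API (NewWatcher, Add, and the Events/Errors channels); the watched path is illustrative:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer watcher.Close()

	if err := watcher.Add("/tmp"); err != nil { // illustrative path
		log.Fatal(err)
	}

	for {
		select {
		case event, ok := <-watcher.Events:
			if !ok {
				return
			}
			if event.Op&fsnotify.Write == fsnotify.Write {
				log.Println("modified file:", event.Name)
			}
		case err, ok := <-watcher.Errors:
			if !ok {
				return
			}
			log.Println("error:", err)
		}
	}
}
```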
This is a repository containing Go bindings for writing FUSE file systems. Go to https://godoc.org/github.com/hanwen/go-fuse/fs for the in-depth documentation for this library. Older, deprecated APIs are available at https://godoc.org/github.com/hanwen/go-fuse/fuse/pathfs and https://godoc.org/github.com/hanwen/go-fuse/fuse/nodefs.
* This file is part of the KubeVirt project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. * * Copyright 2019 Red Hat, Inc. *
msgp is a code generation tool for creating methods to serialize and de-serialize Go data structures to and from MessagePack. This package is targeted at the `go generate` tool. To use it, include the following directive in a go source file with types requiring source generation: The go generate tool should set the proper environment variables for the generator to execute without any command-line flags. However, the following options are supported, if you need them: For more information, please read README.md, and the wiki at github.com/tinylib/msgp
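A sketch of a source file that opts a type into generation via the go:generate directive the text refers to; the type, fields, and msg tags are illustrative:

```go
package person

//go:generate msgp

// Person gets MessagePack serialization methods (MarshalMsg,
// UnmarshalMsg, and friends) generated for it when `go generate`
// runs the msgp tool on this package.
type Person struct {
	Name string `msg:"name"`
	Age  int    `msg:"age"`
}
```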
This file is created for `dep ensure`.
Package seelog implements logging functionality with flexible dispatching, filtering, and formatting. To create a logger, use one of the following constructors: Example: The "defer" line is important because if you are using asynchronous logger behavior, without this line you may end up losing some messages when you close your application because they are processed in another non-blocking goroutine. To avoid that you explicitly defer flushing all messages before closing. A logger created using one of the LoggerFrom* funcs can be used directly by calling one of the main log funcs. Example: Having loggers as variables is convenient if you are writing your own package with internal logging or if you have several loggers with different options. But for most standalone apps it is more convenient to use package level funcs and vars. There is a package level var 'Current' made for it. You can replace it with another logger using 'ReplaceLogger' and then use package level funcs: The last lines do the same; in this example the 'Current' logger was replaced using a 'ReplaceLogger' call and became equal to the 'logger' variable created from config. This way you are able to use package level funcs instead of passing the logger variable. The main point of seelog is to configure the logger via config files rather than code. The configuration is read by LoggerFrom* funcs. These funcs read xml configuration from different sources and try to create a logger using it. All the configuration features are covered in detail in the official wiki: https://github.com/cihub/seelog/wiki. There are many sections covering different aspects of seelog, but the most important for understanding configs are: After you understand these concepts, check the 'Reference' section on the main wiki page to get the up-to-date list of dispatchers, receivers, formats, and logger types. Here is an example config with all these features: This config represents a logger with adaptive timeout between log messages (check logger types reference) which logs to console, all.log, and errors.log depending on the log level. Its output formats also depend on log level. This logger will only use log level 'debug' and higher (minlevel is set) for all files with names that don't start with 'test'. For files starting with 'test' this logger prohibits all levels below 'error'. Although configuration using code is not recommended, it is sometimes needed and it is possible to do with seelog. Basically, what you need to do to get started is to create constraints, exceptions and a dispatcher tree (same as with config). Most of the New* functions in this package are used to provide such capabilities. Here is an example of configuration in code that demonstrates an async loop logger that logs to a simple split dispatcher with a console receiver using a specified format and is filtered using top-level min-max constraints and one exception for the 'main.go' file. So, this is basically a demonstration of configuration of most of the features: To learn seelog features faster you should check the examples package: https://github.com/cihub/seelog-examples. It contains many example configs and use cases.
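A minimal sketch of the LoggerFromConfigAsFile/ReplaceLogger/defer Flush flow described above; the config file name is illustrative:

```go
package main

import log "github.com/cihub/seelog"

func main() {
	logger, err := log.LoggerFromConfigAsFile("seelog.xml") // illustrative path
	if err != nil {
		panic(err)
	}
	// Replace the package-level 'Current' logger so the package-level
	// funcs (log.Info, log.Error, ...) use the configured logger.
	log.ReplaceLogger(logger)

	// Important with asynchronous logger types: flush pending messages
	// before the process exits.
	defer log.Flush()

	log.Info("hello from seelog")
}
```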
Sync files and directories to and from local and remote object stores Nick Craig-Wood <nick@craig-wood.com>
Package update provides functionality to implement secure, self-updating Go programs (or other single-file targets). For complete updating solutions please see Equinox (https://equinox.io) and go-tuf (https://github.com/flynn/go-tuf). This example shows how to update a program remotely from a URL. Go binaries can often be large. It can be advantageous to only ship a binary patch to a client instead of the complete program text of a new version. This example shows how to update a program with a bsdiff binary patch. Other patch formats may be applied by implementing the Patcher interface. Updating executable code on a computer can be a dangerous operation unless you take the appropriate steps to guarantee the authenticity of the new code. While checksum verification is important, it should always be combined with signature verification (next section) to guarantee that the code came from a trusted party. go-update validates SHA256 checksums by default, but this is pluggable via the Hash property on the Options struct. This example shows how to guarantee that the newly-updated binary is verified to have an appropriate checksum (that was otherwise retrieved via a secure channel) specified as a hex string. Cryptographic verification of new code from an update is an extremely important way to guarantee the security and integrity of your updates. Verification is performed by validating the signature of a hash of the new file. This means nothing changes if you apply your update with a patch. This example shows how to add signature verification to your updates. To make all of this work an application distributor must first create a public/private key pair and embed the public key into their application. When they issue a new release, the issuer must sign the new executable file with the private key and distribute the signature along with the update. In order to update a Go application with go-update, you must distribute it as a single executable. This is often easy, but some applications require static assets (like HTML and CSS asset files or TLS certificates). In order to update applications like these, you'll want to make sure to embed those asset files into the distributed binary with a tool like go-bindata (my favorite): https://github.com/jteeuwen/go-bindata. Mechanisms and protocols for determining whether an update should be applied and, if so, which one are out of scope for this package. Please consult go-tuf (https://github.com/flynn/go-tuf) or Equinox (https://equinox.io) for more complete solutions. go-update only works for self-updating applications that are distributed as a single binary, i.e. applications that do not have additional assets or dependency files. Updating applications that are distributed as multiple on-disk files is out of scope, although this may change in future versions of this library.
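A minimal sketch of the remote-update-from-a-URL flow described above, using the package's Apply function; the URL is illustrative, and a real deployment would also set Checksum/Signature options as discussed:

```go
package main

import (
	"net/http"

	update "github.com/inconshreveable/go-update"
)

// doUpdate downloads a new binary from url and applies it over the
// currently running executable.
func doUpdate(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	// Options may also carry a Checksum, a Signature/Verifier, or a
	// Patcher for bsdiff-style binary patches.
	return update.Apply(resp.Body, update.Options{})
}

func main() {
	_ = doUpdate("https://example.com/myapp/latest") // illustrative URL
}
```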
Package otlpjsonfilereceiver implements a receiver that can be used by the OpenTelemetry Collector to receive logs, traces, and metrics from files. See https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/file-exporter.md#json-file-serialization
getter is a package for downloading files or directories from a variety of protocols. getter is unique in its ability to download both directories and files. It also detects certain source strings to be protocol-specific URLs. For example, "github.com/hashicorp/go-getter" would turn into a Git URL and use the Git protocol. Protocols and detectors are extensible. To get started, see Client.
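A minimal sketch of the detection behavior described above, assuming go-getter's package-level GetAny helper; the destination directory is illustrative:

```go
package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// The source string is detected as a GitHub repository, turned into
	// a Git URL, and cloned into ./vendor-src using the Git protocol.
	src := "github.com/hashicorp/go-getter"
	dst := "./vendor-src" // illustrative destination

	if err := getter.GetAny(dst, src); err != nil {
		log.Fatal(err)
	}
}
```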
This is an example of how to extend a Go application with an addition function defined in WebAssembly. Since addWasm was compiled with TinyGo's `wasi` target, we need to configure WASI host imports. A complete project that does the same as this is available here: https://github.com/tetratelabs/wazero/tree/main/examples/basic This is a basic example of using the file system compilation cache via wazero.NewCompilationCacheWithDir. The main goal is to show how it is configured. This example shows how to configure an embed.FS. This is a basic example of retrieving custom sections using RuntimeConfig.WithCustomSections.
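A compressed sketch of the add-function example described above, assuming the wazero v1 API (NewRuntime, wasi_snapshot_preview1.MustInstantiate, Instantiate, ExportedFunction); the wasm file path is illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/tetratelabs/wazero"
	"github.com/tetratelabs/wazero/imports/wasi_snapshot_preview1"
)

func main() {
	ctx := context.Background()

	r := wazero.NewRuntime(ctx)
	defer r.Close(ctx)

	// The module was compiled with TinyGo's wasi target, so WASI host
	// imports must be instantiated before the module itself.
	wasi_snapshot_preview1.MustInstantiate(ctx, r)

	addWasm, err := os.ReadFile("add.wasm") // illustrative path
	if err != nil {
		log.Fatal(err)
	}

	mod, err := r.Instantiate(ctx, addWasm)
	if err != nil {
		log.Fatal(err)
	}

	results, err := mod.ExportedFunction("add").Call(ctx, 1, 2)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(results[0]) // expected: 3
}
```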
Package fuse enables writing FUSE file systems on Linux and FreeBSD. There are two approaches to writing a FUSE file system. The first is to speak the low-level message protocol, reading from a Conn using ReadRequest and writing using the various Respond methods. This approach is closest to the actual interaction with the kernel and can be the simplest one in contexts such as protocol translators. Servers of synthesized file systems tend to share common bookkeeping abstracted away by the second approach, which is to call fs.Serve to serve the FUSE protocol using an implementation of the service methods in the interfaces FS* (file system), Node* (file or directory), and Handle* (opened file or directory). There are a daunting number of such methods that can be written, but few are required. The specific methods are described in the documentation for those interfaces. The examples/hellofs subdirectory contains a simple illustration of the fs.Serve approach. The required and optional methods for the FS, Node, and Handle interfaces have the general form where Op is the name of a FUSE operation. Op reads request parameters from req and writes results to resp. An operation whose only result is the error result omits the resp parameter. Multiple goroutines may call service methods simultaneously; the methods being called are responsible for appropriate synchronization. The operation must not hold on to the request or response, including any []byte fields such as WriteRequest.Data or SetxattrRequest.Xattr. Operations can return errors. The FUSE interface can only communicate POSIX errno error numbers to file system clients; the error message itself is not visible to file system clients. The returned error can implement ErrorNumber to control the errno returned. Without ErrorNumber, a generic errno (EIO) is returned. Error messages will be visible in the debug log as part of the response. In some file systems, some operations may take an undetermined amount of time. For example, a Read waiting for a network message or a matching Write might wait indefinitely. If the request is cancelled and no longer needed, the context will be cancelled. Blocking operations should select on a receive from ctx.Done() and attempt to abort the operation early if the receive succeeds (meaning the channel is closed). To indicate that the operation failed because it was aborted, return syscall.EINTR. If an operation does not block for an indefinite amount of time, supporting cancellation is not necessary. All request types embed a Header, meaning that the method can inspect req.Pid, req.Uid, and req.Gid as necessary to implement permission checking. The kernel FUSE layer normally prevents other users from accessing the FUSE file system (to change this, see AllowOther), but does not enforce access modes (to change this, see DefaultPermissions). Behavior and metadata of the mounted file system can be changed by passing MountOption values to Mount.
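A skeletal sketch of the fs.Serve approach described above: a file system that exposes only an empty root directory. The mount point is illustrative, and a real file system would implement more Node/Handle methods:

```go
package main

import (
	"context"
	"log"
	"os"

	"bazil.org/fuse"
	"bazil.org/fuse/fs"
)

// FS implements the root of the file system.
type FS struct{}

func (FS) Root() (fs.Node, error) { return Dir{}, nil }

// Dir is an empty, read-only root directory.
type Dir struct{}

func (Dir) Attr(ctx context.Context, a *fuse.Attr) error {
	a.Inode = 1
	a.Mode = os.ModeDir | 0o555
	return nil
}

func main() {
	c, err := fuse.Mount("/mnt/example", fuse.FSName("example")) // illustrative mount point
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// Serve dispatches kernel requests to the FS/Node/Handle methods.
	if err := fs.Serve(c, FS{}); err != nil {
		log.Fatal(err)
	}
}
```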
Command godep helps build packages reproducibly by fixing their dependencies. Save currently-used dependencies to the Godeps file, then build the project using the saved dependencies.
Copyright 2013 The Go Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
Package cloudtrail provides the API client, operations, and parameter types for AWS CloudTrail. This is the CloudTrail API Reference. It provides descriptions of actions, data types, common parameters, and common errors for CloudTrail. CloudTrail is a web service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the Amazon Web Services API call, the source IP address, the request parameters, and the response elements returned by the service. As an alternative to the API, you can use one of the Amazon Web Services SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide programmatic access to CloudTrail. For example, the SDKs handle cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools to Build on Amazon Web Services. See the CloudTrail User Guide for information about the data that is included with each Amazon Web Services API call listed in the log files.
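A minimal sketch of constructing this client with the AWS SDK for Go v2 and making one call; the same NewFromConfig pattern applies to the other Amazon Web Services service packages described in this document (CloudFormation, EFS, EMR, CodeCommit, Organizations). The call shown and its handling are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudtrail"
)

func main() {
	ctx := context.Background()

	// Load credentials and region from the environment/shared config.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	client := cloudtrail.NewFromConfig(cfg)

	out, err := client.DescribeTrails(ctx, &cloudtrail.DescribeTrailsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, t := range out.TrailList {
		fmt.Println(*t.Name)
	}
}
```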
Package cloudformation provides the API client, operations, and parameter types for AWS CloudFormation. CloudFormation allows you to create and manage Amazon Web Services infrastructure deployments predictably and repeatedly. You can use CloudFormation to leverage Amazon Web Services products, such as Amazon Elastic Compute Cloud, Amazon Elastic Block Store, Amazon Simple Notification Service, Elastic Load Balancing, and Amazon EC2 Auto Scaling to build highly reliable, highly scalable, cost-effective applications without creating or configuring the underlying Amazon Web Services infrastructure. With CloudFormation, you declare all your resources and dependencies in a template file. The template defines a collection of resources as a single unit called a stack. CloudFormation creates and deletes all member resources of the stack together and manages all dependencies between the resources for you. For more information about CloudFormation, see the CloudFormation product page. CloudFormation makes use of other Amazon Web Services products. If you need additional technical information about a specific Amazon Web Services product, you can find the product's technical documentation at docs.aws.amazon.com.
Package gendoc is a protoc plugin for generating documentation from your proto files. Typically this will not be used as a library, though nothing prevents that. Normally it'll be invoked by passing `--doc_out` and `--doc_opt` values to protoc. Example: generate HTML documentation Example: exclude patterns Example: use a custom template For more details, check out the README at https://github.com/pseudomuto/protoc-gen-doc
Package dbus implements bindings to the D-Bus message bus system. To use the message bus API, you first need to connect to a bus (usually the session or system bus). The acquired connection then can be used to call methods on remote objects and emit or receive signals. Using the Export method, you can arrange D-Bus method calls to be directly translated to method calls on a Go value. For outgoing messages, Go types are automatically converted to the corresponding D-Bus types. See the official specification at https://dbus.freedesktop.org/doc/dbus-specification.html#type-system for more information on the D-Bus type system. The following types are directly encoded as their respective D-Bus equivalents: Slices and arrays encode as ARRAYs of their element type. Maps encode as DICTs, provided that their key type can be used as a key for a DICT. Structs other than Variant and Signature encode as a STRUCT containing their exported fields in order. Fields whose tags contain `dbus:"-"` and unexported fields will be skipped. Pointers encode as the value they point to. Types convertible to one of the base types above will be mapped as the base type. Trying to encode any other type or a slice, map or struct containing an unsupported type will result in an InvalidTypeError. For incoming messages, the inverse of these rules is used, with the exception of STRUCTs. Incoming STRUCTS are represented as a slice of empty interfaces containing the struct fields in the correct order. The Store function can be used to convert such values to Go structs. Handling Unix file descriptors deserves special mention. To use them, you should first check that they are supported on a connection by calling SupportsUnixFDs. If it returns true, all methods of Connection will translate messages containing UnixFDs to messages that are accompanied by the given file descriptors with the UnixFD values being substituted by the correct indices. Similarly, the indices of incoming messages are automatically resolved. It shouldn't be necessary to use UnixFDIndex.
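A minimal sketch of connecting to the session bus, calling a method on a remote object, and storing the reply into a Go value, following the godbus README pattern:

```go
package main

import (
	"fmt"
	"log"

	"github.com/godbus/dbus/v5"
)

func main() {
	conn, err := dbus.SessionBus()
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Call a method on a remote object and store the reply into names.
	var names []string
	obj := conn.Object("org.freedesktop.DBus", "/org/freedesktop/DBus")
	if err := obj.Call("org.freedesktop.DBus.ListNames", 0).Store(&names); err != nil {
		log.Fatal(err)
	}

	for _, name := range names {
		fmt.Println(name)
	}
}
```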
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users. In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers. The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations. Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details. The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be removed in a future version. In particular we have deprecated the following fields in _samlsp.Options_: - `Logger` - This was used to emit errors while validating, which is an anti-pattern. - `IDPMetadataURL` - Instead use `FetchMetadata()` - `HTTPClient` - Instead pass httpClient to FetchMetadata - `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider - `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users. ```golang package main import ( ) ``` Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this: We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
```golang package main import ( ) ``` Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like: Navigate to https://samltest.id/upload.php and upload the file you fetched. Now you should be able to authenticate. The flow should look like this: 1. You browse to `localhost:8000/hello` 1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO` 1. samltest.id prompts you for a username and password. 1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled. 1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`. 1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served. Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider. The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests. The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.) The SAML specification is a collection of PDFs (sadly): - [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types. - [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play. - [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows. - [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol. [SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers. Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
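Putting the walkthrough above together, a sketch of the samlsp middleware wiring following the package README; the key/certificate file names and the localhost/samltest.id URLs mirror the walkthrough and are illustrative:

```go
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", samlsp.AttributeFromContext(r.Context(), "displayName"))
}

func main() {
	// Service provider key pair, generated out of band (e.g. with openssl).
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key") // illustrative file names
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	// Fetch the IDP metadata from samltest.id at startup.
	idpMetadataURL, _ := url.Parse("https://samltest.id/saml/idp")
	idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, _ := url.Parse("http://localhost:8000")
	samlSP, _ := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})

	// Require a valid SAML session for /hello; serve the SAML endpoints under /saml/.
	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.Handle("/saml/", samlSP)
	if err := http.ListenAndServe(":8000", nil); err != nil {
		panic(err)
	}
}
```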
Package efs provides the API client, operations, and parameter types for Amazon Elastic File System. Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 Linux and Mac instances in the Amazon Web Services Cloud. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so that your applications have the storage they need, when they need it. For more information, see the Amazon Elastic File System API Reference and the Amazon Elastic File System User Guide.
Package dbus implements bindings to the D-Bus message bus system. To use the message bus API, you first need to connect to a bus (usually the session or system bus). The acquired connection then can be used to call methods on remote objects and emit or receive signals. Using the Export method, you can arrange D-Bus method calls to be directly translated to method calls on a Go value. For outgoing messages, Go types are automatically converted to the corresponding D-Bus types. The following types are directly encoded as their respective D-Bus equivalents: Slices and arrays encode as ARRAYs of their element type. Maps encode as DICTs, provided that their key type can be used as a key for a DICT. Structs other than Variant and Signature encode as a STRUCT containing their exported fields. Fields whose tags contain `dbus:"-"` and unexported fields will be skipped. Pointers encode as the value they point to. Types convertible to one of the base types above will be mapped as the base type. Trying to encode any other type or a slice, map or struct containing an unsupported type will result in an InvalidTypeError. For incoming messages, the inverse of these rules is used, with the exception of STRUCTs. Incoming STRUCTS are represented as a slice of empty interfaces containing the struct fields in the correct order. The Store function can be used to convert such values to Go structs. Handling Unix file descriptors deserves special mention. To use them, you should first check that they are supported on a connection by calling SupportsUnixFDs. If it returns true, all methods of Connection will translate messages containing UnixFDs to messages that are accompanied by the given file descriptors with the UnixFD values being substituted by the correct indices. Similarly, the indices of incoming messages are automatically resolved. It shouldn't be necessary to use UnixFDIndex.
Package script aims to make it easy to write shell-type scripts in Go, for general system administration purposes: reading files, counting lines, matching strings, and so on.
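A one-pipeline sketch of the style described above, assuming the bitfield/script API (File, Match, CountLines); the log file name is illustrative:

```go
package main

import (
	"fmt"

	"github.com/bitfield/script"
)

func main() {
	// Count the lines in a log file that mention "Error",
	// roughly: grep Error app.log | wc -l
	numErrors, err := script.File("app.log").Match("Error").CountLines() // illustrative path
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(numErrors)
}
```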
Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Package scrubber implements support for cleaning sensitive information out of strings and files. This module's API is not yet stable, and may change incompatibly from version to version.
Package codecommit provides the API client, operations, and parameter types for AWS CodeCommit. This is the CodeCommit API Reference. This reference provides descriptions of the operations and data types for CodeCommit API along with usage examples. You can use the CodeCommit API to work with the following objects:

- Repositories, by calling the following: BatchGetRepositories, CreateRepository, DeleteRepository, GetRepository, ListRepositories, UpdateRepositoryDescription, UpdateRepositoryEncryptionKey, UpdateRepositoryName
- Branches, by calling the following: CreateBranch, DeleteBranch, GetBranch, ListBranches, UpdateDefaultBranch
- Files, by calling the following: DeleteFile, GetBlob, GetFile, GetFolder, ListFileCommitHistory, PutFile
- Commits, by calling the following: BatchGetCommits, CreateCommit, GetCommit, GetDifferences
- Merges, by calling the following: BatchDescribeMergeConflicts, CreateUnreferencedMergeCommit, DescribeMergeConflicts, GetMergeCommit, GetMergeConflicts, GetMergeOptions, MergeBranchesByFastForward, MergeBranchesBySquash, MergeBranchesByThreeWay
- Pull requests, by calling the following: CreatePullRequest, CreatePullRequestApprovalRule, DeletePullRequestApprovalRule, DescribePullRequestEvents, EvaluatePullRequestApprovalRules, GetCommentsForPullRequest, GetPullRequest, GetPullRequestApprovalStates, GetPullRequestOverrideState, ListPullRequests, MergePullRequestByFastForward, MergePullRequestBySquash, MergePullRequestByThreeWay, OverridePullRequestApprovalRules, PostCommentForPullRequest, UpdatePullRequestApprovalRuleContent, UpdatePullRequestApprovalState, UpdatePullRequestDescription, UpdatePullRequestStatus, UpdatePullRequestTitle
- Approval rule templates, by calling the following: AssociateApprovalRuleTemplateWithRepository, BatchAssociateApprovalRuleTemplateWithRepositories, BatchDisassociateApprovalRuleTemplateFromRepositories, CreateApprovalRuleTemplate, DeleteApprovalRuleTemplate, DisassociateApprovalRuleTemplateFromRepository, GetApprovalRuleTemplate, ListApprovalRuleTemplates, ListAssociatedApprovalRuleTemplatesForRepository, ListRepositoriesForApprovalRuleTemplate, UpdateApprovalRuleTemplateDescription, UpdateApprovalRuleTemplateName, UpdateApprovalRuleTemplateContent
- Comments in a repository, by calling the following: DeleteCommentContent, GetComment, GetCommentReactions, GetCommentsForComparedCommit, PostCommentForComparedCommit, PostCommentReply, PutCommentReaction, UpdateComment
- Tags used to tag resources in CodeCommit (not Git tags), by calling the following: ListTagsForResource, TagResource, UntagResource
- Triggers, by calling the following: GetRepositoryTriggers, PutRepositoryTriggers, TestRepositoryTriggers

For information about how to use CodeCommit, see the CodeCommit User Guide.
Package goavro is a library that encodes and decodes Avro data. Goavro provides methods to encode native Go data into both binary and textual JSON Avro data, and methods to decode both binary and textual JSON Avro data to native Go data. Goavro also provides methods to read and write Object Container File (OCF) formatted files, and the library contains example programs to read and write OCF files. Usage Example:
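A short usage sketch, assuming the goavro v2 API (NewCodec, BinaryFromNative, NativeFromBinary); the schema and record are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/linkedin/goavro/v2"
)

func main() {
	codec, err := goavro.NewCodec(`{
	    "type": "record",
	    "name": "User",
	    "fields": [
	        {"name": "name", "type": "string"},
	        {"name": "age", "type": "long"}
	    ]}`)
	if err != nil {
		log.Fatal(err)
	}

	// Native Go data -> binary Avro.
	native := map[string]interface{}{"name": "Ada", "age": 36}
	binary, err := codec.BinaryFromNative(nil, native)
	if err != nil {
		log.Fatal(err)
	}

	// Binary Avro -> native Go data.
	decoded, _, err := codec.NativeFromBinary(binary)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(decoded)
}
```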
Package secp256k1 implements optimized secp256k1 elliptic curve operations in pure Go. This package provides an optimized pure Go implementation of elliptic curve cryptography operations over the secp256k1 curve as well as data structures and functions for working with public and private secp256k1 keys. See https://www.secg.org/sec2-v2.pdf for details on the standard. In addition, sub packages are provided to produce, verify, parse, and serialize ECDSA signatures and EC-Schnorr-DCRv0 (a custom Schnorr-based signature scheme specific to Decred) signatures. See the README.md files in the relevant sub packages for more details about those aspects. An overview of the features provided by this package is as follows: It also provides an implementation of the Go standard library crypto/elliptic Curve interface via the S256 function so that it may be used with other packages in the standard library such as crypto/tls, crypto/x509, and crypto/ecdsa. However, in the case of ECDSA, it is highly recommended to use the ecdsa sub package of this package instead since it is optimized specifically for secp256k1 and is significantly faster as a result. Although this package was primarily written for dcrd, it has intentionally been designed so it can be used as a standalone package for any projects needing to use optimized secp256k1 elliptic curve cryptography. Finally, a comprehensive suite of tests is provided to ensure a high level of quality assurance. At the time of this writing, the primary public key cryptography in widespread use on the Decred network used to secure coins is based on elliptic curves defined by the secp256k1 domain parameters. This example demonstrates use of GenerateSharedSecret to encrypt a message for a recipient's public key, and subsequently decrypt the message using the recipient's private key.
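A minimal key-handling sketch, assuming the decred secp256k1 v4 API (GeneratePrivateKey, PubKey, GenerateSharedSecret); the ECDH call shown is the building block of the encrypt/decrypt example mentioned above, not the full scheme:

```go
package main

import (
	"fmt"
	"log"

	"github.com/decred/dcrd/dcrec/secp256k1/v4"
)

func main() {
	// Generate a secp256k1 private key and derive its public key.
	priv, err := secp256k1.GeneratePrivateKey()
	if err != nil {
		log.Fatal(err)
	}
	pub := priv.PubKey()
	fmt.Printf("pubkey (compressed): %x\n", pub.SerializeCompressed())

	// GenerateSharedSecret performs ECDH between a private and a public
	// key; here both halves come from the same key pair for brevity.
	secret := secp256k1.GenerateSharedSecret(priv, pub)
	fmt.Printf("shared secret: %x\n", secret)
}
```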
Package gokrb5 provides a Kerberos 5 implementation for Go. This is a pure Go implementation and does not have dependencies on native libraries. Features include: HTTP handler wrapper implements SPNEGO Kerberos authentication. HTTP handler wrapper decodes Microsoft AD PAC authorization data. Client that can authenticate to an SPNEGO Kerberos authenticated web service. Ability to change client's password. Kerberos libraries for custom integration. Parsing Keytab files. Parsing krb5.conf files.
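A rough client-login sketch, assuming the gokrb5 v8 sub-package layout (client, config, keytab) and constructor names, which may differ between major versions; paths, principal, and realm are illustrative:

```go
package main

import (
	"log"

	"github.com/jcmturner/gokrb5/v8/client"
	"github.com/jcmturner/gokrb5/v8/config"
	"github.com/jcmturner/gokrb5/v8/keytab"
)

func main() {
	// Parse krb5.conf and a keytab file (illustrative paths).
	cfg, err := config.Load("/etc/krb5.conf")
	if err != nil {
		log.Fatal(err)
	}
	kt, err := keytab.Load("/path/to/service.keytab")
	if err != nil {
		log.Fatal(err)
	}

	// Build a client from the keytab and obtain a TGT.
	cl := client.NewWithKeytab("username", "EXAMPLE.COM", kt, cfg)
	if err := cl.Login(); err != nil {
		log.Fatal(err)
	}
	defer cl.Destroy()
}
```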
Package emr provides the API client, operations, and parameter types for Amazon EMR. Amazon EMR is a web service that makes it easier to process large amounts of data efficiently. Amazon EMR uses Hadoop processing combined with several Amazon Web Services services to do tasks such as web indexing, data mining, log file analysis, machine learning, scientific simulation, and data warehouse management.
Package elasticsearch provides a Go client for Elasticsearch. Create the client with the NewDefaultClient function: The ELASTICSEARCH_URL environment variable is used instead of the default URL, when set. Use a comma to separate multiple URLs. To configure the client, pass a Config object to the NewClient function: See the elasticsearch_integration_test.go file and the _examples folder for more information. Call the Elasticsearch APIs by invoking the corresponding methods on the client: See the github.com/elastic/go-elasticsearch/esapi package for more information and examples.
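A minimal sketch of the NewDefaultClient flow described above, following the go-elasticsearch README (the v8 import path is an assumption):

```go
package main

import (
	"log"

	"github.com/elastic/go-elasticsearch/v8"
)

func main() {
	// NewDefaultClient uses http://localhost:9200, or the addresses in
	// the ELASTICSEARCH_URL environment variable when it is set.
	es, err := elasticsearch.NewDefaultClient()
	if err != nil {
		log.Fatalf("error creating the client: %s", err)
	}

	res, err := es.Info()
	if err != nil {
		log.Fatalf("error getting response: %s", err)
	}
	defer res.Body.Close()
	log.Println(res)
}
```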
Package mmap allows mapping files into memory. It tries to provide a simple, reasonably portable interface, but doesn't go out of its way to abstract away every little platform detail. This specifically means:
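A minimal read-only mapping sketch, assuming the edsrzf/mmap-go API that this description matches (Map, RDONLY, Unmap); the file path is illustrative:

```go
package main

import (
	"fmt"
	"log"
	"os"

	mmap "github.com/edsrzf/mmap-go"
)

func main() {
	f, err := os.Open("data.bin") // illustrative path; must be non-empty
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Map the whole file read-only into memory; the mapping behaves
	// like a []byte.
	m, err := mmap.Map(f, mmap.RDONLY, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer m.Unmap()

	fmt.Printf("mapped %d bytes, first byte: %#x\n", len(m), m[0])
}
```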
nolint // it's not a Go code file
Package organizations provides the API client, operations, and parameter types for AWS Organizations. Organizations is a web service that enables you to consolidate your multiple Amazon Web Services accounts into an organization and centrally manage your accounts and their resources. This guide provides descriptions of the Organizations operations. For more information about using this service, see the Organizations User Guide. We welcome your feedback. Send your comments to feedback-awsorganizations@amazon.com or post your feedback and questions in the Organizations support forum. For more information about the Amazon Web Services support forums, see Forums Help. For the current release of Organizations, specify the us-east-1 region for all Amazon Web Services API and CLI calls made from the commercial Amazon Web Services Regions outside of China. If calling from one of the Amazon Web Services Regions in China, then specify cn-northwest-1. You can do this in the CLI by using these parameters and commands:

- --endpoint-url https://organizations.us-east-1.amazonaws.com (from commercial Amazon Web Services Regions outside of China) or --endpoint-url https://organizations.cn-northwest-1.amazonaws.com.cn (from Amazon Web Services Regions in China)
- aws configure set default.region us-east-1 (from commercial Amazon Web Services Regions outside of China) or aws configure set default.region cn-northwest-1 (from Amazon Web Services Regions in China)
- --region us-east-1 (from commercial Amazon Web Services Regions outside of China) or --region cn-northwest-1 (from Amazon Web Services Regions in China)

Organizations supports CloudTrail, a service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. By using information collected by CloudTrail, you can determine which requests the Organizations service received, who made the request and when, and so on. For more about Organizations and its support for CloudTrail, see Logging Organizations API calls with CloudTrail in the Organizations User Guide. To learn more about CloudTrail, including how to turn it on and find your log files, see the CloudTrail User Guide.
Package parquetexporter implements an exporter that writes data to Parquet files. Deprecated: this component is no longer developed and will be removed in the next release.
Package klog implements logging analogous to the Google-internal C++ INFO/ERROR/V setup. It provides functions Info, Warning, Error, Fatal, plus formatting variants such as Infof. It also provides V-style logging controlled by the -v and -vmodule=file=2 flags. Basic examples: See the documentation for the V function for an explanation of these examples: Log output is buffered and written periodically using Flush. Programs should call Flush before exiting to guarantee all log output is written. By default, all log statements write to standard error. This package provides several flags that modify this behavior. As a result, flag.Parse must be called before any logging is done.
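A minimal sketch of the flag registration and Flush requirements described above, assuming the klog/v2 API:

```go
package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags (-v, -vmodule, ...) and parse them before
	// any logging is done.
	klog.InitFlags(nil)
	flag.Parse()
	defer klog.Flush()

	klog.Info("starting up")
	klog.Warningf("disk space low: %d%%", 7)
	if klog.V(2).Enabled() {
		klog.Info("verbose detail only shown at -v=2 or higher")
	}
}
```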
Fresh is a command line tool that builds and (re)starts your web application every time you save a Go or template file. If the web framework you are using supports the Fresh runner, it will show build errors in your browser. It currently works with Traffic (https://github.com/pilu/traffic), Martini (https://github.com/codegangsta/martini) and gocraft/web (https://github.com/gocraft/web). Fresh will watch for file events, and every time you create/modify/delete a file it will build and restart the application. If `go build` returns an error, it will log it in the tmp folder. Traffic (https://github.com/pilu/traffic) already has a middleware that shows the content of that file if it is present. This middleware is automatically added if you run a Traffic web app in dev mode with Fresh.
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package requires you to log only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values, log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json. This package implements three Handlers that add debugging information to the context, CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context. This will output a line that looks like: Here's an example that logs the call stack rather than just the call site. This will output a line that looks like: The "%+v" format instructs the handler to include the path of the source file relative to the compile time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler.
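A short sketch of the key/value logging and child-logger context described above, following the log15 README (the import path is an assumption):

```go
package main

import (
	log "github.com/inconshreveable/log15"
)

func main() {
	// The message is the implicit 'msg' key; the rest are alternating
	// key/value pairs.
	log.Info("page accessed", "path", "/org/users", "user_id", 9001)

	// A child logger whose context is included on every line it writes.
	srvlog := log.New("module", "app/server")
	srvlog.Warn("abnormal conn rate", "rate", 0.500, "low", 0.100, "high", 0.800)
}
```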
Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it would not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. For example: If this message is not logged for any reason (like logging at the Error level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: Only now, even after a player has died, the logger will still report they are alive because the logging context is evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you: - Any log record containing an error will still be printed with the error explained to you as part of the log record. - Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values. Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source like if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs.
Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys." They're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure. They are all available on the Must object. For example: All of the following excellent projects inspired the design of this library: code.google.com/p/log4go github.com/op/go-logging github.com/technoweenie/grohl github.com/Sirupsen/logrus github.com/kr/logfmt github.com/spacemonkeygo/spacelog golang's stdlib, notably io and net/http https://xkcd.com/927/
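Returning to the log.Lazy mechanism described earlier, a minimal sketch of deferring context evaluation to each log message (the Player type and field names are illustrative, mirroring the example in the text):

```go
package main

import (
	log "github.com/inconshreveable/log15"
)

type Player struct {
	Name  string
	Alive bool
	log.Logger
}

func NewPlayer(name string) *Player {
	p := &Player{Name: name, Alive: true}
	// Wrapping the liveness check in log.Lazy defers its evaluation to
	// the moment each record is written, so the logged value always
	// reflects the player's current state.
	p.Logger = log.New("name", name, "alive", log.Lazy{Fn: func() bool { return p.Alive }})
	return p
}

func main() {
	p := NewPlayer("mercutio")
	p.Info("created player")
	p.Alive = false
	p.Info("player status changed") // now logs alive=false
}
```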
Package ini provides INI file read and write functionality in Go.
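A short read sketch, assuming the go-ini API (Load, Section, Key); the file name, section, and keys are illustrative:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/ini.v1"
)

func main() {
	cfg, err := ini.Load("my.ini") // illustrative file name
	if err != nil {
		log.Fatal(err)
	}

	// Read keys from a named section, with a typed fallback value.
	host := cfg.Section("server").Key("host").String()
	port := cfg.Section("server").Key("port").MustInt(8080)
	fmt.Println(host, port)
}
```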
Package mimetype uses magic number signatures to detect the MIME type of a file. File formats are stored in a hierarchy with application/octet-stream at its root. For example, the hierarchy for HTML format is application/octet-stream -> text/plain -> text/html. Pure io.Readers (meaning those without a Seek method) cannot be read twice. This means that once DetectReader has been called on an io.Reader, that reader is missing the bytes representing the header of the file. To detect the MIME type and then reuse the input, use a buffer, io.TeeReader, and io.MultiReader to create a new reader containing the original, unaltered data. If the input is an io.ReadSeeker instead, call input.Seek(0, io.SeekStart) before reusing it. Use Extend to add support for a file format which is not detected by mimetype. https://www.garykessler.net/library/file_sigs.html and https://github.com/file/file/tree/master/magic/Magdir have signatures for a multitude of file formats. Considering the definition of a binary file as "a computer file that is not a text file", they can be differentiated by searching for the text/plain MIME in their MIME hierarchy.
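A minimal sketch of detection plus the hierarchy walk described above (searching for text/plain to decide binary vs. text); the file path is illustrative:

```go
package main

import (
	"fmt"
	"log"

	"github.com/gabriel-vasile/mimetype"
)

func main() {
	mtype, err := mimetype.DetectFile("report.pdf") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(mtype.String(), mtype.Extension())

	// Walk the detected type's hierarchy: if text/plain appears anywhere
	// on the way to the application/octet-stream root, treat it as text.
	isBinary := true
	for t := mtype; t != nil; t = t.Parent() {
		if t.Is("text/plain") {
			isBinary = false
		}
	}
	fmt.Println("binary:", isBinary)
}
```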
Package godog is the official Cucumber BDD framework for Golang; it merges specification and test documentation into one cohesive whole. Godog does not intervene with the standard "go test" command and its behavior. You can leverage both frameworks to functionally test your application while maintaining all test related source code in *_test.go files. Godog acts similarly to the go test command. It uses the go compiler and linker tool in order to produce a test executable. Godog contexts need to be exported, the same as Test functions for go test. For example, imagine you're about to create the famous UNIX ls command. Before you begin, you describe how the feature should work, see the example below. Example: Now, wouldn't it be cool if something could read this sentence and use it to actually run a test against the ls command? Hey, that's exactly what this package does! As you'll see, Godog is easy to learn, quick to use, and will put the fun back into tests. Godog was inspired by Behat and Cucumber; the above description is taken from their documentation.
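A minimal sketch of an exported scenario context wired into go test, assuming the current godog API (ScenarioContext, TestSuite); the step text, step function, and feature path are illustrative, and the file would normally live in a *_test.go file:

```go
package main

import (
	"testing"

	"github.com/cucumber/godog"
)

var eaten int

// iEat is a step function; the (\d+) capture is passed as the int argument.
func iEat(count int) error {
	eaten += count
	return nil
}

// InitializeScenario is exported, like Test functions for go test, and
// wires Gherkin steps to Go step functions.
func InitializeScenario(sc *godog.ScenarioContext) {
	sc.Step(`^I eat (\d+) godogs$`, iEat)
}

// TestFeatures runs the feature files through godog from `go test`.
func TestFeatures(t *testing.T) {
	suite := godog.TestSuite{
		ScenarioInitializer: InitializeScenario,
		Options:             &godog.Options{Format: "pretty", Paths: []string{"features"}}, // illustrative path
	}
	if suite.Run() != 0 {
		t.Fatal("non-zero status returned, failed to run feature tests")
	}
}
```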
Package workerpool queues work to a limited number of goroutines. The purpose of the worker pool is to limit the concurrency of tasks executed by the workers. This is useful when performing tasks that require sufficient resources (CPU, memory, etc.), and running too many tasks at the same time would exhaust resources. A task is a function submitted to the worker pool for execution. Submitting tasks to this worker pool will not block, regardless of the number of tasks. Incoming tasks are immediately dispatched to an available worker. If no worker is immediately available, or there are already tasks waiting for an available worker, then the task is put on a waiting queue to wait for an available worker. The intent of the worker pool is to limit the concurrency of task execution, not limit the number of tasks queued to be executed. Therefore, this unbounded input of tasks is acceptable as the tasks cannot be discarded. If the number of inbound tasks is too many to even queue for pending processing, then the solution is outside the scope of workerpool. It should be solved by distributing load over multiple systems, and/or storing input for pending processing in intermediate storage such as a database, file system, distributed message queue, etc. This worker pool uses a single dispatcher goroutine to read tasks from the input task queue and dispatch them to worker goroutines. This allows for a small input channel, and lets the dispatcher queue as many tasks as are submitted when there are no available workers. Additionally, the dispatcher can adjust the number of workers as appropriate for the work load, without having to utilize locked counters and checks incurred on task submission. When no tasks have been submitted for a period of time, a worker is removed by the dispatcher. This is done until there are no more workers to remove. The minimum number of workers is always zero, because the time to start new workers is insignificant. It is advisable to use different worker pools for tasks that are bound by different resources, or that have different resource use patterns. For example, tasks that use X Mb of memory may need different concurrency limits than tasks that use Y Mb of memory. When there are no available workers to handle incoming tasks, the tasks are put on a waiting queue, in this implementation. In implementations mentioned in the credits below, these tasks were passed to goroutines. Using a queue is faster and has less memory overhead than creating a separate goroutine for each waiting task, allowing a much higher number of waiting tasks. Also, using a waiting queue ensures that tasks are given to workers in the order the tasks were received. This implementation builds on ideas from the following: http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang http://nesv.github.io/golang/2014/02/25/worker-queues-in-go.html
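A minimal sketch of bounded concurrency with non-blocking submission, assuming the gammazero/workerpool API (New, Submit, StopWait):

```go
package main

import (
	"fmt"

	"github.com/gammazero/workerpool"
)

func main() {
	// At most 2 tasks run concurrently; Submit never blocks, extra
	// tasks wait in the pool's queue.
	wp := workerpool.New(2)

	for _, task := range []string{"alpha", "beta", "gamma", "delta"} {
		task := task // capture loop variable
		wp.Submit(func() {
			fmt.Println("handling", task)
		})
	}

	// StopWait stops the pool after all queued tasks have been executed.
	wp.StopWait()
}
```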
bindata converts any file into manageable Go source code. Useful for embedding binary data into a Go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but uses the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`.
This accepts a portion of a path name, which should be stripped off from the map keys and function names. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get: With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool.
getter is a package for downloading files or directories from a variety of protocols. getter is unique in its ability to download both directories and files. It also detects certain source strings to be protocol-specific URLs. For example, "github.com/hashicorp/go-getter/v2" would turn into a Git URL and use the Git protocol. Protocols and detectors are extensible. To get started, see Client.