Package ssmcontacts provides the API client, operations, and parameter types for AWS Systems Manager Incident Manager Contacts. Systems Manager Incident Manager is an incident management console designed to help users mitigate and recover from incidents affecting their Amazon Web Services-hosted applications. An incident is any unplanned interruption or reduction in quality of services. Incident Manager speeds up incident resolution by notifying responders of impact, highlighting relevant troubleshooting data, and providing collaboration tools to get services back up and running. To achieve the primary goal of reducing the time-to-resolution of critical incidents, Incident Manager automates response plans and enables responder team escalation.
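As with other AWS SDK for Go v2 service packages, a client is typically constructed from a loaded AWS configuration. A minimal sketch follows; the ListContacts call and its empty input are illustrative of the usual pattern rather than a complete workflow:

```golang
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ssmcontacts"
)

func main() {
	ctx := context.Background()

	// Load credentials and region from the default sources (environment, shared config, etc.).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Create the Incident Manager Contacts client from the shared configuration.
	client := ssmcontacts.NewFromConfig(cfg)

	// List the contacts visible to the caller; an empty input lists everything.
	out, err := client.ListContacts(ctx, &ssmcontacts.ListContactsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range out.Contacts {
		log.Println(*c.ContactArn)
	}
}
```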
Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets. More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs. See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

To start working with this package, create a Client: The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines. You may configure the client by passing in options from the google.golang.org/api/option package. You may also use options defined in this package, such as WithJSONReads. If you only wish to access public data, you can create an unauthenticated client with option.WithoutAuthentication().

To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage. You can then create and use a client as usual: Please note that there is no official emulator for Cloud Storage.

A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle: A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call BucketHandle.Create: Note that although buckets are associated with projects, bucket names are global across all projects. Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use BucketHandle.Attrs:

An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data: Objects also have attributes, which you can fetch with ObjectHandle.Attrs:

Listing objects in a bucket is done with the BucketHandle.Objects method: Objects are listed lexicographically by name. To filter objects lexicographically, [Query.StartOffset] and/or [Query.EndOffset] can be used: If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process:

Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see the Cloud Storage IAM docs). To list the ACLs of a bucket or object, obtain an ACLHandle and call ACLHandle.List: You can also set and delete ACLs.

Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations. For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it.
Here is how to express that:

You can obtain a URL that lets anyone read or write an object for a limited time. Signing a URL requires credentials authorized to sign a URL. To use the same authentication that was used when instantiating the Storage client, use BucketHandle.SignedURL. You can also sign a URL without creating a client. See the documentation of SignedURL for details.

A signed POST policy (V4) request is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how the HTML form is used and exercised by a user. For more information, please see the XML POST Object docs as well as the documentation of BucketHandle.GenerateSignedPostPolicyV4.

If the GoogleAccessID and PrivateKey option fields are not provided, they will be automatically detected by BucketHandle.SignedURL and BucketHandle.GenerateSignedPostPolicyV4 if any of the following are true: Detecting GoogleAccessID may not be possible if you are authenticated using a token source or using option.WithHTTPClient. In this case, you can provide a service account email for GoogleAccessID and the client will attempt to sign the URL or Post Policy using that service account. To generate the signature, you must have:

Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As with the richer googleapi.Error type. For example:

Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation. The retry strategy in this library follows best practices for Cloud Storage. By default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See the Cloud Storage retry docs for more information. Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry). For example:

You can add custom headers to any API call made by this package by using callctx.SetHeaders on the context which is passed to the method. For example, to add a custom audit logging header:

This package includes support for the Cloud Storage gRPC API, which is currently in preview. This implementation uses gRPC rather than the current JSON & XML APIs to make requests to Cloud Storage. If you would like to try the API, please contact your GCP account rep for more information. The gRPC API is not yet generally available, so it may be subject to breaking changes. To create a client which will use gRPC, use the alternate constructor: If the application is running within GCP, users may get better performance by enabling DirectPath (enabling requests to skip some proxy steps). To enable, set the environment variable `GOOGLE_CLOUD_ENABLE_DIRECT_PATH_XDS=true` and add the following side-effect imports to your application:
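Returning to the basic client and object workflow described earlier, here is a minimal sketch using the standard io interfaces; the bucket and object names are placeholders:

```golang
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()

	// Create a client using default application credentials; reuse it across calls.
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Handles for a (placeholder) bucket and an object within it.
	obj := client.Bucket("my-bucket").Object("notes.txt")

	// Write object data with the standard io.Writer interface.
	w := obj.NewWriter(ctx)
	if _, err := fmt.Fprintln(w, "hello, world"); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil { // the write is committed when Close returns nil
		log.Fatal(err)
	}

	// Read it back with the standard io.Reader interface.
	r, err := obj.NewReader(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer r.Close()
	data, err := io.ReadAll(r)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", data)
}
```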
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users.

In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers.

The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations.

Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details.

The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide custom implementations of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be in a future release. In particular we have deprecated the following fields in _samlsp.Options_:

- `Logger` - This was used to emit errors while validating, which is an anti-pattern.
- `IDPMetadataURL` - Instead use `FetchMetadata()`
- `HTTPClient` - Instead pass httpClient to FetchMetadata
- `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider
- `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider
- `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import (
	"fmt"
	"net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, World!")
}

func main() {
	http.HandleFunc("/hello", hello)
	http.ListenAndServe(":8000", nil)
}
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like `openssl req -x509`.

We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
```golang
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", samlsp.AttributeFromContext(r.Context(), "displayName"))
}

func main() {
	// Load the service provider's self-signed key pair (file names are placeholders).
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	// Fetch the IDP metadata from samltest.id at startup.
	idpMetadataURL, _ := url.Parse("https://samltest.id/saml/idp")
	idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, _ := url.Parse("http://localhost:8000")
	samlSP, _ := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})

	app := http.HandlerFunc(hello)
	http.Handle("/hello", samlSP.RequireAccount(app)) // require a valid SAML session
	http.Handle("/saml/", samlSP)                     // serves the SAML-specific URLs (metadata, ACS)
	http.ListenAndServe(":8000", nil)
}
```

Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), fetch your service provider metadata (served at `localhost:8000/saml/metadata`), then navigate to https://samltest.id/upload.php and upload the file you fetched.

Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`.
1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO`.
1. samltest.id prompts you for a username and password.
1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled.
1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider.

The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org).

This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests.

The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.)

The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers.

Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
Package dht implements the Distributed Hash Table (DHT) part of the BitTorrent protocol, as specified by BEP 5: http://www.bittorrent.org/beps/bep_0005.html BitTorrent uses a "distributed hash table" (DHT) for storing peer contact information for "trackerless" torrents. In effect, each peer becomes a tracker. The protocol is based on the Kademlia DHT protocol and is implemented over UDP. Please note the terminology used to avoid confusion. A "peer" is a client/server listening on a TCP port that implements the BitTorrent protocol. A "node" is a client/server listening on a UDP port implementing the distributed hash table protocol. The DHT is composed of nodes and stores the location of peers. BitTorrent clients include a DHT node, which is used to contact other nodes in the DHT to get the location of peers to download from using the BitTorrent protocol. Standard use involves creating a Server, and calling Announce on it with the details of your local torrent client and infohash of interest.
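A minimal sketch of that standard use, assuming the anacrolix/dht/v2 module; the exact Announce signature has changed between versions, so treat the arguments here as illustrative:

```golang
package main

import (
	"log"

	"github.com/anacrolix/dht/v2"
)

func main() {
	// Passing nil uses the default ServerConfig (random node ID, default bootstrap nodes).
	s, err := dht.NewServer(nil)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	// infoHash identifies the torrent of interest (zero value used as a placeholder here).
	var infoHash [20]byte

	// Announce our local torrent client's TCP port (6881 here) for this infohash.
	a, err := s.Announce(infoHash, 6881, false)
	if err != nil {
		log.Fatal(err)
	}
	defer a.Close()
}
```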
Package rdap implements a client for the Registration Data Access Protocol (RDAP). RDAP is a modern replacement for the text-based WHOIS (port 43) protocol. It provides registration data for domain names/IP addresses/AS numbers, and more, in a structured format. This client executes RDAP queries and returns the responses as Go values. Quick usage: The QueryDomain(), QueryAutnum(), and QueryIP() methods all provide full contact information, and time out after 30s. Normal usage: As of June 2017, all five number registries (AFRINIC, ARIN, APNIC, LACNIC, RIPE) run RDAP servers. A small number of TLDs (top level domains) support RDAP so far, listed on https://data.iana.org/rdap/dns.json. The RDAP protocol uses HTTP, with responses in a JSON format. A bootstrapping mechanism (http://data.iana.org/rdap/) is used to determine the server to query.
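A short sketch of a domain query with this client; the domain name is just an example:

```golang
package main

import (
	"fmt"
	"log"

	"github.com/openrdap/rdap"
)

func main() {
	// The zero-value Client uses the IANA bootstrap registry to pick a server.
	client := &rdap.Client{}

	domain, err := client.QueryDomain("example.cz")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("Handle=%s Domain=%s\n", domain.Handle, domain.LDHName)
}
```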
Package apiserver_runtime contains libraries for building Kubernetes-style apiservers. Note that this package is experimental and a work in progress. Do not rely on this project for production without first contacting the maintainers at kubebuilder@googlegroups.com.
Package keybase implements an interface for interacting with the Keybase Chat, Team, and Wallet APIs. I've tried to follow Keybase's JSON API as closely as possible, so if you're stuck on anything, or wondering why things are organized in a certain way, it's most likely due to that. It may be helpful to look at the Keybase JSON API docs by running the relevant API help commands (for example, `keybase chat api -h`) in your terminal. The git repo for this code is hosted on Keybase. You can contact me directly (https://keybase.io/dxb), or join the mkbot team (https://keybase.io/team/mkbot) if you need assistance, or if you'd like to contribute. Here's a quick example of a bot that will attach a reaction with the sender's device name to every message sent in @mkbot#test1:
Package main ORY Oathkeeper

ORY Oathkeeper is a reverse proxy that checks the HTTP Authorization for validity against a set of rules. This service uses Hydra to validate access tokens and policies.

Schemes: http, https
Host:
BasePath: /
Version: Latest
Contact: ORY <hi@ory.am> https://www.ory.am

Consumes:
- application/json

Produces:
- application/json

Extensions:
---
x-request-id: string
x-forwarded-proto: string
---

swagger:meta
Package workmail provides the API client, operations, and parameter types for Amazon WorkMail. WorkMail is a secure, managed business email and calendaring service with support for existing desktop and mobile email clients. You can access your email, contacts, and calendars using Microsoft Outlook, your browser, or other native iOS and Android email applications. You can integrate WorkMail with your existing corporate directory and control both the keys that encrypt your data and the location in which your data is stored.

The WorkMail API is designed for the following scenarios:

- Listing and describing organizations
- Managing users
- Managing groups
- Managing resources

All WorkMail API operations are Amazon-authenticated and certificate-signed. They not only require the use of the AWS SDK, but also allow for the exclusive use of AWS Identity and Access Management users and roles to help facilitate access, trust, and permission policies. By creating a role and allowing an IAM user to access the WorkMail site, the IAM user gains full administrative visibility into the entire WorkMail organization (or as set in the IAM policy). This includes, but is not limited to, the ability to create, update, and delete users, groups, and resources. This allows developers to perform the scenarios listed above, as well as give users the ability to grant access on a selective basis using the IAM model.
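A minimal sketch of calling the API with the AWS SDK for Go v2; the organization ID is a placeholder, and ListUsers is shown only as a representative organization-scoped call:

```golang
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/workmail"
)

func main() {
	ctx := context.Background()

	// Load credentials and region from the default sources.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := workmail.NewFromConfig(cfg)

	// List the users in an organization ("m-0123456789abcdef" is a placeholder ID).
	out, err := client.ListUsers(ctx, &workmail.ListUsersInput{
		OrganizationId: aws.String("m-0123456789abcdef"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range out.Users {
		log.Println(aws.ToString(u.Name))
	}
}
```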
Package resolv is a simple collision detection and resolution library mainly geared towards simpler 2D arcade-style games. Its goal is to be lightweight, fast, simple, and easy to use for game development, while not becoming a physics engine or physics library itself: the actual physics implementation and "game feel" are always left to the developer, and resolv makes that easy to do. Usage of resolv essentially centers around two main concepts: Spaces and Shapes.

A Shape can be used to test for collisions against another Shape. That's really all they have to do, but that capability is powerful when paired with the resolv.Resolve() function. You can then check to see if a Shape would have a collision if it attempted to move in a specified direction. If so, the Resolve() function returns a Collision object, which tells you some information about the collision, like how far the checking Shape would have to move to come into contact with the other, and which Shape it comes into contact with.

A Space is just a slice that holds Shapes for detection. It doesn't represent any real physical space, and so there aren't any units of measurement to remember when using Spaces. Similar to Shapes, Spaces are simple, but also very powerful. Spaces allow you to easily check for collision with, and resolve collision against, multiple Shapes within that Space. A Space being just a collection of Shapes means that you can manipulate and filter them as necessary.
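A rough sketch of those concepts, assuming the older resolv API (NewSpace, NewRectangle, and Space.Resolve); the import path and exact signatures vary between resolv versions, so treat this as illustrative only:

```golang
package main

import (
	"fmt"

	"github.com/solarlune/resolv" // import path is an assumption; check your resolv version
)

func main() {
	// A Space is just a collection of Shapes; it has no units of its own.
	space := resolv.NewSpace()

	// Two rectangle Shapes: a moving "player" and a static wall (x, y, width, height).
	player := resolv.NewRectangle(0, 0, 16, 16)
	wall := resolv.NewRectangle(32, 0, 16, 16)
	space.Add(player, wall)

	// Ask whether moving the player 20 units to the right would collide with anything
	// in the Space; the returned Collision reports how far it could actually travel
	// before touching the other Shape.
	if res := space.Resolve(player, 20, 0); res.Colliding() {
		fmt.Println("collision: can move", res.ResolveX, "units on X")
	}
}
```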
Package emailaddress provides a tiny library for finding, parsing and validating email addresses. This library is tested for Go v1.9 and above. Parse and validate the email locally using the RFC 5322 regex; note that when err == nil it doesn't necessarily mean the email address actually exists. Host validation will first attempt to resolve the domain and then verify if we can start a mail transaction with the host. This is relatively slow as it will contact the host several times. Note that when err == nil it doesn't necessarily mean the email address actually exists. This will look for emails in a byte array (i.e. text or an HTML response). As RFC 5322 is really broad, this method will likely match images and URLs that contain the '@' character (e.g. logo@2x.png). For more reliable results, you can use the following method.
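A small sketch of those three uses, assuming the mcnijman/go-emailaddress import path; the addresses and text are placeholders:

```golang
package main

import (
	"fmt"
	"log"

	"github.com/mcnijman/go-emailaddress"
)

func main() {
	// Parse and locally validate a single address (RFC 5322 syntax only).
	email, err := emailaddress.Parse("foo@example.com")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(email.LocalPart, email.Domain)

	// Optionally validate the host as well (slow: resolves and contacts the mail host).
	if err := email.ValidateHost(); err != nil {
		log.Println("host validation failed:", err)
	}

	// Find addresses in arbitrary text; the second argument controls host validation.
	found := emailaddress.Find([]byte("contact us at a@b.com or c@d.org"), false)

	// FindWithIcannSuffix only returns addresses whose domain ends in an ICANN-managed
	// suffix, which filters out false positives such as "logo@2x.png".
	reliable := emailaddress.FindWithIcannSuffix([]byte("logo@2x.png and real@example.com"), false)
	fmt.Println(len(found), len(reliable))
}
```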