Package sdk is the official AWS SDK for the Go programming language. The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3).

The SDK removes the complexity of coding directly against a web service interface. It hides much of the lower-level plumbing, such as authentication, request retries, and error handling. The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/

Check out the Getting Started Guide and API Reference Docs for details on the SDK's components and on each AWS client the SDK supports. The Getting Started Guide provides examples and a detailed description of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference for the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/

The SDK is composed of two main components: the SDK core and service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK.

aws - SDK core, provides common shared types such as Config and Logger, and utilities to make working with API parameters easier.

awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing, including service API response errors. The Error type is made up of a code and a message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare the returned error to specific error codes. See the package's documentation for additional values that can be extracted, such as RequestId.

credentials - Provides the types and built-in credential providers the SDK uses to retrieve AWS credentials for making API requests. Nested under this folder are additional credential providers, such as stscreds for assuming IAM roles, and ec2rolecreds for EC2 instance roles.

endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to look up AWS service endpoint information, such as which services are in a region and which regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2".

session - Provides the initial default configuration, and loads configuration from external sources such as the environment and the shared credentials file.

request - Provides the API request sending and retry logic for the SDK. This package also includes utilities for defining your own request retryer, and for configuring how the SDK processes requests.

service - Clients for AWS services. All services supported by the SDK are available under this folder.

The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports.
All service clients follow a common pattern of creation and usage.

When creating a client for an AWS service you'll first need to have a Session value constructed. The Session provides configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the aws.Config type to override the configuration provided by the Session, creating service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently.

In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html

See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/aws/session/ See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of providing these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain and how to configure it. The common items in the credential chain are the following:

Environment Credentials - Set of environment variables that are useful when sub-processes are created for specific roles.

Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development.

EC2 Instance Role Credentials - Use an EC2 instance role to assign credentials to applications running on an EC2 instance. This removes the need to manage credential files in production.

Credentials can also be configured in code by setting the Config's Credentials value to a custom provider, or by using one of the providers included with the SDK to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. The sketch below creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests. See the credentials package documentation for more information on the credential providers included with the SDK, and how to customize the SDK's usage of credentials. https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials

The SDK has support for the shared configuration file (~/.aws/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or by enabling the feature in code when creating a Session via the Option's SharedConfigState parameter.
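A minimal sketch of that assume-role configuration (error handling elided; "myRoleARN" is a placeholder):

	import (
		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3"
	)

	func example() {
		sess := session.Must(session.NewSession())

		// Create credentials that assume the IAM role "myRoleARN" via AWS STS.
		creds := stscreds.NewCredentials(sess, "myRoleARN")

		// Create an S3 client whose requests are signed with the assumed role.
		svc := s3.New(sess, &aws.Config{Credentials: creds})
		_ = svc
	}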
In addition to the credentials, you'll need to specify the region the SDK will use to make AWS API requests. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created. If the region is specified in multiple ways, the last value specified in code wins.

To set the region via the environment variable, set "AWS_REGION" to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code to select the region.

The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID. These values are helpful because they reduce the need to type the region string manually.

To set the region on a Session, set the aws package's Config struct parameter Region to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients and have all of the clients make API requests to the same region. See the endpoints package for the AWS Regions and Endpoints metadata. https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/

In addition to setting the region when creating a Session, you can also set the region on a per-service-client basis. This overrides the region of a Session. This is helpful when you want to create service clients in regions different from the Session's region. See the Config type in the aws package for more information and additional options such as setting the Endpoint, and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

Once the client is created you can make an API request to the service. Each API method takes an input parameter, and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In this list we'll use the S3 ListObjects API as an example of the different ways of making API requests.

ListObjects - Base API operation that will make the API request to the service.

ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly.

ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response.

ListObjectsWithContext - Same as the base API operation, but adds support for the Context pattern. This is helpful for controlling the cancelation of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as a variadic argument, for modifying how the request will be made or extracting information from the raw HTTP response.

ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext, this method also takes the request package's Option functional options as a variadic argument.

In addition to the API operations, the SDK also includes several higher-level methods that abstract checking for, and waiting for, an AWS resource to be in a desired state. In this list we'll use WaitUntilBucketExists to demonstrate the different forms of waiters.

WaitUntilBucketExists - Method to make an API request to query an AWS service for a resource's state. Returns successfully when the resource reaches that state.

WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern.
In addition, these methods take the request package's WaiterOptions to configure the waiter and how the underlying request will be made by the SDK.

The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned.

Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round-trip API page requests. Pagination methods take a callback function that will be called for each page of the API's response.

Waiter helper methods provide the functionality to wait for an AWS resource to reach a desired state. These methods abstract the logic needed to check the state of an AWS resource, and to wait until that resource is in the desired state. The waiter will block until the resource is in the desired state, an error occurs, or the waiter times out. If a resource times out, the error code returned will be request.WaiterResourceNotReadyErrorCode.

The example below shows a complete working Go file which will upload a file to S3 and use the Context pattern to implement timeout logic that cancels the request if it takes too long. This example highlights how to use sessions, create a service client, make a request, handle the error, and process the response.
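A condensed sketch of that example (bucket, key, and timeout are placeholders; the file is read from stdin):

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/awserr"
		"github.com/aws/aws-sdk-go/aws/request"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/s3"
	)

	// Uploads stdin to s3://<bucket>/<key>, canceling the request if it
	// takes longer than the timeout.
	func main() {
		bucket, key := os.Args[1], os.Args[2]
		timeout := 30 * time.Second

		sess := session.Must(session.NewSession())
		svc := s3.New(sess)

		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()

		_, err := svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
			Bucket: aws.String(bucket),
			Key:    aws.String(key),
			Body:   os.Stdin,
		})
		if err != nil {
			if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
				// The request was canceled because the context deadline passed.
				fmt.Fprintf(os.Stderr, "upload canceled due to timeout, %v\n", err)
			} else {
				fmt.Fprintf(os.Stderr, "failed to upload object, %v\n", err)
			}
			os.Exit(1)
		}

		fmt.Printf("successfully uploaded file to %s/%s\n", bucket, key)
	}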
pprof is a tool for collection, manipulation and visualization of performance profiles.
Package chromedp is a high level Chrome DevTools Protocol client that simplifies driving browsers for scraping, unit testing, or profiling web pages using the CDP. chromedp requires no third-party dependencies, implementing the async Chrome DevTools Protocol entirely in Go. This package includes a number of simple examples. Additionally, chromedp/examples contains more complex examples.
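For instance, a minimal run (assuming a locally installed Chrome) navigates to a page and reads its title:

	import (
		"context"
		"log"

		"github.com/chromedp/chromedp"
	)

	func main() {
		// Create a browser context; the first Run allocates the browser.
		ctx, cancel := chromedp.NewContext(context.Background())
		defer cancel()

		// Navigate and read the page title.
		var title string
		if err := chromedp.Run(ctx,
			chromedp.Navigate("https://example.com"),
			chromedp.Title(&title),
		); err != nil {
			log.Fatal(err)
		}
		log.Printf("page title: %s", title)
	}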
Package profiler is a client for the Cloud Profiler service. Usage example: Calling Start will start a goroutine to collect profiles and upload to the profiler server, at the rhythm specified by the server. The caller must provide the service string in the config, and may provide other information as well. See Config for details. Profiler has CPU, heap and goroutine profiling enabled by default. Mutex profiling can be enabled in the config. Note that goroutine and mutex profiles are shown as "threads" and "contention" profiles in the profiler UI.
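A minimal sketch of that Start call (the service name and version are placeholders):

	import (
		"log"

		"cloud.google.com/go/profiler"
	)

	func main() {
		if err := profiler.Start(profiler.Config{
			Service:        "my-service", // required: the service being profiled
			ServiceVersion: "1.0.0",      // optional
			MutexProfiling: true,         // optional: enables "contention" profiles
		}); err != nil {
			log.Fatalf("failed to start profiler: %v", err)
		}
		// ... application code ...
	}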
Package ebiten provides graphics and input API to develop a 2D game.

You can start the game by calling the function RunGame. In the API document, 'the main thread' means the goroutine in init(), main() and their callees without 'go' statement. It is assured that 'the main thread' runs on the OS main thread. There are some Ebitengine functions (e.g., DeviceScaleFactor) that must be called on the main thread under some conditions (typically, before ebiten.RunGame is called).

`EBITENGINE_SCREENSHOT_KEY` environment variable specifies the key to take a screenshot. For example, if you run your game with `EBITENGINE_SCREENSHOT_KEY=q`, you can take a screenshot of the game screen by pressing the Q key. This works only on desktops and browsers.

`EBITENGINE_INTERNAL_IMAGES_KEY` environment variable specifies the key to dump all the internal images. This is valid only when the build tag 'ebitenginedebug' is specified. This works only on desktops and browsers.

`EBITENGINE_GRAPHICS_LIBRARY` environment variable specifies the graphics library. If the specified graphics library is not available, RunGame returns an error. This environment variable works when RunGame is called or RunGameWithOptions is called with GraphicsLibraryAuto. This can take one of the following values:

`EBITENGINE_DIRECTX` environment variable specifies various parameters for DirectX. You can specify multiple values separated by a comma. The default value is empty (i.e. no parameters). The options taking arguments are exclusive, and if multiple are specified, the value specified last is adopted. The possible values for the option "version" are "11" and "12". If the version is not specified, the default version 11 is adopted. On Xbox, the "version" option is ignored and DirectX 12 is always adopted. The option "featurelevel" is valid only for DirectX 12. The possible values are "11_0", "11_1", "12_0", "12_1", and "12_2". The default value is "11_0".

`ebitenginedebug` outputs a log of graphics commands. This is useful to know what happens in Ebitengine. In general, the number of graphics commands affects the performance of your game.

`ebitenginegldebug` enables a debug mode for OpenGL. This is valid only when the graphics library is OpenGL. This affects performance very much.

`ebitenginesinglethread` disables Ebitengine's thread safety to unlock maximum performance. If you use this you will have to manage threads yourself. Functions like `SetWindowSize` will no longer be concurrent-safe with this build tag. They must be called from the main thread or from the same goroutine as the given game's callback functions like Update. `ebitenginesinglethread` works only on desktops and consoles. `ebitenginesinglethread` was deprecated as of v2.7. Use RunGameOptions.SingleThread instead.

`microsoftgdk` is for Microsoft GDK (e.g. Xbox). `nintendosdk` is for NintendoSDK (e.g. Nintendo Switch). `nintendosdkprofile` enables a profiler for NintendoSDK. `playstation5` is for PlayStation 5.
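A minimal game, as a sketch of the RunGame entry point described above (the window and screen sizes are arbitrary):

	package main

	import (
		"log"

		"github.com/hajimehoshi/ebiten/v2"
		"github.com/hajimehoshi/ebiten/v2/ebitenutil"
	)

	type Game struct{}

	// Update advances the game state; it is called every tick.
	func (g *Game) Update() error { return nil }

	// Draw renders one frame.
	func (g *Game) Draw(screen *ebiten.Image) {
		ebitenutil.DebugPrint(screen, "Hello, Ebitengine!")
	}

	// Layout returns the logical screen size.
	func (g *Game) Layout(outsideWidth, outsideHeight int) (int, int) {
		return 320, 240
	}

	func main() {
		ebiten.SetWindowSize(640, 480)
		if err := ebiten.RunGame(&Game{}); err != nil {
			log.Fatal(err)
		}
	}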
Package pprofextension implements an extension that exposes the golang net/http/pprof (Performance Profiler) on an HTTP endpoint.
Package profile provides a simple way to manage runtime/pprof profiling of your Go application.
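For instance, a sketch assuming the pkg/profile-style API this description suggests:

	import "github.com/pkg/profile"

	func main() {
		// Start CPU profiling; write the profile to the current directory
		// and stop cleanly when main returns.
		defer profile.Start(profile.CPUProfile, profile.ProfilePath(".")).Stop()

		// ... application code ...
	}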
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users.

In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers.

The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations.

Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details.

The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide a custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be removed in the future. In particular we have deprecated the following fields in _samlsp.Options_:

- `Logger` - This was used to emit errors while validating, which is an anti-pattern.
- `IDPMetadataURL` - Instead use `FetchMetadata()`
- `HTTPClient` - Instead pass httpClient to FetchMetadata
- `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider
- `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider
- `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import (
	"fmt"
	"net/http"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, World!")
}

func main() {
	http.HandleFunc("/hello", hello)
	http.ListenAndServe(":8000", nil)
}
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with something like this:

We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
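The wrapped application then looks something like the following (a reconstruction based on the surrounding description; the certificate file names and URLs are placeholders):

```golang
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", samlsp.AttributeFromContext(r.Context(), "displayName"))
}

func main() {
	// Load the service provider's key pair generated above.
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	// Fetch the IDP metadata at startup.
	idpMetadataURL, err := url.Parse("https://samltest.id/saml/idp")
	if err != nil {
		panic(err)
	}
	idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, err := url.Parse("http://localhost:8000")
	if err != nil {
		panic(err)
	}

	samlSP, err := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})
	if err != nil {
		panic(err)
	}

	// Serve the SAML endpoints (metadata, ACS) and require a session on /hello.
	http.Handle("/saml/", samlSP)
	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.ListenAndServe(":8000", nil)
}
```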
Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), you can do something like: fetch your metadata from `localhost:8000/saml/metadata`, then navigate to https://samltest.id/upload.php and upload the file you fetched.

Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO`
1. samltest.id prompts you for a username and password.
1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled.
1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider.

The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org).

This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests.

The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.)

The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers.

Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
Package cognitoidentityprovider provides the API client, operations, and parameter types for Amazon Cognito Identity Provider.

With the Amazon Cognito user pools API, you can configure user pools and authenticate users. To authenticate users from third-party identity providers (IdPs) in this API, you can link IdP users to native user profiles. Learn more about the authentication and authorization of federated users at Adding user pool sign-in through a third party and in the User pool federation endpoints and hosted UI reference.

This API reference provides detailed information about API operations and object types in Amazon Cognito. Along with resource management operations, the Amazon Cognito user pools API includes classes of operations and authorization models for client-side and server-side authentication of users. You can interact with operations in the Amazon Cognito user pools API as any of the following subjects:

An administrator who wants to configure user pools, app clients, users, groups, or other user pool functions.

A server-side app, like a web application, that wants to use its Amazon Web Services privileges to manage, authenticate, or authorize a user.

A client-side app, like a mobile app, that wants to make unauthenticated requests to manage, authenticate, or authorize a user.

For more information, see Using the Amazon Cognito user pools API and user pool endpoints in the Amazon Cognito Developer Guide.

With your Amazon Web Services SDK, you can build the logic to support operational flows in every use case for this API. You can also make direct REST API requests to Amazon Cognito user pools service endpoints. The following links can get you started with the CognitoIdentityProvider client in other supported Amazon Web Services SDKs: Amazon Web Services Command Line Interface, Amazon Web Services SDK for .NET, Amazon Web Services SDK for C++, Amazon Web Services SDK for Go, Amazon Web Services SDK for Java V2, Amazon Web Services SDK for JavaScript, Amazon Web Services SDK for PHP V3, Amazon Web Services SDK for Python, and Amazon Web Services SDK for Ruby V3.

To get started with an Amazon Web Services SDK, see Tools to Build on Amazon Web Services. For example actions and scenarios, see Code examples for Amazon Cognito Identity Provider using Amazon Web Services SDKs.
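Constructing the client follows the usual AWS SDK for Go V2 pattern. A sketch (the ListUserPools call is only an illustration; exact field types may vary by SDK version):

	import (
		"context"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/cognitoidentityprovider"
	)

	func main() {
		cfg, err := config.LoadDefaultConfig(context.TODO())
		if err != nil {
			log.Fatal(err)
		}

		client := cognitoidentityprovider.NewFromConfig(cfg)

		// List up to ten user pools in the account.
		out, err := client.ListUserPools(context.TODO(),
			&cognitoidentityprovider.ListUserPoolsInput{MaxResults: aws.Int32(10)})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range out.UserPools {
			log.Println(aws.ToString(p.Name))
		}
	}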
fgprof is a sampling Go profiler that allows you to analyze On-CPU as well as [Off-CPU](http://www.brendangregg.com/offcpuanalysis.html) (e.g. I/O) time together.
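A sketch of typical usage via its HTTP handler:

	import (
		"log"
		"net/http"

		"github.com/felixge/fgprof"
	)

	func main() {
		// Expose the fgprof endpoint alongside net/http/pprof-style tooling.
		http.DefaultServeMux.Handle("/debug/fgprof", fgprof.Handler())
		go func() {
			log.Println(http.ListenAndServe(":6060", nil))
		}()

		// ... application code ...
		select {}
	}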
Package appconfig provides the API client, operations, and parameter types for Amazon AppConfig.

AppConfig feature flags and dynamic configurations help software builders quickly and securely adjust application behavior in production environments without full code deployments. AppConfig speeds up software release frequency, improves application resiliency, and helps you address emergent issues more quickly. With feature flags, you can gradually release new capabilities to users and measure the impact of those changes before fully deploying the new capabilities to all users. With operational flags and dynamic configurations, you can update block lists, allow lists, throttling limits, logging verbosity, and perform other operational tuning to quickly respond to issues in production environments.

AppConfig is a capability of Amazon Web Services Systems Manager.

Despite the fact that application configuration content can vary greatly from application to application, AppConfig supports the following use cases, which cover a broad spectrum of customer needs:

Feature flags and toggles - Safely release new capabilities to your customers in a controlled environment. Instantly roll back changes if you experience a problem.

Application tuning - Carefully introduce application changes while testing the impact of those changes with users in production environments.

Allow list or block list - Control access to premium features or instantly block specific users without deploying new code.

Centralized configuration storage - Keep your configuration data organized and consistent across all of your workloads. You can use AppConfig to deploy configuration data stored in the AppConfig hosted configuration store, Secrets Manager, Systems Manager, Parameter Store, or Amazon S3.

This section provides a high-level description of how AppConfig works and how you get started.

1. Identify configuration values in code you want to manage in the cloud. Before you start creating AppConfig artifacts, we recommend you identify configuration data in your code that you want to dynamically manage using AppConfig. Good examples include feature flags or toggles, allow and block lists, logging verbosity, service limits, and throttling rules, to name a few. If your configuration data already exists in the cloud, you can take advantage of AppConfig validation, deployment, and extension features to further streamline configuration data management.

2. Create an application namespace. To create a namespace, you create an AppConfig artifact called an application. An application is simply an organizational construct like a folder.

3. Create environments. For each AppConfig application, you define one or more environments. An environment is a logical grouping of targets, such as applications in a Beta or Production environment, Lambda functions, or containers. You can also define environments for application subcomponents, such as the Web, Mobile, and Back-end components. You can configure Amazon CloudWatch alarms for each environment. The system monitors alarms during a configuration deployment. If an alarm is triggered, the system rolls back the configuration.

4. Create a configuration profile. A configuration profile includes, among other things, a URI that enables AppConfig to locate your configuration data in its stored location, and a profile type. AppConfig supports two configuration profile types: feature flags and freeform configurations.
Feature flag configuration profiles store their data in the AppConfig hosted configuration store, and the URI is simply hosted. For freeform configuration profiles, you can store your data in the AppConfig hosted configuration store or any Amazon Web Services service that integrates with AppConfig, as described in Creating a free form configuration profile in the AppConfig User Guide. A configuration profile can also include optional validators to ensure your configuration data is syntactically and semantically correct. AppConfig performs a check using the validators when you start a deployment. If any errors are detected, the deployment rolls back to the previous configuration data.

5. Deploy configuration data. When you create a new deployment, you specify the following:
- An application ID
- A configuration profile ID
- A configuration version
- An environment ID where you want to deploy the configuration data
- A deployment strategy ID that defines how fast you want the changes to take effect

When you call the StartDeployment API action, AppConfig performs the following tasks:
- Retrieves the configuration data from the underlying data store by using the location URI in the configuration profile.
- Verifies the configuration data is syntactically and semantically correct by using the validators you specified when you created your configuration profile.
- Caches a copy of the data so it is ready to be retrieved by your application. This cached copy is called the deployed data.

6. Retrieve the configuration. You can configure AppConfig Agent as a local host and have the agent poll AppConfig for configuration updates. The agent calls the StartConfigurationSession and GetLatestConfiguration API actions and caches your configuration data locally. To retrieve the data, your application makes an HTTP call to the localhost server. AppConfig Agent supports several use cases, as described in Simplified retrieval methods in the AppConfig User Guide. If AppConfig Agent isn't supported for your use case, you can configure your application to poll AppConfig for configuration updates by directly calling the StartConfigurationSession and GetLatestConfiguration API actions, as sketched below.

This reference is intended to be used with the AppConfig User Guide.
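A sketch of that agent-less retrieval flow, using the companion appconfigdata client from the AWS SDK for Go V2 (all identifiers are placeholders; exact field names may vary by SDK version):

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/appconfigdata"
	)

	func main() {
		ctx := context.TODO()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := appconfigdata.NewFromConfig(cfg)

		// Start a configuration session for one application/environment/profile.
		sess, err := client.StartConfigurationSession(ctx, &appconfigdata.StartConfigurationSessionInput{
			ApplicationIdentifier:          aws.String("my-app"),
			EnvironmentIdentifier:          aws.String("production"),
			ConfigurationProfileIdentifier: aws.String("my-profile"),
		})
		if err != nil {
			log.Fatal(err)
		}

		// Poll for the latest deployed data; an empty Configuration means the
		// previously returned copy is still current.
		out, err := client.GetLatestConfiguration(ctx, &appconfigdata.GetLatestConfigurationInput{
			ConfigurationToken: sess.InitialConfigurationToken,
		})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(out.Configuration))
	}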
Cover is a program for analyzing the coverage profiles generated by 'go test -coverprofile=cover.out'. Deprecated: For Go releases 1.5 and later, this tool lives in the standard repository. The code here is not maintained. Cover is also used by 'go test -cover' to rewrite the source code with annotations to track which parts of each function are executed. It operates on one Go source file at a time, computing approximate basic block information by studying the source. It is thus more portable than binary-rewriting coverage tools, but also a little less capable. For instance, it does not probe inside && and || expressions, and can be mildly confused by single statements with multiple function literals. For usage information, please see:
Package gatt provides a Bluetooth Low Energy gatt implementation. Gatt (Generic Attribute Profile) is the protocol used to write BLE peripherals (servers) and centrals (clients). This package is a work in progress. The API will change.

As a peripheral, you can create services, characteristics, and descriptors, advertise, accept connections, and handle requests. As a central, you can scan, connect, discover services, and make requests. gatt supports both Linux and OS X.

On Linux: To gain complete and exclusive control of the HCI device, gatt uses HCI_CHANNEL_USER (introduced in Linux v3.14) instead of HCI_CHANNEL_RAW. Those who must use an older kernel may patch in these relevant commits from Marcel Holtmann: Note that because gatt uses HCI_CHANNEL_USER, once gatt has opened the device no other program may access it. Before starting a gatt program, make sure that your BLE device is down: If you have BlueZ 5.14+ (or aren't sure), stop the built-in bluetooth server, which interferes with gatt, e.g.: Because gatt programs administer network devices, they must either be run as root, or be granted appropriate capabilities:

USAGE: See server.go, discoverer.go, and explorer.go in the examples/ directory for writing server or client programs that run on Linux and OS X. Users, especially on Linux platforms, seeking finer-grained control over the devices can see examples/server_lnx.go for the usage of Options, which are platform-specific. See the rest of the docs for other options and finer-grained control.

Note that some BLE central devices, particularly iOS, may aggressively cache results from previous connections. If you change your services or characteristics, you may need to reboot the other device to pick up the changes. This is a common source of confusion and apparent bugs. For an OS X central, see http://stackoverflow.com/questions/20553957.

gatt started life as a port of bleno, to which it is indebted: https://github.com/sandeepmistry/bleno. If you are having problems with gatt, particularly around installation, issues filed with bleno might also be helpful references.

To try out your GATT server, it is useful to experiment with a generic BLE client. LightBlue is a good choice. It is available free for both iOS and OS X.
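A quick sketch of the central role, mirroring examples/discoverer.go (the option import and exact callbacks are assumptions that may differ by platform):

	import (
		"fmt"
		"log"

		"github.com/paypal/gatt"
		"github.com/paypal/gatt/examples/option"
	)

	func onStateChanged(d gatt.Device, s gatt.State) {
		if s == gatt.StatePoweredOn {
			d.Scan([]gatt.UUID{}, false) // scan for all peripherals
			return
		}
		d.StopScanning()
	}

	func onPeriphDiscovered(p gatt.Peripheral, a *gatt.Advertisement, rssi int) {
		fmt.Printf("peripheral %s (%s), RSSI %d\n", p.ID(), p.Name(), rssi)
	}

	func main() {
		d, err := gatt.NewDevice(option.DefaultClientOptions...)
		if err != nil {
			log.Fatalf("failed to open device: %v", err)
		}
		d.Handle(gatt.PeripheralDiscovered(onPeriphDiscovered))
		d.Init(onStateChanged)
		select {}
	}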
Package main is the entry point of go-torch, a stochastic flame graph profiler for Go programs.
Package signer provides the API client, operations, and parameter types for AWS Signer.

AWS Signer is a fully managed code-signing service to help you ensure the trust and integrity of your code. Signer supports the following applications:

With code signing for AWS Lambda, you can sign AWS Lambda deployment packages. Integrated support is provided for Amazon S3, Amazon CloudWatch, and AWS CloudTrail. In order to sign code, you create a signing profile and then use Signer to sign Lambda zip files in S3.

With code signing for IoT, you can sign code for any IoT device that is supported by AWS. IoT code signing is available for Amazon FreeRTOS and AWS IoT Device Management, and is integrated with AWS Certificate Manager (ACM). In order to sign code, you import a third-party code-signing certificate using ACM, and use that to sign updates in Amazon FreeRTOS and AWS IoT Device Management.

With Signer and the Notation CLI from the Notary Project, you can sign container images stored in a container registry such as Amazon Elastic Container Registry (ECR). The signatures are stored in the registry alongside the images, where they are available for verifying image authenticity and integrity.

For more information about Signer, see the AWS Signer Developer Guide.
gocovmerge takes the results from multiple `go test -coverprofile` runs and merges them into one profile
Package codestar provides the API client, operations, and parameter types for AWS CodeStar.

This is the API reference for AWS CodeStar. This reference provides descriptions of the operations and data types for the AWS CodeStar API along with usage examples. You can use the AWS CodeStar API to work with:

Projects and their resources, by calling the following:
- DeleteProject, which deletes a project.
- DescribeProject, which lists the attributes of a project.
- ListProjects, which lists all projects associated with your AWS account.
- ListResources, which lists the resources associated with a project.
- ListTagsForProject, which lists the tags associated with a project.
- TagProject, which adds tags to a project.
- UntagProject, which removes tags from a project.
- UpdateProject, which updates the attributes of a project.

Teams and team members, by calling the following:
- AssociateTeamMember, which adds an IAM user to the team for a project.
- DisassociateTeamMember, which removes an IAM user from the team for a project.
- ListTeamMembers, which lists all the IAM users in the team for a project, including their roles and attributes.
- UpdateTeamMember, which updates a team member's attributes in a project.

Users, by calling the following:
- CreateUserProfile, which creates a user profile that contains data associated with the user across all projects.
- DeleteUserProfile, which deletes all user profile information across all projects.
- DescribeUserProfile, which describes the profile of a user.
- ListUserProfiles, which lists all user profiles.
- UpdateUserProfile, which updates the profile for a user.
Package goebpf provides a simple and convenient interface to the Linux eBPF system.

Extended Berkeley Packet Filter (eBPF) is a highly flexible and efficient virtual machine in the Linux kernel that allows executing bytecode at various hook points in a safe manner. It is actually close to kernel modules, which can provide the same functionality, but without the cost of a kernel panic if something goes wrong.

The library is intended to simplify work with eBPF programs. It takes care of the low-level routine implementation to make it easy to load/run/manage eBPF programs. Currently supported functionality:
- Read / parse clang/llvm-compiled binaries for eBPF programs / maps
- Create / load eBPF programs and eBPF maps into the kernel
- Provides a simple interface to interact with eBPF maps
- Has mock versions of eBPF objects (program, map, etc.) in order to make writing unit tests simple.

eXpress Data Path (XDP) provides bare-metal, high-performance, programmable packet processing at the closest possible point to the network driver. That makes it ideal for speed without compromising programmability. Key benefits include the following:
- It does not require any specialized hardware (the program works in the kernel's "VM")
- It does not require kernel bypass
- It does not replace the TCP/IP stack

Consider a very simple and highly effective way to DROP all packets from a given source IPv4 address: an XDP program (written in C) which, once compiled, can be used by goebpf as sketched below.

Perf Events (originally Performance Counters for Linux) is a powerful kernel instrument for tracing, profiling and a lot of other cases like general events to user space. Usually it is implemented using the special eBPF map type "BPF_MAP_TYPE_PERF_EVENT_ARRAY" as a container to send events into. A simple example could be to log all TCP SYN packets into user space from an XDP program.

There are currently two types of supported probes: kprobes and kretprobes (also called return probes). A kprobe can be inserted on virtually any instruction in the kernel. A return probe fires when a specified function returns. For example, you can trigger eBPF code to run when a kernel function starts by attaching the program to a "kprobe" event. Because it runs in the kernel, eBPF code is extremely high performance. A simple example could be to log all process execution events into user space from a kprobe program.
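A sketch of the XDP loading flow referenced above, following the dropbox/goebpf README (the ELF, program, and interface names are placeholders):

	import (
		"log"

		"github.com/dropbox/goebpf"
	)

	func main() {
		// Read a clang/LLVM-compiled ELF containing eBPF programs and maps.
		bpf := goebpf.NewDefaultEbpfSystem()
		if err := bpf.LoadElf("xdp_fw.elf"); err != nil {
			log.Fatalf("LoadElf() failed: %v", err)
		}

		// Load the XDP program into the kernel and attach it to an interface.
		xdp := bpf.GetProgramByName("firewall")
		if err := xdp.Load(); err != nil {
			log.Fatalf("Load() failed: %v", err)
		}
		if err := xdp.Attach("eth0"); err != nil {
			log.Fatalf("Attach() failed: %v", err)
		}
		defer xdp.Detach()

		select {}
	}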
Package platform is a profile for running a highly available Micro platform
Package getlang provides fast natural language detection for various languages. getlang compares input text to a characteristic profile of each supported language and returns the language that best matches the input text.
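For example (mirroring the package README; the method set is an assumption):

	import (
		"fmt"

		"github.com/rylans/getlang"
	)

	func main() {
		info := getlang.FromString("Wszyscy ludzie rodzą się wolni i równi w swojej godności i prawach")
		fmt.Println(info.LanguageName(), info.Confidence())
	}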
Package emergent is the overall repository for the emergent neural network simulation software, written in Go (golang) with Python wrappers. This top-level of the repository has no functional code -- everything is organized into the following sub-repositories: * emer: defines the primary structural interfaces for emergent, at the level of Network, Layer, and Prjn (projection). These contain no algorithm-specific code and are only about the overall structure of a network, sufficient to support general purpose tools such as the 3D NetView. It also houses widely-used support classes used in algorithm-specific code, including things like MinMax and AvgMax, and also the parameter-styling infrastructure (emer.Params, emer.ParamStyle, emer.ParamSet and emer.ParamSets). * erand has misc random-number generation support functionality, including erand.RndParams for parameterizing the type of random noise to add to a model, and easier support for making permuted random lists, etc. * netview provides the NetView interactive 3D network viewer, implemented in the GoGi 3D framework. * prjn is a separate package for defining patterns of connectivity between layers (i.e., the ProjectionSpecs from C++ emergent). This is done using a fully independent structure that *only* knows about the shapes of the two layers, and it returns a fully general bitmap representation of the pattern of connectivity between them. The leabra.Prjn code then uses these patterns to do all the nitty-gritty of connecting up neurons. This makes the projection code *much* simpler compared to the ProjectionSpec in C++ emergent, which was involved in both creating the pattern and also all the complexity of setting up the actual connections themselves. This should be the *last* time any of those projection patterns need to be written (having re-written this code too many times in the C++ version as the details of memory allocations changed). * patgen supports various pattern-generation algorithms, as implemented in taDataGen in C++ emergent (e.g., PermutedBinary and FlipBits). * timer is a simple interval timing struct, used for benchmarking / profiling etc. * python contains a template Makefile that uses [GoPy](https://github.com/goki/gopy) to generate python bindings to the entire emergent system. See the leabra package version to actually run an example.
Package modl provides a non-declarative database modelling layer to ease the use of frequently repeated patterns in database-backed applications and centralize database use to ease profiling and reporting. It is a fork of the wonderful github.com/coopernurse/gorp package, but is rewritten to use github.com/jmoiron/sqlx as a base. Use of this source code is governed by a MIT-style license that can be found in the LICENSE file.
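A brief sketch of the gorp-like usage (method names follow the modl fork; treat the details as assumptions):

	import (
		"database/sql"
		"log"

		"github.com/jmoiron/modl"
		_ "github.com/mattn/go-sqlite3"
	)

	type Post struct {
		ID    int64
		Title string
	}

	func main() {
		db, err := sql.Open("sqlite3", ":memory:")
		if err != nil {
			log.Fatal(err)
		}

		dbm := modl.NewDbMap(db, modl.SqliteDialect{})
		// Map the Post struct to a "posts" table with an auto-increment key.
		dbm.AddTableWithName(Post{}, "posts").SetKeys(true, "id")
		if err := dbm.CreateTables(); err != nil {
			log.Fatal(err)
		}

		if err := dbm.Insert(&Post{Title: "hello"}); err != nil {
			log.Fatal(err)
		}
	}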
Package profiler contains tools to help profile a running Go service.
Package appflow provides the API client, operations, and parameter types for Amazon Appflow.

Welcome to the Amazon AppFlow API reference. This guide is for developers who need detailed information about the Amazon AppFlow API operations, data types, and errors. Amazon AppFlow is a fully managed integration service that enables you to securely transfer data between software as a service (SaaS) applications like Salesforce, Marketo, Slack, and ServiceNow, and Amazon Web Services like Amazon S3 and Amazon Redshift.

Use the following links to get started on the Amazon AppFlow API: Actions, Data types, Common parameters, Common errors. If you're new to Amazon AppFlow, we recommend that you review the Amazon AppFlow User Guide.

Amazon AppFlow API users can use vendor-specific mechanisms for OAuth, and include applicable OAuth attributes (such as auth-code and redirecturi) with the connector-specific ConnectorProfileProperties when creating a new connector profile using Amazon AppFlow API operations. For example, Salesforce users can refer to the Authorize Apps with OAuth documentation.