Package clientgo is the official Go client for the Kubernetes API. It provides a standard set of clients and tools for building applications, controllers, and operators that communicate with a Kubernetes cluster. The major packages are:

kubernetes: Contains the typed Clientset for interacting with built-in, versioned API objects (e.g., Pods, Deployments). This is the most common starting point.

dynamic: Provides a dynamic client that can perform operations on any Kubernetes object, including Custom Resources (CRs). It is essential for building controllers that work with CRDs.

discovery: Used to discover the API groups, versions, and resources supported by a Kubernetes cluster.

tools/cache: The foundation of the controller pattern. This package provides efficient caching and synchronization mechanisms (Informers and Listers) for building controllers.

tools/clientcmd: Provides methods for loading client configuration from kubeconfig files. This is essential for out-of-cluster applications and CLI tools.

rest: Provides a lower-level RESTClient that manages the details of communicating with the Kubernetes API server. It is useful for advanced use cases that require fine-grained control over requests, such as working with non-standard REST verbs.

There are two primary ways to configure a client to connect to the API server:

In-Cluster Configuration: For applications running inside a Kubernetes pod, the `rest.InClusterConfig()` function provides a straightforward way to configure the client. It automatically uses the pod's service account for authentication and is the recommended approach for controllers and operators.

Out-of-Cluster Configuration: For local development or command-line tools, the `clientcmd` package is used to load configuration from a kubeconfig file.

The `rest.Config` object allows for fine-grained control over client-side performance and reliability. Key settings include QPS, Burst, and Timeout.

Once configured, a client can be used to interact with objects in the cluster. The Typed Clientset (`kubernetes` package) provides a strongly typed interface for working with built-in Kubernetes objects. The Dynamic Client (`dynamic` package) can work with any object, including Custom Resources, using `unstructured.Unstructured` types. For Custom Resources (CRDs), the `k8s.io/code-generator` repository contains the tools to generate typed clients, informers, and listers. The `sample-controller` is the canonical example of this pattern.

Server-Side Apply is a patching strategy that allows multiple actors to share management of an object by tracking field ownership. This prevents actors from inadvertently overwriting each other's changes and provides a mechanism for resolving conflicts. The `applyconfigurations` package provides the necessary tools for this declarative approach.

Robust error handling is essential when interacting with the API. The `k8s.io/apimachinery/pkg/api/errors` package provides functions to inspect errors and check for common conditions, such as whether a resource was not found or already exists. This allows controllers to implement robust, idempotent reconciliation logic.

The controller pattern is central to Kubernetes. A controller observes the state of the cluster and works to bring it to the desired state. The `tools/cache` package provides the building blocks for this pattern. Informers watch the API server and maintain a local cache, Listers provide read-only access to the cache, and Workqueues decouple event detection from processing.
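A minimal sketch tying these pieces together, assuming a reachable cluster; the kubeconfig path, namespace, and pod name are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Prefer in-cluster configuration; fall back to a kubeconfig for
	// out-of-cluster development.
	config, err := rest.InClusterConfig()
	if err != nil {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		config, err = clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A typed read with idiomatic error handling.
	_, err = clientset.CoreV1().Pods("default").Get(context.TODO(), "my-pod", metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Println("pod does not exist")
	} else if err != nil {
		panic(err)
	}

	// A shared informer watches Pods and keeps a local cache in sync.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			fmt.Println("pod added:", obj.(*corev1.Pod).Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	// A real controller would run its work loop here.
	time.Sleep(10 * time.Second)
}
```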
In a high-availability deployment where multiple instances of a controller are running, leader election (`tools/leaderelection`) is used to ensure that only one instance is active at a time; a sketch follows below. Client-side feature gates allow for enabling or disabling experimental features in `client-go`. They can be configured via the `rest.Config` object.
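A minimal leader-election sketch over a Lease lock; the lock name, namespace, and identity string are placeholders:

```go
import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLeaderElection blocks, doing work only while this instance
// holds the lease.
func runWithLeaderElection(ctx context.Context, clientset *kubernetes.Clientset, id string) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "my-controller", Namespace: "default"},
		Client:     clientset.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Start the controller's work loop; return when ctx is done.
			},
			OnStoppedLeading: func() {
				// Leadership lost: stop doing work promptly.
			},
		},
	})
}
```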
Package sdk is the official AWS SDK for the Go programming language. The AWS SDK for Go provides APIs and utilities that developers can use to build Go applications that use AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK removes the complexity of coding directly against a web service interface. It hides a lot of the lower-level plumbing, such as authentication, request retries, and error handling. The SDK also includes helpful utilities on top of the AWS APIs that add additional capabilities and functionality. For example, the Amazon S3 Download and Upload Manager will automatically split up large objects into multiple parts and transfer them concurrently. See the s3manager package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/

Check out the Getting Started Guide and API Reference Docs, which detail the SDK's components and each AWS client the SDK supports. The Getting Started Guide provides examples and a detailed description of how to get set up with the SDK. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/welcome.html The API Reference Docs include a detailed breakdown of the SDK's components such as utilities and AWS clients. Use this as a reference for the Go types included with the SDK, such as AWS clients, API operations, and API parameters. https://docs.aws.amazon.com/sdk-for-go/api/

The SDK is composed of two main components, SDK core and service clients. The SDK core packages are all available under the aws package at the root of the SDK. Each client for a supported AWS service is available within its own package under the service folder at the root of the SDK.

aws - SDK core, provides common shared types such as Config, Logger, and utilities to make working with API parameters easier.

awserr - Provides the error interface that the SDK will use for all errors that occur in the SDK's processing. This includes service API response errors as well. The Error type is made up of a code and message. Cast the SDK's returned error type to awserr.Error and call the Code method to compare the returned error to specific error codes. See the package's documentation for additional values that can be extracted, such as RequestId.

credentials - Provides the types and built-in credential providers the SDK will use to retrieve AWS credentials to make API requests with. Nested under this folder are also additional credential providers such as stscreds for assuming IAM roles, and ec2rolecreds for EC2 Instance roles.

endpoints - Provides the AWS Regions and Endpoints metadata for the SDK. Use this to look up AWS service endpoint information such as which services are in a region, and what regions a service is in. Constants are also provided for all region identifiers, e.g. UsWest2RegionID for "us-west-2".

session - Provides initial default configuration, and loads configuration from external sources such as the environment and shared credentials file.

request - Provides the API request sending and retry logic for the SDK. This package also includes utilities for defining your own request retryer, and configuring how the SDK processes the request.

service - Clients for AWS services. All services supported by the SDK are available under this folder.

The SDK includes the Go types and utilities you can use to make requests to AWS service APIs. Within the service folder at the root of the SDK you'll find a package for each AWS service the SDK supports.
All service clients follow a common pattern of creation and usage. When creating a client for an AWS service you'll first need to have a Session value constructed. The Session provides configuration that can be shared between your service clients. When service clients are created you can pass in additional configuration via the aws.Config type to override configuration provided in the Session, creating service client instances with custom configuration. Once the service's client is created you can use it to make API requests to the AWS service. These clients are safe to use concurrently.

In the AWS SDK for Go, you can configure settings for service clients, such as the log level and maximum number of retries. Most settings are optional; however, for each service client, you must specify a region and your credentials. The SDK uses these values to send requests to the correct AWS region and sign requests with the correct credentials. You can specify these values as part of a session or as environment variables. See the SDK's configuration guide for more information. https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/configuring-sdk.html See the session package documentation for more information on how to use Session with the SDK. https://docs.aws.amazon.com/sdk-for-go/api/aws/session/ See the Config type in the aws package for more information on configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

When using the SDK you'll generally need your AWS credentials to authenticate with AWS services. The SDK supports multiple methods of providing these credentials. By default the SDK will source credentials automatically from its default credential chain. See the session package for more information on this chain, and how to configure it. The common items in the credential chain are the following:

Environment Credentials - Set of environment variables that are useful when sub processes are created for specific roles.

Shared Credentials file (~/.aws/credentials) - This file stores your credentials based on a profile name and is useful for local development.

EC2 Instance Role Credentials - Use an EC2 Instance Role to assign credentials to applications running on an EC2 instance. This removes the need to manage credential files in production.

Credentials can also be configured in code by setting the Config's Credentials value to a custom provider, or by using one of the providers included with the SDK to bypass the default credential chain and use a custom one. This is helpful when you want to instruct the SDK to only use a specific set of credentials or providers. The example below creates a credential provider for assuming an IAM role, "myRoleARN", and configures the S3 service client to use that role for API requests. See the credentials package documentation for more information on credential providers included with the SDK, and how to customize the SDK's usage of credentials. https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials

The SDK has support for the shared configuration file (~/.aws/config). This support can be enabled by setting the environment variable "AWS_SDK_LOAD_CONFIG=1", or by enabling the feature in code when creating a Session via the Option's SharedConfigState parameter. In addition to the credentials you'll need to specify the region the SDK will use to make AWS API requests. In the SDK you can specify the region either with an environment variable, or directly in code when a Session or service client is created.
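A minimal sketch of that assume-role configuration ("myRoleARN" stands in for a real role ARN):

```go
import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// newS3WithRole returns an S3 client whose requests are signed with
// temporary credentials for the assumed role.
func newS3WithRole() *s3.S3 {
	// The Session draws configuration from the environment and shared
	// config files, and is shared by clients created from it.
	sess := session.Must(session.NewSession())

	// Assume the IAM role "myRoleARN" and configure the S3 client to use
	// those credentials for its API requests.
	creds := stscreds.NewCredentials(sess, "myRoleARN")
	return s3.New(sess, &aws.Config{Credentials: creds})
}
```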
The last value specified in code wins if the region is specified multiple ways. To set the region via the environment variable, set "AWS_REGION" to the region you want the SDK to use. Using this method to set the region will allow you to run your application in multiple regions without needing additional code in the application to select the region. The endpoints package includes constants for all regions the SDK knows. The values are all suffixed with RegionID. These values are helpful, because they reduce the need to type the region string manually. To set the region on a Session, set the aws package's Config struct Region field to the AWS region you want the service clients created from the session to use. This is helpful when you want to create multiple service clients, and all of the clients make API requests to the same region. See the endpoints package for the AWS Regions and Endpoints metadata. https://docs.aws.amazon.com/sdk-for-go/api/aws/endpoints/ In addition to setting the region when creating a Session you can also set the region on a per-service-client basis. This overrides the region of a Session. This is helpful when you want to create service clients in specific regions different from the Session's region. See the Config type in the aws package for more information and additional options such as setting the Endpoint, and other service client configuration options. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config

Once the client is created you can make an API request to the service. Each API method takes an input parameter, and returns the service response and an error. The SDK provides methods for making the API call in multiple ways. In this list we'll use the S3 ListObjects API as an example for the different ways of making API requests.

ListObjects - Base API operation that will make the API request to the service.

ListObjectsRequest - API methods suffixed with Request will construct the API request, but not send it. This is also helpful when you want to get a presigned URL for a request, and share the presigned URL instead of your application making the request directly.

ListObjectsPages - Same as the base API operation, but uses a callback to automatically handle pagination of the API's response.

ListObjectsWithContext - Same as the base API operation, but adds support for the Context pattern. This is helpful for controlling the canceling of in-flight requests. See the Go standard library context package for more information. This method also takes the request package's Option functional options as the variadic argument, for modifying how the request will be made or extracting information from the raw HTTP response.

ListObjectsPagesWithContext - Same as ListObjectsPages, but adds support for the Context pattern. Similar to ListObjectsWithContext, this method also takes the request package's Option functional options as the variadic argument.

In addition to the API operations the SDK also includes several higher-level methods that abstract checking for and waiting for an AWS resource to be in a desired state. In this list we'll use WaitUntilBucketExists to demonstrate the different forms of waiters.

WaitUntilBucketExists - Method to make an API request to query an AWS service for a resource's state. Will return successfully when that state is reached.

WaitUntilBucketExistsWithContext - Same as WaitUntilBucketExists, but adds support for the Context pattern.
In addition, these methods take the request package's WaiterOptions to configure the waiter, and how the underlying request will be made by the SDK. The API method will document which error codes the service might return for the operation. These errors will also be available as const strings prefixed with "ErrCode" in the service client's package. If there are no errors listed in the API's SDK documentation you'll need to consult the AWS service's API documentation for the errors that could be returned. Pagination helper methods are suffixed with "Pages", and provide the functionality needed to round trip API page requests. Pagination methods take a callback function that will be called for each page of the API's response. Waiter helper methods provide the functionality to wait for an AWS resource state. These methods abstract the logic needed to check the state of an AWS resource, and wait until that resource is in a desired state. The waiter will block until the resource is in the state that is desired, an error occurs, or the waiter times out. If a resource times out, the error code returned will be request.WaiterResourceNotReadyErrorCode. A sketch combining a waiter with pagination follows below, and after it a complete working Go file which will upload a file to S3 and use the Context pattern to implement timeout logic that will cancel the request if it takes too long. That example highlights how to use sessions, create a service client, make a request, handle the error, and process the response. Deprecated: aws-sdk-go is deprecated. Use aws-sdk-go-v2. See https://aws.amazon.com/blogs/developer/announcing-end-of-support-for-aws-sdk-for-go-v1-on-july-31-2025/.
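First, a small sketch of a waiter combined with a paginated listing; the bucket name is supplied by the caller:

```go
import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/s3"
)

// listAllKeys waits for the bucket to exist, then prints every key,
// letting the Pages helper drive pagination.
func listAllKeys(svc *s3.S3, bucket string) error {
	if err := svc.WaitUntilBucketExists(&s3.HeadBucketInput{
		Bucket: aws.String(bucket),
	}); err != nil {
		return err
	}
	return svc.ListObjectsPages(&s3.ListObjectsInput{
		Bucket: aws.String(bucket),
	}, func(page *s3.ListObjectsOutput, lastPage bool) bool {
		for _, obj := range page.Contents {
			fmt.Println(*obj.Key)
		}
		return true // keep fetching pages
	})
}
```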
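And a minimal version of the complete upload program described above; it reads the object body from stdin, and the region constant is illustrative:

```go
package main

import (
	"context"
	"flag"
	"fmt"
	"os"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/endpoints"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	bucket := flag.String("b", "", "bucket to upload to")
	key := flag.String("k", "", "object key")
	timeout := flag.Duration("d", 10*time.Second, "upload timeout")
	flag.Parse()

	// Region set in code; the endpoints constants avoid hand-typed strings.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String(endpoints.UsWest2RegionID),
	}))
	svc := s3.New(sess)

	ctx, cancel := context.WithTimeout(context.Background(), *timeout)
	defer cancel()

	// The upload is canceled if the context's deadline passes while the
	// request is still in flight.
	_, err := svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
		Bucket: bucket,
		Key:    key,
		Body:   os.Stdin,
	})
	if err != nil {
		if aerr, ok := err.(awserr.Error); ok && aerr.Code() == request.CanceledErrorCode {
			fmt.Fprintf(os.Stderr, "upload canceled due to timeout, %v\n", err)
		} else {
			fmt.Fprintf(os.Stderr, "failed to upload object, %v\n", err)
		}
		os.Exit(1)
	}
	fmt.Printf("successfully uploaded file to %s/%s\n", *bucket, *key)
}
```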
Package swagger (2.0) provides a powerful interface to your API. It contains an implementation of Swagger 2.0: it knows how to serialize, deserialize and validate swagger specifications. Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability. We created Swagger to help fulfill the promise of APIs. Swagger helps companies like Apigee, Getty Images, Intuit, LivingSocial, McKesson, Microsoft, Morningstar, and PayPal build the best possible services with RESTful APIs. Now in version 2.0, Swagger is more enabling than ever. And it's 100% open source software. More detailed documentation is available at https://goswagger.io. The implementation also provides a number of command line tools to help when working with swagger: a spec validator, a command to generate a server for a swagger spec document, a command to generate a client for a swagger spec document, and a command to generate a swagger spec document for a go application. There are several other sub commands available for the generate command. You're free to add files to the directories the generated code lands in, but the files generated by the generator itself will be regenerated on following generation runs, so any changes to those files will be lost. However, extra files you create won't be lost, so they are safe to use for customizing the application to your needs.
Package mtl provides access to Apple's Metal API (https://developer.apple.com/documentation/metal). Package mtl requires macOS version 10.13 or newer. This package is in very early stages of development. The API will change when opportunities for improvement are discovered; it is not yet frozen. Less than 20% of the Metal API surface is implemented. Current functionality is sufficient to render very basic geometry.
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID. In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With a suitable type definition, we can extract the document's data into a value of type State. Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use queries to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include '<', '<=', '>', '>=', '==', 'in', 'array-contains', and 'array-contains-any'. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Firestore supports similarity search over embedding vectors. See Query.FindNearest for details. You can partition the documents of a Collection Group, allowing for smaller subqueries. You can also serialize/deserialize queries, making it possible to run or stream the queries elsewhere; in another process or on another machine, for instance. Use a transaction to execute reads and writes atomically. All reads must happen before any writes.
Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it, as in the sketch below. This package supports the Cloud Firestore emulator, which is useful for testing and development. Environment variables are used to indicate that Firestore traffic should be directed to the emulator instead of the production Firestore service. To install and run the emulator, see the documentation at https://cloud.google.com/sdk/gcloud/reference/beta/emulators/firestore/. Once the emulator is running, set FIRESTORE_EMULATOR_HOST to the API endpoint.
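A minimal transaction sketch; the project ID, document path, and "Population" field are illustrative:

```go
import (
	"context"

	"cloud.google.com/go/firestore"
)

// incrementPopulation atomically reads a counter field and writes it back
// incremented; RunTransaction retries the function on contention.
func incrementPopulation(ctx context.Context) error {
	client, err := firestore.NewClient(ctx, "my-project-id")
	if err != nil {
		return err
	}
	defer client.Close()

	ref := client.Collection("States").Doc("California").Collection("Cities").Doc("SanFrancisco")
	return client.RunTransaction(ctx, func(ctx context.Context, tx *firestore.Transaction) error {
		doc, err := tx.Get(ref) // all reads happen before any writes
		if err != nil {
			return err
		}
		pop, err := doc.DataAt("Population")
		if err != nil {
			return err
		}
		// Firestore integers decode as int64.
		return tx.Update(ref, []firestore.Update{
			{Path: "Population", Value: pop.(int64) + 1},
		})
	})
}
```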
Package beego provides an MVC framework. beego: an open-source, high-performance, modular, full-stack web framework. It is used for rapid development of RESTful APIs, web apps and backend services in Go. beego is inspired by Tornado, Sinatra and Flask, with the added benefit of some Go-specific features such as interfaces and struct embedding. More information: http://beego.me
Package ebiten provides graphics and input API to develop a 2D game. You can start the game by calling the function RunGame. In the API document, 'the main thread' means the goroutine in init(), main() and their callees without a 'go' statement. It is assured that 'the main thread' runs on the OS main thread. There are some Ebitengine functions (e.g., DeviceScaleFactor) that must be called on the main thread under some conditions (typically, before ebiten.RunGame is called). `EBITENGINE_SCREENSHOT_KEY` environment variable specifies the key to take a screenshot. For example, if you run your game with `EBITENGINE_SCREENSHOT_KEY=q`, you can take a game screen's screenshot by pressing the Q key. This works only on desktops and browsers. `EBITENGINE_INTERNAL_IMAGES_KEY` environment variable specifies the key to dump all the internal images. This is valid only when the build tag 'ebitenginedebug' is specified. This works only on desktops and browsers. `EBITENGINE_GRAPHICS_LIBRARY` environment variable specifies the graphics library. If the specified graphics library is not available, RunGame returns an error. This environment variable works when RunGame is called or RunGameWithOptions is called with GraphicsLibraryAuto. It can take one of a fixed set of values. `EBITENGINE_DIRECTX` environment variable specifies various parameters for DirectX. You can specify multiple values separated by a comma. The default value is empty (i.e. no parameters). The options taking arguments are exclusive, and if multiple are specified, the value specified last is adopted. The possible values for the option "version" are "11" and "12". If the version is not specified, the default version 11 is adopted. On Xbox, the "version" option is ignored and DirectX 12 is always adopted. The option "featurelevel" is valid only for DirectX 12. The possible values are "11_0", "11_1", "12_0", "12_1", and "12_2". The default value is "11_0". `ebitenginedebug` outputs a log of graphics commands. This is useful to know what happens in Ebitengine. In general, the number of graphics commands affects the performance of your game. `ebitenginegldebug` enables a debug mode for OpenGL. This is valid only when the graphics library is OpenGL. This affects performance very much. `ebitenginesinglethread` disables Ebitengine's thread safety to unlock maximum performance. If you use this you will have to manage threads yourself. Functions like `SetWindowSize` will no longer be concurrent-safe with this build tag. They must be called from the main thread or the same goroutine as the given game's callback functions like Update. `ebitenginesinglethread` works only with desktops and consoles. `ebitenginesinglethread` was deprecated as of v2.7. Use RunGameOptions.SingleThread instead. `microsoftgdk` is for Microsoft GDK (e.g. Xbox). `nintendosdk` is for NintendoSDK (e.g. Nintendo Switch). `nintendosdkprofile` enables a profiler for NintendoSDK. `playstation5` is for PlayStation 5.
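A minimal runnable game, following the common hello-world shape for Ebitengine v2:

```go
package main

import (
	"log"

	"github.com/hajimehoshi/ebiten/v2"
	"github.com/hajimehoshi/ebiten/v2/ebitenutil"
)

// Game implements ebiten.Game; RunGame calls Update, Draw, and Layout.
type Game struct{}

// Update advances the game state by one tick.
func (g *Game) Update() error {
	return nil
}

// Draw renders one frame.
func (g *Game) Draw(screen *ebiten.Image) {
	ebitenutil.DebugPrint(screen, "Hello, World!")
}

// Layout returns the logical screen size, which is scaled to the window.
func (g *Game) Layout(outsideWidth, outsideHeight int) (int, int) {
	return 320, 240
}

func main() {
	ebiten.SetWindowSize(640, 480)
	ebiten.SetWindowTitle("Hello, World!")
	// RunGame blocks until the game exits or an error occurs; it must be
	// called from the main goroutine.
	if err := ebiten.RunGame(&Game{}); err != nil {
		log.Fatal(err)
	}
}
```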
Package buffalo is a Go web development eco-system, designed to make your life easier. Buffalo helps you to generate a web project that has everything from front-end (JavaScript, SCSS, etc.) to back-end (database, routing, etc.) already hooked up and ready to run. From there it provides easy APIs to build your web application quickly in Go. Buffalo **isn't just a framework**, it's a holistic web development environment and project structure that **lets developers get straight to the business** of, well, building their business.
Package ec2 provides the API client, operations, and parameter types for Amazon Elastic Compute Cloud. You can access the features of Amazon Elastic Compute Cloud (Amazon EC2) programmatically. For more information, see the Amazon EC2 Developer Guide.
SQL Schema migration tool for Go. Key features: To install the library and command line program, use the following: The main command is called sql-migrate. Each command requires a configuration file (which defaults to dbconfig.yml, but can be specified with the -config flag). This config file should specify one or more environments: The `table` setting is optional and will default to `gorp_migrations`. The environment that will be used can be specified with the -env flag (defaults to development). Use the --help flag in combination with any of the commands to get an overview of its usage: The up command applies all available migrations. By contrast, down will only apply one migration by default. This behavior can be changed for both by using the -limit parameter. The redo command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations. Use the status command to see the state of the applied migrations: If you are using MySQL, you must append ?parseTime=true to the datasource configuration. For example: See https://github.com/go-sql-driver/mysql#parsetime for more information. Import sql-migrate into your application: Set up a source of migrations; this can be from memory, from a set of files, or from bindata (more on that later): Then use the Exec function to upgrade your database: Note that n can be greater than 0 even if there is an error: any migration that succeeded will remain applied even if a later one fails. The full set of capabilities can be found in the API docs below. Migrations are defined in SQL files, which contain a set of SQL statements. Special comments are used to distinguish up and down migrations. You can put multiple statements in each block, as long as you end them with a semicolon (;). If you have complex statements which contain semicolons, use StatementBegin and StatementEnd to indicate boundaries: The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename. Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However some SQL commands (for example creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. In order to execute such a command in a migration, the migration can be run using the notransaction option: If you like your Go applications self-contained (that is: a single binary): use packr (https://github.com/gobuffalo/packr) to embed the migration files. Just write your migration files as usual, as a set of SQL files in a folder. Use the PackrMigrationSource in your application to find the migrations: If you already have a box and would like to use a subdirectory: As an alternative, but slightly less maintained, you can use bindata (https://github.com/shuLhan/go-bindata) to embed the migration files. Just write your migration files as usual, as a set of SQL files in a folder. Then use bindata to generate a .go file with the migrations embedded: The resulting bindata.go file will contain your migrations. Remember to regenerate your bindata.go file whenever you add/modify a migration (go generate will help here). Use the AssetMigrationSource in your application to find the migrations: Both Asset and AssetDir are functions provided by bindata. Then proceed as usual. Adding a new migration source means implementing MigrationSource.
The resulting slice of migrations will be executed in the given order, so it should usually be sorted by the Id field.
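A minimal sketch of programmatic use; the migrations directory and the "postgres" dialect are assumptions:

```go
import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // any database/sql driver matching the dialect
	migrate "github.com/rubenv/sql-migrate"
)

func migrateUp(db *sql.DB) {
	// Load migrations from a directory of SQL files. MemoryMigrationSource,
	// PackrMigrationSource, and AssetMigrationSource implement the same
	// MigrationSource interface.
	migrations := &migrate.FileMigrationSource{Dir: "db/migrations"}

	// Apply all pending migrations. n can be non-zero even when err is
	// non-nil: migrations that succeeded stay applied.
	n, err := migrate.Exec(db, "postgres", migrations, migrate.Up)
	if err != nil {
		log.Fatalf("applied %d migrations before failing: %v", n, err)
	}
	log.Printf("applied %d migrations", n)
}
```

Passing migrate.Down instead of migrate.Up unapplies migrations in the same way.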
Package kms provides the API client, operations, and parameter types for AWS Key Management Service. Key Management Service (KMS) is an encryption and key management web service. This guide describes the KMS operations that you can call programmatically. For general information about KMS, see the Key Management Service Developer Guide. KMS has replaced the term customer master key (CMK) with Key Management Service key and KMS key. The concept has not changed. To prevent breaking changes, KMS is keeping some variations of this term. Amazon Web Services provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Rust, Python, Ruby, .Net, macOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to KMS and other Amazon Web Services services. For example, the SDKs take care of tasks such as signing requests (see below), managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools for Amazon Web Services. We recommend that you use the Amazon Web Services SDKs to make programmatic API calls to KMS. If you need to use FIPS 140-2 validated cryptographic modules when communicating with Amazon Web Services, use one of the FIPS endpoints in your preferred Amazon Web Services Region. If you need to communicate over IPv6, use the dual-stack endpoint in your preferred Amazon Web Services Region. For more information see Service endpoints in the Key Management Service topic of the Amazon Web Services General Reference and Dual-stack endpoint support in the KMS Developer Guide. All KMS API calls must be signed and be transmitted using Transport Layer Security (TLS). KMS recommends you always use the latest supported TLS version. Clients must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes. Requests must be signed using an access key ID and a secret access key. We strongly recommend that you do not use your Amazon Web Services account root access key ID and secret access key for everyday work. You can use the access key ID and secret access key for an IAM user or you can use the Security Token Service (STS) to generate temporary security credentials and use those to sign requests. All KMS requests must be signed with Signature Version 4. KMS supports CloudTrail, a service that logs Amazon Web Services API calls and related events for your Amazon Web Services account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to KMS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the CloudTrail User Guide. For more information about credentials and request signing, see the following: Amazon Web Services Security Credentials, Temporary Security Credentials, and the Signature Version 4 Signing Process. Of the API operations discussed in this guide, the following will prove the most useful for most applications. You will likely perform operations other than these, such as creating keys and assigning policies, by using the console. A sketch of a basic encrypt/decrypt round trip with this client follows below.
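A minimal sketch of an encrypt/decrypt round trip; the key alias is a placeholder:

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/kms"
)

// roundTrip encrypts plaintext under a KMS key and decrypts it again.
func roundTrip(ctx context.Context, plaintext []byte) ([]byte, error) {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return nil, err
	}
	client := kms.NewFromConfig(cfg)

	// Encrypt under a KMS key; "alias/my-key" stands in for a real key.
	enc, err := client.Encrypt(ctx, &kms.EncryptInput{
		KeyId:     aws.String("alias/my-key"),
		Plaintext: plaintext,
	})
	if err != nil {
		return nil, err
	}

	// Decrypt; the key is inferred from metadata in the ciphertext blob.
	dec, err := client.Decrypt(ctx, &kms.DecryptInput{
		CiphertextBlob: enc.CiphertextBlob,
	})
	if err != nil {
		return nil, err
	}
	return dec.Plaintext, nil
}
```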
Package ebiten provides graphics and input API to develop a 2D game. You can start the game by calling the function RunGame. In the API document, 'the main thread' means the goroutine in init(), main() and their callees without a 'go' statement. It is assured that 'the main thread' runs on the OS main thread. There are some Ebiten functions that must be called on the main thread under some conditions (typically, before ebiten.RunGame is called). `EBITEN_SCREENSHOT_KEY` environment variable specifies the key to take a screenshot. For example, if you run your game with `EBITEN_SCREENSHOT_KEY=q`, you can take a game screen's screenshot by pressing the Q key. This works only on desktops. `EBITEN_INTERNAL_IMAGES_KEY` environment variable specifies the key to dump all the internal images. This is valid only when the build tag 'ebitendebug' is specified. This works only on desktops. `ebitendebug` outputs a log of graphics commands. This is useful to know what happens in Ebiten. In general, the number of graphics commands affects the performance of your game. `ebitengl` forces the use of OpenGL in any environment.
Package sqs provides the API client, operations, and parameter types for Amazon Simple Queue Service. Welcome to the Amazon SQS API Reference. Amazon SQS is a reliable, highly scalable hosted queue for storing messages as they travel between applications or microservices. Amazon SQS moves data between distributed application components and helps you decouple these components. For information on the permissions you need to use this API, see Identity and access management in the Amazon SQS Developer Guide. You can use Amazon Web Services SDKs to access Amazon SQS using your favorite programming language. The SDKs perform tasks such as the following automatically: cryptographically signing your service requests, retrying requests, and handling error responses. For more information, see the Amazon SQS Product Page, Making API Requests, Amazon SQS Message Attributes, Amazon SQS Dead-Letter Queues, Amazon SQS in the Command Line Interface, and Regions and Endpoints.
Package dynamodbstreams provides the API client, operations, and parameter types for Amazon DynamoDB Streams. Amazon DynamoDB Streams provides API actions for accessing streams and processing stream records. To learn more about application development with Streams, see Capturing Table Activity with DynamoDB Streams in the Amazon DynamoDB Developer Guide.
Package sns provides the API client, operations, and parameter types for Amazon Simple Notification Service. Amazon Simple Notification Service (Amazon SNS) is a web service that enables you to build distributed web-enabled applications. Applications can use Amazon SNS to easily push real-time notification messages to interested subscribers over multiple delivery protocols. For more information about this product see the Amazon SNS product page. For detailed information about Amazon SNS features and their associated API calls, see the Amazon SNS Developer Guide. For information on the permissions you need to use this API, see Identity and access management in Amazon SNS in the Amazon SNS Developer Guide. We also provide SDKs that enable you to access Amazon SNS from your preferred programming language. The SDKs contain functionality that automatically takes care of tasks such as: cryptographically signing your service requests, retrying requests, and handling error responses. For a list of available SDKs, go to Tools for Amazon Web Services.
Gnostic is a tool for building better REST APIs through knowledge. Gnostic reads declarative descriptions of REST APIs that conform to the OpenAPI Specification, reports errors, resolves internal dependencies, and puts the results in a binary form that can be used in any language that is supported by the Protocol Buffer tools. Gnostic models are validated and typed. This allows API tool developers to focus on their product and not worry about input validation and type checking. Gnostic calls plugins that implement a variety of API implementation and support features including generation of client and server support code.
Package lambda provides the API client, operations, and parameter types for AWS Lambda. Lambda is a compute service that lets you run code without provisioning or managing servers. Lambda runs your code on a high-availability compute infrastructure and performs all of the administration of the compute resources, including server and operating system maintenance, capacity provisioning and automatic scaling, code monitoring and logging. With Lambda, you can run code for virtually any type of application or backend service. For more information about the Lambda service, see What is Lambda in the Lambda Developer Guide. The Lambda API Reference provides information about each of the API methods, including details about the parameters in each API request and response. You can use Software Development Kits (SDKs), Integrated Development Environment (IDE) Toolkits, and command line tools to access the API. For installation instructions, see Tools for Amazon Web Services. For a list of Region-specific endpoints that Lambda supports, see Lambda endpoints and quotas in the Amazon Web Services General Reference. When making the API calls, you will need to authenticate your request by providing a signature. Lambda supports signature version 4. For more information, see Signature Version 4 signing process in the Amazon Web Services General Reference. Because Amazon Web Services SDKs use the CA certificates from your computer, changes to the certificates on the Amazon Web Services servers can cause connection failures when you attempt to use an SDK. You can prevent these failures by keeping your computer's CA certificates and operating system up-to-date. If you encounter this issue in a corporate environment and do not manage your own computer, you might need to ask an administrator to assist with the update process. The following list shows minimum operating system and Java versions:

Microsoft Windows versions that have updates from January 2005 or later installed contain at least one of the required CAs in their trust list.

Mac OS X 10.4 with Java for Mac OS X 10.4 Release 5 (February 2007), Mac OS X 10.5 (October 2007), and later versions contain at least one of the required CAs in their trust list.

Red Hat Enterprise Linux 5 (March 2007), 6, and 7 and CentOS 5, 6, and 7 all contain at least one of the required CAs in their default trusted CA list.

Java 1.4.2_12 (May 2006), 5 Update 2 (March 2005), and all later versions, including Java 6 (December 2006), 7, and 8, contain at least one of the required CAs in their default trusted CA list.

When accessing the Lambda management console or Lambda API endpoints, whether through browsers or programmatically, you will need to ensure your client machines support any of the following CAs:

Amazon Root CA 1

Starfield Services Root Certificate Authority - G2

Starfield Class 2 Certification Authority

Root certificates from the first two authorities are available from Amazon trust services, but keeping your computer up-to-date is the more straightforward solution. To learn more about ACM-provided certificates, see Amazon Web Services Certificate Manager FAQs.
Package rds provides the API client, operations, and parameter types for Amazon Relational Database Service. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique. Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, Db2, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use. This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list some related topics from the user guide following it.

Amazon RDS API Reference

For the alphabetical list of API actions, see API Actions. For the alphabetical list of data types, see Data Types. For a list of common query parameters, see Common Parameters. For descriptions of the error codes, see Common Errors.

Amazon RDS User Guide

For a summary of the Amazon RDS interfaces, see Available RDS Interfaces. For more information about how to use the Query API, see Using the Query API.
Package apigateway provides the API client, operations, and parameter types for Amazon API Gateway. Amazon API Gateway helps developers deliver robust, secure, and scalable mobile and web application back ends. API Gateway allows developers to securely connect mobile and web applications to APIs that run on Lambda, Amazon EC2, or other publicly addressable web services that are hosted outside of AWS.
Package cloudfront provides the API client, operations, and parameter types for Amazon CloudFront. This is the Amazon CloudFront API Reference. This guide is for developers who need detailed information about CloudFront API actions, data types, and errors. For detailed information about CloudFront features, see the Amazon CloudFront Developer Guide.
Package whatsapp provides a developer API to interact with the WhatsApp Web servers.
Package redshift provides the API client, operations, and parameter types for Amazon Redshift. This is an interface reference for Amazon Redshift. It contains documentation for one of the programming or command line interfaces you can use to manage Amazon Redshift clusters. Note that Amazon Redshift is asynchronous, which means that some interfaces may require techniques, such as polling or asynchronous callback handlers, to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a change is applied immediately, on the next instance reboot, or during the next maintenance window. For a summary of the Amazon Redshift cluster management interfaces, go to Using the Amazon Redshift Management Interfaces. Amazon Redshift manages all the work of setting up, operating, and scaling a data warehouse: provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine. You can focus on using your data to acquire new insights for your business and customers. If you are a first-time user of Amazon Redshift, we recommend that you begin by reading the Amazon Redshift Getting Started Guide. If you are a database developer, the Amazon Redshift Database Developer Guide explains how to design, build, query, and maintain the databases that make up your data warehouse.
Package apiserver provides the machinery for building Kubernetes-style API servers. This library is the foundation for the Kubernetes API server (`kube-apiserver`), and is also the primary framework for developers building custom API servers to extend the Kubernetes API. An extension API server is a user-provided, standalone web server that registers itself with the main kube-apiserver to handle specific API groups. This allows developers to extend Kubernetes with their own APIs that behave like core Kubernetes APIs, complete with typed clients, authentication, authorization, and discovery. The `apiserver` library is composed of several key packages. The `GenericAPIServer` struct is the heart of any extension server. It is responsible for assembling and running the HTTP serving stack. See the runnable example for a demonstration of how to instantiate a `GenericAPIServer`. The mechanism that enables extension API servers is API aggregation. The primary apiserver (typically the kube-apiserver) acts as a proxy, forwarding requests for a specific API group (e.g., /apis/myextension.io/v1) to a registered extension server. Aggregation is configured using APIService objects. For most use cases, custom resources (CustomResourceDefinitions) are the preferred way to extend the Kubernetes API. The `pkg/admission` package provides a way to add admission policies directly into an apiserver. Admission plugins can be used to validate or mutate objects during write operations. The kube-apiserver uses admission plugins to provide a variety of core system capabilities. For most extension use cases, dynamic admission control using policies (ValidatingAdmissionPolicies or MutatingAdmissionPolicies) or webhooks (ValidatingWebhookConfiguration and MutatingWebhookConfiguration) is the preferred way to extend admission control.
Package elasticsearchservice provides the API client, operations, and parameter types for Amazon Elasticsearch Service. Use the Amazon Elasticsearch Configuration API to create, configure, and manage Elasticsearch domains. For sample code that uses the Configuration API, see the Amazon Elasticsearch Service Developer Guide. The guide also contains sample code for sending signed HTTP requests to the Elasticsearch APIs. The endpoint for configuration service requests is region-specific: es.region.amazonaws.com. For example, es.us-east-1.amazonaws.com. For a current list of supported regions and endpoints, see Regions and Endpoints.
Package securityhub provides the API client, operations, and parameter types for AWS SecurityHub. Security Hub CSPM provides you with a comprehensive view of your security state in Amazon Web Services and helps you assess your Amazon Web Services environment against security industry standards and best practices. Security Hub CSPM collects security data across Amazon Web Services accounts, Amazon Web Services services, and supported third-party products and helps you analyze your security trends and identify the highest priority security issues. To help you manage the security state of your organization, Security Hub CSPM supports multiple security standards. These include the Amazon Web Services Foundational Security Best Practices (FSBP) standard developed by Amazon Web Services, and external compliance frameworks such as the Center for Internet Security (CIS), the Payment Card Industry Data Security Standard (PCI DSS), and the National Institute of Standards and Technology (NIST). Each standard includes several security controls, each of which represents a security best practice. Security Hub CSPM runs checks against security controls and generates control findings to help you assess your compliance against security best practices. In addition to generating control findings, Security Hub CSPM also receives findings from other Amazon Web Services services, such as Amazon GuardDuty and Amazon Inspector, and supported third-party products. This gives you a single pane of glass into a variety of security-related issues. You can also send Security Hub CSPM findings to other Amazon Web Services services and supported third-party products. Security Hub CSPM offers automation features that help you triage and remediate security issues. For example, you can use automation rules to automatically update critical findings when a security check fails. You can also leverage the integration with Amazon EventBridge to trigger automatic responses to specific findings. This guide, the Security Hub CSPM API Reference, provides information about the Security Hub CSPM API. This includes supported resources, HTTP methods, parameters, and schemas. If you're new to Security Hub CSPM, you might find it helpful to also review the Security Hub CSPM User Guide. The user guide explains key concepts and provides procedures that demonstrate how to use Security Hub CSPM features. It also provides information about topics such as integrating Security Hub CSPM with other Amazon Web Services services. In addition to interacting with Security Hub CSPM by making calls to the Security Hub CSPM API, you can use a current version of an Amazon Web Services command line tool or SDK. Amazon Web Services provides tools and SDKs that consist of libraries and sample code for various languages and platforms, such as PowerShell, Java, Go, Python, C++, and .NET. These tools and SDKs provide convenient, programmatic access to Security Hub CSPM and other Amazon Web Services services. They also handle tasks such as signing requests, managing errors, and retrying requests automatically. For information about installing and using the Amazon Web Services tools and SDKs, see Tools to Build on Amazon Web Services. With the exception of operations that are related to central configuration, Security Hub CSPM API requests are executed only in the Amazon Web Services Region that is currently active or in the specific Amazon Web Services Region that you specify in your request.
Any configuration or settings change that results from the operation is applied only to that Region. To make the same change in other Regions, call the same API operation in each Region in which you want to apply the change. When you use central configuration, API requests for enabling Security Hub CSPM, standards, and controls are executed in the home Region and all linked Regions. For a list of central configuration operations, see the Central configuration terms and concepts section of the Security Hub CSPM User Guide. The following throttling limits apply to Security Hub CSPM API operations:

BatchEnableStandards - RateLimit of 1 request per second. BurstLimit of 1 request per second.

GetFindings - RateLimit of 3 requests per second. BurstLimit of 6 requests per second.

BatchImportFindings - RateLimit of 10 requests per second. BurstLimit of 30 requests per second.

BatchUpdateFindings - RateLimit of 10 requests per second. BurstLimit of 30 requests per second.

UpdateStandardsControl - RateLimit of 1 request per second. BurstLimit of 5 requests per second.

All other operations - RateLimit of 10 requests per second. BurstLimit of 30 requests per second.
Package cognitoidentityprovider provides the API client, operations, and parameter types for Amazon Cognito Identity Provider. With the Amazon Cognito user pools API, you can configure user pools and authenticate users. To authenticate users from third-party identity providers (IdPs) in this API, you can link IdP users to native user profiles. Learn more about the authentication and authorization of federated users at Adding user pool sign-in through a third party and in the User pool federation endpoints and hosted UI reference. This API reference provides detailed information about API operations and object types in Amazon Cognito. Along with resource management operations, the Amazon Cognito user pools API includes classes of operations and authorization models for client-side and server-side authentication of users. You can interact with operations in the Amazon Cognito user pools API as any of the following subjects. An administrator who wants to configure user pools, app clients, users, groups, or other user pool functions. A server-side app, like a web application, that wants to use its Amazon Web Services privileges to manage, authenticate, or authorize a user. A client-side app, like a mobile app, that wants to make unauthenticated requests to manage, authenticate, or authorize a user. For more information, see Using the Amazon Cognito user pools API and user pool endpoints in the Amazon Cognito Developer Guide. With your Amazon Web Services SDK, you can build the logic to support operational flows in every use case for this API. You can also make direct REST API requests to Amazon Cognito user pools service endpoints. The following links can get you started with the CognitoIdentityProvider client in other supported Amazon Web Services SDKs. Amazon Web Services Command Line Interface Amazon Web Services SDK for .NET Amazon Web Services SDK for C++ Amazon Web Services SDK for Go Amazon Web Services SDK for Java V2 Amazon Web Services SDK for JavaScript Amazon Web Services SDK for PHP V3 Amazon Web Services SDK for Python Amazon Web Services SDK for Ruby V3 To get started with an Amazon Web Services SDK, see Tools to Build on Amazon Web Services. For example actions and scenarios, see Code examples for Amazon Cognito Identity Provider using Amazon Web Services SDKs.
Package flutter combines the embedder API with GLFW and plugins. Flutter and Go on the desktop. go-flutter is in active development. APIs must be considered beta and may change.
Package graphql provides a GraphQL client implementation. For more information, see package github.com/shurcooL/githubv4, which is a specialized version targeting the GitHub GraphQL API v4. That package is driving the feature development. For now, see the README for more details. A sketch of basic usage follows below.
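A short usage sketch against a hypothetical endpoint, mirroring the struct-based query style this client uses:

```go
import (
	"context"
	"fmt"

	"github.com/shurcooL/graphql"
)

func queryExample(ctx context.Context) error {
	// Pass an *http.Client that injects authentication if the server
	// requires it; nil uses http.DefaultClient.
	client := graphql.NewClient("https://example.com/graphql", nil)

	// The GraphQL query is derived from the struct's shape and tags.
	var q struct {
		Human struct {
			Name   graphql.String
			Height graphql.Float
		} `graphql:"human(id: $id)"`
	}
	variables := map[string]interface{}{
		"id": graphql.ID("1000"),
	}
	if err := client.Query(ctx, &q, variables); err != nil {
		return err
	}
	fmt.Println(q.Human.Name, q.Human.Height)
	return nil
}
```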
Package ses provides the API client, operations, and parameter types for Amazon Simple Email Service. This document contains reference information for the Amazon Simple Email Service (Amazon SES) API, version 2010-12-01. This document is best used in conjunction with the Amazon SES Developer Guide. For a list of Amazon SES endpoints to use in service requests, see Regions and Amazon SES in the Amazon SES Developer Guide. This documentation contains reference information related to the following: Amazon SES API Actions, Amazon SES API Data Types, Common Parameters, and Common Errors.
Package base64Captcha supports digit, alphabet, arithmetic, audio, and digit-alphabet captchas. base64Captcha is used for fast development of RESTful APIs, web apps and backend services in Go. Give the package a string identifier and it returns a base64-encoded PNG string.
bindata converts any file into manageable Go source code. Useful for embedding binary data into a go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but uses the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`.
This accepts a portion of a path name, which should be stripped from the map keys and function names. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get: With the optional Tags field, you can specify any Go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool.
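A minimal sketch of driving these options programmatically, assuming the go-bindata package API (NewConfig, Translate, and the Config field names shown are from its README; the import path and exact fields may differ by fork and version):

    package main

    import (
        "log"

        "github.com/go-bindata/go-bindata"
    )

    func main() {
        cfg := bindata.NewConfig()
        cfg.Input = []bindata.InputConfig{{Path: "assets", Recursive: true}}
        cfg.Prefix = "assets/" // strip "assets/" from map keys and function names
        cfg.Tags = "!debug"    // appended to the generated // +build line
        if err := bindata.Translate(cfg); err != nil {
            log.Fatal(err)
        }
    }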
Package aw is a utility library/framework for Alfred 3 workflows https://www.alfredapp.com/ It provides APIs for interacting with Alfred (e.g. Script Filter feedback) and the workflow environment (variables, caches, settings). NOTE: AwGo is currently in development. The API *will* change and should not be considered stable until v1.0. Until then, vendoring AwGo (e.g. with dep or vgo) is strongly recommended. As of AwGo 0.14, all applicable features of Alfred 3.6 are supported. The main features are: Typically, you'd call your program's main entry point via Run(). This way, the library will rescue any panic, log the stack trace and show an error message to the user in Alfred. In the Script box (Language = "/bin/bash"): To generate results for Alfred to show in a Script Filter, use the feedback API of Workflow: You can set workflow variables (via feedback) with Workflow.Var, Item.Var and Modifier.Var. See Workflow.SendFeedback for more documentation. Alfred requires a different JSON format if you wish to set workflow variables. Use the ArgVars (named for its equivalent element in Alfred) struct to generate output from Run Script actions. Be sure to set TextErrors to true to prevent Workflow from generating Alfred JSON if it catches a panic: See ArgVars for more information. New() creates a *Workflow using the default values and workflow settings read from environment variables set by Alfred. You can change defaults by passing one or more Options to New(). If you do not want to use Alfred's environment variables, or they aren't set (i.e. you're not running the code in Alfred), you must pass an Env as the first Option to New() using CustomEnv(). A Workflow can be re-configured later using its Configure() method. Check out the _examples/ subdirectory for some simple, but complete, workflows which you can copy to get started. See the documentation for Option for more information on configuring a Workflow. AwGo can filter Script Filter feedback using a Sublime Text-like fuzzy matching algorithm. Workflow.Filter() sorts feedback Items against the provided query, removing those that do not match. Sorting is performed by subpackage fuzzy via the fuzzy.Sortable interface. See _examples/fuzzy for a basic demonstration. See _examples/bookmarks for a demonstration of implementing fuzzy.Sortable on your own structs and customising the fuzzy sort settings. AwGo automatically configures the default log package to write to STDERR (Alfred's debugger) and a log file in the workflow's cache directory. The log file is necessary because background processes aren't connected to Alfred, so their output is only visible in the log. It is rotated when it exceeds 1 MiB in size. One previous log is kept. AwGo detects when Alfred's debugger is open (Workflow.Debug() returns true) and in this case prepends filename:linenumber: to log messages. The Config struct (which is included in Workflow as Workflow.Config) provides an interface to the workflow's settings from the Workflow Environment Variables panel. https://www.alfredapp.com/help/workflows/advanced/variables/#environment Alfred exports these settings as environment variables, and you can read them ad-hoc with the Config.Get*() methods, and save values back to Alfred with Config.Set(). 
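A minimal sketch of the Run() and feedback pattern described above (item titles and fields are illustrative; the import path and methods are per the AwGo docs, but verify against your version):

    package main

    import aw "github.com/deanishe/awgo"

    var wf *aw.Workflow

    func run() {
        // Generate a Script Filter result and send it to Alfred.
        wf.NewItem("Hello, Alfred!").
            Subtitle("A minimal AwGo workflow").
            Valid(true)
        wf.SendFeedback()
    }

    func main() {
        wf = aw.New()
        wf.Run(run) // rescues panics and shows an error message in Alfred
    }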
Using Config.To() and Config.From(), you can "bind" your own structs to the settings in Alfred: And to save a struct's fields to the workflow's settings in Alfred: See the documentation for Config.To and Config.From for more information, and _examples/settings for a demo workflow based on the API. The Alfred struct provides methods for the rest of Alfred's AppleScript API. Amongst other things, you can use it to tell Alfred to open, to search for a query, or to browse/action files & directories. See documentation of the Alfred struct for more information. AwGo provides a basic, but useful, API for loading and saving data. In addition to reading/writing bytes and marshalling/unmarshalling to/from JSON, the API can auto-refresh expired cache data. See Cache and Session for the API documentation. Workflow has three caches tied to different directories: These all share the same API. The difference is in when the data go away. Data saved with Session are deleted after the user closes Alfred or starts using a different workflow. The Cache directory is in a system cache directory, so may be deleted by the system or "System Maintenance" tools. The Data directory lives with Alfred's application data and would not normally be deleted. Subpackage util provides several functions for running script files and snippets of AppleScript/JavaScript code. See util for documentation and examples. AwGo offers a simple API to start/stop background processes via Workflow's RunInBackground(), IsRunning() and Kill() methods. This is useful for running checks for updates and other jobs that hit the network or take a significant amount of time to complete, allowing you to keep your Script Filters extremely responsive. See _examples/update and _examples/workflows for demonstrations of this API.
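A hedged sketch of binding a struct to the workflow's settings with Config.To(); the env struct tag and field-name defaulting follow the AwGo docs, while the field names themselves are illustrative:

    // Settings are populated from the workflow's environment variables.
    type settings struct {
        Server   string `env:"SERVER_URL"` // explicit variable name
        Username string // defaults to the variable USERNAME
    }

    func loadSettings(wf *aw.Workflow) (*settings, error) {
        s := &settings{}
        if err := wf.Config.To(s); err != nil {
            return nil, err
        }
        return s, nil
    }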
Package limiter provides local and distributed rate limiting based on the Token Bucket algorithm. The primary entry point is the RateLimiter interface: The returned Decision contains whether the request is allowed, how many whole tokens remain, and timing hints for callers that want to set rate-limit headers (for example, Retry-After). This package implements a Token Bucket: Unlike fixed-window counters, token buckets naturally support bursts while still enforcing a long-term average rate. Limit defines the policy: Identity defines "who" is being rate-limited. It is split into: The package provides two implementations with the same Allow API: MemoryLimiter: an in-process limiter backed by a Go map. This is useful for unit tests, local development, and single-instance deployments. Because its state is local to the process, it does not enforce a global limit across multiple replicas. RedisLimiter: a distributed limiter backed by Redis. It uses a Lua script to perform the read/compute/write cycle atomically, which makes it safe to use across many application instances while enforcing a single global budget per identity. Recommendation: use RedisLimiter in production when you need a global limit, and MemoryLimiter in tests (as a fast, dependency-free stand-in). MemoryLimiter is safe for concurrent use by multiple goroutines (it uses a mutex to protect its internal map and per-identity state). RedisLimiter delegates concurrency safety to Redis and the go-redis client. Allow accepts a context.Context. RedisLimiter passes this context through to Redis operations so callers can enforce deadlines and cancel work to avoid cascading failures during partial outages. This package does not impose a "fail open" vs "fail closed" policy. If Redis is unavailable or the context expires, Allow returns a non-nil error and the caller decides whether to deny traffic (protect the backend) or allow traffic (maximize availability). Decision fields are intended to be directly consumable by application code: For a runnable example using MemoryLimiter, see ExampleMemoryLimiter in example_test.go. MemoryLimiter stores state in a process-local map keyed by: RedisLimiter stores state in Redis under keys prefixed with "limiter:" and uses a Redis hash with two fields: Redis keys are set to expire to avoid leaking memory for identities that stop sending requests. RedisLimiter is configured using the Functional Options pattern: Supported options:
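Tying the pieces above together, a minimal hedged sketch of the Allow flow; the constructor and field names below (NewMemoryLimiter, Limit, Identity, and the Decision fields) are assumptions based on this description, not confirmed API:

    func handle(ctx context.Context) {
        // Hypothetical usage; identifiers are assumptions from the text above.
        lim := limiter.NewMemoryLimiter()

        limit := limiter.Limit{Rate: 10, Burst: 20}          // 10 tokens/s, bursts to 20
        id := limiter.Identity{Scope: "api", Key: "user:42"} // "who" is limited

        dec, err := lim.Allow(ctx, id, limit)
        if err != nil {
            // Redis unavailable or context expired: the caller decides
            // whether to fail open (allow) or fail closed (deny).
            return
        }
        if !dec.Allowed {
            // Deny, optionally setting Retry-After from the Decision's
            // timing hints.
            return
        }
        // Proceed with the request.
    }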
A push notification server using the Gin framework, written in Go (Golang). Details about the gorush project can be found on its GitHub page: The pre-compiled binaries can be downloaded from the release page. Send Android notification Send iOS notification The default endpoint is APNs development. Please add the -production flag for the APNs production push endpoint. Run the gorush web server Get the status of the API server using the httpie tool: Simple send iOS notification example, the platform value is 1: Simple send Android notification example, the platform value is 2: For more details, see the documentation and example.
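A hedged Go sketch of the Android (platform 2) push example above, assuming gorush's /api/push endpoint and its default port 8088; the token and message are placeholders:

    package main

    import (
        "bytes"
        "log"
        "net/http"
    )

    func main() {
        // Platform 2 = Android; platform 1 would be iOS.
        payload := []byte(`{"notifications":[{"tokens":["device_token"],"platform":2,"message":"Hello from gorush"}]}`)
        resp, err := http.Post("http://localhost:8088/api/push", "application/json", bytes.NewReader(payload))
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
    }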
Package beego provides an MVC framework. beego: an open-source, high-performance, modular, full-stack web framework. It is used for rapid development of RESTful APIs, web apps and backend services in Go. beego is inspired by Tornado, Sinatra and Flask, with the added benefit of some Go-specific features such as interfaces and struct embedding. More information: http://beego.me
Package Faygo is a fast and concise Go web framework that can be used to develop high-performance web apps (especially APIs) with less code; just define a struct handler, and Faygo will automatically bind/verify the request parameters and generate the online API doc. Copyright 2016 HenryLee. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. A trivial example is: run result: StructHandler tag value description: List of supported param value types:
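The trivial example referred to above is not reproduced in this extract; the following is a hedged sketch of a Faygo struct handler, with the param tag syntax taken from the Faygo README (ctx.JSON and faygo.Map are assumed helpers, so verify against your version):

    import "github.com/henrylee2cn/faygo"

    type Search struct {
        Keyword string `param:"<in:query> <required> <desc:search keyword>"`
        Page    int    `param:"<in:query> <range: 1:100>"`
    }

    func (s *Search) Serve(ctx *faygo.Context) error {
        // Faygo has already bound and validated the query parameters.
        return ctx.JSON(200, faygo.Map{"keyword": s.Keyword, "page": s.Page})
    }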
Gojenkins is a Jenkins client in Go that exposes the Jenkins REST API in a more developer-friendly way.
Package httpx provides comprehensive utilities for building and executing HTTP requests with advanced features including fluent request building, automatic retry logic, and type-safe generic clients. Requirements: Go 1.22 or higher is required to use this package. Zero Dependencies: This package is built entirely using the Go standard library, with no external dependencies. This ensures maximum reliability, security, and minimal maintenance overhead for your projects. This package is designed to simplify HTTP client development in Go by providing: Build and execute a simple GET request: Use type-safe generic client: The RequestBuilder provides a fluent API for constructing HTTP requests with comprehensive input validation and error accumulation. All inputs are validated before the request is built, ensuring early error detection. Basic usage: Request builder features: Validation features: The builder validates all inputs and accumulates errors, allowing you to detect multiple issues at once: Builder reuse: The GenericClient provides type-safe HTTP requests using Go generics with automatic JSON marshaling and unmarshaling. This eliminates the need for manual type assertions and reduces boilerplate code. Basic usage: Generic client features: Configuration options: Integration with RequestBuilder: Multiple typed clients: Error handling: The package provides transparent retry logic that automatically retries failed requests using configurable backoff strategies. Retry logic preserves all request properties including headers and authentication. What gets retried: What does NOT get retried: Available retry strategies: Exponential Backoff (recommended for most use cases): strategy := httpx.ExponentialBackoff(500*time.Millisecond, 10*time.Second) // Wait times: 500ms → 1s → 2s → 4s → 8s (capped at maxDelay) Fixed Delay (useful for predictable retry timing): strategy := httpx.FixedDelay(1*time.Second) // Wait times: 1s → 1s → 1s Jitter Backoff (prevents thundering herd problem): strategy := httpx.JitterBackoff(500*time.Millisecond, 10*time.Second) // Wait times: random(0-500ms) → random(0-1s) → random(0-2s) Direct usage (advanced): The ClientBuilder provides a fluent API for configuring HTTP clients with retry logic, timeouts, and connection pooling. All settings are validated and default to production-ready values if out of range. Basic configuration: Advanced configuration: Combine with GenericClient: Default values (validated and adjusted if out of range): The package provides comprehensive error handling with specific error types: ErrorResponse (from GenericClient): Structured API errors with status code and message ClientError: Generic HTTP client operation errors RequestBuilder validation errors: Accumulated during building, reported at Build() time Error handling examples: Use type-safe clients for JSON APIs: client := httpx.NewGenericClient[User](...) response, err := client.Get("/users/1") // response.Data is User, not interface{} Configure retry logic for production: retryClient := httpx.NewClientBuilder(). WithMaxRetries(3). WithRetryStrategy(httpx.ExponentialBackoffStrategy). Build() Reuse HTTP clients (they're safe for concurrent use): client := httpx.NewGenericClient[User](...) 
// Use from multiple goroutines safely Use contexts for timeouts and cancellation: ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) defer cancel() req, _ := builder.WithContext(ctx).Build() Validate before building: if builder.HasErrors() { // Handle validation errors } Handle API errors appropriately: if apiErr, ok := err.(*httpx.ErrorResponse); ok { // Handle specific status codes } The package provides comprehensive proxy support for all HTTP clients. Proxy configuration works transparently across all client types and supports both HTTP and HTTPS proxies with optional authentication. Basic proxy configuration with ClientBuilder: HTTPS proxy: Proxy with authentication: Proxy with GenericClient: Proxy with retry client: Disable proxy (override environment variables): Common proxy ports: The proxy configuration: All utilities in this package are safe for concurrent use across multiple goroutines: Example concurrent usage: The package uses slog for debug logging. Enable debug logging to see: Enable debug logging: You can view the full documentation and examples locally by running: Then navigate to http://localhost:8080/pkg/github.com/slashdevops/httpx/ in your browser to browse the complete documentation, examples, and source code. For complete examples and API reference, see the README.md file or visit: https://pkg.go.dev/github.com/slashdevops/httpx Example demonstrates using exponential backoff. Example_withLogger_disabled demonstrates the default behavior with no logging. This example shows that by default, logging is disabled for clean, silent operation.
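Tying the quoted snippets together, a hedged sketch of a typed client combined with retry configuration; the NewGenericClient arguments are elided as "(...)" in the doc above, so the values passed here are assumptions for illustration only:

    // User is a hypothetical payload type for the typed client.
    type User struct {
        ID   int    `json:"id"`
        Name string `json:"name"`
    }

    func fetchUser() {
        // Retry configuration, as quoted above.
        retryClient := httpx.NewClientBuilder().
            WithMaxRetries(3).
            WithRetryStrategy(httpx.ExponentialBackoffStrategy).
            Build()
        _ = retryClient // could back a GenericClient, per the docs

        // Typed client; constructor arguments are assumed, not confirmed.
        client := httpx.NewGenericClient[User]("https://api.example.com", nil)
        response, err := client.Get("/users/1")
        if err != nil {
            if apiErr, ok := err.(*httpx.ErrorResponse); ok {
                fmt.Println("API error:", apiErr) // handle specific status codes
                return
            }
            fmt.Println("client error:", err)
            return
        }
        fmt.Println(response.Data.Name) // response.Data is a User, not interface{}
    }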
Sequence-based Go-native audio mixer for music apps See `demo/demo.go`: Play this Demo from the root of the project, using SDL 2.0 for hardware audio playback: Or export WAV via stdout `> demo/output.wav`: Game audio mixers are designed to play audio spontaneously, but when the timing is known in advance (e.g. sequence-based music apps) there is a demand for much greater accuracy in playback timing. Read the API documentation at https://godoc.org/gopkg.in/mix.v0 Mix seeks to solve the problem of audio mixing for the purpose of the playback of sequences where audio files and their playback timing are known in advance. Mix stores and mixes audio in native Go `[]float64` and natively implements Paul Vögler's "Loudness Normalization by Logarithmic Dynamic Range Compression" (details below). Author: Charney Kaye <hiya@charney.io> Even after selecting a hardware interface library such as http://www.portaudio.com/ or https://www.libsdl.org/, there remains a critical design problem to be solved. This design is a music application mixer. Most available options are geared towards game development. Game audio mixers offer playback timing accuracy of +/- 2 milliseconds. But that's totally unacceptable for music, specifically sequence-based sample playback. The design pattern particular to game design is that the timing of the audio is not known in advance; the timing that really matters is assembled in near-real-time in response to user interaction. In the field of music development, the timing is often known in advance, e.g. a sequencer: music is composed by specifying exactly how, when and which audio files will be played relative to the beginning of playback. Ergo, mix seeks to solve the problem of audio mixing for the purpose of the playback of sequences where audio files and their playback timing are known in advance. It seeks to do this with the absolute minimal logical overhead on top of the audio interface. Mix takes maximum advantage of Go by storing and mixing audio in native Go `[]float64` and natively implementing Paul Vögler's "Loudness Normalization by Logarithmic Dynamic Range Compression" (see The Mixing Algorithm below). To the Mix API, time is specified as a time.Duration-since-epoch, where the epoch is the moment that mix.Start() was called. Internally, time is tracked as samples-since-epoch at the master out playback frequency (e.g. 48000 Hz). This is most efficient because source audio is pre-converted to the master out playback frequency, and all audio maths are performed in terms of samples. Inspired by the theory paper "Mixing two digital audio streams with on the fly Loudness Normalization by Logarithmic Dynamic Range Compression" by Paul Vögler, 2012-04-20. This paper is published at http://www.voegler.eu/pub/audio/digital-audio-mixing-and-normalization.html. There's a demo implementation of mix included in the `demo/` folder in this repository. Run it using the defaults: Or specify options, e.g. using SDL for playback To show the help screen: Ubuntu: Mac OS X: More details for Linux, Mac and Windows: http://lazyfoo.net/SDL_tutorials/lesson01/index.php Ubuntu: Mac OS X: Windows: https://gnuradio.org/redmine/projects/gnuradio/wiki/PortAudioInstall. Best efforts will be made to preserve each API version in a release tag that can be parsed, e.g. http://gopkg.in/mix.v0 Mix in good health!
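A tiny illustration of the timing model described above: a time.Duration since the epoch (the moment mix.Start() was called) maps to a sample offset at the master out playback frequency. The constant and helper name here are illustrative, not part of the Mix API:

    import "time"

    const masterHz = 48000 // e.g. the master out playback frequency

    // samplesSinceEpoch converts a duration since mix.Start() into the
    // samples-since-epoch representation the mixer tracks internally.
    func samplesSinceEpoch(d time.Duration) int64 {
        return int64(d.Seconds() * masterHz)
    }

    // For example, 250*time.Millisecond at 48000 Hz is sample offset 12000.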
Package ebiten provides a graphics and input API for developing 2D games. You can start a game by calling the function Run.
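A minimal sketch of the Run-based entry point this doc describes, assuming the v1-era API (later Ebiten versions replaced Run with RunGame):

    package main

    import (
        "log"

        "github.com/hajimehoshi/ebiten"
        "github.com/hajimehoshi/ebiten/ebitenutil"
    )

    // update is called every frame with the screen to draw onto.
    func update(screen *ebiten.Image) error {
        return ebitenutil.DebugPrint(screen, "Hello, world!")
    }

    func main() {
        // Run starts the game loop: 320x240 logical pixels at 2x scale.
        if err := ebiten.Run(update, 320, 240, 2, "Hello, world!"); err != nil {
            log.Fatal(err)
        }
    }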
Package xapi provides a production-ready Twitter API client with 95-100% success rates. This package implements a high-performance Go client for Twitter's API with intelligent caching, automatic retry logic, and authentic transaction ID generation. It achieves production-grade reliability through real algorithm implementation and comprehensive error handling. Key features: Basic usage: The client automatically handles authentication, caching, and retry logic, providing a simple interface for complex Twitter API interactions. The simplest way to get started: The client supports three configuration modes: Production mode (default): Development mode (faster refresh, more logging): Custom configuration: The client provides access to 12 Twitter API endpoints: Core endpoints: Social graph endpoints: Content endpoints: Utility endpoints: Many methods support functional options for customization: The client implements intelligent error handling with automatic retry: Built-in monitoring provides insights into client performance: Intelligent caching: Retry logic: Rate limiting: The client follows clean architecture principles: Key components:
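The inline examples referenced above are not reproduced in this extract; the following hedged sketch only shows the general shape, and every identifier in it (NewClient, GetUserByScreenName) is an assumption based on the description, not confirmed API:

    func example(ctx context.Context) {
        // Hypothetical usage; identifiers are assumptions.
        client, err := xapi.NewClient() // production mode by default
        if err != nil {
            log.Fatal(err)
        }
        user, err := client.GetUserByScreenName(ctx, "golang")
        if err != nil {
            // Caching, retries and rate limiting are handled internally;
            // remaining errors are genuine failures.
            log.Fatal(err)
        }
        fmt.Println(user)
    }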
Pact Go enables consumer-driven contract testing, providing a mock service and DSL for the consumer project, and interaction playback and verification for the service provider project. Consumer-side Pact testing is an isolated test that ensures a given component is able to collaborate with another (remote) component. Pact will automatically start a Mock server in the background that will act as the collaborator's test double. This implies that any interactions expected on the Mock server will be validated, meaning a test will fail if all interactions were not completed, or if unexpected interactions were found: A typical consumer-side test would look something like this (see the hedged sketch below): If this test completed successfully, a Pact file should have been written to ./pacts/my_consumer-my_provider.json containing all of the interactions expected to occur between the Consumer and Provider. In addition to verbatim value matching, you have 3 useful matching functions in the `dsl` package that can increase expressiveness and reduce brittle test cases. Here is a complex example that shows how all 3 terms can be used together: This example will result in a response body from the mock server that looks like: See the examples in the dsl package and the matcher tests (https://github.com/pact-foundation/pact-go/blob/master/dsl/matcher_test.go) for more matching examples. NOTE: You will need to use valid Ruby regular expressions (http://ruby-doc.org/core-2.1.5/Regexp.html) and double escape backslashes. Read more about flexible matching (https://github.com/pact-foundation/pact-ruby/wiki/Regular-expressions-and-type-matching-with-Pact). Provider-side Pact testing involves verifying that the contract - the Pact file - can be satisfied by the Provider. A typical Provider-side test would look something like: The `VerifyProvider` will handle all verifications, treating them as subtests and giving you granular test reporting. If you don't like this behaviour, you may call `VerifyProviderRaw` directly and handle the errors manually. Note that `PactURLs` may be a list of local pact files or remote based urls (possibly from a Pact Broker - http://docs.pact.io/documentation/sharings_pacts.html). Pact reads the specified pact files (from remote or local sources) and replays the interactions against a running Provider. If all of the interactions are met we can say that both sides of the contract are satisfied and the test passes. When validating a Provider, you have 3 options to provide the Pact files: 1. Use "PactURLs" to specify the exact set of pacts to be replayed: Options 2 and 3 are particularly useful when you want to validate that your Provider is able to meet the contracts of what's in Production and also the latest in development. See this [article](http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) for more on this strategy. Each interaction in a pact should be verified in isolation, with no context maintained from the previous interactions. So how do you test a request that requires data to exist on the provider? Provider states are how you achieve this using Pact. Provider states also allow the consumer to make the same request with different expected responses (e.g. different response codes, or the same resource with a different subset of data). States are configured on the consumer side when you issue a dsl.Given() clause with a corresponding request/response pair.
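A hedged sketch of such a consumer-side test, using the v1-style dsl package (field and matcher names per the pact-go docs; verify against your version):

    func TestConsumer(t *testing.T) {
        pact := &dsl.Pact{
            Consumer: "my_consumer",
            Provider: "my_provider",
        }
        defer pact.Teardown()

        pact.AddInteraction().
            Given("User 1 exists").
            UponReceiving("A request for user 1").
            WithRequest(dsl.Request{
                Method: "GET",
                Path:   dsl.String("/users/1"),
            }).
            WillRespondWith(dsl.Response{
                Status: 200,
                Body:   dsl.Like(map[string]interface{}{"name": "Mary"}),
            })

        // Verify runs the test against the mock server and fails if any
        // expected interaction was not completed.
        if err := pact.Verify(func() error {
            _, err := http.Get(fmt.Sprintf("http://localhost:%d/users/1", pact.Server.Port))
            return err
        }); err != nil {
            t.Fatal(err)
        }
    }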
Configuring the provider is a little more involved, and (currently) requires running an API endpoint to configure any [provider states](http://docs.pact.io/documentation/provider_states.html) during the verification process. The option you must provide to the dsl.VerifyRequest is: An example route using the standard Go http package might look like this: See the examples or read more at http://docs.pact.io/documentation/provider_states.html. See the Pact Broker (http://docs.pact.io/documentation/sharings_pacts.html) documentation for more details on the Broker and this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) on how to make it work for you. Publishing using Go code: Publishing from the CLI: Use a cURL request like the following to PUT the pact to the right location, specifying your consumer name, provider name and consumer version. The following flags are required to use basic authentication when publishing or retrieving Pact files to/from a Pact Broker: Pact Go uses a simple log utility (logutils - https://github.com/hashicorp/logutils) to filter log messages. The CLI already contains flags to manage this, should you want to control log level in your tests, you can set it like so:
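The example route mentioned above is not reproduced in this extract; a hedged sketch of a provider-states setup endpoint using the standard net/http package follows (the route path and JSON shape are assumptions based on the provider-states docs):

    // Hypothetical provider-states endpoint; the request shape is assumed.
    http.HandleFunc("/setup", func(w http.ResponseWriter, r *http.Request) {
        var state struct {
            Consumer string `json:"consumer"`
            State    string `json:"state"`
        }
        if err := json.NewDecoder(r.Body).Decode(&state); err != nil {
            http.Error(w, err.Error(), http.StatusBadRequest)
            return
        }
        switch state.State {
        case "User 1 exists":
            // Seed the test data the interaction expects.
        }
        w.WriteHeader(http.StatusOK)
    })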
Package sourcecraft provides a Go client library for the SourceCraft API. SourceCraft is a comprehensive platform for software development lifecycle management, providing code repository management, CI/CD pipelines, error tracking, and more. The Client type provides methods to interact with the SourceCraft REST API. Create a client using NewClient with your API endpoint and authentication token: The package includes comprehensive webhook parsing and verification capabilities. Create a webhook parser to handle incoming webhook events: Parse incoming webhook requests and handle events: The webhook parser automatically verifies HMAC signatures using SHA-256 when a secret is configured, ensuring webhook authenticity. All Client methods are safe for concurrent use. The Client type uses internal synchronization to protect shared state.
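A hedged sketch of the client and webhook flow described above; NewClient is named in the doc, but its parameters and the webhook parser identifiers below are assumptions, not confirmed API:

    // Client creation, per the description above (arguments assumed).
    client := sourcecraft.NewClient("https://api.sourcecraft.example", "api-token")
    _ = client

    // Hypothetical webhook handling; parser identifiers are assumptions.
    parser := sourcecraft.NewWebhookParser("shared-secret") // enables HMAC SHA-256 verification
    http.HandleFunc("/webhooks", func(w http.ResponseWriter, r *http.Request) {
        event, err := parser.Parse(r) // verifies the signature before parsing
        if err != nil {
            http.Error(w, "invalid signature", http.StatusUnauthorized)
            return
        }
        _ = event // dispatch on the event type
    })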
Package hercules contains the functions which are needed to gather various statistics from a Git repository. The analysis is expressed in the form of a tree: there are nodes - "pipeline items" - which require some other nodes to be executed prior to themselves and in turn provide the data for dependent nodes. There are several service items which do not produce any useful statistics but rather provide the requirements for other items. The top-level items include: - BurndownAnalysis - line burndown statistics for project, files and developers. - CouplesAnalysis - coupling statistics for files and developers. - ShotnessAnalysis - structural hotness and couples, by any Babelfish UAST XPath (functions by default). The typical API usage is to initialize the Pipeline class: Then add the required analysis: This call will add all the needed intermediate pipeline items. Then link and execute the analysis tree: Finally extract the result: The actual usage example is cmd/hercules/root.go - the command line tool's code. Hercules depends heavily on https://github.com/src-d/go-git and leverages the diff algorithm through https://github.com/sergi/go-diff. Besides, BurndownAnalysis involves File and RBTree. These are low-level data structures which enable incremental blaming. File carries an instance of RBTree and the current line burndown state. RBTree implements the red-black balanced binary tree and is based on https://github.com/yasushi-saito/rbtree. Coupling stats are supposed to be further processed rather than observed directly. labours.py uses Swivel embeddings and visualises them in TensorFlow Projector. Shotness analysis as well as other UAST-featured items relies on [Babelfish](https://doc.bblf.sh) and requires the server to be running.
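The initialize/deploy/run/extract calls referenced above are not reproduced in this extract; a hedged sketch of the typical flow follows (method names per the hercules godoc, but import paths and exact signatures vary by version):

    import (
        "log"

        "gopkg.in/src-d/go-git.v4"
        "gopkg.in/src-d/go-git.v4/plumbing/object"
        "gopkg.in/src-d/hercules.v4"
    )

    func analyze(repository *git.Repository, commits []*object.Commit) {
        pipeline := hercules.NewPipeline(repository)
        // Deploy the desired leaf analysis; the needed intermediate
        // pipeline items are added automatically.
        burndown := pipeline.DeployItem(&hercules.BurndownAnalysis{}).(hercules.LeafPipelineItem)
        // Link and execute the analysis tree.
        pipeline.Initialize(nil)
        results, err := pipeline.Run(commits)
        if err != nil {
            log.Fatal(err)
        }
        // Extract the result for the deployed analysis.
        result := results[burndown]
        _ = result
    }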
Package swagger (2.0) provides a powerful interface to your API. Contains an implementation of Swagger 2.0: it knows how to serialize, deserialize and validate swagger specifications. Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability. We created Swagger to help fulfill the promise of APIs. Swagger helps companies like Apigee, Getty Images, Intuit, LivingSocial, McKesson, Microsoft, Morningstar, and PayPal build the best possible services with RESTful APIs. Now in version 2.0, Swagger is more enabling than ever. And it's 100% open source software. More detailed documentation is available at https://goswagger.io. Install: The implementation also provides a number of command line tools to help with working with swagger. Currently there is a spec validator tool: To generate a server for a swagger spec document: To generate a client for a swagger spec document: To generate a swagger spec document for a go application: There are several other sub commands available for the generate command. You're free to add files to the directories the generated code lands in, but the files generated by the generator itself will be regenerated on subsequent generation runs, so any changes to those files will be lost. However, extra files you create won't be lost, so they are safe to use for customizing the application to your needs.