Command godep helps build packages reproducibly by fixing their dependencies. Save the currently-used dependencies to the Godeps file with `godep save`. Build the project using the saved dependencies with `godep go build` (or `godep go install`), or add the path printed by `godep path` to GOPATH and use the go tool directly.
Package cloudtrail provides the API client, operations, and parameter types for AWS CloudTrail. This is the CloudTrail API Reference. It provides descriptions of actions, data types, common parameters, and common errors for CloudTrail. CloudTrail is a web service that records Amazon Web Services API calls for your Amazon Web Services account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the Amazon Web Services API call, the source IP address, the request parameters, and the response elements returned by the service. As an alternative to the API, you can use one of the Amazon Web Services SDKs, which consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .NET, iOS, Android, etc.). The SDKs provide programmatic access to CloudTrail. For example, the SDKs handle cryptographically signing requests, managing errors, and retrying requests automatically. For more information about the Amazon Web Services SDKs, including how to download and install them, see Tools to Build on Amazon Web Services. See the CloudTrail User Guide for information about the data that is included with each Amazon Web Services API call listed in the log files.
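As a minimal sketch of using this client (assuming the aws-sdk-go-v2 config package and credentials/region available from the environment), constructing the client and listing recent events might look like this:

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudtrail"
)

func main() {
	ctx := context.Background()

	// Load region and credentials from the environment or shared config files.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := cloudtrail.NewFromConfig(cfg)

	// Look up recently recorded management events.
	out, err := client.LookupEvents(ctx, &cloudtrail.LookupEventsInput{
		MaxResults: aws.Int32(10),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, ev := range out.Events {
		fmt.Println(aws.ToString(ev.EventId), aws.ToString(ev.EventName))
	}
}
```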
Package cloudformation provides the API client, operations, and parameter types for AWS CloudFormation. CloudFormation allows you to create and manage Amazon Web Services infrastructure deployments predictably and repeatedly. You can use CloudFormation to leverage Amazon Web Services products, such as Amazon Elastic Compute Cloud, Amazon Elastic Block Store, Amazon Simple Notification Service, Elastic Load Balancing, and Amazon EC2 Auto Scaling to build highly reliable, highly scalable, cost-effective applications without creating or configuring the underlying Amazon Web Services infrastructure. With CloudFormation, you declare all your resources and dependencies in a template file. The template defines a collection of resources as a single unit called a stack. CloudFormation creates and deletes all member resources of the stack together and manages all dependencies between the resources for you. For more information about CloudFormation, see the CloudFormation product page. CloudFormation makes use of other Amazon Web Services products. If you need additional technical information about a specific Amazon Web Services product, you can find the product's technical documentation at docs.aws.amazon.com.
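A hedged sketch of creating the client and inspecting existing stacks (default credentials and region are assumed):

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cloudformation"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := cloudformation.NewFromConfig(cfg)

	// List the stacks in the current account and region along with their status.
	out, err := client.DescribeStacks(ctx, &cloudformation.DescribeStacksInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, stack := range out.Stacks {
		fmt.Println(aws.ToString(stack.StackName), stack.StackStatus)
	}
}
```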
Package redshift provides the API client, operations, and parameter types for Amazon Redshift. This is an interface reference for Amazon Redshift. It contains documentation for one of the programming or command line interfaces you can use to manage Amazon Redshift clusters. Note that Amazon Redshift is asynchronous, which means that some interfaces may require techniques, such as polling or asynchronous callback handlers, to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a change is applied immediately, on the next instance reboot, or during the next maintenance window. For a summary of the Amazon Redshift cluster management interfaces, go to Using the Amazon Redshift Management Interfaces. Amazon Redshift manages all the work of setting up, operating, and scaling a data warehouse: provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine. You can focus on using your data to acquire new insights for your business and customers. If you are a first-time user of Amazon Redshift, we recommend that you begin by reading the Amazon Redshift Getting Started Guide. If you are a database developer, the Amazon Redshift Database Developer Guide explains how to design, build, query, and maintain the databases that make up your data warehouse.
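Because the service is asynchronous, a common pattern is to describe clusters and poll their status. A minimal sketch, assuming default credentials and region:

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/redshift"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := redshift.NewFromConfig(cfg)

	// The status reported here is what you would poll until a change
	// (applied asynchronously) reaches the desired state.
	out, err := client.DescribeClusters(ctx, &redshift.DescribeClustersInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range out.Clusters {
		fmt.Println(aws.ToString(c.ClusterIdentifier), aws.ToString(c.ClusterStatus))
	}
}
```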
Package saml contains a partial implementation of the SAML standard in golang. SAML is a standard for identity federation, i.e. either allowing a third party to authenticate your users or allowing third parties to rely on us to authenticate their users.

In SAML parlance an Identity Provider (IDP) is a service that knows how to authenticate users. A Service Provider (SP) is a service that delegates authentication to an IDP. If you are building a service where users log in with someone else's credentials, then you are a Service Provider. This package supports implementing both service providers and identity providers.

The core package contains the implementation of SAML. The package samlsp provides helper middleware suitable for use in Service Provider applications. The package samlidp provides a rudimentary IDP service that is useful for testing or as a starting point for other integrations.

Version 0.4.0 introduces a few breaking changes to the _samlsp_ package in order to make the package more extensible, and to clean up the interfaces a bit. The default behavior remains the same, but you can now provide interface implementations of _RequestTracker_ (which tracks pending requests), _Session_ (which handles maintaining a session) and _OnError_ (which handles reporting errors). Public fields of _samlsp.Middleware_ have changed, so some usages may require adjustment. See [issue 231](https://github.com/crewjam/saml/issues/231) for details.

The option to provide an IDP metadata URL has been deprecated. Instead, we recommend that you use the `FetchMetadata()` function, or fetch the metadata yourself and use the new `ParseMetadata()` function, and pass the metadata in _samlsp.Options.IDPMetadata_. Similarly, the _HTTPClient_ field is now deprecated because it was only used for fetching metadata, which is no longer directly implemented. The fields that manage how cookies are set are deprecated as well. To customize how cookies are managed, provide a custom implementation of _RequestTracker_ and/or _Session_, perhaps by extending the default implementations. The deprecated fields have not been removed from the Options structure, but will be in the future. In particular we have deprecated the following fields in _samlsp.Options_:

- `Logger` - This was used to emit errors while validating, which is an anti-pattern.
- `IDPMetadataURL` - Instead use `FetchMetadata()`.
- `HTTPClient` - Instead pass httpClient to FetchMetadata.
- `CookieMaxAge` - Instead assign a custom CookieRequestTracker or CookieSessionProvider.
- `CookieName` - Instead assign a custom CookieRequestTracker or CookieSessionProvider.
- `CookieDomain` - Instead assign a custom CookieRequestTracker or CookieSessionProvider.

Let us assume we have a simple web application to protect. We'll modify this application so it uses SAML to authenticate users.

```golang
package main

import (
)
```

Each service provider must have a self-signed X.509 key pair established. You can generate your own with a tool such as openssl. We will use `samlsp.Middleware` to wrap the endpoint we want to protect. Middleware provides both an `http.Handler` to serve the SAML-specific URLs and a set of wrappers to require the user to be logged in. We also provide the URL where the service provider can fetch the metadata from the IDP at startup. In our case, we'll use [samltest.id](https://samltest.id/), an identity provider designed for testing.
```golang
package main

import (
)
```

Next we'll have to register our service provider with the identity provider to establish trust from the service provider to the IDP. For [samltest.id](https://samltest.id/), fetch your service provider metadata (the middleware serves it at `/saml/metadata`), then navigate to https://samltest.id/upload.php and upload the file you fetched.

Now you should be able to authenticate. The flow should look like this:

1. You browse to `localhost:8000/hello`
1. The middleware redirects you to `https://samltest.id/idp/profile/SAML2/Redirect/SSO`
1. samltest.id prompts you for a username and password.
1. samltest.id returns you an HTML document which contains an HTML form set up to POST to `localhost:8000/saml/acs`. The form is automatically submitted if you have JavaScript enabled.
1. The local service validates the response, issues a session cookie, and redirects you to the original URL, `localhost:8000/hello`.
1. This time when `localhost:8000/hello` is requested there is a valid session and so the main content is served.

Please see `example/idp/` for a substantially complete example of how to use the library and helpers to be an identity provider.

The SAML standard is huge and complex with many dark corners and strange, unused features. This package implements the most commonly used subset of these features required to provide a single sign-on experience. The package supports at least the subset of SAML known as [interoperable SAML](http://saml2int.org). This package supports the Web SSO profile. Message flows from the service provider to the IDP are supported using the HTTP Redirect binding and the HTTP POST binding. Message flows from the IDP to the service provider are supported via the HTTP POST binding. The package can produce signed SAML assertions, and can validate both signed and encrypted SAML assertions. It does not support signed or encrypted requests.

The _RelayState_ parameter allows you to pass user state information across the authentication flow. The most common use for this is to allow a user to request a deep link into your site, be redirected through the SAML login flow, and upon successful completion, be directed to the originally requested link, rather than the root. Unfortunately, _RelayState_ is less useful than it could be. Firstly, it is not authenticated, so anything you supply must be signed to avoid XSS or CSRF. Secondly, it is limited to 80 bytes in length, which precludes signing. (See section 3.6.3.1 of SAMLProfiles.)

The SAML specification is a collection of PDFs (sadly):

- [SAMLCore](http://docs.oasis-open.org/security/saml/v2.0/saml-core-2.0-os.pdf) defines data types.
- [SAMLBindings](http://docs.oasis-open.org/security/saml/v2.0/saml-bindings-2.0-os.pdf) defines the details of the HTTP requests in play.
- [SAMLProfiles](http://docs.oasis-open.org/security/saml/v2.0/saml-profiles-2.0-os.pdf) describes data flows.
- [SAMLConformance](http://docs.oasis-open.org/security/saml/v2.0/saml-conformance-2.0-os.pdf) includes a support matrix for various parts of the protocol.

[SAMLtest](https://samltest.id/) is a testing ground for SAML service and identity providers.

Please do not report security issues in the issue tracker. Rather, please contact me directly at ross@kndr.org ([PGP Key `78B6038B3B9DFB88`](https://keybase.io/crewjam)).
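To make the middleware setup described above concrete, here is a hedged sketch of a minimal service provider using samlsp against samltest.id. The certificate/key file names, the port, and the attribute name are illustrative assumptions, not part of the upstream example:

```golang
package main

import (
	"context"
	"crypto/rsa"
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"net/url"

	"github.com/crewjam/saml/samlsp"
)

func hello(w http.ResponseWriter, r *http.Request) {
	// The middleware stores the authenticated session in the request context.
	fmt.Fprintf(w, "Hello, %s!", samlsp.AttributeFromContext(r.Context(), "displayName"))
}

func main() {
	// Key pair generated out of band (for example with openssl); file names are placeholders.
	keyPair, err := tls.LoadX509KeyPair("myservice.cert", "myservice.key")
	if err != nil {
		panic(err)
	}
	keyPair.Leaf, err = x509.ParseCertificate(keyPair.Certificate[0])
	if err != nil {
		panic(err)
	}

	// Fetch the IDP metadata from samltest.id at startup.
	idpMetadataURL, _ := url.Parse("https://samltest.id/saml/idp")
	idpMetadata, err := samlsp.FetchMetadata(context.Background(), http.DefaultClient, *idpMetadataURL)
	if err != nil {
		panic(err)
	}

	rootURL, _ := url.Parse("http://localhost:8000")
	samlSP, err := samlsp.New(samlsp.Options{
		URL:         *rootURL,
		Key:         keyPair.PrivateKey.(*rsa.PrivateKey),
		Certificate: keyPair.Leaf,
		IDPMetadata: idpMetadata,
	})
	if err != nil {
		panic(err)
	}

	// RequireAccount wraps the protected endpoint; the middleware itself serves /saml/.
	http.Handle("/hello", samlSP.RequireAccount(http.HandlerFunc(hello)))
	http.Handle("/saml/", samlSP)
	http.ListenAndServe(":8000", nil)
}
```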
Package turn contains the public API for pion/turn, a toolkit for building TURN clients and servers.
Package validator implements value validations for structs and individual fields based on tags. It can also handle Cross Field and Cross Struct validation for nested structs. Both StructErrors and FieldError implement the Error interface, but their intended use is for development and debugging, not as production error messages. Why not a better error message? Because this library intends for you to handle your own error messages. Why should I handle my own errors? Many reasons; when building an internationalized application, for example, you need to know the field and which validation failed so that you can provide an error in the user's specific language. The hierarchical error structure is hard to work with sometimes; the Flatten function comes to the rescue and returns a map of FieldError's in which the field names are namespaced. Custom validation functions can be added. Cross Field Validation can be implemented, for example Start and End date range validation. Multiple validators on a field will process in the order defined. Bad validator definitions are not handled by the library. NOTE: baked-in cross field validation only compares fields on the same struct; if cross field + cross struct validation is needed, your own custom validator should be implemented. NOTE2: the comma is the default separator of validation tags; if you wish to have a comma included within the parameter (i.e. excludesall=,), you will need to use the UTF-8 hex representation 0x2C, which is replaced in the code as a comma, so the tag becomes excludesall=0x2C. The package documentation also lists the current built-in validators and notes on each. This package panics when bad input is provided; this is by design, as bad code like that should not make it to production.
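The tag-based usage reads roughly as follows. Note that this sketch uses the current go-playground/validator/v10 API (validator.New and ValidationErrors), which differs from the older StructErrors/Flatten API described above; the struct and tags are illustrative:

```golang
package main

import (
	"fmt"

	"github.com/go-playground/validator/v10"
)

type User struct {
	Email string `validate:"required,email"`
	Age   int    `validate:"gte=0,lte=130"`
}

func main() {
	validate := validator.New()

	u := User{Email: "not-an-email", Age: 200}
	if err := validate.Struct(u); err != nil {
		for _, fe := range err.(validator.ValidationErrors) {
			// fe.Field() is the struct field, fe.Tag() the validation that failed;
			// use these to build your own (possibly localized) error messages.
			fmt.Println(fe.Field(), fe.Tag())
		}
	}
}
```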
Package securityhub provides the API client, operations, and parameter types for AWS SecurityHub. Security Hub provides you with a comprehensive view of your security state in Amazon Web Services and helps you assess your Amazon Web Services environment against security industry standards and best practices. Security Hub collects security data across Amazon Web Services accounts, Amazon Web Services services, and supported third-party products and helps you analyze your security trends and identify the highest priority security issues. To help you manage the security state of your organization, Security Hub supports multiple security standards. These include the Amazon Web Services Foundational Security Best Practices (FSBP) standard developed by Amazon Web Services, and external compliance frameworks such as the Center for Internet Security (CIS), the Payment Card Industry Data Security Standard (PCI DSS), and the National Institute of Standards and Technology (NIST). Each standard includes several security controls, each of which represents a security best practice. Security Hub runs checks against security controls and generates control findings to help you assess your compliance against security best practices. In addition to generating control findings, Security Hub also receives findings from other Amazon Web Services services, such as Amazon GuardDuty and Amazon Inspector, and supported third-party products. This gives you a single pane of glass into a variety of security-related issues. You can also send Security Hub findings to other Amazon Web Services services and supported third-party products. Security Hub offers automation features that help you triage and remediate security issues. For example, you can use automation rules to automatically update critical findings when a security check fails. You can also leverage the integration with Amazon EventBridge to trigger automatic responses to specific findings. This guide, the Security Hub API Reference, provides information about the Security Hub API. This includes supported resources, HTTP methods, parameters, and schemas. If you're new to Security Hub, you might find it helpful to also review the Security Hub User Guide. The user guide explains key concepts and provides procedures that demonstrate how to use Security Hub features. It also provides information about topics such as integrating Security Hub with other Amazon Web Services services. In addition to interacting with Security Hub by making calls to the Security Hub API, you can use a current version of an Amazon Web Services command line tool or SDK. Amazon Web Services provides tools and SDKs that consist of libraries and sample code for various languages and platforms, such as PowerShell, Java, Go, Python, C++, and .NET. These tools and SDKs provide convenient, programmatic access to Security Hub and other Amazon Web Services services. They also handle tasks such as signing requests, managing errors, and retrying requests automatically. For information about installing and using the Amazon Web Services tools and SDKs, see Tools to Build on Amazon Web Services. With the exception of operations that are related to central configuration, Security Hub API requests are executed only in the Amazon Web Services Region that is currently active or in the specific Amazon Web Services Region that you specify in your request. Any configuration or settings change that results from the operation is applied only to that Region.
To make the same change in other Regions, call the same API operation in each Region in which you want to apply the change. When you use central configuration, API requests for enabling Security Hub, standards, and controls are executed in the home Region and all linked Regions. For a list of central configuration operations, see the Central configuration terms and concepts section of the Security Hub User Guide. The following throttling limits apply to Security Hub API operations. BatchEnableStandards - RateLimit of 1 request per second. BurstLimit of 1 request per second. GetFindings - RateLimit of 3 requests per second. BurstLimit of 6 requests per second. BatchImportFindings - RateLimit of 10 requests per second. BurstLimit of 30 requests per second. BatchUpdateFindings - RateLimit of 10 requests per second. BurstLimit of 30 requests per second. UpdateStandardsControl - RateLimit of 1 request per second. BurstLimit of 5 requests per second. All other operations - RateLimit of 10 requests per second. BurstLimit of 30 requests per second.
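A minimal sketch of constructing the client and retrieving findings in the currently active Region (default credentials and region assumed):

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/securityhub"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := securityhub.NewFromConfig(cfg)

	// Retrieve findings; keep the GetFindings throttling limits listed above in mind.
	out, err := client.GetFindings(ctx, &securityhub.GetFindingsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, f := range out.Findings {
		fmt.Println(aws.ToString(f.Id), aws.ToString(f.Title))
	}
}
```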
Package cognitoidentityprovider provides the API client, operations, and parameter types for Amazon Cognito Identity Provider. With the Amazon Cognito user pools API, you can configure user pools and authenticate users. To authenticate users from third-party identity providers (IdPs) in this API, you can link IdP users to native user profiles. Learn more about the authentication and authorization of federated users at Adding user pool sign-in through a third party and in the User pool federation endpoints and hosted UI reference. This API reference provides detailed information about API operations and object types in Amazon Cognito. Along with resource management operations, the Amazon Cognito user pools API includes classes of operations and authorization models for client-side and server-side authentication of users. You can interact with operations in the Amazon Cognito user pools API as any of the following subjects. An administrator who wants to configure user pools, app clients, users, groups, or other user pool functions. A server-side app, like a web application, that wants to use its Amazon Web Services privileges to manage, authenticate, or authorize a user. A client-side app, like a mobile app, that wants to make unauthenticated requests to manage, authenticate, or authorize a user. For more information, see Using the Amazon Cognito user pools API and user pool endpoints in the Amazon Cognito Developer Guide. With your Amazon Web Services SDK, you can build the logic to support operational flows in every use case for this API. You can also make direct REST API requests to Amazon Cognito user pools service endpoints. The following links can get you started with the CognitoIdentityProvider client in other supported Amazon Web Services SDKs: the Amazon Web Services Command Line Interface and the Amazon Web Services SDKs for .NET, C++, Go, Java V2, JavaScript, PHP V3, Python, and Ruby V3. To get started with an Amazon Web Services SDK, see Tools to Build on Amazon Web Services. For example actions and scenarios, see Code examples for Amazon Cognito Identity Provider using Amazon Web Services SDKs.
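A hedged sketch of the Go client in an administrator role, describing a user pool (the pool ID is a placeholder; default credentials and region assumed):

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/cognitoidentityprovider"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := cognitoidentityprovider.NewFromConfig(cfg)

	// Describe a user pool by ID; "us-east-1_EXAMPLE" is a placeholder value.
	out, err := client.DescribeUserPool(ctx, &cognitoidentityprovider.DescribeUserPoolInput{
		UserPoolId: aws.String("us-east-1_EXAMPLE"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.ToString(out.UserPool.Name))
}
```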
Fresh is a command line tool that builds and (re)starts your web application every time you save a Go or template file. If the web framework you are using supports the Fresh runner, it will show build errors in your browser. It currently works with Traffic (https://github.com/pilu/traffic), Martini (https://github.com/codegangsta/martini) and gocraft/web (https://github.com/gocraft/web). Fresh will watch for file events, and every time you create/modify/delete a file it will build and restart the application. If `go build` returns an error, it will log it in the tmp folder. Traffic (https://github.com/pilu/traffic) already has a middleware that shows the content of that file if it is present. This middleware is automatically added if you run a Traffic web app in dev mode with Fresh.
Package workerpool queues work to a limited number of goroutines. The purpose of the worker pool is to limit the concurrency of tasks executed by the workers. This is useful when performing tasks that require sufficient resources (CPU, memory, etc.), and running too many tasks at the same time would exhaust resources. A task is a function submitted to the worker pool for execution. Submitting tasks to this worker pool will not block, regardless of the number of tasks. Incoming tasks are immediately dispatched to an available worker. If no worker is immediately available, or there are already tasks waiting for an available worker, then the task is put on a waiting queue to wait for an available worker. The intent of the worker pool is to limit the concurrency of task execution, not limit the number of tasks queued to be executed. Therefore, this unbounded input of tasks is acceptable as the tasks cannot be discarded. If the number of inbound tasks is too many to even queue for pending processing, then the solution is outside the scope of workerpool. It should be solved by distributing load over multiple systems, and/or storing input for pending processing in intermediate storage such as a database, file system, distributed message queue, etc. This worker pool uses a single dispatcher goroutine to read tasks from the input task queue and dispatch them to worker goroutines. This allows for a small input channel, and lets the dispatcher queue as many tasks as are submitted when there are no available workers. Additionally, the dispatcher can adjust the number of workers as appropriate for the work load, without having to utilize locked counters and checks incurred on task submission. When no tasks have been submitted for a period of time, a worker is removed by the dispatcher. This is done until there are no more workers to remove. The minimum number of workers is always zero, because the time to start new workers is insignificant. It is advisable to use different worker pools for tasks that are bound by different resources, or that have different resource use patterns. For example, tasks that use X Mb of memory may need different concurrency limits than tasks that use Y Mb of memory. When there are no available workers to handle incoming tasks, the tasks are put on a waiting queue, in this implementation. In implementations mentioned in the credits below, these tasks were passed to goroutines. Using a queue is faster and has less memory overhead than creating a separate goroutine for each waiting task, allowing a much higher number of waiting tasks. Also, using a waiting queue ensures that tasks are given to workers in the order the tasks were received. This implementation builds on ideas from the following: http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang http://nesv.github.io/golang/2014/02/25/worker-queues-in-go.html
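A small illustration of the submit-and-wait workflow described above, assuming the github.com/gammazero/workerpool module path:

```golang
package main

import (
	"fmt"

	"github.com/gammazero/workerpool"
)

func main() {
	// At most 5 tasks run concurrently; Submit never blocks, excess tasks queue.
	wp := workerpool.New(5)

	for i := 0; i < 20; i++ {
		i := i // capture loop variable for the closure
		wp.Submit(func() {
			fmt.Println("processing task", i)
		})
	}

	// StopWait waits for all queued tasks to finish, then stops the workers.
	wp.StopWait()
}
```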
bindata converts any file into manageable Go source code. Useful for embedding binary data into a Go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, the default mode uses an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`, while the `.rodata` hack provides the same functionality but returns a byte slice that can not be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`.
This accepts a portion of a path name, which should be stripped off from the map keys and function names. For example, running without the `-prefix` flag yields map keys that include the full path of each input file, while running with `-prefix` strips the given path prefix from the keys and function names. With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool. A sketch of driving the generator through the Config struct is shown below.
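Since the options above are fields on the Config struct passed to Translate(), a hedged sketch of driving the generator from Go might look like the following; the import path (github.com/go-bindata/go-bindata) and the input/output paths are assumptions:

```golang
package main

import (
	"log"

	bindata "github.com/go-bindata/go-bindata"
)

func main() {
	cfg := bindata.NewConfig()
	cfg.Package = "assets"           // package name of the generated file
	cfg.Output = "assets/bindata.go" // where to write the generated code
	cfg.Prefix = "static/"           // strip this prefix from map keys and function names
	cfg.Tags = "embed"               // appended to the // +build line
	cfg.NoCompress = false           // keep gzip compression (the default)
	cfg.Debug = false                // true: generate stubs that read assets from disk
	cfg.Input = []bindata.InputConfig{{Path: "static", Recursive: true}}

	if err := bindata.Translate(cfg); err != nil {
		log.Fatal(err)
	}
}
```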
Package expression provides types and functions to create Amazon DynamoDB Expression strings, ExpressionAttributeNames maps, and ExpressionAttributeValues maps. The package represents the various DynamoDB Expressions as structs named accordingly. For example, ConditionBuilder represents a DynamoDB Condition Expression, an UpdateBuilder represents a DynamoDB Update Expression, and so on. The package examples show a sample ConditionExpression and how to build an equivalent ConditionBuilder. In order to retrieve the formatted DynamoDB Expression strings, call the getter methods on the Expression struct. To create the Expression struct, call the Build() method on the Builder struct. Because some input structs, such as QueryInput, can have multiple DynamoDB Expressions, multiple structs representing various DynamoDB Expressions can be added to the Builder struct. The sketch below shows a generic usage of the whole package. The ExpressionAttributeNames and ExpressionAttributeValues members of the input struct must always be assigned when using the Expression struct because all item attribute names and values are aliased. That means that if the ExpressionAttributeNames and ExpressionAttributeValues members are not assigned with the corresponding Names() and Values() methods, the DynamoDB operation will run into a logic error.
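A minimal sketch of the builder flow, assuming the aws-sdk-go-v2 variant of this package (feature/dynamodb/expression); the table and attribute names are illustrative:

```golang
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/feature/dynamodb/expression"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

func main() {
	// Filter: Year = 2015, projecting only Title and Rating.
	filt := expression.Name("Year").Equal(expression.Value(2015))
	proj := expression.NamesList(expression.Name("Title"), expression.Name("Rating"))

	expr, err := expression.NewBuilder().
		WithFilter(filt).
		WithProjection(proj).
		Build()
	if err != nil {
		log.Fatal(err)
	}

	// Names() and Values() must be assigned alongside the expression strings,
	// because all attribute names and values are aliased.
	input := &dynamodb.ScanInput{
		TableName:                 aws.String("Movies"),
		FilterExpression:          expr.Filter(),
		ProjectionExpression:      expr.Projection(),
		ExpressionAttributeNames:  expr.Names(),
		ExpressionAttributeValues: expr.Values(),
	}
	fmt.Println(aws.ToString(input.FilterExpression), input.ExpressionAttributeNames)
}
```

The resulting input can be passed to a dynamodb.Client Scan call; the same pattern applies to QueryInput, UpdateItemInput, and the other inputs that accept expressions.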
Package tableflip implements zero downtime upgrades. An upgrade spawns a new copy of argv[0] and passes file descriptors of used listening sockets to the new process. The old process exits once the new process signals readiness. Thus new code can use sockets allocated in the old process. This is similar to the approach used by nginx, but as a library. At any point in time there are one or two processes, with at most one of them in a non-ready state. A successful upgrade fully replaces all old configuration and code. To use this library with systemd you need to use the PIDFile option in the service file. Then pass /path/to/pid-file to New. You can use systemd-run to test your implementation: systemd-run will print a unit name, which you can use with systemctl to inspect the service. NOTES: Requires at least Go 1.9, since there is a race condition on the pipes used for communication between parent and child. If you're seeing "can't start process: no such file or directory", you're probably using "go run main.go"; for graceful reloads to work, you'll need to use "go build main.go". Tableflip does not work on Windows, because Windows does not have the mechanisms required to support this method of graceful restarting. It is still possible to include this package in code that runs on Windows, which may be necessary in certain development circumstances, but it will not provide zero downtime upgrades when running on Windows. See the `testing` package for an example of how to use it. The package examples show how to use the Upgrader with the graceful shutdown facilities of net/http and with a listener-based service.
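A rough sketch of wiring the Upgrader into a net/http service along the lines described above; the listen address, SIGHUP trigger, and shutdown timeout are illustrative choices:

```golang
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"

	"github.com/cloudflare/tableflip"
)

func main() {
	upg, err := tableflip.New(tableflip.Options{})
	if err != nil {
		log.Fatal(err)
	}
	defer upg.Stop()

	// Trigger an upgrade when SIGHUP is received.
	go func() {
		sig := make(chan os.Signal, 1)
		signal.Notify(sig, syscall.SIGHUP)
		for range sig {
			if err := upg.Upgrade(); err != nil {
				log.Println("upgrade failed:", err)
			}
		}
	}()

	// Listen inherits the socket from the old process during an upgrade.
	ln, err := upg.Listen("tcp", "localhost:8080")
	if err != nil {
		log.Fatal(err)
	}

	server := &http.Server{Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})}
	go server.Serve(ln)

	// Signal readiness so the old process can exit, then wait to be replaced ourselves.
	if err := upg.Ready(); err != nil {
		log.Fatal(err)
	}
	<-upg.Exit()

	// Give in-flight requests a moment to finish before exiting.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	server.Shutdown(ctx)
}
```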
Package virtualkubelet is currently just for providing docs for godoc. Virtual Kubelet is a project which aims to provide a library that can be consumed by other projects to build a Kubernetes node agent that performs the same basic role as the Kubelet, but with fully customizable behavior. *Note*: Virtual Kubelet is not the Kubelet. All of the business logic for virtual-kubelet is in the `node` package. The node package has controllers for managing the node in Kubernetes and running scheduled pods against a backend service. Consumers of this project implement that backend service, along with the code that wraps what the node package provides. In the interest of not duplicating examples, please see that package for how to get started using virtual kubelet. Virtual Kubelet supports propagation of logging and traces through a context. See the "log" and "trace" packages for how to use this. Errors produced by and consumed from the node package are expected to conform to error types defined in the "errdefs" package in order to be able to understand the kind of failure that occurred and react accordingly.
Package neptune provides the API client, operations, and parameter types for Amazon Neptune. Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. Amazon Neptune supports the popular graph models Property Graph and W3C's RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security. This interface reference for Amazon Neptune contains documentation for a programming or command line interface you can use to manage Amazon Neptune. Note that Amazon Neptune is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list some related topics from the user guide.
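A minimal sketch of the management client, listing DB clusters and their status (default credentials and region assumed):

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/neptune"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := neptune.NewFromConfig(cfg)

	// Because Neptune applies changes asynchronously, the status returned here
	// is what you would poll after issuing a modification.
	out, err := client.DescribeDBClusters(ctx, &neptune.DescribeDBClustersInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range out.DBClusters {
		fmt.Println(aws.ToString(c.DBClusterIdentifier), aws.ToString(c.Status))
	}
}
```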
Package codepipeline provides the API client, operations, and parameter types for AWS CodePipeline. This is the CodePipeline API Reference. This guide provides descriptions of the actions and data types for CodePipeline. Some functionality for your pipeline can only be configured through the API. For more information, see the CodePipeline User Guide. You can use the CodePipeline API to work with pipelines, stages, actions, and transitions. Pipelines are models of automated release processes. Each pipeline is uniquely named, and consists of stages, actions, and transitions. You can work with pipelines by calling CreatePipeline, DeletePipeline, GetPipeline, GetPipelineExecution, GetPipelineState, ListActionExecutions, ListPipelines, ListPipelineExecutions, StartPipelineExecution, StopPipelineExecution, and UpdatePipeline. Pipelines include stages. Each stage contains one or more actions that must complete before the next stage begins. A stage results in success or failure. If a stage fails, the pipeline stops at that stage and remains stopped until either a new version of an artifact appears in the source location, or a user takes action to rerun the most recent artifact through the pipeline. You can call GetPipelineState, which displays the status of a pipeline, including the status of stages in the pipeline, or GetPipeline, which returns the entire structure of the pipeline, including the stages of that pipeline. For more information about the structure of stages and actions, see CodePipeline Pipeline Structure Reference. Pipeline stages include actions, grouped into categories such as source or build actions performed in a stage of a pipeline. For example, you can use a source action to import artifacts into a pipeline from a source such as Amazon S3. Like stages, you do not work with actions directly in most cases, but you do define and interact with actions when working with pipeline operations such as CreatePipeline and GetPipelineState. Valid action categories are Source, Build, Test, Deploy, Approval, Invoke, and Compute. Pipelines also include transitions, which allow the transition of artifacts from one stage to the next in a pipeline after the actions in one stage complete. You can work with transitions by calling DisableStageTransition and EnableStageTransition. For third-party integrators or developers who want to create their own integrations with CodePipeline, the expected sequence varies from the standard API user. To integrate with CodePipeline, developers need to work with the following items: Jobs, which are instances of an action. For example, a job for a source action might import a revision of an artifact from a source. You can work with jobs by calling AcknowledgeJob, GetJobDetails, PollForJobs, PutJobFailureResult, and PutJobSuccessResult. Third party jobs, which are instances of an action created by a partner action and integrated into CodePipeline. Partner actions are created by members of the Amazon Web Services Partner Network. You can work with third party jobs by calling AcknowledgeThirdPartyJob, GetThirdPartyJobDetails, PollForThirdPartyJobs, PutThirdPartyJobFailureResult, and PutThirdPartyJobSuccessResult.
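A minimal sketch of the client listing the pipelines in the current account and region (default credentials and region assumed):

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/codepipeline"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := codepipeline.NewFromConfig(cfg)

	// ListPipelines returns a summary of each pipeline; GetPipeline and
	// GetPipelineState can then be used for structure and status details.
	out, err := client.ListPipelines(ctx, &codepipeline.ListPipelinesInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range out.Pipelines {
		fmt.Println(aws.ToString(p.Name))
	}
}
```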
Package sfn provides the API client, operations, and parameter types for AWS Step Functions. Step Functions coordinates the components of distributed applications and microservices using visual workflows. You can use Step Functions to build applications from individual components, each of which performs a discrete function, or task, allowing you to scale and change applications quickly. Step Functions provides a console that helps visualize the components of your application as a series of steps. Step Functions automatically triggers and tracks each step, and retries steps when there are errors, so your application executes predictably and in the right order every time. Step Functions logs the state of each step, so you can quickly diagnose and debug any issues. Step Functions manages operations and underlying infrastructure to ensure your application is available at any scale. You can run tasks on Amazon Web Services, your own servers, or any system that has access to Amazon Web Services. You can access and use Step Functions using the console, the Amazon Web Services SDKs, or an HTTP API. For more information about Step Functions, see the Step Functions Developer Guide. If you use the Step Functions API actions using Amazon Web Services SDK integrations, make sure the API actions are in camel case and parameter names are in Pascal case. For example, you could use Step Functions API action startSyncExecution and specify its parameter as StateMachineArn.
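A minimal sketch of the client listing the state machines in the current account and region (default credentials and region assumed):

```golang
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/sfn"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := sfn.NewFromConfig(cfg)

	// List state machines; the returned ARNs can be passed to StartExecution
	// and the other execution-related operations.
	out, err := client.ListStateMachines(ctx, &sfn.ListStateMachinesInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, sm := range out.StateMachines {
		fmt.Println(aws.ToString(sm.Name), aws.ToString(sm.StateMachineArn))
	}
}
```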