Package codedeploy provides the API client, operations, and parameter types for AWS CodeDeploy. CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances running in your own facility, serverless Lambda functions, or applications in an Amazon ECS service. You can deploy a nearly unlimited variety of application content, such as an updated Lambda function, updated applications in an Amazon ECS service, code, web and configuration files, executables, packages, scripts, multimedia files, and so on. CodeDeploy can deploy application content stored in Amazon S3 buckets, GitHub repositories, or Bitbucket repositories. You do not need to make changes to your existing code before you can use CodeDeploy. CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications, without many of the risks associated with error-prone manual deployments. Use the information in this guide to help you work with the following CodeDeploy components: Application: A name that uniquely identifies the application you want to deploy. CodeDeploy uses this name, which functions as a container, to ensure the correct combination of revision, deployment configuration, and deployment group are referenced during a deployment. Deployment group: A set of individual instances, CodeDeploy Lambda deployment configuration settings, or an Amazon ECS service and network details. A Lambda deployment group specifies how to route traffic to a new version of a Lambda function. An Amazon ECS deployment group specifies the service created in Amazon ECS to deploy, a load balancer, and a listener to reroute production traffic to an updated containerized application. An Amazon EC2/On-premises deployment group contains individually tagged instances, Amazon EC2 instances in Amazon EC2 Auto Scaling groups, or both. All deployment groups can specify optional trigger, alarm, and rollback settings. Deployment configuration: A set of deployment rules and deployment success and failure conditions used by CodeDeploy during a deployment. Deployment: The process and the components used when updating a Lambda function, a containerized application in an Amazon ECS service, or when installing content on one or more instances. Application revisions: For a Lambda deployment, this is an AppSpec file that specifies the Lambda function to be updated and one or more functions to validate deployment lifecycle events. For an Amazon ECS deployment, this is an AppSpec file that specifies the Amazon ECS task definition, container, and port where production traffic is rerouted. For an EC2/On-premises deployment, this is an archive file that contains source content—source code, webpages, executable files, and deployment scripts—along with an AppSpec file. Revisions are stored in Amazon S3 buckets or GitHub repositories. For Amazon S3, a revision is uniquely identified by its Amazon S3 object key and its ETag, version, or both. For GitHub, a revision is uniquely identified by its commit ID. This guide also contains information to help you get details about the instances in your deployments, to make on-premises instances available for CodeDeploy deployments, to get details about a Lambda function deployment, and to get details about Amazon ECS service deployments. CodeDeploy User Guide CodeDeploy API Reference Guide CLI Reference for CodeDeploy CodeDeploy Developer Forum
Package eventbridge provides the API client, operations, and parameter types for Amazon EventBridge. Amazon EventBridge helps you to respond to state changes in your Amazon Web Services resources. When your resources change state, they automatically send events to an event stream. You can create rules that match selected events in the stream and route them to targets to take action. You can also use rules to take action on a predetermined schedule. For example, you can configure rules to: Automatically invoke a Lambda function to update DNS entries when an event notifies you that an Amazon EC2 instance enters the running state. Direct specific API records from CloudTrail to an Amazon Kinesis data stream for detailed analysis of potential security or availability risks. Periodically invoke a built-in target to create a snapshot of an Amazon EBS volume. For more information about the features of Amazon EventBridge, see the Amazon EventBridge User Guide.
Package tcpproxy lets users build TCP proxies, optionally making routing decisions based on HTTP/1 Host headers and the SNI hostname in TLS connections. Typical usage: Calling Run (or Start) on a proxy also starts all the necessary listeners. For each accepted connection, the rules for that ipPort are matched, in order. If one matches (currently HTTP Host, SNI, or always), then the connection is handed to the target. The two predefined Target implementations are: 1) DialProxy, proxying to another address (use the To func to return a DialProxy value), 2) TargetListener, making the matched connection available via a net.Listener.Accept call. But Target is an interface, so you can also write your own. Note that tcpproxy does not do any TLS encryption or decryption. It only (via DialProxy) copies bytes around. The SNI hostname in the TLS header is unencrypted, for better or worse. This package makes no API stability promises. If you depend on it, vendor it.
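A minimal sketch of the typical usage described above, routing by HTTP Host header and SNI with a catch-all fallback. The import path, addresses, and hostnames are illustrative assumptions, not part of the original text:

    package main

    import (
        "log"

        "inet.af/tcpproxy" // import path may differ depending on where the package is hosted/vendored
    )

    func main() {
        var p tcpproxy.Proxy
        // Route plaintext HTTP on :80 by Host header.
        p.AddHTTPHostRoute(":80", "foo.com", tcpproxy.To("10.0.0.1:8081"))
        // Route TLS on :443 by SNI hostname; bytes are copied, never decrypted.
        p.AddSNIRoute(":443", "foo.com", tcpproxy.To("10.0.0.1:4431"))
        // Fallback rule that always matches.
        p.AddRoute(":443", tcpproxy.To("10.0.0.1:4431"))
        log.Fatal(p.Run())
    }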
Package micro is a pluggable framework for microservices
Package otelecho instruments the labstack/echo package (https://github.com/labstack/echo). Currently only the routing of a received message can be instrumented. To do so, use the Middleware function.
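A small sketch of wiring the Middleware into an echo router; the service name, route, and address are illustrative:

    package main

    import (
        "net/http"

        "github.com/labstack/echo/v4"
        "go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho"
    )

    func main() {
        r := echo.New()
        // Instrument routing for every request handled by this router.
        r.Use(otelecho.Middleware("my-service"))
        r.GET("/hello", func(c echo.Context) error {
            return c.String(http.StatusOK, "hello")
        })
        r.Logger.Fatal(r.Start(":8080"))
    }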
Package route53resolver provides the API client, operations, and parameter types for Amazon Route 53 Resolver. When you create a VPC using Amazon VPC, you automatically get DNS resolution within the VPC from Route 53 Resolver. By default, Resolver answers DNS queries for VPC domain names such as domain names for EC2 instances or Elastic Load Balancing load balancers. Resolver performs recursive lookups against public name servers for all other domain names. You can also configure DNS resolution between your VPC and your network over a Direct Connect or VPN connection: DNS resolvers on your network can forward DNS queries to Resolver in a specified VPC. This allows your DNS resolvers to easily resolve domain names for Amazon Web Services resources such as EC2 instances or records in a Route 53 private hosted zone. For more information, see How DNS Resolvers on Your Network Forward DNS Queries to Route 53 Resolver in the Amazon Route 53 Developer Guide. You can configure Resolver to forward queries that it receives from EC2 instances in your VPCs to DNS resolvers on your network. To forward selected queries, you create Resolver rules that specify the domain names for the DNS queries that you want to forward (such as example.com), and the IP addresses of the DNS resolvers on your network that you want to forward the queries to. If a query matches multiple rules (example.com, acme.example.com), Resolver chooses the rule with the most specific match (acme.example.com) and forwards the query to the IP addresses that you specified in that rule. For more information, see How Route 53 Resolver Forwards DNS Queries from Your VPCs to Your Network in the Amazon Route 53 Developer Guide. Like Amazon VPC, Resolver is Regional. In each Region where you have VPCs, you can choose whether to forward queries from your VPCs to your network (outbound queries), from your network to your VPCs (inbound queries), or both.
The pubsub package provides facilities for the Publish/Subscribe pattern of message propagation, also known as overlay multicast. The implementation provides topic-based pubsub, with pluggable routing algorithms. The main interface to the library is the PubSub object. You can construct this object with the following constructors: - NewFloodSub creates an instance that uses the floodsub routing algorithm. - NewGossipSub creates an instance that uses the gossipsub routing algorithm. - NewRandomSub creates an instance that uses the randomsub routing algorithm. In addition, there is a generic constructor that creates a pubsub instance with a custom PubSubRouter interface. This procedure is currently reserved for internal use within the package. Once you have constructed a PubSub instance, you need to establish some connections to your peers; the implementation relies on ambient peer discovery, leaving bootstrap and active peer discovery up to the client. To publish a message to some topic, use Publish; you don't need to be subscribed to the topic in order to publish. To subscribe to a topic, use Subscribe; this will give you a subscription interface from which new messages can be pumped.
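A rough sketch of that flow, assuming the direct Publish/Subscribe methods described above; constructor signatures (for example libp2p.New) vary between releases, and the topic name is illustrative:

    package main

    import (
        "context"
        "log"

        "github.com/libp2p/go-libp2p"
        pubsub "github.com/libp2p/go-libp2p-pubsub"
    )

    func main() {
        ctx := context.Background()

        // Bootstrap and peer discovery are left to the application.
        host, err := libp2p.New()
        if err != nil {
            log.Fatal(err)
        }

        ps, err := pubsub.NewGossipSub(ctx, host)
        if err != nil {
            log.Fatal(err)
        }

        sub, err := ps.Subscribe("my-topic")
        if err != nil {
            log.Fatal(err)
        }
        go func() {
            for {
                msg, err := sub.Next(ctx) // pump new messages from the subscription
                if err != nil {
                    return
                }
                log.Printf("got: %s", msg.Data)
            }
        }()

        // Publishing does not require a subscription to the topic.
        if err := ps.Publish("my-topic", []byte("hello")); err != nil {
            log.Fatal(err)
        }
    }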
Pact Go enables consumer driven contract testing, providing a mock service and DSL for the consumer project, and interaction playback and verification for the service provider project. Consumer side Pact testing is an isolated test that ensures a given component is able to collaborate with another (remote) component. Pact will automatically start a Mock server in the background that will act as the collaborators' test double. This implies that any interactions expected on the Mock server will be validated, meaning a test will fail if all interactions were not completed, or if unexpected interactions were found: A typical consumer-side test would look something like this: If this test completed successfully, a Pact file should have been written to ./pacts/my_consumer-my_provider.json containing all of the interactions expected to occur between the Consumer and Provider. In addition to verbatim value matching, you have 3 useful matching functions in the `dsl` package that can increase expressiveness and reduce brittle test cases. Here is a complex example that shows how all 3 terms can be used together: This example will result in a response body from the mock server that looks like: See the examples in the dsl package and the matcher tests (https://github.com/pact-foundation/pact-go/blob/master/dsl/matcher_test.go) for more matching examples. NOTE: You will need to use valid Ruby regular expressions (http://ruby-doc.org/core-2.1.5/Regexp.html) and double escape backslashes. Read more about flexible matching (https://github.com/pact-foundation/pact-ruby/wiki/Regular-expressions-and-type-matching-with-Pact). Provider side Pact testing involves verifying that the contract - the Pact file - can be satisfied by the Provider. A typical Provider side test would look something like: The `VerifyProvider` will handle all verifications, treating them as subtests and giving you granular test reporting. If you don't like this behaviour, you may call `VerifyProviderRaw` directly and handle the errors manually. Note that `PactURLs` may be a list of local pact files or remote based urls (possibly from a Pact Broker - http://docs.pact.io/documentation/sharings_pacts.html). Pact reads the specified pact files (from remote or local sources) and replays the interactions against a running Provider. If all of the interactions are met we can say that both sides of the contract are satisfied and the test passes. When validating a Provider, you have 3 options to provide the Pact files: 1. Use "PactURLs" to specify the exact set of pacts to be replayed: Options 2 and 3 are particularly useful when you want to validate that your Provider is able to meet the contracts of what's in Production and also the latest in development. See this [article](http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) for more on this strategy. Each interaction in a pact should be verified in isolation, with no context maintained from the previous interactions. So how do you test a request that requires data to exist on the provider? Provider states are how you achieve this using Pact. Provider states also allow the consumer to make the same request with different expected responses (e.g. different response codes, or the same resource with a different subset of data). States are configured on the consumer side when you issue a dsl.Given() clause with a corresponding request/response pair.
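As a hedged illustration of the consumer-side shape described above (names, paths, and exact matcher types are assumptions and differ slightly between pact-go releases), a test might look roughly like:

    package main_test

    import (
        "fmt"
        "net/http"
        "testing"

        "github.com/pact-foundation/pact-go/dsl"
    )

    func TestConsumer(t *testing.T) {
        pact := &dsl.Pact{
            Consumer: "my_consumer",
            Provider: "my_provider",
        }
        defer pact.Teardown()

        // Provider state plus the expected request/response pair.
        pact.
            AddInteraction().
            Given("User foo exists").
            UponReceiving("A request for foo").
            WithRequest(dsl.Request{
                Method: "GET",
                Path:   dsl.String("/foobar"),
            }).
            WillRespondWith(dsl.Response{
                Status:  200,
                Headers: dsl.MapMatcher{"Content-Type": dsl.String("application/json")},
            })

        // The test function exercises the real consumer code against the mock server.
        if err := pact.Verify(func() error {
            _, err := http.Get(fmt.Sprintf("http://localhost:%d/foobar", pact.Server.Port))
            return err
        }); err != nil {
            t.Fatal(err)
        }
    }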
Configuring the provider is a little more involved, and (currently) requires running an API endpoint to configure any [provider states](http://docs.pact.io/documentation/provider_states.html) during the verification process. The option you must provide to the dsl.VerifyRequest is: An example route using the standard Go http package might look like this: See the examples or read more at http://docs.pact.io/documentation/provider_states.html. See the Pact Broker (http://docs.pact.io/documentation/sharings_pacts.html) documentation for more details on the Broker and this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) on how to make it work for you. Publishing using Go code: Publishing from the CLI: Use a cURL request like the following to PUT the pact to the right location, specifying your consumer name, provider name and consumer version. The following flags are required to use basic authentication when publishing or retrieving Pact files to/from a Pact Broker: Pact Go uses a simple log utility (logutils - https://github.com/hashicorp/logutils) to filter log messages. The CLI already contains flags to manage this, should you want to control log level in your tests, you can set it like so:
Package cloudwatchevents provides the API client, operations, and parameter types for Amazon CloudWatch Events. Amazon EventBridge helps you to respond to state changes in your Amazon Web Services resources. When your resources change state, they automatically send events to an event stream. You can create rules that match selected events in the stream and route them to targets to take action. You can also use rules to take action on a predetermined schedule. For example, you can configure rules to: Automatically invoke a Lambda function to update DNS entries when an event notifies you that an Amazon EC2 instance enters the running state. Direct specific API records from CloudTrail to an Amazon Kinesis data stream for detailed analysis of potential security or availability risks. Periodically invoke a built-in target to create a snapshot of an Amazon EBS volume. For more information about the features of Amazon EventBridge, see the Amazon EventBridge User Guide.
Package oauth1 is a Go implementation of the OAuth1 spec RFC 5849. It allows end-users to authorize a client (consumer) to access protected resources on their behalf (e.g. login) and allows clients to make signed and authorized requests on behalf of a user (e.g. API calls). It takes design cues from golang.org/x/oauth2, providing an http.Client which handles request signing and authorization. Package oauth1 implements the OAuth1 authorization flow and provides an http.Client which can sign and authorize OAuth1 requests. To implement "Login with X", use the https://github.com/dghubble/gologin packages which provide login handlers for OAuth1 and OAuth2 providers. To call the Twitter, Digits, or Tumblr OAuth1 APIs, use the higher level Go API clients. * https://github.com/dghubble/go-twitter * https://github.com/dghubble/go-digits * https://github.com/benfb/go-tumblr Perform the OAuth 1 authorization flow to ask a user to grant an application access to his/her resources via an access token. 1. When a user performs an action (e.g. "Login with X" button calls "/login" route) get an OAuth1 request token (temporary credentials). 2. Obtain authorization from the user by redirecting them to the OAuth1 provider's authorization URL to grant the application access. Receive the callback from the OAuth1 provider in a handler. 3. Acquire the access token (token credentials) which can later be used to make requests on behalf of the user. Check the examples to see this authorization flow in action from the command line, with Twitter PIN-based login and Tumblr login. Use an access Token to make authorized requests on behalf of a user. Check the examples to see Twitter and Tumblr requests in action.
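For the last step, a short sketch of signing requests with an access token; the credential values and example URL are placeholders:

    package main

    import (
        "log"

        "github.com/dghubble/oauth1"
    )

    func main() {
        // Consumer credentials identify the client application; the token holds the
        // user's access credentials obtained from the authorization flow above.
        config := oauth1.NewConfig("consumerKey", "consumerSecret")
        token := oauth1.NewToken("accessToken", "accessSecret")

        // httpClient automatically signs and authorizes requests on the user's behalf.
        httpClient := config.Client(oauth1.NoContext, token)

        resp, err := httpClient.Get("https://api.twitter.com/1.1/account/verify_credentials.json")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        log.Println(resp.Status)
    }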
This is inspired by Julien Schmidt's httprouter, in that it uses a patricia tree, but the implementation is rather different. Specifically, the routing rules are relaxed so that a single path segment may be a wildcard in one route and a static token in another. This gives a nice combination of high performance with a lot of convenience in designing the routing patterns.
Package transfer provides the API client, operations, and parameter types for AWS Transfer Family. Transfer Family is a fully managed service that enables the transfer of files over the File Transfer Protocol (FTP), File Transfer Protocol over SSL (FTPS), or Secure Shell (SSH) File Transfer Protocol (SFTP) directly into and out of Amazon Simple Storage Service (Amazon S3) or Amazon EFS. Additionally, you can use Applicability Statement 2 (AS2) to transfer files into and out of Amazon S3. Amazon Web Services helps you seamlessly migrate your file transfer workflows to Transfer Family by integrating with existing authentication systems, and providing DNS routing with Amazon Route 53 so nothing changes for your customers and partners, or their applications. With your data in Amazon S3, you can use it with Amazon Web Services services for processing, analytics, machine learning, and archiving. Getting started with Transfer Family is easy since there is no infrastructure to buy and set up.
Package atreugo is a high performance and extensible micro web framework with zero memory allocations in hot paths. It's built on top of fasthttp and provides the following features: Optimized for speed. Easily handles more than 100K qps and more than 1M concurrent keep-alive connections on modern hardware. Optimized for low memory usage. Easy 'Connection: Upgrade' support via RequestCtx.Hijack. Server provides the following anti-DoS limits: The number of concurrent connections. The number of concurrent connections per client IP. The number of requests per connection. Request read timeout. Response write timeout. Maximum request header size. Maximum request body size. Maximum request execution time. Maximum keep-alive connection lifetime. Early filtering out non-GET requests. * A lot of additional useful info is exposed to the request handler: * Middlewares support: * Easy routing: * Common responses (you can also use your own responses):
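A minimal routing sketch, assuming the v11 module path; the address and handler are illustrative:

    package main

    import (
        "log"

        "github.com/savsgio/atreugo/v11"
    )

    func main() {
        server := atreugo.New(atreugo.Config{Addr: "0.0.0.0:8000"})

        server.GET("/", func(ctx *atreugo.RequestCtx) error {
            return ctx.TextResponse("Hello, world!")
        })

        if err := server.ListenAndServe(); err != nil {
            log.Fatal(err)
        }
    }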
Package globalaccelerator provides the API client, operations, and parameter types for AWS Global Accelerator. This is the Global Accelerator API Reference. This guide is for developers who need detailed information about Global Accelerator API actions, data types, and errors. For more information about Global Accelerator features, see the Global Accelerator Developer Guide. Global Accelerator is a service in which you create accelerators to improve the performance of your applications for local and global users. Depending on the type of accelerator you choose, you can gain additional benefits. By using a standard accelerator, you can improve availability of your internet applications that are used by a global audience. With a standard accelerator, Global Accelerator directs traffic to optimal endpoints over the Amazon Web Services global network. For other scenarios, you might choose a custom routing accelerator. With a custom routing accelerator, you can use application logic to directly map one or more users to a specific endpoint among many endpoints. Global Accelerator is a global service that supports endpoints in multiple Amazon Web Services Regions but you must specify the US West (Oregon) Region to create, update, or otherwise work with accelerators. That is, for example, specify --region us-west-2 on Amazon Web Services CLI commands. By default, Global Accelerator provides you with static IP addresses that you associate with your accelerator. The static IP addresses are anycast from the Amazon Web Services edge network. For IPv4, Global Accelerator provides two static IPv4 addresses. For dual-stack, Global Accelerator provides a total of four addresses: two static IPv4 addresses and two static IPv6 addresses. With a standard accelerator for IPv4, instead of using the addresses that Global Accelerator provides, you can configure these entry points to be IPv4 addresses from your own IP address ranges that you bring to Global Accelerator (BYOIP). For a standard accelerator, the static IP addresses distribute incoming application traffic across multiple endpoint resources in multiple Amazon Web Services Regions, which increases the availability of your applications. Endpoints for standard accelerators can be Network Load Balancers, Application Load Balancers, Amazon EC2 instances, or Elastic IP addresses that are located in one Amazon Web Services Region or multiple Amazon Web Services Regions. For custom routing accelerators, you map traffic that arrives to the static IP addresses to specific Amazon EC2 servers in endpoints that are virtual private cloud (VPC) subnets. The static IP addresses remain assigned to your accelerator for as long as it exists, even if you disable the accelerator and it no longer accepts or routes traffic. However, when you delete an accelerator, you lose the static IP addresses that are assigned to it, so you can no longer route traffic by using them. You can use IAM policies like tag-based permissions with Global Accelerator to limit the users who have permissions to delete an accelerator. For more information, see Tag-based policies. For standard accelerators, Global Accelerator uses the Amazon Web Services global network to route traffic to the optimal regional endpoint based on health, client location, and policies that you configure. The service reacts instantly to changes in health or configuration to ensure that internet traffic from clients is always directed to healthy endpoints.
For more information about understanding and using Global Accelerator, see the Global Accelerator Developer Guide.
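A small sketch with the AWS SDK for Go v2, pinning the configuration to us-west-2 as required for accelerator management; listing accelerators is just an illustrative call:

    package main

    import (
        "context"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/globalaccelerator"
    )

    func main() {
        // Accelerators must be created and managed through the US West (Oregon) Region.
        cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-west-2"))
        if err != nil {
            log.Fatal(err)
        }

        client := globalaccelerator.NewFromConfig(cfg)
        out, err := client.ListAccelerators(context.TODO(), &globalaccelerator.ListAcceleratorsInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, acc := range out.Accelerators {
            log.Println(aws.ToString(acc.Name))
        }
    }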
Package skipper provides an HTTP routing library with flexible configuration as well as a runtime update of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request, forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then matching the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions: - method - path (optionally with wildcards) - path regular expressions - host regular expressions - headers - header regular expressions It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning, that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response. 
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically to the route. For further details, see the 'filters' package documentation. Each route has one of the following backends: HTTP endpoint, shunt, loopback or dynamic. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect. A dynamic route means that the final target will be defined in a filter. One of the filters in the chain must set the target backend url explicitly. Route definitions consist of the following: - request matching conditions (predicates) - filter chain (optional) - backend The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation. Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth. Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides four different data clients: - Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ . - Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper. - etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package. - static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates. Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package. Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests.
For details, see: https://godoc.org/github.com/zalando/skipper/circuit. Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins, however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created, that can be referenced in eskip definitions with the name 'Random': To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case more aggressive scale is needed, it is possible to setup Skipper in a cascade model, with multiple Skipper instances for specific route segments.
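As a sketch of the library-style startup mentioned above (the Options field names are assumptions based on the current API; the address and routes file are illustrative):

    package main

    import (
        "log"

        "github.com/zalando/skipper"
    )

    func main() {
        // Start the proxy with routes read from an eskip file.
        log.Fatal(skipper.Run(skipper.Options{
            Address:    ":9090",
            RoutesFile: "routes.eskip",
        }))
    }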
Package http provides an http.Handler and functions that are intended to be used to add tracing by wrapping existing handlers (with Handler) and routes (with WithRouteTag).
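A brief sketch showing both wrappers together; the route and operation names are illustrative:

    package main

    import (
        "io"
        "log"
        "net/http"

        "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
    )

    func main() {
        hello := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            io.WriteString(w, "Hello, world!\n")
        })

        mux := http.NewServeMux()
        // WithRouteTag records the matched route on spans for this handler.
        mux.Handle("/hello", otelhttp.WithRouteTag("/hello", hello))

        // NewHandler wraps the whole mux so every request is traced.
        log.Fatal(http.ListenAndServe(":8080", otelhttp.NewHandler(mux, "server")))
    }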
Package networkfirewall provides the API client, operations, and parameter types for AWS Network Firewall. This is the API Reference for Network Firewall. This guide is for developers who need detailed information about the Network Firewall API actions, data types, and errors. The REST API requires you to handle connection details, such as calculating signatures, handling request retries, and error handling. For general information about using the Amazon Web Services REST APIs, see Amazon Web Services APIs. To view the complete list of Amazon Web Services Regions where Network Firewall is available, see Service endpoints and quotas in the Amazon Web Services General Reference. To access Network Firewall using the IPv4 REST API endpoint: https://network-firewall..amazonaws.com To access Network Firewall using the Dualstack (IPv4 and IPv6) REST API endpoint: https://network-firewall..aws.api Alternatively, you can use one of the Amazon Web Services SDKs to access an API that's tailored to the programming language or platform that you're using. For more information, see Amazon Web Services SDKs. For descriptions of Network Firewall features, including step-by-step instructions on how to use them through the Network Firewall console, see the Network Firewall Developer Guide. Network Firewall is a stateful, managed, network firewall and intrusion detection and prevention service for Amazon Virtual Private Cloud (Amazon VPC). With Network Firewall, you can filter traffic at the perimeter of your VPC. This includes filtering traffic going to and coming from an internet gateway, NAT gateway, or over VPN or Direct Connect. Network Firewall uses rules that are compatible with Suricata, a free, open source network analysis and threat detection engine. Network Firewall supports Suricata version 7.0.3. For information about Suricata, see the Suricata website and the Suricata User Guide. You can use Network Firewall to monitor and protect your VPC traffic in a number of ways. The following are just a few examples: Allow domains or IP addresses for known Amazon Web Services service endpoints, such as Amazon S3, and block all other forms of traffic. Use custom lists of known bad domains to limit the types of domain names that your applications can access. Perform deep packet inspection on traffic entering or leaving your VPC. Use stateful protocol detection to filter protocols like HTTPS, regardless of the port used. To enable Network Firewall for your VPCs, you perform steps in both Amazon VPC and in Network Firewall. For information about using Amazon VPC, see Amazon VPC User Guide. To start using Network Firewall, do the following: (Optional) If you don't already have a VPC that you want to protect, create it in Amazon VPC. In Amazon VPC, in each Availability Zone where you want to have a firewall endpoint, create a subnet for the sole use of Network Firewall. In Network Firewall, define the firewall behavior as follows: Create stateless and stateful rule groups, to define the components of the network traffic filtering behavior that you want your firewall to have. Create a firewall policy that uses your rule groups and specifies additional default traffic filtering behavior. In Network Firewall, create a firewall and specify your new firewall policy and VPC subnets. Network Firewall creates a firewall endpoint in each subnet that you specify, with the behavior that's defined in the firewall policy. In Amazon VPC, use ingress routing enhancements to route traffic through the new firewall endpoints.
After your firewall is established, you can add firewall endpoints for new Availability Zones by following the prior steps for the Amazon VPC setup and firewall subnet definitions. You can also add endpoints to Availability Zones that you're using in the firewall, either for the same VPC or for another VPC, by following the prior steps for the Amazon VPC setup, and defining the new VPC subnets as VPC endpoint associations.
Package otelmacaron instruments gopkg.in/macaron.v1. Currently only the routing of a received message can be instrumented. To do so, use the Middleware function. Deprecated: otelmacaron has no Code Owner. After August 21, 2024, it may no longer be supported and may stop receiving new releases unless a new Code Owner is found. See this issue if you would like to become the Code Owner of this module.
Package redisc implements a redis cluster client on top of the redigo client package. It supports all commands that can be executed on a redis cluster, including pub-sub, scripts and read-only connections to read data from replicas. See http://redis.io/topics/cluster-spec for details. The package defines two main types: Cluster and Conn. Both are described in more details below, but the Cluster manages the mapping of keys (or more exactly, hash slots computed from keys) to a group of nodes that form a redis cluster, and a Conn manages a connection to this cluster. The package is designed such that for simple uses, or when keys have been carefully named to play well with a redis cluster, a Cluster value can be used as a drop-in replacement for a redis.Pool from the redigo package. Similarly, the Conn type implements redigo's redis.Conn interface (and the augmented redis.ConnWithTimeout one too), so the API to execute commands is the same - in fact the redisc package uses the redigo package as its only third-party dependency. When more control is needed, the package offers some extra behaviour specific to working with a redis cluster: Slot and SplitBySlot functions to compute the slot for a given key and to split a list of keys into groups of keys from the same slot, so that each group can safely be handled using the same connection. *Conn.Bind (or the BindConn package-level helper function) to explicitly specify the keys that will be used with the connection so that the right node is selected, instead of relying on the automatic detection based on the first parameter of the command. *Conn.ReadOnly (or the ReadOnlyConn package-level helper function) to mark a connection as read-only, allowing commands to be served by a replica instead of the master. RetryConn to wrap a connection into one that automatically follows redirections when the cluster moves slots around. Helper functions to deal with cluster-specific errors. The Cluster type manages a redis cluster and offers an interface compatible with redigo's redis.Pool: Along with some additional methods specific to a cluster: If the CreatePool function field is set, then a redis.Pool is created to manage connections to each of the cluster's nodes. A call to Get returns a connection from that pool. The Dial method, on the other hand, guarantees that the returned connection will not be managed by a pool, even if CreatePool is set. It calls redigo's redis.Dial function to create the unpooled connection, passing along any DialOptions set on the cluster. If the cluster's CreatePool field is nil, Get behaves the same as Dial. The Refresh method refreshes the cluster's internal mapping of hash slots to nodes. It should typically be called only once, after the cluster is created and before it is used, so that the first connections already benefit from smart routing. It is automatically kept up-to-date based on the redis MOVED responses afterwards. The EachNode method visits each node in the cluster and calls the provided function with a connection to that node, which may be useful to run diagnostics commands on each node or to collect keys across the whole cluster. The Stats method returns the pool statistics for each node, with the node's address as key of the map. A cluster must be closed once it is no longer used to release its resources. The connection returned from Get or Dial is a redigo redis.Conn interface (that also implements redis.ConnWithTimeout), with a concrete type of *Conn. 
In addition to the interface's required methods, *Conn adds the following methods: The returned connection is not yet connected to any node; it is "bound" to a specific node only when a call to Do, Send, Receive or Bind is made. For Do, Send and Receive, the node selection is implicit: it uses the first parameter of the command, and computes the hash slot assuming that first parameter is a key. It then binds the connection to the node corresponding to that slot. If there are no parameters for the command, or if there is no command (e.g. in a call to Receive), a random node is selected. Bind is explicit: it gives control to the caller over which node to select by specifying a list of keys that the caller wishes to handle with the connection. All keys must belong to the same slot, and the connection must not already be bound to a node, otherwise an error is returned. On success, the connection is bound to the node holding the slot of the specified key(s). Because the connection is returned as a redis.Conn interface, a type assertion must be used to access the underlying *Conn and to be able to call Bind: The BindConn package-level function is provided as a helper for this common use-case. The ReadOnly method marks the connection as read-only, meaning that it will attempt to connect to a replica instead of the master node for its slot. Once bound to a node, the READONLY redis command is sent automatically, so it doesn't have to be sent explicitly before use. ReadOnly must be called before the connection is bound to a node, otherwise an error is returned. For the same reason as for Bind, a type assertion must be used to call ReadOnly on a *Conn, so a package-level helper function is also provided, ReadOnlyConn. There is no ReadWrite method, because it can be sent as a normal redis command and will essentially end that connection (all commands will now return MOVED errors). If the connection was wrapped in a RetryConn call, then it will automatically follow the redirection to the master node (see the Redirections section). The connection must be closed after use, to release the underlying resources. The redis cluster may return MOVED and ASK errors when the node that received the command doesn't currently hold the slot corresponding to the key. The package cannot reliably handle those redirections automatically because the redirection error may be returned for a pipeline of commands, some of which may have succeeded. However, a connection can be wrapped by a call to RetryConn, which returns a redis.Conn interface where only calls to Do, Close and Err can succeed. That means pipelining is not supported, and only a single command can be executed at a time, but it will automatically handle MOVED and ASK replies, as well as TRYAGAIN errors. Note that even if RetryConn is not used, the cluster always updates its mapping of slots to nodes automatically by keeping track of MOVED replies. The concurrency model is similar to that of the redigo package: Cluster methods are safe to call concurrently (like redis.Pool). Connections do not support concurrent calls to write methods (Send, Flush) or concurrent calls to the read method (Receive). Connections do allow a concurrent reader and writer. Because the Do method combines the functionality of Send, Flush and Receive, it cannot be called concurrently with other methods. The Bind and ReadOnly methods are safe to call concurrently, but there is not much point in doing so, as both will fail if the connection is already bound. Create and use a cluster.
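A condensed sketch of "create and use a cluster"; node addresses, pool settings, and the key are illustrative:

    package main

    import (
        "log"
        "time"

        "github.com/gomodule/redigo/redis"
        "github.com/mna/redisc"
    )

    func createPool(addr string, opts ...redis.DialOption) (*redis.Pool, error) {
        return &redis.Pool{
            MaxIdle:     5,
            IdleTimeout: time.Minute,
            Dial: func() (redis.Conn, error) {
                return redis.Dial("tcp", addr, opts...)
            },
        }, nil
    }

    func main() {
        cluster := &redisc.Cluster{
            StartupNodes: []string{":7000", ":7001", ":7002"},
            DialOptions:  []redis.DialOption{redis.DialConnectTimeout(5 * time.Second)},
            CreatePool:   createPool,
        }
        defer cluster.Close()

        // Refresh once up front so the first connections already benefit from smart routing.
        if err := cluster.Refresh(); err != nil {
            log.Fatal(err)
        }

        conn := cluster.Get()
        defer conn.Close()
        if _, err := conn.Do("SET", "mykey", "value"); err != nil {
            log.Fatal(err)
        }
    }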
Package yarpc provides the YARPC service framework. With hundreds to thousands of services communicating with RPC, transport protocols (like HTTP and TChannel), encoding protocols (like JSON or Thrift), and peer choosers are the concepts that vary year over year. Separating these concerns allows services to change transports and wire protocols without changing call sites or request handlers, build proxies and wire protocol bridges, or experiment with load balancing strategies. YARPC is a toolkit for services and proxies. YARPC breaks RPC into interchangeable encodings, transports, and peer choosers. YARPC for Go provides reference implementations for HTTP/1.1, TChannel and gRPC transports, and also raw, JSON, Thrift, and Protobuf encodings. YARPC for Go provides a round robin peer chooser and experimental implementations for debug pages and rate limiting. YARPC for Go plans to provide a load balancer that uses a least-pending-requests strategy. Peer choosers can implement any strategy, including load balancing or sharding, in turn bound to any peer list updater. Regardless of transport, every RPC has some common properties: caller name, service name, procedure name, encoding name, deadline or TTL, headers, baggage (multi-hop headers), and tracing. Each RPC can also have an optional shard key, routing key, or routing delegate for advanced routing. YARPC transports use a shared API for capturing RPC metadata, so middleware can apply to requests over any transport. Each YARPC transport protocol can implement inbound handlers and outbound callers. Each of these can support different RPC types, like unary (request and response) or oneway (request and receipt) RPC. A future release of YARPC will add support for other RPC types including variations on streaming and pubsub.
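A hedged wiring sketch with the HTTP transport: one inbound for serving and one outbound for calling a downstream service. The service names and addresses are illustrative, and procedure registration is elided:

    package main

    import (
        "go.uber.org/yarpc"
        yarpchttp "go.uber.org/yarpc/transport/http"
    )

    func main() {
        httpTransport := yarpchttp.NewTransport()

        dispatcher := yarpc.NewDispatcher(yarpc.Config{
            Name:     "my-service",
            Inbounds: yarpc.Inbounds{httpTransport.NewInbound(":8086")},
            Outbounds: yarpc.Outbounds{
                "downstream": {
                    Unary: httpTransport.NewSingleOutbound("http://127.0.0.1:8087"),
                },
            },
        })

        if err := dispatcher.Start(); err != nil {
            panic(err)
        }
        defer dispatcher.Stop()

        // Register raw/JSON/Thrift/Protobuf procedures on the dispatcher here,
        // then block while serving requests.
        select {}
    }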
Package routing provides high performance and powerful HTTP routing capabilities.
Package cadence and its subdirectories contain the Cadence client side framework. The Cadence service is a task orchestrator for your application’s tasks. Applications using Cadence can execute a logical flow of tasks, especially long-running business logic, asynchronously or synchronously. They can also scale at runtime on distributed systems. A quick example illustrates its use case. Consider Uber Eats where Cadence manages the entire business flow from placing an order, accepting it, handling shopping cart processes (adding, updating, and calculating cart items), entering the order in a pipeline (for preparing food and coordinating delivery), to scheduling delivery as well as handling payments. Cadence consists of a programming framework (or client library) and a managed service (or backend). The framework enables developers to author and coordinate tasks in Go code. The root cadence package contains common data structures. The subpackages are: The Cadence hosted service brokers and persists events generated during workflow execution. Worker nodes owned and operated by customers execute the coordination and task logic. To facilitate the implementation of worker nodes Cadence provides a client-side library for the Go language. In Cadence, you can code the logical flow of events separately as a workflow and code business logic as activities. The workflow identifies the activities and sequences them, while an activity executes the logic. Dynamic workflow execution graphs - Determine the workflow execution graphs at runtime based on the data you are processing. Cadence does not pre-compute the execution graphs at compile time or at workflow start time. Therefore, you have the ability to write workflows that can dynamically adjust to the amount of data they are processing. If you need to trigger 10 instances of an activity to efficiently process all the data in one run, but only 3 for a subsequent run, you can do that. Child Workflows - Orchestrate the execution of a workflow from within another workflow. Cadence will return the results of the child workflow execution to the parent workflow upon completion of the child workflow. No polling is required in the parent workflow to monitor status of the child workflow, making the process efficient and fault tolerant. Durable Timers - Implement delayed execution of tasks in your workflows that are robust to worker failures. Cadence provides two easy to use APIs, **workflow.Sleep** and **workflow.Timer**, for implementing time based events in your workflows. Cadence ensures that the timer settings are persisted and the events are generated even if workers executing the workflow crash. Signals - Modify/influence the execution path of a running workflow by pushing additional data directly to the workflow using a signal. Via the Signal facility, Cadence provides a mechanism to consume external events directly in workflow code. Task routing - Efficiently process large amounts of data using a Cadence workflow, by caching the data locally on a worker and executing all activities meant to process that data on that same worker. Cadence enables you to choose the worker you want to execute a certain activity by scheduling that activity execution in the worker's specific task-list. Unique workflow ID enforcement - Use business entity IDs for your workflows and let Cadence ensure that only one workflow is running for a particular entity at a time. 
Cadence implements an atomic "uniqueness check" and ensures that no race conditions are possible that would result in multiple workflow executions for the same workflow ID. Therefore, you can implement your code to attempt to start a workflow without checking if the ID is already in use, even in the cases where only one active execution per workflow ID is desired. Perpetual/ContinueAsNew workflows - Run periodic tasks as a single perpetually running workflow. With the "ContinueAsNew" facility, Cadence allows you to leverage the "unique workflow ID enforcement" feature for periodic workflows. Cadence will complete the current execution and start the new execution atomically, ensuring you get to keep your workflow ID. By starting a new execution Cadence also ensures that workflow execution history does not grow indefinitely for perpetual workflows. At-most once activity execution - Execute non-idempotent activities as part of your workflows. Cadence will not automatically retry activities on failure. For every activity execution Cadence will return a success result, a failure result, or a timeout to the workflow code and let the workflow code determine how each one of those result types should be handled. Async Activity Completion - Incorporate human input or third-party service asynchronous callbacks into your workflows. Cadence allows a workflow to pause execution on an activity and wait for an external actor to resume it with a callback. During this pause the activity does not have any actively executing code, such as a polling loop, and is merely an entry in the Cadence datastore. Therefore, the workflow is unaffected by any worker failures happening over the duration of the pause. Activity Heartbeating - Detect unexpected failures/crashes and track progress in long running activities early. By configuring your activity to report progress periodically to the Cadence server, you can detect a crash that occurs 10 minutes into an hour-long activity execution much sooner, instead of waiting for the 60-minute execution timeout. The recorded progress before the crash gives you sufficient information to determine whether to restart the activity from the beginning or resume it from the point of failure. Timeouts for activities and workflow executions - Protect against stuck and unresponsive activities and workflows with appropriate timeout values. Cadence requires that timeout values are provided for every activity or workflow invocation. There is no upper bound on the timeout values, so you can set timeouts that span days, weeks, or even months. Visibility - Get a list of all your active and/or completed workflows. Explore the execution history of a particular workflow execution. Cadence provides a set of visibility APIs that allow you, the workflow owner, to monitor past and current workflow executions. Debuggability - Replay any workflow execution history locally under a debugger. The Cadence client library provides an API to allow you to capture a stack trace from any failed workflow execution history.
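To make the workflow/activity split concrete, a small hedged sketch of a workflow using a durable timer and a single activity; the activity, its name, and the option values are illustrative assumptions:

    package sample

    import (
        "context"
        "time"

        "go.uber.org/cadence/workflow"
    )

    // DelayedReminderWorkflow waits a day on a durable timer and then runs the
    // hypothetical SendReminder activity. The timer survives worker crashes
    // because it is persisted by the Cadence service.
    func DelayedReminderWorkflow(ctx workflow.Context, userID string) error {
        if err := workflow.Sleep(ctx, 24*time.Hour); err != nil {
            return err
        }

        ao := workflow.ActivityOptions{
            ScheduleToStartTimeout: time.Minute,
            StartToCloseTimeout:    time.Minute,
        }
        ctx = workflow.WithActivityOptions(ctx, ao)
        return workflow.ExecuteActivity(ctx, SendReminder, userID).Get(ctx, nil)
    }

    // SendReminder is a hypothetical activity; the business logic lives here.
    func SendReminder(ctx context.Context, userID string) error {
        // ... call a notification service ...
        return nil
    }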
Package rtnetlink allows the kernel's routing tables to be read and altered. Network routes, IP addresses, Link parameters, Neighbor setups, Queueing disciplines, Traffic classes and Packet classifiers may all be controlled. It is based on netlink messages. A convenient, high-level API wrapper is available using package rtnl: https://godoc.org/github.com/jsimonetti/rtnetlink/rtnl. The base rtnetlink library explicitly only exposes a limited low-level API to rtnetlink. It is not the intention (nor wish) to create an iproute2 replacement. When in doubt about your message structure it can always be useful to look at the messages sent by iproute2 using 'strace -f -esendmsg' or similar. Another (and possibly even more flexible) way would be using 'nlmon' and wireshark. nlmon is a special kernel module which allows you to capture all (not just rtnetlink) netlink traffic inside the kernel. Be aware that this might be overwhelming on a system with a lot of netlink traffic. At this point use wireshark or tcpdump on the nlmon0 interface to view all netlink traffic. Have a look at the examples for common uses of rtnetlink. Add IP address '127.0.0.2/8' to an interface 'lo' Add a route Delete IP address '127.0.0.2/8' from interface 'lo' List all IPv4 addresses configured on interface 'lo' List all interfaces List all neighbors on interface 'lo' List all rules Set the operational state of an interface to Down Set the hw address of an interface Set the operational state of an interface to Up
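As a low-level sketch of one of the examples listed above (listing all interfaces); error handling is minimal and the printed fields are illustrative:

    package main

    import (
        "log"

        "github.com/jsimonetti/rtnetlink"
    )

    func main() {
        // Dial a route netlink socket.
        conn, err := rtnetlink.Dial(nil)
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // List all interfaces known to the kernel.
        links, err := conn.Link.List()
        if err != nil {
            log.Fatal(err)
        }
        for _, l := range links {
            log.Printf("index %d: %s", l.Index, l.Attributes.Name)
        }
    }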
Package temporal and its subdirectories contain the Temporal client side framework. The Temporal service is a task orchestrator for your application’s tasks. Applications using Temporal can execute a logical flow of tasks, especially long-running business logic, asynchronously or synchronously. They can also scale at runtime on distributed systems. A quick example illustrates its use case. Consider Uber Eats where Temporal manages the entire business flow from placing an order, accepting it, handling shopping cart processes (adding, updating, and calculating cart items), entering the order in a pipeline (for preparing food and coordinating delivery), to scheduling delivery as well as handling payments. Temporal consists of a programming framework (or client library) and a managed service (or backend). The framework enables developers to author and coordinate tasks in Go code. The root temporal package contains common data structures. The subpackages are: The Temporal hosted service brokers and persists events generated during workflow execution. Worker nodes owned and operated by customers execute the coordination and task logic. To facilitate the implementation of worker nodes Temporal provides a client-side library for the Go language. In Temporal, you can code the logical flow of events separately as a workflow and code business logic as activities. The workflow identifies the activities and sequences them, while an activity executes the logic. Dynamic workflow execution graphs - Determine the workflow execution graphs at runtime based on the data you are processing. Temporal does not pre-compute the execution graphs at compile time or at workflow start time. Therefore, you have the ability to write workflows that can dynamically adjust to the amount of data they are processing. If you need to trigger 10 instances of an activity to efficiently process all the data in one run, but only 3 for a subsequent run, you can do that. Child Workflows - Orchestrate the execution of a workflow from within another workflow. Temporal will return the results of the child workflow execution to the parent workflow upon completion of the child workflow. No polling is required in the parent workflow to monitor status of the child workflow, making the process efficient and fault tolerant. Durable Timers - Implement delayed execution of tasks in your workflows that are robust to worker failures. Temporal provides two easy to use APIs, **workflow.Sleep** and **workflow.Timer**, for implementing time based events in your workflows. Temporal ensures that the timer settings are persisted and the events are generated even if workers executing the workflow crash. Signals - Modify/influence the execution path of a running workflow by pushing additional data directly to the workflow using a signal. Via the Signal facility, Temporal provides a mechanism to consume external events directly in workflow code. Task routing - Efficiently process large amounts of data using a Temporal workflow, by caching the data locally on a worker and executing all activities meant to process that data on that same worker. Temporal enables you to choose the worker you want to execute a certain activity by scheduling that activity execution in the worker's specific task queue. Unique workflow ID enforcement - Use business entity IDs for your workflows and let Temporal ensure that only one workflow is running for a particular entity at a time. 
Temporal implements an atomic "uniqueness check" and ensures that no race conditions are possible that would result in multiple workflow executions for the same workflow ID. Therefore, you can implement your code to attempt to start a workflow without checking if the ID is already in use, even in the cases where only one active execution per workflow ID is desired. Perpetual/ContinueAsNew workflows - Run periodic tasks as a single perpetually running workflow. With the "ContinueAsNew" facility, Temporal allows you to leverage the "unique workflow ID enforcement" feature for periodic workflows. Temporal will complete the current execution and start the new execution atomically, ensuring you get to keep your workflow ID. By starting a new execution, Temporal also ensures that workflow execution history does not grow indefinitely for perpetual workflows. At-most-once activity execution - Execute non-idempotent activities as part of your workflows. Temporal will not automatically retry activities on failure. For every activity execution, Temporal will return a success result, a failure result, or a timeout to the workflow code and let the workflow code determine how each one of those result types should be handled. Async Activity Completion - Incorporate human input or third-party service asynchronous callbacks into your workflows. Temporal allows a workflow to pause execution on an activity and wait for an external actor to resume it with a callback. During this pause the activity does not have any actively executing code, such as a polling loop, and is merely an entry in the Temporal datastore. Therefore, the workflow is unaffected by any worker failures happening over the duration of the pause. Activity Heartbeating - Detect unexpected failures/crashes and track progress in long-running activities early. By configuring your activity to report progress periodically to the Temporal server, you can detect a crash that occurs 10 minutes into an hour-long activity execution much sooner, instead of waiting for the 60-minute execution timeout. The recorded progress before the crash gives you sufficient information to determine whether to restart the activity from the beginning or resume it from the point of failure. Timeouts for activities and workflow executions - Protect against stuck and unresponsive activities and workflows with appropriate timeout values. Temporal requires that timeout values are provided for every activity or workflow invocation. There is no upper bound on the timeout values, so you can set timeouts that span days, weeks, or even months. Visibility - Get a list of all your active and/or completed workflows. Explore the execution history of a particular workflow execution. Temporal provides a set of visibility APIs that allow you, the workflow owner, to monitor past and current workflow executions. Debuggability - Replay any workflow execution history locally under a debugger. The Temporal client library provides an API to allow you to capture a stack trace from any failed workflow execution history.
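As a hedged illustration of how workflow code sequences an activity and a durable timer, the sketch below uses the Go SDK's workflow package; the activity name "ComposeGreeting" and its payloads are hypothetical and would be registered with a worker elsewhere.

```go
package sample

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// GreetingWorkflow calls one activity and then waits on a durable timer.
func GreetingWorkflow(ctx workflow.Context, name string) (string, error) {
	ao := workflow.ActivityOptions{
		StartToCloseTimeout: time.Minute, // Temporal requires explicit timeouts for every invocation.
	}
	ctx = workflow.WithActivityOptions(ctx, ao)

	// Execute the (hypothetical) activity and wait for its result.
	var greeting string
	if err := workflow.ExecuteActivity(ctx, "ComposeGreeting", name).Get(ctx, &greeting); err != nil {
		return "", err
	}

	// Durable timer: the timer is persisted server-side, so it fires even if workers crash.
	if err := workflow.Sleep(ctx, time.Hour); err != nil {
		return "", err
	}
	return greeting, nil
}
```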
Package lingua accurately detects the natural language of written text, be it long or short. Its task is simple: it tells you which language some text is written in. This is very useful as a preprocessing step for linguistic data in natural language processing applications such as text classification and spell checking. Other use cases, for instance, might include routing e-mails to the right geographically located customer service department, based on the e-mails' languages. Language detection is often done as part of large machine learning frameworks or natural language processing applications. In cases where you don't need the full-fledged functionality of those systems or don't want to learn the ropes of those, a small flexible library comes in handy. So far, the only other comprehensive open source library in the Go ecosystem for this task is Whatlanggo (https://github.com/abadojack/whatlanggo). Unfortunately, it has two major drawbacks: 1. Detection only works with quite lengthy text fragments. For very short text snippets such as Twitter messages, it does not provide adequate results. 2. The more languages take part in the decision process, the less accurate the detection results are. Lingua aims at eliminating these problems. It needs almost no configuration and yields pretty accurate results on both long and short text, even on single words and phrases. It draws on both rule-based and statistical methods but does not use any dictionaries of words. It does not need a connection to any external API or service either. Once the library has been downloaded, it can be used completely offline. Compared to other language detection libraries, Lingua's focus is on quality over quantity, that is, getting detection right for a small set of languages first before adding new ones. Currently, 75 languages are supported. They are listed as variants of type Language. Lingua is able to report accuracy statistics for some bundled test data available for each supported language. The test data for each language is split into three parts: 1. a list of single words with a minimum length of 5 characters, 2. a list of word pairs with a minimum length of 10 characters, and 3. a list of complete grammatical sentences of various lengths. Both the language models and the test data have been created from separate documents of the Wortschatz corpora (https://wortschatz.uni-leipzig.de) offered by Leipzig University, Germany. Data crawled from various news websites have been used for training, each corpus comprising one million sentences. For testing, corpora made of arbitrarily chosen websites have been used, each comprising ten thousand sentences. From each test corpus, a random unsorted subset of 1000 single words, 1000 word pairs and 1000 sentences has been extracted, respectively. Given the generated test data, I have compared the detection results of Lingua and Whatlanggo running over the data of Lingua's 75 supported languages. Additionally, I have added Google's CLD3 (https://github.com/google/cld3/) to the comparison with the help of the gocld3 bindings (https://github.com/jmhodges/gocld3). Languages that are not supported by CLD3 or Whatlanggo are simply ignored during the detection process. Lingua clearly outperforms its contenders. Every language detector uses a probabilistic n-gram (https://en.wikipedia.org/wiki/N-gram) model trained on the character distribution in some training corpus.
Most libraries only use n-grams of size 3 (trigrams), which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough. The shorter the input text is, the fewer n-grams are available. The probabilities estimated from such few n-grams are not reliable. This is why Lingua makes use of n-grams of sizes 1 up to 5, which results in much more accurate prediction of the correct language. A second important difference is that Lingua does not only use such a statistical model, but also a rule-based engine. This engine first determines the alphabet of the input text and searches for characters which are unique in one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore. In any case, the rule-based engine filters out languages that do not satisfy the conditions of the input text. Only then, in a second step, is the probabilistic n-gram model taken into consideration. This makes sense because loading fewer language models means less memory consumption and better runtime performance. In general, it is always a good idea to restrict the set of languages to be considered in the classification process using the respective API methods. If you know beforehand that certain languages will never occur in an input text, do not let those take part in the classification process. The filtering mechanism of the rule-based engine is quite good; however, filtering based on your own knowledge of the input text is always preferable. There might be classification tasks where you know beforehand that your language data is definitely not written in Latin, for instance. Detection accuracy can improve in such cases if you exclude certain languages from the decision process or just explicitly include relevant languages. Knowing the most likely language is nice, but how reliable is the computed likelihood? And how much less likely are the other examined languages in comparison to the most likely one? In the example below, a slice of ConfidenceValue is returned containing those languages which the calling instance of LanguageDetector has been built from. The entries are sorted by their confidence value in descending order. Each value is a probability between 0.0 and 1.0. The probabilities of all languages sum to 1.0. If the language is unambiguously identified by the rule engine, the value 1.0 will always be returned for this language. The other languages will receive a value of 0.0. By default, Lingua uses lazy-loading to load only those language models on demand which are considered relevant by the rule-based filter engine. For web services, for instance, it is rather beneficial to preload all language models into memory to avoid unexpected latency while waiting for the service response. If you want to enable the eager-loading mode, you can do it as seen below. Multiple instances of LanguageDetector share the same language models in memory, which are accessed asynchronously by the instances. By default, Lingua returns the most likely language for a given input text. However, there are certain words that are spelled the same in more than one language. The word `prologue`, for instance, is both a valid English and French word. Lingua would output either English or French, which might be wrong in the given context.
For cases like that, it is possible to specify a minimum relative distance that the logarithmized and summed-up probabilities for each possible language have to satisfy. It can be stated as seen below. Be aware that the distance between the language probabilities is dependent on the length of the input text. The longer the input text, the larger the distance between the languages. So if you want to classify very short text phrases, do not set the minimum relative distance too high. Otherwise, Unknown will be returned most of the time, as in the example below. This is the return value for cases where language detection is not reliably possible.
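A hedged sketch of the options discussed above follows: building a detector, reading confidence values, enabling eager-loading, and setting a minimum relative distance. The Language variants, LanguageDetector, and ConfidenceValue come from the package; the builder method names (NewLanguageDetectorBuilder, WithPreloadedLanguageModels, WithMinimumRelativeDistance) and the 0.25 threshold are assumptions for illustration, so check the package documentation before relying on them.

```go
package main

import (
	"fmt"

	"github.com/pemistahl/lingua-go"
)

func main() {
	languages := []lingua.Language{lingua.English, lingua.French, lingua.German, lingua.Spanish}

	detector := lingua.NewLanguageDetectorBuilder().
		FromLanguages(languages...).
		WithPreloadedLanguageModels().     // eager-loading instead of the lazy default
		WithMinimumRelativeDistance(0.25). // return Unknown for ambiguous, very short input
		Build()

	// Most likely language; the second return value is false when detection is not reliable.
	if language, exists := detector.DetectLanguageOf("languages are awesome"); exists {
		fmt.Println(language)
	}

	// Confidence values for all configured languages, sorted in descending order.
	for _, cv := range detector.ComputeLanguageConfidenceValues("languages are awesome") {
		fmt.Printf("%s: %.2f\n", cv.Language(), cv.Value())
	}
}
```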
Package tcpproxy lets users build TCP proxies, optionally making routing decisions based on HTTP/1 Host headers and the SNI hostname in TLS connections. Typical usage is sketched at the end of this section. Calling Run (or Start) on a proxy also starts all the necessary listeners. For each accepted connection, the rules for that ipPort are matched, in order. If one matches (currently HTTP Host, SNI, or always), then the connection is handed to the target. The two predefined Target implementations are: 1) DialProxy, proxying to another address (use the To func to return a DialProxy value), and 2) TargetListener, making the matched connection available via a net.Listener.Accept call. But Target is an interface, so you can also write your own. Note that tcpproxy does not do any TLS encryption or decryption. It only (via DialProxy) copies bytes around. The SNI hostname in the TLS header is unencrypted, for better or worse. This package makes no API stability promises. If you depend on it, vendor it.
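A plausible sketch of that typical usage follows, based on the Proxy, To, and Run names summarized above together with the route-adding helpers AddRoute, AddHTTPHostRoute, and AddSNIRoute; the hostnames, backend addresses, and import path are placeholders and may differ for your module version.

```go
package main

import (
	"log"

	"inet.af/tcpproxy"
)

func main() {
	var p tcpproxy.Proxy

	// Route HTTP/1 traffic on :80 by Host header, and TLS traffic on :443 by SNI hostname.
	p.AddHTTPHostRoute(":80", "example.com", tcpproxy.To("10.0.0.1:8080"))
	p.AddSNIRoute(":443", "example.com", tcpproxy.To("10.0.0.1:8443"))

	// Fallback: everything else on :443 goes to a default backend.
	p.AddRoute(":443", tcpproxy.To("10.0.0.2:443"))

	// Run starts all listeners and blocks until the proxy is closed.
	log.Fatal(p.Run())
}
```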
Package routing provides high-performance and powerful HTTP routing capabilities.
Package vestigo implements a performant, stand-alone, HTTP-compliant URL router for Go web applications. Vestigo uses a simple radix trie for URL route indexing and search, and puts any URL parameters found in a request into the request's Form, much like pat. Vestigo is standards-compliant regarding the proper behavior both when methods are not allowed on a given resource and when a resource isn't found. Vestigo also includes built-in CORS support, configurable globally and per resource.
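A small, hedged illustration of the URL-parameter behavior described above; it assumes vestigo's NewRouter and Get helpers plus the Param accessor, and the route and handler are invented for the example.

```go
package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/husobee/vestigo"
)

func main() {
	router := vestigo.NewRouter()

	// :name is captured as a URL parameter and stored on the request by the router.
	router.Get("/welcome/:name", func(w http.ResponseWriter, r *http.Request) {
		name := vestigo.Param(r, "name") // read the captured parameter
		fmt.Fprintf(w, "welcome %s", name)
	})

	log.Fatal(http.ListenAndServe(":8080", router))
}
```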
Pact Go enables consumer-driven contract testing, providing a mock service and DSL for the consumer project, and interaction playback and verification for the service provider project. Consumer-side Pact testing is an isolated test that ensures a given component is able to collaborate with another (remote) component. Pact will automatically start a Mock server in the background that will act as the collaborator's test double. This implies that any interactions expected on the Mock server will be validated, meaning a test will fail if all interactions were not completed, or if unexpected interactions were found. A typical consumer-side test would look something like the sketch at the end of this section. If this test completed successfully, a Pact file should have been written to ./pacts/my_consumer-my_provider.json containing all of the interactions expected to occur between the Consumer and Provider. In addition to verbatim value matching, you have 3 useful matching functions in the `dsl` package that can increase expressiveness and reduce brittle test cases. Here is a complex example that shows how all 3 terms can be used together: This example will result in a response body from the mock server that looks like: See the examples in the dsl package and the matcher tests (https://github.com/pact-foundation/pact-go/v2/blob/master/dsl/matcher_test.go) for more matching examples. NOTE: You will need to use valid Ruby regular expressions (http://ruby-doc.org/core-2.1.5/Regexp.html) and double-escape backslashes. Read more about flexible matching (https://github.com/pact-foundation/pact-ruby/wiki/Regular-expressions-and-type-matching-with-Pact). Provider-side Pact testing involves verifying that the contract - the Pact file - can be satisfied by the Provider. A typical provider-side test would look something like this: The `VerifyProvider` will handle all verifications, treating them as subtests and giving you granular test reporting. If you don't like this behaviour, you may call `VerifyProviderRaw` directly and handle the errors manually. Note that `PactURLs` may be a list of local pact files or remote-based URLs (possibly from a Pact Broker - http://docs.pact.io/documentation/sharings_pacts.html). Pact reads the specified pact files (from remote or local sources) and replays the interactions against a running Provider. If all of the interactions are met, we can say that both sides of the contract are satisfied and the test passes. When validating a Provider, you have 3 options to provide the Pact files: 1. Use "PactURLs" to specify the exact set of pacts to be replayed: Options 2 and 3 are particularly useful when you want to validate that your Provider is able to meet the contracts of what's in Production and also the latest in development. See this [article](http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) for more on this strategy. Each interaction in a pact should be verified in isolation, with no context maintained from the previous interactions. So how do you test a request that requires data to exist on the provider? Provider states are how you achieve this using Pact. Provider states also allow the consumer to make the same request with different expected responses (e.g. different response codes, or the same resource with a different subset of data). States are configured on the consumer side when you issue a dsl.Given() clause with a corresponding request/response pair.
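A hedged sketch of such a consumer-side test follows, including a provider state declared via Given() and a Like() matcher from the `dsl` package. The struct fields, matcher helpers, and import path reflect the author's recollection of the pact-go DSL rather than a verified API surface; the consumer/provider names, route, and payload are invented.

```go
package user_test

import (
	"fmt"
	"net/http"
	"testing"

	"github.com/pact-foundation/pact-go/dsl"
)

func TestUserAPIClient(t *testing.T) {
	pact := &dsl.Pact{
		Consumer: "my_consumer",
		Provider: "my_provider",
	}
	defer pact.Teardown()

	// Describe the expected interaction, including a provider state via Given().
	pact.
		AddInteraction().
		Given("User 1 exists").
		UponReceiving("A request for user 1").
		WithRequest(dsl.Request{
			Method: "GET",
			Path:   dsl.String("/users/1"),
		}).
		WillRespondWith(dsl.Response{
			Status:  200,
			Headers: dsl.MapMatcher{"Content-Type": dsl.String("application/json")},
			Body:    dsl.Like(map[string]interface{}{"id": 1, "name": "Mary"}),
		})

	// Verify runs the test against the mock server and fails if any expected
	// interaction was not exercised, or if unexpected interactions were seen.
	if err := pact.Verify(func() error {
		_, err := http.Get(fmt.Sprintf("http://localhost:%d/users/1", pact.Server.Port))
		return err
	}); err != nil {
		t.Fatal(err)
	}
}
```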
Configuring the provider is a little more involved, and (currently) requires running an API endpoint to configure any [provider states](http://docs.pact.io/documentation/provider_states.html) during the verification process. The option you must provide to the dsl.VerifyRequest is: An example route using the standard Go http package might look like this: See the examples or read more at http://docs.pact.io/documentation/provider_states.html. See the Pact Broker (http://docs.pact.io/documentation/sharings_pacts.html) documentation for more details on the Broker and this article (http://rea.tech/enter-the-pact-matrix-or-how-to-decouple-the-release-cycles-of-your-microservices/) on how to make it work for you. Publishing using Go code: Publishing from the CLI: Use a cURL request like the following to PUT the pact to the right location, specifying your consumer name, provider name, and consumer version. The following flags are required to use basic authentication when publishing or retrieving Pact files to/from a Pact Broker: Pact Go uses a simple log utility (logutils - https://github.com/hashicorp/logutils) to filter log messages. The CLI already contains flags to manage this; should you want to control the log level in your tests, you can set it like so:
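For example, the log level can plausibly be set on the Pact struct before running tests; the LogLevel field and the accepted values shown here are assumptions based on the library's logutils-based filtering, so verify them against the pact-go documentation.

```go
pact := dsl.Pact{
	Consumer: "my_consumer",
	Provider: "my_provider",
	LogLevel: "DEBUG", // assumed values: DEBUG, INFO, WARN, ERROR
}
```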