Package micro is a pluggable framework for microservices
Package httpproxy provides an HTTP proxy implementation
Package httpcache provides an http.RoundTripper implementation that works as a mostly RFC-compliant cache for HTTP responses. It is only suitable for use as a 'private' cache (i.e. for a web browser or an API client, not for a shared proxy).
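For illustration, here is a minimal sketch of plugging the cache into an HTTP client. The import path is assumed from the package name (this doc matches github.com/gregjones/httpcache), and the URL is a placeholder:

    package main

    import (
        "fmt"

        "github.com/gregjones/httpcache" // import path assumed from the package name
    )

    func main() {
        // Wrap the default transport with an in-memory private cache.
        transport := httpcache.NewMemoryCacheTransport()
        client := transport.Client()

        // Repeated GETs of a cacheable URL are served from the cache when
        // the response's freshness headers allow it.
        for i := 0; i < 2; i++ {
            resp, err := client.Get("https://example.com/") // placeholder URL
            if err != nil {
                panic(err)
            }
            resp.Body.Close()
            // httpcache marks responses served from the cache with the
            // X-From-Cache header.
            fmt.Println(resp.Header.Get(httpcache.XFromCache))
        }
    }

The in-memory cache is the simplest backend; the package's Cache interface allows other stores to be swapped in.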
Taken from $GOROOT/src/pkg/net/http/chunked; needed to write HTTPS responses to the client. Package goproxy provides a customizable HTTP proxy, supporting hijacking of HTTPS connections. The intent is a proxy that is usable with a reasonable amount of traffic, yet customizable and programmable. The proxy itself is simply a net/http handler. Typical usage is adding a header to each request. To print the content type of all incoming responses, use the ProxyCtx context variable: it contains the request and the response (Req and Resp; Resp is nil if unavailable) of this specific client interaction with the proxy. To print the content type of all responses from a certain URL, add a ReqCondition to the OnResponse function. You can also write conditions yourself; conditions can be set on both requests and responses. Caution! If you give a RespCondition to the OnRequest function, you'll get a run-time panic: it makes no sense to read the response before you have it. Finally, there is a convenience function to throw a quick response: close the body of the original response and return a new 403 response with a short message.

Example use cases:

1. https://github.com/elazarl/goproxy/tree/master/examples/goproxy-avgsize measures the average size of the HTML your site serves. Have the whole QA team access the website through the proxy, and the proxy will measure the average size of all text/html responses from your host.

2. [not yet implemented] Direct all requests to your web servers through the proxy; when the proxy detects HTML fragments sent in response to AJAX requests, it sends a warning email.

3. https://github.com/elazarl/goproxy/blob/master/examples/goproxy-httpdump/ generates real traffic to your website from real users browsing through the proxy. Record the traffic, then replay it for more realistic load testing.

4. https://github.com/elazarl/goproxy/tree/master/examples/goproxy-no-reddit-at-worktime blocks browsing to reddit.com between 8:00am and 5:00pm.

5. https://github.com/elazarl/goproxy/tree/master/examples/goproxy-jquery-version warns if multiple versions of jQuery are used on the same domain.

6. https://github.com/elazarl/goproxy/blob/master/examples/goproxy-upside-down-ternet/ modifies image files in an HTTP response via goproxy's image extension, found in ext/.
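The code samples the doc above alludes to are not reproduced here, so the following is a minimal sketch of that typical usage (adding a header to each request, and inspecting the content type of responses) against the goproxy API named above; the listen address is a placeholder:

    package main

    import (
        "log"
        "net/http"

        "github.com/elazarl/goproxy"
    )

    func main() {
        proxy := goproxy.NewProxyHttpServer()

        // Add a header to each request passing through the proxy.
        proxy.OnRequest().DoFunc(
            func(r *http.Request, ctx *goproxy.ProxyCtx) (*http.Request, *http.Response) {
                r.Header.Set("X-GoProxy", "yes")
                return r, nil
            })

        // Print the content type of all incoming responses; the response
        // may be nil if none is available.
        proxy.OnResponse().DoFunc(
            func(resp *http.Response, ctx *goproxy.ProxyCtx) *http.Response {
                if resp != nil {
                    log.Println(resp.Header.Get("Content-Type"))
                }
                return resp
            })

        log.Fatal(http.ListenAndServe(":8080", proxy)) // listen address is a placeholder
    }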
Package yarpc provides the YARPC service framework. With hundreds to thousands of services communicating with RPC, transport protocols (like HTTP and TChannel), encoding protocols (like JSON or Thrift), and peer choosers are the concepts that vary year over year. Separating these concerns allows services to change transports and wire protocols without changing call sites or request handlers, build proxies and wire protocol bridges, or experiment with load balancing strategies. YARPC is a toolkit for services and proxies. YARPC breaks RPC into interchangeable encodings, transports, and peer choosers. YARPC for Go provides reference implementations for HTTP/1.1, TChannel and gRPC transports, and also raw, JSON, Thrift, and Protobuf encodings. YARPC for Go provides a round robin peer chooser and experimental implementations for debug pages and rate limiting. YARPC for Go plans to provide a load balancer that uses a least-pending-requests strategy. Peer choosers can implement any strategy, including load balancing or sharding, in turn bound to any peer list updater. Regardless of transport, every RPC has some common properties: caller name, service name, procedure name, encoding name, deadline or TTL, headers, baggage (multi-hop headers), and tracing. Each RPC can also have an optional shard key, routing key, or routing delegate for advanced routing. YARPC transports use a shared API for capturing RPC metadata, so middleware can apply to requests over any transport. Each YARPC transport protocol can implement inbound handlers and outbound callers. Each of these can support different RPC types, like unary (request and response) or oneway (request and receipt) RPC. A future release of YARPC will add support for other RPC types including variations on streaming and pubsub.
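As a concrete illustration of interchangeable transports and outbounds, here is a minimal dispatcher sketch. The service names and addresses are placeholders, the HTTP transport is one arbitrary choice among those listed above, and a real service would register encoding-specific procedures before starting:

    package main

    import (
        "log"

        "go.uber.org/yarpc"
        yhttp "go.uber.org/yarpc/transport/http"
    )

    func main() {
        // One transport can serve both inbound and outbound traffic.
        transport := yhttp.NewTransport()

        dispatcher := yarpc.NewDispatcher(yarpc.Config{
            Name: "myservice", // service name; a placeholder
            Inbounds: yarpc.Inbounds{
                transport.NewInbound(":8086"), // listen address is a placeholder
            },
            Outbounds: yarpc.Outbounds{
                // Calls addressed to "otherservice" go over HTTP to this URL.
                "otherservice": {
                    Unary: transport.NewSingleOutbound("http://127.0.0.1:8087"),
                },
            },
        })

        // Encoding-specific procedures (raw, JSON, Thrift, Protobuf) would
        // be registered on the dispatcher here, before starting it.
        if err := dispatcher.Start(); err != nil {
            log.Fatal(err)
        }
        defer dispatcher.Stop()
    }

Because every transport shares the same request metadata API, the handlers registered on the dispatcher do not change when the transports in this config do.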
This is a small HTTP server implementing the "photobackup server" endpoint documented here: https://github.com/PhotoBackup/api/ Written because the existing servers make me a touch sad; Go means we can avoid a pile of runtime dependencies. Build-time dependencies are being kept low; bcrypt, homedir and graceful are the only luxuries. Adding gorilla mux and, perhaps, negroni would probably be overkill. We're trying to be compatible, so the config file is INI-format: ~/.photobackup. In addition to the standard keys, I'm also supporting BindAddress and HTTPPrefix, described below. The original server was intended to run over HTTP, I think, hence the client sending a SHA512'd password. We support this scheme, but the on-disk storage format is really better off being bcrypt(sha512(password)), so I've added that. Adding BindAddress and HTTPPrefix means that mounting this behind an HTTP reverse proxy is quite doable, and lets us offload HTTPS to that as well. That's how I'm intending to use it. I think the original servers are designed so you can connect to them using just HTTP; hence the sha512(password) scheme. This is short-sighted; the only thing it gets you is (weak) protection against sniffing if you happen to use the same password elsewhere. Sniffers in this scenario can still upload to your server and view your photos. At some point in the future I might add direct HTTPS support as well, but I don't need it. @author Nick Thomas <photobackup@ur.gs>
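Since the on-disk credential format is the one concrete algorithm described above, here is a small sketch of it. Whether the real server bcrypts the raw sha512 digest (as here) or a textual encoding of it is an assumption:

    package main

    import (
        "crypto/sha512"
        "fmt"

        "golang.org/x/crypto/bcrypt"
    )

    // hashForStorage derives the on-disk credential described above:
    // bcrypt(sha512(password)). The client sends sha512(password), so the
    // server never sees the plaintext. Feeding bcrypt the raw digest (as
    // here) rather than an encoding of it is an assumption of this sketch.
    func hashForStorage(password string) (string, error) {
        digest := sha512.Sum512([]byte(password))
        // bcrypt accepts at most 72 bytes of input; the raw 64-byte digest fits.
        hashed, err := bcrypt.GenerateFromPassword(digest[:], bcrypt.DefaultCost)
        if err != nil {
            return "", err
        }
        return string(hashed), nil
    }

    func main() {
        h, err := hashForStorage("hunter2") // placeholder password
        if err != nil {
            panic(err)
        }
        fmt.Println(h)

        // Verification at login: recompute sha512 of the presented password
        // and compare against the stored bcrypt hash.
        digest := sha512.Sum512([]byte("hunter2"))
        fmt.Println(bcrypt.CompareHashAndPassword([]byte(h), digest[:]) == nil)
    }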
Package ngrok makes it easy to work with the ngrok API from Go. The package is fully code generated and should always be up to date with the latest ngrok API. Full documentation of the ngrok API can be found at https://ngrok.com/docs/api. This package follows the best practices outlined for Go modules. All releases are tagged and any breaking changes will be reflected as a new major version. You should only import this package for production applications by pointing at a stable tagged version. Typical initialization and usage of the package to make an API call is demonstrated in the sketch at the end of this overview. API client configuration and all of the datatypes exchanged by the API are defined in this base package. There are subpackages for every API service and a Client type defined in those packages with methods to interact with that API service. It's usually easiest to find the subpackage of the service you want to work with and begin consulting the documentation there. It is recommended to construct the service-specific clients once at initialization time. The ClientConfig object in the root package supports functional options for configuration. The most common option to use is `WithHTTPClient()`, which allows the caller to specify a different net/http.Client object. This allows the caller full customization over the transport if needed for use with proxies, custom TLS setups, observability and tracing, etc. Some arguments to methods in the ngrok API are optional and must be meaningfully distinguished from zero values, especially in Update() methods. This allows the API to distinguish between choosing not to update a value vs. setting it to zero or the empty string. For these arguments, ngrok follows the industry-standard practice of using pointers to the primitive types and providing convenience functions like ngrok.String() and ngrok.Bool() for the caller to wrap literals as pointer values. All List methods in the ngrok API are paged. This package abstracts that problem away from you by returning an iterator from any List API call. As you advance the iterator it will transparently fetch new pages of values for you behind the scenes. Note that the context supplied to the initial List() call will be used for all subsequent page fetches, so it must be long enough to work through the entire list. The sketch below pages through all of the TLS certificates on your account. Note that you must check for an error after Next() returns false to determine whether the iterator failed to fetch the next page of results. All errors returned by the ngrok API are returned as structured payloads for easy error handling. Most non-networking errors returned by API calls in this package will be an ngrok.Error type. The ngrok.Error type exposes important metadata that will help you handle errors. Specifically, it includes the HTTP status code of any failed operation as well as an error code value that uniquely identifies the failure condition. There are two helper functions that will make error handling easy: IsNotFound and IsErrorCode. IsNotFound helps identify the common case of accessing an API resource that no longer exists; IsErrorCode helps you identify specific ngrok errors by their unique ngrok error code.
All ngrok error codes are documented at https://ngrok.com/docs/errors. To check for a specific error condition, you would structure your code like the sketch below. All ngrok datatypes in this package define String() and GoString() methods so that they can be formatted into strings in helpful representations. The GoString() method is defined to pretty-print an object for debugging purposes with the "%#v" formatting verb.
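The initialization, paging, and error-handling examples referenced above are sketched here under stated assumptions: the import path and version, the tls_certificates subpackage, and its iterator methods are inferred from the layout this doc describes; the API key and the error code are placeholders:

    package main

    import (
        "context"
        "fmt"

        "github.com/ngrok/ngrok-api-go/v5"                  // import path/version assumed
        "github.com/ngrok/ngrok-api-go/v5/tls_certificates" // service subpackage, per the doc
    )

    func main() {
        // Construct the root config and service client once, at initialization time.
        clientConfig := ngrok.NewClientConfig("<API_KEY>") // placeholder credential
        certs := tls_certificates.NewClient(clientConfig)

        // List calls return an iterator that fetches pages transparently;
        // this context is reused for every subsequent page fetch.
        ctx := context.Background()
        iter := certs.List(nil)
        for iter.Next(ctx) {
            fmt.Println(iter.Item().ID)
        }
        // Next() returning false may mean "done" or "failed"; check Err().
        if err := iter.Err(); err != nil {
            switch {
            case ngrok.IsNotFound(err):
                fmt.Println("resource no longer exists")
            case ngrok.IsErrorCode(err, 102): // error code value is a placeholder
                fmt.Println("hit a specific ngrok error condition")
            default:
                fmt.Println("unexpected error:", err)
            }
        }
    }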
Package lunk provides a set of tools for structured logging in the style of Google's Dapper or Twitter's Zipkin. When we consider a complex event in a distributed system, we're actually considering a partially-ordered tree of events from various services, libraries, and modules. Consider a user-initiated web request. Their browser sends an HTTP request to an edge server, which extracts the credentials (e.g., OAuth token) and authenticates the request by communicating with an internal authentication service, which returns a signed set of internal credentials (e.g., signed user ID). The edge web server then proxies the request to a cluster of web servers, each running a PHP application. The PHP application loads some data from several databases, places the user in a number of treatment groups for running A/B experiments, writes some data to a Dynamo-style distributed database, and returns an HTML response. The edge server receives this response and proxies it to the user's browser. In this scenario we have a number of infrastructure-specific events. This scenario also involves a number of events which have little to do with the infrastructure, but are still critical information for the business the system supports. There are a number of different teams all trying to monitor and improve aspects of this system. Operational staff need to know if a particular host or service is experiencing a latency spike or drop in throughput. Development staff need to know if their application's response times have gone down as a result of a recent deploy. Customer support staff need to know if the system is operating nominally as a whole, and for particular customers. Product designers and managers need to know the effect of an A/B test on user behavior. But the fact that these teams will be consuming the data in different ways for different purposes does mean that they are working on different systems. In order to instrument the various components of the system, we need a common data model. We adopt Dapper's notion of a tree to mean a partially-ordered tree of events from a distributed system. A tree in Lunk is identified by its root ID, which is the unique ID of its root event. All events in a common tree share a root ID. In our photo example, we would assign a unique root ID as soon as the edge server received the request. Events inside a tree are causally ordered: each event has a unique ID, and an optional parent ID. By passing the IDs across systems, we establish causal ordering between events. In our photo example, the two database queries from the app would share the same parent ID--the ID of the event corresponding to the app handling the request which caused those queries. Each event has a schema of properties, which allow us to record specific pieces of information about each event. For HTTP requests, we can record the method, the request URI, the elapsed time to handle the request, etc. Lunk is agnostic in terms of aggregation technologies, but two use cases seem clear: real-time process monitoring and offline causal analysis. For real-time process monitoring, events can be streamed to an aggregation service like Riemann (http://riemann.io) or Storm (http://storm.incubator.apache.org), which can calculate process statistics (e.g., the 95th percentile latency for the edge server responses) in real time.
This allows for adaptive monitoring of all services, with the option of including example root IDs in the alerts (e.g., 95th percentile latency is over 300ms, mostly as a result of requests like those in tree XXXXX). For offline causal analysis, events can be written in batches to batch processing systems like Hadoop or OLAP databases like Vertica. These aggregates can be queried to answer questions traditionally reserved for A/B testing systems. "Did users who were shown the new navbar view more photos?" "Did the new image optimization algorithm we enabled for 1% of views run faster? Did it produce smaller images? Did it have any effect on user engagement?" "Did any services have increased exception rates after any recent deploys?" Etc., etc. By capturing the root ID of a particular web request, we can assemble a partially-ordered tree of events which were involved in the handling of that request. All events with a common root ID are in a common tree, which allows for O(M) retrieval for a tree of M events. To send a request with a root ID and a parent ID, use the Event-ID HTTP header. The header value is simply the root ID and event ID, hex-encoded and separated with a slash. If the event has a parent ID, that may be included as an optional third parameter. A server that receives a request with this header can use this to properly parent its own events. Each event has a set of named properties, the keys and values of which are strings. This allows aggregation layers to take advantage of simplifying assumptions and either store events in normalized form (with event data separate from property data) or in denormalized form (essentially pre-materializing an outer join of the normalized relations). Durations are always recorded as fractional milliseconds. Lunk currently provides two formats for log entries: text and JSON. Text-based logs encode each entry as a single line of text, using key="value" formatting for all properties. Event property keys are scoped to avoid collisions. JSON logs encode each entry as a single JSON object.
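The Event-ID header format described above is simple enough to show directly. The helper and the hex IDs below are illustrative placeholders, not lunk's actual API:

    package main

    import (
        "fmt"
        "net/http"
    )

    // setEventID is a hypothetical helper illustrating the Event-ID header
    // format described above: root ID and event ID, hex-encoded and
    // separated by a slash, with the parent ID as an optional third part.
    func setEventID(req *http.Request, root, event, parent string) {
        value := fmt.Sprintf("%s/%s", root, event)
        if parent != "" {
            value += "/" + parent
        }
        req.Header.Set("Event-ID", value)
    }

    func main() {
        req, _ := http.NewRequest("GET", "http://example.com/photos", nil)
        // The hex IDs below are placeholders, not real lunk-generated IDs.
        setEventID(req, "4ffc6dfa4fc057d0", "8bdef5b1e2e9a2c1", "")
        fmt.Println(req.Header.Get("Event-ID"))
        // Output: 4ffc6dfa4fc057d0/8bdef5b1e2e9a2c1
    }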
Package http provides a micro to HTTP proxy
Package photosrv is a sharding HTTP reverse proxy library supporting CHash and JumpHash (see github.com/dgryski/go-shardedkv/). See the README for more details and examples.
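To make the sharding idea concrete, here is a self-contained sketch of a jump-hash reverse proxy. The jumpHash function follows Lamping and Veach's published jump consistent hash (the family JumpHash refers to); the backend addresses and the choice of the request path as the sharding key are assumptions, and this is not photosrv's actual API:

    package main

    import (
        "hash/fnv"
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // jumpHash is the Lamping & Veach jump consistent hash: it maps a
    // 64-bit key to one of numBuckets buckets with minimal reshuffling as
    // buckets are added.
    func jumpHash(key uint64, numBuckets int) int {
        var b, j int64 = -1, 0
        for j < int64(numBuckets) {
            b = j
            key = key*2862933555777941757 + 1
            j = int64(float64(b+1) * (float64(int64(1)<<31) / float64((key>>33)+1)))
        }
        return int(b)
    }

    func main() {
        // Backend addresses are placeholders for this sketch.
        backends := []*url.URL{
            {Scheme: "http", Host: "127.0.0.1:9001"},
            {Scheme: "http", Host: "127.0.0.1:9002"},
        }
        proxy := &httputil.ReverseProxy{
            Director: func(req *http.Request) {
                // Shard on the request path; a real deployment might key on
                // something else entirely.
                h := fnv.New64a()
                h.Write([]byte(req.URL.Path))
                target := backends[jumpHash(h.Sum64(), len(backends))]
                req.URL.Scheme = target.Scheme
                req.URL.Host = target.Host
            },
        }
        log.Fatal(http.ListenAndServe(":8080", proxy))
    }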
lmdrouter is a simple-to-use library for writing AWS Lambda functions in Go that listen to events of type API Gateway Proxy Request (represented by the `events.APIGatewayV2HTTPRequest` type of the github.com/aws/aws-lambda-go/events package). The library allows creating functions that can match requests based on their URI, just like an HTTP server that uses the standard https://golang.org/pkg/net/http/#ServeMux (or any other community-built routing library such as https://github.com/julienschmidt/httprouter or https://github.com/go-chi/chi) would. The interface provided by the library is very similar to these libraries and should be familiar to anyone who has written HTTP applications in Go.

When building large cloud-native applications, there is a balance to strike in how APIs are deployed. At one end of the scale, each API endpoint has its own lambda function; this provides the greatest flexibility but is extremely difficult to maintain. At the other end, one lambda function serves the entire API; this provides the least flexibility but is the easiest to maintain. Neither extreme is usually a good idea. With lmdrouter, one can create small lambda functions for different aspects of the API. For example, if your application model contains multiple domains (e.g. articles, authors, topics, etc.), you can create one lambda function for each domain and deploy these independently (e.g. everything below "/api/articles" is one lambda function, everything below "/api/authors" is another). This is also useful for applications where different teams are in charge of different parts of the API.

Features (a routing sketch follows this list):

* Supports all HTTP methods.
* Supports middleware functions at a global and per-resource level.
* Supports path parameters with a simple ":<name>" format (e.g. "/posts/:id").
* Can automatically "unmarshal" an API Gateway request into an arbitrary Go struct, with data coming either from path and query string parameters or from the request body (only JSON requests are currently supported). See the documentation for the `UnmarshalRequest` function for more information.
* Can automatically "marshal" responses of any type into an API Gateway response (only JSON responses are currently generated). See the MarshalResponse function for more information.
* Implements net/http.Handler for local development and general usage outside of an AWS Lambda environment.
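A minimal routing sketch, under stated assumptions: the handler signature is built around the events.APIGatewayV2HTTPRequest type named above, but the import path, the struct tag format, and the UnmarshalRequest/MarshalResponse signatures are inferred from this description rather than verified against a specific release:

    package main

    import (
        "context"
        "fmt"
        "net/http"

        "github.com/aquasecurity/lmdrouter" // import path assumed
        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    var router *lmdrouter.Router

    func init() {
        // One router per API domain, e.g. everything below /api/articles.
        router = lmdrouter.NewRouter("/api/articles")
        router.Route("GET", "/:id", getArticle)
    }

    // getArticle handles GET /api/articles/:id. The struct tag and helper
    // signatures below are assumptions of this sketch.
    func getArticle(ctx context.Context, req events.APIGatewayV2HTTPRequest) (
        events.APIGatewayV2HTTPResponse, error,
    ) {
        var input struct {
            ID string `lambda:"path.id"` // filled from the :id path parameter
        }
        if err := lmdrouter.UnmarshalRequest(req, false, &input); err != nil {
            return lmdrouter.MarshalResponse(http.StatusBadRequest, nil, err.Error())
        }
        return lmdrouter.MarshalResponse(http.StatusOK, nil,
            fmt.Sprintf("you asked for article %s", input.ID))
    }

    func main() {
        lambda.Start(router.Handler)
    }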
Package http provides a micro RPC to HTTP proxy
Package http is an HTTP reverse proxy handler
Affinity provides user grouping and role-based access controls (RBAC) for any identity provider that defines a small set of authentication primitives. The examples documented here illustrate basic API usage with a very simple hypothetical message board. For a more complete example, reference the unit tests and the source files in the github.com/juju/affinity/group package, where Affinity uses its own RBAC to control access to user-group administration. github.com/juju/affinity/server exposes an HTTP API over the group service. The command-line interface in cmd/ is a utility to launch the service and connect to it, using Ubuntu SSO as an identity provider. Use the following types for working with identity and RBAC in Affinity. A Principal is a singular User or a corporate Group (a collection of users or subgroups) which can be granted a Role over a Resource. Users are unique individual accounts which can provide a proof of identity. A user is identified by a Scheme and an Id string. The Id string must be unique within the scope of the Scheme. A User can be a member of a Group. A User also can be treated as a Principal. The canonical string representation of a user identity in Affinity is "SchemeName:UserId". A Scheme provides two important functions in Affinity: 1. Authenticating a user and generating a proof of identity ownership. 2. Validating that a proof of identity belongs to a given User. These functions are intended to be adaptable to OAuth, OpenID, SASL and other token-based authentication mechanisms. The Ubuntu Single Sign-On (SSO) provider is a reference example. Schemes are registered to unique namespaces. This namespace comprises the "SchemeName" component of a canonical User string representation. A group is a collection of Users or sub-Groups with a unique name. Groups should be defined by a common association, rather than by the capabilities you want the members to have on a resource. In other words, don't group users to define permissions on resources; grant common, reusable permissions on resources to users and groups. It is worth mentioning that some Schemes might support their own concept of user groups. For example, a Launchpad Scheme could access team membership, and a Github Scheme could access Organization membership. Proxying these external groups in Affinity may be supported in future releases. Permissions are fine-grained capabilities or actions which take place in an application. Affinity provides a means to look up whether a given principal has permission to act on a certain resource. Each permission is given a name identifier unique to the application capability it represents. Let your application define your permissions. For example, if you were writing a filesystem, you might have permissions defined like: read-file, execute-file, write-file, read-directory, write-directory, etc. Roles are a higher-level definition of capabilities. Essentially, a Role is a bundle of permissions given a name. Roles should be defined by the "types of access" you wish to grant users and groups on your application's resources. For example, someone in a Pilot role should have permissions like 'board', 'enter-cabin', 'move-cockpit-controls' on an "airplane" resource. A Passenger role should be able to 'board', but not 'enter-cabin' or 'move-cockpit-controls'. A resource is the object to which access is granted. In Affinity, a Resource is declared by a URI, which will have meaning to the application implementing RBAC.
Resources also declare the full set of permissions they support. That way, you can't make absurd role grants that don't make sense for the resource object of the grant. Resources do not currently support the concept of hierarchy in Affinity -- they are all flat, and in essence just canonical names to be matched in an access control query. Affinity stores user groupings and role grants in persistent storage. The Store interface defines lower-level primitives which are implemented for different providers, such as MongoDB, in-memory, or others. Use Access to connect to storage and check access permissions for a given user/group on a resource. Admin extends Access with the capability to grant and revoke user or group roles on resources. Role grants in Affinity are "positive" and additive in nature. Granting a role will cause a permission lookup for the user/group on a resource to match if any granted role contains that permission. Revoking a role will remove this relationship, so that the lookup no longer matches and access is denied. It is important to note that when it comes to role grants on groups, there is no way to revoke a role's transitive permissions from a group member. All members of the group will receive the role permission, or none of them will.
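To make the model concrete, here is a self-contained sketch of the grant/lookup semantics described above (positive, additive grants; a permission check passes if any granted role contains the permission). The types are hypothetical stand-ins, not Affinity's actual Store/Access/Admin API:

    package main

    import "fmt"

    // Role bundles named permissions, per the model described above.
    type Role struct {
        Name        string
        Permissions map[string]bool
    }

    // grant binds a principal ("Scheme:UserId") to a role on a resource URI.
    type grant struct{ principal, role, resource string }

    type Store struct {
        roles  map[string]Role
        grants []grant
    }

    func (s *Store) Grant(principal, role, resource string) {
        s.grants = append(s.grants, grant{principal, role, resource})
    }

    // Can reports whether the principal holds any granted role on the
    // resource that contains the permission. Grants are purely additive.
    func (s *Store) Can(principal, permission, resource string) bool {
        for _, g := range s.grants {
            if g.principal == principal && g.resource == resource &&
                s.roles[g.role].Permissions[permission] {
                return true
            }
        }
        return false
    }

    func main() {
        s := &Store{roles: map[string]Role{
            // The airplane example from the doc: a Pilot role bundles
            // permissions a Passenger role lacks.
            "pilot":     {"pilot", map[string]bool{"board": true, "enter-cabin": true, "move-cockpit-controls": true}},
            "passenger": {"passenger", map[string]bool{"board": true}},
        }}
        s.Grant("usso:alice", "pilot", "airplane:n12345")
        fmt.Println(s.Can("usso:alice", "move-cockpit-controls", "airplane:n12345")) // true
        fmt.Println(s.Can("usso:bob", "board", "airplane:n12345"))                   // false: no grant
    }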