Package sookie provides a simple way to set and get cookies with encryption and compression. It uses the XChaCha20-Poly1305 AEAD algorithm for encryption and Zstandard for compression. The cookie value is base64 encoded and can be safely sent in HTTP headers.
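To illustrate the pipeline the package describes (compress, then encrypt, then base64-encode), here is a minimal sketch assuming the golang.org/x/crypto/chacha20poly1305 and github.com/klauspost/compress/zstd libraries. This is not sookie's actual API; the function name is illustrative.

	package main

	import (
		"crypto/rand"
		"encoding/base64"
		"fmt"

		"github.com/klauspost/compress/zstd"
		"golang.org/x/crypto/chacha20poly1305"
	)

	// encodeCookieValue compresses a value with Zstandard, seals it with
	// XChaCha20-Poly1305, and base64-encodes the result so it is safe to
	// place in a Set-Cookie header.
	func encodeCookieValue(key, value []byte) (string, error) {
		enc, err := zstd.NewWriter(nil)
		if err != nil {
			return "", err
		}
		compressed := enc.EncodeAll(value, nil) // compress first, then encrypt

		aead, err := chacha20poly1305.NewX(key) // key must be 32 bytes
		if err != nil {
			return "", err
		}
		nonce := make([]byte, aead.NonceSize())
		if _, err := rand.Read(nonce); err != nil {
			return "", err
		}
		sealed := aead.Seal(nonce, nonce, compressed, nil) // nonce || ciphertext
		return base64.RawURLEncoding.EncodeToString(sealed), nil
	}

	func main() {
		key := make([]byte, chacha20poly1305.KeySize)
		if _, err := rand.Read(key); err != nil {
			panic(err)
		}
		v, err := encodeCookieValue(key, []byte(`{"user":"alice"}`))
		if err != nil {
			panic(err)
		}
		fmt.Println(v)
	}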
Package websocket implements the WebSocket protocol defined in RFC 6455.

The Conn type represents a WebSocket connection. A server application calls the Upgrader.Upgrade method from an HTTP request handler to get a *Conn. Call the connection's WriteMessage and ReadMessage methods to send and receive messages as a slice of bytes; the echo sketch below shows how to use these methods. In that sketch, p is a []byte and messageType is an int with value websocket.BinaryMessage or websocket.TextMessage.

An application can also send and receive messages using the io.WriteCloser and io.Reader interfaces. To send a message, call the connection NextWriter method to get an io.WriteCloser, write the message to the writer, and close the writer when done. To receive a message, call the connection NextReader method to get an io.Reader and read until io.EOF is returned. The second loop in the sketch below echoes messages using the NextWriter and NextReader methods.

The WebSocket protocol distinguishes between text and binary data messages. Text messages are interpreted as UTF-8 encoded text. The interpretation of binary messages is left to the application. This package uses the TextMessage and BinaryMessage integer constants to identify the two data message types. The ReadMessage and NextReader methods return the type of the received message. The messageType argument to the WriteMessage and NextWriter methods specifies the type of a sent message. It is the application's responsibility to ensure that text messages are valid UTF-8 encoded text.

The WebSocket protocol defines three types of control messages: close, ping and pong. Call the connection WriteControl, WriteMessage or NextWriter methods to send a control message to the peer. Connections handle received close messages by calling the handler function set with the SetCloseHandler method and by returning a *CloseError from the NextReader, ReadMessage or the message Read method. The default close handler sends a close message to the peer. Connections handle received ping messages by calling the handler function set with the SetPingHandler method. The default ping handler sends a pong message to the peer. Connections handle received pong messages by calling the handler function set with the SetPongHandler method. The default pong handler does nothing. If an application sends ping messages, then the application should set a pong handler to receive the corresponding pong.

The control message handler functions are called from the NextReader, ReadMessage and message reader Read methods. The default close and ping handlers can block these methods for a short time when the handler writes to the connection. The application must read the connection to process close, ping and pong messages sent from the peer. If the application is not otherwise interested in messages from the peer, then the application should start a goroutine to read and discard messages from the peer; the final snippet in the sketch below shows a simple example.

Connections support one concurrent reader and one concurrent writer. Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently. The Close and WriteControl methods can be called concurrently with all other methods.
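A combined sketch of the three patterns referenced above, assuming the github.com/gorilla/websocket import path: echoing with ReadMessage/WriteMessage, echoing with NextReader/NextWriter, and a goroutine that reads and discards messages.

	var upgrader = websocket.Upgrader{}

	func echo(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			log.Println("upgrade:", err)
			return
		}
		defer conn.Close()

		// Echo using WriteMessage and ReadMessage.
		for {
			messageType, p, err := conn.ReadMessage()
			if err != nil {
				return
			}
			if err := conn.WriteMessage(messageType, p); err != nil {
				return
			}
		}
	}

	// Echo using the NextWriter and NextReader methods.
	func echoStream(conn *websocket.Conn) {
		for {
			messageType, r, err := conn.NextReader()
			if err != nil {
				return
			}
			w, err := conn.NextWriter(messageType)
			if err != nil {
				return
			}
			if _, err := io.Copy(w, r); err != nil {
				return
			}
			if err := w.Close(); err != nil {
				return
			}
		}
	}

	// Read and discard messages so that close, ping and pong messages
	// from the peer are still processed.
	func discard(conn *websocket.Conn) {
		go func() {
			for {
				if _, _, err := conn.NextReader(); err != nil {
					conn.Close()
					return
				}
			}
		}()
	}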
Web browsers allow JavaScript applications to open a WebSocket connection to any host. It's up to the server to enforce an origin policy using the Origin request header sent by the browser. The Upgrader calls the function specified in the CheckOrigin field to check the origin. If the CheckOrigin function returns false, then the Upgrade method fails the WebSocket handshake with HTTP status 403. If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail the handshake if the Origin request header is present and the Origin host is not equal to the Host request header. The deprecated package-level Upgrade function does not perform origin checking. The application is responsible for checking the Origin header before calling the Upgrade function.

Connections buffer network input and output to reduce the number of system calls when reading or writing messages. Write buffers are also used for constructing WebSocket frames. See RFC 6455, Section 5 for a discussion of message framing. A WebSocket frame header is written to the network each time a write buffer is flushed to the network. Decreasing the size of the write buffer can increase the amount of framing overhead on the connection.

The buffer sizes in bytes are specified by the ReadBufferSize and WriteBufferSize fields in the Dialer and Upgrader. The Dialer uses a default size of 4096 when a buffer size field is set to zero. The Upgrader reuses buffers created by the HTTP server when a buffer size field is set to zero. The HTTP server buffers have a size of 4096 at the time of this writing. The buffer sizes do not limit the size of a message that can be read or written by a connection. Buffers are held for the lifetime of the connection by default. If the Dialer or Upgrader WriteBufferPool field is set, then a connection holds the write buffer only when writing a message.

Applications should tune the buffer sizes to balance memory use and performance. Increasing the buffer size uses more memory, but can reduce the number of system calls to read or write the network. In the case of writing, increasing the buffer size can reduce the number of frame headers written to the network. Some guidelines for setting buffer parameters: limit the buffer sizes to the maximum expected message size, since buffers larger than the largest message provide no benefit; and, depending on the distribution of message sizes, consider a buffer size smaller than the maximum expected message size, which can greatly reduce memory use with a small impact on performance. For example, if 99% of the messages are smaller than 256 bytes and the maximum message size is 512 bytes, then a buffer size of 256 bytes results in 1.01 times as many system calls as a buffer size of 512 bytes, while the memory savings is 50%. A write buffer pool is useful when the application has a modest number of writes over a large number of connections. When buffers are pooled, a larger buffer size has a reduced impact on total memory use and has the benefit of reducing system calls and frame overhead. The sketch below shows an Upgrader with tuned buffer sizes and an explicit origin check.
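A sketch of an Upgrader configured with the fields discussed above; the allowed origin and buffer sizes are illustrative values.

	var upgrader = websocket.Upgrader{
		// Sized to the largest expected message, per the guidelines above.
		ReadBufferSize:  256,
		WriteBufferSize: 256,
		// Override the safe default origin check. Returning false makes
		// Upgrade fail the handshake with HTTP status 403.
		CheckOrigin: func(r *http.Request) bool {
			origin := r.Header.Get("Origin")
			return origin == "" || origin == "https://example.com"
		},
	}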
Per message compression extensions (RFC 7692) are experimentally supported by this package in a limited capacity. Setting the EnableCompression option to true in Dialer or Upgrader will attempt to negotiate per message deflate support. If compression was successfully negotiated with the connection's peer, any message received in compressed form will be automatically decompressed. All Read methods will return uncompressed bytes. Per message compression of messages written to a connection can be enabled or disabled by calling the corresponding Conn method, as in the sketch below.

Currently this package does not support compression with "context takeover". This means that messages must be compressed and decompressed in isolation, without retaining sliding window or dictionary state across messages. For more details refer to RFC 7692. Use of compression is experimental and may result in decreased performance.
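A sketch of negotiating and toggling write compression, using gorilla/websocket's EnableCompression field and Conn methods:

	// Attempt to negotiate per message deflate support during the handshake.
	var upgrader = websocket.Upgrader{EnableCompression: true}

	// Enable or disable compression of subsequently written messages.
	conn.EnableWriteCompression(true)

	// Optionally tune the deflate level using compress/flate constants.
	if err := conn.SetCompressionLevel(flate.BestSpeed); err != nil {
		log.Println("compression level:", err)
	}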
Package keystone provides Go HTTP middleware for authenticating incoming HTTP requests against OpenStack Keystone. It is modelled after the original keystone middleware: http://docs.openstack.org/developer/keystonemiddleware/middlewarearchitecture.html The middleware authenticates incoming requests by validating the `X-Auth-Token` header and adding headers with the validation result to the incoming request. The final authentication/authorization decision is delegated to subsequent HTTP handlers.
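An illustrative sketch of the flow just described, not this package's actual API: the validate function and the X-Identity-Status/X-User-Id header names are assumptions here.

	import "net/http"

	func authenticate(next http.Handler, validate func(token string) (userID string, err error)) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Validate the X-Auth-Token header and record the result in
			// request headers (hypothetical header names).
			userID, err := validate(r.Header.Get("X-Auth-Token"))
			if err != nil {
				r.Header.Set("X-Identity-Status", "Invalid")
			} else {
				r.Header.Set("X-Identity-Status", "Confirmed")
				r.Header.Set("X-User-Id", userID)
			}
			// The final authentication/authorization decision is delegated
			// to subsequent handlers.
			next.ServeHTTP(w, r)
		})
	}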
Package asmauthextension provides an OpenTelemetry Collector extension for authenticating HTTP requests using credentials stored in AWS Secrets Manager. This extension adds headers to outgoing HTTP requests based on secrets retrieved from AWS Secrets Manager. It supports automatic refresh of credentials, fallback headers, and AWS IAM role assumption.
A compliant-enough implementation for parsing HTTP WWW-Authenticate and Authorization headers.
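For illustration only (this is not the package's API), a minimal sketch of splitting a challenge such as `Digest realm="example", nonce="abc"` into a scheme and parameters. The full RFC 7235 grammar (token68 values, commas inside quoted strings, multiple challenges) needs more care than this.

	import "strings"

	func parseChallenge(header string) (scheme string, params map[string]string) {
		scheme, rest, _ := strings.Cut(header, " ")
		params = make(map[string]string)
		for _, part := range strings.Split(rest, ",") {
			k, v, ok := strings.Cut(strings.TrimSpace(part), "=")
			if !ok {
				continue
			}
			params[strings.ToLower(k)] = strings.Trim(v, `"`)
		}
		return scheme, params
	}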
Package fasthttp provides fast HTTP server and client API.

Fasthttp provides the following features:

	- Optimized for speed. Easily handles more than 100K qps and more than 1M concurrent keep-alive connections on modern hardware.
	- Optimized for low memory usage.
	- Easy 'Connection: Upgrade' support via RequestCtx.Hijack.
	- Server provides the following anti-DoS limits: the number of concurrent connections, the number of concurrent connections per client IP, the number of requests per connection, request read timeout, response write timeout, maximum request header size, maximum request body size, maximum request execution time, maximum keep-alive connection lifetime, and early filtering out of non-GET requests.
	- A lot of additional useful info is exposed to the request handler: server and client address, per-request logger, unique request id, request start time, connection start time, and request sequence number for the current connection.
	- Client supports automatic retry on idempotent requests' failure.
	- Fasthttp API is designed with the ability to extend existing client and server implementations or to write custom client and server implementations from scratch.
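A minimal server sketch using fasthttp's handler signature (github.com/valyala/fasthttp):

	package main

	import (
		"fmt"
		"log"

		"github.com/valyala/fasthttp"
	)

	func main() {
		handler := func(ctx *fasthttp.RequestCtx) {
			// RequestCtx exposes the per-request info listed above,
			// such as the client address.
			fmt.Fprintf(ctx, "Hello from %s\n", ctx.RemoteAddr())
		}
		log.Fatal(fasthttp.ListenAndServe(":8080", handler))
	}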
Package linkheader provides functions for parsing HTTP Link headers.
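A usage sketch, assuming the common linkheader API with Parse and FilterByRel (as in github.com/tomnomnom/linkheader); verify the names against this package.

	header := `<https://api.example.com/?page=2>; rel="next", <https://api.example.com/?page=34>; rel="last"`
	for _, link := range linkheader.Parse(header).FilterByRel("next") {
		fmt.Println(link.URL) // https://api.example.com/?page=2
	}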
Package accept provides parsing functions for the HTTP Accept-... headers, which are vital for HTTP content negotiation. It implements the RFC 7231 Section 5.3 (content negotiation) rules. Content negotiation headers use a relatively complex syntax that permits a wide range of headers, from the very simple to the much more complex. See https://tools.ietf.org/html/rfc7231#section-5.3 (content negotiation) and also https://tools.ietf.org/html/rfc7231#section-3.1 (representation metadata).
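To illustrate the syntax (this sketch is not the package's API): ranking an Accept header's media ranges by their RFC 7231 q parameter.

	import (
		"sort"
		"strconv"
		"strings"
	)

	type mediaRange struct {
		mediaType string
		quality   float64
	}

	// parseAccept ranks media ranges by quality factor, so that
	// parseAccept("text/html, application/xml;q=0.9, */*;q=0.8")
	// yields text/html, application/xml, */* in that order.
	func parseAccept(header string) []mediaRange {
		var ranges []mediaRange
		for _, part := range strings.Split(header, ",") {
			fields := strings.Split(strings.TrimSpace(part), ";")
			mr := mediaRange{mediaType: fields[0], quality: 1.0}
			for _, param := range fields[1:] {
				k, v, ok := strings.Cut(strings.TrimSpace(param), "=")
				if ok && strings.EqualFold(k, "q") {
					if q, err := strconv.ParseFloat(v, 64); err == nil {
						mr.quality = q
					}
				}
			}
			ranges = append(ranges, mr)
		}
		sort.SliceStable(ranges, func(i, j int) bool {
			return ranges[i].quality > ranges[j].quality
		})
		return ranges
	}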
Package restful, a lean package for creating REST-style WebServices without magic.

A WebService has a collection of Route objects that dispatch incoming HTTP requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle HTTP requests from a server.

A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation, and the sketch below.

A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package (https://code.google.com/p/re2/wiki/Syntax). This feature requires the use of a CurlyRouter.

A Container holds a collection of WebServices, Filters and an http.ServeMux for multiplexing HTTP requests. The statements restful.Add(...) and restful.Filter(...) register WebServices and Filters with the Default Container. The Default Container of go-restful uses http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container.

A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirects, setting response headers, etc. In the restful package there are three hooks into the request/response flow where filters can be added: container filters are processed before any registered WebService, WebService filters are processed before any Route of a WebService, and Route filters are processed before calling the function associated with the Route. Each filter must define a FilterFunction and pass the request/response pair to the next filter or RouteFunction. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations.

Two response encodings are supported: gzip and deflate. Once this is enabled for all responses, if an HTTP request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go

By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS HTTP request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests.
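A sketch of a WebService with one Route, following the user-resource example linked above (emicklei/go-restful API):

	func newUserService() *restful.WebService {
		ws := new(restful.WebService)
		ws.Path("/users").
			Consumes(restful.MIME_JSON).
			Produces(restful.MIME_JSON)
		// {user-id} is a Route path parameter.
		ws.Route(ws.GET("/{user-id}").To(findUser))
		return ws
	}

	func findUser(req *restful.Request, resp *restful.Response) {
		id := req.PathParameter("user-id")
		resp.WriteEntity(map[string]string{"id": id})
	}

	func main() {
		restful.Add(newUserService()) // register with the Default Container
		log.Fatal(http.ListenAndServe(":8080", nil))
	}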
Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist, and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest. Despite a valid URI, the resource requested may not be available; in that case, use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. If the request has a valid URL but the method (GET, PUT, POST, ...) is not allowed, use http.StatusMethodNotAllowed. If the request does not have, or has an unknown, Accept header set for this operation, use http.StatusNotAcceptable. If the request does not have, or has an unknown, Content-Type header set for this operation, use http.StatusUnsupportedMediaType. In addition to setting the correct (error) HTTP status code, you can choose to write a ServiceError message on the response; see the sketch following this section.

This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance improves further when using a preloaded cache. You can also inject your own implementation.

This package has the means to produce detailed logging of the complete HTTP request matching process and filter invocation. Enabling this feature requires you to set an implementation of restful.StdLogger (e.g. a log.Logger instance), as shown in the sketch following this section. The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple.

(c) 2012-2015, http://ernestmicklei.com. MIT License
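Sketches of the error response and trace logging described above. WriteErrorString, TraceLogger and SetLogger are go-restful functions, though the wiring here is illustrative and the users store is hypothetical.

	func getUser(req *restful.Request, resp *restful.Response) {
		user, ok := users[req.PathParameter("user-id")] // users is a hypothetical store
		if !ok {
			resp.WriteErrorString(http.StatusNotFound, "User could not be found.")
			return
		}
		resp.WriteEntity(user)
	}

	func enableTraceLogging() {
		logger := log.New(os.Stdout, "[restful] ", log.LstdFlags|log.Lshortfile)
		restful.TraceLogger(logger) // detailed request matching and filter logs
		restful.SetLogger(logger)   // override the package logger
	}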
Package nrb3 supports adding B3 headers to outgoing requests. When using the New Relic Go Agent, use this package if you want to add B3 headers ("X-B3-TraceId", etc., see https://github.com/openzipkin/b3-propagation) to outgoing requests. Distributed tracing must be enabled (https://docs.newrelic.com/docs/understand-dependencies/distributed-tracing/enable-configure/enable-distributed-tracing) for B3 headers to be added properly. This example demonstrates how to create a Zipkin reporter using the standard Zipkin http reporter (https://godoc.org/github.com/openzipkin/zipkin-go/reporter/http) to send Span data to New Relic. Follow this example when your application uses Zipkin for tracing (instead of the New Relic Go Agent) and you wish to send span data to the New Relic backend. The example assumes you have the environment variable NEW_RELIC_API_KEY set to your New Relic Insights Insert Key.
Package rst implements tools and methods to expose resources in a RESTful web service.

The idea behind rst is to have endpoints and resources implement interfaces to support HTTP features. Endpoints can implement Getter, Poster, Patcher, Putter or Deleter to respectively allow the HEAD/GET, POST, PATCH, PUT, and DELETE HTTP methods. Resources can implement Ranger to support partial GET requests, Marshaler to customize the process with which they are encoded, or http.Handler to have complete control over the ResponseWriter. With these interfaces, the complexity behind dealing with all the headers and status codes of the HTTP protocol is abstracted to let you focus on returning a resource or an error.

A resource must implement the rst.Resource interface. For that, you can either wrap an rst.Envelope around an existing type, or define a new type and implement the methods of the interface yourself.

An endpoint is an access point to a resource in your service. You can either define an endpoint by defining handlers for different methods sharing the same pattern, or by submitting a type that implements Getter, Poster, Patcher, Putter, Deleter and/or Preflighter. In the following example, PersonEP implements Getter and is therefore able to handle GET requests. Routing of requests in rst is powered by Gorilla mux (https://github.com/gorilla/mux). Only URL patterns are available for now. Optional regular expressions are supported.

rst supports JSON, XML and text encoding of resources using the encoders in Go's standard library. It negotiates the right encoding format based on the content of the Accept header in the request, calls the appropriate marshaler, and inserts the result in a response with the right status code and headers. You can implement the Marshaler interface if you want to add support for another format, or for more control over the encoding process of a specific resource. rst compresses the payload of responses using the supported algorithm detected in the request's Accept-Encoding header. Payloads under the size defined in the CompressionThreshold const are not compressed. Both Gzip and Flate are supported.

OPTIONS requests are implicitly supported by all endpoints. The ETag, Last-Modified and Vary headers are automatically set. rst responds with 304 NOT MODIFIED when an appropriate If-Modified-Since or If-None-Match header is found in the request. The Expires header is also automatically inserted with the duration returned by Resource.TTL().

A resource can implement the Ranger interface to gain the ability to return partial responses with status code 206 PARTIAL CONTENT and the Content-Range header automatically inserted. The Ranger.Range method will be called when a valid Range header is found in an incoming GET request. The Accept-Range header will be inserted automatically. The supported range units and the range extent will be validated for you. Note that the If-Range conditional header is supported as well.

rst can add the headers required to serve cross-origin (CORS) requests for you. You can choose between two provided policies (DefaultAccessControl and PermissiveAccessControl), or define your own. Support can be disabled by passing nil. Preflighted requests are also supported. However, you can customize the responses returned by preflight OPTIONS requests if you implement the Preflighter interface in your endpoint.
Package gizmo is a toolkit that provides packages to put together server and pubsub daemons with the following features:

## The `config` packages

The `config` package contains a handful of useful functions to load configuration structs from JSON files, JSON blobs in Consul k/v, or environment variables. The subpackages contain structs meant for managing common configuration options and credentials. The package also has a generic `Config` type in the `config/combined` subpackage that contains all of the above types. It's meant to be a 'catch all' convenience struct that many applications should be able to use.

## The `server` package

This package is the bulk of the toolkit and relies on `server.Config` for managing `Server` implementations. A server must implement the `Server` interface. The package offers 2 server implementations: `SimpleServer`, which is capable of handling basic HTTP and JSON requests via the available `Service` implementations (`SimpleService`, `JSONService`, `ContextService`, `MixedService` and `MixedContextService`), and `RPCServer`, which is capable of serving a gRPC server on one port and JSON endpoints on another. This kind of server can only handle the `RPCService` implementation.

The `Service` interface is minimal to allow for maximum flexibility. The service types that are accepted and hostable on the `SimpleServer` are built from `JSONEndpoint`, `JSONContextEndpoint`, `ContextHandler` and `ContextHandlerFunc`; the one service type that works with an `RPCServer` is `RPCService`. The `Middleware(..)` functions offer each service a 'hook' to wrap each of its endpoints. This may be handy for adding additional headers or context to the request. This is also the point where other, third-party middleware could easily be plugged in (e.g. OAuth, tracing, metrics, logging, etc.).

## The `pubsub` package

This package contains two generic interfaces for publishing data to queues and for subscribing and consuming data from those queues, where a `SubscriberMessage` is an interface that gives implementations a hook for acknowledging/deleting messages. Take a look at the docs for each implementation in `pubsub` to see how they behave. There are implementations for several backends: for pubsub via Amazon's SNS/SQS, use the `pubsub/aws` package; for pubsub via Google's Pubsub, use the `pubsub/gcp` package; for pubsub via Kafka topics, use the `pubsub/kafka` package; and for publishing via HTTP, use the `pubsub/http` package.

## The `pubsub/pubsubtest` package

This package contains 'test' implementations of the `pubsub.Publisher` and `pubsub.Subscriber` interfaces that allow developers to easily mock out and test their `pubsub` implementations.

The toolkit also contains a handful of very useful functions for parsing types from request queries and payloads. For examples of how to use the gizmo `server` and `pubsub` packages, take a look at the 'examples' subdirectory.

The Gizmo logo was based on the Go mascot designed by Renée French and copyrighted under the Creative Commons Attribution 3.0 license.
Package api2 provides types and functions used to define interfaces of a client-server API and facilitate creation of a server and client for it.

How to use this package. Organize your code in services. Each service provides some domain-specific functionality. It is a Go type whose methods correspond to exposed RPCs of the API. Let's define a service Foo with method Bar; the sketch below shows the full shape, including the method signature.

A field must not have more than one of the tags: json, query, header, cookie. Fields in query, header and cookie parts are encoded and decoded with fmt.Sprintf and fmt.Sscanf. Strings are not decoded with fmt.Sscanf, but passed as is. Types implementing encoding.TextMarshaler and encoding.TextUnmarshaler are encoded and decoded using it. A cookie in the Response part must be of type http.Cookie. If there is no JSON field in the struct, then the HTTP body is skipped. You can also set the HTTP status code of the response by adding a field of type `int` with tag `use_as_status:"true"` to Response. 0 is interpreted as 200. If Response has a status field, no HTTP statuses are considered errors.

If you need the top-level type matching the body JSON to be not a struct, but of some other kind (e.g. slice or map), you should provide a field in your struct with tag `use_as_body:"true"`. If you use `use_as_body:"true"`, you can also set `is_protobuf:"true"` and put a protobuf type (convertible to proto.Message) in that field. It will be sent over the wire in protobuf binary form. You can add `use_as_body:"true" is_raw:"true"` to a `[]byte` field, then it will keep the whole HTTP body.

Streaming. If you use `use_as_body:"true"`, you can also set `is_stream:"true"`. In this case the field must be of type `io.ReadCloser`. On the client side, put any object implementing `io.ReadCloser` into such a field in Request. It will be read and closed by the library and used as the HTTP request body. On the server side, your handler should read from the reader passed in that field of Request. (You don't have to read the entire body or to close it.) For Response, on the server side, the handler must put any object implementing `io.ReadCloser` into such a field of Response. The library will use it to generate the HTTP response's body and close it. On the client side, your code must read the entire response from that reader and then close it. If a streaming field is left `nil`, it is interpreted as an empty body.

Now let's write the function that generates the table of routes. You can add multiple routes with the same path, but in this case their HTTP methods must be different so that they can be distinguished. If Transport is not set, DefaultTransport is used, which is defined as &api2.JsonTransport{}.

**Error handling**. A handler can return any Go error. `JsonTransport` by default returns JSON; the `Error()` value is put into the "error" field of that JSON. If the error has an `HttpCode() int` method, it is called and the result is used as the HTTP return code. You can pass error details (any struct). For that, the error must be of a custom type. You should register the error type in the `JsonTransport.Errors` map. The key used for that error is put into the "code" key of the JSON, and the object of the registered type into the "detail" field. The error can be wrapped using `fmt.Errorf("%w" ...)`. See test/custom_error_test.go for an example.

In the server you need a real instance of service Foo to pass to GetRoutes. Then just bind the routes to an http.ServeMux and run the server. The server serves the foo.Bar function on path /v1/foo/bar with HTTP method Post.
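A sketch of the service, its request/response structs, and the routes table described above. The method signature and struct tags follow this package's documentation; BindRoutes and the exact Route field names are assumptions to verify against the package.

	type Foo struct{}

	type BarRequest struct {
		Product string `query:"product"`
		Name    string `json:"name"`
	}

	type BarResponse struct {
		Status int    `use_as_status:"true"` // 0 is interpreted as 200
		Result string `json:"result"`
	}

	func (s *Foo) Bar(ctx context.Context, req *BarRequest) (*BarResponse, error) {
		return &BarResponse{Result: "hello, " + req.Name}, nil
	}

	func GetRoutes(s *Foo) []api2.Route {
		return []api2.Route{
			{Method: http.MethodPost, Path: "/v1/foo/bar", Handler: s.Bar},
		}
	}

	func main() {
		mux := http.NewServeMux()
		api2.BindRoutes(mux, GetRoutes(&Foo{})) // assumed binding helper
		log.Fatal(http.ListenAndServe(":8080", mux))
	}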
Now let's create the client (see the sketch below). The client sends a request to path "/v1/foo/bar/product1", from which the server understands that product=product1. Note that you don't have to pass a real service object to GetRoutes on the client side. You can pass nil; it is sufficient to pass all the needed information about request and response types in the routes table, which is used by the client to find the proper route. You can make GetRoutes accept an interface instead of a concrete Service type. In this case you cannot get method handlers via s.Bar, because this code panics if s is a nil interface. As a workaround, api2 provides the function Method(service pointer, methodName), which you can use. If you have a function GetRoutes in package foo as above, you can generate a static client for it in file client.go located near the file in which GetRoutes is defined. GenerateClient can accept multiple GetRoutes functions, but they must be located in the same package.
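A client-side sketch; NewClient and Call are assumed names for this package's client constructor and call method, so verify them against the package docs.

	func callBar() {
		routes := GetRoutes(nil) // a nil service is fine on the client side
		client := api2.NewClient(routes, "http://127.0.0.1:8080") // assumed constructor
		req := &BarRequest{Product: "product1", Name: "Alice"}
		res := &BarResponse{}
		if err := client.Call(context.Background(), res, req); err != nil {
			log.Fatal(err)
		}
		fmt.Println(res.Result)
	}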
Package http is a fork of stdlib's net/http that provides HTTP client and server implementations that are compatible with alternative TLS libraries such as github.com/refraction-networking/utls. See https://github.com/ooni/oohttp for more information on how to use this package and what its limitations are.

Get, Head, Post, and PostForm make HTTP (or HTTPS) requests. The caller must close the response body when finished with it; the sketch below shows the pattern. For control over HTTP client headers, redirect policy, and other settings, create a Client. For control over proxies, TLS configuration, keep-alives, compression, and other settings, create a Transport. Clients and Transports are safe for concurrent use by multiple goroutines and for efficiency should only be created once and re-used.

ListenAndServe starts an HTTP server with a given address and handler. The handler is usually nil, which means to use DefaultServeMux. Handle and HandleFunc add handlers to DefaultServeMux. More control over the server's behavior is available by creating a custom Server.

Starting with Go 1.6, the http package has transparent support for the HTTP/2 protocol when using HTTPS. Programs that must disable HTTP/2 can do so by setting [Transport.TLSNextProto] (for clients) or [Server.TLSNextProto] (for servers) to a non-nil, empty map. Alternatively, several GODEBUG settings are currently supported. Please report any issues before disabling HTTP/2 support: https://golang.org/s/http2bug

The http package's Transport and Server both automatically enable HTTP/2 support for simple configurations. To enable HTTP/2 for more complex configurations, to use lower-level HTTP/2 features, or to use a newer version of Go's http2 package, import "golang.org/x/net/http2" directly and use its ConfigureTransport and/or ConfigureServer functions. Manually configuring HTTP/2 via the golang.org/x/net/http2 package takes precedence over the net/http package's built-in HTTP/2 support.
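The basic client pattern referenced above; the API mirrors stdlib net/http, with this package imported in its place.

	resp, err := http.Get("http://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("read %d bytes\n", len(body))

	// For control over headers, redirect policy, and other settings,
	// create a Client.
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err = client.Get("http://example.com/")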
Package quick provides a high-performance, minimalistic web framework for building web applications in Go. 🚀 Quick is a flexible and extensible route manager for the Go language. It aims to be fast and performant, and is 100% net/http compatible. Quick is a project under constant development and is open for collaboration; everyone is welcome to contribute. 😍

Quick is designed for speed, efficiency, and simplicity. Features:

- Middleware support for request/response processing.
- Optimized routing with low overhead.
- Built-in support for JSON, XML, and form parsing.
- Efficient request handling using sync.Pool for memory optimization.
- Customizable response handling with structured output.

Quick is ideal for building RESTful APIs, microservices, and high-performance web applications.

Ctx represents the context of an HTTP request and response. It provides access to the request, response, headers, query parameters, body, and other attributes needed to handle HTTP requests. The framework also defines constants for HTTP methods and status codes, as well as a utility function that returns human-readable descriptions for status codes; these definitions ensure consistent use of HTTP standards throughout the framework.

Qtest is an advanced HTTP testing function designed to facilitate route validation in the Quick framework, allowing you to test simulated HTTP requests using httptest. The Qtest function receives a QuickTestOptions structure containing the request parameters, executes the call, and returns a QtestReturn object, which provides methods for analyzing and validating the result.
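A minimal sketch of a Quick app; New, Get, Ctx, Status, JSON and Listen follow the project's examples, but verify the exact names against the package docs.

	package main

	import "github.com/jeffotoni/quick"

	func main() {
		q := quick.New()

		q.Get("/v1/ping", func(c *quick.Ctx) error {
			// assumed helpers from the project's examples
			return c.Status(200).JSON(map[string]string{"msg": "pong"})
		})

		q.Listen("0.0.0.0:8080")
	}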
The Escher HTTP request signing framework is intended to provide a secure way for clients to sign HTTP requests, and for servers to check the integrity of these messages. The goal of the protocol is to introduce an authentication solution for REST API services that is more secure than the currently available protocols.

RFC 2617 (HTTP Authentication) defines Basic and Digest Access Authentication. They're widely used, but Basic Access Authentication doesn't encrypt the secret and doesn't add integrity checks to the requests. Digest Access Authentication sends the secret encrypted, but the algorithm, which creates a checksum with a nonce and uses MD5, should not be considered highly secure these days, and as with Basic Access Authentication, there's no way to check the integrity of the message.

RFC 6749 (OAuth 2.0 Authorization) enables a third-party application to obtain limited access to an HTTP service, either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP service, or by allowing the third-party application to obtain access on its own behalf. This is not helpful in a machine-to-machine communication situation, like REST API authentication, because typically there's no third-party user involved. Additionally, after a token is obtained from the authorization endpoint, it is used with no encryption, and doesn't provide integrity checking or prevent replayed messages. OAuth 2.0 is a stateful protocol which needs a database to store the tokens for client sessions.

Amazon and other service providers created protocols addressing these issues; however, there is no public standard with open source implementations available from them. As Escher is based on a publicly documented protocol that is widely used in the wild, the specification does not include novelty techniques.

2. Signing an HTTP Request

Escher defines a stateless signature generation mechanism. The signature is calculated from the key parts of the HTTP request, a service identifier string called the credential scope, and a client key and client secret. The signature generation steps are: canonicalizing the request, creating a string to calculate the signature, and adding the signature to the original request. Escher supports two hash algorithms: SHA256 and SHA512, designed by the NSA (U.S. National Security Agency).

2.1. Canonicalizing the Request

In order to calculate a checksum from the key HTTP request parts, the HTTP request method, the request URI, the query parts, the headers, and the request body have to be canonicalized. The output of the canonicalization step will be a string including the request parts separated by LF (line feed, "\n") characters. The string will be used to calculate a checksum for the request.

2.1.1. The HTTP Method

The HTTP method defined by RFC 2616 (Hypertext Transfer Protocol) is case sensitive, and must be available in upper case; no transformation has to be applied:

	POST

2.1.2. The Path

The path is the absolute path of the URL. It starts with a slash (/) character, and does not include the query part (and the question mark). Escher follows the rules defined by RFC 3986 (Uniform Resource Identifier) to normalize the path. Basically it means: convert relative paths to absolute, and remove redundant path components.
URI-encode each path component: the "reserved characters" defined by RFC 3986 (Uniform Resource Identifier) have to be kept as they are (no encoding applied); all other characters have to be percent encoded, including SPACE (to %20, instead of +); non-ASCII, UTF-8 characters should be percent encoded to 2 or more pieces (á to %C3%A1); percent encoded hexadecimal numbers have to be upper cased (eg: a%c2%b1b to a%C2%B1b). Normalize empty paths to /.

2.1.3. The Query String

RFC 3986 (Uniform Resource Identifier) should provide guidance for canonicalization of the query string, but here's the complete list of the rules to be applied: URI-encode query parameter names and values; the "reserved characters" defined by RFC 3986 have to be kept as they are (no encoding applied); all other characters have to be percent encoded, including SPACE (to %20, instead of +); non-ASCII, UTF-8 characters should be percent encoded to 2 or more pieces (á to %C3%A1); percent encoded hexadecimal numbers have to be upper cased (eg: a%c2%b1b to a%C2%B1b). Normalize empty query strings to the empty string. Sort query parameters by the encoded parameter names (ASCII order). Do not shorten parameter values if their parameter name is the same (key=B&key=A is a valid output); the order of parameters in a URL may be significant (this is not defined by the HTTP standard). Separate parameter names and values by = signs, and include = for empty values, too. Separate parameters by &.

2.1.4. The Headers

To canonicalize the headers, the following rules have to be followed: Lower case the header names. Separate header names and values by a :, with no spaces. Sort header names in alphabetical order (ASCII). Group headers with the same name into one header, and separate their values by commas, without sorting the values. Trim header values, keeping all the spaces between quote characters (").

2.1.5. Signed Headers

The list of headers to include when calculating the signature: the lower-cased header names, separated by ;, like this: date;host

2.1.6. Body Checksum

A checksum for the request body, aka the payload, has to be calculated. Escher supports the SHA-256 and SHA-512 algorithms for checksum calculation. If the request contains no body, an empty string has to be used as the input for the hash algorithm. The selected algorithm will be added later to the authorization header, so the server will be able to use the same algorithm for validation. The checksum of the body has to be presented as a lower-cased hexadecimal string.

2.1.7. Concatenating the Canonicalized Parts

All the steps above produce one row of data each, except the headers canonicalization, which creates one row per header. These have to be concatenated with LF (line feed, "\n") characters into a string.

2.2. Creating the Signature

The next step is creating another string which will be directly used to calculate the signature.

2.2.1. Algorithm ID

The algorithm ID comes from the algo_prefix (default value is ESR) and the algorithm used to calculate checksums during the signing process. The string algo_prefix, "HMAC", and the algorithm name are concatenated with dashes (for example, ESR-HMAC-SHA256).

2.2.2. Long Date

The long date is the request date in the ISO 8601 basic format, like YYYYMMDD + T + HHMMSS + Z. Note that the basic format uses no punctuation. This date has to be added later, too, as a date header (default header name is X-Escher-Date).
2.2.3. Date and Credential Scope

The next information is the short date and the credential scope, concatenated with a / character. The short date is the date part of the request date, in ISO 8601 basic format; the credential scope is defined by the service. This will be added later, too, as part of the authorization header (default header name is X-Escher-Auth).

2.2.4. Checksum of the Canonicalized Request

Take the output of step 2.1.7. and create a checksum from the canonicalized request string. This checksum has to be represented as a lower-cased hexadecimal string, too.

2.2.5. Concatenating the Signing String

Concatenate the outputs of steps 2.2.1. through 2.2.4. with LF characters.

2.2.6. The Signing Key

The signing key is based on the algo_prefix, the client secret, the parts of the credential scope, and the request date. Take the algo_prefix and concatenate the client secret to it. First apply the HMAC algorithm to the request date, then apply the actual value on each of the credential scope parts (split at /). The end result is a binary signing key. The sketch after this section shows the derivation.

2.2.7. Create the Signature

The signature is created from the output of steps 2.2.5. (Signing String) and 2.2.6. (Signing Key). With the selected algorithm, create a checksum. It has to be represented as a lower-cased hexadecimal string.

2.3. Adding the Signature to the Request

The final step of the Escher signing process is adding the Signature to the request. Escher adds a new header to the request; by default, the header name is X-Escher-Auth. The header value will include the algorithm ID (see 2.2.1.), the client key, the short date and the credential scope (see 2.2.3.), the signed headers string (see 2.1.5.) and finally the signature (see 2.2.7.). The values of these inputs have to be concatenated.

3. Presigning a URL

The URL presigning process is very similar to the request signing procedure. But for a URL, there are no headers and no request body, so the calculation of the Signature is different. Also, the Signature cannot be added to the headers, but is included as query parameters. A significant difference is that presigning allows defining an expiration time. By default, it is 86400 secs, i.e. 24 hours. The current time and the expiration time will be included in the URL, and the server has to check whether the URL is expired.

3.1. Canonicalizing the URL to Presign

The canonicalization for URL presigning is the same process as for HTTP requests; this section covers only the differences.

3.1.1. The HTTP Method

The HTTP method for presigned URLs is fixed to a single value.

3.1.3. The Query String

The query is coming from the URL, but the algorithm, credentials, date, expiration time, and signed headers have to be added to the query parts.

3.1.4. The Headers

A URL has no headers; Escher creates the Host header based on the URL's domain information, and adds it to the canonicalized request.

3.1.5. Signed Headers

It will be host, as that's the only header included.
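A Go rendering of the signing-key derivation in section 2.2.6; a minimal sketch assuming the SHA-256 variant, with illustrative function names.

	import (
		"crypto/hmac"
		"crypto/sha256"
	)

	// signingKey derives the binary signing key described in section 2.2.6.
	func signingKey(algoPrefix, clientSecret, shortDate string, credentialScope []string) []byte {
		// Start from algo_prefix + client secret, HMAC over the request date first.
		key := hmacSHA256([]byte(algoPrefix+clientSecret), shortDate)
		// Then apply the HMAC to each credential scope part (split at "/").
		for _, part := range credentialScope {
			key = hmacSHA256(key, part)
		}
		return key
	}

	func hmacSHA256(key []byte, data string) []byte {
		mac := hmac.New(sha256.New, key)
		mac.Write([]byte(data))
		return mac.Sum(nil)
	}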
Package requester builds and executes HTTP requests. It's a thin wrapper around the http package with conveniences for configuring requests and processing responses. Context-aware variants of the central, package-level functions are also available, and requester.Requester{} has the same methods. A Requester instance can be used to repeat a request, as a template for similar requests across a REST API surface, or embedded in another type, as the core of a language binding to a REST API. The exported attributes of requester.Requester{} control how it constructs requests, what client it uses to execute them, and how the responses are handled.

Most methods and functions in the package accept Options, which are functions that configure the attributes of Requesters. The package provides many options for configuring most attributes of Requester.

Receive() builds a request, executes it, and reads the response body; see the sketch below. If a target value is provided, Receive will attempt to unmarshal the body into the target value. The body of the response, if present, is always returned as a []byte, even when unmarshaling or returning an error. By default, Receive uses the response's Content-Type header to determine how to unmarshal the response body into a struct. This can be customized by setting Requester.Unmarshaler.

Requester.QueryParams will be merged into any query parameters encoded into the URL. The QueryParams() option can take a map[string]string, a map[string]interface{}, a url.Values, or a struct. Structs are marshaled into url.Values using "github.com/google/go-querystring".

If Requester.Body is set to a string, []byte, or io.Reader, the value will be used directly as the request body. If Body is any other value, it will be marshaled into the body using the Requester.Marshaler. Note that the default Marshaler is JSON, and it sets the request's Content-Type header.

The HTTP client used to execute requests can also be customized with Options. "github.com/gemalto/requester/httpclient" is a standalone package for constructing and configuring http.Clients. The requester.Client(...httpclient.Option) option constructs a new HTTP client and installs it into Requester.Doer. Requester uses a Doer to execute requests, which is an interface. By default, http.DefaultClient is used, but this can be replaced by a customized client or a mock Doer. Requester itself is a Doer, so it can be nested in another Requester or composed with other packages that support Doers. You can also install middleware into Requester, which can intercept the request and response.
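A sketch of the Receive flow described above; Receive and the Get option appear in this package's documentation, but treat the exact signatures and the URL here as assumptions to verify.

	type Person struct {
		Name string `json:"name"`
	}

	var p Person
	resp, body, err := requester.Receive(&p,
		requester.Get("http://api.example.com/persons/1"), // hypothetical URL
	)
	if err != nil {
		log.Fatal(err)
	}
	// The raw body is always returned as a []byte, even when unmarshaling.
	fmt.Println(resp.StatusCode, len(body), p.Name)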