Package websocket implements the WebSocket protocol defined in RFC 6455.

The Conn type represents a WebSocket connection. A server application calls the Upgrader.Upgrade method from an HTTP request handler to get a *Conn. Call the connection's WriteMessage and ReadMessage methods to send and receive messages as a slice of bytes; the first echo sketch below shows these methods in use. In that sketch, p is a []byte and messageType is an int with value websocket.BinaryMessage or websocket.TextMessage.

An application can also send and receive messages using the io.WriteCloser and io.Reader interfaces. To send a message, call the connection's NextWriter method to get an io.WriteCloser, write the message to the writer, and close the writer when done. To receive a message, call the connection's NextReader method to get an io.Reader and read until io.EOF is returned. The second sketch below echoes messages using the NextWriter and NextReader methods.

The WebSocket protocol distinguishes between text and binary data messages. Text messages are interpreted as UTF-8 encoded text; the interpretation of binary messages is left to the application. This package uses the TextMessage and BinaryMessage integer constants to identify the two data message types. The ReadMessage and NextReader methods return the type of the received message, and the messageType argument to the WriteMessage and NextWriter methods specifies the type of a sent message. It is the application's responsibility to ensure that text messages are valid UTF-8 encoded text.

The WebSocket protocol defines three types of control messages: close, ping and pong. Call the connection's WriteControl, WriteMessage or NextWriter methods to send a control message to the peer. Connections handle received close messages by calling the handler function set with the SetCloseHandler method and by returning a *CloseError from the NextReader, ReadMessage or message Read methods. The default close handler sends a close message to the peer. Connections handle received ping messages by calling the handler function set with the SetPingHandler method; the default ping handler sends a pong message to the peer. Connections handle received pong messages by calling the handler function set with the SetPongHandler method; the default pong handler does nothing. If an application sends ping messages, then the application should set a pong handler to receive the corresponding pongs.

The control message handler functions are called from the NextReader, ReadMessage and message reader Read methods. The default close and ping handlers can block these methods for a short time when the handler writes to the connection. The application must read the connection to process close, ping and pong messages sent from the peer. If the application is not otherwise interested in messages from the peer, then the application should start a goroutine to read and discard messages from the peer; the third sketch below shows a simple way to do this.

Connections support one concurrent reader and one concurrent writer. Applications are responsible for ensuring that no more than one goroutine calls the write methods (NextWriter, SetWriteDeadline, WriteMessage, WriteJSON, EnableWriteCompression, SetCompressionLevel) concurrently and that no more than one goroutine calls the read methods (NextReader, SetReadDeadline, ReadMessage, ReadJSON, SetPongHandler, SetPingHandler) concurrently. The Close and WriteControl methods can be called concurrently with all other methods.
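The following sketches assume a *Conn obtained from Upgrader.Upgrade; the handler wiring and error logging are illustrative. First, echoing messages with ReadMessage and WriteMessage:

    var upgrader = websocket.Upgrader{} // use default options

    func echo(w http.ResponseWriter, r *http.Request) {
        conn, err := upgrader.Upgrade(w, r, nil)
        if err != nil {
            log.Println("upgrade:", err)
            return
        }
        defer conn.Close()
        for {
            messageType, p, err := conn.ReadMessage()
            if err != nil {
                log.Println("read:", err)
                return
            }
            if err := conn.WriteMessage(messageType, p); err != nil {
                log.Println("write:", err)
                return
            }
        }
    }

The same echo loop using the NextReader and NextWriter methods:

    for {
        messageType, r, err := conn.NextReader()
        if err != nil {
            return
        }
        w, err := conn.NextWriter(messageType)
        if err != nil {
            return
        }
        if _, err := io.Copy(w, r); err != nil {
            return
        }
        if err := w.Close(); err != nil {
            return
        }
    }

And a goroutine that reads and discards messages so that close, ping and pong messages from the peer are still processed:

    go func() {
        defer conn.Close()
        for {
            if _, _, err := conn.NextReader(); err != nil {
                return
            }
        }
    }()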
Web browsers allow JavaScript applications to open a WebSocket connection to any host. It's up to the server to enforce an origin policy using the Origin request header sent by the browser. The Upgrader calls the function specified in the CheckOrigin field to check the origin; a sketch appears after the buffer notes below. If the CheckOrigin function returns false, then the Upgrade method fails the WebSocket handshake with HTTP status 403. If the CheckOrigin field is nil, then the Upgrader uses a safe default: fail the handshake if the Origin request header is present and the Origin host is not equal to the Host request header. The deprecated package-level Upgrade function does not perform origin checking; the application is responsible for checking the Origin header before calling the Upgrade function.

Connections buffer network input and output to reduce the number of system calls when reading or writing messages. Write buffers are also used for constructing WebSocket frames. See RFC 6455, Section 5 for a discussion of message framing. A WebSocket frame header is written to the network each time a write buffer is flushed to the network, so decreasing the size of the write buffer can increase the amount of framing overhead on the connection.

The buffer sizes in bytes are specified by the ReadBufferSize and WriteBufferSize fields in the Dialer and Upgrader. The Dialer uses a default size of 4096 when a buffer size field is set to zero. The Upgrader reuses buffers created by the HTTP server when a buffer size field is set to zero. The HTTP server buffers have a size of 4096 at the time of this writing. The buffer sizes do not limit the size of a message that can be read or written by a connection.

Buffers are held for the lifetime of the connection by default. If the Dialer or Upgrader WriteBufferPool field is set, then a connection holds the write buffer only while writing a message.

Applications should tune the buffer sizes to balance memory use and performance. Increasing the buffer size uses more memory, but can reduce the number of system calls to read or write the network. In the case of writing, increasing the buffer size can reduce the number of frame headers written to the network. Some guidelines for setting buffer parameters: limit the buffer sizes to the maximum expected message size, since buffers larger than the largest message provide no benefit. Depending on the distribution of message sizes, setting the buffer size to a value less than the maximum expected message size can greatly reduce memory use with a small impact on performance. Here's an example: if 99% of the messages are smaller than 256 bytes and the maximum message size is 512 bytes, then a buffer size of 256 bytes results in about 1.01 times as many system calls as a buffer size of 512 bytes, while the memory savings is 50%.

A write buffer pool is useful when the application has a modest number of writes over a large number of connections. When buffers are pooled, a larger buffer size has a reduced impact on total memory use and has the benefit of reducing system calls and frame overhead.
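A sketch tying the origin policy and buffer sizing together: a hypothetical Upgrader that accepts a single trusted origin (the origin value and buffer sizes are illustrative, not recommendations):

    var upgrader = websocket.Upgrader{
        ReadBufferSize:  1024,
        WriteBufferSize: 1024,
        CheckOrigin: func(r *http.Request) bool {
            // Allow only a hypothetical trusted origin.
            return r.Header.Get("Origin") == "https://example.com"
        },
    }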
Per message compression extensions (RFC 7692) are experimentally supported by this package in a limited capacity. Setting the EnableCompression option to true in the Dialer or Upgrader will attempt to negotiate per message deflate support. If compression was successfully negotiated with the connection's peer, any message received in compressed form will be automatically decompressed; all Read methods will return uncompressed bytes.

Per message compression of messages written to a connection can be enabled or disabled by calling the connection's EnableWriteCompression method, as sketched below. Currently this package does not support compression with "context takeover": messages must be compressed and decompressed in isolation, without retaining a sliding window or dictionary state across messages. For more details refer to RFC 7692. Use of compression is experimental and may result in decreased performance.
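A minimal sketch of toggling write compression on a connection; the flate level shown is just an example:

    // These calls affect the wire only if compression was negotiated
    // (EnableCompression set on the Dialer or Upgrader).
    conn.EnableWriteCompression(true)
    if err := conn.SetCompressionLevel(flate.BestSpeed); err != nil {
        log.Println("compression level:", err)
    }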
Package otlptracegrpc provides an OTLP span exporter using gRPC. By default the telemetry is sent to https://localhost:4317. Exporter should be created using New; a sketch follows the configuration notes below. The environment variables described below can be used for configuration.

OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_TRACES_ENDPOINT (default: "https://localhost:4317") - target to which the exporter sends telemetry. The target syntax is defined in https://github.com/grpc/grpc/blob/master/doc/naming.md. The value must contain a scheme ("http" or "https") and host. The value may additionally contain a port and a path. The value should not contain a query string or fragment. OTEL_EXPORTER_OTLP_TRACES_ENDPOINT takes precedence over OTEL_EXPORTER_OTLP_ENDPOINT. The configuration can be overridden by the WithEndpoint, WithEndpointURL, WithInsecure, and WithGRPCConn options.

OTEL_EXPORTER_OTLP_INSECURE, OTEL_EXPORTER_OTLP_TRACES_INSECURE (default: "false") - setting "true" disables client transport security for the exporter's gRPC connection. You can use this only when an endpoint is provided without the http or https scheme; a scheme set via OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_TRACES_ENDPOINT overrides this setting. OTEL_EXPORTER_OTLP_TRACES_INSECURE takes precedence over OTEL_EXPORTER_OTLP_INSECURE. The configuration can be overridden by the WithInsecure and WithGRPCConn options.

OTEL_EXPORTER_OTLP_HEADERS, OTEL_EXPORTER_OTLP_TRACES_HEADERS (default: none) - key-value pairs used as gRPC metadata associated with gRPC requests. The value is expected to be represented in a format matching the W3C Baggage HTTP Header Content Format, except that additional semi-colon delimited metadata is not supported. Example value: "key1=value1,key2=value2". OTEL_EXPORTER_OTLP_TRACES_HEADERS takes precedence over OTEL_EXPORTER_OTLP_HEADERS. The configuration can be overridden by the WithHeaders option.

OTEL_EXPORTER_OTLP_TIMEOUT, OTEL_EXPORTER_OTLP_TRACES_TIMEOUT (default: "10000") - maximum time in milliseconds the OTLP exporter waits for each batch export. OTEL_EXPORTER_OTLP_TRACES_TIMEOUT takes precedence over OTEL_EXPORTER_OTLP_TIMEOUT. The configuration can be overridden by the WithTimeout option.

OTEL_EXPORTER_OTLP_COMPRESSION, OTEL_EXPORTER_OTLP_TRACES_COMPRESSION (default: none) - the gRPC compressor the exporter uses. Supported value: "gzip". OTEL_EXPORTER_OTLP_TRACES_COMPRESSION takes precedence over OTEL_EXPORTER_OTLP_COMPRESSION. The configuration can be overridden by the WithCompressor and WithGRPCConn options.

OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE (default: none) - the filepath to the trusted certificate to use when verifying a server's TLS credentials. OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CERTIFICATE. The configuration can be overridden by the WithTLSCredentials and WithGRPCConn options.

OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE (default: none) - the filepath to the client certificate/chain trust for the client's private key to use in mTLS communication, in PEM format. OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE. The configuration can be overridden by the WithTLSCredentials and WithGRPCConn options.

OTEL_EXPORTER_OTLP_CLIENT_KEY, OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY (default: none) - the filepath to the client's private key to use in mTLS communication, in PEM format.
OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY takes precedence over OTEL_EXPORTER_OTLP_CLIENT_KEY. The configuration can be overridden by the WithTLSCredentials and WithGRPCConn options.
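A minimal sketch of creating the exporter and installing it in an SDK TracerProvider; the endpoint is hypothetical, and WithInsecure is only appropriate for a local, non-TLS collector:

    package main

    import (
        "context"
        "log"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
        sdktrace "go.opentelemetry.io/otel/sdk/trace"
    )

    func main() {
        ctx := context.Background()
        exp, err := otlptracegrpc.New(ctx,
            otlptracegrpc.WithEndpoint("localhost:4317"), // hypothetical collector
            otlptracegrpc.WithInsecure(),
        )
        if err != nil {
            log.Fatal(err)
        }
        tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
        defer func() { _ = tp.Shutdown(ctx) }()
        otel.SetTracerProvider(tp)
        // Spans created via otel.Tracer(...) are now exported over gRPC.
    }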
Package restful, a lean package for creating REST-style WebServices without magic.

A WebService has a collection of Route objects that dispatch incoming HTTP requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle HTTP requests from a server.

A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/v3/examples/user-resource/restful-user-resource.go for a full implementation.

A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package (https://code.google.com/p/re2/wiki/Syntax). This feature requires the use of a CurlyRouter.

A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing HTTP requests. Using the statements "restful.Add(...)" and "restful.Filter(...)" will register WebServices and Filters with the Default Container. The Default Container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container.

A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirects, setting response headers, etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction and call chain.ProcessFilter(req, resp) to pass the request/response pair on to the next filter or RouteFunction; the sketch at the end of this section shows one. Container filters are processed before any registered WebService, WebService filters are processed before any Route of a WebService, and Route filters are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/filters/restful-filters.go for full implementations.

Two content encodings are supported: gzip and deflate. Content encoding can be enabled for all responses on a container via its EnableContentEncoding method; if an HTTP request then includes the Accept-Encoding header, the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/encoding/restful-encoding-filter.go

By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS HTTP request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests.

Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available; use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. If the request has a valid URL but the method (GET, PUT, POST, ...) is not allowed, use http.StatusMethodNotAllowed. If the request does not have, or has an unknown, Accept header set for this operation, use http.StatusNotAcceptable. If the request does not have, or has an unknown, Content-Type header set for this operation, use http.StatusUnsupportedMediaType. In addition to setting the correct (error) HTTP status code, you can choose to write a ServiceError message on the response; the sketch below shows the basic pattern.

This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance improves even more when using a preloaded cache. You can also inject your own implementation.

This package has the means to produce detailed logging of the complete HTTP request matching process and filter invocation. Enabling this feature requires you to provide an implementation of restful.StdLogger (e.g. a log.Logger). The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple.

(c) 2012-2015, http://ernestmicklei.com. MIT License
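A sketch under the assumptions above: a container filter that logs each request, plus a route function that maps a lookup failure to http.StatusNotFound. The route path and user store are hypothetical; Filter, ProcessFilter, PathParameter, WriteErrorString, and WriteEntity are package API:

    package main

    import (
        "log"
        "net/http"

        restful "github.com/emicklei/go-restful/v3"
    )

    // globalLogging is a container filter: it runs before any WebService
    // and must call ProcessFilter to continue the chain.
    func globalLogging(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
        log.Printf("%s %s", req.Request.Method, req.Request.URL)
        chain.ProcessFilter(req, resp)
    }

    var users = map[string]string{"1": "Alice"} // hypothetical store

    func findUser(req *restful.Request, resp *restful.Response) {
        name, ok := users[req.PathParameter("user-id")]
        if !ok {
            resp.WriteErrorString(http.StatusNotFound, "User could not be found.")
            return
        }
        resp.WriteEntity(map[string]string{"name": name})
    }

    func main() {
        ws := new(restful.WebService)
        ws.Path("/users").Produces(restful.MIME_JSON)
        ws.Route(ws.GET("/{user-id}").To(findUser))

        restful.Filter(globalLogging) // register with the Default Container
        restful.Add(ws)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }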
Package otlptracehttp provides an OTLP span exporter using HTTP with protobuf payloads. By default the telemetry is sent to https://localhost:4318/v1/traces. Exporter should be created using New. The environment variables described below can be used for configuration.

OTEL_EXPORTER_OTLP_ENDPOINT (default: "https://localhost:4318") - target base URL ("/v1/traces" is appended) to which the exporter sends telemetry. The value must contain a scheme ("http" or "https") and host. The value may additionally contain a port and a path. The value should not contain a query string or fragment. The configuration can be overridden by the OTEL_EXPORTER_OTLP_TRACES_ENDPOINT environment variable and by the WithEndpoint, WithEndpointURL, and WithInsecure options.

OTEL_EXPORTER_OTLP_TRACES_ENDPOINT (default: "https://localhost:4318/v1/traces") - target URL to which the exporter sends telemetry. The value must contain a scheme ("http" or "https") and host. The value may additionally contain a port and a path. The value should not contain a query string or fragment. The configuration can be overridden by the WithEndpoint, WithEndpointURL, WithInsecure, and WithURLPath options.

OTEL_EXPORTER_OTLP_HEADERS, OTEL_EXPORTER_OTLP_TRACES_HEADERS (default: none) - key-value pairs used as headers associated with HTTP requests. The value is expected to be represented in a format matching the W3C Baggage HTTP Header Content Format, except that additional semi-colon delimited metadata is not supported. Example value: "key1=value1,key2=value2". OTEL_EXPORTER_OTLP_TRACES_HEADERS takes precedence over OTEL_EXPORTER_OTLP_HEADERS. The configuration can be overridden by the WithHeaders option.

OTEL_EXPORTER_OTLP_TIMEOUT, OTEL_EXPORTER_OTLP_TRACES_TIMEOUT (default: "10000") - maximum time in milliseconds the OTLP exporter waits for each batch export. OTEL_EXPORTER_OTLP_TRACES_TIMEOUT takes precedence over OTEL_EXPORTER_OTLP_TIMEOUT. The configuration can be overridden by the WithTimeout option.

OTEL_EXPORTER_OTLP_COMPRESSION, OTEL_EXPORTER_OTLP_TRACES_COMPRESSION (default: none) - the compression strategy the exporter uses to compress the HTTP body. Supported value: "gzip". OTEL_EXPORTER_OTLP_TRACES_COMPRESSION takes precedence over OTEL_EXPORTER_OTLP_COMPRESSION. The configuration can be overridden by the WithCompression option.

OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE (default: none) - the filepath to the trusted certificate to use when verifying a server's TLS credentials. OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CERTIFICATE. The configuration can be overridden by the WithTLSClientConfig option.

OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE (default: none) - the filepath to the client certificate/chain trust for the client's private key to use in mTLS communication, in PEM format. OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE. The configuration can be overridden by the WithTLSClientConfig option.

OTEL_EXPORTER_OTLP_CLIENT_KEY, OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY (default: none) - the filepath to the client's private key to use in mTLS communication, in PEM format. OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY takes precedence over OTEL_EXPORTER_OTLP_CLIENT_KEY. The configuration can be overridden by the WithTLSClientConfig option.
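A minimal sketch of creating the HTTP exporter; the endpoint URL is hypothetical, and the "http" scheme implies an insecure, non-TLS connection (sdktrace is go.opentelemetry.io/otel/sdk/trace):

    package main

    import (
        "context"
        "log"

        "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp"
        sdktrace "go.opentelemetry.io/otel/sdk/trace"
    )

    func main() {
        ctx := context.Background()
        exp, err := otlptracehttp.New(ctx,
            otlptracehttp.WithEndpointURL("http://localhost:4318/v1/traces"), // hypothetical collector
        )
        if err != nil {
            log.Fatal(err)
        }
        tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
        defer func() { _ = tp.Shutdown(ctx) }()
    }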
Package gofpdf implements a PDF document generator with high level support for text, drawing and images. Its features include:

- UTF-8 support
- Choice of measurement unit, page format and margins
- Page header and footer management
- Automatic page breaks, line breaks, and text justification
- Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images
- Colors, gradients and alpha channel transparency
- Outline bookmarks
- Internal and external links
- TrueType, Type1 and encoding support
- Page compression
- Lines, Bézier curves, arcs, and ellipses
- Rotation, scaling, skewing, translation, and mirroring
- Clipping
- Document protection
- Layers
- Templates
- Barcodes
- Charting facility
- Import PDFs as templates

gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs.

This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you.

To install the package on your system, run "go get" with the package's import path; later, to receive updates, add the -u flag. A short program that generates a simple PDF file appears at the end of this section; see the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples.

If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error().

This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation.
Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP.

A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples.

Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font; a sketch follows this section. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts.

The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode.
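A short sketch of registering and using a UTF-8 TrueType font; the font file path and family name are hypothetical:

    pdf := gofpdf.New("P", "mm", "A4", "")
    pdf.AddUTF8Font("dejavu", "", "font/DejaVuSansCondensed.ttf") // hypothetical path
    pdf.AddPage()
    pdf.SetFont("dejavu", "", 14)
    pdf.Cell(40, 10, "UTF-8 text, e.g. Olá")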
gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions. Your change should:

- be compatible with the MIT License
- be properly documented
- be formatted with go fmt
- include an example in fpdf_test.go if appropriate
- conform to the standards of golint and go vet; that is, golint . and go vet . should not generate any warnings
- not diminish test coverage

Pull requests are the preferred means of accepting your changes.

gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoek and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage.
Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and file attachments and annotations.

Planned changes:

- Remove all legacy code page font support; use UTF-8 exclusively
- Improve test coverage as reported by the coverage tool

The package example demonstrates the generation of a simple PDF document; a sketch of it follows. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call, where it can be handled by the application.
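A minimal sketch of the simple-document pattern described above; the import path assumes the jung-kurt/gofpdf module, the output file name is illustrative, and OutputFileAndClose stands in for the test helpers mentioned in the text:

    package main

    import (
        "log"

        "github.com/jung-kurt/gofpdf"
    )

    func main() {
        // Empty font directory string: only core fonts are used.
        pdf := gofpdf.New("P", "mm", "A4", "")
        pdf.AddPage()
        pdf.SetFont("Arial", "B", 16)
        pdf.Cell(40, 10, "Hello, world")
        // Errors accumulate internally; checking once at output time suffices.
        if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
            log.Fatal(err)
        }
    }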
Package otlpmetricgrpc provides an OTLP metrics exporter using gRPC. By default the telemetry is sent to https://localhost:4317. Exporter should be created using New and used with a metric.PeriodicReader; a sketch follows the configuration notes below. The environment variables described below can be used for configuration.

OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_METRICS_ENDPOINT (default: "https://localhost:4317") - target to which the exporter sends telemetry. The target syntax is defined in https://github.com/grpc/grpc/blob/master/doc/naming.md. The value must contain a scheme ("http" or "https") and host. The value may additionally contain a port and a path. The value should not contain a query string or fragment. OTEL_EXPORTER_OTLP_METRICS_ENDPOINT takes precedence over OTEL_EXPORTER_OTLP_ENDPOINT. The configuration can be overridden by the WithEndpoint, WithEndpointURL, WithInsecure, and WithGRPCConn options.

OTEL_EXPORTER_OTLP_INSECURE, OTEL_EXPORTER_OTLP_METRICS_INSECURE (default: "false") - setting "true" disables client transport security for the exporter's gRPC connection. You can use this only when an endpoint is provided without the http or https scheme; a scheme set via OTEL_EXPORTER_OTLP_ENDPOINT or OTEL_EXPORTER_OTLP_METRICS_ENDPOINT overrides this setting. OTEL_EXPORTER_OTLP_METRICS_INSECURE takes precedence over OTEL_EXPORTER_OTLP_INSECURE. The configuration can be overridden by the WithInsecure and WithGRPCConn options.

OTEL_EXPORTER_OTLP_HEADERS, OTEL_EXPORTER_OTLP_METRICS_HEADERS (default: none) - key-value pairs used as gRPC metadata associated with gRPC requests. The value is expected to be represented in a format matching the W3C Baggage HTTP Header Content Format, except that additional semi-colon delimited metadata is not supported. Example value: "key1=value1,key2=value2". OTEL_EXPORTER_OTLP_METRICS_HEADERS takes precedence over OTEL_EXPORTER_OTLP_HEADERS. The configuration can be overridden by the WithHeaders option.

OTEL_EXPORTER_OTLP_TIMEOUT, OTEL_EXPORTER_OTLP_METRICS_TIMEOUT (default: "10000") - maximum time in milliseconds the OTLP exporter waits for each batch export. OTEL_EXPORTER_OTLP_METRICS_TIMEOUT takes precedence over OTEL_EXPORTER_OTLP_TIMEOUT. The configuration can be overridden by the WithTimeout option.

OTEL_EXPORTER_OTLP_COMPRESSION, OTEL_EXPORTER_OTLP_METRICS_COMPRESSION (default: none) - the gRPC compressor the exporter uses. Supported value: "gzip". OTEL_EXPORTER_OTLP_METRICS_COMPRESSION takes precedence over OTEL_EXPORTER_OTLP_COMPRESSION. The configuration can be overridden by the WithCompressor and WithGRPCConn options.

OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE (default: none) - the filepath to the trusted certificate to use when verifying a server's TLS credentials. OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CERTIFICATE. The configuration can be overridden by the WithTLSCredentials and WithGRPCConn options.

OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE (default: none) - the filepath to the client certificate/chain trust for the client's private key to use in mTLS communication, in PEM format. OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE. The configuration can be overridden by the WithTLSCredentials and WithGRPCConn options.

OTEL_EXPORTER_OTLP_CLIENT_KEY, OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY (default: none) - the filepath to the client's private key to use in mTLS communication, in PEM format.
OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY takes precedence over OTEL_EXPORTER_OTLP_CLIENT_KEY. The configuration can be overridden by the WithTLSCredentials and WithGRPCConn options.

OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE (default: "cumulative") - aggregation temporality to use on the basis of instrument kind. Supported values: "cumulative", "delta", and "lowmemory". The configuration can be overridden by the WithTemporalitySelector option.

OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION (default: "explicit_bucket_histogram") - default aggregation to use for histogram instruments. Supported values: "explicit_bucket_histogram" and "base2_exponential_bucket_histogram". The configuration can be overridden by the WithAggregationSelector option.
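A minimal sketch of creating the exporter and driving it with a PeriodicReader, as the package expects; the endpoint is hypothetical, and WithInsecure is only appropriate for a local, non-TLS collector:

    package main

    import (
        "context"
        "log"

        "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc"
        sdkmetric "go.opentelemetry.io/otel/sdk/metric"
    )

    func main() {
        ctx := context.Background()
        exp, err := otlpmetricgrpc.New(ctx,
            otlpmetricgrpc.WithEndpoint("localhost:4317"), // hypothetical collector
            otlpmetricgrpc.WithInsecure(),
        )
        if err != nil {
            log.Fatal(err)
        }
        mp := sdkmetric.NewMeterProvider(
            sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp)),
        )
        defer func() { _ = mp.Shutdown(ctx) }()
        // Obtain Meters from mp and record measurements; the reader
        // periodically collects and exports them over gRPC.
    }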
Package archiver facilitates convenient, cross-platform, high-level archival and compression operations for a variety of formats and compression algorithms. This package and its dependencies are written in pure Go (not cgo) and have no external dependencies, so they should run on all major platforms. (It also comes with a command for CLI use in the cmd/arc folder.) Each supported format or algorithm has a unique type definition that implements the interfaces corresponding to the tasks they perform. For example, the Tar type implements Reader, Writer, Archiver, Unarchiver, Walker, and several other interfaces. The most common functions are implemented at the package level for convenience: Archive, Unarchive, Walk, Extract, CompressFile, and DecompressFile. With these, the format type is chosen implicitly, and a sane default configuration is used. To customize a format's configuration, create an instance of its struct with its fields set to the desired values. You can also use and customize the handy Default* (replace the wildcard with the format's type name) for a quick, one-off instance of the format's type. To obtain a new instance of a format's struct with the default config, use the provided New*() functions. This is not required, however. An empty struct of any type, for example &Zip{} is perfectly valid, so you may create the structs manually, too. The examples on this page show how either may be done. See the examples in this package for an idea of how to wield this package for common tasks. Most of the examples which are specific to a certain format type, for example Zip, can be applied to other types that implement the same interfaces. For example, using Zip is very similar to using Tar or TarGz (etc), and using Gz is very similar to using Sz or Xz (etc). When creating archives or compressing files using a specific instance of the format's type, the name of the output file MUST match that of the format, to prevent confusion later on. If you absolutely need a different file extension, you may rename the file afterward. Values in this package are NOT safe for concurrent use. There is no performance benefit of reusing them, and since they may contain important state (especially while walking, reading, or writing), it is NOT recommended to reuse values from this package or change their configuration after they are in use.
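A sketch of both styles described above, assuming the mholt/archiver/v3 module; the file and directory names are hypothetical:

    package main

    import (
        "log"

        "github.com/mholt/archiver/v3"
    )

    func main() {
        // Package-level convenience functions choose the format from the
        // output file name and use a sane default configuration.
        if err := archiver.Archive([]string{"testdata", "notes.txt"}, "backup.zip"); err != nil {
            log.Fatal(err)
        }
        if err := archiver.Unarchive("backup.zip", "restore"); err != nil {
            log.Fatal(err)
        }

        // For custom behavior, configure an instance of the format's type.
        z := archiver.Zip{
            CompressionLevel:  9,
            OverwriteExisting: true,
            MkdirAll:          true,
        }
        if err := z.Archive([]string{"testdata"}, "custom.zip"); err != nil {
            log.Fatal(err)
        }
    }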
Package otlpmetrichttp provides an OTLP metrics exporter using HTTP with protobuf payloads. By default the telemetry is sent to https://localhost:4318/v1/metrics. Exporter should be created using New and used with a metric.PeriodicReader. The environment variables described below can be used for configuration.

OTEL_EXPORTER_OTLP_ENDPOINT (default: "https://localhost:4318") - target base URL ("/v1/metrics" is appended) to which the exporter sends telemetry. The value must contain a scheme ("http" or "https") and host. The value may additionally contain a port and a path. The value should not contain a query string or fragment. The configuration can be overridden by the OTEL_EXPORTER_OTLP_METRICS_ENDPOINT environment variable and by the WithEndpoint, WithEndpointURL, and WithInsecure options.

OTEL_EXPORTER_OTLP_METRICS_ENDPOINT (default: "https://localhost:4318/v1/metrics") - target URL to which the exporter sends telemetry. The value must contain a scheme ("http" or "https") and host. The value may additionally contain a port and a path. The value should not contain a query string or fragment. The configuration can be overridden by the WithEndpoint, WithEndpointURL, WithInsecure, and WithURLPath options.

OTEL_EXPORTER_OTLP_HEADERS, OTEL_EXPORTER_OTLP_METRICS_HEADERS (default: none) - key-value pairs used as headers associated with HTTP requests. The value is expected to be represented in a format matching the W3C Baggage HTTP Header Content Format, except that additional semi-colon delimited metadata is not supported. Example value: "key1=value1,key2=value2". OTEL_EXPORTER_OTLP_METRICS_HEADERS takes precedence over OTEL_EXPORTER_OTLP_HEADERS. The configuration can be overridden by the WithHeaders option.

OTEL_EXPORTER_OTLP_TIMEOUT, OTEL_EXPORTER_OTLP_METRICS_TIMEOUT (default: "10000") - maximum time in milliseconds the OTLP exporter waits for each batch export. OTEL_EXPORTER_OTLP_METRICS_TIMEOUT takes precedence over OTEL_EXPORTER_OTLP_TIMEOUT. The configuration can be overridden by the WithTimeout option.

OTEL_EXPORTER_OTLP_COMPRESSION, OTEL_EXPORTER_OTLP_METRICS_COMPRESSION (default: none) - the compression strategy the exporter uses to compress the HTTP body. Supported value: "gzip". OTEL_EXPORTER_OTLP_METRICS_COMPRESSION takes precedence over OTEL_EXPORTER_OTLP_COMPRESSION. The configuration can be overridden by the WithCompression option.

OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE (default: none) - the filepath to the trusted certificate to use when verifying a server's TLS credentials. OTEL_EXPORTER_OTLP_METRICS_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CERTIFICATE. The configuration can be overridden by the WithTLSClientConfig option.

OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE (default: none) - the filepath to the client certificate/chain trust for the client's private key to use in mTLS communication, in PEM format. OTEL_EXPORTER_OTLP_METRICS_CLIENT_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE. The configuration can be overridden by the WithTLSClientConfig option.

OTEL_EXPORTER_OTLP_CLIENT_KEY, OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY (default: none) - the filepath to the client's private key to use in mTLS communication, in PEM format. OTEL_EXPORTER_OTLP_METRICS_CLIENT_KEY takes precedence over OTEL_EXPORTER_OTLP_CLIENT_KEY. The configuration can be overridden by the WithTLSClientConfig option.

OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE (default: "cumulative") - aggregation temporality to use on the basis of instrument kind.
Supported values: "cumulative", "delta", and "lowmemory". The configuration can be overridden by the WithTemporalitySelector option.

OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION (default: "explicit_bucket_histogram") - default aggregation to use for histogram instruments. Supported values: "explicit_bucket_histogram" and "base2_exponential_bucket_histogram". The configuration can be overridden by the WithAggregationSelector option.
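A minimal sketch of the HTTP variant, again driven by a PeriodicReader; the endpoint URL is hypothetical, and the "http" scheme implies an insecure, non-TLS connection:

    package main

    import (
        "context"
        "log"

        "go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp"
        sdkmetric "go.opentelemetry.io/otel/sdk/metric"
    )

    func main() {
        ctx := context.Background()
        exp, err := otlpmetrichttp.New(ctx,
            otlpmetrichttp.WithEndpointURL("http://localhost:4318/v1/metrics"), // hypothetical collector
        )
        if err != nil {
            log.Fatal(err)
        }
        mp := sdkmetric.NewMeterProvider(
            sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp)),
        )
        defer func() { _ = mp.Shutdown(ctx) }()
    }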
Package handlers is a collection of handlers (aka "HTTP middleware") for use with Go's net/http package (or any framework supporting http.Handler). The package includes handlers for logging in standardised formats, compressing HTTP responses, validating content types and other useful tools for manipulating requests and responses.
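A sketch of composing two of the provided handlers around a plain http.Handler; the route and listen address are hypothetical:

    package main

    import (
        "log"
        "net/http"
        "os"

        "github.com/gorilla/handlers"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello"))
        })
        // Log requests in Apache Common Log Format, and gzip/deflate
        // responses when the client's Accept-Encoding allows it.
        h := handlers.LoggingHandler(os.Stdout, handlers.CompressHandler(mux))
        log.Fatal(http.ListenAndServe(":8080", h))
    }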
Package roaring is an implementation of Roaring Bitmaps in Go. They provide fast compressed bitmap data structures (also called bitset). They are ideally suited to represent sets of integers over relatively small ranges. See http://roaringbitmap.org for details.
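A brief sketch of typical operations on roaring bitmaps; the values are arbitrary:

    package main

    import (
        "fmt"

        "github.com/RoaringBitmap/roaring"
    )

    func main() {
        a := roaring.BitmapOf(1, 2, 3, 1000)
        b := roaring.BitmapOf(3, 4, 1000)
        a.Add(7)                                // a = {1,2,3,7,1000}
        fmt.Println(a.Contains(7))              // true
        fmt.Println(roaring.And(a, b).String()) // intersection: {3,1000}
        fmt.Println(a.GetCardinality())         // 5
    }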
Package snappy implements the Snappy compression format. It aims for very high speeds and reasonable compression. There are actually two Snappy formats: block and stream. They are related, but different: trying to decompress block-compressed data as a Snappy stream will fail, and vice versa. The block format is the Decode and Encode functions and the stream format is the Reader and Writer types. The block format, the more common case, is used when the complete size (the number of bytes) of the original data is known upfront, at the time compression starts. The stream format, also known as the framing format, is for when that isn't always true. The canonical, C++ implementation is at https://github.com/google/snappy and it only implements the block format.
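A sketch of the block format round trip with Encode and Decode; the sample input is arbitrary (for the stream format, wrap an io.Writer/io.Reader with the Writer and Reader types instead):

    package main

    import (
        "fmt"

        "github.com/golang/snappy"
    )

    func main() {
        src := []byte("hello hello hello hello") // repetitive data compresses well
        // Block format: the complete input is known up front.
        enc := snappy.Encode(nil, src)
        dec, err := snappy.Decode(nil, enc)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d -> %d bytes, round-trip ok: %v\n",
            len(src), len(enc), string(dec) == string(src))
    }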
Package lz4 implements reading and writing lz4 compressed data. The package supports both the LZ4 stream format, as specified in http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html, and the LZ4 block format, defined at http://fastcompression.blogspot.fr/2011/05/lz4-explained.html. See https://github.com/lz4/lz4 for the reference C implementation.
Package lz4 implements reading and writing lz4 compressed data (a frame), as specified in http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html. Although the block level compression and decompression functions are exposed and are fully compatible with the lz4 block format definition, they are low level and should not be used directly. For a complete description of an lz4 compressed block, see: http://fastcompression.blogspot.fr/2011/05/lz4-explained.html See https://github.com/Cyan4973/lz4 for the reference C implementation.
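A sketch of the stream (frame) format using the pierrec/lz4 package that both notes above describe; the file names are hypothetical, and lz4.NewReader wraps an io.Reader the same way for decompression:

    package main

    import (
        "io"
        "log"
        "os"

        "github.com/pierrec/lz4"
    )

    func main() {
        src, err := os.Open("input.txt") // hypothetical file
        if err != nil {
            log.Fatal(err)
        }
        defer src.Close()
        dst, err := os.Create("input.txt.lz4")
        if err != nil {
            log.Fatal(err)
        }
        defer dst.Close()

        zw := lz4.NewWriter(dst) // compress everything written to dst
        if _, err := io.Copy(zw, src); err != nil {
            log.Fatal(err)
        }
        if err := zw.Close(); err != nil { // flush the final frame
            log.Fatal(err)
        }
    }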
bindata converts any file into manageable Go source code. Useful for embedding binary data into a Go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call.

When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets.

The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. In short: the default mode uses an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`, while the `.rodata` hack provides the same functionality with a byte slice that cannot be written to without generating a runtime error.

The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression.

The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag, `-prefix`.
This accepts a portion of a path name, which should be stripped off from the map keys and function names: running without the `-prefix` flag yields map keys that include the full input path, while running with `-prefix` strips the given portion from them. With the optional Tags field, you can specify any Go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool.
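A sketch of invoking Translate programmatically, assuming the jteeuwen/go-bindata import path; the directory, package name, output path, and tag are hypothetical:

    package main

    import (
        "log"

        bindata "github.com/jteeuwen/go-bindata"
    )

    func main() {
        cfg := bindata.NewConfig()
        cfg.Package = "assets"           // package name of the generated file
        cfg.Output = "assets/bindata.go" // where to write the generated code
        cfg.Prefix = "data"              // strip this path prefix from map keys
        cfg.Tags = "embed"               // appended to a // +build line
        cfg.Input = []bindata.InputConfig{{Path: "data", Recursive: true}}
        if err := bindata.Translate(cfg); err != nil {
            log.Fatal(err)
        }
    }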
Package vfsgen takes an http.FileSystem (likely at `go generate` time) and generates Go code that statically implements the provided http.FileSystem. Features:

- Efficient generated code without unnecessary overhead.
- Uses gzip compression internally (selectively, only for files that compress well).
- Enables direct access to internal gzip compressed bytes via an optional interface.
- Outputs `gofmt`ed Go code.

Code like the sketch below will generate an assets_vfsdata.go file with `var assets http.FileSystem = ...` that statically implements the contents of the "assets" directory. vfsgen is great to use with go generate directives. Such code can go in an assets_gen.go file, which can then be invoked via "//go:generate go run assets_gen.go". The input virtual filesystem can read directly from disk, or it can be more involved.
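A sketch assuming the shurcooL/vfsgen import path; the "assets" directory is hypothetical, and the ignore build tag keeps this generator out of normal builds:

    // +build ignore

    package main

    import (
        "log"
        "net/http"

        "github.com/shurcooL/vfsgen"
    )

    func main() {
        // The input http.FileSystem here reads directly from disk.
        var assets http.FileSystem = http.Dir("assets")
        err := vfsgen.Generate(assets, vfsgen.Options{
            PackageName:  "main",
            VariableName: "assets",
            Filename:     "assets_vfsdata.go",
        })
        if err != nil {
            log.Fatal(err)
        }
    }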
Package pgzip implements reading and writing of gzip format compressed files, as specified in RFC 1952. This is a drop-in replacement for "compress/gzip". It splits compression into blocks that are compressed in parallel, which can be useful for compressing large amounts of data. The gzip decompression has not been modified, but remains in the package, so you can use it as a complete replacement for "compress/gzip". See more at https://github.com/klauspost/pgzip
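A sketch of parallel compression; the output file name and concurrency settings are illustrative:

    package main

    import (
        "io"
        "log"
        "os"

        "github.com/klauspost/pgzip"
    )

    func main() {
        dst, err := os.Create("out.gz") // hypothetical output
        if err != nil {
            log.Fatal(err)
        }
        defer dst.Close()

        w := pgzip.NewWriter(dst)
        // Optionally tune parallelism: 1 MB blocks, up to 4 compressed concurrently.
        if err := w.SetConcurrency(1<<20, 4); err != nil {
            log.Fatal(err)
        }
        if _, err := io.Copy(w, os.Stdin); err != nil {
            log.Fatal(err)
        }
        if err := w.Close(); err != nil {
            log.Fatal(err)
        }
    }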
Package cidranger provides a utility to store CIDR blocks and perform IP inclusion tests against them. You first create a new instance of the path-compressed trie, then insert or remove entries (any object that satisfies the RangerEntry interface). If you want a value attached to an entry, simply create a custom struct that satisfies the RangerEntry interface. You can then test whether an IP is contained in the constructed networks ranger, get a list of the CIDR blocks in the ranger that contain a given IP, or enumerate all IPv4/IPv6 entries respectively. Each of these operations is shown in the sketch below.
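A minimal sketch covering these operations, assuming the github.com/yl2chen/cidranger import path; the network values and the custom entry type are illustrative:

```go
package main

import (
	"fmt"
	"log"
	"net"

	"github.com/yl2chen/cidranger"
)

// labeledEntry attaches an arbitrary value to a network; it satisfies
// RangerEntry by returning the network it represents.
type labeledEntry struct {
	network net.IPNet
	label   string
}

func (e labeledEntry) Network() net.IPNet { return e.network }

func main() {
	// Create a new path-compressed trie.
	ranger := cidranger.NewPCTrieRanger()

	// Insert entries: a basic one and one carrying a custom value.
	_, lan, _ := net.ParseCIDR("192.168.0.0/24")
	_, corp, _ := net.ParseCIDR("10.0.0.0/8")
	if err := ranger.Insert(cidranger.NewBasicRangerEntry(*lan)); err != nil {
		log.Fatalln(err)
	}
	if err := ranger.Insert(labeledEntry{network: *corp, label: "corp"}); err != nil {
		log.Fatalln(err)
	}

	// Test whether an IP is contained in the constructed networks.
	contains, _ := ranger.Contains(net.ParseIP("192.168.0.42"))
	fmt.Println(contains) // true

	// List the CIDR blocks containing an IP.
	entries, _ := ranger.ContainingNetworks(net.ParseIP("10.1.2.3"))
	fmt.Println(len(entries)) // 1

	// Remove an entry by its network.
	if _, err := ranger.Remove(*lan); err != nil {
		log.Fatalln(err)
	}

	// List all IPv4 (or IPv6) entries via the all-covering networks.
	v4, _ := ranger.CoveredNetworks(*cidranger.AllIPv4)
	fmt.Println(len(v4))
}
```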
Per message compression of messages written to a websocket connection can be enabled or disabled by calling the corresponding Conn method, as sketched below. Currently the websocket package does not support compression with "context takeover": messages must be compressed and decompressed in isolation, without retaining sliding window or dictionary state across messages. For more details refer to RFC 7692. Use of compression is experimental and may result in decreased performance.
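A minimal sketch of the toggle, in the form of a hypothetical helper; the helper name and the flate level are illustrative, and conn is assumed to come from Upgrader.Upgrade or Dialer.Dial with EnableCompression negotiated:

```go
package main

import (
	"compress/flate"
	"log"

	"github.com/gorilla/websocket"
)

// configureCompression demonstrates the per-connection controls.
func configureCompression(conn *websocket.Conn) {
	// Skip compression for subsequent written messages, e.g. for
	// payloads that are already compressed.
	conn.EnableWriteCompression(false)

	// Or re-enable it and pick a deflate level from compress/flate.
	conn.EnableWriteCompression(true)
	if err := conn.SetCompressionLevel(flate.BestSpeed); err != nil {
		log.Println(err)
	}
}
```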
Package lz4 implements reading and writing of lz4 compressed data (as a frame), as specified in http://fastcompression.blogspot.fr/2013/04/lz4-streaming-format-final.html. Although the block-level compression and decompression functions are exposed and are fully compatible with the lz4 block format definition, they are low level and should not be used directly. For a complete description of an lz4 compressed block, see: http://fastcompression.blogspot.fr/2011/05/lz4-explained.html See https://github.com/Cyan4973/lz4 for the reference C implementation.
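A minimal sketch of frame-level compression, assuming the github.com/pierrec/lz4 import path for the Go implementation this documentation describes; the file names are illustrative:

```go
package main

import (
	"io"
	"log"
	"os"

	"github.com/pierrec/lz4"
)

func main() {
	in, err := os.Open("data.bin")
	if err != nil {
		log.Fatalln(err)
	}
	defer in.Close()

	out, err := os.Create("data.bin.lz4")
	if err != nil {
		log.Fatalln(err)
	}
	defer out.Close()

	// Wrap the destination in a frame-level writer; Close finalizes
	// the lz4 frame.
	zw := lz4.NewWriter(out)
	if _, err := io.Copy(zw, in); err != nil {
		log.Fatalln(err)
	}
	if err := zw.Close(); err != nil {
		log.Fatalln(err)
	}
}
```

Reading a frame back is symmetric: wrap the source with lz4.NewReader and io.Copy from it.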