Package pubsub provides an easy way to publish and receive Google Cloud Pub/Sub messages, hiding the details of the underlying server RPCs. Pub/Sub is a many-to-many, asynchronous messaging system that decouples senders and receivers. More information about Pub/Sub is available at https://cloud.google.com/pubsub/docs. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Pub/Sub messages are published to topics. A Topic may be created using Client.CreateTopic. Messages may then be published to a Topic: Topic.Publish queues the message for publishing and returns immediately. When enough messages have accumulated, or enough time has elapsed, the batch of messages is sent to the Pub/Sub service. Topic.Publish returns a PublishResult, which behaves like a future: its Get method blocks until the message has been sent to the service. The first time you call Topic.Publish on a Topic, goroutines are started in the background. To clean up these goroutines, call Topic.Stop. To receive messages published to a Topic, clients create a Subscription for the topic. There may be more than one subscription per topic; each message that is published to the topic will be delivered to all associated subscriptions. A Subscription may be created using Client.CreateSubscription. Messages are then consumed from a Subscription via callback. The callback is invoked concurrently by multiple goroutines, maximizing throughput. To terminate a call to Subscription.Receive, cancel its context. Once client code has processed the Message, it must call Message.Ack or Message.Nack; otherwise the Message will eventually be redelivered. Ack/Nack MUST be called within the Subscription.Receive handler function, and not from a goroutine. Otherwise, flow control (e.g. ReceiveSettings.MaxOutstandingMessages) will not be respected, and messages can get orphaned when cancelling Receive. If the client cannot or doesn't want to process the message, it can call Message.Nack to speed redelivery. For more information and configuration options, see Ack Deadlines below. Note: It is possible for a Message to be redelivered even if Message.Ack has been called. Client code must be robust to multiple deliveries of messages. Note: This uses pubsub's streaming pull feature. This feature has properties that may be surprising. Please take a look at https://cloud.google.com/pubsub/docs/pull#streamingpull for more details on how streaming pull behaves compared to the synchronous pull method. The number of StreamingPull connections can be configured by setting NumGoroutines in ReceiveSettings. The default value of 10 means the client library will maintain 10 StreamingPull connections. This is more than sufficient for most use cases, as StreamingPull connections can handle up to 10 MB/s (see https://cloud.google.com/pubsub/quotas#resource_limits). In some cases, using too many streams can lead to the client library behaving poorly as the application becomes I/O bound. By default, the number of connections in the gRPC connection pool is min(4, GOMAXPROCS). Each connection supports up to 100 streams. Thus, if you have 4 or more CPU cores, the default setting allows a maximum of 400 streams, which is already excessive for most use cases. If you want to change the limits on the number of streams, you can change the number of connections in the gRPC connection pool as shown below:
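A minimal sketch of the publish/receive flow and the connection-pool option described above; the project, topic, and subscription IDs are placeholders, and error handling is shortened for brevity:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/pubsub"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Optionally limit the gRPC connection pool (and therefore the stream budget).
	client, err := pubsub.NewClient(ctx, "my-project",
		option.WithGRPCConnectionPool(4))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	topic, err := client.CreateTopic(ctx, "my-topic")
	if err != nil {
		log.Fatal(err)
	}

	// Publish queues the message; Get blocks until it has been sent.
	res := topic.Publish(ctx, &pubsub.Message{Data: []byte("payload")})
	id, err := res.Get(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("published message", id)
	topic.Stop() // flush pending messages and stop the background goroutines

	sub, err := client.CreateSubscription(ctx, "my-sub",
		pubsub.SubscriptionConfig{Topic: topic})
	if err != nil {
		log.Fatal(err)
	}

	// Receive blocks until ctx is cancelled; Ack/Nack must be called inside the handler.
	err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
		fmt.Printf("got message: %q\n", m.Data)
		m.Ack()
	})
	if err != nil {
		log.Fatal(err)
	}
}
```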
The default pubsub deadlines are suitable for most use cases, but may be overridden. This section describes the tradeoffs that should be considered when overriding the defaults. Behind the scenes, each message returned by the Pub/Sub server has an associated lease, known as an "ack deadline". Unless a message is acknowledged within the ack deadline, or the client requests that the ack deadline be extended, the message will become eligible for redelivery. As a convenience, the pubsub client will automatically extend deadlines until either: Ack deadlines are extended periodically by the client. The period between extensions, as well as the length of the extension, automatically adjusts based on the time it takes the subscriber application to ack messages (based on the 99th percentile of ack latency). By default, this extension period is capped at 10m, but this limit can be configured by the "MaxExtensionPeriod" setting. This has the effect that subscribers that process messages quickly have their message ack deadlines extended by a short amount, whereas subscribers that process messages slowly have their message ack deadlines extended by a large amount. The net effect is fewer RPCs sent from the client library. For example, consider a subscriber that takes 3 minutes to process each message. Since the library has already recorded several 3-minute ack latencies in a percentile distribution, future message extensions are sent with a value of 3 minutes, every 3 minutes. Suppose the application crashes 5 seconds after the library sends such an extension: the Pub/Sub server would wait the remaining 2m55s before re-sending the messages out to other subscribers. Please note that by default, the client library does not use the subscription's AckDeadline for the MaxExtension value. For use cases where message processing exceeds 30 minutes, we recommend using the base client in a pull model, since long-lived streams are periodically killed by firewalls. See the example at https://godoc.org/cloud.google.com/go/pubsub/apiv1#example-SubscriberClient-Pull-LengthyClientProcessing To use an emulator with this library, you can set the PUBSUB_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Pub/Sub. You can then create and use a client as usual:
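For instance, a short sketch of pointing the client at a local emulator; the host, port, and project ID below are placeholders:

```go
import (
	"context"
	"log"
	"os"

	"cloud.google.com/go/pubsub"
)

func newEmulatorClient() *pubsub.Client {
	// Normally this variable is set in the shell that starts the program;
	// setting it here keeps the sketch self-contained.
	os.Setenv("PUBSUB_EMULATOR_HOST", "localhost:8085")

	client, err := pubsub.NewClient(context.Background(), "emulator-project")
	if err != nil {
		log.Fatal(err)
	}
	return client
}
```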
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming HTTP requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle HTTP requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and an http.ServeMux for multiplexing HTTP requests. Using the statements restful.Add(...) and restful.Filter(...) will register WebServices and Filters to the Default Container. The Default Container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction and use the following statement to pass the request/response pair to the next filter or RouteFunction. Container filters are processed before any registered WebService. WebService filters are processed before any Route of a WebService. Route filters are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If an HTTP request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS HTTP request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available; in that case use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. If the request has a valid URL but the method (GET, PUT, POST, ...) is not allowed, use http.StatusMethodNotAllowed. If the request does not have, or has an unknown, Accept header set for this operation, use http.StatusNotAcceptable. If the request does not have, or has an unknown, Content-Type header set for this operation, use http.StatusUnsupportedMediaType. In addition to setting the correct (error) HTTP status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. The default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is improved even more when using a preloaded cache. You can also inject your own implementation. This package has the means to produce detailed logging of the complete HTTP request matching process and filter invocation. Enabling this feature requires you to set a restful.StdLogger implementation (e.g. a log.Logger instance). The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple. (c) 2012-2015, http://ernestmicklei.com. MIT License
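A minimal sketch of the pieces described above, using a hypothetical user resource: a WebService with one Route, a container-level logging filter that calls ProcessFilter to continue the chain, and an error response written with a ServiceError:

```go
package main

import (
	"log"
	"net/http"

	restful "github.com/emicklei/go-restful"
)

// globalLogging is a FilterFunction; it must call ProcessFilter to continue the chain.
func globalLogging(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
	log.Printf("%s %s", req.Request.Method, req.Request.URL)
	chain.ProcessFilter(req, resp)
}

func findUser(req *restful.Request, resp *restful.Response) {
	id := req.PathParameter("user-id")
	if id == "" {
		resp.WriteServiceError(http.StatusBadRequest,
			restful.NewError(http.StatusBadRequest, "missing user-id"))
		return
	}
	resp.WriteEntity(map[string]string{"id": id})
}

func main() {
	ws := new(restful.WebService)
	ws.Path("/users").
		Consumes(restful.MIME_JSON).
		Produces(restful.MIME_JSON)
	ws.Route(ws.GET("/{user-id}").To(findUser))

	restful.Filter(globalLogging) // container filter on the Default Container
	restful.Add(ws)               // register the WebService with the Default Container

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```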
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming HTTP requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle HTTP requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/v3/examples/user-resource/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and an http.ServeMux for multiplexing HTTP requests. Using the statements restful.Add(...) and restful.Filter(...) will register WebServices and Filters to the Default Container. The Default Container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction and use the following statement to pass the request/response pair to the next filter or RouteFunction. Container filters are processed before any registered WebService. WebService filters are processed before any Route of a WebService. Route filters are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/filters/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If an HTTP request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/encoding/restful-encoding-filter.go By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS HTTP request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available; in that case use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. If the request has a valid URL but the method (GET, PUT, POST, ...) is not allowed, use http.StatusMethodNotAllowed. If the request does not have, or has an unknown, Accept header set for this operation, use http.StatusNotAcceptable. If the request does not have, or has an unknown, Content-Type header set for this operation, use http.StatusUnsupportedMediaType. In addition to setting the correct (error) HTTP status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. The default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is improved even more when using a preloaded cache. You can also inject your own implementation. This package has the means to produce detailed logging of the complete HTTP request matching process and filter invocation. Enabling this feature requires you to set a restful.StdLogger implementation (e.g. a log.Logger instance). The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple. (c) 2012-2015, http://ernestmicklei.com. MIT License
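A sketch of the container-level options mentioned above (custom logger, content encoding, and CORS/OPTIONS filters), using the v3 import path; the allowed headers and methods are illustrative choices:

```go
package main

import (
	"log"
	"net/http"
	"os"

	restful "github.com/emicklei/go-restful/v3"
)

func main() {
	// Use a standard library logger for the package's diagnostic output.
	restful.SetLogger(log.New(os.Stdout, "[restful] ", log.LstdFlags|log.Lshortfile))

	// Compress responses when the client sends Accept-Encoding: gzip or deflate.
	restful.DefaultContainer.EnableContentEncoding(true)

	// Answer CORS and OPTIONS requests for all registered WebServices.
	cors := restful.CrossOriginResourceSharing{
		AllowedHeaders: []string{"Content-Type", "Accept"},
		AllowedMethods: []string{"GET", "POST", "PUT", "DELETE"},
		Container:      restful.DefaultContainer,
	}
	restful.Filter(cors.Filter)
	restful.Filter(restful.OPTIONSFilter())

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```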
Goserial is a simple Go package that allows you to read and write from the serial port as a stream of bytes. It aims to have the same API on all platforms, including Windows. As an added bonus, the Windows package does not use cgo, so you can cross compile for Windows from another platform. Unfortunately goinstall does not currently let you cross compile, so you will have to do it manually: Currently there is very little in the way of configurability. You can set the baud rate. Then you can Read(), Write(), or Close() the connection. Read() will block until at least one byte is returned; Write is the same. There is currently no exposed way to set the timeouts, though patches are welcome. Currently all ports are opened with 8 data bits, 1 stop bit, no parity, no hardware flow control, and no software flow control. This works fine for many real devices and many faux serial devices including usb-to-serial converters and bluetooth serial ports. You may Read() and Write() simultaneously on the same connection (from different goroutines). Example usage:
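A sketch of that usage; the import path and device name are assumptions, so adjust them to wherever the package lives in your setup:

```go
package main

import (
	"log"

	"github.com/tarm/serial" // assumed import path for the goserial package
)

func main() {
	// Only the baud rate is configurable; the device name is a placeholder.
	c := &serial.Config{Name: "/dev/ttyUSB0", Baud: 115200}
	s, err := serial.OpenPort(c)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	if _, err := s.Write([]byte("test")); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 128)
	n, err := s.Read(buf) // blocks until at least one byte arrives
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes: %q", n, buf[:n])
}
```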
Package swf provides the API client, operations, and parameter types for Amazon Simple Workflow Service. The Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that use Amazon's cloud to coordinate work across distributed components. In Amazon SWF, a task represents a logical unit of work that is performed by a component of your workflow. Coordinating tasks in a workflow involves managing intertask dependencies, scheduling, and concurrency in accordance with the logical flow of the application. Amazon SWF gives you full control over implementing tasks and coordinating them without worrying about underlying complexities such as tracking their progress and maintaining their state. This documentation serves as reference only. For a broader overview of the Amazon SWF programming model, see the Amazon SWF Developer Guide.
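As a rough sketch of creating the client with the AWS SDK for Go v2 and calling one of its operations; credentials and region come from the default configuration sources, and the ListDomains call is just an illustrative choice:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/swf"
	"github.com/aws/aws-sdk-go-v2/service/swf/types"
)

func main() {
	ctx := context.TODO()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := swf.NewFromConfig(cfg)

	// List the registered SWF domains visible to this account.
	out, err := client.ListDomains(ctx, &swf.ListDomainsInput{
		RegistrationStatus: types.RegistrationStatusRegistered,
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range out.DomainInfos {
		log.Println(aws.ToString(d.Name))
	}
}
```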
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped; alternatively, use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if it has not received a successful HTTP response. requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up after the timeout length if it has not received a successful HTTP response. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (default). If you want to cache your MFA logins, use the AuthTypeUsernamePasswordMFA authenticator. To use programmatic access tokens, specify programmatic_access_token. To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta).
To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth with a token, specify oauth and provide an OAuth Access Token (see the token parameter below). To authenticate via the full OAuth flow, specify oauth_authorization_code or oauth_client_credentials and fill in the relevant parameters (oauthClientId, oauthClientSecret, oauthAuthorizationUrl, oauthTokenRequestUrl, oauthRedirectUri, oauthScope). Specify URLs if you want to use an external OAuth2 IdP, otherwise Snowflake will be used as the default IdP. If oauthScope is not configured, the role is used (giving the session:role:<roleName> scope). For more information, please refer to the official Snowflake documentation. application: Identifies your application to Snowflake Support. disableOCSPChecks: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. insecureMode: deprecated. Use disableOCSPChecks instead. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, so that the connection session will never expire. Care should be taken in using this option, as it keeps the access open for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check operate in fail-closed mode. validateDefaultParameters: true by default. Set to false to disable the existence and privilege checks for the Database, Schema, Warehouse and Role when setting up the connection. tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic. disableQueryContextCache: disables parsing of the query context returned from the server and resending it to the server. Default value is false. clientConfigFile: specifies the location of the client configuration json file. In this file you can configure the Easy Logging feature. disableSamlURLCheck: disables the SAML URL check. Default value is false. All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config. You can also connect to your warehouse using the connection config. The database/sql documentation recommends this approach when you want to take advantage of driver-specific connection features that aren't available in a connection string; each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. For example: If you are using this method, you don't need to pass a driver name to specify the driver type with which you want to connect. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTERATION` in your environment. This is useful if you wish to register multiple versions of the driver.
Note: `GOSNOWFLAKE_SKIP_REGISTERATION` should not be used if sql.Open() is used as the method to connect to the server, as sql.Open will require registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}. You can load the connection configuration from a .toml file. With two environment variables, `SNOWFLAKE_HOME` (the `connections.toml` file directory) and `SNOWFLAKE_DEFAULT_CONNECTION_NAME` (the DSN name), the driver will search the config file and load the connection. You can find out how to use this connection method at ./cmd/tomlfileconnection or in the Snowflake doc: https://docs.snowflake.com/en/developer-guide/snowflake-cli-v2/connecting/specify-credentials The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's built-in logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. In order to enable debug logging for the driver, users can use SetLogLevel("debug") in the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, the SFLogger.SetOutput method can do the work. If you want to define S3 client logging, override the S3LoggingMode variable using configuration: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode Example: A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. For example: A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example: If you need the query ID for your query, you have to use the raw connection; this applies to both queries and execs. The result of your query can be retrieved by setting the query ID in the WithFetchResultByID context. From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap in the context to execute methods that can take the context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example. The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are ARROW and JSON. If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level.
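A rough sketch of opening a handle with a DSN (the account, user, and object names are placeholders) and then switching the session to the JSON result format:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/snowflakedb/gosnowflake"
)

func main() {
	// DSN format: user[:password]@account_identifier/database/schema?param=value
	dsn := "jsmith:mypassword@my_organization-my_account/mydb/testschema?warehouse=mywh"

	db, err := sql.Open("snowflake", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// GO_QUERY_RESULT_FORMAT is a session-level parameter; 'JSON' disables Arrow transfers.
	if _, err := db.Exec("ALTER SESSION SET GO_QUERY_RESULT_FORMAT = 'JSON'"); err != nil {
		log.Fatal(err)
	}
}
```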
Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below. When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are: The SQL data type. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.) The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.
Go Data Types for Scan()

===================================================================================================================
                     |                        ARROW                        |                    JSON
===================================================================================================================
SQL Data Type        | Default Go Data Type           | Supported Go Data  | Default Go Data Type   | Supported Go Data
                     | for Scan() interface{}         | Types for Scan()   | for Scan() interface{} | Types for Scan()
===================================================================================================================
BOOLEAN              | bool                                                | string                 | bool
-------------------------------------------------------------------------------------------------------------------
VARCHAR              | string                                              | string
-------------------------------------------------------------------------------------------------------------------
DOUBLE               | float32, float64 [1], [2]                           | string                 | float32, float64
-------------------------------------------------------------------------------------------------------------------
INTEGER that         | int, int8, int16, int32, int64                      | string                 | int, int8, int16,
fits in int64        | [1], [2]                                            |                        | int32, int64
-------------------------------------------------------------------------------------------------------------------
INTEGER that doesn't | int, int8, int16, int32, int64, *big.Int            | string                 | error
fit in int64         | [1], [2], [3], [4]                                  |                        |
-------------------------------------------------------------------------------------------------------------------
NUMBER(P, S)         | float32, float64, *big.Float                        | string                 | float32, float64
where S > 0          | [1], [2], [3], [5]                                  |                        |
-------------------------------------------------------------------------------------------------------------------
DATE                 | time.Time                                           | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIME                 | time.Time                                           | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_LTZ        | time.Time                                           | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_NTZ        | time.Time                                           | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_TZ         | time.Time                                           | string                 | time.Time
-------------------------------------------------------------------------------------------------------------------
BINARY               | []byte                                              | string                 | []byte
-------------------------------------------------------------------------------------------------------------------
ARRAY [6]            | string / array                                      | string / array
-------------------------------------------------------------------------------------------------------------------
OBJECT [6]           | string / struct                                     | string / struct
-------------------------------------------------------------------------------------------------------------------
VARIANT              | string                                              | string
-------------------------------------------------------------------------------------------------------------------
MAP                  | map                                                 | map

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error. [2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.
[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision(). [4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Int64()/.String()/.Uint64() methods. For an example, see below. [5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Float32()/.String()/.Float64() methods. For an example, see below. [6] Arrays and objects can be either semistructured or structured; see more info in the section below. Note: SQL NULL values are converted to Golang nil values, and vice-versa. Snowflake supports two flavours of "structured data" - semistructured and structured. Semistructured types are variants, objects and arrays without a schema. When data is fetched, it's represented as strings and the client is responsible for its interpretation. Example table definition: The data does not have any corresponding schema, so values in the table may be slightly different. Semistructured variants, objects and arrays are always represented as strings for scanning: When inserting, a marker indicating the correct type must be used, for example: Structured types differ from semistructured types by having a specific schema. In all rows of the table, values must conform to this schema. Example table definition: To retrieve structured objects, follow these steps: 1. Create a struct implementing the sql.Scanner interface, example: a) b) Automatic scan goes through all fields in a struct and reads object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built from the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added: - the first value is always the name of a field in the SQL object - additionally an `ignore` parameter can be passed to omit this field 2. Use the WithStructuredTypesEnabled context while querying data. 3. Use it in a regular scan: See StructuredObject for all available operations including null support, embedding nested structs, etc. Retrieving arrays of simple types works exactly the same as for normal values - using the Scan function. You can use the WithMapValuesNullable and WithArrayValuesNullable contexts to handle null values in, respectively, maps and arrays of simple types in the database. In that case, sql null types will be used: If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners: Retrieving structured maps is very similar to retrieving arrays: To bind structured objects use: 1. Create a type which implements the StructuredObjectWriter interface, example: a) b) 2. Use an instance as a regular bind. 3. If you need to bind a nil value, use special syntax: Binding structured arrays is like binding any other parameter. The only difference is - if you want to insert an empty array (not nil but empty), you have to use: The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query.
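A sketch of that approach, assuming a query against a hypothetical column that holds values larger than an int64:

```go
import (
	"context"
	"database/sql"
	"log"
	"math/big"

	sf "github.com/snowflakedb/gosnowflake"
)

func readLargeInt(db *sql.DB) {
	// Higher precision must be enabled through the context for *big.Int to be returned.
	ctx := sf.WithHigherPrecision(context.Background())

	rows, err := db.QueryContext(ctx, "SELECT large_int_col FROM big_table")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var v interface{}
		if err := rows.Scan(&v); err != nil {
			log.Fatal(err)
		}
		if b, ok := v.(*big.Int); ok {
			log.Println("value:", b.String())
			if b.IsInt64() {
				small := b.Int64() // fits, so an int64 copy is safe
				log.Println("as int64:", small)
			}
		}
	}
}
```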
If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value: You can retrieve data in a columnar format similar to the format a server returns, without transposing them to rows. When working with the Arrow columnar format in the Go driver, ArrowBatch structs are used. These are structs mostly corresponding to data chunks received from the backend. They allow for access to specific arrow.Record structs. An ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from the query context or set by the user using the WithContext(ctx) method. In order to access them you must use the `WithArrowBatches` context, similar to the following: This returns []*ArrowBatch. ArrowBatch functions: GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of its actual size. WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded. For example: will produce the same result in records1 and records2, irrespective of the newly provided ctx. Contexts worth noting are WithArrowBatchesTimestampOption, WithHigherPrecision and WithArrowBatchesUtf8Validation, described in more detail later. Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not. Limitations: How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake. Also, Snowflake supports 0-9 (nanosecond) digit precision for seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) precision. Consequently, Snowflake uses a custom timestamp format in Arrow, which differs on timestamp type and precision. If you want to use timestamps in Arrow batches, you have two options: How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters. However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the context WithArrowBatchesUtf8Validation is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`.
This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks. How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision. Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. When using NUMBERs with non-zero scale, the value is returned as an integer type and a scale is provided in the record metadata. For example, a value of 123.45 that comes from NUMBER(9, 4) will be represented as 1234500 with a scale equal to 4. It is the client's responsibility to interpret it correctly. Also - see the limitations section above. How to handle JSON responses in Arrow batches: Due to technical limitations the Snowflake backend may return JSON even if the client expects Arrow. In that case Arrow batches are not available and an error with the code ErrNonArrowResponseInArrowBatches is returned. The response is parsed to regular rows. You can read rows in the way described in the transform_batches_to_rows.go example. This has a very strong limitation though - this is a very low level API (Go driver API), so there are no conversions ready. All values are returned as strings. An alternative approach is to rerun the query, but without enabling Arrow batches, and use the general Go SQL API instead of the driver API. It can be optimized by using `WithRequestID`, so the backend returns results from the cache. Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones". Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enables the proper handling of null parameters inside function calls, i.e.: The timestamp nullability had to be achieved by wrapping the sql.NullTime type, because Snowflake provides several date and time types which are mapped to a single Go time.Time type: Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement. If the array contains SQL NULL values, use the slice type []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For slices []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver.
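A sketch of array binding for a bulk INSERT, including the binding parameter flag for a []interface{} slice of time.Time values; the table names and column layout are hypothetical:

```go
import (
	"database/sql"
	"log"
	"time"

	sf "github.com/snowflakedb/gosnowflake"
)

func bulkInsert(db *sql.DB) {
	intArr := []int{1, 2, 3}
	fltArr := []float64{0.1, 2.34, 5.678}
	boolArr := []bool{true, false, true}
	strArr := []string{"test1", "test2", "test3"}

	// One INSERT sends all three rows in a single batch.
	if _, err := db.Exec("INSERT INTO demo_tbl VALUES (?, ?, ?, ?)",
		sf.Array(&intArr), sf.Array(&fltArr), sf.Array(&boolArr), sf.Array(&strArr)); err != nil {
		log.Fatal(err)
	}

	// []interface{} allows nil (SQL NULL); time.Time values need a type flag.
	tsArr := []interface{}{time.Now(), nil}
	if _, err := db.Exec("INSERT INTO demo_ts_tbl VALUES (?)",
		sf.Array(&tsArr, sf.TimestampNTZType)); err != nil {
		log.Fatal(err)
	}
}
```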
Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. It is required to shift workloads from the Snowflake database to the clients for scale. The download takes place by a goroutine named "Chunk Downloader" asynchronously, so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note this does not help reduce the memory footprint by itself. Consider the Custom JSON Decoder. Custom JSON Decoder for Parsing Result Set (Experimental): The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on the Travis Ubuntu box show one fifth the memory footprint while being four times slower. Be cautious when using the option.
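Returning to the binding parameter flags described above, a sketch of binding time.Time and []byte values with explicit type flags; sf aliases the gosnowflake package and the table is hypothetical:

```go
import (
	"database/sql"
	"log"
	"time"

	sf "github.com/snowflakedb/gosnowflake"
)

func insertWithTypeFlags(db *sql.DB) {
	now := time.Now()
	blob := []byte{0x01, 0x02, 0x03}

	// Each flag applies to the bind value that immediately follows it.
	_, err := db.Exec(
		"INSERT INTO demo_times VALUES (?, ?, ?)",
		sf.DataTypeTimestampNtz, now, // bind as TIMESTAMP_NTZ
		sf.DataTypeDate, now,         // bind as DATE
		sf.DataTypeBinary, blob,      // bind as BINARY
	)
	if err != nil {
		log.Fatal(err)
	}
}
```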
The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure specifying: The <your_private_key> should be a base64 URL-encoded PKCS8 RSA private key string. One way to encode a byte slice to URL base64 format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard-encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, you can execute the following commands in the shell: Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on disk and decrypt the key in your application using a library you trust. JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`. The driver allows you to authenticate using an external browser. When a connection is created, the driver will open the browser window and ask the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified: The external browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed through setting the DSN field "externalBrowserTimeout=240" (time in seconds) or using a Config structure with the following ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver. By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below. The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example: When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets:
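A sketch of a multi-statement query; the statement count passed to WithMultiStatement guards against injected statements, and the SQL itself is illustrative:

```go
import (
	"context"
	"database/sql"
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

func multiStatement(db *sql.DB) {
	// Declare that exactly two statements will be executed.
	ctx, err := sf.WithMultiStatement(context.Background(), 2)
	if err != nil {
		log.Fatal(err)
	}

	rows, err := db.QueryContext(ctx, "SELECT 1; SELECT 'two';")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for {
		for rows.Next() {
			var v interface{}
			if err := rows.Scan(&v); err != nil {
				log.Fatal(err)
			}
			log.Println(v)
		}
		if !rows.NextResultSet() {
			break // no more result sets
		}
	}
}
```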
The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext(): Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATEs had not functioned as expected. The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements. Note: PUT statements are not supported for multi-statement queries. If a SQL statement passed to ExecQuery() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an exception is thrown. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Golang code can check for errors the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries. The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step, which is that you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking).
If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.columnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().) Because the example code above executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous behavior. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go. The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters: Using PUT: The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file), use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries. Using GET: The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: To download a file into an in-memory stream (rather than a file), use code similar to the code below. Note: GET statements are not supported for multi-statement queries. Specifying a temporary directory for encryption and compression: Putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify it yourself, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example: Using custom configuration for PUT/GET: If you want to override some default configuration options, you can use the `WithFileTransferOptions` context. There are multiple config parameters, including progress bars or compression.
By default, any underlying errors encountered while executing calls associated with the PUT or GET commands are propagated to the caller, for increased awareness and easier handling and troubleshooting. This behaviour is governed by the `RaisePutGetError` flag on `SnowflakeFileTransferOptions` (default: `true`). If you wish to ignore those errors instead, you can set `RaisePutGetError: false`. Example snippet:
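A minimal sketch of such a snippet, assuming the github.com/snowflakedb/gosnowflake import path and that WithFileTransferOptions accepts a *SnowflakeFileTransferOptions (check the driver documentation for the exact signature):

    package example

    import (
        "context"
        "database/sql"

        sf "github.com/snowflakedb/gosnowflake"
    )

    // putIgnoringTransferErrors uploads a local file to a stage while
    // suppressing propagation of PUT/GET errors to the caller.
    func putIgnoringTransferErrors(db *sql.DB) error {
        ctx := sf.WithFileTransferOptions(context.Background(), &sf.SnowflakeFileTransferOptions{
            RaisePutGetError: false,
        })
        _, err := db.ExecContext(ctx, "PUT file:///tmp/data.csv @~/staged")
        return err
    }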
Package skipper provides an HTTP routing library with flexible configuration as well as a runtime update of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request, forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then matching the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions: - method - path (optionally with wildcards) - path regular expressions - host regular expressions - headers - header regular expressions It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning, that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response. 
They are used to modify request and response attributes, such as headers, or to execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation. Each route has one of the following backends: HTTP endpoint, shunt, loopback or dynamic. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect. A dynamic route means that the final target will be defined in a filter. One of the filters in the chain must set the target backend url explicitly. Route definitions consist of the following: - request matching conditions (predicates) - filter chain (optional) - backend The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation. Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth. Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients: - Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ . - Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper. - etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package. - static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates. Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package. Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests.
For details, see: https://godoc.org/github.com/zalando/skipper/circuit. Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins; however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created that can be referenced in eskip definitions with the name 'Random': To create a custom filter, we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener, but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case more aggressive scale is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.
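As a sketch of the custom filter extension described above, a 'hello' filter spec might look roughly like the following (it relies on the Spec, Filter and FilterContext interfaces and ErrInvalidFilterParameters from the 'filters' package; treat it as illustrative rather than a drop-in hellofilter.go):

    package hello

    import (
        "fmt"

        "github.com/zalando/skipper/filters"
    )

    type helloSpec struct{}

    type helloFilter struct {
        who string
    }

    // Name returns the filter name used in route definitions, e.g. hello("world").
    func (s *helloSpec) Name() string { return "hello" }

    // CreateFilter creates a filter instance from the arguments of the route definition.
    func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
        if len(args) == 0 {
            return nil, filters.ErrInvalidFilterParameters
        }
        who, ok := args[0].(string)
        if !ok {
            return nil, filters.ErrInvalidFilterParameters
        }
        return &helloFilter{who: who}, nil
    }

    // Request does nothing for this filter.
    func (f *helloFilter) Request(ctx filters.FilterContext) {}

    // Response sets the X-Hello header on every response of the routes using this filter.
    func (f *helloFilter) Response(ctx filters.FilterContext) {
        ctx.Response().Header.Set("X-Hello", fmt.Sprintf("Hello, %s!", f.who))
    }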
Package pointer implements Andersen's analysis, an inclusion-based pointer analysis algorithm first described in (Andersen, 1994). A pointer analysis relates every pointer expression in a whole program to the set of memory locations to which it might point. This information can be used to construct a call graph of the program that precisely represents the destinations of dynamic function and method calls. It can also be used to determine, for example, which pairs of channel operations operate on the same channel. The package allows the client to request a set of expressions of interest for which the points-to information will be returned once the analysis is complete. In addition, the client may request that a callgraph is constructed. The example program in example_test.go demonstrates both of these features. Clients should not request more information than they need since it may increase the cost of the analysis significantly. Our algorithm is INCLUSION-BASED: the points-to sets for x and y will be related by pts(y) ⊇ pts(x) if the program contains the statement y = x. It is FLOW-INSENSITIVE: it ignores all control flow constructs and the order of statements in a program. It is therefore a "MAY ALIAS" analysis: its facts are of the form "P may/may not point to L", not "P must point to L". It is FIELD-SENSITIVE: it builds separate points-to sets for distinct fields, such as x and y in struct { x, y *int }. It is mostly CONTEXT-INSENSITIVE: most functions are analyzed once, so values can flow in at one call to the function and return out at another. Only some smaller functions are analyzed with consideration of their calling context. It has a CONTEXT-SENSITIVE HEAP: objects are named by both allocation site and context, so the objects returned by two distinct calls to f: are distinguished up to the limits of the calling context. It is a WHOLE PROGRAM analysis: it requires SSA-form IR for the complete Go program and summaries for native code. See the (Hind, PASTE'01) survey paper for an explanation of these terms. The analysis is fully sound when invoked on pure Go programs that do not use reflection or unsafe.Pointer conversions. In other words, if there is any possible execution of the program in which pointer P may point to object O, the analysis will report that fact. By default, the "reflect" library is ignored by the analysis, as if all its functions were no-ops, but if the client enables the Reflection flag, the analysis will make a reasonable attempt to model the effects of calls into this library. However, this comes at a significant performance cost, and not all features of that library are yet implemented. In addition, some simplifying approximations must be made to ensure that the analysis terminates; for example, reflection can be used to construct an infinite set of types and values of those types, but the analysis arbitrarily bounds the depth of such types. Most but not all reflection operations are supported. In particular, addressable reflect.Values are not yet implemented, so operations such as (reflect.Value).Set have no analytic effect. The pointer analysis makes no attempt to understand aliasing between the operand x and result y of an unsafe.Pointer conversion: It is as if the conversion allocated an entirely new object: The analysis cannot model the aliasing effects of functions written in languages other than Go, such as runtime intrinsics in C or assembly, or code accessed via cgo. The result is as if such functions are no-ops. 
However, various important intrinsics are understood by the analysis, along with built-ins such as append. The analysis currently provides no way for users to specify the aliasing effects of native code. ------------------------------------------------------------------------ The remaining documentation is intended for package maintainers and pointer analysis specialists. Maintainers should have a solid understanding of the referenced papers (especially those by H&L and PKH) before making significant changes. The implementation is similar to that described in (Pearce et al, PASTE'04). Unlike many algorithms which interleave constraint generation and solving, constructing the callgraph as they go, this implementation for the most part observes a phase ordering (generation before solving), with only simple (copy) constraints being generated during solving. (The exception is reflection, which creates various constraints during solving as new types flow to reflect.Value operations.) This improves the traction of presolver optimisations, but imposes certain restrictions, e.g. potential context sensitivity is limited since all variants must be created a priori. A type is said to be "pointer-like" if it is a reference to an object. Pointer-like types include pointers and also interfaces, maps, channels, functions and slices. We occasionally use C's x->f notation to distinguish the case where x is a struct pointer from x.f where x is a struct value. Pointer analysis literature (and our comments) often uses the notation dst=*src+offset to mean something different than what it means in Go. It means: for each node index p in pts(src), the node index p+offset is in pts(dst). Similarly *dst+offset=src is used for store constraints and dst=src+offset for offset-address constraints. Nodes are the key datastructure of the analysis, and have a dual role: they represent both constraint variables (equivalence classes of pointers) and members of points-to sets (things that can be pointed at, i.e. "labels"). Nodes are naturally numbered. The numbering enables compact representations of sets of nodes such as bitvectors (or BDDs); and the ordering enables a very cheap way to group related nodes together. For example, passing n parameters consists of generating n parallel constraints from caller+i to callee+i for 0<=i<n. The zero nodeid means "not a pointer". For simplicity, we generate flow constraints even for non-pointer types such as int. The pointer equivalence (PE) presolver optimization detects which variables cannot point to anything; this includes not only all variables of non-pointer types (such as int) but also variables of pointer-like types if they are always nil, or are parameters to a function that is never called. Each node represents a scalar part of a value or object. Aggregate types (structs, tuples, arrays) are recursively flattened out into a sequential list of scalar component types, and all the elements of an array are represented by a single node. (The flattening of a basic type is a list containing a single node.) Nodes are connected into a graph with various kinds of labelled edges: simple edges (or copy constraints) represent value flow. Complex edges (load, store, etc) trigger the creation of new simple edges during the solving phase. Conceptually, an "object" is a contiguous sequence of nodes denoting an addressable location: something that a pointer can point to.
The first node of an object has a non-nil obj field containing information about the allocation: its size, context, and ssa.Value. Objects include: Many objects have no Go types. For example, the func, map and chan type kinds in Go are all varieties of pointers, but their respective objects are actual functions (executable code), maps (hash tables), and channels (synchronized queues). Given the way we model interfaces, they too are pointers to "tagged" objects with no Go type. And an *ssa.Global denotes the address of a global variable, but the object for a Global is the actual data. So, the type of an ssa.Value that creates an object is "off by one indirection": a pointer to the object. The individual nodes of an object are sometimes referred to as "labels". For uniformity, all objects have a non-zero number of fields, even those of the empty type struct{}. (All arrays are treated as if of length 1, so there are no empty arrays. The empty tuple is never address-taken, so is never an object.) A tagged object has the following layout: The T node's typ field is the dynamic type of the "payload": the value v which follows, flattened out. The T node's obj has the otTagged flag. Tagged objects are needed when generalizing across types: interfaces, reflect.Values, reflect.Types. Each of these three types is modelled as a pointer that exclusively points to tagged objects. Tagged objects may be indirect (obj.flags ⊇ {otIndirect}) meaning that the value v is not of type T but *T; this is used only for reflect.Values that represent lvalues. (These are not implemented yet.) Variables of the following "scalar" types may be represented by a single node: basic types, pointers, channels, maps, slices, 'func' pointers, interfaces. Pointers: Nothing to say here, oddly. Basic types (bool, string, numbers, unsafe.Pointer): Currently all fields in the flattening of a type, including non-pointer basic types such as int, are represented in objects and values. Though non-pointer nodes within values are uninteresting, non-pointer nodes in objects may be useful (if address-taken) because they permit the analysis to deduce, in this example, that p points to s.x. If we ignored such object fields, we could only say that p points somewhere within s. All other basic types are ignored. Expressions of these types have zero nodeid, and fields of these types within other aggregate types are omitted. unsafe.Pointers are not modelled as pointers, so a conversion of an unsafe.Pointer to *T is (unsoundly) treated equivalent to new(T). Channels: An expression of type 'chan T' is a kind of pointer that points exclusively to channel objects, i.e. objects created by MakeChan (or reflection). 'chan T' is treated like *T. *ssa.MakeChan is treated as equivalent to new(T). *ssa.Send and receive (*ssa.UnOp(ARROW)) are equivalent to store and load, respectively. Maps: An expression of type 'map[K]V' is a kind of pointer that points exclusively to map objects, i.e. objects created by MakeMap (or reflection). map[K]V is treated like *M where M = struct{k K; v V}. *ssa.MakeMap is equivalent to new(M). *ssa.MapUpdate is equivalent to *y=x where *y and x have type M. *ssa.Lookup is equivalent to y=x.v where x has type *M. Slices: A slice []T, which dynamically resembles a struct{array *T, len, cap int}, is treated as if it were just a *T pointer; the len and cap fields are ignored. *ssa.MakeSlice is treated like new([1]T): an allocation of a singleton array. *ssa.Index on a slice is equivalent to a load.
*ssa.IndexAddr on a slice returns the address of the sole element of the slice, i.e. the same address. *ssa.Slice is treated as a simple copy. Functions: An expression of type 'func...' is a kind of pointer that points exclusively to function objects. A function object has the following layout: There may be multiple function objects for the same *ssa.Function due to context-sensitive treatment of some functions. The first node is the function's identity node. Associated with every callsite is a special "targets" variable, whose pts() contains the identity node of each function to which the call may dispatch. Identity words are not otherwise used during the analysis, but we construct the call graph from the pts() solution for such nodes. The following block of contiguous nodes represents the flattened-out types of the parameters ("P-block") and results ("R-block") of the function object. The treatment of free variables of closures (*ssa.FreeVar) is like that of global variables; it is not context-sensitive. *ssa.MakeClosure instructions create copy edges to Captures. A Go value of type 'func' (i.e. a pointer to one or more functions) is a pointer whose pts() contains function objects. The valueNode() for an *ssa.Function returns a singleton for that function. Interfaces: An expression of type 'interface{...}' is a kind of pointer that points exclusively to tagged objects. All tagged objects pointed to by an interface are direct (the otIndirect flag is clear) and concrete (the tag type T is not itself an interface type). The associated ssa.Value for an interface's tagged objects may be an *ssa.MakeInterface instruction, or nil if the tagged object was created by an intrinsic (e.g. reflection). Constructing an interface value causes generation of constraints for all of the concrete type's methods; we can't tell a priori which ones may be called. TypeAssert y = x.(T) is implemented by a dynamic constraint triggered by each tagged object O added to pts(x): a typeFilter constraint if T is an interface type, or an untag constraint if T is a concrete type. A typeFilter tests whether O.typ implements T; if so, O is added to pts(y). An untagFilter tests whether O.typ is assignable to T, and if so, a copy edge O.v -> y is added. ChangeInterface is a simple copy because the representation of tagged objects is independent of the interface type (in contrast to the "method tables" approach used by the gc runtime). y := Invoke x.m(...) is implemented by allocating contiguous P/R blocks for the callsite and adding a dynamic rule triggered by each tagged object added to pts(x). The rule adds param/results copy edges to/from each discovered concrete method. (Q. Why do we model an interface as a pointer to a pair of type and value, rather than as a pair of a pointer to type and a pointer to value? A. Control-flow joins would merge interfaces ({T1}, {V1}) and ({T2}, {V2}) to make ({T1,T2}, {V1,V2}), leading to the infeasible and type-unsafe combination (T1,V2). Treating the value and its concrete type as inseparable makes the analysis type-safe.) Type parameters: Type parameters are not directly supported by the analysis. Calls to generic functions will be left as if they had empty bodies. Users of the package are expected to use the ssa.InstantiateGenerics builder mode when building code that uses or depends on code containing generics. reflect.Value: A reflect.Value is modelled very similarly to an interface{}, i.e. as a pointer exclusively to tagged objects, but with two generalizations. 1.
a reflect.Value that represents an lvalue points to an indirect (obj.flags ⊇ {otIndirect}) tagged object, which has a similar layout to a tagged object except that the value is a pointer to the dynamic type. Indirect tagged objects preserve the correct aliasing so that mutations made by (reflect.Value).Set can be observed. Indirect objects only arise when an lvalue is derived from an rvalue by indirection, e.g. the following code: Whether indirect or not, the concrete type of the tagged object corresponds to the user-visible dynamic type, and the existence of a pointer is an implementation detail. (NB: indirect tagged objects are not yet implemented) 2. The dynamic type tag of a tagged object pointed to by a reflect.Value may be an interface type; it need not be concrete. This arises in code such as this: pts(eface) is a singleton containing an interface{}-tagged object. That tagged object's payload is an interface{} value, i.e. the pts of the payload contains only concrete-tagged objects, although in this example it's the zero interface{} value, so its pts is empty. reflect.Type: Just as in the real "reflect" library, we represent a reflect.Type as an interface whose sole implementation is the concrete type, *reflect.rtype. (This choice is forced on us by go/types: clients cannot fabricate types with arbitrary method sets.) rtype instances are canonical: there is at most one per dynamic type. (rtypes are in fact large structs but since identity is all that matters, we represent them by a single node.) The payload of each *rtype-tagged object is an *rtype pointer that points to exactly one such canonical rtype object. We exploit this by setting the node.typ of the payload to the dynamic type, not '*rtype'. This saves us an indirection in each resolution rule. As an optimisation, *rtype-tagged objects are canonicalized too. Aggregate types: Aggregate types are treated as if all directly contained aggregates are recursively flattened out. Structs: The nodes of a struct consist of a special 'identity' node (whose type is that of the struct itself), followed by the nodes for all the struct's fields, recursively flattened out. A pointer to the struct is a pointer to its identity node. That node allows us to distinguish a pointer to a struct from a pointer to its first field. Field offsets are logical field offsets (plus one for the identity node), so the sizes of the fields can be ignored by the analysis. (The identity node is non-traditional but enables the distinction described above, which is valuable for code comprehension tools. Typical pointer analyses for C, whose purpose is compiler optimization, must soundly model unsafe.Pointer (void*) conversions, and this requires fidelity to the actual memory layout using physical field offsets.) *ssa.Field y = x.f creates a simple edge to y from x's node at f's offset. *ssa.FieldAddr y = &x->f requires a dynamic closure rule: for each object o added to pts(x), the node o+offset(f) is added to pts(y). Arrays: We model an array by an identity node (whose type is that of the array itself) followed by a node representing all the elements of the array; the analysis does not distinguish elements with different indices. Effectively, an array is treated like struct{elem T}, a load y=x[i] like y=x.elem, and a store x[i]=y like x.elem=y; the index i is ignored. A pointer to an array is a pointer to its identity node. (A slice is also a pointer to an array's identity node.)
The identity node allows us to distinguish a pointer to an array from a pointer to one of its elements, but it is rather costly because it introduces more offset constraints into the system. Furthermore, sound treatment of unsafe.Pointer would require us to dispense with this node. Arrays may be allocated by Alloc, by make([]T), by calls to append, and via reflection. Tuples (T, ...): Tuples are treated like structs with naturally numbered fields. *ssa.Extract is analogous to *ssa.Field. However, tuples have no identity field since by construction, they cannot be address-taken. There are three kinds of function call: Cases 1 and 2 apply equally to methods and standalone functions. Static calls: A static call consists three steps: A static function call is little more than two struct value copies between the P/R blocks of caller and callee: Context sensitivity: Static calls (alone) may be treated context sensitively, i.e. each callsite may cause a distinct re-analysis of the callee, improving precision. Our current context-sensitivity policy treats all intrinsics and getter/setter methods in this manner since such functions are small and seem like an obvious source of spurious confluences, though this has not yet been evaluated. Dynamic function calls: Dynamic calls work in a similar manner except that the creation of copy edges occurs dynamically, in a similar fashion to a pair of struct copies in which the callee is indirect: (Recall that the function object's P- and R-blocks are contiguous.) Interface method invocation: For invoke-mode calls, we create a params/results block for the callsite and attach a dynamic closure rule to the interface. For each new tagged object that flows to the interface, we look up the concrete method, find its function object, and connect its P/R blocks to the callsite's P/R blocks, adding copy edges to the graph during solving. Recording call targets: The analysis notifies its clients of each callsite it encounters, passing a CallSite interface. Among other things, the CallSite contains a synthetic constraint variable ("targets") whose points-to solution includes the set of all function objects to which the call may dispatch. It is via this mechanism that the callgraph is made available. Clients may also elect to be notified of callgraph edges directly; internally this just iterates all "targets" variables' pts(·)s. We implement Hash-Value Numbering (HVN), a pre-solver constraint optimization described in Hardekopf & Lin, SAS'07. This is documented in more detail in hvn.go. We intend to add its cousins HR and HU in future. The solver is currently a naive Andersen-style implementation; it does not perform online cycle detection, though we plan to add solver optimisations such as Hybrid- and Lazy- Cycle Detection from (Hardekopf & Lin, PLDI'07). It uses difference propagation (Pearce et al, SQC'04) to avoid redundant re-triggering of closure rules for values already seen. Points-to sets are represented using sparse bit vectors (similar to those used in LLVM and gcc), which are more space- and time-efficient than sets based on Go's built-in map type or dense bit vectors. Nodes are permuted prior to solving so that object nodes (which may appear in points-to sets) are lower numbered than non-object (var) nodes. This improves the density of the set over which the PTSs range, and thus the efficiency of the representation. Partly thanks to avoiding map iteration, the execution of the solver is 100% deterministic, a great help during debugging. Andersen, L. O. 1994. 
Program analysis and specialization for the C programming language. Ph.D. dissertation. DIKU, University of Copenhagen. David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Efficient field-sensitive pointer analysis for C. In Proceedings of the 5th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering (PASTE '04). ACM, New York, NY, USA, 37-42. http://doi.acm.org/10.1145/996821.996835 David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Online Cycle Detection and Difference Propagation: Applications to Pointer Analysis. Software Quality Control 12, 4 (December 2004), 311-337. http://dx.doi.org/10.1023/B:SQJO.0000039791.93071.a2 David Grove and Craig Chambers. 2001. A framework for call graph construction algorithms. ACM Trans. Program. Lang. Syst. 23, 6 (November 2001), 685-746. http://doi.acm.org/10.1145/506315.506316 Ben Hardekopf and Calvin Lin. 2007. The ant and the grasshopper: fast and accurate pointer analysis for millions of lines of code. In Proceedings of the 2007 ACM SIGPLAN conference on Programming language design and implementation (PLDI '07). ACM, New York, NY, USA, 290-299. http://doi.acm.org/10.1145/1250734.1250767 Ben Hardekopf and Calvin Lin. 2007. Exploiting pointer and location equivalence to optimize pointer analysis. In Proceedings of the 14th international conference on Static Analysis (SAS'07), Hanne Riis Nielson and Gilberto Filé (Eds.). Springer-Verlag, Berlin, Heidelberg, 265-280. Atanas Rountev and Satish Chandra. 2000. Off-line variable substitution for scaling points-to analysis. In Proceedings of the ACM SIGPLAN 2000 conference on Programming language design and implementation (PLDI '00). ACM, New York, NY, USA, 47-56. DOI=10.1145/349299.349310 http://doi.acm.org/10.1145/349299.349310 This program demonstrates how to use the pointer analysis to obtain a conservative call-graph of a Go program. It also shows how to compute the points-to set of a variable, in this case, (C).f's ch parameter.
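A condensed sketch of that usage, assuming the program under analysis is loaded and converted to SSA form with golang.org/x/tools/go/packages and ssautil (the exact loading steps may differ from example_test.go):

    package main

    import (
        "fmt"

        "golang.org/x/tools/go/callgraph"
        "golang.org/x/tools/go/packages"
        "golang.org/x/tools/go/pointer"
        "golang.org/x/tools/go/ssa"
        "golang.org/x/tools/go/ssa/ssautil"
    )

    func main() {
        // Load and build SSA form for the whole program.
        cfg := &packages.Config{Mode: packages.LoadAllSyntax}
        initial, err := packages.Load(cfg, "./...")
        if err != nil {
            panic(err)
        }
        prog, pkgs := ssautil.AllPackages(initial, ssa.InstantiateGenerics)
        prog.Build()

        // Analyze the main packages and request a call graph.
        config := &pointer.Config{
            Mains:          ssautil.MainPackages(pkgs),
            BuildCallGraph: true,
        }
        result, err := pointer.Analyze(config)
        if err != nil {
            panic(err)
        }

        // Print each edge of the resulting call graph.
        callgraph.GraphVisitEdges(result.CallGraph, func(edge *callgraph.Edge) error {
            fmt.Printf("%s -> %s\n", edge.Caller.Func, edge.Callee.Func)
            return nil
        })
    }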
Package rtcp implements encoding and decoding of RTCP packets according to RFCs 3550 and 5506. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure are defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec. Decoding RTCP packets: Encoding RTCP packets:
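For illustration, assuming the github.com/pion/rtcp import path, decoding and encoding might look like this:

    package media

    import (
        "fmt"

        "github.com/pion/rtcp"
    )

    // decode parses a compound RTCP packet received from the network.
    func decode(raw []byte) error {
        packets, err := rtcp.Unmarshal(raw)
        if err != nil {
            return err
        }
        for _, p := range packets {
            fmt.Printf("%T for SSRCs %v\n", p, p.DestinationSSRC())
        }
        return nil
    }

    // encode builds a Picture Loss Indication for the given media SSRC.
    func encode(senderSSRC, mediaSSRC uint32) ([]byte, error) {
        pli := &rtcp.PictureLossIndication{SenderSSRC: senderSSRC, MediaSSRC: mediaSSRC}
        return pli.Marshal()
    }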
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle Http requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and an http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...) and restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirects, setting response headers, etc. In the restful package there are three hooks into the request,response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request,response pair to the next filter or RouteFunction. These are processed before any registered WebService. These are processed before any Route of a WebService. These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If an Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go By installing a pre-defined container filter, your Webservice(s) can respond to the OPTIONS Http request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available; in that case use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. The request has a valid URL but the method (GET,PUT,POST,...) is not allowed. The request does not have or has an unknown Accept Header set for this operation. The request does not have or has an unknown Content-Type Header set for this operation. In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. The default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is even more improved when using a preloaded cache. You can also inject your own implementation. This package has the means to produce detailed logging of the complete Http request matching process and filter invocation. Enabling this feature requires you to set an implementation of restful.StdLogger (e.g. a log.Logger instance), such as: The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple. (c) 2012-2015, http://ernestmicklei.com. MIT License
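A minimal WebService along these lines might look like the sketch below (the import path assumes the pre-v3 module; newer releases live under github.com/emicklei/go-restful/v3):

    package main

    import (
        "io"
        "net/http"

        restful "github.com/emicklei/go-restful"
    )

    // hello writes a plain response for GET /hello.
    func hello(req *restful.Request, resp *restful.Response) {
        io.WriteString(resp, "world\n")
    }

    func main() {
        ws := new(restful.WebService)
        ws.Path("/hello").
            Consumes(restful.MIME_JSON).
            Produces(restful.MIME_JSON)
        ws.Route(ws.GET("").To(hello))

        restful.Add(ws) // register with the Default Container
        http.ListenAndServe(":8080", nil)
    }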
Utilities missing from the "reflect" package: easier struct traversal, deep dereferencing, various boolean tests such as a generic `IsNil`, and more. Small and dependency-free. • Functions that take a `reflect.Value` have "Rval" in the name. • Functions that take a `reflect.Type` have "Rtype" in the name. In general, utils for struct traversal can be implemented as functions that take closures, or as stateful iterator objects. Both approaches were tested and demonstrated similar performance characteristics. The iterator-based approach can be more convenient to use because it doesn't interrupt the control flow of the caller code. However, the closure-based approach is much simpler to implement and get right. It's also much simpler to wrap, making it easier for users of this package to implement their own iterator functions in terms of the existing ones.
Goserial is a simple Go package to allow you to read and write from the serial port as a stream of bytes. It aims to have the same API on all platforms, including Windows. As an added bonus, the Windows implementation does not use cgo, so you can cross-compile for Windows from another platform. Unfortunately goinstall does not currently let you cross-compile, so you will have to do it manually: Currently there is very little in the way of configurability. You can set the baud rate. Then you can Read(), Write(), or Close() the connection. Read() will block until at least one byte is returned. Write() is the same. There is currently no exposed way to set the timeouts, though patches are welcome. Currently all ports are opened with 8 data bits, 1 stop bit, no parity, no hardware flow control, and no software flow control. This works fine for many real devices and many faux serial devices including USB-to-serial converters and Bluetooth serial ports. You may Read() and Write() simultaneously on the same connection (from different goroutines). Example usage:
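A minimal sketch of such usage (the import path and package name are assumptions; the tarm fork, for example, imports as "serial"):

    package main

    import (
        "log"

        serial "github.com/tarm/goserial" // import path assumed
    )

    func main() {
        // Open the port at the desired baud rate; everything else uses the 8N1 defaults.
        c := &serial.Config{Name: "/dev/ttyUSB0", Baud: 115200}
        s, err := serial.OpenPort(c)
        if err != nil {
            log.Fatal(err)
        }
        defer s.Close()

        if _, err := s.Write([]byte("hello")); err != nil {
            log.Fatal(err)
        }

        buf := make([]byte, 128)
        n, err := s.Read(buf) // blocks until at least one byte arrives
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("read %q", buf[:n])
    }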
Package rtcp implements encoding and decoding of RTCP packets according to RFC 3550. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure are defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec. Decoding RTCP packets: Encoding RTCP packets:
Package floc allows you to orchestrate goroutines with ease. The goal of the project is to make the process of running goroutines in parallel and synchronizing them easy. Floc follows four objectives: -- Easy to use functional interface. -- Better control over execution with one entry point and one exit point. -- Simple parallelism and synchronization of jobs. -- As little overhead, in comparison to direct use of goroutines and sync primitives, as possible. The package categorizes the middleware used for architecting flows into subpackages. -- `guard` contains middleware which help protect the flow from falling into panic or unpredicted behavior. -- `pred` contains some basic predicates for AND, OR, NOT logic. -- `run` provides middleware for designing flow, i.e. for running jobs sequentially, in parallel, in background and so on. Here is a quick example of what the package is capable of.
Goserial is a simple Go package to allow you to read and write from the serial port as a stream of bytes. It aims to have the same API on all platforms, including Windows. As an added bonus, the Windows implementation does not use cgo, so you can cross-compile for Windows from another platform. Unfortunately goinstall does not currently let you cross-compile, so you will have to do it manually: Currently there is very little in the way of configurability. You can set the baud rate. Then you can Read(), Write(), or Close() the connection. Read() will block until at least one byte is returned. Write() is the same. There is currently no exposed way to set the timeouts, though patches are welcome. Currently ports are opened with 8 data bits, 1 stop bit, no parity, no hardware flow control, and no software flow control by default. This works fine for many real devices and many faux serial devices including USB-to-serial converters and Bluetooth serial ports. You may Read() and Write() simultaneously on the same connection (from different goroutines). Example usage:
The package defines a single interface and a few implementations of the interface. The externalization of the loop flow control makes it easy to test the internal functions of background goroutines by, for instance, only running the loop once while under test. The design is that any errors which need to be returned from the loop will be passed back on a channel whose implementation is left up to the individual Looper. Calling methods can wait on execution and for any resulting errors by calling the Wait() method on the Looper. In this example, we are going to run a FreeLooper with 5 iterations. In the course of running, an error is generated, which the parent function captures and outputs. As a result of the error only 3 of the 5 iterations are completed and the output reflects this.
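A hedged sketch of the scenario described above; the import path, constructor name and error-channel behaviour are assumptions modelled on a go-director style API, so adjust them to the actual package:

    package main

    import (
        "errors"
        "fmt"

        director "github.com/relistan/go-director" // import path assumed
    )

    func main() {
        // A FreeLooper configured for 5 iterations, reporting errors on a channel.
        looper := director.NewFreeLooper(5, make(chan error))

        runs := 0
        go looper.Loop(func() error {
            runs++
            fmt.Println("iteration", runs)
            if runs == 3 {
                // Returning an error stops the loop early.
                return errors.New("something went wrong")
            }
            return nil
        })

        // Wait blocks until the loop completes or fails and returns any error.
        if err := looper.Wait(); err != nil {
            fmt.Println("loop stopped:", err)
        }
    }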
Extensible Go library for creating fast, SSR-first frontend avoiding vanilla templating downsides. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. This library tries to simplify overall setup and process. Let's go straight into a simple example. Then, we will dig into the details of how it works, step by step. Kyoto provides a set of simple net/http handlers, handler builders and function wrappers to provide serving, page rendering, component actions, etc. That said, this is not an ultimate solution for every case. If you ever need to wrap/extend existing functionality, the library encourages this. See functions inside of the nethttp.go file for details and advanced usage. Example: Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto each component is a context receiver, which returns its state. Each component becomes a part of the page or top-level component, which executes the component asynchronously and gets a state future object. In that way, your components execute in a non-blocking way. Pages are just top-level components, where you can configure rendering and page related stuff. Example: As an option, you can wrap a component with another function to accept additional parameters from the top-level page/component. Example: Kyoto provides a context, which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example: Kyoto provides a set of parameters and functions to provide a comfortable template building process. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example: Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple. The client calls an action (sends a request to the server). The action is executed on the server side, and the server sends updated component markup to the client, which will be morphed into the DOM. That's it. To use actions, you need to go through a few steps. You'll need to include a client into the page (JS functions for communication) and register an actions handler for a needed component. Let's start by including a client. Then, let's register an actions handler for a needed component. That's all! Now we are ready to use actions to provide a dynamic UI. Example: In this example you can see the modifications made to the quick start example. First, we've added a state and name into our components' markup. In this way we are saving our components' state between actions and finding the component root. Unfortunately, we have to manually provide a component name for now; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the Action function provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside of our component. This handler will be executed when a client calls an action with a corresponding name. It's highly recommended to keep components' state as small as possible. It will be transmitted on each action call. Kyoto has multiple ways to trigger actions. Now we will check them one by one. This is the simplest way to trigger an action. It's just a function call with a referer (usually 'this', e.g.
button) as the first argument (used to determine the root), the action name as the second argument, and the action arguments as the rest. Arguments must be JSON serializable. It's possible to trigger an action of another component. If you want to call an action of the parent component, use the $ prefix in the action name. If you want to call an action of a component by id, use <id:action> as the action name. This is a specific action which is triggered when a form is submitted. Usually called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger. This is a special HTML attribute which will trigger an action on page load. This may be useful for components' lazy loading. With these special HTML attributes you can trigger an action at an interval. Useful for components that must be updated over time (e.g. charts, stats, etc.). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes. This one attribute allows you to trigger an action when an element is visible on the screen. May be useful for lazy loading. Kyoto provides a way to control action flow. For now, it's possible to control the display style on a component call and push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes on an action call. You can use this HTML attribute to control display during an action call. At the end of an action the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders. You can push multiple component UI updates during a single action call. Just call kyoto.ActionFlush(ctx, state) to initiate an update. Kyoto provides a way to control action rendering. Now there are at least 2 rendering options after an action call: morph (default) and replace. Morph will try to morph received markup to the current one with the morphdom library. In case of an error, or explicit "replace" mode, markup will be replaced with x.outerHTML = '...'.
Package hunch provides functions like `All`, `First`, `Retry`, and `Waterfall` that make asynchronous flow control more intuitive.
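For illustration, running several tasks concurrently with `All` might look like the sketch below (the github.com/aaronjan/hunch import path is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        "github.com/aaronjan/hunch"
    )

    func main() {
        ctx := context.Background()

        // Run three tasks concurrently; All collects their results in argument order.
        results, err := hunch.All(
            ctx,
            func(ctx context.Context) (interface{}, error) {
                time.Sleep(300 * time.Millisecond)
                return "first", nil
            },
            func(ctx context.Context) (interface{}, error) {
                time.Sleep(200 * time.Millisecond)
                return "second", nil
            },
            func(ctx context.Context) (interface{}, error) {
                time.Sleep(100 * time.Millisecond)
                return "third", nil
            },
        )
        fmt.Println(results, err)
    }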
Extensible Go library for creating fast, SSR-first frontend avoiding vanilla templating downsides. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. The library tries to simplify this process. Let's go straight into a simple example. Then, we will dig into the details of how it works, step by step. Kyoto provides simple net/http handlers and function wrappers to handle page rendering and serving. See functions inside of the nethttp.go file for details and advanced usage. Example: Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto each component is a context receiver, which returns its state. Each component becomes a part of the page or top-level component, which executes the component asynchronously and gets a state future object. In that way, your components execute in a non-blocking way. Pages are just top-level components, where you can configure rendering and page related stuff. Example: As an option, you can wrap a component with another function to accept additional parameters from the top-level page/component. Example: Kyoto provides a context, which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example: Kyoto provides a set of parameters and functions to provide a comfortable template building process. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example: Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple. The client calls an action (sends a request to the server). The action is executed on the server side, and the server sends updated component markup to the client, which will be morphed into the DOM. That's it. To use actions, you need to go through a few steps. You'll need to include a client into the page (JS functions for communication) and register an actions handler for a needed component. Let's start by including a client. Then, let's register an actions handler for a needed component. That's all! Now we are ready to use actions to provide a dynamic UI. Example: In this example you can see the modifications made to the quick start example. First, we've added a state and name into our components' markup. In this way we are saving our components' state between actions and finding the component root. Unfortunately, we have to manually provide a component name for now; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the Action function provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside of our component. This handler will be executed when a client calls an action with a corresponding name. It's highly recommended to keep components' state as small as possible. It will be transmitted on each action call. Kyoto has multiple ways to trigger actions. Now we will check them one by one. This is the simplest way to trigger an action. It's just a function call with a referer (usually 'this', e.g. button) as the first argument (used to determine the root), the action name as the second argument, and the action arguments as the rest. Arguments must be JSON serializable. It's possible to trigger an action of another component. If you want to call an action of the parent component, use the $ prefix in the action name.
If you want to call an action of a component by id, use <id:action> as the action name. This is a specific action which is triggered when a form is submitted. Usually called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger. This is a special HTML attribute which will trigger an action on page load. This may be useful for components' lazy loading. With these special HTML attributes you can trigger an action at an interval. Useful for components that must be updated over time (e.g. charts, stats, etc.). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes. This one attribute allows you to trigger an action when an element is visible on the screen. May be useful for lazy loading. Kyoto provides a way to control action flow. For now, it's possible to control the display style on a component call and push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes on an action call. You can use this HTML attribute to control display during an action call. At the end of an action the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders. You can push multiple component UI updates during a single action call. Just call kyoto.ActionFlush(ctx, state) to initiate an update. Kyoto provides a way to control action rendering. Now there are at least 2 rendering options after an action call: morph (default) and replace. Morph will try to morph received markup to the current one with the morphdom library. In case of an error, or explicit "replace" mode, markup will be replaced with x.outerHTML = '...'.
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle Http requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/v3/examples/user-resource/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and an http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...) and restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirects, setting response headers, etc. In the restful package there are three hooks into the request,response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request,response pair to the next filter or RouteFunction. These are processed before any registered WebService. These are processed before any Route of a WebService. These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/filters/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If an Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/encoding/restful-encoding-filter.go By installing a pre-defined container filter, your Webservice(s) can respond to the OPTIONS Http request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available; in that case use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. The request has a valid URL but the method (GET, PUT, POST, ...) is not allowed: use http.StatusMethodNotAllowed. The request does not have, or has an unknown, Accept Header set for this operation: use http.StatusNotAcceptable. The request does not have, or has an unknown, Content-Type Header set for this operation: use http.StatusUnsupportedMediaType. In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is improved even more when using a preloaded cache. You can also inject your own implementation. This package has the means to produce detailed logging of the complete Http request matching process and filter invocation. Enabling this feature requires you to set an implementation of restful.StdLogger (e.g. a log.Logger instance) such as: The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple. (c) 2012-2015, http://ernestmicklei.com. MIT License
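The code samples this description refers to (route registration, filters, content encoding, error responses, logging) were not carried over, so the following is a minimal, self-contained sketch only. It assumes the v3 import path github.com/emicklei/go-restful/v3; the /users resource, the findUser handler, and the log prefix are illustrative and not part of the package:

	package main

	import (
		"log"
		"net/http"
		"os"

		restful "github.com/emicklei/go-restful/v3"
	)

	// globalLogging is a container filter. A FilterFunction must call
	// chain.ProcessFilter to pass the request/response pair on to the next
	// filter or, eventually, the RouteFunction.
	func globalLogging(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
		log.Printf("%s %s", req.Request.Method, req.Request.URL)
		chain.ProcessFilter(req, resp)
	}

	// findUser illustrates the error status codes discussed above.
	func findUser(req *restful.Request, resp *restful.Response) {
		id := req.PathParameter("user-id")
		if id == "" {
			// Invalid path or query parameter.
			resp.WriteErrorString(http.StatusBadRequest, "user-id is required")
			return
		}
		// Valid URI, but the resource is not available.
		resp.WriteErrorString(http.StatusNotFound, "user "+id+" could not be found")
	}

	func main() {
		ws := new(restful.WebService)
		ws.Path("/users").
			Consumes(restful.MIME_JSON).
			Produces(restful.MIME_JSON)
		ws.Route(ws.GET("/{user-id}").To(findUser))

		restful.Filter(globalLogging)                        // container filter on the Default Container
		restful.Add(ws)                                      // register the WebService
		restful.DefaultContainer.EnableContentEncoding(true) // honor Accept-Encoding (gzip/deflate)
		restful.SetLogger(log.New(os.Stdout, "[restful] ", log.LstdFlags))

		// The Default Container uses http.DefaultServeMux.
		log.Fatal(http.ListenAndServe(":8080", nil))
	}

CORS and OPTIONS handling are enabled in the same way, by installing the CrossOriginResourceSharing filter and the pre-defined OPTIONS container filter mentioned above.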
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle Http requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and an http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...)" and "restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default Container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirects, setting response headers, etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request/response pair to the next filter or RouteFunction. These are processed before any registered WebService. These are processed before any Route of a WebService. These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If an Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS Http request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest. 
Despite a valid URI, the resource requested may not be available; in that case use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. The request has a valid URL but the method (GET, PUT, POST, ...) is not allowed: use http.StatusMethodNotAllowed. The request does not have, or has an unknown, Accept Header set for this operation: use http.StatusNotAcceptable. The request does not have, or has an unknown, Content-Type Header set for this operation: use http.StatusUnsupportedMediaType. In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. The default router is the RouterJSR311, which is an implementation of its spec (http://jsr311.java.net/nonav/releases/1.1/spec/spec.html). However, it uses regular expressions for all its routes which, depending on your usecase, may consume a significant amount of time. The CurlyRouter implementation is more lightweight and also allows you to use wildcards and expressions, but only if needed. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to true, Route functions are responsible for handling any error situation. Default value is false; it will recover from panics. This has performance implications. SetCacheReadEntity controls whether the request content ([]byte) is cached such that ReadEntity is repeatable. If you expect to read large amounts of payload data, and you do not use this feature, you should set it to false. This package has the means to produce detailed logging of the complete Http request matching process and filter invocation. Enabling this feature requires you to set a log.Logger instance such as: (c) 2012-2014, http://ernestmicklei.com. MIT License
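The router and recovery options above are described in prose only because their snippets were dropped. As a hedged sketch (method names checked against recent go-restful releases; double-check them against the 2014-era version this text describes), they are typically toggled on the default container like so:

	// Prefer the lighter CurlyRouter over the default JSR311 router, and make
	// Route functions responsible for handling their own panics.
	restful.DefaultContainer.Router(restful.CurlyRouter{})
	restful.DefaultContainer.DoNotRecover(true)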
Goserial is a simple Go package to allow you to read and write from the serial port as a stream of bytes. It aims to have the same API on all platforms, including Windows. As an added bonus, the Windows package does not use cgo, so you can cross compile for Windows from another platform. Unfortunately goinstall does not currently let you cross compile so you will have to do it manually: Currently there is very little in the way of configurability. You can set the baud rate. Then you can Read(), Write(), or Close() the connection. Read() will block until at least one byte is returned. Write is the same. There is currently no exposed way to set the timeouts, though patches are welcome. Currently all ports are opened with 8 data bits, 1 stop bit, no parity, no hardware flow control, and no software flow control. This works fine for many real devices and many faux serial devices, including usb-to-serial converters and bluetooth serial ports. You may Read() and Write() simultaneously on the same connection (from different goroutines). Example usage:
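The example code itself was not carried over, so here is a minimal sketch reconstructing it under the assumption that this is the tarm goserial package (import path github.com/tarm/goserial) with its Config/OpenPort API; the device name is a placeholder:

	package main

	import (
		"log"

		serial "github.com/tarm/goserial"
	)

	func main() {
		// Only the device name and baud rate are configurable; 8 data bits,
		// 1 stop bit, no parity and no flow control are implied, as noted above.
		c := &serial.Config{Name: "/dev/ttyUSB0", Baud: 115200}
		s, err := serial.OpenPort(c)
		if err != nil {
			log.Fatal(err)
		}
		defer s.Close()

		if _, err := s.Write([]byte("test")); err != nil {
			log.Fatal(err)
		}

		// Read blocks until at least one byte has arrived.
		buf := make([]byte, 128)
		n, err := s.Read(buf)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("read %d bytes: %q", n, buf[:n])
	}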
Package rtcp implements encoding and decoding of RTCP packets according to RFCs 3550 and 5506. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure are defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information, such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time, to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow or using a different codec. Decoding and encoding RTCP packets are illustrated in the sketch below.
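The original decoding and encoding examples were not carried over, so the following is a small self-contained sketch, assuming this is the pion rtcp package (github.com/pion/rtcp); the Goodbye packet and SSRC value are arbitrary:

	package main

	import (
		"fmt"
		"log"

		"github.com/pion/rtcp"
	)

	func main() {
		// Encoding: build a packet and serialize it to bytes.
		bye := rtcp.Goodbye{Sources: []uint32{0x902F9E2E}, Reason: "shutting down"}
		raw, err := bye.Marshal()
		if err != nil {
			log.Fatal(err)
		}

		// Decoding: a compound RTCP datagram may hold several packets,
		// so Unmarshal returns a slice of rtcp.Packet values.
		packets, err := rtcp.Unmarshal(raw)
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range packets {
			fmt.Printf("%T\n", p)
		}
	}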
Package floc allows you to orchestrate goroutines with ease. The goal of the project is to make the process of running goroutines in parallel and synchronizing them easy. Floc follows these objectives: -- Split the overall work into a number of small jobs. Floc cannot force you to do that, but doing so grants many advantages, from simpler testing up to better control over execution. -- Make end algorithms clearer and simpler by expressing them through a combination of jobs. In short, floc allows you to express a job through other jobs. -- Provide better control over execution with one entry point and one exit point. That is achieved by allowing any job to finish execution with Cancel or Complete. -- Simple parallelism and synchronization of jobs. -- As little overhead as possible, in comparison to direct use of goroutines and sync primitives. The package categorizes the middleware used for flow building into subpackages. -- `guard` contains middleware which helps protect the flow from falling into panic or unpredictable behavior. -- `pred` contains some basic predicates for AND, OR, NOT logic. -- `run` provides middleware for designing the flow, i.e. for running jobs sequentially, in parallel, in the background and so on. Here is a quick example of what the package is capable of.
Package tello provides an unofficial, easy-to-use, standalone API for the Ryze Tello® drone. Tello is a registered trademark of Ryze Tech. The author(s) of this package is/are in no way affiliated with Ryze, DJI, or Intel. The package has been developed by gathering together information from a variety of sources on the Internet (especially the generous contributors at https://tellopilots.com), and by examining data packets sent to/from the Tello. The package will probably be extended as more knowledge of the drone's protocol is obtained. Use this package at your own risk. The author(s) is/are in no way responsible for any damage caused either to or by the drone when using this software. The following features have been implemented... An example application using this package is available at http://github.com/SMerrony/telloterm This documentation should be consulted alongside https://github.com/SMerrony/tello/blob/master/ImplementationChart.md The drone provides two types of connection: a 'control' connection which handles all commands to and from the drone including flight, status and (still) pictures, and a 'video' connection which provides an H.264 video stream from the forward-facing camera. You must establish a control connection to use the drone, but the video connection is optional and cannot be started unless a control connection is running. Funcs vs. Channels: certain functionality is made available in two forms: single-shot function calls and streaming (channel) data flows, e.g. GetFlightData() vs. StreamFlightData(), and UpdateSticks() vs. StartStickListener(). Use whichever paradigm you prefer, but be aware that the channel-based calls should return immediately (the channels are buffered), whereas the function-based options could conceivably cause your application to pause very briefly if the Tello is very busy. (In practice, the author has not found this to be an issue.) A minimal connect-and-fly sketch is given below.
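The feature list and example code that belonged here are not included above, so the following is only a rough, hedged sketch of the control-connection workflow; it assumes the github.com/SMerrony/tello import path and the ControlConnectDefault/TakeOff/Land helpers used by the telloterm example, so check the package's godoc for the exact API before relying on it:

	package main

	import (
		"log"
		"time"

		"github.com/SMerrony/tello"
	)

	func main() {
		drone := new(tello.Tello)

		// A control connection is mandatory; the video connection is optional
		// and can only be started once the control connection is running.
		if err := drone.ControlConnectDefault(); err != nil {
			log.Fatalf("could not connect to the Tello: %v", err)
		}
		defer drone.ControlDisconnect()

		drone.TakeOff()
		time.Sleep(5 * time.Second)

		// Single-shot query; StreamFlightData() is the channel-based alternative.
		fd := drone.GetFlightData()
		log.Printf("flight data: %+v", fd)

		drone.Land()
		time.Sleep(5 * time.Second)
	}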
Package webpackager implements the control flow of Web Packager. The code below illustrates the usage of this package: Config allows you to change some behaviors of the Packager. packager.Run(url) retrieves an HTTP response using FetchClient, processes it using Processor, and turns it into a signed exchange using ExchangeFactory. Processor inspects the HTTP response to determine its eligibility for signed exchanges and manipulates it to optimize page loading. The generated signed exchanges are stored in ResourceCache to prevent duplicates. The code above sets just two parameters, ExchangeFactory and ResourceCache, and uses the defaults for the other parameters. With this setup, the packager retrieves the content just through HTTP, applies the recommended set of optimizations, generates signed exchanges compliant with version b3, and saves them in files named like "index.html.sxg" under "/tmp/sxg". Config has a few more parameters. See its definition for the details. You can also pass your own implementations to Config to inject custom logic into the packaging flow. You could write, for example, a custom FetchClient to retrieve the content from a database table instead of web servers, a custom Processor or HTMLTask to apply your own optimization techniques, a custom ResourceCache to store the produced signed exchanges in another database table in addition to a local drive, and so on. The cmd/webpackager package provides a command line interface to execute the packaging flow without writing the driver code on your own.
Package floc allows you to orchestrate goroutines with ease. The goal of the project is to make the process of running goroutines in parallel and synchronizing them easy. Floc follows these objectives: -- Easy to use functional interface. -- Better control over execution with one entry point and one exit point. -- Simple parallelism and synchronization of jobs. -- As little overhead as possible, in comparison to direct use of goroutines and sync primitives. The package categorizes the middleware used for building flows into subpackages. -- `guard` contains middleware which helps protect the flow from falling into panic or unpredictable behavior. -- `pred` contains some basic predicates for AND, OR, NOT logic. -- `run` provides middleware for designing the flow, i.e. for running jobs sequentially, in parallel, in the background and so on. -- `errors` contains all error types used in the package. Here is a quick example of what the package is capable of.
Package sypl provides a Simple Yet Powerful Logger built on top of the standard Go logger. A sypl logger can have many `Output`s, and each `Output` is responsible for writing to a specified destination. Each Output can have multiple `Processor`s, which run in isolation manipulating the log message. The order of execution follows the registration order. The above features allow sypl to fit into many different logging flows and needs. In an application with many loggers, and child loggers, finer control is sometimes needed, especially when debugging applications. Sypl offers two powerful ways to achieve that: the `SYPL_FILTER` and `SYPL_DEBUG` env vars. `SYPL_FILTER` allows you to specify the name(s) of the component(s) that should be logged. For example, for a given application with the loggers `svc`, `pv`, and `cm`, if a developer wants to see only `svc` and `pv` logging, that is achieved simply by setting `SYPL_FILTER="svc,pv"`. `SYPL_DEBUG` allows you to specify the max level; for example, for a given application with the loggers `svc`, `pv`, and `cm`, a developer could set: The possibilities are endless! Check out [`debugAndFilter`](example_test.go) for more.
Package throttle provides rate limiting and flow control components for data processing pipelines. It offers flexible mechanisms to control the rate at which items flow through a pipeline, helping to manage resource utilization and prevent system overload. Usage:
Package iolang implements a version of the Io programming language. Io is a dynamic, prototype-based language inspired by Smalltalk (everything is an object), Self (prototypes - *everything* is an object), Lisp (even the program code is made up of objects), NewtonScript (differential inheritance), Act1 (concurrency via actors and futures), and Lua (small and embeddable implementations). It was originally developed in C by Steve Dekorte. Currently, this implementation is focused primarily on becoming a fully fledged interpreter. A read-eval-print loop exists, primarily for integration testing. There is no file interpreter, so written Io programs are not yet executable. The interpreter can easily be embedded in another program. To start, use the NewVM function to create and initialize the interpreter. The VM object has a number of fields that can then be used to make objects available to Io code, especially the Lobby and Addons objects. Use the VM's NewNumber, NewString, or any other object creation methods to create an object, then use SetSlot to assign it to a slot on those objects. Hello World in Io: Io code executes via message passing. For example, the above snippet compiles to two messages: Upon evaluation, the first message, being a literal, immediately becomes that string value. Then, the println message is sent to it, activating the slot on the string object named "println", which is a method that writes the object to standard output. Messages can also be provided with arguments, which are simply additional messages providing more information, in parentheses. We could add arguments to our program: But the program executes the same way, because neither string literals nor (the default) println use their arguments. When an object receives a message, it checks for a slot on the object with the same name as the message text. If the object has such a slot, then it is activated and produces a value. Slots can be added to objects using the := assignment operator; executing creates a slot on Object named x with the value 1. Certain objects have special behavior when activated. Methods are encapsulated messages that, when activated, send that message to the object which received the message that activated the method. The println method above is an example: it is defined (roughly) as In the Hello World example, this method is activated with the "Hello, world!" string as its receiver, so it executes in the context of the string, meaning that slot lookups are performed on the string, and the "self" slot within the method refers to the string. Methods have their own locals, so new slots created within them exist only inside the method body. They can also be defined to take arguments; for example, if the println method were instead defined as then we would pass the object to print as an argument instead of printing the receiver. If an object does not own a slot with the same name as a message passed to it, the object will instead check for that slot in its prototypes. Objects can have any number of protos, and the search proceeds in depth-first order without duplicates. If the slot is found in one of the protos, then it is activated, but still with the original object as the receiver. This implements an inheritance-like concept, but with "superclasses" being themselves objects. (If there isn't any proto with the slot, then the message is sent to the object's "forward" slot, and if that slot doesn't exist either, an exception is raised.) 
Producing "subclasses" is done using the clone method. A clone of an object is a new object having empty slots and that object as its only prototype. If we say "y := x clone", then y will respond to the same messages as x via delegation; new slots can be created which will be activated in place of the proto's, providing polymorphism. The typical pattern to implement a new "type" is to say: do is an Object method which evaluates code in the context of its receiver. "Instances" of the new "class" can then be created by saying: Now instance is an object which responds to NewType's x, z, and someMethod slots, as well as all Object slots. Notice that clone is the method used both to create a new type and an instance of that type - this is prototype-based programming. An important aspect of Io, which has essentially no syntax beyond messages, is that control flow is implemented as Object methods. "Object if" is a method taking one to three arguments: All three of these forms are essentially equivalent. if evaluates the condition; then, if it is true (controlled by the result's asBoolean slot), it evaluates the second argument, otherwise it evaluates the third. If the branch to evaluate doesn't exist, if instead returns the boolean, to enable the other forms as well as to support the elseif method. Note that because message arguments are themselves messages, the wrong branches are never evaluated, so their side effects don't happen. There are also loops, including but not limited to: Each loop, being a method, produces a value, which is by default the last value encountered during evaluation of the loop. The Object methods continue, break, and return also do what their equivalents in most other programming languages do, except that continue and break can be supplied an argument to change the loop result. For a more in-depth introduction to Io, check out the official guide and reference at http://iolanguage.org/ as well as the original implementation's GitHub repository at https://github.com/IoLanguage/io for code samples.