Package pubsub provides an easy way to publish and receive Google Cloud Pub/Sub messages, hiding the details of the underlying server RPCs. This package is **deprecated**. All users should use cloud.google.com/go/pubsub/v2 instead. This version will be supported with bug fixes and security patches until the deprecation date of December 31st, 2026. For more information about migrating, see https://github.com/googleapis/google-cloud-go/blob/main/pubsub/MIGRATING.md

Pub/Sub is a many-to-many, asynchronous messaging system that decouples senders and receivers. More information about Pub/Sub is available at https://cloud.google.com/pubsub/docs See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

Pub/Sub messages are published to topics. A Topic may be created using Client.CreateTopic (see the sketch at the end of this overview). Messages may then be published to a Topic: Topic.Publish queues the message for publishing and returns immediately. When enough messages have accumulated, or enough time has elapsed, the batch of messages is sent to the Pub/Sub service. Topic.Publish returns a PublishResult, which behaves like a future: its Get method blocks until the message has been sent to the service. The first time you call Topic.Publish on a Topic, goroutines are started in the background. To clean up these goroutines, call Topic.Stop.

To receive messages published to a Topic, clients create a Subscription for the topic. There may be more than one subscription per topic; each message that is published to the topic will be delivered to all associated subscriptions. A Subscription may be created using Client.CreateSubscription. Messages are then consumed from a Subscription via callback. The callback is invoked concurrently by multiple goroutines, maximizing throughput. To terminate a call to Subscription.Receive, cancel its context. Once client code has processed the Message, it must call Message.Ack or Message.Nack; otherwise the Message will eventually be redelivered. Ack/Nack MUST be called within the Subscription.Receive handler function, and not from a goroutine. Otherwise, flow control (e.g. ReceiveSettings.MaxOutstandingMessages) will not be respected, and messages can get orphaned when cancelling Receive. If the client cannot or doesn't want to process the message, it can call Message.Nack to speed redelivery. For more information and configuration options, see Ack Deadlines below.

Note: It is possible for a Message to be redelivered even if Message.Ack has been called. Client code must be robust to multiple deliveries of messages. Note: This uses pubsub's streaming pull feature. This feature has properties that may be surprising. Please take a look at https://cloud.google.com/pubsub/docs/pull#streamingpull for more details on how streaming pull behaves compared to the synchronous pull method.

The number of StreamingPull connections can be configured by setting NumGoroutines in ReceiveSettings. The default value of 10 means the client library will maintain 10 StreamingPull connections. This is more than sufficient for most use cases, as StreamingPull connections can handle up to 10 MB/s (see https://cloud.google.com/pubsub/quotas#resource_limits). In some cases, using too many streams can lead to the client library behaving poorly as the application becomes I/O bound. By default, the number of connections in the gRPC conn pool is min(4, GOMAXPROCS). Each connection supports up to 100 streams.
Thus, if you have 4 or more CPU cores, the default setting allows a maximum of 400 streams, which is already excessive for most use cases. If you want to change the limits on the number of streams, you can change the number of connections in the gRPC connection pool, as shown in the sketch at the end of this overview.

The default pubsub deadlines are suitable for most use cases, but may be overridden. This section describes the tradeoffs that should be considered when overriding the defaults. Behind the scenes, each message returned by the Pub/Sub server has an associated lease, known as an "ack deadline". Unless a message is acknowledged within the ack deadline, or the client requests that the ack deadline be extended, the message will become eligible for redelivery. As a convenience, the pubsub client will automatically extend deadlines until the message is acknowledged or nacked, or until the maximum extension period elapses.

Ack deadlines are extended periodically by the client. The period between extensions, as well as the length of the extension, automatically adjusts based on the time it takes the subscriber application to ack messages (based on the 99th percentile of ack latency). By default, this extension period is capped at 10m, but this limit can be configured by the "MaxExtensionPeriod" setting. This has the effect that subscribers that process messages quickly have their message ack deadlines extended for a short amount, whereas subscribers that process messages slowly have their message ack deadlines extended for a large amount. The net effect is fewer RPCs sent from the client library.

For example, consider a subscriber that takes 3 minutes to process each message. Since the library has already recorded several 3-minute ack latencies in a percentile distribution, future message extensions are sent with a value of 3 minutes, every 3 minutes. Suppose the application crashes 5 seconds after the library sends such an extension: the Pub/Sub server would wait the remaining 2m55s before re-sending the messages out to other subscribers.

Please note that by default, the client library does not use the subscription's AckDeadline for the MaxExtension value. For use cases where message processing exceeds 30 minutes, we recommend using the base client in a pull model, since long-lived streams are periodically killed by firewalls. See the example at https://godoc.org/cloud.google.com/go/pubsub/apiv1#example-SubscriberClient-Pull-LengthyClientProcessing

To use an emulator with this library, you can set the PUBSUB_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Pub/Sub. You can then create and use a client as usual.

Deprecated: Please use cloud.google.com/go/pubsub/v2.
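To make the flow above concrete, here is a minimal, hedged sketch of creating a topic, publishing, and receiving with this package. The project ID "my-project", the topic and subscription IDs, and the connection-pool size are placeholder choices, not values prescribed by the library:

```
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/pubsub"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// The gRPC connection pool size can be tuned here if needed.
	client, err := pubsub.NewClient(ctx, "my-project", option.WithGRPCConnectionPool(4))
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Create a topic and publish a message; Get blocks until the
	// message has been sent to the service.
	topic, err := client.CreateTopic(ctx, "my-topic")
	if err != nil {
		log.Fatal(err)
	}
	defer topic.Stop() // flush pending messages and stop background goroutines

	res := topic.Publish(ctx, &pubsub.Message{Data: []byte("hello")})
	if _, err := res.Get(ctx); err != nil {
		log.Fatal(err)
	}

	// Create a subscription and consume messages via callback.
	// Cancelling ctx terminates the call to Receive.
	sub, err := client.CreateSubscription(ctx, "my-sub",
		pubsub.SubscriptionConfig{Topic: topic})
	if err != nil {
		log.Fatal(err)
	}
	err = sub.Receive(ctx, func(ctx context.Context, m *pubsub.Message) {
		fmt.Printf("got message: %s\n", m.Data)
		m.Ack() // always Ack/Nack inside the handler, not from another goroutine
	})
	if err != nil {
		log.Fatal(err)
	}
}
```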
Package restful, a lean package for creating REST-style WebServices without magic.

A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle Http requests from a server.

A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation.

A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter.

A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...)" and "restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container.

A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction. Use chain.ProcessFilter to pass the request/response pair to the next filter or RouteFunction. Container filters are processed before any registered WebService. WebService filters are processed before any Route of a WebService. Route filters are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations.

Two encodings are supported: gzip and deflate. To enable this for all responses, use Container.EnableContentEncoding. If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go

By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS Http request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests.

Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available; then use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. If the request has a valid URL but the method (GET, PUT, POST, ...) is not allowed, use http.StatusMethodNotAllowed. If the request does not have or has an unknown Accept Header set for this operation, use http.StatusNotAcceptable. If the request does not have or has an unknown Content-Type Header set for this operation, use http.StatusUnsupportedMediaType. In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response.

This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is even more improved when using a preloaded cache. You can also inject your own implementation.

This package has the means to produce detailed logging of the complete Http request matching process and filter invocation. Enabling this feature requires you to set a restful.StdLogger implementation (e.g. a log.Logger instance). The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple.

(c) 2012-2015, http://ernestmicklei.com. MIT License
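As an illustration of the concepts above (a WebService with a Route, registered on the Default Container), here is a hedged sketch; the /users path, the handler, the greeting payload and the port are arbitrary example choices:

```
package main

import (
	"log"
	"net/http"

	restful "github.com/emicklei/go-restful"
)

// hello reads the {name} path parameter and writes a small JSON entity.
func hello(req *restful.Request, resp *restful.Response) {
	name := req.PathParameter("name")
	resp.WriteEntity(map[string]string{"message": "hello " + name})
}

func main() {
	ws := new(restful.WebService)
	ws.Path("/users").
		Consumes(restful.MIME_JSON).
		Produces(restful.MIME_JSON)
	ws.Route(ws.GET("/{name}").To(hello))

	restful.Add(ws) // register on the Default Container (http.DefaultServeMux)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```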
Package restful, a lean package for creating REST-style WebServices without magic.

### WebServices and Routes

A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle Http requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/v3/examples/user-resource/restful-user-resource.go with a full implementation.

### Regular expression matching Routes

A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter.

### Containers

A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...)" and "restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container.

### Filters

A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction. Use chain.ProcessFilter to pass the request/response pair to the next filter or RouteFunction.

### Container Filters

These are processed before any registered WebService.

### WebService Filters

These are processed before any Route of a WebService.

### Route Filters

These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/filters/restful-filters.go with full implementations.

### Response Encoding

Two encodings are supported: gzip and deflate. To enable this for all responses, use Container.EnableContentEncoding. If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/encoding/restful-encoding-filter.go

### OPTIONS support

By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS Http request.

### CORS

By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests.

### Error Handling

Unexpected things happen.
If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest. Despite a valid URI, the resource requested may not be available; then use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. If the request has a valid URL but the method (GET, PUT, POST, ...) is not allowed, use http.StatusMethodNotAllowed. If the request does not have or has an unknown Accept Header set for this operation, use http.StatusNotAcceptable. If the request does not have or has an unknown Content-Type Header set for this operation, use http.StatusUnsupportedMediaType.

### ServiceError

In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response.

### Performance options

This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is even more improved when using a preloaded cache. You can also inject your own implementation.

### Trouble shooting

This package has the means to produce detailed logging of the complete Http request matching process and filter invocation. Enabling this feature requires you to set a restful.StdLogger implementation (e.g. a log.Logger instance).

### Logging

The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple.

### Resources

(c) 2012-2025, http://ernestmicklei.com. MIT License
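To make the filter and content-encoding hooks above concrete, here is a hedged sketch of a container-level FilterFunction (calling chain.ProcessFilter to continue the chain) plus enabling gzip/deflate on the default container; the log format and port are arbitrary, and it assumes one or more WebServices are registered as shown earlier:

```
package main

import (
	"log"
	"net/http"

	restful "github.com/emicklei/go-restful/v3"
)

// globalLogging is a FilterFunction; chain.ProcessFilter passes the
// request/response pair on to the next filter or the RouteFunction.
func globalLogging(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
	log.Printf("%s %s", req.Request.Method, req.Request.URL)
	chain.ProcessFilter(req, resp)
}

func main() {
	restful.Filter(globalLogging)                         // container filter on the Default Container
	restful.DefaultContainer.EnableContentEncoding(true)  // gzip/deflate for all responses
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```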
Goserial is a simple Go package that lets you read from and write to the serial port as a stream of bytes. It aims to have the same API on all platforms, including Windows. As an added bonus, the Windows package does not use cgo, so you can cross compile for Windows from another platform. Unfortunately goinstall does not currently let you cross compile, so you will have to do it manually.

Currently there is very little in the way of configurability. You can set the baud rate. Then you can Read(), Write(), or Close() the connection. Read() will block until at least one byte is returned. Write is the same. There is currently no exposed way to set the timeouts, though patches are welcome. Currently all ports are opened with 8 data bits, 1 stop bit, no parity, no hardware flow control, and no software flow control. This works fine for many real devices and many faux serial devices including usb-to-serial converters and bluetooth serial ports. You may Read() and Write() simultaneously on the same connection (from different goroutines). Example usage:
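Since the original example is not reproduced here, the following is a hedged sketch of typical usage; the import path, device name /dev/ttyUSB0 and baud rate are assumptions to adapt to your setup:

```
package main

import (
	"log"

	serial "github.com/tarm/goserial" // import path assumed; adjust to your copy of the package
)

func main() {
	// Open the port with only the baud rate configured, as described above.
	c := &serial.Config{Name: "/dev/ttyUSB0", Baud: 115200}
	s, err := serial.OpenPort(c)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	// Write a few bytes, then block until at least one byte is read back.
	if _, err := s.Write([]byte("test")); err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 128)
	n, err := s.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes: %q", n, buf[:n])
}
```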
Package cogito provides LLM-powered reasoning chains for Go. cogito implements a Thought-Note architecture for building autonomous systems that reason and adapt. The package is built around two core concepts: Use New or NewWithTrace to create thoughts: Cogito provides a comprehensive set of reasoning primitives: Decision & Analysis: Control Flow: Reflection: Session Management: Synthesis: Cogito wraps pipz connectors for Thought processing: LLM access uses a resolution hierarchy: Use SetProvider to configure the global default: Cogito emits capitan signals throughout execution. See [signals.go] for the complete list of events including ThoughtCreated, StepStarted, StepCompleted, StepFailed, NoteAdded, and NotesPublished.
Package uis (Userspace Internet Simulation) provides basic building blocks for userspace networking tests using gVisor. The package models a virtual internet where multiple network stacks can communicate. It provides direct control over packet flow, leaving routing policy and network conditions to the caller. The typical usage is to create a *Internet and use *Internet.NewStack to create two or more *Stack instances. The created instances are already configured for sending and receiving raw internet packets. The Connector type is a stdlib-like dialer for IP literal endpoints only. The ListenConfig type is a stdlib-like listener config for IP literal endpoints only. Use these types to plug this package into higher-level code that expects the net package interfaces. To route packets, you need to read packets using *Internet.InFlight. If you choose to forward the read packets, then you can deliver them to the right destination using *Internet.Deliver. We don't model L2 frames (we just move raw IP packets around) and we don't model multiple hops. These choices keep this package focused on fundamental primitives rather than full frameworks. The *PCAPTrace type allows you to capture packets in flight in a PCAP format so that you can inspect what happened using tools such as wireshark. This example creates a client and the server and the client downloads a small number of bytes from the server. This example creates a client and server using UDP over IPv4. The server echoes back whatever it receives. This example creates a client and server using UDP over IPv6. The server echoes back whatever it receives.
Package swf provides the API client, operations, and parameter types for Amazon Simple Workflow Service. The Amazon Simple Workflow Service (Amazon SWF) makes it easy to build applications that use Amazon's cloud to coordinate work across distributed components. In Amazon SWF, a task represents a logical unit of work that is performed by a component of your workflow. Coordinating tasks in a workflow involves managing intertask dependencies, scheduling, and concurrency in accordance with the logical flow of the application. Amazon SWF gives you full control over implementing tasks and coordinating them without worrying about underlying complexities such as tracking their progress and maintaining their state. This documentation serves as reference only. For a broader overview of the Amazon SWF programming model, see the Amazon SWF Developer Guide.
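As a hedged illustration of getting started with this client, the sketch below loads the default AWS configuration and lists registered SWF domains; the output field names are assumed from the ListDomains API shape and may need adjusting to your SDK version:

```
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/swf"
	"github.com/aws/aws-sdk-go-v2/service/swf/types"
)

func main() {
	ctx := context.Background()

	// Load credentials and region from the default sources (env, shared config, ...).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := swf.NewFromConfig(cfg)

	// List the registered SWF domains in the account.
	out, err := client.ListDomains(ctx, &swf.ListDomainsInput{
		RegistrationStatus: types.RegistrationStatusRegistered,
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, d := range out.DomainInfos {
		fmt.Println(aws.ToString(d.Name))
	}
}
```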
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. --> Important note: for the database object and other objects (schema, role, etc), please always adhere to the rules for Snowflake Object Identifiers; especially https://docs.snowflake.com/en/sql-reference/identifiers-syntax#double-quoted-identifiers. As mentioned in the docs, if you have e.g. a database with mIxEDcAsE naming, as you needed to create it with enclosing it in double quotes, similarly you'll need to reference it also with double quotes when specifying it in the connection string / DSN. In practice, this means you'll need to escape the second pair of double quotes, which are part of the database name, and not the String notation. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. 
requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials. See "Authenticator Values" section below for supported values. singleAuthenticationPrompt: specifies whether only one authentication should be performed at the same time for authentications that need human interaction (like MFA or OAuth authorization code). By default it is true. application: Identifies your application to Snowflake Support. disableOCSPChecks: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. The OCSP module caches responses internally. If your application is long running, you can enable cache clearing by calling StartOCSPCacheClearer and disable it by calling StopOCSPCacheClearer. IMPORTANT: Change the default value for testing or emergency situations only. insecureMode: deprecated. Use disableOCSPChecks instead. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. client_session_keep_alive: Set to true to have a heartbeat in the background every hour by default, or every client_session_keep_alive_heartbeat_frequency seconds if set, to keep the connection alive such that the connection session will never expire. Care should be taken in using this option as it opens up the access forever as long as the process is alive. client_session_keep_alive_heartbeat_frequency: Number of seconds in-between client attempts to update the token for the session. The default is 3600 seconds. Minimum value is 900 seconds; a smaller value will be reset to 900 seconds. Maximum value is 3600 seconds; a larger value will be reset to 3600 seconds. This parameter is only valid if client_session_keep_alive is set to true. ocspFailOpen: true by default. Set to false to make the OCSP check run in fail-closed mode. certRevocationCheckMode (enabled, advisory, disabled): Specifies the certificate revocation check mode. When enabled, the driver performs a certificate revocation check using CRL. When advisory, the driver performs a certificate revocation check using CRL, but fails the connection only if the certificate is revoked; if the status cannot be determined, the connection is established. When disabled, the driver does not perform a certificate revocation check. Keep in mind that the certificate revocation check with CRLs is a heavy task, both for memory and CPU. The default is disabled. crlAllowCertificatesWithoutCrlURL: if a certificate does not have a CRL URL, the driver will allow the connection to be established. The default is false. SNOWFLAKE_CRL_CACHE_VALIDITY_TIME (environment variable): specifies the validity time of the CRL cache in seconds. crlInMemoryCacheDisabled: set to disable in-memory caching of CRLs. crlOnDiskCacheDisabled: set to disable on-disk caching of CRLs (on-disk cache may help with cold starts). crlDownloadMaxSize: maximum size (in bytes) of a CRL to download. Default is 200MB. SNOWFLAKE_CRL_ON_DISK_CACHE_DIR (environment variable): set to customize the directory for on-disk caching of CRLs. SNOWFLAKE_CRL_ON_DISK_CACHE_REMOVAL_DELAY (environment variable): set the delay (in seconds) for removing the on-disk cache (for debuggability). crlHTTPClientTimeout: customize the HTTP client timeout for downloading CRLs.
validateDefaultParameters: true by default. Set to false to disable checks on existence and privileges check for Database, Schema, Warehouse and Role when setting up the connection --> Important note: with the default true value, the connection will fail as the validation fails, if you specify a non-existent database/schema/etc name. This is particularly important when you have a miXedCaSE-named object (e.g. database) and you forgot to properly double quote it. This behaviour is still preferable as it provides a very clear, fail-fast indication of the configuration error. If you would still like to forego this validation, which ensures that the driver always connects with proper database, schema etc. and creates a proper context for it, you can set this configuration to false to allow connection with invalid object identifiers. In this case (with this default validation deliberately turned off) the driver cannot guarantee that the actual behaviour inside the session will match with the one you'd expect, i.e. not actually using the database you expect, and so on. tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic. logQueryText: when set to true, the full query text will be logged. Be aware that it may include sensitive information. Default value is false. logQueryParameters: when set to true, the parameters will be logged. Requires logQueryText to be enabled first. Be aware that it may include sensitive information. Default value is false. disableQueryContextCache: disables parsing of query context returned from server and resending it to server as well. Default value is false. clientConfigFile: specifies the location of the client configuration json file. In this file you can configure Easy Logging feature. disableSamlURLCheck: disables the SAML URL check. Default value is false. All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use OpenWithConfig() function to create a database handle with the specified Config. To use the internal Snowflake authenticator, specify snowflake (Default). To use programmatic access tokens, specify programmatic_access_token. If you want to cache your MFA logins, specify username_password_mfa. You can pass TOTP in a separate passcode parameter or append it to the password setting in which case you need to set passcodeInPassword = true. To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth with token, specify oauth and provide an OAuth Access Token (see the token parameter below). To authenticate via full OAuth flow, specify oauth_authorization_code or oauth_client_credentials and fill relevant parameters (oauthClientId, oauthClientSecret, oauthAuthorizationUrl, oauthTokenRequestUrl, oauthRedirectUri, oauthScope). Specify URLs if you want to use external OAuth2 IdP, otherwise Snowflake will be used as a default IdP. If oauthScope is not configured, the role is used (giving session:role:<roleName> scope). 
For more information, please refer to the official Snowflake documentation. To authenticate via workload identity, specify workload_identity. This option requires the workloadIdentityProvider option to be set (AWS, GCP, AZURE, OIDC). When workloadIdentityProvider=AZURE, workloadIdentityEntraResource can optionally be set to customize the Entra resource used to fetch the JWT token. When workloadIdentityProvider=GCP or AWS, workloadIdentityImpersonationPath can optionally be set to customize the impersonation path. This is a comma-separated list. For GCP the last parameter is the target service account and the rest are chained delegation. For AWS this is the list of role ARNs to assume. For more details, refer to the usage guide: https://docs.snowflake.com/en/user-guide/workload-identity-federation

You can also connect to your warehouse using a connection Config. The database/sql library recommends this approach when you want to take advantage of driver-specific connection features that aren't available in a connection string; each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. If you are using this method, you don't need to pass a driver name to specify the driver type you are looking to connect with. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTERATION` in your environment. This is useful if you wish to register multiple versions of the driver. Note: `GOSNOWFLAKE_SKIP_REGISTERATION` should not be used if sql.Open() is used as the method to connect to the server, as sql.Open will require registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}.

You can load the connection configuration from a .toml file. With two environment variables, `SNOWFLAKE_HOME` (`connections.toml` file directory) and `SNOWFLAKE_DEFAULT_CONNECTION_NAME` (DSN name), the driver will search the config file and load the connection. You can find out how to use this connection method at ./cmd/tomlfileconnection or in the Snowflake docs: https://docs.snowflake.com/en/developer-guide/snowflake-cli-v2/connecting/specify-credentials If the connections.toml file is readable by others, a warning will be logged. To disable it you need to set the environment variable `SF_SKIP_WARNING_FOR_READ_PERMISSIONS_ON_CONFIG_FILE` to true.

If you wish to specify a custom transport (e.g. to provide a custom TLS config to be used with your custom truststore), pass it through `NewConnector`. As an alternative, you can use the `RegisterTLSConfig` / `DeregisterTLSConfig` functions as seen in the unit tests: https://github.com/snowflakedb/gosnowflake/blob/v1.16.0/transport_test.go#L127

The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: the end of a hostname (or a complete hostname), for example ".amazonaws.com" or "xy12345.snowflakecomputing.com"; or an IP address, for example "192.196.1.15".
If more than one value is specified, values should be separated by commas.

In addition to environment variables, the Go Snowflake Driver also supports configuring the proxy via connection parameters. When these parameters are provided in the connection string or DSN, they take precedence and any environment proxy settings (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) will be ignored.

| Parameter       | Description                                                           | Default |
|-----------------|-----------------------------------------------------------------------|---------|
| `proxyHost`     | Hostname or IP address of the proxy server.                           |         |
| `proxyPort`     | Port number of the proxy server.                                      |         |
| `proxyUser`     | Username for proxy authentication.                                    |         |
| `proxyPassword` | Password for proxy authentication.                                    |         |
| `proxyProtocol` | Protocol to use for proxy connection. Valid values: `http`, `https`.  | `http`  |
| `noProxy`       | Comma-separated list of hosts that should bypass the proxy.           |         |

For more details, please refer to the example in ./cmd/proxyconnection.

By default, the driver's builtin logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. In order to enable debug logging for the driver, users can call SetLogLevel("debug") on the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, use the SFLogger.SetOutput method. If you want to define S3 client logging, override the S3LoggingMode variable using configuration: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode

A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. If you need the query ID for your query, you have to use the raw connection (for both queries and execs). The result of your query can then be retrieved by setting the query ID in the WithFetchResultByID context.

From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap in context to execute methods that can take the context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example.

A context containing OpenTelemetry headers for distributed tracing can be created. Each query, both synchronous and asynchronous, run with this context will include the Trace ID and Span ID as metadata. If you are instrumenting your program with OpenTelemetry and exporting telemetry data to Snowflake, then queries run with this context will be properly nested under the appropriate parent span. This can be viewed in the Traces and Logs tab in Snowsight.

The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT; its valid values are ARROW and JSON. If the user attempts to set the parameter to an invalid value, an error is returned.
The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level. Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well.

For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below.

When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are:

1. The SQL data type.
2. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}.
3. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly.
4. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.)
5. The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.
Go Data Types for Scan()

When reading from the JSON data format via an interface{}, the returned value is always a string; the JSON column below lists the Go data types supported when scanning directly. The ARROW column lists the default and supported Go data types for Scan().

| SQL Data Type                     | ARROW: Go Data Types for Scan()                              | JSON: Go Data Types for Scan() |
|-----------------------------------|--------------------------------------------------------------|--------------------------------|
| BOOLEAN                           | bool                                                         | bool                           |
| VARCHAR                           | string                                                       | string                         |
| DOUBLE                            | float32, float64 [1], [2]                                    | float32, float64               |
| INTEGER that fits in int64        | int, int8, int16, int32, int64 [1], [2]                      | int, int8, int16, int32, int64 |
| INTEGER that doesn't fit in int64 | int, int8, int16, int32, int64, *big.Int [1], [2], [3], [4]  | error                          |
| NUMBER(P, S) where S > 0          | float32, float64, *big.Float [1], [2], [3], [5]              | float32, float64               |
| DATE                              | time.Time                                                    | time.Time                      |
| TIME                              | time.Time                                                    | time.Time                      |
| TIMESTAMP_LTZ                     | time.Time                                                    | time.Time                      |
| TIMESTAMP_NTZ                     | time.Time                                                    | time.Time                      |
| TIMESTAMP_TZ                      | time.Time                                                    | time.Time                      |
| BINARY                            | []byte                                                       | []byte                         |
| ARRAY [6]                         | string / array                                               | string / array                 |
| OBJECT [6]                        | string / struct                                              | string / struct                |
| VARIANT                           | string                                                       | string                         |
| MAP                               | map                                                          | map                            |

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.

[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.
[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().

[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Int64()/.String()/.Uint64() methods. For an example, see below.

[5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but you can convert to those data types by using the .Float32()/.String()/.Float64() methods. For an example, see below.

[6] Arrays and objects can be either semistructured or structured; see more info in the section below.

Note: SQL NULL values are converted to Golang nil values, and vice-versa.

Snowflake supports two flavours of "structured data" - semistructured and structured. Semistructured types are variants, objects and arrays without schema. When data is fetched, it's represented as strings and the client is responsible for its interpretation. The data does not have any corresponding schema, so values in the table may differ slightly. Semistructured variants, objects and arrays are always represented as strings for scanning. When inserting, a marker indicating the correct type must be used.

Structured types differ from semistructured types by having a specific schema. In all rows of the table, values must conform to this schema. To retrieve structured objects, follow these steps:

1. Create a struct implementing the sql.Scanner interface. Automatic scan goes through all fields in a struct and reads object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built from the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added: the first value is always the name of a field in an SQL object; additionally, an `ignore` parameter can be passed to omit this field.

2. Use the WithStructuredTypesEnabled context while querying data.

3. Use it in a regular scan.

See StructuredObject for all available operations including null support, embedding nested structs, etc.

Retrieving arrays of simple types works exactly the same as retrieving normal values - using the Scan function. You can use the WithMapValuesNullable and WithArrayValuesNullable contexts to handle null values in, respectively, maps and arrays of simple types in the database. In that case, sql null types will be used. If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners. Retrieving structured maps is very similar to retrieving arrays.

To bind structured objects:

1. Create a type which implements the StructuredObjectWriter interface.

2. Use an instance as a regular bind.

3. If you need to bind a nil value, a dedicated syntax is required.

Binding structured arrays is like binding any other parameter; the only difference is that if you want to insert an empty array (not nil but empty), a dedicated syntax is required.

The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query.
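Since the original code is not reproduced here, the following is a hedged sketch of that flow; the DSN and the query are placeholders:

```
package main

import (
	"context"
	"database/sql"
	"log"
	"math/big"

	sf "github.com/snowflakedb/gosnowflake"
)

func main() {
	// Placeholder DSN; see the connection string discussion above.
	db, err := sql.Open("snowflake", "user:password@my_organization-my_account/mydb/public?warehouse=mywh")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Higher precision must be requested through the context.
	ctx := sf.WithHigherPrecision(context.Background())

	// A NUMBER(38,0) value that does not fit in an int64.
	rows, err := db.QueryContext(ctx, "SELECT 12345678901234567890123456789012345678")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	var v interface{}
	for rows.Next() {
		if err := rows.Scan(&v); err != nil {
			log.Fatal(err)
		}
		// With higher precision enabled, an INTEGER outside the int64 range
		// arrives as *big.Int.
		if b, ok := v.(*big.Int); ok {
			log.Printf("big value: %s", b.String())
			if b.IsInt64() {
				log.Printf("fits in int64: %d", b.Int64()) // copy when it fits
			}
		}
	}
}
```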
If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value:

By default, DECFLOAT values are returned as string values. If you want to retrieve them as numbers, you have to use the WithDecfloatMappingEnabled context. If higher precision is enabled, the driver will return them as *big.Float values. Otherwise, they will be returned as float64 values. Keep in mind that neither float64 nor *big.Float can precisely represent some DECFLOAT values. If precision is important, you have to use the string representation and parse it with your own library.

You can retrieve data in a columnar format similar to the format the server returns, without transposing it to rows. When working with the Arrow columnar format in the Go driver, ArrowBatch structs are used. These are structs mostly corresponding to data chunks received from the backend. They allow access to specific arrow.Record structs. An ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from the query context or set by the user using the WithContext(ctx) method. In order to access them, you must use the `WithArrowBatches` context, similar to the following: This returns []*ArrowBatch.

ArrowBatch functions:

GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of its actual size.

WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded. For example: will produce the same result in records1 and records2, irrespective of the newly provided ctx. Contexts worth noting are:

- WithArrowBatchesTimestampOption
- WithHigherPrecision
- WithArrowBatchesUtf8Validation

These are described in more detail later.

Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not.

Limitations:

How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake. Also, Snowflake supports 0-9 (nanosecond) digit precision for seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) precision. Consequently, Snowflake uses a custom timestamp format in Arrow, which differs depending on timestamp type and precision. If you want to use timestamps in Arrow batches, you have two options:

How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters.
However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the context WithArrowBatchesUtf8Validation is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`. This ensures that Arrow records conform to the UTF-8 standard, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks.

How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision. Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. When using NUMBERs with non-zero scale, the value is returned as an integer type and the scale is provided in the record metadata. For example, a value of 123.45 that comes from NUMBER(9, 4) will be represented as 1234500 with a scale equal to 4. It is the client's responsibility to interpret it correctly. Also, see the limitations section above.

How to handle JSON responses in Arrow batches: Due to technical limitations, the Snowflake backend may return JSON even if the client expects Arrow. In that case, Arrow batches are not available and an error with code ErrNonArrowResponseInArrowBatches is returned. The response is parsed into regular rows. You can read the rows in the way described in the transform_batches_to_rows.go example. This has a strong limitation though: it is a very low-level API (the Go driver API), so no conversions are provided. All values are returned as strings. An alternative approach is to rerun the query without enabling Arrow batches and use the general Go SQL API instead of the driver API. This can be optimized by using `WithRequestID`, so the backend returns results from its cache.

Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones".

Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enable the proper handling of null parameters inside function calls, i.e.: Timestamp nullability had to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which are all mapped to a single Go time.Time type:

Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns.
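The original listing is not shown here; a minimal sketch of array binding with gosnowflake.Array, using hypothetical table and column names, looks roughly like this:

    import (
        "database/sql"
        "log"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func insertBatch(db *sql.DB) {
        intArr := []int{1, 2, 3}
        fltArr := []float64{0.1, 2.34, 5.678}
        boolArr := []bool{true, false, true}
        strArr := []string{"test1", "test2", "test3"}

        // sf.Array wraps each slice so the driver binds it as an array,
        // inserting one row per element in a single batch.
        _, err := db.Exec(
            "INSERT INTO my_table (c1, c2, c3, c4) VALUES (?, ?, ?, ?)", // hypothetical table
            sf.Array(&intArr), sf.Array(&fltArr), sf.Array(&boolArr), sf.Array(&strArr),
        )
        if err != nil {
            log.Fatal(err)
        }
    }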
The example binds arrays to the parameters in the INSERT statement.

If the array contains SQL NULL values, use the slice type []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For example,

For slices []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver. For example,

Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error:

For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows:

The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location.

Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package:

The driver directly downloads a result set from cloud storage if its size is large. This is required to shift workloads from the Snowflake database to the clients for scale. The download is performed asynchronously by a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current one. The application may change the number of result set chunk downloaders if required. Note that this does not reduce the memory footprint by itself. Consider the Custom JSON Decoder.
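As a hedged sketch of tuning the downloader count, assuming the package-level MaxChunkDownloadWorkers variable (its name and availability may differ between driver versions):

    import sf "github.com/snowflakedb/gosnowflake"

    func init() {
        // Increase the number of goroutines that download result chunks
        // concurrently. This trades memory and connections for throughput;
        // it does not reduce the memory footprint.
        sf.MaxChunkDownloadWorkers = 12
    }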
Custom JSON Decoder for Parsing Result Set (Experimental)

The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade performance depending on the environment. The test cases running on a Travis Ubuntu box show a five times smaller memory footprint but roughly four times slower performance. Be cautious when using the option.

The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure specifying: The <your_private_key> should be a base64 URL encoded PKCS8 RSA private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate a valid key pair, you can execute the following commands in the shell:

Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on the disk and decrypt the key in your application using a library you trust.

JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`.

The driver allows authentication using an external browser. When a connection is created, the driver will open a browser window and ask the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified: The external browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or using a Config structure with the following ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver.

By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below.

The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string.
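A minimal sketch of such a context, assuming the gosnowflake.WithMultiStatement helper; the statement text and count are purely illustrative:

    import (
        "context"
        "database/sql"
        "log"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func runMultiStatement(db *sql.DB) {
        multi := "DELETE FROM test_tbl; INSERT INTO test_tbl VALUES (1), (2); SELECT COUNT(*) FROM test_tbl;"

        // Declare that exactly three statements are expected; a mismatch is
        // rejected, which makes appending an injected statement harder.
        ctx, err := sf.WithMultiStatement(context.Background(), 3)
        if err != nil {
            log.Fatal(err)
        }
        rows, err := db.QueryContext(ctx, multi)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
    }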
When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets:

The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext():

Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATEs had not functioned as expected.

The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical.

The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement.

If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements.

Note: PUT statements are not supported for multi-statement queries.

If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an exception is thrown. If you then query the contents of the table named "test", the values 1 and 2 would be present.

When using the QueryContext() and ExecContext() functions, Go code can check for errors in the usual way. For example:

Preparing statements and using bind variables are also not supported for multi-statement queries.

The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements.

Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step, which is that you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode.
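(The original listing is not reproduced; this is a minimal sketch assuming gosnowflake.WithAsyncMode, with an illustrative query.)

    import (
        "context"
        "database/sql"
        "log"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func queryAsync(db *sql.DB) {
        // WithAsyncMode is the only asynchronous-specific step; the rest of
        // the code is identical to the synchronous version.
        ctx := sf.WithAsyncMode(context.Background())

        rows, err := db.QueryContext(ctx, "SELECT 1")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        // ... do other work here while the query runs in the background ...

        var n int
        for rows.Next() { // Next() blocks until the result set is available
            if err := rows.Scan(&n); err != nil {
                log.Fatal(err)
            }
        }
    }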
The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking). If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in.

More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.columnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().)

Because the example code above executes only one query and no other activity, there is no significant difference between asynchronous and synchronous behavior. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go

==> Some considerations related to the KeepSessionAlive configuration option in the context of asynchronous query execution

When a Go SQL connection is closed, it performs the following actions:

* stops the scheduled heartbeats (CLIENT_SESSION_KEEP_ALIVE), if they were enabled
* cleans up all the HTTP connections which are already idle - it doesn't touch the ones currently in active use
* if Config.KeepSessionAlive is false (default), then actively logs out the current Snowflake session.

!! Caveat: If there are any queries which are currently executing in the same Snowflake session (e.g. async queries sent with WithAsyncMode()), then those queries are automatically cancelled from the client side a couple of minutes after the Close() call, as a Snowflake session which has been actively logged out of cannot sustain any queries. You can govern this behaviour by setting Config.KeepSessionAlive to true, in which case the corresponding Snowflake session will be kept alive for a long time (determined by the Snowflake engine) even after an explicit Connection.Close() call, past the time when the last running query in the session finishes executing. The behaviour is also dependent on the ABORT_DETACHED_QUERY parameter; please see the detailed explanation in the parameter description at https://docs.snowflake.com/en/sql-reference/parameters#abort-detached-query. As a consequence, best practice would be to isolate all long-running async tasks (especially ones expected to continue after the connection is closed) into a separate connection.

The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer.
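As a rough, hedged sketch of both commands issued through database/sql (file paths and stage names are hypothetical):

    import (
        "database/sql"
        "log"
    )

    func putAndGet(db *sql.DB) {
        // Upload a local file to the user stage. Snowflake recommends an
        // absolute path; on Windows, backslashes in the path need special care.
        rows, err := db.Query("PUT file:///tmp/data/orders.csv @~/staged")
        if err != nil {
            log.Fatal(err)
        }
        rows.Close()

        // Download the staged files into a local directory.
        rows, err = db.Query("GET @~/staged file:///tmp/downloaded/")
        if err != nil {
            log.Fatal(err)
        }
        rows.Close()
    }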
See the following for information on the syntax and supported parameters:

Using PUT: The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file) use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.)

Note: PUT statements are not supported for multi-statement queries.

Using GET: The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: To download a file into an in-memory stream (rather than a file) use code similar to the code below.

Note: GET statements are not supported for multi-statement queries.

Specifying a temporary directory for encryption and compression: Putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify it yourself, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example:

Using custom configuration for PUT/GET: If you want to override some default configuration options, you can use the `WithFileTransferOptions` context. There are multiple config parameters, including ones for progress bars and compression.

The default behaviour is to propagate to the caller any underlying errors encountered while executing PUT or GET commands, for increased awareness and easier handling and troubleshooting. The behaviour is governed by the `RaisePutGetError` flag on `SnowflakeFileTransferOptions` (default: `true`). If you wish to ignore those errors instead, you can set `RaisePutGetError: false`. Example snippet:

The Go Snowflake Driver includes an embedded native library called "minicore" that verifies loading of native Rust extensions on various platforms. By default, minicore is enabled and loaded dynamically at runtime.

## Disabling Minicore

There are two ways to disable minicore:

1. **At runtime using environment variable:**

2. **At compile time using build tags:**

When minicore is disabled (either at runtime or compile time), the driver continues to work normally but without the additional functionality provided by the native library.

## Static Linking

On Linux, if the binary is fully statically linked (e.g., built with -linkmode external -extldflags '-static'), the driver automatically detects this and skips minicore loading. Calling dlopen from a statically linked glibc binary would crash with SIGFPE, so the driver inspects the ELF header for a dynamic linker (PT_INTERP) and gracefully skips minicore if none is found. When minicore is disabled (either at runtime, at compile time, or automatically due to static linking), the driver continues to work normally but without the additional functionality provided by the native library.
==> Relevant configuration

- `ConnectionDiagnosticsEnabled` (default: false) - main flag to enable the diagnostics
- `ConnectionDiagnosticsAllowlistFile` - specify `/path/to/allowlist.json` to use a specific allowlist file which the driver should parse. If not specified, the driver tries to open `allowlist.json` from the current directory. `ConnectionDiagnosticsAllowlistFile` is only taken into consideration when `ConnectionDiagnosticsEnabled=true`.

==> Flow of operation when `ConnectionDiagnosticsEnabled=true`

1. upon initial startup, the driver opens and reads `allowlist.json` to determine which hosts it needs to connect to, and then, for each entry in the allowlist:
2. performs a DNS resolution test to see if the hostname is resolvable
3. logs an Error when encountering a _public_ IP address for a host which looks to be a _private_ link hostname
4. checks if a proxy is used in the connection
5. sets up a connection, for which it uses the same transport that is driven by the driver's config (a custom transport, or an OCSP-less transport when OCSP is disabled, or by default the OCSP-enabled transport)
6. for HTTP endpoints, issues an HTTP GET request and sees if it connects
7. for HTTPS endpoints, the same, plus
8. the driver exits after performing diagnostics.

If you want to use the driver 'normally' after performing connection diagnostics, set `ConnectionDiagnosticsEnabled=false` or remove it from the config.
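A hedged sketch of enabling the diagnostics programmatically, assuming the options above are exposed as fields on gosnowflake.Config (they may instead be DSN parameters depending on driver version; account, credentials and the allowlist path are hypothetical):

    import (
        "database/sql"
        "log"

        sf "github.com/snowflakedb/gosnowflake"
    )

    func runDiagnostics() {
        cfg := &sf.Config{
            Account:                            "myorg-myaccount", // hypothetical
            User:                               "jdoe",
            Password:                           "secret",
            ConnectionDiagnosticsEnabled:       true,                     // assumed field name
            ConnectionDiagnosticsAllowlistFile: "/path/to/allowlist.json", // assumed field name
        }
        dsn, err := sf.DSN(cfg)
        if err != nil {
            log.Fatal(err)
        }
        db, err := sql.Open("snowflake", dsn)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
        // Connecting triggers the diagnostics; per the flow above, the driver
        // exits after performing them.
        _ = db.Ping()
    }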
Package skipper provides an HTTP routing library with flexible configuration as well as a runtime update of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request, forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then matching the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions: - method - path (optionally with wildcards) - path regular expressions - host regular expressions - headers - header regular expressions It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning, that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response. 
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation.

Each route has one of the following backends: HTTP endpoint, shunt, loopback or dynamic.

Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.)

A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response.

A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect.

A dynamic route means that the final target will be defined in a filter. One of the filters in the chain must set the target backend URL explicitly.

Route definitions consist of the following:

- request matching conditions (predicates)
- filter chain (optional)
- backend

The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation.

Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth.

Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients:

- Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ .

- Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper.

- etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package.

- static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates.

Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package.

Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests.
For details, see: https://godoc.org/github.com/zalando/skipper/circuit. Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins, however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created, that can be referenced in eskip definitions with the name 'Random': To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case more aggressive scale is needed, it is possible to setup Skipper in a cascade model, with multiple Skipper instances for specific route segments.
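The hellofilter.go listing itself is not reproduced above; the following is a hedged sketch of a filter Spec along the same lines (names and options are illustrative, not the exact example from the repository):

    package main

    import (
        "log"

        "github.com/zalando/skipper"
        "github.com/zalando/skipper/filters"
    )

    type helloSpec struct{}
    type helloFilter struct{ who string }

    // Name is the filter name referenced in eskip route definitions.
    func (s *helloSpec) Name() string { return "hello" }

    // CreateFilter builds a filter instance from the route's filter arguments.
    func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
        if len(args) == 0 {
            return nil, filters.ErrInvalidFilterParameters
        }
        who, ok := args[0].(string)
        if !ok {
            return nil, filters.ErrInvalidFilterParameters
        }
        return &helloFilter{who: who}, nil
    }

    func (f *helloFilter) Request(ctx filters.FilterContext) {}

    // Response sets a header on every response served by the route.
    func (f *helloFilter) Response(ctx filters.FilterContext) {
        ctx.Response().Header.Set("X-Hello", f.who)
    }

    func main() {
        log.Fatal(skipper.Run(skipper.Options{
            Address:       ":9090",
            RoutesFile:    "routes.eskip",
            CustomFilters: []filters.Spec{&helloSpec{}},
        }))
    }

A route such as hello: * -> hello("world") -> "https://www.example.org"; would then produce responses carrying the X-Hello header.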
Example_audioTranscription demonstrates how to transcribe speech to text using Azure OpenAI's Whisper model. This example shows how to: - Create an Azure OpenAI client with token credentials - Read an audio file and send it to the API - Convert spoken language to written text using the Whisper model - Process the transcription response The example uses environment variables for configuration: - AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_WHISPER_MODEL: The deployment name of your Whisper model - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Audio transcription is useful for accessibility features, creating searchable archives of audio content, generating captions or subtitles, and enabling voice commands in applications. Example_audioTranslation demonstrates how to translate speech from one language to English text. This example shows how to: - Create an Azure OpenAI client with token credentials - Read a non-English audio file - Translate the spoken content to English text - Process the translation response The example uses environment variables for configuration: - AOAI_WHISPER_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_WHISPER_MODEL: The deployment name of your Whisper model - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Speech translation is essential for cross-language communication, creating multilingual content, and building applications that break down language barriers. Example_chatCompletionStream demonstrates streaming responses from the Chat Completions API. This example shows how to: - Create an Azure OpenAI client with token credentials - Set up a streaming chat completion request - Process incremental response chunks - Handle streaming errors and completion The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") Streaming is useful for: - Real-time response display - Improved perceived latency - Interactive chat interfaces - Long-form content generation Example_chatCompletionsLegacyFunctions demonstrates using the legacy function calling format. This example shows how to: - Create an Azure OpenAI client with token credentials - Define a function schema using the legacy format - Use tools API for backward compatibility - Handle function calling responses The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_MODEL_LEGACY_FUNCTIONS_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Legacy function support ensures: - Compatibility with older implementations - Smooth transition to new tools API - Support for existing function-based workflows Example_chatCompletionsStructuredOutputs demonstrates using structured outputs with function calling. 
This example shows how to: - Create an Azure OpenAI client with token credentials - Define complex JSON schemas for structured output - Request specific data structures through function calls - Parse and validate structured responses The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") Structured outputs are useful for: - Database query generation - Data extraction and transformation - API request formatting - Consistent response formatting Example_completions demonstrates how to use Azure OpenAI's legacy Completions API. This example shows how to: - Create an Azure OpenAI client with token credentials - Send a simple text completion request - Handle the completion response - Process the generated text output The example uses environment variables for configuration: - AOAI_COMPLETIONS_MODEL: The deployment name of your completions model - AOAI_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Legacy completions are useful for: - Simple text generation tasks - Completing partial text - Single-turn interactions - Basic language generation scenarios Example_createImage demonstrates how to generate images using Azure OpenAI's DALL-E model. This example shows how to: - Create an Azure OpenAI client with token credentials - Configure image generation parameters including size and format - Generate an image from a text prompt - Verify the generated image URL is accessible The example uses environment variables for configuration: - AOAI_DALLE_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_DALLE_MODEL: The deployment name of your DALL-E model - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Image generation is useful for: - Creating custom illustrations and artwork - Generating visual content for applications - Prototyping design concepts - Producing visual aids for documentation Example_deepseekReasoningBasic demonstrates basic chat completions using DeepSeek-R1 reasoning model. This example shows how to: - Create an Azure OpenAI client with token credentials - Send a simple prompt to the DeepSeek-R1 reasoning model - Configure parameters for optimal reasoning performance - Process the response with step-by-step reasoning The example uses environment variables for configuration: - AOAI_DEEPSEEK_ENDPOINT: Your Azure OpenAI endpoint URL with DeepSeek model access - AOAI_DEEPSEEK_MODEL: The DeepSeek model deployment name (e.g., "deepseek-r1") DeepSeek-R1 is a reasoning model that provides detailed step-by-step analysis for complex problems, making it ideal for mathematical reasoning, logical deduction, and analytical problem solving. Example_deepseekReasoningMultiTurn demonstrates multi-turn conversations with DeepSeek-R1. 
This example shows how to: - Maintain conversation context across multiple turns - Build upon previous reasoning steps - Ask follow-up questions that reference earlier parts of the conversation - Handle complex problem-solving scenarios that require multiple interactions - Manage conversation history in a chat application When using the model for a chat application, you'll need to manage the history of that conversation and send the latest messages to the model. The example uses environment variables for configuration: - AOAI_DEEPSEEK_ENDPOINT: Your Azure OpenAI endpoint URL with DeepSeek model access - AOAI_DEEPSEEK_MODEL: The DeepSeek model deployment name (e.g., "deepseek-r1") Example_deepseekReasoningStreaming demonstrates streaming responses with DeepSeek-R1. This example shows how to: - Create a streaming chat completion request - Process streaming responses as they arrive - Handle the reasoning process in real-time - Provide a better user experience with immediate feedback The example uses environment variables for configuration: - AOAI_DEEPSEEK_ENDPOINT: Your Azure OpenAI endpoint URL with DeepSeek model access - AOAI_DEEPSEEK_MODEL: The DeepSeek model deployment name (e.g., "deepseek-r1") - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. This example uses a simple math problem to demonstrate DeepSeek-R1's step-by-step reasoning capabilities in a streaming context. Example_embeddings demonstrates how to generate text embeddings using Azure OpenAI's embedding models. This example shows how to: - Create an Azure OpenAI client with token credentials - Convert text input into numerical vector representations - Process the embedding vectors from the response - Handle embedding results for semantic analysis The example uses environment variables for configuration: - AOAI_EMBEDDINGS_MODEL: The deployment name of your embedding model (e.g., text-embedding-ada-002) - AOAI_EMBEDDINGS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") Text embeddings are useful for: - Semantic search and information retrieval - Text classification and clustering - Content recommendation systems - Document similarity analysis - Natural language understanding tasks Example_generateSpeechFromText demonstrates how to convert text to speech using Azure OpenAI's text-to-speech service. This example shows how to: - Create an Azure OpenAI client with token credentials - Send text to be converted to speech - Specify voice and audio format parameters - Handle the audio response stream The example uses environment variables for configuration: - AOAI_TTS_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_TTS_MODEL: The deployment name of your text-to-speech model - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Text-to-speech conversion is valuable for creating audiobooks, virtual assistants, accessibility tools, and adding voice interfaces to applications. Example_getChatCompletions demonstrates how to use Azure OpenAI's Chat Completions API. 
This example shows how to: - Create an Azure OpenAI client with token credentials - Structure a multi-turn conversation with different message roles - Send a chat completion request and handle the response - Process multiple response choices and finish reasons The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") Chat completions are useful for: - Building conversational AI interfaces - Creating chatbots with personality - Maintaining context across multiple interactions - Generating human-like text responses Example_chatCompletionsFunctions demonstrates how to use Azure OpenAI's function calling feature. This example shows how to: - Create an Azure OpenAI client with token credentials - Define a function schema for weather information - Request function execution through the chat API - Parse and handle function call responses The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_MODEL: The deployment name of your chat model - AOAI_CHAT_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") Tool calling is useful for: - Integrating external APIs and services - Structured data extraction from natural language - Task automation and workflow integration - Building context-aware applications Example_responsesApiChaining demonstrates how to chain multiple responses together in a conversation flow using the Azure OpenAI Responses API. This example shows how to: - Create an initial response - Chain a follow-up response using the previous response ID - Process both responses - Delete both responses to clean up The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") Example_responsesApiFunctionCalling demonstrates how to use the Azure OpenAI Responses API with function calling. This example shows how to: - Create an Azure OpenAI client with token credentials - Define tools (functions) that the model can call - Process the response containing function calls - Provide function outputs back to the model - Delete the responses to clean up The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") Example_responsesApiImageInput demonstrates how to use the Azure OpenAI Responses API with image input. This example shows how to: - Create an Azure OpenAI client with token credentials - Fetch an image from a URL and encode it to Base64 - Send a query with both text and a Base64-encoded image - Process the response The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") Note: This example fetches and encodes an image from a URL because there is a known issue with image url based image input. Currently only base64 encoded images are supported. Example_responsesApiReasoning demonstrates how to use the Azure OpenAI Responses API with reasoning. 
This example shows how to: - Create an Azure OpenAI client with token credentials - Send a complex problem-solving request that requires reasoning - Enable the reasoning parameter to get step-by-step thought process - Process the response The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") Example_responsesApiStreaming demonstrates how to use streaming with the Azure OpenAI Responses API. This example shows how to: - Create a streaming response - Process the stream events as they arrive - Clean up by deleting the response The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") Example_responsesApiTextGeneration demonstrates how to use the Azure OpenAI Responses API for text generation. This example shows how to: - Create an Azure OpenAI client with token credentials - Send a simple text prompt - Process the response - Delete the response to clean up The example uses environment variables for configuration: - AZURE_OPENAI_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") - AZURE_OPENAI_MODEL: The deployment name of your model (e.g., "gpt-4o") The Responses API is a new stateful API from Azure OpenAI that brings together capabilities from chat completions and assistants APIs in a unified experience. Example_streamCompletions demonstrates streaming responses from the legacy Completions API. This example shows how to: - Create an Azure OpenAI client with token credentials - Set up a streaming completion request - Process incremental text chunks - Handle streaming errors and completion The example uses environment variables for configuration: - AOAI_COMPLETIONS_MODEL: The deployment name of your completions model - AOAI_COMPLETIONS_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Streaming completions are useful for: - Real-time text generation display - Reduced latency in responses - Interactive text generation - Long-form content creation Example_structuredOutputsResponseFormat demonstrates using JSON response formatting. This example shows how to: - Create an Azure OpenAI client with token credentials - Define JSON schema for response formatting - Request structured mathematical solutions - Parse and process formatted JSON responses The example uses environment variables for configuration: - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_MODEL: The deployment name of your model - AOAI_CHAT_COMPLETIONS_STRUCTURED_OUTPUTS_ENDPOINT: Your Azure OpenAI endpoint URL (ex: "https://yourservice.openai.azure.com") Response formatting is useful for: - Mathematical problem solving - Step-by-step explanations - Structured data generation - Consistent output formatting Example_usingAzureContentFiltering demonstrates how to use Azure OpenAI's content filtering capabilities. 
This example shows how to: - Create an Azure OpenAI client with token credentials - Make a chat completion request - Extract and handle content filter results - Process content filter errors - Access Azure-specific content filter information from responses The example uses environment variables for configuration: - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_MODEL: The deployment name of your model - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Content filtering is essential for: - Maintaining content safety and compliance - Monitoring content severity levels - Implementing content moderation policies - Handling filtered content gracefully Example_usingAzureOnYourData demonstrates how to use Azure OpenAI's Azure-On-Your-Data feature. This example shows how to: - Create an Azure OpenAI client with token credentials - Configure an Azure Cognitive Search data source - Send a chat completion request with data source integration - Process Azure-specific response data including citations and content filtering results The example uses environment variables for configuration: - AOAI_OYD_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_OYD_MODEL: The deployment name of your model - COGNITIVE_SEARCH_API_ENDPOINT: Your Azure Cognitive Search endpoint - COGNITIVE_SEARCH_API_INDEX: The name of your search index - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Azure-On-Your-Data enables you to enhance chat completions with information from your own data sources, allowing for more contextual and accurate responses based on your content. Example_usingAzurePromptFilteringWithStreaming demonstrates how to use Azure OpenAI's prompt filtering with streaming responses. This example shows how to: - Create an Azure OpenAI client with token credentials - Set up a streaming chat completion request - Handle streaming responses with Azure extensions - Monitor prompt filter results in real-time - Accumulate and process streamed content The example uses environment variables for configuration: - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_MODEL: The deployment name of your model - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Streaming with prompt filtering is useful for: - Real-time content moderation - Progressive content delivery - Monitoring content safety during generation - Building responsive applications with content safety checks Example_usingDefaultAzureCredential demonstrates how to authenticate with Azure OpenAI using Azure Active Directory credentials. 
This example shows how to: - Create an Azure OpenAI client using DefaultAzureCredential - Configure authentication options with tenant ID - Make a simple request to test the authentication The example uses environment variables for configuration: - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_MODEL: The deployment name of your model - AZURE_TENANT_ID: Your Azure tenant ID - AZURE_CLIENT_ID: (Optional) Your Azure client ID - AZURE_CLIENT_SECRET: (Optional) Your Azure client secret DefaultAzureCredential supports multiple authentication methods including: - Environment variables - Managed Identity - Azure CLI credentials Example_usingEnhancements demonstrates how to use Azure OpenAI's enhanced features. This example shows how to: - Create an Azure OpenAI client with token credentials - Configure chat completion enhancements like grounding - Process Azure-specific response data including content filtering - Handle message context and citations The example uses environment variables for configuration: - AOAI_OYD_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_OYD_MODEL: The deployment name of your model - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Azure OpenAI enhancements provide additional capabilities beyond standard OpenAI features, such as improved grounding and content filtering for more accurate and controlled responses. Example_usingManagedIdentityCredential demonstrates how to authenticate with Azure OpenAI using Managed Identity. This example shows how to: - Create an Azure OpenAI client using ManagedIdentityCredential - Support both system-assigned and user-assigned managed identities - Make authenticated requests without storing credentials The example uses environment variables for configuration: - AOAI_ENDPOINT: Your Azure OpenAI endpoint URL - AOAI_MODEL: The deployment name of your model Managed Identity is ideal for: - Azure services (VMs, App Service, Azure Functions, etc.) - Azure DevOps pipelines with the Azure DevOps service connection - CI/CD scenarios where you want to avoid storing secrets - Production workloads requiring secure, credential-free authentication Example_vision demonstrates how to use Azure OpenAI's Vision capabilities for image analysis. This example shows how to: - Create an Azure OpenAI client with token credentials - Send an image URL to the model for analysis - Configure the chat completion request with image content - Process the model's description of the image The example uses environment variables for configuration: - AOAI_VISION_MODEL: The deployment name of your vision-capable model (e.g., gpt-4-vision) - AOAI_VISION_ENDPOINT: Your Azure OpenAI endpoint URL - AZURE_OPENAI_API_VERSION: Azure OpenAI service API version to use. See https://learn.microsoft.com/azure/ai-foundry/openai/api-version-lifecycle?tabs=go for information about API versions. Vision capabilities are useful for: - Image description and analysis - Visual question answering - Content moderation - Accessibility features - Image-based search and retrieval
Package pointer implements Andersen's analysis, an inclusion-based pointer analysis algorithm first described in (Andersen, 1994). A pointer analysis relates every pointer expression in a whole program to the set of memory locations to which it might point. This information can be used to construct a call graph of the program that precisely represents the destinations of dynamic function and method calls. It can also be used to determine, for example, which pairs of channel operations operate on the same channel. The package allows the client to request a set of expressions of interest for which the points-to information will be returned once the analysis is complete. In addition, the client may request that a callgraph be constructed. The example program in example_test.go demonstrates both of these features. Clients should not request more information than they need, since it may increase the cost of the analysis significantly. Our algorithm is INCLUSION-BASED: the points-to sets for x and y will be related by pts(y) ⊇ pts(x) if the program contains the statement y = x. It is FLOW-INSENSITIVE: it ignores all control flow constructs and the order of statements in a program. It is therefore a "MAY ALIAS" analysis: its facts are of the form "P may/may not point to L", not "P must point to L". It is FIELD-SENSITIVE: it builds separate points-to sets for distinct fields, such as x and y in struct { x, y *int }. It is mostly CONTEXT-INSENSITIVE: most functions are analyzed once, so values can flow in at one call to the function and return out at another. Only some smaller functions are analyzed with consideration of their calling context. It has a CONTEXT-SENSITIVE HEAP: objects are named by both allocation site and context, so the objects returned by two distinct calls to f are distinguished up to the limits of the calling context. It is a WHOLE PROGRAM analysis: it requires SSA-form IR for the complete Go program and summaries for native code. See the (Hind, PASTE'01) survey paper for an explanation of these terms. The analysis is fully sound when invoked on pure Go programs that do not use reflection or unsafe.Pointer conversions. In other words, if there is any possible execution of the program in which pointer P may point to object O, the analysis will report that fact. By default, the "reflect" library is ignored by the analysis, as if all its functions were no-ops, but if the client enables the Reflection flag, the analysis will make a reasonable attempt to model the effects of calls into this library. However, this comes at a significant performance cost, and not all features of that library are yet implemented. In addition, some simplifying approximations must be made to ensure that the analysis terminates; for example, reflection can be used to construct an infinite set of types and values of those types, but the analysis arbitrarily bounds the depth of such types. Most but not all reflection operations are supported. In particular, addressable reflect.Values are not yet implemented, so operations such as (reflect.Value).Set have no analytic effect. The pointer analysis makes no attempt to understand aliasing between the operand x and result y of an unsafe.Pointer conversion; it is as if the conversion allocated an entirely new object. The analysis cannot model the aliasing effects of functions written in languages other than Go, such as runtime intrinsics in C or assembly, or code accessed via cgo. The result is as if such functions are no-ops.
However, various important intrinsics are understood by the analysis, along with built-ins such as append. The analysis currently provides no way for users to specify the aliasing effects of native code.

------------------------------------------------------------------------

The remaining documentation is intended for package maintainers and pointer analysis specialists. Maintainers should have a solid understanding of the referenced papers (especially those by H&L and PKH) before making significant changes. The implementation is similar to that described in (Pearce et al, PASTE'04). Unlike many algorithms which interleave constraint generation and solving, constructing the callgraph as they go, this implementation for the most part observes a phase ordering (generation before solving), with only simple (copy) constraints being generated during solving. (The exception is reflection, which creates various constraints during solving as new types flow to reflect.Value operations.) This improves the traction of presolver optimisations, but imposes certain restrictions, e.g. potential context sensitivity is limited since all variants must be created a priori. A type is said to be "pointer-like" if it is a reference to an object. Pointer-like types include pointers and also interfaces, maps, channels, functions and slices. We occasionally use C's x->f notation to distinguish the case where x is a struct pointer from x.f where x is a struct value. Pointer analysis literature (and our comments) often uses the notation dst=*src+offset to mean something different than what it means in Go. It means: for each node index p in pts(src), the node index p+offset is in pts(dst). Similarly *dst+offset=src is used for store constraints and dst=src+offset for offset-address constraints. Nodes are the key data structure of the analysis, and have a dual role: they represent both constraint variables (equivalence classes of pointers) and members of points-to sets (things that can be pointed at, i.e. "labels"). Nodes are naturally numbered. The numbering enables compact representations of sets of nodes such as bitvectors (or BDDs); and the ordering enables a very cheap way to group related nodes together. For example, passing n parameters consists of generating n parallel constraints from caller+i to callee+i for 0<=i<n. The zero nodeid means "not a pointer". For simplicity, we generate flow constraints even for non-pointer types such as int. The pointer equivalence (PE) presolver optimization detects which variables cannot point to anything; this includes not only all variables of non-pointer types (such as int) but also variables of pointer-like types if they are always nil, or are parameters to a function that is never called. Each node represents a scalar part of a value or object. Aggregate types (structs, tuples, arrays) are recursively flattened out into a sequential list of scalar component types, and all the elements of an array are represented by a single node. (The flattening of a basic type is a list containing a single node.) Nodes are connected into a graph with various kinds of labelled edges: simple edges (or copy constraints) represent value flow. Complex edges (load, store, etc) trigger the creation of new simple edges during the solving phase. Conceptually, an "object" is a contiguous sequence of nodes denoting an addressable location: something that a pointer can point to.
The first node of an object has a non-nil obj field containing information about the allocation: its size, context, and ssa.Value. Objects include: Many objects have no Go types. For example, the func, map and chan type kinds in Go are all varieties of pointers, but their respective objects are actual functions (executable code), maps (hash tables), and channels (synchronized queues). Given the way we model interfaces, they too are pointers to "tagged" objects with no Go type. And an *ssa.Global denotes the address of a global variable, but the object for a Global is the actual data. So, the type of an ssa.Value that creates an object is "off by one indirection": a pointer to the object. The individual nodes of an object are sometimes referred to as "labels". For uniformity, all objects have a non-zero number of fields, even those of the empty type struct{}. (All arrays are treated as if of length 1, so there are no empty arrays. The empty tuple is never address-taken, so is never an object.) A tagged object has the following layout: The T node's typ field is the dynamic type of the "payload": the value v which follows, flattened out. The T node's obj has the otTagged flag. Tagged objects are needed when generalizing across types: interfaces, reflect.Values, reflect.Types. Each of these three types is modelled as a pointer that exclusively points to tagged objects. Tagged objects may be indirect (obj.flags ⊇ {otIndirect}), meaning that the value v is not of type T but *T; this is used only for reflect.Values that represent lvalues. (These are not implemented yet.) Variables of the following "scalar" types may be represented by a single node: basic types, pointers, channels, maps, slices, 'func' pointers, interfaces. Pointers: Nothing to say here, oddly. Basic types (bool, string, numbers, unsafe.Pointer): Currently all fields in the flattening of a type, including non-pointer basic types such as int, are represented in objects and values. Though non-pointer nodes within values are uninteresting, non-pointer nodes in objects may be useful (if address-taken) because they permit the analysis to deduce, in an example such as p := &s.x for a struct variable s, that p points to s.x. If we ignored such object fields, we could only say that p points somewhere within s. All other basic types are ignored. Expressions of these types have zero nodeid, and fields of these types within other aggregate types are omitted. unsafe.Pointers are not modelled as pointers, so a conversion of an unsafe.Pointer to *T is (unsoundly) treated as equivalent to new(T). Channels: An expression of type 'chan T' is a kind of pointer that points exclusively to channel objects, i.e. objects created by MakeChan (or reflection). 'chan T' is treated like *T. *ssa.MakeChan is treated as equivalent to new(T). *ssa.Send and receive (*ssa.UnOp(ARROW)) are equivalent to store and load, respectively. Maps: An expression of type 'map[K]V' is a kind of pointer that points exclusively to map objects, i.e. objects created by MakeMap (or reflection). map[K]V is treated like *M where M = struct{k K; v V}. *ssa.MakeMap is equivalent to new(M). *ssa.MapUpdate is equivalent to *y=x where *y and x have type M. *ssa.Lookup is equivalent to y=x.v where x has type *M. Slices: A slice []T, which dynamically resembles a struct{array *T, len, cap int}, is treated as if it were just a *T pointer; the len and cap fields are ignored. *ssa.MakeSlice is treated like new([1]T): an allocation of a singleton array. *ssa.Index on a slice is equivalent to a load.
*ssa.IndexAddr on a slice returns the address of the sole element of the slice, i.e. the same address. *ssa.Slice is treated as a simple copy. Functions: An expression of type 'func...' is a kind of pointer that points exclusively to function objects. A function object has the following layout: There may be multiple function objects for the same *ssa.Function due to context-sensitive treatment of some functions. The first node is the function's identity node. Associated with every callsite is a special "targets" variable, whose pts() contains the identity node of each function to which the call may dispatch. Identity words are not otherwise used during the analysis, but we construct the call graph from the pts() solution for such nodes. The following block of contiguous nodes represents the flattened-out types of the parameters ("P-block") and results ("R-block") of the function object. The treatment of free variables of closures (*ssa.FreeVar) is like that of global variables; it is not context-sensitive. *ssa.MakeClosure instructions create copy edges to Captures. A Go value of type 'func' (i.e. a pointer to one or more functions) is a pointer whose pts() contains function objects. The valueNode() for an *ssa.Function returns a singleton for that function. Interfaces: An expression of type 'interface{...}' is a kind of pointer that points exclusively to tagged objects. All tagged objects pointed to by an interface are direct (the otIndirect flag is clear) and concrete (the tag type T is not itself an interface type). The associated ssa.Value for an interface's tagged objects may be an *ssa.MakeInterface instruction, or nil if the tagged object was created by an intrinsic (e.g. reflection). Constructing an interface value causes generation of constraints for all of the concrete type's methods; we can't tell a priori which ones may be called. TypeAssert y = x.(T) is implemented by a dynamic constraint triggered by each tagged object O added to pts(x): a typeFilter constraint if T is an interface type, or an untag constraint if T is a concrete type. A typeFilter tests whether O.typ implements T; if so, O is added to pts(y). An untagFilter tests whether O.typ is assignable to T, and if so, a copy edge O.v -> y is added. ChangeInterface is a simple copy because the representation of tagged objects is independent of the interface type (in contrast to the "method tables" approach used by the gc runtime). y := Invoke x.m(...) is implemented by allocating contiguous P/R blocks for the callsite and adding a dynamic rule triggered by each tagged object added to pts(x). The rule adds param/results copy edges to/from each discovered concrete method. (Q. Why do we model an interface as a pointer to a pair of type and value, rather than as a pair of a pointer to type and a pointer to value? A. Control-flow joins would merge interfaces ({T1}, {V1}) and ({T2}, {V2}) to make ({T1,T2}, {V1,V2}), leading to the infeasible and type-unsafe combination (T1,V2). Treating the value and its concrete type as inseparable makes the analysis type-safe.) Type parameters: Type parameters are not directly supported by the analysis. Calls to generic functions will be left as if they had empty bodies. Users of the package are expected to use the ssa.InstantiateGenerics builder mode when building code that uses or depends on code containing generics. reflect.Value: A reflect.Value is modelled very similarly to an interface{}, i.e. as a pointer exclusively to tagged objects, but with two generalizations. 1.
a reflect.Value that represents an lvalue points to an indirect (obj.flags ⊇ {otIndirect}) tagged object, which has a similar layout to a tagged object except that the value is a pointer to the dynamic type. Indirect tagged objects preserve the correct aliasing so that mutations made by (reflect.Value).Set can be observed. Indirect objects only arise when an lvalue is derived from an rvalue by indirection, e.g. v := reflect.ValueOf(&s).Elem(). Whether indirect or not, the concrete type of the tagged object corresponds to the user-visible dynamic type, and the existence of a pointer is an implementation detail. (NB: indirect tagged objects are not yet implemented) 2. The dynamic type tag of a tagged object pointed to by a reflect.Value may be an interface type; it need not be concrete. This arises in code such as eface := reflect.Zero(reflect.TypeOf(new(interface{})).Elem()): pts(eface) is a singleton containing an interface{}-tagged object. That tagged object's payload is an interface{} value, i.e. the pts of the payload contains only concrete-tagged objects, although in this example it's the zero interface{} value, so its pts is empty. reflect.Type: Just as in the real "reflect" library, we represent a reflect.Type as an interface whose sole implementation is the concrete type, *reflect.rtype. (This choice is forced on us by go/types: clients cannot fabricate types with arbitrary method sets.) rtype instances are canonical: there is at most one per dynamic type. (rtypes are in fact large structs but since identity is all that matters, we represent them by a single node.) The payload of each *rtype-tagged object is an *rtype pointer that points to exactly one such canonical rtype object. We exploit this by setting the node.typ of the payload to the dynamic type, not '*rtype'. This saves us an indirection in each resolution rule. As an optimisation, *rtype-tagged objects are canonicalized too. Aggregate types: Aggregate types are treated as if all directly contained aggregates are recursively flattened out. Structs: The nodes of a struct consist of a special 'identity' node (whose type is that of the struct itself), followed by the nodes for all the struct's fields, recursively flattened out. A pointer to the struct is a pointer to its identity node. That node allows us to distinguish a pointer to a struct from a pointer to its first field. Field offsets are logical field offsets (plus one for the identity node), so the sizes of the fields can be ignored by the analysis. (The identity node is non-traditional but enables the distinction described above, which is valuable for code comprehension tools. Typical pointer analyses for C, whose purpose is compiler optimization, must soundly model unsafe.Pointer (void*) conversions, and this requires fidelity to the actual memory layout using physical field offsets.) *ssa.Field y = x.f creates a simple edge to y from x's node at f's offset. *ssa.FieldAddr y = &x->f requires a dynamic closure rule to create simple edges for each struct discovered in pts(x). Arrays: We model an array by an identity node (whose type is that of the array itself) followed by a node representing all the elements of the array; the analysis does not distinguish elements with different indices. Effectively, an array is treated like struct{elem T}, a load y=x[i] like y=x.elem, and a store x[i]=y like x.elem=y; the index i is ignored. A pointer to an array is a pointer to its identity node. (A slice is also a pointer to an array's identity node.)
The identity node allows us to distinguish a pointer to an array from a pointer to one of its elements, but it is rather costly because it introduces more offset constraints into the system. Furthermore, sound treatment of unsafe.Pointer would require us to dispense with this node. Arrays may be allocated by Alloc, by make([]T), by calls to append, and via reflection. Tuples (T, ...): Tuples are treated like structs with naturally numbered fields. *ssa.Extract is analogous to *ssa.Field. However, tuples have no identity field since, by construction, they cannot be address-taken. There are three kinds of function call: (1) static "call"-mode calls of functions, (2) dynamic "call"-mode calls of functions, and (3) dynamic "invoke"-mode calls of interface methods. Cases 1 and 2 apply equally to methods and standalone functions. Static calls: A static call consists of three steps: evaluate the arguments, call the function, and retrieve the results. A static function call is little more than two struct value copies between the P/R blocks of caller and callee. Context sensitivity: Static calls (alone) may be treated context sensitively, i.e. each callsite may cause a distinct re-analysis of the callee, improving precision. Our current context-sensitivity policy treats all intrinsics and getter/setter methods in this manner since such functions are small and seem like an obvious source of spurious confluences, though this has not yet been evaluated. Dynamic function calls: Dynamic calls work in a similar manner except that the creation of copy edges occurs dynamically, in a similar fashion to a pair of struct copies in which the callee is indirect. (Recall that the function object's P- and R-blocks are contiguous.) Interface method invocation: For invoke-mode calls, we create a params/results block for the callsite and attach a dynamic closure rule to the interface. For each new tagged object that flows to the interface, we look up the concrete method, find its function object, and connect its P/R blocks to the callsite's P/R blocks, adding copy edges to the graph during solving. Recording call targets: The analysis notifies its clients of each callsite it encounters, passing a CallSite interface. Among other things, the CallSite contains a synthetic constraint variable ("targets") whose points-to solution includes the set of all function objects to which the call may dispatch. It is via this mechanism that the callgraph is made available. Clients may also elect to be notified of callgraph edges directly; internally this just iterates all "targets" variables' pts(·)s. We implement Hash-Value Numbering (HVN), a pre-solver constraint optimization described in Hardekopf & Lin, SAS'07. This is documented in more detail in hvn.go. We intend to add its cousins HR and HU in future. The solver is currently a naive Andersen-style implementation; it does not perform online cycle detection, though we plan to add solver optimisations such as Hybrid- and Lazy-Cycle Detection from (Hardekopf & Lin, PLDI'07). It uses difference propagation (Pearce et al, SQC'04) to avoid redundant re-triggering of closure rules for values already seen. Points-to sets are represented using sparse bit vectors (similar to those used in LLVM and gcc), which are more space- and time-efficient than sets based on Go's built-in map type or dense bit vectors. Nodes are permuted prior to solving so that object nodes (which may appear in points-to sets) are lower numbered than non-object (var) nodes. This improves the density of the set over which the PTSs range, and thus the efficiency of the representation. Partly thanks to avoiding map iteration, the execution of the solver is 100% deterministic, a great help during debugging. Andersen, L. O. 1994.
Program analysis and specialization for the C programming language. Ph.D. dissertation. DIKU, University of Copenhagen.

David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Efficient field-sensitive pointer analysis for C. In Proceedings of the 5th ACM SIGPLAN-SIGSOFT workshop on Program analysis for software tools and engineering (PASTE '04). ACM, New York, NY, USA, 37-42. http://doi.acm.org/10.1145/996821.996835

David J. Pearce, Paul H. J. Kelly, and Chris Hankin. 2004. Online Cycle Detection and Difference Propagation: Applications to Pointer Analysis. Software Quality Control 12, 4 (December 2004), 311-337. http://dx.doi.org/10.1023/B:SQJO.0000039791.93071.a2

David Grove and Craig Chambers. 2001. A framework for call graph construction algorithms. ACM Trans. Program. Lang. Syst. 23, 6 (November 2001), 685-746. http://doi.acm.org/10.1145/506315.506316

Ben Hardekopf and Calvin Lin. 2007. The ant and the grasshopper: fast and accurate pointer analysis for millions of lines of code. In Proceedings of the 2007 ACM SIGPLAN conference on Programming language design and implementation (PLDI '07). ACM, New York, NY, USA, 290-299. http://doi.acm.org/10.1145/1250734.1250767

Ben Hardekopf and Calvin Lin. 2007. Exploiting pointer and location equivalence to optimize pointer analysis. In Proceedings of the 14th international conference on Static Analysis (SAS'07), Hanne Riis Nielson and Gilberto Filé (Eds.). Springer-Verlag, Berlin, Heidelberg, 265-280.

Atanas Rountev and Satish Chandra. 2000. Off-line variable substitution for scaling points-to analysis. In Proceedings of the ACM SIGPLAN 2000 conference on Programming language design and implementation (PLDI '00). ACM, New York, NY, USA, 47-56. http://doi.acm.org/10.1145/349299.349310

This program demonstrates how to use the pointer analysis to obtain a conservative call-graph of a Go program. It also shows how to compute the points-to set of a variable, in this case, (C).f's ch parameter.
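To make that client-facing workflow concrete, here is a minimal, hedged sketch of driving the analysis. It is not the package's own example_test.go: the package pattern and the decision of which values to query are assumptions made for illustration.

    package main

    import (
        "fmt"
        "log"

        "golang.org/x/tools/go/packages"
        "golang.org/x/tools/go/pointer"
        "golang.org/x/tools/go/ssa"
        "golang.org/x/tools/go/ssa/ssautil"
    )

    func main() {
        // Load and build SSA for the whole program (a whole-program analysis
        // needs the complete SSA-form IR). The "./..." pattern is an assumption.
        cfg := &packages.Config{Mode: packages.LoadAllSyntax}
        pkgs, err := packages.Load(cfg, "./...")
        if err != nil {
            log.Fatal(err)
        }
        prog, _ := ssautil.AllPackages(pkgs, ssa.InstantiateGenerics)
        prog.Build()

        conf := &pointer.Config{
            Mains:          ssautil.MainPackages(prog.AllPackages()),
            BuildCallGraph: true, // only request what you need; extra queries cost time
        }

        // To obtain points-to sets, call conf.AddQuery(v) for each pointer-like
        // ssa.Value of interest; finding such a value is program-specific and
        // omitted here.

        result, err := pointer.Analyze(conf)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("callgraph nodes:", len(result.CallGraph.Nodes))
    }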
Package rtcp implements encoding and decoding of RTCP packets according to RFCs 3550 and 5506. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure are defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec. Decoding and encoding RTCP packets:
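A hedged sketch of round-tripping packets with this API follows; the import path (github.com/pion/rtcp) and the field values are assumptions for illustration.

    package main

    import (
        "fmt"
        "log"

        "github.com/pion/rtcp"
    )

    func main() {
        // Encoding: marshal one or more packets into a single compound payload.
        pkts := []rtcp.Packet{
            &rtcp.ReceiverReport{SSRC: 0x902f9e2e},
            &rtcp.Goodbye{Sources: []uint32{0x902f9e2e}},
        }
        raw, err := rtcp.Marshal(pkts)
        if err != nil {
            log.Fatal(err)
        }

        // Decoding: unmarshal a compound payload back into individual packets.
        decoded, err := rtcp.Unmarshal(raw)
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range decoded {
            switch p := p.(type) {
            case *rtcp.ReceiverReport:
                fmt.Println("receiver report for SSRC", p.SSRC)
            case *rtcp.Goodbye:
                fmt.Println("goodbye from", p.Sources)
            default:
                fmt.Printf("other packet: %T\n", p)
            }
        }
    }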
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle Http requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...) and restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request/response pair to the next filter or RouteFunction These are processed before any registered WebService. These are processed before any Route of a WebService. These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go By installing a pre-defined container filter, your Webservice(s) can respond to the OPTIONS Http request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available; in that case use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. The request has a valid URL but the method (GET,PUT,POST,...) is not allowed. The request does not have or has an unknown Accept Header set for this operation. The request does not have or has an unknown Content-Type Header set for this operation. In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is even more improved when using a preloaded cache. You can also inject your own implementation. This package has the means to produce detailed logging of the complete Http request matching process and filter invocation. Enabling this feature requires you to set an implementation of restful.StdLogger (e.g. a log.Logger instance) such as: The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple. (c) 2012-2015, http://ernestmicklei.com. MIT License
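A minimal, hedged sketch of the WebService/Route wiring described above; the path, entity type, and port are illustrative assumptions rather than anything mandated by the package.

    package main

    import (
        "log"
        "net/http"

        "github.com/emicklei/go-restful"
    )

    type User struct {
        ID string `json:"id"`
    }

    // findUser reads the path parameter and writes an entity back on the response.
    func findUser(req *restful.Request, resp *restful.Response) {
        id := req.PathParameter("user-id")
        _ = resp.WriteEntity(User{ID: id})
    }

    func main() {
        ws := new(restful.WebService)
        ws.Path("/users").
            Consumes(restful.MIME_JSON).
            Produces(restful.MIME_JSON)

        // GET /users/{user-id} is dispatched to findUser.
        ws.Route(ws.GET("/{user-id}").To(findUser))

        restful.Add(ws) // register with the Default Container (http.DefaultServeMux)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }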
Utilities missing from the "reflect" package: easier struct traversal, deep dereferencing, various boolean tests such as a generic `IsNil`, and more. Small and dependency-free.

  • Functions that take a `reflect.Value` have "Rval" in the name.
  • Functions that take a `reflect.Type` have "Rtype" in the name.

In general, utils for struct traversal can be implemented as functions that take closures, or as stateful iterator objects. Both approaches were tested and demonstrated similar performance characteristics. The iterator-based approach can be more convenient to use because it doesn't interrupt the control flow of the caller code. However, the closure-based approach is much simpler to implement and get right. It's also much simpler to wrap, making it easier for users of this package to implement their own iterator functions in terms of the existing ones.
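To illustrate what "closure-based traversal" means in practice, here is a standard-library-only sketch; the function name and signature are made up for illustration and are not this package's API.

    package main

    import (
        "fmt"
        "reflect"
    )

    // walkFields calls fn for every non-struct field reachable from v,
    // recursing into nested struct fields; a pointer passed at the top
    // level is dereferenced first.
    func walkFields(v reflect.Value, fn func(path string, v reflect.Value)) {
        for v.Kind() == reflect.Pointer && !v.IsNil() {
            v = v.Elem()
        }
        if v.Kind() != reflect.Struct {
            return
        }
        t := v.Type()
        for i := 0; i < t.NumField(); i++ {
            f := v.Field(i)
            name := t.Field(i).Name
            if f.Kind() == reflect.Struct {
                walkFields(f, func(path string, fv reflect.Value) { fn(name+"."+path, fv) })
                continue
            }
            fn(name, f)
        }
    }

    func main() {
        type Inner struct{ N int }
        type Outer struct {
            S     string
            Inner Inner
        }
        walkFields(reflect.ValueOf(Outer{S: "hi", Inner: Inner{N: 42}}), func(path string, v reflect.Value) {
            fmt.Println(path, "=", v.Interface())
        })
    }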
Goserial is a simple Go package to allow you to read and write from the serial port as a stream of bytes. It aims to have the same API on all platforms, including Windows. As an added bonus, the Windows package does not use cgo, so you can cross-compile for Windows from another platform. Unfortunately goinstall does not currently let you cross-compile, so you will have to do it manually: Currently there is very little in the way of configurability. You can set the baud rate. Then you can Read(), Write(), or Close() the connection. Read() will block until at least one byte is returned. Write is the same. There is currently no exposed way to set the timeouts, though patches are welcome. Currently all ports are opened with 8 data bits, 1 stop bit, no parity, no hardware flow control, and no software flow control. This works fine for many real devices and many faux serial devices including usb-to-serial converters and bluetooth serial ports. You may Read() and Write() simultaneously on the same connection (from different goroutines). Example usage:
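A hedged sketch of the usage described above; the import path (github.com/tarm/serial) and the port name are assumptions, so adjust them for your environment.

    package main

    import (
        "log"

        "github.com/tarm/serial"
    )

    func main() {
        // Only the baud rate is configurable; everything else uses 8N1 defaults.
        c := &serial.Config{Name: "/dev/ttyUSB0", Baud: 115200}
        s, err := serial.OpenPort(c)
        if err != nil {
            log.Fatal(err)
        }
        defer s.Close()

        if _, err := s.Write([]byte("test")); err != nil {
            log.Fatal(err)
        }

        buf := make([]byte, 128)
        n, err := s.Read(buf) // blocks until at least one byte arrives
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("read %d bytes: %q", n, buf[:n])
    }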
Package rtcp implements encoding and decoding of RTCP packets according to RFC 3550. RTCP is a sister protocol of the Real-time Transport Protocol (RTP). Its basic functionality and packet structure are defined in RFC 3550. RTCP provides out-of-band statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data, but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics information such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow, or using a different codec. Decoding RTCP packets: Encoding RTCP packets:
Package floc allows you to orchestrate goroutines with ease. The goal of the project is to make the process of running goroutines in parallel and synchronizing them easy. Floc follows four objectives:

  -- Easy to use functional interface.
  -- Better control over execution with one entry point and one exit point.
  -- Simple parallelism and synchronization of jobs.
  -- As little overhead as possible, in comparison to direct use of goroutines and sync primitives.

The package categorizes middleware used for architecting flows into subpackages:

  -- `guard` contains middleware which helps protect the flow from falling into panics or unpredicted behavior.
  -- `pred` contains some basic predicates for AND, OR, NOT logic.
  -- `run` provides middleware for designing the flow, i.e. for running jobs sequentially, in parallel, in the background and so on.

Here is a quick example of what the package is capable of.
Goserial is a simple Go package to allow you to read and write from the serial port as a stream of bytes. It aims to have the same API on all platforms, including Windows. As an added bonus, the Windows package does not use cgo, so you can cross-compile for Windows from another platform. Unfortunately goinstall does not currently let you cross-compile, so you will have to do it manually: Currently there is very little in the way of configurability. You can set the baud rate. Then you can Read(), Write(), or Close() the connection. Read() will block until at least one byte is returned. Write is the same. There is currently no exposed way to set the timeouts, though patches are welcome. Currently ports are opened with 8 data bits, 1 stop bit, no parity, no hardware flow control, and no software flow control by default. This works fine for many real devices and many faux serial devices including usb-to-serial converters and bluetooth serial ports. You may Read() and Write() simultaneously on the same connection (from different goroutines). Example usage:
The package defines a single interface and a few implementations of the interface. The externalization of the loop flow control makes it easy to test the internal functions of background goroutines by, for instance, only running the loop once while under test. The design is that any errors which need to be returned from the loop will be passed back on a channel whose implementation is left up to the individual Looper. Callers can wait for execution to complete and receive any resulting errors by calling the Wait() method on the Looper. In this example, we are going to run a FreeLooper with 5 iterations. In the course of running, an error is generated, which the parent function captures and outputs. As a result of the error, only 3 of the 5 iterations are completed, and the output reflects this.
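A hedged sketch of that example follows, assuming a go-director-style API (NewFreeLooper, Loop, Wait); the import path and the iteration logic are assumptions, not part of this documentation.

    package main

    import (
        "errors"
        "fmt"

        director "github.com/relistan/go-director"
    )

    func main() {
        // A FreeLooper that will run at most 5 iterations; errors come back
        // on the supplied channel and are read by Wait().
        looper := director.NewFreeLooper(5, make(chan error))

        runs := 0
        go looper.Loop(func() error {
            runs++
            fmt.Println("iteration", runs)
            if runs == 3 {
                // Returning an error stops the loop early.
                return errors.New("something went wrong")
            }
            return nil
        })

        if err := looper.Wait(); err != nil {
            fmt.Println("loop ended with error:", err)
        }
    }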
Extensible Go library for creating fast, SSR-first frontends while avoiding vanilla templating downsides. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. This library tries to simplify the overall setup and process. Let's go straight into a simple example. Then we will dig into the details of how it works, step by step. Kyoto provides a set of simple net/http handlers, handler builders and function wrappers to provide serving, page rendering, component actions, etc. That said, this is not the ultimate solution for every case. If you ever need to wrap or extend the existing functionality, the library encourages this. See the functions inside the nethttp.go file for details and advanced usage. Example: Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto each component is a context receiver, which returns its state. Each component becomes a part of the page or top-level component, which executes the component asynchronously and gets a state future object. In that way your components execute in a non-blocking way. Pages are just top-level components, where you can configure rendering and page-related stuff. Example: As an option, you can wrap a component with another function to accept additional parameters from the top-level page/component. Example: Kyoto provides a context, which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example: Kyoto provides a set of parameters and functions to provide a comfortable template building process. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example: Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple. The client calls an action (sends a request to the server). The action executes on the server side, and the server sends updated component markup to the client, which will be morphed into the DOM. That's it. To use actions, you need to go through a few steps. You'll need to include a client in the page (JS functions for communication) and register an actions handler for the needed component. Let's start by including the client. Then, let's register an actions handler for the needed component. That's all! Now we are ready to use actions to provide a dynamic UI. Example: In this example you can see the modifications made to the quick start example. First, we've added a state and a name into our component's markup. In this way we save our component's state between actions and can find the component root. Unfortunately, we have to provide a component name manually for now; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the Action function provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside our component. This handler will be executed when a client calls an action with the corresponding name. It's highly recommended to keep a component's state as small as possible, because it will be transmitted on each action call. Kyoto has multiple ways to trigger actions. Now we will check them one by one. This is the simplest way to trigger an action. It's just a function call with a referer (usually 'this', e.g. a
button) as the first argument (used to determine the root), the action name as the second argument, and the action arguments as the rest. Arguments must be JSON serializable. It's possible to trigger an action of another component. If you want to call an action of the parent component, use the $ prefix in the action name. If you want to call an action of a component by id, use <id:action> as the action name. This is a specific action which is triggered when a form is submitted. It is usually called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger. This is a special HTML attribute which will trigger an action on page load. This may be useful for lazy loading of components. With these special HTML attributes you can trigger an action at an interval. This is useful for components that must be updated over time (e.g. charts, stats, etc.). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes. This attribute allows you to trigger an action when an element is visible on the screen. It may be useful for lazy loading. Kyoto provides a way to control the action flow. For now, it's possible to control the display style during an action call and to push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes during an action call. You can use this HTML attribute to control the display during an action call. At the end of the action the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders. You can push multiple component UI updates during a single action call. Just call kyoto.ActionFlush(ctx, state) to initiate an update. Kyoto provides a way to control action rendering. There are at least 2 rendering options after an action call: morph (default) and replace. Morph will try to morph the received markup into the current one with the morphdom library. In case of an error, or in explicit "replace" mode, the markup will be replaced with x.outerHTML = '...'.
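As a rough illustration of the component model described above, a component is just a context receiver that returns its state. This is a hedged sketch: the import path, component name, and state fields are assumptions, and handler/page registration is left to the library's own examples.

    package components

    import "github.com/kyoto-framework/kyoto/v2"

    // CUUIDState is a hypothetical component state; keep it small, because it
    // is transmitted on every action call.
    type CUUIDState struct {
        UUID string
    }

    // CUUID is the component. A page (a top-level component) executes it
    // asynchronously and receives a future for this state.
    func CUUID(ctx *kyoto.Context) (state CUUIDState) {
        // Fetch or compute data here using ctx, which carries common objects
        // such as the http.ResponseWriter and *http.Request.
        state.UUID = "0000-0000-0000-0000"
        return state
    }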
Package hunch provides functions like `All`, `First`, `Retry`, `Waterfall`, etc., that make asynchronous flow control more intuitive.
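A hedged sketch of running two tasks concurrently with All; the import path (github.com/aaronjan/hunch) is an assumption, and the task bodies are illustrative.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/aaronjan/hunch"
    )

    func main() {
        ctx := context.Background()

        // All runs the executables concurrently and returns their results,
        // or the first error encountered.
        results, err := hunch.All(
            ctx,
            func(ctx context.Context) (interface{}, error) {
                time.Sleep(10 * time.Millisecond)
                return "first result", nil
            },
            func(ctx context.Context) (interface{}, error) {
                return 42, nil
            },
        )
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(results)
    }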
Extensible Go library for creating fast, SSR-first frontends while avoiding vanilla templating downsides. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. The library tries to simplify this process. Let's go straight into a simple example. Then we will dig into the details of how it works, step by step. Kyoto provides simple net/http handlers and function wrappers to handle page rendering and serving. See the functions inside the nethttp.go file for details and advanced usage. Example: Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto each component is a context receiver, which returns its state. Each component becomes a part of the page or top-level component, which executes the component asynchronously and gets a state future object. In that way your components execute in a non-blocking way. Pages are just top-level components, where you can configure rendering and page-related stuff. Example: As an option, you can wrap a component with another function to accept additional parameters from the top-level page/component. Example: Kyoto provides a context, which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example: Kyoto provides a set of parameters and functions to provide a comfortable template building process. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example: Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple. The client calls an action (sends a request to the server). The action executes on the server side, and the server sends updated component markup to the client, which will be morphed into the DOM. That's it. To use actions, you need to go through a few steps. You'll need to include a client in the page (JS functions for communication) and register an actions handler for the needed component. Let's start by including the client. Then, let's register an actions handler for the needed component. That's all! Now we are ready to use actions to provide a dynamic UI. Example: In this example you can see the modifications made to the quick start example. First, we've added a state and a name into our component's markup. In this way we save our component's state between actions and can find the component root. Unfortunately, we have to provide a component name manually for now; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the Action function provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside our component. This handler will be executed when a client calls an action with the corresponding name. It's highly recommended to keep a component's state as small as possible, because it will be transmitted on each action call. Kyoto has multiple ways to trigger actions. Now we will check them one by one. This is the simplest way to trigger an action. It's just a function call with a referer (usually 'this', e.g. a button) as the first argument (used to determine the root), the action name as the second argument, and the action arguments as the rest. Arguments must be JSON serializable. It's possible to trigger an action of another component. If you want to call an action of the parent component, use the $ prefix in the action name.
If you want to call an action of a component by id, use <id:action> as the action name. This is a specific action which is triggered when a form is submitted. It is usually called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger. This is a special HTML attribute which will trigger an action on page load. This may be useful for lazy loading of components. With these special HTML attributes you can trigger an action at an interval. This is useful for components that must be updated over time (e.g. charts, stats, etc.). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes. This attribute allows you to trigger an action when an element is visible on the screen. It may be useful for lazy loading. Kyoto provides a way to control the action flow. For now, it's possible to control the display style during an action call and to push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes during an action call. You can use this HTML attribute to control the display during an action call. At the end of the action the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders. You can push multiple component UI updates during a single action call. Just call kyoto.ActionFlush(ctx, state) to initiate an update. Kyoto provides a way to control action rendering. There are at least 2 rendering options after an action call: morph (default) and replace. Morph will try to morph the received markup into the current one with the morphdom library. In case of an error, or in explicit "replace" mode, the markup will be replaced with x.outerHTML = '...'.
Package restful, a lean package for creating REST-style WebServices without magic.

### WebServices and Routes

A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle Http requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/v3/examples/user-resource/restful-user-resource.go with a full implementation.

### Regular expression matching Routes

A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter.

### Containers

A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...) and restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container.

### Filters

A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request/response pair to the next filter or RouteFunction

### Container Filters

These are processed before any registered WebService.

### WebService Filters

These are processed before any Route of a WebService.

### Route Filters

These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/filters/restful-filters.go with full implementations.

### Response Encoding

Two encodings are supported: gzip and deflate. To enable this for all responses: If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/v3/examples/encoding/restful-encoding-filter.go

### OPTIONS support

By installing a pre-defined container filter, your Webservice(s) can respond to the OPTIONS Http request.

### CORS

By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests.

### Error Handling

Unexpected things happen.
If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest. Despite a valid URI, the resource requested may not be available; in that case use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. The request has a valid URL but the method (GET,PUT,POST,...) is not allowed. The request does not have or has an unknown Accept Header set for this operation. The request does not have or has an unknown Content-Type Header set for this operation.

### ServiceError

In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response.

### Performance options

This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is even more improved when using a preloaded cache. You can also inject your own implementation.

### Trouble shooting

This package has the means to produce detailed logging of the complete Http request matching process and filter invocation. Enabling this feature requires you to set an implementation of restful.StdLogger (e.g. a log.Logger instance) such as:

### Logging

The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple.

### Resources

(c) 2012-2025, http://ernestmicklei.com. MIT License
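As an illustration of the Filters section above, here is a minimal, hedged sketch of a container-level FilterFunction; the logging detail and port are illustrative assumptions.

    package main

    import (
        "log"
        "net/http"

        restful "github.com/emicklei/go-restful/v3"
    )

    // globalLogging runs before any registered WebService and passes the
    // request/response pair on to the next filter or RouteFunction.
    func globalLogging(req *restful.Request, resp *restful.Response, chain *restful.FilterChain) {
        log.Printf("%s %s", req.Request.Method, req.Request.URL)
        chain.ProcessFilter(req, resp)
    }

    func main() {
        restful.Filter(globalLogging) // install as a Container Filter on the Default Container
        log.Fatal(http.ListenAndServe(":8080", nil))
    }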
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle Http requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...) and restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request,response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request,response pair to the next filter or RouteFunction These are processed before any registered WebService. These are processed before any Route of a WebService. These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go By installing a pre-defined container filter, your Webservice(s) can respond to the OPTIONS Http request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. The request has a valid URL but the method (GET,PUT,POST,...) is not allowed. The request does not have or has an unknown Accept Header set for this operation. The request does not have or has an unknown Content-Type Header set for this operation. In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. The default router is the RouterJSR311 which is an implementation of its spec (http://jsr311.java.net/nonav/releases/1.1/spec/spec.html). However, it uses regular expressions for all its routes which, depending on your use case, may consume a significant amount of time. The CurlyRouter implementation is more lightweight and also allows you to use wildcards and expressions, but only if needed. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to true, Route functions are responsible for handling any error situation. Default value is false; it will recover from panics. This has performance implications. SetCacheReadEntity controls whether the response data ([]byte) is cached such that ReadEntity is repeatable. If you expect to read large amounts of payload data, and you do not use this feature, you should set it to false. This package has the means to produce detailed logging of the complete Http request matching process and filter invocation. Enabling this feature requires you to set a log.Logger instance such as: (c) 2012-2014, http://ernestmicklei.com. MIT License
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle Http requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...) and restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request,response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request,response pair to the next filter or RouteFunction These are processed before any registered WebService. These are processed before any Route of a WebService. These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go By installing a pre-defined container filter, your Webservice(s) can respond to the OPTIONS Http request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. The request has a valid URL but the method (GET,PUT,POST,...) is not allowed. The request does not have or has an unknown Accept Header set for this operation. The request does not have or has an unknown Content-Type Header set for this operation. In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is even more improved when using a preloaded cache. You can also inject your own implementation. This package has the means to produce detailed logging of the complete Http request matching process and filter invocation. Enabling this feature requires you to set an implementation of restful.StdLogger (e.g. log.Logger) instance such as: The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple. (c) 2012-2015, http://ernestmicklei.com. MIT License
Goserial is a simple Go package to allow you to read and write from the serial port as a stream of bytes. It aims to have the same API on all platforms, including windows. As an added bonus, the windows package does not use cgo, so you can cross compile for windows from another platform. Unfortunately goinstall does not currently let you cross compile so you will have to do it manually: Currently there is very little in the way of configurability. You can set the baud rate. Then you can Read(), Write(), or Close() the connection. Read() will block until at least one byte is returned. Write is the same. There is currently no exposed way to set the timeouts, though patches are welcome. Currently all ports are opened with 8 data bits, 1 stop bit, no parity, no hardware flow control, and no software flow control. This works fine for many real devices and many faux serial devices including usb-to-serial converters and bluetooth serial ports. You may Read() and Write() simultaneously on the same connection (from different goroutines). Example usage:
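A short usage sketch. The import path (github.com/tarm/goserial, whose package name is serial), the device name, and the baud rate below are assumptions to adapt to your setup.

```go
package main

import (
	"log"

	serial "github.com/tarm/goserial" // import path assumed; the package is named "serial"
)

func main() {
	// Only the baud rate is configurable, as described above;
	// 8N1 with no flow control is implied.
	c := &serial.Config{Name: "/dev/ttyUSB0", Baud: 115200}
	s, err := serial.OpenPort(c)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	// Write a command, then block until at least one byte is read back.
	if _, err := s.Write([]byte("ping\n")); err != nil {
		log.Fatal(err)
	}
	buf := make([]byte, 128)
	n, err := s.Read(buf)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("read %d bytes: %q", n, buf[:n])
}
```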
Package lpcode provides a simple and fluent API for programmatic Go code generation. It consists of two main components: Code: A builder for constructing Go source code strings using method chaining. The Code type offers a wide range of methods for generating common Go syntax including function declarations, struct types, control flow statements, variable declarations, composite literals, and more. It supports formatting via go/format. Codefile: A utility for managing the lifecycle of code files. It handles creating new files, appending header/footer templates, writing Code content, and formatting the final output. It integrates with the tsfio package for file operations and tserr for consistent error handling. The package is designed primarily to support the tserrgen code generator but can be used for other code generation tasks. It does not perform syntax validation; the generated code should be validated by go/format or other tools. Copyright (c) 2023-2026 thorsphere. All Rights Reserved. Use is governed by the GNU Affero General Public License v3.0 that can be found in the LICENSE file.
Package cogito provides LLM-powered reasoning chains for Go. cogito implements a Thought-Note architecture for building autonomous systems that reason and adapt. The package is built around two core concepts: Use New or NewWithTrace to create thoughts: Cogito provides a comprehensive set of reasoning primitives: Decision & Analysis: Control Flow: Reflection: Session Management: Streaming: Synthesis: Cogito wraps pipz connectors for Thought processing: LLM access uses a resolution hierarchy: Use SetProvider to configure the global default: Cogito emits capitan signals throughout execution. See [signals.go] for the complete list of events including ThoughtCreated, StepStarted, StepCompleted, StepFailed, NoteAdded, and NotesPublished.
Package pipz provides a lightweight, type-safe library for building composable data processing pipelines in Go. pipz enables developers to create clean, testable, and maintainable data processing workflows by composing small, focused functions into larger pipelines. It addresses common challenges in Go applications such as scattered business logic, repetitive error handling, and difficult-to-test code that mixes pure logic with external dependencies. Requires Go 1.24+ for generic type constraints. The library is built around a single, uniform interface: Key components: Design philosophy: Everything implements Chainable[T], enabling seamless composition while maintaining type safety through Go generics. Context support provides timeout control and cancellation. Execution follows a fail-fast pattern where processing stops at the first error. Adapters wrap your functions to implement the Chainable interface: Transform - Pure transformations that cannot fail: Apply - Operations that can fail: Effect - Side effects without modifying data: Mutate - Conditional modifications: Enrich - Optional enhancements that log failures: Connectors compose multiple Chainables. Choose based on your needs: Sequential Processing: Parallel Processing (requires that T implements Cloner[T]): Error Handling: Control Flow: Resource Protection: Simple example - transform strings through a pipeline: For parallel processing with Concurrent or Race, types must implement Cloner[T]: pipz provides rich error information through the Error[T] type: Error handling example: pipz is designed for exceptional performance: See PERFORMANCE.md for detailed benchmarks. Use timeouts at the pipeline level, not at individual processors. Validation Pipeline: API with Retry and Timeout: Multi-path Processing: For more examples, see the examples directory.
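As an illustration of the adapter/connector pattern described above, here is a dependency-free sketch: a Chainable[T] interface, Transform/Apply-style adapters, and a fail-fast sequence connector. The local type and function definitions below are illustrative of the pattern only and are not the library's actual declarations.

```go
package main

import (
	"context"
	"fmt"
	"strings"
)

// Chainable is the uniform interface: every step processes a value and may fail.
type Chainable[T any] interface {
	Process(ctx context.Context, v T) (T, error)
}

type fn[T any] func(context.Context, T) (T, error)

func (f fn[T]) Process(ctx context.Context, v T) (T, error) { return f(ctx, v) }

// Transform wraps a pure function that cannot fail.
func Transform[T any](f func(context.Context, T) T) Chainable[T] {
	return fn[T](func(ctx context.Context, v T) (T, error) { return f(ctx, v), nil })
}

// Apply wraps an operation that can fail.
func Apply[T any](f func(context.Context, T) (T, error)) Chainable[T] { return fn[T](f) }

// Sequence composes steps and stops at the first error (fail-fast).
func Sequence[T any](steps ...Chainable[T]) Chainable[T] {
	return fn[T](func(ctx context.Context, v T) (T, error) {
		var err error
		for _, s := range steps {
			if v, err = s.Process(ctx, v); err != nil {
				return v, err
			}
		}
		return v, nil
	})
}

func main() {
	pipeline := Sequence[string](
		Transform(func(_ context.Context, s string) string { return strings.TrimSpace(s) }),
		Apply(func(_ context.Context, s string) (string, error) {
			if s == "" {
				return s, fmt.Errorf("empty input")
			}
			return strings.ToUpper(s), nil
		}),
	)
	out, err := pipeline.Process(context.Background(), "  hello pipz  ")
	fmt.Println(out, err) // "HELLO PIPZ" <nil>
}
```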
Package foxhound is a high-performance Go web scraping framework with native anti-detection built on Camoufox, a Firefox fork designed to evade bot fingerprinting. Foxhound is a scraping framework for Go — it handles the full lifecycle of web data extraction: fetching pages (with or without a real browser), navigating JavaScript-heavy sites, solving CAPTCHAs, rotating identities and proxies, extracting structured data, and exporting results. Think of it as Scrapy for Go, but with first-class browser automation and anti-detection built in from day one. Modern websites deploy increasingly sophisticated bot detection: TLS fingerprinting, JavaScript challenges (Cloudflare, DataDome, PerimeterX), canvas/WebGL fingerprint checks, and behavioral analysis. Traditional HTTP-only scrapers fail silently against these defenses. Headless Chrome is widely fingerprinted. Foxhound solves this by combining two fetching strategies behind a single API: The smart router starts with the fast static client and automatically escalates to the full browser when it detects blocks (403, 429, 503, CAPTCHA pages). This means you get HTTP-client speed on easy targets and browser-level evasion on hard ones, without changing your code. Foxhound is organized around five core concepts: Hunt is the top-level campaign orchestrator. It owns the queue, spawns Walker goroutines, collects stats, and coordinates shutdown. You configure a Hunt with seed URLs, a Processor (your extraction logic), middleware, pipelines, and writers. Trail is a fluent navigation path builder. It chains browser actions — Navigate, Click, Fill, Wait, Scroll, InfiniteScroll, Evaluate (custom JS), and CaptureXHR — into a reusable sequence that gets compiled into Jobs. Trails describe what a human would do on the page. Walker is a goroutine that acts as a virtual user. Each Walker pops Jobs from the queue, fetches pages through the middleware chain, runs your Processor, writes extracted Items through the pipeline, and enqueues discovered follow-up Jobs. A Hunt runs multiple Walkers concurrently. Job is the unit of work: a URL plus fetch mode, priority, browser steps, metadata, and optional session routing. Jobs flow through the queue and are consumed by Walkers. Session is a stateful client that wraps a fetcher, cookie jar, identity profile, and proxy into a reusable unit. Use it standalone for ad-hoc scraping, or register multiple Sessions with a Hunt via Hunt.AddSession to route different Jobs through different identities and proxies. Every request flows through a middleware chain before reaching the fetcher: The static fetcher (fetch.NewStealth) uses Go's HTTP client with precise header ordering and TLS fingerprints matched to the identity profile. The browser fetcher (fetch.NewCamoufox) launches a real Camoufox browser instance via the Juggler protocol (Firefox's native remote protocol, less targeted by anti-bot than CDP). The smart fetcher (fetch.NewSmart) wraps both and auto-escalates based on response signals and Bayesian domain risk scoring. Every request uses a complete, internally consistent identity profile: user agent, TLS fingerprint, header order, OS, hardware specs, screen dimensions, locale, timezone, and geolocation all match. Randomness without consistency is the number one cause of bot detection — a Windows UA with a macOS font list, or a US locale with a Tokyo timezone, triggers instant blocks. Foxhound ships 60 embedded device profiles. 
The identity package generates profiles with functional options: When using Camoufox, the identity is serialized to a JSON config that sets navigator properties, WebGL vendor/renderer, canvas noise, OS-specific fonts, screen dimensions, and timezone at the C++ level inside the browser — not via JavaScript injection that anti-bot scripts can detect. Foxhound models human behavior using statistical distributions observed from real user sessions: Three built-in profiles ("careful", "moderate", "aggressive") control the overall pacing. Configure via BehaviorConfig or Hunt options. The NopeCHA browser extension is automatically downloaded from GitHub and loaded into Camoufox on first launch. It solves reCAPTCHA, hCAPTCHA, and Cloudflare Turnstile challenges without API keys. The extension is cached at ~/.cache/foxhound/extensions/nopecha/ and updated automatically. The design philosophy: the goal is to never trigger a CAPTCHA. If one appears, earlier layers (identity, timing, proxy rotation) failed. NopeCHA is the safety net, not the primary strategy. Disable with extension_path: "none" in config or WithExtensionPath("none"). Foxhound provides 13 middleware layers that wrap the fetcher: Middleware is composable: each layer wraps a Fetcher and returns a Fetcher, so you can stack them in any order or add custom middleware. Websites frequently change their DOM structure — class names rotate, IDs are randomized, layouts shift. Foxhound's adaptive selector system survives these rewrites by building element signatures (tag, position, text patterns, ancestor structure) alongside CSS selectors. When a selector stops matching, the system falls back to similarity matching against saved signatures. Enable with Hunt.WithAdaptive and use via Response.Adaptive, Response.CSSAdaptive, Response.CSSAdaptiveAll, or Trail.Adaptive. Signatures can be stored in JSON files or SQLite. A Hunt is the standard way to scrape at scale. Define a Processor, configure middleware and writers, add seed URLs, and run: Trails describe multi-step browser interactions for JS-heavy pages. This example searches Google Maps and scrolls through results: Session is the lightweight alternative to Hunt for quick, stateful fetches. Cookies persist across calls, and the identity stays consistent: Response provides built-in CSS and XPath querying without importing the parse package: Response.Follow extracts links from the page and generates follow-up Jobs: Capture background API calls that JavaScript makes after page load. This is essential for SPAs where data loads via XHR/fetch, not in the initial HTML: The captured responses are available in Response.CapturedXHR as a slice of maps with keys: request_url, request_method, status, headers, body. For sites behind Cloudflare's JavaScript challenge, Foxhound can detect and wait for the challenge to complete: Route different jobs through different identities and proxies within a single Hunt: Cache responses on disk for zero-network iteration during development: The foxhound module is organized into focused sub-packages: [engine] — Hunt, Trail, Walker, scheduler, retry logic, stats collection, and ItemList for thread-safe item accumulation with CSV/JSON/JSONL export. [fetch] — Stealth HTTP client (TLS fingerprinting + header ordering), Camoufox browser automation (Juggler protocol), Smart router (auto-escalation), XHR capture, page pool management, domain risk scoring, and SOCKS5 auth relay. [identity] — Profile generation with 60 embedded device profiles. 
Produces consistent identity bundles (UA, TLS, headers, OS, hardware, screen, locale, geo) and Camoufox fingerprint configs. [behavior] — Human behavior simulation: timing (Weibull/Gamma distributions), mouse (Bezier curves), scroll patterns, keyboard input, navigation profiles, and session fatigue modeling. [middleware] — 13 composable middleware layers: rate limiting, dedup, retry, autothrottle, cookies, referer, redirect, depth, delta-fetch, circuit breaker, metrics, blocked detection, and robots.txt. [parse] — Content extraction: CSS (goquery), JSON (dot-path), XPath (subset), regex, structured schema, Markdown/text conversion, metadata (JSON-LD, OpenGraph, NextData, NuxtData), contact deobfuscation, sitemap/feed parsing, adaptive selectors, HTML table extraction, JS preload detection, directory listings, pagination detection, and auto-detection with Readability-style scoring. [pipeline] — Item processing stages: validation, cleaning, deduplication, field transformation (regex, rename, type coercion), and chain composition. pipeline/export — Output writers: JSON, JSONL, CSV, XML, SQLite, PostgreSQL, Markdown, Text, and Webhook. [proxy] — Proxy pool management with geo-aware selection, health checking, cooldown tracking, and provider adapters (BrightData, Oxylabs, Smartproxy). [queue] — Job queue implementations: in-memory (heap-based priority queue), Redis (sorted sets), and SQLite (persistent). [cache] — Response caching: in-memory (LRU + TTL), file-based (SHA256 keys), Redis, and SQLite. [captcha] — CAPTCHA detection (Cloudflare, reCAPTCHA, hCAPTCHA, GeeTest) and solving via NopeCHA, CapSolver, 2Captcha, and Turnstile. [monitor] — Observability: atomic stat counters, Prometheus metrics (isolated registry), and webhook-based alerting rules. cmd/foxhound — CLI tool: init, run, check, proxy-test, shell, browser-shell, resume, curl2fox, and preview commands.
Package subroutines provides a lifecycle engine for Kubernetes controllers built on multicluster-runtime. It offers explicit Result-based flow control, opt-in condition/status/finalizer management, and built-in observability.
Package jwt is a lightweight JWT/JWS/JWK library for JOSE, OIDC, and OAuth 2.1, designed from first principles for modern Go (1.26+) and current standards (OIDC Core 1.0 errata set 2, MCP). High convenience. Low boilerplate. Easy to customize. Focused: Rather than implementing to the spec article by article, this library implements by flow. This was created with AI assistance to be able to iterate quickly over different design choices, but every line of the code has been manually reviewed for correctness, as well as many of the tests. Convenience is not convenient if it gets in your way. This is a library, not a framework: it gives you composable pieces you call and control, not scaffolding you must conform to. Key takeaway: Your claims are your own. You can take what you get for free, or add what you need at no cost to you. As an Issuer, you're building the thing that has the Private Keys, signs the tokens, and also verifies tokens and validates claims. As a Relying Party, you're building a thing that uses Public Keys to verify and validate tokens. An MCP Host (the AI application) is a Relying Party to the MCP Server. The MCP Server may be an Issuer - minting tokens specifically for Agents to call your API - or it may be a Relying Party to your main auth system, forwarding tokens it received from an upstream Issuer. In either case the same building blocks apply: the Host verifies and validates tokens from the Server, and the Server either signs its own tokens (NewSigner) or verifies tokens from your auth provider (NewVerifier or keyfetch.KeyFetcher). For APIs that accept OAuth 2.1 access tokens (typ: "at+jwt", RFC 9068), use NewAccessTokenValidator with TokenClaims (which includes the client_id and scope fields): The keyfile package loads cryptographic keys from local files in JWK, PEM, or DER format. All functions auto-compute KID from the RFC 7638 thumbprint when not already set: For fetching keys from remote URLs, use keyfetch.FetchURL (JWKS endpoints) or keyfetch.FetchOIDC (OIDC discovery). You don't need to be a crypto expert to use this library - but if you are, hopefully you find it to be the best you've ever used.

1. YAGNI: Don't implement what you don't need = less surface area = greater security. The researchers who write specifications are notorious for imagining every hypothetical - which has resulted in numerous security flaws over the years. There's nothing in here that I haven't seen in the wild and found useful. And I'm happy to extend if needed.

2. Verify AND Validate. As an Issuer (owner) you Signer.Sign and then Encode. As a Relying Party (client) you Decode, Verifier.Verify and Validator.Validate. Why not a single step? Because Claims (sometimes called "User" in other libs) is the thing you actually care about, and actually want type safety for. After trying various approaches with embedding and generics, what I landed on is that the most ergonomic type-safe way to Verify a JWT and Validate Claims is to have the two be separate operations. It's why you get to use this library as a library and how you get to have all of the convenience without sacrificing control and customization of the thing you're most likely to want to be able to customize (and debug).

3. Algorithms: The fewer the merrier. Only asymmetric (public-key) algorithms are implemented. You should use Ed25519. It's the end-game algorithm - all upside, no known downsides, and it's supported ubiquitously - Go, JavaScript, Web Browsers, Node, Rust, etc. Ed25519 is the recommended algorithm.
ECDSA is provided for backwards compatibility with existing systems. RSA is provided only for backwards compatibility - it's larger, slower, with no real benefit. Supported algorithms are derived automatically from the key type - you never configure alg directly. The verification process selects a key by matching the "kid" (KeyID) of the token and the key, and then checks "alg" before any cryptographic operation is attempted. An alg/key-type mismatch is a hard error.
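As an illustration of the recommended Ed25519 approach, here is a stdlib-only sketch of the raw EdDSA JWS mechanics that the Signer/Verifier/Validator described above wrap: sign as the Issuer, verify as the Relying Party. The claims, issuer URL, and expiry are placeholders; this is not this package's API.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"strings"
)

func b64(b []byte) string { return base64.RawURLEncoding.EncodeToString(b) }

func main() {
	// Issuer side: hold the private key and sign header.claims.
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)

	header := `{"alg":"EdDSA","typ":"JWT"}`
	claims := `{"iss":"https://issuer.example","sub":"user-123","exp":1735689600}`
	signingInput := b64([]byte(header)) + "." + b64([]byte(claims))
	sig := ed25519.Sign(priv, []byte(signingInput))
	token := signingInput + "." + b64(sig)

	// Relying Party side: verify the signature with the public key, then
	// validate the decoded claims (issuer, expiry, audience, ...) separately.
	parts := strings.Split(token, ".")
	rawSig, _ := base64.RawURLEncoding.DecodeString(parts[2])
	ok := ed25519.Verify(pub, []byte(parts[0]+"."+parts[1]), rawSig)
	fmt.Println("signature valid:", ok)
}
```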
Package scoped provides structured concurrency primitives for Go. Structured concurrency ensures that concurrent tasks have well-defined lifecycles: they are spawned and joined within a clear scope, preventing goroutine leaks, orphaned tasks, and unpredictable control flow. The primary entry point is Run, which creates a scope, executes a function that spawns tasks via Spawner, and waits for all tasks to complete before returning: Use [Spawner.Go] for simple tasks and [Spawner.Spawn] when the task needs to spawn sub-tasks of its own. For manual lifecycle control, New returns a Scope and root Spawner separately. The caller must call Scope.Wait to finalize. Scope.WaitTimeout adds a deadline to finalization. Error policies control how the scope reacts to task failures: All task errors are wrapped in *TaskError for attribution. Use IsTaskError, TaskOf, CauseOf, and AllTaskErrors to inspect them. Convenience functions for common patterns: Use WithLimit to restrict the number of goroutines executing concurrently within a scope. Tasks beyond the limit wait for a slot, respecting context cancellation while waiting. For standalone use outside scopes, Semaphore provides a weighted semaphore with Semaphore.Acquire, Semaphore.TryAcquire, and Semaphore.Release. Pool provides a reusable fixed-size worker pool. Tasks are submitted via Pool.Submit (blocking) or Pool.TrySubmit (non-blocking) and processed by a fixed number of goroutines. Call Pool.Close to drain the queue and collect errors. By default, a panic in any task is captured with its full stack trace and re-raised in Scope.Wait. Use WithPanicAsError to convert panics to *PanicError values and return them as regular errors instead. This package intentionally panics for programmer misuse (invalid options, nil required callbacks, invalid lifecycle calls such as spawning after shutdown, or concurrent Stream.Next calls). These panics enforce API invariants and are considered contract violations, not recoverable runtime data errors. Register hooks for task lifecycle events: Pool exposes Pool.Stats returning a PoolStats snapshot, and WithPoolMetrics for periodic pool metrics callbacks. Stream tracks items read, errors, and timing automatically. Stream.Stats returns a StreamStats snapshot including throughput. Observe wraps a stream with a per-item StreamEvent callback. Stream provides a pull-based, composable data pipeline. Create streams with NewStream, FromSlice, FromSliceRef, FromChan, Empty, Repeat, or Generate. Chains of Stream.Filter, Stream.Take, Stream.Skip, Stream.Peek, Stream.TakeWhile, Stream.DropWhile, Map, Batch, FlatMap, Distinct, Scan, and Zip are evaluated lazily. Reduce folds a stream into a single value. ParallelMap processes items concurrently with optional ordering and backpressure via [StreamOptions.MaxPending]. Terminal methods (Stream.ToSlice, Stream.ForEach, Stream.Count) return partial results alongside any error, following io.Reader conventions. Stream.ToChanScope bridges a stream to a channel within a scope. Stream errors are aggregated via errors.Join. Streams are single-consumer; concurrent Stream.Next calls panic. Each task function receives a child Spawner that is valid only for the duration of the task. Storing the child Spawner and calling Spawn on it after the task returns will panic. This is by design: structured concurrency requires that all child tasks are scoped to their parent. 
The github.com/baxromumarov/scoped/chanx subpackage provides context-aware channel operations (Send, Recv, TrySend, TryRecv, SendTimeout, RecvTimeout, SendBatch, RecvBatch), fan-in/fan-out patterns (Merge, Tee, FanOut, Broadcast), transformation pipelines (Map, Filter), rate limiting (Throttle), batching (Buffer, BufferWithReason), timing (Debounce, Window), combining (Zip, First), and an idempotent-close channel wrapper (Closable).
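For orientation, the same scope/spawn/wait shape can be expressed with the standard golang.org/x/sync/errgroup package. This is a comparison sketch rather than the scoped API: Run and Spawner add error policies, panic capture, lifecycle hooks, and pools/streams on top of this basic pattern.

```go
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

func main() {
	// A group plays the role of a scope: tasks are spawned and joined within it.
	g, ctx := errgroup.WithContext(context.Background())
	g.SetLimit(4) // roughly analogous to WithLimit

	for i := 0; i < 10; i++ {
		i := i // capture loop variable (pre-Go 1.22 compatibility)
		g.Go(func() error {
			select {
			case <-ctx.Done():
				return ctx.Err() // respect cancellation while waiting for work
			default:
				fmt.Println("task", i)
				return nil
			}
		})
	}
	if err := g.Wait(); err != nil { // analogous to Scope.Wait
		fmt.Println("scope failed:", err)
	}
}
```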
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported:

account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string.

region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter.

--> Important note: for the database object and other objects (schema, role, etc), please always adhere to the rules for Snowflake Object Identifiers; especially https://docs.snowflake.com/en/sql-reference/identifiers-syntax#double-quoted-identifiers. As mentioned in the docs, if you have e.g. a database with mIxEDcAsE naming, which you needed to create by enclosing it in double quotes, you'll similarly need to reference it with double quotes when specifying it in the connection string / DSN. In practice, this means you'll need to escape the second pair of double quotes, which are part of the database name and not the string notation.

database: Specifies the database to use by default in the client session (can be changed after login).

schema: Specifies the database schema to use by default in the client session (can be changed after login).

warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login).

role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login).

passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login.

passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password.

loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success.
requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up after the timeout length if the HTTP response is success.

authenticator: Specifies the authenticator to use for authenticating user credentials. See the "Authenticator Values" section below for supported values.

singleAuthenticationPrompt: specifies whether only one authentication should be performed at the same time for authentications that need human interaction (like MFA or OAuth authorization code). By default it is true.

application: Identifies your application to Snowflake Support.

disableOCSPChecks: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. The OCSP module caches responses internally. If your application is long running, you can enable cache clearing by calling StartOCSPCacheClearer and disable it by calling StopOCSPCacheClearer. IMPORTANT: Change the default value for testing or emergency situations only.

token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator.

client_session_keep_alive: Set to true to have a heartbeat in the background every hour by default, or at the interval given by client_session_keep_alive_heartbeat_frequency if set, to keep the connection alive such that the connection session will never expire. Care should be taken in using this option as it opens up the access forever as long as the process is alive.

client_session_keep_alive_heartbeat_frequency: Number of seconds in-between client attempts to update the token for the session.
> The default is 3600 seconds.
> Minimum value is 900 seconds. A smaller value will be reset to 900 seconds.
> Maximum value is 3600 seconds. A larger value will be reset to 3600 seconds.
> This parameter is only valid if client_session_keep_alive is set to true.

ocspFailOpen: true by default. Set to false to make the OCSP check run in fail-closed mode.

certRevocationCheckMode (enabled, advisory, disabled): Specifies the certificate revocation check mode. When enabled, the driver performs a certificate revocation check using CRL. When advisory, the driver performs a certificate revocation check using CRL, but fails the connection only if the certificate is revoked; if the status cannot be determined, the connection is established. When disabled, the driver does not perform a certificate revocation check. Keep in mind that the certificate revocation check with CRLs is a heavy task, both for memory and CPU. The default is disabled.

crlAllowCertificatesWithoutCrlURL: when set to true, if a certificate does not have a CRL URL, the driver will allow the connection to be established. The default is false.

SNOWFLAKE_CRL_CACHE_VALIDITY_TIME (environment variable): specifies the validity time of the CRL cache in seconds.

crlInMemoryCacheDisabled: set to disable in-memory caching of CRLs.

crlOnDiskCacheDisabled: set to disable on-disk caching of CRLs (the on-disk cache may help with cold starts).

crlDownloadMaxSize: maximum size (in bytes) of a CRL to download. Default is 200MB.

SNOWFLAKE_CRL_ON_DISK_CACHE_DIR (environment variable): set to customize the directory for on-disk caching of CRLs.

SNOWFLAKE_CRL_ON_DISK_CACHE_REMOVAL_DELAY (environment variable): set the delay (in seconds) for removing the on-disk cache (for debuggability).

crlHTTPClientTimeout: customize the HTTP client timeout for downloading CRLs.

validateDefaultParameters: true by default.
Set to false to disable the existence and privilege checks for Database, Schema, Warehouse and Role when setting up the connection. --> Important note: with the default true value, the connection will fail if validation fails, e.g. if you specify a non-existent database/schema/etc. name. This is particularly important when you have a miXedCaSE-named object (e.g. database) and you forgot to properly double quote it. This behaviour is still preferable as it provides a very clear, fail-fast indication of the configuration error. If you would still like to forego this validation, which ensures that the driver always connects with the proper database, schema etc. and creates a proper context for it, you can set this configuration to false to allow connection with invalid object identifiers. In this case (with this default validation deliberately turned off) the driver cannot guarantee that the actual behaviour inside the session will match the one you'd expect, i.e. you may not actually be using the database you expect, and so on.

tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic.

logQueryText: when set to true, the full query text will be logged. Be aware that it may include sensitive information. Default value is false.

logQueryParameters: when set to true, the parameters will be logged. Requires logQueryText to be enabled first. Be aware that it may include sensitive information. Default value is false.

disableQueryContextCache: disables parsing of the query context returned from the server and resending it to the server as well. Default value is false.

clientConfigFile: specifies the location of the client configuration json file. In this file you can configure the Easy Logging feature.

disableSamlURLCheck: disables the SAML URL check. Default value is false.

All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config.

To use the internal Snowflake authenticator, specify snowflake (Default). To use programmatic access tokens, specify programmatic_access_token. If you want to cache your MFA logins, specify username_password_mfa. You can pass the TOTP in a separate passcode parameter or append it to the password setting, in which case you need to set passcodeInPassword = true. To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. To authenticate via OAuth with a token, specify oauth and provide an OAuth Access Token (see the token parameter below). To authenticate via the full OAuth flow, specify oauth_authorization_code or oauth_client_credentials and fill the relevant parameters (oauthClientId, oauthClientSecret, oauthAuthorizationUrl, oauthTokenRequestUrl, oauthRedirectUri, oauthScope). Specify URLs if you want to use an external OAuth2 IdP, otherwise Snowflake will be used as the default IdP. If oauthScope is not configured, the role is used (giving session:role:<roleName> scope). For more information, please refer to the official Snowflake documentation.
To authenticate via workload identity, specify workload_identity. This option requires the workloadIdentityProvider option to be set (AWS, GCP, AZURE, OIDC). When workloadIdentityProvider=AZURE, workloadIdentityEntraResource can be optionally set to customize the Entra resource used to fetch the JWT token. When workloadIdentityProvider=GCP or AWS, workloadIdentityImpersonationPath can be optionally set to customize the impersonation path. This is a comma-separated list. For GCP the last parameter is the target service account and the rest are chained delegation. For AWS this is the list of role ARNs to assume. For more details, refer to the usage guide: https://docs.snowflake.com/en/user-guide/workload-identity-federation You can also connect to your warehouse using the connection config. The database/sql library suggests this approach when you want to take advantage of driver-specific connection features that aren't available in a connection string. Each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. For example: If you are using this method, you don't need to pass a driver name to specify the driver type you want to connect with. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTRATION` in your environment. This is useful if you wish to register multiple versions of the driver. Note: `GOSNOWFLAKE_SKIP_REGISTRATION` should not be used if sql.Open() is used as the method to connect to the server, as sql.Open will require registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}. You can load the connection configuration from a .toml file. With two environment variables, `SNOWFLAKE_HOME` (`connections.toml` file directory) and `SNOWFLAKE_DEFAULT_CONNECTION_NAME` (DSN name), the driver will search the config file and load the connection. You can find out how to use this connection method at ./cmd/tomlfileconnection or in the Snowflake docs: https://docs.snowflake.com/en/developer-guide/snowflake-cli-v2/connecting/specify-credentials If the connection.toml file is readable by others, a warning will be logged. To disable it you need to set the environment variable `SF_SKIP_WARNING_FOR_READ_PERMISSIONS_ON_CONFIG_FILE` to true. If you wish to specify a custom transporter (e.g. to provide a custom TLS config to be used with your custom truststore), pass it through the `NewConnector`. Example: As an alternative, you can use the `RegisterTLSConfig` / `DeregisterTLSConfig` functions as seen in the unit tests: https://github.com/snowflakedb/gosnowflake/blob/v1.16.0/transport_test.go#L127 The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: In addition to environment variables, the Go Snowflake Driver also supports configuring the proxy via connection parameters.
When these parameters are provided in the connection string or DSN, they take precedence and any environment proxy settings (HTTP_PROXY, HTTPS_PROXY, NO_PROXY) will be ignored.

| Parameter       | Description                                                           | Default |
|-----------------|-----------------------------------------------------------------------|---------|
| `proxyHost`     | Hostname or IP address of the proxy server.                           |         |
| `proxyPort`     | Port number of the proxy server.                                      |         |
| `proxyUser`     | Username for proxy authentication.                                    |         |
| `proxyPassword` | Password for proxy authentication.                                    |         |
| `proxyProtocol` | Protocol to use for proxy connection. Valid values: `http`, `https`.  | `http`  |
| `noProxy`       | Comma-separated list of hosts that should bypass the proxy.           |         |

For more details, please refer to the example in ./cmd/proxyconnection. By default, the driver uses a built-in slog-based logger at ERROR level. The driver automatically masks secrets in all log messages to prevent credential leakage. Users can customize logging in two ways: 1. Using a custom slog.Handler (if you want to use slog with custom formatting): 2. Using a complete custom logger implementation (if you want full control): Important notes: If you want to define S3 client logging, override the S3LoggingMode variable using configuration: https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/aws#ClientLogMode Example: A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. For example: A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example: If you need the query ID for your query, you have to use a raw connection. For queries: For execs: The result of your query can be retrieved by setting the query ID in the WithFetchResultByID context. From 0.5.0, signal handling responsibility has moved to the application. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap to the context to execute methods that can take the context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example. A context containing OpenTelemetry headers for distributed tracing can be created. Each query, both synchronous and asynchronous, run with this context will include the Trace ID and Span ID as metadata. If you are instrumenting your program with OpenTelemetry and exporting telemetry data to Snowflake, then queries run with this context will be properly nested under the appropriate parent span. This can be viewed in the Traces and Logs tab in Snowsight. For example: The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level. Usage notes: The Arrow data format reduces rounding errors in floating point numbers.
You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format. For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below. When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are:

The SQL data type.

The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}.

The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly.

The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.)

The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.
Go Data Types for Scan()

=====================================================================================================================
                       |                    ARROW                    |                      JSON
=====================================================================================================================
SQL Data Type          | Default Go Data Type   | Supported Go Data  | Default Go Data Type   | Supported Go Data
                       | for Scan() interface{} | Types for Scan()   | for Scan() interface{} | Types for Scan()
=====================================================================================================================
BOOLEAN                | bool                                        | string                 | bool
---------------------------------------------------------------------------------------------------------------------
VARCHAR                | string                                      | string                 |
---------------------------------------------------------------------------------------------------------------------
DOUBLE                 | float32, float64 [1], [2]                   | string                 | float32, float64
---------------------------------------------------------------------------------------------------------------------
INTEGER that           | int, int8, int16, int32, int64              | string                 | int, int8, int16,
fits in int64          | [1], [2]                                    |                        | int32, int64
---------------------------------------------------------------------------------------------------------------------
INTEGER that doesn't   | int, int8, int16, int32, int64, *big.Int    | string                 | error
fit in int64           | [1], [2], [3], [4]                          |                        |
---------------------------------------------------------------------------------------------------------------------
NUMBER(P, S)           | float32, float64, *big.Float                | string                 | float32, float64
where S > 0            | [1], [2], [3], [5]                          |                        |
---------------------------------------------------------------------------------------------------------------------
DATE                   | time.Time                                   | string                 | time.Time
---------------------------------------------------------------------------------------------------------------------
TIME                   | time.Time                                   | string                 | time.Time
---------------------------------------------------------------------------------------------------------------------
TIMESTAMP_LTZ          | time.Time                                   | string                 | time.Time
---------------------------------------------------------------------------------------------------------------------
TIMESTAMP_NTZ          | time.Time                                   | string                 | time.Time
---------------------------------------------------------------------------------------------------------------------
TIMESTAMP_TZ           | time.Time                                   | string                 | time.Time
---------------------------------------------------------------------------------------------------------------------
BINARY                 | []byte                                      | string                 | []byte
---------------------------------------------------------------------------------------------------------------------
ARRAY [6]              | string / array                              | string / array         |
---------------------------------------------------------------------------------------------------------------------
OBJECT [6]             | string / struct                             | string / struct        |
---------------------------------------------------------------------------------------------------------------------
VARIANT                | string                                      | string                 |
---------------------------------------------------------------------------------------------------------------------
MAP                    | map                                         | map                    |

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.

[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.
[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().

[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using the .Int64()/.String()/.Uint64() methods. For an example, see below.

[5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using the .Float32()/.String()/.Float64() methods. For an example, see below.

[6] Arrays and objects can be either semistructured or structured; see more info in the section below.

Note: SQL NULL values are converted to Golang nil values, and vice-versa.

Snowflake supports two flavours of "structured data" - semistructured and structured. Semistructured types are variants, objects and arrays without a schema. When data is fetched, it's represented as strings and the client is responsible for its interpretation. Example table definition: The data does not have any corresponding schema, so values in the table may differ slightly. Semistructured variants, objects and arrays are always represented as strings for scanning: When inserting, a marker indicating the correct type must be used, for example: Structured types differ from semistructured types by having a specific schema. In all rows of the table, values must conform to this schema. Example table definition: To retrieve structured objects, follow these steps: 1. Create a struct implementing the sql.Scanner interface, example: a) b) Automatic scanning goes through all fields in a struct and reads object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built from the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added:
- the first value is always the name of a field in the SQL object
- additionally, the `ignore` parameter can be passed to omit this field
2. Use the WithStructuredTypesEnabled context while querying data. 3. Use it in a regular scan: See StructuredObject for all available operations including null support, embedding nested structs, etc. Retrieving arrays of simple types works exactly the same as normal values - using the Scan function. You can use the WithEmbeddedValuesNullable context to handle null values in maps and arrays of simple types in the database. In that case, sql null types will be used: If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners: Retrieving structured maps is very similar to retrieving arrays: To bind structured objects use: 1. Create a type which implements the StructuredObjectWriter interface, example: a) b) 2. Use an instance as a regular bind. 3. If you need to bind a nil value, use special syntax: Binding structured arrays works like any other parameter. The only difference is - if you want to insert an empty array (not nil but empty), you have to use: The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value into an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query.
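A sketch of that flow, assuming a *sql.DB opened with this driver; the query literal and the helper function name are illustrative.

```go
import (
	"context"
	"database/sql"
	"log"
	"math/big"

	sf "github.com/snowflakedb/gosnowflake" // adjust the import path to the module version you use
)

func fetchLargeInteger(db *sql.DB) {
	// Higher precision must be enabled on the context for *big.Int to be returned.
	ctx := sf.WithHigherPrecision(context.Background())
	rows, err := db.QueryContext(ctx, "SELECT 12345678901234567890123456789012345678") // NUMBER(38,0) example
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
	for rows.Next() {
		var v interface{}
		if err := rows.Scan(&v); err != nil {
			log.Fatal(err)
		}
		if b, ok := v.(*big.Int); ok {
			log.Printf("big.Int value: %s", b.String())
			if b.IsInt64() {
				small := b.Int64() // the value fits into an int64, copy it over
				log.Printf("as int64: %d", small)
			}
		}
	}
}
```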
If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails:

Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value:

By default, DECFLOAT values are returned as string values. If you want to retrieve them as numbers, you have to use the WithDecfloatMappingEnabled context. If higher precision is enabled, the driver will return them as *big.Float values. Otherwise, they will be returned as float64 values. Keep in mind that neither float64 nor *big.Float can precisely represent some DECFLOAT values. If precision is important, you have to use the string representation and parse it with a library of your own choosing.

You can retrieve data in a columnar format similar to the format the server returns, without transposing it to rows. Arrow Batches mode is available through the separate `arrowbatches` sub-package (`github.com/snowflakedb/gosnowflake/v2/arrowbatches`). This sub-package provides access to Arrow columnar data using ArrowBatch structs, which correspond to data chunks received from the backend. They allow access to specific arrow.Record structs. The arrow-compute dependency (which significantly increases binary size) is only pulled in when you import the arrowbatches sub-package. If you don't need Arrow batch support, simply don't import it.

An ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from the query context or set by the user using the WithContext(ctx) method. In order to access them you must use the `arrowbatches.WithArrowBatches` context, similar to the following:

This returns []*arrowbatches.ArrowBatch. ArrowBatch functions:

GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of its actual size.

WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded. For example: will produce the same result in records1 and records2, irrespective of the newly provided ctx. Contexts worth noting are: arrowbatches.WithTimestampOption, WithHigherPrecision and arrowbatches.WithUtf8Validation, described in more detail later.

Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not.

Limitations:

How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake. Also, Snowflake supports 0-9 digits of fractional-second precision, while Arrow supports only 3 (millisecond), 6 (microsecond) and 9 (nanosecond) digit precision. Consequently, Snowflake uses a custom timestamp format in Arrow, which varies with the timestamp type and precision.
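As a reference for the flow just described, here is a minimal sketch of enabling Arrow batches and iterating over the returned records. How the []*arrowbatches.ArrowBatch slice is obtained from a query depends on the driver-level API and is elided here; the loop body only uses the ArrowBatch functions listed above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/snowflakedb/gosnowflake/v2/arrowbatches"
)

func main() {
	// Enable Arrow batches on the query context.
	ctx := arrowbatches.WithArrowBatches(context.Background())
	_ = ctx // run your query with this context through the driver-level API

	// Assume "batches" was obtained from that query (details elided).
	var batches []*arrowbatches.ArrowBatch

	for _, batch := range batches {
		// Fetch downloads and translates the chunk on demand.
		records, err := batch.Fetch()
		if err != nil {
			log.Fatal(err)
		}
		for _, rec := range *records { // Fetch returns *[]arrow.Record
			fmt.Println("rows in record:", rec.NumRows())
		}
		fmt.Println("rows in batch:", batch.GetRowCount()) // non-zero now that data is loaded
	}
}
```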
If you want to use timestamps in Arrow batches, you have two options:

How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters. However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the context arrowbatches.WithUtf8Validation is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`. This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks.

How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision, and zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. When using NUMBERs with non-zero scale, the value is returned as an integer type and the scale is provided in the record metadata. For example, a value of 123.45 that comes from NUMBER(9, 4) will be represented as 1234500 with a scale equal to 4. It is the client's responsibility to interpret it correctly. Also, see the limitations section above.

How to handle JSON responses in Arrow batches: Due to technical limitations, the Snowflake backend may return JSON even if the client expects Arrow. In that case, Arrow batches are not available and an error with code ErrNonArrowResponseInArrowBatches is returned. The response is parsed into regular rows. You can read the rows in the way described in the transform_batches_to_rows.go example. This has a strong limitation though - it is a very low-level API (the Go driver API), so no conversions are available and all values are returned as strings. An alternative approach is to rerun the query without enabling Arrow batches and use the general Go SQL API instead of the driver API. This can be optimized by using `WithRequestID`, so the backend returns results from the cache.

Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement:

With binding, you can execute a SQL statement that uses a value that is inside a variable. For example:

The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones".
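A minimal sketch of the "?" binding described above; the table and column names are placeholders.

```go
package snippets

import "database/sql"

// insertWithBinding inserts a value using a "?" placeholder instead of a
// literal inside the statement text.
func insertWithBinding(db *sql.DB, n int) error {
	_, err := db.Exec("INSERT INTO table1 (c1) VALUES (?)", n)
	return err
}
```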
Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enables the proper handling of null parameters inside function calls, i.e.:

Timestamp nullability had to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which are all mapped to the single Go time.Time type:

Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement.

If the array contains SQL NULL values, use the slice type []interface{}, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For example,

For slices of []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver. For example,

Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema:

If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error:

For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html).

Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example:

To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows:

The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location.

Internally, this feature leverages the []byte data type.
As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package:

The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or use a Config structure specifying:

The <your_private_key> should be a base64 URL encoded PKCS8 RSA private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function.

On the server side, you can alter the public key with the SQL command:

The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function.

To generate a valid key pair, you can execute the following commands in the shell:

Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on the disk and decrypt the key in your application using a library you trust.

JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`.

The driver allows authentication using an external browser. When a connection is created, the driver will open a browser window and ask the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or use a Config structure with the following Authenticator specified:

External browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or using a Config structure with the following ExternalBrowserTimeout specified:

This feature is available in version 1.3.8 or later of the driver.

By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below.

The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call:

To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example:

When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet().
The following pseudo-code shows how to process multiple result sets:

The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement. For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext():

Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATE statements had not functioned as expected.

The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical.

The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement.

If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements.

Note: PUT statements are not supported for multi-statement queries.

If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an error is returned. If you then query the contents of the table named "test", the values 1 and 2 would be present.

When using the QueryContext() and ExecContext() functions, Golang code can check for errors the usual way. For example:

Preparing statements and using bind variables are also not supported for multi-statement queries.

The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements.

Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step, which is that you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. The rest of the code is compatible with both asynchronous mode and synchronous mode.

The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous.
However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking). If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.ColumnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().)

Because the example code above executes only one query and no other activity, there is no significant difference between asynchronous and synchronous behavior. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go

==> Some considerations related to the ServerSessionKeepAlive configuration option in the context of asynchronous query execution

When a Go SQL connection is closed, it performs the following actions:

* stops the scheduled heartbeats (CLIENT_SESSION_KEEP_ALIVE), if they were enabled
* cleans up all the HTTP connections which are already idle - it doesn't touch the ones which are currently in active use
* if Config.ServerSessionKeepAlive is false (default), then it actively logs out the current Snowflake session.

!! Caveat: If there are any queries currently executing in the same Snowflake session (e.g. async queries sent with WithAsyncMode()), then those queries are automatically cancelled from the client side a couple of minutes after the Close() call, as a Snowflake session that has been actively logged out of cannot sustain any queries. You can govern this behaviour by setting Config.ServerSessionKeepAlive to true, in which case the corresponding Snowflake session will be kept alive for a long time (determined by the Snowflake engine) even after an explicit Connection.Close() call, past the time when the last running query in the session finished executing. The behaviour is also dependent on the ABORT_DETACHED_QUERY parameter; please see the detailed explanation in the parameter description at https://docs.snowflake.com/en/sql-reference/parameters#abort-detached-query. As a consequence, the best practice is to isolate all long-running async tasks (especially ones supposed to continue after the connection is closed) into a separate connection.

The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters:

Using PUT: The following example shows how to run a PUT command by passing a string to the db.Query() function:

"<local_file>" should include the file path as well as the name.
Snowflake recommends using an absolute path rather than a relative path. For example:

Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names.

To send information from a stream (rather than a file), use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.)

Note: PUT statements are not supported for multi-statement queries.

Using GET: The following example shows how to run a GET command by passing a string to the db.Query() function:

"<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example:

To download a file into an in-memory stream (rather than a file), use code similar to the code below.

Note: GET statements are not supported for multi-statement queries.

Specifying a temporary directory for encryption and compression: Putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS, or you want to specify a different one, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example:

Using custom configuration for PUT/GET: If you want to override some default configuration options, you can use the `WithFileTransferOptions` context. There are multiple config parameters, including progress bars and compression.

The Go Snowflake Driver includes an embedded native library called "minicore" that verifies loading of native Rust extensions on various platforms. By default, minicore is enabled and loaded dynamically at runtime.

## Disabling Minicore

There are two ways to disable minicore:

1. **At runtime using environment variable:**

2. **At compile time using build tags:**

## Static Linking

On Linux, if the binary is fully statically linked (e.g., built with -linkmode external -extldflags '-static'), the driver automatically detects this and skips minicore loading. Calling dlopen from a statically linked glibc binary would crash with SIGFPE, so the driver inspects the ELF header for a dynamic linker (PT_INTERP) and gracefully skips minicore if none is found.

When minicore is disabled (either at runtime, at compile time, or automatically due to static linking), the driver continues to work normally but without the additional functionality provided by the native library.

If you force FIPS mode using the fips140 GODEBUG option, the driver will switch OCSP requests from SHA-1 to SHA-256. Be aware that the Snowflake cache server doesn't support OCSP requests signed with SHA-256, so the driver may work more slowly, and, if the OCSP cache server is unavailable, OCSP requests will fail; if OCSP is enabled, connection attempts will then fail as well.

==> Relevant configuration

- `ConnectionDiagnosticsEnabled` (default: false) - main flag to enable the diagnostics
- `ConnectionDiagnosticsAllowlistFile` - specify `/path/to/allowlist.json` to use a specific allowlist file which the driver should parse. If not specified, the driver tries to open `allowlist.json` from the current directory. The `ConnectionDiagnosticsAllowlistFile` is only taken into consideration when `ConnectionDiagnosticsEnabled=true`

==> Flow of operation when `ConnectionDiagnosticsEnabled=true`
1. Upon initial startup, the driver opens and reads `allowlist.json` to determine which hosts it needs to connect to, and then, for each entry in the allowlist:
2. performs a DNS resolution test to see if the hostname is resolvable
3. logs an Error when encountering a _public_ IP address for a host which looks to be a _private_ link hostname
4. checks if a proxy is used in the connection
5. sets up a connection, using the same transport that is driven by the driver's config (a custom transport; or, when OCSP is disabled, the OCSP-less transport; or, by default, the OCSP-enabled transport)
6. for HTTP endpoints, issues an HTTP GET request and sees if it connects
7. for HTTPS endpoints, the same, plus
8. the driver exits after performing diagnostics. If you want to use the driver 'normally' after performing connection diagnostics, set `ConnectionDiagnosticsEnabled=false` or remove it from the config.
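A minimal sketch of enabling the diagnostics via the driver Config is below, assuming the options listed above map to Config fields of the same names; the credentials are placeholders.

```go
package main

import (
	"log"

	sf "github.com/snowflakedb/gosnowflake"
)

func main() {
	cfg := &sf.Config{
		Account:  "myaccount", // placeholder credentials
		User:     "myuser",
		Password: "mypassword",

		// Diagnostics settings described above; assumed to be Config fields
		// named after the configuration options listed in this section.
		ConnectionDiagnosticsEnabled:       true,
		ConnectionDiagnosticsAllowlistFile: "/path/to/allowlist.json",
	}

	dsn, err := sf.DSN(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Opening a connection with this DSN runs the diagnostics and then exits.
	log.Println("diagnostics DSN built:", dsn)
}
```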
Goserial is a simple Go package to allow you to read and write from the serial port as a stream of bytes. It aims to have the same API on all platforms, including Windows. As an added bonus, the Windows package does not use cgo, so you can cross compile for Windows from another platform. Unfortunately goinstall does not currently let you cross compile, so you will have to do it manually:

Currently there is very little in the way of configurability. You can set the baud rate. Then you can Read(), Write(), or Close() the connection. Read() will block until at least one byte is returned. Write() is the same. There is currently no exposed way to set the timeouts, though patches are welcome.

Currently all ports are opened with 8 data bits, 1 stop bit, no parity, no hardware flow control, and no software flow control. This works fine for many real devices and many faux serial devices, including USB-to-serial converters and Bluetooth serial ports.

You may Read() and Write() simultaneously on the same connection (from different goroutines).

Example usage:
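A minimal sketch of the usage pattern described above, assuming the classic goserial API (a serial.Config with Name and Baud fields, and an OpenPort function); the import path and device name are placeholders.

```go
package main

import (
	"log"

	serial "github.com/tarm/goserial" // import path assumed
)

func main() {
	// Device name and baud rate are placeholders.
	c := &serial.Config{Name: "/dev/ttyUSB0", Baud: 115200}
	s, err := serial.OpenPort(c)
	if err != nil {
		log.Fatal(err)
	}
	defer s.Close()

	if _, err := s.Write([]byte("test")); err != nil {
		log.Fatal(err)
	}

	buf := make([]byte, 128)
	n, err := s.Read(buf) // blocks until at least one byte is available
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%q", buf[:n])
}
```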
z13ctl — system control for the 2025 ASUS ROG Flow Z13 Controls keyboard and lightbar RGB (via Linux hidraw), performance profile, and battery charge limit (via asus-wmi sysfs interfaces). Usage: Commands: apply brightness list off profile batterylimit setup
Package mqttv5 provides an SDK for implementing MQTT v5.0 clients and brokers. This package implements the MQTT Version 5.0 OASIS Standard: https://docs.oasis-open.org/mqtt/mqtt/v5.0/mqtt-v5.0.html The package provides structs for all MQTT v5.0 control packets: Use ReadPacket and WritePacket to read/write packets from/to connections: Use the high-level Client API for connecting to MQTT brokers: TLS connections: WebSocket connections: Use the high-level Server API for building MQTT brokers: For TLS server, create a TLS listener: Multiple listeners can be added to serve on multiple ports: For WebSocket server, use WSServer as an http.Handler: Session state can be managed using the Session and SessionStore interfaces. A reference implementation is provided with MemorySession and MemorySessionStore: Sessions track subscriptions, pending messages, and packet IDs: For QoS 1 and 2 message flows, use the provided state machines: Flow control prevents overwhelming clients with too many in-flight messages: Topic validation and matching support MQTT wildcards: Implement the Authenticator interface for basic authentication: For enhanced authentication (multi-step), implement EnhancedAuthenticator. Implement the Authorizer interface for access control: Use the built-in metrics collectors for operational metrics: Implement the Logger interface for structured logging:
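As an illustration of the packet-level ReadPacket/WritePacket API mentioned above, here is a hypothetical sketch; the import path, the ConnectPacket type and its fields, and the exact function signatures are assumptions, not the package's documented API.

```go
package main

import (
	"log"
	"net"

	"example.com/mqttv5" // import path assumed
)

func main() {
	conn, err := net.Dial("tcp", "broker.example.com:1883")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Hypothetical: send a CONNECT packet, then read the broker's response.
	connect := &mqttv5.ConnectPacket{ClientID: "example-client"} // assumed type and field
	if err := mqttv5.WritePacket(conn, connect); err != nil {    // assumed signature
		log.Fatal(err)
	}
	pkt, err := mqttv5.ReadPacket(conn) // assumed signature
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("received %T", pkt)
}
```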
Package goAuth provides a low-latency authentication engine with JWT access tokens, rotating opaque refresh tokens, Redis-backed session controls, and bitmask-based RBAC. The package is designed for concurrent server workloads: Engine methods are safe to call from multiple goroutines after initialization through Builder.Build. goAuth is the public surface. It exposes Engine, Builder, Config, and value types (MetricsSnapshot, SessionInfo, etc.). All internal coordination — flow orchestration, session encoding, rate limiting, audit dispatch — lives under internal/ and is never exported. Validate is the hot path. It must not allocate beyond the returned Claims struct and must complete without Redis round-trips in ModeJWTOnly. Refresh, Login, and account operations are allowed one Redis round-trip per call.
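A hypothetical sketch of the Builder/Engine flow described above; apart from the names Builder, Build, Engine, Config, ModeJWTOnly, and Validate taken from this description, the constructor, field names, and method signatures are assumptions.

```go
package main

import (
	"context"
	"log"

	"example.com/goauth" // import path assumed
)

func main() {
	// Assumed builder shape; Build returns the concurrency-safe Engine.
	engine, err := goauth.NewBuilder().
		WithConfig(goauth.Config{Mode: goauth.ModeJWTOnly}). // assumed field name
		Build()
	if err != nil {
		log.Fatal(err)
	}

	// Validate is the hot path: in ModeJWTOnly it completes without Redis
	// round-trips and allocates only the returned Claims.
	claims, err := engine.Validate(context.Background(), "access-token") // assumed signature
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("claims: %+v", claims)
}
```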
Package trellis is a deterministic state machine engine (DFA) designed for building robust conversational agents, CLIs, and automation workflows. It implements a "Reentrant DFA with Controlled Side-Effects" architecture, separating the narrative graph (Logic) from the execution state (Context) and side-effects (Tools). Trellis treats your application flow as a graph of nodes. The engine manages the state transitions, data binding, and persistence, while your application ("Host") manages the I/O and external tool execution. This Hexagonal Architecture allows Trellis to be embedded in any interface: CLI, HTTP Server, or AI Agent infrastructure. Initialize the engine using the "Start" entrypoint. You can use the default filesystem loader (Loam) or inject a custom one.
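A hypothetical sketch of initializing the engine via the "Start" entrypoint with the default filesystem loader (Loam); the argument shape and the flow-definition path are assumptions.

```go
package main

import (
	"log"

	"example.com/trellis" // import path assumed
)

func main() {
	// Assumed signature: Start loads the narrative graph (Logic) using the
	// default Loam filesystem loader and returns an engine handle.
	eng, err := trellis.Start("flows/onboarding.json")
	if err != nil {
		log.Fatal(err)
	}

	// The Host now drives I/O and external tool execution against the engine,
	// which manages state transitions, data binding, and persistence.
	_ = eng
}
```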
Package iobuf provides lock-free buffer pools and memory management utilities for high-performance I/O operations. The package implements a 12-tier buffer size hierarchy and lock-free bounded pools optimized for zero-allocation hot paths. All pools use semantic error types from iox for non-blocking control flow. Buffers are organized into 12 size tiers following a power-of-4 progression: Each tier has corresponding type aliases (e.g., SmallBuffer, LargeBuffer) and factory functions for bounded pools (e.g., NewSmallBufferPool). BoundedPool is a lock-free multi-producer multi-consumer (MPMC) pool based on the algorithm from "A Scalable, Portable, and Memory-Efficient Lock-Free FIFO Queue" (Ruslan Nikolaev, 2019). Key characteristics: Pools store indices (int) rather than buffer values directly. This enables: Usage pattern: For DMA and io_uring operations requiring page alignment: IoVec provides scatter/gather I/O support for readv/writev syscalls: This package requires a 64-bit CPU architecture (amd64, arm64, riscv64, loong64, ppc64, ppc64le, s390x, mips64, mips64le). 32-bit architectures are not supported due to 64-bit atomic operations in BoundedPool. All pool operations are safe for concurrent use. BoundedPool supports multiple concurrent producers and consumers without external synchronization. iobuf depends on:
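A hypothetical sketch of the bounded-pool usage pattern described above; the NewSmallBufferPool factory name comes from this description, while its argument, the Get/Put methods, and the index-to-buffer accessor are assumptions.

```go
package main

import (
	"log"

	"example.com/iobuf" // import path assumed
)

func main() {
	// Assumed: a bounded, lock-free pool for the "small" tier.
	pool := iobuf.NewSmallBufferPool(1024) // capacity argument assumed

	// Pools hand out indices (int), not buffer values.
	idx, err := pool.Get() // assumed method
	if err != nil {
		// Semantic error from iox (e.g. pool exhausted) - non-blocking control flow.
		log.Println("no buffer available:", err)
		return
	}
	defer pool.Put(idx) // assumed method

	buf := pool.Buffer(idx) // assumed accessor from index to buffer
	_ = buf
}
```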