Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. To use an emulator with this library, you can set the SPANNER_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Spanner. You can then create and use a client as usual: Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
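A minimal end-to-end sketch of the flow described above; the database path, the "Users" table and its columns are purely illustrative assumptions, not part of the package docs:

    package main

    import (
        "context"
        "fmt"
        "log"

        "cloud.google.com/go/spanner"
        "google.golang.org/api/iterator"
    )

    func main() {
        ctx := context.Background()

        client, err := spanner.NewClient(ctx,
            "projects/my-project/instances/my-instance/databases/my-db")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close() // returns sessions to the pool

        // Write a row with Apply, then read it back with Single.
        _, err = client.Apply(ctx, []*spanner.Mutation{
            spanner.Insert("Users", []string{"ID", "Name"}, []interface{}{int64(1), "Alice"}),
        })
        if err != nil {
            log.Fatal(err)
        }
        row, err := client.Single().ReadRow(ctx, "Users", spanner.Key{int64(1)}, []string{"Name"})
        if err != nil {
            log.Fatal(err)
        }
        var name string
        if err := row.Column(0, &name); err != nil {
            log.Fatal(err)
        }
        fmt.Println(name)

        // Read a key range with the standard iterator pattern.
        kr := spanner.KeyRange{Start: spanner.Key{int64(1)}, End: spanner.Key{int64(100)}, Kind: spanner.ClosedClosed}
        iter := client.Single().Read(ctx, "Users", kr, []string{"ID", "Name"})
        defer iter.Stop()
        for {
            r, err := iter.Next()
            if err == iterator.Done {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            _ = r // inspect columns with r.Column or r.ColumnByName
        }

        // Run a SQL statement with a parameter, using the Do callback.
        stmt := spanner.NewStatement("SELECT Name FROM Users WHERE ID = @id")
        stmt.Params["id"] = int64(1)
        err = client.Single().Query(ctx, stmt).Do(func(r *spanner.Row) error {
            var n string
            if err := r.Column(0, &n); err != nil {
                return err
            }
            fmt.Println(n)
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }
    }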
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time at which all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may be aborted automatically by the backend and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Cloud Spanner STRUCT values (https://cloud.google.com/spanner/docs/data-types#struct-type) can be represented by a Go struct value. A proto StructType is built from the field types and field tag information of the Go struct. If a field in the struct type definition has a "spanner:<field_name>" tag, then the value of the "spanner" key in the tag is used as the name for that field in the built StructType, otherwise the field name in the struct definition is used. To specify a field with an empty field name in a Cloud Spanner STRUCT type, use the `spanner:""` tag annotation against the corresponding field in the Go struct's type definition. A STRUCT value can contain STRUCT-typed and Array-of-STRUCT typed fields and these can be specified using named struct-typed and []struct-typed fields inside a Go struct. However, embedded struct fields are not allowed. Unexported struct fields are ignored. NULL STRUCT values in Cloud Spanner are typed. A nil pointer to a Go struct value can be used to specify a NULL STRUCT value of the corresponding StructType. Nil and empty slices of a Go STRUCT type can be used to specify NULL and empty array values respectively of the corresponding StructType. A slice of pointers to a Go struct type can be used to specify an array of NULL-able STRUCT values. Spanner supports DML statements like INSERT, UPDATE and DELETE.
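A hedged sketch of the three mutation-building styles together with a ReadOnlyTransaction and a ReadWriteTransaction; the database path, table, columns and staleness bound are illustrative assumptions:

    package main

    import (
        "context"
        "log"
        "time"

        "cloud.google.com/go/spanner"
    )

    type User struct {
        ID   int64  `spanner:"ID"`
        Name string `spanner:"Name"`
    }

    func main() {
        ctx := context.Background()
        client, err := spanner.NewClient(ctx,
            "projects/my-project/instances/my-instance/databases/my-db")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Three equivalent ways to build an insert mutation.
        m1 := spanner.Insert("Users", []string{"ID", "Name"}, []interface{}{int64(2), "Bob"})
        m2 := spanner.InsertMap("Users", map[string]interface{}{"ID": int64(3), "Name": "Carol"})
        m3, err := spanner.InsertStruct("Users", User{ID: 4, Name: "Dave"})
        if err != nil {
            log.Fatal(err)
        }
        if _, err := client.Apply(ctx, []*spanner.Mutation{m1, m2, m3}); err != nil {
            log.Fatal(err)
        }

        // Several reads at a single (possibly slightly stale) timestamp.
        ro := client.ReadOnlyTransaction().WithTimestampBound(spanner.MaxStaleness(time.Minute))
        defer ro.Close()
        if _, err := ro.ReadRow(ctx, "Users", spanner.Key{int64(2)}, []string{"Name"}); err != nil {
            log.Fatal(err)
        }

        // Read-then-write; the client re-runs the function if the transaction aborts.
        _, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
            row, err := txn.ReadRow(ctx, "Users", spanner.Key{int64(2)}, []string{"Name"})
            if err != nil {
                return err
            }
            var name string
            if err := row.Column(0, &name); err != nil {
                return err
            }
            return txn.BufferWrite([]*spanner.Mutation{
                spanner.Update("Users", []string{"ID", "Name"}, []interface{}{int64(2), name + " Jr."}),
            })
        })
        if err != nil {
            log.Fatal(err)
        }
    }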
Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also use ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use Client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
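A small sketch of DML and Partitioned DML under the same illustrative schema:

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/spanner"
    )

    func main() {
        ctx := context.Background()
        client, err := spanner.NewClient(ctx,
            "projects/my-project/instances/my-instance/databases/my-db")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // DML inside a read-write transaction; Update reports the affected row count.
        _, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
            stmt := spanner.Statement{
                SQL:    "UPDATE Users SET Name = @name WHERE ID = @id",
                Params: map[string]interface{}{"name": "Eve", "id": int64(1)},
            }
            rowCount, err := txn.Update(ctx, stmt)
            if err != nil {
                return err
            }
            log.Printf("%d row(s) updated", rowCount)
            return nil
        })
        if err != nil {
            log.Fatal(err)
        }

        // Partitioned DML for large-scale changes; not every statement can be partitioned.
        count, err := client.PartitionedUpdate(ctx, spanner.Statement{
            SQL: "DELETE FROM Users WHERE Name IS NULL",
        })
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("%d row(s) deleted", count)
    }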
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use queries to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include '<', '<=', '>', '>=', '==', 'in', 'array-contains', and 'array-contains-any'. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Firestore supports similarity search over embedding vectors. See Query.FindNearest for details. You can partition the documents of a Collection Group, allowing for smaller subqueries. You can also serialize/deserialize queries, making it possible to run or stream them elsewhere, for instance in another process or on another machine. Use a transaction to execute reads and writes atomically. All reads must happen before any writes.
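A compact sketch of these operations; the project ID, the "States" collection and its fields are assumptions for illustration:

    package main

    import (
        "context"
        "fmt"
        "log"

        "cloud.google.com/go/firestore"
        "google.golang.org/api/iterator"
    )

    type State struct {
        Capital    string `firestore:"capital"`
        Population int64  `firestore:"population"`
    }

    func main() {
        ctx := context.Background()
        client, err := firestore.NewClient(ctx, "my-project-id") // placeholder project ID
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ca := client.Collection("States").Doc("California") // just a reference, no RPC yet

        // Create or overwrite the document, then read it back into a struct.
        if _, err := ca.Set(ctx, State{Capital: "Sacramento", Population: 39_500_000}); err != nil {
            log.Fatal(err)
        }
        snap, err := ca.Get(ctx)
        if err != nil {
            log.Fatal(err)
        }
        var s State
        if err := snap.DataTo(&s); err != nil {
            log.Fatal(err)
        }
        fmt.Println(s.Capital)

        // Update selected fields, then query the collection.
        if _, err := ca.Update(ctx, []firestore.Update{{Path: "population", Value: 40_000_000}}); err != nil {
            log.Fatal(err)
        }
        iter := client.Collection("States").Where("population", ">", 10_000_000).Documents(ctx)
        defer iter.Stop()
        for {
            doc, err := iter.Next()
            if err == iterator.Done {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(doc.Ref.ID)
        }
    }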
Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it. This package supports the Cloud Firestore emulator, which is useful for testing and development. Environment variables are used to indicate that Firestore traffic should be directed to the emulator instead of the production Firestore service. To install and run the emulator and set its environment variables, see the documentation at https://cloud.google.com/sdk/gcloud/reference/beta/emulators/firestore/. Once the emulator is running, set FIRESTORE_EMULATOR_HOST to the API endpoint.
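A minimal RunTransaction sketch under the same assumed schema; the emulator variable mentioned in the comment is optional:

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/firestore"
    )

    func main() {
        ctx := context.Background()
        // To target the emulator instead of production, export
        //   FIRESTORE_EMULATOR_HOST=localhost:8080
        // before creating the client.
        client, err := firestore.NewClient(ctx, "my-project-id") // placeholder project ID
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ref := client.Collection("States").Doc("California")
        err = client.RunTransaction(ctx, func(ctx context.Context, tx *firestore.Transaction) error {
            snap, err := tx.Get(ref) // all reads must come before writes
            if err != nil {
                return err
            }
            pop, err := snap.DataAt("population")
            if err != nil {
                return err
            }
            // The write is committed atomically with the read; on contention the
            // whole function is retried by the client.
            return tx.Update(ref, []firestore.Update{
                {Path: "population", Value: pop.(int64) + 1},
            })
        })
        if err != nil {
            log.Fatal(err)
        }
    }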
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address, the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address: an IP address, not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than 1 IP address then the driver may connect multiple times to the same host, and will not mark the node as down or up based on events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during an upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: The CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring the local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, instead of use The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
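A configuration sketch that pulls several of these options together; the contact points, keyspace, credentials, server name and datacenter name are placeholders:

    package main

    import (
        "crypto/tls"
        "log"

        "github.com/gocql/gocql"
    )

    func main() {
        cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
        cluster.Keyspace = "example"
        cluster.Consistency = gocql.Quorum
        cluster.ProtoVersion = 4 // pin the protocol version explicitly if desired

        // Username/password authentication.
        cluster.Authenticator = gocql.PasswordAuthenticator{
            Username: "cassandra",
            Password: "cassandra",
        }

        // TLS; most users should provide a *tls.Config explicitly.
        cluster.SslOpts = &gocql.SslOptions{
            Config: &tls.Config{ServerName: "db.example.com"},
        }

        // Token-aware routing with a DC-aware fallback.
        cluster.PoolConfig.HostSelectionPolicy =
            gocql.TokenAwareHostPolicy(gocql.DCAwareRoundRobinPolicy("dc1"))

        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        _ = session // run queries here
    }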
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs) and the queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of a string. See Example_nulls for a full example. The driver reuses the backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices to other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL, to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using a PageSize that is too low will negatively affect performance; a value below 100 is probably too low.
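A sketch of the three read/write styles; the example.tweet table and its columns are assumptions, and the session is assumed to come from a cluster configuration like the one above:

    package main

    import (
        "context"
        "log"

        "github.com/gocql/gocql"
    )

    func runQueries(ctx context.Context, session *gocql.Session) error {
        // Exec: no result rows are read.
        if err := session.Query(
            `INSERT INTO example.tweet (timeline, id, text) VALUES (?, ?, ?)`,
            "me", gocql.TimeUUID(), "hello world",
        ).WithContext(ctx).Exec(); err != nil {
            return err
        }

        // Scan: read a single row.
        var text string
        if err := session.Query(
            `SELECT text FROM example.tweet WHERE timeline = ? LIMIT 1`, "me",
        ).WithContext(ctx).Consistency(gocql.One).Scan(&text); err != nil {
            return err
        }

        // Scanner: iterate over many rows; destinations are declared inside the
        // loop so the driver does not reuse their backing memory across rows.
        scanner := session.Query(
            `SELECT id, text FROM example.tweet WHERE timeline = ?`, "me",
        ).WithContext(ctx).Iter().Scanner()
        for scanner.Next() {
            var (
                id  gocql.UUID
                txt string
            )
            if err := scanner.Scan(&id, &txt); err != nil {
                return err
            }
            _ = id
            _ = txt
        }
        return scanner.Err() // also releases resources held by the iterator
    }

    func main() {
        cluster := gocql.NewCluster("127.0.0.1") // placeholder contact point
        cluster.Keyspace = "example"
        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        if err := runQueries(context.Background(), session); err != nil {
            log.Fatal(err)
        }
    }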
While Cassandra currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they have overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how those are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole, in a single statement. Session.ExecuteBatch prepares the individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay, even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still running. The two parallel executions of the query race to return a result; the first result received will be returned.
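A hedged sketch of a logged batch and a single-statement LWT, using an assumed example.events table:

    package main

    import (
        "context"
        "log"

        "github.com/gocql/gocql"
    )

    func batchAndLWT(ctx context.Context, session *gocql.Session) error {
        // A logged batch: all statements succeed or none do; prefer a single partition.
        b := session.NewBatch(gocql.LoggedBatch)
        b.Query(`INSERT INTO example.events (pk, seq, text) VALUES (?, ?, ?)`, "p1", 1, "a")
        b.Query(`INSERT INTO example.events (pk, seq, text) VALUES (?, ?, ?)`, "p1", 2, "b")
        if err := session.ExecuteBatch(b); err != nil {
            return err
        }

        // A single-statement lightweight transaction (compare-and-set style).
        previous := map[string]interface{}{}
        applied, err := session.Query(
            `INSERT INTO example.events (pk, seq, text) VALUES (?, ?, ?) IF NOT EXISTS`,
            "p1", 3, "c",
        ).WithContext(ctx).MapScanCAS(previous)
        if err != nil {
            return err
        }
        log.Printf("applied=%v previous=%v", applied, previous)
        return nil
    }

    func main() {
        cluster := gocql.NewCluster("127.0.0.1") // placeholder contact point
        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        if err := batchAndLWT(context.Background(), session); err != nil {
            log.Fatal(err)
        }
    }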
UDTs can be (un)marshaled from/to a map[string]interface{} or a Go struct (or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces). For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero values when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
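A sketch of mapping a UDT column onto a tagged Go struct; the coordinates type, the places table and its columns are assumptions:

    package main

    import (
        "log"

        "github.com/gocql/gocql"
    )

    // coordinates maps a hypothetical CQL UDT:
    //   CREATE TYPE coordinates (lat double, long double);
    // The cql tags bind struct fields to UDT field names.
    type coordinates struct {
        Lat  float64 `cql:"lat"`
        Long float64 `cql:"long"`
    }

    func readLocation(session *gocql.Session, id gocql.UUID) (coordinates, error) {
        var loc coordinates
        err := session.Query(`SELECT location FROM places WHERE id = ?`, id).Scan(&loc)
        return loc, err
    }

    func main() {
        cluster := gocql.NewCluster("127.0.0.1") // placeholder contact point
        cluster.Keyspace = "example"
        session, err := cluster.CreateSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        loc, err := readLocation(session, gocql.TimeUUID())
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("lat=%f long=%f", loc.Lat, loc.Long)
    }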
Package buffalo is a Go web development eco-system, designed to make your life easier. Buffalo helps you generate a web project that has everything from the front end (JavaScript, SCSS, etc.) to the back end (database, routing, etc.) already hooked up and ready to run. From there it provides easy APIs to build your web application quickly in Go. Buffalo **isn't just a framework**, it's a holistic web development environment and project structure that **lets developers get straight to the business** of, well, building their business.
Package tcell provides a lower-level, portable API for building programs that interact with terminals or consoles. It works with both common (and many uncommon!) terminals or terminal emulators, and Windows console implementations. It provides support for up to 256 colors, text attributes, and box drawing elements. A database of terminals built from a real terminfo database is provided, along with code to generate new database entries. Tcell offers very rich support for mice, dependent upon the terminal of course. (Windows, XTerm, and iTerm 2 are known to work very well.) If the environment is not Unicode by default, such as an ISO8859 based locale or GB18030, Tcell can convert input and output, so that your terminal can operate in whatever locale is most convenient, while the application program can just assume "everything is UTF-8". Reasonable defaults are used for updating characters to something suitable for display. Unicode box drawing characters will be converted to use the alternate character set of your terminal, if native conversions are not available. If no ACS is available, then some ASCII fallbacks will be used. Note that support for non-UTF-8 locales (other than C) must be enabled by the application using RegisterEncoding() -- we don't have them all enabled by default to avoid bloating the application unnecessarily. (These days UTF-8 is good enough for almost everyone, and nobody should be using legacy locales anymore.) Also, actual glyphs for various code points will only be displayed if your terminal or emulator (or the font the emulator is using) supports them. A rich set of key codes is supported, with support for up to 65 function keys, and various other special keys.
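A minimal program sketch, assuming the v2 module path github.com/gdamore/tcell/v2; it draws a line of text and waits for Escape:

    package main

    import (
        "log"

        "github.com/gdamore/tcell/v2"
    )

    func main() {
        s, err := tcell.NewScreen()
        if err != nil {
            log.Fatal(err)
        }
        if err := s.Init(); err != nil {
            log.Fatal(err)
        }
        defer s.Fini() // restore the terminal on exit

        style := tcell.StyleDefault.Foreground(tcell.ColorGreen)
        msg := "hello, tcell (press ESC to quit)"
        for i, r := range msg {
            s.SetContent(i, 0, r, nil, style)
        }
        s.Show()

        for {
            switch ev := s.PollEvent().(type) {
            case *tcell.EventResize:
                s.Sync()
            case *tcell.EventKey:
                if ev.Key() == tcell.KeyEscape {
                    return
                }
            }
        }
    }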
SQL Schema migration tool for Go. Key features: To install the library and command line program, use the following: The main command is called sql-migrate. Each command requires a configuration file (which defaults to dbconfig.yml, but can be specified with the -config flag). This config file should specify one or more environments: The `table` setting is optional and will default to `gorp_migrations`. The environment that will be used can be specified with the -env flag (defaults to development). Use the --help flag in combination with any of the commands to get an overview of its usage: The up command applies all available migrations. By contrast, down will only apply one migration by default. This behavior can be changed for both by using the -limit parameter. The redo command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations. Use the status command to see the state of the applied migrations: If you are using MySQL, you must append ?parseTime=true to the datasource configuration. For example: See https://github.com/go-sql-driver/mysql#parsetime for more information. Import sql-migrate into your application: Set up a source of migrations; this can be from memory, from a set of files, or from bindata (more on that later): Then use the Exec function to upgrade your database: Note that n can be greater than 0 even if there is an error: any migration that succeeded will remain applied even if a later one fails. The full set of capabilities can be found in the API docs below. Migrations are defined in SQL files, which contain a set of SQL statements. Special comments are used to distinguish up and down migrations. You can put multiple statements in each block, as long as you end them with a semicolon (;). If you have complex statements which contain semicolons, use StatementBegin and StatementEnd to indicate boundaries: The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename. Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However some SQL commands (for example creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. In order to execute such a command in a migration, the migration can be run using the notransaction option: If you like your Go applications self-contained (that is, a single binary), use packr (https://github.com/gobuffalo/packr) to embed the migration files. Just write your migration files as usual, as a set of SQL files in a folder. Use the PackrMigrationSource in your application to find the migrations: If you already have a box and would like to use a subdirectory: As an alternative, but slightly less maintained, you can use bindata (https://github.com/shuLhan/go-bindata) to embed the migration files. Just write your migration files as usual, as a set of SQL files in a folder. Then use bindata to generate a .go file with the migrations embedded: The resulting bindata.go file will contain your migrations. Remember to regenerate your bindata.go file whenever you add/modify a migration (go generate can help here). Use the AssetMigrationSource in your application to find the migrations: Both Asset and AssetDir are functions provided by bindata. Then proceed as usual. Adding a new migration source means implementing MigrationSource.
The resulting slice of migrations will be executed in the given order, so it should usually be sorted by the Id field.
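A library-usage sketch; the sqlite3 driver, database file name and migration contents are assumptions, and a FileMigrationSource pointed at a directory of SQL files works the same way:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/mattn/go-sqlite3" // any database/sql driver supported by sql-migrate works
        migrate "github.com/rubenv/sql-migrate"
    )

    func main() {
        db, err := sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Migrations from a directory of SQL files:
        //   fileSource := &migrate.FileMigrationSource{Dir: "db/migrations"}
        // or defined in memory (file-based migrations use the special
        // "-- +migrate Up" / "-- +migrate Down" comments instead).
        memSource := &migrate.MemoryMigrationSource{
            Migrations: []*migrate.Migration{
                {
                    Id:   "001_create_people",
                    Up:   []string{"CREATE TABLE people (id int)"},
                    Down: []string{"DROP TABLE people"},
                },
            },
        }

        n, err := migrate.Exec(db, "sqlite3", memSource, migrate.Up)
        if err != nil {
            log.Fatal(err) // n may still be > 0: earlier migrations stay applied
        }
        log.Printf("applied %d migrations", n)
    }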
Package dynamodb provides the API client, operations, and parameter types for Amazon DynamoDB. Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don't have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling. With DynamoDB, you can create database tables that can store and retrieve any amount of data, and serve any level of request traffic. You can scale up or scale down your tables' throughput capacity without downtime or performance degradation, and use the Amazon Web Services Management Console to monitor resource utilization and performance metrics. DynamoDB automatically spreads the data and traffic for your tables over a sufficient number of servers to handle your throughput and storage requirements, while maintaining consistent and fast performance. All of your data is stored on solid state disks (SSDs) and automatically replicated across multiple Availability Zones in an Amazon Web Services Region, providing built-in high availability and data durability.
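A minimal sketch of creating and calling this client with the AWS SDK for Go v2; the same config.LoadDefaultConfig/NewFromConfig pattern applies to the other AWS service packages below (rds, redshift, neptune, and so on). The "Music" table and its key attributes are placeholders:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/dynamodb"
        "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
    )

    func main() {
        ctx := context.Background()

        // Credentials and region come from the default configuration chain
        // (environment variables, shared config files, instance roles, ...).
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := dynamodb.NewFromConfig(cfg)

        out, err := client.GetItem(ctx, &dynamodb.GetItemInput{
            TableName: aws.String("Music"),
            Key: map[string]types.AttributeValue{
                "Artist":    &types.AttributeValueMemberS{Value: "No One You Know"},
                "SongTitle": &types.AttributeValueMemberS{Value: "Call Me Today"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(out.Item)
    }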
Package rds provides the API client, operations, and parameter types for Amazon Relational Database Service. Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique. Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, Db2, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use. This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list following some related topics from the user guide. Amazon RDS API Reference For the alphabetical list of API actions, see API Actions. For the alphabetical list of data types, see Data Types. For a list of common query parameters, see Common Parameters. For descriptions of the error codes, see Common Errors. Amazon RDS User Guide For a summary of the Amazon RDS interfaces, see Available RDS Interfaces. For more information about how to use the Query API, see Using the Query API.
Package timestreamwrite provides the API client, operations, and parameter types for Amazon Timestream Write. Amazon Timestream is a fast, scalable, fully managed time-series database service that makes it easy to store and analyze trillions of time-series data points per day. With Timestream, you can easily store and analyze IoT sensor data to derive insights from your IoT applications. You can analyze industrial telemetry to streamline equipment management and maintenance. You can also store and analyze log data and metrics to improve the performance and availability of your applications. Timestream is built from the ground up to effectively ingest, process, and store time-series data. It organizes data to optimize query processing. It automatically scales based on the volume of data ingested and on the query volume to ensure you receive optimal performance while inserting and querying data. As your data grows over time, Timestream’s adaptive query processing engine spans across storage tiers to provide fast analysis while reducing costs.
Package geoip2 provides an easy-to-use API for the MaxMind GeoIP2 and GeoLite2 databases; this package does not support GeoIP Legacy databases. The structs provided by this package match the internal structure of the data in the MaxMind databases. See github.com/oschwald/maxminddb-golang for more advanced used cases. Example provides a basic example of using the API. Use of the Country method is analogous to that of the City method.
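A short sketch; the database file path and the IP address are placeholders, and db.Country is used the same way when only country-level data is needed:

    package main

    import (
        "fmt"
        "log"
        "net"

        "github.com/oschwald/geoip2-golang"
    )

    func main() {
        // Path to a GeoIP2/GeoLite2 City database file.
        db, err := geoip2.Open("GeoLite2-City.mmdb")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        record, err := db.City(net.ParseIP("81.2.69.142"))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(record.City.Names["en"], record.Country.IsoCode)
    }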
Package redshift provides the API client, operations, and parameter types for Amazon Redshift. This is an interface reference for Amazon Redshift. It contains documentation for one of the programming or command line interfaces you can use to manage Amazon Redshift clusters. Note that Amazon Redshift is asynchronous, which means that some interfaces may require techniques, such as polling or asynchronous callback handlers, to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a change is applied immediately, on the next instance reboot, or during the next maintenance window. For a summary of the Amazon Redshift cluster management interfaces, go to Using the Amazon Redshift Management Interfaces. Amazon Redshift manages all the work of setting up, operating, and scaling a data warehouse: provisioning capacity, monitoring and backing up the cluster, and applying patches and upgrades to the Amazon Redshift engine. You can focus on using your data to acquire new insights for your business and customers. If you are a first-time user of Amazon Redshift, we recommend that you begin by reading the Amazon Redshift Getting Started Guide. If you are a database developer, the Amazon Redshift Database Developer Guide explains how to design, build, query, and maintain the databases that make up your data warehouse.
Package docdb provides the API client, operations, and parameter types for Amazon DocumentDB with MongoDB compatibility. Amazon DocumentDB is a fast, reliable, and fully managed database service. Amazon DocumentDB makes it easy to set up, operate, and scale MongoDB-compatible databases in the cloud. With Amazon DocumentDB, you can run the same application code and use the same drivers and tools that you use with MongoDB.
Package azcosmos implements the client to interact with the Azure Cosmos DB SQL API. The azcosmos package is capable of: Types of Credentials The clients support different forms of authentication. The azcosmos library supports authorization via Azure Active Directory or an account key. Using Azure Active Directory To create a client, you can use any of the TokenCredential implementations provided by `azidentity`. Using account keys To create a client, you will need the account's endpoint URL and a key credential. Using connection string To create a client, you will need the account's connection string. The following are relevant concepts for the usage of the client: The following sections provide several code snippets covering some of the most common Cosmos DB tasks, including: Create a database and obtain a `DatabaseClient` to perform operations on your newly created database. Create a container on an existing database and obtain a `ContainerClient` to perform operations on your newly created container. Creating, reading, and deleting items Querying items Querying items with parametrized queries Using Transactional batch
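A hedged sketch of key-based authentication plus a couple of item operations, based on my reading of the azcosmos API; the endpoint, key, database, container and item fields are placeholders:

    package main

    import (
        "context"
        "encoding/json"
        "log"

        "github.com/Azure/azure-sdk-for-go/sdk/data/azcosmos"
    )

    func main() {
        ctx := context.Background()

        cred, err := azcosmos.NewKeyCredential("<account key>")
        if err != nil {
            log.Fatal(err)
        }
        client, err := azcosmos.NewClientWithKey("https://<account>.documents.azure.com:443/", cred, nil)
        if err != nil {
            log.Fatal(err)
        }

        database, err := client.NewDatabase("mydb")
        if err != nil {
            log.Fatal(err)
        }
        _ = database // database-level operations (e.g. creating containers) hang off this client

        container, err := client.NewContainer("mydb", "mycontainer")
        if err != nil {
            log.Fatal(err)
        }

        item := map[string]string{"id": "1", "category": "books", "title": "Go in Practice"}
        body, _ := json.Marshal(item)

        pk := azcosmos.NewPartitionKeyString("books")
        if _, err := container.CreateItem(ctx, pk, body, nil); err != nil {
            log.Fatal(err)
        }
        if _, err := container.ReadItem(ctx, pk, "1", nil); err != nil {
            log.Fatal(err)
        }
    }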
Package neptune provides the API client, operations, and parameter types for Amazon Neptune. Amazon Neptune is a fast, reliable, fully-managed graph database service that makes it easy to build and run applications that work with highly connected datasets. The core of Amazon Neptune is a purpose-built, high-performance graph database engine optimized for storing billions of relationships and querying the graph with millisecond latency. Amazon Neptune supports the popular graph models Property Graph and W3C's RDF, and their respective query languages Apache TinkerPop Gremlin and SPARQL, allowing you to easily build queries that efficiently navigate highly connected datasets. Neptune powers graph use cases such as recommendation engines, fraud detection, knowledge graphs, drug discovery, and network security. This interface reference for Amazon Neptune contains documentation for a programming or command line interface you can use to manage Amazon Neptune. Note that Amazon Neptune is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list following some related topics from the user guide.
Package logr defines a general-purpose logging API and abstract interfaces to back that API. Packages in the Go ecosystem can depend on this package, while callers can implement logging with whatever backend is appropriate. Logging is done using a Logger instance. Logger is a concrete type with methods, which defers the actual logging to a LogSink interface. The main methods of Logger are Info() and Error(). Arguments to Info() and Error() are key/value pairs rather than printf-style formatted strings, emphasizing "structured logging". With Go's standard log package, we might write: With logr's structured logging, we'd write: Errors are much the same. Instead of: We'd write: Info() and Error() are very similar, but they are separate methods so that LogSink implementations can choose to do things like attach additional information (such as stack traces) on calls to Error(). Error() messages are always logged, regardless of the current verbosity. If there is no error instance available, passing nil is valid. Often we want to log information only when the application is in "verbose mode". To write log lines that are more verbose, Logger has a V() method. The higher the V-level of a log line, the less critical it is considered. Log-lines with V-levels that are not enabled (as per the LogSink) will not be written. Level V(0) is the default, and logger.V(0).Info() has the same meaning as logger.Info(). Negative V-levels have the same meaning as V(0). Error messages do not have a verbosity level and are always logged. Where we might have written: We can write: Logger instances can have name strings so that all messages logged through that instance have additional context. For example, you might want to add a subsystem name: The WithName() method returns a new Logger, which can be passed to constructors or other functions for further use. Repeated use of WithName() will accumulate name "segments". These name segments will be joined in some way by the LogSink implementation. It is strongly recommended that name segments contain simple identifiers (letters, digits, and hyphen), and do not contain characters that could muddle the log output or confuse the joining operation (e.g. whitespace, commas, periods, slashes, brackets, quotes, etc). Logger instances can store any number of key/value pairs, which will be logged alongside all messages logged through that instance. For example, you might want to create a Logger instance per managed object: With the standard log package, we might write: With logr we'd write: Logger has very few hard rules, with the goal that LogSink implementations might have a lot of freedom to differentiate. There are, however, some things to consider. The log message consists of a constant message attached to the log line. This should generally be a simple description of what's occurring, and should never be a format string. Variable information can then be attached using named values. Keys are arbitrary strings, but should generally be constant values. Values may be any Go value, but how the value is formatted is determined by the LogSink implementation. Logger instances are meant to be passed around by value. Code that receives such a value can call its methods without having to check whether the instance is ready for use. The zero logger (= Logger{}) is identical to Discard() and discards all log entries. Code that receives a Logger by value can simply call its methods; they will never crash. For cases where passing a logger is optional, a pointer to Logger should be used.
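A small sketch of the API surface described above; the logger names, keys and helper function are illustrative, and the zero Logger is used only so the snippet runs without choosing a LogSink implementation:

    package main

    import (
        "errors"

        "github.com/go-logr/logr"
    )

    // doSomething accepts a Logger by value; any LogSink-backed implementation
    // (zapr, stdr, logrusr, ...) can be passed in by the caller.
    func doSomething(log logr.Logger, userID string) {
        // Accumulate a name segment and key/value pairs for all subsequent lines.
        log = log.WithName("billing").WithValues("user", userID)

        // Structured key/value pairs instead of printf-style formatting.
        log.Info("starting reconciliation", "attempt", 1)

        // Verbose output: only emitted if the sink enables V(1) or higher.
        log.V(1).Info("loaded configuration", "flags", []string{"fast", "dry-run"})

        if err := chargeCard(userID); err != nil {
            // Errors are always logged, regardless of verbosity.
            log.Error(err, "failed to charge card")
        }
    }

    func chargeCard(string) error { return errors.New("card declined") }

    func main() {
        // The zero Logger discards everything, so this is safe to run as-is.
        doSomething(logr.Logger{}, "u-123")
    }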
Keys are not strictly required to conform to any specification or regex, but it is recommended that they: These guidelines help ensure that log data is processed properly regardless of the log implementation. For example, log implementations will try to output JSON data or will store data for later database (e.g. SQL) queries. While users are generally free to use key names of their choice, it's generally best to avoid using the following keys, as they're frequently used by implementations: Implementations are encouraged to make use of these keys to represent the above concepts, when necessary (for example, in a pure-JSON output form, it would be necessary to represent at least message and timestamp as ordinary named values). Implementations may choose to give callers access to the underlying logging implementation. The recommended pattern for this is: Logger grants access to the sink to enable type assertions like this: Custom `With*` functions can be implemented by copying the complete Logger struct and replacing the sink in the copy: Don't use New to construct a new Logger with a LogSink retrieved from an existing Logger. Source code attribution might not work correctly and unexported fields in Logger get lost. Beware that the same LogSink instance may be shared by different logger instances. Calling functions that modify the LogSink will affect all of those.
Package databasemigrationservice provides the API client, operations, and parameter types for AWS Database Migration Service. Database Migration Service (DMS) can migrate your data to and from the most widely used commercial and open-source databases such as Oracle, PostgreSQL, Microsoft SQL Server, Amazon Redshift, MariaDB, Amazon Aurora, MySQL, and SAP Adaptive Server Enterprise (ASE). The service supports homogeneous migrations such as Oracle to Oracle, as well as heterogeneous migrations between different database platforms, such as Oracle to MySQL or SQL Server to PostgreSQL. For more information about DMS, see What Is Database Migration Service? in the Database Migration Service User Guide.
Package lightsail provides the API client, operations, and parameter types for Amazon Lightsail. Amazon Lightsail is the easiest way to get started with Amazon Web Services for developers who need to build websites or web applications. It includes everything you need to launch your project quickly - instances (virtual private servers), container services, storage buckets, managed databases, SSD-based block storage, static IP addresses, load balancers, content delivery network (CDN) distributions, DNS management of registered domains, and resource snapshots (backups) - for a low, predictable monthly price. You can manage your Lightsail resources using the Lightsail console, Lightsail API, Command Line Interface (CLI), or SDKs. For more information about Lightsail concepts and tasks, see the Amazon Lightsail Developer Guide. This API Reference provides detailed information about the actions, data types, parameters, and errors of the Lightsail service. For more information about the supported Amazon Web Services Regions, endpoints, and service quotas of the Lightsail service, see Amazon Lightsail Endpoints and Quotas in the Amazon Web Services General Reference.
Package costexplorer provides the API client, operations, and parameter types for AWS Cost Explorer Service. You can use the Cost Explorer API to programmatically query your cost and usage data. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data. This might include the number of daily write operations for Amazon DynamoDB database tables in your production environment. The Cost Explorer API provides the following endpoint: For information about the costs that are associated with the Cost Explorer API, see Amazon Web Services Cost Management Pricing.
Package datapipeline provides the API client, operations, and parameter types for AWS Data Pipeline. AWS Data Pipeline configures and manages a data-driven workflow called a pipeline. AWS Data Pipeline handles the details of scheduling and ensuring that data dependencies are met so that your application can focus on processing the data. AWS Data Pipeline provides a JAR implementation of a task runner called AWS Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for common data management scenarios, such as performing database queries and running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can use AWS Data Pipeline Task Runner as your task runner, or you can write your own task runner to provide custom data management. AWS Data Pipeline implements two main sets of functionality. Use the first set to create a pipeline and define data sources, schedules, dependencies, and the transforms to be performed on the data. Use the second set in your task runner application to receive the next task ready for processing. The logic for performing the task, such as querying the data, running data analysis, or converting the data from one format to another, is contained within the task runner. The task runner performs the task assigned to it by the web service, reporting progress to the web service as it does so. When the task is done, the task runner reports the final success or failure of the task to the web service.
Package ql implements a pure Go embedded SQL database engine. QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be). 2018-08-02: Release v1.2.0 adds initial support for Go modules. 2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom. 2016-07-29: Release v1.0.6 enables alternatively using = instead of == for the equality operation. 2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (github.com/cznic/lldb). 2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file. 2016-04-03: Release v1.0.3 fixes a data race. 2016-03-23: Release v1.0.2 vendors github.com/cznic/exp/lldb and github.com/camlistore/go4/lock. 2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change. 2016-03-05: The current version has been tagged v1.0.0. 2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument. 2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste-ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and a query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`. 2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful together with a UNIQUE clause to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed. 2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default. 2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN. 2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. 2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom) 2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section. 2015-01-17: Logical operators || and && now have alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards".
2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction. 2014-09-21: id() now optionally accepts a single argument - a table name. 2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate. 2014-08-08: The built in functions max and min now accept also time values. Thanks opennota! (https://github.com/opennota) 2014-06-05: RecordSet interface extended by new methods FirstRow and Rows. 2014-06-02: Indices on id() are now used by SELECT statements. 2014-05-07: Introduction of Marshal, Schema, Unmarshal. 2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX. 2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry. 2014-04-11: Introduction of LIMIT, OFFSET. 2014-04-10: Introduction of query rewriting. 2014-04-07: Introduction of indices. QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}. For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF). Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements.
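A hedged sketch of the batching advice above, using the package's native OpenFile/Run API; the import path, file name and schema are assumptions, so check the exact signatures against the package docs:

    package main

    import (
        "fmt"
        "log"

        "github.com/cznic/ql" // also published as modernc.org/ql
    )

    func main() {
        // Open (or create) a file-backed database.
        db, err := ql.OpenFile("example.db", &ql.Options{CanCreate: true})
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        ctx := ql.NewRWCtx()
        if _, _, err := db.Run(ctx, `
            BEGIN TRANSACTION;
                CREATE TABLE IF NOT EXISTS t (c string);
            COMMIT;
        `); err != nil {
            log.Fatal(err)
        }

        // Batch many inserts inside ONE transaction rather than one transaction per row.
        if _, _, err := db.Run(ctx, "BEGIN TRANSACTION;"); err != nil {
            log.Fatal(err)
        }
        for i := 0; i < 1000; i++ {
            if _, _, err := db.Run(ctx, "INSERT INTO t VALUES ($1);", fmt.Sprint(i)); err != nil {
                log.Fatal(err)
            }
        }
        if _, _, err := db.Run(ctx, "COMMIT;"); err != nil {
            log.Fatal(err)
        }
    }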
The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter. Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. An identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example No identifiers are predeclared, however note that no keyword can be used as an identifier. Identifiers starting with two underscores are used for meta data virtual tables names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive. The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. 
For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4. Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes “. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value. Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. 
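A short, hedged sketch of passing QL parameters at execution time through database/sql follows; the driver registration and the table product with columns Name and Price are assumptions, and '?' placeholders behave the same way as '$n'.

    package example

    import "database/sql"

    // listCheap returns the names of products whose Price is below max.
    // The value of $1 is supplied at execution time, not at parse time.
    func listCheap(db *sql.DB, max float64) ([]string, error) {
        rows, err := db.Query("SELECT Name FROM product WHERE Price < $1;", max)
        if err != nil {
            return nil, err
        }
        defer rows.Close()
        var names []string
        for rows.Next() {
            var n string
            if err := rows.Scan(&n); err != nil {
                return nil, err
            }
            names = append(names, n)
        }
        return names, rows.Err()
    }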
A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive. Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expression are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply - The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range. - A constant index must be non-negative and representable by a value of type int. - A constant index must be in range if the string a is a literal. - If x is out of range at run time, a run-time error occurs. - s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal. For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand The indices low and high are in range if 0 <= low <= high <= len(a), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. 
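The indexing and slicing rules above can be exercised from Go roughly as sketched below; the table t, the string column Name, and the assumption that the driver can scan the resulting byte and substring into the given Go variables are all illustrative, not guaranteed by this specification.

    package example

    import "database/sql"

    // firstBytes selects a single byte and a substring of the column Name.
    // Name[0] has type byte; Name[0:4] is a substring with indices starting at 0.
    func firstBytes(db *sql.DB, id int64) (byte, string, error) {
        var b byte
        var head string
        err := db.QueryRow("SELECT Name[0], Name[0:4] FROM t WHERE id() == $1;", id).Scan(&b, &head)
        return b, head, err
    }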
Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expression must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt. The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR) Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; +,- also applies to times. All other arithmetic operators apply to integers only. 
+    sum                   integers, rationals, floats, complex values, strings
-    difference            integers, rationals, floats, complex values, times
*    product               integers, rationals, floats, complex values
/    quotient              integers, rationals, floats, complex values
%    remainder             integers

&    bitwise AND           integers
|    bitwise OR            integers
^    bitwise XOR           integers
&^   bit clear (AND NOT)   integers

<<   left shift            integer << unsigned integer
>>   right shift           integer >> unsigned integer

Strings can be concatenated using the + operator. String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other, producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation. The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity. For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on "wrap around". For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow, but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered.
These terms and the result of the comparisons are defined as follows - Boolean values are comparable. Two boolean values are equal if they are either both true or both false. - Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v). - Integer values are comparable and ordered, in the usual way. Note that durations are integers. - Floating point values are comparable and ordered, as defined by the IEEE-754 standard. - Rational values are comparable and ordered, in the usual way. - String and Blob values are comparable and ordered, lexically byte-wise. - Time values are comparable and ordered. Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string. Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases: - x is representable by a value of type T. - x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value. - x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case. Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases: - x has type T. - x's type and T are both integer or floating point types. - x's type and T are both complex types. - x is an integer, except bigint or duration, and T is a string type. Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply 1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow. 2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero). 3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not. In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent. 1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. 
Values outside the range of valid Unicode code points are converted to "\uFFFD". 2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob. 3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string. 4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer. 5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format. 6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1). 7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format. 8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit. 9. Converting a string value to a duration yields the duration represented by the string. A duration string is a possibly signed sequence of decimal numbers, each with an optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". 10. Converting a time value to a string returns the time formatted using the format string When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. IOW, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must eventually be balanced by exactly one of the COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure, no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted.
Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. An index name must not be the same as the name of any existing table, and it also cannot be the same as any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or whose single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables. A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeds. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows: 1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2. 2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value. 3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4. 4.
All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made. Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, allowing values to be inserted that are calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and whether indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted into the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level, discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause.
For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in a conversion or as a function name in the Call clause, denote names of (other) fields whose values should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed. The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never caused expr to evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would otherwise be provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10]. Resulting rows of a SELECT statement can optionally be ordered by the ORDER BY clause. Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal). The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type. Another form of the WHERE clause is an existence predicate of a parenthesized select statement. The EXISTS form evaluates to true if the parenthesized SELECT statement produces a non empty record set. The NOT EXISTS form evaluates to true if the parenthesized SELECT statement produces an empty record set. The parenthesized SELECT statement is evaluated only once (TODO issue #159). The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets. The optional OFFSET clause allows ignoring the first N records. For example The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration. The optional LIMIT clause allows ignoring all but the first N records.
For example The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration. The LIMIT and OFFSET clauses can be combined. For example Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed. A SELECT statement is evaluated in the following order: 1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements). 2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...) 3. If present, the WHERE clause is evaluated on the result set of the previous evaluation. 4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s). 5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s). 6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s). 7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s). 8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations. 9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations. Truncate table statements remove all records from a table. The table must exist. For example Update statements change values of fields in rows of a table. For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. To allow querying for DB meta data, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table. The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual. The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual. The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName. The table __Index lists all indices in the DB. The schema is The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual. Built-in functions are predeclared. The built-in aggregate function avg returns the average of the values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type. The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL.
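A hedged sketch combining the avg aggregate and the contains function described above; the table product with columns Name and Price, and the use of database/sql with a registered ql driver, are assumptions for illustration only.

    package example

    import "database/sql"

    // avgMatchingPrice averages Price over the rows whose Name contains substr.
    // Because avg yields NULL for an empty record set (and ignores NULL values),
    // the result is scanned into a sql.NullFloat64.
    func avgMatchingPrice(db *sql.DB, substr string) (sql.NullFloat64, error) {
        var avg sql.NullFloat64
        err := db.QueryRow("SELECT avg(Price) FROM product WHERE contains(Name, $1);", substr).Scan(&avg)
        return avg, err
    }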
The built-in aggregate function count returns how many times an expression has a non NULL values or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example Date returns the time corresponding to in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL. The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL. The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time, would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL. NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same what Go (time.Time).String() returns and in fact formatTime directly calls t.String(). returns on a machine in the CET time zone, but may return on a machine in the ACDT zone. The time value is in both cases the same so its ordering and comparing is correct. Only the display value can differ. The built-in functions formatFloat and formatInt format numbers to strings using go's number format functions in the `strconv` package. For all three functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. returns returns returns Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned. The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL. The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL. The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL. The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL. The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. 
Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join. For example The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL. The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example The column values must be of an ordered type. The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL. The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL. The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL. The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL. The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL. The built-in function now returns the current local time. The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset.
To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL. The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL. The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL. The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL. The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type. The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL. The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL. The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL. The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL. Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of complex, real, imag functions is NULL the result is NULL. For the numeric types, the following sizes are guaranteed Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5]. Links from the above documentation This section is not part of the specification. WARNING: The implementation of indices is new and it surely needs more time to become mature. Indices are used currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.). Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. 
However, such a query is now automatically rewritten to which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering. Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore. Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
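Tying together the CREATE INDEX, EXPLAIN and cross-join guidance above, here is a hedged end-to-end sketch; the driver name "ql", the import path, and the table/index names t, u, f, tf, uf are assumptions for illustration.

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/cznic/ql/driver" // assumed import path and driver registration
    )

    func main() {
        db, err := sql.Open("ql", "test.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Create the tables and the single-column indices discussed above,
        // inside one transaction because these statements mutate the DB.
        tx, err := db.Begin()
        if err != nil {
            log.Fatal(err)
        }
        for _, s := range []string{
            "CREATE TABLE IF NOT EXISTS t (f int);",
            "CREATE TABLE IF NOT EXISTS u (f int);",
            "CREATE INDEX IF NOT EXISTS tf ON t (f);",
            "CREATE INDEX IF NOT EXISTS uf ON u (f);",
        } {
            if _, err := tx.Exec(s); err != nil {
                log.Fatal(err)
            }
        }
        if err := tx.Commit(); err != nil {
            log.Fatal(err)
        }

        // EXPLAIN returns a record set of text lines describing the plan,
        // including whether the indices are used for the filtered cross join.
        rows, err := db.Query("EXPLAIN SELECT * FROM t, u WHERE t.f < 4 && u.f > 9;")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
        for rows.Next() {
            var line string
            if err := rows.Scan(&line); err != nil {
                log.Fatal(err)
            }
            fmt.Println(line)
        }
    }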
Package dax provides the API client, operations, and parameter types for Amazon DynamoDB Accelerator (DAX). DAX is a managed caching service engineered for Amazon DynamoDB. DAX dramatically speeds up database reads by caching frequently-accessed data from DynamoDB, so applications can access that data with sub-millisecond latency. You can create a DAX cluster easily, using the AWS Management Console. With a few simple modifications to your code, your application can begin taking advantage of the DAX cluster and realize significant improvements in read performance.
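A brief sketch of creating the DAX management API client with the AWS SDK for Go v2 and listing clusters follows; the region and credentials are assumed to come from the default configuration chain, and the module paths shown are the standard SDK layout rather than anything specific to this package.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/dax"
    )

    func main() {
        ctx := context.Background()

        // Region and credentials are resolved from the environment / shared config.
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }

        client := dax.NewFromConfig(cfg)

        // List the DAX clusters visible to the caller.
        out, err := client.DescribeClusters(ctx, &dax.DescribeClustersInput{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range out.Clusters {
            fmt.Println(aws.ToString(c.ClusterName), aws.ToString(c.Status))
        }
    }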
Package gosnowflake is a pure Go Snowflake driver for the database/sql package. Clients can use the database/sql package directly. For example: Use the Open() function to create a database handle with connection parameters: The Go Snowflake Driver supports the following connection syntaxes (or data source name (DSN) formats): where all parameters must be escaped or use Config and DSN to construct a DSN string. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). The following example opens a database handle with the Snowflake account named "my_account" under the organization named "my_organization", where the username is "jsmith", password is "mypassword", database is "mydb", schema is "testschema", and warehouse is "mywh": The connection string (DSN) can contain both connection parameters (described below) and session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). The following connection parameters are supported: account <string>: Specifies your Snowflake account, where "<string>" is the account identifier assigned to your account by Snowflake. For information about account identifiers, see the Snowflake documentation (https://docs.snowflake.com/en/user-guide/admin-account-identifier.html). If you are using a global URL, then append the connection group and ".global" (e.g. "<account_identifier>-<connection_group>.global"). The account identifier and the connection group are separated by a dash ("-"), as shown above. This parameter is optional if your account identifier is specified after the "@" character in the connection string. region <string>: DEPRECATED. You may specify a region, such as "eu-central-1", with this parameter. However, since this parameter is deprecated, it is best to specify the region as part of the account parameter. For details, see the description of the account parameter. database: Specifies the database to use by default in the client session (can be changed after login). schema: Specifies the database schema to use by default in the client session (can be changed after login). warehouse: Specifies the virtual warehouse to use by default for queries, loading, etc. in the client session (can be changed after login). role: Specifies the role to use by default for accessing Snowflake objects in the client session (can be changed after login). passcode: Specifies the passcode provided by Duo when using multi-factor authentication (MFA) for login. passcodeInPassword: false by default. Set to true if the MFA passcode is embedded in the login password. Appends the MFA passcode to the end of the password. loginTimeout: Specifies the timeout, in seconds, for login. The default is 60 seconds. The login request gives up after the timeout length if the HTTP response is success. requestTimeout: Specifies the timeout, in seconds, for a query to complete. 0 (zero) specifies that the driver should wait indefinitely. The default is 0 seconds. The query request gives up after the timeout length if the HTTP response is success. authenticator: Specifies the authenticator to use for authenticating user credentials: To use the internal Snowflake authenticator, specify snowflake (Default). If you want to cache your MFA logins, use AuthTypeUsernamePasswordMFA authenticator. To authenticate through Okta, specify https://<okta_account_name>.okta.com (URL prefix for Okta). To authenticate using your IDP via a browser, specify externalbrowser. 
To authenticate via OAuth, specify oauth and provide an OAuth Access Token (see the token parameter below). application: Identifies your application to Snowflake Support. insecureMode: false by default. Set to true to bypass the Online Certificate Status Protocol (OCSP) certificate revocation check. IMPORTANT: Change the default value for testing or emergency situations only. token: a token that can be used to authenticate. Should be used in conjunction with the "oauth" authenticator. client_session_keep_alive: Set to true to have a heartbeat in the background every hour to keep the connection alive, such that the connection session will never expire. Care should be taken in using this option as it opens up access for as long as the process is alive. ocspFailOpen: true by default. Set to false to make the OCSP check run in fail-closed mode. validateDefaultParameters: true by default. Set to false to disable the existence and privileges checks for Database, Schema, Warehouse and Role when setting up the connection. tracing: Specifies the logging level to be used. Set to error by default. Valid values are trace, debug, info, print, warning, error, fatal, panic. disableQueryContextCache: disables parsing of the query context returned from the server and resending it to the server. Default value is false. clientConfigFile: specifies the location of the client configuration json file. In this file you can configure the Easy Logging feature. disableSamlURLCheck: disables the SAML URL check. Default value is false. All other parameters are interpreted as session parameters (https://docs.snowflake.com/en/sql-reference/parameters.html). For example, the TIMESTAMP_OUTPUT_FORMAT session parameter can be set by adding: A complete connection string looks similar to the following: Session-level parameters can also be set by using the SQL command "ALTER SESSION" (https://docs.snowflake.com/en/sql-reference/sql/alter-session.html). Alternatively, use the OpenWithConfig() function to create a database handle with the specified Config. # Connection Config You can also connect to your warehouse using the connection config. The database/sql library recommends this approach when you want to take advantage of driver-specific connection features that aren't available in a connection string. Each driver supports its own set of connection properties, often providing ways to customize the connection request specific to the DBMS. For example: If you are using this method, you don't need to pass a driver name to specify the driver type to which you want to connect. Since the driver name is not needed, you can optionally bypass driver registration on startup. To do this, set `GOSNOWFLAKE_SKIP_REGISTERATION` in your environment. This is useful if you wish to register multiple versions of the driver. Note: GOSNOWFLAKE_SKIP_REGISTERATION should not be used if sql.Open() is used as the method to connect to the server, as sql.Open will require registration so it can map the driver name to the driver type, which in this case is "snowflake" and SnowflakeDriver{}. You can load the connection configuration from a .toml file. With the two environment variables SNOWFLAKE_HOME (the connections.toml file directory) and SNOWFLAKE_DEFAULT_CONNECTION_NAME (the DSN name), the driver will search for the config file and load the connection.
The Go Snowflake Driver honors the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY for the forward proxy setting. NO_PROXY specifies which hostname endings should be allowed to bypass the proxy server, e.g. no_proxy=.amazonaws.com means that Amazon S3 access does not need to go through the proxy. NO_PROXY does not support wildcards. Each value specified should be one of the following: The end of a hostname (or a complete hostname), for example: ".amazonaws.com" or "xy12345.snowflakecomputing.com". An IP address, for example "192.196.1.15". If more than one value is specified, values should be separated by commas, for example: By default, the driver's builtin logger exposes logrus's FieldLogger and defaults to the INFO level. Users can use SetLogger in driver.go to set a customized logger for the gosnowflake package. In order to enable debug logging for the driver, users can call SetLogLevel("debug") on the SFLogger interface, as shown in the demo code at cmd/logger.go. To redirect the logs, the SFLogger.SetOutput method can do the work. A custom query tag can be set in the context. Each query run with this context will include the custom query tag as metadata that will appear in the Query Tag column in the Query History log. For example: A specific query request ID can be set in the context and will be passed through in place of the default randomized request ID. For example: If you need the query ID for your query, you have to use a raw connection. For queries: ``` ``` For execs: ``` ``` The result of your query can be retrieved by setting the query ID in the WithFetchResultByID context. ``` ``` From 0.5.0, signal handling responsibility has moved to the applications. If you want to cancel a query/command by Ctrl+C, add an os.Interrupt trap in the context to execute methods that can take the context parameter (e.g. QueryContext, ExecContext). See cmd/selectmany.go for the full example. The Go Snowflake Driver now supports the Arrow data format for data transfers between Snowflake and the Golang client. The Arrow data format avoids extra conversions between binary and textual representations of the data. The Arrow data format can improve performance and reduce memory consumption in clients. Snowflake continues to support the JSON data format. The data format is controlled by the session-level parameter GO_QUERY_RESULT_FORMAT. To use JSON format, execute: The valid values for the parameter are: If the user attempts to set the parameter to an invalid value, an error is returned. The parameter name and the parameter value are case-insensitive. This parameter can be set only at the session level. Usage notes: The Arrow data format reduces rounding errors in floating point numbers. You might see slightly different values for floating point numbers when using Arrow format than when using JSON format. In order to take advantage of the increased precision, you must pass in the context.Context object provided by the WithHigherPrecision function when querying. Traditionally, the rows.Scan() method returned a string when a variable of type interface{} was passed in. Turning on the flag ENABLE_HIGHER_PRECISION via WithHigherPrecision will return the natural, expected data type as well. For some numeric data types, the driver can retrieve larger values when using the Arrow format than when using the JSON format.
For example, using Arrow format allows the full range of SQL NUMERIC(38,0) values to be retrieved, while using JSON format allows only values in the range supported by the Golang int64 data type. Users should ensure that Golang variables are declared using the appropriate data type for the full range of values contained in the column. For an example, see below. When using the Arrow format, the driver supports more Golang data types and more ways to convert SQL values to those Golang data types. The table below lists the supported Snowflake SQL data types and the corresponding Golang data types. The columns are: The SQL data type. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from Arrow data format via an interface{}. The possible Golang data types that can be returned when you use snowflakeRows.Scan() to read data from Arrow data format directly. The default Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format via an interface{}. (All returned values are strings.) The standard Golang data type that is returned when you use snowflakeRows.Scan() to read data from JSON data format directly.

Go Data Types for Scan()

===================================================================================================================
              |                          ARROW                           |                      JSON
===================================================================================================================
SQL Data Type | Default Go Data Type for Scan() interface{} | Supported Go Data Types for Scan() | Default Go Data Type for Scan() interface{} | Supported Go Data Types for Scan()
===================================================================================================================
BOOLEAN | bool | string | bool
-------------------------------------------------------------------------------------------------------------------
VARCHAR | string | string
-------------------------------------------------------------------------------------------------------------------
DOUBLE | float32, float64 [1], [2] | string | float32, float64
-------------------------------------------------------------------------------------------------------------------
INTEGER that fits in int64 | int, int8, int16, int32, int64 [1], [2] | string | int, int8, int16, int32, int64
-------------------------------------------------------------------------------------------------------------------
INTEGER that doesn't fit in int64 | int, int8, int16, int32, int64, *big.Int [1], [2], [3], [4] | string | error
-------------------------------------------------------------------------------------------------------------------
NUMBER(P, S) where S > 0 | float32, float64, *big.Float [1], [2], [3], [5] | string | float32, float64
-------------------------------------------------------------------------------------------------------------------
DATE | time.Time | string | time.Time
-------------------------------------------------------------------------------------------------------------------
TIME | time.Time | string | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_LTZ | time.Time | string | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_NTZ | time.Time | string | time.Time
-------------------------------------------------------------------------------------------------------------------
TIMESTAMP_TZ | time.Time | string | time.Time
-------------------------------------------------------------------------------------------------------------------
BINARY | []byte | string | []byte
-------------------------------------------------------------------------------------------------------------------
ARRAY [6] | string / array | string / array
-------------------------------------------------------------------------------------------------------------------
OBJECT [6] | string / struct | string / struct
-------------------------------------------------------------------------------------------------------------------
VARIANT | string | string
-------------------------------------------------------------------------------------------------------------------
MAP | map | map

[1] Converting from a higher precision data type to a lower precision data type via the snowflakeRows.Scan() method can lose low bits (lose precision), lose high bits (completely change the value), or result in error.
[2] Attempting to convert from a higher precision data type to a lower precision data type via interface{} causes an error.
[3] Higher precision data types like *big.Int and *big.Float can be accessed by querying with a context returned by WithHigherPrecision().
[4] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Int64()/.String()/.Uint64() methods. For an example, see below.
[5] You cannot directly Scan() into the alternative data types via snowflakeRows.Scan(), but can convert to those data types by using .Float32()/.String()/.Float64() methods. For an example, see below.
[6] Arrays and objects can be either semistructured or structured, see more info in the section below.

Note: SQL NULL values are converted to Golang nil values, and vice-versa. Snowflake supports two flavours of "structured data" - semistructured and structured. Semistructured types are variants, objects and arrays without schema. When data is fetched, it's represented as strings and the client is responsible for its interpretation. Example table definition: The data does not have any corresponding schema, so values in the table may be slightly different. Semistructured variants, objects and arrays are always represented as strings for scanning: When inserting, a marker indicating the correct type must be used, for example: Structured types differ from semistructured types by having a specific schema. In all rows of the table, values must conform to this schema. Example table definition: To retrieve structured objects, follow these steps: 1. Create a struct implementing the sql.Scanner interface, example: a) b) The automatic scan goes through all fields in a struct and reads object fields. Struct fields have to be public. Embedded structs have to be pointers. The matching name is built from the struct field name with the first letter lowercased. Additionally, an `sf` tag can be added: - the first value is always the name of a field in the SQL object - additionally an `ignore` parameter can be passed to omit this field 2. Use the WithStructuredTypesEnabled context while querying data. 3. Use it in a regular scan: See StructuredObject for all available operations including null support, embedding nested structs, etc. Retrieving arrays of simple types works exactly the same as for normal values - using the Scan function.
You can use the WithMapValuesNullable and WithArrayValuesNullable contexts to handle null values in, respectively, maps and arrays of simple types in the database. In that case, sql null types will be used: If you want to scan an array of structs, you have to use the helper function ScanArrayOfScanners: Retrieving structured maps is very similar to retrieving arrays: To bind structured objects use: 1. Create a type which implements the StructuredObjectWriter interface, example: a) b) 2. Use an instance as a regular bind. 3. If you need to bind a nil value, use special syntax: Binding structured arrays is like binding any other parameter. The only difference is that if you want to insert an empty array (not nil but empty), you have to use: The following example shows how to retrieve very large values using the math/big package. This example retrieves a large INTEGER value to an interface and then extracts a big.Int value from that interface. If the value fits into an int64, then the code also copies the value to a variable of type int64. Note that a context that enables higher precision must be passed in with the query. If the variable named "rows" is known to contain a big.Int, then you can use the following instead of scanning into an interface and then converting to a big.Int: If the variable named "rows" contains a big.Int, then each of the following fails: Similar code and rules also apply to big.Float values. If you are not sure what data type will be returned, you can use code similar to the following to check the data type of the returned value: You can retrieve data in a columnar format similar to the format a server returns, without transposing it to rows. When working with the arrow columnar format in the Go driver, ArrowBatch structs are used. These are structs mostly corresponding to data chunks received from the backend. They allow for access to specific arrow.Record structs. An ArrowBatch can exist in a state where the underlying data has not yet been loaded. The data is downloaded and translated only on demand. Translation options are retrieved from a context.Context interface, which is either passed from the query context or set by the user using the WithContext(ctx) method. In order to access them, you must use the `WithArrowBatches` context, similar to the following: This returns []*ArrowBatch. ArrowBatch functions: GetRowCount(): Returns the number of rows in the ArrowBatch. Note that this returns 0 if the data has not yet been loaded, irrespective of its actual size. WithContext(ctx context.Context): Sets the context of the ArrowBatch to the one provided. Note that the context will not retroactively apply to data that has already been downloaded. For example: will produce the same result in records1 and records2, irrespective of the newly provided ctx. Contexts worth noting are: WithArrowBatchesTimestampOption, WithHigherPrecision and WithArrowBatchesUtf8Validation, described in more detail later. Fetch(): Returns the underlying records as *[]arrow.Record. When this function is called, the ArrowBatch checks whether the underlying data has already been loaded, and downloads it if not. Limitations: How to handle timestamps in Arrow batches: Snowflake returns timestamps natively (from backend to driver) in multiple formats. The Arrow timestamp is an 8-byte data type, which is insufficient to handle the larger date and time ranges used by Snowflake. Also, Snowflake supports 0-9 (nanosecond) digit precision for seconds, while Arrow supports only 3 (millisecond), 6 (microsecond), and 9 (nanosecond) precision.
Consequently, Snowflake uses a custom timestamp format in Arrow, which differs depending on the timestamp type and precision. If you want to use timestamps in Arrow batches, you have two options: How to handle invalid UTF-8 characters in Arrow batches: Snowflake previously allowed users to upload data with invalid UTF-8 characters. Consequently, Arrow records containing string columns in Snowflake could include these invalid UTF-8 characters. However, according to the Arrow specifications (https://arrow.apache.org/docs/cpp/api/datatype.html and https://github.com/apache/arrow/blob/a03d957b5b8d0425f9d5b6c98b6ee1efa56a1248/go/arrow/datatype.go#L73-L74), Arrow string columns should only contain UTF-8 characters. To address this issue and prevent potential downstream disruptions, the context WithArrowBatchesUtf8Validation is introduced. When enabled, this feature iterates through all values in string columns, identifying and replacing any invalid characters with `�`. This ensures that Arrow records conform to the UTF-8 standards, preventing validation failures in downstream services like the Rust Arrow library that impose strict validation checks. How to handle higher precision in Arrow batches: To preserve BigDecimal values within Arrow batches, use WithHigherPrecision. This offers two main benefits: it helps avoid precision loss and defers the conversion to upstream services. Alternatively, without this setting, all non-zero scale numbers will be converted to float64, potentially resulting in loss of precision. Zero-scale numbers (DECIMAL256, DECIMAL128) will be converted to int64, which could lead to overflow. Binding allows a SQL statement to use a value that is stored in a Golang variable. Without binding, a SQL statement specifies values by specifying literals inside the statement. For example, the following statement uses the literal value "42" in an UPDATE statement: With binding, you can execute a SQL statement that uses a value that is inside a variable. For example: The "?" inside the "VALUES" clause specifies that the SQL statement uses the value from a variable. Binding data that involves time zones can require special handling. For details, see the section titled "Timestamps with Time Zones". Version 1.6.23 (and later) of the driver takes advantage of sql.Null types, which enable the proper handling of null parameters inside function calls, i.e.: Timestamp nullability had to be achieved by wrapping the sql.NullTime type, as Snowflake provides several date and time types which are all mapped to a single Go time.Time type: Version 1.3.9 (and later) of the Go Snowflake Driver supports the ability to bind an array variable to a parameter in a SQL INSERT statement. You can use this technique to insert multiple rows in a single batch. As an example, the following code inserts rows into a table that contains integer, float, boolean, and string columns. The example binds arrays to the parameters in the INSERT statement. If the array contains SQL NULL values, use a []interface{} slice, which allows Golang nil values. This feature is available in version 1.6.12 (and later) of the driver. For example, For slices of []interface{} containing time.Time values, a binding parameter flag is required for the preceding array variable in the Array() function. This feature is available in version 1.6.13 (and later) of the driver.
For example, Note: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). When you use array binding to insert a large number of values, the driver can improve performance by streaming the data (without creating files on the local machine) to a temporary stage for ingestion. The driver automatically does this when the number of values exceeds a threshold (no changes are needed to user code). In order for the driver to send the data to a temporary stage, the user must have the following privilege on the schema: If the user does not have this privilege, the driver falls back to sending the data with the query to the Snowflake database. In addition, the current database and schema for the session must be set. If these are not set, the CREATE TEMPORARY STAGE command executed by the driver can fail with the following error: For alternative ways to load data into the Snowflake database (including bulk loading using the COPY command), see Loading Data into Snowflake (https://docs.snowflake.com/en/user-guide-data-load.html). Go's database/sql package supports the ability to bind a parameter in a SQL statement to a time.Time variable. However, when the client binds data to send to the server, the driver cannot determine the correct Snowflake date/timestamp data type to associate with the binding parameter. For example: To resolve this issue, a binding parameter flag is introduced that associates any subsequent time.Time type to the DATE, TIME, TIMESTAMP_LTZ, TIMESTAMP_NTZ or BINARY data type. The above example could be rewritten as follows: The driver fetches TIMESTAMP_TZ (timestamp with time zone) data using the offset-based Location types, which represent a collection of time offsets in use in a geographical area, such as CET (Central European Time) or UTC (Coordinated Universal Time). The offset-based Location data is generated and cached when a Go Snowflake Driver application starts, and if the given offset is not in the cache, it is generated dynamically. Currently, Snowflake does not support the name-based Location types (e.g. "America/Los_Angeles"). For more information about Location types, see the Go documentation for https://golang.org/pkg/time/#Location. Internally, this feature leverages the []byte data type. As a result, BINARY data cannot be bound without the binding parameter flag. In the following example, sf is an alias for the gosnowflake package: The driver directly downloads a result set from the cloud storage if the size is large. This is required to shift workloads from the Snowflake database to the clients for scale. The download is performed asynchronously by a goroutine named "Chunk Downloader", so that the driver can fetch the next result set while the application consumes the current result set. The application may change the number of result set chunk downloaders if required. Note that this does not help reduce the memory footprint by itself. Consider Custom JSON Decoder. Custom JSON Decoder for Parsing Result Set (Experimental) The application may have the driver use a custom JSON decoder that incrementally parses the result set as follows. This option will reduce the memory footprint to half or even a quarter, but it can significantly degrade the performance depending on the environment. The test cases running on a Travis Ubuntu box show a five times smaller memory footprint while being four times slower. Be cautious when using the option.
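Recapping the binding parameter flags and Array() binding described above, a minimal sketch (the table and column names are hypothetical):

```
import (
	"database/sql"
	"time"

	sf "github.com/snowflakedb/gosnowflake"
)

func insertRows(db *sql.DB) error {
	// The flag preceding each bind tells the driver which Snowflake type to use:
	// here TIMESTAMP_NTZ for the time.Time value and BINARY for the []byte value.
	_, err := db.Exec(
		"INSERT INTO example_t(created_at, payload) VALUES(?, ?)",
		sf.DataTypeTimestampNtz, time.Now(),
		sf.DataTypeBinary, []byte{0x01, 0x02, 0x03},
	)
	if err != nil {
		return err
	}

	// Array binding: each Array() bind supplies one column for many rows,
	// inserted as a single batch.
	ids := []int{1, 2, 3}
	names := []string{"alpha", "beta", "gamma"}
	_, err = db.Exec(
		"INSERT INTO example_t2(id, name) VALUES(?, ?)",
		sf.Array(ids), sf.Array(names),
	)
	return err
}
```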
The Go Snowflake Driver supports JWT (JSON Web Token) authentication. To enable this feature, construct the DSN with the fields "authenticator=SNOWFLAKE_JWT&privateKey=<your_private_key>", or using a Config structure specifying: The <your_private_key> should be a base64 URL encoded PKCS8 RSA private key string. One way to encode a byte slice to base64 URL format is through the base64.URLEncoding.EncodeToString() function. On the server side, you can alter the public key with the SQL command: The <your_public_key> should be a base64 Standard encoded PKI public key string. One way to encode a byte slice to base64 Standard format is through the base64.StdEncoding.EncodeToString() function. To generate the valid key pair, you can execute the following commands in the shell: Note: As of February 2020, Golang's official library does not support passcode-encrypted PKCS8 private keys. For security purposes, Snowflake highly recommends that you store the passcode-encrypted private key on the disk and decrypt the key in your application using a library you trust. JWT tokens are recreated on each retry and they are valid (`exp` claim) for `jwtTimeout` seconds. Each retry timeout is configured by `jwtClientTimeout`. Retries are limited by the total time of `loginTimeout`. The driver allows authentication using an external browser. When a connection is created, the driver will open a browser window and ask the user to sign in. To enable this feature, construct the DSN with the field "authenticator=EXTERNALBROWSER" or using a Config structure with the following Authenticator specified: The external browser authentication implements a timeout mechanism. This prevents the driver from hanging indefinitely when the browser window is closed or not responding. The timeout defaults to 120s and can be changed by setting the DSN field "externalBrowserTimeout=240" (time in seconds) or using a Config structure with the following ExternalBrowserTimeout specified: This feature is available in version 1.3.8 or later of the driver. By default, Snowflake returns an error for queries issued with multiple statements. This restriction helps protect against SQL Injection attacks (https://en.wikipedia.org/wiki/SQL_injection). The multi-statement feature allows users to skip this restriction and execute multiple SQL statements through a single Golang function call. However, this opens up the possibility for SQL injection, so it should be used carefully. The risk can be reduced by specifying the exact number of statements to be executed, which makes it more difficult to inject a statement by appending it. More details are below. The Go Snowflake Driver provides two functions that can execute multiple SQL statements in a single call: To compose a multi-statement query, simply create a string that contains all the queries, separated by semicolons, in the order in which the statements should be executed. To protect against SQL Injection attacks while using the multi-statement feature, pass a Context that specifies the number of statements in the string. For example: When multiple queries are executed by a single call to QueryContext(), multiple result sets are returned. After you process the first result set, get the next result set (for the next SQL statement) by calling NextResultSet(). The following pseudo-code shows how to process multiple result sets: The function db.ExecContext() returns a single result, which is the sum of the number of rows changed by each individual statement.
For example, if your multi-statement query executed two UPDATE statements, each of which updated 10 rows, then the result returned would be 20. Individual row counts for individual statements are not available. The following code shows how to retrieve the result of a multi-statement query executed through db.ExecContext(): Note: Because a multi-statement ExecContext() returns a single value, you cannot detect offsetting errors. For example, suppose you expected the return value to be 20 because you expected each UPDATE statement to update 10 rows. If one UPDATE statement updated 15 rows and the other UPDATE statement updated only 5 rows, the total would still be 20. You would see no indication that the UPDATE statements had not functioned as expected. The ExecContext() function does not return an error if passed a query (e.g. a SELECT statement). However, it still returns only a single value, not a result set, so using it to execute queries (or a mix of queries and non-query statements) is impractical. The QueryContext() function does not return an error if passed non-query statements (e.g. DML). The function returns a result set for each statement, whether or not the statement is a query. For each non-query statement, the result set contains a single row that contains a single column; the value is the number of rows changed by the statement. If you want to execute a mix of query and non-query statements (e.g. a mix of SELECT and DML statements) in a multi-statement query, use QueryContext(). You can retrieve the result sets for the queries, and you can retrieve or ignore the row counts for the non-query statements. Note: PUT statements are not supported for multi-statement queries. If a SQL statement passed to ExecContext() or QueryContext() fails to compile or execute, that statement is aborted, and subsequent statements are not executed. Any statements prior to the aborted statement are unaffected. For example, if the statements below are run as one multi-statement query, the multi-statement query fails on the third statement, and an error is returned. If you then query the contents of the table named "test", the values 1 and 2 would be present. When using the QueryContext() and ExecContext() functions, Go code can check for errors in the usual way. For example: Preparing statements and using bind variables are also not supported for multi-statement queries. The Go Snowflake Driver supports asynchronous execution of SQL statements. Asynchronous execution allows you to start executing a statement and then retrieve the result later without being blocked while waiting. While waiting for the result of a SQL statement, you can perform other tasks, including executing other SQL statements. Most of the steps to execute an asynchronous query are the same as the steps to execute a synchronous query. However, there is an additional step, which is that you must call the WithAsyncMode() function to update your Context object to specify that asynchronous mode is enabled. In the code below, the call to "WithAsyncMode()" is specific to asynchronous mode. The rest of the code is compatible with both asynchronous mode and synchronous mode. The function db.QueryContext() returns an object of type snowflakeRows regardless of whether the query is synchronous or asynchronous. However: The call to the Next() function of snowflakeRows is always synchronous (i.e. blocking).
If the query has not yet completed and the snowflakeRows object (named "rows" in this example) has not been filled in yet, then rows.Next() waits until the result set has been filled in. More generally, calls to any Golang SQL API function implemented in snowflakeRows or snowflakeResult are blocking calls, and wait if results are not yet available. (Examples of other synchronous calls include: snowflakeRows.Err(), snowflakeRows.Columns(), snowflakeRows.columnTypes(), snowflakeRows.Scan(), and snowflakeResult.RowsAffected().) Because the example code above executes only one query and no other activity, there is no significant difference in behavior between asynchronous and synchronous execution. The differences become significant if, for example, you want to perform some other activity after the query starts and before it completes. The example code below starts a query, which runs in the background, and then retrieves the results later. This example uses small SELECT statements that do not retrieve enough data to require asynchronous handling. However, the technique works for larger data sets, and for situations where the programmer might want to do other work after starting the queries and before retrieving the results. For a more elaborate example, please see cmd/async/async.go. The Go Snowflake Driver supports the PUT and GET commands. The PUT command copies a file from a local computer (the computer where the Golang client is running) to a stage on the cloud platform. The GET command copies data files from a stage on the cloud platform to a local computer. See the following for information on the syntax and supported parameters: Using PUT: The following example shows how to run a PUT command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: Different client platforms (e.g. Linux, Windows) have different path name conventions. Ensure that you specify path names appropriately. This is particularly important on Windows, which uses the backslash character as both an escape character and as a separator in path names. To send information from a stream (rather than a file) use code similar to the code below. (The ReplaceAll() function is needed on Windows to handle backslashes in the path to the file.) Note: PUT statements are not supported for multi-statement queries. Using GET: The following example shows how to run a GET command by passing a string to the db.Query() function: "<local_file>" should include the file path as well as the name. Snowflake recommends using an absolute path rather than a relative path. For example: To download a file into an in-memory stream (rather than a file) use code similar to the code below. Note: GET statements are not supported for multi-statement queries. Specifying a temporary directory for encryption and compression: Putting and getting requires compression and/or encryption, which is done in the OS temporary directory. If you cannot use the default temporary directory for your OS or you want to specify it yourself, you can use the "tmpDirPath" DSN parameter. Remember to encode slashes. Example: Using custom configuration for PUT/GET: If you want to override some default configuration options, you can use the `WithFileTransferOptions` context. There are multiple config parameters including progress bars or compression.
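For instance, a PUT followed by a GET can be issued as plain SQL strings through database/sql (the local paths and stage name here are hypothetical, and Windows paths need their backslashes handled as noted above):

```
// Upload a local CSV to a named stage, then download the staged files back.
if _, err := db.Exec("PUT file:///tmp/data/mydata.csv @my_stage AUTO_COMPRESS=TRUE"); err != nil {
	// handle error
}
if _, err := db.Exec("GET @my_stage file:///tmp/downloads/"); err != nil {
	// handle error
}
```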
Package resourcegroups provides the API client, operations, and parameter types for AWS Resource Groups. Resource Groups lets you organize Amazon Web Services resources such as Amazon Elastic Compute Cloud instances, Amazon Relational Database Service databases, and Amazon Simple Storage Service buckets into groups using criteria that you define as tags. A resource group is a collection of resources that match the resource types specified in a query, and share one or more tags or portions of tags. You can create a group of resources based on their roles in your cloud infrastructure, lifecycle stages, regions, application layers, or virtually any criteria. Resource Groups enables you to automate management tasks, such as those in Amazon Web Services Systems Manager Automation documents, on tag-related resources in Amazon Web Services Systems Manager. Groups of tagged resources also let you quickly view a custom console in Amazon Web Services Systems Manager that shows Config compliance and other monitoring data about member resources. To create a resource group, build a resource query, and specify tags that identify the criteria that members of the group have in common. Tags are key-value pairs. For more information about Resource Groups, see the Resource Groups User Guide. Resource Groups uses a REST-compliant API that you can use to perform the following types of operations: Create, Read, Update, and Delete (CRUD) operations on resource groups and resource query entities; applying, editing, and removing tags from resource groups; resolving resource group member Amazon Resource Names (ARNs) so they can be returned as search results; getting data about resources that are members of a group; and searching Amazon Web Services resources based on a resource query.
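A minimal sketch of calling this API client with the AWS SDK for Go v2, assuming default credential configuration (the ListGroups call is just one example operation):

```
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/resourcegroups"
)

func listGroups(ctx context.Context) error {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return err
	}
	client := resourcegroups.NewFromConfig(cfg)

	// List the resource groups visible in the current account and Region.
	out, err := client.ListGroups(ctx, &resourcegroups.ListGroupsInput{})
	if err != nil {
		return err
	}
	for _, g := range out.GroupIdentifiers {
		fmt.Println(aws.ToString(g.GroupName), aws.ToString(g.GroupArn))
	}
	return nil
}
```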
Package freegeoip provides an API for searching the geolocation of IP addresses. It uses a database that can be either a local file or a remote resource from a URL. Local databases are monitored by fsnotify and reloaded when the file is either updated or overwritten. Remote databases are automatically downloaded and updated in the background so you can focus on using the API and not managing the database.
Package resourceexplorer2 provides the API client, operations, and parameter types for AWS Resource Explorer. Amazon Web Services Resource Explorer is a resource search and discovery service. By using Resource Explorer, you can explore your resources using an internet search engine-like experience. Examples of resources include Amazon Relational Database Service (Amazon RDS) instances, Amazon Simple Storage Service (Amazon S3) buckets, or Amazon DynamoDB tables. You can search for your resources using resource metadata like names, tags, and IDs. Resource Explorer can search across all of the Amazon Web Services Regions in your account in which you turn the service on, to simplify your cross-Region workloads. Resource Explorer scans the resources in each of the Amazon Web Services Regions in your Amazon Web Services account in which you turn on Resource Explorer. Resource Explorer creates and maintains an index in each Region, with the details of that Region's resources. You can search across all of the indexed Regions in your account by designating one of your Amazon Web Services Regions to contain the aggregator index for the account. When you promote a local index in a Region to become the aggregator index for the account, Resource Explorer automatically replicates the index information from all local indexes in the other Regions to the aggregator index. Therefore, the Region with the aggregator index has a copy of all resource information for all Regions in the account where you turned on Resource Explorer. As a result, views in the aggregator index Region include resources from all of the indexed Regions in your account. For more information about Amazon Web Services Resource Explorer, including how to enable and configure the service, see the Amazon Web Services Resource Explorer User Guide.
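A minimal sketch of a cross-Region search with the AWS SDK for Go v2 (the query string is a hypothetical example; see the service documentation for the query syntax):

```
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/resourceexplorer2"
)

func searchBuckets(ctx context.Context) error {
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		return err
	}
	client := resourceexplorer2.NewFromConfig(cfg)

	// Search the indexed Regions for S3 buckets.
	out, err := client.Search(ctx, &resourceexplorer2.SearchInput{
		QueryString: aws.String("resourcetype:s3:bucket"),
	})
	if err != nil {
		return err
	}
	for _, r := range out.Resources {
		fmt.Println(aws.ToString(r.Arn))
	}
	return nil
}
```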
Package sqlite provides a Go interface to SQLite 3. The semantics of this package are deliberately close to the SQLite3 C API, so it is helpful to be familiar with http://www.sqlite.org/c3ref/intro.html. An SQLite connection is represented by a *sqlite.Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (using Open to create a *sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database. This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache: https://www.sqlite.org/sharedcache.html The implementation automatically handles shared cache locking, see the documentation on Stmt.Step for details. The optional SQLite3 extensions compiled in are: FTS5, RTree, JSON1, Session, GeoPoly. This is not a database/sql driver. Statements are prepared with the Prepare and PrepareTransient methods. When using Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can write: After all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement. The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or more commonly, a large number of moderate-sized blobs concurrently.) To write a blob, first use an INSERT statement to set the size of the blob and assign a rowid: Use BindZeroBlob or SetZeroBlob to set the size of myblob. Then you can open the blob with: Every connection can have a done channel associated with it using the SetInterrupt method. This is typically the channel returned by a context.Context Done method. For example, a timeout can be associated with a connection session: As database connections are long-lived, the SetInterrupt method can be called multiple times to reset the associated lifetime. When using pools, the shorthand for associating a context with a connection is: SQLite transactions have to be managed manually with this package by directly calling BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE / ROLLBACK. The sqlitex package has a Savepoint function that helps automate this. Using a Pool to execute SQL in a concurrent HTTP handler. For helper functions that make some kinds of statements easier to write, see the sqlitex package.
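A minimal sketch of the prepare/step pattern described above, assuming the crawshaw.io/sqlite import path (the table and column names are hypothetical):

```
import "crawshaw.io/sqlite"

func adults() error {
	conn, err := sqlite.OpenConn("people.db", 0)
	if err != nil {
		return err
	}
	defer conn.Close()

	// Prep caches the statement inside the connection, keyed by the query string.
	stmt := conn.Prep("SELECT name, age FROM people WHERE age >= $min;")
	stmt.SetInt64("$min", 18)
	for {
		hasRow, err := stmt.Step()
		if err != nil {
			return err
		}
		if !hasRow {
			break
		}
		_ = stmt.GetText("name")
		_ = stmt.GetInt64("age")
	}
	return nil
}
```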
Package safebrowsing implements a client for the Safe Browsing API v4. API v4 emphasizes efficient usage of the network for bandwidth-constrained applications such as mobile devices. It achieves this by maintaining a small portion of the server state locally such that some queries can be answered immediately without any network requests. Thus, fewer API calls mean less bandwidth is used. At a high level, the implementation does the following: Essentially the query is presented to three major components: The database, the cache, and the API. Each of these may satisfy the query immediately, or may say that it does not know and that the query should be satisfied by the next component. The goal of the database and cache is to satisfy as many queries as possible to avoid using the API. Starting with a user query, a hash of the query is performed to preserve privacy regarding the exact nature of the query. For example, if the query was for a URL, then this would be the SHA256 hash of the URL in question. Given a query hash, we first check the local database (which is periodically synced with the global Safe Browsing API servers). This database will either tell us that the query is definitely safe, or that it does not have enough information. If we are unsure about the query, we check the local cache, which can be used to satisfy queries immediately if the same query had been made recently. The cache will tell us that the query is either safe, unsafe, or unknown (because it's not in the cache or the entry expired). If we are still unsure about the query, then we finally query the API server, which is guaranteed to return to us an authoritative answer, assuming no networking failures. For more information, see the API developer's guide:
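A minimal sketch of a lookup, assuming the github.com/google/safebrowsing package API (the API key and database path are placeholders):

```
import "github.com/google/safebrowsing"

func checkURL(u string) (safe bool, err error) {
	sb, err := safebrowsing.NewSafeBrowser(safebrowsing.Config{
		APIKey: "your-api-key",         // placeholder
		DBPath: "/tmp/safebrowsing.db", // local database, synced periodically
	})
	if err != nil {
		return false, err
	}
	defer sb.Close()

	// Each entry in threats corresponds to one input URL; an empty slice
	// means the database/cache/API chain found no known threats.
	threats, err := sb.LookupURLs([]string{u})
	if err != nil {
		return false, err
	}
	return len(threats[0]) == 0, nil
}
```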
Package monkit is a flexible code instrumenting and data collection library. I'm going to try and sell you as fast as I can on this library. Example usage We've got tools that capture distribution information (including quantiles) about int64, float64, and bool types. We have tools that capture data about events (we've got meters for deltas, rates, etc). We have rich tools for capturing information about tasks and functions, and literally anything that can generate a name and a number. Almost just as importantly, the amount of boilerplate and code you have to write to get these features is very minimal. Data that's hard to measure probably won't get measured. This data can be collected and sent to Graphite (http://graphite.wikidot.com/) or any other time-series database. Here's a selection of live stats from one of our storage nodes: This library generates call graphs of your live process for you. These call graphs aren't created through sampling. They're full pictures of all of the interesting functions you've annotated, along with quantile information about their successes, failures, how often they panic, return an error (if so instrumented), how many are currently running, etc. The data can be returned in dot format, in json, in text, and can be about just the functions that are currently executing, or all the functions the monitoring system has ever seen. Here's another example of one of our production nodes: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/callgraph2.png This library generates trace graphs of your live process for you directly, without requiring standing up some tracing system such as Zipkin (though you can do that too). Inspired by Google's Dapper (http://research.google.com/pubs/pub36356.html) and Twitter's Zipkin (http://zipkin.io), we have process-internal trace graphs, triggerable by a number of different methods. You get this trace information for free whenever you use Go contexts (https://blog.golang.org/context) and function monitoring. The output formats are svg and json. Additionally, the library supports trace observation plugins, and we've written a plugin that sends this data to Zipkin (http://github.com/spacemonkeygo/monkit-zipkin). https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/trace.png Before our crazy Go rewrite of everything (https://www.spacemonkey.com/blog/posts/go-space-monkey) (and before we had even seen Google's Dapper paper), we were a Python shop, and all of our "interesting" functions were decorated with a helper that collected timing information and sent it to Graphite. When we transliterated to Go, we wanted to preserve that functionality, so the first version of our monitoring package was born. Over time it started to get janky, especially as we found Zipkin and started adding tracing functionality to it. We rewrote all of our Go code to use Google contexts, and then realized we could get call graph information. We decided a refactor and then an all-out rethinking of our monitoring package was best, and so now we have this library. Sometimes you really want callstack contextual information without having to pass arguments through everything on the call stack. In other languages, many people implement this with thread-local storage. Example: let's say you have written a big system that responds to user requests. All of your libraries log using your log library. 
During initial development everything is easy to debug, since there's low user load, but now you've scaled and there's OVER TEN USERS and it's kind of hard to tell what log lines were caused by what. Wouldn't it be nice to add request ids to all of the log lines kicked off by that request? Then you could grep for all log lines caused by a specific request id. Geez, it would suck to have to pass all contextual debugging information through all of your callsites. Google solved this problem by always passing a context.Context interface through from call to call. A Context is basically just a mapping of arbitrary keys to arbitrary values that users can add new values for. This way if you decide to add a request context, you can add it to your Context and then all callsites that descend from that place will have the new data in their contexts. It is admittedly very verbose to add contexts to every function call. Painfully so. I hope to write more about it in the future, but Google also wrote up their thoughts about it (https://blog.golang.org/context), which you can go read. For now, just swallow your disgust and let's keep moving. Let's make a super simple Varnish (https://www.varnish-cache.org/) clone. Open up gedit! (Okay just kidding, open whatever text editor you want.) For this motivating program, we won't even add the caching, though there's comments for where to add it if you'd like. For now, let's just make a barebones system that will proxy HTTP requests. We'll call it VLite, but maybe we should call it VReallyLite. Run and build this and open localhost:8080 in your browser. If you use the default proxy target, it should inform you that the world hasn't been destroyed yet. The first thing you'll want to do is add the small amount of boilerplate to make the instrumentation we're going to add to your process observable later. Import the basic monkit packages: and then register environmental statistics and kick off a goroutine in your main method to serve debug requests: Rebuild, and then check out localhost:9000/stats (or localhost:9000/stats/json, if you prefer) in your browser! Remember what I said about Google's contexts (https://blog.golang.org/context)? It might seem a bit overkill for such a small project, but it's time to add them. To help out here, I've created a library that constructs contexts for you for incoming HTTP requests. Nothing that's about to happen requires my webhelp library (https://godoc.org/github.com/jtolds/webhelp), but here is the code now refactored to receive and pass contexts through our two per-request calls. You can create a new context for a request however you want. One reason to use something like webhelp is that the cancelation feature of Contexts is hooked up to the HTTP request getting canceled. Let's start to get statistics about how many requests we receive! First, this package (main) will need to get a monitoring Scope. Add this global definition right after all your imports, much like you'd create a logger with many logging libraries: Now, make the error return value of HandleHTTP named (so, (err error)), and add this defer line as the very first instruction of HandleHTTP: Let's also add the same line (albeit modified for the lack of error) to Proxy, replacing &err with nil: You should now have something like: We'll unpack what's going on here, but for now: For this new funcs dataset, if you want a graph, you can download a dot graph at localhost:9000/funcs/dot and json information from localhost:9000/funcs/json.
You should see something like: with a similar report for the Proxy method, or a graph like: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/handlehttp.png This data reports the overall callgraph of execution for known traces, along with how many of each function are currently running, the most running concurrently (the highwater), how many were successful along with quantile timing information, how many errors there were (with quantile timing information if applicable), and how many panics there were. Since the Proxy method isn't capturing a returned err value, and since HandleHTTP always returns nil, this example won't ever have failures. If you're wondering about the success count being higher than you expected, keep in mind your browser probably requested a favicon.ico. Cool, eh? How it works is an interesting line of code - there are three function calls. If you look at the Go spec, all of the function calls will run at the time the function starts except for the very last one. The first function call, mon.Task(), creates or looks up a wrapper around a Func. You could get this yourself by requesting mon.Func() inside of the appropriate function or mon.FuncNamed(). Both mon.Task() and mon.Func() are inspecting runtime.Caller to determine the name of the function. Because this is a heavy operation, you can actually store the result of mon.Task() and reuse it somehow else if you prefer, so instead of you could instead use which is more performant every time after the first time. runtime.Caller only gets called once. Careful! Don't use the same myFuncMon in different functions unless you want to screw up your statistics! The second function call starts all the various stop watches and bookkeeping to keep track of the function. It also mutates the context pointer it's given to extend the context with information about what current span (in Zipkin parlance) is active. Notably, you *can* pass nil for the context if you really don't want a context. You just lose callgraph information. The last function call stops all the stop watches and makes a note of any observed errors or panics (it repanics after observing them). Turns out, we don't even need to change our program anymore to get rich tracing information! Open your browser and go to localhost:9000/trace/svg?regex=HandleHTTP. It won't load, and in fact, it's waiting for you to open another tab and refresh localhost:8080 again. Once you retrigger the actual application behavior, the trace regex will capture a trace starting on the first function that matches the supplied regex, and return an svg. Go back to your first tab, and you should see a relatively uninteresting but super promising svg. Let's make the trace more interesting. Add a to your HandleHTTP method, rebuild, and restart. Load localhost:8080, then start a new request to your trace URL, then reload localhost:8080 again. Flip back to your trace, and you should see that the Proxy method only takes a portion of the time of HandleHTTP! https://cdn.rawgit.com/spacemonkeygo/monkit/master/images/trace.svg There's multiple ways to select a trace. You can select by regex using the preselect method (default), which first evaluates the regex on all known functions for sanity checking. Sometimes, however, the function you want to trace may not yet be known to monkit, in which case you'll want to turn preselection off. You may have a bad regex, or you may be in this case if you get the error "Bad Request: regex preselect matches 0 functions."
Another way to select a trace is by providing a trace id, which we'll get to next! Make sure to check out what the addition of the time.Sleep call did to the other reports. It's easy to write plugins for monkit! Check out our first one that exports data to Zipkin (http://zipkin.io/)'s Scribe API: https://github.com/spacemonkeygo/monkit-zipkin We plan to have more (for HTrace, OpenTracing, etc, etc), soon!
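To recap the instrumentation pattern walked through above, a minimal sketch (the import path assumes the v3 module; HandleHTTP and Proxy are the example functions from the walkthrough, simplified here):

```
import (
	"context"

	"github.com/spacemonkeygo/monkit/v3"
)

// Package-level monitoring Scope, one per package.
var mon = monkit.Package()

func HandleHTTP(ctx context.Context) (err error) {
	// Creates/looks up the Func, starts the stop watches, extends ctx with the
	// current span, and records success/error/panic when the function returns.
	defer mon.Task()(&ctx)(&err)
	return Proxy(ctx)
}

func Proxy(ctx context.Context) error {
	// Same instrumentation, with nil instead of &err since no error is captured here.
	defer mon.Task()(&ctx)(nil)
	// ... proxy the request ...
	return nil
}
```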
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. See https://github.com/go-kivik/kivik/wiki/Kivik-database-drivers for a list. The kivik driver system is modeled after the standard library's sql and sql/driver packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB.
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. The officially supported drivers are: The Filesystem and Memory drivers are also available, but in early stages of development, and so many features do not yet work: The kivik driver system is modeled after the standard library's `sql` and `sql/driver` packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB. CouchDB stores JSON, so Kivik translates Go data structures to and from JSON as necessary. The conversion between Go data types and JSON, and vice versa, is handled automatically according to the rules and behavior described in the documentation for the standard library's `encoding/json` package (https://golang.org/pkg/encoding/json). One would be well-advised to become familiar with using `json` struct field tags (https://golang.org/pkg/encoding/json/#Marshal) when working with JSON documents. Most Kivik methods take `context.Context` as their first argument. This allows the cancellation of blocking operations in the case that the result is no longer needed. A typical use case for a web application would be to cancel a Kivik request if the remote HTTP client has disconnected, rendering the results of the query irrelevant. To learn more about Go's contexts, read the `context` package documentation (https://golang.org/pkg/context/) and read the Go blog post "Go Concurrency Patterns: Context" (https://blog.golang.org/context) for example code. If in doubt, you can pass `context.TODO()` as the context variable. Example: Kivik returns errors that embed an HTTP status code. In most cases, this is the HTTP status code returned by the server. The embedded HTTP status code may be accessed easily using the StatusCode() method, or with a type assertion to `interface { StatusCode() int }`. Example: Any error that does not conform to this interface will be assumed to represent a http.StatusInternalServerError status code. For common usage, authentication should be as simple as including the authentication credentials in the connection DSN. For example: This will connect to `localhost` on port 5984, using the username `admin` and the password `abc123`. When connecting to CouchDB (as in the above example), this will use cookie auth (https://docs.couchdb.org/en/stable/api/server/authn.html?highlight=cookie%20auth#cookie-authentication). Depending on which driver you use, there may be other ways to authenticate, as well. At the moment, the CouchDB driver is the only official driver which offers additional authentication methods. Please refer to the CouchDB package documentation for details (https://pkg.go.dev/github.com/go-kivik/couchdb/v3). With a client handle in hand, you can create a database handle with the DB() method to interact with a specific database.
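A minimal sketch of connecting and reading a document, assuming the v3 module paths (the database and document names are hypothetical):

```
import (
	"context"

	_ "github.com/go-kivik/couchdb/v3" // the CouchDB driver
	kivik "github.com/go-kivik/kivik/v3"
)

func readDoc(ctx context.Context) error {
	client, err := kivik.New("couch", "http://admin:abc123@localhost:5984/")
	if err != nil {
		return err
	}
	db := client.DB(ctx, "somedb")

	var doc struct {
		ID    string `json:"_id"`
		Title string `json:"title"`
	}
	// Get returns a row; ScanDoc unmarshals the JSON document into doc.
	row := db.Get(ctx, "some-doc-id")
	return row.ScanDoc(&doc)
}
```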
Package puddle is a generic resource pool with a type-parametrized API. Puddle is a tiny generic resource pool library for Go that uses the standard context library to signal cancellation of acquires. It is designed to contain the minimum functionality a resource pool needs that cannot be implemented without concurrency concerns. For example, a database connection pool may use puddle internally and implement health checks and keep-alive behavior without needing to implement any concurrent code of its own.
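A minimal sketch, assuming the v2 generic API (the TCP address is a placeholder):

```
import (
	"context"
	"net"

	"github.com/jackc/puddle/v2"
)

func dialPool() (*puddle.Pool[net.Conn], error) {
	// The pool only manages lifecycle and concurrency; health checks and
	// keep-alives would be layered on top by the caller.
	return puddle.NewPool(&puddle.Config[net.Conn]{
		Constructor: func(ctx context.Context) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "tcp", "127.0.0.1:5432")
		},
		Destructor: func(conn net.Conn) { conn.Close() },
		MaxSize:    8,
	})
}

func use(ctx context.Context, pool *puddle.Pool[net.Conn]) error {
	res, err := pool.Acquire(ctx) // honors ctx cancellation while waiting
	if err != nil {
		return err
	}
	defer res.Release()
	_ = res.Value() // the borrowed net.Conn
	return nil
}
```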
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. The officially supported drivers are: The Filesystem and Memory drivers are also available, but in early stages of development, and so many features do not yet work: The kivik driver system is modeled after the standard library's `sql` and `sql/driver` packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB. Most methods, including those on Client and DB, are safe to call concurrently, unless otherwise noted. CouchDB stores JSON, so Kivik translates Go data structures to and from JSON as necessary. The conversion from Go data types to JSON, and vice versa, is handled automatically according to the rules and behavior described in the documentation for the standard library's encoding/json package. Most client and database methods take optional arguments of the type Option. Multiple options may be passed, and latter options take precedence over earlier ones, in case of a conflict. Params and Param can be used to set options that are generally converted to URL query parameters. Different backend drivers may also provide their own unique options with driver-specific effects. Consult your driver's documentation for specifics. Kivik returns errors that embed an HTTP status code. In most cases, this is the HTTP status code returned by the server. The embedded HTTP status code may be accessed easily using the HTTPStatus() method, or with a type assertion to `interface { HTTPStatus() int }`. Example: Any error that does not conform to this interface will be assumed to represent a http.StatusInternalServerError status code. For common usage, authentication should be as simple as including the authentication credentials in the connection DSN. For example: This will connect to `localhost` on port 5984, using the username `admin` and the password `abc123`. When connecting to CouchDB (as in the above example), this will use cookie auth. Depending on which driver you use, there may be other ways to authenticate, as well. At the moment, the CouchDB driver is the only official driver which offers additional authentication methods. Please refer to the CouchDB package documentation for details. With a client handle in hand, you can create a database handle with the DB() method to interact with a specific database.
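For example, the embedded status code can be recovered with the type assertion mentioned above (the Put call and document values are hypothetical):

```
import (
	"context"
	"net/http"

	kivik "github.com/go-kivik/kivik/v4"
)

func saveCow(ctx context.Context, db *kivik.DB) error {
	// Put is used here only to produce an error worth inspecting.
	_, err := db.Put(ctx, "cow", map[string]interface{}{"feet": 4})
	if err != nil {
		// The embedded HTTP status code is available via a type assertion.
		if statusErr, ok := err.(interface{ HTTPStatus() int }); ok {
			if statusErr.HTTPStatus() == http.StatusConflict {
				// the document was updated concurrently; fetch the latest rev and retry
			}
		}
	}
	return err
}
```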
Package golangNeo4jBoltDriver implements a driver for the Neo4J Bolt Protocol. The driver is compatible with Golang's sql.driver interface, but aims to implement a more complete featureset in line with what Neo4J and Bolt provide. As such, there are multiple interfaces the user can choose from. It's highly recommended that the user use the Neo4J-specific interfaces as they are more flexible and efficient than the provided sql.driver compatible methods. The interface tries to be consistent throughout. The sql.driver interfaces are standard, but the Neo4J-specific ones contain a naming convention of either "Neo" or "Pipeline". The "Neo" ones are the basic interfaces for making queries to Neo4j and it's expected that these would be used the most. The "Pipeline" ones are to support Bolt's pipelining features. Pipelines allow the user to send Neo4j many queries at once and have them executed by the database concurrently. This is useful if you have a bunch of queries that aren't necessarily dependent on one another, and you want to get better performance. The internal APIs will also pipeline statements where they can reliably do so, but by manually using the pipelining feature you can maximize your throughput. The API provides connection pooling using the `NewDriverPool` method. This allows you to pass it the maximum number of open connections to be used in the pool. Once this limit is hit, any new clients will have to wait for a connection to become available again. The sql driver is registered as "neo4j-bolt". The sql.driver interface is much more limited than what bolt and neo4j support. In some cases, concessions were made in order to make that interface work with the neo4j way of doing things. The main instance of this is the marshalling of objects to/from the sql.driver.Value interface. In order to support object types that aren't supported by this interface, the internal encoding package is used to marshal these objects to byte strings. This ultimately makes for a less efficient and more 'clunky' implementation. A glaring instance of this is passing parameters. Neo4j expects named parameters but the driver interface can only really support positional parameters. To get around this, the user must create a map[string]interface{} of their parameters and marshal it to a driver.Value using the encoding.Marshal function. Similarly, the user must unmarshal data returned from the queries using the encoding.Unmarshal function, then use type assertions to retrieve the proper type. In most cases the driver will return the data from neo as the proper go-specific types. For integers they always come back as int64 and floats always come back as float64. This is for the convenience of the user and acts similarly to go's JSON interface. This prevents the user from having to use reflection to get these values. Internally, the types are always transmitted over the wire with as few bytes as possible. There are also cases where no go-specific type matches the returned values, such as when you query for a node, relationship, or path. The driver exposes specific structs which represent this data in the 'structures.graph' package. There are 4 types - Node, Relationship, UnboundRelationship, and Path. The driver returns interface{} objects which must have their types properly asserted to get the data out. There are some limitations to the types of collections the driver supports. Specifically, maps should always be of type map[string]interface{} and lists should always be of type []interface{}.
It doesn't seem that the Bolt protocol supports uint64 either, so the biggest number it can send right now is the int64 max. The URL format is: `bolt://(user):(password)@(host):(port)` Scheme must be `bolt`. User and password are only necessary if you are authenticating. TLS is supported by using query parameters on the connection string, like so: `bolt://host:port?tls=true&tls_no_verify=false` The supported query params are: * timeout - the number of seconds to set the connection timeout to. Defaults to 60 seconds. * tls - Set to 'true' or '1' if you want to use TLS encryption * tls_no_verify - Set to 'true' or '1' if you want to accept any server certificate (for testing, not secure) * tls_ca_cert_file - path to a custom ca cert for a self-signed TLS cert * tls_cert_file - path to a cert file for this client (need to verify this is processed by Neo4j) * tls_key_file - path to a key file for this client (need to verify this is processed by Neo4j) Errors returned from the API support wrapping, so if you receive an error from the library, it might be wrapping other errors. You can get the innermost error by using the `InnerMost` method. Failure messages from Neo4J are reported, along with their metadata, as an error. To get the failure message metadata from a wrapped error, call `err.(*errors.Error).InnerMost().(messages.FailureMessage).Metadata` If there is an error with the database connection, you should get a sql/driver ErrBadConn as per the best practice recommendations of the Golang SQL Driver. However, this error may be wrapped, so you might have to call `InnerMost` to get it, as specified above.
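A hedged sketch using the sql interface, the "neo4j-bolt" driver name and the DSN/query parameters listed above. The import path and the assumption that a blank import registers the sql driver are mine; the credentials and Cypher statement are placeholders.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/johnnadratowski/golang-neo4j-bolt-driver" // assumed to register the "neo4j-bolt" sql driver
)

func main() {
	// Credentials in the DSN, TLS enabled via the query parameters described above.
	dsn := "bolt://neo4j:password@localhost:7687?timeout=30&tls=true&tls_no_verify=false"

	db, err := sql.Open("neo4j-bolt", dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// The sql interface only supports positional parameters; Neo4j named
	// parameters require the encoding package (see above), so this sketch
	// sticks to a parameter-free Cypher statement.
	rows, err := db.Query(`MATCH (n) RETURN count(n)`)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var count int64
		if err := rows.Scan(&count); err != nil {
			log.Fatal(err)
		}
		log.Printf("nodes: %d", count)
	}
}
```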
Package rdsdata provides the API client, operations, and parameter types for AWS RDS DataService. Amazon RDS provides an HTTP endpoint to run SQL statements on an Amazon Aurora DB cluster. To run these statements, you use the RDS Data API (Data API). Data API is available with the following types of Aurora databases: Aurora PostgreSQL - Serverless v2, Serverless v1, and provisioned Aurora MySQL - Serverless v1 only For more information about the Data API, see Using RDS Data API in the Amazon Aurora User Guide.
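A hedged sketch of running a statement over the Data API HTTP endpoint with the AWS SDK for Go v2; the cluster ARN, secret ARN and database name are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/rdsdata"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := rdsdata.NewFromConfig(cfg)

	// Run a SQL statement against an Aurora cluster; the ARNs are placeholders.
	out, err := client.ExecuteStatement(ctx, &rdsdata.ExecuteStatementInput{
		ResourceArn: aws.String("arn:aws:rds:us-east-1:123456789012:cluster:my-cluster"),
		SecretArn:   aws.String("arn:aws:secretsmanager:us-east-1:123456789012:secret:my-secret"),
		Database:    aws.String("mydb"),
		Sql:         aws.String("SELECT 1"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("records returned:", len(out.Records))
}
```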
Package ora implements an Oracle database driver. ### Golang Oracle Database Driver ### #### TL;DR; just use it #### Call stored procedure with OUT parameters: An Oracle database may be accessed through the database/sql(http://golang.org/pkg/database/sql) package or through the ora package directly. database/sql offers connection pooling, thread safety, a consistent API to multiple database technologies and a common set of Go types. The ora package offers additional features including pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. The ora package is written with the Oracle Call Interface (OCI) C-language libraries provided by Oracle. The OCI libraries are a standard for client application communication and driver communication with Oracle databases. The ora package has been verified to work with: * Oracle Standard 11g (11.2.0.4.0), Linux x86_64 (RHEL6) * Oracle Enterprise 12c (12.1.0.1.0), Windows 8.1 and AMD64. --- * [Installation](https://github.com/rana/ora#installation) * [Data Types](https://github.com/rana/ora#data-types) * [SQL Placeholder Syntax](https://github.com/rana/ora#sql-placeholder-syntax) * [Working With The Sql Package](https://github.com/rana/ora#working-with-the-sql-package) * [Working With The Oracle Package Directly](https://github.com/rana/ora#working-with-the-oracle-package-directly) * [Logging](https://github.com/rana/ora#logging) * [Test Database Setup](https://github.com/rana/ora#test-database-setup) * [Limitations](https://github.com/rana/ora#limitations) * [License](https://github.com/rana/ora#license) * [API Reference](http://godoc.org/github.com/rana/ora#pkg-index) * [Examples](./examples) --- Minimum requirements are Go 1.3 with CGO enabled, a GCC C compiler, and Oracle 11g (11.2.0.4.0) or Oracle Instant Client (11.2.0.4.0). Install Oracle or Oracle Instant Client. Copy the [oci8.pc](contrib/oci8.pc) from the `contrib` folder (or the one for your system, maybe tailored to your specific locations) to a folder in `$PKG_CONFIG_PATH` or a system folder, such as The ora package has no external Go dependencies and is available on GitHub and gopkg.in: *WARNING*: If you have Oracle Instant Client 11.2, you'll need to add "=lnnz11" to the list of linked libs! Otherwise, you may encounter "undefined reference to `nzosSCSP_SetCertSelectionParams' " errors. Oracle Instant Client 12.1 does not need this. The ora package supports all built-in Oracle data types. The supported Oracle built-in data types are NUMBER, BINARY_DOUBLE, BINARY_FLOAT, FLOAT, DATE, TIMESTAMP, TIMESTAMP WITH TIME ZONE, TIMESTAMP WITH LOCAL TIME ZONE, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND, CHAR, NCHAR, VARCHAR, VARCHAR2, NVARCHAR2, LONG, CLOB, NCLOB, BLOB, LONG RAW, RAW, ROWID and BFILE. SYS_REFCURSOR is also supported. Oracle does not provide a built-in boolean type. Oracle provides a single-byte character type. A common practice is to define two single-byte characters which represent true and false. The ora package adopts this approach. The oracle package associates a Go bool value to a Go rune and sends and receives the rune to a CHAR(1 BYTE) column or CHAR(1 CHAR) column. The default false rune is zero '0'. The default true rune is one '1'. The bool rune association may be configured or disabled when directly using the ora package but not with the database/sql package. 
Within a SQL string a placeholder may be specified to indicate where a Go variable is placed. The SQL placeholder is an Oracle identifier, from 1 to 30 characters, prefixed with a colon (:). For example: Placeholders within a SQL statement are bound by position. The actual name is not used by the ora package driver e.g., placeholder names :c1, :1, or :xyz are treated equally. The `database/sql` package provides a LastInsertId method to return the last inserted row's id. Oracle does not provide such functionality, but if you append `... RETURNING col /*LastInsertId*/` to your SQL, then it will be presented as LastInsertId. Note that you have to mark with a `/*LastInsertId*/` (case insensitive) your `RETURNING` part, to allow ora to return the last column as `LastInsertId()`. That column must fit in `int64`, though! You may access an Oracle database through the database/sql package. The database/sql package offers a consistent API across different databases, connection pooling, thread safety and a set of common Go types. database/sql makes working with Oracle straight-forward. The ora package implements interfaces in the database/sql/driver package enabling database/sql to communicate with an Oracle database. Using database/sql ensures you never have to call the ora package directly. When using database/sql, the mapping between Go types and Oracle types may be changed slightly. The database/sql package has strict expectations on Go return types. The Go-to-Oracle type mapping for database/sql is: The "ora" driver is automatically registered for use with sql.Open, but you can call ora.SetCfg to set the used configuration options including statement configuration and Rset configuration. When configuring the driver for use with database/sql, keep in mind that database/sql has strict Go type-to-Oracle type mapping expectations. The ora package allows programming with pointers, slices, nullable types, numerics of various sizes, Oracle-specific types, Go return type configuration, and Oracle abstractions such as environment, server and session. When working with the ora package directly, the API is slightly different than database/sql. When using the ora package directly, the mapping between Go types and Oracle types may be changed. The Go-to-Oracle type mapping for the ora package is: An example of using the ora package directly: Pointers may be used to capture out-bound values from a SQL statement such as an insert or stored procedure call. For example, a numeric pointer captures an identity value: A string pointer captures an out parameter from a stored procedure: Slices may be used to insert multiple records with a single insert statement: The ora package provides nullable Go types to support DML operations such as insert and select. The nullable Go types provided by the ora package are Int64, Int32, Int16, Int8, Uint64, Uint32, Uint16, Uint8, Float64, Float32, Time, IntervalYM, IntervalDS, String, Bool, Binary and Bfile. For example, you may insert nullable Strings and select nullable Strings: The `Stmt.Prep` method is variadic accepting zero or more `GoColumnType` which define a Go return type for a select-list column. For example, a Prep call can be configured to return an int64 and a nullable Int64 from the same column: Go numerics of various sizes are supported in DML operations. The ora package supports int64, int32, int16, int8, uint64, uint32, uint16, uint8, float64 and float32. 
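A hedged sketch of the `RETURNING ... /*LastInsertId*/` convention described above, used through database/sql. The import path, DSN format, table and column names are placeholders, not taken from the original text.

```go
package main

import (
	"database/sql"
	"log"

	_ "gopkg.in/rana/ora.v4" // assumed import path; registers the "ora" driver
)

func main() {
	// DSN and credentials are placeholders.
	db, err := sql.Open("ora", "scott/tiger@//localhost:1521/orcl")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Oracle has no native LastInsertId; marking the RETURNING clause with
	// /*LastInsertId*/ lets the driver surface the returned column there.
	// Placeholders are bound by position, so the name :1 is arbitrary.
	res, err := db.Exec(
		`INSERT INTO emp (ename) VALUES (:1) RETURNING empno /*LastInsertId*/`,
		"KING")
	if err != nil {
		log.Fatal(err)
	}
	id, err := res.LastInsertId()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("inserted row id: %d", id)
}
```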
For example, you may insert a uint16 and select numerics of various sizes: If a non-nullable type is defined for a nullable column returning null, the Go type's zero value is returned. GoColumnTypes defined by the ora package are: When Stmt.Prep doesn't receive a GoColumnType, or receives an incorrect GoColumnType, the default value defined in RsetCfg is used. EnvCfg, SrvCfg, SesCfg, StmtCfg and RsetCfg are the main configuration structs. EnvCfg configures aspects of an Env. SrvCfg configures aspects of a Srv. SesCfg configures aspects of a Ses. StmtCfg configures aspects of a Stmt. RsetCfg configures aspects of Rset. StmtCfg and RsetCfg have the most options to configure. RsetCfg defines the default mapping between an Oracle select-list column and a Go type. StmtCfg may be set in an EnvCfg, SrvCfg, SesCfg and StmtCfg. RsetCfg may be set in a Stmt. EnvCfg.StmtCfg, SrvCfg.StmtCfg, SesCfg.StmtCfg may optionally be specified to configure a statement. If StmtCfg isn't specified default values are applied. EnvCfg.StmtCfg, SrvCfg.StmtCfg, SesCfg.StmtCfg cascade to new descendent structs. When ora.OpenEnv() is called a specified EnvCfg is used or a default EnvCfg is created. Creating a Srv with env.OpenSrv() will use SrvCfg.StmtCfg if it is specified; otherwise, EnvCfg.StmtCfg is copied by value to SrvCfg.StmtCfg. Creating a Ses with srv.OpenSes() will use SesCfg.StmtCfg if it is specified; otherwise, SrvCfg.StmtCfg is copied by value to SesCfg.StmtCfg. Creating a Stmt with ses.Prep() will use SesCfg.StmtCfg if it is specified; otherwise, a new StmtCfg with default values is set on the Stmt. Call Stmt.Cfg() to change a Stmt's configuration. An Env may contain multiple Srv. A Srv may contain multiple Ses. A Ses may contain multiple Stmt. A Stmt may contain multiple Rset. Setting a RsetCfg on a StmtCfg does not cascade through descendent structs. Configuration of Stmt.Cfg takes effect prior to calls to Stmt.Exe and Stmt.Qry; consequently, any updates to Stmt.Cfg after a call to Stmt.Exe or Stmt.Qry are not observed. One configuration scenario may be to set a server's select statements to return nullable Go types by default: Another scenario may be to configure the runes mapped to bool values: Oracle-specific types offered by the ora package are ora.Rset, ora.IntervalYM, ora.IntervalDS, ora.Raw, ora.Lob and ora.Bfile. ora.Rset represents an Oracle SYS_REFCURSOR. ora.IntervalYM represents an Oracle INTERVAL YEAR TO MONTH. ora.IntervalDS represents an Oracle INTERVAL DAY TO SECOND. ora.Raw represents an Oracle RAW or LONG RAW. ora.Lob may represent an Oracle BLOB or Oracle CLOB. And ora.Bfile represents an Oracle BFILE. ROWID columns are returned as strings and don't have a unique Go type. #### LOBs The default for SELECTing [BC]LOB columns is a safe Bin or S, which means all the contents of the LOB are slurped into memory and returned as a []byte or string. The DefaultLOBFetchLen says LOBs are prefetched only in a minimal way, to minimize extra memory usage - you can override this using `stmt.SetCfg(stmt.Cfg().SetLOBFetchLen(100))`. If you want more control, you can use ora.L in Prep, Qry or `ses.SetCfg(ses.Cfg().SetBlob(ora.L))`. But keep in mind that Oracle restricts the use of LOBs: it is forbidden to do ANYTHING while reading the LOB! No other query, no exec, no close of the Rset - even *advancing* to the next record in the result set is forbidden! Failing to adhere to these rules results in "Invalid handle" and ORA-03127 errors.
You cannot start reading another LOB until you have finished reading the previous LOB, not even in the same row! Failing this results in ORA-24804! For examples, see [z_lob_test.go](z_lob_test.go). #### Rset Rset is used to obtain Go values from a SQL select statement. Methods Rset.Next, Rset.NextRow, and Rset.Len are available. Fields Rset.Row, Rset.Err, Rset.Index, and Rset.ColumnNames are also available. The Next method attempts to load data from an Oracle buffer into Row, returning true when successful. When no data is available, or if an error occurs, Next returns false, setting Row to nil. Any error in Next is assigned to Err. Calling Next increments Index and method Len returns the total number of rows processed. The NextRow method is convenient for returning a single row. NextRow calls Next and returns Row. ColumnNames returns the names of columns defined by the SQL select statement. Rset has two usages. Rset may be returned from Stmt.Qry when prepared with a SQL select statement: Or, *Rset may be passed to Stmt.Exe when prepared with a stored procedure accepting an OUT SYS_REFCURSOR parameter: Stored procedures with multiple OUT SYS_REFCURSOR parameters enable a single Exe call to obtain multiple Rsets: The types of values assigned to Row may be configured in StmtCfg.Rset. For configuration to take effect, assign StmtCfg.Rset prior to calling Stmt.Qry or Stmt.Exe. Rset prefetching may be controlled by StmtCfg.PrefetchRowCount and StmtCfg.PrefetchMemorySize. PrefetchRowCount works in coordination with PrefetchMemorySize. When PrefetchRowCount is set to zero only PrefetchMemorySize is used; otherwise, the minimum of PrefetchRowCount and PrefetchMemorySize is used. The default uses a PrefetchMemorySize of 134MB. Opening and closing Rsets is managed internally. Rset does not have an Open method or Close method. IntervalYM may be inserted and selected: IntervalDS may be inserted and selected: Transactions on an Oracle server are supported. DML statements auto-commit unless a transaction has started: Ses.PrepAndExe, Ses.PrepAndQry, Ses.Ins, Ses.Upd, and Ses.Sel are convenient one-line methods. Ses.PrepAndExe offers a convenient one-line call to Ses.Prep and Stmt.Exe. Ses.PrepAndQry offers a convenient one-line call to Ses.Prep and Stmt.Qry. Ses.Ins composes, prepares and executes a sql INSERT statement. Ses.Ins is useful when you have to create and maintain a simple INSERT statement with a long list of columns. As table columns are added and dropped over the lifetime of a table Ses.Ins is easy to read and revise. Ses.Upd composes, prepares and executes a sql UPDATE statement. Ses.Upd is useful when you have to create and maintain a simple UPDATE statement with a long list of columns. As table columns are added and dropped over the lifetime of a table Ses.Upd is easy to read and revise. Ses.Sel composes, prepares and queries a sql SELECT statement. Ses.Sel is useful when you have to create and maintain a simple SELECT statement with a long list of columns that have non-default GoColumnTypes. As table columns are added and dropped over the lifetime of a table Ses.Sel is easy to read and revise. The Ses.Ping method checks whether the client's connection to an Oracle server is valid. A call to Ping requires an open Ses. Ping will return a nil error when the connection is fine: The Srv.Version method is available to obtain the Oracle server version.
A call to Version requires an open Ses: Further code examples are available in the [example file](https://github.com/rana/ora/blob/master/z_example_test.go), test files and [samples folder](https://github.com/rana/ora/tree/master/samples). The ora package provides a simple ora.Logger interface for logging. Logging is disabled by default. Specify one of three optional built-in logging packages to enable logging; or, use your own logging package. ora.Cfg().Log offers various options to enable or disable logging of specific ora driver methods. For example: To use the standard Go log package: which produces a sample log of: Messages are prefixed with 'ORA I' for information or 'ORA E' for an error. The log package is configured to write to os.Stderr by default. Use the ora/lg.Std type to configure an alternative io.Writer. To use the glog package: which produces a sample log of: To use the log15 package: which produces a sample log of: See https://github.com/rana/ora/tree/master/samples/lg15/main.go for sample code which uses the log15 package. Tests are available and require some setup. Setup varies depending on whether the Oracle server is configured as a container database or non-container database. It's simpler to set up a non-container database. An example for each setup is explained. Non-container test database setup steps: Container test database setup steps: Some helpful SQL maintenance statements: Run the tests. database/sql method Stmt.QueryRow is not supported. Go 1.6 introduced stricter cgo (call C from Go) rules, and introduced runtime checks. This is good, as the possibility of C code corrupting Go code is almost completely eliminated, but it also means a severe growth in call overhead. [Sometimes](https://groups.google.com/forum/#!topic/golang-nuts/ccMkPG6Bi5k) this can be 22x the Go 1.5.3 call time! So if you need performance more than correctness, start your programs with the "GODEBUG=cgocheck=0" environment setting. Copyright 2017 Rana Ian, Tamás Gulácsi. All rights reserved. Use of this source code is governed by The MIT License found in the accompanying LICENSE file.
Package restlayer is an API framework heavily inspired by the excellent Python Eve (http://python-eve.org/). It helps you create a comprehensive, customizable, and secure REST (graph) API on top of pluggable backend storages with no boilerplate code, so you can focus on your business logic. Implemented as a net/http middleware, it plays well with other middleware like CORS (http://github.com/rs/cors) and is net/context aware thanks to xhandler. REST Layer is an opinionated framework. Unlike many API frameworks, you don’t directly control the routing and you don’t have to write handlers. You just define resources and sub-resources with a schema, and the framework automatically figures out what routes to generate behind the scenes. You don’t have to take care of the HTTP headers and response, JSON encoding, etc. either. REST Layer handles HTTP conditional requests, caching, and integrity checking for you. A powerful and extensible validation engine makes sure that data comes pre-validated to your custom storage handlers. Generic resource handlers for MongoDB (http://github.com/rs/rest-layer-mongo), ElasticSearch (http://github.com/rs/rest-layer-es) and other databases are also available, so you have little to no code to write to make the whole system work. Moreover, REST Layer lets you create a graph API by linking resources between them. Thanks to its advanced field selection syntax (and upcoming support for GraphQL), you can gather resources and their dependencies in a single request, saving you from costly network roundtrips. REST Layer is composed of several sub-packages: See https://github.com/rs/rest-layer/blob/master/README.md for full REST Layer documentation.
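A hedged, minimal sketch of the "define resources with a schema and let the framework route" idea, assuming the resource/rest/schema sub-packages and the in-memory example storer from rest-layer-mem; field names are illustrative.

```go
package main

import (
	"log"
	"net/http"

	mem "github.com/rs/rest-layer-mem" // assumed example in-memory storer
	"github.com/rs/rest-layer/resource"
	"github.com/rs/rest-layer/rest"
	"github.com/rs/rest-layer/schema"
)

// A resource is just a schema; validation happens before your storer is called.
var user = schema.Schema{
	Fields: schema.Fields{
		"id":   {Required: true, Validator: &schema.String{}},
		"name": {Required: true, Validator: &schema.String{}},
	},
}

func main() {
	// Bind the resource; routes are generated behind the scenes.
	index := resource.NewIndex()
	index.Bind("users", user, mem.NewHandler(), resource.DefaultConf)

	api, err := rest.NewHandler(index)
	if err != nil {
		log.Fatal(err)
	}
	http.Handle("/", api)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```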
GravitonDB in short is "ZFS for key-value stores". GravitonDB is a pure Go key/value store having features unmatched by other software (such as BoltDB, BerkeleyDB, MySQL, PostgreSQL, etc.). The goal of the project is to provide a simple, fast, reliable, versioned, authenticated database for projects which require such features. Since GravitonDB is meant to be used as such a low-level piece of functionality, simplicity is key. The API will be small and only focus on getting values and setting values. That's it. E.g., minimal code to write and read back a value (error checking is skipped): E.g., snapshots, see github.com/deroproject/gravitondb/examples/snapshot_example//snapshot_example.go The design enables infinite trees with infinite snapshots. The design enables use cases such as deduplicating backups and blockchains which can prove their data (both state and content) to users, etc.
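The doc above references a minimal write-and-read example with error checking skipped; here is a hedged sketch of what that looks like, assuming the store/snapshot/tree API names (NewMemStore, LoadSnapshot, GetTree, Put, Get, Commit) as I understand them.

```go
package main

import (
	"fmt"

	"github.com/deroproject/graviton"
)

func main() {
	// Error checking is skipped for brevity, mirroring the note above.
	store, _ := graviton.NewMemStore() // or NewDiskStore("/tmp/graviton_db")
	ss, _ := store.LoadSnapshot(0)     // 0 loads the most recent snapshot
	tree, _ := ss.GetTree("root")      // trees are created on demand

	tree.Put([]byte("key"), []byte("value")) // write a key/value pair
	graviton.Commit(tree)                    // commit creates a new versioned snapshot

	value, _ := tree.Get([]byte("key")) // read the value back
	fmt.Printf("value: %s\n", value)
}
```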
Package guardian. Go-Guardian is a golang library that provides a simple, clean, and idiomatic way to create powerful modern API and web authentication. Go-Guardian's sole purpose is to authenticate requests, which it does through an extensible set of authentication methods known as strategies. Go-Guardian does not mount routes or assume any particular database schema, which maximizes flexibility and allows decisions to be made by the developer. The API is simple: you provide go-guardian a request to authenticate, and go-guardian invokes strategies to authenticate the end-user request. Strategies provide callbacks for controlling what occurs when authentication should succeed or fail. Why Go-Guardian? When building a modern application, you don't want to implement an authentication module from scratch; you want to focus on building awesome software. go-guardian is here to help with that. Here are a few bullet point reasons you might like to try it out:
Package monkit is a flexible code instrumenting and data collection library. I'm going to try and sell you as fast as I can on this library. Example usage We've got tools that capture distribution information (including quantiles) about int64, float64, and bool types. We have tools that capture data about events (we've got meters for deltas, rates, etc). We have rich tools for capturing information about tasks and functions, and literally anything that can generate a name and a number. Almost just as importantly, the amount of boilerplate and code you have to write to get these features is very minimal. Data that's hard to measure probably won't get measured. This data can be collected and sent to Graphite (http://graphite.wikidot.com/) or any other time-series database. Here's a selection of live stats from one of our storage nodes: This library generates call graphs of your live process for you. These call graphs aren't created through sampling. They're full pictures of all of the interesting functions you've annotated, along with quantile information about their successes, failures, how often they panic, return an error (if so instrumented), how many are currently running, etc. The data can be returned in dot format, in json, in text, and can be about just the functions that are currently executing, or all the functions the monitoring system has ever seen. Here's another example of one of our production nodes: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/callgraph2.png This library generates trace graphs of your live process for you directly, without requiring standing up some tracing system such as Zipkin (though you can do that too). Inspired by Google's Dapper (http://research.google.com/pubs/pub36356.html) and Twitter's Zipkin (http://zipkin.io), we have process-internal trace graphs, triggerable by a number of different methods. You get this trace information for free whenever you use Go contexts (https://blog.golang.org/context) and function monitoring. The output formats are svg and json. Additionally, the library supports trace observation plugins, and we've written a plugin that sends this data to Zipkin (http://github.com/spacemonkeygo/monkit-zipkin). https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/trace.png Before our crazy Go rewrite of everything (https://www.spacemonkey.com/blog/posts/go-space-monkey) (and before we had even seen Google's Dapper paper), we were a Python shop, and all of our "interesting" functions were decorated with a helper that collected timing information and sent it to Graphite. When we transliterated to Go, we wanted to preserve that functionality, so the first version of our monitoring package was born. Over time it started to get janky, especially as we found Zipkin and started adding tracing functionality to it. We rewrote all of our Go code to use Google contexts, and then realized we could get call graph information. We decided a refactor and then an all-out rethinking of our monitoring package was best, and so now we have this library. Sometimes you really want callstack contextual information without having to pass arguments through everything on the call stack. In other languages, many people implement this with thread-local storage. Example: let's say you have written a big system that responds to user requests. All of your libraries log using your log library. 
During initial development everything is easy to debug, since there's low user load, but now you've scaled and there's OVER TEN USERS and it's kind of hard to tell what log lines were caused by what. Wouldn't it be nice to add request ids to all of the log lines kicked off by that request? Then you could grep for all log lines caused by a specific request id. Geez, it would suck to have to pass all contextual debugging information through all of your callsites. Google solved this problem by always passing a context.Context interface through from call to call. A Context is basically just a mapping of arbitrary keys to arbitrary values that users can add new values for. This way if you decide to add a request context, you can add it to your Context and then all callsites that descend from that place will have the new data in their contexts. It is admittedly very verbose to add contexts to every function call. Painfully so. I hope to write more about it in the future, but Google also wrote up their thoughts about it (https://blog.golang.org/context), which you can go read. For now, just swallow your disgust and let's keep moving. Let's make a super simple Varnish (https://www.varnish-cache.org/) clone. Open up gedit! (Okay just kidding, open whatever text editor you want.) For this motivating program, we won't even add the caching, though there are comments for where to add it if you'd like. For now, let's just make a barebones system that will proxy HTTP requests. We'll call it VLite, but maybe we should call it VReallyLite. Run and build this and open localhost:8080 in your browser. If you use the default proxy target, it should inform you that the world hasn't been destroyed yet. The first thing you'll want to do is add the small amount of boilerplate to make the instrumentation we're going to add to your process observable later. Import the basic monkit packages: and then register environmental statistics and kick off a goroutine in your main method to serve debug requests: Rebuild, and then check out localhost:9000/stats (or localhost:9000/stats/json, if you prefer) in your browser! Remember what I said about Google's contexts (https://blog.golang.org/context)? It might seem a bit overkill for such a small project, but it's time to add them. To help out here, I've created a library that constructs contexts for you for incoming HTTP requests. Nothing that's about to happen requires my webhelp library (https://godoc.org/github.com/jtolds/webhelp), but here is the code now refactored to receive and pass contexts through our two per-request calls. You can create a new context for a request however you want. One reason to use something like webhelp is that the cancelation feature of Contexts is hooked up to the HTTP request getting canceled. Let's start to get statistics about how many requests we receive! First, this package (main) will need to get a monitoring Scope. Add this global definition right after all your imports, much like you'd create a logger with many logging libraries: Now, make the error return value of HandleHTTP named (so, (err error)), and add this defer line as the very first instruction of HandleHTTP: Let's also add the same line (albeit modified for the lack of error) to Proxy, replacing &err with nil: You should now have something like: We'll unpack what's going on here, but for now: For this new funcs dataset, if you want a graph, you can download a dot graph at localhost:9000/funcs/dot and json information from localhost:9000/funcs/json.
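A hedged sketch of the boilerplate just described: register environmental statistics, serve the debug endpoint, declare a package-level Scope, and add the defer lines to the two per-request calls. The v3 import paths are assumptions (older releases live under different paths), and the VLite handler bodies are stubs.

```go
package main

import (
	"context"
	"net/http"

	monkit "github.com/spacemonkeygo/monkit/v3"
	"github.com/spacemonkeygo/monkit/v3/environment"
	"github.com/spacemonkeygo/monkit/v3/present"
)

// Package-level monitoring Scope, declared once per package, much like a logger.
var mon = monkit.Package()

func main() {
	// Register environmental statistics and serve the debug endpoints
	// (localhost:9000/stats, /funcs/dot, /trace/svg, ...).
	environment.Register(monkit.Default)
	go http.ListenAndServe("localhost:9000", present.HTTP(monkit.Default))

	// ... start the VLite proxy (or your own application) here ...
}

func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
	// The named return value lets the Task observe errors and panics.
	defer mon.Task()(&ctx)(&err)

	return Proxy(ctx, w, r)
}

func Proxy(ctx context.Context, w http.ResponseWriter, r *http.Request) error {
	defer mon.Task()(&ctx)(nil) // no error to observe here, so pass nil
	// ... proxy the request to the target ...
	return nil
}
```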
You should see something like: with a similar report for the Proxy method, or a graph like: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/handlehttp.png This data reports the overall callgraph of execution for known traces, along with how many of each function are currently running, the most running concurrently (the highwater), how many were successful along with quantile timing information, how many errors there were (with quantile timing information if applicable), and how many panics there were. Since the Proxy method isn't capturing a returned err value, and since HandleHTTP always returns nil, this example won't ever have failures. If you're wondering about the success count being higher than you expected, keep in mind your browser probably requested a favicon.ico. Cool, eh? How it works is an interesting line of code - there are three function calls. If you look at the Go spec, all of the function calls will run at the time the function starts except for the very last one. The first function call, mon.Task(), creates or looks up a wrapper around a Func. You could get this yourself by requesting mon.Func() inside of the appropriate function or mon.FuncNamed(). Both mon.Task() and mon.Func() are inspecting runtime.Caller to determine the name of the function. Because this is a heavy operation, you can actually store the result of mon.Task() and reuse it if you prefer: instead of calling mon.Task() inline on every invocation, you can store it once and reuse the stored value (see the sketch after this section), which is more performant every time after the first time. runtime.Caller only gets called once. Careful! Don't use the same myFuncMon in different functions unless you want to screw up your statistics! The second function call starts all the various stop watches and bookkeeping to keep track of the function. It also mutates the context pointer it's given to extend the context with information about what current span (in Zipkin parlance) is active. Notably, you *can* pass nil for the context if you really don't want a context. You just lose callgraph information. The last function call stops all the stop watches and makes a note of any observed errors or panics (it repanics after observing them). Turns out, we don't even need to change our program anymore to get rich tracing information! Open your browser and go to localhost:9000/trace/svg?regex=HandleHTTP. It won't load, and in fact, it's waiting for you to open another tab and refresh localhost:8080 again. Once you retrigger the actual application behavior, the trace regex will capture a trace starting on the first function that matches the supplied regex, and return an svg. Go back to your first tab, and you should see a relatively uninteresting but super promising svg. Let's make the trace more interesting. Add a short time.Sleep call to your HandleHTTP method, rebuild, and restart. Load localhost:8080, then start a new request to your trace URL, then reload localhost:8080 again. Flip back to your trace, and you should see that the Proxy method only takes a portion of the time of HandleHTTP! https://cdn.rawgit.com/spacemonkeygo/monkit/master/images/trace.svg There are multiple ways to select a trace. You can select by regex using the preselect method (default), which first evaluates the regex on all known functions for sanity checking. Sometimes, however, the function you want to trace may not yet be known to monkit, in which case you'll want to turn preselection off. You may have a bad regex, or you may be in this case if you get the error "Bad Request: regex preselect matches 0 functions."
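A hedged sketch of the "store mon.Task() once" optimization referenced above, using the myFuncMon name from the text; the handler body and package name are illustrative.

```go
package vlite

import (
	"context"
	"net/http"

	monkit "github.com/spacemonkeygo/monkit/v3" // assumed v3 import path
)

var mon = monkit.Package()

// Storing the Task once avoids repeated runtime.Caller lookups. Don't share
// myFuncMon between different functions, or their statistics will be merged.
var myFuncMon = mon.Task()

func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
	defer myFuncMon(&ctx)(&err) // same behavior as mon.Task()(&ctx)(&err), cheaper after the first call
	// ... handle the request ...
	return nil
}
```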
Another way to select a trace is by providing a trace id, which we'll get to next! Make sure to check out what the addition of the time.Sleep call did to the other reports. It's easy to write plugins for monkit! Check out our first one that exports data to Zipkin (http://zipkin.io/)'s Scribe API: https://github.com/spacemonkeygo/monkit-zipkin We plan to have more (for HTrace, OpenTracing, etc, etc), soon!
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `esquery`, and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `esquery` will generate an array even if there's only one query object.
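A hedged sketch of the method-chaining API using the "user": "Kimchy" term query quoted above; the module path, index name and client setup are assumptions for illustration.

```go
package main

import (
	"log"

	"github.com/aquasecurity/esquery" // assumed module path
	"github.com/elastic/go-elasticsearch/v7"
)

func main() {
	es, err := elasticsearch.NewDefaultClient()
	if err != nil {
		log.Fatal(err)
	}

	// Statically-typed equivalent of {"query":{"term":{"user":"Kimchy"}}}
	// (generated in the full, non-short form described above).
	res, err := esquery.Search().
		Query(esquery.Term("user", "Kimchy")).
		Run(es,
			es.Search.WithIndex("users"),
			es.Search.WithPretty(),
		)
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()
	log.Println(res.Status()) // the response is returned as-is from the official client
}
```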
Package fetchbot provides a simple and flexible web crawler that follows the robots.txt policies and crawl delays. It is very much a rewrite of gocrawl (https://github.com/PuerkitoBio/gocrawl) with a simpler API, less features built-in, but at the same time more flexibility. As for Go itself, sometimes less is more! To install, simply run in a terminal: The package has a single external dependency, robotstxt (https://github.com/temoto/robotstxt). It also integrates code from the iq package (https://github.com/kylelemons/iq). The API documentation is available on godoc.org (http://godoc.org/github.com/PuerkitoBio/fetchbot). The following example (taken from /example/short/main.go) shows how to create and start a Fetcher, one way to send commands, and how to stop the fetcher once all commands have been handled. A more complex and complete example can be found in the repository, at /example/full/. Basically, a Fetcher is an instance of a web crawler, independent of other Fetchers. It receives Commands via the Queue, executes the requests, and calls a Handler to process the responses. A Command is an interface that tells the Fetcher which URL to fetch, and which HTTP method to use (i.e. "GET", "HEAD", ...). A call to Fetcher.Start() returns the Queue associated with this Fetcher. This is the thread-safe object that can be used to send commands, or to stop the crawler. Both the Command and the Handler are interfaces, and may be implemented in various ways. They are defined like so: A Context is a struct that holds the Command and the Queue, so that the Handler always knows which Command initiated this call, and has a handle to the Queue. A Handler is similar to the net/http Handler, and middleware-style combinations can be built on top of it. A HandlerFunc type is provided so that simple functions with the right signature can be used as Handlers (like net/http.HandlerFunc), and there is also a multiplexer Mux that can be used to dispatch calls to different Handlers based on some criteria. The Fetcher recognizes a number of interfaces that the Command may implement, for more advanced needs. * BasicAuthProvider: Implement this interface to specify the basic authentication credentials to set on the request. * CookiesProvider: If the Command implements this interface, the provided Cookies will be set on the request. * HeaderProvider: Implement this interface to specify the headers to set on the request. * ReaderProvider: Implement this interface to set the body of the request, via an io.Reader. * ValuesProvider: Implement this interface to set the body of the request, as form-encoded values. If the Content-Type is not specifically set via a HeaderProvider, it is set to "application/x-www-form-urlencoded". ReaderProvider and ValuesProvider should be mutually exclusive as they both set the body of the request. If both are implemented, the ReaderProvider interface is used. * Handler: Implement this interface if the Command's response should be handled by a specific callback function. By default, the response is handled by the Fetcher's Handler, but if the Command implements this, this handler function takes precedence and the Fetcher's Handler is ignored. Since the Command is an interface, it can be a custom struct that holds additional information, such as an ID for the URL (e.g. from a database), or a depth counter so that the crawling stops at a certain depth, etc. 
For basic commands that don't require additional information, the package provides the Cmd struct that implements the Command interface. This is the Command implementation used when using the various Queue.SendString\* methods. There is also a convenience HandlerCmd struct for the commands that should be handled by a specific callback function. It is a Command with a Handler interface implementation. The Fetcher has a number of fields that provide further customization: * HttpClient : By default, the Fetcher uses the net/http default Client to make requests. A different client can be set on the Fetcher.HttpClient field. * CrawlDelay : That value is used only if there is no delay specified by the robots.txt of a given host. * UserAgent : Sets the user agent string to use for the requests and to validate against the robots.txt entries. * WorkerIdleTTL : Sets the duration that a worker goroutine can wait without receiving new commands to fetch. If the idle time-to-live is reached, the worker goroutine is stopped and its resources are released. This can be especially useful for long-running crawlers. * AutoClose : If true, closes the queue automatically once the number of active hosts reach 0. * DisablePoliteness : If true, ignores the robots.txt policies of the hosts. What fetchbot doesn't do - especially compared to gocrawl - is that it doesn't keep track of already visited URLs, and it doesn't normalize the URLs. This is outside the scope of this package - all commands sent on the Queue will be fetched. Normalization can easily be done (e.g. using https://github.com/PuerkitoBio/purell) before sending the Command to the Fetcher. How to keep track of visited URLs depends on the use-case of the specific crawler, but for an example, see /example/full/main.go. The BSD 3-Clause license (http://opensource.org/licenses/BSD-3-Clause), the same as the Go language. The iq_slice.go file is under the CDDL-1.0 license (details in the source file).
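A hedged, minimal Fetcher sketch along the lines of the short example referenced above: a HandlerFunc, Start returning the Queue, and commands sent with SendStringGet. The URLs and handler logic are illustrative.

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/PuerkitoBio/fetchbot"
)

func main() {
	// The Handler is called for every response (and for request errors).
	h := fetchbot.HandlerFunc(func(ctx *fetchbot.Context, res *http.Response, err error) {
		if err != nil {
			fmt.Printf("[ERR] %s %s - %v\n", ctx.Cmd.Method(), ctx.Cmd.URL(), err)
			return
		}
		fmt.Printf("[%d] %s %s\n", res.StatusCode, ctx.Cmd.Method(), ctx.Cmd.URL())
	})

	f := fetchbot.New(h)

	// Start returns the thread-safe Queue used to enqueue Commands.
	q := f.Start()
	q.SendStringGet("http://golang.org", "http://golang.org/doc")

	// Close the queue once all commands have been enqueued.
	q.Close()
}
```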
<h1 align="center">IrisAdmin</h1> [![Build Status](https://app.travis-ci.com/snowlyg/iris-admin.svg?branch=master)](https://app.travis-ci.com/snowlyg/iris-admin) [![LICENSE](https://img.shields.io/github/license/snowlyg/iris-admin)](https://github.com/snowlyg/iris-admin/blob/master/LICENSE) [![go doc](https://godoc.org/github.com/snowlyg/iris-admin?status.svg)](https://godoc.org/github.com/snowlyg/iris-admin) [![go report](https://goreportcard.com/badge/github.com/snowlyg/iris-admin)](https://goreportcard.com/badge/github.com/snowlyg/iris-admin) [![Build Status](https://codecov.io/gh/snowlyg/iris-admin/branch/master/graph/badge.svg)](https://codecov.io/gh/snowlyg/iris-admin) [简体中文](./README.md) | English #### Project url [GITHUB](https://github.com/snowlyg/iris-admin) | [GITEE](https://gitee.com/snowlyg/iris-admin) **** > This project just for learning golang, welcome to give your suggestions! #### Documentation - [IRIS-ADMIN-DOC](https://doc.snowlyg.com) - [IRIS V12 document for chinese](https://github.com/snowlyg/iris/wiki) - [godoc](https://pkg.go.dev/github.com/snowlyg/iris-admin?utm_source=godoc) [![Gitter](https://badges.gitter.im/iris-go-tenancy/community.svg)](https://gitter.im/iris-go-tenancy/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge) [![Join the chat at https://gitter.im/iris-go-tenancy/iris-admin](https://badges.gitter.im/iris-go-tenancy/iris-admin.svg)](https://gitter.im/iris-go-tenancy/iris-admin?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) #### BLOG - [REST API with iris-go web framework](https://blog.snowlyg.com/iris-go-api-1/) - [How to user iris-go with casbin](https://blog.snowlyg.com/iris-go-api-2/) --- #### Getting started - Get master package , Notice must use `master` version. ```sh ``` #### Program introduction ##### The project consists of multiple plugins, each with different functions - [viper_server] ```go package cache import ( ) var CONFIG Redis // getViperConfig get initialize config db: ` + db + ` addr: "` + CONFIG.Addr + `" password: "` + CONFIG.Password + `" pool-size: ` + poolSize), ``` - [zap_server] ```go ``` - [database] ```go ``` - [casbin] ```go ``` - [cache] ```go ``` - [operation] - [cron_server] ```go ``` - [web] - ```go // WebFunc web framework // - GetTestClient test client // - GetTestLogin test for login // - AddWebStatic add web static path // - AddUploadStatic add upload static path // - Run start ``` - [mongodb] #### Initialize database ##### Simple - Use gorm's `AutoMigrate()` function to auto migrate database. ```go package main import ( ) ``` ##### Custom migrate tools - Use `gormigrate` third party package. Tt's helpful for database migrate and program development. - Detail is see [iris-admin-cmd](https://github.com/snowlyg/iris-admin-example/blob/main/iris/cmd/main.go). --- - Add main.go file. ```go package main import ( ) ``` #### Run project - When you first run this cmd `go run main.go` , you can see some config files in the `config` directory, - and `rbac_model.conf` will be created in your project root directory. ```sh go run main.go ``` #### Module - You can use [iris-admin-rbac](https://github.com/snowlyg/iris-admin-rbac) package to add rbac function for your project quickly. - Your can use AddModule() to add other modules . ```go package main import ( ) ``` #### Default static file path - A static file access path has been built in by default - Static files will upload to `/static/upload` directory. - You can set this config key `static-path` to change the default directory. 
```yaml system: ``` #### Use with a front-end framework, e.g. vue - By default, you must build vue into the `dist` directory. - Naturally, you can set the config key `web-path` to change the default directory. ```go package main import ( ) ``` #### Example - [iris](https://github.com/snowlyg/iris-admin-example/tree/main/iris) - [gin](https://github.com/snowlyg/iris-admin-example/tree/main/gin) #### RBAC - [iris-admin-rbac](https://github.com/snowlyg/iris-admin-rbac) #### Unit test and documentation - Before starting unit tests, you need to set two system environment variables, `mysqlPwd` and `mysqlAddr`, which will be used when running the test instance. - The helper/tests (https://github.com/snowlyg/helper/tree/main/tests) package is used by the unit tests; it's a simple package based on httpexpect/v2 (https://github.com/gavv/httpexpect). - [example for unit test](https://github.com/snowlyg/iris-admin-rbac/tree/main/iris/perm/tests) - [example for unit test](https://github.com/snowlyg/iris-admin-rbac/tree/main/gin/authority/test) Before creating an HTTP API unit test, you need to create a base test file named `main_test.go`. This file has several unit test steps: ***It is suggested to use Docker MySQL; otherwise, if a test fails, there will be a lot of test data left behind*** - 1. Create the database before the test starts and delete it when the test finishes. - 2. Create tables and seed test data at one time. - 3. `PartyFunc` and `SeedFunc` are used to customize things for your test model. The contents are as follows: ***main_test.go*** ```go package test import ( ) var TestServer *web_gin.WebServer var TestClient *httptest.Client ``` ***index_test.go*** ```go package test import ( ) var ( ) ``` ## 🔋 JetBrains OS licenses <a href="https://www.jetbrains.com/?from=iris-admin" target="_blank"><img src="https://raw.githubusercontent.com/panjf2000/illustrations/master/jetbrains/jetbrains-variant-4.png" width="230" align="middle"/></a> ## ☕️ Buy me a coffee > Please be sure to leave your name, GitHub account or other social media accounts when you donate by the following means so that I can add it to the list of donors as a token of my appreciation. - [Afdian (为爱发电)](https://afdian.net/@snowlyg/plan) - [donating](https://paypal.me/snowlyg?country.x=C2&locale.x=zh_XC)
Package SQLittle provides pure Go, read-only, access to SQLite (version 3) database files. SQLittle reads SQLite3 tables and indexes. It iterates over tables, and can search efficiently using indexes. SQLittle will deal with all SQLite storage quirks, but otherwise it doesn't try to be smart; if you want to use an index you have to give the name of the index. There is no support for SQL, and if you want to do the most efficient joins possible you'll have to use the low level code. Based on https://sqlite.org/fileformat2.html and some SQLite source code reading. This whole thing is mostly for fun. The normal SQLite libraries are perfectly great, and there is no real need for this. However, since this library is pure Go cross-compilation is much easier. Given the constraints a valid use-case would for example be storing app configuration in read-only sqlite files. https://godoc.org/github.com/alicebob/sqlittle for the go doc and examples. See [LOWLEVEL.md](LOWLEVEL.md) about the low level reader. See [CODE.md](CODE.md) for an overview how the code is structured. Things SQLittle can do: Things SQLittle should do: Things SQLittle can not do: SQLittle has a read-lock on the file during the whole execution of the select-like functions. It's safe to update the database using SQLite while the file is opened in SQLittle. The current level of abstraction is likely the final one (that is: deal with reading single tables; don't even try joins or SQL or query planning), but the API might still change.
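A hedged sketch of the high-level read API as I understand it (Open, Select, a row callback); the database file, table and column names are placeholders.

```go
package main

import (
	"fmt"
	"log"

	"github.com/alicebob/sqlittle"
)

func main() {
	// Open is read-only; SQLite can still update the file concurrently.
	db, err := sqlittle.Open("testdata/music.sqlite")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Full-table scan over the named columns; use the index-aware helpers
	// described in the package docs when you want to read via a named index.
	err = db.Select("tracks", func(r sqlittle.Row) {
		var name string
		var length int64
		_ = r.Scan(&name, &length) // values arrive in the requested column order
		fmt.Println(name, length)
	}, "name", "length")
	if err != nil {
		log.Fatal(err)
	}
}
```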
Package applicationdiscoveryservice provides the API client, operations, and parameter types for AWS Application Discovery Service. Amazon Web Services Application Discovery Service (Application Discovery Service) helps you plan application migration projects. It automatically identifies servers, virtual machines (VMs), and network dependencies in your on-premises data centers. For more information, see the Amazon Web Services Application Discovery Service FAQ. Application Discovery Service offers three ways of performing discovery and collecting data about your on-premises servers: Agentless discovery using Amazon Web Services Application Discovery Service Agentless Collector (Agentless Collector), which doesn't require you to install an agent on each host. Agentless Collector gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless Collector doesn't collect information about network dependencies, only agent-based discovery collects that information. Agent-based discovery using the Amazon Web Services Application Discovery Agent (Application Discovery Agent) collects a richer set of data than agentless discovery, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of running processes, system performance information, resource utilization, and network dependencies. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the Amazon Web Services cloud. For more information, see Amazon Web Services Application Discovery Agent. Amazon Web Services Partner Network (APN) solutions integrate with Application Discovery Service, enabling you to import details of your on-premises environment directly into Amazon Web Services Migration Hub (Migration Hub) without using Agentless Collector or Application Discovery Agent. Third-party application discovery tools can query Amazon Web Services Application Discovery Service, and they can write to the Application Discovery Service database using the public API. In this way, you can import data into Migration Hub and view it, so that you can associate applications with servers and track migrations. This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. Alternatively, you can use one of the Amazon Web Services SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see Amazon Web Services SDKs. Remember that you must set your Migration Hub home Region before you call any of these APIs. You must make API calls for write actions (create, notify, associate, disassociate, import, or put) while in your home Region, or a HomeRegionNotSetException error is returned. API calls for read actions (list, describe, stop, and delete) are permitted outside of your home Region. Although it is unlikely, the Migration Hub home Region could change. If you call APIs outside the home Region, an InvalidInputException is returned. You must call GetHomeRegion to obtain the latest Migration Hub home Region. This guide is intended for use with the Amazon Web Services Application Discovery Service User Guide. All data is handled according to the Amazon Web Services Privacy Policy. 
You can operate Application Discovery Service offline to inspect collected data before it is shared with the service.
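A hedged sketch of calling the service with the AWS SDK for Go v2, using DescribeAgents as an example of a read action that may be called outside the Migration Hub home Region.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/applicationdiscoveryservice"
)

func main() {
	ctx := context.Background()

	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := applicationdiscoveryservice.NewFromConfig(cfg)

	// List the discovery agents/collectors known to the service.
	out, err := client.DescribeAgents(ctx, &applicationdiscoveryservice.DescribeAgentsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, agent := range out.AgentsInfo {
		fmt.Println("agent:", aws.ToString(agent.AgentId))
	}
}
```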
Package sqlite provides a Go interface to SQLite 3. The semantics of this package are deliberately close to the SQLite3 C API. See the official C API introduction for an overview of the basics. An SQLite connection is represented by a *Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (e.g. by using zombiezen.com/go/sqlite/sqlitex.NewPool to create a *zombiezen.com/go/sqlite/sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database. This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache. The implementation automatically handles shared cache locking, see the documentation on Stmt.Step for details. The optional SQLite 3 extensions compiled in are: session, FTS5, RTree, JSON1, and GeoPoly. This is not a database/sql driver. For helper functions to make it easier to execute statements, see the zombiezen.com/go/sqlite/sqlitex package. Statements are prepared with the Conn.Prepare and Conn.PrepareTransient methods. When using Conn.Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can write: After all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement. SQLite transactions can be managed manually with this package by directly executing BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE / ROLLBACK statements, but there are also helper functions available in zombiezen.com/go/sqlite/sqlitex: For simple schema migration needs, see the zombiezen.com/go/sqlite/sqlitemigration package. Use Conn.CreateFunction to register Go functions for use as SQL functions. The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or more commonly, a large number of moderate-sized blobs concurrently.) See Conn.OpenBlob for more details. Every connection can have a done channel associated with it using the Conn.SetInterrupt method. This is typically the channel returned by a context.Context.Done method. As database connections are long-lived, the Conn.SetInterrupt method can be called multiple times to reset the associated lifetime. Using a Pool to execute SQL in a concurrent HTTP handler. This is the same as the main package example, but uses the SQLite statement API instead of sqlitex.
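A hedged sketch of the cached-prepare and Step loop pattern described above, on a single connection. The open flags, schema and parameter names are illustrative, and real programs would borrow connections from a sqlitex.Pool rather than sharing one Conn.

```go
package main

import (
	"fmt"
	"log"

	"zombiezen.com/go/sqlite"
)

func main() {
	// One connection for brevity; a Conn must not be used concurrently.
	conn, err := sqlite.OpenConn("example.db", sqlite.OpenReadWrite|sqlite.OpenCreate)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Conn.Prepare caches the statement by its query string, so hot paths can
	// simply re-prepare and get a map lookup after the first call.
	stmt, err := conn.Prepare("SELECT name FROM users WHERE id = $id;")
	if err != nil {
		log.Fatal(err)
	}
	stmt.SetInt64("$id", 42)

	for {
		hasRow, err := stmt.Step()
		if err != nil {
			log.Fatal(err)
		}
		if !hasRow {
			break
		}
		fmt.Println("name:", stmt.GetText("name"))
	}
	if err := stmt.Reset(); err != nil { // leave the cached statement reusable
		log.Fatal(err)
	}
}
```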
Package guardian. Go-Guardian is a golang library that provides a simple, clean, and idiomatic way to create powerful modern API and web authentication. Go-Guardian's sole purpose is to authenticate requests, which it does through an extensible set of authentication methods known as strategies. Go-Guardian does not mount routes or assume any particular database schema, which maximizes flexibility and allows decisions to be made by the developer. The API is simple: you provide go-guardian a request to authenticate, and go-guardian invokes strategies to authenticate the end-user request. Strategies provide callbacks for controlling what occurs when authentication should succeed or fail. Why Go-Guardian? When building a modern application, you don't want to implement an authentication module from scratch; you want to focus on building awesome software. go-guardian is here to help with that. Here are a few bullet point reasons you might like to try it out: Example:
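A hedged sketch of handing a request to go-guardian with a basic-auth strategy; the v2 import paths, the callback signature, and the credential check are assumptions made for illustration.

```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/shaj13/go-guardian/v2/auth"
	"github.com/shaj13/go-guardian/v2/auth/strategies/basic"
)

// validate is the strategy callback: it receives the request credentials and
// returns the authenticated user info, or an error to fail authentication.
func validate(ctx context.Context, r *http.Request, userName, password string) (auth.Info, error) {
	if userName == "admin" && password == "secret" { // placeholder check
		return auth.NewDefaultUser(userName, "1", nil, nil), nil
	}
	return nil, fmt.Errorf("invalid credentials")
}

func middleware(next http.Handler) http.Handler {
	strategy := basic.New(validate)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Hand the request to go-guardian; the strategy decides success or failure.
		user, err := strategy.Authenticate(r.Context(), r)
		if err != nil {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		fmt.Println("authenticated:", user.GetUserName())
		next.ServeHTTP(w, r)
	})
}

func main() {
	http.Handle("/", middleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})))
	http.ListenAndServe(":8080", nil)
}
```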
Package ql implements a pure Go embedded SQL database engine. Builder results available at QL is a member of the SQL family of languages. It is less complex and less powerful than SQL (whichever specification SQL is considered to be). 2020-12-10: sql/database driver now supports url parameter removeemptywal=N which has the same semantics as passing RemoveEmptyWAL = N != 0 to OpenFile options. 2020-11-09: Add IF NOT EXISTS support for the INSERT INTO statement. Add IsDuplicateUniqueIndexError function. 2018-11-04: Back end file format V2 is now released. To use the new format for newly created databases set the FileFormat field in *Options passed to OpenFile to value 2 or use the driver named "ql2" instead of "ql". - Both the old and new driver will properly open and use, read and write the old (V1) or new file (V2) format of an existing database. - V1 format has a record size limit of ~64 kB. V2 format record size limit is math.MaxInt32. - V1 format uncommitted transaction size is limited by memory resources. V2 format uncommitted transaction is limited by free disk space. - A direct consequence of the previous is that small transactions perform better using V1 format and big transactions perform better using V2 format. - V2 format uses substantially less memory. 2018-08-02: Release v1.2.0 adds initial support for Go modules. 2017-01-10: Release v1.1.0 fixes some bugs and adds a configurable WAL headroom. 2016-07-29: Release v1.0.6 enables alternatively using = instead of == for equality operation. 2016-07-11: Release v1.0.5 undoes vendoring of lldb. QL now uses stable lldb (modernc.org/lldb). 2016-07-06: Release v1.0.4 fixes a panic when closing the WAL file. 2016-04-03: Release v1.0.3 fixes a data race. 2016-03-23: Release v1.0.2 vendors gitlab.com/cznic/exp/lldb and github.com/camlistore/go4/lock. 2016-03-17: Release v1.0.1 adjusts for latest goyacc. Parser error messages are improved and changed, but their exact form is not considered an API change. 2016-03-05: The current version has been tagged v1.0.0. 2015-06-15: To improve compatibility with other SQL implementations, the count built-in aggregate function now accepts * as its argument. 2015-05-29: The execution planner was rewritten from scratch. It should use indices in all places where they were used before plus in some additional situations. It is possible to investigate the plan using the newly added EXPLAIN statement. The QL tool is handy for such analysis. If the planner would have used an index, but no such index exists, the plan includes hints in the form of copy/paste-ready CREATE INDEX statements. The planner is still quite simple and a lot of work on it is yet ahead. You can help this process by filing an issue with a schema and query which fails to use an index or indices when it should, in your opinion. Bonus points for including output of `ql 'explain <query>'`. 2015-05-09: The grammar of the CREATE INDEX statement now accepts an expression list instead of a single expression, which was further limited to just a column name or the built-in id(). As a side effect, composite indices are now functional. However, the values in the expression-list style index are not yet used by other statements or the statement/query planner. The composite index is useful with the UNIQUE clause to check for semantically duplicate rows before they get added to the table or when such a row is mutated using the UPDATE statement and the expression-list style index tuple of the row is thus recomputed.
2015-05-02: The Schema field of table __Table now correctly reflects any column constraints and/or defaults. Also, the (*DB).Info method now has that information provided in new ColumnInfo fields NotNull, Constraint and Default. 2015-04-20: Added support for {LEFT,RIGHT,FULL} [OUTER] JOIN. 2015-04-18: Column definitions can now have constraints and defaults. Details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. 2015-03-06: New built-in functions formatFloat and formatInt. Thanks urandom! (https://github.com/urandom) 2015-02-16: IN predicate now accepts a SELECT statement. See the updated "Predicates" section. 2015-01-17: Logical operators || and && have now alternative spellings: OR and AND (case insensitive). AND was a keyword before, but OR is a new one. This can possibly break existing queries. For the record, it's a good idea to not use any name appearing in, for example, [7] in your queries as the list of QL's keywords may expand for gaining better compatibility with existing SQL "standards". 2015-01-12: ACID guarantees were tightened at the cost of performance in some cases. The write collecting window mechanism, a formerly used implementation detail, was removed. Inserting rows one by one in a transaction is now slow. I mean very slow. Try to avoid inserting single rows in a transaction. Instead, whenever possible, perform batch updates of tens to, say, thousands of rows in a single transaction. See also: http://www.sqlite.org/faq.html#q19, the discussed synchronization principles involved are the same as for QL, modulo minor details. Note: A side effect is that closing a DB before exiting an application, both for the Go API and through the database/sql driver, is no longer required, strictly speaking. Beware that exiting an application while there is an open (uncommitted) transaction in progress means losing the transaction data. However, the DB will not become corrupted because of not closing it. Nor was that the case before, but formerly failing to close a DB could have resulted in losing the data of the last transaction. 2014-09-21: id() now optionally accepts a single argument - a table name. 2014-09-01: Added the DB.Flush() method and the LIKE pattern matching predicate. 2014-08-08: The built in functions max and min now accept also time values. Thanks opennota! (https://github.com/opennota) 2014-06-05: RecordSet interface extended by new methods FirstRow and Rows. 2014-06-02: Indices on id() are now used by SELECT statements. 2014-05-07: Introduction of Marshal, Schema, Unmarshal. 2014-04-15: Added optional IF NOT EXISTS clause to CREATE INDEX and optional IF EXISTS clause to DROP INDEX. 2014-04-12: The column Unique in the virtual table __Index was renamed to IsUnique because the old name is a keyword. Unfortunately, this is a breaking change, sorry. 2014-04-11: Introduction of LIMIT, OFFSET. 2014-04-10: Introduction of query rewriting. 2014-04-07: Introduction of indices. QL imports zappy[8], a block-based compressor, which speeds up its performance by using a C version of the compression/decompression algorithms. If a CGO-free (pure Go) version of QL, or an app using QL, is required, please include 'purego' in the -tags option of go {build,get,install}.
For example: If zappy was installed before installing QL, it might be necessary to rebuild zappy first (or rebuild QL with all its dependencies using the -a option): The syntax is specified using Extended Backus-Naur Form (EBNF) Lower-case production names are used to identify lexical tokens. Non-terminals are in CamelCase. Lexical tokens are enclosed in double quotes "" or back quotes ``. The form a … b represents the set of characters from a through b as alternatives. The horizontal ellipsis … is also used elsewhere in the spec to informally denote various enumerations or code snippets that are not further specified. QL source code is Unicode text encoded in UTF-8. The text is not canonicalized, so a single accented code point is distinct from the same character constructed from combining an accent and a letter; those are treated as two code points. For simplicity, this document will use the unqualified term character to refer to a Unicode code point in the source text. Each code point is distinct; for instance, upper and lower case letters are different characters. Implementation restriction: For compatibility with other tools, the parser may disallow the NUL character (U+0000) in the statement. Implementation restriction: A byte order mark is disallowed anywhere in QL statements. The following terms are used to denote specific character classes The underscore character _ (U+005F) is considered a letter. Lexical elements are comments, tokens, identifiers, keywords, operators and delimiters, integer, floating-point, imaginary, rune and string literals and QL parameters. Line comments start with the character sequence // or -- and stop at the end of the line. A line comment acts like a space. General comments start with the character sequence /* and continue through the character sequence */. A general comment acts like a space. Comments do not nest. Tokens form the vocabulary of QL. There are four classes: identifiers, keywords, operators and delimiters, and literals. White space, formed from spaces (U+0020), horizontal tabs (U+0009), carriage returns (U+000D), and newlines (U+000A), is ignored except as it separates tokens that would otherwise combine into a single token. The formal grammar uses semicolons ";" as separators of QL statements. A single QL statement or the last QL statement in a list of statements can have an optional semicolon terminator. (Actually a separator from the following empty statement.) Identifiers name entities such as tables or record set columns. There are two kinds of identifiers, normal identifiers and quoted identifiers. A normal identifier is a sequence of one or more letters and digits. The first character in an identifier must be a letter. For example A quoted identifier is a string of any characters between guillemets «». Quoted identifiers allow QL keywords or phrases with spaces to be used as identifiers. The guillemets were chosen because QL already uses double quotes, single quotes, and backticks for other quoting purposes. «TRANSACTION» «duration» «lovely stories» No identifiers are predeclared, however note that no keyword can be used as a normal identifier. Identifiers starting with two underscores are used for metadata virtual table names. For forward compatibility, users should generally avoid using any identifiers starting with two underscores. For example The following keywords are reserved and may not be used as identifiers. Keywords are not case sensitive.
The following character sequences represent operators, delimiters, and other special tokens Operators consisting of more than one character are referred to by names in the rest of the documentation An integer literal is a sequence of digits representing an integer constant. An optional prefix sets a non-decimal base: 0 for octal, 0x or 0X for hexadecimal. In hexadecimal literals, letters a-f and A-F represent values 10 through 15. For example A floating-point literal is a decimal representation of a floating-point constant. It has an integer part, a decimal point, a fractional part, and an exponent part. The integer and fractional part comprise decimal digits; the exponent part is an e or E followed by an optionally signed decimal exponent. One of the integer part or the fractional part may be elided; one of the decimal point or the exponent may be elided. For example An imaginary literal is a decimal representation of the imaginary part of a complex constant. It consists of a floating-point literal or decimal integer followed by the lower-case letter i. For example A rune literal represents a rune constant, an integer value identifying a Unicode code point. A rune literal is expressed as one or more characters enclosed in single quotes. Within the quotes, any character may appear except single quote and newline. A single quoted character represents the Unicode value of the character itself, while multi-character sequences beginning with a backslash encode values in various formats. The simplest form represents the single character within the quotes; since QL statements are Unicode characters encoded in UTF-8, multiple UTF-8-encoded bytes may represent a single integer value. For instance, the literal 'a' holds a single byte representing a literal a, Unicode U+0061, value 0x61, while 'ä' holds two bytes (0xc3 0xa4) representing a literal a-dieresis, U+00E4, value 0xe4. Several backslash escapes allow arbitrary values to be encoded as ASCII text. There are four ways to represent the integer value as a numeric constant: \x followed by exactly two hexadecimal digits; \u followed by exactly four hexadecimal digits; \U followed by exactly eight hexadecimal digits, and a plain backslash \ followed by exactly three octal digits. In each case the value of the literal is the value represented by the digits in the corresponding base. Although these representations all result in an integer, they have different valid ranges. Octal escapes must represent a value between 0 and 255 inclusive. Hexadecimal escapes satisfy this condition by construction. The escapes \u and \U represent Unicode code points so within them some values are illegal, in particular those above 0x10FFFF and surrogate halves. After a backslash, certain single-character escapes represent special values All other sequences starting with a backslash are illegal inside rune literals. For example A string literal represents a string constant obtained from concatenating a sequence of characters. There are two forms: raw string literals and interpreted string literals. Raw string literals are character sequences between back quotes “. Within the quotes, any character is legal except back quote. The value of a raw string literal is the string composed of the uninterpreted (implicitly UTF-8-encoded) characters between the quotes; in particular, backslashes have no special meaning and the string may contain newlines. Carriage returns inside raw string literals are discarded from the raw string value. 
Interpreted string literals are character sequences between double quotes "". The text between the quotes, which may not contain newlines, forms the value of the literal, with backslash escapes interpreted as they are in rune literals (except that \' is illegal and \" is legal), with the same restrictions. The three-digit octal (\nnn) and two-digit hexadecimal (\xnn) escapes represent individual bytes of the resulting string; all other escapes represent the (possibly multi-byte) UTF-8 encoding of individual characters. Thus inside a string literal \377 and \xFF represent a single byte of value 0xFF=255, while ÿ, \u00FF, \U000000FF and \xc3\xbf represent the two bytes 0xc3 0xbf of the UTF-8 encoding of character U+00FF. For example These examples all represent the same string If the statement source represents a character as two code points, such as a combining form involving an accent and a letter, the result will be an error if placed in a rune literal (it is not a single code point), and will appear as two code points if placed in a string literal. Literals are assigned their values from the respective text representation at "compile" (parse) time. QL parameters provide the same functionality as literals, but their value is assigned at execution time from an expression list passed to DB.Run or DB.Execute. Using '?' or '$' is completely equivalent. For example Keywords 'false' and 'true' (not case sensitive) represent the two possible constant values of type bool (also not case sensitive). Keyword 'NULL' (not case sensitive) represents an untyped constant which is assignable to any type. NULL is distinct from any other value of any type. A type determines the set of values and operations specific to values of that type. A type is specified by a type name. Named instances of the boolean, numeric, and string types are keywords. The names are not case sensitive. Note: The blob type is exchanged between the back end and the API as []byte. On 32 bit platforms this limits the size which the implementation can handle to 2G. A boolean type represents the set of Boolean truth values denoted by the predeclared constants true and false. The predeclared boolean type is bool. A duration type represents the elapsed time between two instants as an int64 nanosecond count. The representation limits the largest representable duration to approximately 290 years. A numeric type represents sets of integer or floating-point values. The predeclared architecture-independent numeric types are The value of an n-bit integer is n bits wide and represented using two's complement arithmetic. Conversions are required when different numeric types are mixed in an expression or assignment. A string type represents the set of string values. A string value is a (possibly empty) sequence of bytes. The case insensitive keyword for the string type is 'string'. The length of a string (its size in bytes) can be discovered using the built-in function len. A time type represents an instant in time with nanosecond precision. Each time has associated with it a location, consulted when computing the presentation form of the time. The following functions are implicitly declared An expression specifies the computation of a value by applying operators and functions to operands. Operands denote the elementary values in an expression. An operand may be a literal, a (possibly qualified) identifier denoting a constant or a function or a table/record set column, or a parenthesized expression. 
A qualified identifier is an identifier qualified with a table/record set name prefix. For example Primary expressions are the operands for unary and binary expressions. For example A primary expression of the form denotes the element of a string indexed by x. Its type is byte. The value x is called the index. The following rules apply - The index x must be of integer type except bigint or duration; it is in range if 0 <= x < len(s), otherwise it is out of range. - A constant index must be non-negative and representable by a value of type int. - A constant index must be in range if the string s is a literal. - If x is out of range at run time, a run-time error occurs. - s[x] is the byte at index x and the type of s[x] is byte. If s is NULL or x is NULL then the result is NULL. Otherwise s[x] is illegal. For a string, the primary expression constructs a substring. The indices low and high select which elements appear in the result. The result has indices starting at 0 and length equal to high - low. For convenience, any of the indices may be omitted. A missing low index defaults to zero; a missing high index defaults to the length of the sliced operand The indices low and high are in range if 0 <= low <= high <= len(s), otherwise they are out of range. A constant index must be non-negative and representable by a value of type int. If both indices are constant, they must satisfy low <= high. If the indices are out of range at run time, a run-time error occurs. Integer values of type bigint or duration cannot be used as indices. If s is NULL the result is NULL. If low or high is not omitted and is NULL then the result is NULL. Given an identifier f denoting a predeclared function, calls f with arguments a1, a2, … an. Arguments are evaluated before the function is called. The type of the expression is the result type of f. In a function call, the function value and arguments are evaluated in the usual order. After they are evaluated, the parameters of the call are passed by value to the function and the called function begins execution. The return value of the function is passed by value when the function returns. Calling an undefined function causes a compile-time error. Operators combine operands into expressions. Comparisons are discussed elsewhere. For other binary operators, the operand types must be identical unless the operation involves shifts or untyped constants. For operations involving constants only, see the section on constant expressions. Except for shift operations, if one operand is an untyped constant and the other operand is not, the constant is converted to the type of the other operand. The right operand in a shift expression must have unsigned integer type or be an untyped constant that can be converted to unsigned integer type. If the left operand of a non-constant shift expression is an untyped constant, the type of the constant is what it would be if the shift expression were replaced by its left operand alone. Expressions of the form yield a boolean value true if expr2, a regular expression, matches expr1 (see also [6]). Both expressions must be of type string. If any one of the expressions is NULL the result is NULL. Predicates are special form expressions having a boolean result type. Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be comparable as defined in "Comparison operators". Another form of the IN predicate creates the expression list from a result of a SelectStmt.
The SelectStmt must select only one column. The produced expression list is resource limited by the memory available to the process. NULL values produced by the SelectStmt are ignored, but if all records of the SelectStmt are NULL the predicate yields NULL. The select statement is evaluated only once. If the type of expr is not the same as the type of the field returned by the SelectStmt then the set operation yields false. The type of the column returned by the SelectStmt must be one of the simple (non blob-like) types: Expressions of the form are equivalent, including NULL handling, to The types of involved expressions must be ordered as defined in "Comparison operators". Expressions of the form yield a boolean value true if expr does not have a specific type (case A) or if expr has a specific type (case B). In other cases the result is a boolean value false. Unary operators have the highest precedence. There are five precedence levels for binary operators. Multiplication operators bind strongest, followed by addition operators, comparison operators, && (logical AND), and finally || (logical OR). Binary operators of the same precedence associate from left to right. For instance, x / y * z is the same as (x / y) * z. Note that the operator precedence is reflected explicitly by the grammar. Arithmetic operators apply to numeric values and yield a result of the same type as the first operand. The four standard arithmetic operators (+, -, *, /) apply to integer, rational, floating-point, and complex types; + also applies to strings; + and - also apply to times. All other arithmetic operators apply to integers only.

    +    sum                    integers, rationals, floats, complex values, strings
    -    difference             integers, rationals, floats, complex values, times
    *    product                integers, rationals, floats, complex values
    /    quotient               integers, rationals, floats, complex values
    %    remainder              integers
    &    bitwise AND            integers
    |    bitwise OR             integers
    ^    bitwise XOR            integers
    &^   bit clear (AND NOT)    integers
    <<   left shift             integer << unsigned integer
    >>   right shift            integer >> unsigned integer

Strings can be concatenated using the + operator String addition creates a new string by concatenating the operands. A value of type duration can be added to or subtracted from a value of type time. Times can be subtracted from each other producing a value of type duration. For two integer values x and y, the integer quotient q = x / y and remainder r = x % y satisfy the following relationships with x / y truncated towards zero ("truncated division"). As an exception to this rule, if the dividend x is the most negative value for the int type of x, the quotient q = x / -1 is equal to x (and r = 0). If the divisor is a constant expression, it must not be zero. If the divisor is zero at run time, a run-time error occurs. If the dividend is non-negative and the divisor is a constant power of 2, the division may be replaced by a right shift, and computing the remainder may be replaced by a bitwise AND operation The shift operators shift the left operand by the shift count specified by the right operand. They implement arithmetic shifts if the left operand is a signed integer and logical shifts if it is an unsigned integer. There is no upper limit on the shift count. Shifts behave as if the left operand is shifted n times by 1 for a shift count of n. As a result, x << 1 is the same as x*2 and x >> 1 is the same as x/2 but truncated towards negative infinity.
For integer operands, the unary operators +, -, and ^ are defined as follows For floating-point and complex numbers, +x is the same as x, while -x is the negation of x. The result of a floating-point or complex division by zero is not specified beyond the IEEE-754 standard; whether a run-time error occurs is implementation-specific. Whenever any operand of any arithmetic operation, unary or binary, is NULL, as well as in the case of the string concatenating operation, the result is NULL. For unsigned integer values, the operations +, -, *, and << are computed modulo 2^n, where n is the bit width of the unsigned integer's type. Loosely speaking, these unsigned integer operations discard high bits upon overflow, and expressions may rely on “wrap around”. For signed integers with a finite bit width, the operations +, -, *, and << may legally overflow and the resulting value exists and is deterministically defined by the signed integer representation, the operation, and its operands. No exception is raised as a result of overflow. An evaluator may not optimize an expression under the assumption that overflow does not occur. For instance, it may not assume that x < x + 1 is always true. Integers of type bigint and rationals do not overflow but their handling is limited by the memory resources available to the program. Comparison operators compare two operands and yield a boolean value. In any comparison, the first operand must be of the same type as the second operand, or vice versa. The equality operators == and != apply to operands that are comparable. The ordering operators <, <=, >, and >= apply to operands that are ordered. These terms and the result of the comparisons are defined as follows - Boolean values are comparable. Two boolean values are equal if they are either both true or both false. - Complex values are comparable. Two complex values u and v are equal if both real(u) == real(v) and imag(u) == imag(v). - Integer values are comparable and ordered, in the usual way. Note that durations are integers. - Floating point values are comparable and ordered, as defined by the IEEE-754 standard. - Rational values are comparable and ordered, in the usual way. - String and Blob values are comparable and ordered, lexically byte-wise. - Time values are comparable and ordered. Whenever any operand of any comparison operation is NULL, the result is NULL. Note that slices are always of type string. Logical operators apply to boolean values and yield a boolean result. The right operand is evaluated conditionally. The truth tables for logical operations with NULL values Conversions are expressions of the form T(x) where T is a type and x is an expression that can be converted to type T. A constant value x can be converted to type T in any of these cases: - x is representable by a value of type T. - x is a floating-point constant, T is a floating-point type, and x is representable by a value of type T after rounding using IEEE 754 round-to-even rules. The constant T(x) is the rounded value. - x is an integer constant and T is a string type. The same rule as for non-constant x applies in this case. Converting a constant yields a typed constant as result. A non-constant value x can be converted to type T in any of these cases: - x has type T. - x's type and T are both integer or floating point types. - x's type and T are both complex types. - x is an integer, except bigint or duration, and T is a string type.
Specific rules apply to (non-constant) conversions between numeric types or to and from a string type. These conversions may change the representation of x and incur a run-time cost. All other conversions only change the type but not the representation of x. A conversion of NULL to any type yields NULL. For the conversion of non-constant numeric values, the following rules apply 1. When converting between integer types, if the value is a signed integer, it is sign extended to implicit infinite precision; otherwise it is zero extended. It is then truncated to fit in the result type's size. For example, if v == uint16(0x10F0), then uint32(int8(v)) == 0xFFFFFFF0. The conversion always yields a valid value; there is no indication of overflow. 2. When converting a floating-point number to an integer, the fraction is discarded (truncation towards zero). 3. When converting an integer or floating-point number to a floating-point type, or a complex number to another complex type, the result value is rounded to the precision specified by the destination type. For instance, the value of a variable x of type float32 may be stored using additional precision beyond that of an IEEE-754 32-bit number, but float32(x) represents the result of rounding x's value to 32-bit precision. Similarly, x + 0.1 may use more than 32 bits of precision, but float32(x + 0.1) does not. In all non-constant conversions involving floating-point or complex values, if the result type cannot represent the value the conversion succeeds but the result value is implementation-dependent. 1. Converting a signed or unsigned integer value to a string type yields a string containing the UTF-8 representation of the integer. Values outside the range of valid Unicode code points are converted to "\uFFFD". 2. Converting a blob to a string type yields a string whose successive bytes are the elements of the blob. 3. Converting a value of a string type to a blob yields a blob whose successive elements are the bytes of the string. 4. Converting a value of a bigint type to a string yields a string containing the decimal representation of the integer. 5. Converting a value of a string type to a bigint yields a bigint value containing the integer represented by the string value. A prefix of “0x” or “0X” selects base 16; the “0” prefix selects base 8, and a “0b” or “0B” prefix selects base 2. Otherwise the value is interpreted in base 10. An error occurs if the string value is not in any valid format. 6. Converting a value of a rational type to a string yields a string containing the decimal representation of the rational in the form "a/b" (even if b == 1). 7. Converting a value of a string type to a bigrat yields a bigrat value containing the rational represented by the string value. The string can be given as a fraction "a/b" or as a floating-point number optionally followed by an exponent. An error occurs if the string value is not in any valid format. 8. Converting a value of a duration type to a string returns a string representing the duration in the form "72h3m0.5s". Leading zero units are omitted. As a special case, durations less than one second format using a smaller unit (milli-, micro-, or nanoseconds) to ensure that the leading digit is non-zero. The zero duration formats as 0, with no unit. 9. Converting a string value to a duration yields a duration represented by the string.
A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h". 10. Converting a time value to a string returns the time formatted using the format string When evaluating the operands of an expression or of function calls, operations are evaluated in lexical left-to-right order. For example, in the evaluation of the function calls and evaluation of c happen in the order h(), i(), j(), c. Floating-point operations within a single expression are evaluated according to the associativity of the operators. Explicit parentheses affect the evaluation by overriding the default associativity. In the expression x + (y + z) the addition y + z is performed before adding x. Statements control execution. The empty statement does nothing. Alter table statements modify existing tables. With the ADD clause it adds a new column to the table. The column must not exist. With the DROP clause it removes an existing column from a table. The column must exist and it must not be the only (last) column of the table. In other words, there cannot be a table with no columns. For example When adding a column to a table with existing data, the constraint clause of the ColumnDef cannot be used. Adding a constrained column to an empty table is fine. Begin transaction statements introduce a new transaction level. Every transaction level must be eventually balanced by exactly one of COMMIT or ROLLBACK statements. Note that when a transaction is rolled back because of a statement failure then no explicit balancing of the respective BEGIN TRANSACTION statement is required nor permitted. Failure to properly balance any opened transaction level may cause deadlocks and/or loss of data updated in the uppermost opened but never properly closed transaction level. For example A database cannot be updated (mutated) outside of a transaction. Statements requiring a transaction A database is effectively read only outside of a transaction. Statements not requiring a transaction The commit statement closes the innermost transaction nesting level. If that's the outermost level then the updates to the DB made by the transaction are atomically made persistent. For example Create index statements create new indices. An index is a named projection of ordered values of a table column to the respective records. As a special case the id() of the record can be indexed. The index name must not be the same as the name of any existing table and it also cannot be the same as any column name of the table the index is on. For example Now certain SELECT statements may use the indices to speed up joins and/or to speed up record set filtering when the WHERE clause is used; or the indices might be used to improve the performance when the ORDER BY clause is present. The UNIQUE modifier requires the indexed values tuple to be index-wise unique or have all values NULL. The optional IF NOT EXISTS clause makes the statement a no operation if the index already exists. A simple index consists of only one expression which must be either a column name or the built-in id(). A more complex and more general index is one that consists of more than one expression or its single expression does not qualify as a simple index. In this case the type of all expressions in the list must be one of the non blob-like types. Note: Blob-like types are blob, bigint, bigrat, time and duration. Create table statements create new tables.
A column definition declares the column name and type. Table names and column names are case sensitive. Neither a table nor an index of the same name may exist in the DB. For example The optional IF NOT EXISTS clause makes the statement a no operation if the table already exists. The optional constraint clause has two forms. The first one is found in many SQL dialects. This form prevents the data in column DepartmentName from being NULL. The second form allows an arbitrary boolean expression to be used to validate the column. If the value of the expression is true then the validation succeeded. If the value of the expression is false or NULL then the validation fails. If the value of the expression is not of type bool an error occurs. The optional DEFAULT clause is an expression which, if present, is substituted instead of a NULL value when the column is assigned a value. Note that the constraint and/or default expressions may refer to other columns by name: When a table row is inserted by the INSERT INTO statement or when a table row is updated by the UPDATE statement, the order of operations is as follows: 1. The new values of the affected columns are set and the values of all the row columns become the named values which can be referred to in default expressions evaluated in step 2. 2. If any row column value is NULL and the DEFAULT clause is present in the column's definition, the default expression is evaluated and its value is set as the respective column value. 3. The values, potentially updated, of row columns become the named values which can be referred to in constraint expressions evaluated during step 4. 4. All row columns whose definition has the constraint clause present will have that constraint checked. If any constraint violation is detected, the overall operation fails and no changes to the table are made. Delete from statements remove rows from a table, which must exist. For example If the WHERE clause is not present then all rows are removed and the statement is equivalent to the TRUNCATE TABLE statement. Drop index statements remove indices from the DB. The index must exist. For example The optional IF EXISTS clause makes the statement a no operation if the index does not exist. Drop table statements remove tables from the DB. The table must exist. For example The optional IF EXISTS clause makes the statement a no operation if the table does not exist. Insert into statements insert new rows into tables. New rows come from literal data, if using the VALUES clause, or are the result of a select statement. In the latter case the select statement is fully evaluated before the insertion of any rows is performed, allowing values to be inserted that are calculated from the same table the rows are being inserted into. If the ColumnNameList part is omitted then the number of values inserted in the row must be the same as the number of columns in the table. If the ColumnNameList part is present then the number of values per row must be the same as the number of column names. All other columns of the record are set to NULL. The type of the value assigned to a column must be the same as the column's type or the value must be NULL. If there exists a unique index that would make the insert statement fail, the optional IF NOT EXISTS clause turns the insert statement into a no-op in such a case. For example If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis; a brief sketch follows below.
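The following is a minimal sketch of these statements through the package's Go API (OpenMem, NewRWCtx and DB.Run, whose signature is assumed here to return a recordset slice, an index and an error); the employee table, its NOT NULL constraint and its DEFAULT clause are invented for the example, and, following the earlier ACID note, the rows are inserted as one batch inside a single transaction.

    package main

    import (
        "fmt"
        "log"

        "modernc.org/ql"
    )

    func main() {
        // An in-memory DB keeps the sketch self-contained; OpenFile works the same way.
        db, err := ql.OpenMem()
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        ctx := ql.NewRWCtx() // read-write context: mutating statements need a transaction

        // CREATE TABLE with a NOT NULL constraint and a DEFAULT clause,
        // executed inside a transaction.
        if _, _, err = db.Run(ctx, `
            BEGIN TRANSACTION;
                CREATE TABLE employee (
                    Name   string NOT NULL,
                    Salary int64  DEFAULT 1000
                );
            COMMIT;
        `); err != nil {
            log.Fatal(err)
        }

        // Batch many inserts into a single transaction rather than one
        // transaction per row.
        if _, _, err = db.Run(ctx, "BEGIN TRANSACTION;"); err != nil {
            log.Fatal(err)
        }
        for i := 0; i < 1000; i++ {
            // Salary is omitted, so the DEFAULT expression supplies 1000.
            if _, _, err = db.Run(ctx, "INSERT INTO employee (Name) VALUES ($1);", fmt.Sprintf("emp-%d", i)); err != nil {
                log.Fatal(err)
            }
        }
        if _, _, err = db.Run(ctx, "COMMIT;"); err != nil {
            log.Fatal(err)
        }
    }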
The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. The explain statement produces a recordset consisting of lines of text which describe the execution plan of a statement, if any. For example, the QL tool treats the explain statement specially and outputs the joined lines: The explanation may aid in understanding how a statement/query would be executed and if indices are used as expected - or which indices may possibly improve the statement performance. The create index statements above were directly copy/pasted in the terminal from the suggestions provided by the filter recordset pipeline part returned by the explain statement. If the statement has nothing special in its plan, the result is the original statement. To get an explanation of the select statement of the IN predicate, use the EXPLAIN statement with that particular select statement. The rollback statement closes the innermost transaction nesting level discarding any updates to the DB made by it. If that's the outermost level then the effects on the DB are as if the transaction never happened. For example The (temporary) record set from the last statement is returned and can be processed by the client. In this case the rollback is the same as 'DROP TABLE tmp;' but it can be a more complex operation. Select from statements produce recordsets. The optional DISTINCT modifier ensures all rows in the result recordset are unique. Either all of the resulting fields are returned ('*') or only those named in FieldList. RecordSetList is a list of table names or parenthesized select statements, optionally (re)named using the AS clause. The result can be filtered using a WhereClause and ordered by the OrderBy clause. For example If Recordset is a nested, parenthesized SelectStmt then it must be given a name using the AS clause if its fields are to be accessible in expressions. A field is a named expression. Identifiers, not used as a type in conversion or a function name in the Call clause, denote names of (other) fields, values of which should be used in the expression. The expression can be named using the AS clause. If the AS clause is not present and the expression consists solely of a field name, then that field name is used as the name of the resulting field. Otherwise the field is unnamed. For example The SELECT statement can optionally enumerate the desired/resulting fields in a list. No two identical field names can appear in the list. When more than one record set is used in the FROM clause record set list, the result record set field names are rewritten to be qualified using the record set names. If a particular record set doesn't have a name, its respective fields become unnamed. The optional JOIN clause, for example is mostly equal to except that the rows from a which, when they appear in the cross join, never made expr evaluate to true, are combined with a virtual row from b, containing all nulls, and added to the result set. For the RIGHT JOIN variant the discussed rules are used for rows from b not satisfying expr == true and the virtual, all-null row "comes" from a. The FULL JOIN adds the respective rows which would be otherwise provided by the separate executions of the LEFT JOIN and RIGHT JOIN variants. For a more thorough OUTER JOIN discussion please see the Wikipedia article at [10]. Resulting rows of a SELECT statement can be optionally ordered by the ORDER BY clause.
Collating proceeds by considering the expressions in the expression list left to right until a collating order is determined. Any possibly remaining expressions are not evaluated. All of the expression values must yield an ordered type or NULL. Ordered types are defined in "Comparison operators". Collating of elements having a NULL value is different compared to what the comparison operators yield in expression evaluation (NULL result instead of a boolean value). Below, T denotes a non NULL value of any QL type. NULL collates before any non NULL value (is considered smaller than T). Two NULLs have no collating order (are considered equal). The WHERE clause restricts records considered by some statements, like SELECT FROM, DELETE FROM, or UPDATE. It is an error if the expression evaluates to a non null value of non bool type. Another form of the WHERE clause is an existence predicate of a parenthesized select statement. The EXISTS form evaluates to true if the parenthesized SELECT statement produces a non empty record set. The NOT EXISTS form evaluates to true if the parenthesized SELECT statement produces an empty record set. The parenthesized SELECT statement is evaluated only once (TODO issue #159). The GROUP BY clause is used to project rows having common values into a smaller set of rows. For example Using the GROUP BY without any aggregate functions in the selected fields is in certain cases equal to using the DISTINCT modifier. The last two examples above produce the same resultsets. The optional OFFSET clause allows the first N records to be ignored. For example The above will produce only rows 11, 12, ... of the record set, if they exist. The value of the expression must be a non-negative integer, but not bigint or duration. The optional LIMIT clause allows all but the first N records to be ignored. For example The above will return at most the first 10 records of the record set. The value of the expression must be a non-negative integer, but not bigint or duration. The LIMIT and OFFSET clauses can be combined. For example Considering table t has, say, 10 records, the above will produce only records 4 - 8. After returning record #8, no more result rows/records are computed. 1. The FROM clause is evaluated, producing a Cartesian product of its source record sets (tables or nested SELECT statements). 2. If present, the JOIN clause is evaluated on the result set of the previous evaluation and the recordset specified by the JOIN clause. (... JOIN Recordset ON ...) 3. If present, the WHERE clause is evaluated on the result set of the previous evaluation. 4. If present, the GROUP BY clause is evaluated on the result set of the previous evaluation(s). 5. The SELECT field expressions are evaluated on the result set of the previous evaluation(s). 6. If present, the DISTINCT modifier is evaluated on the result set of the previous evaluation(s). 7. If present, the ORDER BY clause is evaluated on the result set of the previous evaluation(s). 8. If present, the OFFSET clause is evaluated on the result set of the previous evaluation(s). The offset expression is evaluated once for the first record produced by the previous evaluations. 9. If present, the LIMIT clause is evaluated on the result set of the previous evaluation(s). The limit expression is evaluated once for the first record produced by the previous evaluations. Truncate table statements remove all records from a table. The table must exist. For example Update statements change values of fields in rows of a table.
For example Note: The SET clause is optional. If any of the columns of the table were defined using the optional constraints clause or the optional defaults clause then those are processed on a per row basis. The details are discussed in the "Constraints and defaults" chapter below the CREATE TABLE statement documentation. To allow querying for DB metadata, there exist specially named tables, some of them being virtual. Note: Virtual system tables may have fake table-wise unique but meaningless and unstable record IDs. Do not apply the built-in id() to any system table. The table __Table lists all tables in the DB. The schema is The Schema column returns the statement to (re)create table Name. This table is virtual. The table __Column lists all columns of all tables in the DB. The schema is The Ordinal column defines the 1-based index of the column in the record. This table is virtual. The table __Column2 lists all columns of all tables in the DB which have the constraint NOT NULL or which have a constraint expression defined or which have a default expression defined. The schema is It's possible to obtain a consolidated recordset for all properties of all DB columns using The Name column is the column name in TableName. The table __Index lists all indices in the DB. The schema is The IsUnique column reflects whether the index was created using the optional UNIQUE clause. This table is virtual. Built-in functions are predeclared. The built-in aggregate function avg returns the average of values of an expression. Avg ignores NULL values, but returns NULL if all values of a column are NULL or if avg is applied to an empty record set. The column values must be of a numeric type. The built-in function coalesce takes at least one argument and returns the first of its arguments which is not NULL. If all arguments are NULL, this function returns NULL. This is useful for providing defaults for NULL values in a select query. The built-in function contains returns true if substr is within s. If any argument to contains is NULL the result is NULL. The built-in aggregate function count returns how many times an expression has a non NULL value or the number of rows in a record set. Note: count() returns 0 for an empty record set. For example Date returns the time corresponding to in the appropriate zone for that time in the given location. The month, day, hour, min, sec, and nsec values may be outside their usual ranges and will be normalized during the conversion. For example, October 32 converts to November 1. A daylight savings time transition skips or repeats times. For example, in the United States, March 13, 2011 2:15am never occurred, while November 6, 2011 1:15am occurred twice. In such cases, the choice of time zone, and therefore the time, is not well-defined. Date returns a time that is correct in one of the two zones involved in the transition, but it does not guarantee which. A location maps time instants to the zone in use at that time. Typically, the location represents the collection of time offsets in use in a geographical area, such as "CEST" and "CET" for central Europe. "local" represents the system's local time zone. "UTC" represents Universal Coordinated Time (UTC). The month specifies a month of the year (January = 1, ...). If any argument to date is NULL the result is NULL. The built-in function day returns the day of the month specified by t. If the argument to day is NULL the result is NULL.
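To make the system tables and the SELECT clauses above concrete, here is a small sketch that lists the tables recorded in __Table; the Recordset.Do callback signature used below is assumed from the package's record set interface and worth double-checking against its documentation.

    package main

    import (
        "fmt"
        "log"

        "modernc.org/ql"
    )

    func main() {
        db, err := ql.OpenMem()
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Create a couple of tables so __Table has something to report.
        ctx := ql.NewRWCtx()
        if _, _, err = db.Run(ctx, `
            BEGIN TRANSACTION;
                CREATE TABLE a (x int);
                CREATE TABLE b (y string);
            COMMIT;
        `); err != nil {
            log.Fatal(err)
        }

        // Reads need no transaction. __Table is a virtual system table; its
        // Schema column holds the CREATE TABLE statement for each table.
        rs, _, err := db.Run(nil, "SELECT Name, Schema FROM __Table ORDER BY Name LIMIT 10;")
        if err != nil {
            log.Fatal(err)
        }
        if err := rs[0].Do(false, func(row []interface{}) (bool, error) {
            fmt.Printf("%v: %v\n", row[0], row[1])
            return true, nil // keep iterating
        }); err != nil {
            log.Fatal(err)
        }
    }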
The built-in function formatTime returns a textual representation of the time value formatted according to layout, which defines the format by showing how the reference time, would be displayed if it were the value; it serves as an example of the desired output. The same display rules will then be applied to the time value. If any argument to formatTime is NULL the result is NULL. NOTE: The string value of the time zone, like "CET" or "ACDT", is dependent on the time zone of the machine the function is run on. For example, if the t value is in "CET", but the machine is in "ACDT", instead of "CET" the result is "+0100". This is the same as what Go's (time.Time).String() returns and in fact formatTime directly calls t.String(). returns on a machine in the CET time zone, but may return on a machine in the ACDT zone. The time value is in both cases the same so its ordering and comparing is correct. Only the display value can differ. The built-in functions formatFloat and formatInt format numbers to strings using Go's number format functions in the `strconv` package. For all three functions, only the first argument is mandatory. The default values of the rest are shown in the examples. If the first argument is NULL, the result is NULL. returns returns returns Unlike the `strconv` equivalent, the formatInt function handles all integer types, both signed and unsigned. The built-in function hasPrefix tests whether the string s begins with prefix. If any argument to hasPrefix is NULL the result is NULL. The built-in function hasSuffix tests whether the string s ends with suffix. If any argument to hasSuffix is NULL the result is NULL. The built-in function hour returns the hour within the day specified by t, in the range [0, 23]. If the argument to hour is NULL the result is NULL. The built-in function hours returns the duration as a floating point number of hours. If the argument to hours is NULL the result is NULL. The built-in function id takes zero or one arguments. If no argument is provided, id() returns a table-unique automatically assigned numeric identifier of type int. Ids of deleted records are not reused unless the DB becomes completely empty (has no tables). For example If id() without arguments is called for a row which is not a table record then the result value is NULL. For example If id() has one argument it must be a table name of a table in a cross join. For example The built-in function len takes a string argument and returns the length of the string in bytes. The expression len(s) is constant if s is a string constant. If the argument to len is NULL the result is NULL. The built-in aggregate function max returns the largest value of an expression in a record set. Max ignores NULL values, but returns NULL if all values of a column are NULL or if max is applied to an empty record set. The expression values must be of an ordered type. For example The built-in aggregate function min returns the smallest value of an expression in a record set. Min ignores NULL values, but returns NULL if all values of a column are NULL or if min is applied to an empty record set. For example The column values must be of an ordered type. The built-in function minute returns the minute offset within the hour specified by t, in the range [0, 59]. If the argument to minute is NULL the result is NULL. The built-in function minutes returns the duration as a floating point number of minutes. If the argument to minutes is NULL the result is NULL.
The built-in function month returns the month of the year specified by t (January = 1, ...). If the argument to month is NULL the result is NULL. The built-in function nanosecond returns the nanosecond offset within the second specified by t, in the range [0, 999999999]. If the argument to nanosecond is NULL the result is NULL. The built-in function nanoseconds returns the duration as an integer nanosecond count. If the argument to nanoseconds is NULL the result is NULL. The built-in function now returns the current local time. The built-in function parseTime parses a formatted string and returns the time value it represents. The layout defines the format by showing how the reference time, would be interpreted if it were the value; it serves as an example of the input format. The same interpretation will then be made to the input string. Elements omitted from the value are assumed to be zero or, when zero is impossible, one, so parsing "3:04pm" returns the time corresponding to Jan 1, year 0, 15:04:00 UTC (note that because the year is 0, this time is before the zero Time). Years must be in the range 0000..9999. The day of the week is checked for syntax but it is otherwise ignored. In the absence of a time zone indicator, parseTime returns a time in UTC. When parsing a time with a zone offset like -0700, if the offset corresponds to a time zone used by the current location, then parseTime uses that location and zone in the returned time. Otherwise it records the time as being in a fabricated location with time fixed at the given zone offset. When parsing a time with a zone abbreviation like MST, if the zone abbreviation has a defined offset in the current location, then that offset is used. The zone abbreviation "UTC" is recognized as UTC regardless of location. If the zone abbreviation is unknown, parseTime records the time as being in a fabricated location with the given zone abbreviation and a zero offset. This choice means that such a time can be parsed and reformatted with the same layout losslessly, but the exact instant used in the representation will differ by the actual zone offset. To avoid such problems, prefer time layouts that use a numeric zone offset. If any argument to parseTime is NULL the result is NULL. The built-in function second returns the second offset within the minute specified by t, in the range [0, 59]. If the argument to second is NULL the result is NULL. The built-in function seconds returns the duration as a floating point number of seconds. If the argument to seconds is NULL the result is NULL. The built-in function since returns the time elapsed since t. It is shorthand for now()-t. If the argument to since is NULL the result is NULL. The built-in aggregate function sum returns the sum of values of an expression for all rows of a record set. Sum ignores NULL values, but returns NULL if all values of a column are NULL or if sum is applied to an empty record set. The column values must be of a numeric type. The built-in function timeIn returns t with the location information set to loc. For discussion of the loc argument please see date(). If any argument to timeIn is NULL the result is NULL. The built-in function weekday returns the day of the week specified by t. Sunday == 0, Monday == 1, ... If the argument to weekday is NULL the result is NULL. The built-in function year returns the year in which t occurs. If the argument to year is NULL the result is NULL.
The built-in function yearDay returns the day of the year specified by t, in the range [1,365] for non-leap years, and [1,366] in leap years. If the argument to yearDay is NULL the result is NULL. Three functions assemble and disassemble complex numbers. The built-in function complex constructs a complex value from a floating-point real and imaginary part, while real and imag extract the real and imaginary parts of a complex value. The type of the arguments and return value correspond. For complex, the two arguments must be of the same floating-point type and the return type is the complex type with the corresponding floating-point constituents: complex64 for float32, complex128 for float64. The real and imag functions together form the inverse, so for a complex value z, z == complex(real(z), imag(z)). If the operands of these functions are all constants, the return value is a constant. If any argument to any of the complex, real, imag functions is NULL the result is NULL. For the numeric types, the following sizes are guaranteed Portions of this specification page are modifications based on work[2] created and shared by Google[3] and used according to terms described in the Creative Commons 3.0 Attribution License[4]. This specification is licensed under the Creative Commons Attribution 3.0 License, and code is licensed under a BSD license[5]. Links from the above documentation This section is not part of the specification. WARNING: The implementation of indices is new and it surely needs more time to become mature. Indices are currently used only by the WHERE clause. The following expression patterns of 'WHERE expression' are recognized and trigger index use. The relOp is one of the relation operators <, <=, ==, >=, >. For the equality operator both operands must be of comparable types. For all other operators both operands must be of ordered types. The constant expression is a compile time constant expression. Some constant folding is still a TODO. Parameter is a QL parameter ($1 etc.). Consider tables t and u, both with an indexed field f. The WHERE expression doesn't comply with the above simple detected cases. However, such a query is now automatically rewritten to which will use both of the indices. The impact of using the indices can be substantial (cf. BenchmarkCrossJoin*) if the resulting rows have low "selectivity", i.e. only a few rows from both tables are selected by the respective WHERE filtering. Note: Existing QL DBs can be used and indices can be added to them. However, once any indices are present in the DB, the old QL versions cannot work with such a DB anymore. Running a benchmark with -v (-test.v) outputs information about the scale used to report records/s and a brief description of the benchmark. For example Running the full suite of benchmarks takes a lot of time. Use the -timeout flag to avoid them being killed after the default time limit (10 minutes).
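To check whether a particular WHERE filter can use an index, you can pair CREATE INDEX with the EXPLAIN statement described earlier; the sketch below is a minimal example (the table, column and index names are invented, and the exact plan text depends on the planner).

    package main

    import (
        "fmt"
        "log"

        "modernc.org/ql"
    )

    func main() {
        db, err := ql.OpenMem()
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        ctx := ql.NewRWCtx()
        if _, _, err = db.Run(ctx, `
            BEGIN TRANSACTION;
                CREATE TABLE t (c int);
                CREATE INDEX x ON t (c);
            COMMIT;
        `); err != nil {
            log.Fatal(err)
        }

        // EXPLAIN produces a recordset of text lines describing the plan;
        // with the index in place, the filter on c should be index-assisted.
        rs, _, err := db.Run(nil, "EXPLAIN SELECT * FROM t WHERE c == $1;", int64(42))
        if err != nil {
            log.Fatal(err)
        }
        if err := rs[0].Do(false, func(row []interface{}) (bool, error) {
            fmt.Println(row[0])
            return true, nil
        }); err != nil {
            log.Fatal(err)
        }
    }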
Package grammes is an API/wrapper for the Gremlin traversal language to interact with graph databases. It includes various levels of functionality to add, remove, and change vertices and edges in the database. Usage of the higher-level API is shown in various examples in the `examples/` directory with full documentation. To get started with this package you may begin by making a Grammes client using either the Dial function or the DialWithWebSocket function. With this client you may begin interacting with your graph database using the client's many functions. To narrow down what you want to do it may be easier to choose one of the `client.Querier` options. This example creates a new Grammes client using the DialWithWebSocket function, executes a simple string query that performs some basic addition, and then returns the raw result. For further customizability you may check out the packages within the `query/` directory. For further examples of how to use this package, check out the `examples/` directory.
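A minimal sketch of that flow might look like the following; the import path and websocket address are placeholders for a locally running Gremlin server, and the DialWithWebSocket and ExecuteStringQuery signatures shown here are assumptions to verify against the package documentation.

    package main

    import (
        "fmt"
        "log"

        "github.com/northwesternmutual/grammes"
    )

    func main() {
        // Dial the graph database over a websocket; the address is a placeholder.
        client, err := grammes.DialWithWebSocket("ws://127.0.0.1:8182")
        if err != nil {
            log.Fatalf("dial: %v", err)
        }

        // Execute a raw string query; here it just performs simple addition.
        res, err := client.ExecuteStringQuery("1 + 2")
        if err != nil {
            log.Fatalf("query: %v", err)
        }
        for _, r := range res {
            fmt.Println(string(r)) // raw response bytes
        }
    }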