Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use queries to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include `<`, `<=`, `>`, `>=`, `==`, and `array-contains`. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Use a transaction to execute reads and writes atomically. All reads must happen before any writes. Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it.
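For a quick orientation, here is a condensed sketch of the calls described above. It assumes imports of cloud.google.com/go/firestore and google.golang.org/api/iterator; the project ID, collection and field names, and the State type are illustrative.

    ctx := context.Background()
    client, err := firestore.NewClient(ctx, "my-project-id")
    if err != nil {
        // TODO: handle error
    }
    defer client.Close()

    // Read one document and decode it into a struct via "firestore:" tags.
    type State struct {
        Capital    string  `firestore:"capital"`
        Population float64 `firestore:"pop"`
    }
    snap, err := client.Collection("States").Doc("California").Get(ctx)
    if err != nil {
        // TODO: handle error
    }
    var s State
    if err := snap.DataTo(&s); err != nil {
        // TODO: handle error
    }

    // Query a collection and iterate over the matching documents.
    iter := client.Collection("States").Where("pop", ">", 10.0).Documents(ctx)
    for {
        doc, err := iter.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            // TODO: handle error
        }
        _ = doc // use doc.Data() or doc.DataTo(...)
    }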
Package slowql provides everything needed to parse slow query logs from different databases (such as MySQL and MariaDB). Along with the parser, it offers a simple API with a few functions that give you everything needed to analyze your slow queries.
Package rethinkgo implements the RethinkDB API for Go. RethinkDB (http://www.rethinkdb.com/) is an open-source distributed database in the style of MongoDB. If you haven't tried it out, it takes about a minute to set up and has a sweet web console. It even runs on Mac OS X. (http://www.rethinkdb.com/docs/guides/quickstart/) If you are familiar with the RethinkDB API, this package should be straightforward to use. If not, the docs for this package contain plenty of examples, but you may want to look through the RethinkDB tutorials (http://www.rethinkdb.com/docs/). Example usage: Besides this simple read query, you can run almost arbitrary expressions on the server, even JavaScript code. See the rest of these docs for more details.
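A minimal sketch of a connection and a simple read query, assuming the package is imported as r; the address, database, and table names are illustrative, and the exact shapes of Connect, Table, Run and All are assumptions that should be checked against the package documentation.

    import r "github.com/christopherhesse/rethinkgo"

    // Connect to a local RethinkDB instance and read every row of a table.
    session, err := r.Connect("localhost:28015", "test")
    if err != nil {
        // TODO: handle error
    }

    var heroes []map[string]interface{}
    err = r.Table("marvel").Run(session).All(&heroes)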
Package glick provides a simple plug-in environment. The central feature of glick is the Library which contains example types for the input and output of each API on the system. Each of these APIs can have a number of "actions" upon them, for example a file conversion API may have one action for each of the file formats to be converted. Using the Run() method of glick.Library, a given API/Action combination runs the code in a function of Go type Plugin. Although it is easy to create your own plugins, there are three types built-in: Remote Procedure Calls (RPC), simple URL fetch (URL) and OS commands (CMD). A number of sub-packages simplify the use of third-party libraries when providing further types of plugin. The mapping of which plugin code to run occurs at three levels: 1) Initialisation and set-up code for the application will establish the glick.Library using glick.New(), then add API specifications using RegAPI(); it may also add the application's base plugins using RegPlugin(). 2) The base set-up can be extended and overloaded using a JSON format configuration description (probably held in a file) by calling the Config() method of glick.Library. This configuration process is extensible, using the AddConfigurator() method - see the glick/glpie or glick/glkit sub-packages for examples. 3) Which plugin to use can also be set-up or overloaded at runtime within Run(). Each call to a plugin includes a Context (as described in https://blog.golang.org/context). This context can contain, for example, user details, which could be matched against a database to see if that user should be directed to one plugin for a given action, rather than another. It could also be used to wrap every plugin call by a particular user with some other code, for example to log or meter activity.
Package gocql implements a fast and robust Cassandra driver for the Go programming language. Pass a list of initial node IP addresses to NewCluster to create a new cluster configuration: Port can be specified as part of the address, the above is equivalent to: It is recommended to use the value set in the Cassandra config for broadcast_address or listen_address, an IP address, not a domain name. This is because events from Cassandra will use the configured IP address, which is used to index connected hosts. If the domain name specified resolves to more than 1 IP address then the driver may connect multiple times to the same host, and will not mark the node as being down or up from events. Then you can customize more options (see ClusterConfig): The driver tries to automatically detect the protocol version to use if not set, but you might want to set the protocol version explicitly, as it's not defined which version will be used in certain situations (for example during upgrade of the cluster when some of the nodes support a different set of protocol versions than other nodes). The driver advertises the module name and version in the STARTUP message, so servers are able to detect the version. If you use a replace directive in go.mod, the driver will send information about the replacement module instead. When ready, create a session from the configuration. Don't forget to Close the session once you are done with it: The CQL protocol uses a SASL-based authentication mechanism and so consists of an exchange of server challenges and client response pairs. The details of the exchanged messages depend on the authenticator used. To use authentication, set ClusterConfig.Authenticator or ClusterConfig.AuthProvider. PasswordAuthenticator is provided to use for username/password authentication: It is possible to secure traffic between the client and server with TLS. To use TLS, set the ClusterConfig.SslOpts field. SslOptions embeds *tls.Config so you can set that directly. There are also helpers to load keys/certificates from files. Warning: Due to historical reasons, SslOptions is insecure by default, so you need to set EnableHostVerification to true if no Config is set. Most users should set SslOptions.Config to a *tls.Config. SslOptions and Config.InsecureSkipVerify interact as follows: For example: To route queries to the local DC first, use DCAwareRoundRobinPolicy. For example, if the datacenter you want to primarily connect to is called dc1 (as configured in the database): The driver can route queries to nodes that hold data replicas based on partition key (preferring local DC). Note that TokenAwareHostPolicy can take options such as gocql.ShuffleReplicas and gocql.NonLocalReplicasFallback. We recommend running with a token aware host policy in production for maximum performance. The driver can only use token-aware routing for queries where all partition key columns are query parameters. For example, bind the partition key as a query parameter instead of embedding its value in the query text. The DCAwareRoundRobinPolicy can be replaced with RackAwareRoundRobinPolicy, which takes two parameters, datacenter and rack. Instead of dividing hosts with two tiers (local datacenter and remote datacenters) it divides hosts into three (the local rack, the rest of the local datacenter, and everything else). RackAwareRoundRobinPolicy can be combined with TokenAwareHostPolicy in the same way as DCAwareRoundRobinPolicy. Create queries with Session.Query. Query values must not be reused between different executions and must not be modified after starting execution of the query.
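A condensed sketch of the setup described above and elaborated below; the contact points, keyspace, table, and credentials are illustrative.

    cluster := gocql.NewCluster("192.168.1.1", "192.168.1.2", "192.168.1.3")
    cluster.Keyspace = "example"
    cluster.Consistency = gocql.Quorum
    cluster.Authenticator = gocql.PasswordAuthenticator{Username: "user", Password: "password"}

    session, err := cluster.CreateSession()
    if err != nil {
        // TODO: handle error
    }
    defer session.Close()

    // Bind query values as parameters; do not reuse or modify them while the query is executing.
    if err := session.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`,
        "me", gocql.TimeUUID(), "hello world").Exec(); err != nil {
        // TODO: handle error
    }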
To execute a query without reading results, use Query.Exec: A single row can be read by calling Query.Scan: Multiple rows can be read using Iter.Scanner: See Example for a complete example. The driver automatically prepares DML queries (SELECT/INSERT/UPDATE/DELETE/BATCH statements) and maintains a cache of prepared statements. The CQL protocol does not support preparing other query types. When using CQL protocol >= 4, it is possible to use gocql.UnsetValue as the bound value of a column. This will cause the database to ignore writing the column. The main advantage is the ability to keep the same prepared statement even when you don't want to update some fields, where before you needed to make another prepared statement. Session is safe to use from multiple goroutines, so to execute multiple concurrent queries, just execute them from several worker goroutines. Gocql provides a synchronous-looking API (as recommended for Go APIs), while queries are executed asynchronously at the protocol level. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string variable instead of a string. See Example_nulls for a full example. The driver reuses backing memory of slices when unmarshalling. This is an optimization so that a buffer does not need to be allocated for every processed row. However, you need to be careful when storing the slices in other memory structures. When you want to save the data for later use, pass a new slice every time. A common pattern is to declare the slice variable within the scanner loop: The driver supports paging of results with automatic prefetch, see ClusterConfig.PageSize, Session.SetPrefetch, Query.PageSize, and Query.Prefetch. It is also possible to control the paging manually with Query.PageState (this disables automatic prefetch). Manual paging is useful if you want to store the page state externally, for example in a URL to allow users to browse pages in a result. You might want to sign/encrypt the paging state when exposing it externally since it contains data from primary keys. Paging state is specific to the CQL protocol version and the exact query used. It is meant as opaque state that should not be modified. If you send paging state from a different query or protocol version, then the behaviour is not defined (you might get unexpected results or an error from the server). For example, do not send paging state returned by a node using protocol version 3 to a node using protocol version 4. Also, when using protocol version 4, paging state between Cassandra 2.2 and 3.0 is incompatible (https://issues.apache.org/jira/browse/CASSANDRA-10880). The driver does not check whether the paging state is from the same protocol version/statement. You might want to validate this yourself, as it could be a problem if you store paging state externally. For example, if you store paging state in a URL, the URLs might become broken when you upgrade your cluster. Call Query.PageState(nil) to fetch just the first page of the query results. Pass the page state returned by Iter.PageState to Query.PageState of a subsequent query to get the next page. If the length of the slice returned by Iter.PageState is zero, there are no more pages available (or an error occurred). Using too low a value of PageSize will negatively affect performance; a value below 100 is probably too low.
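A minimal sketch of the manual paging loop described here; the table and column names are illustrative.

    var pageState []byte
    for {
        iter := session.Query(`SELECT id, text FROM tweet WHERE timeline = ?`, "me").
            PageSize(100).PageState(pageState).Iter()
        var id gocql.UUID
        var text string
        for iter.Scan(&id, &text) {
            // process the row
        }
        next := iter.PageState()
        if err := iter.Close(); err != nil {
            // TODO: handle error
        }
        if len(next) == 0 {
            break // no more pages
        }
        pageState = next
    }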
While Cassandra currently returns exactly PageSize items (except for the last page) in a page, the protocol authors explicitly reserved the right to return a smaller or larger number of items in a page for performance reasons, so don't rely on the page having the exact count of items. See Example_paging for an example of manual paging. There are certain situations when you don't know the list of columns in advance, mainly when the query is supplied by the user. Iter.Columns, Iter.RowData, Iter.MapScan and Iter.SliceMap can be used to handle this case. See Example_dynamicColumns. The CQL protocol supports sending batches of DML statements (INSERT/UPDATE/DELETE) and so does gocql. Use Session.NewBatch to create a new batch and then fill in the details of the individual queries. Then execute the batch with Session.ExecuteBatch. Logged batches ensure atomicity: either all or none of the operations in the batch will succeed, but they carry overhead to ensure this property. Unlogged batches don't have the overhead of logged batches, but don't guarantee atomicity. Updates of counters are handled specially by Cassandra, so batches of counter updates have to use the CounterBatch type. A counter batch can only contain statements to update counters. For unlogged batches it is recommended to send only single-partition batches (i.e. all statements in the batch should involve only a single partition). A multi-partition batch needs to be split by the coordinator node and re-sent to the correct nodes. With single-partition batches you can send the batch directly to the node for the partition without incurring the additional network hop. It is also possible to pass an entire BEGIN BATCH .. APPLY BATCH statement to Query.Exec. There are differences in how these are executed. A BEGIN BATCH statement passed to Query.Exec is prepared as a whole, in a single statement. Session.ExecuteBatch prepares individual statements in the batch. If you have variable-length batches using the same statement, using Session.ExecuteBatch is more efficient. See Example_batch for an example. Query.ScanCAS or Query.MapScanCAS can be used to execute a single-statement lightweight transaction (an INSERT/UPDATE .. IF statement) and read its result. See the example for Query.MapScanCAS. Multiple-statement lightweight transactions can be executed as a logged batch that contains at least one conditional statement. All the conditions must return true for the batch to be applied. You can use Session.ExecuteBatchCAS and Session.MapExecuteBatchCAS when executing the batch to learn about the result of the LWT. See the example for Session.MapExecuteBatchCAS. Queries can be marked as idempotent. Marking the query as idempotent tells the driver that the query can be executed multiple times without affecting its result. Non-idempotent queries are not eligible for retrying or speculative execution. Idempotent queries are retried in case of errors based on the configured RetryPolicy. If the query is LWT and the configured RetryPolicy additionally implements the LWTRetryPolicy interface, then the policy will be cast to LWTRetryPolicy and used this way. Queries can be retried even before they fail by setting a SpeculativeExecutionPolicy. The policy can cause the driver to retry on a different node if the query is taking longer than a specified delay even before the driver receives an error or timeout from the server. When a query is speculatively executed, the original execution is still executing.
The two parallel executions of the query race to return a result; the first result received is returned. UDTs can be (un)marshaled from/to a map[string]interface{}, a Go struct, or a type implementing the UDTUnmarshaler, UDTMarshaler, Unmarshaler or Marshaler interfaces. For structs, the cql tag can be used to specify the CQL field name to be mapped to a struct field: See Example_userDefinedTypesMap, Example_userDefinedTypesStruct, ExampleUDTMarshaler, ExampleUDTUnmarshaler. It is possible to provide observer implementations that could be used to gather metrics: The CQL protocol also supports tracing of queries. When enabled, the database will write information about internal events that happened during execution of the query. You can use Query.Trace to request tracing and receive the session ID that the database used to store the trace information in the system_traces.sessions and system_traces.events tables. NewTraceWriter returns an implementation of Tracer that writes the events to a writer. Gathering trace information might be essential for debugging and optimizing queries, but writing traces has overhead, so this feature should not be used on production systems with very high load unless you know what you are doing. Example_batch demonstrates how to execute a batch of statements. Example_dynamicColumns demonstrates how to handle a dynamic column list. Example_marshalerUnmarshaler demonstrates how to implement a Marshaler and Unmarshaler. Example_nulls demonstrates how to distinguish between null and zero value when needed. Null values are unmarshalled as the zero value of the type. If you need to distinguish, for example, between a text column being null and an empty string, you can unmarshal into a *string field. Example_paging demonstrates how to manually fetch pages and use page state. See also the package documentation about paging. Example_set demonstrates how to use sets. Example_userDefinedTypesMap demonstrates how to work with user-defined types as maps. See also Example_userDefinedTypesStruct and the examples for UDTMarshaler and UDTUnmarshaler if you want to map to structs. Example_userDefinedTypesStruct demonstrates how to work with user-defined types as structs. See also the examples for UDTMarshaler and UDTUnmarshaler if you need more control/better performance.
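Two short sketches of the batch execution and UDT-to-struct mapping described above; the keyspace, tables, and type are illustrative.

    // A logged batch: either all statements succeed or none do.
    b := session.NewBatch(gocql.LoggedBatch)
    b.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`, "me", gocql.TimeUUID(), "first")
    b.Query(`INSERT INTO tweet (timeline, id, text) VALUES (?, ?, ?)`, "me", gocql.TimeUUID(), "second")
    if err := session.ExecuteBatch(b); err != nil {
        // TODO: handle error
    }

    // Mapping a UDT to a struct: cql tags name the CQL fields.
    type Coordinates struct {
        Lat float64 `cql:"lat"`
        Lon float64 `cql:"lon"`
    }
    var locID gocql.UUID // illustrative primary key
    var coords Coordinates
    if err := session.Query(`SELECT coords FROM locations WHERE id = ?`, locID).Scan(&coords); err != nil {
        // TODO: handle error
    }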
Package redka implements a Redis-like database backed by SQLite. It provides an API to interact with data structures like keys, strings and hashes. Typically, you open a database with Open and use the returned DB instance methods like DB.Key or DB.Str to access the data structures. You should only use one instance of DB throughout your program and close it with DB.Close when the program exits.
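A minimal sketch of that flow; the file name and keys are illustrative, and the exact signatures of Open, Str, Set and Get are assumptions to verify against the package documentation.

    db, err := redka.Open("data.db", nil)
    if err != nil {
        // TODO: handle error
    }
    defer db.Close()

    // Strings are reached through the Str repository.
    if err := db.Str().Set("name", "alice"); err != nil {
        // TODO: handle error
    }
    name, err := db.Str().Get("name")
    _, _ = name, err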
Package txnredis provides a transaction adapter for Redis, seamlessly integrating with the txn package to enable distributed transaction management across multiple data sources, including Redis. Key Features: * Transactional Support: Leverages Redis transactions (MULTI/EXEC) to ensure atomicity of operations. * Seamless Integration: Easily integrates with the txn package for managing transactions across different databases. * Convenient API: Provides a simple interface (RAdapter) for interacting with Redis within a transaction.
Catalyst started out as a microservice base that can be used to create REST APIs. It contains many essential parts that you would need for a microservice, such as:

  - Configurability
  - A basic dependency injection mechanism
  - Request response cycle handling
  - Structure and field validations
  - Error handling
  - Logging
  - Database resource management
  - Application metrics

Written using the Clean Architecture paradigm, it offers clean separation between business (domain) logic and facilitation logic. In the context of `Catalyst` we use a concept called `Transport mediums` to define ways in which you can communicate with the microservice. A package inside the `transport` directory consists of all the logic needed to handle communication with the outside world using one type of transport medium. Out of the box, Catalyst contains two such transport mediums:

  - http (to handle REST web requests)
  - metrics (to expose application metrics)

What makes Catalyst a REST API is this `http` package, which handles the complete lifecycle of REST web requests. Likewise, the `metrics` transport medium exposes an endpoint to let `Prometheus` scrape application metrics. You can add other transport mediums to extend a project based on Catalyst. For example, a `stream` package can be added to communicate with a streaming platform like `Kafka`, or an `mqtt` package can be added to communicate with `IoT` devices.
Package hord provides a simple and extensible interface for interacting with various database systems in a uniform way. Hord is designed to be a database-agnostic library that provides a common interface for interacting with different database systems. It allows developers to write code that is decoupled from the underlying database technology, making it easier to switch between databases or support multiple databases in the same application. To use Hord, import it as follows: To create a database client, you need to import and use the appropriate driver package along with the `hord` package. For example, to use the Redis driver: Each driver provides its own `Dial` function to establish a connection to the database. Refer to the specific driver documentation for more details. Once you have a database client, you can use it to perform various database operations. The API is consistent across different drivers. Refer to the `hord.Database` interface documentation for a complete list of available methods. Hord provides common error types and constants for consistent error handling across drivers. Refer to the `hord` package documentation for more information on error handling. Contributions to Hord are welcome! If you want to add support for a new database driver or improve the existing codebase, please refer to the contribution guidelines in the project's repository.
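A minimal sketch of the pattern described above using the Redis driver; the import paths and the Config field shown are assumptions, while Setup, Set and Get follow the hord.Database interface as described.

    import (
        "github.com/madflojo/hord"
        "github.com/madflojo/hord/drivers/redis"
    )

    func openStore() (hord.Database, error) {
        // Dial the database via the driver, then use the driver-agnostic interface.
        db, err := redis.Dial(redis.Config{Server: "127.0.0.1:6379"})
        if err != nil {
            return nil, err
        }
        if err := db.Setup(); err != nil {
            return nil, err
        }
        if err := db.Set("greeting", []byte("hello")); err != nil {
            return nil, err
        }
        data, err := db.Get("greeting")
        _ = data
        return db, err
    }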
Package multitenancy provides a Go framework for building multi-tenant applications, streamlining tenant management and model migrations. It abstracts multitenancy complexities through a unified, database-agnostic API compatible with GORM. The framework supports two primary multitenancy strategies: The following sections provide an overview of the package's features and usage instructions for implementing multitenancy in Go applications. The package supports multitenancy for both PostgreSQL and MySQL databases, offering three methods for establishing a new database connection with multitenancy support: OpenDB allows opening a database connection using a URL-like DSN string, providing a flexible and easy way to switch between drivers. This method abstracts the underlying driver mechanics, offering a straightforward connection process and a unified, database-agnostic API through the returned *DB instance, which embeds the gorm.DB instance. For constructing the DSN string, refer to the driver-specific documentation for the required parameters and formats. This approach is useful for applications that need to dynamically switch between different database drivers or configurations without changing the codebase. Postgres: MySQL: Open with a supported driver offers a unified, database-agnostic API for managing tenant-specific and shared data, embedding the gorm.DB instance. This method allows developers to initialize and configure the dialect themselves before opening the database connection, providing granular control over the database connection and configuration. It facilitates seamless switching between database drivers while maintaining access to GORM's full functionality. Postgres: MySQL: For direct access to the gorm.DB API and multitenancy features for specific tasks, this approach allows invoking driver-specific functions directly. It provides a lower level of abstraction, requiring manual management of database-specific operations. Prior to version 8, this was the only method available for using the package. Postgres: MySQL: All models must implement driver.TenantTabler, which extends GORM's Tabler interface. This extension allows models to define their table name and indicate whether they are shared across tenants. Public Models: These are models which are shared across tenants. driver.TenantTabler.TableName should return the table name prefixed with 'public.'. driver.TenantTabler.IsSharedModel should return true. Tenant-Specific Models: These models are specific to a single tenant and should not be shared across tenants. driver.TenantTabler.TableName should return the table name without any prefix. driver.TenantTabler.IsSharedModel should return false. Tenant Model: This package provides a TenantModel struct that can be embedded in any public model that requires tenant scoping, enriching it with essential fields for managing tenant-specific information. This structure incorporates fields for the tenant's domain URL and schema name, facilitating the linkage of tenant-specific models to their respective schemas. Before performing any migrations or operations on tenant-specific models, the models must be registered with the DB instance using DB.RegisterModels. Postgres Adapter: Use postgres.RegisterModels to register models. MySQL Adapter: Use mysql.RegisterModels to register models. To ensure data integrity and schema isolation across tenants, gorm.DB.AutoMigrate has been disabled. Instead, use the provided shared and tenant-specific migration methods.
driver.ErrInvalidMigration is returned if the `AutoMigrate` method is called directly. To ensure tenant isolation and facilitate concurrent migrations, driver-specific locking mechanisms are used. These locks prevent concurrent migrations from interfering with each other, ensuring that only one migration process can run at a time for a given tenant. Consult the driver-specific documentation for more information. After registering models, shared models are migrated using DB.MigrateSharedModels. Postgres Adapter: Use postgres.MigrateSharedModels to migrate shared models. MySQL Adapter: Use mysql.MigrateSharedModels to migrate shared models. After registering models, tenant-specific models are migrated using DB.MigrateTenantModels. Postgres Adapter: Use postgres.MigrateTenantModels to migrate tenant-specific models. MySQL Adapter: Use mysql.MigrateTenantModels to migrate tenant-specific models. When a tenant is removed from the system, the tenant-specific schema and associated tables should be cleaned up using DB.OffboardTenant. Postgres Adapter: Use postgres.DropSchemaForTenant to offboard a tenant. MySQL Adapter: Use mysql.DropDatabaseForTenant to offboard a tenant. DB.UseTenant configures the database for operations specific to a tenant, abstracting database-specific operations for tenant context configuration. This method returns a reset function to revert the database context and an error if the operation fails. Postgres Adapter: Use postgres.SetSearchPath to set the search path for a tenant. MySQL Adapter: Use mysql.UseDatabase function to set the database for a tenant. For the most part, foreign key constraints work as expected, but there are some restrictions and considerations to keep in mind when working with multitenancy. The following guidelines outline the supported relationships between tables: Between Tables in the Public Schema: From Public Schema Tables to Tenant-Specific Tables: From Tenant-Specific Tables to Public Schema Tables: Between Tenant-Specific Tables: See the example application for a more comprehensive demonstration of the framework's capabilities. Always sanitize input to prevent SQL injection vulnerabilities. This framework does not perform any validation on the database name or schema name parameters. It is the responsibility of the caller to ensure that these parameters are sanitized. To facilitate this, the framework provides the following utilities: For a detailed technical overview of SQL design strategies adopted by the framework, see the STRATEGY.md file.
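As a concrete illustration of the driver.TenantTabler contract described above, a shared model and a tenant-specific model might look like the following sketch; the struct and table names are illustrative, and these models would then be passed to DB.RegisterModels before running the shared and tenant-specific migrations.

    // Tenant is a public (shared) model; the embedded TenantModel supplies the
    // domain URL and schema name fields described above.
    type Tenant struct {
        multitenancy.TenantModel
    }

    func (Tenant) TableName() string   { return "public.tenants" }
    func (Tenant) IsSharedModel() bool { return true }

    // Book is a tenant-specific model, created in each tenant's own schema.
    type Book struct {
        gorm.Model
        Title string
    }

    func (Book) TableName() string   { return "books" }
    func (Book) IsSharedModel() bool { return false }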
Gaby is an experimental new bot running in the Go issue tracker as @gabyhelp, to try to help automate various mundane things that a machine can do reasonably well, as well as to try to discover new things that a machine can do reasonably well. The name gaby is short for “Go AI Bot”, because one of the purposes of the experiment is to learn what LLMs can be used for effectively, including identifying what they should not be used for. Some of the gaby functionality will involve LLMs; other functionality will not. The guiding principle is to create something that helps maintainers and that maintainers like, which means to use LLMs when they make sense and help but not when they don't. In the long term, the intention is for this code base or a successor version to take over the current functionality of “gopherbot” and become @gopherbot, at which point the @gabyhelp account will be retired. At the moment we are not accepting new code contributions or PRs. We hope to move this code to somewhere more official soon, at which point we will accept contributions. The GitHub Discussion is a good place to leave feedback about @gabyhelp. The bot functionality is implemented in internal packages in subdirectories. This comment gives a brief tour of the structure. An explicit goal for the Gaby code base is that it run well in many different environments, ranging from a maintainer's home server or even a Raspberry Pi all the way up to a hosted cloud. (At the moment, Gaby runs on a Linux server in my basement.) Due to this emphasis on portability, Gaby defines its own interfaces for all the functionality it needs from the surrounding environment and then also defines a variety of implementations of those interfaces. Another explicit goal for the Gaby code base is that it be very well tested. (See my [Go Testing talk] for more about why this is so important.) Abstracting the various external functionality into interfaces also helps make testing easier, and some packages also provide explicit testing support. The result of both these goals is that Gaby defines some basic functionality like time-ordered indexing for itself instead of relying on some specific other implementation. In the grand scheme of things, these are a small amount of code to maintain, and the benefits to both portability and testability are significant. Code interacting with services like GitHub and code running on cloud servers is typically difficult to test and therefore undertested. It is an explicit requirement in this repo to test all the code, even (and especially) when testing is difficult. A useful command to have available when working in the code is rsc.io/uncover, which prints the package source lines not covered by a unit test. A useful invocation is: The first “go test” command checks that the test passes. The second repeats the test with coverage enabled. Running the test twice this way makes sure that any syntax or type errors reported by the compiler are reported without coverage, because coverage can mangle the error output. After both tests pass and the second writes a coverage profile, running “uncover /tmp/c.out” prints the uncovered lines. In this output, there are three error paths that are untested. In general, error paths should be tested, so tests should be written to cover these lines of code. In limited cases, it may not be practical to test a certain section, such as when code is unreachable but left in case of future changes or mistaken assumptions.
That part of the code can be labeled with a comment beginning “// Unreachable” or “// unreachable” (usually with explanatory text following), and then uncover will not report it. If a code section should be tested but the test is being deferred to later, that section can be labeled “// Untested” or “// untested” instead. The rsc.io/gaby/internal/testutil package provides a few other testing helpers. The overview of the code now proceeds from bottom up, starting with storage and working up to the actual bot. Gaby needs to manage a few secret keys used to access services. The rsc.io/gaby/internal/secret package defines the interface for obtaining those secrets. The only implementations at the moment are an in-memory map and a disk-based implementation that reads $HOME/.netrc. Future implementations may include other file formats as well as cloud-based secret storage services. Secret storage is intentionally separated from the main database storage, described below. The main database should hold public data, not secrets. Gaby defines the interface it expects from a large language model. The llm.Embedder interface abstracts an LLM that can take a collection of documents and return their vector embeddings, each of type llm.Vector. The only real implementation to date is rsc.io/gaby/internal/gemini. It would be good to add an offline implementation using Ollama as well. For tests that need an embedder but don't care about the quality of the embeddings, llm.QuoteEmbedder copies a prefix of the text into the vector (preserving vector unit length) in a deterministic way. This is good enough for testing functionality like vector search and simplifies tests by avoiding a dependence on a real LLM. At the moment, only the embedding interface is defined. In the future we expect to add more interfaces around text generation and tool use. As noted above, Gaby defines interfaces for all the functionality it needs from its external environment, to admit a wide variety of implementations for both execution and testing. The lowest level interface is storage, defined in rsc.io/gaby/internal/storage. Gaby requires a key-value store that supports ordered traversal of key ranges and atomic batch writes up to a modest size limit (at least a few megabytes). The basic interface is storage.DB. storage.MemDB returns an in-memory implementation useful for testing. Other implementations can be put through their paces using storage.TestDB. The only real storage.DB implementation is rsc.io/gaby/internal/pebble, which is a LevelDB-derived on-disk key-value store developed and used as part of CockroachDB. It is a production-quality local storage implementation and maintains the database as a directory of files. In the future we plan to add an implementation using Google Cloud Firestore, which provides a production-quality key-value lookup as a Cloud service without fixed baseline server costs. (Firestore is the successor to Google Cloud Datastore.) The storage.DB makes the simplifying assumption that storage never fails, or rather that if storage has failed then you'd rather crash your program than try to proceed through typically untested code paths. As such, methods like Get and Set do not return errors. They panic on failure, and clients of a DB can call the DB's Panic method to invoke the same kind of panic if they notice any corruption. It remains to be seen whether this decision is kept. 
In addition to the usual methods like Get, Set, and Delete, storage.DB defines Lock and Unlock methods that acquire and release named mutexes managed by the database layer. The purpose of these methods is to enable coordination when multiple instances of a Gaby program are running on a serverless cloud execution platform. So far Gaby has only run on an underground basement server (the opposite of cloud), so these have not been exercised much and the APIs may change. In addition to the regular database, package storage also defines storage.VectorDB, a vector database for use with LLM embeddings. The basic operations are Set, Get, and Search. storage.MemVectorDB returns an in-memory implementation that stores the actual vectors in a storage.DB for persistence but also keeps a copy in memory and searches by comparing against all the vectors. When backed by a storage.MemDB, this implementation is useful for testing, but when backed by a persistent database, the implementation suffices for small-scale production use (say, up to a million documents, which would require 3 GB of vectors). It is possible that the package ordering here is wrong and that VectorDB should be defined in the llm package, built on top of storage, and not the current “storage builds on llm”. Because Gaby makes minimal demands of its storage layer, any structure we want to impose must be implemented on top of it. Gaby uses the rsc.io/ordered encoding format to produce database keys that order in useful ways. For example, ordered.Encode("issue", 123) < ordered.Encode("issue", 1001), so that keys of this form can be used to scan through issues in numeric order. In contrast, using something like fmt.Sprintf("issue%d", n) would visit issue 1001 before issue 123 because "1001" < "123". Using this kind of encoding is common when using NoSQL key-value storage. See the rsc.io/ordered package for the details of the specific encoding. One of the implied jobs Gaby has is to collect all the relevant information about an open source project: its issues, its code changes, its documentation, and so on. Those sources are always changing, so derived operations like adding embeddings for documents need to be able to identify what is new and what has been processed already. To enable this, Gaby implements time-stamped—or just “timed”—storage, in which a collection of key-value pairs also has a “by time” index of ((timestamp, key), no-value) pairs to make it possible to scan only the key-value pairs modified after the previous scan. This kind of incremental scan only has to remember the last timestamp processed and then start an ordered key range scan just after that timestamp. This convention is implemented by rsc.io/gaby/internal/timed, along with a [timed.Watcher] that formalizes the incremental scan pattern. Various packages take care of downloading state from issue trackers and the like, but then all that state needs to be unified into a common document format that can be indexed and searched. That document format is defined by rsc.io/gaby/internal/docs. A document consists of an ID (conventionally a URL), a document title, and document text. Documents are stored using timed storage, enabling incremental processing of newly added documents. The next stop for any new document is embedding it into a vector and storing that vector in a vector database.
The rsc.io/gaby/internal/embeddocs package does this, and there is very little to it, given the abstractions of a document store with incremental scanning, an LLM embedder, and a vector database, all of which are provided by other packages. None of the packages mentioned so far involve network operations, but the next few do. It is important to test those but also equally important not to depend on external network services in the tests. Instead, the package rsc.io/gaby/internal/httprr provides an HTTP record/replay system specifically designed to help testing. It can be run once in a mode that does use external network servers and records the HTTP exchanges, but by default tests look up the expected responses in the previously recorded log, replaying those responses. The result is that code making HTTP requests can be tested with real server traffic once and then re-tested with recordings of that traffic afterward. This avoids having to write entire fakes of services but also avoids needing the services to stay available in order for tests to pass. It also typically makes the tests much faster than using the real servers. Gaby uses GitHub in two main ways. First, it downloads an entire copy of the issue tracker state, with incremental updates, into timed storage. Second, it performs actions in the issue tracker, like editing issues or comments, applying labels, or posting new comments. These operations are provided by rsc.io/gaby/internal/github. Gaby downloads the issue tracker state using GitHub's REST API, which makes incremental updating very easy but does not provide access to a few newer features such as project boards and discussions, which are only available in the GraphQL API. Sync'ing using the GraphQL API is left for future work: there is enough data available from the REST API that for now we can focus on what to do with that data and not worry that a few newer GitHub features are missing. The github package provides two important aids for testing. For issue tracker state, it also allows loading issue data from a simple text-based issue description, avoiding any actual GitHub use at all and making it easier to modify the test data. For issue tracker actions, the github package defaults in tests to not actually making changes, instead diverting edits into an in-memory log. Tests can then check the log to see whether the right edits were requested. The rsc.io/gaby/internal/githubdocs package takes care of adding content from the downloaded GitHub state into the general document store. Currently the only GitHub-derived documents are one document per issue, consisting of the issue title and body. It may be worth experimenting with incorporating issue comments in some way, although they bring with them a significant amount of potential noise. Gaby will need to download and store Gerrit state into the database and then derive documents from it. That code has not yet been written, although rsc.io/gerrit/reviewdb provides a basic version that can be adapted. Gaby will also need to download and store project documentation into the database and derive documents from it corresponding to cutting the page at each heading. That code has been written but is not yet tested well enough to commit. It will be added later. The simplest job Gaby has is to go around fixing new comments, including issue descriptions (which look like comments but are a different kind of GitHub data).
The rsc.io/gaby/internal/commentfix package implements this, watching GitHub state incrementally and applying a few kinds of rewrite rules to each new comment or issue body. The commentfix package allows automatically editing text, automatically editing URLs, and automatically hyperlinking text. The next job Gaby has is to respond to new issues with related issues and documents. The rsc.io/gaby/internal/related package implements this, watching GitHub state incrementally for new issues, filtering out ones that should be ignored, and then finding related issues and documents and posting a list. This package was originally intended to identify and automatically close duplicates, but the difference between a duplicate and a very similar or not-quite-fixed issue is too difficult a judgement to make for an LLM. Even so, the act of bringing forward related context that may have been forgotten or never known by the people reading the issue has turned out to be incredibly helpful. All of these pieces are put together in the main program, this package, rsc.io/gaby. The actual main package has no tests yet but is also incredibly straightforward. It does need tests, but we also need to identify ways that the hard-coded policies in the package can be lifted out into data that a natural language interface can manipulate. For example the current policy choices in package main amount to: These could be stored somewhere as data and manipulated and added to by the LLM in response to prompts from maintainers. And other features could be added and configured in a similar way. Exactly how to do this is an important thing to learn in future experimentation. As mentioned above, the two jobs Gaby does already are both fairly simple and straightforward. It seems like a general approach that should work well is well-written, well-tested deterministic traditional functionality such as the comment fixer and related-docs poster, configured by LLMs in response to specific directions or eventually higher-level goals specified by project maintainers. Other functionality that is worth exploring is rules for automatically labeling issues, rules for identifying issues or CLs that need to be pinged, rules for identifying CLs that need maintainer attention or that need submitting, and so on. Another stretch goal might be to identify when an issue needs more information and ask for that information. Of course, it would be very important not to ask for information that is already present or irrelevant, so getting that right would be a very high bar. There is no guarantee that today's LLMs work well enough to build a useful version of that. Another important area of future work will be running Gaby on top of cloud databases and then moving Gaby's own execution into the cloud. Getting it a server with a URL will enable GitHub callbacks instead of the current 2-minute polling loop, which will enable interactive conversations with Gaby. Overall, we believe that there are a few good ideas for ways that LLM-based bots can help make project maintainers' jobs easier and less monotonous, and they are waiting to be found. There are also many bad ideas, and they must be filtered out. Understanding the difference will take significant care, thought, and experimentation. We have work to do.
Package couch implements a client for a CouchDB database. Version 0.1 focuses on basic operations, proper conflict management, error handling and replication. Not part of this version are attachment handling, general statistics and optimizations, change detection and creating views. Most of the features are accessible using the generic Do() function, though. Getting started: Every document in CouchDB has to be identifiable by a document id and a revision id. Two types already implement this interface (called Identifiable): Doc and DynamicDoc. Doc can be used as an anonymous field in your own struct. DynamicDoc is a type alias for map[string]interface{}; use it when your documents have no implicit schema at all. To make code examples easier to follow, there will be no explicit error handling in these examples even though it's fully supported throughout the API. Insert() will create a new document if it doesn't have an id yet: After the operation the final id and revision id will be written back to p. That's why you can now just edit p and call Insert() again, which will save the same document under a new revision. After this edit, p will contain the latest revision id. Note that it is possible that this second edit fails because someone else edited and saved the same document in the meantime. You will be notified of this in the form of an error and you should then first retrieve the latest document revision to see the changes of this lost update: CouchDB doesn't edit documents in-place but adds a complete revision for each edit. That's why you will be correctly informed of any lost update. Because CouchDB supports multi-master replication of databases, it is possible that conflicts like the one described above can't be avoided. CouchDB is not going to interrupt replication because of a lost update. Let's say you have two instances running, maybe a central one and a mobile one, and both are kept in sync by replication. Now let's assume you edit a document on your mobile DB and someone else edits the same document on the central DB. After you've come online again, you use bi-directional replication to sync the databases. CouchDB will now create a branch structure for your document, similar to version control systems. Your document has two conflicting revisions and in this case they can't necessarily be resolved automatically. This client helps you with a number of methods to resolve such an issue quickly. Read more about the conflict model at http://docs.couchdb.org/en/latest/replication/conflicts.html Continuing with the above example, replicate the database: Now, on the other database, edit the document (note that it has the same id there): Now edit the document on the first database. Retrieve it first to make sure it has the correct revision id: Now replicate anotherDB back to our first database: Now we have two conflicting versions of a document. Only you as the editor can decide whether "LatestAnna" or "AnotherAnna" is correct. To detect this conflict there are a number of methods. First, you can just ask a document: You probably want to have a look at the revisions in your preferred format, use Revisions() to unmarshal the revision data into a slice of a custom data type: Pick one of the revisions or create a new document to solve the conflict: That's it. You can detect conflicts like these throughout your database using: Errors returned by CouchDB will be converted into a Go error. Its regular Error() method will then return a combination of the shortform (e.g.
bad_request) as well as the longer and more specific description. To be able to identify a specific error within your application, use ErrorType() to get the shortform only.
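A minimal sketch of the Insert/edit flow described above. How the database handle db is obtained is not shown, and the exact signatures are assumptions; Doc and Insert are the names used in this documentation, and error handling is included here even though the examples above omit it.

    // Person embeds couch.Doc so it carries a document id and revision id.
    type Person struct {
        couch.Doc
        Name string `json:"name"`
    }

    p := Person{Name: "Anna"}
    if err := db.Insert(&p); err != nil {
        // TODO: handle error
    }
    // p now holds the assigned id and revision id.

    p.Name = "LatestAnna"
    if err := db.Insert(&p); err != nil {
        // A lost update is reported as an error; fetch the latest revision,
        // merge your changes, and insert again.
    }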
Package helper (the common project) provides commonly used helper utility functions, custom utility types, and third party package wrappers. The common project helps with code reuse and faster composition of logic, without having to delve into commonly recurring code logic and related testing. Common project source directories and brief descriptions:

  + /ascii = helper types and/or functions related to ascii manipulations.
  + /crypto = helper types and/or functions related to encryption, decryption, hashing, such as rsa, aes, sha, tls etc.
  + /csv = helper types and/or functions related to csv file manipulations.
  + /rest = helper types and/or functions related to http rest api GET, POST, PUT, DELETE actions invoked from client side.
  + /tcp = helper types providing wrapped tcp client and tcp server logic.
  + /wrapper = wrappers provide a simpler usage path to third party packages, as well as adding additional enhancements.

  /helper-conv.go = helpers for data conversion operations.
  /helper-db.go = helpers for database data type operations.
  /helper-emv.go = helpers for emv chip card related operations.
  /helper-io.go = helpers for io related operations.
  /helper-net.go = helpers for network related operations.
  /helper-num.go = helpers for numeric related operations.
  /helper-other.go = helpers for misc. uncategorized operations.
  /helper-reflect.go = helpers for reflection based operations.
  /helper-regex.go = helpers for regular expression related operations.
  /helper-str.go = helpers for string operations.
  /helper-struct.go = helpers for struct related operations.
  /helper-time.go = helpers for time related operations.
  /helper-uuid.go = helpers for generating globally unique ids.
go-rexster-client is a Rexster graph database client for Go. See https://github.com/tinkerpop/rexster/wiki for more information about Rexster. It implements a subset of the Rexster REST API: https://github.com/tinkerpop/rexster/wiki/Basic-REST-API. To use the *Batch functions, you must have the batch kibble installed. See https://github.com/tinkerpop/rexster/tree/master/rexster-kibbles/batch-kibble for more information. In the Rexster source dir, this means copying batch-kibble-2.4.0-SNAPSHOT.jar to ./rexster-server/target/rexster-server-2.4.0-SNAPSHOT-standalone/lib/.
Package authaus is an authentication and authorization system. Authaus brings together the following pluggable components: Any of these five components can be swapped out, and in fact the fourth and fifth ones (Role Groups and User Store) are entirely optional. A typical setup is to use LDAP as an Authenticator, and Postgres as a Session, Permit, and Role Groups database. Your session database does not need to be particularly performant, since Authaus maintains an in-process cache of session keys and their associated tokens. Authaus was NOT designed to be a "Facebook Scale" system. The target audience is a system of perhaps 100,000 users. There is nothing fundamentally limiting about the API of Authaus, but the internals certainly have not been built with millions of users in mind. The intended usage model is this: Authaus is intended to be embedded inside your security system, and run as a standalone HTTP service (aka a REST service). This HTTP service CAN be open to the wide world, but it's also completely OK to let it listen only to servers inside your DMZ. Authaus only gives you the skeleton and some examples of HTTP responders. It's up to you to flesh out the details of your authentication HTTP interface, and whether you'd like that to face the world, or whether it should only be accessible via other services that you control. At startup, your services open an HTTP connection to the Authaus service. This connection will typically live for the duration of the service. For every incoming request, you peel off whatever authentication information is associated with that request. This is either a session key, or a username/password combination. Let's call it the authorization information. You then ask Authaus to tell you WHO this authorization information belongs to, as well as WHAT this authorization information allows the requester to do (i.e. Authentication and Authorization). Authaus responds either with a 401 (Unauthorized), 403 (Forbidden), or a 200 (OK) and a JSON object that tells you the identity of the agent submitting this request, as well as the permissions that this agent possesses. It's up to your individual services to decide what to do with that information. It should be very easy to expose Authaus over a protocol other than HTTP, since Authaus is intended to be easy to embed. The HTTP API is merely an illustrative example. A `Session Key` is the long random number that is typically stored as a cookie. A `Permit` is a set of roles that has been granted to a user. Authaus knows nothing about the contents of a permit. It simply treats it as a binary blob, and when writing it to an SQL database, encodes it as base64. The interpretation of the permit is application dependent. Typically, a Permit will hold information such as "Allowed to view billing information", or "Allowed to paint your bathroom yellow". Authaus does have a built-in module called RoleGroupDB, which has its own interpretation of what a Permit is, but you do not need to use this. A `Token` is the result of a successful authentication. It stores the identity of a user, an expiry date, and a Permit. A token will usually be retrieved by a session key. However, you can also perform a once-off authentication, which also yields you a token, which you will typically throw away when you are finished with it. All public methods of the `Central` object are callable from multiple threads. Reader-Writer locks are used in all of the caching systems.
The number of concurrent connections is limited only by the limits of the Go runtime, and the performance limits that are inherent to the simple reader-writer locks used to protect shared state. Authaus must be deployed as a single process (which implies running on a single logical machine). The sole reason why it must run on only one process and not more, is because of the state that lives inside the various Authaus caches. Were it not for these caches, then there would be nothing preventing you from running Authaus on as many machines as necessary. The cached state stored inside the Authaus server is: If you wanted to make Authaus runnable across multiple processes, then you would need to implement a cache invalidation system for these caches. Authaus makes no attempt to mitigate DOS attacks. The most sane approach in this domain seems to be this (http://security.stackexchange.com/questions/12101/prevent-denial-of-service-attacks-against-slow-hashing-functions). The password database (created via NewAuthenticationDB_SQL) stores password hashes using the scrypt key derivation system (http://www.tarsnap.com/scrypt.html). Internally, we store our hash in a format that can later be extended, should we wish to double-hash the passwords, etc. The hash is 65 bytes and looks like this: The first byte of the hash is a version number of the hash. The remaining 64 bytes are the salt and the hash itself. At present, only one version is supported, which is version 1. It consists of 32 bytes of salt, and 32 bytes of scrypt'ed hash, with scrypt parameters N=256 r=8 p=1. Note that the parameter N=256 is quite low, meaning that it is possible to compute this in approximately 1 millisecond (1,000,000 nanoseconds) on a 2009-era Intel Core i7. This is a deliberate tradeoff. On the same CPU, a SHA256 hash takes about 500 nanoseconds to compute, so we are still making it 2000 times harder to brute force the passwords than an equivalent system storing only a SHA256 salted hash. This discussion is only of relevance in the event that the password table is compromised. No cookie signing mechanism is implemented. Cookies are not presently transmitted with Secure:true. This must change. The LDAP Authenticator is extremely simple, and provides only one function: Authenticate a user against an LDAP system (often this means Active Directory, AKA a Windows Domain). It calls the LDAP "Bind" method, and if that succeeds for the given identity/password, then the user is considered authenticated. We take care not to allow an "anonymous bind", which many LDAP servers allow when the password is blank. The Session Database runs on Postgres. It stores a table of sessions, where each row contains the following information: When a permit is altered with Authaus, then all existing sessions have their permits altered transparently. For example, imagine User X is logged in, and his administrator grants him a new permission. User X does not need to log out and log back in again in order for his new permissions to be reflected. His new permissions will be available immediately. Similarly, if a password is changed with Authaus, then all sessions are invalidated. Do take note though, that if a password is changed through an external mechanism (such as with LDAP), then Authaus will have no way of knowing this, and will continue to serve up sessions that were authenticated with the old password. This is a problem that needs addressing. 
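The version-1 password hash layout described above (one version byte, 32 bytes of salt, then 32 bytes of scrypt output with N=256, r=8, p=1) can be sketched as follows; this is an illustration of the layout, not the package's actual implementation.

    import "golang.org/x/crypto/scrypt"

    // hashPasswordV1 builds the 65-byte hash: version byte + 32-byte salt + 32-byte scrypt hash.
    func hashPasswordV1(password string, salt [32]byte) ([]byte, error) {
        dk, err := scrypt.Key([]byte(password), salt[:], 256, 8, 1, 32)
        if err != nil {
            return nil, err
        }
        out := make([]byte, 0, 65)
        out = append(out, 1)          // version 1
        out = append(out, salt[:]...) // 32 bytes of salt
        out = append(out, dk...)      // 32 bytes of scrypt output
        return out, nil
    }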
No cookie signing mechanism is implemented. Cookies are not presently transmitted with Secure:true. This must change. The LDAP Authenticator is extremely simple, and provides only one function: authenticate a user against an LDAP system (often this means Active Directory, AKA a Windows Domain). It calls the LDAP "Bind" method, and if that succeeds for the given identity/password, then the user is considered authenticated. We take care not to allow an "anonymous bind", which many LDAP servers allow when the password is blank. The Session Database runs on Postgres. It stores a table of sessions, where each row contains the following information: When a permit is altered through Authaus, all existing sessions have their permits updated transparently. For example, imagine User X is logged in, and his administrator grants him a new permission. User X does not need to log out and log back in again in order for his new permissions to be reflected. His new permissions will be available immediately. Similarly, if a password is changed through Authaus, then all sessions are invalidated. Do take note, though, that if a password is changed through an external mechanism (such as with LDAP), then Authaus will have no way of knowing this, and will continue to serve up sessions that were authenticated with the old password. This is a problem that needs addressing. You can limit the number of concurrent sessions per user to 1, by setting ConfigSessionDB.MaxActiveSessions to 1. This setting may only be zero or one. Zero, which is the default, means an unlimited number of concurrent sessions per user. Authaus will always place your Session Database behind its own Session Cache. This session cache is a very simple single-process in-memory cache of recent sessions. The limit on the number of entries in this cache is hard-coded, and that should probably change. The Permit database runs on Postgres. It stores a table of permits, which is simply a 1:1 mapping from Identity -> Permit. The Permit is just an array of bytes, which we store base64 encoded, inside a text field. This part of the system doesn't care how you interpret that blob. The Role Group Database is an entirely optional component of Authaus. The other components of Authaus (Authenticator, PermitDB, SessionDB) do not understand your Permits. To them, a Permit is simply an arbitrary array of bytes. The Role Group Database is a component that adds a specific meaning to a permit blob. Let's see what that specific meaning looks like... The built-in Role Group Database interprets a permit blob as a string of 32-bit integer IDs: These 32-bit integer IDs refer to "role groups" inside a database table. The "role groups" table might look like this: The Role Group IDs use 32-bit indices, because we assume that you are not going to create more than 2^32 different role groups. The worst case we assume here is that of an automated system that creates 100,000 roles per day. Such a system would run for more than 100 years before exhausting a 32-bit ID space. These constraints suggest that we do not even need 32 bits, and could get away with just a 16-bit group ID; however, we expect the number of groups to be relatively small. Our aim here, arbitrary though it may be, is to fit the permit and identity into a single ethernet packet, which one can reasonably peg at 1500 bytes. 1500 / 4 = 375. We assume that no sane human administrator will assign 375 security groups to any individual. We expect the number of groups assigned to any individual to be in the range of 1 to 20, which makes 375 a gigantic buffer.
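A sketch of what such a permit blob could look like is shown below: a sequence of 32-bit group IDs, base64-encoded for storage in a text column. The little-endian byte order is an assumption made for illustration; the authoritative encoding is defined by Authaus's RoleGroupDB.

    package permit

    import (
        "encoding/base64"
        "encoding/binary"
        "fmt"
    )

    // encodePermit packs a list of 32-bit role group IDs into a blob and
    // base64-encodes it, as it would be stored in a text column.
    func encodePermit(groupIDs []uint32) string {
        blob := make([]byte, 4*len(groupIDs))
        for i, id := range groupIDs {
            binary.LittleEndian.PutUint32(blob[i*4:], id) // byte order is an assumption
        }
        return base64.StdEncoding.EncodeToString(blob)
    }

    // decodePermit reverses encodePermit.
    func decodePermit(encoded string) ([]uint32, error) {
        blob, err := base64.StdEncoding.DecodeString(encoded)
        if err != nil {
            return nil, err
        }
        if len(blob)%4 != 0 {
            return nil, fmt.Errorf("permit blob length %d is not a multiple of 4", len(blob))
        }
        ids := make([]uint32, len(blob)/4)
        for i := range ids {
            ids[i] = binary.LittleEndian.Uint32(blob[i*4:])
        }
        return ids, nil
    }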
OAuth support in Authaus is limited to a very simple scenario: * You wish to allow your users to log in using an OAuth service - thereby outsourcing the Authentication to that external service, and using it to populate the email address of your users. Authaus's OAuth support was developed to work with Microsoft Azure Active Directory; however, it should be fairly easy to extend the code to handle other OAuth providers. Inside the database are two tables related to OAuth: oauthchallenge: The challenge table holds OAuth sessions which have been started, and which are expected to either succeed or fail within the next few minutes. The default timeout for a challenge is 5 minutes. A challenge record is usually created the moment the user clicks on the "Sign in with Microsoft" button on your site, and it tracks that authentication attempt. oauthsession: The session table holds OAuth sessions which have successfully authenticated, and also the token that was retrieved by a successful authorization. If a token has expired, then it is refreshed and updated in-place, inside the oauthsession table. An OAuth login follows this sequence of events:
1. The user clicks on a "Sign in with X" button on your login page.
2. A record is created in the oauthchallenge table, with a unique ID. This ID is a secret known only to the Authaus server and the OAuth server. It is used as the `state` parameter in the OAuth login mechanism.
3. The HTTP call which prompts #2 returns a redirect URL (e.g. via an HTTP 302 response), which redirects the user's browser to the OAuth website, so that the user can either grant or refuse access. If the user refuses, or fails to log in, then the login sequence ends here.
4. Upon successful authorization with the OAuth system, the OAuth website redirects the user back to your website, to a URL such as example.com/auth/oauth/finish, and you'll typically want Authaus to handle this request directly (via HttpHandlerOAuthFinish). Authaus will extract the secrets from the URL, perform any validations necessary, and then move the record from the oauthchallenge table into the oauthsession table. While 'moving' the record over, it will also add any additional information that was provided by the successful authentication, such as the token provided by the OAuth provider.
5. Authaus makes an API call to the OAuth system, to retrieve the email address and name of the person that just logged in, using the token just received.
6. If that email address does not exist inside authuserstore, then a new user record is created for this identity.
7. The user is logged into Authaus by creating a record inside authsession for the relevant identity. Inside the authsession table, a link to the oauthsession record is stored, so that there is a 1:1 link from the authsession table to the oauthsession table (ie Authaus Session to OAuth Token).
8. An Authaus session cookie is returned to the browser, thereby completing the login.
Although we only use our OAuth token a single time, during login, to retrieve the user's email address and name, we retain the OAuth token, and so we maintain the ability to make other API calls on behalf of that user. This hasn't proven necessary yet, but it seems like a reasonable bit of future-proofing. See the guidelines at the top of all_test.go for testing instructions.
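To make the challenge/finish sequence above concrete, here is a generic sketch using golang.org/x/oauth2. It is not Authaus's internal implementation; the client credentials, tenant, and the challenge-store helpers are placeholders.

    package oauthflow

    import (
        "context"
        "net/http"

        "golang.org/x/oauth2"
        "golang.org/x/oauth2/microsoft"
    )

    var conf = &oauth2.Config{
        ClientID:     "client-id",     // placeholder
        ClientSecret: "client-secret", // placeholder
        RedirectURL:  "https://example.com/auth/oauth/finish",
        Scopes:       []string{"openid", "email", "profile"},
        Endpoint:     microsoft.AzureADEndpoint("tenant-id"), // placeholder tenant
    }

    // start corresponds to steps 1-3: create a challenge and redirect the browser.
    func start(w http.ResponseWriter, r *http.Request) {
        state := newChallengeID() // hypothetical: would be stored in an oauthchallenge-style table
        http.Redirect(w, r, conf.AuthCodeURL(state), http.StatusFound)
    }

    // finish corresponds to steps 4-5: validate state, then exchange the code for a token.
    func finish(w http.ResponseWriter, r *http.Request) {
        if !challengeExists(r.URL.Query().Get("state")) { // hypothetical lookup
            http.Error(w, "unknown challenge", http.StatusForbidden)
            return
        }
        tok, err := conf.Exchange(context.Background(), r.URL.Query().Get("code"))
        if err != nil {
            http.Error(w, "exchange failed", http.StatusForbidden)
            return
        }
        _ = tok // steps 6-8 would look up the email address and create the session
    }

    func newChallengeID() string        { return "random-state" }      // placeholder
    func challengeExists(s string) bool { return s == "random-state" } // placeholder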
Package freeGeoIP or go-freeGeoIP is a Go client for the free IP geolocation API at https://freegeoip.app/, with inbuilt cache support to ease the 15,000 requests per hour rate limit. By default, the client will cache the IP geolocation information for 24 hours, but the expiry can be set manually. If you want to cache the information with no expiration time, set the expiry function to nil. You can use the package via the following command: freegeoip.app provides a free IP geolocation API for software developers. It uses a database of IP addresses that are associated with cities along with other relevant information like time zone, latitude and longitude. You're allowed up to 15,000 queries per hour by default. Once this limit is reached, all of your requests will result in HTTP 403 (Forbidden) until your quota is cleared. The HTTP API takes GET requests in the following schema: Supported formats are: csv, xml, json and jsonp. If no IP or hostname is provided, then your own IP is looked up. Contributors are more than welcome and much appreciated. Please feel free to open a PR to improve anything you don't like, or would like to add. Please make your changes in a specific branch and request to pull into master! If you can, please make sure all the changes work properly and do not affect the existing functionality. No PR is too small! Even the smallest effort counts. This project is licensed under the MIT license (https://github.com/Shivam010/go-freeGeoIP/blob/master/LICENSE).
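For illustration, the sketch below calls the HTTP API directly with net/http, using the json format and an IP as the path argument. The response fields shown are a small assumed subset of what the API returns.

    package geoclient

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // geoInfo holds an assumed subset of the JSON response fields.
    type geoInfo struct {
        IP          string  `json:"ip"`
        CountryName string  `json:"country_name"`
        City        string  `json:"city"`
        TimeZone    string  `json:"time_zone"`
        Latitude    float64 `json:"latitude"`
        Longitude   float64 `json:"longitude"`
    }

    // lookup fetches geolocation information for the given IP or hostname.
    func lookup(ip string) (*geoInfo, error) {
        resp, err := http.Get("https://freegeoip.app/json/" + ip)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode == http.StatusForbidden {
            return nil, fmt.Errorf("quota exhausted (HTTP 403)")
        }
        var info geoInfo
        if err := json.NewDecoder(resp.Body).Decode(&info); err != nil {
            return nil, err
        }
        return &info, nil
    }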
Main entry point for Artifact Server. This is a REST API (no HTML frontend) implemented using Martini. Database: PostgreSQL. Storage: S3.
A time series database optimized for performance measurements. Let's say you measure application latency several times per second. Each sample is a JSON document: To persist measurements, send the following HTTP request: where: Obviously, you would rather use your favourite programming language to send HTTP requests. It's absolutely OK to create thousands of databases. This API returns a JSON document with aggregated characteristics (mean, percentiles, etc.): Please note that Python is used for demonstration purposes only. Finally, it is possible to generate heat map graphs in SVG format (use your browser to view): Each rectangle is a cluster of values. A darker color corresponds to a denser population. The legend on the right side of the graph (the vertical bar) should help to understand the density. To list all available databases, use the following request: To list all metrics, use a request similar to: Only bulk queries are supported, but even they are not recommended. To get the list of samples, use a request similar to: The output is a JSON document with all timestamps and values: The first value in the nested list is the timestamp (the number of milliseconds elapsed since January 1, 1970 UTC). The second value is the stored measurement (integer or float).
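As a sketch of what a client call might look like in Go (rather than Python or curl), the example below POSTs one sample as JSON. The endpoint path, database name, metric name, and JSON field names are hypothetical placeholders; the real request schema is described above.

    package tsdbclient

    import (
        "bytes"
        "encoding/json"
        "net/http"
        "time"
    )

    // storeSample persists one latency measurement. "mydb" and "latency" are
    // placeholder database and metric names.
    func storeSample(latencyMs float64) error {
        body, err := json.Marshal(map[string]interface{}{
            "ts":    time.Now().UnixNano() / int64(time.Millisecond), // milliseconds since the Unix epoch
            "value": latencyMs,
        })
        if err != nil {
            return err
        }
        resp, err := http.Post("http://localhost:8080/mydb/latency", "application/json", bytes.NewReader(body))
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }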
Package sqlite provides a Go interface to SQLite 3. The semantics of this package are deliberately close to the SQLite3 C API, so it is helpful to be familiar with http://www.sqlite.org/c3ref/intro.html. An SQLite connection is represented by a *sqlite.Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (using Open to create a *sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database. This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache: https://www.sqlite.org/sharedcache.html The implementation automatically handles shared cache locking, see the documentation on Stmt.Step for details. The optional SQLite3 extensions compiled in are: FTS5, RTree, JSON1, Session, GeoPoly. This is not a database/sql driver. Statements are prepared with the Prepare and PrepareTransient methods. When using Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can write: After all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement. The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or more commonly, a large number of moderate-sized blobs concurrently.) To write a blob, first use an INSERT statement to set the size of the blob and assign a rowid: Use BindZeroBlob or SetZeroBlob to set the size of myblob. Then you can open the blob with: Every connection can have a done channel associated with it using the SetInterrupt method. This is typically the channel returned by a context.Context Done method. For example, a timeout can be associated with a connection session: As database connections are long-lived, the SetInterrupt method can be called multiple times to reset the associated lifetime. When using pools, the shorthand for associating a context with a connection is: SQLite transactions have to be managed manually with this package by directly calling BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE / ROLLBACK. The sqlitex package has a Savepoint function that helps automate this. Using a Pool to execute SQL in a concurrent HTTP handler. For helper functions that make some kinds of statements easier to write, see the sqlitex package.
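As a rough sketch of the Prepare pattern described above, the function below takes a *sqlite.Conn (typically borrowed from a sqlitex.Pool), prepares a cached statement, binds a parameter, and steps through the result. The import path, the users table, and the exact helper signatures are assumptions and may differ between versions of the package.

    package queries

    import (
        "crawshaw.io/sqlite" // assumed import path
    )

    // userEmail looks up one row using a cached prepared statement. Because
    // Prepare keys statements by their query string, repeated calls on a
    // warmed-up connection reuse the same *sqlite.Stmt.
    func userEmail(conn *sqlite.Conn, id int64) (string, error) {
        stmt, err := conn.Prepare("SELECT email FROM users WHERE id = ?;")
        if err != nil {
            return "", err
        }
        defer stmt.Reset() // leave the cached statement ready for reuse

        stmt.BindInt64(1, id) // bind parameters are 1-based
        hasRow, err := stmt.Step()
        if err != nil {
            return "", err
        }
        if !hasRow {
            return "", nil // no such user
        }
        return stmt.ColumnText(0), nil
    }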