Package genji implements a document-oriented, embedded SQL database. Genji supports various engines that write data on disk, like BoltDB or Badger, as well as in memory.
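A minimal sketch of opening a database and running statements; the import path and the Open/Exec calls reflect the genji API as commonly used and should be treated as assumptions:

	package main

	import (
		"log"

		"github.com/genjidb/genji" // assumed import path
	)

	func main() {
		// Open an in-memory database; passing a file path instead selects an
		// on-disk engine.
		db, err := genji.Open(":memory:")
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// Create a table and insert a document using plain SQL.
		if err := db.Exec("CREATE TABLE users"); err != nil {
			log.Fatal(err)
		}
		if err := db.Exec("INSERT INTO users (name, age) VALUES (?, ?)", "jo", 33); err != nil {
			log.Fatal(err)
		}
	}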
Package pglock provides a simple utility for using PostgreSQL to manage distributed locks. In order to use this package, the client must create a table in the database; the client provides a convenience method for creating that table (CreateTable). Basic usage is sketched below; pglock.Client.Do can be used for long-running processes. This package is covered by this SLA: https://github.com/cirello-io/public/blob/master/SLA.md
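A hedged sketch of the basic flow, assuming the cirello.io/pglock import path and the New/CreateTable/Do methods named above:

	package main

	import (
		"context"
		"database/sql"
		"log"

		"cirello.io/pglock" // assumed import path
		_ "github.com/lib/pq"
	)

	func main() {
		db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
		if err != nil {
			log.Fatal(err)
		}

		c, err := pglock.New(db)
		if err != nil {
			log.Fatal(err)
		}
		// Convenience method that creates the lock table if it is missing.
		if err := c.CreateTable(); err != nil {
			log.Fatal(err)
		}

		// Do keeps the lock held while the callback runs, which suits
		// long-running processes.
		err = c.Do(context.Background(), "nightly-job", func(ctx context.Context, l *pglock.Lock) error {
			// do the work while holding the lock
			return nil
		})
		if err != nil {
			log.Fatal(err)
		}
	}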
Package pi provides the API client, operations, and parameter types for AWS Performance Insights. Amazon RDS Performance Insights enables you to monitor and explore different dimensions of database load based on data captured from a running DB instance. The guide provides detailed information about Performance Insights data types, parameters and errors. When Performance Insights is enabled, the Amazon RDS Performance Insights API provides visibility into the performance of your DB instance. Amazon CloudWatch provides the authoritative source for Amazon Web Services service-vended monitoring metrics. Performance Insights offers a domain-specific view of DB load. DB load is measured as average active sessions. Performance Insights provides the data to API consumers as a two-dimensional time-series dataset. The time dimension provides DB load data for each time point in the queried time range. Each time point decomposes overall load in relation to the requested dimensions, measured at that time point. Examples include SQL, Wait event, User, and Host. To learn more about Performance Insights and Amazon Aurora DB instances, go to the Amazon Aurora User Guide. To learn more about Performance Insights and Amazon RDS DB instances, go to the Amazon RDS User Guide. To learn more about Performance Insights and Amazon DocumentDB clusters, go to the Amazon DocumentDB Developer Guide.
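A hedged AWS SDK for Go v2 sketch of querying DB load (average active sessions); the identifier and time range are placeholders, and the output field names are assumptions:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/pi"
		"github.com/aws/aws-sdk-go-v2/service/pi/types"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := pi.NewFromConfig(cfg)

		// Ask for average active sessions (db.load.avg) over the last hour.
		out, err := client.GetResourceMetrics(ctx, &pi.GetResourceMetricsInput{
			ServiceType: types.ServiceTypeRds,
			Identifier:  aws.String("db-EXAMPLERESOURCEID"), // DbiResourceId of the DB instance
			StartTime:   aws.Time(time.Now().Add(-time.Hour)),
			EndTime:     aws.Time(time.Now()),
			MetricQueries: []types.MetricQuery{
				{Metric: aws.String("db.load.avg")},
			},
		})
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range out.MetricList {
			fmt.Println(aws.ToString(m.Key.Metric), len(m.DataPoints))
		}
	}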
Package modl provides a non-declarative database modelling layer to ease the use of frequently repeated patterns in database-backed applications and to centralize database use to ease profiling and reporting. It is a fork of the wonderful github.com/coopernurse/gorp package, rewritten to use github.com/jmoiron/sqlx as a base. Use of this source code is governed by an MIT-style license that can be found in the LICENSE file.
Package docdbelastic provides the API client, operations, and parameter types for Amazon DocumentDB Elastic Clusters. Amazon DocumentDB elastic clusters support workloads with millions of reads/writes per second and petabytes of storage capacity. They also simplify how developers interact with Amazon DocumentDB by eliminating the need to choose, manage, or upgrade instances. Amazon DocumentDB elastic clusters were created to: provide a solution for customers looking for a database that offers virtually limitless scale with rich query capabilities and MongoDB API compatibility; give customers higher connection limits and reduce downtime from patching; and continue investing in a cloud-native, elastic, and class-leading architecture for JSON workloads.
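A hedged sketch of creating the client and listing clusters with the AWS SDK for Go v2; the output field names are assumptions:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/docdbelastic"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := docdbelastic.NewFromConfig(cfg)

		out, err := client.ListClusters(ctx, &docdbelastic.ListClustersInput{})
		if err != nil {
			log.Fatal(err)
		}
		// Clusters and ClusterName are assumed field names.
		for _, c := range out.Clusters {
			fmt.Println(aws.ToString(c.ClusterName))
		}
	}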
Package dbx provides a set of DB-agnostic and easy-to-use query building methods for relational databases. Its examples cover CRUD operations, populating DB data in different ways, using the query builder to build DB queries, and using the query builder in transactions.
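A brief sketch of the query-builder style, assuming the github.com/go-ozzo/ozzo-dbx import path and a MySQL driver:

	package main

	import (
		"fmt"
		"log"

		dbx "github.com/go-ozzo/ozzo-dbx" // assumed import path
		_ "github.com/go-sql-driver/mysql"
	)

	type User struct {
		ID   int    `db:"id"`
		Name string `db:"name"`
	}

	func main() {
		db, err := dbx.Open("mysql", "user:pass@/example")
		if err != nil {
			log.Fatal(err)
		}

		// Build and run a SELECT using the query builder.
		var users []User
		err = db.Select("id", "name").
			From("users").
			Where(dbx.HashExp{"status": "active"}).
			All(&users)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(users)
	}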
Package ccgo translates C to Go source code. This v3 package is obsolete; please use the current ccgo/v4.

Invocation

2021-12-23: v3.13.0 adds clang support.

To compile the resulting Go programs, the package modernc.org/libc has to be installed.

CCGO_CPP selects which command is used by the C front end to obtain the target configuration. Defaults to `cpp`. Ignored when --load-config <path> is used.

TARGET_GOARCH selects the GOARCH of the resulting Go code. Defaults to $GOARCH, or runtime.GOARCH if $GOARCH is not set. Ignored when --load-config <path> is used.

TARGET_GOOS selects the GOOS of the resulting Go code. Defaults to $GOOS, or runtime.GOOS if $GOOS is not set. Ignored when --load-config <path> is used.

To compile for the host, invoke something like

To cross compile, set TARGET_GOARCH and/or TARGET_GOOS, not GOARCH/GOOS. Cross compiling depends on the availability of C stdlib headers for the target platform as well as on the set of predefined macros for the target platform. For example, to cross compile on a Linux host, targeting windows/amd64, it's necessary to have mingw64 installed in $PATH. Then invoke something like

Only files with extension .c, .h or .json are recognized as input files. A .json file is interpreted as a compile database. All other command line arguments following the .json file are interpreted as items that should be found in the database and included in the output file. Each item should be an object file (.o), a static archive (.a), or a command (no extension).

Command line options requiring an argument:

-Dfoo  Equals `#define foo 1`.

-Dfoo=bar  Equals `#define foo bar`.

-Ipath  Add path to the list of include file search paths. The option is a capital letter I (India), not a lowercase letter l (Lima).

-limport-path  The package at <import-path> must have been produced without using the -nocapi option, i.e. the package must have a proper capi_$GOOS_$GOARCH.go file. The option is a lowercase letter l (Lima), not a capital letter I (India).

-Ufoo  Equals `#undef foo`.

-compiledb name  When this option appears anywhere, most preceding options are ignored and all following command line arguments are interpreted as a command with arguments that will be executed to produce the compilation database. For example: This will execute `make -DFOO -w` and attempt to extract the compile and archive commands. Only POSIX operating systems are supported. The supported build system must output information about entering directories that is compatible with GNU make. The only compilers supported are `gcc` and `clang`. The only archiver supported is `ar`. Format specification: https://clang.llvm.org/docs/JSONCompilationDatabase.html Note: This option also produces information about libraries created with `ar cr` and includes it in the json file, which goes beyond the specification.

-crt-import-path path  Unless disabled by the -nostdlib option, every produced Go file imports the C runtime library. Default is `modernc.org/libc`.

-export-defines ""  Export C numeric/string defines as Go constants by capitalizing the first letter of the define's name.

-export-defines prefix  Export C numeric/string defines as Go constants by prefixing the define's name with `prefix`. Name conflicts are resolved by adding a numeric suffix.

-export-enums ""  Export C enum constants as Go constants by capitalizing the first letter of the enum constant name.

-export-enums prefix  Export C enum constants as Go constants by prefixing the enum constant name with `prefix`. Name conflicts are resolved by adding a numeric suffix.
-export-externs ""  Export C extern definitions as Go definitions by capitalizing the first letter of the definition name.

-export-externs prefix  Export C extern definitions as Go definitions by prefixing the definition name with `prefix`. Name conflicts are resolved by adding a numeric suffix.

-export-fields ""  Export C struct fields as Go fields by capitalizing the first letter of the field name.

-export-fields prefix  Export C struct fields as Go fields by prefixing the field name with `prefix`. Name conflicts are resolved by adding a numeric suffix.

-export-structs ""  Export tagged C struct/union types as Go types by capitalizing the first letter of the tag name.

-export-structs prefix  Export tagged C struct/union types as Go types by prefixing the tag name with `prefix`. Name conflicts are resolved by adding a numeric suffix.

-export-typedefs ""  Export C typedefs as Go types by capitalizing the first letter of the typedef name.

-export-typedefs prefix  Export C typedefs as Go types by prefixing the typedef name with `prefix`. Name conflicts are resolved by adding a numeric suffix.

-static-locals-prefix prefix  Prefix C static local declarator names with 'prefix'.

-host-config-cmd command  This option has the same effect as setting `CCGO_CPP=command`.

-host-config-opts comma-separated-list  The separated items of the list are added to the invocation of the configuration command.

-pkgname name  Set the resulting Go package name to 'name'. Defaults to `main`.

-script filename  Ccgo does not yet have a concept of object files. All C files that are needed for producing the resulting Go file have to be compiled together and "linked" in memory. There are some problems with this approach; one of them is the situation when foo.c has to be compiled using, for example, `-Dbar=42` and "linked" with baz.c that needs to be compiled with `-Dbar=314`, or when `bar` must not be defined at all for baz.c, etc. A script in a named file is a CSV file. It is opened like this (error handling omitted): The first field of every record in the CSV file is the directory to use. The remaining fields are the arguments of the ccgo command. This way different C files can be translated using different options. The CSV file may look something like:

-volatile comma-separated-list  The separated items of the list are added to the list of file scope extern variables that will be accessed atomically, as if their C declarator used the 'volatile' type specifier. Currently only C scalar types of size 4 and 8 bytes are supported. Other types/sizes will ignore both the volatile specifier and the -volatile option.

-save-config path  This option copies every header included during compilation or compile database generation to a file under the path argument. Additionally, the host configuration, i.e. predefined macros, include search paths, OS and architecture, is stored in path/config.json. When this option is used, no Go code is generated, meaning no link phase occurs and thus the memory consumption should stay low. Passing an empty string as an argument of -save-config is the same as if the option is not present at all. Possibly useful when the option set is generated in code. This option is ignored when -compiledb <path> is used.

--load-config path  Note that this option must have the double dash prefix to distinguish it from -lfoo, the [traditional] short form of `-l foo`. This option configures the compiler using path/config.json. The include paths are adjusted to be relative to path.
For example: Assume that on machine A the default C preprocessor reports a system include search path "/usr/include". Running ccgo on A with -save-config /tmp/foo to compile foo.c that #includes <stdlib.h>, which is found in /usr/include/stdlib.h on the host, results in

Assume /tmp/foo from machine A will be recursively copied to machine B, which may run a different operating system and/or architecture. Let the copy be, for example, in /tmp/bar. Using --load-config /tmp/bar will instruct ccgo to configure its preprocessor with a system include path /tmp/bar/usr/include and thus use the original machine A stdlib.h found there. When --load-config is used, no host configuration from a machine B cross C preprocessor/compiler is needed to transpile the foo.c source on machine B as if the compiler were running on machine A.

This mechanism is particularly useful for transpiling big projects for 32 bit architectures. There, the lack of an object format in ccgo, and thus linking everything in RAM, may need more memory than the system can handle. The way around this is possibly to run something like

on machine A, transfer path/* to machine B and run the link phase there with e.g.

Note that the C sources for the project must be in the same path on both machines because the compile database stores absolute paths. It might be convenient to put the sources in path/src and the config in path/config, for example, and transfer the [archive of] path/ to the same directory on the second machine. That also solves the issue when ./configure generates files and the result differs per operating system or architecture.

Passing an empty string as an argument of --load-config is the same as if the option is not present at all. Possibly useful when the option set is generated in code.

These command line options don't take arguments:

-E  When this option is present the compiler does not produce any Go files and instead prints the preprocessor output to stdout.

-all-errors  Normally only the first 10 or so errors are shown. With this option the compiler will show all errors.

-header  Using this option suppresses producing any function definitions. This is possibly useful for producing Go files from C header files. To include function signatures when compiling headers with -header, add -func-sig.

-func-sig  Add this option to include function signatures when compiling headers (using -header).

-nostdinc  This option disables the default C include search paths.

-nostdlib  This option disables importing of the runtime library by the resulting Go code.

-trace-pinning  This option will print the positions and names of local declarators that are being pinned.

-version  Ignore all other options, print version and exit.

-verbose-compiledb  Enable verbose output when -compiledb is present.

-ignore-undefined  This option tells the linker not to insist on finding definitions for declarators that are not implicitly declared and used, but not defined. This might be useful when the intent is to define the missing functions in Go manually. Name conflict resolution for such declarator names may or may not be applied.

-ignore-unsupported-alignment  This option tells the compiler not to complain about alignments that Go cannot support.

-trace-included-files  This option outputs the path names of all included files. This option is ignored when -compiledb <path> is used.

There may exist other options not listed above. Those should be considered temporary and/or unsupported and may be removed without notice.
Alternatively, they may eventually get promoted to "documented" options.
Package badger implements an embeddable, simple and fast key-value database, written in pure Go. It is designed to be highly performant for both reads and writes simultaneously. Badger uses Multi-Version Concurrency Control (MVCC), and supports transactions. It runs transactions concurrently, with serializable snapshot isolation guarantees. Badger uses an LSM tree along with a value log to separate keys from values, hence reducing both write amplification and the size of the LSM tree. This allows LSM tree to be served entirely from RAM, while the values are served from SSD. Badger has the following main types: DB, Txn, Item and Iterator. DB contains keys that are associated with values. It must be opened with the appropriate options before it can be accessed. All operations happen inside a Txn. Txn represents a transaction, which can be read-only or read-write. Read-only transactions can read values for a given key (which are returned inside an Item), or iterate over a set of key-value pairs using an Iterator (which are returned as Item type values as well). Read-write transactions can also update and delete keys from the DB. See the examples for more usage details.
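A minimal sketch of the Txn-based API described above (recent Badger versions; the path and keys are illustrative):

	package main

	import (
		"fmt"
		"log"

		badger "github.com/dgraph-io/badger/v4"
	)

	func main() {
		// Open the DB with default options; all reads and writes go through Txns.
		db, err := badger.Open(badger.DefaultOptions("/tmp/badger"))
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// Read-write transaction.
		err = db.Update(func(txn *badger.Txn) error {
			return txn.Set([]byte("answer"), []byte("42"))
		})
		if err != nil {
			log.Fatal(err)
		}

		// Read-only transaction; values are returned inside an Item.
		err = db.View(func(txn *badger.Txn) error {
			item, err := txn.Get([]byte("answer"))
			if err != nil {
				return err
			}
			return item.Value(func(val []byte) error {
				fmt.Printf("answer = %s\n", val)
				return nil
			})
		})
		if err != nil {
			log.Fatal(err)
		}
	}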
Package xorm is a simple and powerful ORM for Go. Make sure you have installed Go 1.11+ and then:

First, create an engine for a database. Method NewEngine's parameters are the same as sql.Open's and depend on the driver's implementation. Generally, one engine per application is enough, so you can set it as a package variable.

XORM also supports raw SQL execution:

1. Query with a SQL string; the returned result is []map[string][]byte
2. Execute a SQL string; the returned result is a sql.Result

There are 8 major ORM methods, plus many helpful methods, for operating on the database:

1. Insert one or multiple records into the database
2. Query one record or one variable from the database
3. Query multiple records from the database
4. Query multiple records and handle them record by record; there are two methods for this, Iterate and Rows
5. Update one or more records
6. Delete one or more records; Delete MUST have a condition
7. Count records
8. Sum records

The above 8 methods can be combined with the chainable condition methods below; note that the 8 methods above must come last in the chain:

1. ID, In
2. Where, And, Or
3. OrderBy, Asc, Desc
4. Limit, Top
5. SQL, which lets you use custom SQL
6. Cols, Omit, Distinct
7. Join, GroupBy, Having

For more usage, please visit http://github.com/xormsharp/xorm/docs
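A short sketch of creating an engine and using a few of the ORM methods; the import path (xorm.io/xorm), DSN, and struct are illustrative assumptions:

	package main

	import (
		"fmt"
		"log"

		_ "github.com/go-sql-driver/mysql"
		"xorm.io/xorm" // assumed import path
	)

	type User struct {
		Id   int64
		Name string
	}

	func main() {
		// NewEngine takes the same parameters as sql.Open.
		engine, err := xorm.NewEngine("mysql", "user:pass@/example?charset=utf8")
		if err != nil {
			log.Fatal(err)
		}

		// Insert one record.
		if _, err := engine.Insert(&User{Name: "jo"}); err != nil {
			log.Fatal(err)
		}

		// Query one record, using a chainable condition method (Where).
		var u User
		has, err := engine.Where("name = ?", "jo").Get(&u)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(has, u)

		// Query multiple records.
		var users []User
		if err := engine.Find(&users); err != nil {
			log.Fatal(err)
		}
		fmt.Println(len(users))
	}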
Package db (or upper-db) provides a common interface to work with a variety of data sources using adapters that wrap mature database drivers. Install upper-db, then see more usage examples and documentation for users at https://upper.io/db.v3.
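A hedged sketch using the db.v3 PostgreSQL adapter; connection settings are illustrative:

	package main

	import (
		"fmt"
		"log"

		db "upper.io/db.v3"
		"upper.io/db.v3/postgresql"
	)

	type User struct {
		ID   int    `db:"id"`
		Name string `db:"name"`
	}

	func main() {
		settings := postgresql.ConnectionURL{
			Host:     "localhost",
			Database: "example",
			User:     "user",
			Password: "pass",
		}

		sess, err := postgresql.Open(settings)
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()

		// Collections wrap tables; Find builds a query from conditions.
		var users []User
		err = sess.Collection("users").Find(db.Cond{"name": "jo"}).All(&users)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(users)
	}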
Package sqlz (pronounced "sequelize") is an un-opinionated, un-obtrusive SQL query builder for Go projects, based on github.com/jmoiron/sqlx. As opposed to other query builders, sqlz does not mean to bridge the gap between different SQL servers and implementations by providing a unified interface. Instead, it aims to support an extended SQL syntax that may be implementation-specific. For example, if you wish to use PostgreSQL-specific features such as JSON operators and upsert statements, sqlz means to support these without caring if the underlying database backend really is PostgreSQL. In other words, sqlz builds whatever queries you want it to build. sqlz is easy to integrate into existing code, as it does not require you to create your database connections through the sqlz API; in fact, it doesn't supply one. You can either use your existing `*sql.DB` connection or an `*sqlx.DB` connection, so you can start writing new queries with sqlz without having to modify any existing code. sqlz leverages sqlx for easy loading of query results. Please make sure you are familiar with how sqlx works in order to understand how row scanning is performed. You may need to add `db` struct tags to your Go structures. sqlz provides a comfortable API for running queries in a transaction, and will automatically commit or rollback the transaction as necessary.
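A rough sketch of wrapping an existing *sql.DB, assuming sqlz exposes New, Select, Eq, and GetRow roughly as shown; treat the names and the import path as assumptions rather than the authoritative API:

	package main

	import (
		"database/sql"
		"fmt"
		"log"

		"github.com/ido50/sqlz" // assumed import path
		_ "github.com/lib/pq"
	)

	type User struct {
		ID   int64  `db:"id"`
		Name string `db:"name"`
	}

	func main() {
		sqlDB, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
		if err != nil {
			log.Fatal(err)
		}

		// Wrap the existing connection; sqlz does not manage connections itself.
		dbz := sqlz.New(sqlDB, "postgres")

		var u User
		err = dbz.
			Select("id", "name").
			From("users").
			Where(sqlz.Eq("id", 1)).
			GetRow(&u)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(u)
	}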
Package mempool provides a policy-enforced pool of unmined Decred transactions.

A key responsibility of the Decred network is mining transactions – regular transactions and stake transactions – into blocks. In order to facilitate this, the mining process relies on having a readily-available source of transactions to include in a block that is being solved. At a high level, this package satisfies that requirement by providing an in-memory pool of fully validated transactions that can also optionally be further filtered based upon a configurable policy.

The Policy configuration options have flags that control whether or not "standard" transactions and old votes are accepted into the mempool. In essence, a "standard" transaction is one that satisfies a fairly strict set of requirements that are largely intended to help provide fair use of the system to all users. It is important to note that what is considered a "standard" transaction changes over time as policy and consensus rules evolve. For some insight, at the time of this writing, an example of _some_ of the criteria that are required for a transaction to be considered standard are that it is of the most-recently supported version, finalized, does not exceed a specific size, and only consists of specific script forms.

Since this package does not deal with other Decred specifics such as network communication and transaction relay, it returns a list of transactions that were accepted, which gives the caller a high level of flexibility in how they want to proceed. Typically, this will involve things such as relaying the transactions to other peers on the network and notifying the mining process that new transactions are available.

This package has intentionally been designed so it can be used as a standalone package for any projects needing the ability to create an in-memory pool of Decred transactions that are not only valid by consensus rules, but also adhere to a configurable policy.

Feature Overview

The following is a quick overview of the major features. It is not intended to be an exhaustive list.

- Maintain a pool of fully validated transactions
- Stake transaction support (ticket purchases, votes and revocations)
- Orphan transaction support (transactions that spend from unknown outputs)
- Configurable transaction acceptance policy
- Additional metadata tracking for each transaction
- Manual control of transaction removal

Errors returned by this package are either the raw errors provided by underlying calls or of type mempool.RuleError. Since there are two classes of rules (mempool acceptance rules and blockchain (consensus) acceptance rules), the mempool.RuleError type contains a single Err field which will, in turn, either be a mempool.TxRuleError or a blockchain.RuleError. The former indicates a violation of mempool acceptance rules while the latter indicates a violation of consensus acceptance rules. This allows the caller to easily differentiate between unexpected errors, such as database errors, and errors due to rule violations through type assertions. In addition, callers can programmatically determine the specific rule violation by type asserting the Err field to one of the aforementioned types and examining their underlying ErrorCode field.
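A small sketch of the error-handling pattern described above; it relies only on the RuleError/TxRuleError/ErrorCode structure stated here, and the import paths are assumptions that vary across dcrd releases:

	package main

	import (
		"log"

		"github.com/decred/dcrd/blockchain" // assumed import paths; versions vary across dcrd releases
		"github.com/decred/dcrd/mempool"
	)

	// handleMempoolErr distinguishes rule violations from unexpected errors after
	// submitting a transaction to the pool (e.g. via its ProcessTransaction method).
	func handleMempoolErr(err error) {
		if err == nil {
			return
		}
		if rerr, ok := err.(mempool.RuleError); ok {
			switch e := rerr.Err.(type) {
			case mempool.TxRuleError:
				// Violation of mempool-only acceptance rules.
				log.Printf("mempool rule violation: %v (code %v)", e, e.ErrorCode)
			case blockchain.RuleError:
				// Violation of consensus acceptance rules.
				log.Printf("consensus rule violation: %v (code %v)", e, e.ErrorCode)
			default:
				log.Printf("rule violation: %v", rerr.Err)
			}
			return
		}
		// Unexpected error, e.g. a database failure.
		log.Printf("unexpected error: %v", err)
	}

	func main() {}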
Package keyspaces provides the API client, operations, and parameter types for Amazon Keyspaces. Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database service. Amazon Keyspaces makes it easy to migrate, run, and scale Cassandra workloads in the Amazon Web Services Cloud. With just a few clicks on the Amazon Web Services Management Console or a few lines of code, you can create keyspaces and tables in Amazon Keyspaces, without deploying any infrastructure or installing software. In addition to supporting Cassandra Query Language (CQL) requests via open-source Cassandra drivers, Amazon Keyspaces supports data definition language (DDL) operations to manage keyspaces and tables using the Amazon Web Services SDK and CLI, as well as infrastructure as code (IaC) services and tools such as CloudFormation and Terraform. This API reference describes the supported DDL operations in detail. For the list of all supported CQL APIs, see Supported Cassandra APIs, operations, and data types in Amazon Keyspaces in the Amazon Keyspaces Developer Guide. To learn how Amazon Keyspaces API actions are recorded with CloudTrail, see Amazon Keyspaces information in CloudTrail in the Amazon Keyspaces Developer Guide. For more information about Amazon Web Services APIs, for example how to implement retry logic or how to sign Amazon Web Services API requests, see Amazon Web Services APIs in the General Reference.
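A hedged AWS SDK for Go v2 sketch of creating the client and issuing a control-plane call; the output field names are assumptions:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/aws"
		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/keyspaces"
	)

	func main() {
		ctx := context.Background()
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := keyspaces.NewFromConfig(cfg)

		// List keyspaces visible to the caller (a DDL-style control-plane call).
		out, err := client.ListKeyspaces(ctx, &keyspaces.ListKeyspacesInput{})
		if err != nil {
			log.Fatal(err)
		}
		// Keyspaces and KeyspaceName are assumed field names.
		for _, ks := range out.Keyspaces {
			fmt.Println(aws.ToString(ks.KeyspaceName))
		}
	}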
Package geoip2 provides an easy-to-use API for the MaxMind GeoIP2 and GeoLite2 databases; this package does not support GeoIP Legacy databases. The structs provided by this package match the internal structure of the data in the MaxMind databases. See github.com/oschwald/maxminddb-golang for more advanced use cases. A basic example of using the API is sketched below; use of the Country method is analogous to that of the City method.
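The basic sketch, assuming the github.com/oschwald/geoip2-golang import path and a GeoLite2 City database file on disk:

	package main

	import (
		"fmt"
		"log"
		"net"

		"github.com/oschwald/geoip2-golang"
	)

	func main() {
		db, err := geoip2.Open("GeoLite2-City.mmdb")
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		ip := net.ParseIP("81.2.69.142")
		record, err := db.City(ip)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(record.City.Names["en"])
		fmt.Println(record.Country.IsoCode)
		fmt.Println(record.Location.Latitude, record.Location.Longitude)
	}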
Package restlayer is an API framework heavily inspired by the excellent Python Eve (http://python-eve.org/). It helps you create a comprehensive, customizable, and secure REST (graph) API on top of pluggable backend storages with no boilerplate code, so you can focus on your business logic. Implemented as a net/http middleware, it plays well with other middleware like CORS (http://github.com/rs/cors) and is net/context aware thanks to xhandler. REST Layer is an opinionated framework. Unlike many API frameworks, you don't directly control the routing and you don't have to write handlers. You just define resources and sub-resources with a schema, and the framework automatically figures out what routes to generate behind the scenes. You don't have to take care of the HTTP headers and response, JSON encoding, etc. either: REST Layer handles HTTP conditional requests, caching, and integrity checking for you. A powerful and extensible validation engine makes sure that data comes pre-validated to your custom storage handlers. Generic resource handlers for MongoDB (http://github.com/clarify/rested/storers/mongo) and other databases are also available, so you have little to no code to write to make the whole system work. Moreover, REST Layer lets you create a graph API by linking resources between them. Thanks to its advanced field selection syntax, you can gather resources and their dependencies in a single request, saving you from costly network roundtrips. REST Layer is composed of several sub-packages; see https://github.com/clarify/rested/blob/master/README.md for full REST Layer documentation.
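A hedged sketch of defining a resource, binding it to an index, and serving it; the sub-package import paths, the in-memory storer, and the schema fields are assumptions based on the upstream rest-layer project:

	package main

	import (
		"log"
		"net/http"

		"github.com/clarify/rested/resource" // assumed sub-package paths
		"github.com/clarify/rested/resource/testing/mem"
		"github.com/clarify/rested/rest"
		"github.com/clarify/rested/schema"
	)

	// user describes the resource; routes and handlers are generated from it.
	var user = schema.Schema{
		Fields: schema.Fields{
			"id":   {Sortable: true, Filterable: true},
			"name": {Required: true, Filterable: true, Validator: &schema.String{}},
		},
	}

	func main() {
		index := resource.NewIndex()
		index.Bind("users", user, mem.NewHandler(), resource.DefaultConf)

		api, err := rest.NewHandler(index)
		if err != nil {
			log.Fatal(err)
		}
		http.Handle("/", api)
		log.Fatal(http.ListenAndServe(":8080", nil))
	}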
Package sessions provides sessions support for net/http and valyala/fasthttp, with auto-GC and the ability to register an unlimited number of databases to load and update/save the sessions in an external server or an external (SQL or NoSQL) database.

Usage with net/http:

	// init a new sessions manager (if you use only one web framework inside your app,
	// you can use the package-level functions like sessions.Start/sessions.Destroy)
	manager := sessions.New(sessions.Config{})

	// start a session for a particular client
	manager.Start(http.ResponseWriter, *http.Request)

	// destroy a session from the server and client
	manager.Destroy(http.ResponseWriter, *http.Request)

Usage with valyala/fasthttp:

	// init a new sessions manager (if you use only one web framework inside your app,
	// you can use the package-level functions like sessions.Start/sessions.Destroy)
	manager := sessions.New(sessions.Config{})

	// start a session for a particular client
	manager.StartFasthttp(*fasthttp.RequestCtx)

	// destroy a session from the server and client
	manager.DestroyFasthttp(*fasthttp.Request)

Note that you can now use both fasthttp and net/http within the same sessions manager (.New) instance, so you can share sessions between a net/http app and a valyala/fasthttp app.
Package sqlite is a database/sql driver using a CGo-free port of the C SQLite3 library.

SQLite is an in-process implementation of a self-contained, serverless, zero-configuration, transactional SQL database engine.

This project is sponsored by Schleibinger Geräte Teubert u. Greim GmbH by allowing one of the maintainers to work on it also in office hours.

These combinations of GOOS and GOARCH are currently supported

Builder results available at: https://modern-c.appspot.com/-/builder/?importpath=modernc.org%2fsqlite

Numbers for the pure Go version were produced by

Numbers for the pure C version were produced by

The results are from Go version 1.20.4 and GCC version 10.2.1 on a Linux/amd64 machine, CPU: AMD Ryzen 9 3900X 12-Core Processor × 24, 128GB RAM. Shown are the best of 3 runs. This particular test executes 16.1% faster in the C version.

Changelog:

2023-08-03 v1.25.0: Enable SQLITE_ENABLE_DBSTAT_VTAB.

2023-07-11 v1.24.0: Add (*conn).{Serialize,Deserialize,NewBackup,NewRestore} methods, add Backup type.

2023-06-01 v1.23.0: Allow registering aggregate functions.

2023-04-22 v1.22.0: Support linux/s390x.

2023-02-23 v1.21.0: Upgrade to SQLite 3.41.0, release notes at https://sqlite.org/releaselog/3_41_0.html.

2022-11-28 v1.20.0: Support linux/ppc64le.

2022-09-16 v1.19.0: Support freebsd/arm64.

2022-07-26 v1.18.0: Adds support for Go fs.FS based SQLite virtual filesystems, see function New in modernc.org/sqlite/vfs and/or TestVFS in all_test.go.

2022-04-24 v1.17.0: Support windows/arm64.

2022-04-04 v1.16.0: Support scalar application defined functions written in Go.

2022-03-13 v1.15.0: Support linux/riscv64.

2021-11-13 v1.14.0: Support windows/amd64. This target had previously only experimental status because of a now resolved memory leak.

2021-09-07 v1.13.0: Support freebsd/amd64.

2021-06-23 v1.11.0: Upgrade to use SQLite 3.36.0, release notes at https://www.sqlite.org/releaselog/3_36_0.html.

2021-05-06 v1.10.6: Fixes a memory corruption issue (https://gitlab.com/cznic/sqlite/-/issues/53). Versions since v1.8.6 were affected and should be updated to v1.10.6.

2021-03-14 v1.10.0: Update to use SQLite 3.35.0, release notes at https://www.sqlite.org/releaselog/3_35_0.html.

2021-03-11 v1.9.0: Support darwin/arm64.

2021-01-08 v1.8.0: Support darwin/amd64.

2020-09-13 v1.7.0: Support linux/arm and linux/arm64.

2020-09-08 v1.6.0: Support linux/386.

2020-09-03 v1.5.0: This project is now completely CGo-free, including the Tcl tests.

2020-08-26 v1.4.0: First stable release for linux/amd64. The database/sql driver and its tests are CGo free. Tests of the translated sqlite3.c library still require CGo.

2020-07-26 v1.4.0-beta1: The project has reached beta status while supporting linux/amd64 only at the moment. The 'extraquick' Tcl testsuite reports and some memory leaks.

2019-12-28 v1.2.0-alpha.3: Third alpha fixes issue #19. It also bumps the minor version as the repository was wrongly already tagged with v1.1.0 before. Even though the tag was deleted, there are proxies that cached that tag. Thanks /u/garaktailor for detecting the problem and suggesting this solution.

2019-12-26 v1.1.0-alpha.2: Second alpha release adds support for accessing a database concurrently by multiple goroutines and/or processes. v1.1.0 is now considered feature-complete. The next planned release should be a beta with a proper test suite.

2019-12-18 v1.1.0-alpha.1: First alpha release using the new cc/v3, gocc, qbe toolchain. Some primitive tests pass on linux_{amd64,386}. Not yet safe for concurrent access by multiple goroutines. The next alpha release is planned to arrive before the end of this year.

2017-06-10: Windows/Intel no longer uses the VM (thanks Steffen Butzer).

2017-06-05: Linux/Intel no longer uses the VM (cznic/virtual).

To access an SQLite database do something like

A comma separated list of options can be passed to `go generate` via the environment variable GO_GENERATE. Some useful options include for example:

To create a debug/development version, issue for example:

Note: To run `go generate` you need to have modernc.org/ccgo/v3 installed.

This is an example of how to use the debug logs in modernc.org/libc when hunting a bug. The /tmp/libc.log file is created as requested. No useful messages there because none are enabled in libc. Let's try to enable Xwrite as an example. We need to tell the Go build system to use our local, patched/debug libc: And run the test again:

See https://sqlite.org/docs.html
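A minimal database/sql sketch; the blank import registers the driver under the name "sqlite":

	package main

	import (
		"database/sql"
		"fmt"
		"log"

		_ "modernc.org/sqlite" // registers the "sqlite" driver
	)

	func main() {
		db, err := sql.Open("sqlite", "file:example.db")
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS t(id INTEGER PRIMARY KEY, v TEXT)`); err != nil {
			log.Fatal(err)
		}
		if _, err := db.Exec(`INSERT INTO t(v) VALUES (?)`, "hello"); err != nil {
			log.Fatal(err)
		}

		var v string
		if err := db.QueryRow(`SELECT v FROM t WHERE id = ?`, 1).Scan(&v); err != nil {
			log.Fatal(err)
		}
		fmt.Println(v)
	}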
Package dbcleaner helps clean up a database's tables in unit tests. With the help of https://github.com/stretchr/testify/tree/master/suite, we can easily acquire the tables used in a test in SetupTest or SetupSuite, and clean up all data using TearDownTest or TearDownSuite.
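A hedged sketch of wiring dbcleaner into a testify suite; the import paths, engine constructor, and method names are assumptions about the dbcleaner API:

	package example_test

	import (
		"testing"

		"github.com/khaiql/dbcleaner"        // assumed import path
		"github.com/khaiql/dbcleaner/engine" // assumed import path
		"github.com/stretchr/testify/suite"
	)

	var cleaner = dbcleaner.New()

	type ExampleSuite struct {
		suite.Suite
	}

	func (s *ExampleSuite) SetupSuite() {
		// Point the cleaner at the test database.
		cleaner.SetEngine(engine.NewPostgresEngine("postgres://user:pass@localhost/test?sslmode=disable"))
	}

	func (s *ExampleSuite) SetupTest() {
		// Lock the tables this test uses.
		cleaner.Acquire("users", "orders")
	}

	func (s *ExampleSuite) TearDownTest() {
		// Truncate the acquired tables after each test.
		cleaner.Clean("users", "orders")
	}

	func TestExampleSuite(t *testing.T) {
		suite.Run(t, new(ExampleSuite))
	}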
Package rootcerts provides an embedded copy of the "Mozilla Included CA Certificate List" (https://wiki.mozilla.org/CA/Included_Certificates), more specifically the "PEM of Root Certificates in Mozilla's Root Store with the Websites (TLS/SSL) Trust Bit Enabled" (https://ccadb-public.secure.force.com/mozilla/IncludedRootsPEMTxt?TrustBitsInclude=Websites). The "Mozilla Included CA Certificate List" is maintained as part of the Common CA Database effort (https://www.ccadb.org/). If this package is imported anywhere in the program, then if the crypto/x509 package (https://golang.org/pkg/crypto/x509/) cannot find the system certificate pool, it will use this embedded information. Additionally, the usage of this embedded information can be forced by setting the environment variable `GO_ROOTCERTS_ENABLE=1` while running a program that includes this package. Importing this package will increase the size of a program by about 250 KB. This package should normally be imported by a program's main package, not by a library. Libraries normally shouldn't decide whether to include the "Mozilla Included CA Certificate List" in a program.
Package rlog provides a partial reimplementation of the log package in the standard library. This package provides simple logging with loglevels and an interface quite similar to the standard library. The following environment variables are supported: GO_RLOG_LEVEL GO_RLOG_JOURNAL In addition to the SetLogLevel() methods, the loglevel can be set globally via an environment variable. The recognized values are: CRIT, ERR, WARN, NOTICE, INFO, and DEBUG. These severity levels are defined in RFC 5424, Section 6.2.1. For your convenience, there is a package-level logger available, which can be accessed using the relevant methods. The package-level logger is there for programs that do not need several loggers; it aims to simplify this particular use case. If you need several loggers, please instantiate them as needed and do not use the package-level logger. Timestamps are not supported by design, since they are almost never needed: on the terminal, you can pipe the output through ts(1) from moreutils, and when using a logging service, timestamps are already handled. This avoids the annoying problem of doubled timestamps in the log database, such as: Aug 10 07:24:37 host service[pid]: TIMESTAMP MESSAGE.
kms is a package that provides key management system features for go-kms-wrapping Wrappers. The following domain terms are key to understanding the system and how to use it:

wrapper: all keys within the system are Wrappers from go-kms-wrapping.

root external wrapper: the external wrapper that serves as the root of trust for the kms system. Typically you'd get this root wrapper via go-kms-wrapping from a KMS provider. See the go-kms-wrapping docs for further details.

scope: a scope defines a rotational boundary for a set of keys. The system tracks only the scope identifier, which is used to find keys with a specific scope. **IMPORTANT**: You should define a FK from kms_root_key.scope_id with cascading deletes and updates to the PK of the table within your domain that tracks scopes. This FK will prevent orphaned kms keys. For example, you could assign organizations and projects scope IDs and then associate a set of keys with each org and project within your domain.

root key: the KEKs (keys to encrypt keys) of the system.

data key: the DEKs (keys to encrypt data) of the system; each must have a parent root key and a purpose.

purpose: each data key (DEK) must have one purpose. For example, you could define a purpose of client-secrets for a DEK that will be used for encrypt/decrypt wrapper operations on `client-secrets`.

You'll find the database schema within the migrations directory. Currently postgres and sqlite are supported. The implementation does make some use of triggers to ensure some of its data integrity. The migrations are intended to be incorporated into your existing go-migrate migrations. Feel free to change the migration file names, as long as they are applied in the same order as defined here. FYI, the migrations include a `kms_version` table which is used to ensure that the schema and module are compatible.
Package blacksmith is the package to create applications on top of Blacksmith. Blacksmith is a low-code platform offering a complete and consistent approach for self-managed data engineering. Blacksmith allows software engineers to write low-code ETL using the Go language. It also allows data engineers to write templated SQL for TLT and database migrations on top of one or multiple databases. Any team that is building, or thinking about building, a data engineering platform knows the tremendous amount of work needed to properly accomplish this mission. Think of Blacksmith as the central piece of your data engineering workflow, saving you months of customized, professional work. A new application can be generated using the Blacksmith CLI:
Structable makes a loose distinction between a Record (a description of the data to be stored) and a Recorder (the thing that does the storing). A Record is a simple annotated struct that describes the properties of an object. Structable provides the Recorder (an interface usually backed by a *DbRecorder). The Recorder is capable of doing the following: Structable is pragmatic in the sense that it allows ActiveRecord-like extension of the Record object to support business logic. A Record does not *have* to be a simple data-only struct. It can have methods -- even methods that operate on the database. Importantly, Structable does not do any relation management. There is no magic to convert structs, arrays, or maps to references to other tables. (If you want that, you may prefer GORM or GORP.) The preferred method of handling relations is to attach additional methods to the Record struct. Structable uses Squirrel for statement building, and you may also use Squirrel for working with your data. The following example is taken from the `example/users.go` file. The above pattern closely binds the Recorder to the Record. Essentially, in this usage Structable works like an ActiveRecord. It is also possible to emulate a DAO-type model and use the Recorder as a data access object and the Record as the data description object. An example of this method can be found in the `example/fence.go` code. The `stbl` tag is of the form: The field name is passed verbatim to the database. So `fieldName` will go to the database as `fieldName`. Structable is not at all opinionated about how you name your tables or fields. Some databases are, though, so you may need to be careful about your own naming conventions. `PRIMARY_KEY` tells Structable that this field is (one of the pieces of) the primary key. Aliases: 'PRIMARY KEY'. `AUTO_INCREMENT` tells Structable that this field is created by the database, and should never be assigned during an Insert(). Aliases: SERIAL, AUTO INCREMENT. There are things Structable doesn't do (by design); however, Squirrel can ease many of these tasks.
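A small sketch of an annotated Record using the stbl tag conventions described above (PRIMARY_KEY, AUTO_INCREMENT); the Recorder binding shown in the comment is an assumption about the structable API, not a verbatim excerpt from example/users.go:

	package example

	// User is a Record: a plain struct whose stbl tags describe how its fields
	// map to the users table.
	type User struct {
		ID    int64  `stbl:"id,PRIMARY_KEY,AUTO_INCREMENT"`
		Name  string `stbl:"name"`
		Email string `stbl:"email"`
	}

	// Binding the Record to a Recorder (assumed API; see example/users.go in the
	// repository for the canonical pattern):
	//
	//	u := &User{Name: "jo", Email: "jo@example.com"}
	//	rec := structable.New(db, "mysql").Bind("users", u)
	//	err := rec.Insert()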
Package schema provides access to database schema metadata, for database/sql drivers. For further information about current driver support status, see https://github.com/jimsmart/schema The schema package works alongside database/sql and its underlying driver to provide schema metadata. Both user permissions and the current database/schema affect table visibility. Use schema.ColumnTypes() to query column type metadata for a single table: To query table names and column type metadata for all tables, use schema.Tables(). See also https://golang.org/pkg/database/sql/#ColumnType Note: underlying support for column type metadata is driver implementation specific and somewhat variable. The same metadata can also be queried for views: To obtain a list of columns making up the primary key for a given table:
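A hedged sketch of the calls mentioned above; the exact signatures, and the PrimaryKey helper name, are assumptions:

	package main

	import (
		"database/sql"
		"fmt"
		"log"

		"github.com/jimsmart/schema"
		_ "github.com/lib/pq"
	)

	func main() {
		db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
		if err != nil {
			log.Fatal(err)
		}

		// Column type metadata for a single table (signature assumed).
		cols, err := schema.ColumnTypes(db, "", "users")
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range cols {
			fmt.Println(c.Name(), c.DatabaseTypeName())
		}

		// Primary key column names (helper name assumed).
		pk, err := schema.PrimaryKey(db, "", "users")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("primary key:", pk)
	}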
Package pgxmock is a mock library implementing the pgx connector. It has one and only one purpose: to simulate pgx driver behavior in tests, without needing a real database connection. It helps to maintain a correct **TDD** workflow. It requires (almost) no modifications to your source code in order to test and mock database operations. It supports concurrency and mocking of multiple databases. The library allows you to mock any pgx driver method's behavior.
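A hedged sketch of a test that mocks a query, assuming the NewPool/ExpectQuery/NewRows helpers found in recent pgxmock versions and the v2 import path:

	package example_test

	import (
		"context"
		"testing"

		"github.com/pashagolub/pgxmock/v2" // assumed import path/version
	)

	func TestGetUserName(t *testing.T) {
		mock, err := pgxmock.NewPool()
		if err != nil {
			t.Fatal(err)
		}
		defer mock.Close()

		rows := pgxmock.NewRows([]string{"name"}).AddRow("jo")
		mock.ExpectQuery("SELECT name FROM users").WillReturnRows(rows)

		// The code under test would receive mock in place of a real pgx pool.
		var name string
		err = mock.QueryRow(context.Background(), "SELECT name FROM users WHERE id = $1", 1).Scan(&name)
		if err != nil {
			t.Fatal(err)
		}
		if name != "jo" {
			t.Fatalf("got %q", name)
		}

		if err := mock.ExpectationsWereMet(); err != nil {
			t.Fatal(err)
		}
	}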
Qri is a distributed dataset version control tool. Bigger than a spreadsheet, smaller than a database, datasets are all around us. Use Qri to browse, download, create, fork, and publish datasets on a peer-to-peer network that works both on and offline. more info at: https://qri.io
Package cdb provides a native implementation of cdb, a constant key/value database with some very nice properties. For more information on cdb, see the original design doc at http://cr.yp.to/cdb.html.
Package unchained provides password hashers that are compatible with Django. These hashers can also be used to perform validation against legacy and shared Django databases. Django provides a flexible password storage system and uses PBKDF2 by default. The password format/representation is the same as the one used in Django: This library supports the Argon2, BCrypt, PBKDF2, MD5 and SHA1 algorithms.
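A rough sketch of validating a Django-style hash, assuming the import path and a CheckPassword(password, encoded) helper; the encoded string is illustrative, not a real hash:

	package main

	import (
		"fmt"
		"log"

		"github.com/alexandrevicenzi/unchained" // assumed import path
	)

	func main() {
		// Encoded value in Django's format: <algorithm>$<iterations>$<salt>$<hash>.
		encoded := "pbkdf2_sha256$260000$examplesalt$examplehashvalue"

		valid, err := unchained.CheckPassword("my-password", encoded)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("password valid:", valid)
	}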
Package passwordreset implements creation and verification of secure tokens useful for implementing a "reset forgotten password" feature in web applications. This package generates and verifies signed one-time tokens that can be embedded in a link sent to users when they initiate the password reset procedure. When a user changes their password, or when the expiry time passes, the token becomes invalid. Secure token format: where expiration time is the number of seconds since Unix epoch UTC indicating when this token must expire (4 bytes, big-endian, uint32), login is a byte string of arbitrary length (at least 1 byte, not null-terminated), and signature is 32 bytes of HMAC-SHA256(expiration_time || login, k), where k = HMAC-SHA256(expiration_time || login, userkey), where userkey = HMAC-SHA256(password value, secret key), where password value is any piece of information derived from the user's password, which will change once the user changes their password (for example, a hash of the password), and secret key is an application-specific secret key. The password value is used to make tokens one-time: once a user changes their password, the token which they used to do a reset becomes invalid. Usage example: Your application must have a strong secret key for password reset purposes. This key will be used to generate and verify password reset tokens. (If you already have a secret key, for example, for the authcookie package, it's better not to reuse it; just use a different one.) Create a function that will query your users database and return some password-related value for the given login. A password-related value means some value that will change once a user changes their password, for example: a password hash, a random salt used to generate it, or the time of password creation. This value, mixed with the app-specific secret key, will be used as a key for the password reset token, so it should be kept secret. When a user initiates a password reset (by entering their login, and maybe answering a secret question), generate a reset token: Send a link with this token to the user by email, for example: https://www.example.com/reset?token=Talo3mRjaGVzdITUAGOXYZwCMq7EtHfYH4ILcBgKaoWXDHTJOIlBUfcr Once the user clicks this link, read the token from it, then verify it by passing it to the VerifyToken function along with the getPasswordHash function and the app-specific secret key: If verification succeeds, allow changing the password for the returned login.
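A condensed sketch of the flow described above, assuming the NewToken/VerifyToken signatures of the commonly used passwordreset package:

	package main

	import (
		"fmt"
		"time"

		"github.com/dchest/passwordreset" // assumed import path
	)

	// getPasswordHash returns a password-related value for the login; it must
	// change whenever the user changes their password.
	func getPasswordHash(login string) ([]byte, error) {
		// Look the value up in your users database in a real application.
		return []byte("stored-password-hash-for-" + login), nil
	}

	func main() {
		secret := []byte("application-specific secret key")

		// Generate a token valid for 12 hours and email it to the user.
		pwdVal, _ := getPasswordHash("alice")
		token := passwordreset.NewToken("alice", 12*time.Hour, pwdVal, secret)
		fmt.Println("https://www.example.com/reset?token=" + token)

		// Later, verify the token from the reset link.
		login, err := passwordreset.VerifyToken(token, getPasswordHash, secret)
		if err != nil {
			fmt.Println("invalid or expired token:", err)
			return
		}
		fmt.Println("allow password change for", login)
	}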
Package edgedb is the official Go driver for EdgeDB. Additionally, github.com/edgedb/edgedb-go/cmd/edgeql-go is a code generator that generates Go functions from edgeql files. Typical client usage looks like this: We recommend using environment variables for connection parameters. See the client connection docs for more information. You may also connect to a database using a DSN: Or you can use Option fields. edgedb never returns underlying errors directly. If you are checking for things like context expiration, use errors.Is() or errors.As(). Most errors returned by the edgedb package will satisfy the edgedb.Error interface, which has methods for introspecting. The following list shows the marshal/unmarshal mapping between EdgeDB types and Go types: Note that EdgeDB's std::duration type is represented in int64 microseconds while Go's time.Duration type is int64 nanoseconds. It is incorrect to cast one directly to the other. Shape fields that are not required must use optional types for receiving query results. The edgedb.Optional struct can be embedded to make structs optional. Not all types listed above are valid query parameters. To pass a slice of scalar values, use array in your query. EdgeDB doesn't currently support using sets as parameters. Nested structures are also not directly allowed, but you can use json instead. By default EdgeDB will ignore embedded structs when marshaling/unmarshaling. To treat an embedded struct's fields as part of the parent struct's fields, tag the embedded struct with `edgedb:"$inline"`. Interfaces for user-defined marshalers/unmarshalers are documented in the internal/marshal package. Link properties are treated as fields in the linked-to struct, and the @ is omitted from the field's tag.
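A minimal sketch of connecting and running a query with a scalar argument; connection parameters are expected to come from the environment:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/edgedb/edgedb-go"
	)

	func main() {
		ctx := context.Background()

		// Connection parameters are read from the environment / project config.
		client, err := edgedb.CreateClient(ctx, edgedb.Options{})
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		var greeting string
		err = client.QuerySingle(ctx, "SELECT 'hello ' ++ <str>$0", &greeting, "world")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(greeting)
	}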
A plugin is defined by plugin.Plugin. Create the file <plugin-name>/plugin.go, then implement a plugin.PluginFunc that creates a plugin.Plugin and returns a pointer to it. Note: The Go files for your plugin (except main.go) should reside in the <plugin-name> folder. Examples: Create main.go in your plugin's root directory. Add a main function which is the entry point for your plugin. This function must call plugin.Serve to instantiate your plugin's gRPC server, and pass the plugin.PluginFunc that you just wrote. Examples: By convention, each table lives in a separate file named table_<table name>.go. Each table has a single table definition function that returns a pointer to a plugin.Table. The table definition includes the name and description of the table, a set of column definitions, and the functions to call in order to list the data for all the rows, or to get data for a single row. Every table MUST define a List and/or Get function. Examples: This is a plugin.HydrateFunc that calls an API and returns data for all rows of the table. To define it, set the property plugin.Plugin.ListConfig. This is a plugin.HydrateFunc that calls an API and returns data for one row of the table. If the API can return a single item keyed by id, you should implement Get so that queries can filter the data as cheaply as possible. To define it, set the property plugin.Plugin.GetConfig. Use plugin.Column to define columns. A column may be populated by a List or Get call. If a column requires data not provided by List or Get, it may define a plugin.HydrateFunc that makes an additional API call for each row. Add a hydrate function for a column by setting plugin.Column.Hydrate. Use transform functions to extract and/or reformat data returned by a hydrate function. A plugin.Logger is passed to the plugin via its context.Context. Messages are written to ~/.steampipe/logs, e.g. ~/.steampipe/logs/plugin-2022-01-01.log. Steampipe uses go-hclog which supports standard log levels: TRACE, DEBUG, INFO, WARN, ERROR. The default is WARN. Use the STEAMPIPE_LOG_LEVEL environment variable to set the level. Steampipe parallelizes hydrate functions as much as possible. Sometimes, however, one hydrate function requires the output from another. Use plugin.HydrateConfig to define the dependency. Use dynamic_tables when you cannot know a table's schema in advance, e.g. the CSV plugin. A user runs a query. Postgres parses the query and sends the parsed request to the Steampipe foreign data wrapper (FDW). The FDW determines which tables and columns are required. The FDW calls one or more [HydrateFunc] to fetch API data. Each table defines special hydrate functions: List and optionally Get. These will always be called before any other hydrate function in the table, as the other functions typically depend on the List or Get. One or more transform functions are called for each column. These extract and/or reformat data returned by the hydrate functions. The plugin returns the transformed data to the FDW. Steampipe FDW returns the results to the database.
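A compressed sketch of the pieces described above (plugin definition, a table definition with a List function, and main), collapsed into one file for brevity; the SDK import paths/versions are assumptions:

	package main

	import (
		"context"

		"github.com/turbot/steampipe-plugin-sdk/v5/grpc/proto" // assumed SDK version
		"github.com/turbot/steampipe-plugin-sdk/v5/plugin"
	)

	// Plugin is the plugin.PluginFunc: it creates the plugin and its table map.
	func Plugin(ctx context.Context) *plugin.Plugin {
		return &plugin.Plugin{
			Name: "steampipe-plugin-example",
			TableMap: map[string]*plugin.Table{
				"example_item": tableExampleItem(),
			},
		}
	}

	type exampleItem struct {
		Id   string
		Name string
	}

	// tableExampleItem is the table definition function.
	func tableExampleItem() *plugin.Table {
		return &plugin.Table{
			Name:        "example_item",
			Description: "Items returned by a (hypothetical) example API.",
			List:        &plugin.ListConfig{Hydrate: listExampleItems},
			Columns: []*plugin.Column{
				{Name: "id", Type: proto.ColumnType_STRING, Description: "Item ID."},
				{Name: "name", Type: proto.ColumnType_STRING, Description: "Item name."},
			},
		}
	}

	// listExampleItems is the List hydrate function: it would call an API and
	// stream one item per row.
	func listExampleItems(ctx context.Context, d *plugin.QueryData, h *plugin.HydrateData) (interface{}, error) {
		items := []exampleItem{{Id: "1", Name: "first"}} // stand-in for an API call
		for _, item := range items {
			d.StreamListItem(ctx, item)
		}
		return nil, nil
	}

	func main() {
		plugin.Serve(&plugin.ServeOpts{PluginFunc: Plugin})
	}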
Package freetds provides an interface to Microsoft SQL Server databases using the FreeTDS C library: http://www.freetds.org.
Package p contains an HTTP Cloud Function.