Package dbobj provides a simple ORM-like framework for SQLite and derivatives
Package db is a way to interface with sql databases as more than just simple data stores. At the moment, it is an ORM-like library in an alpha state.

A Connection is both a sql.DB connection to an active database and a mechanism for translating that database connection into structs. Structs can be configured through the Connection.Config mechanism, or by struct tags for special cases.

Mappers turn rows from sql tables into structs. Mappers can be used to save structs to the database or to retrieve subsets of a table using scopes. Scopes are implemented on Mappers and return a Queryable that you can chain off of to further filter the results that would be returned by RetrieveAll. Scopes are useful for more than just filtering results; you can also act on them. You will be able to pull specific attributes out into arrays of simple types. For example, you should be able to get a list of users who meet certain conditions and pull back their email addresses; either of the two styles sketched below would work.

But wait, there's more. You can also run delete or update statements from scopes. You can update via a column and value, a map of columns to values, or straight sql, and you can also run a sql delete from a scope.

Scopes are used for more than just that. Has Many relations can have both an array of related structs and a scope, which are filled in by the retrieval process. Calling user.Posts.Save(post) would set the user relation on the post before saving it, and user.Posts.Where(...) would start with a queryable scoped to the user's posts before applying the rest of the scope.

Start building an environment by calling NewConnection with your dialect name ("mysql", "sqlite", "postgres"), the name of the database to use, and the connector string for the database adapter (the exact connector string format still needs to be documented). There is code to manage Transactions using Connections, but it is at best semi-supported at the moment and may be removed until EphemeralConnections come around.

Mappers are created by running CreateMapper on a Connection. You will receive back a Mapper object that you can then use to retrieve database rows, save structs into the database and create scopes of the struct. Mappers can also save structs to the database using the SaveAll function. You can pass either a single instance or multiple instances to it, and it will use the primary key value of the passed instance(s) to determine whether it needs to update existing records or create new records.

Mapper+'s are like Mappers, but they are designed to be part of a user-defined struct on which the user then defines their own scopes, where each custom scope is composed of one or more regular scopes. The main difference between a Mapper and a Mapper+ is that a Mapper returns a Scope when you call a Queryable method on it, while a Mapper+ returns a new Mapper+ for the first scope, and all further scopes are then applied to that Mapper+ object. The reason for this is that user structs do not have to be continually cast or re-created; the scopes simply add to the current scope. When you need to duplicate a Mapper+ scope, you could use Identity, but that returns a Scope, not a MapperPlus. To assist in this situation, the MapperPlus interface has a Dupe method that returns a MapperPlus for you to use.

Scopes are the cornerstone of db. Scopes are the common case for creating SQL queries. Scopes come in different forms, depending on what you are wanting to accomplish.
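Here is a rough sketch of those two styles. It is illustrative only: the Users mapper, the column names, and the exact Where and Pluck signatures are assumptions; only the method names come from the descriptions above.

	// Sketch only: the Users mapper, column names, and signatures are assumptions.
	var emails []string

	// Option 1: build a scope, keep it around, then pluck the column from it.
	active := Users.Where("state = ?", "active")
	active.Pluck("email", &emails)

	// Option 2: chain the calls directly off the mapper.
	Users.Where("state = ?", "active").Pluck("email", &emails)

Either form produces the same result; the scope object built in the first style can be reused for further chaining.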
For instance, let's say I needed all students who have attended at least 1 class and have a score of 90 on the 3 tests they've taken. Or perhaps you would rather see the top 5 most popular posts on your blog from the articles you released in the past month. For detailed descriptions of each Scope function, please see the Queryable interface documentation.

There are multiple ways to retrieve data and map it into struct instances. The Find function takes two parameters: the primary key of the record you want and a pointer to the struct instance you want to map to. Find may start from Mappers, Scopes and Mapper+'s; note that Find will still respect any conditions on the Scope or Mapper+ if you are calling it from one of them. The Retrieve function takes one parameter, the struct instance to map the first record's data into. If more than one record would be returned from the current Scope, Mapper, or Mapper+, the first record is the one mapped. The RetrieveAll function takes one parameter, a pointer to an array of the struct you want to map into.

You can save slices of new structs into the database through a Mapper using the SaveAll call. You can also save single instances with SaveAll, but you will need to pass a pointer to the struct instance so the mapper can update the instance with the primary key assigned to that struct.

You can also update columns in the database off of a Scope or a Mapper. There are three functions, UpdateAttribute, UpdateAttributes, and UpdateSql, that will do this for you. UpdateAttribute takes a column name and a value, and updates that column to the value for all the database rows that match the scope. UpdateAttributes takes a map of column names to values so you may update more than one column at once. UpdateSql takes a sql fragment and allows you to write sql that uses sql functions instead of plain values. UpdateSql will see less use once db.Formula objects are implemented; it is also not yet implemented itself.

The Count method allows you to retrieve a count of the rows that would be retrieved from a Scope or Mapper. The Pluck method allows you to retrieve a selected column from a Scope, Mapper, etc.; it is mapped into a simple array passed as the second value. The CountOn method is a user-controlled version of Count: if you would like to specify a specific column, perhaps to do a DISTINCT count on, this is what you want. PluckSeveral is similar to Pluck, but allows you to specify multiple columns and arrays to map results into; it takes a string array as its first parameter, then a variable number of pointers to the arrays for the data. The Select function allows you to map specially selected columns and/or formulas into purpose-written or anonymous structs. If a table has many columns, or you are returning quite a bit of data, using special structs instead of the default mapper can be a performance boost.

There are also TableInformation and ScopeInformation interfaces. I would caution against use of these two interfaces at the moment, as they are intended to be improved heavily before a stable release of db. A stable version of db will provide a comprehensive informational interface for both Scopes and Mappers, but there are more pressing features at the moment.
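As a hedged sketch of the retrieval and update calls described above (the Posts mapper, the Post struct, and the exact signatures are assumptions; only the function names come from this documentation):

	// Sketch only: the mapper, struct and signatures are illustrative assumptions.
	var post Post
	Posts.Find(1, &post) // map the row with primary key 1 into post

	var drafts []Post
	Posts.Where("draft = ?", true).RetrieveAll(&drafts) // all matching rows

	var latest Post
	Posts.Where("draft = ?", false).Retrieve(&latest) // first matching row only

	// Publish every row the scope matches.
	Posts.Where("draft = ?", true).UpdateAttribute("draft", false)

	// Save a new instance; the primary key is filled in on the pointed-to struct.
	Posts.SaveAll(&Post{Title: "Hello"})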
If your use case involves significant use of the database, rather than using the database as a simple persistence mechanism, you will enjoy the Mixin functionality offered by db. When you add the db.Mixin struct as an embedded field to your structs, you gain the ability to Save, Delete, and UpdateAttribute(s) on struct instances directly instead of having to go through the mapper objects.

Mixins need to be initialized explicitly. This can be done by passing instances individually, as a slice, or as any number of individual instances to the Initialize function of the mapper for that struct type. You can also initialize individual instances by calling the instance's Init function with a pointer to the instance. This is only required if you are constructing your instances manually and not using the Find/Retrieve/RetrieveAll Scope/Mapper functions: Find, Retrieve, and RetrieveAll will all initialize the instances they retrieve if those instances embed a Mixin. Instances do not need to be resident in the database for initialization to succeed, and instances don't need to be initialized to be saved using the Mapper.SaveAll function.

While joining in db can be divided multiple ways, the simplest division may be between automatic joins and manual joins. Manual joins are specified by the user in the joins query; they may add specifiers to the join call, or may join on non-intuitive columns. Automatic joins are discovered during mapping by db and can then be retrieved using the mapper or mixin, or using the Include Scope method. Manual joins are intended for cases where you need filtering that comes from the existence of the join, or where you need to select columns/formulas/etc. from the query using the Select method of retrieval. Automatic joins are declared as part of the struct, and can then be used in Join calls by simply passing in a string or mapper corresponding to the joined struct; see the sketch below.

By default, joins are implemented as outer joins, but you can default specific joins to be inner joins in the sql statement by setting the db_join struct tag to inner. You can also use the alternative Join function, InnerJoin, to have the join run as an inner join. Finally, if you have set a db_join to default to inner but want an outer join instead, you can use the OuterJoin function. The FullJoin function allows you to retrieve records that don't have a match to the primary mapped struct: you pass the normal Join parameters, but add a pointer to an array of the struct you are asking to be joined, which will be filled with the non-matching records when the first Retrieve/RetrieveAll call is made.

If you need functions to be evaluated by the sql server as part of conditions, you can pass a formula created with the Func function. Funcs can have their own parameters, which you should specify using ?'s to denote where the values should appear. Where scopings do not respect Funcs at the moment, but they will in the future. The Col function allows you to specify a column to be used as a parameter in the same manner as a value or Func.

A Dialect creates a way for db to talk to a specific RDBMS. The current internal ones are mysql and sqlite3, with postgres planned for the near future. You can replace existing dialects or add your own by writing a struct that satisfies the Dialect interface and then calling RegisterDialect with the name you want the dialect to be accessible under and an instance of your dialect struct.
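The join example referred to above is sketched here. It is illustrative only: the struct layout, the relation field type, and the Join/OuterJoin/FullJoin signatures are assumptions built from the descriptions in this document.

	// Sketch only: field types, tag placement and signatures are assumptions.
	type User struct {
		db.Mixin          // embedded Mixin enables user.Save(), user.Delete(), ...
		ID    int64
		Name  string
		Posts []Post `db_join:"inner"` // automatic join, defaulting to an inner join
	}

	var users []User
	// Automatic join by name; the db_join tag decides inner vs outer by default.
	Users.Join("Posts").Where("posts.published = ?", true).RetrieveAll(&users)

	// Override the default for a single query.
	Users.OuterJoin("Posts").RetrieveAll(&users)

	// FullJoin also collects joined records with no matching User.
	var orphans []Post
	Users.FullJoin("Posts", &orphans).RetrieveAll(&users)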
Before a public announcement of a db version, I need to implement the Logging facilities. It won't be difficult, but it takes time. Time that I haven't invested yet.
Package bstore is an in-process database with serializable transactions supporting referential/unique/nonzero constraints, (multikey) indices, automatic schema management based on Go types and struct tags, and a query API.

Bstore is a small, pure Go library that still provides most of the common data consistency requirements for modest database use cases. Bstore aims to make basic use of cgo-based libraries, such as sqlite, unnecessary. Bstore implements autoincrementing primary keys, indices, default values, enforcement of nonzero, unique and referential integrity constraints, automatic schema updates and a query API for combining filters/sorting/limits. Queries are planned and executed using indices for speed where possible. Bstore works with Go types: you typically don't have to write any (un)marshal code for your types. Bstore is not an ORM; it plans and executes queries itself.

Struct field types currently supported for storing, including pointers to these types but not pointers to pointers:

Note: int and uint are stored as int32 and uint32, for compatibility of database files between 32bit and 64bit systems. Where possible, use explicit (u)int32 or (u)int64 types. Cyclic types are supported, but cyclic data is not; attempting to store cyclic data will likely result in a stack overflow panic. Anonymous struct fields are handled by treating each of the anonymous struct's fields as the type's own fields. The named embedded type is not part of the type schema, and with a Query it can currently only be used with UpdateField and UpdateFields, not for filtering.

Bstore embraces the use of Go zero values. Use zero values, possibly pointers, where you would use NULL values in SQL.

The typical Go struct can be stored in the database. The first field of a struct type is its primary key; it must always be unique, and in the case of an integer type, inserting a zero value automatically changes it to the next sequence number by default. Additional behaviour can be configured through the "bstore" struct tag. The values are comma-separated; typically one word, but some have multiple space-separated words:

Before using a Go type, you must register it for use with the open database by passing a (possibly zero) value of that type to the Open or Register functions. For each type, a type definition is stored in the database. If a type has an updated definition since the previous database open, a new type definition is added to the database automatically and any required modifications are made and checked: indexes (re)created, fields added/removed, new nonzero/unique/reference constraints validated.

As a special case, you can change field types between pointer and non-pointer types, with one exception: changing from pointer to non-pointer where the type has a field that must be nonzero is not allowed. The on-disk encoding will not be changed: nil pointers will turn into zero values, and zero values into nil pointers. Also see the section Limitations about pointer types. Because named embed structs are not part of the type definition, you can wrap/unwrap fields into an embedded/anonymous struct field; no new type definition is created.
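As a minimal sketch of basic use under the rules above (the first struct field as the primary key, type registration at Open), assuming bstore's Open, Insert, and QueryDB API; the Task type, field names, and database path are made up for illustration:

	// Sketch only, not a complete program. Assumes imports of context, log,
	// time and github.com/mjl-/bstore.
	type Task struct {
		ID      int64     // First field is the primary key; a zero value gets the next sequence number.
		Text    string    `bstore:"nonzero"` // Must not be the zero value.
		Created time.Time
	}

	ctx := context.Background()
	db, err := bstore.Open(ctx, "app.db", nil, Task{}) // Registers Task, applying any schema updates.
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Insert(ctx, &Task{Text: "write docs", Created: time.Now()}); err != nil {
		log.Fatal(err)
	}

	// Combine filters, sorting and limits; indices are used where possible.
	q := bstore.QueryDB[Task](ctx, db)
	q.FilterNonzero(Task{Text: "write docs"})
	q.SortDesc("Created")
	q.Limit(10)
	tasks, err := q.List()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("found %d tasks", len(tasks))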
Some schema conversions are not allowed. In some cases this is due to architectural limitations; in others the constraint checks haven't been implemented yet, or the parsing code does not yet know how to parse the old on-disk values into the updated Go types. If you need a conversion that is not supported, you will need to write a manual conversion, and you will have to keep track of whether the update has been executed yourself.

Changes that are allowed:

Conversions that are not currently allowed, but may be in the future:

Bolt is used as the underlying storage through the bbolt library. Bolt stores key/values in a single file, allowing multiple/nested buckets (namespaces) in a B+tree, and provides ACID serializable transactions. A single write transaction can be active at a time, along with one or more read-only transactions. Do not start a blocking read-only transaction in a goroutine while holding a writable transaction, or vice versa; this can cause deadlock. Bolt returns Go values that are memory mapped to the database file. This means Bolt/bstore database files cannot be transferred between machines with different endianness. Bolt uses explicit widths for its types, so files can be transferred between 32bit and 64bit machines of the same endianness. While Bolt returns read-only memory mapped byte slices, bstore only ever returns parsed/copied regular writable Go values that require no special programmer attention.

For each Go type opened for a database file, bstore ensures a Bolt bucket exists with two subbuckets:

For each index, another subbucket is created, its name starting with "index.". The stored keys consist of the index fields followed by the primary key, with an empty value. See format.md for details.

Bstore has limitations, not all of which are architectural, so some may be fixed in the future. Bstore does not implement the equivalent of SQL joins, aggregates, and many other concepts. Filtering/comparing/sorting on pointer fields is not allowed, and pointer fields cannot have a (unique) index; use non-pointer values with the zero value as the equivalent of a nil pointer. The first field of a stored struct is always the primary key, and autoincrement is only available for the primary key. Bolt opens the database file with a lock, so only one process can have the database open at a time. An index stored on disk in Bolt can consume more disk space than other database systems would: for each record, the indexed field(s) and primary key are stored in full, and because bstore uses Bolt as a key/value store and doesn't manage disk pages itself, it cannot pack an index page with many records as efficiently. Interface values cannot be stored; this would require storing the type along with the value. Instead, use a type that implements BinaryMarshaler. Values of builtin type "complex" cannot be stored. Bstore inherits limitations from Bolt, see https://pkg.go.dev/go.etcd.io/bbolt#readme-caveats-amp-limitations.

Sqlite is a great library, but Go applications that require cgo are hard to cross-compile. With bstore, cross-compiling to most Go-supported platforms stays trivial (though not plan9, unfortunately). Although bstore is much more limited than sqlite in many aspects, it also offers some advantages. Some points of comparison:
Package persistence is a pure* Go, SQL-dialect-agnostic SQL interface. It can be used as a raw query executor, a query builder, or an ORM. Persistence comes packaged with dialects that can be used to interact with either postgres or sqlite databases; both are optional to use.

*The sqlite dialect driver, if used, requires cgo and is therefore not "pure". It is not required for persistence to be usable, though.
Package dh is a tool for maintaining RDBMS schema versions. dh is inspired by my own DBIx::Class::DeploymentHandler, which worked well but required the use of an ORM, and additionally had some other limitations, like needing to reliably split the SQL in the migration files into statements, which ended up being pretty frustrating. dh simplifies the situation dramatically.

You create directories of migrations; out of the box these migrations can contain SQL files (defined by having the `.sql` extension) or JSON files (defined by having the `.json` extension, containing simply an array of strings to be run as SQL statements.) In addition to the directories of files, at the same level as the directories you define a plan (in `plan.txt`) that simply lists the migrations to be applied in order. Comments (starting with `#`) are ignored, as are blank lines.

## The First Migration

The first migration is special, as it must create the table that dh uses to store which migrations have been applied. To work with the built-in MigrationStorage it should be named `dh_migrations` and only needs two columns: `version`, which is the directory name applied, and `id`, which must increase with each applied version, so that queries for the last `id` will return the most recently applied version.

dh includes a migration fit for the first migration that works for SQLite, which you can apply by doing something like this:

If people submit MRs for other databases I'll happily have them.
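For illustration only (the directory names and SQL below are a hypothetical sketch, not dh's bundled migration), a migration tree and plan might look like this, with the first migration creating the `dh_migrations` table described above:

```
migrations/
  plan.txt       # lists the migrations to apply, in order
  01-dh/
    dh.sql       # creates the dh_migrations bookkeeping table
  02-users/
    users.sql
```

`plan.txt`:

```
# applied top to bottom; comments and blank lines are ignored
01-dh
02-users
```

`01-dh/dh.sql` (SQLite):

```sql
CREATE TABLE dh_migrations (
  id      INTEGER PRIMARY KEY AUTOINCREMENT, -- increases with each applied version
  version TEXT NOT NULL                      -- directory name of the applied migration
);
```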