Package mempool provides a policy-enforced pool of unmined Decred transactions.

A key responsibility of the Decred network is mining transactions – regular transactions and stake transactions – into blocks. In order to facilitate this, the mining process relies on having a readily-available source of transactions to include in a block that is being solved. At a high level, this package satisfies that requirement by providing an in-memory pool of fully validated transactions that can also optionally be further filtered based upon a configurable policy.

The Policy configuration options have flags that control whether or not "standard" transactions and old votes are accepted into the mempool. In essence, a "standard" transaction is one that satisfies a fairly strict set of requirements that are largely intended to help provide fair use of the system to all users. It is important to note that what is considered to be a "standard" transaction changes over time as policy and consensus rules evolve. For some insight, at the time of this writing, an example of _some_ of the criteria that are required for a transaction to be considered standard are that it is of the most-recently supported version, finalized, does not exceed a specific size, and only consists of specific script forms.

Since this package does not deal with other Decred specifics such as network communication and transaction relay, it returns a list of transactions that were accepted which gives the caller a high level of flexibility in how they want to proceed. Typically, this will involve things such as relaying the transactions to other peers on the network and notifying the mining process that new transactions are available. This package has intentionally been designed so it can be used as a standalone package for any projects needing the ability to create an in-memory pool of Decred transactions that are not only valid by consensus rules, but also adhere to a configurable policy.

## Feature Overview

The following is a quick overview of the major features. It is not intended to be an exhaustive list.

- Maintain a pool of fully validated transactions
- Stake transaction support (ticket purchases, votes and revocations)
- Orphan transaction support (transactions that spend from unknown outputs)
- Configurable transaction acceptance policy
- Additional metadata tracking for each transaction
- Manual control of transaction removal

Errors returned by this package are either the raw errors provided by underlying calls or of type mempool.RuleError. Since there are two classes of rules (mempool acceptance rules and blockchain (consensus) acceptance rules), the mempool.RuleError type contains a single Err field which will, in turn, either be a mempool.TxRuleError or a blockchain.RuleError. The first indicates a violation of mempool acceptance rules while the latter indicates a violation of consensus acceptance rules. This allows the caller to easily differentiate between unexpected errors, such as database errors, versus errors due to rule violations through type assertions. In addition, callers can programmatically determine the specific rule violation by type asserting the Err field to one of the aforementioned types and examining their underlying ErrorCode field.
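To illustrate the error-handling pattern just described, here is a minimal, hedged sketch of separating mempool policy violations from consensus violations via type assertions. The import paths and the helper name are assumptions for illustration only.

    package example

    import (
        "github.com/decred/dcrd/blockchain"
        "github.com/decred/dcrd/mempool"
    )

    // describeAcceptError classifies an error returned when submitting a
    // transaction to the mempool, per the RuleError structure described above.
    func describeAcceptError(err error) string {
        if err == nil {
            return "accepted"
        }
        rerr, ok := err.(mempool.RuleError)
        if !ok {
            // Not a rule violation, e.g. a database error.
            return "unexpected error: " + err.Error()
        }
        switch e := rerr.Err.(type) {
        case mempool.TxRuleError:
            return "mempool policy violation: " + e.Error()
        case blockchain.RuleError:
            return "consensus rule violation: " + e.Error()
        default:
            return rerr.Error()
        }
    }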
Package levigo provides the ability to create and access LevelDB databases. levigo.Open opens and creates databases. The DB struct returned by Open provides DB.Get, DB.Put and DB.Delete to modify and query the database. For bulk reads, use an Iterator. If you want to avoid disturbing your live traffic while doing the bulk read, be sure to call SetFillCache(false) on the ReadOptions you use when creating the Iterator. Batched, atomic writes can be performed with a WriteBatch and DB.Write. If your working dataset does not fit in memory, you'll want to add a bloom filter to your database. NewBloomFilter and Options.SetFilterPolicy are what you want. NewBloomFilter takes the number of bits in the filter to use per key in your database. If you're using a custom comparator in your code, be aware you may have to make your own filter policy object. This documentation is not a complete discussion of LevelDB. Please read the LevelDB documentation <http://code.google.com/p/leveldb> for information on its operation. You'll find lots of goodies there.
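As a quick illustration of the calls mentioned above, here is a hedged sketch of opening a database and doing a simple Put/Get; the path and option values are arbitrary.

    package main

    import (
        "fmt"

        "github.com/jmhodges/levigo"
    )

    func main() {
        opts := levigo.NewOptions()
        opts.SetCreateIfMissing(true)
        defer opts.Close()

        db, err := levigo.Open("/tmp/example.leveldb", opts)
        if err != nil {
            panic(err)
        }
        defer db.Close()

        wo := levigo.NewWriteOptions()
        defer wo.Close()
        ro := levigo.NewReadOptions()
        defer ro.Close()

        if err := db.Put(wo, []byte("key"), []byte("value")); err != nil {
            panic(err)
        }
        data, err := db.Get(ro, []byte("key"))
        if err != nil {
            panic(err)
        }
        fmt.Println(string(data))
    }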
Package pgconn is a low-level PostgreSQL database driver. pgconn provides lower level access to a PostgreSQL connection than a database/sql or pgx connection. It operates at nearly the same level as the C library libpq. Use Connect to establish a connection. It accepts a connection string in URL or DSN format and will read the environment for libpq style environment variables. ExecParams and ExecPrepared execute a single query. They return readers that iterate over each row. The Read method reads all rows into memory. Exec and ExecBatch can execute multiple queries in a single round trip. They return readers that iterate over each query result. The ReadAll method reads all query results into memory. All potentially blocking operations take a context.Context. If a context is canceled while the method is in progress the method immediately returns. In most circumstances, this will close the underlying connection. The CancelRequest method may be used to request the PostgreSQL server cancel an in-progress query without forcing the client to abort.
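A minimal, hedged sketch of the Connect and ExecParams flow described above; the query is arbitrary and DATABASE_URL is assumed to point at a reachable server.

    package main

    import (
        "context"
        "fmt"
        "log"
        "os"

        "github.com/jackc/pgconn"
    )

    func main() {
        ctx := context.Background()
        conn, err := pgconn.Connect(ctx, os.Getenv("DATABASE_URL"))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close(ctx)

        // ExecParams returns a reader over the rows; Read drains it into memory.
        result := conn.ExecParams(ctx, "SELECT generate_series(1, $1)",
            [][]byte{[]byte("3")}, nil, nil, nil).Read()
        if result.Err != nil {
            log.Fatal(result.Err)
        }
        for _, row := range result.Rows {
            fmt.Println(string(row[0]))
        }
    }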
Package driver implements a Go driver for the ArangoDB database. To get started, create a connection to the database and wrap a client around it.
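For example, a hedged sketch of that setup (the endpoint, credentials, and database name are placeholders):

    package main

    import (
        "context"
        "log"

        driver "github.com/arangodb/go-driver"
        "github.com/arangodb/go-driver/http"
    )

    func main() {
        ctx := context.Background()

        // Create a connection to the server, then wrap a client around it.
        conn, err := http.NewConnection(http.ConnectionConfig{
            Endpoints: []string{"http://localhost:8529"},
        })
        if err != nil {
            log.Fatal(err)
        }
        client, err := driver.NewClient(driver.ClientConfig{
            Connection:     conn,
            Authentication: driver.BasicAuthentication("root", "password"),
        })
        if err != nil {
            log.Fatal(err)
        }

        db, err := client.Database(ctx, "examples")
        if err != nil {
            log.Fatal(err)
        }
        log.Println("connected to database:", db.Name())
    }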
Package resourceexplorer2 provides the API client, operations, and parameter types for AWS Resource Explorer. Amazon Web Services Resource Explorer is a resource search and discovery service. By using Resource Explorer, you can explore your resources using an internet search engine-like experience. Examples of resources include Amazon Relational Database Service (Amazon RDS) instances, Amazon Simple Storage Service (Amazon S3) buckets, or Amazon DynamoDB tables. You can search for your resources using resource metadata like names, tags, and IDs. Resource Explorer can search across all of the Amazon Web Services Regions in your account in which you turn the service on, to simplify your cross-Region workloads. Resource Explorer scans the resources in each of the Amazon Web Services Regions in your Amazon Web Services account in which you turn on Resource Explorer. Resource Explorer creates and maintains an index in each Region, with the details of that Region's resources. You can search across all of the indexed Regions in your account by designating one of your Amazon Web Services Regions to contain the aggregator index for the account. When you promote a local index in a Region to become the aggregator index for the account, Resource Explorer automatically replicates the index information from all local indexes in the other Regions to the aggregator index. Therefore, the Region with the aggregator index has a copy of all resource information for all Regions in the account where you turned on Resource Explorer. As a result, views in the aggregator index Region include resources from all of the indexed Regions in your account. For more information about Amazon Web Services Resource Explorer, including how to enable and configure the service, see the Amazon Web Services Resource Explorer User Guide.
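As a rough, hedged sketch of using the client with the Search operation (the query string and the output fields used here are assumptions):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/config"
        "github.com/aws/aws-sdk-go-v2/service/resourceexplorer2"
    )

    func main() {
        ctx := context.Background()
        cfg, err := config.LoadDefaultConfig(ctx)
        if err != nil {
            log.Fatal(err)
        }
        client := resourceexplorer2.NewFromConfig(cfg)

        // Search for S3 buckets using the free-form query syntax.
        out, err := client.Search(ctx, &resourceexplorer2.SearchInput{
            QueryString: aws.String("service:s3"),
        })
        if err != nil {
            log.Fatal(err)
        }
        for _, r := range out.Resources {
            fmt.Println(aws.ToString(r.Arn))
        }
    }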
Package genji implements a document-oriented, embedded SQL database.
Command xo generates code from database schemas and custom queries. Works with PostgreSQL, MySQL, Microsoft SQL Server, Oracle Database, and SQLite3.
Package sqlite provides a Go interface to SQLite 3. The semantics of this package are deliberately close to the SQLite3 C API, so it is helpful to be familiar with http://www.sqlite.org/c3ref/intro.html. An SQLite connection is represented by a *sqlite.Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (using Open to create a *sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database. This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache: https://www.sqlite.org/sharedcache.html The implementation automatically handles shared cache locking, see the documentation on Stmt.Step for details. The optional SQLite3 extensions compiled in are: FTS5, RTree, JSON1, Session, GeoPoly. This is not a database/sql driver. Statements are prepared with the Prepare and PrepareTransient methods. When using Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can write: After all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement. The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or more commonly, a large number of moderate-sized blobs concurrently.) To write a blob, first use an INSERT statement to set the size of the blob and assign a rowid: Use BindZeroBlob or SetZeroBlob to set the size of myblob. Then you can open the blob with: Every connection can have a done channel associated with it using the SetInterrupt method. This is typically the channel returned by a context.Context Done method. For example, a timeout can be associated with a connection session: As database connections are long-lived, the SetInterrupt method can be called multiple times to reset the associated lifetime. When using pools, the shorthand for associating a context with a connection is: SQLite transactions have to be managed manually with this package by directly calling BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE / ROLLBACK. The sqlitex package has a Savepoint function that helps automate this. Using a Pool to execute SQL in a concurrent HTTP handler. For helper functions that make some kinds of statements easier to write see the sqlitex package.
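Putting the pieces above together, here is a hedged sketch of a pool-backed HTTP handler; exact signatures (notably Pool.Get) have varied between releases, and the table and column names are invented.

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "crawshaw.io/sqlite/sqlitex"
    )

    var dbpool *sqlitex.Pool

    func handler(w http.ResponseWriter, r *http.Request) {
        conn := dbpool.Get(r.Context())
        if conn == nil {
            return // pool closed or request canceled
        }
        defer dbpool.Put(conn)

        // Prep caches the statement on the connection by its query string.
        stmt := conn.Prep("SELECT name FROM people WHERE id = $id;")
        stmt.SetInt64("$id", 42)
        for {
            hasRow, err := stmt.Step()
            if err != nil {
                http.Error(w, err.Error(), 500)
                return
            }
            if !hasRow {
                break
            }
            fmt.Fprintln(w, stmt.GetText("name"))
        }
    }

    func main() {
        var err error
        dbpool, err = sqlitex.Open("file:example.db", 0, 10)
        if err != nil {
            log.Fatal(err)
        }
        http.HandleFunc("/", handler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }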
Package clickhouse implements gdb.Driver, which supports operations for database ClickHouse.
Package database provides a block and metadata storage database. This package provides a database layer to store and retrieve block data and arbitrary metadata in a simple and efficient manner. The default backend, ffldb, has a strong focus on speed, efficiency, and robustness. It makes use of leveldb for the metadata, flat files for block storage, and strict checksums in key areas to ensure data integrity. A quick overview of the features database provides is as follows: The main entry point is the DB interface. It exposes functionality for transactional-based access and storage of metadata and block data. It is obtained via the Create and Open functions which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver. The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below. The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root metadata bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged. A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead. The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket. As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface. The root metadata bucket is the upper-most bucket in which data is stored and is created at the same time as the database. Use the Metadata function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use. This example demonstrates creating a new database and using a managed read-write transaction to store and retrieve metadata.
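A hedged sketch of the metadata example just mentioned; the module paths and the Create arguments for the ffldb backend are assumptions.

    package example

    import (
        "fmt"
        "os"
        "path/filepath"

        "github.com/decred/dcrd/database"
        _ "github.com/decred/dcrd/database/ffldb"
        "github.com/decred/dcrd/wire"
    )

    func storeAndLoadMetadata() error {
        // Create a database using the ffldb backend.
        dbPath := filepath.Join(os.TempDir(), "exampledb")
        db, err := database.Create("ffldb", dbPath, wire.MainNet)
        if err != nil {
            return err
        }
        defer os.RemoveAll(dbPath)
        defer db.Close()

        // Store and retrieve a key/value pair in the root metadata bucket
        // using a managed read-write transaction.
        key := []byte("mykey")
        return db.Update(func(tx database.Tx) error {
            if err := tx.Metadata().Put(key, []byte("myvalue")); err != nil {
                return err
            }
            fmt.Printf("value: %s\n", tx.Metadata().Get(key))
            return nil
        })
    }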
This example demonstrates creating a new database, using a managed read-write transaction to store a block, and using a managed read-only transaction to fetch the block.
Package refs strives to offer a couple of types and corresponding encoding code to help other go-based ssb projects to talk about message, feed and blob references without pulling in all of go-ssb and its network and database code.
Package safebrowsing implements a client for the Safe Browsing API v4. API v4 emphasizes efficient usage of the network for bandwidth-constrained applications such as mobile devices. It achieves this by maintaining a small portion of the server state locally such that some queries can be answered immediately without any network requests. Thus, fewer API calls are made and less bandwidth is used. At a high-level, the implementation does the following: Essentially the query is presented to three major components: The database, the cache, and the API. Each of these may satisfy the query immediately, or may say that it does not know and that the query should be satisfied by the next component. The goal of the database and cache is to satisfy as many queries as possible to avoid using the API. Starting with a user query, a hash of the query is performed to preserve privacy regarding the exact nature of the query. For example, if the query was for a URL, then this would be the SHA256 hash of the URL in question. Given a query hash, we first check the local database (which is periodically synced with the global Safe Browsing API servers). This database will either tell us that the query is definitely safe, or that it does not have enough information. If we are unsure about the query, we check the local cache, which can be used to satisfy queries immediately if the same query had been made recently. The cache will tell us that the query is either safe, unsafe, or unknown (because it's not in the cache or the entry expired). If we are still unsure about the query, then we finally query the API server, which is guaranteed to return to us an authoritative answer, assuming no networking failures. For more information, see the API developer's guide:
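A hedged sketch of a URL lookup with this package; the constructor and lookup names reflect my understanding of the API, and the API key and database path are placeholders.

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/google/safebrowsing"
    )

    func main() {
        sb, err := safebrowsing.NewSafeBrowser(safebrowsing.Config{
            APIKey: os.Getenv("SAFEBROWSING_API_KEY"),
            DBPath: "/tmp/safebrowsing.db", // local database synced in the background
        })
        if err != nil {
            log.Fatal(err)
        }
        defer sb.Close()

        // Each entry in threats corresponds to the URL at the same index.
        threats, err := sb.LookupURLs([]string{"http://example.com/"})
        if err != nil {
            log.Fatal(err)
        }
        if len(threats[0]) == 0 {
            fmt.Println("URL appears safe")
        } else {
            fmt.Println("URL is unsafe:", threats[0])
        }
    }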
Package rqlite is a lightweight, distributed relational database built on SQLite. rqlite uses Raft to achieve consensus across the cluster of SQLite databases. It ensures that every change made to the database is made to a majority of underlying SQLite files, or none at all. rqlite gives you the functionality of a rock solid, fault-tolerant, replicated relational database, but with very easy installation, deployment, and operation. With it you've got a lightweight and reliable distributed store for relational data. You could use rqlite as part of a larger system, as a central store for some critical relational data, without having to run a heavier solution like MySQL.
Package gnomock contains a framework to set up temporary docker containers for integration and end-to-end testing of other applications. It handles pulling images, starting containers, waiting for them to become available, setting up their initial state and cleaning up in the end. Its power is in a variety of Presets, each implementing a specific database, service or other tools. Each preset provides ways of setting up its initial state as easily as possible: SQL schema creation, test data upload into S3, sending test events to Splunk, etc. All containers created using Gnomock have a self-destruct mechanism that kicks in right after the test execution completes. To debug cases where containers don't behave as expected, there are options like `WithDebugMode()` or `WithLogWriter()`. For the list of presets, please refer to https://pkg.go.dev/github.com/orlangure/gnomock/preset. Each preset can then be used in the following way:
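For example, a hedged sketch using the Redis preset (preset options, package names, and connection details are placeholders and vary per preset):

    package mypackage_test

    import (
        "fmt"
        "testing"

        "github.com/orlangure/gnomock"
        "github.com/orlangure/gnomock/preset/redis"
    )

    func TestWithRedis(t *testing.T) {
        // Start a temporary Redis container; it self-destructs after the test run.
        container, err := gnomock.Start(redis.Preset())
        if err != nil {
            t.Fatal(err)
        }
        defer func() { _ = gnomock.Stop(container) }()

        // Point your client at the container's address.
        addr := container.DefaultAddress()
        fmt.Println("redis available at", addr)
    }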
Package goworker is a Resque-compatible, Go-based background worker. It allows you to push jobs into a queue using an expressive language like Ruby while harnessing the efficiency and concurrency of Go to minimize job latency and cost. goworker workers can run alongside Ruby Resque clients so that you can keep all but your most resource-intensive jobs in Ruby. To create a worker, write a function matching the signature and register it using Here is a simple worker that prints its arguments: To create workers that share a database pool or other resources, use a closure to share variables. goworker worker functions receive the queue they are serving and a slice of interfaces. To use them as parameters to other functions, use Go type assertions to convert them into usable types. For testing, it is helpful to use the redis-cli program to insert jobs onto the Redis queue: will insert 100 jobs for the MyClass worker onto the myqueue queue. It is equivalent to: After building your workers, you will have an executable that you can run which will automatically poll a Redis server and call your workers as jobs arrive. There are several flags which control the operation of the goworker client. -queues="comma,delimited,queues" — This is the only required flag. The recommended practice is to separate your Resque workers from your goworkers with different queues. Otherwise, Resque worker classes that have no goworker analog will cause the goworker process to fail the jobs. Because of this, there is no default queue, nor is there a way to select all queues (à la Resque's * queue). Queues are processed in the order they are specified. If you have multiple queues you can assign them weights. A queue with a weight of 2 will be checked twice as often as a queue with a weight of 1: -queues='high=2,low=1'. -interval=5.0 — Specifies the wait period between polling if no job was in the queue the last time one was requested. -concurrency=25 — Specifies the number of concurrently executing workers. This number can be as low as 1 or rather comfortably as high as 100,000, and should be tuned to your workflow and the availability of outside resources. -connections=2 — Specifies the maximum number of Redis connections that goworker will consume between the poller and all workers. There is not much performance gain over two and a slight penalty when using only one. This is configurable in case you need to keep connection counts low for cloud Redis providers who limit plans on maxclients. -uri=redis://localhost:6379/ — Specifies the URI of the Redis database from which goworker polls for jobs. Accepts URIs of the format redis://user:pass@host:port/db or unix:///path/to/redis.sock. The flag may also be set by the environment variables $($REDIS_PROVIDER) or $REDIS_URL. E.g. set $REDIS_PROVIDER to REDISTOGO_URL on Heroku to let the Redis To Go add-on configure the Redis database. -namespace=resque: — Specifies the namespace from which goworker retrieves jobs and stores stats on workers. -exit-on-complete=false — Exits goworker when there are no jobs left in the queue. This is helpful in conjunction with the time command to benchmark different configurations. -use-number=false — Uses json.Number when decoding numbers in the job payloads. This will avoid issues that occur when goworker and the json package decode large numbers as floats, which then get encoded in scientific notation, losing precision. This will default to true soon. You can also configure your own flags for use within your workers.
Be sure to set them before calling goworker.Main(). It is okay to call flags.Parse() before calling goworker.Main() if you need to do additional processing on your flags. To stop goworker, send a QUIT, TERM, or INT signal to the process. This will immediately stop job polling. There can be up to $CONCURRENCY jobs currently running, which will continue to run until they are finished. Like Resque, goworker makes no guarantees about the safety of jobs in the event of process shutdown. Workers must be both idempotent and tolerant to loss of the job in the event of failure. If the process is killed with a KILL or by a system failure, there may be one job that is currently in the poller's buffer that will be lost without any representation in either the queue or the worker variable. If you are running goworker on a system like Heroku, which sends a TERM to signal a process that it needs to stop and ten seconds later sends a KILL to force the process to stop, your jobs must finish within 10 seconds or they may be lost. Jobs will be recoverable from the Redis database under the worker key as a JSON object with keys queue, run_at, and payload, but the process is manual. Additionally, there is no guarantee that the job in Redis under the worker key has not finished, if the process is killed before goworker can flush the update to Redis.
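To recap the worker signature and registration described above, here is a hedged sketch; the entry-point call mirrors the goworker.Main() mentioned in this documentation, and the queue/class names are arbitrary.

    package main

    import (
        "fmt"

        "github.com/benmanns/goworker"
    )

    // myFunc matches the required worker signature: it receives the queue name
    // and the job arguments as a slice of interfaces.
    func myFunc(queue string, args ...interface{}) error {
        fmt.Printf("From %s, %v\n", queue, args)
        return nil
    }

    func init() {
        // Register the worker under the Resque class name it should handle.
        goworker.Register("MyClass", myFunc)
    }

    func main() {
        // Polls Redis (per the flags described above) and dispatches jobs to
        // registered workers until a QUIT/TERM/INT signal is received.
        goworker.Main()
    }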
SizedWaitGroup adds the feature of limiting the maximum number of concurrently started routines. It could, for example, be used to start multiple routines querying a database without sending too many queries at once, so as not to overload the given database. Rémy Mathieu © 2016
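A hedged sketch of the idea, assuming the github.com/remeh/sizedwaitgroup API of New/Add/Done/Wait:

    package main

    import (
        "fmt"

        "github.com/remeh/sizedwaitgroup"
    )

    func main() {
        // At most 8 goroutines run at once; Add blocks until a slot is free.
        swg := sizedwaitgroup.New(8)
        for i := 0; i < 100; i++ {
            swg.Add()
            go func(i int) {
                defer swg.Done()
                // query the database here
                fmt.Println("job", i)
            }(i)
        }
        swg.Wait()
    }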
Package blockchain implements Decred block handling and chain selection rules. The Decred block handling and chain selection rules are an integral, and quite likely the most important, part of Decred. At its core, Decred is a distributed consensus of which blocks are valid and which ones will comprise the main block chain (public ledger) that ultimately determines accepted transactions, so it is extremely important that fully validating nodes agree on all rules. At a high level, this package provides support for inserting new blocks into the block chain according to the aforementioned rules. It includes functionality such as rejecting duplicate blocks, ensuring blocks and transactions follow all rules, and best chain selection along with reorganization. Since this package does not deal with other Decred specifics such as network communication or wallets, it provides a notification system which gives the caller a high level of flexibility in how they want to react to certain events such as newly connected main chain blocks which might result in wallet updates. Before a block is allowed into the block chain, it must go through an intensive series of validation rules. The following list serves as a general outline of those rules to provide some intuition into what is going on under the hood, but is by no means exhaustive: Errors returned by this package are either the raw errors provided by underlying calls or of type blockchain.RuleError. This allows the caller to differentiate between unexpected errors, such as database errors, versus errors due to rule violations through type assertions. In addition, callers can programmatically determine the specific rule violation by examining the ErrorCode field of the type asserted blockchain.RuleError.
Package monkit is a flexible code instrumenting and data collection library. I'm going to try and sell you as fast as I can on this library. Example usage: We've got tools that capture distribution information (including quantiles) about int64, float64, and bool types. We have tools that capture data about events (we've got meters for deltas, rates, etc). We have rich tools for capturing information about tasks and functions, and literally anything that can generate a name and a number. Almost just as importantly, the amount of boilerplate and code you have to write to get these features is very minimal. Data that's hard to measure probably won't get measured. This data can be collected and sent to Graphite (http://graphite.wikidot.com/) or any other time-series database. Here's a selection of live stats from one of our storage nodes: This library generates call graphs of your live process for you. These call graphs aren't created through sampling. They're full pictures of all of the interesting functions you've annotated, along with quantile information about their successes, failures, how often they panic, return an error (if so instrumented), how many are currently running, etc. The data can be returned in dot format, in json, in text, and can be about just the functions that are currently executing, or all the functions the monitoring system has ever seen. Here's another example of one of our production nodes: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/callgraph2.png This library generates trace graphs of your live process for you directly, without requiring standing up some tracing system such as Zipkin (though you can do that too). Inspired by Google's Dapper (http://research.google.com/pubs/pub36356.html) and Twitter's Zipkin (http://zipkin.io), we have process-internal trace graphs, triggerable by a number of different methods. You get this trace information for free whenever you use Go contexts (https://blog.golang.org/context) and function monitoring. The output formats are svg and json. Additionally, the library supports trace observation plugins, and we've written a plugin that sends this data to Zipkin (http://github.com/spacemonkeygo/monkit-zipkin). https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/trace.png Before our crazy Go rewrite of everything (https://www.spacemonkey.com/blog/posts/go-space-monkey) (and before we had even seen Google's Dapper paper), we were a Python shop, and all of our "interesting" functions were decorated with a helper that collected timing information and sent it to Graphite. When we transliterated to Go, we wanted to preserve that functionality, so the first version of our monitoring package was born. Over time it started to get janky, especially as we found Zipkin and started adding tracing functionality to it. We rewrote all of our Go code to use Google contexts, and then realized we could get call graph information. We decided a refactor and then an all-out rethinking of our monitoring package was best, and so now we have this library. Sometimes you really want callstack contextual information without having to pass arguments through everything on the call stack. In other languages, many people implement this with thread-local storage. Example: let's say you have written a big system that responds to user requests. All of your libraries log using your log library.
During initial development everything is easy to debug, since there's low user load, but now you've scaled and there's OVER TEN USERS and it's kind of hard to tell what log lines were caused by what. Wouldn't it be nice to add request ids to all of the log lines kicked off by that request? Then you could grep for all log lines caused by a specific request id. Geez, it would suck to have to pass all contextual debugging information through all of your callsites. Google solved this problem by always passing a context.Context interface through from call to call. A Context is basically just a mapping of arbitrary keys to arbitrary values that users can add new values for. This way if you decide to add a request context, you can add it to your Context and then all callsites that descend from that place will have the new data in their contexts. It is admittedly very verbose to add contexts to every function call. Painfully so. I hope to write more about it in the future, but Google also wrote up their thoughts about it (https://blog.golang.org/context), which you can go read. For now, just swallow your disgust and let's keep moving. Let's make a super simple Varnish (https://www.varnish-cache.org/) clone. Open up gedit! (Okay just kidding, open whatever text editor you want.) For this motivating program, we won't even add the caching, though there's comments for where to add it if you'd like. For now, let's just make a barebones system that will proxy HTTP requests. We'll call it VLite, but maybe we should call it VReallyLite. Run and build this and open localhost:8080 in your browser. If you use the default proxy target, it should inform you that the world hasn't been destroyed yet. The first thing you'll want to do is add the small amount of boilerplate to make the instrumentation we're going to add to your process observable later. Import the basic monkit packages: and then register environmental statistics and kick off a goroutine in your main method to serve debug requests: Rebuild, and then check out localhost:9000/stats (or localhost:9000/stats/json, if you prefer) in your browser! Remember what I said about Google's contexts (https://blog.golang.org/context)? It might seem a bit overkill for such a small project, but it's time to add them. To help out here, I've created a library that constructs contexts for you for incoming HTTP requests. Nothing that's about to happen requires my webhelp library (https://godoc.org/github.com/jtolds/webhelp), but here is the code now refactored to receive and pass contexts through our two per-request calls. You can create a new context for a request however you want. One reason to use something like webhelp is that the cancelation feature of Contexts is hooked up to the HTTP request getting canceled. Let's start to get statistics about how many requests we receive! First, this package (main) will need to get a monitoring Scope. Add this global definition right after all your imports, much like you'd create a logger with many logging libraries: Now, make the error return value of HandleHTTP named (so, (err error)), and add this defer line as the very first instruction of HandleHTTP: Let's also add the same line (albeit modified for the lack of error) to Proxy, replacing &err with nil: You should now have something like: We'll unpack what's going on here, but for now: For this new funcs dataset, if you want a graph, you can download a dot graph at localhost:9000/funcs/dot and json information from localhost:9000/funcs/json.
You should see something like: with a similar report for the Proxy method, or a graph like: https://raw.githubusercontent.com/spacemonkeygo/monkit/master/images/handlehttp.png This data reports the overall callgraph of execution for known traces, along with how many of each function are currently running, the most running concurrently (the highwater), how many were successful along with quantile timing information, how many errors there were (with quantile timing information if applicable), and how many panics there were. Since the Proxy method isn't capturing a returned err value, and since HandleHTTP always returns nil, this example won't ever have failures. If you're wondering about the success count being higher than you expected, keep in mind your browser probably requested a favicon.ico. Cool, eh? How it works is an interesting line of code - there's three function calls. If you look at the Go spec, all of the function calls will run at the time the function starts except for the very last one. The first function call, mon.Task(), creates or looks up a wrapper around a Func. You could get this yourself by requesting mon.Func() inside of the appropriate function or mon.FuncNamed(). Both mon.Task() and mon.Func() are inspecting runtime.Caller to determine the name of the function. Because this is a heavy operation, you can actually store the result of mon.Task() and reuse it somewhere else if you prefer, so instead of you could instead use which is more performant every time after the first time. runtime.Caller only gets called once. Careful! Don't use the same myFuncMon in different functions unless you want to screw up your statistics! The second function call starts all the various stop watches and bookkeeping to keep track of the function. It also mutates the context pointer it's given to extend the context with information about what current span (in Zipkin parlance) is active. Notably, you *can* pass nil for the context if you really don't want a context. You just lose callgraph information. The last function call stops all the stop watches and makes a note of any observed errors or panics (it repanics after observing them). Turns out, we don't even need to change our program anymore to get rich tracing information! Open your browser and go to localhost:9000/trace/svg?regex=HandleHTTP. It won't load, and in fact, it's waiting for you to open another tab and refresh localhost:8080 again. Once you retrigger the actual application behavior, the trace regex will capture a trace starting on the first function that matches the supplied regex, and return an svg. Go back to your first tab, and you should see a relatively uninteresting but super promising svg. Let's make the trace more interesting. Add a to your HandleHTTP method, rebuild, and restart. Load localhost:8080, then start a new request to your trace URL, then reload localhost:8080 again. Flip back to your trace, and you should see that the Proxy method only takes a portion of the time of HandleHTTP! https://cdn.rawgit.com/spacemonkeygo/monkit/master/images/trace.svg There's multiple ways to select a trace. You can select by regex using the preselect method (default), which first evaluates the regex on all known functions for sanity checking. Sometimes, however, the function you want to trace may not yet be known to monkit, in which case you'll want to turn preselection off. You may have a bad regex, or you may be in this case if you get the error "Bad Request: regex preselect matches 0 functions."
Another way to select a trace is by providing a trace id, which we'll get to next! Make sure to check out what the addition of the time.Sleep call did to the other reports. It's easy to write plugins for monkit! Check out our first one that exports data to Zipkin (http://zipkin.io/)'s Scribe API: https://github.com/spacemonkeygo/monkit-zipkin We plan to have more (for HTrace, OpenTracing, etc, etc), soon!
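To recap the core instrumentation pattern walked through above, a hedged sketch follows (the import path differs between monkit versions, and the handler body is elided):

    package vlite

    import (
        "context"
        "net/http"

        monkit "gopkg.in/spacemonkeygo/monkit.v2"
    )

    // mon is the package-level monitoring Scope described above.
    var mon = monkit.Package()

    func HandleHTTP(ctx context.Context, w http.ResponseWriter, r *http.Request) (err error) {
        // Creates/looks up the Func, starts the stopwatches, extends ctx with
        // the current span, and records err (or a panic) when the defer runs.
        defer mon.Task()(&ctx)(&err)

        // ... proxy the request here ...
        return nil
    }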
Package influxql implements a parser for the InfluxDB query language. InfluxQL is a DML and DDL language for the InfluxDB time series database. It provides the ability to query for aggregate statistics as well as create and configure the InfluxDB server. See https://docs.influxdata.com/influxdb/latest/query_language/ for a reference on using InfluxQL.
Package database provides a block and metadata storage database. This package provides a database layer to store and retrieve block data and arbitrary metadata in a simple and efficient manner. The default backend, ffldb, has a strong focus on speed, efficiency, and robustness. It makes use of leveldb for the metadata, flat files for block storage, and strict checksums in key areas to ensure data integrity. A quick overview of the features database provides is as follows: The main entry point is the DB interface. It exposes functionality for transactional-based access and storage of metadata and block data. It is obtained via the Create and Open functions which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver. The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below. The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root metadata bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged. A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead. The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket. As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface. The root metadata bucket is the upper-most bucket in which data is stored and is created at the same time as the database. Use the Metadata function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use. This example demonstrates creating a new database and using a managed read-write transaction to store and retrieve metadata.
This example demonstrates creating a new database, using a managed read-write transaction to store a block, and using a managed read-only transaction to fetch the block.
Package grocksdb provides the ability to create and access RocksDB databases. grocksdb.OpenDb opens and creates databases. The DB struct returned by OpenDb provides DB.Get, DB.Put, DB.Merge and DB.Delete to modify and query the database. For bulk reads, use an Iterator. If you want to avoid disturbing your live traffic while doing the bulk read, be sure to call SetFillCache(false) on the ReadOptions you use when creating the Iterator. Batched, atomic writes can be performed with a WriteBatch and DB.Write. If your working dataset does not fit in memory, you'll want to add a bloom filter to your database. NewBloomFilter and BlockBasedTableOptions.SetFilterPolicy are what you want. NewBloomFilter takes the number of bits in the filter to use per key in your database. If you're using a custom comparator in your code, be aware you may have to make your own filter policy object. This documentation is not a complete discussion of RocksDB. Please read the RocksDB documentation <http://rocksdb.org/> for information on its operation. You'll find lots of goodies there.
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package requires you to log only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values, log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json This package implements three Handlers that add debugging information to the context, CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context. This will output a line that looks like: Here's an example that logs the call stack rather than just the call site. This will output a line that looks like: The "%+v" format instructs the handler to include the path of the source file relative to the compile time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler.
Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned up your logging level to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it would not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. For example: If this message is not logged for any reason (like logging at the Error level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: Only now, even after a player has died, the logger will still report they are alive because the logging context is evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you: - Any log record containing an error will still be printed with the error explained to you as part of the log record. - Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values. Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source like if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs.
Now, whenever we perform any operation with the tab, we'll log with its embedded logger and it will include the tab title automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but that would mean that we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. But otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys". They're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which will return no error but panics on failure. They are all available on the Must object. For example: All of the following excellent projects inspired the design of this library: code.google.com/p/log4go github.com/op/go-logging github.com/technoweenie/grohl github.com/Sirupsen/logrus github.com/kr/logfmt github.com/spacemonkeygo/spacelog golang's stdlib, notably io and net/http https://xkcd.com/927/
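Tying the pieces above together, a hedged sketch of basic logging, contextual loggers, and handler composition (the file path and values are placeholders):

    package main

    import (
        "os"

        log "github.com/inconshreveable/log15"
    )

    func main() {
        // The first argument is the implicit "msg" value; the rest alternate key/value.
        log.Info("page accessed", "path", "/org/71/profile", "user_id", 9001)

        // A contextual logger: "module" is included on every line it writes.
        srvlog := log.New("module", "app/server")
        srvlog.Warn("abnormal conn rate", "rate", 0.500, "low", 0.100, "high", 0.800)

        // Compose handlers: logfmt to stderr, plus Error-and-above as JSON to a file.
        srvlog.SetHandler(log.MultiHandler(
            log.StreamHandler(os.Stderr, log.LogfmtFormat()),
            log.LvlFilterHandler(log.LvlError,
                log.Must.FileHandler("/var/log/service.json", log.JsonFormat())),
        ))
    }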
Package pgx is a PostgreSQL database driver. pgx provides lower level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL specific features. Import github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver. pgx implements Query and Scan in the familiar database/sql style. pgx also implements QueryRow in the same style as database/sql. Use Exec to execute a query that does not return a result set. Connection pool usage is explicit and configurable. In pgx, a connection can be created and managed directly, or a connection pool with a configurable maximum connections can be used. Also, the connection pool offers an after connect hook that allows every connection to be automatically setup before being made available in the connection pool. This is especially useful to ensure all connections have the same prepared statements available or to change any other connection settings. It delegates Query, QueryRow, Exec, and Begin functions to an automatically checked out and released connection so you can avoid manually acquiring and releasing connections when you do not need that level of control. pgx maps between all common base types directly between Go and PostgreSQL. In particular: pgx can map nulls in two ways. The first is Null* types that have a data field and a valid field. They work in a similar fashion to database/sql. The second is to use a pointer to a pointer. pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array type. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice an error will occur. pgx includes an Hstore type and a NullHstore type. Hstore is simply a map[string]string and is preferred when the hstore contains no nulls. NullHstore follows the Null* pattern and supports null values. pgx includes built-in support to marshal and unmarshal between Go types and the PostgreSQL JSON and JSONB. pgx encodes from net.IPNet to and from inet and cidr PostgreSQL types. In addition, as a convenience pgx will encode from a net.IP; it will assume a /32 netmask for IPv4 and a /128 for IPv6. pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. Support can be added for additional types like point, hstore, numeric, etc. that do not have direct mappings in Go by the types implementing ScannerPgx and Encoder. Custom types can support text or binary formats. Binary format can provide a large performance increase. The natural place for deciding the format for a value would be in ScannerPgx as it is responsible for decoding the returned data. However, that is impossible as the query has already been sent by the time the ScannerPgx is invoked. The solution to this is the global DefaultTypeFormats. If a custom type prefers binary format it should register it there. Note that the type is referred to by name, not by OID. This is because custom PostgreSQL types like hstore will have different OIDs on different servers. When pgx establishes a connection it queries the pg_type table for all types. It then matches the names in DefaultTypeFormats with the returned OIDs and stores it in Conn.PgTypes. See example_custom_type_test.go for an example of a custom type for the PostgreSQL point type.
pgx also includes support for custom types implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces. []byte passed as arguments to Query, QueryRow, and Exec are passed unmodified to PostgreSQL. In like manner, a *[]byte passed to Scan will be filled with the raw bytes returned by PostgreSQL. This can be especially useful for reading varchar, text, json, and jsonb values directly into a []byte and avoiding the type conversion from string. Transactions are started by calling Begin or BeginIso. The BeginIso variant creates a transaction with a specified isolation level. Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a CopyFromSource interface. If the data is already in a [][]interface{} use CopyFromRows to wrap it in a CopyFromSource interface. Or implement CopyFromSource to avoid buffering the entire data set in memory. CopyFrom can be faster than an insert with as few as 5 rows. pgx can listen to the PostgreSQL notification system with the WaitForNotification function. It takes a maximum time to wait for a notification. The pgx ConnConfig struct has a TLSConfig field. If this field is nil, then TLS will be disabled. If it is present, then it will be used to configure the TLS connection. This allows total configuration of the TLS connection. pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. The log15 package (http://gopkg.in/inconshreveable/log15.v2) satisfies this interface and it is simple to define adapters for other loggers. Set LogLevel to control logging verbosity.
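A hedged sketch of the connection-pool and query patterns described above; the config values and the "users" table are placeholders, and the identifiers follow the older pgx API this documentation describes.

    package main

    import (
        "fmt"
        "log"

        "github.com/jackc/pgx"
    )

    func main() {
        pool, err := pgx.NewConnPool(pgx.ConnPoolConfig{
            ConnConfig: pgx.ConnConfig{
                Host:     "localhost",
                User:     "app",
                Database: "app",
            },
            MaxConnections: 5,
        })
        if err != nil {
            log.Fatal(err)
        }
        defer pool.Close()

        // Query/Scan in the familiar database/sql style, using $1 placeholders.
        rows, err := pool.Query("select id, name from users where weight > $1", 100)
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()
        for rows.Next() {
            var id int64
            var name string
            if err := rows.Scan(&id, &name); err != nil {
                log.Fatal(err)
            }
            fmt.Println(id, name)
        }
        if rows.Err() != nil {
            log.Fatal(rows.Err())
        }
    }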
Package dburl provides a standard, net/url.URL style mechanism for parsing and opening SQL database connection strings for Go. It provides a standardized way to parse and open URLs for popular databases PostgreSQL, MySQL, SQLite3, Oracle Database, Microsoft SQL Server, in addition to most other SQL databases with a publicly available Go driver. See the package documentation README section for more details.
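For instance, a hedged sketch (a matching database/sql driver still has to be imported for Open to work, and the connection details are placeholders):

    package main

    import (
        "fmt"
        "log"

        _ "github.com/lib/pq" // driver needed by dburl.Open for postgres URLs
        "github.com/xo/dburl"
    )

    func main() {
        // Parse only: inspect the resolved driver and DSN.
        u, err := dburl.Parse("postgres://user:pass@localhost:5432/mydb?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(u.Driver, u.DSN)

        // Or open a *sql.DB directly from the URL.
        db, err := dburl.Open("postgres://user:pass@localhost:5432/mydb?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()
    }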
Package blockchain implements Decred block handling and chain selection rules. The Decred block handling and chain selection rules are an integral, and quite likely the most important, part of Decred. At its core, Decred is a distributed consensus of which blocks are valid and which ones will comprise the main block chain (public ledger) that ultimately determines accepted transactions, so it is extremely important that fully validating nodes agree on all rules. At a high level, this package provides support for inserting new blocks into the block chain according to the aforementioned rules. It includes functionality such as rejecting duplicate blocks, ensuring blocks and transactions follow all rules, orphan handling, and best chain selection along with reorganization. Since this package does not deal with other Decred specifics such as network communication or wallets, it provides a notification system which gives the caller a high level of flexibility in how they want to react to certain events such as orphan blocks which need their parents requested and newly connected main chain blocks which might result in wallet updates. Before a block is allowed into the block chain, it must go through an intensive series of validation rules. The following list serves as a general outline of those rules to provide some intuition into what is going on under the hood, but is by no means exhaustive: Errors returned by this package are either the raw errors provided by underlying calls or of type blockchain.RuleError. This allows the caller to differentiate between unexpected errors, such as database errors, versus errors due to rule violations through type assertions. In addition, callers can programmatically determine the specific rule violation by examining the ErrorCode field of the type asserted blockchain.RuleError.
Package blockchain implements Decred block handling and chain selection rules. The Decred block handling and chain selection rules are an integral, and quite likely the most important, part of Decred. At its core, Decred is a distributed consensus of which blocks are valid and which ones will comprise the main block chain (public ledger) that ultimately determines accepted transactions, so it is extremely important that fully validating nodes agree on all rules. At a high level, this package provides support for inserting new blocks into the block chain according to the aforementioned rules. It includes functionality such as rejecting duplicate blocks, ensuring blocks and transactions follow all rules, and best chain selection along with reorganization. Since this package does not deal with other Decred specifics such as network communication or wallets, it provides a notification system which gives the caller a high level of flexibility in how they want to react to certain events such as newly connected main chain blocks which might result in wallet updates. Before a block is allowed into the block chain, it must go through an intensive series of validation rules. The following list serves as a general outline of those rules to provide some intuition into what is going on under the hood, but is by no means exhaustive: This package supports headers-first semantics such that block data can be processed out of order so long as the associated header is already known. The headers themselves, however, must be processed in the correct order since headers that do not properly connect are rejected. In other words, orphan headers are not allowed. The processing code always maintains the best chain as the branch tip that has the most cumulative proof of work, so it is important to keep that in mind when considering errors returned from processing blocks. Notably, due to the ability to process blocks out of order, and the fact blocks can only be fully validated once all of their ancestors have the block data available, it is to be expected that no error is returned immediately for blocks that are valid enough to make it to the point they require the remaining ancestor block data to be fully validated even though they might ultimately end up failing validation. Similarly, because the data for a block becoming available makes any of its direct descendants that already have their data available eligible for validation, an error being returned does not necessarily mean the block being processed is the one that failed validation. Errors returned by this package have full support for the standard library errors.Is and errors.As methods and are either the raw errors provided by underlying calls or of type blockchain.RuleError, possibly wrapped in a blockchain.MultiError. This allows the caller to differentiate between unexpected errors, such as database errors, versus errors due to rule violations through errors.As. In addition, callers can programmatically determine the specific rule violation by making use of errors.Is with any of the wrapped error kinds.
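A hedged sketch of inspecting processing errors with errors.Is/errors.As as described above; the module path and the ErrDuplicateBlock kind are assumptions used purely for illustration.

    package example

    import (
        "errors"

        "github.com/decred/dcrd/blockchain/v4"
    )

    // classifyProcessError distinguishes consensus rule violations from
    // unexpected failures for an error returned by block processing.
    func classifyProcessError(err error) string {
        if err == nil {
            return "accepted"
        }
        var rerr blockchain.RuleError
        switch {
        case errors.Is(err, blockchain.ErrDuplicateBlock):
            return "duplicate block"
        case errors.As(err, &rerr):
            return "rule violation: " + rerr.Error()
        default:
            return "unexpected error (e.g. database failure): " + err.Error()
        }
    }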
This ip2location package provides a fast lookup of country, region, city, latitude, longitude, ZIP code, time zone, ISP, domain name, connection type, IDD code, area code, weather station code, station name, MCC, MNC, mobile brand, elevation, usage type, address type, IAB category, district, autonomous system number (ASN) and autonomous system (AS) from IP address by using IP2Location database.
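A minimal lookup sketch, assuming the v9 API with OpenDB and Get_all; the BIN database path and the IP address are placeholders:

    package main

    import (
        "fmt"

        "github.com/ip2location/ip2location-go/v9"
    )

    func main() {
        // Open a local IP2Location BIN database file.
        db, err := ip2location.OpenDB("./IP2LOCATION-LITE-DB11.BIN")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // Look up all fields supported by the opened database for an address.
        rec, err := db.Get_all("8.8.8.8")
        if err != nil {
            panic(err)
        }
        fmt.Println(rec.Country_long, rec.Region, rec.City)
    }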
Package u2f implements the server-side parts of the FIDO Universal 2nd Factor (U2F) specification. Applications will usually persist Challenge and Registration objects in a database. To enrol a new token: To perform an authentication: The FIDO U2F specification can be found here: https://fidoalliance.org/specifications/download
Package grocksdb provides the ability to create and access RocksDB databases. grocksdb.OpenDb opens and creates databases. The DB struct returned by OpenDb provides DB.Get, DB.Put, DB.Merge and DB.Delete to modify and query the database. For bulk reads, use an Iterator. If you want to avoid disturbing your live traffic while doing the bulk read, be sure to call SetFillCache(false) on the ReadOptions you use when creating the Iterator. Batched, atomic writes can be performed with a WriteBatch and DB.Write. If your working dataset does not fit in memory, you'll want to add a bloom filter to your database. NewBloomFilter and BlockBasedTableOptions.SetFilterPolicy is what you want. The argument to NewBloomFilter is the number of bits in the filter to use per key in your database. If you're using a custom comparator in your code, be aware you may have to make your own filter policy object. This documentation is not a complete discussion of RocksDB. Please read the RocksDB documentation <http://rocksdb.org/> for information on its operation. You'll find lots of goodies there. The package ships with default link options; to customize them, you can try the build tag `grocksdb_clean_link` for a cleaner set of flags, or `grocksdb_no_link` to take full control through the `CGO_LDFLAGS` environment variable.
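A minimal sketch of opening a database with a bloom filter and doing a put/get round trip; the import path github.com/linxGnu/grocksdb and the database path are assumptions/placeholders:

    package main

    import (
        "fmt"

        "github.com/linxGnu/grocksdb"
    )

    func main() {
        // A bloom filter keeps point lookups cheap when the working set
        // does not fit in memory.
        bbto := grocksdb.NewDefaultBlockBasedTableOptions()
        bbto.SetFilterPolicy(grocksdb.NewBloomFilter(10))

        opts := grocksdb.NewDefaultOptions()
        opts.SetBlockBasedTableFactory(bbto)
        opts.SetCreateIfMissing(true)

        db, err := grocksdb.OpenDb(opts, "/tmp/grocksdb_example")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        wo := grocksdb.NewDefaultWriteOptions()
        ro := grocksdb.NewDefaultReadOptions()

        // Simple put/get round trip.
        if err := db.Put(wo, []byte("key"), []byte("value")); err != nil {
            panic(err)
        }
        v, err := db.Get(ro, []byte("key"))
        if err != nil {
            panic(err)
        }
        defer v.Free()
        fmt.Println(string(v.Data()))
    }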
Package dbx provides a set of DB-agnostic and easy-to-use query building methods for relational databases. This example shows how to do CRUD operations. This example shows how to populate DB data in different ways. This example shows how to use query builder to build DB queries. This example shows how to use query builder in transactions.
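As a rough illustration of the query builder (a sketch only; the ozzo-dbx import path, DSN, table and column names are assumptions/placeholders):

    package main

    import (
        "fmt"

        "github.com/go-ozzo/ozzo-dbx"
        _ "github.com/go-sql-driver/mysql"
    )

    type User struct {
        ID   int    `db:"id"`
        Name string `db:"name"`
    }

    func main() {
        db, err := dbx.Open("mysql", "user:pass@/example")
        if err != nil {
            panic(err)
        }

        // Build and run a query with the fluent query builder.
        var user User
        err = db.Select("id", "name").
            From("users").
            Where(dbx.HashExp{"id": 100}).
            One(&user)
        if err != nil {
            panic(err)
        }
        fmt.Println(user.Name)
    }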
Package gocqlx is an idiomatic extension to gocql that provides usability features. With gocqlx you can bind the query parameters from maps and structs, use named query parameters (:identifier) and scan the query results into structs and slices. It comes with a fluent and flexible CQL query builder and a database migrations module.
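A hedged sketch of building a query with named parameters and scanning the results into structs, assuming the scylladb/gocqlx/v2 and qb import paths; the keyspace, table and column names are placeholders:

    package main

    import (
        "fmt"

        "github.com/gocql/gocql"
        "github.com/scylladb/gocqlx/v2"
        "github.com/scylladb/gocqlx/v2/qb"
    )

    type Person struct {
        FirstName string
        LastName  string
    }

    func main() {
        cluster := gocql.NewCluster("127.0.0.1")
        session, err := gocqlx.WrapSession(cluster.CreateSession())
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // Build a SELECT with a named parameter and bind it from a map.
        stmt, names := qb.Select("example.person").
            Columns("first_name", "last_name").
            Where(qb.Eq("first_name")).
            ToCql()

        var people []Person
        q := session.Query(stmt, names).BindMap(qb.M{"first_name": "Jane"})
        if err := q.SelectRelease(&people); err != nil {
            panic(err)
        }
        fmt.Println(people)
    }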
Package pq is a pure Go Postgres driver for the database/sql package. In most cases clients will use the database/sql package instead of using this package directly. For example: You can also connect to a database using a URL. For example: Similarly to libpq, when establishing a connection using pq you are expected to supply a connection string containing zero or more parameters. A subset of the connection parameters supported by libpq are also supported by pq. Additionally, pq also lets you specify run-time parameters (such as search_path or work_mem) directly in the connection string. This is different from libpq, which does not allow run-time parameters in the connection string, instead requiring you to supply them in the options parameter. For compatibility with libpq, the following special connection parameters are supported: Valid values for sslmode are: See http://www.postgresql.org/docs/current/static/libpq-connect.html#LIBPQ-CONNSTRING for more information about connection string parameters. Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value. In addition to the parameters listed above, any run-time parameter that can be set at backend start time can be set in the connection string. For more information, see http://www.postgresql.org/docs/current/static/runtime-config.html. Most environment variables as specified at http://www.postgresql.org/docs/current/static/libpq-envars.html supported by libpq are also supported by pq. If any of the environment variables not supported by pq are set, pq will panic during connection establishment. Environment variables have a lower precedence than explicitly provided connection parameters. The pgpass mechanism as described in http://www.postgresql.org/docs/current/static/libpq-pgpass.html is supported, but on Windows PGPASSFILE must be specified explicitly. database/sql does not dictate any specific format for parameter markers in query strings, and pq uses the Postgres-native ordinal markers, as shown above. The same marker can be reused for the same parameter: pq does not support the LastInsertId() method of the Result type in database/sql. To return the identifier of an INSERT (or UPDATE or DELETE), use the Postgres RETURNING clause with a standard Query or QueryRow call: For more details on RETURNING, see the Postgres documentation: For additional instructions on querying see the documentation for the database/sql package. Parameters pass through driver.DefaultParameterConverter before they are handled by this package. When the binary_parameters connection option is enabled, []byte values are sent directly to the backend as data in binary format. This package returns the following types for values from the PostgreSQL backend: All other types are returned directly from the backend as []byte values in text format. pq may return errors of type *pq.Error which can be interrogated for error details: See the pq.Error type for details. You can perform bulk imports by preparing a statement returned by pq.CopyIn (or pq.CopyInSchema) in an explicit transaction (sql.Tx). The returned statement handle can then be repeatedly "executed" to copy data into the target table. After all data has been processed you should call Exec() once with no arguments to flush all buffered data. 
Any call to Exec() might return an error which should be handled appropriately, but because of the internal buffering an error returned by Exec() might not be related to the data passed in the call that failed. CopyIn uses COPY FROM internally. It is not possible to COPY outside of an explicit transaction in pq. Usage example: PostgreSQL supports a simple publish/subscribe model over database connections. See http://www.postgresql.org/docs/current/static/sql-notify.html for more information about the general mechanism. To start listening for notifications, you first have to open a new connection to the database by calling NewListener. This connection can not be used for anything other than LISTEN / NOTIFY. Calling Listen will open a "notification channel"; once a notification channel is open, a notification generated on that channel will effect a send on the Listener.Notify channel. A notification channel will remain open until Unlisten is called, though connection loss might result in some notifications being lost. To solve this problem, Listener sends a nil pointer over the Notify channel any time the connection is re-established following a connection loss. The application can get information about the state of the underlying connection by setting an event callback in the call to NewListener. A single Listener can safely be used from concurrent goroutines, which means that there is often no need to create more than one Listener in your application. However, a Listener is always connected to a single database, so you will need to create a new Listener instance for every database you want to receive notifications in. The channel name in both Listen and Unlisten is case sensitive, and can contain any characters legal in an identifier (see http://www.postgresql.org/docs/current/static/sql-syntax-lexical.html#SQL-SYNTAX-IDENTIFIERS for more information). Note that the channel name will be truncated to 63 bytes by the PostgreSQL server. You can find a complete, working example of Listener usage at https://godoc.org/gitee.com/opengauss/openGauss-connector-go-pq/example/listen. If you need support for Kerberos authentication, add the following to your main package: This package is in a separate module so that users who don't need Kerberos don't have to download unnecessary dependencies. When imported, additional connection string parameters are supported:
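Returning to bulk imports, the following sketch shows the CopyIn pattern described above; it assumes database/sql and the pq package are imported (the lib/pq import path is used as an assumption here, and the same pattern applies to compatible forks), and the table and column names are placeholders:

    type user struct {
        Name string
        Age  int
    }

    // copyUsers bulk-loads rows with pq.CopyIn inside an explicit transaction.
    func copyUsers(db *sql.DB, users []user) error {
        txn, err := db.Begin()
        if err != nil {
            return err
        }
        stmt, err := txn.Prepare(pq.CopyIn("users", "name", "age"))
        if err != nil {
            txn.Rollback()
            return err
        }
        for _, u := range users {
            // An error here may relate to earlier rows because of buffering.
            if _, err := stmt.Exec(u.Name, u.Age); err != nil {
                txn.Rollback()
                return err
            }
        }
        // A final Exec with no arguments flushes the buffered data.
        if _, err := stmt.Exec(); err != nil {
            txn.Rollback()
            return err
        }
        if err := stmt.Close(); err != nil {
            txn.Rollback()
            return err
        }
        return txn.Commit()
    }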
Package scany is a set of packages for scanning data from a database into Go structs and more. scany isn't limited to any specific database. It integrates with database/sql, so any database with database/sql driver is supported. It also works with https://github.com/jackc/pgx native interface. Apart from the out of the box support, scany can be easily extended to work with almost any database library. scany contains the following packages: sqlscan package works with database/sql standard library. pgxscan package works with github.com/jackc/pgx library native interface. dbscan package works with an abstract database and can be integrated with any library that has a concept of rows. This particular package implements core scany features and contains all the logic. Both sqlscan and pgxscan use dbscan internally.
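A minimal sqlscan sketch (the scany/v2 import path, driver, DSN and query are assumptions/placeholders):

    package main

    import (
        "context"
        "database/sql"
        "fmt"

        "github.com/georgysavva/scany/v2/sqlscan"
        _ "github.com/lib/pq"
    )

    type User struct {
        ID    int64
        Name  string
        Email string
    }

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/example?sslmode=disable")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // Scan all result rows directly into a slice of structs.
        var users []*User
        if err := sqlscan.Select(context.Background(), db, &users,
            "SELECT id, name, email FROM users"); err != nil {
            panic(err)
        }
        fmt.Println(users)
    }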
Package txdb is a single transaction based database/sql/driver implementation. When the connection is opened, it starts a transaction and all operations performed on the returned database/sql.DB will be within that transaction. If concurrent actions are performed, a lock is acquired and the connection is always released; statements and rows do not hold the connection. Why is it useful? A very basic use case is functional testing: you can prepare a test database once and do not have to reload it within each test. All tests are isolated within a transaction and execute fast. And you do not have to hide your database/sql.DB reference behind an interface in your code, since txdb behaves like a standard database/sql/driver.Driver. This driver supports any database/sql/driver.Driver connection to be opened. You can register txdb for different drivers and have it under different driver names. Under the hood, whenever a txdb driver is opened, it attempts to open a real connection and starts a transaction. When Close is called, it rolls the transaction back, leaving your prepared test database in the same state as before. Example, assuming you have a mysql database called txdb_test and a table users with a username: Every time you run this application, the database will remain in the same state as before.
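A sketch of that pattern, assuming the DATA-DOG/go-txdb import path and a mysql database named txdb_test with a users table:

    package main

    import (
        "database/sql"
        "fmt"

        "github.com/DATA-DOG/go-txdb"
        _ "github.com/go-sql-driver/mysql"
    )

    func init() {
        // Register a txdb driver that wraps the real mysql driver and DSN.
        txdb.Register("txdb", "mysql", "root@/txdb_test")
    }

    func main() {
        // Every sql.Open("txdb", ...) runs inside its own transaction that is
        // rolled back on Close, leaving the database untouched.
        db, err := sql.Open("txdb", "identifier")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        if _, err := db.Exec(`INSERT INTO users (username) VALUES ('gopher')`); err != nil {
            panic(err)
        }

        var count int
        if err := db.QueryRow(`SELECT COUNT(*) FROM users`).Scan(&count); err != nil {
            panic(err)
        }
        fmt.Println("users visible in this transaction:", count)
    }

Because Close rolls the transaction back, each run of the program starts from the same database state.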
Package puddle is a generic resource pool. Puddle is a tiny generic resource pool library for Go that uses the standard context library to signal cancellation of acquires. It is designed to contain the minimum functionality a resource pool needs that cannot be implemented without concurrency concerns. For example, a database connection pool may use puddle internally and implement health checks and keep-alive behavior without needing to implement any concurrent code of its own.
Package art implements an Adaptive Radix Tree (ART) in pure Go. Note that this implementation is not thread-safe, but adding thread safety would be straightforward. The design of ART is based on "The Adaptive Radix Tree: ARTful Indexing for Main-Memory Databases" [1]. Usage The current implementation was also inspired by [2] and [3] [1] http://db.in.tum.de/~leis/papers/ART.pdf (Specification) [2] https://github.com/armon/libart (C99 implementation) [3] https://github.com/kellydunn/go-art (other Go implementation)
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. See https://github.com/go-kivik/kivik/wiki/Kivik-database-drivers for a list. The kivik driver system is modeled after the standard library's sql and sql/driver packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB.
Package config provides convenient access methods to configuration stored as JSON or YAML. Let's start with a simple YAML file config.yml: We can parse it using ParseYaml(), which will return a *Config instance on success: An equivalent JSON configuration could be built using ParseJson(): From now on, we can retrieve configuration values using a path in dotted notation: Besides String(), other types can be fetched directly: Bool(), Float64(), Int(), Map() and List(). All these methods will return an error if the path doesn't exist, or the value doesn't match or can't be converted to the requested type. A nested configuration can be fetched using Get(). Here we get a new *Config instance with a subset of the configuration: Then the inner values are fetched relative to the subset: For lists, the dotted path must use an index to refer to a specific value. To retrieve the information from a user stored in the configuration above: JSON or YAML strings can be created by calling the appropriate Render*() functions. Here's how we render a configuration like the one used in these examples: This results in a configuration string to be stored in a file or database. For more convenience, it can also parse OS environment variables and command line arguments. We can also specify the order of parsing: In the case of OS environment variables, all keys existing at the moment of parsing will be looked up in the environment, but uppercased and with `_` as the separator instead of `.`. If EnvPrefix() is used, the given prefix will be used to look up the environment variable, e.g. PREFIX_FOO_BAR will set foo.bar. In the case of flags, the separator will be `-`. In the case of command line arguments, the regular dotted notation can be used for all keys. To see the existing keys, we can run the application with `-h`. We can use unsafe methods to get values: they work like the regular ones, but are prefixed with `U`.
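A small sketch of the dotted-path accessors described above (the olebedev/config import path is assumed; keys and values are placeholders):

    package main

    import (
        "fmt"

        "github.com/olebedev/config"
    )

    func main() {
        // A tiny inline YAML document.
        yml := `
    app:
      name: demo
      port: 8080
    users:
      - name: alice
      - name: bob
    `
        cfg, err := config.ParseYaml(yml)
        if err != nil {
            panic(err)
        }

        // Dotted-path access with typed getters.
        name, _ := cfg.String("app.name")
        port, _ := cfg.Int("app.port")

        // List elements are addressed by index.
        first, _ := cfg.String("users.0.name")

        // The U-prefixed variants return a value or a default instead of
        // an error.
        missing := cfg.UString("app.missing", "fallback")

        fmt.Println(name, port, first, missing)
    }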
sqltocsv is a package to make it dead easy to turn arbitrary database query results (in the form of database/sql Rows) into CSV output. Source and README at https://github.com/joho/sqltocsv
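A minimal sketch (driver, DSN, query and output file name are placeholders):

    package main

    import (
        "database/sql"

        "github.com/joho/sqltocsv"
        _ "github.com/lib/pq"
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/example?sslmode=disable")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        rows, err := db.Query("SELECT id, name FROM users")
        if err != nil {
            panic(err)
        }
        defer rows.Close()

        // Stream the result set straight into a CSV file.
        if err := sqltocsv.WriteFile("users.csv", rows); err != nil {
            panic(err)
        }
    }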
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. The officially supported drivers are: The Filesystem and Memory drivers are also available, but in early stages of development, and so many features do not yet work: The kivik driver system is modeled after the standard library's `sql` and `sql/driver` packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB. CouchDB stores JSON, so Kivik translates Go data structures to and from JSON as necessary. The conversion between Go data types and JSON, and vice versa, is handled automatically according to the rules and behavior described in the documentation for the standard library's `encoding/json` package (https://golang.org/pkg/encoding/json). One would be well-advised to become familiar with using `json` struct field tags (https://golang.org/pkg/encoding/json/#Marshal) when working with JSON documents. Most Kivik methods take `context.Context` as their first argument. This allows the cancellation of blocking operations in the case that the result is no longer needed. A typical use case for a web application would be to cancel a Kivik request if the remote HTTP client has disconnected, rendering the results of the query irrelevant. To learn more about Go's contexts, read the `context` package documentation (https://golang.org/pkg/context/) and read the Go blog post "Go Concurrency Patterns: Context" (https://blog.golang.org/context) for example code. If in doubt, you can pass `context.TODO()` as the context variable. Example: Kivik returns errors that embed an HTTP status code. In most cases, this is the HTTP status code returned by the server. The embedded HTTP status code may be accessed easily using the StatusCode() method, or with a type assertion to `interface { StatusCode() int }`. Example: Any error that does not conform to this interface will be assumed to represent a http.StatusInternalServerError status code. For common usage, authentication should be as simple as including the authentication credentials in the connection DSN. For example: This will connect to `localhost` on port 5984, using the username `admin` and the password `abc123`. When connecting to CouchDB (as in the above example), this will use cookie auth (https://docs.couchdb.org/en/stable/api/server/authn.html?highlight=cookie%20auth#cookie-authentication). Depending on which driver you use, there may be other ways to authenticate, as well. At the moment, the CouchDB driver is the only official driver which offers additional authentication methods. Please refer to the CouchDB package documentation for details (https://pkg.go.dev/github.com/go-kivik/couchdb/v3). With a client handle in hand, you can create a database handle with the DB() method to interact with a specific database.
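A minimal sketch of connecting and storing a document, assuming the v3 import paths for kivik and the CouchDB driver; the DSN, database name and document are placeholders:

    package main

    import (
        "context"
        "fmt"

        _ "github.com/go-kivik/couchdb/v3" // registers the "couch" driver
        "github.com/go-kivik/kivik/v3"
    )

    func main() {
        ctx := context.TODO()

        // Credentials in the DSN enable cookie auth against CouchDB.
        client, err := kivik.New("couch", "http://admin:abc123@localhost:5984/")
        if err != nil {
            panic(err)
        }

        // Get a handle to a specific database and store a document.
        db := client.DB(ctx, "animals")
        rev, err := db.Put(ctx, "cow", map[string]interface{}{"feet": 4, "greeting": "moo"})
        if err != nil {
            panic(err)
        }
        fmt.Println("stored revision:", rev)
    }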
Package puddle is a generic resource pool with a type-parametrized API. Puddle is a tiny generic resource pool library for Go that uses the standard context library to signal cancellation of acquires. It is designed to contain the minimum functionality a resource pool needs that cannot be implemented without concurrency concerns. For example, a database connection pool may use puddle internally and implement health checks and keep-alive behavior without needing to implement any concurrent code of its own.
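A minimal sketch using the generic API, assuming the jackc/puddle/v2 import path; the dialed address and pool size are placeholders:

    package main

    import (
        "context"
        "fmt"
        "net"

        "github.com/jackc/puddle/v2"
    )

    func main() {
        ctx := context.Background()

        // A pool of TCP connections.
        pool, err := puddle.NewPool(&puddle.Config[net.Conn]{
            Constructor: func(ctx context.Context) (net.Conn, error) {
                var d net.Dialer
                return d.DialContext(ctx, "tcp", "127.0.0.1:8080")
            },
            Destructor: func(c net.Conn) { c.Close() },
            MaxSize:    8,
        })
        if err != nil {
            panic(err)
        }
        defer pool.Close()

        // Acquire blocks until a resource is available or ctx is cancelled.
        res, err := pool.Acquire(ctx)
        if err != nil {
            panic(err)
        }
        defer res.Release()

        fmt.Fprintln(res.Value(), "hello")
    }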
Package jet is a complete solution for efficient and high performance database access, consisting of a type-safe SQL builder with code generation and automatic query result data mapping. Jet currently supports PostgreSQL, MySQL, MariaDB and SQLite. Future releases will add support for additional databases. Use the command below to add jet as a dependency to a go.mod project: The jet generator can be installed in one of the following ways: (Go1.16+) Install the jet generator using go install: go install github.com/go-jet/jet/v2/cmd/jet@latest Install the jet generator to the GOPATH/bin folder: cd $GOPATH/src/ && GO111MODULE=off go get -u github.com/go-jet/jet/cmd/jet Install the jet generator into a specific folder: git clone https://github.com/go-jet/jet.git cd jet && go build -o dir_path ./cmd/jet Make sure that the destination folder is added to the PATH environment variable. Jet requires an already defined database schema (with tables, enums, etc.), so that the jet generator can generate SQL Builder and Model files. File generation is very fast, and can be added as a pre-build step. Sample command: Before we can write SQL queries in Go, we need to import the generated SQL builder and model types: To write postgres SQL queries we import: Then we can write the SQL query: Now we can run the statement and store the result into the desired destination: We can print a statement to see the SQL query and arguments sent to the postgres server: Output: If we print the destination as json, we'll get: Detailed info about all statements, features and use cases can be found at the project wiki page - https://github.com/go-jet/jet/wiki.
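A rough sketch of building and running a statement (the generated table and model package paths, and the Actor table, are assumptions modeled on jet's sample schema; driver and DSN are placeholders):

    package main

    import (
        "database/sql"

        "github.com/go-jet/jet/v2/postgres"
        _ "github.com/lib/pq"

        // Generated packages; their import paths depend on your project layout.
        "myproject/gen/sample/public/model"
        "myproject/gen/sample/public/table"
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/sample?sslmode=disable")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // Type-safe SELECT built from the generated table definitions.
        stmt := postgres.SELECT(
            table.Actor.AllColumns,
        ).FROM(
            table.Actor,
        ).WHERE(
            table.Actor.ActorID.EQ(postgres.Int(1)),
        )

        // Run the statement and map the result into the generated model type.
        var dest model.Actor
        if err := stmt.Query(db, &dest); err != nil {
            panic(err)
        }
    }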
This ip2location package provides a fast lookup of country, region, city, latitude, longitude, ZIP code, time zone, ISP, domain name, connection type, IDD code, area code, weather station code, station name, MCC, MNC, mobile brand, elevation, and usage type from IP address by using IP2Location database.
Package countries - ISO 3166 (ISO3166-1, ISO3166, Digit, Alpha-2 and Alpha-3) country codes and names (in English and Russian), ISO 4217 currency designators, ITU-T E.164 IDD calling phone codes, country capitals, UN M.49 region codes, ccTLD country domains, IOC/NOC and FIFA letter codes, VERY FAST, NO maps[], NO slices[], NO external links/files/data, NO interface{}, NO specific dependencies, database compatible, emoji country flags and currencies support. Full support for the ISO-3166-1, ISO-4217, ITU-T E.164, Unicode CLDR and ccTLD standards. Package countries supports subdivisions as per ISO 3166-2. Data has been sourced from <https://www.ip2location.com/free/iso3166-2>. See license for further information.
Package sqlite3 provides an interface to SQLite3 databases. This works as a driver for database/sql. Installation Currently, go-sqlite3 supports the following data types. You can write your own extension module for sqlite3. For example, below is an extension for a Regexp matcher operation. It needs to be built as a so/dll shared library. And you need to register the extension module like below. Then, you can use this extension. You can hook and inject your code when the connection is established by setting ConnectHook to get the SQLiteConn. You can also use database/sql.Conn.Raw (Go >= 1.13): If you want to register Go functions as SQLite extension functions, you can make a custom driver by calling RegisterFunction from ConnectHook. You can then use the custom driver by passing its name to sql.Open. See the documentation of RegisterFunc for more details.
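A minimal sketch of the ConnectHook approach for registering a Go function as an SQLite function (the custom driver name and the function name are placeholders):

    package main

    import (
        "database/sql"
        "fmt"
        "strings"

        "github.com/mattn/go-sqlite3"
    )

    func main() {
        // Register a custom driver whose ConnectHook adds a Go function as an
        // SQLite scalar function on every new connection.
        sql.Register("sqlite3_custom", &sqlite3.SQLiteDriver{
            ConnectHook: func(conn *sqlite3.SQLiteConn) error {
                return conn.RegisterFunc("upper_go", strings.ToUpper, true)
            },
        })

        db, err := sql.Open("sqlite3_custom", ":memory:")
        if err != nil {
            panic(err)
        }
        defer db.Close()

        var out string
        if err := db.QueryRow(`SELECT upper_go('hello')`).Scan(&out); err != nil {
            panic(err)
        }
        fmt.Println(out) // HELLO
    }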
Package oracle implements gdb.Driver, which supports operations for the Oracle database. Note: 1. It does not support Save/Replace features. 2. It does not support LastInsertId.