Package sqlite provides a Go interface to SQLite 3. The semantics of this package are deliberately close to the SQLite3 C API. See the official C API introduction for an overview of the basics.

An SQLite connection is represented by a *Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (e.g. by using zombiezen.com/go/sqlite/sqlitex.NewPool to create a *zombiezen.com/go/sqlite/sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database.

This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache. The implementation automatically handles shared cache locking; see the documentation on Stmt.Step for details.

The optional SQLite 3 extensions compiled in are: session, FTS5, RTree, JSON1, and GeoPoly.

This is not a database/sql driver. For helper functions to make it easier to execute statements, see the zombiezen.com/go/sqlite/sqlitex package.

Statements are prepared with the Conn.Prepare and Conn.PrepareTransient methods. When using Conn.Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can call Prepare with the same query string on every request: after all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement.

SQLite transactions can be managed manually with this package by directly executing BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE / ROLLBACK statements, but there are also helper functions available in zombiezen.com/go/sqlite/sqlitex. For simple schema migration needs, see the zombiezen.com/go/sqlite/sqlitemigration package.

Use Conn.CreateFunction to register Go functions for use as SQL functions.

The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs or, more commonly, a large number of moderate-sized blobs concurrently.) See Conn.OpenBlob for more details.

Every connection can have a done channel associated with it using the Conn.SetInterrupt method. This is typically the channel returned by a context.Context.Done method. As database connections are long-lived, the Conn.SetInterrupt method can be called multiple times to reset the associated lifetime.

A typical use case is using a Pool to execute SQL in a concurrent HTTP handler; the sketch below shows this pattern using the SQLite statement API instead of the sqlitex helpers.
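This is a rough sketch only: the PoolOptions fields and the Take/Put method names reflect recent versions of the package and may differ in older releases, and the users table and the :id parameter are purely illustrative.

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "zombiezen.com/go/sqlite/sqlitex"
    )

    var pool *sqlitex.Pool

    func handler(w http.ResponseWriter, r *http.Request) {
        // Borrow a connection from the pool for the duration of the request.
        conn, err := pool.Take(r.Context())
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer pool.Put(conn)

        // Conn.Prepare caches the statement on the connection by its query
        // string, so this becomes a map lookup once the connection is warm.
        stmt, err := conn.Prepare("SELECT name FROM users WHERE id = :id;")
        if err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        defer stmt.Reset()

        stmt.SetText(":id", r.URL.Query().Get("id"))
        for {
            hasRow, err := stmt.Step()
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            if !hasRow {
                break
            }
            fmt.Fprintln(w, stmt.ColumnText(0))
        }
    }

    func main() {
        var err error
        pool, err = sqlitex.NewPool("file:data.db", sqlitex.PoolOptions{})
        if err != nil {
            log.Fatal(err)
        }
        log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(handler)))
    }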
Package redispipe - high throughput Redis connector with implicit pipelining. https://redis.io/topics/pipelining

Pipelining improves the maximum throughput that redis can serve, and reduces CPU usage both on the redis server and on the client. Mostly it comes from saving system CPU consumption. But it is not always possible to use pipelining explicitly: usually there are dozens of concurrent goroutines, each sending just one request at a time. To handle the usual workload, pipelining has to be implicit.

All known Golang redis connectors use a connection-per-request working model with a connection pool, and provide only explicit pipelining. This showed far from optimal performance under highly concurrent load. This connector was created as implicitly pipelined from the ground up to achieve maximum performance in a highly concurrent environment. It writes all requests to a single connection to redis, and continuously reads answers from another goroutine. Note that it trades a bit of latency for throughput, and therefore may not be optimal for non-concurrent usage.

Features:
- fast,
- thread-safe: no need to lock around a connection, no need to "return to pool", etc,
- pipelining is implicit,
- transactions supported (but without WATCH),
- hook for custom logging,
- hook for request timing reporting.

Limitations:
- by default, it is not allowed to send blocking calls, because they would block the whole pipeline: `BLPOP`, `BRPOP`, `BRPOPLPUSH`, `BZPOPMIN`, `BZPOPMAX`, `XREAD`, `XREADGROUP`, `SAVE`. However, you could set the `ScriptMode: true` option to enable these commands. `ScriptMode: true` also turns the default `WritePause` to -1 (meaning it almost disables forced batching).
- `WATCH` is also forbidden by default: it is useless and even harmful when concurrent goroutines use the same connection. It is also allowed with `ScriptMode: true`, but you should be sure you use the connection only from a single goroutine.
- `SUBSCRIBE` and `PSUBSCRIBE` commands are forbidden. They switch the connection into a completely different mode of communication, so they cannot be combined with regular commands. This connector doesn't implement subscribing mode.

Structure:
- the root package is empty,
- common functionality is in the redis subpackage,
- a single connection is in the redisconn subpackage,
- cluster support is in the rediscluster subpackage.

Both redisconn.Connect and rediscluster.NewCluster create implementations of redis.Sender. redis.Sender provides an asynchronous api for sending a request, several requests, or a transaction. That api accepts redis.Future interface implementations as an argument and fulfills them asynchronously. Usually you don't need to provide your own redis.Future implementation, but rather use the synchronous wrappers.

To use the convenient synchronous api, one should wrap a "sender" with one of the wrappers:
- redis.Sync{sender} - provides a simple synchronous api,
- redis.SyncCtx{sender} - provides the same api, but all methods accept a context.Context, and methods return immediately if that context is closed,
- redis.ChanFutured{sender} - provides an api with a future through channel closing.

Types accepted as command arguments: nil, []byte, string, int (and all other integer types), float64, float32, bool. All arguments are converted to redis bulk strings as usual (i.e. string and bytes - as is; numbers - in decimal notation). bool is converted as "0/1", and nil is converted to an empty string.

Unlike other redis packages, no custom types are used for request results.
Results are de-serialized into plain Go types and are returned as interface{}, as in the sketch below. IO, connection, and other errors are not returned separately but as the result value itself (and they all have the same *errorx.Error underlying type).
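A minimal sketch of synchronous usage over a single connection follows; the default redisconn.Opts are assumed, and the key and value are illustrative. Consult the package documentation for the full set of options.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/joomcode/redispipe/redis"
        "github.com/joomcode/redispipe/redisconn"
    )

    func main() {
        ctx := context.Background()

        // Connect returns a redis.Sender backed by a single connection.
        conn, err := redisconn.Connect(ctx, "127.0.0.1:6379", redisconn.Opts{})
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Wrap the sender with the context-aware synchronous api.
        sync := redis.SyncCtx{conn}

        if err := redis.AsError(sync.Do(ctx, "SET", "key", "hello")); err != nil {
            log.Fatal(err)
        }

        res := sync.Do(ctx, "GET", "key")
        if err := redis.AsError(res); err != nil {
            log.Fatal(err)
        }
        // Results come back as plain Go values inside an interface{} (here []byte).
        fmt.Printf("%q\n", res)
    }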
Package tview implements rich widgets for terminal based user interfaces. The widgets provided with this package are useful for data exploration and data entry. The package implements a number of widgets and also provides Application, which is used to poll the event queue and draw widgets on screen.

A very basic example shows a box with the title "Hello, world!": first, we create a box primitive with a border and a title; then we create an application, set the box as its root primitive, and run the event loop (a sketch appears at the end of this overview). The application exits when the application's Stop() function is called or when Ctrl-C is pressed. If we have a primitive which consumes key presses, we call the application's SetFocus() function to redirect all key presses to that primitive. Most primitives then offer ways to install handlers that allow you to react to any actions performed on them.

You will find more demos in the "demos" subdirectory. It also contains a presentation (written using tview) which gives an overview of the different widgets and how they can be used.

Throughout this package, colors are specified using the tcell.Color type. Functions such as tcell.GetColor(), tcell.NewHexColor(), and tcell.NewRGBColor() can be used to create colors from W3C color names or RGB values.

Almost all strings which are displayed can contain color tags. Color tags are W3C color names or six hexadecimal digits following a hash tag, wrapped in square brackets. A color tag changes the color of the characters following that color tag. This applies to almost everything, from box titles, list text, and form item labels, to table cells. In a TextView, this functionality has to be switched on explicitly. See the TextView documentation for more information.

Color tags may contain not just the foreground (text) color but also the background color and additional flags; each of the three fields can be left blank and trailing fields can be omitted. (Empty square brackets "[]", however, are not considered color tags.) Colors that are not specified will be left unchanged. A field with just a dash ("-") means "reset to default". You can also specify a number of flags, some of which may not be supported by your terminal.

In the rare event that you want to display a string such as "[red]" or "[#00ff1a]" without applying its effect, you need to put an opening square bracket before the closing square bracket. Note that the text inside the brackets will be matched less strictly than region or color tags, i.e. any character that may be used in color or region tags will be recognized. You can use the Escape() function to insert brackets automatically where needed.

When primitives are instantiated, they are initialized with colors taken from the global Styles variable. You may change this variable to adapt the look and feel of the primitives to your preferred style.

This package supports unicode characters including wide characters.

Many functions in this package are not thread-safe. For many applications, this may not be an issue: if your code makes changes in response to key events, it will execute in the main goroutine and thus will not cause any race conditions. If you access your primitives from other goroutines, however, you will need to synchronize execution.
The easiest way to do this is to call Application.QueueUpdate() or Application.QueueUpdateDraw() (see the function documentation for details). One exception to this is the io.Writer interface implemented by TextView: you can safely write to a TextView from any goroutine. See the TextView documentation for details. You can also call Application.Draw() from any goroutine without having to wrap it in QueueUpdate(). And, as mentioned above, key event callbacks are executed in the main goroutine and thus should not use QueueUpdate() as that may lead to deadlocks.

All of this package's widgets contain the Box type. All of Box's functions are therefore available for all widgets, too. All widgets also implement the Primitive interface.

The tview package is based on https://github.com/derailed/tcell. It uses types and constants from that package (e.g. colors and keyboard values).

This package does not process mouse input (yet).
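For reference, a minimal version of the "Hello, world!" example described at the top of this overview might look like this (the upstream import path github.com/rivo/tview is assumed; a fork may use a different path):

    package main

    import "github.com/rivo/tview"

    func main() {
        // A box primitive with a border and a title.
        box := tview.NewBox().SetBorder(true).SetTitle("Hello, world!")

        // Create the application, set the box as its root primitive, and run
        // the event loop. The application exits on Stop() or Ctrl-C.
        if err := tview.NewApplication().SetRoot(box, true).Run(); err != nil {
            panic(err)
        }
    }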
Package amboy provides basic infrastructure for running and describing tasks and task workflows with, potentially, minimal overhead and additional complexity.

Amboy works with 4 basic logical objects: jobs, or descriptions of tasks; runners, which are responsible for executing tasks; queues, which represent pipelines and offline workflows of tasks (e.g. not real time, processes that run outside of the primary execution path of a program); and dependencies, which represent relationships between jobs.

The inspiration for amboy was to provide a unified way to define and run jobs that would feel equally "native" for distributed applications and distributed web applications, and move easily between different architectures.

While amboy users will generally implement their own Job and dependency implementations, Amboy itself provides several example Queue implementations, as well as several generic examples and prototypes of Job and dependency.Manager objects. Generally speaking you should be able to use the included amboy components to provide the queue and runner components, in conjunction with custom and generic job and dependency variations.

The amboy package provides a number of generic methods that, using the Queue.Stats() method, block until all jobs are complete. They provide different semantics, which may be useful in different circumstances. All of these functions wait until the total number of jobs submitted to the queue is equal to the number of completed jobs, and as a result these methods don't prevent other threads from adding jobs to the queue after beginning to wait. Additionally, there are a set of methods that allow callers to wait for a specific job to complete. A rough sketch of this waiting pattern follows.
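The sketch below polls Queue.Stats() until all submitted jobs have completed. The Stats signature and the QueueStats field names (Total, Completed) are assumptions for illustration; the package's own wait helpers provide this behavior directly.

    import (
        "context"
        "time"

        "github.com/mongodb/amboy"
    )

    // waitForCompletion blocks until every job submitted to q so far has
    // completed, or until ctx is cancelled. The Stats(ctx) call and the
    // Total/Completed field names are assumptions for this sketch.
    func waitForCompletion(ctx context.Context, q amboy.Queue) bool {
        ticker := time.NewTicker(100 * time.Millisecond)
        defer ticker.Stop()
        for {
            select {
            case <-ctx.Done():
                return false
            case <-ticker.C:
                stats := q.Stats(ctx)
                if stats.Completed >= stats.Total {
                    return true
                }
            }
        }
    }

Note that, as described above, other goroutines may still add jobs to the queue after waiting begins; the wait only covers jobs counted by Stats at the time of each poll.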
Package cache implements a Cache similar to hashicorp/golang-lru. It supports LRC, LRU and TTL-based eviction. The package is thread-safe and doesn't spawn any goroutines. On every Set() call, the cache deletes a single oldest entry if it has expired. If MaxSize is set, the cache deletes the oldest entry regardless of its expiration date to maintain the size, using either LRC or LRU eviction. With the default TTL (10 years) and the default MaxSize (0, unlimited), the cache is effectively unlimited and will never delete entries on its own. Important: the only reliable way of not having expired entries stuck in the cache is to run cache.DeleteExpired periodically using a time.Ticker; an advisable period is 1/2 of the TTL.
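A minimal sketch of that periodic cleanup follows. It assumes only a DeleteExpired method with no arguments (the exact signature is an assumption), captured here by a small local interface so the sketch is independent of the concrete cache type.

    import "time"

    // expiringCache captures just the method this sketch needs; the concrete
    // type would be this package's Cache.
    type expiringCache interface {
        DeleteExpired()
    }

    // startCleanup runs DeleteExpired every ttl/2 (the advisable period noted
    // above) until stop is closed, so expired entries cannot linger.
    func startCleanup(c expiringCache, ttl time.Duration, stop <-chan struct{}) {
        ticker := time.NewTicker(ttl / 2)
        defer ticker.Stop()
        for {
            select {
            case <-stop:
                return
            case <-ticker.C:
                c.DeleteExpired()
            }
        }
    }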
Package raft sends and receives messages in the Protocol Buffer format defined in the raftpb package.

Raft is a protocol with which a cluster of nodes can maintain a replicated state machine. The state machine is kept in sync through the use of a replicated log. For more details on Raft, see "In Search of an Understandable Consensus Algorithm" (https://raft.github.io/raft.pdf) by Diego Ongaro and John Ousterhout.

A simple example application, _raftexample_, is also available to help illustrate how to use this package in practice: https://github.com/etcd-io/etcd/tree/main/contrib/raftexample

The primary object in raft is a Node. You either start a Node from scratch using raft.StartNode or start a Node from some initial state using raft.RestartNode (a sketch of starting from scratch appears below).

Now that you are holding onto a Node you have a few responsibilities:

First, you must read from the Node.Ready() channel and process the updates it contains. These steps may be performed in parallel, except as noted in step 2.

1. Write HardState, Entries, and Snapshot to persistent storage if they are not empty. Note that when writing an Entry with Index i, any previously-persisted entries with Index >= i must be discarded.

2. Send all Messages to the nodes named in the To field. It is important that no messages be sent until the latest HardState has been persisted to disk, and all Entries written by any previous Ready batch (Messages may be sent while entries from the same batch are being persisted). To reduce the I/O latency, an optimization can be applied to make the leader write to disk in parallel with its followers (as explained in section 10.2.1 of the Raft thesis). If any Message has type MsgSnap, call Node.ReportSnapshot() after it has been sent (these messages may be large). Note: Marshalling messages is not thread-safe; it is important that you make sure that no new entries are persisted while marshalling. The easiest way to achieve this is to serialize the messages directly inside your main raft loop.

3. Apply Snapshot (if any) and CommittedEntries to the state machine. If any committed Entry has Type EntryConfChange, call Node.ApplyConfChange() to apply it to the node. The configuration change may be cancelled at this point by setting the NodeID field to zero before calling ApplyConfChange (but ApplyConfChange must be called one way or the other, and the decision to cancel must be based solely on the state machine and not external information such as the observed health of the node).

4. Call Node.Advance() to signal readiness for the next batch of updates. This may be done at any time after step 1, although all updates must be processed in the order they were returned by Ready.

Second, all persisted log entries must be made available via an implementation of the Storage interface. The provided MemoryStorage type can be used for this (if you repopulate its state upon a restart), or you can supply your own disk-backed implementation.

Third, when you receive a message from another node, pass it to Node.Step.

Finally, you need to call Node.Tick() at regular intervals (probably via a time.Ticker). Raft has two important timeouts: heartbeat and the election timeout. However, internally to the raft package time is represented by an abstract "tick".
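As a reference, starting a node from scratch (as mentioned above) looks roughly like this; the configuration values are illustrative, and the import path depends on which raft module version you use.

    storage := raft.NewMemoryStorage()
    c := &raft.Config{
        ID:              0x01,
        ElectionTick:    10,
        HeartbeatTick:   1,
        Storage:         storage,
        MaxSizePerMsg:   4096,
        MaxInflightMsgs: 256,
    }
    // Set the peer list to the other nodes in the cluster.
    // Note that they need to be started separately as well.
    n := raft.StartNode(c, []raft.Peer{{ID: 0x02}, {ID: 0x03}})

To restart from previous state, you would instead repopulate a MemoryStorage from your persisted snapshot, hard state, and entries, and call raft.RestartNode(c) without a peer list.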
The total state machine handling loop will look something like the sketch at the end of this overview.

To propose changes to the state machine from your node, take your application data, serialize it into a byte slice, and call Node.Propose. If the proposal is committed, the data will appear in committed entries with type raftpb.EntryNormal. There is no guarantee that a proposed command will be committed; you may have to re-propose after a timeout.

To add or remove a node in a cluster, build a ConfChange struct 'cc' and call Node.ProposeConfChange. After the config change is committed, some committed entry with type raftpb.EntryConfChange will be returned. You must apply it to the node through Node.ApplyConfChange.

Note: An ID represents a unique node in a cluster for all time. A given ID MUST be used only once even if the old node has been removed. This means that, for example, IP addresses make poor node IDs since they may be reused. Node IDs must be non-zero.

This implementation is up to date with the final Raft thesis (https://github.com/ongardie/dissertation/blob/master/stanford.pdf), although our implementation of the membership change protocol differs somewhat from that described in chapter 4. The key invariant that membership changes happen one node at a time is preserved, but in our implementation the membership change takes effect when its entry is applied, not when it is added to the log (so the entry is committed under the old membership instead of the new). This is equivalent in terms of safety, since the old and new configurations are guaranteed to overlap.

To ensure that we do not attempt to commit two membership changes at once by matching log positions (which would be unsafe since they should have different quorum requirements), we simply disallow any proposed membership change while any uncommitted change appears in the leader's log.

This approach introduces a problem when you try to remove a member from a two-member cluster: if one of the members dies before the other one receives the commit of the confchange entry, then the member cannot be removed any more since the cluster cannot make progress. For this reason it is highly recommended to use three or more nodes in every cluster.

Package raft sends and receives messages in the Protocol Buffer format (defined in the raftpb package). Each state (follower, candidate, leader) implements its own 'step' method ('stepFollower', 'stepCandidate', 'stepLeader') when advancing with the given raftpb.Message. Each step is determined by its raftpb.MessageType. Note that every step is checked by one common method 'Step' that safety-checks the terms of the node and the incoming message to prevent stale log entries.
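For completeness, the state machine handling loop referenced earlier looks roughly like this. The helpers saveToStorage, send, process, and processSnapshot are placeholders the application must supply, and the ticker and done channel are owned by the application.

    // run drives a raft.Node: it ticks the election/heartbeat clock, persists
    // and sends everything in each Ready batch, applies committed entries,
    // and then signals Advance.
    func run(n raft.Node, ticker *time.Ticker, done <-chan struct{}) {
        for {
            select {
            case <-ticker.C:
                n.Tick()
            case rd := <-n.Ready():
                // Step 1: persist HardState, Entries, and Snapshot.
                saveToStorage(rd.HardState, rd.Entries, rd.Snapshot)
                // Step 2: send messages to the nodes named in the To field.
                send(rd.Messages)
                // Step 3: apply the snapshot and committed entries.
                if !raft.IsEmptySnap(rd.Snapshot) {
                    processSnapshot(rd.Snapshot)
                }
                for _, entry := range rd.CommittedEntries {
                    process(entry)
                    if entry.Type == raftpb.EntryConfChange {
                        var cc raftpb.ConfChange
                        cc.Unmarshal(entry.Data)
                        n.ApplyConfChange(cc)
                    }
                }
                // Step 4: signal readiness for the next Ready batch.
                n.Advance()
            case <-done:
                return
            }
        }
    }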
Package glue - Robust Go and JavaScript Socket Library. This library is thread-safe.
Package bitmap implements (thread-safe) bitmap functions and abstractions.