Extensible Go library for creating fast, SSR-first frontends while avoiding the downsides of vanilla templating. Creating asynchronous and dynamic layout parts is a complex problem for larger projects using `html/template`. This library tries to simplify the overall setup and process.

Let's go straight into a simple example. Then we will dig into the details of how it works, step by step.

Kyoto provides a set of simple net/http handlers, handler builders and function wrappers to provide serving, page rendering, component actions, etc. It is not an ultimate solution for every case, though: if you ever need to wrap or extend the existing functionality, the library encourages this. See the functions inside of nethttp.go for details and advanced usage. Example:

Kyoto provides a way to define components. It's a very common approach for modern libraries to manage frontend parts. In kyoto, each component is a context receiver which returns its state. Each component becomes a part of the page or of a top-level component, which executes the component asynchronously and gets a state future object. In that way your components are executed in a non-blocking way.

Pages are just top-level components, where you can configure rendering and page-related stuff. Example:

As an option, you can wrap a component with another function to accept additional parameters from the top-level page/component. Example:

Kyoto provides a context which holds common objects like http.ResponseWriter, *http.Request, etc. See kyoto.Context for details. Example:

Kyoto provides a set of parameters and functions for a comfortable template building process. You can configure template building parameters with the kyoto.TemplateConf configuration. See template.go for available functions and kyoto.TemplateConfiguration for configuration details. Example:

Kyoto provides a way to simplify building dynamic UIs. For this purpose it has a feature named actions. The logic is pretty simple: the client calls an action (sends a request to the server), the action is executed on the server side, and the server sends the updated component markup to the client, where it is morphed into the DOM. That's it.

To use actions, you need to go through a few steps: include a client into the page (JS functions for communication) and register an actions handler for the needed component. Let's start by including a client. Then, let's register an actions handler for the needed component. That's all! Now we're ready to use actions to provide a dynamic UI. Example:

In this example you can see the modifications made to the quick start example. First, we've added a state and a name into our component's markup. This is how we save the component's state between actions and find the component root. Unfortunately, for now we have to provide the component name manually; we haven't found a way to provide it dynamically. Second, we've added a reload button with an onclick function call. We're using the Action function provided by the client. Action triggering will be described in detail later. Third, we've added an action handler inside our component. This handler will be executed when a client calls an action with the corresponding name.

It's highly recommended to keep a component's state as small as possible, because it is transmitted on each action call.
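To ground the component and page description above, here is a minimal sketch. It assumes a kyoto v2-style API (the kyoto.Context receiver, kyoto.Use, kyoto.Template and the HandlePage wrapper described for nethttp.go) and the commonly published import path; the exact names and signatures may differ between versions, so treat it as illustrative rather than canonical.

    package main

    import (
        "net/http"

        "github.com/kyoto-framework/kyoto/v2" // import path assumed
    )

    // ComponentUUIDState is the component's state. Keep it small: with
    // actions enabled it is transmitted on every action call.
    type ComponentUUIDState struct {
        UUID string
    }

    // ComponentUUID is a context receiver: it prepares and returns its state.
    // The page executes it asynchronously and receives a state future.
    func ComponentUUID(ctx *kyoto.Context) (state ComponentUUIDState) {
        // Fetch or compute data here (HTTP calls, DB queries, etc.).
        state.UUID = "00000000-0000-0000-0000-000000000000"
        return
    }

    // PageIndexState composes the page from component futures.
    type PageIndexState struct {
        UUID kyoto.Component[ComponentUUIDState]
    }

    // PageIndex is a top-level component: it wires nested components and
    // configures template rendering for the page.
    func PageIndex(ctx *kyoto.Context) (state PageIndexState) {
        state.UUID = kyoto.Use(ctx, ComponentUUID) // non-blocking execution
        kyoto.Template(ctx, "page.index.html")     // rendering configuration
        return
    }

    func main() {
        // HandlePage is assumed to register on http.DefaultServeMux; see
        // nethttp.go for the real handlers and wrappers.
        kyoto.HandlePage("/", PageIndex)
        http.ListenAndServe(":8080", nil)
    }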
Kyoto has multiple ways to trigger actions. We will go over them one by one.

The simplest way to trigger an action is a plain function call: pass a referer (usually 'this', e.g. the button; used to determine the component root) as the first argument, the action name as the second argument, and the action arguments as the rest. Arguments must be JSON-serializable.

It's possible to trigger an action of another component. If you want to call an action of the parent component, use the $ prefix in the action name. If you want to call an action of a component by id, use <id:action> as the action name.

There is a specific action which is triggered when a form is submitted. It is usually called in the onsubmit="..." attribute of a form. You'll need to implement a 'Submit' action to handle this trigger.

There is a special HTML attribute which will trigger an action on page load. This may be useful for lazy loading of components.

With a special set of HTML attributes you can trigger an action on an interval. This is useful for components that must be updated over time (e.g. charts, stats, etc.). You can use this trigger with the ssa:poll and ssa:poll.interval HTML attributes.

Another attribute allows you to trigger an action when an element becomes visible on the screen. This may be useful for lazy loading.

Kyoto provides a way to control the action flow. For now, it's possible to control the display style on an action call and to push multiple UI updates to the client during a single action. Because kyoto makes a roundtrip to the server every time an action is triggered on the page, there are cases where the page may not react immediately to a user event (like a click). That's why the library provides a way to easily control display attributes during an action call. You can use an HTML attribute to control display during the action call; at the end of the action the layout will be restored. A small note: don't forget to set a default display for loading elements like spinners and loaders.

You can push multiple component UI updates during a single action call. Just call kyoto.ActionFlush(ctx, state) to initiate an update.

Kyoto provides a way to control action rendering. There are currently at least two rendering options after an action call: morph (the default) and replace. Morph will try to morph the received markup onto the current one with the morphdom library. In case of an error, or in explicit "replace" mode, the markup will be replaced with x.outerHTML = '...'.
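As a sketch of the action flow just described, the component below registers an action handler and pushes an intermediate UI update with kyoto.ActionFlush before returning its final state. Only ActionFlush is named explicitly in this documentation; the kyoto.Action registration helper shown here is an assumption, so check nethttp.go and the actions client for the exact API.

    // ComponentLoaderState is kept deliberately small, since it is
    // serialized and transmitted with every action call.
    type ComponentLoaderState struct {
        Status  string
        Content string
    }

    func ComponentLoader(ctx *kyoto.Context) (state ComponentLoaderState) {
        state.Status = "idle"

        // Hypothetical action registration: the handler runs on the server
        // when the client triggers the "Load" action for this component.
        kyoto.Action(ctx, "Load", func(args ...any) {
            // Push an intermediate update so the client sees progress
            // before the action completes.
            state.Status = "loading"
            kyoto.ActionFlush(ctx, state)

            // Do the actual (slow) work here, then set the final state.
            state.Content = "loaded content"
            state.Status = "ready"
            // The final markup is rendered and morphed into the DOM when
            // the handler returns (or replaced, in "replace" mode).
        })

        return
    }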
Package process manages the lifecycle of processes and process groups.
Package tableroll implements zero downtime upgrades between two independently managed processes. An upgrade is coordinated over a well-known coordination directory. Any number of processes may be run at once that coordinate upgrades on that directory, and between those many processes, one is chosen to own all shareable / upgradeable file descriptors. Each upgrade uniquely involves two processes, and unix exclusive locks on the filesystem ensure that. Each process under tableroll should be able to signal readiness, which will indicate to tableroll that it is safe for previous processes to cease listening for new connections and begin draining existing ones. Unlike other upgrade mechanisms in this space, it is expected that a new binary is started independently, such as in a new container, not as a child of the existing one. How a new upgrade is started is entirely out of scope of this library. Both copies of the process must have access to the same coordination directory, but apart from that, there are no stringent requirements.
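A rough sketch of the lifecycle this implies is shown below. The identifiers used (tableroll.New, the Fds listener helper, Ready, UpgradeComplete) and the import path are assumptions drawn from this description rather than a verified API surface, so treat it as an outline of the intended flow, not a drop-in example.

    package main

    import (
        "context"
        "log"
        "net/http"

        "github.com/cloudflare/tableroll/v2" // import path assumed
    )

    func main() {
        ctx := context.Background()

        // Old and new processes point at the same well-known coordination
        // directory; exclusive locks there decide which process currently
        // owns the shareable/upgradeable file descriptors.
        upgrader, err := tableroll.New(ctx, "/var/run/myapp/tableroll")
        if err != nil {
            log.Fatal(err)
        }

        // Inherit the listener from the previous owner, or create it if
        // this is the first process on the coordination directory.
        // (Fds.Listen is an assumed helper name.)
        ln, err := upgrader.Fds.Listen(ctx, "http", "tcp", ":8080")
        if err != nil {
            log.Fatal(err)
        }

        srv := &http.Server{Handler: http.DefaultServeMux}
        go srv.Serve(ln)

        // Signal readiness: the previous owner may now stop listening for
        // new connections and begin draining existing ones.
        if err := upgrader.Ready(); err != nil {
            log.Fatal(err)
        }

        // Block until a newer, independently started process takes over,
        // then drain and exit.
        <-upgrader.UpgradeComplete()
        srv.Shutdown(ctx)
    }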
Package multitenancy provides a Go framework for building multi-tenant applications, streamlining tenant management and model migrations. It abstracts multitenancy complexities through a unified, database-agnostic API compatible with GORM. The framework supports two primary multitenancy strategies: a schema-per-tenant approach (PostgreSQL) and a database-per-tenant approach (MySQL). The following sections provide an overview of the package's features and usage instructions for implementing multitenancy in Go applications.

The package supports multitenancy for both PostgreSQL and MySQL databases, offering three methods for establishing a new database connection with multitenancy support:

OpenDB allows opening a database connection using a URL-like DSN string, providing a flexible and easy way to switch between drivers. This method abstracts the underlying driver mechanics, offering a straightforward connection process and a unified, database-agnostic API through the returned *DB instance, which embeds the gorm.DB instance. For constructing the DSN string, refer to the driver-specific documentation for the required parameters and formats. This approach is useful for applications that need to dynamically switch between different database drivers or configurations without changing the codebase. Postgres: MySQL:

Open with a supported driver offers a unified, database-agnostic API for managing tenant-specific and shared data, embedding the gorm.DB instance. This method allows developers to initialize and configure the dialect themselves before opening the database connection, providing granular control over the database connection and configuration. It facilitates seamless switching between database drivers while maintaining access to GORM's full functionality. Postgres: MySQL:

For direct access to the gorm.DB API and multitenancy features for specific tasks, this approach allows invoking driver-specific functions directly. It provides a lower level of abstraction, requiring manual management of database-specific operations. Prior to version 8, this was the only method available for using the package. Postgres: MySQL:

All models must implement driver.TenantTabler, which extends GORM's Tabler interface. This extension allows models to define their table name and indicate whether they are shared across tenants.

Public Models: These are models which are shared across tenants. driver.TenantTabler.TableName should return the table name prefixed with 'public.'. driver.TenantTabler.IsSharedModel should return true.

Tenant-Specific Models: These models are specific to a single tenant and should not be shared across tenants. driver.TenantTabler.TableName should return the table name without any prefix. driver.TenantTabler.IsSharedModel should return false.

Tenant Model: This package provides a TenantModel struct that can be embedded in any public model that requires tenant scoping, enriching it with essential fields for managing tenant-specific information. This structure incorporates fields for the tenant's domain URL and schema name, facilitating the linkage of tenant-specific models to their respective schemas.

Before performing any migrations or operations on tenant-specific models, the models must be registered with the DB instance using DB.RegisterModels. Postgres Adapter: Use postgres.RegisterModels to register models. MySQL Adapter: Use mysql.RegisterModels to register models.

To ensure data integrity and schema isolation across tenants, gorm.DB.AutoMigrate has been disabled. Instead, use the provided shared and tenant-specific migration methods.
driver.ErrInvalidMigration is returned if the `AutoMigrate` method is called directly.

To ensure tenant isolation and facilitate concurrent migrations, driver-specific locking mechanisms are used. These locks prevent concurrent migrations from interfering with each other, ensuring that only one migration process can run at a time for a given tenant. Consult the driver-specific documentation for more information.

After registering models, shared models are migrated using DB.MigrateSharedModels. Postgres Adapter: Use postgres.MigrateSharedModels to migrate shared models. MySQL Adapter: Use mysql.MigrateSharedModels to migrate shared models.

After registering models, tenant-specific models are migrated using DB.MigrateTenantModels. Postgres Adapter: Use postgres.MigrateTenantModels to migrate tenant-specific models. MySQL Adapter: Use mysql.MigrateTenantModels to migrate tenant-specific models.

When a tenant is removed from the system, the tenant-specific schema and associated tables should be cleaned up using DB.OffboardTenant. Postgres Adapter: Use postgres.DropSchemaForTenant to offboard a tenant. MySQL Adapter: Use mysql.DropDatabaseForTenant to offboard a tenant.

DB.UseTenant configures the database for operations specific to a tenant, abstracting database-specific operations for tenant context configuration. This method returns a reset function to revert the database context and an error if the operation fails. Postgres Adapter: Use postgres.SetSearchPath to set the search path for a tenant. MySQL Adapter: Use the mysql.UseDatabase function to set the database for a tenant.

For the most part, foreign key constraints work as expected, but there are some restrictions and considerations to keep in mind when working with multitenancy. The following guidelines outline the supported relationships between tables: Between Tables in the Public Schema: From Public Schema Tables to Tenant-Specific Tables: From Tenant-Specific Tables to Public Schema Tables: Between Tenant-Specific Tables:

See the example application for a more comprehensive demonstration of the framework's capabilities.

Always sanitize input to prevent SQL injection vulnerabilities. This framework does not perform any validation on the database name or schema name parameters. It is the responsibility of the caller to ensure that these parameters are sanitized. To facilitate this, the framework provides the following utilities:

For a detailed technical overview of SQL design strategies adopted by the framework, see the STRATEGY.md file.
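The sketch below ties the pieces above together for the PostgreSQL (schema-per-tenant) case: models implementing driver.TenantTabler, registration, shared and tenant migrations, and scoping a query with UseTenant. The import paths, the context arguments and the exact method signatures are assumptions based on this documentation, so verify them against the package before use.

    package main

    import (
        "context"
        "log"

        multitenancy "github.com/bartventer/gorm-multitenancy/v8" // import paths assumed
        _ "github.com/bartventer/gorm-multitenancy/postgres/v8"   // driver registration for OpenDB (assumed)
    )

    // Tenant is a public (shared) model: it lives in the public schema.
    type Tenant struct {
        multitenancy.TenantModel // embeds domain URL and schema name fields
    }

    func (Tenant) TableName() string   { return "public.tenants" } // 'public.' prefix marks a shared model
    func (Tenant) IsSharedModel() bool { return true }

    // Book is a tenant-specific model: one table per tenant schema.
    type Book struct {
        ID    uint
        Title string
    }

    func (Book) TableName() string   { return "books" } // no prefix: tenant-scoped
    func (Book) IsSharedModel() bool { return false }

    func main() {
        ctx := context.Background()

        // URL-like DSN; the exact format is driver-specific.
        db, err := multitenancy.OpenDB(ctx, "postgres://user:password@localhost:5432/appdb")
        if err != nil {
            log.Fatal(err)
        }

        // Register models before any migrations or tenant operations.
        if err := db.RegisterModels(ctx, &Tenant{}, &Book{}); err != nil {
            log.Fatal(err)
        }

        // Shared (public) tables first, then the per-tenant tables.
        if err := db.MigrateSharedModels(ctx); err != nil {
            log.Fatal(err)
        }
        if err := db.MigrateTenantModels(ctx, "tenant1"); err != nil {
            log.Fatal(err)
        }

        // Scope subsequent operations to a tenant; reset reverts the context.
        reset, err := db.UseTenant(ctx, "tenant1")
        if err != nil {
            log.Fatal(err)
        }
        defer reset()

        db.Create(&Book{Title: "Multitenancy in Go"}) // runs against tenant1's schema
    }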
Package mysql provides a gorm.Dialector implementation for MySQL databases to support multitenancy in GORM applications, enabling tenant-specific operations and shared resources management. It includes utilities for registering models, migrating shared and tenant-specific models, and configuring the database for tenant-specific operations.

This package follows the "separate databases" approach for multitenancy, which allows for complete data isolation by utilizing a separate database for each tenant. This approach ensures maximum security and performance isolation between tenants, making it suitable for applications with stringent data security requirements.

The URL format for MySQL databases is as follows: See the MySQL connection strings documentation for more information.

To register models for multitenancy support, use RegisterModels. This should be done before running any migrations or tenant-specific operations.

To ensure data integrity and schema isolation across tenants, gorm.DB.AutoMigrate has been disabled. Instead, use the provided shared and tenant-specific migration methods. driver.ErrInvalidMigration is returned if the `AutoMigrate` method is called directly.

To ensure tenant isolation and facilitate concurrent migrations, this package uses MySQL advisory locks. These locks prevent concurrent migrations from interfering with each other, ensuring that only one migration process can run at a time for a given tenant. Exponential backoff retry logic is enabled by default for migrations. To disable retry or customize the retry behavior, either provide options to New or specify options in the DSN connection string of Open. The following options are available:

To migrate shared models, use MigrateSharedModels. To migrate tenant-specific models, use MigrateTenantModels. To clean up the database for a removed tenant, use DropDatabaseForTenant. To configure the database for operations specific to a tenant, use schema.UseDatabase.
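For the MySQL (separate databases) driver, a compact sketch of the driver-level calls named above follows, reusing the Tenant and Book models from the previous example. The gorm.Open wiring, the import path and the exact signatures of the driver functions (argument order, any context parameters) are assumptions; consult this package's documentation for the real ones.

    package tenantsetup

    import (
        mysql "github.com/bartventer/gorm-multitenancy/mysql/v8" // import path assumed
        "gorm.io/gorm"
    )

    // setupTenant is a sketch; the DSN format is driver-specific (see above),
    // and Tenant/Book are the models defined in the previous example.
    func setupTenant(dsn, tenantID string) error {
        db, err := gorm.Open(mysql.Open(dsn), &gorm.Config{})
        if err != nil {
            return err
        }
        if err := mysql.RegisterModels(db, &Tenant{}, &Book{}); err != nil {
            return err
        }
        if err := mysql.MigrateSharedModels(db); err != nil {
            return err
        }
        // Under the "separate databases" approach, this creates/updates the
        // tenant's own database.
        if err := mysql.MigrateTenantModels(db, tenantID); err != nil {
            return err
        }
        // When the tenant is offboarded later:
        //   return mysql.DropDatabaseForTenant(db, tenantID)
        return nil
    }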
Gaby is an experimental new bot running in the Go issue tracker as @gabyhelp, to try to help automate various mundane things that a machine can do reasonably well, as well as to try to discover new things that a machine can do reasonably well.

The name gaby is short for “Go AI Bot”, because one of the purposes of the experiment is to learn what LLMs can be used for effectively, including identifying what they should not be used for. Some of the gaby functionality will involve LLMs; other functionality will not. The guiding principle is to create something that helps maintainers and that maintainers like, which means to use LLMs when they make sense and help but not when they don't.

In the long term, the intention is for this code base or a successor version to take over the current functionality of “gopherbot” and become @gopherbot, at which point the @gabyhelp account will be retired.

At the moment we are not accepting new code contributions or PRs. We hope to move this code to somewhere more official soon, at which point we will accept contributions. The GitHub Discussion is a good place to leave feedback about @gabyhelp.

The bot functionality is implemented in internal packages in subdirectories. This comment gives a brief tour of the structure.

An explicit goal for the Gaby code base is that it run well in many different environments, ranging from a maintainer's home server or even Raspberry Pi all the way up to a hosted cloud. (At the moment, Gaby runs on a Linux server in my basement.) Due to this emphasis on portability, Gaby defines its own interfaces for all the functionality it needs from the surrounding environment and then also defines a variety of implementations of those interfaces.

Another explicit goal for the Gaby code base is that it be very well tested. (See my [Go Testing talk] for more about why this is so important.) Abstracting the various external functionality into interfaces also helps make testing easier, and some packages also provide explicit testing support.

The result of both these goals is that Gaby defines some basic functionality like time-ordered indexing for itself instead of relying on some specific other implementation. In the grand scheme of things, these are a small amount of code to maintain, and the benefits to both portability and testability are significant.

Code interacting with services like GitHub and code running on cloud servers is typically difficult to test and therefore undertested. It is an explicit requirement of this repo to test all the code, even (and especially) when testing is difficult.

A useful command to have available when working in the code is rsc.io/uncover, which prints the package source lines not covered by a unit test. A useful invocation is: The first “go test” command checks that the test passes. The second repeats the test with coverage enabled. Running the test twice this way makes sure that any syntax or type errors reported by the compiler are reported without coverage, because coverage can mangle the error output. After both tests pass and the second writes a coverage profile, running “uncover /tmp/c.out” prints the uncovered lines.

In this output, there are three error paths that are untested. In general, error paths should be tested, so tests should be written to cover these lines of code. In limited cases, it may not be practical to test a certain section, such as when code is unreachable but left in case of future changes or mistaken assumptions.
That part of the code can be labeled with a comment beginning “// Unreachable” or “// unreachable” (usually with explanatory text following), and then uncover will not report it. If a code section should be tested but the test is being deferred to later, that section can be labeled “// Untested” or “// untested” instead. The rsc.io/gaby/internal/testutil package provides a few other testing helpers.

The overview of the code now proceeds from bottom up, starting with storage and working up to the actual bot.

Gaby needs to manage a few secret keys used to access services. The rsc.io/gaby/internal/secret package defines the interface for obtaining those secrets. The only implementations at the moment are an in-memory map and a disk-based implementation that reads $HOME/.netrc. Future implementations may include other file formats as well as cloud-based secret storage services. Secret storage is intentionally separated from the main database storage, described below. The main database should hold public data, not secrets.

Gaby defines the interface it expects from a large language model. The llm.Embedder interface abstracts an LLM that can take a collection of documents and return their vector embeddings, each of type llm.Vector. The only real implementation to date is rsc.io/gaby/internal/gemini. It would be good to add an offline implementation using Ollama as well. For tests that need an embedder but don't care about the quality of the embeddings, llm.QuoteEmbedder copies a prefix of the text into the vector (preserving vector unit length) in a deterministic way. This is good enough for testing functionality like vector search and simplifies tests by avoiding a dependence on a real LLM. At the moment, only the embedding interface is defined. In the future we expect to add more interfaces around text generation and tool use.

As noted above, Gaby defines interfaces for all the functionality it needs from its external environment, to admit a wide variety of implementations for both execution and testing. The lowest level interface is storage, defined in rsc.io/gaby/internal/storage. Gaby requires a key-value store that supports ordered traversal of key ranges and atomic batch writes up to a modest size limit (at least a few megabytes). The basic interface is storage.DB. storage.MemDB returns an in-memory implementation useful for testing. Other implementations can be put through their paces using storage.TestDB.

The only real storage.DB implementation is rsc.io/gaby/internal/pebble, which is a LevelDB-derived on-disk key-value store developed and used as part of CockroachDB. It is a production-quality local storage implementation and maintains the database as a directory of files. In the future we plan to add an implementation using Google Cloud Firestore, which provides a production-quality key-value lookup as a Cloud service without fixed baseline server costs. (Firestore is the successor to Google Cloud Datastore.)

The storage.DB makes the simplifying assumption that storage never fails, or rather that if storage has failed then you'd rather crash your program than try to proceed through typically untested code paths. As such, methods like Get and Set do not return errors. They panic on failure, and clients of a DB can call the DB's Panic method to invoke the same kind of panic if they notice any corruption. It remains to be seen whether this decision is kept.
In addition to the usual methods like Get, Set, and Delete, storage.DB defines Lock and Unlock methods that acquire and release named mutexes managed by the database layer. The purpose of these methods is to enable coordination when multiple instances of a Gaby program are running on a serverless cloud execution platform. So far Gaby has only run on an underground basement server (the opposite of cloud), so these have not been exercised much and the APIs may change.

In addition to the regular database, package storage also defines storage.VectorDB, a vector database for use with LLM embeddings. The basic operations are Set, Get, and Search. storage.MemVectorDB returns an in-memory implementation that stores the actual vectors in a storage.DB for persistence but also keeps a copy in memory and searches by comparing against all the vectors. When backed by a storage.MemDB, this implementation is useful for testing, but when backed by a persistent database, the implementation suffices for small-scale production use (say, up to a million documents, which would require 3 GB of vectors). It is possible that the package ordering here is wrong and that VectorDB should be defined in the llm package, built on top of storage, and not the current “storage builds on llm”.

Because Gaby makes minimal demands of its storage layer, any structure we want to impose must be implemented on top of it. Gaby uses the rsc.io/ordered encoding format to produce database keys that order in useful ways. For example, ordered.Encode("issue", 123) < ordered.Encode("issue", 1001), so that keys of this form can be used to scan through issues in numeric order. In contrast, using something like fmt.Sprintf("issue%d", n) would visit issue 1001 before issue 123 because "1001" < "123". Using this kind of encoding is common when using NoSQL key-value storage. See the rsc.io/ordered package for the details of the specific encoding.

One of the implied jobs Gaby has is to collect all the relevant information about an open source project: its issues, its code changes, its documentation, and so on. Those sources are always changing, so derived operations like adding embeddings for documents need to be able to identify what is new and what has been processed already. To enable this, Gaby implements time-stamped—or just “timed”—storage, in which a collection of key-value pairs also has a “by time” index of ((timestamp, key), no-value) pairs to make it possible to scan only the key-value pairs modified after the previous scan. This kind of incremental scan only has to remember the last timestamp processed and then start an ordered key range scan just after that timestamp. This convention is implemented by rsc.io/gaby/internal/timed, along with a [timed.Watcher] that formalizes the incremental scan pattern.

Various packages take care of downloading state from issue trackers and the like, but then all that state needs to be unified into a common document format that can be indexed and searched. That document format is defined by rsc.io/gaby/internal/docs. A document consists of an ID (conventionally a URL), a document title, and document text. Documents are stored using timed storage, enabling incremental processing of newly added documents. The next stop for any new document is embedding it into a vector and storing that vector in a vector database.
The rsc.io/gaby/internal/embeddocs package does this, and there is very little to it, given the abstractions of a document store with incremental scanning, an LLM embedder, and a vector database, all of which are provided by other packages.

None of the packages mentioned so far involve network operations, but the next few do. It is important to test those but also equally important not to depend on external network services in the tests. Instead, the package rsc.io/gaby/internal/httprr provides an HTTP record/replay system specifically designed to help testing. It can be run once in a mode that does use external network servers and records the HTTP exchanges, but by default tests look up the expected responses in the previously recorded log, replaying those responses. The result is that code making HTTP requests can be tested with real server traffic once and then re-tested with recordings of that traffic afterward. This avoids having to write entire fakes of services but also avoids needing the services to stay available in order for tests to pass. It also typically makes the tests much faster than using the real servers.

Gaby uses GitHub in two main ways. First, it downloads an entire copy of the issue tracker state, with incremental updates, into timed storage. Second, it performs actions in the issue tracker, like editing issues or comments, applying labels, or posting new comments. These operations are provided by rsc.io/gaby/internal/github.

Gaby downloads the issue tracker state using GitHub's REST API, which makes incremental updating very easy but does not provide access to a few newer features such as project boards and discussions, which are only available in the GraphQL API. Syncing using the GraphQL API is left for future work: there is enough data available from the REST API that for now we can focus on what to do with that data, rather than on the few newer GitHub features that are missing.

The github package provides two important aids for testing. For issue tracker state, it allows loading issue data from a simple text-based issue description, avoiding any actual GitHub use at all and making it easier to modify the test data. For issue tracker actions, the github package defaults in tests to not actually making changes, instead diverting edits into an in-memory log. Tests can then check the log to see whether the right edits were requested.

The rsc.io/gaby/internal/githubdocs package takes care of adding content from the downloaded GitHub state into the general document store. Currently the only GitHub-derived documents are one document per issue, consisting of the issue title and body. It may be worth experimenting with incorporating issue comments in some way, although they bring with them a significant amount of potential noise.

Gaby will need to download and store Gerrit state into the database and then derive documents from it. That code has not yet been written, although rsc.io/gerrit/reviewdb provides a basic version that can be adapted.

Gaby will also need to download and store project documentation into the database and derive documents from it corresponding to cutting the page at each heading. That code has been written but is not yet tested well enough to commit. It will be added later.

The simplest job Gaby has is to go around fixing new comments, including issue descriptions (which look like comments but are a different kind of GitHub data).
The rsc.io/gaby/internal/commentfix package implements this, watching GitHub state incrementally and applying a few kinds of rewrite rules to each new comment or issue body. The commentfix package allows automatically editing text, automatically editing URLs, and automatically hyperlinking text.

The next job Gaby has is to respond to new issues with related issues and documents. The rsc.io/gaby/internal/related package implements this, watching GitHub state incrementally for new issues, filtering out ones that should be ignored, and then finding related issues and documents and posting a list. This package was originally intended to identify and automatically close duplicates, but the difference between a duplicate and a very similar or not-quite-fixed issue is too difficult a judgement to make for an LLM. Even so, the act of bringing forward related context that may have been forgotten or never known by the people reading the issue has turned out to be incredibly helpful.

All of these pieces are put together in the main program, this package, rsc.io/gaby. The actual main package has no tests yet but is also incredibly straightforward. It does need tests, but we also need to identify ways that the hard-coded policies in the package can be lifted out into data that a natural language interface can manipulate. For example, the current policy choices in package main amount to: These could be stored somewhere as data and manipulated and added to by the LLM in response to prompts from maintainers. And other features could be added and configured in a similar way. Exactly how to do this is an important thing to learn in future experimentation.

As mentioned above, the two jobs Gaby does already are both fairly simple and straightforward. It seems like a general approach that should work well is well-written, well-tested, deterministic, traditional functionality (such as the comment fixer and related-docs poster), configured by LLMs in response to specific directions or, eventually, higher-level goals specified by project maintainers. Other functionality that is worth exploring is rules for automatically labeling issues, rules for identifying issues or CLs that need to be pinged, rules for identifying CLs that need maintainer attention or that need submitting, and so on. Another stretch goal might be to identify when an issue needs more information and ask for that information. Of course, it would be very important not to ask for information that is already present or irrelevant, so getting that right would be a very high bar. There is no guarantee that today's LLMs work well enough to build a useful version of that.

Another important area of future work will be running Gaby on top of cloud databases and then moving Gaby's own execution into the cloud. Getting it a server with a URL will enable GitHub callbacks instead of the current 2-minute polling loop, which will enable interactive conversations with Gaby.

Overall, we believe that there are a few good ideas for ways that LLM-based bots can help make project maintainers' jobs easier and less monotonous, and they are waiting to be found. There are also many bad ideas, and they must be filtered out. Understanding the difference will take significant care, thought, and experimentation. We have work to do.
Package stubbs manages the authentication process of Google Service Accounts.
Package pm is a process manager with an HTTP monitoring/control interface. To this package, processes or tasks are user-defined routines within a running program. This model is particularly suitable for server programs, with independent client-generated requests for action. The library keeps track of all such tasks and makes the information available through HTTP, thus providing valuable insight into the running application. The client can also ask for a particular task to be canceled.

Using pm starts by calling ListenAndServe() in a separate routine, like so: Note that pm's ListenAndServe() will not return by itself. Nevertheless, it will if the underlying net/http's ListenAndServe() does. So it's probably a good idea to wrap that call with some error checking and retrying for production code.

Tasks to be tracked must be declared with Start(); that's when the identifier gets linked to them. An optional set of (arbitrary) attributes may be provided; they will get attached to the running task and be reported to the HTTP client tool as well. (Think of host, method and URI for a web server, for instance.) This could go like this: Note the deferred pm.Done() call. That's essential to mark the task as finished and release resources, and it has to be called whether the task succeeded or not. That's why it's a good idea to use Go's defer mechanism. In fact, the protection from cancel-related panics (see StopCancelPanic below) does NOT work if Done() is called in-line (i.e., not deferred), due to properties of recover().

An HTTP client issuing a GET call to /procs/ would receive a JSON response with a snapshot of all currently running tasks. Each one will include the time when it was started (as per the server clock) as well as the complete set of attributes provided to Start(). Note that to avoid issues with clock skews among servers, the current time for the server is returned as well.

The reply to the /procs/ GET also includes a status. When tasks start they are set to "init", but they may change their status using pm's Status() function. Each call will record the change, and the HTTP client will receive the last status available, together with the time it was set. Furthermore, the client may GET /procs/<id>/history instead (where <id> is the task identifier) to get a complete history for the given task, including all status changes with their time information. However, note that task information is completely recycled as soon as a task is Done(). Hence, client applications should be prepared to receive a not found reply even if they've just seen the task in a /procs/ GET result.

Given the lack of a statement in Go to kill a routine, cancellations are implemented as panics. A DELETE call to /procs/<id> will mark the task with the given identifier as cancel-pending. Nevertheless, the task will never notice until it reaches another Status() call, which is by definition a cancellation point. Calls to Status() either return having set the new status, or panic with an error of type CancelErr. Needless to say, the application should handle this gracefully or it will otherwise crash. Programs serving multiple requests will probably already be protected, with a recover() call at the level where the routine was started. But if that's not the case, or if you simply want the panic to be handled transparently, you may use this: When the StopCancelPanic option is set (which is NOT the default), Done() will recover a panic due to a cancel operation.
In such a case, the routine running that code will jump to the next statement after the invocation of the function that called Start(). (Read it again.) In other words, stack unwinding stops at the level where Done() was deferred. Notice, though, that this behavior is specifically tailored to panics arising from pending cancellation requests. Should any other panic arise for any reason, it will continue past Done() as usual. So will panics due to cancel requests if StopCancelPanic is not set.

Options set with pm.SetOptions() work on a global scope for the process list. Alternatively, you may provide a specific set for some task by using the options argument in the Start() call. If none is given, pm will grab the current global values at the time Start() was invoked. Task-specific options may come in handy when, for instance, some tasks should be resilient to cancellation. Setting the ForbidCancel option makes the task effectively immune to cancel requests. Attempts by clients to cancel such a task will inevitably yield an error message with no other effects. Set that with the global SetOptions() function and none of your tasks will cancel, ever.

HTTP clients can learn about pending cancellation requests. Furthermore, if a client request happens to be handled between the task being done/canceled and resource recycling (a VERY tiny time window), then the result would include one of these as the status: "killed", "aborted" or "ended", if it was respectively canceled, died out of another panic (not pm-related) or finished successfully. The "killed" status may optionally add a user-defined message, provided through the HTTP /procs/<id> DELETE method.

For the cancellation feature to be useful, applications should collaborate. Go lacks a mechanism to cancel arbitrary routines (it even lacks identifiers for them), so programs willing to provide the feature must be willing to help. It's a good practice to add cancellation points every once in a while, particularly when lengthy operations are run. However, applications are not expected to change status that frequently. This package provides the function CheckCancel() for that. It works as a cancellation point by definition, without messing with the task status or leaving a trace in history.

Finally, please note that cancellation requests yield panics in the same routine that called Start() with that given identifier. However, it's not unusual for servers to spawn additional Go routines to handle the same request. The application is responsible for cleaning up, if there are additional resources that should be recycled. The proper way to do this is by catching CancelErr type panics, cleaning up and then re-panicking, i.e.:
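A rough sketch of a request handler under pm follows, including the recover/cleanup/re-panic pattern just described. The import path and the exact signatures of ListenAndServe, Start, Done, Status and CheckCancel (argument order, option and attribute types) are assumptions based on this description; only the names and the CancelErr panic type come from the text above, so adjust to the real API before use.

    package main

    import (
        "github.com/VividCortex/pm" // import path assumed
    )

    func cleanup()    {} // request-specific resource cleanup (placeholder)
    func doWork()     {}
    func doMoreWork() {}

    func main() {
        // Monitoring/control interface; production code should wrap this
        // with error checking and retries, as noted above.
        go pm.ListenAndServe(":8081")

        handleRequest("req-1", "example.com", "/index")
    }

    func handleRequest(id, host, uri string) {
        // Declare the task; the attributes show up in GET /procs/ replies.
        // (Argument order and types are assumed, not taken from the real API.)
        pm.Start(id, nil, map[string]interface{}{"host": host, "uri": uri})
        // Done must be deferred: the cancel-panic protection relies on recover().
        defer pm.Done(id)

        defer func() {
            // The re-panic pattern described above: catch cancellation
            // panics, clean up request-specific resources, then re-panic.
            if r := recover(); r != nil {
                if _, ok := r.(pm.CancelErr); ok {
                    cleanup()
                }
                panic(r)
            }
        }()

        pm.Status(id, "working") // status change; also a cancellation point
        doWork()
        pm.CheckCancel(id) // cancellation point that leaves no trace in history
        doMoreWork()
    }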
crane is a command line tool for service providers/administrators. It provides commands that allow the service administrator to register himself/herself and to manage teams, apps and services. Usage: The currently available commands are (grouped by subject): Use "crane help <command>" for more information about a command.

Usage: The target is the crane server to which all operations will be directed. With this set of commands you are able to check the current target, add a new labeled target, set a target for usage, list the added targets and remove a target, respectively.

Usage: This command returns the current version of the crane command.

Usage: user-create creates a user within the crane remote server. It will ask for the password before issuing the request.

Usage: user-remove will remove the currently authenticated user from the remote tsuru server. Since there cannot be any orphan teams, tsuru will refuse to remove a user that is the last member of some team. If this is your case, make sure you remove the team using "team-remove" before removing the user.

Usage: Login will ask for the password and check if the user is successfully authenticated. If so, the token generated by the crane server will be stored in ${HOME}/.crane_token. All crane actions require the user to be authenticated (except login and user-create, obviously).

Usage: Logout will delete the token file and terminate the session with the crane server.

Usage: change-password will change the password of the logged-in user. It will ask for the current password, the new one and the confirmation.

Usage: reset-password will redefine the user password. This process is composed of two steps: In order to generate the token, users should run this command without the --token flag. The token will be mailed to the user. With the token in hand, the user can finally reset the password using the --token flag. The new password will also be mailed to the user.

Usage: team-create will create a team for the user. crane requires a user to be a member of at least one team in order to create a service. When you create a team, you're automatically a member of this team.

Usage: team-remove will remove a team from the tsuru server. You're able to remove teams that you're a member of. A team that has access to any app cannot be removed. Before removing a team, make sure it does not have access to any app (see the "app-grant" and "app-revoke" commands for details).

Usage: team-list will list all teams that you are a member of.

Usage: team-user-add adds a user to a team. You need to be a member of the team to be able to add another user to it.

Usage: team-user-remove removes a user from a team. You need to be a member of the team to be able to remove a user from it. A team can never have 0 users. If you are the last member of a team, you can't remove yourself from it.

Usage: Template will create a file named "manifest.yaml" with the following content: Change it at will to configure your service. Id is the id of your service; it must be unique. You must provide a production endpoint that will be invoked by tsuru when application developers ask for new instances and bind their apps to their instances. For more details, see the text "Services API Workflow": http://tsuru.rtfd.org/services-api-workflow.

Usage: Create will create a new service with the information present in the manifest file. Here is an example of usage: You can use "crane template" to generate a template. Both the id and the production endpoint are required fields.
When creating a new service, crane will add all of the user's teams as administrator teams of the service.

Usage: Update will update a service using a manifest file. Currently, it's only possible to edit an endpoint or add new endpoints. You need to be an administrator of the team to perform an update.

Usage: Remove will remove a service from the crane server. You need to be an administrator of the team to remove it.

Usage: List will list all services that you administer, and the instances of each service, created by application developers.

Usage: doc-add will update the service's documentation. Example of usage: You need to be an administrator of the service to update its docs.

Usage: doc-get will retrieve the current documentation of the service.
Package shutdown provides management of your shutdown process. The package will enable you to get notifications for your application and handle the shutdown process.

Package home: https://github.com/klauspost/shutdown

This package has been updated to a new version. If you are starting a new project, use the new package:

 * Package home: https://github.com/klauspost/shutdown2
 * Godoc: https://godoc.org/github.com/klauspost/shutdown2

Version 2 mainly contains minor adjustments to the API to make it a bit easier to use. This package remains here to maintain compatibility.
Package dragonboat is a feature complete and highly optimized multi-group Raft implementation for providing consensus in distributed systems.

The NodeHost struct is the facade interface for all features provided by the dragonboat package. Each NodeHost instance usually runs on a separate server managing the CPU, storage and network resources used for achieving consensus. Each NodeHost manages Raft nodes from different Raft groups known as Raft shards. Each Raft shard is identified by its ShardID and usually consists of multiple nodes (also known as replicas), each identified by a ReplicaID value. Nodes from the same Raft shard are supposed to be distributed on different NodeHost instances across the network; this brings fault tolerance for machine and network failures, as application data stored in the Raft shard will be available as long as the majority of its managing NodeHost instances (i.e. its underlying servers) are accessible. An arbitrary number of Raft shards can be launched across the network to aggregate distributed processing and storage capacities. Users can also make membership change requests to add or remove nodes from a selected Raft shard.

User applications can leverage the power of the Raft protocol by implementing the IStateMachine or IOnDiskStateMachine component, as defined in github.com/foreeest/dragonboat/statemachine. Known as user state machines, each IStateMachine or IOnDiskStateMachine instance is in charge of updating, querying and snapshotting application data with minimum exposure to the Raft protocol itself.

Dragonboat guarantees the linearizability of your I/O when interacting with the IStateMachine or IOnDiskStateMachine instances. In plain English, writes (via making proposals) to your Raft shard appear to be instantaneous: once a write is completed, all later reads (via linearizable reads based on Raft's ReadIndex protocol) should return the value of that write or a later write. Once a value is returned by a linearizable read, all later reads should return the same value or the result of a later write. To strictly provide such a guarantee, we need to implement the at-most-once semantic. For a client, when it retries a proposal that failed to complete by its deadline, it faces the risk of having the same proposal committed and applied twice into the user state machine. Dragonboat prevents this by implementing the client session concept described in Diego Ongaro's PhD thesis.
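To make the linearizability and client-session discussion concrete, here is a rough sketch of proposing a value and reading it back through a NodeHost. The helper names (GetNoOPSession, SyncPropose, SyncRead) follow the common dragonboat API shape but are assumptions as far as this documentation goes, and the byte-slice payloads are only meaningful to your own IStateMachine.

    package example

    import (
        "context"
        "time"

        "github.com/foreeest/dragonboat" // path as referenced by this documentation
    )

    // writeThenRead assumes nh already manages a replica of shard shardID.
    func writeThenRead(nh *dragonboat.NodeHost, shardID uint64) error {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()

        // Client sessions are how the at-most-once semantic described above
        // is implemented. A NO-OP session (used here for brevity) skips that
        // deduplication; retried proposals should use a regular session.
        session := nh.GetNoOPSession(shardID)

        // Propose a write. Once this returns successfully, the write is
        // committed and applied, and is visible to all later reads.
        if _, err := nh.SyncPropose(ctx, session, []byte("key=value")); err != nil {
            return err
        }

        // Linearizable read based on Raft's ReadIndex protocol: it observes
        // the write above or a later one, never an older value.
        result, err := nh.SyncRead(ctx, shardID, []byte("key"))
        if err != nil {
            return err
        }
        _ = result // interpreted by your IStateMachine's Lookup implementation
        return nil
    }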
Package sqlite provides a Go interface to SQLite 3. The semantics of this package are deliberately close to the SQLite3 C API, so it is helpful to be familiar with http://www.sqlite.org/c3ref/intro.html.

An SQLite connection is represented by a *sqlite.Conn. Connections cannot be used concurrently. A typical Go program will create a pool of connections (using Open to create a *sqlitex.Pool) so goroutines can borrow a connection while they need to talk to the database. This package assumes SQLite will be used concurrently by the process through several connections, so the build options for SQLite enable multi-threading and the shared cache: https://www.sqlite.org/sharedcache.html The implementation automatically handles shared cache locking; see the documentation on Stmt.Step for details. The optional SQLite3 extensions compiled in are: FTS5, RTree, JSON1, Session, GeoPoly. This is not a database/sql driver.

Statements are prepared with the Prepare and PrepareTransient methods. When using Prepare, statements are keyed inside a connection by the original query string used to create them. This means long-running high-performance code paths can write: After all the connections in a pool have been warmed up by passing through one of these Prepare calls, subsequent calls are simply a map lookup that returns an existing statement.

The sqlite package supports the SQLite incremental I/O interface for streaming blob data into and out of the database without loading the entire blob into a single []byte. (This is important when working either with very large blobs, or, more commonly, a large number of moderate-sized blobs concurrently.) To write a blob, first use an INSERT statement to set the size of the blob and assign a rowid: Use BindZeroBlob or SetZeroBlob to set the size of myblob. Then you can open the blob with:

Every connection can have a done channel associated with it using the SetInterrupt method. This is typically the channel returned by a context.Context Done method. For example, a timeout can be associated with a connection session: As database connections are long-lived, the SetInterrupt method can be called multiple times to reset the associated lifetime. When using pools, the shorthand for associating a context with a connection is:

SQLite transactions have to be managed manually with this package by directly calling BEGIN / COMMIT / ROLLBACK or SAVEPOINT / RELEASE / ROLLBACK. The sqlitex package has a Savepoint function that helps automate this.

Using a Pool to execute SQL in a concurrent HTTP handler. For helper functions that make some kinds of statements easier to write, see the sqlitex package.
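Here is a sketch of the pooled-connection pattern in an HTTP handler, as described above. It assumes the crawshaw.io/sqlite import path and the sqlitex.Pool Get/Put plus Conn.Prep helpers; whether Pool.Get takes a context or a done channel has changed between versions, so adjust to the version you use.

    package main

    import (
        "fmt"
        "net/http"

        "crawshaw.io/sqlite/sqlitex" // import path assumed
    )

    var dbpool *sqlitex.Pool // created once at startup, e.g. with sqlitex.Open

    func handleMessages(w http.ResponseWriter, r *http.Request) {
        // Borrow a connection for the duration of the request.
        conn := dbpool.Get(r.Context())
        if conn == nil {
            return // pool closed or request context done
        }
        defer dbpool.Put(conn)

        // Prep returns a statement cached on the connection, keyed by the
        // query text, so warmed-up pools avoid re-preparing it.
        stmt := conn.Prep("SELECT msg FROM messages WHERE id = $id;")
        defer stmt.Reset()
        stmt.SetInt64("$id", 1)

        for {
            hasRow, err := stmt.Step()
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            if !hasRow {
                break
            }
            fmt.Fprintln(w, stmt.GetText("msg"))
        }
    }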
CDK - Curses Development Kit

The Curses Development Kit is the low-level complement to the higher-level Curses Tool Kit. CDK is based primarily upon the TCell codebase, however, it is a hard-fork with many API-breaking changes. Some of the more salient changes are as follows:

All CDK applications require some form of `Window` implementation in order to function. One can use the built-in `cdk.CWindow` type to construct a basic `Window` object and tie into its signals in order to render to the canvas and handle events. For this example however, a more formal approach is taken.

Starting out with a very slim definition of our custom `Window`, all that's necessary is to embed the concrete `cdk.CWindow` type and proceed with overriding various methods. The `Init` method is not necessary to override, however, this is a good spot to do any UI initializations or the like. For this demo, the boilerplate minimum is given as an example to start from.

The next method implemented is the `Draw` method. This method receives a pre-configured Canvas object, set to the available size of the `Display`. Within this method, the application needs to process as little "work" as possible and focus primarily on just rendering the content upon the canvas. Let's walk through each line of the `Draw` method.

This line uses the built-in logging facilities to log an "info" message that the `Draw` method was invoked and lets us know some sort of human-readable description of the canvas (resembles JSON text but isn't really JSON). The advantage to using these built-in logging facilities is that the log entry itself will be prefixed with some extra metadata identifying the particular object instance with a bit of text in the format of "typeName-ID", where typeName is the object's CDK type and the ID is an integer (marking the unique instance).

Within CDK, there is a concept of `Theme`, which really is just a collection of useful purpose-driven `Style`s. One can set the default theme for the running CDK system, however the stock state is either a monochrome base theme or a colorized variant. Some of the rendering functions require `Style`s or `Theme`s passed as arguments, and so we're getting that here for later use.

This is simply getting a `Rectangle` primitive with its `W`idth and `H`eight values set according to the size of the canvas' internal buffer. `Rectangle` is a CDK primitive which has just two fields: `W` and `H`. Most places where spatial bounds are necessary, these primitives are used (such as the concept of a box `size` for painting upon a canvas).

This is the first actual draw command. In this case, the `Box` method is configured to draw a box on the screen, starting at a position of 0,0 (top left corner), taking up the full volume of the canvas, with a border (the first boolean `true` argument), ensuring to fill the entire area with the filler rune and style within a given theme, which is the last argument to the `Box` method. On a color-supporting terminal, this will paint a navy-blue box over the entire terminal screen.

These few lines of code are merely concatenating a string of `Tango` markup that includes use of `<b></b>`, `<u></u>`, `<i></i>`, and `<span></span>` tags. All colors have fallback versions and are typically safe even for monochrome terminal sessions.

This sets up a variable holding a `Point2I` instance configured for 1/4 of the width into the screen (from the left) and halfway minus one of the height into the screen (from the top). `Point2I` is a CDK primitive which has just two fields: `X` and `Y`.
Most places where coordinates are necessary, these primitives are used (such as the concept of an `origin` point for painting upon a canvas).

This sets up a variable holding a `Rectangle` configured to be half the size of the canvas itself.

This last command within the `Draw` method paints the textual content prepared earlier onto the canvas provided, center-justified, wrapping on word boundaries, using the default `Normal` theme, specifying that the content is in fact to be parsed as `Tango` markup, and finally the content itself. The result of this drawing process should be a navy-blue screen, with a border, and three lines of text centered neatly. The three lines of text should be marked up with bold, italics, underlines and colorization. The last line of text should be telling the current time and date at the time of rendering.

The `ProcessEvent` method is the main event handler. Whenever a new event is received by a CDK `Display`, it is passed on to the active `Window`, and in this demonstration all that's happening is a log entry being made which mentions the event received. When implementing your own `ProcessEvent` method, if the `Display` should repaint the screen for example, one would make two calls to methods on the `DisplayManager`: CDK is a multi-threaded framework and the various `Request*()` methods on the `DisplayManager` are used to funnel requests along the right channels in order to first render the canvas (via `Draw` calls on the active `Window`) and then follow that up with the running `Display` painting itself from the canvas modified in the `Draw` process. The complete list of the other request methods is as follows:

This concludes the `CdkDemoWindow` type implementation. Now on to using it!

The `main()` function within the `_demos/cdk-demo.go` sources is deceptively simple to implement. The bulk of the code is constructing a new CDK `App` instance. This object is a wrapper around the `github.com/urfave/cli/v2` CLI package, providing a tidy interface to managing CLI arguments, help documentation and so on. In this example, the `App` is configured with a bunch of metadata for: the program's name "cdk-demo", a simple usage summary, the current version number, an internally-used tag, a title for the main window, and the display is to use the `/dev/tty` (default) terminal device.

Beyond the metadata, the final argument is an initialization function. This function receives a fully instantiated and running `Display` instance, and it is expected that the application instantiates its `Window` and sets it as the active window for the given `Display`. In addition to that is one final call to `AddTimeout`. This call will trigger the given `func() cdk.EventFlag` once, after a second. Because the `func()` implemented here in this demonstration returns the `cdk.EVENT_PASS` flag, it will be continually called once per second. For this demonstration, this implementation simply requests a draw and show cycle, which will cause the screen to be repainted with the current date and time every second the demo application is running.

The final bit of code in this CDK demonstration simply passes the arguments provided by the end-user on the command-line in a call to the `App`'s `Run()` method. This will cause the `DisplayManager`, `Display` and other systems to instantiate and begin processing events and render cycles.