Package dali wraps sql.DB and provides a convenient API for building database-driven applications. Its main goal is to create a unified way of handling placeholders among all drivers and to simplify some common, repetitive queries. There is no support for query builders (you have to write pure SQL queries). It focuses on common queries (like writing INSERTs or UPDATEs) and on loading results into structs, for which it provides easy-to-write alternatives. The following is the complete list of possible placeholders that can be used when writing a query using the Query method. dali also has support for prepared statements. However, it doesn't support certain placeholders. Only ?ident, ?ident..., and ?sql placeholders are allowed in the query building phase (before the statement is prepared). The ? placeholder is the only one left for parameter binding. So working with prepared statements can look like this:
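A minimal, hedged sketch of that flow. It assumes dali exposes a database/sql-style Open constructor and that the prepared statement mirrors sql.Stmt (Query, Close); the import path, table, and column names are illustrative. Only the ?ident and ? placeholders named above are used:

```
package main

import (
	"log"

	_ "github.com/go-sql-driver/mysql"
	"github.com/mibk/dali" // assumed import path
)

func main() {
	// Assumption: dali mirrors database/sql's Open.
	db, err := dali.Open("mysql", "user:password@/appdb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// ?ident is expanded while the query is being built,
	// before the statement is prepared.
	stmt, err := db.Prepare("SELECT name FROM ?ident WHERE id = ?", "users")
	if err != nil {
		log.Fatal(err)
	}
	defer stmt.Close()

	// The remaining ? placeholder is bound at execution time.
	rows, err := stmt.Query(1)
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()
}
```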
Copyright Philippe Thomassigny 2004-2023. Use of this source code is governed by a MIT license that can be found in the LICENSE file.

XDominion for GO v0
=============================

xdominion is a Go library for creating a database layer that abstracts the underlying database implementation and allows developers to interact with the database using objects rather than SQL statements. It supports multiple database backends, including PostgreSQL, MySQL, SQLite, and Microsoft SQL Server, among others. If you need a database that is not yet supported, please open a ticket on github.com.

The library provides a set of high-level APIs for interacting with databases. It allows developers to map database tables to Go structs and to interact with the database using objects. The library also provides an intuitive and chainable API for querying the database, similar to the structure of SQL statements, but without requiring developers to write SQL code directly.

xdominion uses a set of interfaces to abstract the database operations, making it easy to use different database backends with the same code. The library supports transactions, allowing developers to perform multiple database operations in a single transaction. The xdominion library uses reflection to map Go structs to database tables, and also allows developers to specify custom column names and relationships between tables.

Overall, xdominion provides a simple and intuitive way to interact with databases using objects and abstracts the underlying database implementation. It is a well-designed library with a clear API and support for multiple database backends.

1. Overview
------------------------

XDominion is a database abstraction layer, used to build and use objects of data instead of building SQL queries. The code is portable between databases without changing the implementation, since you don't write database-specific SQL statements.

The library is built on three main objects:

- XBase: database connector and cursors to build queries and manipulation language
  - Other included objects: XCursor
- XTable: the table definition, data access functions/structures and definition manipulation language
  - Other included objects: XField*, XConstraints, XConstraint, XOrderby, XConditions, XCondition
- XRecord: the results and data to interchange with the database
  - Other included objects: XRecords

2. Some example code to start working rapidly:
------------------------

Creates the connector to the database and connects:

```
```

Executes a direct query:

```
```

Creates a table definition:

```
t := xdominion.NewXTable("test", "t_")
t.AddField(xdominion.XFieldText{Name: "f3"})
t.AddField(xdominion.XFieldDate{Name: "f4"})
t.AddField(xdominion.XFieldDateTime{Name: "f5"})
t.AddField(xdominion.XFieldFloat{Name: "f6"})
t.SetBase(base)
```

Synchronizes the table with the DB (creates it if it does not exist):

```
```

Some inserts:

```
```

With an error (f2 is mandatory based on the table definition):

```
```

General query (select ALL):

```
```

Query by key:

```
```

Query by where:

```
```

Transactions:

```
tx, err := base.BeginTransaction()
res1, err := tb.Insert(XRecord{"f1": 5, "f2": "Data line 1"}, tx)
res2, err := tb.Update(2, XRecord{"f1": 5, "f2": "Data line 1"}, tx)
res3, err := tb.Delete(3, tx)
// Note that the transaction is always passed as a parameter to the insert, update, delete operations
tx.Commit()
```

3. Reference
------------------------

XBase
-----

The xbase package in xdominion provides a set of functions for working with relational databases in Go.
Here is a reference manual for the package:

Constants

- VERSION: A constant string that represents the version of XDominion.
- DB_Postgres: A constant string that represents the PostgreSQL database.
- DB_MySQL: A constant string that represents the MySQL database.
- DB_Localhost: A constant string that represents the local host.

Variables

- DEBUG: A boolean variable used to enable/disable debug mode.

Structs

XBase

- DB: A pointer to an instance of sql.DB, representing the database connection.
- Logged: A boolean indicating whether the database connection has been established.
- DBType: A string representing the type of database being used.
- Username: A string representing the username for the database connection.
- Password: A string representing the password for the database connection.
- Database: A string representing the name of the database being connected to.
- Host: A string representing the host for the database connection.
- SSL: A boolean indicating whether to use SSL for the database connection.
- Logger: A pointer to a logger for debugging purposes.

XTransaction

- DB: A pointer to an instance of XBase, representing the database connection.
- TX: A pointer to an instance of sql.Tx, representing a transaction.

Functions

Logon() establishes a connection to the database.

```
func (b *XBase) Logon()
```

Logoff() closes the database connection.

```
func (b *XBase) Logoff()
```

Exec() executes a SQL query on the database and returns a cursor.

```
func (b *XBase) Exec(query string, args ...interface{}) (*sql.Rows, error)
```

Two typical flows (both are sketched in code at the end of this reference):

To insert a record, we first create a new instance of the xdominion.XBase struct with the connection details of the database we want to connect to. We then call the Logon() method of the XBase struct to establish a connection to the database. Next, we define an SQL query to insert a new user into the users table, and call the Exec() method of the XBase struct with the query and the values we want to insert. The Exec() function returns a cursor, which we don't need in this example, so we ignore it using the blank identifier (_). If there's an error executing the query, we print an error message to the console. Finally, we close the database connection by calling the Logoff() method of the XBase struct. Note that this is just a simple example, and you should always make sure to properly handle errors and sanitize user input when working with databases.

To select records, we again create a new instance of the xdominion.XBase struct with the connection details and call Logon() to establish a connection. Next, we define an SQL query to select a user from the users table with the id equal to 1. We then call the Exec() method of the XBase struct with the query and the value we want to use for the id parameter. The Exec() function returns a cursor that we can iterate over to get the results of the query. We use a for loop to iterate over the rows returned by the Exec() function. Inside the loop, we use the Scan() method of the rows object to read the values of the name and email columns into variables. We then print the values of these variables to the console.
If there's an error executing the query or reading a row, we print an error message to the console. Finally, we close the rows object and the database connection by calling the Close() and Logoff() methods, respectively. Note that this is just a simple example, and you should always make sure to properly handle errors and sanitize user input when working with databases.

Cursor() returns a new instance of Cursor, which provides methods for working with database records.

```
func (b *XBase) Cursor() *Cursor
```

BeginTransaction() starts a new transaction on the database.

```
func (b *XBase) BeginTransaction() (*XTransaction, error)
```

Commit() commits a transaction to the database.

```
func (t *XTransaction) Commit() error
```

Rollback() rolls back a transaction on the database.

```
func (t *XTransaction) Rollback() error
```

Notes

- The Logon() function must be called before using any other functions in the xbase package.
- The Logoff() function should be called when finished using the database connection.
- The Exec() function should be used for executing arbitrary SQL queries.
- The Cursor() function should be used for performing CRUD operations on database records.
- The BeginTransaction(), Commit(), and Rollback() functions should be used for transactions.

Note that this is just a brief overview of the xbase package. For more information and examples, please refer to the documentation in the xdominion GitHub repository: https://github.com/webability-go/xdominion.

To connect, create a new instance of the xdominion.XBase struct, which represents a database connection. The XBase struct provides methods for interacting with the database, such as querying, inserting, updating, and deleting records. Here, &xdominion.XBase{} is the instance of the XBase struct, and the properties of the struct are set to the database connection details. The DBType property specifies the type of database being used, Username and Password specify the username and password for the database connection, Database specifies the name of the database being connected to, Host specifies the host for the database connection, and SSL specifies whether to use SSL for the database connection.

Use the Logon() method of the XBase struct to connect to the database:

```
base.Logon()
```

The Logon() method establishes a connection to the database using the details provided in the XBase struct. Note that this is just a simple example, and the XBase library provides many more features for working with databases using objects. You can find more information and examples in the xdominion GitHub repository: https://github.com/webability-go/xdominion.

XTable definition
-----------------

XTable operations
-----------------

XRecord
-------

XRecords
--------

Conditions
----------

Orderby
-------

Fields
------

Limits
------

Groupby
-------

Having
------
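The two flows described above, combined into one hedged sketch. It uses only the XBase fields, constants, and method signatures listed in this reference; the table and column names are illustrative, and the placeholder syntax follows the underlying driver (Postgres here):

```
package main

import (
	"fmt"

	"github.com/webability-go/xdominion"
)

func main() {
	base := &xdominion.XBase{
		DBType:   xdominion.DB_Postgres,
		Username: "username",
		Password: "password",
		Database: "test",
		Host:     xdominion.DB_Localhost,
		SSL:      false,
	}
	base.Logon()
	defer base.Logoff()

	// Insert a new user; the returned cursor is not needed here.
	_, err := base.Exec("INSERT INTO users (name, email) VALUES ($1, $2)", "John Doe", "john@example.com")
	if err != nil {
		fmt.Println("insert error:", err)
		return
	}

	// Select a user and iterate over the returned rows.
	rows, err := base.Exec("SELECT name, email FROM users WHERE id = $1", 1)
	if err != nil {
		fmt.Println("query error:", err)
		return
	}
	defer rows.Close()

	for rows.Next() {
		var name, email string
		if err := rows.Scan(&name, &email); err != nil {
			fmt.Println("scan error:", err)
			return
		}
		fmt.Println(name, email)
	}
}
```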
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. esquery provides a method-chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client, nor does it require you to change your existing code in order to integrate the library. Queries can be built directly with `esquery` and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate the full, nested form. This is also true for queries such as "bool", where fields like "must" can either receive one query object or an array of query objects. `esquery` will generate an array even if there's only one query object.
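A hedged sketch of the chaining style, using the term query from the example above and running it against the official client (the esquery import path and the index name are assumptions):

```
package main

import (
	"context"
	"log"

	"github.com/aquasecurity/esquery" // assumed import path
	"github.com/elastic/go-elasticsearch/v7"
)

func main() {
	es, err := elasticsearch.NewDefaultClient()
	if err != nil {
		log.Fatal(err)
	}

	// Build the {"query": {"term": {"user": "Kimchy"}}} request and
	// execute it directly against the official client.
	res, err := esquery.Search().
		Query(esquery.Term("user", "Kimchy")).
		Run(
			es,
			es.Search.WithContext(context.TODO()),
			es.Search.WithIndex("users"),
		)
	if err != nil {
		log.Fatal(err)
	}
	defer res.Body.Close()
}
```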
Package freegeoip provides an API for searching the geolocation of IP addresses. It uses a database that can be either a local file or a remote resource from a URL. Local databases are monitored by fsnotify and reloaded when the file is either updated or overwritten. Remote databases are automatically downloaded and updated in the background so you can focus on using the API and not managing the database.
Package restlayer is an API framework heavily inspired by the excellent Python Eve (http://python-eve.org/). It helps you create a comprehensive, customizable, and secure REST (graph) API on top of pluggable backend storages with no boilerplate code, so you can focus on your business logic. Implemented as a net/http middleware, it plays well with other middleware like CORS (http://github.com/rs/cors) and is net/context aware thanks to xhandler. REST Layer is an opinionated framework. Unlike many API frameworks, you don't directly control the routing and you don't have to write handlers. You just define resources and sub-resources with a schema, and the framework automatically figures out what routes to generate behind the scenes. You don't have to take care of the HTTP headers and response, JSON encoding, etc. either. REST Layer handles HTTP conditional requests, caching, and integrity checking for you. A powerful and extensible validation engine makes sure that data comes pre-validated to your custom storage handlers. Generic resource handlers for MongoDB (http://github.com/piotrekmonko/rest-layer-mongo), ElasticSearch (http://github.com/piotrekmonko/rest-layer-es) and other databases are also available, so you have little to no code to write to make the whole system work. Moreover, REST Layer lets you create a graph API by linking resources together. Thanks to its advanced field selection syntax (and coming support of GraphQL), you can gather resources and their dependencies in a single request, saving you from costly network roundtrips. REST Layer is composed of several sub-packages: See https://github.com/piotrekmonko/rest-layer/blob/master/README.md for full REST Layer documentation.
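A hedged sketch of defining a resource schema and binding it to an index, loosely following the project README. The sub-package import paths, field definitions, resource name, and the in-memory storer are assumptions:

```
package main

import (
	"log"
	"net/http"

	"github.com/piotrekmonko/rest-layer/resource"             // assumed fork paths
	"github.com/piotrekmonko/rest-layer/resource/testing/mem" // in-memory storer for demos
	"github.com/piotrekmonko/rest-layer/rest"
	"github.com/piotrekmonko/rest-layer/schema"
)

func main() {
	// A resource is just a schema bound to a storage handler.
	user := schema.Schema{
		Fields: schema.Fields{
			"id": schema.IDField,
			"name": {
				Required:   true,
				Filterable: true,
				Validator:  &schema.String{MaxLen: 150},
			},
		},
	}

	index := resource.NewIndex()
	// Routes for /users are generated automatically from this binding.
	index.Bind("users", user, mem.NewHandler(), resource.DefaultConf)

	api, err := rest.NewHandler(index)
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.ListenAndServe(":8080", api))
}
```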
Package types implements several types for dealing with REST APIs and databases. UUIDs are very useful, but you often need to attach context to them; e.g. you cannot look at a UUID and know whether it points to a record in the accounts table or in the messages table. A PrefixUUID solves this problem by embedding the additional useful information as part of the string. If we had to write this value to the database as a string, it would take up 43 bytes. Instead we use a UUID type and strip the prefix before saving it. The converse, Value(), only returns the UUID part by default, since this is the only thing the database knows about. You can also attach the prefix manually in your SQL, like so: This will get parsed as part of the Scan(), and then you don't need to do anything. Alternatively, you can attach the prefix in your model, immediately after the query. A NullString is like the null string in `database/sql`, but can additionally be encoded/decoded via JSON. A NullTime behaves exactly like NullString, but the value is a time.Time.
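A hedged sketch of the Scan path described above: the prefix is concatenated in SQL so the driver returns the full prefixed string, and Scan() splits it back out into the PrefixUUID. The import path, table, column, and prefix are assumptions:

```
package main

import (
	"database/sql"
	"fmt"
	"log"

	"github.com/kevinburke/go-types" // assumed import path; package name is "types"
	_ "github.com/lib/pq"
)

type Account struct {
	ID types.PrefixUUID
}

func main() {
	db, err := sql.Open("postgres", "dbname=app sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var a Account
	// Attach the prefix in SQL so Scan() parses it back into the PrefixUUID.
	err = db.QueryRow("SELECT 'usr_' || id FROM accounts LIMIT 1").Scan(&a.ID)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(a.ID) // e.g. usr_6740b44e-13b9-475d-af06-979627e0e0d6
}
```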
Package kivik provides a generic interface to CouchDB or CouchDB-like databases. The kivik package must be used in conjunction with a database driver. The officially supported drivers are: The Filesystem and Memory drivers are also available, but in early stages of development, and so many features do not yet work: The kivik driver system is modeled after the standard library's `sql` and `sql/driver` packages, although the client API is completely different due to the different database models implemented by SQL and NoSQL databases such as CouchDB. CouchDB stores JSON, so Kivik translates Go data structures to and from JSON as necessary. The conversion between Go data types and JSON, and vice versa, is handled automatically according to the rules and behavior described in the documentation for the standard library's `encoding/json` package (https://golang.org/pkg/encoding/json). One would be well-advised to become familiar with using `json` struct field tags (https://golang.org/pkg/encoding/json/#Marshal) when working with JSON documents. Most Kivik methods take `context.Context` as their first argument. This allows the cancellation of blocking operations in the case that the result is no longer needed. A typical use case for a web application would be to cancel a Kivik request if the remote HTTP client has disconnected, rendering the results of the query irrelevant. To learn more about Go's contexts, read the `context` package documentation (https://golang.org/pkg/context/) and read the Go blog post "Go Concurrency Patterns: Context" (https://blog.golang.org/context) for example code. If in doubt, you can pass `context.TODO()` as the context variable. Example: Kivik returns errors that embed an HTTP status code. In most cases, this is the HTTP status code returned by the server. The embedded HTTP status code may be accessed easily using the StatusCode() method, or with a type assertion to `interface { StatusCode() int }`. Example: Any error that does not conform to this interface will be assumed to represent a http.StatusInternalServerError status code. For common usage, authentication should be as simple as including the authentication credentials in the connection DSN. For example: This will connect to `localhost` on port 5984, using the username `admin` and the password `abc123`. When connecting to CouchDB (as in the above example), this will use cookie auth (https://docs.couchdb.org/en/stable/api/server/authn.html?highlight=cookie%20auth#cookie-authentication). Depending on which driver you use, there may be other ways to authenticate, as well. At the moment, the CouchDB driver is the only official driver which offers additional authentication methods. Please refer to the CouchDB package documentation for details (https://pkg.go.dev/github.com/IG-Soft/couchdb/v3). With a client handle in hand, you can create a database handle with the DB() method to interact with a specific database.
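A hedged sketch of connecting with credentials in the DSN, obtaining a DB handle, and checking an error's embedded status code (written against a v3-style API; the kivik import path, database name, and document ID are assumptions):

```
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"

	_ "github.com/IG-Soft/couchdb/v3" // the CouchDB driver referenced above
	kivik "github.com/go-kivik/kivik/v3" // assumed import path
)

func main() {
	ctx := context.TODO()

	// Credentials in the DSN are used for cookie auth against CouchDB.
	client, err := kivik.New("couch", "http://admin:abc123@localhost:5984/")
	if err != nil {
		log.Fatal(err)
	}

	db := client.DB(ctx, "airlines")

	var doc map[string]interface{}
	row := db.Get(ctx, "some-doc-id")
	if err := row.ScanDoc(&doc); err != nil {
		// Errors embed an HTTP status code, accessible via type assertion.
		if sc, ok := err.(interface{ StatusCode() int }); ok && sc.StatusCode() == http.StatusNotFound {
			fmt.Println("document does not exist")
			return
		}
		log.Fatal(err)
	}
	fmt.Println(doc)
}
```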
Package bartlett automatically generates an API from your database schema.
Package firebasedb implements a REST client for the Firebase Realtime Database (https://firebase.google.com/docs/database/). The API is as close as possible to the official JavaScript API. Similar / related project: Reference / documentation: This package uses the "Advanced Go Concurrency Patterns" presented by Sameer Ajmani:
Open Source Business Management Framework Nervatura is a business management framework. It can handle any type of business-related information, starting from customer details, up to shipping, stock or payment information. Developed as an open-source project, it can be used freely under the scope of the LGPLv3 License. The framework is based on the Nervatura Object Model (https://nervatura.github.io/nervatura/docs/model) specification. It is a general open-data model, which can store all information generated in the operation of a usual corporation. The Nervatura service is small and fast. A single ~6 MB file contains all the necessary dependencies. The framework includes:

• CLI API (https://nervatura.github.io/nervatura/docs/service/cli#cli-api) (command line)
• CGO API (https://nervatura.github.io/nervatura/docs/service/cli#cgo-api) (C shared library)
• standard HTTP OPEN API (https://nervatura.github.io/nervatura/docs/service/api) for client communication
• HTTP/2-based gRPC API (https://nervatura.github.io/nervatura/docs/service/grpc) for server-side communication
• JWT generation, external token validation, SSL/TLS support and other HTTP security settings (https://github.com/nervatura/nervatura-service/blob/master/.env.example)
• built-in database drivers for postgres, mysql, mssql, sqlite databases
• a basic report generation library for creating simple PDF documents (eg. order, invoice, etc.) or CSV data files
• sample report templates and REPORT EDITOR (https://nervatura.github.io/nervatura/docs/client/program/editor) GUI
• CLIENT (https://nervatura.github.io/nervatura/docs/client) Web Component application and a basic **ADMIN** interface

The client and report interface supports multilingualism (https://nervatura.github.io/nervatura/docs/start/customization#customize-the-appearance). The framework can be easily extended with additional interfaces and functions in any language. https://nervatura.github.io/nervatura/docs/install https://nervatura.github.io/nervatura/docs/start For more info, see http://www.nervatura.com.
Package cldr exposes types and data from the Unicode CLDR. This package is empty. Each sub-package exposes a string-based type called Code corresponding to an ISO code, which is the entry point of the API. Those types implement interfaces used for input and output from/to JSON (encoding/json.Marshaler, encoding/json.Unmarshaler), command line flags (flag.Value) and SQL databases (database/sql.Scanner, database/sql/driver.Valuer). Data (country names, currencies per country) comes from the Unicode Common Locale Data Repository. Code generators are bundled to update the data from the latest CLDR release. Copyright © 2023 Commerce Technologies, LLC. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
raa is a file container, similar to tar or zip, focused on allowing constant-time random file access with a linear increase in memory consumption. The library implements an API very similar to the Go os package, allowing full control over, and low-level access to, the contained files. raa is based on boltdb, a low-level key/value database for Go.
Package cldr exposes types and data from the Unicode CLDR. This package is empty. Each sub-package exposes a string-based type called Code (country.Code, currency.Code) corresponding to an ISO code, which is the entry point of the API. Those types implement interfaces used for input and output from/to JSON (json.Marshaler, json.Unmarshaler), command line flags (flag.Value) and SQL databases (sql.Scanner, driver.Valuer). Data (country names, currencies per country) comes from the Unicode Common Locale Data Repository (http://cldr.unicode.org/index). Code generators are bundled to update the data from the latest CLDR release. Copyright © 2018 BlueBoard SAS. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
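A hedged sketch of scanning a country code straight from a SQL column, relying on the sql.Scanner implementation mentioned above. The import path, table, and column names are assumptions:

```
package main

import (
	"database/sql"
	"fmt"
	"log"

	"github.com/blueboardio/cldr/country" // assumed import path
	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "dbname=app sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var c country.Code
	// country.Code implements sql.Scanner, so it can be scanned directly.
	err = db.QueryRow("SELECT country FROM users WHERE id = $1", 1).Scan(&c)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(c)
}
```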
Package freegeoip provides an API for searching the geolocation of IP addresses. It uses a database that can be either a local file or a remote resource from a URL. Local databases are monitored by fsnotify and reloaded when the file is either updated or overwritten. Remote databases are automatically downloaded and updated in the background so you can focus on using the API and not managing the database. Also, the freegeoip package provides http handlers that any Go http server (net/http) can use. These handlers can process IP geolocation lookup requests and return data in multiple formats like CSV, XML, JSON and JSONP. It also has an API for supporting custom formats.
Command goat provides an implementation of a BitTorrent tracker, written in Go. goat can be built using Go 1.1+. It can be downloaded, built, and installed, simply by running: In addition, goat depends on a MySQL server for data storage. After creating a database and user for goat, its database schema may be imported from the SQL files located in 'res/'. goat will not run unless MySQL is installed, and a database and user are properly configured for its use. Optionally, goat can be built to use ql (https://github.com/cznic/ql) as its storage backend. This is done by supplying the 'ql' tag in the go get command: A blank ql database file is located under 'res/ql/goat.db', and will be copied to '~/.config/goat/goat.db' on UNIX systems. goat is now able to use ql as its storage backend, for those who do not wish to use an external, MySQL backend. goat is capable of listening for torrent traffic in three modes: HTTP, HTTPS, and UDP. HTTP/HTTPS are the recommended methods, and are required in order for goat to serve its API, and to allow use of private tracker passkeys. HTTP is considered the standard mode of operation for goat. HTTP allows gathering a great number of metrics, use of passkeys, use of a client whitelist, and access to goat's RESTful API, when configured. For most trackers, this will be the only listener which is necessary in order for goat to function properly. The HTTPS listener provides a method to encrypt traffic to the tracker, but must be used with caution. Unless the SSL certificate in use is signed by a proper certificate authority, it will distress most clients, and they may outright refuse to announce to it. If you are in possession of a certificate signed by a certificate authority, this mode may be more ideal, as it provides added security for your clients. The UDP listener is the most unusual method of the three, and should only be used for public trackers. The BitTorrent UDP tracker protocol specifies a very specific packet format, meaning that additional information or parameters cannot be packed into a UDP datagram in a standard way. The UDP tracker may be the fastest and least bandwidth-intensive, but as stated, should only be used for public trackers. A new feature added to goat in order to allow better interoperability with many languages is a RESTful API, which is served using the HTTP or HTTPS listeners. This API enables easy retrieval of tracker statistics, while allowing goat to run as a completely independent process. It should be noted that the API is only enabled when configured, and when an HTTP or HTTPS listener is enabled. Without a transport mechanism, the API will be inaccessible. Currently, the API is read-only, and only allows use of the HTTP GET method. This may change in the future, but as of now, it doesn't make any sense to modify tracker parameters without doing a proper announce or scrape via a BitTorrent client. The API will feature several modes of authentication, including HTTP Basic and HMAC-SHA1. For the time being, only HTTP Basic is implemented. This method makes use of a username/password pair using the user's username, and an API key as the password. This list contains all API calls currently recognized by goat. Each call must be authenticated using the aforementioned methods. Retrieve a list of all files tracked by goat. Some extended attributes are not added to reduce strain on the database, and to provide a more general overview. Retrieve extended attributes about a specific file with matching ID.
This provides counts for the number of completions, seeders, and leechers, and a list of fileUser relationships associated with a given file. Retrieve a variety of metrics about the current status of goat, including its PID, hostname, memory usage, number of HTTP/UDP hits, etc. goat is configured using a JSON file, which will be created under '~/.config/goat/config.json' on UNIX systems. Here is an example configuration, describing the settings available to the user.
Package CloudForest implements ensembles of decision trees for machine learning in pure Go (golang to search engines). It allows for a number of related algorithms for classification, regression, feature selection and structure analysis on heterogeneous numerical/categorical data with missing values. These include: Breiman and Cutler's Random Forest for Classification and Regression Adaptive Boosting (AdaBoost) Classification Gradient Boosting Tree Regression Entropy and Cost driven classification L1 regression Feature selection with artificial contrasts Proximity and model structure analysis Roughly balanced bagging for unbalanced classification The API hasn't stabilized yet and may change rapidly. Tests and benchmarks have been performed only on embargoed data sets and cannot yet be released. Library Documentation is in code and can be viewed with godoc or live at: http://godoc.org/github.com/IlyaLab/CloudForest Documentation of command line utilities and file formats can be found in README.md, which can be viewed formatted on github: http://github.com/IlyaLab/CloudForest Pull requests and bug reports are welcome. CloudForest was created by Ryan Bressler and is being developed in the Shmulevich Lab at the Institute for Systems Biology for use on genomic/biomedical data with partial support from The Cancer Genome Atlas and the Inova Translational Medicine Institute. CloudForest is intended to provide fast, comprehensible building blocks that can be used to implement ensembles of decision trees. CloudForest is written in Go to allow a data scientist to develop and scale new models and analysis quickly instead of having to modify complex legacy code. Data structures and file formats are chosen with use in multi-threaded and cluster environments in mind. Go's support for function types is used to provide an interface to run code as data is percolated through a tree. This method is flexible enough that it can extend the tree being analyzed. Growing a decision tree using Breiman and Cutler's method can be done in an anonymous function/closure passed to a tree's root node's Recurse method: This allows a researcher to include whatever additional analysis they need (importance scores, proximity etc) in tree growth. The same Recurse method can also be used to analyze existing forests to tabulate scores or extract structure. Utilities like leafcount and errorrate use this method to tabulate data about the tree in collection objects. Decision trees are grown with the goal of reducing "Impurity", which is usually defined as Gini Impurity for categorical targets or mean squared error for numerical targets. CloudForest grows trees against the Target interface which allows for alternative definitions of impurity. CloudForest includes several alternative targets: Additional targets can be stacked on top of these targets to add boosting functionality: Repeatedly splitting the data and searching for the best split at each node of a decision tree are the most computationally intensive parts of decision tree learning and CloudForest includes optimized code to perform these tasks. Go's slices are used extensively in CloudForest to make it simple to interact with optimized code. Many previous implementations of Random Forest have avoided reallocation by reordering data in place and keeping track of start and end indexes. In Go, slices pointing at the same underlying arrays make this sort of optimization transparent.
For example, a function like: can return left and right slices that point to the same underlying array as the original slice of cases, but these slices should not have their values changed. Functions used while searching for the best split also accept pointers to reusable slices and structs to maximize speed by keeping memory allocations to a minimum. BestSplitAllocs contains pointers to these items and its use can be seen in functions like: For categorical predictors, BestSplit will also attempt to intelligently choose between 4 different implementations depending on user input and the number of categories. These include exhaustive, random, and iterative searches for the best combination of categories implemented with bitwise operations against int and big.Int. See BestCatSplit, BestCatSplitIter, BestCatSplitBig and BestCatSplitIterBig. All numerical predictors are handled by BestNumSplit which relies on Go's sorting package. Training a random forest is an inherently parallel process and CloudForest is designed to allow parallel implementations that can tackle large problems while keeping memory usage low by writing and using data structures directly to/from disk. Trees can be grown in separate go routines. The growforest utility provides an example of this that uses go routines and channels to grow trees in parallel and write trees to disk as they are finished by the "worker" go routines. The few summary statistics like mean impurity decrease per feature (importance) can be calculated using thread-safe data structures like RunningMean. Trees can also be grown on separate machines. The .sf stochastic forest format allows several small forests to be combined by concatenation and the ForestReader and ForestWriter structs allow these forests to be accessed tree by tree (or even node by node) from disk. For data sets that are too big to fit in memory on a single machine, Tree.Grow and FeatureMatrix.BestSplitter can be reimplemented to load candidate features from disk, a distributed database etc. By default, CloudForest uses a fast heuristic for missing values. When proposing a split on a feature with missing data the missing cases are removed and the impurity value is corrected to use three way impurity which reduces the bias towards features with lots of missing data: Missing values in the target variable are left out of impurity calculations. This provided generally good results at a fraction of the computational costs of imputing data. Optionally, feature.ImputeMissing or featurematrix.ImputeMissing can be called before forest growth to impute missing values to the feature mean/mode which Breiman [2] suggests as a fast method for imputing values. This forest could also be analyzed for proximity (using leafcount or tree.GetLeaves) to do the more accurate proximity weighted imputation Breiman describes. Experimental support is provided for 3-way splitting which splits missing cases onto a third branch. [2] This has so far yielded mixed results in testing. At some point in the future support may be added for local imputing of missing values during tree growth as described in [3] [1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#missing1 [2] https://code.google.com/p/rf-ace/ [3] http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aoas/1223908043&page=record In CloudForest data is stored using the FeatureMatrix struct which contains Features.
The Feature struct implements storage and methods for both categorical and numerical data, calculations of impurity etc., and the search for the best split. The Target interface abstracts the methods of Feature that are needed for a feature to be predictable. This allows for the implementation of alternative types of regression and classification. Trees are built from Nodes and Splitters and stored within a Forest. Tree has a Grow method that implements Breiman and Cutler's method (see extract above) for growing a tree. A GrowForest method is also provided that implements the rest of the method, including sampling cases, but it may be faster to grow the forest to disk as in the growforest utility. Prediction and voting are done using Tree.Vote and CatBallotBox and NumBallotBox, which implement the VoteTallyer interface.
Package rid provides a performant, k-sortable, scalable unique ID generator suitable for applications where ID generation coordination between machines or other processes is not required. ID generation is goroutine safe and scales well with CPU cores. Providing unique, non-sequential keys for embeddable databases like SQLite or BoltDB, or for key-value stores, is a typical use case. Binary IDs Base32-encode as a 16-character, URL- and human-friendly representation like dfp7qt0v2pwt0v2x. The 10-byte binary representation of an ID is comprised of: Key features: Example usage: Acknowledgement: This source file is based on work in package github.com/rs/xid, a zero-configuration globally-unique ID generator. See LICENSE.rs-xid. The same API has been maintained.
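A hedged usage sketch, on the assumption that rid keeps the xid-style New/String/Bytes API mentioned in the acknowledgement (the import path is an assumption):

```
package main

import (
	"fmt"

	"github.com/solutionroute/rid" // assumed import path
)

func main() {
	// New is goroutine safe, as described above.
	id := rid.New()

	// Base32 string form, e.g. dfp7qt0v2pwt0v2x.
	fmt.Println(id.String())

	// The underlying 10-byte binary value.
	fmt.Printf("%x\n", id.Bytes())
}
```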
Package gupnp provides an API and GUI to control DLNA/UPnP devices like network TVs and radios. Its goal is to locate media servers (with files) and media players. You can then send commands to players (volume, pause...) and let them play the selected music or video content. It can also be used as just a limited remote controller for supported renderers. A media server is a database of multimedia content that other devices can play media from. A media renderer plays stuff; that is, it makes sound and, in required cases, shows moving images. A control device works as a remote control: it can play, stop, skip, pause, and change loudness, brightness, etcetera. The manager keeps a server and a renderer as current target devices to use for fast user actions.
Package firestore provides a client for reading and writing to a Cloud Firestore database. See https://cloud.google.com/firestore/docs for an introduction to Cloud Firestore and additional help on using the Firestore API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Note: you can't use both Cloud Firestore and Cloud Datastore in the same project. To start working with this package, create a client with a project ID: In Firestore, documents are sets of key-value pairs, and collections are groups of documents. A Firestore database consists of a hierarchy of alternating collections and documents, referred to by slash-separated paths like "States/California/Cities/SanFrancisco". This client is built around references to collections and documents. CollectionRefs and DocumentRefs are lightweight values that refer to the corresponding database entities. Creating a ref does not involve any network traffic. Use DocumentRef.Get to read a document. The result is a DocumentSnapshot. Call its Data method to obtain the entire document contents as a map. You can also obtain a single field with DataAt, or extract the data into a struct with DataTo. With the type definition we can extract the document's data into a value of type State: Note that this client supports struct tags beginning with "firestore:" that work like the tags of the encoding/json package, letting you rename fields, ignore them, or omit their values when empty. To retrieve multiple documents from their references in a single call, use Client.GetAll. For writing individual documents, use the methods on DocumentReference. Create creates a new document. The first return value is a WriteResult, which contains the time at which the document was updated. Create fails if the document exists. Another method, Set, either replaces an existing document or creates a new one. To update some fields of an existing document, use Update. It takes a list of paths to update and their corresponding values. Use DocumentRef.Delete to delete a document. You can condition Deletes or Updates on when a document was last changed. Specify these preconditions as an option to a Delete or Update method. The check and the write happen atomically with a single RPC. Here we update a doc only if it hasn't changed since we read it. You could also do this with a transaction. To perform multiple writes at once, use a WriteBatch. Its methods chain for convenience. WriteBatch.Commit sends the collected writes to the server, where they happen atomically. You can use queries to select documents from a collection. Begin with the collection, and build up a query using Select, Where and other methods of Query. Supported operators include `<`, `<=`, `>`, `>=`, `==`, and `array-contains`. Call the Query's Documents method to get an iterator, and use it like the other Google Cloud Client iterators. To get all the documents in a collection, you can use the collection itself as a query. Use a transaction to execute reads and writes atomically. All reads must happen before any writes. Transaction creation, commit, rollback and retry are handled for you by the Client.RunTransaction method; just provide a function and use the read and write methods of the Transaction passed to it.
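A condensed sketch of the create/read/query flow described above (the project ID, collection and field names, and the State type are illustrative):

```
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/firestore"
	"google.golang.org/api/iterator"
)

type State struct {
	Capital    string  `firestore:"capital"`
	Population float64 `firestore:"pop"` // in millions
}

func main() {
	ctx := context.Background()
	client, err := firestore.NewClient(ctx, "my-project-id")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ca := client.Collection("States").Doc("California")

	// Create fails if the document already exists.
	if _, err := ca.Create(ctx, State{Capital: "Sacramento", Population: 39.14}); err != nil {
		log.Fatal(err)
	}

	// Read it back and extract the data into a struct.
	snap, err := ca.Get(ctx)
	if err != nil {
		log.Fatal(err)
	}
	var s State
	if err := snap.DataTo(&s); err != nil {
		log.Fatal(err)
	}
	fmt.Println(s)

	// Query the collection.
	iter := client.Collection("States").Where("pop", ">", 10).Documents(ctx)
	defer iter.Stop()
	for {
		doc, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(doc.Ref.ID)
	}
}
```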
Package ocfl defines an API for interacting with content in an OCFL repository. Access to OCFL content is provided by one or more Driver implementations. Drivers may interact with a local filesystem, s3, a relational database for quick/indexed lookup, etc. See individual driver documentation under drivers/ for more information.
Package tcell provides a lower-level, portable API for building programs that interact with terminals or consoles. It works with both common (and many uncommon!) terminals or terminal emulators, and Windows console implementations. It provides support for up to 256 colors, text attributes, and box drawing elements. A database of terminals built from a real terminfo database is provided, along with code to generate new database entries. Tcell offers very rich support for mice, dependent upon the terminal of course. (Windows, XTerm, and iTerm 2 are known to work very well.) If the environment is not Unicode by default, such as an ISO8859 based locale or GB18030, Tcell can convert input and output, so that your terminal can operate in whatever locale is most convenient, while the application program can just assume "everything is UTF-8". Reasonable defaults are used for updating characters to something suitable for display. Unicode box drawing characters will be converted to use the alternate character set of your terminal, if native conversions are not available. If no ACS is available, then some ASCII fallbacks will be used. Note that support for non-UTF-8 locales (other than C) must be enabled by the application using RegisterEncoding() -- we don't have them all enabled by default to avoid bloating the application unnecessarily. (These days UTF-8 is good enough for almost everyone, and nobody should be using legacy locales anymore.) Also, actual glyphs for various code points will only be displayed if your terminal or emulator (or the font the emulator is using) supports them. A rich set of keycodes is supported, with support for up to 65 function keys, and various other special keys.
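A small, hedged example of the basic screen life cycle, assuming the github.com/gdamore/tcell import path; the cell content and style are illustrative:

```
package main

import (
	"log"

	"github.com/gdamore/tcell" // assumed import path
)

func main() {
	s, err := tcell.NewScreen()
	if err != nil {
		log.Fatal(err)
	}
	if err := s.Init(); err != nil {
		log.Fatal(err)
	}
	defer s.Fini()

	// Draw a single styled cell; box drawing runes are converted to the
	// terminal's alternate character set or ASCII fallbacks if needed.
	s.SetContent(0, 0, '╔', nil, tcell.StyleDefault.Foreground(tcell.ColorGreen))
	s.Show()

	// Wait for a single event (key, mouse, resize, ...) before exiting.
	s.PollEvent()
}
```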
Package gonm automatically assigns a key from the interface and automatically caches entities in local memory. gonm wraps Google Cloud Datastore. I used https://godoc.org/cloud.google.com/go/datastore and https://godoc.org/github.com/mjibson/goon as a reference. Gonm generates a key from the ID property of a structure, and this key is used for get or put. All structures are populated with the ID property after these methods are used. Also, gonm stores the results in a local cache by default. Therefore, the same fetch can be performed at high speed. It is simple to use gonm. A key consists of an optional parent key, and the parent key is generated from the Parent property of the structure. ID is expected to be int64 or string, and Parent is expected to be a *datastore.Key, like the datastore API. example: create child-parent relationship If you want to use another property as the key ID, you need to put an id tag in the structure tag. The same applies to the parent key and key name. For the parent key, you need to put a parent tag in the structure. For the key kind, you need to put a kind tag in the structure. Gonm returns ErrNoIDField when the id cannot be obtained from the received structure. Check https://godoc.org/cloud.google.com/go/datastore#hdr-Properties to learn more about datastore properties. Of course you can use the PropertyLoadSaver interface. However, be careful because gonm wraps the datastore. Check https://godoc.org/cloud.google.com/go/datastore#hdr-The_PropertyLoadSaver_Interface to learn more about the PropertyLoadSaver interface. Gonm queries are very similar to datastore queries. Gonm uses datastore.Query. Gonm supports Run and GetAll, but I recommend using GetKeysOnly. Gonm.RunInTransaction runs a function in a transaction. To install and set up the emulator and its environment variables, see the documentation at https://cloud.google.com/datastore/docs/tools/datastore-emulator.
Package fetchbot provides a simple and flexible web crawler that follows the robots.txt policies and crawl delays. It is very much a rewrite of gocrawl (https://github.com/PuerkitoBio/gocrawl) with a simpler API, fewer features built-in, but at the same time more flexibility. As for Go itself, sometimes less is more! To install, simply run in a terminal: The package has a single external dependency, robotstxt (https://github.com/temoto/robotstxt-go). It also integrates code from the iq package (https://github.com/kylelemons/iq). The API documentation is available on godoc.org (http://godoc.org/github.com/PuerkitoBio/fetchbot). The following example (taken from /example/short/main.go) shows how to create and start a Fetcher, one way to send commands, and how to stop the fetcher once all commands have been handled. A more complex and complete example can be found in the repository, at /example/full/. Basically, a Fetcher is an instance of a web crawler, independent of other Fetchers. It receives Commands via the Queue, executes the requests, and calls a Handler to process the responses. A Command is an interface that tells the Fetcher which URL to fetch, and which HTTP method to use (i.e. "GET", "HEAD", ...). A call to Fetcher.Start() returns the Queue associated with this Fetcher. This is the thread-safe object that can be used to send commands, or to stop the crawler. Both the Command and the Handler are interfaces, and may be implemented in various ways. They are defined like so: A Context is a struct that holds the Command and the Queue, so that the Handler always knows which Command initiated this call, and has a handle to the Queue. A Handler is similar to the net/http Handler, and middleware-style combinations can be built on top of it. A HandlerFunc type is provided so that simple functions with the right signature can be used as Handlers (like net/http.HandlerFunc), and there is also a multiplexer Mux that can be used to dispatch calls to different Handlers based on some criteria. The Fetcher recognizes a number of interfaces that the Command may implement, for more advanced needs. If the Command implements the BasicAuthProvider interface, a Basic Authentication header will be put in place with the given credentials to fetch the URL. Similarly, the CookiesProvider and HeaderProvider interfaces offer the expected features (setting cookies and header values on the request). The ReaderProvider and ValuesProvider interfaces are also supported, although they should be mutually exclusive as they both set the body of the request. If both are supported, the ReaderProvider interface is used. It sets the body of the request (e.g. for a "POST") using the given io.Reader instance. The ValuesProvider does the same, but using the given url.Values instance, and sets the Content-Type of the body to "application/x-www-form-urlencoded" (unless it is explicitly set by a HeaderProvider). Since the Command is an interface, it can be a custom struct that holds additional information, such as an ID for the URL (e.g. from a database), or a depth counter so that the crawling stops at a certain depth, etc. For basic commands that don't require additional information, the package provides the Cmd struct that implements the Command interface. This is the Command implementation used when using the various Queue.SendString* methods. The Fetcher has a number of fields that provide further customization: - HttpClient : By default, the Fetcher uses the net/http default Client to make requests.
A different client can be set on the Fetcher.HttpClient field. - CrawlDelay : That value is used only if there is no delay specified by the robots.txt of a given host. - UserAgent : Sets the user agent string to use for the requests and to validate against the robots.txt entries. - WorkerIdleTTL : Sets the duration that a worker goroutine can wait without receiving new commands to fetch. If the idle time-to-live is reached, the worker goroutine is stopped and its resources are released. This can be especially useful for long-running crawlers. - DisablePoliteness : If true, disables fetching of robots.txt, effectively forcing the use of the CrawlDelay value between calls to a host. What fetchbot doesn't do - especially compared to gocrawl - is that it doesn't keep track of already visited URLs, and it doesn't normalize the URLs. This is outside the scope of this package - all commands sent on the Queue will be fetched. Normalization can easily be done (e.g. using https://github.com/PuerkitoBio/purell) before sending the Command to the Fetcher. How to keep track of visited URLs depends on the use-case of the specific crawler, but for an example, see /example/full/main.go. The BSD 3-Clause license (http://opensource.org/licenses/BSD-3-Clause), the same as the Go language. The iq_slice.go file is under the CDDL-1.0 license (details in the source file).
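To make the flow concrete, below is a minimal sketch in the spirit of the short example mentioned above (it is not a copy of /example/short/main.go; the URLs and the shutdown sequence are illustrative):

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/PuerkitoBio/fetchbot"
)

func main() {
	// Handler invoked for every response (or fetch error).
	h := fetchbot.HandlerFunc(func(ctx *fetchbot.Context, res *http.Response, err error) {
		if err != nil {
			fmt.Printf("[ERR] %s %s - %v\n", ctx.Cmd.Method(), ctx.Cmd.URL(), err)
			return
		}
		fmt.Printf("[%d] %s %s\n", res.StatusCode, ctx.Cmd.Method(), ctx.Cmd.URL())
	})

	f := fetchbot.New(h)
	q := f.Start() // returns the thread-safe Queue

	// Enqueue a couple of GET commands, then close the queue and wait until
	// all pending commands have been handled.
	q.SendStringGet("http://golang.org", "http://golang.org/pkg")
	q.Close()
	q.Block()
}
```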
Package safebrowsing implements a client for the Safe Browsing API v4. API v4 emphasizes efficient usage of the network for bandwidth-constrained applications such as mobile devices. It achieves this by maintaining a small portion of the server state locally, such that some queries can be answered immediately without any network requests. Thus, fewer API calls are made and less bandwidth is used. At a high level, the implementation does the following: Essentially the query is presented to three major components: the database, the cache, and the API. Each of these may satisfy the query immediately, or may say that it does not know and that the query should be satisfied by the next component. The goal of the database and cache is to satisfy as many queries as possible to avoid using the API. Starting with a user query, a hash of the query is performed to preserve privacy regarding the exact nature of the query. For example, if the query was for a URL, then this would be the SHA256 hash of the URL in question. Given a query hash, we first check the local database (which is periodically synced with the global Safe Browsing API servers). This database will either tell us that the query is definitely safe, or that it does not have enough information. If we are unsure about the query, we check the local cache, which can be used to satisfy queries immediately if the same query had been made recently. The cache will tell us that the query is either safe, unsafe, or unknown (because it's not in the cache or the entry has expired). If we are still unsure about the query, then we finally query the API server, which is guaranteed to return an authoritative answer, assuming no networking failures. For more information, see the API developer's guide:
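A minimal sketch of a lookup from the client's perspective follows. The constructor name and the Config field names (APIKey, DBPath) are stated here as assumptions for illustration; check the package reference for the exact API.

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/safebrowsing" // import path assumed
)

func main() {
	// Config field names are assumptions for illustration.
	sb, err := safebrowsing.NewSafeBrowser(safebrowsing.Config{
		APIKey: "YOUR_API_KEY",
		DBPath: "/tmp/safebrowsing.db", // local database, synced periodically
	})
	if err != nil {
		log.Fatal(err)
	}
	defer sb.Close()

	// Each URL is hashed locally; the database and cache answer what they can,
	// and only the remaining queries are sent to the API.
	threats, err := sb.LookupURLs([]string{"http://example.com/"})
	if err != nil {
		log.Fatal(err)
	}
	for i, t := range threats {
		fmt.Println(i, t)
	}
}
```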
Package passwap provides a unified implementation between different password hashing algorithms. It allows for easy swapping between algorithms, using the same API for all of them. Passwords hashed with passwap, using a certain algorithm and parameters, can be stored in a database. If at a later moment the parameters or even the algorithm are changed, passwap is still able to verify the "outdated" hashes and automatically return an updated hash when applicable. Only when an updated hash is returned does the record in the database need to be updated. Resulting password hashes are encoded using dollar sign ($) notation. Its origin lies in Glibc, but there is no clear standard on the matter. For passwap, we chose to follow suit with Python's passlib identifiers to be (hopefully) as portable as possible. Supplemental information can be found at: Glibc: https://man.archlinux.org/man/crypt.5; Passlib "Modular Crypt Format": https://passlib.readthedocs.io/en/stable/modular_crypt_format.html; Password Hashing Competition string format: https://github.com/P-H-C/phc-string-format/blob/master/phc-sf-spec.md;
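A sketch of the hash-then-verify-then-update flow described above follows. The sub-package layout, the NewSwapper constructor, and the parameter set names are assumptions for illustration.

```go
package main

import (
	"fmt"
	"log"

	"github.com/zitadel/passwap"        // import paths, constructors and parameter
	"github.com/zitadel/passwap/argon2" // names are assumptions for illustration
)

func main() {
	// New passwords are hashed with argon2id; additional verifiers for other
	// (legacy) algorithms can be passed to NewSwapper as extra arguments.
	s := passwap.NewSwapper(argon2.NewArgon2id(argon2.RecommendedIDParams))

	stored, err := s.Hash("good_password") // store this $-notation encoded hash
	if err != nil {
		log.Fatal(err)
	}

	// Later: verify. If the stored hash used outdated parameters or an older
	// algorithm, Verify returns a re-hashed replacement to persist.
	updated, err := s.Verify(stored, "good_password")
	if err != nil {
		log.Fatal(err)
	}
	if updated != "" {
		stored = updated // only now does the database record need updating
	}
	fmt.Println(stored)
}
```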
Package tcell provides a lower-level, portable API for building programs that interact with terminals or consoles. It works with both common (and many uncommon!) terminals or terminal emulators, and Windows console implementations. It provides support for up to 256 colors, text attributes, and box drawing elements. A database of terminals built from a real terminfo database is provided, along with code to generate new database entries. Tcell offers very rich support for mice, dependent upon the terminal of course. (Windows, XTerm, and iTerm 2 are known to work very well.) If the environment is not Unicode by default, such as an ISO8859 based locale or GB18030, Tcell can convert input and output, so that your terminal can operate in whatever locale is most convenient, while the application program can just assume "everything is UTF-8". Reasonable defaults are used for updating characters to something suitable for display. Unicode box drawing characters will be converted to use the alternate character set of your terminal, if native conversions are not available. If no ACS is available, then some ASCII fallbacks will be used. Note that support for non-UTF-8 locales (other than C) must be enabled by the application using RegisterEncoding() -- we don't have them all enabled by default to avoid bloating the application unnecessarily. (These days UTF-8 is good enough for almost everyone, and nobody should be using legacy locales anymore.) Also, actual glyphs for various code points will only be displayed if your terminal or emulator (or the font the emulator is using) supports them. A rich set of keycodes is supported, with support for up to 65 function keys, and various other special keys.
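As a quick orientation, here is a minimal sketch that initializes a screen, draws a line of text, and waits for a key press (the v2 import path is assumed; adjust for the version in use):

```go
package main

import (
	"log"

	"github.com/gdamore/tcell/v2"
)

func main() {
	s, err := tcell.NewScreen()
	if err != nil {
		log.Fatal(err)
	}
	if err := s.Init(); err != nil {
		log.Fatal(err)
	}
	defer s.Fini() // restore the terminal on exit

	style := tcell.StyleDefault.Foreground(tcell.ColorGreen)
	msg := "Hello, tcell! (press any key to exit)"
	for i, r := range msg {
		s.SetContent(i, 0, r, nil, style)
	}
	s.Show()

	// Block until a key event arrives, then exit.
	for {
		switch s.PollEvent().(type) {
		case *tcell.EventKey:
			return
		}
	}
}
```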
sasq (Shadowserver AS Query) is a small library to query the Shadowserver IP-BGP Whois database using the bulk query API.
Package ksi implements functionality for interacting with the KSI service, including core functions such as signing data, and extending and verifying KSI signatures. Note that the following tutorial is incremental, meaning the parameter names used in example code blocks are defined in previous example blocks. The subpackage log defines the logging interface type log.Logger and a basic logger implementation for writing lines to a file. By default logging is disabled. In order to enable logging of the API internals, a logger implementation has to be registered in the log package, e.g. by setting the default logger: In order to disable logging, set the logger to nil. Almost every method of the API returns an error parameter alongside a value (if applicable). All returned errors are of type errors.KsiError. For troubleshooting, the KsiError provides the following information: Example usage of the KsiError: It is strongly advised to verify the returned error. In case it is not nil, it most probably indicates a fatal state and requires some sort of recovery logic. Furthermore, all foreseen panics are wrapped into KsiError and returned via a function's error return parameter. For simplicity, the error handling in this tutorial is mostly omitted. A signature instance can be created in several ways by providing a suitable initializer to the signature constructor. Some of the initializers are quite straightforward and do not require further explanation; the ones below, however, will be explained in more depth. A low-level signing (aggregation) request is answered by the Aggregator server with an aggregation response. In order to initialize a signature instance from an aggregation response, use: A low-level extending request is answered by the Extender server with an extending response. In order to initialize a signature instance from an extending response, use: In order to initialize a signature instance from a locally aggregated tree, use: For a more detailed description of the initializers, refer to their individual documentation. Note that signature.BuildNoVerify must be used with care, as the returned signature instance will not be verified for internal consistency. The common use case is to initialize an erroneous KSI signature for troubleshooting. To save the signature to a file or database, the signature content has to be serialized first. Let's assume the data is provided by an io.Reader implementation (e.g. os.File). KSI defines an imprint structure, which basically represents a hash value and consists of a one-octet hash function identifier concatenated with the hash value itself. The subpackage hash provides such a structure, type hash.Imprint. As only the hash of the original document is signed, we need to create a hash.Imprint object. This can be achieved by using a hash.DataHasher object, which can be created from any registered hash algorithm. We will use hash.Default. For more detailed information about hash algorithms and hashing, see the subpackage hash documentation. A publications file of type publications.File can be constructed using the publications.NewFile() method with an appropriate initializer of type publications.FileBuilder. A more common use case is to construct a publications file handler by calling publications.NewFileHandler() with the desired options of type publications.FileHandlerSetting. To create a new KSI signature for a document hash, a new service.Signer instance has to be constructed. Signing of multiple imprints can be performed in parallel using goroutines.
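The following sketch shows how the document-hashing step described above might look. hash.Default, hash.DataHasher, and hash.Imprint are taken from the text; the import path and the exact constructor/method names (New, Write, Imprint) are assumptions for illustration only.

```go
package main

import (
	"io"
	"log"
	"os"

	"github.com/guardtime/goksi/hash" // import path assumed
)

func main() {
	f, err := os.Open("document.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Create a DataHasher from the default registered algorithm (method names assumed).
	hasher, err := hash.Default.New()
	if err != nil {
		log.Fatal(err)
	}
	// Feed the document through the hasher; only the hash is signed, not the data.
	if _, err := io.Copy(hasher, f); err != nil {
		log.Fatal(err)
	}
	imprint, err := hasher.Imprint()
	if err != nil {
		log.Fatal(err)
	}
	// The imprint (algorithm ID + hash value) would then be passed to a service.Signer.
	log.Printf("imprint: %x", imprint)
}
```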
To extend an existing KSI signature, a new service.Extender instance has to be constructed. Extending of multiple signatures can be performed in parallel using goroutines. Signatures are verified according to one or more policies. A verification policy is a set of ordered rules that verify relevant signature properties. Verifying a signature according to a policy results in one of three possible outcomes: The SDK provides the following predefined policies for verification: Internal policy. This policy verifies the consistency of various internal components of the signature without requiring any additional data from the user. The verified components are the aggregation chain, calendar chain (optional), calendar authentication record (optional) and publication record (optional). Additionally, if a document hash is provided, the signature is verified against it. User-provided publication string based policy. This policy verifies the signature's publication record against the publication string. If necessary (and permitted), the signature is extended to the user publication. For conclusive results the signature must either contain a publication record with a suitable publication or signature extending must be allowed. Additionally, a publication string must be provided and an Extender should be configured (in case extending is permitted). Publications file based policy. This policy verifies the signature's publication record against a publication in the publications file. If necessary (and permitted), the signature is extended to the publication. For conclusive results the signature must either contain a publication record with a suitable publication or signature extending must be allowed. Additionally, a publications file must be provided for lookup and an Extender should be configured (in case extending is permitted). Key-based policy. This policy verifies the PKI signature and calendar chain data in the calendar authentication record of the signature. For conclusive results, a calendar hash chain and calendar authentication record must be present in the signature. A trusted publications file must be provided for performing lookup of a matching certificate. Calendar-based policy. This policy verifies the signature's calendar hash chain against the calendar database. If the calendar hash chain does not exist, the signature is extended to the head and its match with the received calendar hash chain is verified. For conclusive results the Extender must be configured. Note that the input signature is not changed. Default policy. This policy uses the previously mentioned policies in the specified order. Verification starts off with internal verification and, if successful, continues with publication-based and/or key-based verification, depending on the availability of the calendar chain, calendar authentication record or publication record in the signature. The default policy tries all available verification policies until the signature's correctness is proved or disproved, and is thus the recommended policy for verification unless some restriction dictates the use of a specific verification policy. Note that all of the policies perform internal verification as a prerequisite to the specific verification, and a policy will never result in success if internal verification fails. Note that the provided signature is never modified. In case any verification step requires signature extending, only the extended calendar hash chain is retrieved from the Extender service and used for further validation.
For the most basic verification, the returned error parameter of signature.(Signature).Verify() can be checked. However, most probably the result will be an error because of the lack of essential data. The key to conclusive verification is to provide as much data as possible without assuming too much from the signature itself. For most cases this means that a publications file (or handler) and an Extender should be provided. In some cases a permission for using the Extender has to be set as well. If the signature needs to be verified against a specific publication, a publication string has to be provided, etc. In order to specify optional parameters, signature.VerCtxOption should be used: Note that the constructor of a new signature object (signature.New()) will perform verification based on the Internal policy by default, unless signature.BuildNoVerify is used. For a detailed verification result, signature.(Policy).Verify() can be used. In this case a verification context must be set up first. To use a proxy, you need to configure the proxy on your operating system. Set the system environment variable: `http_proxy=user:pass@server:port` In the Windows Control Panel: In Linux, add the system variable to `/etc/bashrc`: Configuring authentication is not supported by the Windows Control Panel and Registry. A more redundant connection to the gateway can be achieved using the HA feature of the service package. An HA service combines multiple other services, sends requests to all of them in parallel, and returns the first successful response. To configure an HA service, you have to wrap the individual service endpoint configuration options into the service.OptHighAvailability option. Further interaction with the constructed haSigner is exactly the same as with the basic signer described in previous chapters. The example shows the configuration of an HA signer; similar steps apply to the Extender service configuration as well. This product includes the package github.com/fullsailor/pkcs7.
Package firebasedb implements a REST client for the Firebase Realtime Database (https://firebase.google.com/docs/database/). The API is as close as possible to the official JavaScript API. Similar / related project: Reference / documentation: This package uses the "Advanced Go Concurrency Patterns" presented by Sameer Ajmani:
SQL Schema migration tool for Go. Key features: To install the library and command line program, use the following: The main command is called sql-migrate. Each command requires a configuration file (which defaults to dbconfig.yml, but can be specified with the -config flag). This config file should specify one or more environments: The `table` setting is optional and will default to `gorp_migrations`. The environment that will be used can be specified with the -env flag (defaults to development). Use the --help flag in combination with any of the commands to get an overview of its usage: The up command applies all available migrations. By contrast, down will only apply one migration by default. This behavior can be changed for both by using the -limit parameter. The redo command will unapply the last migration and reapply it. This is useful during development, when you're writing migrations. Use the status command to see the state of the applied migrations: If you are using MySQL, you must append ?parseTime=true to the datasource configuration. For example: See https://github.com/go-sql-driver/mysql#parsetime for more information. Import sql-migrate into your application: Set up a source of migrations; this can be from memory, from a set of files, or from bindata (more on that later): Then use the Exec function to upgrade your database: Note that n can be greater than 0 even if there is an error: any migration that succeeded will remain applied even if a later one fails. The full set of capabilities can be found in the API docs below. Migrations are defined in SQL files, which contain a set of SQL statements. Special comments are used to distinguish up and down migrations. You can put multiple statements in each block, as long as you end them with a semicolon (;). If you have complex statements which contain semicolons, use StatementBegin and StatementEnd to indicate boundaries: The order in which migrations are applied is defined through the filename: sql-migrate will sort migrations based on their name. It's recommended to use an increasing version number or a timestamp as the first part of the filename. Normally each migration is run within a transaction in order to guarantee that it is fully atomic. However, some SQL commands (for example creating an index concurrently in PostgreSQL) cannot be executed inside a transaction. In order to execute such a command in a migration, the migration can be run using the notransaction option: If you like your Go applications self-contained (that is: a single binary): use packr (https://github.com/gobuffalo/packr) to embed the migration files. Just write your migration files as usual, as a set of SQL files in a folder. Use the PackrMigrationSource in your application to find the migrations: If you already have a box and would like to use a subdirectory: As an alternative, but slightly less maintained, you can use bindata (https://github.com/shuLhan/go-bindata) to embed the migration files. Just write your migration files as usual, as a set of SQL files in a folder. Then use bindata to generate a .go file with the migrations embedded: The resulting bindata.go file will contain your migrations. Remember to regenerate your bindata.go file whenever you add/modify a migration (go generate will help here, once it arrives). Use the AssetMigrationSource in your application to find the migrations: Both Asset and AssetDir are functions provided by bindata. Then proceed as usual. Adding a new migration source means implementing MigrationSource.
The resulting slice of migrations will be executed in the given order, so it should usually be sorted by the Id field.
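For a compact picture of the library usage described above, here is a sketch using an in-memory migration source and Exec; the driver choice, database name, and migration contents are illustrative (FileMigrationSource or AssetMigrationSource work the same way):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3"
	migrate "github.com/rubenv/sql-migrate"
)

func main() {
	db, err := sql.Open("sqlite3", "app.db")
	if err != nil {
		log.Fatal(err)
	}

	// An in-memory source with one migration; up and down statements are plain SQL.
	migrations := &migrate.MemoryMigrationSource{
		Migrations: []*migrate.Migration{
			{
				Id:   "1_create_people",
				Up:   []string{"CREATE TABLE people (id int)"},
				Down: []string{"DROP TABLE people"},
			},
		},
	}

	// Apply all pending migrations (migrate.Down would unapply them).
	n, err := migrate.Exec(db, "sqlite3", migrations, migrate.Up)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("Applied %d migrations", n)
}
```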
Package opsworkscm provides the client and types for making API requests to AWS OpsWorks for Chef Automate. AWS OpsWorks for Chef Automate is a service that runs and manages configuration management servers. Glossary of terms Server: A configuration management server that can be highly-available. The configuration manager runs on your instances by using various AWS services, such as Amazon Elastic Compute Cloud (EC2), and potentially Amazon Relational Database Service (RDS). A server is a generic abstraction over the configuration manager that you want to use, much like Amazon RDS. In AWS OpsWorks for Chef Automate, you do not start or stop servers. After you create servers, they continue to run until they are deleted. Engine: The specific configuration manager that you want to use (such as Chef) is the engine. Backup: This is an application-level backup of the data that the configuration manager stores. A backup creates a .tar.gz file that is stored in an Amazon Simple Storage Service (S3) bucket in your account. AWS OpsWorks for Chef Automate creates the S3 bucket when you launch the first instance. A backup maintains a snapshot of all of a server's important attributes at the time of the backup. Events: Events are always related to a server. Events are written during server creation, when health checks run, when backups are created, etc. When you delete a server, the server's events are also deleted. AccountAttributes: Every account has attributes that are assigned in the AWS OpsWorks for Chef Automate database. These attributes store information about configuration limits (servers, backups, etc.) and your customer account. AWS OpsWorks for Chef Automate supports the following endpoints, all HTTPS. You must connect to one of the following endpoints. Chef servers can only be accessed or managed within the endpoint in which they are created. opsworks-cm.us-east-1.amazonaws.com opsworks-cm.us-west-2.amazonaws.com opsworks-cm.eu-west-1.amazonaws.com All API operations allow for five requests per second with a burst of 10 requests per second. See https://docs.aws.amazon.com/goto/WebAPI/opsworkscm-2016-11-01 for more information on this service. See opsworkscm package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/opsworkscm/ To use AWS OpsWorks for Chef Automate with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS OpsWorks for Chef Automate client OpsWorksCM for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/opsworkscm/#New
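As a minimal sketch of creating the client with the v1 SDK and making a request (it assumes default credentials are already configured; the region and the DescribeServers call are illustrative):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/opsworkscm"
)

func main() {
	// Create a session in one of the supported endpoint regions.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-west-2"),
	}))
	svc := opsworkscm.New(sess)

	// List the Chef Automate servers in the account.
	out, err := svc.DescribeServers(&opsworkscm.DescribeServersInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, s := range out.Servers {
		fmt.Println(aws.StringValue(s.ServerName), aws.StringValue(s.Status))
	}
}
```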
Package sqlite3memvfs implements support for reading data from in-memory SQLite database files, keyed in a VFS which can be accessed via database/sql.Open using the “vfs” query parameter. Multiple VFSs may be registered under different names; “files” may also be created or removed from each VFS independently. This is subtly different than the :memory: filename which is used to create an in-memory database scoped to a single connection. This package is for when you already have static data and don’t want to use a real filesystem at all. This uses the API provided by sqlite3vfs and the SQLite OS interface, but you don’t need to know the details of either to use this.
Package spanner provides a client for reading and writing to Cloud Spanner databases. See the packages under admin for clients that operate on databases and instances. Note: This package is in beta. Some backwards-incompatible changes may occur. See https://cloud.google.com/spanner/docs/getting-started/go/ for an introduction to Cloud Spanner and additional help on using this API. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. To start working with this package, create a client that refers to the database of interest: Remember to close the client after use to free up the sessions in the session pool. Two Client methods, Apply and Single, work well for simple reads and writes. As a quick introduction, here we write a new row to the database and read it back: All the methods used above are discussed in more detail below. Every Cloud Spanner row has a unique key, composed of one or more columns. Construct keys with a literal of type Key: The keys of a Cloud Spanner table are ordered. You can specify ranges of keys using the KeyRange type: By default, a KeyRange includes its start key but not its end key. Use the Kind field to specify other boundary conditions: A KeySet represents a set of keys. A single Key or KeyRange can act as a KeySet. Use the KeySets function to build the union of several KeySets: AllKeys returns a KeySet that refers to all the keys in a table: All Cloud Spanner reads and writes occur inside transactions. There are two types of transactions, read-only and read-write. Read-only transactions cannot change the database, do not acquire locks, and may access either the current database state or states in the past. Read-write transactions can read the database before writing to it, and always apply to the most recent database state. The simplest and fastest transaction is a ReadOnlyTransaction that supports a single read operation. Use Client.Single to create such a transaction. You can chain the call to Single with a call to a Read method. When you only want one row whose key you know, use ReadRow. Provide the table name, key, and the columns you want to read: Read multiple rows with the Read method. It takes a table name, KeySet, and list of columns: Read returns a RowIterator. You can call the Do method on the iterator and pass a callback: RowIterator also follows the standard pattern for the Google Cloud Client Libraries: Always call Stop when you finish using an iterator this way, whether or not you iterate to the end. (Failing to call Stop could lead you to exhaust the database's session quota.) To read rows with an index, use ReadUsingIndex. The most general form of reading uses SQL statements. Construct a Statement with NewStatement, setting any parameters using the Statement's Params map: You can also construct a Statement directly with a struct literal, providing your own map of parameters. Use the Query method to run the statement and obtain an iterator: Once you have a Row, via an iterator or a call to ReadRow, you can extract column values in several ways. Pass in a pointer to a Go variable of the appropriate type when you extract a value. 
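For concreteness, here is a minimal sketch of creating a client and reading a single row with Single().ReadRow, as described above (the database path, table, key, and column names are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"cloud.google.com/go/spanner"
)

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx,
		"projects/my-project/instances/my-instance/databases/my-db")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close() // frees sessions in the session pool

	// Single-use read-only transaction: read one row by key.
	row, err := client.Single().ReadRow(ctx, "Accounts",
		spanner.Key{"alice"}, []string{"Balance"})
	if err != nil {
		log.Fatal(err)
	}

	// Extract a column value by position into a Go variable.
	var balance int64
	if err := row.Column(0, &balance); err != nil {
		log.Fatal(err)
	}
	fmt.Println("balance:", balance)
}
```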
You can extract by column position or name: You can extract all the columns at once: Or you can define a Go struct that corresponds to your columns, and extract into that: For Cloud Spanner columns that may contain NULL, use one of the NullXXX types, like NullString: To perform more than one read in a transaction, use ReadOnlyTransaction: You must call Close when you are done with the transaction. Cloud Spanner read-only transactions conceptually perform all their reads at a single moment in time, called the transaction's read timestamp. Once a read has started, you can call ReadOnlyTransaction's Timestamp method to obtain the read timestamp. By default, a transaction will pick the most recent time (a time where all previously committed transactions are visible) for its reads. This provides the freshest data, but may involve some delay. You can often get a quicker response if you are willing to tolerate "stale" data. You can control the read timestamp selected by a transaction by calling the WithTimestampBound method on the transaction before using it. For example, to perform a query on data that is at most one minute stale, use See the documentation of TimestampBound for more details. To write values to a Cloud Spanner database, construct a Mutation. The spanner package has functions for inserting, updating and deleting rows. Except for the Delete methods, which take a Key or KeyRange, each mutation-building function comes in three varieties. One takes lists of columns and values along with the table name: One takes a map from column names to values: And the third accepts a struct value, and determines the columns from the struct field names: To apply a list of mutations to the database, use Apply: If you need to read before writing in a single transaction, use a ReadWriteTransaction. ReadWriteTransactions may abort and need to be retried. You pass in a function to ReadWriteTransaction, and the client will handle the retries automatically. Use the transaction's BufferWrite method to buffer mutations, which will all be executed at the end of the transaction: Spanner supports DML statements like INSERT, UPDATE and DELETE. Use ReadWriteTransaction.Update to run DML statements. It returns the number of rows affected. (You can also call ReadWriteTransaction.Query with a DML statement. The first call to Next on the resulting RowIterator will return iterator.Done, and the RowCount field of the iterator will hold the number of affected rows.) For large databases, it may be more efficient to partition the DML statement. Use client.PartitionedUpdate to run a DML statement in this way. Not all DML statements can be partitioned. This client has been instrumented to use OpenCensus tracing (http://opencensus.io). To enable tracing, see "Enabling Tracing for a Program" at https://godoc.org/go.opencensus.io/trace. OpenCensus tracing requires Go 1.8 or higher.
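To round out the write side described above, here is a sketch that applies a mutation with Apply and then runs a DML statement inside a ReadWriteTransaction (table, column, and parameter names are illustrative):

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/spanner"
)

func main() {
	ctx := context.Background()
	client, err := spanner.NewClient(ctx,
		"projects/my-project/instances/my-instance/databases/my-db")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// A mutation built from column and value lists, applied directly with Apply.
	m := spanner.InsertOrUpdate("Accounts",
		[]string{"UserID", "Balance"},
		[]interface{}{"alice", int64(100)},
	)
	if _, err := client.Apply(ctx, []*spanner.Mutation{m}); err != nil {
		log.Fatal(err)
	}

	// A DML statement inside a read-write transaction; the client retries on abort.
	_, err = client.ReadWriteTransaction(ctx, func(ctx context.Context, txn *spanner.ReadWriteTransaction) error {
		stmt := spanner.Statement{
			SQL:    "UPDATE Accounts SET Balance = Balance + @amount WHERE UserID = @id",
			Params: map[string]interface{}{"amount": int64(10), "id": "alice"},
		}
		n, err := txn.Update(ctx, stmt) // returns the number of rows affected
		if err != nil {
			return err
		}
		log.Printf("%d row(s) updated", n)
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
```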