Package gval provides a generic expression language. All functions, infix and prefix operators can be replaced by composing languages into a new one. The package contains concrete expression languages for common applications in text, arithmetic, decimal arithmetic, propositional logic and so on. They can be used as a basis for a custom expression language or to evaluate expressions directly.
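For example, evaluating an expression with the default full language might look like this minimal sketch (the expression and parameters are invented for illustration):

    package main

    import (
        "fmt"

        "github.com/PaesslerAG/gval"
    )

    func main() {
        // Evaluate an expression against a parameter map using gval's
        // default full language.
        value, err := gval.Evaluate(`foo > 0 && text == "hello"`,
            map[string]interface{}{"foo": 1.2, "text": "hello"})
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(value) // true
    }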
Package openssl is a light wrapper around OpenSSL for Go. It strives to provide a near-drop-in replacement for the Go standard library tls package, while allowing for better performance and greater configurability.

OpenSSL is battle-tested and optimized C. While Go's built-in library shows great promise, it is still young and in some places inefficient. This simple OpenSSL wrapper can often be at least 2x faster with the same cipher and protocol; on my lappytop, benchmarks bear this out.

Many systems support OpenSSL with a variety of plugins and modules for things such as hardware acceleration in embedded devices. OpenSSL also allows for far greater configuration of corner cases and backwards compatibility (such as support of SSLv2). You shouldn't be using SSLv2 if you can help it, but sometimes you can't help it.

Yeah yeah, Heartbleed. But according to the author of the standard library's TLS implementation, Go's TLS library is vulnerable to timing attacks. And whether or not OpenSSL received the appropriate amount of scrutiny pre-Heartbleed, it sure is receiving it now.

Starting an HTTP server that uses OpenSSL is very easy, getting a net.Listener that uses OpenSSL is also easy, and making a client connection is straightforward too; sketches of all three follow.

Help wanted: To get this library to work with net/http's client, we had to fork net/http. It would be nice if an alternate HTTP client library supported the generality needed to use OpenSSL instead of crypto/tls.
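Rough sketches of those three uses, based on the package's documented API (treat the exact signatures here as assumptions). Server:

    log.Fatal(openssl.ListenAndServeTLS(
        ":8443", "my_server.crt", "my_server.key", myHandler))

Listener:

    ctx, err := openssl.NewCtxFromFiles("my_server.crt", "my_server.key")
    if err != nil {
        log.Fatal(err)
    }
    l, err := openssl.Listen("tcp", ":7777", ctx)

Client:

    ctx, err := openssl.NewCtx()
    if err != nil {
        log.Fatal(err)
    }
    err = ctx.LoadVerifyLocations("/etc/ssl/certs/ca-certificates.crt", "")
    if err != nil {
        log.Fatal(err)
    }
    conn, err := openssl.Dial("tcp", "localhost:7777", ctx, 0)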
Package pkcs12 implements some of PKCS#12 (also known as P12 or PFX). It is intended for decoding DER-encoded P12/PFX files for use with the crypto/tls package, and for encoding P12/PFX files for use by legacy applications which do not support newer formats. Since PKCS#12 uses weak encryption primitives, it SHOULD NOT be used for new applications. Note that only DER-encoded PKCS#12 files are supported, even though PKCS#12 allows BER encoding. This is because encoding/asn1 only supports DER. This package is forked from golang.org/x/crypto/pkcs12, which is frozen. The implementation is distilled from https://tools.ietf.org/html/rfc7292 and referenced documents.
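For example, decoding a P12/PFX blob for use with crypto/tls might look like this sketch (pfxData and the password are placeholders; imports and error handling abbreviated):

    // Decode a single certificate and private key from DER-encoded
    // PFX data.
    privateKey, certificate, err := pkcs12.Decode(pfxData, "password")
    if err != nil {
        log.Fatal(err)
    }

    // Assemble a tls.Certificate for use with crypto/tls.
    tlsCert := tls.Certificate{
        Certificate: [][]byte{certificate.Raw},
        PrivateKey:  privateKey,
    }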
Package jsonpath is an implementation of http://goessner.net/articles/JsonPath/. If a JSONPath contains one of [key1, key2 ...], .., *, [min:max], [min:max:step], (? expression), all matches are listed in an []interface{}. The package comes with an extension of JSONPath to access the wildcard values of a match: if the JSONPath is used inside of a JSON object, you can use the placeholder '#' or '#i' with a natural number i to access all wildcard values or the ith wildcard value. This package can be extended with gval modules for script features like multiplication, length, regex and many more; take a look at github.com/PaesslerAG/gval.
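A minimal sketch of selecting a value (the document is invented for illustration):

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/PaesslerAG/jsonpath"
    )

    func main() {
        v := interface{}(nil)
        json.Unmarshal([]byte(`{"welcome":{"message":["Good Morning","Hello World!"]}}`), &v)

        // Select the second message with a JSONPath expression.
        hello, err := jsonpath.Get("$.welcome.message[1]", v)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println(hello) // Hello World!
    }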
Package passlib provides a simple password hashing and verification interface abstracting multiple password hashing schemes. After initialisation, most people need concern themselves only with the functions Hash and Verify, which use the default context and sensible defaults. You should initialise the library before using it, as in the sketch below; see func UseDefaults for details.
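A minimal sketch (the exact defaults constant shown is an assumption; pick the one documented by UseDefaults):

    package main

    import (
        "fmt"

        "gopkg.in/hlandau/passlib.v1"
    )

    func main() {
        // Initialise the library once, pinning the default policy.
        passlib.UseDefaults(passlib.Defaults20180601)

        hash, err := passlib.Hash("my secret password")
        if err != nil {
            fmt.Println(err)
            return
        }

        // Verify returns a replacement hash when the stored one should be
        // upgraded; persist newHash if it is non-empty.
        newHash, err := passlib.Verify("my secret password", hash)
        fmt.Println(newHash, err)
    }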
Package openssl contains a pure Go implementation of OpenSSL-compatible encryption and decryption.
Package openssl is a light wrapper around OpenSSL for Go. This version has been forked from https://github.com/spacemonkeygo/openssl for greater backwards compatibility with older OpenSSL libraries. Starting an HTTP server that uses OpenSSL, getting a net.Listener that uses OpenSSL, and making a client connection all work as in the upstream package; a server sketch follows.
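A server sketch, with signatures assumed to match the upstream API:

    log.Fatal(openssl.ListenAndServeTLS(
        ":8443", "my_server.crt", "my_server.key", myHandler))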
Package gokeepasslib is a library written in Go which provides functionality to decrypt and parse KeePass 2 files (kdbx).
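A decoding sketch (the import path and file name are assumptions for illustration):

    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/tobischo/gokeepasslib"
    )

    func main() {
        file, err := os.Open("example.kdbx")
        if err != nil {
            log.Fatal(err)
        }
        defer file.Close()

        db := gokeepasslib.NewDatabase()
        db.Credentials = gokeepasslib.NewPasswordCredentials("master password")
        if err := gokeepasslib.NewDecoder(file).Decode(db); err != nil {
            log.Fatal(err)
        }

        // Decrypt in-memory protected values (e.g. passwords) before use.
        db.UnlockProtectedEntries()

        entry := db.Content.Root.Groups[0].Entries[0]
        fmt.Println(entry.GetTitle(), entry.GetPassword())
    }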
Package pgx is a PostgreSQL database driver. pgx provides lower-level access to PostgreSQL than the standard database/sql. It remains as similar to the database/sql interface as possible while providing better speed and access to PostgreSQL-specific features. Import github.com/jackc/pgx/stdlib to use pgx as a database/sql compatible driver.

pgx implements Query and Scan in the familiar database/sql style, and also implements QueryRow in the same style as database/sql. Use Exec to execute a query that does not return a result set. A minimal sketch follows this overview.

Connection pool usage is explicit and configurable. In pgx, a connection can be created and managed directly, or a connection pool with a configurable maximum number of connections can be used. The connection pool offers an after-connect hook that allows every connection to be automatically set up before being made available in the pool. It delegates methods such as QueryRow to an automatically checked out and released connection, so you can avoid manually acquiring and releasing connections when you do not need that level of control.

pgx maps all common base types directly between Go and PostgreSQL. pgx can map nulls in two ways. The first is the pgtype package, which provides types that have a data field and a status field; they work in a similar fashion to database/sql. The second is to use a pointer to a pointer.

pgx maps between int16, int32, int64, float32, float64, and string Go slices and the equivalent PostgreSQL array types. Go slices of native types do not support nulls, so if a PostgreSQL array that contains a null is read into a native Go slice, an error will occur. The pgtype package includes many more array types for PostgreSQL types that do not directly map to native Go types.

pgx includes built-in support to marshal and unmarshal between Go types and the PostgreSQL JSON and JSONB types. pgx encodes net.IPNet to and from the PostgreSQL inet and cidr types. In addition, as a convenience, pgx will encode from a net.IP; it will assume a /32 netmask for IPv4 and a /128 for IPv6.

pgx includes support for the common data types like integers, floats, strings, dates, and times that have direct mappings between Go and SQL. In addition, pgx uses the github.com/jackc/pgx/pgtype library to support more types. See the documentation for that library for instructions on how to implement custom types, and see example_custom_type_test.go for an example of a custom type for the PostgreSQL point type. pgx also supports custom types implementing the database/sql.Scanner and database/sql/driver.Valuer interfaces.

If pgx cannot natively encode a type and that type is a renamed type (e.g. type MyTime time.Time), pgx will attempt to encode the underlying type. While this is usually the desired behavior, it can produce surprising results if one of the underlying type and the renamed type implements database/sql interfaces and the other implements pgx interfaces. It is recommended that this situation be avoided by implementing pgx interfaces on the renamed type.

[]byte arguments passed to Query, QueryRow, and Exec are passed through unmodified to PostgreSQL.

Transactions are started by calling Begin or BeginEx. The BeginEx variant can create a transaction with a specified isolation level.

Use CopyFrom to efficiently insert multiple rows at a time using the PostgreSQL copy protocol. CopyFrom accepts a CopyFromSource interface. If the data is already in a [][]interface{}, use CopyFromRows to wrap it in a CopyFromSource interface.
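A minimal query sketch in that style (connection parameters, table and column names are invented; the v3-era Connect and QueryRow signatures are assumed):

    conn, err := pgx.Connect(pgx.ConnConfig{
        Host:     "localhost",
        User:     "pgx",
        Database: "pgx_test",
    })
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    var name string
    var weight int64
    err = conn.QueryRow("select name, weight from widgets where id=$1", 42).Scan(&name, &weight)
    if err != nil {
        log.Fatal(err)
    }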
Or implement CopyFromSource to avoid buffering the entire data set in memory. CopyFrom can be faster than an insert with as few as 5 rows; see the sketch at the end of this overview.

pgx can listen to the PostgreSQL notification system with the WaitForNotification function. It takes a maximum time to wait for a notification.

The pgx ConnConfig struct has a TLSConfig field. If this field is nil, then TLS will be disabled; if it is present, it will be used to configure the TLS connection. This allows total configuration of the TLS connection.

pgx has never explicitly supported Postgres < 9.6's `ssl_renegotiation` option. As of v3.3.0, it doesn't send `ssl_renegotiation: 0` either, to support Redshift (https://github.com/jackc/pgx/pull/476). If you need TLS renegotiation, consider supplying `ConnConfig.TLSConfig` with a non-zero `Renegotiation` value, and if it's not the default on your server, set `ssl_renegotiation` via `ConnConfig.RuntimeParams`.

pgx defines a simple logger interface. Connections optionally accept a logger that satisfies this interface. Set LogLevel to control logging verbosity. Adapters for github.com/inconshreveable/log15, github.com/sirupsen/logrus, and the testing log are provided in the log directory.
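The CopyFromRows sketch referenced above (table and column names invented; assumes a *pgx.Conn named conn):

    rows := [][]interface{}{
        {"John", int32(36)},
        {"Jane", int32(29)},
    }

    copyCount, err := conn.CopyFrom(
        pgx.Identifier{"people"},
        []string{"first_name", "age"},
        pgx.CopyFromRows(rows),
    )
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(copyCount) // 2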
Package controllerutil provides various utility code for writing Kubernetes controllers using kubebuilder and controller-runtime. It collects common code extracted from our controller code base.
Package CloudForest implements ensembles of decision trees for machine learning in pure Go (golang to search engines). It allows for a number of related algorithms for classification, regression, feature selection and structure analysis on heterogeneous numerical/categorical data with missing values. These include:

    Breiman and Cutler's Random Forest for Classification and Regression
    Adaptive Boosting (AdaBoost) Classification
    Gradient Boosting Tree Regression
    Entropy and Cost driven classification
    L1 regression
    Feature selection with artificial contrasts
    Proximity and model structure analysis
    Roughly balanced bagging for unbalanced classification

The API hasn't stabilized yet and may change rapidly. Tests and benchmarks have been performed only on embargoed data sets and cannot yet be released. Library documentation is in code and can be viewed with godoc or live at http://godoc.org/github.com/ryanbressler/CloudForest. Documentation of command line utilities and file formats can be found in README.md, which can be viewed formatted on GitHub: http://github.com/ryanbressler/CloudForest. Pull requests and bug reports are welcome.

CloudForest was created by Ryan Bressler and is being developed in the Shumelivich Lab at the Institute for Systems Biology for use on genomic/biomedical data, with partial support from The Cancer Genome Atlas and the Inova Translational Medicine Institute.

CloudForest is intended to provide fast, comprehensible building blocks that can be used to implement ensembles of decision trees. CloudForest is written in Go to allow a data scientist to develop and scale new models and analysis quickly instead of having to modify complex legacy code. Data structures and file formats are chosen with use in multithreaded and cluster environments in mind.

Go's support for function types is used to provide an interface to run code as data is percolated through a tree. This method is flexible enough that it can extend the tree being analyzed. Growing a decision tree using Breiman and Cutler's method can be done in an anonymous function/closure passed to a tree's root node's Recurse method; a stand-in sketch of the pattern appears at the end of this overview. This allows a researcher to include whatever additional analysis they need (importance scores, proximity etc.) in tree growth. The same Recurse method can also be used to analyze existing forests to tabulate scores or extract structure. Utilities like leafcount and errorrate use this method to tabulate data about the tree in collection objects.

Decision trees are grown with the goal of reducing "Impurity", which is usually defined as Gini Impurity for categorical targets or mean squared error for numerical targets. CloudForest grows trees against the Target interface, which allows for alternative definitions of impurity. CloudForest includes several alternative targets, and additional targets can be stacked on top of them to add boosting functionality.

Repeatedly splitting the data and searching for the best split at each node of a decision tree are the most computationally intensive parts of decision tree learning, and CloudForest includes optimized code to perform these tasks. Go's slices are used extensively in CloudForest to make it simple to interact with optimized code. Many previous implementations of Random Forest have avoided reallocation by reordering data in place and keeping track of start and end indexes. In Go, slices pointing at the same underlying array make this sort of optimization transparent.
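For instance, a hypothetical split function can return two views into one backing array (splitCases below is invented for this sketch, not a CloudForest identifier):

    package main

    import "fmt"

    // splitCases partitions cases in place so the returned left and right
    // slices point into the original backing array; no allocation occurs.
    func splitCases(cases []int, goesLeft func(int) bool) (left, right []int) {
        i := 0
        for j, c := range cases {
            if goesLeft(c) {
                cases[i], cases[j] = cases[j], cases[i]
                i++
            }
        }
        return cases[:i], cases[i:]
    }

    func main() {
        cases := []int{4, 1, 7, 2, 9, 3}
        left, right := splitCases(cases, func(c int) bool { return c < 5 })
        fmt.Println(left, right) // both are views into the cases array
    }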
A function like the sketch above can return left and right slices that point to the same underlying array as the original slice of cases, but these slices should not have their values changed.

Functions used while searching for the best split also accept pointers to reusable slices and structs to maximize speed by keeping memory allocations to a minimum. BestSplitAllocs contains pointers to these items, and its use can be seen in functions like BestSplit.

For categorical predictors, BestSplit will also attempt to intelligently choose between 4 different implementations depending on user input and the number of categories. These include exhaustive, random, and iterative searches for the best combination of categories, implemented with bitwise operations against int and big.Int. See BestCatSplit, BestCatSplitIter, BestCatSplitBig and BestCatSplitIterBig. All numerical predictors are handled by BestNumSplit, which relies on Go's sorting package.

Training a Random Forest is an inherently parallel process, and CloudForest is designed to allow parallel implementations that can tackle large problems while keeping memory usage low by writing and using data structures directly to/from disk. Trees can be grown in separate goroutines. The growforest utility provides an example of this that uses goroutines and channels to grow trees in parallel and write trees to disk as they are finished by the "worker" goroutines. The few summary statistics, like mean impurity decrease per feature (importance), can be calculated using thread-safe data structures like RunningMean.

Trees can also be grown on separate machines. The .sf stochastic forest format allows several small forests to be combined by concatenation, and the ForestReader and ForestWriter structs allow these forests to be accessed tree by tree (or even node by node) from disk. For data sets that are too big to fit in memory on a single machine, Tree.Grow and FeatureMatrix.BestSplitter can be reimplemented to load candidate features from disk, a distributed database, etc.

By default CloudForest uses a fast heuristic for missing values. When proposing a split on a feature with missing data, the missing cases are removed and the impurity value is corrected to use three-way impurity, which reduces the bias towards features with lots of missing data. Missing values in the target variable are left out of impurity calculations. This provides generally good results at a fraction of the computational cost of imputing data.

Optionally, Feature.ImputeMissing or FeatureMatrix.ImputeMissing can be called before forest growth to impute missing values to the feature mean/mode, which Breiman [2] suggests as a fast method for imputing values. This forest could also be analyzed for proximity (using leafcount or tree.GetLeaves) to do the more accurate proximity-weighted imputation Breiman describes. Experimental support is provided for 3-way splitting, which splits missing cases onto a third branch [2]. This has so far yielded mixed results in testing. At some point in the future, support may be added for local imputing of missing values during tree growth as described in [3].

[1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#missing1
[2] https://code.google.com/p/rf-ace/
[3] http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aoas/1223908043&page=record

In CloudForest data is stored using the FeatureMatrix struct, which contains Features.
The Feature struct implements storage and methods for both categorical and numerical data, calculations of impurity etc., and the search for the best split. The Target interface abstracts the methods of Feature that are needed for a feature to be predictable; this allows for the implementation of alternative types of regression and classification. Trees are built from Nodes and Splitters and stored within a Forest. Tree.Grow implements Breiman and Cutler's method (see the Recurse discussion above) for growing a tree. A GrowForest method is also provided that implements the rest of the method, including sampling cases, but it may be faster to grow the forest to disk as in the growforest utility. Prediction and voting are done using Tree.Vote and CatBallotBox and NumBallotBox, which implement the VoteTallyer interface.
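As referenced above, tree growth and analysis run inside a closure passed to the root node's Recurse method. The following is a minimal stand-in illustration of that pattern only; the Node type and Recurse signature are invented for this sketch and are not CloudForest's actual API:

    package main

    import "fmt"

    // Node is a stand-in for a decision tree node; the real type carries a
    // splitter, a prediction and impurity logic.
    type Node struct {
        Left, Right *Node
    }

    // Recurse calls fn on this node with the cases routed to it, then
    // recurses into the children, mirroring how code is run as data is
    // percolated through a tree.
    func (n *Node) Recurse(fn func(n *Node, cases []int), cases []int) {
        fn(n, cases)
        if n.Left != nil && n.Right != nil {
            // A real implementation would route cases with the node's splitter.
            mid := len(cases) / 2
            n.Left.Recurse(fn, cases[:mid])
            n.Right.Recurse(fn, cases[mid:])
        }
    }

    func main() {
        root := &Node{Left: &Node{}, Right: &Node{}}
        root.Recurse(func(n *Node, cases []int) {
            // Growth or analysis (importance, proximity, etc.) would go here.
            fmt.Printf("node sees %d cases\n", len(cases))
        }, []int{0, 1, 2, 3, 4, 5})
    }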