Package kstat provides a Go interface to the Solaris/OmniOS kstat(s) system for user-level access to a lot of kernel statistics. For more documentation on kstats, see kstat(1) and kstat(3kstat).

The package can retrieve what are called 'named' kstat statistics, IO statistics, and the most common additional types of 'raw' statistics, which covers almost all kstats you will normally find in the kernel. You can see the names and types of other kstats, but not currently retrieve data for them. Named statistics are the most common type for general information; IO statistics are exported by disks and some other things. Supported additional raw kstats are unix:0:sysinfo, unix:0:vminfo, unix:0:var, and mnt:*:mntinfo.

General usage for named statistics: call Open() to obtain a Token, then call GetNamed() on it to obtain Named(s) for specific statistics. Note that this always gives you the very latest value for the statistic. If you want a number of statistics from the same module:inst:name triplet (eg several network counters from the same network interface) and you want them to all have been gathered at the same time, you need to call .Lookup() to obtain a KStat and then repeatedly call its .GetNamed() (this is also slightly more efficient).

The short version: a kstat is a collection of some related statistics, eg various network counters for a particular network interface. A Token is a handle for a collection of kstats. You go collection (Token) -> kstat (KStat) -> specific statistic (Named) in order to retrieve the value of a specific statistic. (IO stats are retrieved all at once with GetIO(), because they come to us from the kernel as one single struct so that's what you get.)

This is a cgo-based package. Cross compilation is up to you. Goroutine safety is in no way guaranteed because the underlying C kstat library is probably not thread or goroutine safe (and there are some all-Go concurrency races involving .Close()).

This package may leak memory, especially since the Solaris kstat manpage is not clear on the requirements here. However I believe it's reasonably memory safe. It's possible to totally corrupt memory with use-after-free errors if you do operations on kstats after calling Token.Close(), although we try to avoid that.

NOTE: this package is quite young. The API may well change as I (and other people) gain more experience with it.

In general this is not going to be as lean and mean as calling C directly, partly because of intrinsic CGo overheads and partly because we do more memory allocation and deallocation than a C program would (partly because we prioritize not leaking memory).

We support named kstats and IO kstats (KSTAT_TYPE_NAMED and KSTAT_TYPE_IO / kstat_io_t respectively). kstat(1) also knows about a number of magic specific 'raw' stats (which are generally custom C structs); of these we support unix:0:sysinfo, unix:0:vminfo, unix:0:var, and mnt:*:mntinfo for NFS filesystem mounts.

In theory kstat supports general timer and interrupt stats. In practice there is no use of KSTAT_TYPE_TIMER in the current Illumos kernel source and very little use of KSTAT_TYPE_INTR (mostly by very old hardware drivers, although the vioif driver uses it too). Since I can't test KSTAT_TYPE_INTR stats, we don't currently support it.

There are also a few additional KSTAT_TYPE_RAW raw stats that we don't support, mostly because they seem to be effectively obsolete. These specific raw stats can be found listed in the Illumos source code in cmd/stat/kstat/kstat.h in the ks_raw_lookup array. See cmd/stat/kstat/kstat.c for how they're interpreted.

If you need access to one of these kstats, the KStat.CopyTo() and KStat.Raw() methods give you an escape hatch to roll your own. You'll probably need to use cgo to generate an appropriate Go struct that matches the C struct you need. My notes on this process may be helpful: https://utcc.utoronto.ca/~cks/space/blog/programming/GoCGoCompatibleStructs

Author: Chris Siebenmann https://github.com/siebenmann/go-kstat

Copyright: standard Go copyright.

(If you're reading this documentation on a non-Solaris platform, you're probably not seeing the detailed API documentation for constants, types, and so on because of tooling limitations in godoc et al.)
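As a concrete illustration of the general usage described above, here is a minimal sketch; the module:instance:name:statistic used ("link:0:net0:obytes64") and the Named.UintVal field are illustrative assumptions about what exists on a given system:

	package main

	import (
		"fmt"
		"log"

		"github.com/siebenmann/go-kstat"
	)

	func main() {
		// Open a Token, the handle for the kernel's kstat collection.
		tok, err := kstat.Open()
		if err != nil {
			log.Fatal(err)
		}
		defer tok.Close()

		// One-off retrieval: always gives the very latest value.
		n, err := tok.GetNamed("link", 0, "net0", "obytes64")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(n.Name, n.UintVal)

		// Statistics gathered at the same time: Lookup() the KStat
		// once, then call its GetNamed() repeatedly.
		ks, err := tok.Lookup("link", 0, "net0")
		if err != nil {
			log.Fatal(err)
		}
		for _, stat := range []string{"obytes64", "rbytes64"} {
			if v, err := ks.GetNamed(stat); err == nil {
				fmt.Println(v.Name, v.UintVal)
			}
		}
	}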
Package micro is a pluggable framework for microservices
Copyright 2015 Swisscom (Schweiz) AG

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

	http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Package taskstats provides access to Linux's taskstats interface, for sending per-task, per-process, and cgroup statistics from the kernel to userspace. For more information on taskstats, please see:
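A minimal sketch of reading per-process statistics, assuming the mdlayher/taskstats-style API (New, Client.PID, Client.Close), which this package description matches; the import path and calls are assumptions. Note that the taskstats interface generally requires elevated privileges (CAP_NET_ADMIN):

	package main

	import (
		"fmt"
		"log"
		"os"

		"github.com/mdlayher/taskstats"
	)

	func main() {
		// Dial the taskstats netlink family.
		c, err := taskstats.New()
		if err != nil {
			log.Fatal(err)
		}
		defer c.Close()

		// Fetch per-process statistics for this process.
		stats, err := c.PID(os.Getpid())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%+v\n", stats)
	}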
Package toolbox is a middleware that provides health check, pprof, profile and statistic services for Macaron.
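Wiring it into a Macaron application might look like the following minimal sketch, assuming the go-macaron/toolbox import path and its Toolboxer middleware constructor:

	package main

	import (
		"github.com/go-macaron/toolbox"
		"gopkg.in/macaron.v1"
	)

	func main() {
		m := macaron.Classic()
		// Register the middleware; it serves its health check, pprof,
		// profiling and statistics endpoints on the app's routes.
		m.Use(toolbox.Toolboxer(m))
		m.Run()
	}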
Package evidently provides the API client, operations, and parameter types for Amazon CloudWatch Evidently. You can use Amazon CloudWatch Evidently to safely validate new features by serving them to a specified percentage of your users while you roll out the feature. You can monitor the performance of the new feature to help you decide when to ramp up traffic to your users. This helps you reduce risk and identify unintended consequences before you fully launch the feature. You can also conduct A/B experiments to make feature design decisions based on evidence and data. An experiment can test as many as five variations at once. Evidently collects experiment data and analyzes it using statistical methods. It also provides clear recommendations about which variations perform better. You can test both user-facing features and backend features.
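Constructing the client follows the usual aws-sdk-go-v2 pattern; in this minimal sketch, ListProjects stands in for any Evidently operation (the import paths assume the v2 SDK):

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go-v2/config"
		"github.com/aws/aws-sdk-go-v2/service/evidently"
	)

	func main() {
		ctx := context.Background()
		// Load credentials and region from the default sources.
		cfg, err := config.LoadDefaultConfig(ctx)
		if err != nil {
			log.Fatal(err)
		}
		client := evidently.NewFromConfig(cfg)

		out, err := client.ListProjects(ctx, &evidently.ListProjectsInput{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("projects:", len(out.Projects))
	}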
Package hercules contains the functions which are needed to gather various statistics from a Git repository.

The analysis is expressed in the form of a tree: there are nodes - "pipeline items" - which require some other nodes to be executed prior to themselves and in turn provide the data for dependent nodes. There are several service items which do not produce any useful statistics but rather provide the requirements for other items. The top-level items include:

- BurndownAnalysis - line burndown statistics for project, files and developers.

- CouplesAnalysis - coupling statistics for files and developers.

- ShotnessAnalysis - structural hotness and couples, by any Babelfish UAST XPath (functions by default).

The typical API usage is to initialize the Pipeline class, add the required analysis (this call also adds all the needed intermediate pipeline items), then link and execute the analysis tree, and finally extract the result; these steps are sketched below. The actual usage example is cmd/hercules/root.go - the command line tool's code. You can provide additional options via `facts` on initialization, for example to provide your own logger, enable people-tracking, or set a custom tick size.

Hercules depends heavily on https://github.com/src-d/go-git and leverages the diff algorithm through https://github.com/sergi/go-diff. Besides, BurndownAnalysis involves File and RBTree. These are low-level data structures which enable incremental blaming. File carries an instance of RBTree and the current line burndown state. RBTree implements the red-black balanced binary tree and is based on https://github.com/yasushi-saito/rbtree.

Coupling stats are supposed to be further processed rather than observed directly. labours.py uses Swivel embeddings and visualises them in Tensorflow Projector. Shotness analysis, as well as other UAST-featured items, relies on Babelfish (https://doc.bblf.sh) and requires the server to be running.
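A sketch of those steps, assuming the gopkg.in/src-d import paths and the pipeline methods of recent hercules versions (NewPipeline, DeployItem, Initialize, Commits, Run); exact signatures differ between releases:

	package main

	import (
		"log"

		"gopkg.in/src-d/go-git.v4"
		hercules "gopkg.in/src-d/hercules.v10"
	)

	func main() {
		repository, err := git.PlainOpen(".")
		if err != nil {
			log.Fatal(err)
		}
		// Initialize the Pipeline class.
		pipeline := hercules.NewPipeline(repository)
		// Add the required analysis; the needed intermediate pipeline
		// items are deployed automatically.
		burndown := pipeline.DeployItem(&hercules.BurndownAnalysis{})
		// Link the analysis tree; `facts` can carry options such as a
		// custom logger, people-tracking, or a custom tick size.
		pipeline.Initialize(map[string]interface{}{})
		// Execute the tree over the repository's history.
		commits, err := pipeline.Commits(false)
		if err != nil {
			log.Fatal(err)
		}
		results, err := pipeline.Run(commits)
		if err != nil {
			log.Fatal(err)
		}
		// Extract the result for the deployed leaf item.
		_ = results[burndown.(hercules.LeafPipelineItem)]
	}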
Package clistats implements progress monitoring functionality which exposes statistics in JSON format on an API endpoint bound to localhost.
Package dockerstats provides the ability to get currently running Docker container statistics, including memory and CPU usage. To get the statistics of running Docker containers, you can use the `Current()` function. Alternatively, you can use the `NewMonitor()` function to receive a constant stream of Docker container stats, available on the Monitor's `Stream` channel. Both modes are sketched below.
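A sketch of both modes, assuming the github.com/KyleBanks/dockerstats import path and a Monitor whose Stream channel delivers results carrying Stats and Error fields:

	package main

	import (
		"fmt"
		"log"

		"github.com/KyleBanks/dockerstats"
	)

	func main() {
		// One-shot: statistics for all currently running containers.
		stats, err := dockerstats.Current()
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range stats {
			fmt.Println(s)
		}

		// Streaming: a Monitor delivers stats continuously on Stream.
		m := dockerstats.NewMonitor()
		for res := range m.Stream {
			if res.Error != nil {
				log.Fatal(res.Error)
			}
			for _, s := range res.Stats {
				fmt.Println(s)
			}
		}
	}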
Package CloudForest implements ensembles of decision trees for machine learning in pure Go (golang to search engines). It allows for a number of related algorithms for classification, regression, feature selection and structure analysis on heterogeneous numerical/categorical data with missing values. These include:

- Breiman and Cutler's Random Forest for Classification and Regression

- Adaptive Boosting (AdaBoost) Classification

- Gradient Boosting Tree Regression

- Entropy and Cost driven classification

- L1 regression

- Feature selection with artificial contrasts

- Proximity and model structure analysis

- Roughly balanced bagging for unbalanced classification

The API hasn't stabilized yet and may change rapidly. Tests and benchmarks have been performed only on embargoed data sets and cannot yet be released.

Library documentation is in code and can be viewed with godoc or live at http://godoc.org/github.com/ryanbressler/CloudForest

Documentation of command line utilities and file formats can be found in README.md, which can be viewed formatted on github: http://github.com/ryanbressler/CloudForest

Pull requests and bug reports are welcome.

CloudForest was created by Ryan Bressler and is being developed in the Shumelivich Lab at the Institute for Systems Biology for use on genomic/biomedical data with partial support from The Cancer Genome Atlas and the Inova Translational Medicine Institute.

CloudForest is intended to provide fast, comprehensible building blocks that can be used to implement ensembles of decision trees. CloudForest is written in Go to allow a data scientist to develop and scale new models and analysis quickly instead of having to modify complex legacy code. Data structures and file formats are chosen with use in multi-threaded and cluster environments in mind.

Go's support for function types is used to provide an interface to run code as data is percolated through a tree. This method is flexible enough that it can extend the tree being analyzed. Growing a decision tree using Breiman and Cutler's method can be done in an anonymous function/closure passed to a tree's root node's Recurse method (a self-contained sketch of this pattern appears at the end of this documentation). This allows a researcher to include whatever additional analysis they need (importance scores, proximity etc) in tree growth. The same Recurse method can also be used to analyze existing forests to tabulate scores or extract structure. Utilities like leafcount and errorrate use this method to tabulate data about the tree in collection objects.

Decision trees are grown with the goal of reducing "Impurity", which is usually defined as Gini Impurity for categorical targets or mean squared error for numerical targets. CloudForest grows trees against the Target interface, which allows for alternative definitions of impurity. CloudForest includes several alternative targets, and additional targets can be stacked on top of these targets to add boosting functionality.

Repeatedly splitting the data and searching for the best split at each node of a decision tree are the most computationally intensive parts of decision tree learning, and CloudForest includes optimized code to perform these tasks. Go's slices are used extensively in CloudForest to make it simple to interact with optimized code. Many previous implementations of Random Forest have avoided reallocation by reordering data in place and keeping track of start and end indexes. In Go, slices pointing at the same underlying arrays make this sort of optimization transparent, as the sketch below illustrates.
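A minimal illustration of this in-place partitioning pattern (the partition helper here is an illustrative sketch, not CloudForest's actual splitter API):

	package main

	import "fmt"

	// partition reorders cases in place so that cases satisfying
	// goesLeft come first, then returns left and right subslices that
	// share the original backing array, so no allocation is needed.
	// Callers must treat the returned slices as read-only views.
	func partition(cases []int, goesLeft func(int) bool) (left, right []int) {
		i := 0
		for j, c := range cases {
			if goesLeft(c) {
				cases[i], cases[j] = cases[j], cases[i]
				i++
			}
		}
		return cases[:i], cases[i:]
	}

	func main() {
		cases := []int{0, 1, 2, 3, 4, 5}
		left, right := partition(cases, func(c int) bool { return c%2 == 0 })
		fmt.Println(left, right) // [0 2 4] [3 1 5], two views of one array
	}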
For example, a function like the one sketched above can return left and right slices that point to the same underlying array as the original slice of cases, but these slices should not have their values changed. Functions used while searching for the best split also accept pointers to reusable slices and structs to maximize speed by keeping memory allocations to a minimum. BestSplitAllocs contains pointers to these items, and its use can be seen in the split-searching functions described below. For categorical predictors, BestSplit will also attempt to intelligently choose between 4 different implementations depending on user input and the number of categories. These include exhaustive, random, and iterative searches for the best combination of categories implemented with bitwise operations against int and big.Int. See BestCatSplit, BestCatSplitIter, BestCatSplitBig and BestCatSplitIterBig. All numerical predictors are handled by BestNumSplit, which relies on go's sorting package.

Training a Random Forest is an inherently parallel process, and CloudForest is designed to allow parallel implementations that can tackle large problems while keeping memory usage low by writing and using data structures directly to/from disk. Trees can be grown in separate goroutines. The growforest utility provides an example of this that uses goroutines and channels to grow trees in parallel and write trees to disk as they are finished by the "worker" goroutines. The few summary statistics, like mean impurity decrease per feature (importance), can be calculated using thread-safe data structures like RunningMean. Trees can also be grown on separate machines. The .sf stochastic forest format allows several small forests to be combined by concatenation, and the ForestReader and ForestWriter structs allow these forests to be accessed tree by tree (or even node by node) from disk. For data sets that are too big to fit in memory on a single machine, Tree.Grow and FeatureMatrix.BestSplitter can be reimplemented to load candidate features from disk, a distributed database, etc.

By default CloudForest uses a fast heuristic for missing values. When proposing a split on a feature with missing data, the missing cases are removed and the impurity value is corrected to use three-way impurity, which reduces the bias towards features with lots of missing data. Missing values in the target variable are left out of impurity calculations. This provides generally good results at a fraction of the computational cost of imputing data. Optionally, Feature.ImputeMissing or FeatureMatrix.ImputeMissing can be called before forest growth to impute missing values to the feature mean/mode, which Breiman [2] suggests as a fast method for imputing values. This forest could also be analyzed for proximity (using leafcount or tree.GetLeaves) to do the more accurate proximity-weighted imputation Breiman describes. Experimental support is provided for 3-way splitting, which splits missing cases onto a third branch [2]. This has so far yielded mixed results in testing. At some point in the future, support may be added for local imputing of missing values during tree growth as described in [3].

[1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#missing1
[2] https://code.google.com/p/rf-ace/
[3] http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aoas/1223908043&page=record

In CloudForest data is stored using the FeatureMatrix struct, which contains Features.
The Feature struct implements storage and methods for both categorical and numerical data, calculations of impurity, etc, and the search for the best split. The Target interface abstracts the methods of Feature that are needed for a feature to be predictable; this allows for the implementation of alternative types of regression and classification.

Trees are built from Nodes and Splitters and stored within a Forest. Tree has a Grow method that implements Breiman and Cutler's method for growing a tree; the closure-based traversal it relies on is sketched below. A GrowForest method is also provided that implements the rest of the method, including sampling cases, but it may be faster to grow the forest to disk as in the growforest utility. Prediction and voting are done using Tree.Vote and CatBallotBox and NumBallotBox, which implement the VoteTallyer interface.
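Finally, the closure-based Recurse traversal mentioned above might look like the following self-contained sketch; the toy Node type and Recurse signature are illustrative assumptions, not CloudForest's actual API:

	package main

	import "fmt"

	// Node is a toy stand-in for a decision tree node.
	type Node struct {
		Left, Right *Node
		Cases       []int
	}

	// Recurse calls fn on this node and then on its children, so a
	// closure can accumulate whatever per-node analysis it needs
	// (importance scores, proximity, leaf counts, ...).
	func (n *Node) Recurse(fn func(*Node, int)) {
		var walk func(*Node, int)
		walk = func(nd *Node, depth int) {
			if nd == nil {
				return
			}
			fn(nd, depth)
			walk(nd.Left, depth+1)
			walk(nd.Right, depth+1)
		}
		walk(n, 0)
	}

	func main() {
		root := &Node{
			Cases: []int{0, 1, 2, 3},
			Left:  &Node{Cases: []int{0, 1}},
			Right: &Node{Cases: []int{2, 3}},
		}
		// Tabulate leaves, the way a utility like leafcount might.
		leaves := 0
		root.Recurse(func(nd *Node, depth int) {
			if nd.Left == nil && nd.Right == nil {
				leaves++
			}
		})
		fmt.Println("leaves:", leaves) // leaves: 2
	}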