Package order makes ordering and comparison tasks easier. It provides functionality to easily define and apply an order on values. It works out of the box for most primitive types and their pointer versions, and enables ordering of any object using three-way comparison (https://en.wikipedia.org/wiki/Three-way_comparison) with a given `func(T, T) int` function, or by implementing the generic interface `func (T) Compare(T) int`. Supported tasks: * [x] `Sort` / `SortStable` - sort a slice. * [x] `Search` - binary search for a value in a slice. * [x] `MinMax` - get indices of minimal and maximal values of a slice. * [x] `Is` - get a comparable object for more readable code. * [x] `Select` - get the K'th greatest value of a slice. * [x] `IsSorted` / `IsStrictSorted` - check if a slice is sorted. Order between values can be more forgiving than strict comparison. This library allows sensible type conversions: a type `U` can be used in an order function of type `T` in the following cases: * `U` is a pointer (or chain of pointers) to a `T`. * `T` is a pointer (or chain of pointers) to a `U`. * `T` and `U` are of the same kind. * `T` and `U` are of the same number kind group (int?, uint?, float?, complex?) and `U`'s bit size is less than or equal to `T`'s bit size. * `U` and `T` are assignable structs. Using this library might be less type safe, because of the interface-based API, and less efficient, because of the use of reflection. On the other hand, it reduces the chance of errors by providing well-tested, more readable code. See the sketch below for how some order tasks can be expressed with this library. A simple example shows how to use the order library with different basic types. A type may implement a `func (t T) Compare(other T) int` method; in that case it can be used directly with the order package functions. An example of ordering a struct with multiple fields with different priorities.
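A minimal sketch of a few of these tasks; the import path and the exact signatures of Sort, Search and Is are assumptions based on the description above, not a verified API:

    package main

    import (
        "fmt"

        "github.com/posener/order" // assumed import path
    )

    // Person satisfies the generic interface described above by providing a
    // func (T) Compare(T) int method.
    type Person struct {
        Name string
        Age  int
    }

    func (p Person) Compare(other Person) int { return p.Age - other.Age }

    func main() {
        // Sorting and binary-searching a slice of a primitive type
        // (signatures assumed).
        nums := []int{3, 1, 2}
        order.Sort(nums)
        i, ok := order.Search(nums, 2)
        fmt.Println(nums, i, ok)

        // `Is` returns a comparable object for more readable comparisons
        // (the method name GreaterEqual is an assumption).
        fmt.Println(order.Is(3).GreaterEqual(2))
    }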
Package gophersauce is an API wrapper for the SauceNAO.com reverse image search engine. Initializing a client (without additional options): Initializing a client with options: Any of the options can be omitted. By default, MaxResults will be 6 and APIKey will be an empty string. You can also change these properties after instantiating the client: There are three ways in which you can consume the SauceNAO API: URL, file, and reader. Reverse searching an image using a URL: Reverse searching an image using a file path: Reverse searching an image using a reader: API responses have helpful methods, such as First(), which returns the first result (likely the one most similar to your image), if any: Some of the response fields are, by default, declared as interfaces because of the way the SauceNAO API works. You will have to either check the type of the field yourself and parse it accordingly, or use a helper function, such as GetUserID(), GetAccountType() (on type SaucenaoResponse) or GetCreatorString() (on type SearchResult). Example: This will not work: This will work:
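A hedged sketch of the URL flow; the constructor, options struct and method names below are assumptions rather than the verified gophersauce API:

    package main

    import (
        "fmt"
        "log"

        "example.org/gophersauce" // placeholder import path
    )

    func main() {
        // Initialize a client with options; both fields can be omitted.
        client, err := gophersauce.NewClient(&gophersauce.Settings{
            APIKey:     "your-saucenao-api-key",
            MaxResults: 6,
        })
        if err != nil {
            log.Fatal(err)
        }

        // Reverse search an image using a URL (method name assumed).
        resp, err := client.FromURL("https://example.com/image.png")
        if err != nil {
            log.Fatal(err)
        }

        // First() returns the most similar result, if any.
        first, err := resp.First()
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(first.GetCreatorString())
    }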
Package errors provides simple error handling primitives. The traditional error handling idiom in Go is roughly akin to which applied recursively up the call stack results in error reports without context or debugging information. The errors package allows programmers to add context to the failure path in their code in a way that does not destroy the original value of the error. The errors.Annotate function returns a new error that adds context to the original error by recording a stack trace at the point Annotate is called, together with the supplied message. For example If additional control is required, the errors.AddStack and errors.WithMessage functions destructure errors.Annotate into its component operations of annotating an error with a stack trace and with a message, respectively. Using errors.Annotate constructs a stack of errors, adding context to the preceding error. Depending on the nature of the error it may be necessary to reverse the operation of errors.Annotate to retrieve the original error for inspection. Any error value which implements this interface can be inspected by errors.Cause. errors.Cause will recursively retrieve the topmost error which does not implement causer, which is assumed to be the original cause. For example: The causer interface is not exported by this package, but it is considered a part of its stable public API. errors.Unwrap is also available: this will retrieve the next error in the chain. All error values returned from this package implement fmt.Formatter and can be formatted by the fmt package. The following verbs are supported New, Errorf, Annotate, and Annotatef record a stack trace at the point they are invoked. This information can be retrieved with the StackTracer interface that returns a StackTrace. Where errors.StackTrace is defined as The Frame type represents a call site in the stack trace. Frame supports the fmt.Formatter interface that can be used for printing information about the stack trace of this error. For example: See the documentation for Frame.Format for more details. errors.Find can be used to search for an error in the error chain.
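A short, illustrative example of the Annotate/Cause round trip described above (the import path is assumed):

    package main

    import (
        "fmt"

        "github.com/pingcap/errors" // assumed import path
    )

    func readConfig(path string) error {
        err := fmt.Errorf("open %s: no such file", path)
        // Annotate records a stack trace and a message while preserving the
        // original error underneath.
        return errors.Annotate(err, "reading configuration failed")
    }

    func main() {
        err := readConfig("app.conf")
        fmt.Println(err)               // annotated message
        fmt.Println(errors.Cause(err)) // the original error
        fmt.Printf("%+v\n", err)       // message plus recorded stack trace
    }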
Doc is an old program that was used to design a nice command-level API for presenting Go documentation. It is deprecated; use the new go doc in 1.5 instead. Doc is a simple document printer that produces the doc comments for its argument symbols, plus a link to the full documentation and a pointer to the source. It has a more Go-like UI than godoc. It can also search for symbols by looking in all packages, and case is ignored. For instance: will find unicode.IsUpper. The -pkg flag retrieves package-level doc comments only. Usage: The pkg is the last element of the package path; no slashes (ast.Node not go/ast.Node). The name may also be a regular expression to select which names to match. In regular expression searches, case is ignored and the pattern must match the entire name, so ".?print" will match Print, Fprint and Sprint but not Fprintf. Flags restrict hits to declarations of the corresponding kind. Flags restrict printing to the documentation, source path, or godoc URL. Flag takes a single argument (no package), a name or regular expression to search for in all packages.
Package seekret provides a framework to create tools that inspect information looking for sensitive data like passwords, tokens, private keys, certificates, etc. The current trend of automating all the things and the DevOps culture are very beneficial for efficiency, but they also come with several problems, one of them being secret provisioning. Bootstrapping secrets into systems and applications may be complicated, and sometimes the straightforward way is to store them in insecure storage, like a GitHub repository, embedded into an artifact or system image, etc. That means that an AWS secret_key ends up in a GitHub repository. Seekret is an extensible framework that helps in creating tools for detecting secrets in different sources. The secrets to detect are defined by a set of rules that can help detect passwords, tokens, private keys, certificates, etc. Seekret is extensible and can cover various use cases. Below there are some tools that use seekret: The Seekret API is very simple and easy to use. This section shows some snippets of code with the basic operations you can do with it. The first thing to be done is to create a new Seekret context: Then the rules must be loaded. They can be loaded from a path definition, a directory or a single file: Optionally, exceptions (or false positives) can also be loaded from a file: After that, the objects to be inspected for secrets must be loaded. sourceType is an object that implements the interface shown below. We offer sourceTypes for Directories and Git Repositories, but you are able to extend it by creating your own. Currently, the following different sources are supported: Having all the rules, exceptions and objects loaded into the context, it's possible to start the inspection with the following code: Nworkers is an integer that specifies the number of goroutines used for the inspection. The recommended value is runtime.NumCPU(). Finally, it is possible to obtain the list of secrets located and do something with them:
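A rough sketch of that workflow; the function and package names used here (NewSeekret, LoadRulesFromPath, LoadExceptionsFromFile, LoadObjects, Inspect, ListSecrets, the dir source) are assumptions based on this description:

    package main

    import (
        "fmt"
        "log"
        "runtime"

        "github.com/apuigsech/seekret" // assumed import path
        "github.com/apuigsech/seekret/sources/dir"
    )

    func main() {
        // Create a new Seekret context.
        s := seekret.NewSeekret()

        // Load rules and, optionally, exceptions (false positives).
        if err := s.LoadRulesFromPath("/usr/share/seekret/rules", true); err != nil {
            log.Fatal(err)
        }
        if err := s.LoadExceptionsFromFile("exceptions.yml"); err != nil {
            log.Fatal(err)
        }

        // Load the objects to inspect from a directory source type.
        if err := s.LoadObjects(dir.NewSourceDir(), ".", nil); err != nil {
            log.Fatal(err)
        }

        // Inspect with one worker per CPU, then report what was found.
        s.Inspect(runtime.NumCPU())
        for _, secret := range s.ListSecrets() {
            fmt.Println(secret)
        }
    }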
Package goidoit is an API client implementation for https://www.i-doit.com/ written in Go (https://golang.org). Install it using go get. Initialize your API object and do things like getting your i-doit version or searching for objects. For debugging use, and to disable TLS certificate verification (if you use self-signed certs for i-doit):
Package **cli** provides access to a **DB2 database** using the DB2 Call Level Interface (**CLI**) API. This requires **cgo** and the DB2 _cli/odbc_ driver **libdb2.so**. It is not possible to use this driver to create a statically linked Go package because IBM doesn't provide the DB2 _cli/odbc_ driver as a _libdb2.a_ static library. On **Windows**, the DB2 _cli/odbc_ library is not compatible with **gcc**, but **cgo** requires **gcc**. Hence, this driver is not supported on Windows. **cli** is based on *alexbrainman's* odbc package: https://github.com/alexbrainman/odbc. This package registers a driver for the standard Go **database/sql** package and is used through the **database/sql** API. ### Error Handling The package has no exported API except two functions, **SQLCode()** and **SQLState()**, for inspecting DB2 CLI errors. The function signature is as follows: Since package **cli** is imported for side-effects only, use the following code pattern to access SQLCode() and SQLState(): The local interface can also include SQLState() for inspecting the SQLSTATE from DB2 CLI. **SQLCODE** is a return code from an IBM DB2 SQL operation. This code can be zero (0), negative, or positive. Search "SQL messages" in the DB2 Information Center to find out more about SQLCODE. **SQLSTATE** is a return code like SQLCODE, but instead of a number it is a five-character error code that is consistent across all IBM database products. SQLSTATE follows this format: ccsss, where cc indicates the class and sss indicates the subclass. Search "SQLSTATE Messages" in the DB2 Information Center for more detail. ### Connection String This driver uses the DB2 CLI functions **SQLConnect** and **SQLDriverConnect** in driver.Open(...). To use **SQLConnect**, start the name or the DSN string with the keyword sqlconnect. This keyword is case insensitive. The connection string needs to follow this syntax to be valid: [...] means optional. If a database_name is not provided, then SAMPLE is used as the database name. Also note that each keyword and value ends with a semicolon. The keyword "sqlconnect" doesn't take a value but ends with a semicolon. Examples: Any other connection string must follow the connection string rule that is valid with SQLDriverConnect. For example, this is a valid dsn/connection string for SQLDriverConnect: Examples: Search **SQLDriverConnect** in the DB2 LUW *Information Center* for more detail. ## Installation IBM DB2 for Linux, Unix and Windows (DB2 LUW) implements its own ODBC driver. This package uses the DB2 ODBC/CLI driver through cgo. As such this package requires DB2 C headers and libraries for compilation. If you don't have DB2 LUW installed on the system, then you can install the free community DB2 version, *DB2 Express-C*. You can also use the *IBM Data Server Driver package*. It includes the required headers and libraries but not a DB2 database manager. To install, download this package by running the following: Go to the following directory: In that directory run the following to install the package: This script and this driver only work on Mac OS and Linux. ## Usage See `example_test.go`.
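A hedged sketch of the error-inspection pattern described above; the import path, the registered driver name "cli" and the example connection string are assumptions:

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/asifjalil/cli" // imported for side effects only (assumed path)
    )

    // Local interface for inspecting DB2 CLI errors through the two exported
    // methods described above.
    type db2Error interface {
        SQLCode() int
        SQLState() string
    }

    func main() {
        // Driver name and connection string are illustrative only.
        db, err := sql.Open("cli", "sqlconnect;DATABASE=sample;UID=user;PWD=secret;")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec("DROP TABLE no_such_table"); err != nil {
            // Type-assert the returned error to inspect SQLCODE and SQLSTATE.
            if e, ok := err.(db2Error); ok {
                log.Printf("sqlcode=%d sqlstate=%s", e.SQLCode(), e.SQLState())
            }
            log.Fatal(err)
        }
    }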
Package enigma provides developers with a Go client for the Enigma.io API. The Enigma API allows users to download datasets, query metadata, or perform server-side operations on tables in Enigma. All calls to the API are made through a RESTful protocol and require an API key. The Enigma API is served over HTTPS. Enigma.io (http://enigma.io) lets you quickly search and analyze billions of public records published by governments, companies and organizations. Please refer to the official documentation of the API at http://app.enigma.io/api for more information.
Package twitter implements a client for the Twitter API v2. This package is in development and is not yet ready for production use. The general structure of an API call is to first construct a query, then invoke that query with a context on a client: Package "types" contains the type and constant definitions for the API. Queries to look up tweets by ID or username, to search recent tweets, and to search or sample streams of tweets are defined in package "tweets". Queries to look up users by ID or user name are defined in package "users". Queries to read or update search rules are defined in package "rules". Queries to create, edit, delete, and show the contents of lists are defined in package "lists".
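A sketch of the construct-then-invoke pattern; the client fields, authorizer and lookup helpers shown here are assumptions, not a verified API:

    package main

    import (
        "context"
        "log"

        "github.com/creachadair/twitter"        // assumed import path
        "github.com/creachadair/twitter/tweets" // query constructors for tweets
    )

    func main() {
        ctx := context.Background()

        // Construct a client (authorization details assumed).
        cli := &twitter.Client{Authorize: twitter.BearerTokenAuthorizer("token")}

        // First construct a query...
        q := tweets.Lookup("1278000000000000000", nil)

        // ...then invoke it with a context on the client.
        rsp, err := q.Invoke(ctx, cli)
        if err != nil {
            log.Fatal(err)
        }
        for _, t := range rsp.Tweets {
            log.Println(t.Text) // field name assumed
        }
    }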
Package conf is a configuration parser that loads configuration into a struct from files and automatically creates CLI flags. Just define a struct with the needed configuration. Values are then taken from multiple sources in this order of precedence: By default, lower-cased field names in the struct are used to create command line arguments. Tags can be used to specify a different name, a command line help message and a section in the conf file. The format is: The conf.Path variable can be set to the path of a configuration file. By default it is initialized to the value of the environment variable: Files ending with: will be parsed as the INI informal standard: The first letter of every key found is upper-cased and then a struct field with the same name is searched for: If no such field name is found, the comparison is made against the key specified as the first element in the tag. conf tries to be modular and easily extensible to support different formats. This is a work in progress, APIs can change.
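A hedged sketch of the struct-plus-tags pattern; the import path, the tag layout and the Load entry point are assumptions based on this description:

    package main

    import (
        "fmt"

        "example.org/conf" // placeholder import path
    )

    type Config struct {
        // The lower-cased field name "address" would become the CLI flag;
        // the tag (layout assumed) overrides the name and adds a help
        // message and an INI section.
        Address string `conf:"addr,listen address,server"`
        Port    int    `conf:"port,listen port,server"`
        Debug   bool
    }

    func main() {
        cfg := Config{Address: "localhost", Port: 8080} // defaults

        // conf.Path may point at an INI-style configuration file.
        conf.Path = "app.conf"
        if err := conf.Load(&cfg); err != nil { // assumed entry point
            panic(err)
        }
        fmt.Printf("%+v\n", cfg)
    }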
Package freegeoip provides an API for searching the geolocation of IP addresses. It uses a database that can be either a local file or a remote resource from a URL. Local databases are monitored by fsnotify and reloaded when the file is either updated or overwritten. Remote databases are automatically downloaded and updated in the background so you can focus on using the API and not managing the database.
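A hedged sketch of a lookup against a local database; Open, Lookup and the import path are assumed from this description:

    package main

    import (
        "fmt"
        "log"
        "net"

        "github.com/fiorix/freegeoip" // assumed import path
    )

    func main() {
        // Open a local database file; it is watched and reloaded on change.
        db, err := freegeoip.Open("/var/lib/GeoLite2-City.mmdb")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Decode only the fields we care about.
        var result struct {
            Country struct {
                ISOCode string `maxminddb:"iso_code"`
            } `maxminddb:"country"`
        }
        if err := db.Lookup(net.ParseIP("8.8.8.8"), &result); err != nil {
            log.Fatal(err)
        }
        fmt.Println(result.Country.ISOCode)
    }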
Package bst and its sub-packages specify an extensible Binary Search Tree (BST) API. Instead of trying to think of every method a user might want on a BST, we declare a set of extensible methods in the hope of enabling users to create any functionality they need. Packages and sub-packages: -------------------------- * bst Declares basic BST methods (Insert, Get, Remove, etc). * bst/finder Declares the extensible method Find, which allows you to direct navigation down a tree, looking for a specific value. * bst/visitor Declares the extensible method Visit, which allows you to defer taking an action on a tree node (insert, get, update, delete) until after you've visited that node (i.e. inspect then change with one visit to the node). * bst/walker Declares the extensible method Walk, which allows you to freely traverse a tree, either up and down (left, right, parent) or in sequential order (prev, next). * bst/simple Provides a reference implementation of an extensible BST that implements all of the above-declared methods. License ------- This package is released under the MIT License. See the included file 'LICENSE' for more details. Contributors ------------ David Farell <DavidPFarrell@yahoo.com>
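A hedged sketch of using the bst/simple reference implementation; the constructor and method signatures are assumptions based on the method names listed above:

    package main

    import (
        "fmt"

        "example.org/bst/simple" // placeholder import path
    )

    func main() {
        // Build a tree with the basic BST methods.
        tree := simple.New()
        for _, v := range []int{5, 2, 8, 1} {
            tree.Insert(v)
        }

        // Get and Remove are part of the basic method set (signatures assumed).
        if v, ok := tree.Get(2); ok {
            fmt.Println("found:", v)
        }
        tree.Remove(8)
    }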
Package typesense is a Go client for Typesense, an open-source, typo-tolerant search engine. This package defines a client and some useful methods to communicate with the Typesense API without the drawback of writing actual API requests. Below is an example of an actual program that connects to the Typesense API, creates a new collection, indexes some documents and retrieves them using search. Highly inspired by the official Typesense guide https://typesense.org/docs/0.11.2/guide/ .
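A hedged sketch of that flow (connect, create a collection, index a document, search); all type and method names are assumptions, not the verified client API:

    package main

    import (
        "fmt"
        "log"

        "example.org/typesense" // placeholder import path
    )

    func main() {
        client := typesense.NewClient("http://localhost:8108", "api-key")

        // Create a collection with a simple schema.
        schema := typesense.CollectionSchema{
            Name: "books",
            Fields: []typesense.Field{
                {Name: "title", Type: "string"},
            },
        }
        if _, err := client.CreateCollection(schema); err != nil {
            log.Fatal(err)
        }

        // Index a document, then retrieve it with a search.
        doc := map[string]interface{}{"title": "The Go Programming Language"}
        if _, err := client.IndexDocument("books", doc); err != nil {
            log.Fatal(err)
        }
        results, err := client.Search("books", "go", "title")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(results)
    }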
Package openfec implements a client library for the OpenFEC API. See https://api.open.fec.gov This package allows you to explore the way candidates and committees fund their campaigns by accessing FEC (Federal Election Commission) data. A few restrictions limit the way you can use FEC data. For example, you can’t use contributor lists for commercial purposes or to solicit donations. Get an API key here: https://api.data.gov/signup/ Endpoints are classified in the following groups: Candidate, Committee, Financial, Filings and Schedules. Candidate endpoints give you access to information about the people running for office. This information is organized by candidate_id. If you're unfamiliar with candidate IDs, using `/candidates/search` will help you locate a particular candidate. Officially, a candidate is an individual seeking nomination for election to a federal office. People become candidates when they (or agents working on their behalf) raise contributions or make expenditures that exceed $5,000. The candidate endpoints primarily use data from FEC registration [Form 1](http://www.fec.gov/pdf/forms/fecfrm1.pdf), for candidate information, and [Form 2](http://www.fec.gov/pdf/forms/fecfrm2.pdf), for committee information. Committees are entities that spend and raise money in an election. Their characteristics and relationships with candidates can change over time. You might want to use filters or search endpoints to find the committee you're looking for. Then you can use other committee endpoints to explore information about the committee that interests you. Financial information is organized by `committee_id`, so finding the committee you're interested in will lead you to more granular financial information. The committee endpoints include all FEC filers, even if they aren't registered as a committee. Officially, committees include the committees and organizations that file with the FEC. Several different types of organizations file financial reports with the FEC: * Campaign committees authorized by particular candidates to raise and spend funds in their campaigns * Non-party committees (e.g., PACs), some of which may be sponsored by corporations, unions, trade or membership groups, etc. * Political party committees at the national, state, and local levels * Groups and individuals making only independent expenditures * Corporations, unions, and other organizations making internal communications The committee endpoints primarily use data from FEC registration Form 1 and Form 2. Fetch key information about a committee's Form 3, Form 3X, or Form 3P financial reports. Most committees are required to summarize their financial activity in each filing; those summaries are included in these files. Generally, committees file reports on a quarterly or monthly basis, but some must also submit a report 12 days before primary elections. Therefore, during the primary season, the period covered by this file may be different for different committees. These totals also incorporate any changes made by committees, if any report covering the period is amended. Information is made available on the API as soon as it's processed. Keep in mind, complex paper filings take longer to process.
The financial endpoints use data from FEC [Form 5](http://www.fec.gov/pdf/forms/fecfrm5.pdf), for independent expenditures; or the summary and detailed summary pages of the FEC [Form 3](http://www.fec.gov/pdf/forms/fecfrm3.pdf), for House and Senate committees; [Form 3X](http://www.fec.gov/pdf/forms/fecfrm3x.pdf), for PACs and parties; and [Form 3P](http://www.fec.gov/pdf/forms/fecfrm3p.pdf), for presidential committees. All official records and reports filed by or delivered to the FEC. Note: because the filings data includes many records, counts for large result sets are approximate. Schedules come from particular sections on forms and contain detailed transactional data. Schedule A explains where contributions come from. If you are interested in individual donors, this will be the endpoint you use. For the Schedule A aggregates, "memoed" items are not included to avoid double counting. Schedule B explains how money is spent.
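A hedged sketch of locating a candidate_id via candidate search as described above; the client constructor, method and field names are assumptions, not the verified API of this package:

    package main

    import (
        "fmt"
        "log"

        "example.org/openfec" // placeholder import path
    )

    func main() {
        // An api.data.gov key is required for all requests.
        client := openfec.NewClient("YOUR_API_KEY")

        // Roughly equivalent to calling /candidates/search to find candidate IDs.
        candidates, err := client.SearchCandidates("Smith")
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range candidates {
            fmt.Println(c.CandidateID, c.Name)
        }
    }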
Package gbsearch provides a way to search the Google Books API.
Package stringset provides the capabilities of a standard Set with strings, including the standard set operations and a specialized ability to have a "negative" set for intersections. Negative sets are useful for inverting the sense of a set when combining them using Intersection. Negative sets do not change the behavior of any other set operations, including Union and Difference. Negative operations were created for managing a set of tags, where it is useful to be able to search for items that contain some tags but do not contain others. The API is designed to be chainable so that common operations become one-liners, and it is designed to be easy to use with slices of strings. This package does not return errors. Set operations should be fast and chainable; adding an item more than once or attempting to remove something that doesn't exist is not an error.
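A hedged sketch of the tag-filtering use case; the constructor and the Negate/Intersection/IsEmpty names are assumptions based on this description:

    package main

    import (
        "fmt"

        "example.org/stringset" // placeholder import path
    )

    func main() {
        // Tags attached to a single item.
        item := stringset.New("go", "search", "cli")

        // Keep items tagged "go" but NOT tagged "deprecated": intersect with
        // a normal set and with a negative set, chained as a one-liner.
        want := stringset.New("go")
        exclude := stringset.New("deprecated").Negate() // assumed negation method

        matches := item.Intersection(want).Intersection(exclude)
        fmt.Println(!matches.IsEmpty()) // assumed emptiness check
    }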
Package postkr provides access to APIs of epost.kr, the Korean post office. epost.kr provides APIs for tracking snail mail and searching zip codes. However, I can't access the tracking API with my key, so it is not implemented currently. You need your own key, which is issued by epost.kr. Get one from this link:
Package itunes_search_go provides a client for accessing the iTunes Search API. See https://affiliate.itunes.apple.com/resources/documentation/itunes-store-web-service-search-api
Package CloudForest implements ensembles of decision trees for machine learning in pure Go (golang to search engines). It allows for a number of related algorithms for classification, regression, feature selection and structure analysis on heterogeneous numerical/categorical data with missing values. These include: Breiman and Cutler's Random Forest for Classification and Regression, Adaptive Boosting (AdaBoost) Classification, Gradient Boosting Tree Regression, Entropy and Cost driven classification, L1 regression, Feature selection with artificial contrasts, Proximity and model structure analysis, and Roughly balanced bagging for unbalanced classification. The API hasn't stabilized yet and may change rapidly. Tests and benchmarks have been performed only on embargoed data sets and cannot yet be released. Library documentation is in code and can be viewed with godoc or live at: http://godoc.org/github.com/IlyaLab/CloudForest Documentation of command line utilities and file formats can be found in README.md, which can be viewed formatted on GitHub: http://github.com/IlyaLab/CloudForest Pull requests and bug reports are welcome. CloudForest was created by Ryan Bressler and is being developed in the Shumelivich Lab at the Institute for Systems Biology for use on genomic/biomedical data with partial support from The Cancer Genome Atlas and the Inova Translational Medicine Institute. CloudForest is intended to provide fast, comprehensible building blocks that can be used to implement ensembles of decision trees. CloudForest is written in Go to allow a data scientist to develop and scale new models and analysis quickly instead of having to modify complex legacy code. Data structures and file formats are chosen with use in multi-threaded and cluster environments in mind. Go's support for function types is used to provide an interface to run code as data is percolated through a tree. This method is flexible enough that it can extend the tree being analyzed. Growing a decision tree using Breiman and Cutler's method can be done in an anonymous function/closure passed to a tree's root node's Recurse method: This allows a researcher to include whatever additional analysis they need (importance scores, proximity etc) in tree growth. The same Recurse method can also be used to analyze existing forests to tabulate scores or extract structure. Utilities like leafcount and errorrate use this method to tabulate data about the tree in collection objects. Decision trees are grown with the goal of reducing "Impurity", which is usually defined as Gini Impurity for categorical targets or mean squared error for numerical targets. CloudForest grows trees against the Target interface, which allows for alternative definitions of impurity. CloudForest includes several alternative targets: Additional targets can be stacked on top of these targets to add boosting functionality: Repeatedly splitting the data and searching for the best split at each node of a decision tree are the most computationally intensive parts of decision tree learning, and CloudForest includes optimized code to perform these tasks. Go's slices are used extensively in CloudForest to make it simple to interact with optimized code. Many previous implementations of Random Forest have avoided reallocation by reordering data in place and keeping track of start and end indexes. In Go, slices pointing at the same underlying arrays make this sort of optimization transparent.
For example a function like: can return left and right slices that point to the same underlying array as the original slice of cases, but these slices should not have their values changed. Functions used while searching for the best split also accept pointers to reusable slices and structs to maximize speed by keeping memory allocations to a minimum. BestSplitAllocs contains pointers to these items and its use can be seen in functions like: For categorical predictors, BestSplit will also attempt to intelligently choose between 4 different implementations depending on user input and the number of categories. These include exhaustive, random, and iterative searches for the best combination of categories implemented with bitwise operations against int and big.Int. See BestCatSplit, BestCatSplitIter, BestCatSplitBig and BestCatSplitIterBig. All numerical predictors are handled by BestNumSplit which relies on Go's sorting package. Training a Random Forest is an inherently parallel process and CloudForest is designed to allow parallel implementations that can tackle large problems while keeping memory usage low by writing and using data structures directly to/from disk. Trees can be grown in separate goroutines. The growforest utility provides an example of this that uses goroutines and channels to grow trees in parallel and write trees to disk as they are finished by the "worker" goroutines. The few summary statistics like mean impurity decrease per feature (importance) can be calculated using thread-safe data structures like RunningMean. Trees can also be grown on separate machines. The .sf stochastic forest format allows several small forests to be combined by concatenation, and the ForestReader and ForestWriter structs allow these forests to be accessed tree by tree (or even node by node) from disk. For data sets that are too big to fit in memory on a single machine, Tree.Grow and FeatureMatrix.BestSplitter can be reimplemented to load candidate features from disk, a distributed database, etc. By default CloudForest uses a fast heuristic for missing values. When proposing a split on a feature with missing data, the missing cases are removed and the impurity value is corrected to use three way impurity, which reduces the bias towards features with lots of missing data: Missing values in the target variable are left out of impurity calculations. This has provided generally good results at a fraction of the computational cost of imputing data. Optionally, feature.ImputeMissing or featurematrixImputeMissing can be called before forest growth to impute missing values to the feature mean/mode, which Breiman [2] suggests as a fast method for imputing values. This forest could also be analyzed for proximity (using leafcount or tree.GetLeaves) to do the more accurate proximity-weighted imputation Breiman describes. Experimental support is provided for 3 way splitting which splits missing cases onto a third branch. [2] This has so far yielded mixed results in testing. At some point in the future support may be added for local imputing of missing values during tree growth as described in [3] [1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#missing1 [2] https://code.google.com/p/rf-ace/ [3] http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aoas/1223908043&page=record In CloudForest data is stored using the FeatureMatrix struct which contains Features.
The Feature struct implements storage and methods for both categorical and numerical data, calculations of impurity etc. and the search for the best split. The Target interface abstracts the methods of Feature that are needed for a feature to be predictable. This allows for the implementation of alternative types of regression and classification. Trees are built from Nodes and Splitters and stored within a Forest. Tree has a Grow method that implements Breiman and Cutler's method (see the extract above) for growing a tree. A GrowForest method is also provided that implements the rest of the method, including sampling cases, but it may be faster to grow the forest to disk as in the growforest utility. Prediction and voting are done using Tree.Vote and CatBallotBox and NumBallotBox, which implement the VoteTallyer interface.
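A rough sketch of the Recurse pattern described above, here used to analyze an existing tree rather than grow one; the Recurse signature and Node fields are assumptions, not the exact CloudForest API:

    package example

    import "github.com/IlyaLab/CloudForest"

    // CountLeaves walks an existing tree with the same Recurse mechanism used
    // during growth and tallies its leaves, the kind of bookkeeping utilities
    // like leafcount perform.
    func CountLeaves(tree *CloudForest.Tree, fm *CloudForest.FeatureMatrix, cases []int) int {
        leaves := 0
        tree.Root.Recurse(func(n *CloudForest.Node, innercases []int, depth int) {
            // A node with no children is a leaf.
            if n.Left == nil && n.Right == nil {
                leaves++
            }
        }, fm, cases, 0)
        return leaves
    }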
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `esquery`, and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple: esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } The library will always generate this: This is also true for queries such as "bool", where fields like "must" can either receive one query object, or an array of query objects. `esquery` will generate an array even if there's only one query object.
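A short example of the chaining style, building the "term" query from the snippet above and running it through the official client; version paths and option names are as recalled and may need adjustment:

    package main

    import (
        "context"
        "log"

        "github.com/aquasecurity/esquery"
        "github.com/elastic/go-elasticsearch/v7"
    )

    func main() {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatal(err)
        }

        // Equivalent of {"query": {"term": {"user": "Kimchy"}}}, without
        // hand-writing nested maps or JSON.
        res, err := esquery.Search().
            Query(esquery.Term("user", "Kimchy")).
            Size(10).
            Run(es,
                es.Search.WithContext(context.Background()),
                es.Search.WithIndex("users"),
            )
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()
        log.Println(res.Status())
    }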
Package Maroto provides a simple way to generate PDF documents. Maroto is inspired by Bootstrap and uses gofpdf. Simple and Fast - Grid system with rows and columns - Automatic page breaks - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images - Lines - Barcodes - Qrcodes - Signatures Maroto has only the gofpdf dependency. All tests pass on Linux and Mac. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the maroto_test.go file (shown as examples in this documentation) for more advanced PDF examples. This package is a high-level API on top of gofpdf. The original API names have been slightly adapted, and the package aims to be simpler to use. The main contribution on top of gofpdf is the grid system with high-level components. Maroto is released under the GPL3 License. This package's code and documentation are based on gofpdf. - Improve test coverage as reported by the coverage tool.
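A small example in the spirit of the description above, based on the v0.x Maroto API as recalled (pdf.NewMaroto, Row, Col, Text), which may differ in other versions:

    package main

    import (
        "log"

        "github.com/johnfercher/maroto/pkg/consts"
        "github.com/johnfercher/maroto/pkg/pdf"
        "github.com/johnfercher/maroto/pkg/props"
    )

    func main() {
        m := pdf.NewMaroto(consts.Portrait, consts.A4)

        // One row of the grid with a single full-width column of text.
        m.Row(10, func() {
            m.Col(12, func() {
                m.Text("Hello, Maroto", props.Text{Size: 12, Align: consts.Center})
            })
        })

        if err := m.OutputFileAndClose("hello.pdf"); err != nil {
            log.Fatal(err)
        }
    }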
Package scrape provides a searching API on top of golang.org/x/net/html.
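A short example of the searching API as recalled (FindAll, ByTag, Attr, Text); names may differ slightly in the actual package:

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/yhat/scrape" // assumed import path
        "golang.org/x/net/html"
        "golang.org/x/net/html/atom"
    )

    func main() {
        resp, err := http.Get("https://example.com")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        root, err := html.Parse(resp.Body)
        if err != nil {
            log.Fatal(err)
        }

        // Find every anchor in the parsed tree and print its text and href.
        for _, link := range scrape.FindAll(root, scrape.ByTag(atom.A)) {
            fmt.Println(scrape.Text(link), scrape.Attr(link, "href"))
        }
    }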
Package CloudForest implements ensembles of decision trees for machine learning in pure Go (golang to search engines). It allows for a number of related algorithms for classification, regression, feature selection and structure analysis on heterogeneous numerical/categorical data with missing values. These include: Breiman and Cutler's Random Forest for Classification and Regression, Adaptive Boosting (AdaBoost) Classification, Gradient Boosting Tree Regression, Entropy and Cost driven classification, L1 regression, Feature selection with artificial contrasts, Proximity and model structure analysis, and Roughly balanced bagging for unbalanced classification. The API hasn't stabilized yet and may change rapidly. Tests and benchmarks have been performed only on embargoed data sets and cannot yet be released. Library documentation is in code and can be viewed with godoc or live at: http://godoc.org/github.com/ryanbressler/CloudForest Documentation of command line utilities and file formats can be found in README.md, which can be viewed formatted on GitHub: http://github.com/ryanbressler/CloudForest Pull requests and bug reports are welcome. CloudForest was created by Ryan Bressler and is being developed in the Shumelivich Lab at the Institute for Systems Biology for use on genomic/biomedical data with partial support from The Cancer Genome Atlas and the Inova Translational Medicine Institute. CloudForest is intended to provide fast, comprehensible building blocks that can be used to implement ensembles of decision trees. CloudForest is written in Go to allow a data scientist to develop and scale new models and analysis quickly instead of having to modify complex legacy code. Data structures and file formats are chosen with use in multi-threaded and cluster environments in mind. Go's support for function types is used to provide an interface to run code as data is percolated through a tree. This method is flexible enough that it can extend the tree being analyzed. Growing a decision tree using Breiman and Cutler's method can be done in an anonymous function/closure passed to a tree's root node's Recurse method: This allows a researcher to include whatever additional analysis they need (importance scores, proximity etc) in tree growth. The same Recurse method can also be used to analyze existing forests to tabulate scores or extract structure. Utilities like leafcount and errorrate use this method to tabulate data about the tree in collection objects. Decision trees are grown with the goal of reducing "Impurity", which is usually defined as Gini Impurity for categorical targets or mean squared error for numerical targets. CloudForest grows trees against the Target interface, which allows for alternative definitions of impurity. CloudForest includes several alternative targets: Additional targets can be stacked on top of these targets to add boosting functionality: Repeatedly splitting the data and searching for the best split at each node of a decision tree are the most computationally intensive parts of decision tree learning, and CloudForest includes optimized code to perform these tasks. Go's slices are used extensively in CloudForest to make it simple to interact with optimized code. Many previous implementations of Random Forest have avoided reallocation by reordering data in place and keeping track of start and end indexes. In Go, slices pointing at the same underlying arrays make this sort of optimization transparent.
For example a function like: can return left and right slices that point to the same underlying array as the original slice of cases, but these slices should not have their values changed. Functions used while searching for the best split also accept pointers to reusable slices and structs to maximize speed by keeping memory allocations to a minimum. BestSplitAllocs contains pointers to these items and its use can be seen in functions like: For categorical predictors, BestSplit will also attempt to intelligently choose between 4 different implementations depending on user input and the number of categories. These include exhaustive, random, and iterative searches for the best combination of categories implemented with bitwise operations against int and big.Int. See BestCatSplit, BestCatSplitIter, BestCatSplitBig and BestCatSplitIterBig. All numerical predictors are handled by BestNumSplit which relies on Go's sorting package. Training a Random Forest is an inherently parallel process and CloudForest is designed to allow parallel implementations that can tackle large problems while keeping memory usage low by writing and using data structures directly to/from disk. Trees can be grown in separate goroutines. The growforest utility provides an example of this that uses goroutines and channels to grow trees in parallel and write trees to disk as they are finished by the "worker" goroutines. The few summary statistics like mean impurity decrease per feature (importance) can be calculated using thread-safe data structures like RunningMean. Trees can also be grown on separate machines. The .sf stochastic forest format allows several small forests to be combined by concatenation, and the ForestReader and ForestWriter structs allow these forests to be accessed tree by tree (or even node by node) from disk. For data sets that are too big to fit in memory on a single machine, Tree.Grow and FeatureMatrix.BestSplitter can be reimplemented to load candidate features from disk, a distributed database, etc. By default CloudForest uses a fast heuristic for missing values. When proposing a split on a feature with missing data, the missing cases are removed and the impurity value is corrected to use three way impurity, which reduces the bias towards features with lots of missing data: Missing values in the target variable are left out of impurity calculations. This has provided generally good results at a fraction of the computational cost of imputing data. Optionally, feature.ImputeMissing or featurematrixImputeMissing can be called before forest growth to impute missing values to the feature mean/mode, which Breiman [2] suggests as a fast method for imputing values. This forest could also be analyzed for proximity (using leafcount or tree.GetLeaves) to do the more accurate proximity-weighted imputation Breiman describes. Experimental support is provided for 3 way splitting which splits missing cases onto a third branch. [2] This has so far yielded mixed results in testing. At some point in the future support may be added for local imputing of missing values during tree growth as described in [3] [1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#missing1 [2] https://code.google.com/p/rf-ace/ [3] http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aoas/1223908043&page=record In CloudForest data is stored using the FeatureMatrix struct which contains Features.
The Feature struct implements storage and methods for both categorical and numerical data, calculations of impurity etc. and the search for the best split. The Target interface abstracts the methods of Feature that are needed for a feature to be predictable. This allows for the implementation of alternative types of regression and classification. Trees are built from Nodes and Splitters and stored within a Forest. Tree has a Grow method that implements Breiman and Cutler's method (see the extract above) for growing a tree. A GrowForest method is also provided that implements the rest of the method, including sampling cases, but it may be faster to grow the forest to disk as in the growforest utility. Prediction and voting are done using Tree.Vote and CatBallotBox and NumBallotBox, which implement the VoteTallyer interface.
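A rough sketch of prediction and voting as described above; the constructor, Vote and Tally signatures are assumptions, not the exact CloudForest API:

    package example

    import "github.com/ryanbressler/CloudForest"

    // PredictCat casts every tree's vote into a CatBallotBox and reads back a
    // tallied prediction per case.
    func PredictCat(forest *CloudForest.Forest, fm *CloudForest.FeatureMatrix, ncases int) []string {
        bb := CloudForest.NewCatBallotBox(ncases) // one ballot per case
        for _, tree := range forest.Trees {
            tree.Vote(fm, bb)
        }
        preds := make([]string, ncases)
        for i := range preds {
            preds[i] = bb.Tally(i) // assumed tally accessor
        }
        return preds
    }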
Package anticycle is a tool for static code analysis which searches for dependency cycles. It recursively scans all source files and parses their dependencies. Anticycle does not compile the code, so it is ideal for searching for complex, difficult-to-debug cycles. Anticycle consists of three parts: 1. The main package, which lives in the cmd/anticycle directory and compiles to a CLI binary. 2. A public API which is importable by other applications. 3. An internal, private interface which should not be imported or used directly. It may change at any moment without warning.
Package CloudForest implements ensembles of decision trees for machine learning in pure Go (golang to search engines). It allows for a number of related algorithms for classification, regression, feature selection and structure analysis on heterogeneous numerical/categorical data with missing values. These include: Breiman and Cutler's Random Forest for Classification and Regression Adaptive Boosting (AdaBoost) Classification Gradiant Boosting Tree Regression Entropy and Cost driven classification L1 regression Feature selection with artificial contrasts Proximity and model structure analysis Roughly balanced bagging for unbalanced classification The API hasn't stabilized yet and may change rapidly. Tests and benchmarks have been performed only on embargoed data sets and can not yet be released. Library Documentation is in code and can be viewed with godoc or live at: http://godoc.org/github.com/ryanbressler/CloudForest Documentation of command line utilities and file formats can be found in README.md, which can be viewed fromated on github: http://github.com/ryanbressler/CloudForest Pull requests and bug reports are welcome. CloudForest was created by Ryan Bressler and is being developed in the Shumelivich Lab at the Institute for Systems Biology for use on genomic/biomedical data with partial support from The Cancer Genome Atlas and the Inova Translational Medicine Institute. CloudForest is intended to provide fast, comprehensible building blocks that can be used to implement ensembles of decision trees. CloudForest is written in Go to allow a data scientist to develop and scale new models and analysis quickly instead of having to modify complex legacy code. Data structures and file formats are chosen with use in multi threaded and cluster environments in mind. Go's support for function types is used to provide a interface to run code as data is percolated through a tree. This method is flexible enough that it can extend the tree being analyzed. Growing a decision tree using Breiman and Cutler's method can be done in an anonymous function/closure passed to a tree's root node's Recurse method: This allows a researcher to include whatever additional analysis they need (importance scores, proximity etc) in tree growth. The same Recurse method can also be used to analyze existing forests to tabulate scores or extract structure. Utilities like leafcount and errorrate use this method to tabulate data about the tree in collection objects. Decision tree's are grown with the goal of reducing "Impurity" which is usually defined as Gini Impurity for categorical targets or mean squared error for numerical targets. CloudForest grows trees against the Target interface which allows for alternative definitions of impurity. CloudForest includes several alternative targets: Additional targets can be stacked on top of these target to add boosting functionality: Repeatedly splitting the data and searching for the best split at each node of a decision tree are the most computationally intensive parts of decision tree learning and CloudForest includes optimized code to perform these tasks. Go's slices are used extensively in CloudForest to make it simple to interact with optimized code. Many previous implementations of Random Forest have avoided reallocation by reordering data in place and keeping track of start and end indexes. In go, slices pointing at the same underlying arrays make this sort of optimization transparent. 
For example, a partitioning function like the sketch above can return left and right slices that point to the same underlying array as the original slice of cases, but these slices should not have their values changed. Functions used while searching for the best split also accept pointers to reusable slices and structs to maximize speed by keeping memory allocations to a minimum. BestSplitAllocs contains pointers to these items, and its use can be seen in the BestSplit family of functions. For categorical predictors, BestSplit will also attempt to intelligently choose between 4 different implementations depending on user input and the number of categories. These include exhaustive, random, and iterative searches for the best combination of categories implemented with bitwise operations against int and big.Int. See BestCatSplit, BestCatSplitIter, BestCatSplitBig and BestCatSplitIterBig. All numerical predictors are handled by BestNumSplit, which relies on Go's sorting package. Training a Random Forest is an inherently parallel process, and CloudForest is designed to allow parallel implementations that can tackle large problems while keeping memory usage low by writing and using data structures directly to/from disk. Trees can be grown in separate goroutines. The growforest utility provides an example of this that uses goroutines and channels to grow trees in parallel and write trees to disk as they are finished by the "worker" goroutines. The few summary statistics, like mean impurity decrease per feature (importance), can be calculated using thread-safe data structures like RunningMean. Trees can also be grown on separate machines. The .sf stochastic forest format allows several small forests to be combined by concatenation, and the ForestReader and ForestWriter structs allow these forests to be accessed tree by tree (or even node by node) from disk. For data sets that are too big to fit in memory on a single machine, Tree.Grow and FeatureMatrix.BestSplitter can be reimplemented to load candidate features from disk, a distributed database, etc. By default CloudForest uses a fast heuristic for missing values. When proposing a split on a feature with missing data, the missing cases are removed and the impurity value is corrected to use three-way impurity, which reduces the bias towards features with lots of missing data. Missing values in the target variable are left out of impurity calculations. This provides generally good results at a fraction of the computational cost of imputing data. Optionally, feature.ImputeMissing or featurematrix.ImputeMissing can be called before forest growth to impute missing values to the feature mean/mode, which Breiman [2] suggests as a fast method for imputing values. The forest could also be analyzed for proximity (using leafcount or tree.GetLeaves) to do the more accurate proximity-weighted imputation Breiman describes. Experimental support is provided for three-way splitting, which splits missing cases onto a third branch [2]. This has so far yielded mixed results in testing. At some point in the future, support may be added for local imputing of missing values during tree growth as described in [3]. [1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#missing1 [2] https://code.google.com/p/rf-ace/ [3] http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aoas/1223908043&page=record In CloudForest, data is stored using the FeatureMatrix struct, which contains Features.
The Feature struct implements storage and methods for both categorical and numerical data, calculations of impurity etc., and the search for the best split. The Target interface abstracts the methods of Feature that are needed for a feature to be predictable. This allows for the implementation of alternative types of regression and classification. Trees are built from Nodes and Splitters and stored within a Forest. Tree has a Grow method that implements Breiman and Cutler's method (see the discussion above) for growing a tree. A GrowForest method is also provided that implements the rest of the method, including sampling cases, but it may be faster to grow the forest to disk as in the growforest utility. Prediction and voting are done using Tree.Vote and the CatBallotBox and NumBallotBox structs, which implement the VoteTallyer interface.
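For reference, the two standard impurity measures mentioned above (Gini impurity for categorical targets, mean squared error for numerical targets) can be written in plain Go as follows; this is an independent illustration, not CloudForest's internal code:

    // Independent illustration of the impurity measures named above;
    // not CloudForest's internal code.
    package main

    import "fmt"

    // giniImpurity returns 1 - sum(p_i^2) over the class proportions.
    func giniImpurity(labels []int) float64 {
        counts := make(map[int]int)
        for _, l := range labels {
            counts[l]++
        }
        n := float64(len(labels))
        impurity := 1.0
        for _, c := range counts {
            p := float64(c) / n
            impurity -= p * p
        }
        return impurity
    }

    // meanSquaredError returns the mean squared deviation from the mean.
    func meanSquaredError(ys []float64) float64 {
        var mean float64
        for _, y := range ys {
            mean += y
        }
        mean /= float64(len(ys))
        var mse float64
        for _, y := range ys {
            d := y - mean
            mse += d * d
        }
        return mse / float64(len(ys))
    }

    func main() {
        fmt.Println(giniImpurity([]int{0, 0, 1, 1}))      // 0.5
        fmt.Println(meanSquaredError([]float64{1, 2, 3})) // ~0.667
    }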
Package seekret provides a framework to create tools that inspect information looking for sensitive data like passwords, tokens, private keys, certificates, etc. The current trend of automating all the things and the DevOps culture are very beneficial for efficiency, but they also come with several problems, one of them being secret provisioning. Bootstrapping secrets into systems and applications may be complicated, and sometimes the straightforward way is to store them in an insecure place, like a GitHub repository, embedded into an artifact or system image, etc. That is how an AWS secret_key ends up in a GitHub repository. Seekret is an extensible framework that helps in creating tools for detecting secrets in different sources. The secrets to detect are defined by a set of rules that can help detect passwords, tokens, private keys, certificates, etc. Seekret is extensible and can cover various use cases. Below there are some tools that use seekret: The Seekret API is very simple and easy to use. This section describes the basic operations you can do with it; a combined sketch is shown after this paragraph. The first thing to be done is to create a new Seekret context. Then the rules must be loaded; they can be loaded from a path definition, a directory or a single file. Optionally, exceptions (or false positives) can also be loaded from a file. After that, the objects to be inspected for secrets must be loaded. SourceType is an interface; we offer SourceTypes for directories and Git repositories, but you can extend it by creating your own. Currently, the following different sources are supported: Having all the rules, exceptions and objects loaded into the context, it's possible to start the inspection. Nworkers is an integer that specifies the number of goroutines used in the inspection; the recommended value is runtime.NumCPU(). Finally, it is possible to obtain the list of secrets found and do something with them.
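A hedged sketch of that sequence is shown below. The import paths, method names and signatures here are assumptions made for illustration and may differ from the actual package; treat this as pseudocode rather than the package's verified API.

    // Approximate sketch of the workflow described above; names and
    // signatures are assumptions, not the package's verified API.
    package main

    import (
        "fmt"
        "log"
        "runtime"

        "github.com/apuigsech/seekret"                      // import path assumed
        sourcedir "github.com/apuigsech/seekret-source-dir" // import path assumed
    )

    func main() {
        // Create a new Seekret context.
        s := seekret.NewSeekret()

        // Load rules from a directory (a single file also works).
        if err := s.LoadRulesFromDir("rules/"); err != nil {
            log.Fatal(err)
        }

        // Optionally load exceptions (false positives) from a file.
        if err := s.LoadExceptionsFromFile("exceptions.yml"); err != nil {
            log.Fatal(err)
        }

        // Load the objects to inspect from a directory source.
        if err := s.LoadObjects(sourcedir.SourceTypeDir, ".", nil); err != nil {
            log.Fatal(err)
        }

        // Inspect with one goroutine per CPU, then list what was found.
        s.Inspect(runtime.NumCPU())
        for _, secret := range s.ListSecrets() {
            fmt.Println(secret)
        }
    }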
Package enmime implements a MIME encoding and decoding library. It's built on top of Go's included mime/multipart support where possible, but is geared towards parsing MIME encoded emails. The enmime API has two conceptual layers. The lower layer is a tree of Part structs, representing each component of a decoded MIME message. The upper layer, called an Envelope, provides an intuitive way to interact with a MIME message. Calling ReadParts causes enmime to parse the body of a MIME message into a tree of Part objects, each of which is aware of its content type, filename and headers. The content of a Part is available as a slice of bytes via the Content field. If the part was encoded in quoted-printable or base64, it is decoded prior to being placed in Content. If the Part contains text in a character set other than utf-8, enmime will attempt to convert it to utf-8. To locate a particular Part, pass a custom PartMatcher function into the BreadthMatchFirst() or DepthMatchFirst() methods to search the Part tree. BreadthMatchAll() and DepthMatchAll() will collect all Parts matching your criteria. ReadEnvelope returns an Envelope struct. Behind the scenes a Part tree is constructed, and then sorted into the correct fields of the Envelope. The Envelope contains both the plain text and HTML portions of the email. If there was no plain text Part available, the HTML Part will be down-converted using the html2text library. The root of the Part tree, as well as slices of the inline and attachment Parts are also available. Every MIME Part has its own headers, accessible via the Part.Header field. The raw headers for an Envelope are available in Root.Header. Envelope also provides helper methods to fetch headers: GetHeader(key) will return the RFC 2047 decoded value of the specified header. AddressList(key) will convert the specified address header into a slice of net/mail.Address values. enmime attempts to be tolerant of poorly encoded MIME messages. In situations where parsing is not possible, the ReadEnvelope and ReadParts functions will return a hard error. If enmime is able to continue parsing the message, it will add an entry to the Errors slice on the relevant Part. After parsing is complete, all Part errors will be appended to the Envelope Errors slice. The Error* constants can be used to identify a specific class of error. Please note that enmime parses messages into memory, so it is not likely to perform well with multi-gigabyte attachments. enmime is open source software released under the MIT License. The latest version can be found at https://github.com/pacellig/enmime
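For instance, a brief hedged sketch of reading an Envelope from a file (the message.eml file name is an example; consult the package documentation for the authoritative API):

    // Sketch: parse a MIME email and print its subject, plain-text body
    // and attachment names.
    package main

    import (
        "fmt"
        "log"
        "os"

        "github.com/pacellig/enmime"
    )

    func main() {
        f, err := os.Open("message.eml")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        env, err := enmime.ReadEnvelope(f)
        if err != nil {
            log.Fatal(err)
        }

        fmt.Println("Subject:", env.GetHeader("Subject"))
        fmt.Println(env.Text)
        for _, a := range env.Attachments {
            fmt.Println("attachment:", a.FileName, a.ContentType)
        }
    }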
Package govidious implements a wrapper over the Invidious REST API v1, which is described in these documents: For each of the endpoints listed in those links, this module provides an API function which takes as many input arguments as needed depending on the endpoint, and returns a structure that matches the JSON response associated with that endpoint. For example, the "/api/v1/stat" endpoint is represented by the "Stat()" API function, which takes no arguments, while the "/api/v1/search" endpoint is represented by the "Search()" API function, which takes many arguments (search query, date, duration, ...). In order to use this package, first create a new "InvidiousV1" instance using the "New()" function, then simply call the desired API function(s) on that object. For example, to get the URL of the most watched video featuring "kittens playing golf", you would do this: Notice that both URLs ("youtubeUrl" and "invidiousUrl") can be used in a web browser to actually watch the video. The main difference is that using "youtubeUrl" establishes a direct connection with YouTube servers, while using "invidiousUrl" might or might not establish a direct connection with YouTube servers, depending on how the Invidious server is configured. If you don't want to leave any trace on YouTube servers, make sure you use "invidiousUrl" on an Invidious server configured to proxy video files. Another thing you can do with this URL ("youtubeUrl" or "invidiousUrl") is to use the "mpv" player to start watching the video directly, without needing a web browser. Under the hood, "mpv" uses "youtube-dl" (another program) to parse the web page, find the URL of the video file (which might change depending on the options you call "mpv" with, such as the desired quality) and download it. Well... the good news is that you need neither "mpv" nor "youtube-dl" to do all of this, as the Invidious API can also be used to find out the URL of the video file once you have the "videoId". First, you obtain all the associated data of the desired video: ...then you inspect the returned structure to find the URL of the video with the desired quality level: There are many other things you can do with this package, such as listing all the videos from a given channel, retrieving the subtitles associated with a video, etc. Check the documentation of each function (and the names of the fields of the structures they return) for more details.
Package aw is a utility library/framework for Alfred 3 workflows https://www.alfredapp.com/ It provides APIs for interacting with Alfred (e.g. Script Filter feedback) and the workflow environment (variables, caches, settings). NOTE: AwGo is currently in development. The API *will* change and should not be considered stable until v1.0. Until then, vendoring AwGo (e.g. with dep or vgo) is strongly recommended. As of AwGo 0.14, all applicable features of Alfred 3.6 are supported. The main features are: Typically, you'd call your program's main entry point via Run(). This way, the library will rescue any panic, log the stack trace and show an error message to the user in Alfred. In the Script box (Language = "/bin/bash"): To generate results for Alfred to show in a Script Filter, use the feedback API of Workflow: You can set workflow variables (via feedback) with Workflow.Var, Item.Var and Modifier.Var. See Workflow.SendFeedback for more documentation. Alfred requires a different JSON format if you wish to set workflow variables. Use the ArgVars (named for its equivalent element in Alfred) struct to generate output from Run Script actions. Be sure to set TextErrors to true to prevent Workflow from generating Alfred JSON if it catches a panic: See ArgVars for more information. New() creates a *Workflow using the default values and workflow settings read from environment variables set by Alfred. You can change defaults by passing one or more Options to New(). If you do not want to use Alfred's environment variables, or they aren't set (i.e. you're not running the code in Alfred), you must pass an Env as the first Option to New() using CustomEnv(). A Workflow can be re-configured later using its Configure() method. Check out the _examples/ subdirectory for some simple, but complete, workflows which you can copy to get started. See the documentation for Option for more information on configuring a Workflow. AwGo can filter Script Filter feedback using a Sublime Text-like fuzzy matching algorithm. Workflow.Filter() sorts feedback Items against the provided query, removing those that do not match. Sorting is performed by subpackage fuzzy via the fuzzy.Sortable interface. See _examples/fuzzy for a basic demonstration. See _examples/bookmarks for a demonstration of implementing fuzzy.Sortable on your own structs and customising the fuzzy sort settings. AwGo automatically configures the default log package to write to STDERR (Alfred's debugger) and a log file in the workflow's cache directory. The log file is necessary because background processes aren't connected to Alfred, so their output is only visible in the log. It is rotated when it exceeds 1 MiB in size. One previous log is kept. AwGo detects when Alfred's debugger is open (Workflow.Debug() returns true) and in this case prepends filename:linenumber: to log messages. The Config struct (which is included in Workflow as Workflow.Config) provides an interface to the workflow's settings from the Workflow Environment Variables panel. https://www.alfredapp.com/help/workflows/advanced/variables/#environment Alfred exports these settings as environment variables, and you can read them ad-hoc with the Config.Get*() methods, and save values back to Alfred with Config.Set(). 
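As a hedged illustration of that ad-hoc access, assuming the workflow is running inside Alfred so the environment is populated; the variable names API_KEY, MAX_RESULTS and VERBOSE are invented for the example:

    // Sketch: read a few workflow settings ad hoc via Config.Get*().
    package main

    import (
        "log"

        aw "github.com/deanishe/awgo"
    )

    func main() {
        wf := aw.New()
        apiKey := wf.Config.Get("API_KEY")                // "" if unset
        maxResults := wf.Config.GetInt("MAX_RESULTS", 10) // 10 is the fallback
        verbose := wf.Config.GetBool("VERBOSE", false)
        log.Println(apiKey, maxResults, verbose)
    }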
Using Config.To() and Config.From(), you can "bind" your own structs to the settings in Alfred, and save a struct's fields back to the workflow's settings in Alfred. See the documentation for Config.To and Config.From for more information, and _examples/settings for a demo workflow based on the API. The Alfred struct provides methods for the rest of Alfred's AppleScript API. Amongst other things, you can use it to tell Alfred to open, to search for a query, or to browse/action files & directories. See documentation of the Alfred struct for more information. AwGo provides a basic, but useful, API for loading and saving data. In addition to reading/writing bytes and marshalling/unmarshalling to/from JSON, the API can auto-refresh expired cache data. See Cache and Session for the API documentation. Workflow has three caches tied to different directories: These all share the same API. The difference is in when the data go away. Data saved with Session are deleted after the user closes Alfred or starts using a different workflow. The Cache directory is in a system cache directory, so may be deleted by the system or "System Maintenance" tools. The Data directory lives with Alfred's application data and would not normally be deleted. Subpackage util provides several functions for running script files and snippets of AppleScript/JavaScript code. See util for documentation and examples. AwGo offers a simple API to start/stop background processes via Workflow's RunInBackground(), IsRunning() and Kill() methods. This is useful for running checks for updates and other jobs that hit the network or take a significant amount of time to complete, allowing you to keep your Script Filters extremely responsive. See _examples/update and _examples/workflows for demonstrations of this API.
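Putting several of the pieces above together (New(), the feedback API and the background-job API), a hedged sketch of a Script Filter that stays responsive while a slow job runs in the background might look like this; the job name and the -check-updates flag are illustrative, and the flag is assumed to be handled elsewhere in the same binary:

    // Sketch: a Script Filter that starts a background job and returns
    // feedback immediately.
    package main

    import (
        "os"
        "os/exec"

        aw "github.com/deanishe/awgo"
    )

    var wf *aw.Workflow

    func init() {
        // Reads configuration from the environment variables set by Alfred.
        wf = aw.New()
    }

    func run() {
        // Start the slow work in the background if it isn't already running.
        if !wf.IsRunning("update") {
            cmd := exec.Command(os.Args[0], "-check-updates")
            if err := wf.RunInBackground("update", cmd); err != nil {
                wf.FatalError(err)
            }
        }

        // Return one feedback item to Alfred right away.
        wf.NewItem("Checking for updates…").
            Subtitle("Results will appear on the next run").
            Valid(false)
        wf.SendFeedback()
    }

    func main() {
        // Run() rescues panics and reports them in Alfred's debugger.
        wf.Run(run)
    }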
Package timezones provides a slice of timezones as well as a simple API for accessing and searching them.
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write. esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client, nor does it require you to change your existing code in order to integrate the library. Queries can be built directly with `esquery` and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple (see the sketch below). esquery currently supports version 7 of the ElasticSearch Go client. The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } the library will always generate the explicit, longer form, as illustrated in the sketch after this paragraph. This is also true for queries such as "bool", where fields like "must" can either receive one query object or an array of query objects; `esquery` will generate an array even if there's only one query object.
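A hedged getting-started sketch follows; the index name, field and value are examples, and the generated request body is expected to be roughly the explicit term form { "query": { "term": { "user": { "value": "Kimchy" } } } }:

    // Sketch: build a term query with esquery and execute it through the
    // official v7 client.
    package main

    import (
        "context"
        "log"

        "github.com/aquasecurity/esquery"
        elasticsearch "github.com/elastic/go-elasticsearch/v7"
    )

    func main() {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatal(err)
        }

        res, err := esquery.Search().
            Query(esquery.Term("user", "Kimchy")).
            Size(10).
            Run(es,
                es.Search.WithContext(context.TODO()),
                es.Search.WithIndex("users"),
            )
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()
        log.Println(res.Status())
    }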
Package imdb implements an IMDb web API. All operations require an HTTP client, such as the one created in the sketch below. Searching for a title yields a slice of imdb.Title results with basic information (Name, URL, Year); getting detailed information on a title exposes Actors, Rating, Description and other fields. A hedged sketch follows.
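In this sketch, the import path and the exact signatures of SearchTitle and NewTitle are assumptions based on the description above; check the package documentation before relying on them:

    // Sketch: search for a title, then fetch its details.
    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/StalkR/imdb" // import path assumed
    )

    func main() {
        client := &http.Client{}

        titles, err := imdb.SearchTitle(client, "The Matrix")
        if err != nil {
            log.Fatal(err)
        }
        if len(titles) == 0 {
            log.Fatal("no results")
        }
        fmt.Println(titles[0].Name, titles[0].Year, titles[0].URL)

        title, err := imdb.NewTitle(client, titles[0].ID)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(title.Rating, title.Description)
    }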
Package osdb is an API client for opensubtitles.org. This is a client for the OSDb protocol. Currently the package only allows movie identification, subtitles search, and download.
< GoPlug > GoPlug is a pure Go plugin library project that provides flexibility, loose coupling and a modular approach to building software in/around Go. The goal of the project is to provide a simple, fast and reliable plugin architecture that is independent of the platform. The GoPlug plugin lifecycle is quite simple, as it consists of only three states. 1. Stopped: the plugin is not yet started or has been stopped. 2. Discovered/Installed: the plugin is discovered and ready to be started. 3. Started/Loaded: the plugin is started or loaded for serving requests. Each application creates a Plugin Registry to manage plugins. The Plugin Registry is based on a plugin discovery service that provides an API to search, load and unload plugins to/from the registry. Auto discovery at the plugin registry can be disabled, in which case plugins are discovered at load time. Each plugin makes itself available for the discovery service, and once discovered it is loaded by the application. On a successful load start() is called, and on a successful unload stop() is called. Lazy start can be enabled so that a plugin is loaded by an explicit call to the Plugin Registry rather than at discovery. Plugins run as separate processes that should be started explicitly. A Unix domain socket is used for IPC, where the communication is based on the HTTP request/response model. Step by step: 1. At start, the plugin opens a Unix domain socket and listens for connections. 2. Once initialized, it puts a .pconf file in a specific location for plugin discovery. 3. The Plugin Registry discovers the .pconf file and loads the configuration to get the properties and the Unix socket path. 4. The Plugin Registry initializes the connection using the Unix socket and loads the plugin information. 5. HTTP requests are made over the connection according to the methods executed. A minimal, generic sketch of the underlying transport (HTTP over a Unix domain socket) follows.
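The sketch below illustrates the transport only, using nothing but the standard library; it is independent of GoPlug's actual API and types, and the socket path and /ping route are invented for the example:

    // Sketch: a "plugin" serving HTTP on a Unix socket and a "registry"
    // client dialing that socket.
    package main

    import (
        "context"
        "fmt"
        "io"
        "net"
        "net/http"
        "os"
    )

    func main() {
        const sock = "/tmp/plugin.sock"
        _ = os.Remove(sock) // clean up a stale socket from a previous run

        // Plugin side: listen on the Unix socket and serve HTTP.
        ln, err := net.Listen("unix", sock)
        if err != nil {
            panic(err)
        }
        mux := http.NewServeMux()
        mux.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "pong")
        })
        go http.Serve(ln, mux)

        // Registry side: an HTTP client whose transport dials the socket.
        client := &http.Client{
            Transport: &http.Transport{
                DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                    return net.Dial("unix", sock)
                },
            },
        }
        resp, err := client.Get("http://plugin/ping") // host name is arbitrary
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Print(string(body))
    }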