Package errors provides simple error handling primitives. The traditional error handling idiom in Go, applied recursively up the call stack, results in error reports without context or debugging information. The errors package allows programmers to add context to the failure path in their code in a way that does not destroy the original value of the error.

The errors.Annotate function returns a new error that adds context to the original error by recording a stack trace at the point Annotate is called, together with the supplied message. If additional control is required, the errors.AddStack and errors.WithMessage functions decompose errors.Annotate into its component operations of annotating an error with a stack trace and with a message, respectively.

Using errors.Annotate constructs a stack of errors, adding context to the preceding error. Depending on the nature of the error it may be necessary to reverse the operation of errors.Annotate to retrieve the original error for inspection. Any error value which implements the causer interface can be inspected by errors.Cause. errors.Cause will recursively retrieve the topmost error which does not implement causer, which is assumed to be the original cause. The causer interface is not exported by this package, but is considered a part of its stable public API. errors.Unwrap is also available: it retrieves the next error in the chain.

All error values returned from this package implement fmt.Formatter and can be formatted by the fmt package; %s and %v print the error message, while %+v also prints the recorded stack trace. New, Errorf, Annotate, and Annotatef record a stack trace at the point they are invoked. This information can be retrieved with the StackTracer interface, which returns a StackTrace. errors.StackTrace is defined as a slice of Frame values. The Frame type represents a call site in the stack trace. Frame supports the fmt.Formatter interface, which can be used for printing information about the stack trace of this error; see the documentation for Frame.Format for more details. errors.Find can be used to search for an error in the error chain.
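A minimal sketch of the annotate-and-cause pattern described above, assuming an errors package that exposes Annotate and Cause as documented here (the import path is an assumption; adjust it to the errors package you actually use):

    package main

    import (
        "fmt"

        "github.com/pingcap/errors" // assumed import path; any errors package exposing Annotate/Cause works the same way
    )

    func loadFile(path string) error {
        return fmt.Errorf("open %s: no such file", path)
    }

    func readConfig(path string) error {
        if err := loadFile(path); err != nil {
            // Annotate records a stack trace and a message without
            // destroying the original error value.
            return errors.Annotate(err, "reading config failed")
        }
        return nil
    }

    func main() {
        err := readConfig("app.conf")

        // %+v prints the error together with the recorded stack trace.
        fmt.Printf("%+v\n", err)

        // Cause unwraps the annotations and returns the original error.
        fmt.Println(errors.Cause(err))
    }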
Package seekret provides a framework to create tools that inspect information looking for sensitive data like passwords, tokens, private keys, certificates, etc.

The current trend of automating everything and the DevOps culture are very beneficial for efficiency, but they also come with several problems, one of them being secret provisioning. Bootstrapping secrets into systems and applications may be complicated, and sometimes the straightforward way is to store them in insecure storage, like a GitHub repository, embedded into an artifact or system image, etc. That means that an AWS secret_key ends up in a GitHub repository.

Seekret is an extensible framework that helps in creating tools for detecting secrets in different sources. The secrets to detect are defined by a set of rules that can help detect passwords, tokens, private keys, certificates, etc. Seekret is extensible and can cover various use cases, and several existing tools are built on it.

The Seekret API is very simple and easy to use. The basic operations are as follows; an end-to-end sketch is shown below. The first thing to be done is to create a new Seekret context. Then the rules must be loaded; they can be loaded from a path definition, a directory or a single file. Optionally, exceptions (or false positives) can also be loaded from a file. After that, the objects to be inspected for secrets must be loaded. Objects are loaded through a sourceType, an interface you can implement yourself; sourceTypes for directories and Git repositories are provided. Having all the rules, exceptions and objects loaded into the context, it's possible to start the inspection. Nworkers is an integer that specifies the number of goroutines used for the inspection; the recommended value is runtime.NumCPU(). Finally, it is possible to obtain the list of secrets found and do something with them.
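A hedged end-to-end sketch of the steps described above. The function names and signatures below follow the described workflow but are assumptions, not verified API:

    package main

    import (
        "fmt"
        "runtime"

        "github.com/apuigsech/seekret"                      // assumed import path
        sourcedir "github.com/apuigsech/seekret-source-dir" // assumed directory source package
    )

    func main() {
        // Create a new Seekret context.
        s := seekret.NewSeekret()

        // Load rules and, optionally, exceptions (false positives).
        // These call names are assumptions based on the steps above.
        s.LoadRulesFromPath("/usr/local/share/seekret/rules", true)
        s.LoadExceptionsFromFile("exceptions.yml")

        // Load the objects to inspect from a directory source.
        s.LoadObjects(sourcedir.SourceTypeDir, ".", nil)

        // Run the inspection; runtime.NumCPU() workers is the recommended value.
        s.Inspect(runtime.NumCPU())

        // Finally, do something with the secrets that were found.
        for _, secret := range s.ListSecrets() {
            fmt.Printf("%+v\n", secret)
        }
    }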
Package goidoit is an https://www.i-doit.com/ API client implementation written in https://golang.org. Install it using go get. Initialize your API object and then query the API, for example to get your i-doit version or to search for objects. For debugging, debug output can be enabled, and TLS certificate verification can be disabled (if you use self-signed certs for i-doit).
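A hypothetical usage sketch of the steps just described. The import path, constructor and method names here are assumptions, not verified API:

    package main

    import (
        "fmt"
        "log"

        goidoit "github.com/cseeger-epages/i-doit-go-api" // assumed import path
    )

    func main() {
        // Initialize the API object; constructor name and signature are assumptions.
        api, err := goidoit.NewApi("https://i-doit.example.com/src/jsonrpc.php", "your-api-key")
        if err != nil {
            log.Fatal(err)
        }

        // Hypothetical calls: get the i-doit version and search for objects.
        version, _ := api.GetVersion()
        fmt.Println(version)

        results, _ := api.Search("server01")
        fmt.Println(results)
    }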
Package **cli** provides access to a **DB2 database** using the DB2 Call Level Interface (**CLI**) API. This requires **cgo** and the DB2 _cli/odbc_ driver **libdb2.so**. It is not possible to use this driver to create a statically linked Go package because IBM doesn't provide the DB2 _cli/odbc_ driver as a _libdb2.a_ static library. On **Windows**, the DB2 _cli/odbc_ library is not compatible with **gcc**, but **cgo** requires **gcc**. Hence, this driver is not supported on Windows. **cli** is based on *alexbrainman's* odbc package: https://github.com/alexbrainman/odbc. This package registers a driver for the standard Go **database/sql** package and is used through the **database/sql** API.

### Error Handling

The package has no exported API except two functions, **SQLCode()** and **SQLState()**, for inspecting DB2 CLI errors. Since package **cli** is imported for side effects only, declare a small local interface to access SQLCode() and SQLState(); the local interface can also include SQLState() for inspecting the SQLSTATE from DB2 CLI. A sketch of this pattern is shown below. **SQLCODE** is a return code from an IBM DB2 SQL operation. This code can be zero (0), negative, or positive. Search "SQL messages" in the DB2 Information Center to find out more about SQLCODE. **SQLSTATE** is a return code like SQLCODE, but instead of a number it is a five-character error code that is consistent across all IBM database products. SQLSTATE follows the format ccsss, where cc indicates the class and sss indicates the subclass. Search "SQLSTATE Messages" in the DB2 Information Center for more detail.

### Connection String

This driver uses the DB2 CLI functions **SQLConnect** and **SQLDriverConnect** in driver.Open(...). To use **SQLConnect**, start the name or the DSN string with the keyword sqlconnect. This keyword is case insensitive. The connection string needs to follow the documented syntax to be valid, where [...] means optional. If a database_name is not provided, then SAMPLE is used as the database name. Also note that each keyword and value ends with a semicolon. The keyword "sqlconnect" doesn't take a value but ends with a semicolon. Any other connection string must follow the connection string rules that are valid with SQLDriverConnect. Search for **SQLDriverConnect** in the DB2 LUW *Information Center* for more detail.

## Installation

IBM DB2 for Linux, Unix and Windows (DB2 LUW) implements its own ODBC driver. This package uses the DB2 ODBC/CLI driver through cgo. As such, this package requires the DB2 C headers and libraries for compilation. If you don't have DB2 LUW installed on the system, you can install the free community DB2 version, *DB2 Express-C*. You can also use the *IBM Data Server Driver package*; it includes the required headers and libraries but not a DB2 database manager. To install, download this package with go get, change into the package directory, and run the provided installation script. This script and this driver only work on Mac OS and Linux.

## Usage

See `example_test.go`.
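A minimal sketch of the error-inspection pattern described under Error Handling. The import path is a placeholder, and the return types of SQLCode()/SQLState() are assumptions, since only the function names are documented above:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "example.com/db2/cli" // placeholder; replace with the actual import path of package cli
)

// Local interface for inspecting DB2 CLI errors; the return types are assumed.
type dberror interface {
	SQLCode() int
	SQLState() string
}

func main() {
	// SQLConnect-style DSN: each keyword and value ends with a semicolon.
	db, err := sql.Open("cli", "sqlconnect;database=SAMPLE;uid=db2inst1;pwd=secret;")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if err := db.Ping(); err != nil {
		// If the error comes from DB2 CLI, it exposes SQLCode and SQLState.
		if e, ok := err.(dberror); ok {
			fmt.Println("SQLCODE:", e.SQLCode(), "SQLSTATE:", e.SQLState())
		}
		log.Fatal(err)
	}
}
```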
Package enigma provides developers with a Go client for the Enigma.io API. The Enigma API allows users to download datasets, query metadata, or perform server-side operations on tables in Enigma. All calls to the API are made through a RESTful protocol and require an API key. The Enigma API is served over HTTPS. Enigma.io (http://enigma.io) lets you quickly search and analyze billions of public records published by governments, companies and organizations. Please refer to the official documentation of the API at http://app.enigma.io/api for more info.
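A hypothetical sketch of querying dataset metadata with this client. The import path, constructor, method name and dataset name are all assumptions for illustration only:

    package main

    import (
        "fmt"
        "log"

        enigma "example.com/enigma" // placeholder import path for this package
    )

    func main() {
        // All calls require an API key and are made over HTTPS.
        client := enigma.NewClient("YOUR_API_KEY")

        // Query metadata for a public dataset (dataset name is illustrative).
        meta, err := client.Meta("us.gov.whitehouse.visitor-list")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(meta)
    }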
Package twitter implements a client for the Twitter API v2. This package is in development and is not yet ready for production use. The general structure of an API call is to first construct a query, then invoke that query with a context on a client: Package "types" contains the type and constant definitions for the API. Queries to look up tweets by ID or username, to search recent tweets, and to search or sample streams of tweets are defined in package "tweets". Queries to look up users by ID or user name are defined in package "users". Queries to read or update search rules are defined in package "rules". Queries to create, edit, delete, and show the contents of lists are defined in package "lists".
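A sketch of the construct-then-invoke pattern described above. The import paths follow the sub-package layout mentioned in the text ("tweets", "users", ...), but the client construction and the exact query constructor names are assumptions, not verified signatures:

    package main

    import (
        "context"
        "fmt"
        "log"
        "os"

        "github.com/creachadair/twitter"        // assumed import path
        "github.com/creachadair/twitter/tweets" // assumed sub-package
    )

    func main() {
        // Construct a client authorized with a bearer token.
        cli := &twitter.Client{
            Authorize: twitter.BearerTokenAuthorizer(os.Getenv("TWITTER_BEARER_TOKEN")),
        }

        // First construct a query, then invoke it with a context on the client.
        ctx := context.Background()
        rsp, err := tweets.Lookup("1278938620000000000", nil).Invoke(ctx, cli)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(rsp)
    }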
Package conf is a configuration parser that loads configuration into a struct from files and automatically creates cli flags. Just define a struct with the needed configuration. Values are then taken from multiple sources, in a fixed order of precedence. By default, the lower-cased field names of the struct are used to create command line arguments. Tags can be used to specify a different name, a command line help message, and the section in the conf file. The conf.Path variable can be set to the path of a configuration file; by default it is initialized to the value of an environment variable. Files with a recognized extension are parsed following the informal INI standard: the first letter of every key found is upper-cased and then a struct field with the same name is searched for. If no such field name is found, the comparison is made against the key specified as the first element in the tag. conf tries to be modular and easily extensible to support different formats. This is a work in progress; APIs can change.
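A hypothetical sketch of how such a struct might look. The tag key ("conf") and the "name,help,section" layout are assumptions, since the exact tag format is defined by the package itself:

    package main

    import "fmt"

    type Config struct {
        // Lower-cased field name becomes the command line argument -address.
        Address string
        // Tag overrides the name and adds a help message and an INI section
        // (tag layout assumed for illustration).
        Port int `conf:"port,listening port,server"`
    }

    func main() {
        cfg := Config{Address: "127.0.0.1", Port: 8080}
        // conf.Load(&cfg) would fill cfg from flags, environment and files,
        // following the precedence order described above (call name assumed).
        fmt.Printf("%+v\n", cfg)
    }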
Package freegeoip provides an API for searching the geolocation of IP addresses. It uses a database that can be either a local file or a remote resource from a URL. Local databases are monitored by fsnotify and reloaded when the file is either updated or overwritten. Remote databases are automatically downloaded and updated in the background so you can focus on using the API and not managing the database.
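A minimal sketch of the local-file and remote-URL usage described above, assuming the fiorix/freegeoip package; the function names (Open, OpenURL, Lookup) are assumptions based on that package:

    package main

    import (
        "fmt"
        "log"
        "net"

        "github.com/fiorix/freegeoip" // assumed import path
    )

    func main() {
        // Local database file, reloaded automatically via fsnotify when it changes.
        db, err := freegeoip.Open("/var/lib/GeoIP/GeoLite2-City.mmdb")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Alternatively, a remote database that is downloaded and kept up to
        // date in the background (update and retry intervals assumed):
        // db, err := freegeoip.OpenURL(freegeoip.MaxMindDB, 24*time.Hour, time.Hour)

        var result struct {
            Country struct {
                Names map[string]string `maxminddb:"names"`
            } `maxminddb:"country"`
        }
        if err := db.Lookup(net.ParseIP("8.8.8.8"), &result); err != nil {
            log.Fatal(err)
        }
        fmt.Println(result.Country.Names["en"])
    }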
Package bst and its sub-packages specify an extensible Binary Search Tree (BST) API. Instead of trying to think of every method a user might want on a BST, we declare a set of extensible methods with the hopes of enabling a user to create any functionality they need.

Packages and sub-packages:
--------------------------

* bst Declares basic BST methods (Insert, Get, Remove, etc).

* bst/finder Declares extensible method, Find, which allows you to direct navigation down a tree, looking for a specific value.

* bst/visitor Declares extensible method, Visit, which allows you to defer taking an action on a tree node (insert, get, update, delete) until after you've visited that node (i.e. inspect then change with one visit to the node).

* bst/walker Declares extensible method, Walk, which allows you to freely traverse a tree, either up and down (left, right, parent) or in sequential order (prev, next).

* bst/simple Provides a reference implementation of an Extensible BST that implements all of the above-declared methods.

License
-------

This package is released under the MIT License. See included file 'LICENSE' for more details.

Contributors
------------

David Farell <DavidPFarrell@yahoo.com>
Package typesense is a Go client for Typesense, an open-source, typo-tolerant search engine. This package defines a client and some useful methods to communicate with the Typesense API without the drawback of writing actual API requests. A typical program connects to the Typesense API, creates a new collection, indexes some documents and retrieves them using search; a sketch of such a program is shown below. Highly inspired by the official Typesense guide https://typesense.org/docs/0.11.2/guide/ .
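A hypothetical sketch of the flow just described (connect, create a collection, index a document, search). The import path, constructor, method names and types are assumptions, not this package's verified API:

    package main

    import (
        "fmt"
        "log"

        typesense "example.com/typesense" // placeholder import path for this package
    )

    func main() {
        // Connect to the Typesense server with its API key.
        client := typesense.NewClient("localhost", "8108", "top-secret-api-key")

        // Create a collection with a simple schema.
        if err := client.CreateCollection(typesense.CollectionSchema{
            Name:   "companies",
            Fields: []typesense.Field{{Name: "company_name", Type: "string"}},
        }); err != nil {
            log.Fatal(err)
        }

        // Index a document and retrieve it using search.
        if err := client.IndexDocument("companies", map[string]interface{}{
            "company_name": "Stark Industries",
        }); err != nil {
            log.Fatal(err)
        }

        results, err := client.Search("companies", "stark", "company_name")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(results)
    }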
Package lingua accurately detects the natural language of written text, be it long or short. Its task is simple: it tells you which language some text is written in. This is very useful as a preprocessing step for linguistic data in natural language processing applications such as text classification and spell checking. Other use cases, for instance, might include routing e-mails to the right geographically located customer service department, based on the e-mails' languages.

Language detection is often done as part of large machine learning frameworks or natural language processing applications. In cases where you don't need the full-fledged functionality of those systems or don't want to learn the ropes of those, a small flexible library comes in handy. So far, the only other comprehensive open source library in the Go ecosystem for this task is Whatlanggo (https://github.com/abadojack/whatlanggo). Unfortunately, it has two major drawbacks:

1. Detection only works with quite lengthy text fragments. For very short text snippets such as Twitter messages, it does not provide adequate results.

2. The more languages take part in the decision process, the less accurate the detection results are.

Lingua aims at eliminating these problems. It requires almost no configuration and yields pretty accurate results on both long and short text, even on single words and phrases. It draws on both rule-based and statistical methods but does not use any dictionaries of words. It does not need a connection to any external API or service either. Once the library has been downloaded, it can be used completely offline.

Compared to other language detection libraries, Lingua's focus is on quality over quantity, that is, getting detection right for a small set of languages first before adding new ones. Currently, 75 languages are supported. They are listed as variants of type Language.

Lingua is able to report accuracy statistics for some bundled test data available for each supported language. The test data for each language is split into three parts:

1. a list of single words with a minimum length of 5 characters

2. a list of word pairs with a minimum length of 10 characters

3. a list of complete grammatical sentences of various lengths

Both the language models and the test data have been created from separate documents of the Wortschatz corpora (https://wortschatz.uni-leipzig.de) offered by Leipzig University, Germany. Data crawled from various news websites have been used for training, each corpus comprising one million sentences. For testing, corpora made of arbitrarily chosen websites have been used, each comprising ten thousand sentences. From each test corpus, a random unsorted subset of 1000 single words, 1000 word pairs and 1000 sentences has been extracted, respectively.

Given the generated test data, I have compared the detection results of Lingua and Whatlanggo running over the data of Lingua's 75 supported languages. Additionally, I have added Google's CLD3 (https://github.com/google/cld3/) to the comparison with the help of the gocld3 bindings (https://github.com/jmhodges/gocld3). Languages that are not supported by CLD3 or Whatlanggo are simply ignored during the detection process. Lingua clearly outperforms its contenders.

Every language detector uses a probabilistic n-gram (https://en.wikipedia.org/wiki/N-gram) model trained on the character distribution in some training corpus.
Most libraries only use n-grams of size 3 (trigrams), which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough. The shorter the input text is, the fewer n-grams are available. The probabilities estimated from such few n-grams are not reliable. This is why Lingua makes use of n-grams of sizes 1 up to 5, which results in much more accurate prediction of the correct language.

A second important difference is that Lingua does not only use such a statistical model, but also a rule-based engine. This engine first determines the alphabet of the input text and searches for characters which are unique to one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore. In any case, the rule-based engine filters out languages that do not satisfy the conditions of the input text. Only then, in a second step, the probabilistic n-gram model is taken into consideration. This makes sense because loading fewer language models means less memory consumption and better runtime performance.

In general, it is always a good idea to restrict the set of languages to be considered in the classification process using the respective API methods. If you know beforehand that certain languages will never occur in an input text, do not let those take part in the classification process. The filtering mechanism of the rule-based engine is quite good; however, filtering based on your own knowledge of the input text is always preferable. There might be classification tasks where you know beforehand that your language data is definitely not written in Latin, for instance. The detection accuracy can become better in such cases if you exclude certain languages from the decision process or just explicitly include relevant languages.

Knowing about the most likely language is nice, but how reliable is the computed likelihood? And how much less likely are the other examined languages in comparison to the most likely one? In the example below, a slice of ConfidenceValue is returned containing those languages which the calling instance of LanguageDetector has been built from. The entries are sorted by their confidence value in descending order. Each value is a probability between 0.0 and 1.0. The probabilities of all languages will sum to 1.0. If the language is unambiguously identified by the rule engine, the value 1.0 will always be returned for this language. The other languages will receive a value of 0.0.

By default, Lingua uses lazy-loading to load only those language models on demand which are considered relevant by the rule-based filter engine. For web services, for instance, it is rather beneficial to preload all language models into memory to avoid unexpected latency while waiting for the service response. If you want to enable the eager-loading mode, you can do it as seen below. Multiple instances of LanguageDetector share the same language models in memory, which are accessed asynchronously by the instances.

By default, Lingua returns the most likely language for a given input text. However, there are certain words that are spelled the same in more than one language. The word `prologue`, for instance, is both a valid English and French word. Lingua would output either English or French, which might be wrong in the given context.
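A short example of computing confidence values and enabling the eager-loading mode, using lingua-go's builder API as described above:

    package main

    import (
        "fmt"

        "github.com/pemistahl/lingua-go"
    )

    func main() {
        detector := lingua.NewLanguageDetectorBuilder().
            FromLanguages(lingua.English, lingua.French, lingua.German, lingua.Spanish).
            WithPreloadedLanguageModels(). // eager-loading mode
            Build()

        // Confidence values are sorted in descending order and sum to 1.0.
        confidenceValues := detector.ComputeLanguageConfidenceValues("languages are awesome")
        for _, elem := range confidenceValues {
            fmt.Printf("%s: %.2f\n", elem.Language(), elem.Value())
        }
    }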
For cases like that, it is possible to specify a minimum relative distance that the logarithmized and summed up probabilities for each possible language have to satisfy. It can be stated as seen below. Be aware that the distance between the language probabilities is dependent on the length of the input text. The longer the input text, the larger the distance between the languages. So if you want to classify very short text phrases, do not set the minimum relative distance too high. Otherwise Unknown will be returned most of the time as in the example below. This is the return value for cases where language detection is not reliably possible.
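Setting a minimum relative distance looks as follows. With such a high distance and a very short input, no language is returned, as described above:

    package main

    import (
        "fmt"

        "github.com/pemistahl/lingua-go"
    )

    func main() {
        detector := lingua.NewLanguageDetectorBuilder().
            FromLanguages(lingua.English, lingua.French, lingua.German, lingua.Spanish).
            WithMinimumRelativeDistance(0.9). // deliberately high, for demonstration only
            Build()

        language, exists := detector.DetectLanguageOf("languages")
        fmt.Println(language, exists) // Unknown false
    }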
Package esquery provides a non-obtrusive, idiomatic and easy-to-use query and aggregation builder for the official Go client (https://github.com/elastic/go-elasticsearch) for the ElasticSearch database (https://www.elastic.co/products/elasticsearch). esquery alleviates the need to use extremely nested maps (map[string]interface{}) and to serialize queries to JSON manually. It also helps eliminate common mistakes such as misspelling query types, as everything is statically typed. Using `esquery` can make your code much easier to write, read and maintain, and significantly reduce the amount of code you write.

esquery provides a method chaining-style API for building and executing queries and aggregations. It does not wrap the official Go client, nor does it require you to change your existing code in order to integrate the library. Queries can be directly built with `esquery` and executed by passing an `*elasticsearch.Client` instance (with optional search parameters). Results are returned as-is from the official client (e.g. `*esapi.Response` objects). Getting started is extremely simple. esquery currently supports version 7 of the ElasticSearch Go client.

The library cannot currently generate "short queries". For example, whereas ElasticSearch can accept this: { "query": { "term": { "user": "Kimchy" } } } the library will always generate the longer, fully specified form, with the term value nested in its own object. This is also true for queries such as "bool", where fields like "must" can either receive one query object or an array of query objects. `esquery` will generate an array even if there's only one query object.
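A short sketch of building and running a query against the official v7 client, following the method-chaining pattern described above (index name and query values are illustrative):

    package main

    import (
        "context"
        "log"

        "github.com/aquasecurity/esquery"
        "github.com/elastic/go-elasticsearch/v7"
    )

    func main() {
        es, err := elasticsearch.NewDefaultClient()
        if err != nil {
            log.Fatal(err)
        }

        // Build a term query and execute it against the "users" index.
        // The result is a plain *esapi.Response from the official client.
        res, err := esquery.Search().
            Query(esquery.Term("user", "Kimchy")).
            Size(10).
            Run(es,
                es.Search.WithContext(context.Background()),
                es.Search.WithIndex("users"),
            )
        if err != nil {
            log.Fatal(err)
        }
        defer res.Body.Close()

        log.Println(res.String())
    }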
The musicbrainzws2 package allows accessing the MusicBrainz web service. The package supports all MusicBrainz entities. The MusicBrainz web service provides three general types of requests: lookup, browse and search. All of them are supported. All API functions are available by creating a Client using NewClient. Please note that this package is currently still in beta and the API may still change. The Client supports lookup, browse and search queries for all entities. Submission of ISRCs, barcodes, folksonomy tags/genres and ratings is also supported, as is managing the contents of user collections. The deprecated CD stubs are not supported and there are no plans to support them. For full documentation of the web service see https://musicbrainz.org/doc/MusicBrainz_API and the documentation of the search service at https://musicbrainz.org/doc/MusicBrainz_API/Search.
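A hypothetical sketch of a lookup request. Only NewClient is named in the description above, so the import path, the constructor arguments and the lookup method used here are assumptions:

    package main

    import (
        "context"
        "fmt"
        "log"

        "go.uploadedlobster.com/musicbrainzws2" // assumed import path
    )

    func main() {
        // Create a client; the MusicBrainz web service expects a meaningful
        // application name and version for the user agent.
        client := musicbrainzws2.NewClient("MyApp", "1.0")

        // Hypothetical lookup of an artist by MBID.
        artist, err := client.LookupArtist(context.Background(), "5b11f4ce-a62d-471e-81fc-a69a8278c7da")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(artist)
    }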
Package openfec implements a client library for the OpenFEC API. See https://api.open.fec.gov

This package allows you to explore the way candidates and committees fund their campaigns by accessing FEC (Federal Election Commission) data. A few restrictions limit the way you can use FEC data. For example, you can't use contributor lists for commercial purposes or to solicit donations. Get an API key here: https://api.data.gov/signup/

Endpoints are classified in the following groups: Candidate, Committee, Financial, Filings and Schedules.

Candidate endpoints give you access to information about the people running for office. This information is organized by candidate_id. If you're unfamiliar with candidate IDs, using `/candidates/search` will help you locate a particular candidate. Officially, a candidate is an individual seeking nomination for election to a federal office. People become candidates when they (or agents working on their behalf) raise contributions or make expenditures that exceed $5,000. The candidate endpoints primarily use data from FEC registration [Form 1](http://www.fec.gov/pdf/forms/fecfrm1.pdf), for candidate information, and [Form 2](http://www.fec.gov/pdf/forms/fecfrm2.pdf), for committee information.

Committees are entities that spend and raise money in an election. Their characteristics and relationships with candidates can change over time. You might want to use filters or search endpoints to find the committee you're looking for. Then you can use other committee endpoints to explore information about the committee that interests you. Financial information is organized by `committee_id`, so finding the committee you're interested in will lead you to more granular financial information. The committee endpoints include all FEC filers, even if they aren't registered as a committee. Officially, committees include the committees and organizations that file with the FEC. Several different types of organizations file financial reports with the FEC:

* Campaign committees authorized by particular candidates to raise and spend funds in their campaigns

* Non-party committees (e.g., PACs), some of which may be sponsored by corporations, unions, trade or membership groups, etc.

* Political party committees at the national, state, and local levels

* Groups and individuals making only independent expenditures

* Corporations, unions, and other organizations making internal communications

The committee endpoints primarily use data from FEC registration Form 1 and Form 2.

The financial endpoints fetch key information about a committee's Form 3, Form 3X, or Form 3P financial reports. Most committees are required to summarize their financial activity in each filing; those summaries are included in these files. Generally, committees file reports on a quarterly or monthly basis, but some must also submit a report 12 days before primary elections. Therefore, during the primary season, the period covered by this file may be different for different committees. These totals also incorporate any changes made by committees, if any report covering the period is amended. Information is made available on the API as soon as it's processed. Keep in mind, complex paper filings take longer to process.
The financial endpoints use data from FEC [Form 5](http://www.fec.gov/pdf/forms/fecfrm5.pdf), for independent expenditures; the summary and detailed summary pages of FEC [Form 3](http://www.fec.gov/pdf/forms/fecfrm3.pdf), for House and Senate committees; [Form 3X](http://www.fec.gov/pdf/forms/fecfrm3x.pdf), for PACs and parties; and [Form 3P](http://www.fec.gov/pdf/forms/fecfrm3p.pdf), for presidential committees.

The filings endpoints cover all official records and reports filed by or delivered to the FEC. Note: because the filings data includes many records, counts for large result sets are approximate.

Schedules come from particular sections on forms and contain detailed transactional data. Schedule A explains where contributions come from. If you are interested in individual donors, this will be the endpoint you use. For the Schedule A aggregates, "memoed" items are not included to avoid double counting. Schedule B explains how money is spent.
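A hypothetical sketch of locating a candidate via the `/candidates/search` endpoint with this client. The import path, constructor, method and field names are assumptions for illustration only:

    package main

    import (
        "fmt"
        "log"

        openfec "example.com/openfec" // placeholder import path for this package
    )

    func main() {
        // An API key from https://api.data.gov/signup/ is required.
        client := openfec.NewClient("YOUR_API_KEY")

        // Locate a candidate via /candidates/search, then use the returned
        // candidate_id with the other candidate endpoints.
        candidates, err := client.SearchCandidates("Smith")
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range candidates {
            fmt.Println(c.CandidateID, c.Name)
        }
    }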
Package gbsearch provides a way to search the Google Books API.
Package freegeoip provides an API for searching the geolocation of IP addresses. It uses a database that can be either a local file or a remote resource from a URL. Local databases are monitored by fsnotify and reloaded when the file is either updated or overwritten. Remote databases are automatically downloaded and updated in the background so you can focus on using the API and not managing the database. Also, the freegeoip package provides http handlers that any Go http server (net/http) can use. These handlers can process IP geolocation lookup requests and return data in multiple formats like CSV, XML, JSON and JSONP. It also has an API for supporting custom formats.
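A hypothetical sketch of serving geolocation lookups over HTTP. Only the existence of net/http-compatible handlers is described above, so the handler constructor name and the format argument are assumptions:

    package main

    import (
        "log"
        "net/http"

        "github.com/fiorix/freegeoip" // assumed import path
    )

    func main() {
        db, err := freegeoip.Open("/var/lib/GeoIP/GeoLite2-City.mmdb")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // Hypothetical: wrap the database in a JSON lookup handler and serve it
        // with the standard net/http server.
        http.Handle("/json/", freegeoip.NewHandler(db, "json"))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }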