Package plist implements encoding and decoding of Apple's "property list" format. Property lists come in three sorts: plain text (GNUStep and OpenStep), XML and binary. plist supports all of them. The mapping between property list and Go objects is described in the documentation for the Marshal and Unmarshal functions.
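A rough sketch of the Marshal/Unmarshal round trip mentioned above, assuming the commonly used howett.net/plist API with a format constant such as plist.XMLFormat:

    package main

    import (
        "fmt"

        "howett.net/plist" // assumed import path
    )

    type Track struct {
        Title  string `plist:"title"`
        Length int    `plist:"length"`
    }

    func main() {
        // Encode a Go value as an XML property list.
        data, err := plist.Marshal(Track{Title: "Intro", Length: 120}, plist.XMLFormat)
        if err != nil {
            panic(err)
        }

        // Decode it back; Unmarshal also reports which format it detected.
        var t Track
        if _, err := plist.Unmarshal(data, &t); err != nil {
            panic(err)
        }
        fmt.Println(t.Title, t.Length)
    }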
Package aw is a "plug-and-play" workflow development library/framework for Alfred 3 & 4 (https://www.alfredapp.com/). It requires Go 1.13 or later. It provides everything you need to create a polished and blazing-fast Alfred frontend for your project. As of AwGo 0.26, all applicable features of Alfred 4.1 are supported. The main features are: AwGo is an opinionated framework that expects to be used in a certain way in order to eliminate boilerplate. It *will* panic if not run in a valid, minimally Alfred-like environment. At a minimum the following environment variables should be set to meaningful values: NOTE: AwGo is currently in development. The API *will* change and should not be considered stable until v1.0. Until then, be sure to pin a version using go modules or similar. Be sure to also check out the _examples/ subdirectory, which contains some simple, but complete, workflows that demonstrate the features of AwGo and useful workflow idioms. Typically, you'd call your program's main entry point via Workflow.Run(). This way, the library will rescue any panic, log the stack trace and show an error message to the user in Alfred. In the Script box (Language = "/bin/bash"): To generate results for Alfred to show in a Script Filter, use the feedback API of Workflow: You can set workflow variables (via feedback) with Workflow.Var, Item.Var and Modifier.Var. See Workflow.SendFeedback for more documentation. Alfred requires a different JSON format if you wish to set workflow variables. Use the ArgVars (named for its equivalent element in Alfred) struct to generate output from Run Script actions. Be sure to set TextErrors to true to prevent Workflow from generating Alfred JSON if it catches a panic: See ArgVars for more information. New() creates a *Workflow using the default values and workflow settings read from environment variables set by Alfred. You can change defaults by passing one or more Options to New(). If you do not want to use Alfred's environment variables, or they aren't set (i.e. you're not running the code in Alfred), use NewFromEnv() with a custom Env implementation. A Workflow can be re-configured later using its Configure() method. See the documentation for Option for more information on configuring a Workflow. AwGo can check for and install new versions of your workflow. Subpackage update provides an implementation of the Updater interface and sources to load updates from GitHub or Gitea releases, or from the URL of an Alfred `metadata.json` file. See subpackage update and _examples/update. AwGo can filter Script Filter feedback using a Sublime Text-like fuzzy matching algorithm. Workflow.Filter() sorts feedback Items against the provided query, removing those that do not match. See _examples/fuzzy for a basic demonstration, and _examples/bookmarks for a demonstration of implementing fuzzy.Sortable on your own structs and customising the fuzzy sort settings. Fuzzy matching is done by package https://godoc.org/go.deanishe.net/fuzzy AwGo automatically configures the default log package to write to STDERR (Alfred's debugger) and a log file in the workflow's cache directory. The log file is necessary because background processes aren't connected to Alfred, so their output is only visible in the log. It is rotated when it exceeds 1 MiB in size. One previous log is kept. AwGo detects when Alfred's debugger is open (Workflow.Debug() returns true) and in this case prepends filename:linenumber: to log messages. 
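A minimal sketch of the Workflow.Run() pattern described above; this mirrors the usual AwGo skeleton, though the option-free setup is an assumption:

    package main

    import aw "github.com/deanishe/awgo"

    var wf *aw.Workflow

    func init() {
        // New() reads its settings from Alfred's environment variables.
        wf = aw.New()
    }

    func run() {
        // Generate feedback for a Script Filter.
        wf.NewItem("Hello, Alfred!").Subtitle("via AwGo")
        wf.SendFeedback()
    }

    func main() {
        // Run rescues panics, logs the stack trace and shows the error in Alfred.
        wf.Run(run)
    }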
The Config struct (which is included in Workflow as Workflow.Config) provides an interface to the workflow's settings from the Workflow Environment Variables panel (see https://www.alfredapp.com/help/workflows/advanced/variables/#environment). Alfred exports these settings as environment variables, and you can read them ad-hoc with the Config.Get*() methods, and save values back to Alfred/info.plist with Config.Set(). Using Config.To() and Config.From(), you can "bind" your own structs to the settings in Alfred: See the documentation for Config.To and Config.From for more information, and _examples/settings for a demo workflow based on the API. The Alfred struct provides methods for the rest of Alfred's AppleScript API. Amongst other things, you can use it to tell Alfred to open, to search for a query, to browse/action files & directories, or to run External Triggers. See documentation of the Alfred struct for more information. AwGo provides a basic, but useful, API for loading and saving data. In addition to reading/writing bytes and marshalling/unmarshalling to/from JSON, the API can auto-refresh expired cache data. See Cache and Session for the API documentation. Workflow has three caches tied to different directories: These all share (almost) the same API. The difference is in when the data go away. Data saved with Session are deleted after the user closes Alfred or starts using a different workflow. The Cache directory is in a system cache directory, so may be deleted by the system or "system maintenance" tools. The Data directory lives with Alfred's application data and would not normally be deleted. Subpackage util provides several functions for running script files and snippets of AppleScript/JavaScript code. See util for documentation and examples. AwGo offers a simple API to start/stop background processes via Workflow's RunInBackground(), IsRunning() and Kill() methods. This is useful for running checks for updates and other jobs that hit the network or take a significant amount of time to complete, allowing you to keep your Script Filters extremely responsive. See _examples/update and _examples/workflows for demonstrations of this API.
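A hedged sketch of binding settings as described above, assuming the `env` struct tags that Config.To() supports; the variable names are invented for illustration:

    package main

    import aw "github.com/deanishe/awgo"

    // settings mirrors variables from the workflow configuration sheet.
    // The field names and env tags below are hypothetical examples.
    type settings struct {
        Username string `env:"USERNAME"`
        Verbose  bool   `env:"VERBOSE"`
    }

    func loadSettings(wf *aw.Workflow) (*settings, error) {
        s := &settings{}
        // Config.To populates the struct from Alfred's exported variables.
        if err := wf.Config.To(s); err != nil {
            return nil, err
        }
        return s, nil
    }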
Package nutsdb implements a simple, fast, embeddable and persistent key/value store written in pure Go. It supports fully serializable transactions, as well as data structures such as list, set and sorted set. NutsDB currently works on Mac OS, Linux and Windows. NutsDB has the following main types: DB, BPTree, Entry, DataFile and Tx. NutsDB also supports buckets; a bucket is a collection of unique keys that are associated with values. All operations happen inside a Tx. Tx represents a transaction, which can be read-only or read-write. Read-only transactions can read values for a given key, or iterate over a set of key-value pairs (prefix scanning or range scanning). Read-write transactions can also update and delete keys from the DB. See the examples for more usage details.
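A small sketch of the transaction model described above. The Open options follow recent nutsdb releases and older versions differ, so treat the exact setup as an assumption (newer releases may also require creating the bucket first):

    package main

    import "github.com/nutsdb/nutsdb" // assumed import path

    func main() {
        db, err := nutsdb.Open(
            nutsdb.DefaultOptions,
            nutsdb.WithDir("/tmp/nutsdb"), // assumed option helper
        )
        if err != nil {
            panic(err)
        }
        defer db.Close()

        bucket := "bucket1"

        // Read-write transaction: updates happen inside db.Update.
        if err := db.Update(func(tx *nutsdb.Tx) error {
            return tx.Put(bucket, []byte("key"), []byte("value"), 0)
        }); err != nil {
            panic(err)
        }

        // Read-only transaction.
        if err := db.View(func(tx *nutsdb.Tx) error {
            _, err := tx.Get(bucket, []byte("key"))
            return err
        }); err != nil {
            panic(err)
        }
    }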
Package gofpdf implements a PDF document generator with high level support for text, drawing and images. - UTF-8 support - Choice of measurement unit, page format and margins - Page header and footer management - Automatic page breaks, line breaks, and text justification - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images - Colors, gradients and alpha channel transparency - Outline bookmarks - Internal and external links - TrueType, Type1 and encoding support - Page compression - Lines, Bézier curves, arcs, and ellipses - Rotation, scaling, skewing, translation, and mirroring - Clipping - Document protection - Layers - Templates - Barcodes - Charting facility - Import PDFs as templates gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. 
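A minimal sketch of the simple generation flow described above (the font directory argument is empty because only a core font is used):

    package main

    import "github.com/jung-kurt/gofpdf"

    func main() {
        pdf := gofpdf.New("P", "mm", "A4", "") // portrait, millimetres, A4, default font dir
        pdf.AddPage()
        pdf.SetFont("Arial", "B", 16)
        pdf.Cell(40, 10, "Hello, world")
        // Errors accumulate internally; checking once at output is sufficient.
        if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
            panic(err)
        }
    }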
Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions.
Your change should - be compatible with the MIT License - be properly documented - be formatted with go fmt - include an example in fpdf_test.go if appropriate - conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings - not diminish test coverage Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support of UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage.
Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations. - Remove all legacy code page font support; use UTF-8 exclusively - Improve test coverage as reported by the coverage tool. Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
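A short sketch of that error-management scheme: a failing call (here an unregistered font name, chosen deliberately) sets the internal error state, later calls become no-ops, and Err()/Error() expose the retained error:

    package main

    import (
        "log"

        "github.com/jung-kurt/gofpdf"
    )

    func main() {
        pdf := gofpdf.New("P", "mm", "A4", "")
        pdf.AddPage()
        pdf.SetFont("NoSuchFont", "", 12)  // sets the internal error state
        pdf.Cell(40, 10, "never rendered") // subsequent calls are no-ops

        if pdf.Err() {
            log.Fatal(pdf.Error()) // retrieve and handle the retained error
        }
    }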
Package lingua accurately detects the natural language of written text, be it long or short. Its task is simple: It tells you which language some text is written in. This is very useful as a preprocessing step for linguistic data in natural language processing applications such as text classification and spell checking. Other use cases, for instance, might include routing e-mails to the right geographically located customer service department, based on the e-mails' languages. Language detection is often done as part of large machine learning frameworks or natural language processing applications. In cases where you don't need the full-fledged functionality of those systems or don't want to learn the ropes of those, a small flexible library comes in handy. So far, the only other comprehensive open source library in the Go ecosystem for this task is Whatlanggo (https://github.com/abadojack/whatlanggo). Unfortunately, it has two major drawbacks: 1. Detection only works with quite lengthy text fragments. For very short text snippets such as Twitter messages, it does not provide adequate results. 2. The more languages take part in the decision process, the less accurate the detection results become. Lingua aims at eliminating these problems. It needs hardly any configuration and yields pretty accurate results on both long and short text, even on single words and phrases. It draws on both rule-based and statistical methods but does not use any dictionaries of words. It does not need a connection to any external API or service either. Once the library has been downloaded, it can be used completely offline. Compared to other language detection libraries, Lingua's focus is on quality over quantity, that is, getting detection right for a small set of languages first before adding new ones. Currently, 75 languages are supported. They are listed as variants of type Language. Lingua is able to report accuracy statistics for some bundled test data available for each supported language. The test data for each language is split into three parts: 1. a list of single words with a minimum length of 5 characters 2. a list of word pairs with a minimum length of 10 characters 3. a list of complete grammatical sentences of various lengths Both the language models and the test data have been created from separate documents of the Wortschatz corpora (https://wortschatz.uni-leipzig.de) offered by Leipzig University, Germany. Data crawled from various news websites have been used for training, each corpus comprising one million sentences. For testing, corpora made of arbitrarily chosen websites have been used, each comprising ten thousand sentences. From each test corpus, a random unsorted subset of 1000 single words, 1000 word pairs and 1000 sentences has been extracted, respectively. Given the generated test data, I have compared the detection results of Lingua and Whatlanggo running over the data of Lingua's 75 supported languages. Additionally, I have added Google's CLD3 (https://github.com/google/cld3/) to the comparison with the help of the gocld3 bindings (https://github.com/jmhodges/gocld3). Languages that are not supported by CLD3 or Whatlanggo are simply ignored during the detection process. Lingua clearly outperforms its contenders. Every language detector uses a probabilistic n-gram (https://en.wikipedia.org/wiki/N-gram) model trained on the character distribution in some training corpus.
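Before the n-gram discussion continues, here is a minimal sketch of the basic detection task described above, using the builder API:

    package main

    import (
        "fmt"

        "github.com/pemistahl/lingua-go"
    )

    func main() {
        // Restricting the language set improves accuracy and memory use.
        languages := []lingua.Language{
            lingua.English,
            lingua.French,
            lingua.German,
            lingua.Spanish,
        }

        detector := lingua.NewLanguageDetectorBuilder().
            FromLanguages(languages...).
            Build()

        if language, exists := detector.DetectLanguageOf("languages are awesome"); exists {
            fmt.Println(language) // English
        }
    }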
Most libraries only use n-grams of size 3 (trigrams) which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough. The shorter the input text is, the fewer n-grams are available. The probabilities estimated from such few n-grams are not reliable. This is why Lingua makes use of n-grams of sizes 1 up to 5 which results in much more accurate prediction of the correct language. A second important difference is that Lingua does not use only such a statistical model, but also a rule-based engine. This engine first determines the alphabet of the input text and searches for characters which are unique to one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore. In any case, the rule-based engine filters out languages that do not satisfy the conditions of the input text. Only then, in a second step, the probabilistic n-gram model is taken into consideration. This makes sense because loading fewer language models means less memory consumption and better runtime performance. In general, it is always a good idea to restrict the set of languages to be considered in the classification process using the respective API methods. If you know beforehand that certain languages are never to occur in an input text, do not let those take part in the classification process. The filtering mechanism of the rule-based engine is quite good, however, filtering based on your own knowledge of the input text is always preferable. There might be classification tasks where you know beforehand that your language data is definitely not written in Latin, for instance. The detection accuracy can become better in such cases if you exclude certain languages from the decision process or just explicitly include relevant languages. Knowing about the most likely language is nice but how reliable is the computed likelihood? And how much less likely are the other examined languages in comparison to the most likely one? In the example below, a slice of ConfidenceValue is returned containing those languages which the calling instance of LanguageDetector has been built from. The entries are sorted by their confidence value in descending order. Each value is a probability between 0.0 and 1.0. The probabilities of all languages will sum to 1.0. If the language is unambiguously identified by the rule engine, the value 1.0 will always be returned for this language. The other languages will receive a value of 0.0. By default, Lingua uses lazy-loading to load only those language models on demand which are considered relevant by the rule-based filter engine. For web services, for instance, it is rather beneficial to preload all language models into memory to avoid unexpected latency while waiting for the service response. If you want to enable the eager-loading mode, you can do it as seen below. Multiple instances of LanguageDetector share the same language models in memory which are accessed asynchronously by the instances. By default, Lingua returns the most likely language for a given input text. However, there are certain words that are spelled the same in more than one language. The word `prologue`, for instance, is both a valid English and French word. Lingua would output either English or French which might be wrong in the given context.
For cases like that, it is possible to specify a minimum relative distance that the logarithmized and summed up probabilities for each possible language have to satisfy. It can be stated as seen below. Be aware that the distance between the language probabilities is dependent on the length of the input text. The longer the input text, the larger the distance between the languages. So if you want to classify very short text phrases, do not set the minimum relative distance too high. Otherwise Unknown will be returned most of the time as in the example below. This is the return value for cases where language detection is not reliably possible.
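Hedged sketches of the confidence API and the minimum relative distance described above (the 0.25 threshold is an arbitrary illustration):

    package main

    import (
        "fmt"

        "github.com/pemistahl/lingua-go"
    )

    func main() {
        detector := lingua.NewLanguageDetectorBuilder().
            FromLanguages(lingua.English, lingua.French, lingua.German).
            WithMinimumRelativeDistance(0.25). // arbitrary example threshold
            Build()

        // Confidence values for all languages the detector was built from,
        // sorted in descending order; they sum to 1.0.
        for _, c := range detector.ComputeLanguageConfidenceValues("languages are awesome") {
            fmt.Printf("%s: %.2f\n", c.Language(), c.Value())
        }

        // With a minimum relative distance set, very short inputs often
        // cannot be classified reliably and no language is returned.
        if _, exists := detector.DetectLanguageOf("to"); !exists {
            fmt.Println("language could not be reliably detected")
        }
    }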
Package sortedset provides a data structure that allows fast access to elements in a set by key or by score (order). It is inspired by the Sorted Set from Redis. Every node in the set is associated with the following properties. Each node in the set is associated with a key. While keys are unique, scores may be repeated. Nodes are kept in order (from low score to high score) rather than being ordered afterwards. If scores are the same, nodes are ordered by key in lexicographic order. Each node in the set can also be accessed by rank, which represents its position in the sorted set. Sorted Set is implemented internally on top of a skip list and a hash map. With sorted sets you can add, remove, or update nodes in a very fast way (in time proportional to the logarithm of the number of nodes). You can also get ranges by score or by rank (position) in a very fast way. Accessing the middle of a sorted set is also very fast, so you can use Sorted Sets as a smart list of non-repeating nodes where you can quickly access everything you need: nodes in order, fast existence test, fast access to nodes in the middle! A typical use case of a sorted set is a leaderboard in a massive online game, where every time a new score is submitted you update it using the AddOrUpdate() method. You can easily take the top users using the GetByRankRange() method; you can also, given a user id, return its rank in the listing using the FindRank() method. Using FindRank() and GetByRankRange() together you can show users with a score similar to a given user. All very quickly. Examples
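A leaderboard-style sketch along the lines of the use case above (the import path and node accessors are assumptions; check them against the package):

    package main

    import (
        "fmt"

        "github.com/wangjia184/sortedset" // assumed import path
    )

    func main() {
        set := sortedset.New()

        // AddOrUpdate inserts a node or updates its score in O(log n).
        set.AddOrUpdate("player1", 89, "optional payload")
        set.AddOrUpdate("player2", 100, nil)
        set.AddOrUpdate("player3", 76, nil)

        // Rank 1 is the node with the lowest score.
        fmt.Println(set.FindRank("player2"))

        // Fetch a range of nodes by rank without removing them.
        for _, node := range set.GetByRankRange(1, 3, false) {
            fmt.Println(node.Key(), node.Score()) // assumed accessors
        }
    }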
Package semver provides the ability to work with Semantic Versions (http://semver.org) in Go. Specifically it provides the ability to: To parse a semantic version use the `NewVersion` function. For example, If there is an error the version wasn't parseable. The version object has methods to get the parts of the version, compare it to other versions, convert the version back into a string, and get the original string. For more details please see the documentation at https://godoc.org/github.com/Masterminds/semver. A set of versions can be sorted using the `sort` package from the standard library. For example, Checking a version against version constraints is one of the most featureful parts of the package. There are two elements to the comparisons. First, a comparison string is a list of comma-separated AND comparisons. These are then separated by || into OR comparisons. For example, `">= 1.2, < 3.0.0 || >= 4.2.3"` is looking for a comparison that's greater than or equal to 1.2 and less than 3.0.0 or is greater than or equal to 4.2.3. The basic comparisons are: There are multiple methods to handle ranges and the first is hyphen ranges. These look like: The `x`, `X`, and `*` characters can be used as a wildcard character. This works for all comparison operators. When used on the `=` operator it falls back to the patch level comparison (see tilde below). For example, Tilde Range Comparisons (Patch) The tilde (`~`) comparison operator is for patch level ranges when a minor version is specified and major level changes when the minor number is missing. For example, Caret Range Comparisons (Major) The caret (`^`) comparison operator is for major level changes. This is useful when comparing API versions, as a major change is API breaking. For example,
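A combined sketch of parsing, sorting and constraint checking with this API, reusing the constraint string discussed above:

    package main

    import (
        "fmt"
        "sort"

        "github.com/Masterminds/semver"
    )

    func main() {
        v, err := semver.NewVersion("4.2.3")
        if err != nil {
            panic(err) // the string wasn't parseable as a semantic version
        }

        // Sort a set of versions with the standard sort package.
        raw := []string{"1.2.3", "1.0.0", "1.3.0"}
        vs := make([]*semver.Version, len(raw))
        for i, r := range raw {
            vs[i], _ = semver.NewVersion(r)
        }
        sort.Sort(semver.Collection(vs))

        // Check a version against AND/OR constraints.
        c, err := semver.NewConstraint(">= 1.2, < 3.0.0 || >= 4.2.3")
        if err != nil {
            panic(err)
        }
        fmt.Println(c.Check(v)) // true
    }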
Package fpdf implements a PDF document generator with high level support for text, drawing and images. - UTF-8 support - Choice of measurement unit, page format and margins - Page header and footer management - Automatic page breaks, line breaks, and text justification - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images - Colors, gradients and alpha channel transparency - Outline bookmarks - Internal and external links - TrueType, Type1 and encoding support - Page compression - Lines, Bézier curves, arcs, and ellipses - Rotation, scaling, skewing, translation, and mirroring - Clipping - Document protection - Layers - Templates - Barcodes - Charting facility - Import PDFs as templates go-pdf/fpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. go-pdf/fpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the go-pdf/fpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. 
In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the go-pdf/fpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions. Your change should - be compatible with the MIT License - be properly documented - be formatted with go fmt - include an example in fpdf_test.go if appropriate - conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings - not diminish test coverage Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it.
Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support of UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage. Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations. - Remove all legacy code page font support; use UTF-8 exclusively - Improve test coverage as reported by the coverage tool. Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.SummaryCompare() functions belong to a separate, internal package and are not part of the gofpdf library.
If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
Package kv implements a simple and easy to use persistent key/value (KV) store. 2016-07-11: KV now uses the stable version of lldb (github.com/cznic/lldb). The stored KV pairs are sorted in the key collation order defined by a user-supplied 'compare' function (passed as a field in Options). Keys, as well as the values associated with them, are opaque []bytes. Maximum size of a "native" key or value is 65787 bytes. Larger keys or values have to be composed of the "native" ones in client code. The maximum DB size kv can handle is 2^60 bytes (1 exabyte). See also [4]: "Block handles". Transactions are resource limited. All changes made by a transaction are held in memory until the top level transaction is committed. ACID[1] implementation notes/details follow. A successfully committed transaction appears (by its effects on the database) to be indivisible ("atomic") iff the transaction is performed in isolation. An aborted (via RollBack) transaction appears like it never happened under the same limitation. Atomic updates to the DB, via functions like Set, Inc, etc., are performed in their own automatic transaction. If the partial progress of any such function fails at any point, the automatic transaction is canceled via Rollback before returning from the function. A non-nil error is returned in that case. All reads, including those made from any other concurrent non-isolated transaction(s), performed during a not yet committed transaction, are dirty reads, i.e. the data returned are consistent with the in-progress state of the open transaction, or all of the open transactions. Obviously, conflicts, data races and inconsistent states can happen, but iff non-isolated transactions are performed. Performing a Rollback at a nested transaction level properly returns the transaction state (and data read from the DB) to what it was before the respective BeginTransaction. Transactions of the atomic updating functions (Set, Put, Delete ...) are always isolated. Transactions controlled by BeginTransaction/Commit/RollBack are isolated iff their execution is serialized. Transactions are committed using the two-phase commit protocol (2PC)[2] and a write-ahead log (WAL)[3]. DB recovery after a crash is performed automatically using data from the WAL. Last transaction data, either of an in-progress transaction or a transaction being committed at the moment of the crash, can get lost. No protection from non-readable files, files corrupted by other processes or by memory faults or other HW problems, is provided. Always properly back up your DB data file(s).
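A hedged sketch of basic use, per the Create/Set/Get API (a zero-valued Options uses the default key collation):

    package main

    import (
        "fmt"

        "github.com/cznic/kv"
    )

    func main() {
        db, err := kv.Create("/tmp/example.kvdb", &kv.Options{})
        if err != nil {
            panic(err)
        }
        defer db.Close()

        // Set runs in its own automatic transaction.
        if err := db.Set([]byte("key"), []byte("value")); err != nil {
            panic(err)
        }

        // Get copies the value into buf (or allocates if buf is nil).
        val, err := db.Get(nil, []byte("key"))
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s\n", val)
    }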
Package nodb is a high performance embedded NoSQL database. nodb supports various data structures such as kv, list, hash and zset, like redis. Other features include binlog replication and data with a limited time-to-live. First create a nodb instance before use: cfg is a Config instance which contains configuration for nodb, like DataDir (the root directory where nodb stores its data). After you create a nodb instance, you can select a DB to store your data: a DB must be selected by an index; nodb supports only 16 databases, so the index range is [0-15]. KV is the most basic nodb type, like in any other key-value database. List is simply a list of values, sorted by insertion order. You can push or pop values at the list head (left) or tail (right). Hash is a map between fields and values. ZSet is a sorted collection of values. Every member of a zset is associated with a score, an int64 value used to sort members from smallest to greatest score. Members are unique, but scores may be repeated. nodb supports binlog, so you can sync the binlog to another server for replication. If you want to enable binlog support, set UseBinLog to true in the config.
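A short sketch of the flow described above; the import paths and config field name follow the package layout and should be treated as assumptions:

    package main

    import (
        "github.com/lunny/nodb"        // assumed import path
        "github.com/lunny/nodb/config" // assumed import path
    )

    func main() {
        cfg := new(config.Config)
        cfg.DataDir = "/tmp/nodb" // root directory where nodb stores data

        dbs, err := nodb.Open(cfg)
        if err != nil {
            panic(err)
        }

        // Select one of the 16 databases by index [0-15].
        db, err := dbs.Select(0)
        if err != nil {
            panic(err)
        }

        // Basic KV usage.
        if err := db.Set([]byte("key"), []byte("value")); err != nil {
            panic(err)
        }
        if _, err := db.Get([]byte("key")); err != nil {
            panic(err)
        }
    }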
Package neuron is a cloud-native, distributed ORM implementation. Its design allows using a separate repository for each model, with the possibility of different relationship types between them. neuron consists of the following packages: - auth - defines basic interfaces and structures used for neuron authentication and authorization. - codec - is a set of structures and interfaces used in the marshaling process. - controller - defines a structure that keeps and maps all models to related repositories. - database - defines the database connection and interface, with functions and structures that allow executing queries. - errors - neuron defined errors. - log - is the neuron service logging interface structure for neuron based applications. - mapping - contains the information about the mapped models, their fields and settings. - query - contains structures used to create queries, sorting and pagination on the basis of mapped models. - query/filters - contains query filter structures and implementations. - repository - is a package used to store and register the repositories. - server - defines interfaces used to implement servers.
Package stream provides filters that can be chained together in a manner similar to Unix pipelines. A simple example that prints all go files under the current directory: stream.Run is passed a list of filters that are chained together (stream.Find, stream.Grep, stream.WriteLines are filters). Each filter takes as input a sequence of strings and produces a sequence of strings. The empty sequence is passed as input to the first filter. The output of one filter is fed as input to the next filter. stream.Run is just one way to execute filters. Others are stream.Contents (returns the output of the last filter as a []string), and stream.ForEach (executes a supplied function for every output item). Filter execution can result in errors. These are returned from stream functions normally. For example, the following call will return a non-nil error. Each filter takes as input a sequence of strings (read from a channel) and produces as output a sequence of strings (written to a channel). The stream package provides a bunch of useful filters. Applications can define their own filters easily. For example, here is a filter that repeats every input n times: The output will be: Note that Repeat returns a FilterFunc, a function type that implements the Filter interface. This is a common implementation pattern: many simple filters can be expressed as a single function of type FilterFunc. FilterFunc is an appropriate type to use for most filters like Repeat above. However for some filters, dynamic customization is appropriate. Such filters provide their own implementation of the Filter interface with extra methods. For example, stream.Sort provides extra methods that can be used to control how items are sorted: The interface of this package is inspired by the http://labix.org/pipe package. Users may wish to consider that package in case it fits their needs better.
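A sketch of what the Repeat filter described above might look like, assuming the package's FilterFunc adapter and Arg type with In/Out channels (stream.Items is also assumed here as a convenient source filter):

    package main

    import (
        "os"

        "github.com/ghemawat/stream" // assumed import path
    )

    // Repeat emits every input string n times.
    func Repeat(n int) stream.Filter {
        return stream.FilterFunc(func(arg stream.Arg) error {
            for s := range arg.In {
                for i := 0; i < n; i++ {
                    arg.Out <- s
                }
            }
            return nil
        })
    }

    func main() {
        // Prints "hello" three times, then "world" three times, one per line.
        stream.Run(
            stream.Items("hello", "world"),
            Repeat(3),
            stream.WriteLines(os.Stdout),
        )
    }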
Package cron implements a cron spec parser and job runner. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 6 space-separated fields. Note: Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. Asterisk ( * ) The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example 3-59/15 in the 2nd field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*\/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 6th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' for leaving either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals. This is supported by formatting the cron spec like this: where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates every 1 hour, 30 minutes, 10 seconds. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. By default, all interpretation and scheduling is done in the machine's local time zone (as provided by the Go time package http://www.golang.org/pkg/time). The time zone may be overridden by providing an additional space-separated field at the beginning of the cron spec, of the form "TZ=Asia/Tokyo" Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking, it runs every entry that is active on that second, recalculates the next run times for the entries that just ran, re-sorts the entries by next activation time, and sleeps again until the next job is due.
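A compact sketch of registering jobs against the spec formats described above (a 6-field spec with a leading seconds column, a predefined schedule, and the @every format):

    package main

    import (
        "fmt"

        "github.com/robfig/cron"
    )

    func main() {
        c := cron.New()

        // Seconds Minutes Hours DayOfMonth Month DayOfWeek
        c.AddFunc("0 30 * * * *", func() { fmt.Println("every hour on the half hour") })

        // Predefined schedule and fixed-interval formats.
        c.AddFunc("@hourly", func() { fmt.Println("every hour") })
        c.AddFunc("@every 1h30m10s", func() { fmt.Println("every 1h30m10s") })

        c.Start() // entries run in their own goroutines

        select {} // block forever so the scheduled jobs can run
    }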
Package iplib provides enhanced tools for working with IP networks and addresses. These tools are built upon and extend the generic functionality found in the Go "net" package. The main library comes in two parts: a series of utilities for working with net.IP (sort, increment, decrement, delta, compare, convert to binary or hex string, convert between net.IP and integer) and an enhancement of net.IPNet called iplib.Net that can calculate the first and last IPs of a block as well as enumerating the block into []net.IP, incrementing and decrementing within the boundaries of the block and creating sub- or super-nets of it. For most features iplib exposes a v4 and a v6 variant to handle each network properly, but in all cases there is a generic function that handles any IP and routes between them. One caveat to this is those functions that require or return an integer value representing the address: in these cases the IPv4 variants take an int32 as input while the IPv6 functions require a *big.Int in order to work with the 128 bits of address. For managing the complexity of IPv6 address-spaces, this library adds a new mask, called a Hostmask, as an optional constraint on iplib.Net6 networks; please see the type documentation for more information on using it. For functions where it is possible to exceed the address-space the rule is that underflows return the version-appropriate all-zeroes address while overflows return the all-ones. There are also two submodules under iplib: the iplib/iid module contains functions for generating RFC 7217-compliant IPv6 Interface ID addresses, and iplib/iana imports the IANA IP Special Registries and exposes functions for comparing IP addresses against those registries to determine if the IP is part of a special reservation (for example RFC 1918 private networks or the RFC 3849 documentation network).
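A hedged sketch of the increment/decrement utilities and the Net type described above (function and method names follow the iplib docs; verify against the version you use):

    package main

    import (
        "fmt"
        "net"

        "github.com/c-robinson/iplib" // assumed import path
    )

    func main() {
        // Increment and decrement addresses.
        ip := net.ParseIP("192.168.1.10")
        fmt.Println(iplib.NextIP(ip))     // 192.168.1.11
        fmt.Println(iplib.PreviousIP(ip)) // 192.168.1.9

        // Work with a block: first and last addresses of 192.168.1.0/24.
        n := iplib.NewNet4(net.ParseIP("192.168.1.0"), 24)
        fmt.Println(n.FirstAddress(), n.LastAddress())
    }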
Package edn implements encoding and decoding of EDN values as defined in https://github.com/edn-format/edn. For a full introduction on how to use go-edn, see https://github.com/go-edn/edn/blob/v1/docs/introduction.md. Fully self-contained examples of go-edn can be found at https://github.com/go-edn/edn/tree/v1/examples. Note that the small examples in this package do not check errors as pervasively as you should when you use this package. This is done because I'd like the examples to be easily readable and understandable. The bigger examples provide proper error handling. EDN, in contrast to JSON, supports arbitrary values as keys. This example shows how one can implement enums and sets, and how to support multiple different forms for a specific value type. The set implemented here supports the notation `:all` for all values. This example shows how to read and write basic EDN tags, and how this can be utilised: In contrast to encoding/json, you can read in data where you only know that the input satisfies some sort of interface, provided the value is tagged. This example shows how one can do streaming with the decoder, and how to properly know when the stream has no elements left.
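go-edn deliberately mirrors encoding/json, so a basic round trip looks roughly like this (the import path is the one the project documents; the struct tag name `edn` is per the package docs):

    package main

    import (
        "fmt"

        "olympos.io/encoding/edn" // assumed canonical import path
    )

    type Config struct {
        Name    string `edn:"name"`
        Timeout int    `edn:"timeout"`
    }

    func main() {
        // EDN keywords like :name map onto the edn struct tags.
        var c Config
        if err := edn.Unmarshal([]byte(`{:name "server" :timeout 30}`), &c); err != nil {
            panic(err)
        }

        out, err := edn.Marshal(c)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }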
Package rpm implements the rpm package file format. For more information about the rpm file format, see: http://ftp.rpm.org/max-rpm/s1-rpm-file-format-rpm-file-format.html Packages are composed of two headers: the Signature header and the "Header" header. Each contains key-value pairs called tags. Tags map an integer key to a value whose data type will be one of the TagType types. Tag values can be decoded with the appropriate Tag method for the data type. Many known tags are available as Package methods. For example, RPMTAG_NAME and RPMTAG_BUILDTIME are available as Package.Name and Package.BuildTime respectively. Tags can be retrieved and decoded from the Signature or Header headers directly using Header.GetTag and their tag identifier. Header.GetTag and all Tag methods will return a zero value if the header or the tag do not exist, or if the tag has a different data type. You may enumerate all tags in a header with Header.Tags: In the rpm ecosystem, package versions are compared using EVR; epoch, version, release. Versions may be compared using the Compare function. Packages may be sorted using the PackageSlice type which implements sort.Interface. Packages are sorted lexically by name ascending and then by version descending. Version is evaluated first by epoch, then by version string, then by release. The Sort function is provided for your convenience. Packages may be validated using MD5Check or GPGCheck. See the example for each function. The payload of an rpm package is typically archived in cpio format and compressed with xz. To decompress and unarchive an rpm payload, the reader that read the rpm package headers will be positioned at the beginning of the payload and can be reused with the appropriate Go packages for the rpm payload format. You can check the archive format with Package.PayloadFormat and the compression algorithm with Package.PayloadCompression. For the cpio archive format, the following package is recommended: https://github.com/cavaliergopher/cpio For xz compression, the following package is recommended: https://github.com/ulikunitz/xz See README.md for a working example of extracting files from a cpio/xz rpm package using these packages. See cmd/rpmdump and cmd/rpminfo for example programs that emulate tools from the rpm ecosystem.
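A sketch of reading the headers and then handing the payload off to the recommended cpio and xz packages, as described above (this mirrors the README flow; exact method names should be verified):

    package main

    import (
        "fmt"
        "os"

        "github.com/cavaliergopher/cpio"
        "github.com/cavaliergopher/rpm"
        "github.com/ulikunitz/xz"
    )

    func main() {
        f, err := os.Open("example.rpm")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Read parses both headers and leaves f positioned at the payload.
        pkg, err := rpm.Read(f)
        if err != nil {
            panic(err)
        }
        fmt.Println(pkg.Name(), pkg.Version(), pkg.Release())

        // Decompress and unarchive the payload, here assuming cpio/xz;
        // check PayloadFormat and PayloadCompression in real code.
        xzr, err := xz.NewReader(f)
        if err != nil {
            panic(err)
        }
        cr := cpio.NewReader(xzr)
        for {
            hdr, err := cr.Next()
            if err != nil {
                break // io.EOF at the end of the archive
            }
            fmt.Println(hdr.Name)
        }
    }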
Package cron implements a cron spec parser and job runner. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 6 space-separated fields. Note: Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. Asterisk ( * ) The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example 3-59/15 in the 2nd field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*\/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 6th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' for leaving either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals, starting at the time it's added or cron is run. This is supported by formatting the cron spec like this: where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates after 1 hour, 30 minutes, 10 seconds, and then every interval after that. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. All interpretation and scheduling is done in the machine's local time zone (as provided by the Go time package, http://www.golang.org/pkg/time). Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking, it runs every entry that is active on that second, recalculates the next run times for the entries that just ran, re-sorts the entries by next activation time, and sleeps again until the next job is due.
Sortutil is a Go library which lets you sort a slice without implementing a sort.Interface, and in different orderings: ascending, descending, or case-insensitive ascending or descending (for slices of strings). Additionally, Sortutil lets you sort a slice of a custom struct by a given struct field or index--for example, you can sort a []MyStruct by the structs' "Name" fields, or a [][]int by the second index of each nested slice, similar to using sorted(key=operator.itemgetter/attrgetter) in Python.
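For comparison, the same field-based sort can be written with the standard library's sort.Slice; the snippet below is a stdlib illustration of the idea (not Sortutil's own API), which Sortutil replaces with naming the field or index directly:

	package main

	import (
		"fmt"
		"sort"
	)

	type MyStruct struct {
		Name string
	}

	func main() {
		ms := []MyStruct{{"carol"}, {"alice"}, {"bob"}}
		// Equivalent of sorting []MyStruct ascending by the "Name" field.
		sort.Slice(ms, func(i, j int) bool { return ms[i].Name < ms[j].Name })
		fmt.Println(ms) // [{alice} {bob} {carol}]
	}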
Package natsort implements natural string sorting.
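Natural sorting compares runs of digits by numeric value rather than byte order, so "file2" sorts before "file10". A minimal hand-rolled sketch of the comparison (an illustration of the idea, not this package's API):

	package main

	import (
		"fmt"
		"sort"
	)

	func isDigit(c byte) bool { return '0' <= c && c <= '9' }

	// readInt consumes a leading digit run and returns its value and the rest.
	// Sketch only: a very long digit run would overflow uint64.
	func readInt(s string) (uint64, string) {
		var n uint64
		i := 0
		for i < len(s) && isDigit(s[i]) {
			n = n*10 + uint64(s[i]-'0')
			i++
		}
		return n, s[i:]
	}

	// natLess compares digit runs numerically and everything else byte-wise.
	func natLess(a, b string) bool {
		for len(a) > 0 && len(b) > 0 {
			if isDigit(a[0]) && isDigit(b[0]) {
				na, ra := readInt(a)
				nb, rb := readInt(b)
				if na != nb {
					return na < nb
				}
				a, b = ra, rb
				continue
			}
			if a[0] != b[0] {
				return a[0] < b[0]
			}
			a, b = a[1:], b[1:]
		}
		return len(a) < len(b)
	}

	func main() {
		files := []string{"file10", "file2", "file1"}
		sort.Slice(files, func(i, j int) bool { return natLess(files[i], files[j]) })
		fmt.Println(files) // [file1 file2 file10]
	}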
Package hamt provides a reference implementation of the IPLD HAMT used in the Filecoin blockchain. It includes some optional flexibility such that it may be used for other purposes outside of Filecoin. HAMT is a "hash array mapped trie" https://en.wikipedia.org/wiki/Hash_array_mapped_trie. This implementation extends the standard form by including buckets for the key/value pairs at storage leaves and CHAMP mutation semantics https://michael.steindorfer.name/publications/oopsla15.pdf. The CHAMP invariant and mutation rules provide the ability to maintain canonical forms given any set of keys and their values, regardless of insertion order and intermediate data insertion and deletion. Therefore, for any given set of keys and their values, a HAMT using the same parameters and CHAMP semantics should always produce the same root content identifier (CID). The HAMT algorithm hashes incoming keys and uses incrementing subsections of that hash digest at each level of its tree structure to determine the placement of either the entry or a link to a child node of the tree. A `bitWidth` determines the number of bits of the hash to use for index calculation at each level of the tree, such that the root node takes the first `bitWidth` bits of the hash to calculate an index, and as we move lower in the tree, we move along the hash by `depth x bitWidth` bits. In this way, a sufficiently randomizing hash function will generate a hash that provides a new index at each level of the data structure. An index comprising `bitWidth` bits will generate index values of `[ 0, 2^bitWidth )`. So a `bitWidth` of 8 will generate indexes of 0 to 255 inclusive. Each node in the tree can therefore hold up to `2^bitWidth` elements of data, which we store in an array. In this HAMT and the IPLD HashMap we store entries in buckets. On a `Set(key, value)` mutation, if the index generated at the root node for the hash of the key denotes an array index that does not yet contain an entry, we create a new bucket and insert the key/value pair entry. In this way, a single node can theoretically hold up to `2^bitWidth x bucketSize` entries, where `bucketSize` is the maximum number of elements a bucket is allowed to contain ("collisions"). In practice, indexes do not distribute with perfect randomness, so this maximum is theoretical. Entries stored in the node's buckets are stored in key-sorted order. This HAMT implementation: • Fixes the `bucketSize` to 3. • Defaults the `bitWidth` to 8; within Filecoin it uses 5. • Defaults the hash algorithm to the 64-bit variant of Murmur3-x64. The algorithm used here is identical to that of the IPLD HashMap algorithm specified at https://github.com/ipld/specs/blob/master/data-structures/hashmap.md. The specific parameters used by Filecoin and the DAG-CBOR block layout differ from the specification and are defined at https://github.com/ipld/specs/blob/master/data-structures/hashmap.md#Appendix-Filecoin-hamt-variant.
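The per-level index calculation described above can be sketched as follows; this is an illustration of the scheme, not this package's internal code:

	package main

	import "fmt"

	// index extracts the bitWidth-bit slice of the hash digest used at the given
	// depth, reading the digest as a big-endian bit string: the root consumes
	// bits [0, bitWidth), depth 1 consumes [bitWidth, 2*bitWidth), and so on.
	func index(digest []byte, depth, bitWidth int) int {
		idx := 0
		for i := 0; i < bitWidth; i++ {
			bit := depth*bitWidth + i
			b := digest[bit/8] >> (7 - uint(bit%8)) & 1
			idx = idx<<1 | int(b)
		}
		return idx
	}

	func main() {
		digest := []byte{0xAB, 0xCD, 0xEF} // stand-in for a real hash digest
		// With bitWidth=8 the indexes are simply successive bytes.
		fmt.Println(index(digest, 0, 8), index(digest, 1, 8)) // 171 205
		// With bitWidth=5 (the Filecoin setting) the same digest yields
		// different per-level indexes in the range [0, 32).
		fmt.Println(index(digest, 0, 5), index(digest, 1, 5)) // 21 15
	}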
Tool for Go to sort goimports into 3-4 groups: std, general, local (which is optional), and project dependencies. It helps you keep your code cleaner. An illustrative before/after sketch follows. If you need to set package names explicitly (in the import declaration), you can use the additional option `-set-alias`. More:
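A sketch of the grouping; all import paths here are hypothetical examples, chosen only to show one package per group:

	// Before: one mixed import block.
	//
	//	import (
	//		"github.com/myorg/myproject/internal/config"
	//		"fmt"
	//		"github.com/pkg/errors"
	//	)
	//
	// After: grouped into std / general / project dependencies.
	import (
		"fmt"

		"github.com/pkg/errors"

		"github.com/myorg/myproject/internal/config"
	)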
Package radix contains a string sorting algorithm. This is an optimized sorting algorithm equivalent to sort.Strings. For string sorting, a carefully implemented radix sort can be considerably faster than Quicksort, sometimes more than twice as fast. The algorithm uses O(n) extra space and runs in O(n+B) worst-case time, where n is the number of strings to be sorted and B is the number of bytes that must be inspected to sort the strings.
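Usage presumably mirrors sort.Strings; a sketch assuming the package exports a Sort function with that contract (both the function name and the import path are assumptions):

	package main

	import (
		"fmt"

		"github.com/yourbasic/radix" // assumed import path
	)

	func main() {
		a := []string{"banana", "apple", "cherry"}
		radix.Sort(a) // same contract as sort.Strings, but radix-sorted
		fmt.Println(a) // [apple banana cherry]
	}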
Package unique provides primitives for sorting slices and removing repeated elements.
Package sortutil provides utilities supplementing the standard 'sort' package. 2015-06-17: Added utils for math/big.{Int,Rat}.
Package duplo provides tools to efficiently query large sets of images for visual duplicates. The technique is based on the paper "Fast Multiresolution Image Querying" by Charles E. Jacobs, Adam Finkelstein, and David H. Salesin, with a few modifications and additions, such as the addition of a width-to-height ratio, the dHash metric by Dr. Neal Krawetz, as well as some histogram-based metrics. Querying the data structure will return a list of potential matches, sorted by the score described in the main paper. The user can make the search for duplicates stricter, however, by filtering based on the additional metrics.
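A usage sketch, assuming the store-and-query API the project's README suggests (CreateHash, New, Add, Query); the exact names, signatures, and import path are assumptions:

	package main

	import (
		"fmt"
		"image"
		_ "image/jpeg"
		"os"

		"github.com/rivo/duplo" // assumed import path
	)

	func main() {
		f, err := os.Open("photo.jpg")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		img, _, err := image.Decode(f)
		if err != nil {
			panic(err)
		}

		store := duplo.New()
		hash, _ := duplo.CreateHash(img)
		store.Add("photo.jpg", hash)

		// Later: query with another image's hash; matches come back
		// sorted by the paper's score.
		matches := store.Query(hash)
		for _, m := range matches {
			fmt.Println(m.ID, m.Score)
		}
	}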
Package climax is a handy alternative CLI library for Go applications. Its output looks pretty much exactly like that of the default `go` command, and it incorporates some cool features from it. For instance, Climax supports so-called topics (a sort of wiki entry for a CLI). You can also define annotated use cases for a command that will be displayed in that command's help section. Climax applications produce this sort of output:
Package ofxgo seeks to provide a library to make it easier to query and/or parse financial information with OFX from the comfort of Golang, without having to deal with marshalling/unmarshalling the SGML or XML. The library does *not* intend to abstract away all of the details of the OFX specification, which would be difficult to do well. Instead, it exposes the OFX SGML/XML hierarchy as structs which mostly resemble it. For more information on OFX and to read the specification, see http://ofx.net. There are three main top-level objects defined in ofxgo. These are Client, Request, and Response. The Request and Response objects represent OFX requests and responses as Golang structs. Client contains settings which control how requests and responses are marshalled and unmarshalled (the OFX version used, client id and version, whether to indent SGML/XML tags, etc.), and provides helper methods for making requests and optionally parsing the response using those settings. Every Request object contains a SignonRequest element, called Signon. This element contains the username, password (or key), and the ORG and FID fields particular to the financial institution being queried, and an optional ClientUID field (required by some FIs). Likewise, each Response contains a SignonResponse object which contains, among other things, the Status of the request. Any status with a nonzero Code should be inspected for a possible error (using the Severity and Message fields populated by the server, or the CodeMeaning() and CodeConditions() functions which return information about a particular code as specified by the OFX specification). Each top-level Request or Response object may contain zero or more messages, sorted into named slices by message set, just as the OFX specification groups them. Here are the supported types of Request/Response objects (along with the name of the slice of Messages they belong to in parentheses): Requests: Responses: When constructing a Request, simply append the desired message to the message set it belongs to. For Responses, it is the user's responsibility to make type assertions on objects found inside one of these message sets before using them. For example, the following code would request a bank statement for a checking account and print the balance: More usage examples may be found in the example command-line client provided with this library, in the cmd/ofx directory of the source.
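The bank-statement example referenced above is elided here; the sketch below reconstructs its gist against the aclindsa/ofxgo API as this documentation describes it. Treat the exact field and constant names (the Signon fields, StatementRequest, AcctTypeChecking, BalAmt) and the Request method signature as assumptions:

	package main

	import (
		"fmt"

		"github.com/aclindsa/ofxgo"
	)

	func main() {
		var client ofxgo.Client // zero value accepts the default client settings
		var query ofxgo.Request

		// Connection details are specific to your financial institution.
		query.URL = "https://ofx.example.com"
		query.Signon.Org = ofxgo.String("EXAMPLE")
		query.Signon.Fid = ofxgo.String("0001")
		query.Signon.UserID = ofxgo.String("john")
		query.Signon.UserPass = ofxgo.String("hunter2")

		uid, err := ofxgo.RandomUID()
		if err != nil {
			panic(err)
		}

		// Append the statement request to the Bank message set.
		query.Bank = append(query.Bank, &ofxgo.StatementRequest{
			TrnUID: *uid,
			BankAcctFrom: ofxgo.BankAcct{
				BankID:   ofxgo.String("123456789"),
				AcctID:   ofxgo.String("1111111111"),
				AcctType: ofxgo.AcctTypeChecking,
			},
		})

		resp, err := client.Request(&query)
		if err != nil {
			panic(err)
		}

		// Type-assert each response message before using it.
		for _, msg := range resp.Bank {
			if stmt, ok := msg.(*ofxgo.StatementResponse); ok {
				fmt.Println("Balance:", stmt.BalAmt)
			}
		}
	}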
Package betterguid generates 20-character guid (globally unique id) strings with good properties: They're 20-character strings, safe for inclusion in urls (don't require escaping). They're based on a timestamp so that they sort **after** any existing ids. They contain 72 bits of random data after the timestamp so that IDs won't collide with other clients' IDs. They sort **lexicographically** (so the timestamp is converted to characters that will sort properly). They're monotonically increasing: even if you generate more than one in the same timestamp, the latter ones will sort after the former ones. We do this by using the previous random bits but "incrementing" them by 1 (only in the case of a timestamp collision). Read https://www.firebase.com/blog/2015-02-11-firebase-unique-identifiers.html for more info. Based on https://gist.github.com/mikelehen/3596a30bd69384624c11
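Usage is a single call; this sketch assumes the package exports a New() constructor returning the id string (the import path is an assumption):

	package main

	import (
		"fmt"

		"github.com/kjk/betterguid" // assumed import path
	)

	func main() {
		id := betterguid.New()
		fmt.Println(id) // e.g. "-KSCxFDgnat6wX9DEnpk" (output is hypothetical)
	}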
Package sorts does parallel radix sorts of data by (u)int64, string, or []byte keys, and parallel quicksort. See the sorts/sortutil package for shortcuts for common slice types and help sorting floats.
Streaming relation (overlap, distance, KNN) testing of (any number of) sorted files of intervals.
This library implements a cron spec parser and runner. See the README for more details. Package cron implements a cron spec parser and job runner. Callers may register Funcs to be invoked on a given schedule. Cron will run them in their own goroutines. A cron expression represents a set of times, using 6 space-separated fields. Note: Month and Day-of-week field values are case insensitive. "SUN", "Sun", and "sun" are equally accepted. Asterisk ( * ) The asterisk indicates that the cron expression will match for all values of the field; e.g., using an asterisk in the 5th field (month) would indicate every month. Slash ( / ) Slashes are used to describe increments of ranges. For example 3-59/15 in the 2nd field (minutes) would indicate the 3rd minute of the hour and every 15 minutes thereafter. The form "*/..." is equivalent to the form "first-last/...", that is, an increment over the largest possible range of the field. The form "N/..." is accepted as meaning "N-MAX/...", that is, starting at N, use the increment until the end of that specific range. It does not wrap around. Comma ( , ) Commas are used to separate items of a list. For example, using "MON,WED,FRI" in the 6th field (day of week) would mean Mondays, Wednesdays and Fridays. Hyphen ( - ) Hyphens are used to define ranges. For example, 9-17 would indicate every hour between 9am and 5pm inclusive. Question mark ( ? ) Question mark may be used instead of '*' for leaving either day-of-month or day-of-week blank. You may use one of several pre-defined schedules in place of a cron expression. You may also schedule a job to execute at fixed intervals. This is supported by formatting the cron spec as "@every <duration>", where "duration" is a string accepted by time.ParseDuration (http://golang.org/pkg/time/#ParseDuration). For example, "@every 1h30m10s" would indicate a schedule that activates every 1 hour, 30 minutes and 10 seconds. Note: The interval does not take the job runtime into account. For example, if a job takes 3 minutes to run, and it is scheduled to run every 5 minutes, it will have only 2 minutes of idle time between each run. All interpretation and scheduling is done in the machine's local time zone, as provided by the Go time package (http://www.golang.org/pkg/time). Be aware that jobs scheduled during daylight-savings leap-ahead transitions will not be run! Since the Cron service runs concurrently with the calling code, some amount of care must be taken to ensure proper synchronization. All cron methods are designed to be correctly synchronized as long as the caller ensures that invocations have a clear happens-before ordering between them. Cron entries are stored in an array, sorted by their next activation time. Cron sleeps until the next job is due to be run. Upon waking, it runs every entry that is due, recalculates the next activation times for the jobs that ran, re-sorts the array, and sleeps again until the soonest upcoming job.
Package neuron is a cloud-native, distributed ORM implementation. Its design allows using a separate repository for each model, with the possibility of different relationship types between them. Neuron-core consists of the following packages: neuron - (Neuron Core) the root package that gives easy access to all subpackages. controller - the neuron core, which registers and stores the models and contains configurations required by other packages. config - contains the configurations for all packages. query - used to create queries, filters, sorts and pagination on the basis of mapped models. mapping - contains the information about the mapped models, their fields and settings. class - contains the error classification system for the neuron packages. log - the logging interface for neuron-based applications. i18n - internationalization support for neuron-based applications. repository - a package used to store and register the repositories; it is also used to get the repository/factory per model. The modular design allows using and compiling only the required repositories.
Package dominantcolor provides a function for finding a color that represents the calculated dominant color in the image. This uses a KMean clustering algorithm to find clusters of pixel colors in RGB space. The algorithm is ported from Chromium source code: RGB KMean Algorithm (N clusters, M iterations): 1. Pick N starting colors by randomly sampling the pixels. If you see a color you already saw, keep sampling. After a certain number of tries, just remove the cluster and continue with N = N-1 clusters (for an image with just one color this should devolve to N=1). These colors are the centers of your N clusters. 2. For each pixel in the image, find the cluster that it is closest to in RGB space. Add that pixel's color to that cluster (we keep a sum and a count of all of the pixels added to the space, so just add it to the sum and increment count). 3. Calculate the new cluster centroids by getting the average color of all of the pixels in each cluster (dividing the sum by the count). 4. See if the new centroids are the same as the old centroids. a) If this is the case for all N clusters, then we have converged and can move on. b) If any centroid moved, repeat step 2 with the new centroids for up to M iterations. 5. Once the clusters have converged or M iterations have been tried, sort the clusters by weight (where weight is the number of pixels that make up the cluster). 6. Going through the sorted list of clusters, pick the first cluster with the largest weight whose centroid falls between |lower_bound| and |upper_bound|. Return that color. If no color fulfills that requirement, return the color with the largest weight regardless of whether or not it falls within the bounds above.
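Typical usage is a single call on a decoded image. This sketch assumes the package exports Find and Hex helpers (names and import path are assumptions):

	package main

	import (
		"fmt"
		"image"
		_ "image/png"
		"os"

		"github.com/cenkalti/dominantcolor" // assumed import path
	)

	func main() {
		f, err := os.Open("image.png")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		img, _, err := image.Decode(f)
		if err != nil {
			panic(err)
		}

		c := dominantcolor.Find(img)      // runs the clustering described above
		fmt.Println(dominantcolor.Hex(c)) // e.g. "#CA5527" (output is hypothetical)
	}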
Package sortorder implements sort orders and comparison functions. Currently, it only implements so-called "natural order", where integers embedded in strings are compared by value.
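A sketch of using the natural-order comparison with the standard sort package, assuming the package exports a NaturalLess(a, b string) bool comparator (the function name and import path are assumptions):

	package main

	import (
		"fmt"
		"sort"

		"github.com/fvbommel/sortorder" // assumed import path
	)

	func main() {
		s := []string{"v10", "v2", "v1"}
		sort.Slice(s, func(i, j int) bool { return sortorder.NaturalLess(s[i], s[j]) })
		fmt.Println(s) // [v1 v2 v10]
	}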
The quickselect package provides primitives for finding the smallest k elements in slices and user-defined collections. The primitives used in the package are modeled on the standard sort library for Go. Quickselect uses Hoare's selection algorithm, which finds the smallest k elements in expected O(n) time and is thus asymptotically optimal (and faster than sorting-based or heap-based implementations).
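A sketch mirroring the sort package's conventions, assuming a QuickSelect(data, k) function and an IntSlice helper type analogous to sort.IntSlice (names and import path are assumptions):

	package main

	import (
		"fmt"

		"github.com/wangjohn/quickselect" // assumed import path
	)

	func main() {
		data := quickselect.IntSlice{9, 1, 8, 2, 7, 3}
		// After the call, the smallest 3 elements occupy data[:3]
		// (in unspecified order), in expected O(n) time.
		if err := quickselect.QuickSelect(data, 3); err != nil {
			panic(err)
		}
		fmt.Println(data[:3])
	}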
Package set implements type-safe, non-allocating algorithms that operate on ordered sets. Most functions take a data parameter of type sort.Interface and a pivot parameter of type int; data represents two sets covering the ranges [0:pivot] and [pivot:Len], each of which is expected to be sorted and free of duplicates. sort.Sort may be used for sorting, and Uniq may be used to filter away duplicates. All mutating functions swap elements as necessary from the two input sets to form a single output set, returning its size: the output set will be in the range [0:size], and will be in sorted order and free of duplicates. Elements which were moved into the range [size:Len] will have undefined order and may contain duplicates. All pivots must be in the range [0:Len]. A panic may occur when invalid pivots are passed into any of the functions. Convenience functions exist for slices of int, float64, and string element types, and also serve as examples for implementing utility functions for other types. Elements will be considered equal if `!Less(i,j) && !Less(j,i)`. An implication of this is that NaN values are equal to each other.
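A sketch of the pivot convention using a set union, assuming a Union(data sort.Interface, pivot int) int function with the data/pivot/size contract described above (the function name and import path are assumptions):

	package main

	import (
		"fmt"
		"sort"

		"github.com/xtgo/set" // assumed import path
	)

	func main() {
		// One slice holding two sorted, duplicate-free sets:
		// {1,3,5} in s[:3] and {2,3,4} in s[3:].
		s := sort.IntSlice{1, 3, 5, 2, 3, 4}
		pivot := 3

		size := set.Union(s, pivot) // elements swap in place to form the output set
		fmt.Println(s[:size])       // [1 2 3 4 5]
	}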
Package merkletree implements a high-performance Merkle Tree in Go. It supports parallel execution for enhanced performance and offers compatibility with OpenZeppelin through sorted sibling pairs.
Package goredis is another Redis client, with full features, written in Go. Protocol specification: http://redis.io/topics/protocol. A Redis reply has five types: status, error, integer, bulk, multi bulk. A Status Reply is in the form of a single line string starting with "+" terminated by "\r\n". Error Replies are very similar to Status Replies; the only difference is that the first byte is "-". An Integer reply is just a CRLF-terminated string representing an integer, prefixed by a ":" byte. Bulk replies are used by the server in order to return a single binary-safe string up to 512 MB in length. A Multi bulk reply is used to return an array of other replies; every element of a Multi Bulk Reply can be of any kind, including a nested Multi Bulk Reply. So five reply types are defined: And then a Reply struct which represents the Redis response data is defined: The Reply struct has many useful methods: Connecting to Redis is done with two functions, Dial and DialURL, for example: DialConfig can also take named options for connection config: Running a Redis command is simple too; let's do GET/SET: Or you can execute a custom command with the Redis.ExecuteCommand method: Redis pipelining is defined as: Transactions, Lua Eval, Publish/Subscribe, Monitor, Scan and Sort are also supported.
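A sketch of connecting and running GET/SET, assuming the xuyu/goredis-style API (Dial with a DialConfig, Set/Get methods); the exact signatures, especially Set's expiry and NX/XX parameters, are assumptions:

	package main

	import (
		"fmt"

		"github.com/xuyu/goredis" // assumed import path
	)

	func main() {
		client, err := goredis.Dial(&goredis.DialConfig{Address: "127.0.0.1:6379"})
		if err != nil {
			panic(err)
		}

		// SET key value (no expiry, no NX/XX flags in this sketch).
		if err := client.Set("greeting", "hello", 0, 0, false, false); err != nil {
			panic(err)
		}

		val, err := client.Get("greeting")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(val)) // hello
	}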