Package aw is a "plug-and-play" workflow development library/framework for Alfred 3 & 4 (https://www.alfredapp.com/). It requires Go 1.13 or later. It provides everything you need to create a polished and blazing-fast Alfred frontend for your project. As of AwGo 0.26, all applicable features of Alfred 4.1 are supported. The main features are: AwGo is an opinionated framework that expects to be used in a certain way in order to eliminate boilerplate. It *will* panic if not run in a valid, minimally Alfred-like environment. At a minimum the following environment variables should be set to meaningful values: NOTE: AwGo is currently in development. The API *will* change and should not be considered stable until v1.0. Until then, be sure to pin a version using go modules or similar. Be sure to also check out the _examples/ subdirectory, which contains some simple, but complete, workflows that demonstrate the features of AwGo and useful workflow idioms. Typically, you'd call your program's main entry point via Workflow.Run(). This way, the library will rescue any panic, log the stack trace and show an error message to the user in Alfred. In the Script box (Language = "/bin/bash"): To generate results for Alfred to show in a Script Filter, use the feedback API of Workflow: You can set workflow variables (via feedback) with Workflow.Var, Item.Var and Modifier.Var. See Workflow.SendFeedback for more documentation. Alfred requires a different JSON format if you wish to set workflow variables. Use the ArgVars (named for its equivalent element in Alfred) struct to generate output from Run Script actions. Be sure to set TextErrors to true to prevent Workflow from generating Alfred JSON if it catches a panic: See ArgVars for more information. New() creates a *Workflow using the default values and workflow settings read from environment variables set by Alfred. You can change defaults by passing one or more Options to New(). If you do not want to use Alfred's environment variables, or they aren't set (i.e. you're not running the code in Alfred), use NewFromEnv() with a custom Env implementation. A Workflow can be re-configured later using its Configure() method. See the documentation for Option for more information on configuring a Workflow. AwGo can check for and install new versions of your workflow. Subpackage update provides an implementation of the Updater interface and sources to load updates from GitHub or Gitea releases, or from the URL of an Alfred `metadata.json` file. See subpackage update and _examples/update. AwGo can filter Script Filter feedback using a Sublime Text-like fuzzy matching algorithm. Workflow.Filter() sorts feedback Items against the provided query, removing those that do not match. See _examples/fuzzy for a basic demonstration, and _examples/bookmarks for a demonstration of implementing fuzzy.Sortable on your own structs and customising the fuzzy sort settings. Fuzzy matching is done by package https://godoc.org/go.deanishe.net/fuzzy. AwGo automatically configures the default log package to write to STDERR (Alfred's debugger) and a log file in the workflow's cache directory. The log file is necessary because background processes aren't connected to Alfred, so their output is only visible in the log. It is rotated when it exceeds 1 MiB in size. One previous log is kept. AwGo detects when Alfred's debugger is open (Workflow.Debug() returns true) and in this case prepends filename:linenumber: to log messages.
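A minimal Script Filter entry point might look like the sketch below; it assumes the deanishe/awgo import path and the Run/NewItem/SendFeedback API described above, and is illustrative rather than a drop-in workflow:

	package main

	import aw "github.com/deanishe/awgo"

	var wf *aw.Workflow

	func init() {
		// New() panics outside an Alfred-like environment, as noted above.
		wf = aw.New()
	}

	func run() {
		// Add a result row for Alfred's Script Filter.
		wf.NewItem("Hello from AwGo").
			Subtitle("An example result").
			Arg("hello").
			Valid(true)

		// Send the JSON feedback document to Alfred via STDOUT.
		wf.SendFeedback()
	}

	func main() {
		// Run() rescues panics, logs them and shows an error message in Alfred.
		wf.Run(run)
	}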
The Config struct (which is included in Workflow as Workflow.Config) provides an interface to the workflow's settings from the Workflow Environment Variables panel (see https://www.alfredapp.com/help/workflows/advanced/variables/#environment). Alfred exports these settings as environment variables; you can read them ad hoc with the Config.Get*() methods and save values back to Alfred/info.plist with Config.Set(). Using Config.To() and Config.From(), you can "bind" your own structs to the settings in Alfred: See the documentation for Config.To and Config.From for more information, and _examples/settings for a demo workflow based on the API. The Alfred struct provides methods for the rest of Alfred's AppleScript API. Amongst other things, you can use it to tell Alfred to open, to search for a query, to browse/action files & directories, or to run External Triggers. See documentation of the Alfred struct for more information. AwGo provides a basic, but useful, API for loading and saving data. In addition to reading/writing bytes and marshalling/unmarshalling to/from JSON, the API can auto-refresh expired cache data. See Cache and Session for the API documentation. Workflow has three caches tied to different directories: These all share (almost) the same API. The difference is in when the data go away. Data saved with Session are deleted after the user closes Alfred or starts using a different workflow. The Cache directory is in a system cache directory, so may be deleted by the system or "system maintenance" tools. The Data directory lives with Alfred's application data and would not normally be deleted. Subpackage util provides several functions for running script files and snippets of AppleScript/JavaScript code. See util for documentation and examples. AwGo offers a simple API to start/stop background processes via Workflow's RunInBackground(), IsRunning() and Kill() methods. This is useful for running checks for updates and other jobs that hit the network or take a significant amount of time to complete, allowing you to keep your Script Filters extremely responsive. See _examples/update and _examples/workflows for demonstrations of this API.
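Continuing the sketch above (same aw import), the binding could look like this; it assumes the documented Config.To behaviour (struct fields mapped to variable names, optionally overridden with an `env` tag), and the field names are invented for the example:

	// settings is a hypothetical options struct. Fields are populated from
	// the workflow's environment variables; the `env` tag overrides the
	// default name mapping (e.g. MaxResults -> MAX_RESULTS).
	type settings struct {
		Server     string `env:"HOSTNAME"`
		Port       int
		MaxResults int
	}

	func loadSettings(wf *aw.Workflow) (*settings, error) {
		s := &settings{}
		if err := wf.Config.To(s); err != nil {
			return nil, err
		}
		return s, nil
	}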
Package gofpdf implements a PDF document generator with high level support for text, drawing and images. - UTF-8 support - Choice of measurement unit, page format and margins - Page header and footer management - Automatic page breaks, line breaks, and text justification - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images - Colors, gradients and alpha channel transparency - Outline bookmarks - Internal and external links - TrueType, Type1 and encoding support - Page compression - Lines, Bézier curves, arcs, and ellipses - Rotation, scaling, skewing, translation, and mirroring - Clipping - Document protection - Layers - Templates - Barcodes - Charting facility - Import PDFs as templates gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. 
Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions.
Your change should - be compatible with the MIT License - be properly documented - be formatted with go fmt - include an example in fpdf_test.go if appropriate - conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings - not diminish test coverage Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage.
Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations. - Remove all legacy code page font support; use UTF-8 exclusively - Improve test coverage as reported by the coverage tool. Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
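Concretely, a minimal document along the lines of that package example (assuming the github.com/jung-kurt/gofpdf import path) looks like this sketch:

	package main

	import "github.com/jung-kurt/gofpdf"

	func main() {
		pdf := gofpdf.New("P", "mm", "A4", "") // portrait, millimetres, A4, default font directory
		pdf.AddPage()
		pdf.SetFont("Arial", "B", 16)
		pdf.Cell(40, 10, "Hello, world")
		if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
			panic(err)
		}
	}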
Enumer is a tool to generate Go code that adds useful methods to Go enums (constants with a specific type). It started as a fork of Rob Pike’s Stringer tool. Please visit http://github.com/dmarkham/enumer for comprehensive documentation.
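A typical declaration that Enumer might be run against looks like the sketch below; the type and constant names are hypothetical, the -type and -json flags follow the tool's documented usage, and the generated file would add String(), PillString() and JSON marshalling methods:

	package pills

	//go:generate enumer -type=Pill -json

	// Pill is a hypothetical enum; Enumer would write pill_enumer.go next to
	// this file with the generated methods.
	type Pill int

	const (
		Placebo Pill = iota
		Aspirin
		Ibuprofen
	)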
Package storage intends to provide a unified storage layer for Golang. - Production ready: high test coverage, enterprise storage software adaptation, semantic versioning, well documented. - High performance: more code generation, less runtime reflection. - Vendor agnostic: more generic abstraction, less internal details. The most common way to use a Storager service is as follows: 1. Init a storager. 2. Use the Storager API to maintain data.
Package types implements concrete types for marshalling to and from the dcrd JSON-RPC commands, return values, and notifications. When communicating via the JSON-RPC protocol, all requests and responses must be marshalled to and from the wire in the appropriate format. This package provides data structures and primitives that are registered with dcrjson to ease this process. An overview specific to this package is provided here; however, it is also instructive to read the documentation for the dcrjson package (https://pkg.go.dev/github.com/decred/dcrd/dcrjson/v4). The types in this package map to the required parts of the protocol as discussed in the dcrjson documentation: To simplify the marshalling of the requests and responses, the dcrjson.MarshalCmd and dcrjson.MarshalResponse functions may be used. They return the raw bytes ready to be sent across the wire. Unmarshalling a received Request object is a two step process: This approach is used since it provides the caller with access to the additional fields in the request that are not part of the command such as the ID. Unmarshalling a received Response object is also a two step process: As above, this approach is used since it provides the caller with access to the fields in the response such as the ID and Error. This package provides two approaches for creating a new command. The first, and preferred, method is to use one of the New<Foo>Cmd functions. This allows static compile-time checking to help ensure the parameters stay in sync with the struct definitions. The second approach is the dcrjson.NewCmd function which takes a method (command) name and variable arguments. Since this package registers all of its types with dcrjson, the function will recognize them and includes full checking to ensure the parameters are accurate according to the provided method, however these checks are, obviously, run-time, which means any mistakes won't be found until the code is actually executed. However, it is quite useful for user-supplied commands that are intentionally dynamic. To facilitate providing consistent help to users of the RPC server, the dcrjson package exposes the GenerateHelp function, which uses reflection on commands and notifications registered by this package, as well as the provided expected result types, to generate the final help text. In addition, the dcrjson.MethodUsageText function may be used to generate consistent one-line usage for registered commands and notifications using reflection.
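Returning to command creation, a rough sketch of the preferred compile-time-checked approach is shown below; it assumes this types package and the github.com/decred/dcrd/dcrjson/v4 import, and the MarshalCmd arguments (RPC version, request id, command) are an assumption based on the dcrjson v4 documentation, so check the version you depend on:

	// createAndMarshal is a sketch only: the MarshalCmd signature shown here
	// is assumed and differs between dcrjson major versions.
	func createAndMarshal() ([]byte, error) {
		// The constructor gives compile-time checking of the command's parameters.
		cmd := types.NewGetBlockCountCmd()
		// Marshal into the raw JSON-RPC request bytes ready to send across the wire.
		return dcrjson.MarshalCmd("1.0", 1, cmd)
	}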
Package lingua accurately detects the natural language of written text, be it long or short. Its task is simple: It tells you which language some text is written in. This is very useful as a preprocessing step for linguistic data in natural language processing applications such as text classification and spell checking. Other use cases, for instance, might include routing e-mails to the right geographically located customer service department, based on the e-mails' languages. Language detection is often done as part of large machine learning frameworks or natural language processing applications. In cases where you don't need the full-fledged functionality of those systems or don't want to learn the ropes of those, a small flexible library comes in handy. So far, the only other comprehensive open source library in the Go ecosystem for this task is Whatlanggo (https://github.com/abadojack/whatlanggo). Unfortunately, it has two major drawbacks: 1. Detection only works with quite lengthy text fragments. For very short text snippets such as Twitter messages, it does not provide adequate results. 2. The more languages take part in the decision process, the less accurate the detection results are. Lingua aims at eliminating these problems. It needs hardly any configuration and yields pretty accurate results on both long and short text, even on single words and phrases. It draws on both rule-based and statistical methods but does not use any dictionaries of words. It does not need a connection to any external API or service either. Once the library has been downloaded, it can be used completely offline. Compared to other language detection libraries, Lingua's focus is on quality over quantity, that is, getting detection right for a small set of languages first before adding new ones. Currently, 75 languages are supported. They are listed as variants of type Language. Lingua is able to report accuracy statistics for some bundled test data available for each supported language. The test data for each language is split into three parts: 1. a list of single words with a minimum length of 5 characters 2. a list of word pairs with a minimum length of 10 characters 3. a list of complete grammatical sentences of various lengths Both the language models and the test data have been created from separate documents of the Wortschatz corpora (https://wortschatz.uni-leipzig.de) offered by Leipzig University, Germany. Data crawled from various news websites have been used for training, each corpus comprising one million sentences. For testing, corpora made of arbitrarily chosen websites have been used, each comprising ten thousand sentences. From each test corpus, a random unsorted subset of 1000 single words, 1000 word pairs and 1000 sentences has been extracted, respectively. Given the generated test data, I have compared the detection results of Lingua and Whatlanggo running over the data of Lingua's supported 75 languages. Additionally, I have added Google's CLD3 (https://github.com/google/cld3/) to the comparison with the help of the gocld3 bindings (https://github.com/jmhodges/gocld3). Languages that are not supported by CLD3 or Whatlanggo are simply ignored during the detection process. Lingua clearly outperforms its contenders. Every language detector uses a probabilistic n-gram (https://en.wikipedia.org/wiki/N-gram) model trained on the character distribution in some training corpus.
Most libraries only use n-grams of size 3 (trigrams) which is satisfactory for detecting the language of longer text fragments consisting of multiple sentences. For short phrases or single words, however, trigrams are not enough. The shorter the input text is, the fewer n-grams are available. The probabilities estimated from such few n-grams are not reliable. This is why Lingua makes use of n-grams of sizes 1 up to 5 which results in much more accurate prediction of the correct language. A second important difference is that Lingua does not only use such a statistical model, but also a rule-based engine. This engine first determines the alphabet of the input text and searches for characters which are unique in one or more languages. If exactly one language can be reliably chosen this way, the statistical model is not necessary anymore. In any case, the rule-based engine filters out languages that do not satisfy the conditions of the input text. Only then, in a second step, the probabilistic n-gram model is taken into consideration. This makes sense because loading fewer language models means less memory consumption and better runtime performance. In general, it is always a good idea to restrict the set of languages to be considered in the classification process using the respective API methods. If you know beforehand that certain languages are never to occur in an input text, do not let those take part in the classification process. The filtering mechanism of the rule-based engine is quite good, however, filtering based on your own knowledge of the input text is always preferable. There might be classification tasks where you know beforehand that your language data is definitely not written in Latin, for instance. The detection accuracy can become better in such cases if you exclude certain languages from the decision process or just explicitly include relevant languages. Knowing about the most likely language is nice, but how reliable is the computed likelihood? And how much less likely are the other examined languages in comparison to the most likely one? In the example below, a slice of ConfidenceValue is returned containing those languages which the calling instance of LanguageDetector has been built from. The entries are sorted by their confidence value in descending order. Each value is a probability between 0.0 and 1.0. The probabilities of all languages will sum to 1.0. If the language is unambiguously identified by the rule engine, the value 1.0 will always be returned for this language. The other languages will receive a value of 0.0. By default, Lingua uses lazy-loading to load only those language models on demand which are considered relevant by the rule-based filter engine. For web services, for instance, it is rather beneficial to preload all language models into memory to avoid unexpected latency while waiting for the service response. If you want to enable the eager-loading mode, you can do it as seen below. Multiple instances of LanguageDetector share the same language models in memory which are accessed asynchronously by the instances. By default, Lingua returns the most likely language for a given input text. However, there are certain words that are spelled the same in more than one language. The word `prologue`, for instance, is both a valid English and French word. Lingua would output either English or French which might be wrong in the given context.
For cases like that, it is possible to specify a minimum relative distance that the logarithmized and summed up probabilities for each possible language have to satisfy. It can be stated as seen below. Be aware that the distance between the language probabilities is dependent on the length of the input text. The longer the input text, the larger the distance between the languages. So if you want to classify very short text phrases, do not set the minimum relative distance too high. Otherwise Unknown will be returned most of the time as in the example below. This is the return value for cases where language detection is not reliably possible.
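A combined sketch of the API discussed above (building a detector from a fixed set of languages, ranking confidence values, eager-loading models, and applying a minimum relative distance), assuming the github.com/pemistahl/lingua-go import path and its builder methods:

	package main

	import (
		"fmt"

		"github.com/pemistahl/lingua-go"
	)

	func main() {
		languages := []lingua.Language{
			lingua.English,
			lingua.French,
			lingua.German,
			lingua.Spanish,
		}

		// Restrict the decision to known-relevant languages; eager-load the models.
		detector := lingua.NewLanguageDetectorBuilder().
			FromLanguages(languages...).
			WithPreloadedLanguageModels().
			Build()

		if language, exists := detector.DetectLanguageOf("languages are awesome"); exists {
			fmt.Println(language) // English
		}

		// Confidence values are sorted in descending order and sum to 1.0.
		for _, cv := range detector.ComputeLanguageConfidenceValues("languages are awesome") {
			fmt.Printf("%s: %.2f\n", cv.Language(), cv.Value())
		}

		// With a high minimum relative distance, short or ambiguous input is
		// more likely to be reported as unreliable (exists == false, Unknown).
		strict := lingua.NewLanguageDetectorBuilder().
			FromLanguages(languages...).
			WithMinimumRelativeDistance(0.9).
			Build()
		language, exists := strict.DetectLanguageOf("prologue")
		fmt.Println(language, exists)
	}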
Package simhash implements Charikar's simhash algorithm to generate a 64-bit fingerprint of a given document. Simhash fingerprints have the property that similar documents will have a similar fingerprint. Therefore, the Hamming distance between two fingerprints will be small if the documents are similar.
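A short sketch, assuming this refers to the mfonda/simhash package and its Simhash, NewWordFeatureSet and Compare functions:

	package main

	import (
		"fmt"

		"github.com/mfonda/simhash"
	)

	func main() {
		a := simhash.Simhash(simhash.NewWordFeatureSet([]byte("this is a test phrase")))
		b := simhash.Simhash(simhash.NewWordFeatureSet([]byte("this is a test phrass")))

		// Similar documents produce fingerprints with a small Hamming distance.
		fmt.Printf("%x %x distance=%d\n", a, b, simhash.Compare(a, b))
	}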
Package datadog-api-client-go. This repository contains a Go API client for the Datadog API (https://docs.datadoghq.com/api/). • Go 1.22+ This repository contains per-major-version API client packages. Right now, Datadog has two API versions, v1 and v2, plus the common datadog package. The client library for Datadog API v1 is located in the api/datadogV1 directory. Import it with The client library for Datadog API v2 is located in the api/datadogV2 directory. Import it with The datadog package for Datadog API is located in the api/datadog directory. Import it with Here's an example creating a user: Save it to example.go, then run go get github.com/DataDog/datadog-api-client-go/v2. Set the DD_CLIENT_API_KEY and DD_CLIENT_APP_KEY environment variables to your Datadog credentials, and then run go run example.go. This client includes access to Datadog API endpoints while they are in an unstable state and may undergo breaking changes. An extra configuration step is required to enable these endpoints: where <OperationName> is the name of the method used to interact with that endpoint. For example: GetLogsIndex or UpdateLogsIndex. When talking to a different server, like the eu instance, change the ContextServerVariables: If you want to disable GZIP compressed responses, set the compress flag on your configuration object: If you want to enable requests logging, set the debug flag on your configuration object: If you want to enable retry when getting status code 429 rate-limited, set EnableRetry to true. The default maximum number of retries is 3; you can change it with MaxRetries. If you want to configure a proxy, set the HTTP_PROXY and HTTPS_PROXY environment variables, or set a custom HTTPClient with the proxy configured on the configuration object: Several listing operations have a pagination method to help consume all the items available. For example, to retrieve all your incidents: Encoder/Decoder: By default, datadog-api-client-go uses the Go standard library encoding/json (https://pkg.go.dev/encoding/json) to encode and decode data. As an alternative, users can opt in to use goccy/go-json (https://github.com/goccy/go-json) by specifying the go build tag goccy_gojson. In comparison, there was a significant decrease in CPU time with goccy/go-json with an increase in memory overhead. For further benchmark information, see the goccy/go-json benchmark (https://github.com/goccy/go-json#benchmarks) section. Developer documentation for API endpoints and models is available on GitHub pages (https://datadoghq.dev/datadog-api-client-go/pkg/github.com/DataDog/datadog-api-client-go/v2/). Released versions are available on pkg.go.dev (https://pkg.go.dev/github.com/DataDog/datadog-api-client-go/v2). As most of the code in this repository is generated, we will only accept PRs for files that are not modified by our code-generation machinery (changes to the generated files would get overwritten). We happily accept contributions to files that are not autogenerated, such as tests and development tooling. support@datadoghq.com
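A minimal sketch of client setup (listing users rather than creating one, to keep the request body out of the example); the SetUnstableOperationEnabled call follows the configuration step described above, and the chosen operation name is only illustrative:

	package main

	import (
		"context"
		"fmt"
		"os"

		"github.com/DataDog/datadog-api-client-go/v2/api/datadog"
		"github.com/DataDog/datadog-api-client-go/v2/api/datadogV2"
	)

	func main() {
		// Credentials are read from the environment as described above.
		ctx := datadog.NewDefaultContext(context.Background())

		configuration := datadog.NewConfiguration()
		// Opt in to an unstable endpoint before calling it ("<version>.<OperationName>").
		configuration.SetUnstableOperationEnabled("v2.ListIncidents", true)

		apiClient := datadog.NewAPIClient(configuration)
		api := datadogV2.NewUsersApi(apiClient)

		resp, r, err := api.ListUsers(ctx)
		if err != nil {
			fmt.Fprintf(os.Stderr, "error calling UsersApi.ListUsers: %v (HTTP response: %v)\n", err, r)
			return
		}
		fmt.Printf("found %d users\n", len(resp.GetData()))
	}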
Package gofight offers simple API HTTP handler testing for Go frameworks. Details about the gofight project are found on its GitHub page: Installation: Set Header: You can add a custom header via the SetHeader func. Set Cookie: You can add a custom cookie via the SetCookie func. Set query string: Use SetQuery to generate query string data. POST FORM Data: Use SetForm to generate form data. POST JSON Data: Use SetJSON to generate JSON data. POST RAW Data: Use SetBody to generate raw data. For more details, see the documentation and example.
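A sketch of a typical test, assuming the github.com/appleboy/gofight/v2 import path and a plain net/http handler; testify is used only for the assertions:

	package main_test

	import (
		"net/http"
		"testing"

		"github.com/appleboy/gofight/v2"
		"github.com/stretchr/testify/assert"
	)

	func basicEngine() http.Handler {
		mux := http.NewServeMux()
		mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
			w.Write([]byte("Hello World"))
		})
		return mux
	}

	func TestBasicHelloWorld(t *testing.T) {
		r := gofight.New()

		r.GET("/").
			// Custom headers are attached with SetHeader.
			SetHeader(gofight.H{"X-Version": "0.0.1"}).
			Run(basicEngine(), func(res gofight.HTTPResponse, req gofight.HTTPRequest) {
				assert.Equal(t, http.StatusOK, res.Code)
				assert.Equal(t, "Hello World", res.Body.String())
			})
	}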
Package gomarkdoc formats documentation for one or more packages as markdown for usage outside of the main https://pkg.go.dev site. It supports custom templates for tweaking representation of documentation at fine-grained levels, exporting both exported and unexported symbols, and custom formatters for different backends. If you want to use this package as a command-line tool, you can install the command by running the following on go 1.16+: For older versions of go, you can install using the following method instead: The command line tool supports configuration for all of the features of the importable package: The gomarkdoc command processes each of the provided packages, generating documentation for the package in markdown format and writing it to console. For example, if you have a package in your current directory and want to send it to a documentation markdown file, you might do something like this: The gomarkdoc tool supports generating documentation for both local packages and remote ones. To specify a local package, start the name of the package with a period (.) or specify an absolute path on the filesystem. All other package signifiers are assumed to be remote packages. You may specify both local and remote packages in the same command invocation as separate arguments. If you have a project with many packages but you want to skip documentation generation for some, you can use the --exclude-dirs option. This will remove any matching directories from the list of directories to process. Excluded directories are specified using the same pathing syntax as the packages to process. Multiple expressions may be comma-separated or specified by using the --exclude-dirs flag multiple times. For example, in this repository we generate documentation for the entire project while excluding our test packages by running: By default, the documentation generated by the gomarkdoc command is sent to standard output, where it can be redirected to a file. This can be useful if you want to perform additional modifications to the documentation or send it somewhere other than a file. However, keep in mind that there are some inconsistencies in how various shells/platforms handle redirected command output (for example, Powershell encodes in UTF-16, not UTF-8). As a result, the --output option described below is recommended for most use cases. If you want to redirect output for each processed package to a file, you can provide the --output/-o option, which accepts a template specifying how to generate the path of the output file. A common usage of this option is when generating README documentation for a package with subpackages (which are supported via the ... signifier as in other parts of the golang toolchain). In addition, this option provides consistent behavior across platforms and shells: You can see all of the data available to the output template in the PackageSpec struct in the github.com/princjef/gomarkdoc/cmd/gomarkdoc package. The documentation information that is output is formatted using a series of text templates for the various components of the overall documentation which get generated. Higher level templates contain lower level templates, but any template may be replaced with an override template using the --template/-t option. The full list of templates that may be overridden are: file: generates documentation for a file containing one or more packages, depending on how the tool is configured. This is the root template for documentation generation. 
package: generates documentation for an entire package. type: generates documentation for a single type declaration, as well as any related functions/methods. func: generates documentation for a single function or method. It may be referenced from within a type, or directly in the package, depending on nesting. value: generates documentation for a single variable or constant declaration block within a package. index: generates an index of symbols within a package, similar to what is seen for godoc.org. The index links to types, funcs, variables, and constants generated by other templates, so it may need to be overridden as well if any of those templates are changed in a material way. example: generates documentation for a single example for a package or one of its symbols. The example is generated alongside whichever symbol it represents, based on the standard naming conventions outlined in https://blog.golang.org/examples#TOC_4. doc: generates the freeform documentation block for any of the above structures that can contain a documentation section. import: generates the import code used to pull in a package. Overriding with the --template-file option uses a key-value pair mapping a template name to the file containing the contents of the override template to use. Specified template files must exist: As with the godoc tool itself, only exported symbols will be shown in documentation. This can be expanded to include all symbols in a package by adding the --include-unexported/-u flag. If you want to blend the documentation generated by gomarkdoc with your own hand-written markdown, you can use the --embed/-e flag to change the gomarkdoc tool into an append/embed mode. When documentation is generated, gomarkdoc looks for a file in the location where the documentation is to be written and embeds the documentation if present. Otherwise, the documentation is appended to the end of the file. When running with embed mode enabled, gomarkdoc will look for either this single comment: Or the following pair of comments (in which case all content in between is replaced): If you would like to include files that are part of a build tag, you can specify build tags with the --tags flag. Tags are also supported through GOFLAGS, though command line and configuration file definitions override tags specified through GOFLAGS. You can also run gomarkdoc in a verification mode with the --check/-c flag. This is particularly useful for continuous integration when you want to make sure that a commit correctly updated the generated documentation. This flag is only supported when the --output/-o flag is specified, as the file provided there is what the tool is checking: If you're experiencing difficulty with gomarkdoc or just want to get more information about how it's executing underneath, you can add -v to show more logs. This can be chained a second time to show even more verbose logs: Some features of gomarkdoc rely on being able to detect information from the git repository containing the project. Since individual local git repositories may be configured differently from person to person, you may want to manually specify the information for the repository to remove any inconsistencies. This can be achieved with the --repository.url, --repository.default-branch and --repository.path options. 
For example, this repository would be configured with: If you want to reuse configuration options across multiple invocations, you can specify a file in the folder where you invoke gomarkdoc containing configuration information that you would otherwise provide on the command line. This file may be a JSON, TOML, YAML, HCL, env, or Java properties file, but the name is expected to start with .gomarkdoc (e.g. .gomarkdoc.yml). All configuration options are available with the camel-cased form of their long name (e.g. --include-unexported becomes includeUnexported). Template overrides are specified as a map, rather than a set of key-value pairs separated by =. Options provided on the command line override those provided in the configuration file if an option is present in both. While most users will find the command line utility sufficient for their needs, this package may also be used programmatically by installing it directly, rather than its command subpackage. The programmatic usage provides more flexibility when selecting what packages to work with and what components to generate documentation for. A common usage will look something like this: This project uses itself to generate the README files in github.com/princjef/gomarkdoc and its subdirectories. To see the commands that are run to generate documentation for this repository, take a look at the Doc() and DocVerify() functions in magefile.go and the .gomarkdoc.yml file in the root of this repository. To run these commands in your own project, simply replace `go run ./cmd/gomarkdoc` with `gomarkdoc`. Know of another project that is using gomarkdoc? Open an issue with a description of the project and link to the repository and it might be featured here!
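Returning to the programmatic usage mentioned above, a sketch loosely following the package's documented example (gomarkdoc.NewRenderer, lang.NewPackageFromBuild and the logger subpackage); error handling is abbreviated:

	package main

	import (
		"fmt"
		"go/build"
		"os"

		"github.com/princjef/gomarkdoc"
		"github.com/princjef/gomarkdoc/lang"
		"github.com/princjef/gomarkdoc/logger"
	)

	func main() {
		// The renderer turns a loaded package into markdown text.
		out, err := gomarkdoc.NewRenderer()
		if err != nil {
			panic(err)
		}

		wd, err := os.Getwd()
		if err != nil {
			panic(err)
		}

		// Load the package in the current directory via go/build.
		buildPkg, err := build.ImportDir(wd, build.ImportComment)
		if err != nil {
			panic(err)
		}

		log := logger.New(logger.DebugLevel)
		pkg, err := lang.NewPackageFromBuild(log, buildPkg)
		if err != nil {
			panic(err)
		}

		// Write the generated markdown to the console.
		doc, err := out.Package(pkg)
		if err != nil {
			panic(err)
		}
		fmt.Println(doc)
	}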
Package restlayer is an API framework heavily inspired by the excellent Python Eve (http://python-eve.org/). It helps you create a comprehensive, customizable, and secure REST (graph) API on top of pluggable backend storages with no boilerplate code so you can focus on your business logic. Implemented as a net/http middleware, it plays well with other middleware like CORS (http://github.com/rs/cors) and is net/context aware thanks to xhandler. REST Layer is an opinionated framework. Unlike many API frameworks, you don’t directly control the routing and you don’t have to write handlers. You just define resources and sub-resources with a schema, and the framework automatically figures out what routes to generate behind the scene. You don’t have to take care of the HTTP headers and response, JSON encoding, etc. either. REST Layer handles HTTP conditional requests, caching, and integrity checking for you. A powerful and extensible validation engine makes sure that data comes pre-validated to your custom storage handlers. Generic resource handlers for MongoDB (http://github.com/rs/rest-layer-mongo), ElasticSearch (http://github.com/rs/rest-layer-es) and other databases are also available, so you have little to no code to write to make the whole system work. Moreover, REST Layer lets you create a graph API by linking resources between them. Thanks to its advanced field selection syntax (and coming support of GraphQL), you can gather resources and their dependencies in a single request, saving you from costly network roundtrips. REST Layer is composed of several sub-packages: See https://github.com/rs/rest-layer/blob/master/README.md for full REST Layer documentation.
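A compact sketch of defining a resource and serving it, loosely based on the project's README example (a schema definition, the in-memory storage handler from rest-layer-mem, and rest.NewHandler); the field set is invented for illustration:

	package main

	import (
		"log"
		"net/http"

		mem "github.com/rs/rest-layer-mem"
		"github.com/rs/rest-layer/resource"
		"github.com/rs/rest-layer/rest"
		"github.com/rs/rest-layer/schema"
	)

	var user = schema.Schema{
		Fields: schema.Fields{
			"id": {Required: true, Filterable: true},
			"name": {
				Required:   true,
				Filterable: true,
				Validator:  &schema.String{MaxLen: 150},
			},
		},
	}

	func main() {
		index := resource.NewIndex()
		// Bind the schema to an in-memory storage handler; swap in the MongoDB
		// or ElasticSearch handlers for persistent storage.
		index.Bind("users", user, mem.NewHandler(), resource.DefaultConf)

		api, err := rest.NewHandler(index)
		if err != nil {
			log.Fatalf("invalid API configuration: %s", err)
		}
		log.Fatal(http.ListenAndServe(":8080", api))
	}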
Package gorilla/pat is a request router and dispatcher with a pat-like interface. It is an alternative to gorilla/mux that showcases how it can be used as a base for different API flavors. Package pat is documented at: Let's start registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming GET request matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Note: gorilla/pat matches path prefixes, so you must register the most specific paths first. Note: unlike pat, these methods accept a handler function, not an http.Handler. We think this is shorter and more convenient. To set an http.Handler, use the Add() method. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: The names are used to create a map of route variables which are stored in the URL query, prefixed by a colon: As in the gorilla/mux package, other matchers can be added to the registered routes and URLs can be reversed as well. To build a URL for a route, first add a name to it: Then you can get it using the name and generate a URL: ...and the result will be a url.URL with the following path: Check the mux documentation for more details about URL building and extra matchers:
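Separately, a small sketch of the registration pattern described above, with route variables read back out of the URL query under their colon-prefixed names:

	package main

	import (
		"fmt"
		"net/http"

		"github.com/gorilla/pat"
	)

	func hello(w http.ResponseWriter, r *http.Request) {
		// Route variables are stored in the URL query, prefixed by a colon.
		name := r.URL.Query().Get(":name")
		fmt.Fprintf(w, "Hello, %s!\n", name)
	}

	func main() {
		r := pat.New()
		// Register the most specific paths first: pat matches path prefixes.
		r.Get("/hello/{name}", hello)
		r.Get("/", func(w http.ResponseWriter, req *http.Request) {
			fmt.Fprintln(w, "index")
		})

		http.Handle("/", r)
		http.ListenAndServe(":8000", nil)
	}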
Package fpdf implements a PDF document generator with high level support for text, drawing and images. - UTF-8 support - Choice of measurement unit, page format and margins - Page header and footer management - Automatic page breaks, line breaks, and text justification - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images - Colors, gradients and alpha channel transparency - Outline bookmarks - Internal and external links - TrueType, Type1 and encoding support - Page compression - Lines, Bézier curves, arcs, and ellipses - Rotation, scaling, skewing, translation, and mirroring - Clipping - Document protection - Layers - Templates - Barcodes - Charting facility - Import PDFs as templates go-pdf/fpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. go-pdf/fpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the go-pdf/fpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. 
In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the go-pdf/fpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions. Your change should - be compatible with the MIT License - be properly documented - be formatted with go fmt - include an example in fpdf_test.go if appropriate - conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings - not diminish test coverage Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it.
Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage. Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations. - Remove all legacy code page font support; use UTF-8 exclusively - Improve test coverage as reported by the coverage tool. Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.SummaryCompare() functions belong to a separate, internal package and are not part of the gofpdf library.
If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
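A minimal sketch using the github.com/go-pdf/fpdf import path that also shows the error-latching pattern described above (intermediate calls are not checked individually; Err() and Error() are consulted at the end):

	package main

	import (
		"fmt"
		"os"

		"github.com/go-pdf/fpdf"
	)

	func main() {
		pdf := fpdf.New("P", "mm", "A4", "")
		pdf.AddPage()
		pdf.SetFont("Helvetica", "", 12)
		pdf.Cell(40, 10, "Hello, world")

		// The first failure latches into the instance; later calls become no-ops.
		if pdf.Err() {
			fmt.Fprintln(os.Stderr, "generation failed:", pdf.Error())
			return
		}
		if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}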
Gocc is an LR(1) parser generator for Go, written in Go. The generator uses a BNF with very easy-to-use SDT rules. Please see https://github.com/goccmack/gocc/ for more documentation.
Package stopwords contains various algorithms of text comparison (Simhash, Levenshtein). It allows you to customize the list of stopwords, implements the Levenshtein Distance algorithm to evaluate the difference between two strings, and implements Charikar's simhash algorithm to generate a 64-bit fingerprint of a given document.
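Assuming this documents the bbalet/stopwords package, a minimal use of its CleanString helper looks like the sketch below; the language is given as an ISO 639-1 code and the final argument controls whether HTML is stripped first:

	package main

	import (
		"fmt"

		"github.com/bbalet/stopwords"
	)

	func main() {
		// Remove English stop words, leaving only the content-bearing tokens.
		cleaned := stopwords.CleanString("this is not a very long sentence", "en", true)
		fmt.Println(cleaned)
	}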
Package crypto provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. The cornerstone of this toolbox is the 'abstract' sub-package, which defines abstract interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes an abstract.Group interface generically supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can thus write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The abstract group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. The abstract.Suite interface builds further on the abstract.Group API to represent an abstraction of entire pluggable ciphersuites, which include a group (e.g., curve) suitable for advanced public-key crypto together with a suitably matched set of symmetric-key crypto algorithms. As a trivial example, generating a public/private keypair is as simple as: The first statement picks a private key (Scalar) from a specified source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups. See below for more complete examples. Various sub-packages provide several specific implementations of these abstract cryptographic interfaces. In particular, the 'nist' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms, and of NIST-standardized elliptic curves built on the Go crypto library. The 'edwards' sub-package provides the abstract group interface using more recent Edwards curves, including the popular Ed25519 curve. The 'openssl' sub-package offers an alternative implementation of NIST-standardized elliptic curves and symmetric-key algorithms, built as wrappers around OpenSSL's crypto library. Other sub-packages build more interesting high-level cryptographic tools atop these abstract primitive interfaces, including: - poly: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example. 
- proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true. - anon: Anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is. - shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust the shuffler(s) to shuffle votes/bids honestly. This library should currently be considered experimental: it will definitely be changing in non-backward-compatible ways, and it will need independent security review before it should be considered ready for use in security-critical applications. However, we intend to bring the library closer to stability and real-world usability as quickly as development resources permit, and as interest and application demand dictate. As should be obvious, this library is intended for the use of developers who are at least moderately knowledgeable about crypto. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then NaCl/Sodium pursues this worthy goal (http://doc.libsodium.org). This toolkit's purpose is to make it possible - and preferably but not necessarily easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (http://charm-crypto.com/). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages. This example illustrates how to use the crypto toolkit's abstract group API to perform basic Diffie-Hellman key exchange calculations, using the NIST-standard P256 elliptic curve in this case. Any other suitable elliptic curve or other cryptographic group may be used simply by changing the first line that picks the suite. This example illustrates how the crypto toolkit may be used to perform "pure" ElGamal encryption, in which the message to be encrypted is small enough to be embedded directly within a group element (e.g., in an elliptic curve point). For basic background on ElGamal encryption see for example http://en.wikipedia.org/wiki/ElGamal_encryption. Most public-key crypto libraries tend not to support embedding data in points, in part because for "vanilla" public-key encryption you don't need it: one would normally just generate an ephemeral Diffie-Hellman secret and use that to seed a symmetric-key crypto algorithm such as AES, which is much more efficient per bit and works for arbitrary-length messages.
However, in many advanced public-key crypto algorithms it is often useful to be able to embed data directly into points and compute with them: as just one of many examples, see the proactively verifiable anonymous messaging scheme prototyped in Verdict (http://dedis.cs.yale.edu/dissent/papers/verdict-abs). For fancier versions of ElGamal encryption implemented in this toolkit see for example anon.Encrypt, which encrypts a message for one of several possible receivers forming an explicit anonymity set.
Package generator provides the code generation library for go-swagger. The general idea is that you should rarely see interface{} in the generated code. You get a complete representation of a swagger document in somewhat idiomatic go. To do so, there is a set of mapping patterns that are applied to map a Swagger specification to go types: NOTE: anyOf and oneOf JSON-schema constructs are not supported by Swagger 2.0. A property on a definition is a pointer when any one of the following conditions is met: JSONSchema and by extension Swagger allow for items that have a fixed size array, with the schema describing the items at each index. This can be combined with additional items to form some kind of tuple with varargs. To map this to Go, the generator creates a struct that has fixed field names and a custom JSON serializer. NOTE: the additionalItems keyword is not supported by Swagger 2.0. However, the generator and validator parts of go-swagger do support it. The code that is generated also gets the doc comments that are used by the scanner to generate a spec from go code, so after generation you should be able to reverse-generate a spec from the code that was generated from your spec. It should be equivalent to the original spec but might miss some default values and examples.
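As a rough illustration of the tuple mapping described above (the type and field names below are invented for illustration, not go-swagger's actual generated output), a fixed-size items schema with an additionalItems tail might be rendered along these lines:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ExampleTuple is a hypothetical generated type for a tuple schema with two
// fixed positions plus an additionalItems tail; real generated names differ.
type ExampleTuple struct {
	P0 string // items[0]
	P1 int64  // items[1]

	// AdditionalItems holds the varargs-style tail allowed by additionalItems.
	AdditionalItems []json.RawMessage
}

// MarshalJSON flattens the fixed fields and the tail into one JSON array,
// which is how such tuples are represented on the wire.
func (t ExampleTuple) MarshalJSON() ([]byte, error) {
	arr := []interface{}{t.P0, t.P1}
	for _, extra := range t.AdditionalItems {
		arr = append(arr, extra)
	}
	return json.Marshal(arr)
}

func main() {
	t := ExampleTuple{P0: "id-123", P1: 42}
	b, _ := t.MarshalJSON()
	fmt.Println(string(b)) // ["id-123",42]
}
```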
This is a Chef Infra Server API client. This library can be used to write tools that interact with the Chef server. The tests can be run with go test, and the client can be used as normal via: Documentation can be found on GoDoc at http://godoc.org/github.com/go-chef/chef This is example code for creating a new node on a Chef Infra Server:
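A hedged sketch of that flow, assuming the go-chef client API (chef.NewClient, chef.NewNode and client.Nodes.Post); the server URL, key path and names are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/go-chef/chef"
)

func main() {
	// Read the client's private key (path is a placeholder).
	key, err := os.ReadFile("/etc/chef/client.pem")
	if err != nil {
		log.Fatal(err)
	}

	// Build the API client for a Chef Infra Server organization.
	client, err := chef.NewClient(&chef.Config{
		Name:    "mynode",
		Key:     string(key),
		BaseURL: "https://chef.example.com/organizations/myorg/",
	})
	if err != nil {
		log.Fatal(err)
	}

	// Create a new node object and post it to the server.
	node := chef.NewNode("mynode")
	node.RunList = []string{"recipe[base]"}
	if _, err := client.Nodes.Post(node); err != nil {
		log.Fatal(err)
	}
	fmt.Println("node created")
}
```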
Package podcast generates a fully compliant iTunes and RSS 2.0 podcast feed for GoLang using a simple API. Full documentation with detailed examples is located at https://godoc.org/github.com/eduncan911/podcast To use, `go get` and `import` the package like your typical GoLang library. The API exposes a number of method receivers on structs that implement the logic required to comply with the specifications and ensure a compliant feed. A number of overrides occur to help with iTunes visibility of your episodes. Notably, the `Podcast.AddItem` function performs most of the heavy lifting by taking the `Item` input and performing validation, overrides and duplicate setters through the feed. Full detailed examples of the API are at https://godoc.org/github.com/eduncan911/podcast. This library is supported on GoLang 1.7 and higher. We have implemented Go Modules support and the CI pipeline shows it working with new installs, tested with Go 1.13. To keep 1.7 compatibility, we use `go mod vendor` to maintain the `vendor/` folder for the older 1.7 and later runtimes. If either runtime has an issue, please create an Issue and I will address it. For version 1.x, you are not restricted in having full control over your feeds. You may choose to skip the API methods and instead use the structs directly. The fields have been grouped into RSS 2.0 and iTunes fields, with iTunes-specific fields all prefixed with the letter `I`. However, do note that the 2.x version currently in progress will break this extensibility and enforce API methods going forward. This is to ensure that the feed can be both marshalled and unmarshalled back and forth (the current 1.x branch can only marshal - hence the work for 2.x). `go-fuzz` has been added in 1.4.1, covering all exported API methods. They have been run extensively and no issues have come out of them yet (most tests were run overnight, over about 11 hours with zero crashes). If you wish to help fuzz the inputs, with Go 1.13 or later you can run `go-fuzz` on any of the inputs. To obtain a list of available funcs to pass, just run `go-fuzz` without any parameters. If you do find an issue, please raise an issue immediately and I will quickly address it. The 1.x branch is now mostly in maintenance mode, open to PRs. This means no more planned features on the 1.x feature branch are expected. With the success of 6 iTunes-accepted podcasts I have published with this library, and with the feedback from the community, the 1.x releases are now considered stable. The 2.x branch's primary focus is to allow for bi-directional marshalling both ways. Currently, the 1.x branch only allows marshalling to a serialized feed; an attempt to unmarshal a serialized feed back into a Podcast form will error or not work correctly. Note that while the 2.x branch is targeted to remain backwards compatible, that is true only if you use the public API funcs to set parameters. Several of the underlying public fields are being removed in order to accommodate the unmarshalling of serialized data; therefore, a version 2.x is denoted for this release. We use the SemVer versioning schema. You can rest assured that pulling 1.x branches will remain backwards compatible now and into the future. However, the new 2.x branch, while keeping the same API, is expected to break those that bypass the API methods and use the underlying public properties instead. Release history: v1.4.2, v1.4.1, v1.4.0, v1.3.2, v1.3.1, v1.3.0, v1.2.1, v1.2.0, v1.1.0, v1.0.0. RSS 2.0: https://cyber.harvard.edu/rss/rss.html Podcasts: https://help.apple.com/itc/podcasts_connect/#/itca5b22233
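A minimal sketch of the typical flow (New, AddItem, Encode); the titles, URLs and dates here are placeholders:

```go
package main

import (
	"log"
	"os"
	"time"

	"github.com/eduncan911/podcast"
)

func main() {
	now := time.Now()

	// Create the feed; title, link and description are placeholder values.
	p := podcast.New("Sample Show", "https://example.com/podcast", "A demo feed", &now, &now)

	// AddItem performs the validation and iTunes overrides described above.
	item := podcast.Item{
		Title:       "Episode 1",
		Description: "First episode",
		Link:        "https://example.com/podcast/ep1",
	}
	item.AddPubDate(&now)
	if _, err := p.AddItem(item); err != nil {
		log.Fatal(err)
	}

	// Write the RSS/iTunes XML to stdout.
	if err := p.Encode(os.Stdout); err != nil {
		log.Fatal(err)
	}
}
```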
Package iplib provides enhanced tools for working with IP networks and addresses. These tools are built upon and extend the generic functionality found in the Go "net" package. The main library comes in two parts: a series of utilities for working with net.IP (sort, increment, decrement, delta, compare, convert to binary or hex-string, convert between net.IP and integer) and an enhancement of net.IPNet called iplib.Net that can calculate the first and last IPs of a block as well as enumerating the block into []net.IP, incrementing and decrementing within the boundaries of the block and creating sub- or super-nets of it. For most features iplib exposes a v4 and a v6 variant to handle each network properly, but in all cases there is a generic function that handles any IP and routes between them. One caveat to this is those functions that require or return an integer value representing the address: in these cases the IPv4 variants take an int32 as input while the IPv6 functions require a *big.Int in order to work with the 128 bits of address. For managing the complexity of IPv6 address-spaces, this library adds a new mask, called a Hostmask, as an optional constraint on iplib.Net6 networks; please see the type documentation for more information on using it. For functions where it is possible to exceed the address-space the rule is that underflows return the version-appropriate all-zeroes address while overflows return the all-ones. There are also two submodules under iplib: the iplib/iid module contains functions for generating RFC 7217-compliant IPv6 Interface ID addresses, and iplib/iana imports the IANA IP Special Registries and exposes functions for comparing IP addresses against those registries to determine if the IP is part of a special reservation (for example RFC 1918 private networks or the RFC 3849 documentation network).
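A small sketch of the net.IP utilities and the Net type described above; the import path and the exact constructor (NewNet4) are assumptions based on the v2 API:

```go
package main

import (
	"fmt"
	"net"

	"github.com/c-robinson/iplib"
)

func main() {
	// Increment an address and compute the distance between two addresses.
	ip := net.ParseIP("192.168.1.10")
	fmt.Println(iplib.NextIP(ip))                               // 192.168.1.11
	fmt.Println(iplib.DeltaIP(ip, net.ParseIP("192.168.1.20"))) // 10

	// Work with a block: first and last addresses of 192.168.1.0/24.
	n := iplib.NewNet4(net.ParseIP("192.168.1.0"), 24)
	fmt.Println(n.FirstAddress(), n.LastAddress())
}
```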
Package gofight offers simple API HTTP handler testing for Golang frameworks. Details about the gofight project can be found on its GitHub page: Installation: Set Header: You can add a custom header via the SetHeader func. Set Cookie: You can add a custom cookie via the SetCookie func. Set query string: Use SetQuery to generate query string data. POST FORM Data: Use SetForm to generate form data. POST JSON Data: Use SetJSON to generate JSON data. POST RAW Data: Use SetBody to generate raw data. For more details, see the documentation and examples.
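A hedged sketch of a gofight test against a plain net/http handler; the /v2 import path and the use of testify for assertions are assumptions:

```go
package main

import (
	"net/http"
	"testing"

	"github.com/appleboy/gofight/v2"
	"github.com/stretchr/testify/assert"
)

// basicHandler is a plain net/http handler used as the system under test.
func basicHandler() http.Handler {
	mux := http.NewServeMux()
	mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("world"))
	})
	return mux
}

func TestHelloEndpoint(t *testing.T) {
	r := gofight.New()
	r.GET("/hello").
		SetHeader(gofight.H{"X-Request-Id": "test"}).
		Run(basicHandler(), func(res gofight.HTTPResponse, req gofight.HTTPRequest) {
			assert.Equal(t, http.StatusOK, res.Code)
			assert.Equal(t, "world", res.Body.String())
		})
}
```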
Package XGB provides the X Go Binding, which is a low-level API to communicate with the core X protocol and many of the X extensions. It is *very* closely modeled on XCB, so that experience with XCB (or xpyb) is easily translatable to XGB. That is, it uses the same cookie/reply model and is thread safe. There are otherwise no major differences (in the API). Most uses of XGB typically fall under the realm of window manager and GUI kit development, but other applications (like pagers, panels, tilers, etc.) may also require XGB. Moreover, it is a near certainty that if you need to work with X, xgbutil will be of great use to you as well: https://github.com/jezek/xgbutil This is an extremely terse example that demonstrates how to connect to X, create a window, listen to StructureNotify events and Key{Press,Release} events, map the window, and print out all events received. An example with accompanying documentation can be found in examples/create-window. This is another small example that shows how to query Xinerama for geometry information of each active head. Accompanying documentation for this example can be found in examples/xinerama. XGB can benefit greatly from parallelism due to its concurrent design. For evidence of this claim, please see the benchmarks in xproto/xproto_test.go. xproto/xproto_test.go contains a number of contrived tests that stress particular corners of XGB that I presume could be problem areas. Namely: requests with no replies, requests with replies, checked errors, unchecked errors, sequence number wrapping, cookie buffer flushing (i.e., forcing a round trip every N requests made that don't have a reply), getting/setting properties and creating a window and listening to StructureNotify events. Both XCB and xpyb use the same Python module (xcbgen) for a code generator. XGB (before this fork) used the same code generator as well, but in my attempt to add support for more extensions, I found the code generator extremely difficult to work with. Therefore, I re-wrote the code generator in Go. It can be found in its own sub-package, xgbgen, of xgb. My design of xgbgen includes a rough consideration that it could be used for other languages. I am reasonably confident that the core X protocol is in full working form. I've also tested the Xinerama and RandR extensions sparingly. Many of the other existing extensions have Go source generated (and are compilable) and are included in this package, but I am currently unsure of their status. They *should* work. XKB is the only extension that intentionally does not work, although I suspect that GLX also does not work (however, there is Go source code for GLX that compiles, unlike XKB). I don't currently have any intention of getting XKB working, due to its complexity and my current mental incapacity to test it.
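A very small sketch that connects to X and prints the default screen's size, using the jezek fork's import paths:

```go
package main

import (
	"fmt"
	"log"

	"github.com/jezek/xgb"
	"github.com/jezek/xgb/xproto"
)

func main() {
	// Connect to the X server named by $DISPLAY.
	X, err := xgb.NewConn()
	if err != nil {
		log.Fatal(err)
	}
	defer X.Close()

	// Print the dimensions of the default screen.
	screen := xproto.Setup(X).DefaultScreen(X)
	fmt.Printf("%dx%d\n", screen.WidthInPixels, screen.HeightInPixels)
}
```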
Package pbc provides structures for building pairing-based cryptosystems. It is a wrapper around the Pairing-Based Cryptography (PBC) Library authored by Ben Lynn (https://crypto.stanford.edu/pbc/). This wrapper provides access to all PBC functions. It supports generation of various types of elliptic curves and pairings, element initialization, I/O, and arithmetic. These features can be used to quickly build pairing-based or conventional cryptosystems. The PBC library is designed to be extremely fast. Internally, it uses GMP for arbitrary-precision arithmetic. It also includes a wide variety of optimizations that make pairing-based cryptography highly efficient. To improve performance, PBC does not perform type checking to ensure that operations actually make sense. The Go wrapper provides the ability to add compatibility checks to most operations, or to use unchecked elements to maximize performance. Since this library provides low-level access to pairing primitives, it is very easy to accidentally construct insecure systems. This library is intended to be used by cryptographers or to implement well-analyzed cryptosystems. Cryptographic pairings are defined over three mathematical groups: G1, G2, and GT, where each group is typically of the same order r. Additionally, a bilinear map e maps a pair of elements — one from G1 and another from G2 — to an element in GT. This map e has the following additional property: If G1 == G2, then a pairing is said to be symmetric. Otherwise, it is asymmetric. Pairings can be used to construct a variety of efficient cryptosystems. The PBC library currently supports 5 different types of pairings, each with configurable parameters. These types are designated alphabetically, roughly in chronological order of introduction. Type A, D, E, F, and G pairings are implemented in the library. Each type has different time and space requirements. For more information about the types, see the documentation for the corresponding generator calls, or the PBC manual page at https://crypto.stanford.edu/pbc/manual/ch05s01.html. This package must be compiled using cgo. It also requires the installation of GMP and PBC. During the build process, this package will attempt to include <gmp.h> and <pbc/pbc.h>, and then dynamically link to GMP and PBC. Most systems include a package for GMP. To install GMP in Debian / Ubuntu: For an RPM installation with YUM: For installation with Fink (http://www.finkproject.org/) on Mac OS X: For more information or to compile from source, visit https://gmplib.org/ To install the PBC library, download the appropriate files for your system from https://crypto.stanford.edu/pbc/download.html. PBC has three dependencies: the gcc compiler, flex (http://flex.sourceforge.net/), and bison (https://www.gnu.org/software/bison/). See the respective sites for installation instructions. Most distributions include packages for these libraries. For example, in Debian / Ubuntu: The PBC source can be compiled and installed using the usual GNU Build System: After installing, you may need to rebuild the search path for libraries: It is possible to install the package on Windows through the use of MinGW and MSYS. MSYS is required for installing PBC, while GMP can be installed through a package. Based on your MinGW installation, you may need to add "-I/usr/local/include" to CPPFLAGS and "-L/usr/local/lib" to LDFLAGS when building PBC. Likewise, you may need to add these options to CGO_CPPFLAGS and CGO_LDFLAGS when installing this package. 
This package is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. For additional details, see the COPYING and COPYING.LESSER files. This example generates a pairing and some random group elements, then applies the pairing operation. This example computes and verifies a Boneh-Lynn-Shacham signature in a simulated conversation between Alice and Bob.
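A hedged sketch of generating a Type A pairing and checking bilinearity; the import path and parameter sizes are assumptions, and building it requires cgo plus installed GMP and PBC as described above:

```go
package main

import (
	"fmt"

	"github.com/Nik-U/pbc"
)

func main() {
	// Generate a Type A (symmetric) pairing with illustrative parameter sizes.
	params := pbc.GenerateA(160, 512)
	pairing := params.NewPairing()

	// Random group elements and a random exponent.
	g := pairing.NewG1().Rand()
	h := pairing.NewG2().Rand()
	x := pairing.NewZr().Rand()

	// Bilinearity check: e(g^x, h) == e(g, h)^x.
	left := pairing.NewGT().Pair(pairing.NewG1().PowZn(g, x), h)
	right := pairing.NewGT().PowZn(pairing.NewGT().Pair(g, h), x)
	fmt.Println("bilinear:", left.Equals(right))
}
```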
Package goreadme generates a README markdown file from Go doc. The package can be used as a command line tool and as a GitHub action, described below: GitHub actions can be configured to update the README file automatically every time it is needed. Below is an example in which, every time a new change is pushed to the main branch, the action is triggered, generates a new README file, and, if there is a change, commits and pushes it to the main branch. In pull requests that affect the README content, if the `GITHUB_TOKEN` is given, the action will post a comment on the pull request with the changes that will be made to the README file. To use this with GitHub actions, add the following content to `.github/workflows/goreadme.yml`. See ./action.yml for all available input options. Use as a command line tool Both Go doc and readme files are important. Go doc is used by your library's users, and the README file welcomes users to your library. They share common content, which is usually duplicated from the doc to the readme or vice versa once the library is ready. The problem is that keeping documentation updated is important, and hard enough - keeping both updated is twice as hard. The formatting of the README.md is done by the go doc parser. This makes the resulting README.md a bit more limited. Currently, `goreadme` supports the formatting as explained in (godoc page) https://blog.golang.org/godoc-documenting-go-code, or (here) https://pkg.go.dev/github.com/fluhus/godoc-tricks. Meaning: * A header is a single line that is separated from a paragraph above. * Code block is recognized by indentation as Go code. * Inline code is marked with `backticks`. * URLs will just automatically be converted to links: https://github.com/posener/goreadme Additionally, the syntax was extended to include some more markdown features while keeping the Go doc readable: * Bulleted and numbered lists are possible when each bullet item is followed by an empty line. * Diff blocks are automatically detected when each line in a code block starts with a `' '`, `'-'` or `'+'`: * A repository file can be linked when providing a path that starts with `./`: ./goreadme.go. * A link can have a link text by prefixing it with parenthesised text: (goreadme page) https://github.com/posener/goreadme. * A link to a repository file can also have a link text: (goreadme main file) ./goreadme.go. * An image can be added by prefixing a link to an image with `(image/<image title>)`: (image/title of image) https://github.githubassets.com/images/icons/emoji/unicode/1f44c.png The goreadme tests run the test cases in the ./testdata directory. They generate readme files for all the packages in that directory and assert that the resulting readme matches the existing one. When modifying goreadme behavior, there is no need to manually change these readme files. It is possible to run `WRITE_READMES=1 go test ./...`, which regenerates them, and then check that the changes match what is expected (optionally using `git diff`).
Package kyber provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. This top level package defines the interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes a kyber.Group interface supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The kyber.Group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. As a trivial example, generating a public/private keypair is as simple as: The first statement picks a private key (Scalar) from the suite's source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups. Various sub-packages provide several specific implementations of these cryptographic interfaces. In particular, the 'group/mod' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms. The `group/nist` package provides NIST-standardized elliptic curves built on the Go crypto library. The 'group/edwards25519' sub-package provides the kyber.Group interface using the popular Ed25519 curve. Other sub-packages build more interesting high-level cryptographic tools atop these primitive interfaces, including: - share: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example. - proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true. - sign: The sign directory contains different signature schemes. 
- sign/anon provides anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is. - sign/cosi provides a collective signature algorithm, where a group of signers creates a unique, compact and efficiently verifiable signature using the Schnorr signature as a basis. - sign/eddsa provides a kyber-native implementation of the EdDSA signature scheme. - sign/schnorr provides a basic vanilla Schnorr signature scheme implementation. - shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust more than one of the shuffler(s) to shuffle votes/bids honestly. As should be obvious, this library is intended to be used by developers who are at least moderately knowledgeable about cryptography. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then [NaCl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) may be a better choice. This toolkit's purpose is to make it possible - and preferably easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (https://charm-crypto.com/category/charm). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages. This library is offered as-is, and without a guarantee. It will need an independent security review before it should be considered ready for use in security-critical applications. If you integrate Kyber into your application it is YOUR RESPONSIBILITY to arrange for that audit. If you notice a possible security problem, please report it to dedis-security@epfl.ch.
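A minimal sketch of the keypair and Diffie-Hellman calculations described above, assuming the edwards25519 suite from kyber v3:

```go
package main

import (
	"fmt"

	"go.dedis.ch/kyber/v3/group/edwards25519"
)

func main() {
	// Suite bundling the Ed25519 group with matching symmetric primitives.
	suite := edwards25519.NewBlakeSHA256Ed25519()

	// Alice: private scalar a and public point A = a*G.
	a := suite.Scalar().Pick(suite.RandomStream())
	A := suite.Point().Mul(a, nil) // nil means the standard base point

	// Bob: private scalar b and public point B = b*G.
	b := suite.Scalar().Pick(suite.RandomStream())
	B := suite.Point().Mul(b, nil)

	// Diffie-Hellman: both sides derive the same shared point.
	sharedAlice := suite.Point().Mul(a, B)
	sharedBob := suite.Point().Mul(b, A)
	fmt.Println("equal shared secrets:", sharedAlice.Equal(sharedBob))
}
```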
Package storage intends to provide a unified storage layer for Golang. - Production ready: high test coverage, enterprise storage software adaptation, semantic versioning, well documented. - High performance: more code generation, less runtime reflection. - Vendor agnostic: more generic abstraction, fewer internal details. There are two main public interfaces: Servicer and Storager. Storager is a fully functional storage client, and Servicer is a manager of Storager instances, which will be useful for services like object storage. For any service, implementing Storager is required and implementing Servicer is optional. The most common case to use a Storager service could be the following: 1. Init a service. 2. Use Storager API to maintain data. - Storage uses the error wrapping added by Go 1.13; Go versions before 1.13 may behave unexpectedly.
Package pango is a golang cross version mechanism for interacting with Palo Alto Networks devices (including physical and virtualized Next-generation Firewalls and Panorama). Versioning support is in place for PAN-OS 6.1 and up. To start, create a client connection with the desired parameters and then initialize the connection: Initializing the connection creates the API key (if it was not already specified), then performs "show system info" to get the PAN-OS version. Once the firewall client is created, you can query and configure the Palo Alto Networks device from the functions inside the various namespaces of the client connection. Namespaces correspond to the various configuration areas available in the GUI. For example: Generally speaking, there are the following functions inside each namespace: These functions correspond with PAN-OS Get, Show, Set, Edit, and Delete API calls. Get(), Set(), and Edit() take and return normalized, version independent objects. These version safe objects are typically named Entry, which corresponds to how the object is placed in the PAN-OS XPATH. Some Entry objects have a special function, Defaults(). Invoking this function will initialize the object with some default values. Each Entry that implements Defaults() calls out in its documentation what parameters are affected by this, and what the defaults are. For any version safe object, a parameter that your PAN-OS doesn't support will be safely ignored in the resultant XML sent to the firewall / Panorama. A PAN-OS configuration can be loaded from a PAN-OS device using `RetrievePanosConfig()` to pull it from a live device or `LoadPanosConfig()` if already in local memory. Once it's been loaded, use `FromPanosConfig()` for singletons and `AllFromPanosConfig()` for slices of normalized objects from the loaded config. You can also use this file load and config retrieval to do offline inspection of the config; just make sure to set `pango.Client.Version` to the appropriate PAN-OS version so the version normalization can take place. The PAN-OS XML API Edit command can be used both to create and to update existing config; however, it can also truncate config for the given XPATH. Due to this, if you want to use Edit(), you need to make sure that you perform either a Get() or a Show() first, make your modification, then invoke Edit() using that object. If you don't do this, you will truncate any sub config. To learn more about the PAN-OS XML API, please refer to the Palo Alto Networks API documentation. Functions such as `panos.Client.Set`, `panos.Client.Edit`, and `panos.Client.Delete` take a parameter named `path`. This path can be either a fully formed XPATH as a string or a list of strings such as `[]string{"config", "shared", "address"}`. The great majority of namespaces give their paths as a list of strings, as the XPATH oftentimes needs to be tweaked depending on SET vs EDIT, single objects vs multiple objects, etc., so handling path updates is easier this way. Example_createAddressGroup is a Panorama example of how to create/delete a security policy with the associated address group and addresses. ExampleCreateInterface demonstrates how to use pango to create an interface if the interface is not already configured. ExamplePanosInfo outputs various info about a PAN-OS device as JSON.
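A hedged sketch of creating and initializing a firewall client; the hostname and credentials are placeholders, and the exact field layout is assumed from pango's documented usage:

```go
package main

import (
	"log"

	"github.com/PaloAltoNetworks/pango"
)

func main() {
	// Connect to a firewall; hostname and credentials are placeholders.
	fw := &pango.Firewall{Client: pango.Client{
		Hostname: "192.0.2.1",
		Username: "admin",
		Password: "admin",
	}}

	// Initialize() generates the API key (if it was not already specified)
	// and runs "show system info" to detect the PAN-OS version.
	if err := fw.Initialize(); err != nil {
		log.Fatal(err)
	}
	log.Printf("connected, PAN-OS version: %v", fw.Versioning())
}
```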
Package capn is a capnproto library for Go; see http://kentonv.github.io/capnproto/ capnpc-go provides the compiler backend for capnp; after installing it to $PATH, capnp files can be compiled with it. capnpc-go requires two annotations for all types. These are the Package and Import annotations found in go.capnp. Package is needed to know what package to place at the head of the generated file and what Go name to use when referring to the type from another package. Import should be the fully qualified import path and is used to generate import statements from other packages and to detect when two types are in the same package. Typically these are added as file annotations. For example: In capnproto, the unit of communication is a message. A message consists of one or more segments to allow easier allocation, but ideally and typically you just make one segment per message. Logically, a message is organized in a tree of objects, with the root always being a struct (as opposed to a list or primitive). Here is an example of writing a new message. We use the demo schema aircraft.capnp from the aircraftlib directory. You may wish to read the schema before reading this example. << Example moved to its own file: See the file, write_test.go >> In summary, when you make a new message, you should first make a new segment, and then create the root struct in that segment. Then add your non-child (contained) objects. This is because, as the spec says: All objects are values with pointer semantics that point into the data in a message or segment. Messages can be read/written from a stream uncompressed or using the capnproto compression. In this library a *Segment is taken to refer to both a specific segment as well as the containing message. This is to reduce the number of types generic code needs to deal with and allows objects to be created in the same segment as their outer object (thus reducing the number of far pointers). Most getters/setters in the library don't return an error. Instead a get that fails due to an invalid pointer, out of bounds, etc. will return the default value. An invalid set will be a no-op. If you really need to know whether a set succeeds then errors are provided by the lower level Object methods. Since Go doesn't have any templating, lists are created for the basic types and one level of named types. The list of basic types (e.g. List(UInt8), List(Text), etc.) are all provided in this library. Lists of user named types are created with the user types (e.g. user struct Foo will create a Foo_List type). capnp schemas that use deeper lists (e.g. List(List(UInt8))) will use PointerList and the user will have to use the Object.ToList* functions to cast to the correct type. For adding documentation comments to the generated code, there's the doc annotation. This annotation adds the comment to a struct, enum or field so that godoc will pick it up. For example: capnpc-go will generate the following for structs: For each group a typedef is created with a different method set for just the group's fields: That way the following may be used to access a field in a group: Note that Group accessors just cast the type and so have no overhead. Named unions are treated as a group with an inner unnamed union. Unnamed unions generate an enum Type_Which and a corresponding Which() function: Which() should be checked before using the getters, and the default case must always be handled. Setters for single values will set the union discriminator as well as set the value. 
For voids in unions, there is a void setter that just sets the discriminator. For example: For groups in unions, there is a group setter that just sets the discriminator. This must be called before the group getter can be used to set values. For example: capnpc-go generates enum values in all caps. For example in the capnp file: In the generated capnp.go file: In addition an enum.String() function is generated that will convert the constants to a string for debugging or logging purposes. By default, the enum name is used as the tag value, but the tags can be customized with a $Go.tag or $Go.notag annotation. For example: In the generated go file:
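A hedged sketch of the message-building flow summarized above (make a segment, create the root struct, then serialize); the import path is assumed, and the generated root-struct constructor appears only as a comment because its name depends on your schema:

```go
package main

import (
	"bytes"
	"fmt"

	capn "github.com/glycerine/go-capnproto"
)

func main() {
	// Make a new single-segment message.
	seg := capn.NewBuffer(nil)

	// Create the root struct in that segment, then fill in fields and
	// contained objects. "air.NewRootZjob" is a hypothetical constructor
	// generated by capnpc-go from a schema (here: the aircraftlib demo);
	// real names depend on your .capnp file.
	//   z := air.NewRootZjob(seg)
	//   z.SetCmd("ls")

	// Serialize the whole message to a stream (uncompressed).
	var buf bytes.Buffer
	if _, err := seg.WriteTo(&buf); err != nil {
		panic(err)
	}
	fmt.Println("message bytes:", buf.Len())
}
```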
Package ivs provides the API client, operations, and parameter types for Amazon Interactive Video Service. The Amazon Interactive Video Service (IVS) API is REST compatible, using a standard HTTP API and an Amazon Web Services EventBridge event stream for responses. JSON is used for both requests and responses, including errors. The API is an Amazon Web Services regional service. For a list of supported regions and Amazon IVS HTTPS service endpoints, see the Amazon IVS page in the Amazon Web Services General Reference. All API request parameters and URLs are case sensitive. For a summary of notable documentation changes in each release, see Document History. Allowed Header Values Accept: application/json Accept-Encoding: gzip, deflate Content-Type: application/json Key Concepts Channel — Stores configuration data related to your live stream. You first create a channel and then use the channel’s stream key to start your live stream. Stream key — An identifier assigned by Amazon IVS when you create a channel, which is then used to authorize streaming. Treat the stream key like a secret, since it allows anyone to stream to the channel. Playback key pair — Video playback may be restricted using playback-authorization tokens, which use public-key encryption. A playback key pair is the public-private pair of keys used to sign and validate the playback-authorization token. Recording configuration — Stores configuration related to recording a live stream and where to store the recorded content. Multiple channels can reference the same recording configuration. Playback restriction policy — Restricts playback by countries and/or origin sites. For more information about your IVS live stream, also see Getting Started with IVS Low-Latency Streaming. A tag is a metadata label that you assign to an Amazon Web Services resource. A tag comprises a key and a value, both set by you. For example, you might set a tag as topic:nature to label a particular video category. See Best practices and strategies in Tagging Amazon Web Services Resources and Tag Editor for details, including restrictions that apply to tags and "Tag naming limits and requirements"; Amazon IVS has no service-specific constraints beyond what is documented there. Tags can help you identify and organize your Amazon Web Services resources. For example, you can use the same tag for different resources to indicate that they are related. You can also use tags to manage access (see Access Tags). The Amazon IVS API has these tag-related operations: TagResource, UntagResource, and ListTagsForResource. The following resources support tagging: Channels, Stream Keys, Playback Key Pairs, and Recording Configurations. At most 50 tags can be applied to a resource. Note the differences between these concepts: Authentication is about verifying identity. You need to be authenticated to sign Amazon IVS API requests. Authorization is about granting permissions. Your IAM roles need to have permissions for Amazon IVS API requests. In addition, authorization is needed to view Amazon IVS private channels. (Private channels are channels that are enabled for "playback authorization.") All Amazon IVS API requests must be authenticated with a signature. The Amazon Web Services Command-Line Interface (CLI) and Amazon IVS Player SDKs take care of signing the underlying API calls for you. However, if your application calls the Amazon IVS API directly, it’s your responsibility to sign the requests. 
You generate a signature using valid Amazon Web Services credentials that have permission to perform the requested action. For example, you must sign PutMetadata requests with a signature generated from a user account that has the ivs:PutMetadata permission. For more information: Authentication and generating signatures — See Authenticating Requests (Amazon Web Services Signature Version 4) in the Amazon Web Services General Reference. Managing Amazon IVS permissions — See Identity and Access Management on the Security page of the Amazon IVS User Guide. Amazon Resource Names (ARNs) ARNs uniquely identify AWS resources. An ARN is required when you need to specify a resource unambiguously across all of AWS, such as in IAM policies and API calls. For more information, see Amazon Resource Names in the AWS General Reference.
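A small sketch of creating the client from the default AWS configuration and listing channels; standard aws-sdk-go-v2 conventions are assumed:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/ivs"
)

func main() {
	ctx := context.Background()

	// Load credentials and region from the default AWS config sources.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Create the IVS client and list channels in this account and region.
	client := ivs.NewFromConfig(cfg)
	out, err := client.ListChannels(ctx, &ivs.ListChannelsInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, ch := range out.Channels {
		fmt.Println(aws.ToString(ch.Arn), aws.ToString(ch.Name))
	}
}
```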
Package telego provides one-to-one Telegram Bot API methods & types. Telego features all methods and types described in the official Telegram documentation (https://core.telegram.org/bots/api). It achieves this by generating methods and types from the docs (generation is in the internal/generator package). The main goal was and is to create a one-to-one library, so that if you know how Telegram bots work, you will immediately know how to implement that in Go using Telego. All types are named and contain the same information as documented by Telegram; for methods it's exactly the same. However, some minor differences may be present (like the use of interfaces or combined types). Also, all generated code has the same descriptions as in the Telegram docs, so there is actually no need to go to the docs (but still, be careful as it is not a full copy of the docs due to text-only limitations). Telego was also created to simplify work with the Telegram API, so some additional methods for more convenient usage are located in long_polling.go, webhook.go and the telegoutil package. When you are working with things like a chat ID, which can be an integer or a string, Telego provides combined types; or input files that can be a URL, file ID or actual file data: you will specify only one of the fields and Telego will figure out what to do with that. For more flexibility, file data for InputFile is provided via a simple interface; os.File already implements this interface, so you can use it directly. Most of the examples can be seen in the examples folder. Simple echo bot: This bot sends back the same messages you send to it.
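A hedged sketch of the echo bot mentioned above, assuming the long-polling and telegoutil helpers; the exact signatures can vary between telego versions:

```go
package main

import (
	"fmt"
	"os"

	"github.com/mymmrac/telego"
	tu "github.com/mymmrac/telego/telegoutil"
)

func main() {
	bot, err := telego.NewBot(os.Getenv("TELEGRAM_BOT_TOKEN"))
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	// Receive updates via long polling.
	updates, err := bot.UpdatesViaLongPolling(nil)
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	defer bot.StopLongPolling()

	// Echo every text message back to its chat.
	for update := range updates {
		if update.Message != nil {
			chatID := tu.ID(update.Message.Chat.ID)
			_, _ = bot.SendMessage(tu.Message(chatID, update.Message.Text))
		}
	}
}
```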
Package storage intends to provide a unified storage layer for Golang. - Production ready: high test coverage, enterprise storage software adaptation, semantic versioning, well documented. - High performance: more code generation, less runtime reflection. - Vendor agnostic: more generic abstraction, fewer internal details. The most common case to use a Storager service could be the following: 1. Init a storager. 2. Use Storager API to maintain data.
Package ccgo translates C to Go source code. This v3 package is obsolete. Please use the current ccgo/v4. 2021-12-23: v3.13.0 added clang support. To compile the resulting Go programs, the package modernc.org/libc has to be installed. CCGO_CPP selects which command is used by the C front end to obtain target configuration. Defaults to `cpp`. Ignored when --load-config <path> is used. TARGET_GOARCH selects the GOARCH of the resulting Go code. Defaults to $GOARCH or runtime.GOARCH if $GOARCH is not set. Ignored when --load-config <path> is used. TARGET_GOOS selects the GOOS of the resulting Go code. Defaults to $GOOS or runtime.GOOS if $GOOS is not set. Ignored when --load-config <path> is used. To compile for the host invoke something like To cross compile set TARGET_GOARCH and/or TARGET_GOOS, not GOARCH/GOOS. Cross compile depends on availability of C stdlib headers for the target platform as well as on the set of predefined macros for the target platform. For example, to cross compile on a Linux host, targeting windows/amd64, it's necessary to have mingw64 installed in $PATH. Then invoke something like Only files with extension .c, .h or .json are recognized as input files. A .json file is interpreted as a compile database. All other command line arguments following the .json file are interpreted as items that should be found in the database and included in the output file. Each item should be an object file (.o) or static archive (.a) or a command (no extension). Command line options requiring an argument. -Dfoo Equals `#define foo 1`. -Dfoo=bar Equals `#define foo bar`. -Ipath Add path to the list of include file search paths. The option is a capital letter I (India), not a lowercase letter l (Lima). -limport-path The package at <import-path> must have been produced without using the -nocapi option, ie. the package must have a proper capi_$GOOS_$GOARCH.go file. The option is a lowercase letter l (Lima), not a capital letter I (India). -Ufoo Equals `#undef foo`. -compiledb name When this option appears anywhere, most preceding options are ignored and all following command line arguments are interpreted as a command with arguments that will be executed to produce the compilation database. For example: This will execute `make -DFOO -w` and attempt to extract the compile and archive commands. Only POSIX operating systems are supported. The supported build system must output information about entering directories that is compatible with GNU make. The only compilers supported are `gcc` and `clang`. The only archiver supported is `ar`. Format specification: https://clang.llvm.org/docs/JSONCompilationDatabase.html Note: This option also produces information about libraries created with `ar cr` and includes it in the JSON file, which goes beyond the specification. -crt-import-path path Unless disabled by the -nostdlib option, every produced Go file imports the C runtime library. Default is `modernc.org/libc`. -export-defines "" Export C numeric/string defines as Go constants by capitalizing the first letter of the define's name. -export-defines prefix Export C numeric/string defines as Go constants by prefixing the define's name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-enums "" Export C enum constants as Go constants by capitalizing the first letter of the enum constant name. -export-enums prefix Export C enum constants as Go constants by prefixing the enum constant name with `prefix`. Name conflicts are resolved by adding a numeric suffix. 
-export-externs "" Export C extern definitions as Go definitions by capitalizing the first letter of the definition name. -export-externs prefix Export C extern definitions as Go definitions by prefixing the definition name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-fields "" Export C struct fields as Go fields by capitalizing the first letter of the field name. -export-fields prefix Export C struct fields as Go fields by prefixing the field name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-structs "" Export tagged C struct/union types as Go types by capitalizing the first letter of the tag name. -export-structs prefix Export tagged C struct/union types as Go types by prefixing the tag name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -export-typedefs "" Export C typedefs as Go types by capitalizing the first letter of the typedef name. -export-typedefs prefix Export C typedefs as Go types by prefixing the typedef name with `prefix`. Name conflicts are resolved by adding a numeric suffix. -static-locals-prefix prefix Prefix C static local declarator names with 'prefix'. -host-config-cmd command This option has the same effect as setting `CCGO_CPP=command`. -host-config-opts comma-separated-list The separated items of the list are added to the invocation of the configuration command. -pkgname name Set the resulting Go package name to 'name'. Defaults to `main`. -script filename Ccgo does not yet have a concept of object files. All C files that are needed for producing the resulting Go file have to be compiled together and "linked" in memory. There are some problems with this approach, one of them being the situation where foo.c has to be compiled using, for example `-Dbar=42` and "linked" with baz.c that needs to be compiled with `-Dbar=314`. Or `bar` must not be defined at all for baz.c, etc. A script in a named file is a CSV file. It is opened like this (error handling omitted): The first field of every record in the CSV file is the directory to use. The remaining fields are the arguments of the ccgo command. This way different C files can be translated using different options. The CSV file may look something like: -volatile comma-separated-list The separated items of the list are added to the list of file scope extern variables that will be accessed atomically, as if their C declarator used the 'volatile' type specifier. Currently only C scalar types of size 4 and 8 bytes are supported. Other types/sizes will ignore both the volatile specifier and the -volatile option. -save-config path This option copies every header included during compilation or compile database generation to a file under the path argument. Additionally, the host configuration, ie. predefined macros, include search paths, OS and architecture, is stored in path/config.json. When this option is used, no Go code is generated, meaning no link phase occurs and thus the memory consumption should stay low. Passing an empty string as an argument of -save-config is the same as if the option is not present at all. Possibly useful when the option set is generated in code. This option is ignored when -compiledb <path> is used. --load-config path Note that this option must have the double dash prefix to distinguish it from -lfoo, the [traditional] short form of `-l foo`. This option configures the compiler using path/config.json. The include paths are adjusted to be relative to path. 
For example: Assume on machine A the default C preprocessor reports a system include search path "/usr/include". Running ccgo on A with -save-config /tmp/foo to compile foo.c that #includes <stdlib.h>, which is found in /usr/include/stdlib.h on the host, results in Assume /tmp/foo from machine A will be recursively copied to machine B, which may run a different operating system and/or architecture. Let the copy be for example in /tmp/bar. Using --load-config /tmp/bar will instruct ccgo to configure its preprocessor with a system include path /tmp/bar/usr/include and thus use the original machine A stdlib.h found there. When --load-config is used, no host configuration from a machine B cross C preprocessor/compiler is needed to transpile the foo.c source on machine B as if the compiler were running on machine A. The particular usefulness of this mechanism is for transpiling big projects for 32 bit architectures. There, the lack of an object format in ccgo, and thus the need to link everything in RAM, can require more memory than the system can handle. The way around this is possibly to run something like on machine A, transfer path/* to machine B and run the link phase there with eg. Note that the C sources for the project must be in the same path on both machines because the compile database stores absolute paths. It might be convenient to put the sources in path/src, the config in path/config, for example, and transfer the [archive of] path/ to the same directory on the second machine. That also solves the issue when ./configure generates files and the result differs per operating system or architecture. Passing an empty string as an argument of -load-config is the same as if the option is not present at all. Possibly useful when the option set is generated in code. These command line options don't take arguments. -E When this option is present the compiler does not produce any Go files and instead prints the preprocessor output to stdout. -all-errors Normally only the first 10 or so errors are shown. With this option the compiler will show all errors. -header Using this option suppresses the production of any function definitions. This is possibly useful for producing Go files from C header files. Including function signatures with -header. -func-sig Add this option to include function signatures when compiling headers (using -header). -nostdinc This option disables the default C include search paths. -nostdlib This option disables importing of the runtime library by the resulting Go code. -trace-pinning This option will print the positions and names of local declarators that are being pinned. -version Ignore all other options, print version and exit. -verbose-compiledb Enable verbose output when -compiledb is present. -ignore-undefined This option tells the linker to not insist on finding definitions for declarators that are not implicitly declared and used - but not defined. This might be useful when the intent is to define the missing functions in Go manually. Name conflict resolution for such declarator names may or may not be applied. -ignore-unsupported-alignment This option tells the compiler to not complain about alignments that Go cannot support. -trace-included-files This option outputs the path names of all included files. This option is ignored when -compiledb <path> is used. There may exist other options not listed above. Those should be considered temporary and/or unsupported and may be removed without notice. 
Alternatively, they may eventually get promoted to "documented" options.
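As a rough illustration of the -script option described earlier in this section, the following sketch shows a made-up script CSV and plain Go code that reads it with encoding/csv. The directory names, -D values, and reader settings are illustrative assumptions, not ccgo's documented behaviour:

    // script.csv: first field is the working directory, the rest are ccgo arguments.
    //   lib/foo,-Dbar=42,foo.c
    //   lib/baz,-Dbar=314,baz.c

    package main

    import (
        "encoding/csv"
        "log"
        "os"
    )

    func main() {
        f, err := os.Open("script.csv")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        r := csv.NewReader(f)
        r.FieldsPerRecord = -1 // records may have different numbers of fields
        records, err := r.ReadAll()
        if err != nil {
            log.Fatal(err)
        }
        for _, rec := range records {
            dir, args := rec[0], rec[1:]
            log.Printf("would run in %s: ccgo %v", dir, args)
        }
    }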
Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets. More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. All of the methods of this package use exponential backoff to retry calls that fail with certain errors, as described in https://cloud.google.com/storage/docs/exponential-backoff. Retrying continues indefinitely unless the controlling context is canceled or the client is closed. See context.WithTimeout and context.WithCancel. To start working with this package, create a client: The client will use your default application credentials. If you only wish to access public data, you can create an unauthenticated client with A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle: A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call Create on the handle: Note that although buckets are associated with projects, bucket names are global across all projects. Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use Attrs: An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data: Objects also have attributes, which you can fetch with Attrs: Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see https://cloud.google.com/storage/docs/access-control/iam). To list the ACLs of a bucket or object, obtain an ACLHandle and call its List method: You can also set and delete ACLs. Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations. For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it. Here is how to express that: You can obtain a URL that lets anyone read or write an object for a limited time. You don't need to create a client to do this. See the documentation of SignedURL for details. Errors returned by this client are often of the type [`googleapi.Error`](https://godoc.org/google.golang.org/api/googleapi#Error). These errors can be introspected for more information by type asserting to the richer `googleapi.Error` type. For example:
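The code snippets referenced above are omitted in this text. As a hedged sketch only (the project ID, bucket name, and object name are placeholders), typical usage of the documented calls might look like:

    package main

    import (
        "context"
        "fmt"
        "io"
        "log"

        "cloud.google.com/go/storage"
    )

    func main() {
        ctx := context.Background()

        // Create a client using default application credentials.
        client, err := storage.NewClient(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // A handle is only a reference; the bucket may not exist yet.
        bkt := client.Bucket("my-bucket")
        if err := bkt.Create(ctx, "my-project-id", nil); err != nil {
            log.Fatal(err)
        }

        // Write an object through the standard io.Writer interface.
        w := bkt.Object("notes.txt").NewWriter(ctx)
        fmt.Fprintln(w, "hello, world")
        if err := w.Close(); err != nil {
            log.Fatal(err)
        }

        // Read it back through io.Reader.
        r, err := bkt.Object("notes.txt").NewReader(ctx)
        if err != nil {
            log.Fatal(err)
        }
        defer r.Close()
        data, err := io.ReadAll(r)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", data)
    }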
Package restlayer is an API framework heavily inspired by the excellent Python Eve (http://python-eve.org/). It helps you create a comprehensive, customizable, and secure REST (graph) API on top of pluggable backend storages with no boilerplate code so you can focus on your business logic. Implemented as a net/http middleware, it plays well with other middleware like CORS (http://github.com/rs/cors) and is net/context aware thanks to xhandler. REST Layer is an opinionated framework. Unlike many API frameworks, you don’t directly control the routing and you don’t have to write handlers. You just define resources and sub-resources with a schema, and the framework automatically figures out what routes to generate behind the scenes. You don’t have to take care of the HTTP headers and responses, JSON encoding, etc. either; REST Layer handles HTTP conditional requests, caching, and integrity checking for you. A powerful and extensible validation engine makes sure that data comes pre-validated to your custom storage handlers. Generic resource handlers for MongoDB (http://github.com/clarify/rested/storers/mongo) and other databases are also available, so you have little to no code to write to make the whole system work. Moreover, REST Layer lets you create a graph API by linking resources between them. Thanks to its advanced field selection syntax, you can gather resources and their dependencies in a single request, saving you from costly network roundtrips. REST Layer is composed of several sub-packages: See https://github.com/clarify/rested/blob/master/README.md for full REST Layer documentation.
Package gopacket provides packet decoding for the Go language. gopacket contains many sub-packages with additional functionality you may find useful, including: Also, if you're looking to dive right into code, see the examples subdirectory for numerous simple binaries built using gopacket libraries. gopacket takes in packet data as a []byte and decodes it into a packet with a non-zero number of "layers". Each layer corresponds to a protocol within the bytes. Once a packet has been decoded, the layers of the packet can be requested from the packet. Packets can be decoded from a number of starting points. Many of our base types implement Decoder, which allow us to decode packets for which we don't have full data. Most of the time, you won't just have a []byte of packet data lying around. Instead, you'll want to read packets in from somewhere (file, interface, etc) and process them. To do that, you'll want to build a PacketSource. First, you'll need to construct an object that implements the PacketDataSource interface. There are implementations of this interface bundled with gopacket in the gopacket/pcap and gopacket/pfring subpackages... see their documentation for more information on their usage. Once you have a PacketDataSource, you can pass it into NewPacketSource, along with a Decoder of your choice, to create a PacketSource. Once you have a PacketSource, you can read packets from it in multiple ways. See the docs for PacketSource for more details. The easiest method is the Packets function, which returns a channel, then asynchronously writes new packets into that channel, closing the channel if the packetSource hits an end-of-file. You can change the decoding options of the packetSource by setting fields in packetSource.DecodeOptions... see the following sections for more details. gopacket optionally decodes packet data lazily, meaning it only decodes a packet layer when it needs to handle a function call. Lazily-decoded packets are not concurrency-safe. Since layers have not all been decoded, each call to Layer() or Layers() has the potential to mutate the packet in order to decode the next layer. If a packet is used in multiple goroutines concurrently, don't use gopacket.Lazy. Then gopacket will decode the packet fully, and all future function calls won't mutate the object. By default, gopacket will copy the slice passed to NewPacket and store the copy within the packet, so future mutations to the bytes underlying the slice don't affect the packet and its layers. If you can guarantee that the underlying slice bytes won't be changed, you can use NoCopy to tell gopacket.NewPacket, and it'll use the passed-in slice itself. The fastest method of decoding is to use both Lazy and NoCopy, but note from the many caveats above that for some implementations either or both may be dangerous. During decoding, certain layers are stored in the packet as well-known layer types. For example, IPv4 and IPv6 are both considered NetworkLayer layers, while TCP and UDP are both TransportLayer layers. We support 4 layers, corresponding to the 4 layers of the TCP/IP layering scheme (roughly analogous to layers 2, 3, 4, and 7 of the OSI model). To access these, you can use the packet.LinkLayer, packet.NetworkLayer, packet.TransportLayer, and packet.ApplicationLayer functions. Each of these functions returns a corresponding interface (gopacket.{Link,Network,Transport,Application}Layer).
The first three provide methods for getting src/dst addresses for that particular layer, while the final layer provides a Payload function to get payload data. This is helpful, for example, to get payloads for all packets regardless of their underlying data type: A particularly useful layer is ErrorLayer, which is set whenever there's an error parsing part of the packet. Note that we don't return an error from NewPacket because we may have decoded a number of layers successfully before running into our erroneous layer. You may still be able to get your Ethernet and IPv4 layers correctly, even if your TCP layer is malformed. gopacket has two useful objects, Flow and Endpoint, for communicating in a protocol independent manner the fact that a packet is coming from A and going to B. The general layer types LinkLayer, NetworkLayer, and TransportLayer all provide methods for extracting their flow information, without worrying about the type of the underlying Layer. A Flow is a simple object made up of a set of two Endpoints, one source and one destination. It details the sender and receiver of the Layer of the Packet. An Endpoint is a hashable representation of a source or destination. For example, for LayerTypeIPv4, an Endpoint contains the IP address bytes for a v4 IP packet. A Flow can be broken into Endpoints, and Endpoints can be combined into Flows: Both Endpoint and Flow objects can be used as map keys, and the equality operator can compare them, so you can easily group together all packets based on endpoint criteria: For load-balancing purposes, both Flow and Endpoint have FastHash() functions, which provide quick, non-cryptographic hashes of their contents. Of particular importance is the fact that Flow FastHash() is symmetric: A->B will have the same hash as B->A. An example usage could be: This allows us to split up a packet stream while still making sure that each stream sees all packets for a flow (and its bidirectional opposite). If your network has some strange encapsulation, you can implement your own decoder. In this example, we handle Ethernet packets which are encapsulated in a 4-byte header. See the docs for Decoder and PacketBuilder for more details on how coding decoders works, or look at RegisterLayerType and RegisterEndpointType to see how to add layer/endpoint types to gopacket. TLDR: DecodingLayerParser takes about 10% of the time as NewPacket to decode packet data, but only for known packet stacks. Basic decoding using gopacket.NewPacket or PacketSource.Packets is somewhat slow due to its need to allocate a new packet and every respective layer. It's very versatile and can handle all known layer types, but sometimes you really only care about a specific set of layers regardless, so that versatility is wasted. DecodingLayerParser avoids memory allocation altogether by decoding packet layers directly into preallocated objects, which you can then reference to get the packet's information. A quick example: The important thing to note here is that the parser is modifying the passed in layers (eth, ip4, ip6, tcp) instead of allocating new ones, thus greatly speeding up the decoding process. It's even branching based on layer type... it'll handle an (eth, ip4, tcp) or (eth, ip6, tcp) stack. However, it won't handle any other type... since no other decoders were passed in, an (eth, ip4, udp) stack will stop decoding after ip4, and only pass back [LayerTypeEthernet, LayerTypeIPv4] through the 'decoded' slice (along with an error saying it can't decode a UDP packet).
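The "quick example" mentioned above is not reproduced in this text; a minimal sketch of the same pattern follows (the module path, helper name, and the rawPackets input are assumptions for illustration):

    package main

    import (
        "fmt"
        "log"

        "github.com/google/gopacket"
        "github.com/google/gopacket/layers"
    )

    // decodeAll is a hypothetical helper; rawPackets is a slice of raw packet bytes.
    func decodeAll(rawPackets [][]byte) {
        var (
            eth layers.Ethernet
            ip4 layers.IPv4
            ip6 layers.IPv6
            tcp layers.TCP
        )
        // The parser decodes directly into the preallocated layer structs above.
        parser := gopacket.NewDecodingLayerParser(layers.LayerTypeEthernet, &eth, &ip4, &ip6, &tcp)
        decoded := []gopacket.LayerType{}
        for _, data := range rawPackets {
            if err := parser.DecodeLayers(data, &decoded); err != nil {
                log.Println("decoding stopped early:", err)
            }
            for _, t := range decoded {
                switch t {
                case layers.LayerTypeIPv4:
                    fmt.Println("IPv4", ip4.SrcIP, "->", ip4.DstIP)
                case layers.LayerTypeTCP:
                    fmt.Println("TCP", tcp.SrcPort, "->", tcp.DstPort)
                }
            }
        }
    }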
Unfortunately, not all layers can be used by DecodingLayerParser... only those implementing the DecodingLayer interface are usable. Also, it's possible to create DecodingLayers that are not themselves Layers... see layers.IPv6ExtensionSkipper for an example of this. As well as offering the ability to decode packet data, gopacket allows you to create packets from scratch. A number of gopacket layers implement the SerializableLayer interface; these layers can be serialized to a []byte in the following manner: SerializeTo PREPENDS the given layer onto the SerializeBuffer, treating the current buffer's Bytes() slice as the payload of the serializing layer. Therefore, you can serialize an entire packet by serializing a set of layers in reverse order (Payload, then TCP, then IP, then Ethernet, for example). The gopacket.SerializeLayers helper function does exactly that. To generate a (empty and useless, because no fields are set) Ethernet(IPv4(TCP(Payload))) packet, for example, you can run: If you use gopacket, you'll almost definitely want to make sure gopacket/layers is imported, since when imported it sets all the LayerType variables and fills in a lot of interesting variables/maps (DecodersByLayerName, etc). Therefore, it's recommended that even if you don't use any layers functions directly, you still import with:
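The import referred to at the end of the paragraph above is the usual blank import of the layers subpackage; the exact module path below is an assumption:

    import (
        _ "github.com/google/gopacket/layers"
    )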
Package storage intends to provide a unified storage layer for Golang. - Production ready: high test coverage, enterprise storage software adaptation, semantic versioning, well documented. - High performance: more code generation, less runtime reflection. - Vendor agnostic: more generic abstraction, fewer internal details. The most common case for using a Storager service could be the following: 1. Init a storager. 2. Use the Storager API to maintain data.
Package gopacket provides packet decoding for the Go language. gopacket contains many sub-packages with additional functionality you may find useful, including: Also, if you're looking to dive right into code, see the examples subdirectory for numerous simple binaries built using gopacket libraries. Minimum Go version required is 1.5. gopacket takes in packet data as a []byte and decodes it into a packet with a non-zero number of "layers". Each layer corresponds to a protocol within the bytes. Once a packet has been decoded, the layers of the packet can be requested from the packet. Packets can be decoded from a number of starting points. Many of our base types implement Decoder, which allow us to decode packets for which we don't have full data. Most of the time, you won't just have a []byte of packet data lying around. Instead, you'll want to read packets in from somewhere (file, interface, etc) and process them. To do that, you'll want to build a PacketSource. First, you'll need to construct an object that implements the PacketDataSource interface. There are implementations of this interface bundled with gopacket in the gopacket/pcap and gopacket/pfring subpackages... see their documentation for more information on their usage. Once you have a PacketDataSource, you can pass it into NewPacketSource, along with a Decoder of your choice, to create a PacketSource. Once you have a PacketSource, you can read packets from it in multiple ways. See the docs for PacketSource for more details. The easiest method is the Packets function, which returns a channel, then asynchronously writes new packets into that channel, closing the channel if the packetSource hits an end-of-file. You can change the decoding options of the packetSource by setting fields in packetSource.DecodeOptions... see the following sections for more details. gopacket optionally decodes packet data lazily, meaning it only decodes a packet layer when it needs to handle a function call. Lazily-decoded packets are not concurrency-safe. Since layers have not all been decoded, each call to Layer() or Layers() has the potential to mutate the packet in order to decode the next layer. If a packet is used in multiple goroutines concurrently, don't use gopacket.Lazy. Then gopacket will decode the packet fully, and all future function calls won't mutate the object. By default, gopacket will copy the slice passed to NewPacket and store the copy within the packet, so future mutations to the bytes underlying the slice don't affect the packet and its layers. If you can guarantee that the underlying slice bytes won't be changed, you can use NoCopy to tell gopacket.NewPacket, and it'll use the passed-in slice itself. The fastest method of decoding is to use both Lazy and NoCopy, but note from the many caveats above that for some implementations either or both may be dangerous. During decoding, certain layers are stored in the packet as well-known layer types. For example, IPv4 and IPv6 are both considered NetworkLayer layers, while TCP and UDP are both TransportLayer layers. We support 4 layers, corresponding to the 4 layers of the TCP/IP layering scheme (roughly analogous to layers 2, 3, 4, and 7 of the OSI model). To access these, you can use the packet.LinkLayer, packet.NetworkLayer, packet.TransportLayer, and packet.ApplicationLayer functions. Each of these functions returns a corresponding interface (gopacket.{Link,Network,Transport,Application}Layer).
The first three provide methods for getting src/dst addresses for that particular layer, while the final layer provides a Payload function to get payload data. This is helpful, for example, to get payloads for all packets regardless of their underlying data type: A particularly useful layer is ErrorLayer, which is set whenever there's an error parsing part of the packet. Note that we don't return an error from NewPacket because we may have decoded a number of layers successfully before running into our erroneous layer. You may still be able to get your Ethernet and IPv4 layers correctly, even if your TCP layer is malformed. gopacket has two useful objects, Flow and Endpoint, for communicating in a protocol independent manner the fact that a packet is coming from A and going to B. The general layer types LinkLayer, NetworkLayer, and TransportLayer all provide methods for extracting their flow information, without worrying about the type of the underlying Layer. A Flow is a simple object made up of a set of two Endpoints, one source and one destination. It details the sender and receiver of the Layer of the Packet. An Endpoint is a hashable representation of a source or destination. For example, for LayerTypeIPv4, an Endpoint contains the IP address bytes for a v4 IP packet. A Flow can be broken into Endpoints, and Endpoints can be combined into Flows: Both Endpoint and Flow objects can be used as map keys, and the equality operator can compare them, so you can easily group together all packets based on endpoint criteria: For load-balancing purposes, both Flow and Endpoint have FastHash() functions, which provide quick, non-cryptographic hashes of their contents. Of particular importance is the fact that Flow FastHash() is symmetric: A->B will have the same hash as B->A. An example usage could be: This allows us to split up a packet stream while still making sure that each stream sees all packets for a flow (and its bidirectional opposite). If your network has some strange encapsulation, you can implement your own decoder. In this example, we handle Ethernet packets which are encapsulated in a 4-byte header. See the docs for Decoder and PacketBuilder for more details on how coding decoders works, or look at RegisterLayerType and RegisterEndpointType to see how to add layer/endpoint types to gopacket. TLDR: DecodingLayerParser takes about 10% of the time as NewPacket to decode packet data, but only for known packet stacks. Basic decoding using gopacket.NewPacket or PacketSource.Packets is somewhat slow due to its need to allocate a new packet and every respective layer. It's very versatile and can handle all known layer types, but sometimes you really only care about a specific set of layers regardless, so that versatility is wasted. DecodingLayerParser avoids memory allocation altogether by decoding packet layers directly into preallocated objects, which you can then reference to get the packet's information. A quick example: The important thing to note here is that the parser is modifying the passed in layers (eth, ip4, ip6, tcp) instead of allocating new ones, thus greatly speeding up the decoding process. It's even branching based on layer type... it'll handle an (eth, ip4, tcp) or (eth, ip6, tcp) stack. However, it won't handle any other type... since no other decoders were passed in, an (eth, ip4, udp) stack will stop decoding after ip4, and only pass back [LayerTypeEthernet, LayerTypeIPv4] through the 'decoded' slice (along with an error saying it can't decode a UDP packet). 
Unfortunately, not all layers can be used by DecodingLayerParser... only those implementing the DecodingLayer interface are usable. Also, it's possible to create DecodingLayers that are not themselves Layers... see layers.IPv6ExtensionSkipper for an example of this. As well as offering the ability to decode packet data, gopacket allows you to create packets from scratch. A number of gopacket layers implement the SerializableLayer interface; these layers can be serialized to a []byte in the following manner: SerializeTo PREPENDS the given layer onto the SerializeBuffer, treating the current buffer's Bytes() slice as the payload of the serializing layer. Therefore, you can serialize an entire packet by serializing a set of layers in reverse order (Payload, then TCP, then IP, then Ethernet, for example). The gopacket.SerializeLayers helper function does exactly that. To generate a (empty and useless, because no fields are set) Ethernet(IPv4(TCP(Payload))) packet, for example, you can run: If you use gopacket, you'll almost definitely want to make sure gopacket/layers is imported, since when imported it sets all the LayerType variables and fills in a lot of interesting variables/maps (DecodersByLayerName, etc). Therefore, it's recommended that even if you don't use any layers functions directly, you still import with:
Package goaction enables writing Github Actions in Go. The idea is: write a standard Go script, one that works with `go run`, and use it as a Github action. The script's inputs - flags and environment variables - are set through the Github action API. This project will generate all the required files for the script (this generation can be done automatically with Github action integration). The library also exposes a neat API to get workflow information. - [x] Write a Go script. - [x] Add `goaction` configuration in `.github/workflows/goaction.yml`. - [x] Push the project to Github. See the simplest example of a Goaction script: (posener/goaction-example) https://github.com/posener/goaction-example, or an example that demonstrates using Github APIs: (posener/goaction-issues-example) https://github.com/posener/goaction-issues-example. Write a Github Action by writing Go code! Just start a Go module with a main package, and execute it as a Github action using Goaction, or from the command line using `go run`. A Go executable can get inputs from command line flags and from environment variables. Github actions should have an `action.yml` file that defines this API. Goaction bridges the gap by parsing the Go code and creating this file automatically for you. The main package inputs should be defined with the standard `flag` package for command line arguments, or by `os.Getenv` for environment variables. These inputs define the API of the program, and `goaction` automatically detects them and creates the `action.yml` file from them. Additionally, goaction also provides a library that exposes all of the Github action environment in an easy-to-use API. See the documentation for more information. Code segments which should run only in a Github action (called "CI mode"), and not when the main package runs as a command line tool, should be protected by an `if goaction.CI { ... }` block. In order to convert the repository to a Github action, the goaction command line should run on the **"main file"** (described above). This command can be run manually (by ./cmd/goaction), but luckily `goaction` also comes as a Github action :-) The Goaction Github action keeps the Github action file updated according to the main Go file automatically. When a PR is made, goaction will post a review explaining what changes to expect. When a new commit is pushed, Goaction makes sure that the Github action files are updated if needed. Add the following content to `.github/workflows/goaction.yml` ./action.yml: A "metadata" file for Github actions. If this file exists, the repository is considered a Github action, and the file contains information that instructs how to invoke this action. See (metadata syntax) https://help.github.com/en/actions/building-actions/metadata-syntax-for-github-actions for more info. ./Dockerfile: A file that contains instructions for building a container that is used for Github actions. Github action uses this file in order to create a container image for the action. The container can also be built and tested manually: Goaction parses the Go script file and looks for annotations that extend the information that exists in the function calls. Goaction annotations are comments that start with `//goaction:` (no space after the slashes). They can only be set on a `var` definition. The following annotations are available: * `//goaction:required` - sets an input definition to be "required". * `//goaction:skip` - skips an input or output definition. * `//goaction:description <description>` - add description for `os.Getenv`.
* `//goaction:default <value>` - add a default value for `os.Getenv`. A list of projects which are using Goaction (please send a PR if your project uses goaction and does not appear here): * (posener/goreadme) http://github.com/posener/goreadme
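As a hedged sketch of how the pieces above fit together (the module path github.com/posener/goaction, the flag name, the environment variable, and the greeting logic are assumptions for illustration), a minimal Goaction-style main package might look like:

    package main

    import (
        "flag"
        "fmt"
        "os"

        "github.com/posener/goaction"
    )

    var (
        // Command line flag: becomes an action input in the generated action.yml.
        name = flag.String("name", "world", "Name to greet")

        //goaction:required
        token = os.Getenv("GITHUB_TOKEN")
    )

    func main() {
        flag.Parse()
        fmt.Printf("Hello, %s!\n", *name)

        if goaction.CI {
            // This block runs only when executed as a Github action ("CI mode").
            fmt.Println("Running inside Github Actions, token set:", token != "")
        }
    }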
Package kyber provides a toolbox of advanced cryptographic primitives, for applications that need more than straightforward signing and encryption. This top level package defines the interfaces to cryptographic primitives designed to be independent of specific cryptographic algorithms, to facilitate upgrading applications to new cryptographic algorithms or switching to alternative algorithms for experimentation purposes. This toolkit's public-key crypto API includes a kyber.Group interface supporting a broad class of group-based public-key primitives including DSA-style integer residue groups and elliptic curve groups. Users of this API can write higher-level crypto algorithms such as zero-knowledge proofs without knowing or caring exactly what kind of group, let alone which precise security parameters or elliptic curves, are being used. The kyber.Group interface supports the standard algebraic operations on group elements and scalars that nontrivial public-key algorithms tend to rely on. The interface uses additive group terminology typical for elliptic curves, such that point addition is homomorphically equivalent to adding their (potentially secret) scalar multipliers. But the API and its operations apply equally well to DSA-style integer groups. As a trivial example, generating a public/private keypair is as simple as the two statements sketched at the end of this package description. The first statement picks a private key (Scalar) from the suite's source of cryptographic random or pseudo-random bits, while the second performs elliptic curve scalar multiplication of the curve's standard base point (indicated by the 'nil' argument to Mul) by the scalar private key 'a'. Similarly, computing a Diffie-Hellman shared secret using Alice's private key 'a' and Bob's public key 'B' can be done via: Note that we use 'Mul' rather than 'Exp' here because the library uses the additive-group terminology common for elliptic curve crypto, rather than the multiplicative-group terminology of traditional integer groups - but the two are semantically equivalent and the interface itself works for both elliptic curve and integer groups. Various sub-packages provide several specific implementations of these cryptographic interfaces. In particular, the 'group/mod' sub-package provides implementations of modular integer groups underlying conventional DSA-style algorithms. The `group/nist` package provides NIST-standardized elliptic curves built on the Go crypto library. The 'group/edwards25519' sub-package provides the kyber.Group interface using the popular Ed25519 curve. Other sub-packages build more interesting high-level cryptographic tools atop these primitive interfaces, including: - share: Polynomial commitment and verifiable Shamir secret splitting for implementing verifiable 't-of-n' threshold cryptographic schemes. This can be used to encrypt a message so that any 2 out of 3 receivers must work together to decrypt it, for example. - proof: An implementation of the general Camenisch/Stadler framework for discrete logarithm knowledge proofs. This system supports both interactive and non-interactive proofs of a wide variety of statements such as, "I know the secret x associated with public key X or I know the secret y associated with public key Y", without revealing anything about either secret or even which branch of the "or" clause is true. - sign: The sign directory contains different signature schemes.
- sign/anon provides anonymous and pseudonymous public-key encryption and signing, where the sender of a signed message or the receiver of an encrypted message is defined as an explicit anonymity set containing several public keys rather than just one. For example, a member of an organization's board of trustees might prove to be a member of the board without revealing which member she is. - sign/cosi provides a collective signature algorithm, where a group of signers creates a unique, compact and efficiently verifiable signature using the Schnorr signature as a basis. - sign/eddsa provides a kyber-native implementation of the EdDSA signature scheme. - sign/schnorr provides a basic vanilla Schnorr signature scheme implementation. - shuffle: Verifiable cryptographic shuffles of ElGamal ciphertexts, which can be used to implement (for example) voting or auction schemes that keep the sources of individual votes or bids private without anyone having to trust more than one of the shuffler(s) to shuffle votes/bids honestly. For now, this library should be considered experimental: it will definitely be changing in non-backward-compatible ways, and it will need independent security review before it should be considered ready for use in security-critical applications. However, we intend to bring the library closer to stability and real-world usability as quickly as development resources permit, and as interest and application demand dictate. As should be obvious, this library is intended to be used by developers who are at least moderately knowledgeable about cryptography. If you want a crypto library that makes it easy to implement "basic crypto" functionality correctly - i.e., plain public-key encryption and signing - then [NaCl secretbox](https://godoc.org/golang.org/x/crypto/nacl/secretbox) may be a better choice. This toolkit's purpose is to make it possible - and preferably easy - to do slightly more interesting things that most current crypto libraries don't support effectively. The one existing crypto library that this toolkit is probably most comparable to is the Charm rapid prototyping library for Python (https://charm-crypto.com/category/charm). This library incorporates and/or builds on existing code from a variety of sources, as documented in the relevant sub-packages.
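Here is the keypair and Diffie-Hellman sketch referred to above, using the edwards25519 suite. The suite choice and variable names are illustrative assumptions; the calls shown follow the pattern described in this package's text:

    package main

    import (
        "fmt"

        "go.dedis.ch/kyber/v3/group/edwards25519"
    )

    func main() {
        suite := edwards25519.NewBlakeSHA256Ed25519()

        // Alice: pick a private scalar and derive the public point.
        a := suite.Scalar().Pick(suite.RandomStream())
        A := suite.Point().Mul(a, nil) // nil means the standard base point

        // Bob does the same.
        b := suite.Scalar().Pick(suite.RandomStream())
        B := suite.Point().Mul(b, nil)

        // Diffie-Hellman: both sides compute the same shared point.
        sAlice := suite.Point().Mul(a, B)
        sBob := suite.Point().Mul(b, A)
        fmt.Println(sAlice.Equal(sBob)) // true
    }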
Package kstat provides a Go interface to the Solaris/OmniOS kstat(s) system for user-level access to a lot of kernel statistics. For more documentation on kstats, see kstat(1) and kstat(3kstat). The package can retrieve what are called 'named' kstat statistics, IO statistics, and the most common additional types of 'raw' statistics, which covers almost all kstats you will normally find in the kernel. You can see the names and types of other kstats, but not currently retrieve data for them. Named statistics are the most common type for general information; IO statistics are exported by disks and some other things. Supported additional raw kstats are unix:0:sysinfo, unix:0:vminfo, unix:0:var, and mnt:*:mntinfo. General usage for named statistics: call Open() to obtain a Token, then call GetNamed() on it to obtain Named(s) for specific statistics. Note that this always gives you the very latest value for the statistic. If you want a number of statistics from the same module:inst:name triplet (eg several network counters from the same network interface) and you want them to all have been gathered at the same time, you need to call .Lookup() to obtain a KStat and then repeatedly call its .GetNamed() (this is also slightly more efficient). The short version: a kstat is a collection of some related statistics, eg various network counters for a particular network interface. A Token is a handle for a collection of kstats. You go collection (Token) -> kstat (KStat) -> specific statistic (Named) in order to retrieve the value of a specific statistic. (IO stats are retrieved all at once with GetIO(), because they come to us from the kernel as one single struct so that's what you get.) This is a cgo-based package. Cross compilation is up to you. Goroutine safety is in no way guaranteed because the underlying C kstat library is probably not thread or goroutine safe (and there are some all-Go concurrency races involving .Close()). This package may leak memory, especially since the Solaris kstat manpage is not clear on the requirements here. However I believe it's reasonably memory safe. It's possible to totally corrupt memory with use-after-free errors if you do operations on kstats after calling Token.Close(), although we try to avoid that. NOTE: this package is quite young. The API may well change as I (and other people) gain more experience with it. In general this is not going to be as lean and mean as calling C directly, partly because of intrinsic CGo overheads and partly because we do more memory allocation and deallocation than a C program would (partly because we prioritize not leaking memory). We support named kstats and IO kstats (KSTAT_TYPE_NAMED and KSTAT_TYPE_IO / kstat_io_t respectively). kstat(1) also knows about a number of magic specific 'raw' stats (which are generally custom C structs); of these we support unix:0:sysinfo, unix:0:vminfo, unix:0:var, and mnt:*:mntinfo for NFS filesystem mounts. In theory kstat supports general timer and interrupt stats. In practice there is no use of KSTAT_TYPE_TIMER in the current Illumos kernel source and very little use of KSTAT_TYPE_INTR (mostly by very old hardware drivers, although the vioif driver uses it too). Since I can't test KSTAT_TYPE_INTR stats, we don't currently support it. There are also a few additional KSTAT_TYPE_RAW raw stats that we don't support, mostly because they seem to be effectively obsolete. These specific raw stats can be found listed in the Illumos source code in cmd/stat/kstat/kstat.h in the ks_raw_lookup array. 
See cmd/stat/kstat/kstat.c for how they're interpreted. If you need access to one of these kstats, the KStat.CopyTo() and KStat.Raw() methods give you an escape hatch to roll your own. You'll probably need to use cgo to generate an appropriate Go struct that matches the C struct you need. My notes on this process may be helpful: https://utcc.utoronto.ca/~cks/space/blog/programming/GoCGoCompatibleStructs Author: Chris Siebenmann https://github.com/siebenmann/go-kstat Copyright: standard Go copyright. (If you're reading this documentation on a non-Solaris platform, you're probably not seeing the detailed API documentation for constants, types, and so on because of tooling limitations in godoc et al.)
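Putting the general usage description above into code, here is a minimal, hedged sketch (the module, instance, name, and statistic names are made-up placeholders, and exact method signatures should be checked against the package documentation):

    package main

    import (
        "fmt"
        "log"

        kstat "github.com/siebenmann/go-kstat"
    )

    func main() {
        tok, err := kstat.Open()
        if err != nil {
            log.Fatal(err)
        }
        defer tok.Close()

        // One-shot: the very latest value of a single named statistic.
        n, err := tok.GetNamed("link", 0, "net0", "obytes64")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(n)

        // Several statistics from the same module:inst:name, gathered together.
        ks, err := tok.Lookup("link", 0, "net0")
        if err != nil {
            log.Fatal(err)
        }
        rb, _ := ks.GetNamed("rbytes64")
        ob, _ := ks.GetNamed("obytes64")
        fmt.Println(rb, ob)
    }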
Package datastore provides a client for Google Cloud Datastore. See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package. Entities are the unit of storage and are associated with a key. A key consists of an optional parent key, a string application ID, a string kind (also known as an entity type), and either a StringID or an IntID. A StringID is also known as an entity name or key name. It is valid to create a key with a zero StringID and a zero IntID; this is called an incomplete key, and does not refer to any saved entity. Putting an entity into the datastore under an incomplete key will cause a unique key to be generated for that entity, with a non-zero IntID. An entity's contents are a mapping from case-sensitive field names to values. Valid value types are: Slices of structs are valid, as are structs that contain slices. The Get and Put functions load and save an entity's contents. An entity's contents are typically represented by a struct pointer. Example code: GetMulti, PutMulti and DeleteMulti are batch versions of the Get, Put and Delete functions. They take a []*Key instead of a *Key, and may return a datastore.MultiError when encountering partial failure. Mutate generalizes PutMulti and DeleteMulti to a sequence of any Datastore mutations. It takes a series of mutations created with NewInsert, NewUpdate, NewUpsert and NewDelete and applies them atomically. An entity's contents can be represented by a variety of types. These are typically struct pointers, but can also be any type that implements the PropertyLoadSaver interface. If using a struct pointer, you do not have to explicitly implement the PropertyLoadSaver interface; the datastore will automatically convert via reflection. If a struct pointer does implement PropertyLoadSaver then those methods will be used in preference to the default behavior for struct pointers. Struct pointers are more strongly typed and are easier to use; PropertyLoadSavers are more flexible. The actual types passed do not have to match between Get and Put calls or even across different calls to datastore. It is valid to put a *PropertyList and get that same entity as a *myStruct, or put a *myStruct0 and get a *myStruct1. Conceptually, any entity is saved as a sequence of properties, and is loaded into the destination value on a property-by-property basis. When loading into a struct pointer, an entity that cannot be completely represented (such as a missing field) will result in an ErrFieldMismatch error but it is up to the caller whether this error is fatal, recoverable or ignorable. By default, for struct pointers, all properties are potentially indexed, and the property name is the same as the field name (and hence must start with an upper case letter). Fields may have a `datastore:"name,options"` tag. The tag name is the property name, which must be one or more valid Go identifiers joined by ".", but may start with a lower case letter. An empty tag name means to just use the field name. A "-" tag name means that the datastore will ignore that field. The only valid options are "omitempty", "noindex" and "flatten". If the options include "omitempty" and the value of the field is a zero value, then the field will be omitted on Save. Zero values are best defined in the golang spec (https://golang.org/ref/spec#The_zero_value). Struct field values will never be empty, except for nil pointers. If options include "noindex" then the field will not be indexed. 
All fields are indexed by default. Strings or byte slices longer than 1500 bytes cannot be indexed; fields used to store long strings and byte slices must be tagged with "noindex" or they will cause Put operations to fail. For a nested struct field, the options may also include "flatten". This indicates that the immediate fields and any nested substruct fields of the nested struct should be flattened. See below for examples. To use multiple options together, separate them by a comma. The order does not matter. If the options is "" then the comma may be omitted. Example code: A field of slice type corresponds to a Datastore array property, except for []byte, which corresponds to a Datastore blob. Zero-length slice fields are not saved. Slice fields of length 1 or greater are saved as Datastore arrays. When a zero-length Datastore array is loaded into a slice field, the slice field remains unchanged. If a non-array value is loaded into a slice field, the result will be a slice with one element, containing the value. Loading a Datastore Null into a basic type (int, float, etc.) results in a zero value. Loading a Null into a slice of basic type results in a slice of size 1 containing the zero value. Loading a Null into a pointer field results in nil. Loading a Null into a field of struct type is an error. A struct field can be a pointer to a signed integer, floating-point number, string or bool. Putting a non-nil pointer will store its dereferenced value. Putting a nil pointer will store a Datastore Null property, unless the field is marked omitempty, in which case no property will be stored. Loading a Null into a pointer field sets the pointer to nil. Loading any other value allocates new storage with the value, and sets the field to point to it. If the struct contains a *datastore.Key field tagged with the name "__key__", its value will be ignored on Put. When reading the Entity back into the Go struct, the field will be populated with the *datastore.Key value used to query for the Entity. Example code: If the struct pointed to contains other structs, then the nested or embedded structs are themselves saved as Entity values. For example, given these definitions: then an Outer would have one property, Inner, encoded as an Entity value. If an outer struct is tagged "noindex" then all of its implicit flattened fields are effectively "noindex". If the Inner struct contains a *Key field with the name "__key__", like so: then the value of K will be used as the Key for Inner, represented as an Entity value in datastore. If any nested struct fields should be flattened, instead of encoded as Entity values, the nested struct field should be tagged with the "flatten" option. For example, given the following: an Outer's properties would be equivalent to those of: Note that the "flatten" option cannot be used for Entity value fields. The server will reject any dotted field names for an Entity value. An entity's contents can also be represented by any type that implements the PropertyLoadSaver interface. This type may be a struct pointer, but it does not have to be. The datastore package will call Load when getting the entity's contents, and Save when putting the entity's contents. Possible uses include deriving non-stored fields, verifying fields, or indexing a field only if its value is positive. Example code: The *PropertyList type implements PropertyLoadSaver, and can therefore hold an arbitrary entity's contents. 
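To make the tag rules above concrete, here is a hedged sketch of a struct using the documented options (the type and field names are invented; it assumes the cloud.google.com/go/datastore and time packages are imported):

    type Article struct {
        Title     string                                  // indexed; property name "Title"
        Body      string         `datastore:",noindex"`   // long text: excluded from indexes
        Tags      []string                                // saved as a Datastore array property
        CreatedAt time.Time      `datastore:"created_at"` // custom property name
        Draft     bool           `datastore:"-"`          // ignored by the datastore
        Summary   string         `datastore:",omitempty"` // omitted on Save when zero
        K         *datastore.Key `datastore:"__key__"`    // ignored on Put, populated on Get
    }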
If a type implements the PropertyLoadSaver interface, it may also want to implement the KeyLoader interface. The KeyLoader interface exists to allow implementations of PropertyLoadSaver to also load an Entity's Key into the Go type. This type may be a struct pointer, but it does not have to be. The datastore package will call LoadKey when getting the entity's contents, after calling Load. Example code: To load a Key into a struct which does not implement the PropertyLoadSaver interface, see the "Key Field" section above. Queries retrieve entities based on their properties or key's ancestry. Running a query yields an iterator of results: either keys or (key, entity) pairs. Queries are re-usable and it is safe to call Query.Run from concurrent goroutines. Iterators are not safe for concurrent use. Queries are immutable, and are either created by calling NewQuery, or derived from an existing query by calling a method like Filter or Order that returns a new query value. A query is typically constructed by calling NewQuery followed by a chain of zero or more such methods. These methods are: Example code: Client.RunInTransaction runs a function in a transaction. Example code: Pass the ReadOnly option to RunInTransaction if your transaction is used only for Get, GetMulti or queries. Read-only transactions are more efficient. This package supports the Cloud Datastore emulator, which is useful for testing and development. Environment variables are used to indicate that datastore traffic should be directed to the emulator instead of the production Datastore service. To install and set up the emulator and its environment variables, see the documentation at https://cloud.google.com/datastore/docs/tools/datastore-emulator.
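The "Example code" referenced above is omitted in this text; as a rough sketch (the kind, field, entity, and project names are placeholders, and Task is a hypothetical struct), running a query and a read-modify-write transaction might look like:

    ctx := context.Background()
    client, err := datastore.NewClient(ctx, "my-project-id")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // Query: fetch up to 10 unfinished tasks, newest first.
    q := datastore.NewQuery("Task").Filter("Done =", false).Order("-Created").Limit(10)
    var tasks []Task // Task is a hypothetical struct with Done and Created fields
    if _, err := client.GetAll(ctx, q, &tasks); err != nil {
        log.Fatal(err)
    }

    // Transaction: read, modify, and write a single entity atomically.
    key := datastore.NameKey("Task", "sample-task", nil)
    _, err = client.RunInTransaction(ctx, func(tx *datastore.Transaction) error {
        var t Task
        if err := tx.Get(key, &t); err != nil {
            return err
        }
        t.Done = true
        _, err := tx.Put(key, &t)
        return err
    })
    if err != nil {
        log.Fatal(err)
    }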
Package edgedb is the official Go driver for EdgeDB. Additionally, github.com/edgedb/edgedb-go/cmd/edgeql-go is a code generator that generates Go functions from edgeql files. Typical client usage looks like this: We recommend using environment variables for connection parameters. See the client connection docs for more information. You may also connect to a database using a DSN: Or you can use Option fields. edgedb never returns underlying errors directly. If you are checking for things like context expiration, use errors.Is() or errors.As(). Most errors returned by the edgedb package will satisfy the edgedb.Error interface, which has methods for introspecting. The following list shows the marshal/unmarshal mapping between EdgeDB types and Go types: Note that EdgeDB's std::duration type is represented in int64 microseconds while Go's time.Duration type is int64 nanoseconds. It is incorrect to cast one directly to the other. Shape fields that are not required must use optional types for receiving query results. The edgedb.Optional struct can be embedded to make structs optional. Not all types listed above are valid query parameters. To pass a slice of scalar values, use array in your query. EdgeDB doesn't currently support using sets as parameters. Nested structures are also not directly allowed, but you can use JSON instead. By default EdgeDB will ignore embedded structs when marshaling/unmarshaling. To treat an embedded struct's fields as part of the parent struct's fields, tag the embedded struct with `edgedb:"$inline"`. Interfaces for user-defined marshalers/unmarshalers are documented in the internal/marshal package. Link properties are treated as fields in the linked-to struct, and the @ is omitted from the field's tag.
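As a hedged illustration of the typical client usage mentioned above (the query text, struct shape, and connection setup are invented for illustration; connection parameters are assumed to come from environment variables):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/edgedb/edgedb-go"
    )

    type User struct {
        ID   edgedb.UUID `edgedb:"id"`
        Name string      `edgedb:"name"`
    }

    func main() {
        ctx := context.Background()

        // Connection parameters are normally taken from environment variables.
        client, err := edgedb.CreateClient(ctx, edgedb.Options{})
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        var users []User
        if err := client.Query(ctx, "select User { id, name };", &users); err != nil {
            log.Fatal(err)
        }
        fmt.Println(users)
    }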