protoc-gen-go-tag is a plugin for the Google protocol buffer compiler to generate Go code. Install it by building this program and making it accessible within your PATH with the name: The 'go' suffix becomes part of the argument for the protocol compiler, such that it can be invoked as: This generates Go bindings for the protocol buffer defined by astFile.proto. With that input, the output will be written to: See the README and documentation for protocol buffers to learn more:
Package podcast generates a fully compliant iTunes and RSS 2.0 podcast feed for Go using a simple API. Full documentation with detailed examples is located at https://godoc.org/github.com/eduncan911/podcast

To use, `go get` and `import` the package like your typical Go library. The API exposes a number of method receivers on structs that implement the logic required to comply with the specifications and ensure a compliant feed. A number of overrides occur to help with iTunes visibility of your episodes. Notably, the `Podcast.AddItem` function performs most of the heavy lifting by taking the `Item` input and performing validation, overrides and duplicate setters through the feed. Full detailed examples of the API are at https://godoc.org/github.com/eduncan911/podcast.

In no way are you restricted to the API methods: you may choose to skip them and use the structs directly. The fields have been grouped by RSS 2.0 and iTunes fields; iTunes-specific fields are all prefixed with the letter `I`.

RSS 2.0: https://cyber.harvard.edu/rss/rss.html

Podcasts: https://help.apple.com/itc/podcasts_connect/#/itca5b22233

The 1.x branch is now mostly in maintenance mode, open to PRs; no new features are planned for it. With the success of 6 iTunes-accepted podcasts I have published with this library, and with the feedback from the community, the 1.x releases are now considered stable.

The 2.x branch's primary focus is bi-directional marshalling. Currently, the 1.x branch only supports marshalling to a serialized feed; an attempt to unmarshal a serialized feed back into Podcast form will error or not work correctly. Note that while the 2.x branch is targeted to remain backwards compatible, that holds only when the public API funcs are used to set parameters. Several of the underlying public fields are being removed in order to accommodate the unmarshalling of serialized data; hence a 2.x version is denoted for this release.

We use the SemVer versioning scheme. You can rest assured that pulling 1.x branches will remain backwards compatible now and into the future. However, the new 2.x branch, while keeping the same API, is expected to break code that bypasses the API methods and uses the underlying public properties instead.

Release history: 1.3.2, 1.3.1, 1.3.0, 1.2.1, 1.2.0, 1.1.0, 1.0.0
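As a rough sketch of the feed-building flow described above (the titles, URLs and enclosure length are illustrative; AddEnclosure and Encode follow the package's documented examples, but check the godoc for exact signatures):

	now := time.Now()
	p := podcast.New("Sample Podcast", "http://example.com/", "A demo feed", &now, &now)

	item := podcast.Item{
		Title:       "Episode 1",
		Description: "The first episode",
		PubDate:     &now,
	}
	// AddEnclosure attaches the audio file; AddItem then performs the
	// validation and iTunes overrides discussed above.
	item.AddEnclosure("http://example.com/1.mp3", podcast.MP3, 55*1024*1024)
	if _, err := p.AddItem(item); err != nil {
		log.Fatal(err)
	}

	// Encode writes the compliant XML feed to any io.Writer.
	if err := p.Encode(os.Stdout); err != nil {
		log.Fatal(err)
	}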
godocgo is a repo demonstrating effective Go documentation. I started to write this as a blog post, intending to link to a repo with examples, and then realized that there was no reason I couldn't write the whole post as godoc on the repo, so here it is. This repo serves as a working example of "hey, how do I get something like that to show up in my godoc?" It also has tips on common conventions and idioms used in documenting Go code. Please feel free to send me pull requests if you see errors or places where things could be improved.

To view this godoc on your own machine, just run (if you already have godoc, you can skip the first step) Then you can open a web browser and go to http://localhost:8080/pkg/github.com/natefinch/godocgo/ Alternatively, you can view this on the excellent godoc.org, by going to http://godoc.org/github.com/natefinch/godocgo (note that godoc.org has different styling than the godoc tool, but the content is the same).

One of my favorite things about the Go programming language is godoc, the auto-generated documentation for your code. If you're coming from javadoc or similar tools, you're probably thinking one of two things (possibly both): An understandable reaction, but let me show you why godoc is different, and how to leverage all its features.

First off, godoc comments are designed from the start to be read in the code. We all know that the code will be read a lot more often than some html documentation out on the web, so godoc makes sure to keep its documentation legible for the coder in their text editor. There is not a single angle bracket to be found. Simple conventions replace the html/xml tags you often see in other documentation standards. Paragraphs are delineated by blank lines. Headings like the above are simply a single line without punctuation with paragraphs before and after it. If you're viewing this using godoc, it will be included in the table of contents at the top of the file. If you're viewing this on godoc.org, there is, sadly, no table of contents. Below this is pre-formatted text. All that is required for it to be rendered in that way is that it be indented from the surrounding comments (you can use tabs or spaces, whatever you like). This is handy for showing code snippets that you don't want godoc to auto-wrap, the way it does normal text. HTML links will be auto-linked without you having to do anything, like so: http://golang.org That's it, that's all the formatting you get. But it's enough, and I think we can all be glad to focus on the content and not the formatting of our documentation.

In Go, any comment which immediately precedes Go code is considered a comment about that code. Only comments on exported names will be included in the documentation. As a corollary, any comment that is *not* on a line before a line of code is not picked up as documentation. Note that this is important when writing things like build tags and copyright notices at the top of files. Always make sure you have a space between those comments and the package declaration.

Comments exist to tell developers what the code is for and how to use it. They should be concise and to the point. Comments should be full sentences that end with a period. This generally forces you to offer complete thoughts in your comments, rather than trying to make it into shorthand. It's also useful, because then no one has to wonder if you forgot to write the end of that sentence. Comments in Go traditionally start with the name of the thing they're commenting about.
Thus, if you're commenting on a type Foo, the comment would start "Foo is ...". This makes it clear that the comment is about that name, and can help connect the two if they get separated. Oftentimes the first sentence of a comment is extracted as a short summary, so make it short and useful on its own. Always comment every exported name in your program. You never know when a little extra information will help someone use your code. Also, it just makes your documentation look complete.

The text you are reading is documentation on the main package of a Go executable. It is simply a comment attached to (immediately preceding) the package <foo> declaration (in this case package main, since this is the application package). It is traditional to put long form documentation in a file called doc.go, without other code in that file, but that is not necessary. A comment on the package declaration in any file will work, and if you have multiple comments, they'll be concatenated together. The first sentence in this file (up to the first period) will be the label next to the link in godoc's index. This should be a short description of what the command does, and traditionally starts with "Command <name-of-command>". Note that all the tips that apply to code in packages, as demonstrated in package sub below, also apply to commands, I just thought this page was long enough already, so I put everything else in the next package :) To read more of the article, and explore more of what godoc has to offer, click on the link to the package named "sub" below.
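To make those conventions concrete, here is a small, hypothetical exported declaration documented in the style described above:

	// Fetcher retrieves documents by name. Note that the comment starts
	// with the name it documents, is a full sentence, and ends with a
	// period; the first sentence doubles as the summary in godoc's index.
	type Fetcher interface {
		// Fetch returns the document stored under the given name.
		Fetch(name string) ([]byte, error)
	}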
Package zmanim_core provides Go bindings for the Zmanim Core library, offering comprehensive Jewish calendar calculations and zmanim (halachic times) functionality. This package provides a Go interface to the Rust-based Zmanim Core library through CGO bindings generated by UniFFI. It enables calculation of various Jewish calendar dates, times, and astronomical events. This package requires the Zmanim Core dynamic library to be available at runtime: download the appropriate library for your platform from the GitHub releases page at https://github.com/dickermoshe/zmanim-core/releases and place it in your project directory or system library path. The library provides calculations for: For detailed usage examples and API documentation, see the individual function documentation and the project README.
Package address contains logic for parsing a Terraform address. The Terraform address grammar is documented at https://www.terraform.io/docs/internals/resource-addressing.html Parsing is implemented using Pigeon, a PEG parser generator.
Package gofpdf implements a PDF document generator with high level support for text, drawing and images.

Features:

- UTF-8 support
- Choice of measurement unit, page format and margins
- Page header and footer management
- Automatic page breaks, line breaks, and text justification
- Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images
- Colors, gradients and alpha channel transparency
- Outline bookmarks
- Internal and external links
- TrueType, Type1 and encoding support
- Page compression
- Lines, Bézier curves, arcs, and ellipses
- Rotation, scaling, skewing, translation, and mirroring
- Clipping
- Document protection
- Layers
- Templates
- Barcodes
- Charting facility
- Import PDFs as templates

gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs.

To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples.

If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error().

This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP.

A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test.
In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples.

Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts.

The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode.

gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions. Your change should

- be compatible with the MIT License
- be properly documented
- be formatted with go fmt
- include an example in fpdf_test.go if appropriate
- conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings
- not diminish test coverage

Pull requests are the preferred means of accepting your changes.

gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it.
Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage. Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations.

Roadmap:

- Remove all legacy code page font support; use UTF-8 exclusively
- Improve test coverage as reported by the coverage tool

Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library.
If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
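Following the package example described above, a minimal "hello world" document looks like this (the output filename is arbitrary):

	pdf := gofpdf.New("P", "mm", "A4", "") // portrait, millimeters, A4; no font dir needed for core fonts
	pdf.AddPage()
	pdf.SetFont("Arial", "B", 16)
	pdf.Cell(40, 10, "Hello, world")
	// Per the error scheme above, a single check after output suffices.
	if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
		log.Fatal(err)
	}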
Package goa implements a Go framework for writing microservices that promotes best practice by providing a single source of truth from which server code, client code, and documentation are derived. The code generated by goa follows the clean architecture pattern where composable modules are generated for the transport, endpoint, and business logic layers. The goa package contains middleware, plugins, and other complementary functionality that can be leveraged in tandem with the generated code to implement complete microservices in an efficient manner. By using goa for developing microservices, implementers don’t have to worry about the documentation getting out of sync with the implementation, as goa takes care of generating OpenAPI specifications for HTTP based services and gRPC protocol buffer files for gRPC based services (or both if the service supports both transports). Reviewers can also be assured that the implementation follows the documentation as the code is generated from the same source. Visit https://goa.design for more information.
Package langchaingo provides a Go implementation of LangChain, a framework for building applications with Large Language Models (LLMs) through composability. LangchainGo enables developers to create powerful AI-driven applications by providing a unified interface to various LLM providers, vector databases, and other AI services. The framework emphasizes modularity, extensibility, and ease of use. The framework is organized around several key packages: Basic text generation with OpenAI: Creating embeddings and using vector search: Building a chain for question answering: LangchainGo supports numerous providers: LLM Providers: Embedding Providers: Vector Stores: Create agents that can use tools to accomplish complex tasks: Maintain conversation context across multiple interactions: Streaming responses: Function calling: Multi-modal inputs (text and images): Most providers require API keys set as environment variables: LangchainGo provides standardized error handling: LangchainGo includes comprehensive testing utilities including HTTP record/replay for internal tests. The httprr package provides deterministic testing of HTTP interactions: See the examples/ directory for complete working examples including: LangchainGo welcomes contributions! The project follows Go best practices and includes comprehensive testing, linting, and documentation standards. See CONTRIBUTING.md for detailed guidelines.
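For a flavor of the "basic text generation" flow mentioned above, here is a minimal sketch (it assumes the llms and llms/openai sub-packages and an OPENAI_API_KEY in the environment; treat the exact signatures as subject to change):

	ctx := context.Background()
	llm, err := openai.New() // picks up OPENAI_API_KEY from the environment
	if err != nil {
		log.Fatal(err)
	}
	completion, err := llms.GenerateFromSinglePrompt(ctx, llm, "Write a haiku about Go.")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(completion)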
Package pbc provides structures for building pairing-based cryptosystems. It is a wrapper around the Pairing-Based Cryptography (PBC) Library authored by Ben Lynn (https://crypto.stanford.edu/pbc/). This wrapper provides access to all PBC functions. It supports generation of various types of elliptic curves and pairings, element initialization, I/O, and arithmetic. These features can be used to quickly build pairing-based or conventional cryptosystems. The PBC library is designed to be extremely fast. Internally, it uses GMP for arbitrary-precision arithmetic. It also includes a wide variety of optimizations that make pairing-based cryptography highly efficient. To improve performance, PBC does not perform type checking to ensure that operations actually make sense. The Go wrapper provides the ability to add compatibility checks to most operations, or to use unchecked elements to maximize performance. Since this library provides low-level access to pairing primitives, it is very easy to accidentally construct insecure systems. This library is intended to be used by cryptographers or to implement well-analyzed cryptosystems. Cryptographic pairings are defined over three mathematical groups: G1, G2, and GT, where each group is typically of the same order r. Additionally, a bilinear map e maps a pair of elements — one from G1 and another from G2 — to an element in GT. This map e has the following additional property: If G1 == G2, then a pairing is said to be symmetric. Otherwise, it is asymmetric. Pairings can be used to construct a variety of efficient cryptosystems. The PBC library currently supports 5 different types of pairings, each with configurable parameters. These types are designated alphabetically, roughly in chronological order of introduction. Type A, D, E, F, and G pairings are implemented in the library. Each type has different time and space requirements. For more information about the types, see the documentation for the corresponding generator calls, or the PBC manual page at https://crypto.stanford.edu/pbc/manual/ch05s01.html. This package must be compiled using cgo. It also requires the installation of GMP and PBC. During the build process, this package will attempt to include <gmp.h> and <pbc/pbc.h>, and then dynamically link to GMP and PBC. Most systems include a package for GMP. To install GMP in Debian / Ubuntu: For an RPM installation with YUM: For installation with Fink (http://www.finkproject.org/) on Mac OS X: For more information or to compile from source, visit https://gmplib.org/ To install the PBC library, download the appropriate files for your system from https://crypto.stanford.edu/pbc/download.html. PBC has three dependencies: the gcc compiler, flex (http://flex.sourceforge.net/), and bison (https://www.gnu.org/software/bison/). See the respective sites for installation instructions. Most distributions include packages for these libraries. For example, in Debian / Ubuntu: The PBC source can be compiled and installed using the usual GNU Build System: After installing, you may need to rebuild the search path for libraries: It is possible to install the package on Windows through the use of MinGW and MSYS. MSYS is required for installing PBC, while GMP can be installed through a package. Based on your MinGW installation, you may need to add "-I/usr/local/include" to CPPFLAGS and "-L/usr/local/lib" to LDFLAGS when building PBC. Likewise, you may need to add these options to CGO_CPPFLAGS and CGO_LDFLAGS when installing this package. 
This package is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. For additional details, see the COPYING and COPYING.LESSER files. This example generates a pairing and some random group elements, then applies the pairing operation. This example computes and verifies a Boneh-Lynn-Shacham signature in a simulated conversation between Alice and Bob.
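As a condensed sketch of the first example above (pairing generation plus a bilinearity check; the method names follow the wrapper's documented examples, so verify them against the godoc):

	// Generate a symmetric type A pairing with common demo parameters.
	params := pbc.GenerateA(160, 512)
	pairing := params.NewPairing()

	g := pairing.NewG1().Rand()
	h := pairing.NewG2().Rand()
	x := pairing.NewZr().Rand()

	// Bilinearity: e(g^x, h) == e(g, h)^x.
	left := pairing.NewGT().Pair(pairing.NewG1().PowZn(g, x), h)
	right := pairing.NewGT().PowZn(pairing.NewGT().Pair(g, h), x)
	fmt.Println(left.Equals(right)) // true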
Package gorilla/pat is a request router and dispatcher with a pat-like interface. It is an alternative to gorilla/mux that showcases how it can be used as a base for different API flavors. Package pat is documented at: Let's start registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming GET request matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Note: gorilla/pat matches path prefixes, so you must register the most specific paths first. Note: unlike pat, these methods accept a handler function, and not an http.Handler. We think this is shorter and more convenient. To set an http.Handler, use the Add() method. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: The names are used to create a map of route variables which are stored in the URL query, prefixed by a colon: As in the gorilla/mux package, other matchers can be added to the registered routes and URLs can be reversed as well. To build a URL for a route, first add a name to it: Then you can get it using the name and generate a URL: ...and the result will be a url.URL with the following path: Check the mux documentation for more details about URL building and extra matchers:
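Tying the above together, a sketch of the registration pattern (the path and handler body are illustrative):

	r := pat.New()
	// Register the most specific paths first, since pat matches prefixes.
	r.Get("/users/{id:[0-9]+}", func(w http.ResponseWriter, req *http.Request) {
		// Route variables land in the URL query, prefixed by a colon.
		fmt.Fprintln(w, "user:", req.URL.Query().Get(":id"))
	})
	http.Handle("/", r)
	http.ListenAndServe(":8080", nil)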
┌────────────────────────────────── WARNING ───────────────────────────────────┐
│ This "module_api.go" file was automatically generated using:                 │
│ https://github.com/craterdog/go-development-tools/wiki                       │
│                                                                              │
│ Updates to any part of this file—other than the Module Description           │
│ and the Global Functions sections—may be overwritten.                        │
└──────────────────────────────────────────────────────────────────────────────┘

Package "module" declares type aliases for the commonly used types declared in the packages contained in this module. It also provides constructors for each commonly used class that is exported by the module. Each constructor delegates the actual construction process to its corresponding concrete class declared in the corresponding package contained within this module.

For detailed documentation on this entire module refer to the wiki:
Package goca provides a CLI tool for generating Go Clean Architecture projects. Goca is a comprehensive CLI tool that generates well-structured Go projects following Clean Architecture principles. It creates projects with zero compilation errors, proper file organization, and clean code that passes all linting checks. All generated code is guaranteed to: For more information and documentation, visit: https://github.com/jorgefuertes/goca
Package swagger (2.0) provides a powerful interface to your API. It contains an implementation of Swagger 2.0: it knows how to serialize, deserialize and validate swagger specifications. Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability. We created Swagger to help fulfill the promise of APIs. Swagger helps companies like Apigee, Getty Images, Intuit, LivingSocial, McKesson, Microsoft, Morningstar, and PayPal build the best possible services with RESTful APIs. Now in version 2.0, Swagger is more enabling than ever. And it's 100% open source software. More detailed documentation is available at https://goswagger.io. Install: The implementation also provides a number of command line tools to help with working with swagger. Currently there is a spec validator tool: To generate a server for a swagger spec document: To generate a client for a swagger spec document: To generate a swagger spec document for a go application: There are several other sub commands available for the generate command. You're free to add files to the directories the generated code lands in, but the files generated by the generator itself will be regenerated on following generation runs, so any changes to those files will be lost. However, extra files you create won't be lost, so they are safe to use for customizing the application to your needs.
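For reference, the typical invocations of those sub commands look like this (a sketch based on the go-swagger documentation; flags beyond -f and -o are omitted):

	swagger validate ./swagger.json
	swagger generate server -f ./swagger.json
	swagger generate client -f ./swagger.json
	swagger generate spec -o ./swagger.json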
Package confkit provides a minimalist configuration management library with transparent layered configuration sources. Key Features: Key Normalization: Confkit automatically normalizes keys when matching, making these equivalent: This allows configuration from different sources (files, environment, flags) to work together seamlessly without manual key transformation. Struct Unmarshaling: When using Unmarshal with structs, fields are mapped using mapstructure tags: This creates the configuration paths "database.host" and "database.port", which can be set by any normalized variant (DATABASE_HOST, database-host, etc.). Handling Defaults (RECOMMENDED PATTERN): Instead of using struct tags for defaults, create a DefaultConfig function: This approach keeps defaults in code (not in tags), makes them testable, and allows the same defaults to be used for documentation generation. Configuration Validation: Confkit provides helpers for validation without enforcing any validation rules: This design keeps confkit minimal while enabling sophisticated validation patterns.
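A sketch of the recommended defaults pattern described above (the struct layout and default values are illustrative, not part of confkit's API):

	type DatabaseConfig struct {
		Host string `mapstructure:"host"`
		Port int    `mapstructure:"port"`
	}

	type Config struct {
		Database DatabaseConfig `mapstructure:"database"`
	}

	// DefaultConfig keeps defaults in code rather than in struct tags,
	// making them testable and reusable for documentation generation.
	func DefaultConfig() Config {
		return Config{
			Database: DatabaseConfig{Host: "localhost", Port: 5432},
		}
	}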
Package hyperpb is a highly optimized dynamic message library for Protobuf, aimed at read-only workloads. It is designed to be a drop-in replacement for dynamicpb, protobuf-go's canonical solution for working with completely dynamic messages. hyperpb's parser is an efficient VM for a special instruction set, a variant of table-driven parsing (TDP), pioneered by the UPB project. Our parser is very fast, beating dynamicpb by 10x, and often beating protobuf-go's generated code by a factor of 2-3x, especially for workloads with many nested messages.

The core conceit of hyperpb is that you must pre-compile a parser using hyperpb.CompileMessageDescriptor at runtime, much like regular expressions must be compiled with regexp.Compile. Doing this allows hyperpb to run optimization passes on your message, and delaying it to runtime allows us to continuously improve layout optimizations, without making any source-breaking changes. For example, let's say that we want to compile a parser for some type baked into our binary, and parse some data with it.

Currently, hyperpb only supports manipulating messages through the reflection API; it shines best when you need to write a very generic service that downloads types off the network and parses messages using those types, which forces you to use reflection. Mutation is currently not supported; any operation which would mutate an already-parsed message will panic. Which methods of Message panic is included in the documentation.

hyperpb has a memory-reuse mechanism that side-steps the Go garbage collector for improved allocation latency. Shared is book-keeping state and resources shared by all messages resulting from the same parse. After the message goes out of scope, these resources are ordinarily reclaimed by the garbage collector. However, a Shared can be retained after its associated message goes away, allowing for re-use. Consider the following example of a request handler: Beware that msg must not outlive the call to Shared.Free; failure to do so will result in memory errors that Go cannot protect you from.

hyperpb supports online PGO for squeezing extra performance out of the parser by optimizing it with knowledge of what the average message actually looks like. For example, using PGO, the parser can predict the expected size of repeated fields and allocate more intelligently. Suppose you have a corpus of messages for a particular type: you can build an optimized type, using that corpus as the profile, with Type.Recompile.

hyperpb is experimental software, and the API may change drastically before v1. It currently implements all Protobuf language constructs. It does not implement mutation of parsed messages, however.
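A sketch of the compile-then-parse flow (CompileMessageDescriptor is named by this documentation; the surrounding calls are assumptions based on hyperpb acting as a drop-in for dynamicpb, so check the package reference for the exact names):

	// md is a protoreflect.MessageDescriptor, e.g. for a type downloaded
	// off the network or baked into the binary.
	msgType := hyperpb.CompileMessageDescriptor(md) // compile once, reuse for many parses
	msg := hyperpb.NewMessage(msgType)              // assumed constructor for an empty message
	if err := proto.Unmarshal(data, msg); err != nil {
		log.Fatal(err)
	}
	// Reads go through the protoreflect API; mutating operations panic.
	msg.ProtoReflect().Range(func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
		fmt.Println(fd.Name(), v)
		return true
	})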
Package sponge is a powerful Go development framework that makes it easy to develop web, gRPC, and microservices projects, while supporting custom templates to generate code for various projects.

```
Github: https://github.com/zhufuyi/sponge
Documentation: https://go-sponge.com

Usage:
  sponge [command]

Available Commands:
  completion  Generate the autocompletion script for the specified shell
  config      Generate go config code from yaml file
  help        Help about any command
  init        Initialize sponge
  merge       Merge the generated code into the template file
  micro       Generate protobuf, model, cache, dao, service, grpc, grpc+http, grpc-gw, grpc-cli code
  patch       Patch the generated code
  plugins     Managing sponge dependency plugins
  run         Run code generation UI service
  template    Generate code based on custom templates
  upgrade     Upgrade sponge version
  web         Generate model, cache, dao, handler, http code

Flags:
  -h, --help      help for sponge
  -v, --version   version for sponge

Use "sponge [command] --help" for more information about a command.
```
Package loukoum provides a simple SQL Query Builder. At the moment, only PostgreSQL is supported. If you have to generate complex queries, which rely on various contexts, loukoum is the right tool for you. It helps you generate SQL queries from composable parts. However, keep in mind it's not an ORM or a Mapper, so you have to use a SQL connector (like "database/sql" or "sqlx", for example) to execute queries. If you're afraid of slipping in a tiny SQL injection while manipulating fmt (or a byte buffer...) to append conditions, loukoum is here to protect you against yourself. For further information, you can read the documentation at https://github.com/ulule/loukoum/blob/master/README.md or discover loukoum with these examples. An "insert" can be generated like this: Also, if you need an upsert, you can define an "on conflict" clause: Updating a news is also simple: You can remove a specific user: Or select a list of users...
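For instance, a select with a condition looks like this (modeled on the README examples; Query returns the SQL string and its positional arguments):

	import lk "github.com/ulule/loukoum"

	builder := lk.Select("id", "first_name", "last_name", "email").
		From("users").
		Where(lk.Condition("deleted_at").IsNull(true))

	// SELECT id, first_name, last_name, email FROM users WHERE (deleted_at IS NULL)
	query, args := builder.Query()
	rows, err := db.Query(query, args...) // db is your *sql.DB connection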
Package skipper provides an HTTP routing library with flexible configuration as well as runtime updates of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host.

Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters; however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand.

Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request:

The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request and forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request.

Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context.

Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then matching the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation.

Finding a request's route happens by matching the request attributes to the conditions in the routes' definitions. Such definitions may have the following conditions:

- method
- path (optionally with wildcards)
- path regular expressions
- host regular expressions
- headers
- header regular expressions

It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation.

Filters are applied in order of definition to the request and in reverse order to the response.
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation.

Each route has one of the following backends: HTTP endpoint, shunt or loopback. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect.

Route definitions consist of the following (see the sketch after this section):

- request matching conditions (predicates)
- filter chain (optional)
- backend (either an HTTP endpoint or a shunt)

The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation.

Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth.

Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients:

- Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ .

- Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper.

- etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package.

- static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates.

Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package.

Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests. For details, see: https://godoc.org/github.com/zalando/skipper/circuit.
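To make the route-definition shape above concrete, an eskip route pairing predicates, a filter and a backend might look like this (the route name, filter and backend address are illustrative):

	hello: Path("/hello") && Method("GET")
	  -> setRequestHeader("X-Demo", "skipper")
	  -> "https://www.example.org";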
Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line:

Skipper doesn't use dynamically loaded plugins; however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources.

To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created, that can be referenced in eskip definitions with the name 'Random':

To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as:

The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener, but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing.

Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation.

The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: In case more aggressive scale is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.
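As a sketch of the custom filter extension point described above (modeled on the hellofilter example; consult the filters package for the authoritative interface definitions):

	type helloSpec struct{}

	type helloFilter struct{ who string }

	func (s *helloSpec) Name() string { return "hello" }

	// CreateFilter builds a concrete filter instance from the arguments
	// given in the route definition, e.g. hello("world").
	func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
		if len(args) != 1 {
			return nil, filters.ErrInvalidFilterParameters
		}
		who, ok := args[0].(string)
		if !ok {
			return nil, filters.ErrInvalidFilterParameters
		}
		return &helloFilter{who: who}, nil
	}

	func (f *helloFilter) Request(ctx filters.FilterContext) {}

	func (f *helloFilter) Response(ctx filters.FilterContext) {
		ctx.Response().Header.Set("X-Hello", "Hello, "+f.who+"!")
	}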
Package log15 provides an opinionated, simple toolkit for best-practice logging that is both human and machine readable. It is modeled after the standard library's io and net/http packages. This package requires you to log only key/value pairs. Keys must be strings. Values may be any type that you like. The default output format is logfmt, but you may also choose to use JSON instead if that suits you. Here's how you log: This will output a line that looks like: To get started, you'll want to import the library: Now you're ready to start logging: Because recording a human-meaningful message is common and good practice, the first argument to every logging method is the value to the *implicit* key 'msg'. Additionally, the level you choose for a message will be automatically added with the key 'lvl', and so will the current timestamp with key 't'. You may supply any additional context as a set of key/value pairs to the logging function. log15 allows you to favor terseness, ordering, and speed over safety. This is a reasonable tradeoff for logging functions. You don't need to explicitly state keys/values; log15 understands that they alternate in the variadic argument list: If you really do favor your type-safety, you may choose to pass a log.Ctx instead: Frequently, you want to add context to a logger so that you can track actions associated with it. An http request is a good example. You can easily create new loggers that have context that is automatically included with each log line: This will output a log line that includes the path context that is attached to the logger: The Handler interface defines where log lines are printed to and how they are formatted. Handler is a single interface that is inspired by net/http's handler interface: Handlers can filter records, format them, or dispatch to multiple other Handlers. This package implements a number of Handlers for common logging patterns that are easily composed to create flexible, custom logging structures. Here's an example handler that prints logfmt output to Stdout: Here's an example handler that defers to two other handlers. One handler only prints records from the rpc package in logfmt to standard out. The other prints records at Error level or above in JSON formatted output to the file /var/log/service.json. This package implements three Handlers that add debugging information to the context: CallerFileHandler, CallerFuncHandler and CallerStackHandler. Here's an example that adds the source file and line number of each logging call to the context. This will output a line that looks like: Here's an example that logs the call stack rather than just the call site. This will output a line that looks like: The "%+v" format instructs the handler to include the path of the source file relative to the compile time GOPATH. The github.com/go-stack/stack package documents the full list of formatting verbs and modifiers available. The Handler interface is so simple that it's also trivial to write your own. Let's create an example handler which tries to write to one handler, but if that fails it falls back to writing to another handler and includes the error that it encountered when trying to write to the primary. This might be useful when trying to log over a network socket, but if that fails you want to log those records to a file on disk. This pattern is so useful that a generic version that handles an arbitrary number of Handlers is included as part of this library called FailoverHandler.
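Pulling the handler examples above together, a composed setup might look like this (the package value and file path are illustrative):

	log.Root().SetHandler(log.MultiHandler(
		// logfmt records from the rpc package, to stdout.
		log.MatchFilterHandler("pkg", "app/rpc", log.StdoutHandler),
		// Error level and above, JSON formatted, to a file. Must panics
		// instead of returning an error if the file cannot be opened.
		log.LvlFilterHandler(log.LvlError,
			log.Must.FileHandler("/var/log/service.json", log.JsonFormat())),
	))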
Sometimes, you want to log values that are extremely expensive to compute, but you don't want to pay the price of computing them if you haven't turned your logging level up to a high level of detail. This package provides a simple type to annotate a logging operation that you want to be evaluated lazily, just when it is about to be logged, so that it will not be evaluated if an upstream Handler filters it out. Just wrap any function which takes no arguments with the log.Lazy type. For example: If this message is not logged for any reason (for instance, because you are logging at the Error level), then factorRSAKey is never evaluated. The same log.Lazy mechanism can be used to attach context to a logger which you want to be evaluated when the message is logged, but not when the logger is created. For example, let's imagine a game where you have Player objects: You always want to log a player's name and whether they're alive or dead, so when you create the player object, you might do: The problem is that, even after a player has died, the logger will still report that they are alive, because the logging context was evaluated when the logger was created. By using the Lazy wrapper, we can defer the evaluation of whether the player is alive or not to each log message, so that the log records will reflect the player's current state no matter when the log message is written: If log15 detects that stdout is a terminal, it will configure the default handler for it (which is log.StdoutHandler) to use TerminalFormat. This format logs records nicely for your terminal, including color-coded output based on log level. Because log15 allows you to step around the type system, there are a few ways you can specify invalid arguments to the logging functions. You could, for example, wrap something that is not a zero-argument function with log.Lazy, or pass a context key that is not a string. Since logging libraries are typically the mechanism by which errors are reported, it would be onerous for the logging functions to return errors. Instead, log15 handles errors by making these guarantees to you: - Any log record containing an error will still be printed with the error explained to you as part of the log record. - Any log record containing an error will include the context key LOG15_ERROR, enabling you to easily (and if you like, automatically) detect if any of your logging calls are passing bad values. Understanding this, you might wonder why the Handler interface can return an error value in its Log method. Handlers are encouraged to return errors only if they fail to write their log records out to an external source, for example, if the syslog daemon is not responding. This allows the construction of useful handlers which cope with those failures, like the FailoverHandler. log15 is intended to be useful for library authors as a way to provide configurable logging to users of their library. Best practice for use in a library is to always disable all output for your logger by default and to provide a public Logger instance that consumers of your library can configure. Like so: Users of your library may then enable it if they like: The ability to attach context to a logger is a powerful one. Where should you do it and why? I favor embedding a Logger directly into any persistent object in my application and adding unique, tracing context keys to it. For instance, imagine I am writing a web browser: When a new tab is created, I assign a logger to it with the url of the tab as context so it can easily be traced through the logs.
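A short sketch of the player example above (the Player type and its fields are reconstructed from the surrounding prose; the import path is assumed):

    package main

    import log "github.com/inconshreveable/log15"

    // Player is the example game object from the text.
    type Player struct {
        Name  string
        Alive bool
    }

    func main() {
        p := &Player{Name: "erin", Alive: true}

        // Evaluated once: this logger keeps reporting alive=true
        // even after the player dies.
        stale := log.New("name", p.Name, "alive", p.Alive)

        // Evaluated per log call: log.Lazy defers the lookup, so
        // records always reflect the player's current state.
        fresh := log.New("name", p.Name, "alive", log.Lazy{Fn: func() bool { return p.Alive }})

        p.Alive = false
        stale.Info("player status") // still logs alive=true
        fresh.Info("player status") // logs alive=false
    }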
Now, whenever we perform any operation with the tab, we'll log with its embedded logger, and it will include the tab's context automatically: There's only one problem. What if the tab url changes? We could use log.Lazy to make sure the current url is always written, but then we couldn't trace a tab's full lifetime through our logs after the user navigates to a new URL. Instead, think about what values to attach to your loggers the same way you think about what to use as a key in a SQL database schema. If it's possible to use a natural key that is unique for the lifetime of the object, do so. Otherwise, log15's ext package has a handy RandId function to let you generate what you might call "surrogate keys." They're just random hex identifiers to use for tracing. Back to our Tab example, we would prefer to set up our Logger like so: Now we'll have a unique traceable identifier even across loading new urls, but we'll still be able to see the tab's current url in the log messages. For all Handler functions which can return an error, there is a version of that function which returns no error but panics on failure. They are all available on the Must object. For example: All of the following excellent projects inspired the design of this library: code.google.com/p/log4go github.com/op/go-logging github.com/technoweenie/grohl github.com/Sirupsen/logrus github.com/kr/logfmt github.com/spacemonkeygo/spacelog golang's stdlib, notably io and net/http https://xkcd.com/927/
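Reconstructing the preferred Tab wiring described above as a sketch (the Tab type, its getUrl method, and the RandId length argument are assumptions for illustration):

    package main

    import (
        log "github.com/inconshreveable/log15"
        "github.com/inconshreveable/log15/ext"
    )

    // Tab carries its own logger, keyed by a random trace id.
    type Tab struct {
        url    string
        Logger log.Logger
    }

    func (t *Tab) getUrl() string { return t.url }

    func NewTab(url string) *Tab {
        t := &Tab{url: url}
        // The surrogate key from ext.RandId stays stable across
        // navigations, while the lazy url always shows the current one.
        t.Logger = log.New(
            "id", ext.RandId(8),
            "url", log.Lazy{Fn: t.getUrl},
        )
        return t
    }

    func main() {
        t := NewTab("https://golang.org")
        t.Logger.Info("tab opened")
        t.url = "https://godoc.org"
        t.Logger.Info("tab navigated") // same id, new url
    }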
Cfgo provides bi-directional, synchronized, multi-module configuration backed by YAML documents. The structure of the generated document will reflect the structure of the value itself. Maps and pointers (to struct, string, int, etc.) are accepted as the input value. Struct fields are only unmarshalled if they are exported (have an upper-case first letter), and are unmarshalled using the lowercased field name as the default key. Custom keys may be defined via the "yaml" name in the field tag: the content preceding the first comma is used as the key, and the following comma-separated options are used to tweak the marshalling process. Conflicting names result in a runtime error. The field tag format accepted is: The following flags are currently supported: In addition, if the key is `-`, the field is ignored.
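A short sketch of the field tag forms just described (the struct and key names are illustrative, and omitempty is assumed to be among the supported flags of the underlying yaml package):

    // ServerConfig demonstrates the accepted field tag forms.
    type ServerConfig struct {
        // No tag: marshalled under the lowercased field name, "addr".
        Addr string

        // The content before the first comma overrides the key.
        ReadTimeout int `yaml:"read_timeout"`

        // Comma-separated options tweak marshalling; omitempty skips
        // the field when it holds its zero value.
        Debug bool `yaml:"debug,omitempty"`

        // A key of "-" ignores the field entirely.
        Secret string `yaml:"-"`
    }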
Package pkcs11 provides a Go wrapper for PKCS#11 (Cryptoki) operations with HSM (Hardware Security Module) support. This package simplifies interaction with PKCS#11-compliant devices by providing high-level abstractions for common cryptographic operations, including key management, digital signing, encryption/decryption, and key wrapping/unwrapping. To use this package, first create a configuration and establish a connection to the PKCS#11 device: You can also configure the client using environment variables: The following environment variables are supported: Generate an RSA key pair: Generate an ECDSA key pair: Find an existing key pair: Get a signer for digital signatures: Get a decrypter for RSA operations: Generate an AES key: Encrypt and decrypt data: Wrap a key with another key: The package provides comprehensive error handling with typed errors: This package is designed to be thread-safe. The Client maintains proper synchronization for session management and can be used concurrently from multiple goroutines. Asymmetric Key Types: Hash Algorithms (for signing): RSA Padding Schemes: Symmetric Key Types: This package is designed for use with Hardware Security Modules (HSMs) and follows security best practices: For complete API documentation, see the individual type and function documentation.
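Since the code snippets are not shown here, the following is a purely hypothetical sketch of the configure/connect/sign flow described above; every identifier on the client (Config, New, GenerateRSAKeyPair, GetSigner, and their fields) is a placeholder to check against the package's actual documentation, not its documented API:

    package main

    import (
        "log"

        "example.com/yourmodule/pkcs11" // hypothetical import path
    )

    func main() {
        // Hypothetical configuration; real field names may differ.
        cfg := &pkcs11.Config{
            ModulePath: "/usr/lib/softhsm/libsofthsm2.so",
            TokenLabel: "test-token",
            PIN:        "1234",
        }

        client, err := pkcs11.New(cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // Generate an RSA key pair on the HSM, then obtain a signer
        // for it; the text implies the result satisfies crypto.Signer.
        pair, err := client.GenerateRSAKeyPair("my-signing-key", 2048)
        if err != nil {
            log.Fatal(err)
        }

        signer, err := client.GetSigner(pair)
        if err != nil {
            log.Fatal(err)
        }
        _ = signer
    }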
Package main generates the README.md file for the vago project by extracting examples from test files and creating documentation.
Package main is the Markdown to Godoc converter. Sort of like godocdown (https://github.com/robertkrimen/godocdown), but in reverse. md-to-godoc takes markdown as input, and generates godoc-formatted package documentation. Way, **way** alpha. Barebones. The minimalest. Mostly here so we can see some code in godoc: • This is a test • And another test First, install the binary: Then, run it on one or more packages. If you'd like to generate a doc.go file in the current package (that already has a README.md), simply run md-to-godoc with no flags: To generate doc.go for all subpackages, you can do something like the following: • UberFx, on GitHub (https://github.com/uber-go/fx) and godoc.org (https://godoc.org/go.uber.org/fx) • Jaeger, on GitHub (https://github.com/uber/jaeger) and godoc.org (https://godoc.org/github.com/uber/jaeger/services/agent) Apache 2.0 (https://www.apache.org/licenses/LICENSE-2.0) Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Package autofiber provides a FastAPI-like wrapper for the Fiber web framework. It enables automatic request parsing, validation, and OpenAPI/Swagger documentation generation. The package includes OpenAPI 3.0 specification generation, documentation configuration and serving utilities, route groups, handler creation utilities, HTTP route registration methods, middleware, and route configuration options, all with automatic parsing, validation, and documentation generation. It also provides request parsing utilities for extracting and validating data from multiple sources, map and interface parsing utilities for converting data structures to Go structs, response validation utilities for ensuring API responses match expected schemas, and the core types and configuration for the AutoFiber framework.
Package adder provides a documentation-driven CLI generator for Go applications. It processes markdown files with YAML frontmatter to generate type-safe command interfaces, request/response structures, and handler interfaces for Cobra CLI applications.