Package micro is a pluggable framework for microservices
Package toml provides facilities for decoding and encoding TOML configuration files via reflection. There is also support for delaying decoding with the Primitive type, and querying the set of keys in a TOML document with the MetaData type. The specification implemented: https://github.com/toml-lang/toml

The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify whether a file is a valid TOML document. It can also be used to print the type of each key in a TOML document.

There are two important types of tests used for this package. The first is contained inside '*_test.go' files and uses the standard Go unit testing framework. These tests are primarily devoted to holistically testing the decoder and encoder. The second type of testing is used to verify the implementation's adherence to the TOML specification. These tests have been factored into their own project: https://github.com/BurntSushi/toml-test The reason the tests are in a separate project is so that they can be used by any implementation of TOML. Namely, it is language agnostic.

Example StrictDecoding shows how to detect whether there are keys in the TOML document that weren't decoded into the value given. This is useful for returning an error to the user if they've included extraneous fields in their configuration. Example UnmarshalTOML shows how to implement a struct type that knows how to unmarshal itself. The struct must take full responsibility for mapping the values passed into the struct. The method may be used with interfaces in a struct in cases where the actual type is not known until the data is examined. Example Unmarshaler shows how to decode TOML strings into your own custom data type.
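A minimal sketch of the strict-decoding pattern described above, using toml.Decode and MetaData.Undecoded; the config struct and the document are illustrative only:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type config struct {
	Name string
	Port int
}

func main() {
	doc := `
name = "service"
port = 8080
debug = true   # not a field of config, so it is reported as undecoded
`
	var cfg config
	md, err := toml.Decode(doc, &cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Undecoded reports every key that was present in the document but not
	// mapped into cfg - handy for rejecting misspelled configuration fields.
	if extra := md.Undecoded(); len(extra) > 0 {
		log.Fatalf("unknown configuration keys: %q", extra)
	}
	fmt.Printf("%+v\n", cfg)
}
```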
Package sandwich is a middleware framework for Go that lets you write testable web servers. Sandwich allows writing robust middleware handlers that are easily tested, and it provides a basic PAT-style router.

Sandwich automatically calls your middleware with the necessary arguments based on the types they require. These types can be provided by previous middleware or directly during the initial setup. For example, you can use this to provide your database to all handlers. Set(...) and SetAs(...) are excellent alternatives to using global values, plus they keep your functions easy to test! In many cases you want to initialize a value based on the request, for example extracting the user login. This starts to show off the real power of sandwich: you write small, independently testable functions and let sandwich chain them together for you on each request. Sandwich works hard to ensure that you don't get annoying run-time errors: it's structured such that it must always be possible to call your functions when the middleware is initialized rather than when the http handler is being executed, so you don't get surprised while your server is running.

When a handler returns an error, sandwich aborts the middleware chain, looks for the most recently registered error handler, and calls it. Error handlers may accept any types that have been provided so far in the middleware stack as well as the error type. They must not have any return values.

Sandwich also allows registering handlers to run during AND after the middleware (and error handling) stack has completed. This is especially useful for handlers such as logging or gzip wrappers. Once the 'before' handler is run, the 'after' handlers are queued to run and will be run regardless of whether an error aborts any subsequent middleware handlers. Typically the first function creates and initializes some state to pass to the deferred handler. For example, the logging handlers are added to the chain with Wrap: Wrap executes NewLogEntry during middleware processing, which returns a *LogEntry that is provided to downstream handlers, including the deferred Commit handler -- in this case a method expression (https://golang.org/ref/spec#Method_expressions) that takes the *LogEntry as its value receiver.

Unfortunately, providing interfaces is a little tricky. Since interfaces in Go are only used for static typing, the interface type isn't visible to functions that accept interface{}, like Set(). This means that if you have an interface and a concrete implementation, you cannot provide the interface to handlers directly via the Set() call. Instead, you have to either use SetAs() or a dedicated middleware function. It's a bit silly, but that's how it is.
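A hedged sketch of the injection-by-type idea described above. Only Set, SetAs and Wrap are named by this documentation; the chain constructor and the Then method mentioned in the closing comment are assumptions, not sandwich's confirmed API:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Database stands in for whatever shared value you would register with Set(...).
type Database struct{ /* connection pool, etc. */ }

// UserLogin is a per-request value produced by earlier middleware.
type UserLogin string

// ParseLogin is ordinary middleware: it accepts the *http.Request (provided by
// the framework) and returns a value that later handlers can declare.
func ParseLogin(r *http.Request) (UserLogin, error) {
	login, _, ok := r.BasicAuth()
	if !ok {
		return "", fmt.Errorf("not logged in")
	}
	return UserLogin(login), nil
}

// Hello declares UserLogin and *Database as parameters; the chain would inject
// them because Set provided the database and ParseLogin provided the login.
func Hello(w http.ResponseWriter, u UserLogin, db *Database) {
	fmt.Fprintf(w, "hello, %s\n", u)
}

func main() {
	// The handlers above stay plain functions, so testing them needs no framework:
	rec := httptest.NewRecorder()
	Hello(rec, UserLogin("alice"), &Database{})
	fmt.Print(rec.Body.String()) // hello, alice

	// Hypothetical wiring with sandwich (constructor and Then are assumptions):
	//   mw := sandwich.TheUsual().Set(&Database{}).Then(ParseLogin, Hello)
	//   http.Handle("/hello", mw)
	_ = ParseLogin
}
```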
Package tea provides a framework for building rich terminal user interfaces based on the paradigms of The Elm Architecture. It's well-suited for simple and complex terminal applications, either inline, full-window, or a mix of both. It's been battle-tested in several large projects and is production-ready. A tutorial is available at https://github.com/charmbracelet/bubbletea/tree/master/tutorials Example programs can be found at https://github.com/charmbracelet/bubbletea/tree/master/examples
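As a rough illustration of the model/update/view loop the package implements, here is a minimal hedged sketch of a counter program; the key bindings and model are illustrative only:

```go
package main

import (
	"fmt"
	"os"

	tea "github.com/charmbracelet/bubbletea"
)

// model holds all application state.
type model struct{ count int }

// Init can return an initial command; nil means there is nothing to do yet.
func (m model) Init() tea.Cmd { return nil }

// Update reacts to messages (key presses, ticks, ...) and returns the new model.
func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
	switch msg := msg.(type) {
	case tea.KeyMsg:
		switch msg.String() {
		case "+":
			m.count++
		case "q", "ctrl+c":
			return m, tea.Quit
		}
	}
	return m, nil
}

// View renders the model as a string; Bubble Tea draws it to the terminal.
func (m model) View() string {
	return fmt.Sprintf("count: %d  (+ to increment, q to quit)\n", m.count)
}

func main() {
	if _, err := tea.NewProgram(model{}).Run(); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
}
```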
Ginkgo is a testing framework for Go designed to help you write expressive tests. https://github.com/onsi/ginkgo MIT-Licensed The godoc documentation outlines Ginkgo's API. Since Ginkgo is a Domain-Specific Language it is important to build a mental model for Ginkgo - the narrative documentation at https://onsi.github.io/ginkgo/ is designed to help you do that. You should start there - even a brief skim will be helpful. At minimum you should skim through the https://onsi.github.io/ginkgo/#getting-started chapter. Ginkgo is best paired with the Gomega matcher library: https://github.com/onsi/gomega You can run Ginkgo specs with go test - however we recommend using the ginkgo cli. It enables functionality that go test does not (especially running suites in parallel). You can learn more at https://onsi.github.io/ginkgo/#ginkgo-cli-overview or by running 'ginkgo help'.
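A minimal suite sketch showing how Ginkgo v2 hooks into go test and pairs with Gomega; the spec content is illustrative:

```go
package cart_test

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
)

// Hook Ginkgo into go test: Gomega failures are reported to Ginkgo,
// and RunSpecs drives the suite.
func TestCart(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "Cart Suite")
}

var _ = Describe("Cart", func() {
	var items []string

	BeforeEach(func() {
		items = []string{}
	})

	It("starts empty", func() {
		Expect(items).To(BeEmpty())
	})

	It("holds added items", func() {
		items = append(items, "book")
		Expect(items).To(HaveLen(1))
	})
})
```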
Package gofight offers simple HTTP API handler testing for Go frameworks. Details about the gofight project can be found on its GitHub page. Set header: you can add custom headers via the SetHeader func. Set query string: use SetQuery to generate query-string data. POST form data: use SetForm to generate form data. POST JSON data: use SetJSON to generate JSON data. POST raw data: use SetBody to generate raw data. For more details, see the documentation and examples.
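A hedged sketch of the request-building pattern described above, run against a plain net/http handler; the /v2 import path and helper types (gofight.H, gofight.HTTPResponse) follow gofight's published examples and may differ in your version:

```go
package handler

import (
	"net/http"
	"testing"

	"github.com/appleboy/gofight/v2"
	"github.com/stretchr/testify/assert"
)

func helloHandler(w http.ResponseWriter, r *http.Request) {
	w.WriteHeader(http.StatusOK)
	w.Write([]byte("hello " + r.Header.Get("X-Name")))
}

func TestHelloHandler(t *testing.T) {
	r := gofight.New()

	r.GET("/hello").
		SetHeader(gofight.H{"X-Name": "gopher"}). // custom headers via SetHeader
		Run(http.HandlerFunc(helloHandler), func(res gofight.HTTPResponse, req gofight.HTTPRequest) {
			assert.Equal(t, http.StatusOK, res.Code)
			assert.Equal(t, "hello gopher", res.Body.String())
		})
}
```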
Package buster provides a generic framework for load testing. Specifically, Buster allows you to run a job at a specific concurrency level and a fixed rate while monitoring throughput and latency. The generic nature of Buster makes it suitable for load testing many different systems—HTTP servers, databases, RPC services, etc.
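The core idea (a fixed pool of workers, a paced request rate, and per-request latency measurements) can be sketched without the library; everything below is a generic illustration of that technique, not Buster's actual API:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// loadTest runs job with the given concurrency, pacing work with a shared
// ticker so the aggregate rate stays roughly fixed, and records latencies.
func loadTest(concurrency int, interval time.Duration, total int, job func() error) []time.Duration {
	ticks := time.NewTicker(interval)
	defer ticks.Stop()

	latencies := make([]time.Duration, 0, total)
	var mu sync.Mutex
	var wg sync.WaitGroup

	work := make(chan struct{})
	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range work {
				start := time.Now()
				_ = job() // e.g. an HTTP request, a query, an RPC call
				mu.Lock()
				latencies = append(latencies, time.Since(start))
				mu.Unlock()
			}
		}()
	}

	for i := 0; i < total; i++ {
		<-ticks.C
		work <- struct{}{}
	}
	close(work)
	wg.Wait()
	return latencies
}

func main() {
	lat := loadTest(4, 10*time.Millisecond, 100, func() error {
		time.Sleep(2 * time.Millisecond) // stand-in for the system under test
		return nil
	})
	fmt.Println("requests completed:", len(lat))
}
```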
Package tea provides a framework for building rich terminal user interfaces based on the paradigms of The Elm Architecture. It's well-suited for simple and complex terminal applications, either inline, full-window, or a mix of both. It's been battle-tested in several large projects and is production-ready. A tutorial is available at https://gitlab.com/hypolas/tools/bubbletea/tree/master/tutorials Example programs can be found at https://gitlab.com/hypolas/tools/bubbletea/tree/master/examples
Package storage contains common tests for storage implementations.
Package mspec is a BDD context/specification testing package for Go with a strong emphasis on spec'ing your feature(s) and scenarios first, before any code is written, using as little syntax noise as possible. This leaves you free to think of your project and features as a whole, without the distraction of writing code, with the added benefit of having tests ready for your project. https://godoc.org/github.com/eduncan911/mspec holds the source documentation (where else?).

Features:

* Uses natural language (Given/When/Then)
* Stubbing
* Human-readable outputs
* HTML output (coming soon)
* Use custom assertions
* Configuration options
* Uses Testify's rich assertions
* Uses Go's built-in testing.T package

Install it with one line of code. There are no external dependencies and it is built against Go's internal packages; the only requirement is that you have [GOPATH set up normally](https://golang.org/doc/code.html).

Create a new file to hold your specs. Using Dan North's original BDD definitions, you spec code using a Given/When/Then storyline. Note that `Given`, `when` and `it` all have optional variadic parameters, which lets you spec things out with as little or as much detail as you want. That's it. Now run it, print it out, and stick it on your office door for everyone to see what you are working on (the output is colored in a terminal). It is not uncommon to go back and tweak your stories over time as you talk with your domain experts, modifying exactly the scenarios and specifications that should happen.

`GoMSpec` is a testing package for Go that extends Go's built-in testing package. It is modeled after the BDD Feature Specification story workflow. Currently it has an included `Expectation` struct that mimics basic assertion behaviors; future plans may allow for custom assertion packages (like Testify). Running the package's own specs produces rich, human-readable output - there is nothing like using a testing package to test itself.

## Examples

Be sure to check out more examples in the examples/ folder, or just open the files and take a look. That's the most important part anyway.

When evaluating several BDD frameworks, [Pranavraja's Zen](https://github.com/pranavraja/zen) package for Go came close - really close; but it lacked the more "story" overview I've been accustomed to over the years with [Machine.Specifications](https://github.com/machine/machine.specifications) in C# (.NET land). Do note that there is something to be said for simple testing in Go (and simple coding); if you are the type to keep it short and sweet and just code, you may want to use Pranavraja's framework, as it is just the context (Desc) and spec writing. I forked his code and submitted a few bug tweaks at first, but along the way I started to have grand visions of my soul mate [Machine.Specifications](https://github.com/machine/machine.specifications) (MSpec for short) for BDD testing. The ease of defining complete stories right down to the scenarios, without having to implement them, intrigued me in C#. It freed me from worrying about implementation details and let me focus on the feature I was writing: What did it need to do? What context was I given to start with? What should it do?
So while using Pranavraja's Zen framework, I kept asking myself: could I bring those MSpec practices to Go, using a bare-bones framework? Ok, done. And since it was so heavily inspired by Aaron's MSpec project, I kept the name going here: `GoMSpec`. While keeping backwards compatibility with his existing Zen framework, I defined several goals for this package:

* It had to stay simple, with Given/When/Then definitions and no complex coding.
* It had to keep the low syntax noise of the existing Zen package.
* I had to be able to write features, scenarios and specs with no implementation details needed.

That last goal is key, and I think it speaks to what BDD truly is: focus on the story, feature and/or context you are designing - focus on the behavior! I tended to design my C# code with Machine.Specifications in this BDD style by writing entire stories and grand specs up front, designing the system I was building or the feature I was extending. In C# land, it's not unheard of for me to hit 50 to 100 specs across a single feature and a few different contexts in an hour or two, before writing any code - at which point I had everything planned out pretty much the way it should behave. So with this framework, I came up with a simple method name, `NA()`, to keep the syntax noise down. You are therefore free to code specs with just a little syntax noise, as in the hedged sketch below.
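To make the Given/when/it nesting and the "spec it now, implement it later" idea concrete, here is a self-contained toy sketch. These helpers are NOT mspec's implementation - its real import path, parameter types and assertion API may differ - they only mirror the shape described above:

```go
// specs_test.go
package specs

import "testing"

// When and It mirror the described shape: both take a title plus an optional
// body, so a spec can be written before it is implemented (mspec's NA() role).
type When func(title string, specs ...func(It))
type It func(title string, assertions ...func(*testing.T))

// Given sets the context and wires the toy When/It helpers to t.Log.
func Given(t *testing.T, title string, scenarios ...func(When)) {
	t.Log("Given", title)
	it := It(func(title string, assertions ...func(*testing.T)) {
		if len(assertions) == 0 {
			t.Log("    it", title, "(not implemented)")
			return
		}
		t.Log("    it", title)
		for _, a := range assertions {
			a(t)
		}
	})
	when := When(func(title string, specs ...func(It)) {
		t.Log("  when", title)
		for _, s := range specs {
			s(it)
		}
	})
	for _, s := range scenarios {
		s(when)
	}
}

func login(user, pass string) bool { return pass == "secret" }

func Test_Login_Scenarios(t *testing.T) {
	Given(t, "a valid account", func(when When) {
		when("logging in with the correct password", func(it It) {
			it("should grant access", func(t *testing.T) {
				if !login("alice", "secret") {
					t.Error("expected access to be granted")
				}
			})
			// Spec'd now, implemented later:
			it("should record the last login time")
		})
	})
}
```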
shortinette is the core framework for managing and automating the process of grading coding bootcamps (Shorts). It provides a comprehensive set of tools for running and testing student submissions across various programming languages. The shortinette package is composed of several sub-packages, each responsible for a specific aspect of the grading pipeline:

* `logger`: Handles logging for the framework, including general informational messages, error reporting, and trace logging for feedback on individual submissions. This package ensures that all important events and errors are captured for debugging and auditing purposes.
* `requirements`: Validates the necessary environment variables and dependencies required by the framework. This includes checking for essential configuration values in a `.env` file and ensuring that all necessary tools (e.g., Docker images) are available before grading begins.
* `testutils`: Provides utility functions for compiling and running code submissions. This includes functions for compiling Rust code, running executables with various options (such as timeouts and real-time output), and manipulating files. The utility functions are designed to handle the intricacies of running untrusted student code safely and efficiently.
* `git`: Manages interactions with GitHub, including cloning repositories, managing collaborators, and uploading files. This package abstracts the GitHub API to simplify common tasks such as adding collaborators to repositories, creating branches, and pushing code or data to specific locations in a repository.
* `exercise`: Defines the structure and behavior of individual coding exercises. This includes specifying the files that students are allowed to submit, the expected output, and the functions to be tested. The `exercise` package provides the framework for setting up exercises, running tests, and reporting results.
* `module`: Organizes exercises into modules, allowing for the grouping of related exercises into a coherent curriculum. The `module` package handles the execution of all exercises within a module, aggregates results, and manages the overall grading process.
* `webhook`: Enables automatic grading triggered by GitHub webhooks. This allows for a fully automated workflow where student submissions are graded as soon as they are pushed to a specific branch in a GitHub repository.
* `short`: The central orchestrator of the grading process, integrating all sub-packages into a cohesive system. The `short` package handles the setup and teardown of grading environments, manages the execution of modules and exercises, and ensures that all results are properly recorded and reported.
withmock is a tool to assist in mocking code for testing. The basic idea is that you can mark import statements to indicate packages that should be mocked. Then, if you run your test via withmock, mock versions of the marked packages will be generated - and the tests will be run using those packages instead of the real ones.

To mark an import for mocking, simply append a comment consisting of just the word mock to the end of the import line in the xxx_test.go file. Given an import of an external package, adding that comment marks the package for mocking. The mocking is not restricted to external packages, though often we want to keep access to the original package for use in the test code itself. Keeping the same example, we might want to use a mock version of fmt in the code under test, so we adjust the imports accordingly. When run, the non-test code will use the mocked fmt and external packages, while the test code will have the proper fmt, the mocked fmt as mockfmt, and the mocked external package under its own name (which we will assume is ext, for the purposes of this documentation).

The generated mock code behaves much like the code generated by gomock's mockgen, particularly when dealing with methods on types, though there are some differences due to the whole-package nature of the mocking. The first thing to do with a mocked package is to set the controller. This needs to be done before any mocked method or function is called, or any expectation is set - otherwise the generated code will cause a panic. To set the controller, we use the special mock object returned by the MOCK() function and call its SetController method. Once you have set the controller, you can set your mock expectations, either using the EXPECT() function for function expectations or the EXPECT() method for any method expectations. For example, if there was a type called UsefulType and we were expecting its HandyMethod to be called - followed by a message printed indicating the result - we would set our expectations accordingly, and then finally call our code under test, passing it our mocked UsefulType instance. Now we just need to wrap our call to "go test" with withmock, and gomock and the Go testing framework will do the rest for us ... :D
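A hedged sketch of how those pieces might fit together. It only makes sense when run via withmock (the MOCK()/EXPECT() members exist only in the generated packages); UsefulType and HandyMethod come from the documentation's running example, while ext, DoSomethingUseful, the gomock import path, and the exact expectation signatures are assumptions:

```go
// thing_test.go
package thing

import (
	"testing"

	"github.com/golang/mock/gomock" // import path may differ for your gomock version

	"fmt"             // the real fmt, for use by the test code itself
	mockfmt "fmt"     // mock
	"example.com/ext" // mock  (hypothetical external package)
)

func TestThing(t *testing.T) {
	ctrl := gomock.NewController(t)
	defer ctrl.Finish()

	// The controller must be set before any expectation is set or any mocked
	// function/method is called - otherwise the generated code panics.
	mockfmt.MOCK().SetController(ctrl)
	ext.MOCK().SetController(ctrl)

	// Expect HandyMethod on a mocked UsefulType, followed by a printed result.
	u := &ext.UsefulType{}
	u.EXPECT().HandyMethod().Return(true)
	mockfmt.EXPECT().Println("it worked!")

	// Exercise the (assumed) code under test with the mocked instance, then
	// wrap the go test invocation with withmock as described above.
	DoSomethingUseful(u)
	fmt.Println("the real fmt is still available to the test itself")
}
```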
Package submodule offers a simple DI framework without going overboard. Each service or component in a system is likely to depend on other services or components, and sometimes those dependencies create a tangible problem: to use just a small piece of the system, you end up initializing the whole thing. Submodule was born to solve this problem. Submodule asks you to declare the linkage between dependencies; in short, you define what you need. When a part of the system is initialized, submodule resolves only the dependencies needed for that graph. Integration tests become much easier as a result: you keep all the benefits of default system wiring while still refraining from initializing the whole system just to test a single service.
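A library-agnostic sketch of the idea described above: each piece declares only the dependencies it needs, so resolving one piece constructs just its own sub-graph. The names below are illustrative and are not submodule's API:

```go
package main

import "fmt"

type Config struct{ DSN string }
type DB struct{ dsn string }
type UserService struct{ db *DB }

// Providers declare their linkage explicitly: each one states what it needs.
func provideConfig() *Config            { return &Config{DSN: "postgres://localhost/app"} }
func provideDB(c *Config) *DB           { return &DB{dsn: c.DSN} }
func provideUsers(db *DB) *UserService  { return &UserService{db: db} }

func main() {
	// Resolving UserService walks only Config -> DB -> UserService;
	// nothing else in the system needs to be constructed. In a test you can
	// hand provideUsers a fake *DB without touching the rest of the wiring.
	users := provideUsers(provideDB(provideConfig()))
	fmt.Printf("%+v\n", users)
}
```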
Package fx is a framework that makes it easy to build applications out of reusable, composable modules. Fx applications use dependency injection to eliminate globals without the tedium of manually wiring together function calls. Unlike other approaches to dependency injection, Fx works with plain Go functions: you don't need to use struct tags or embed special types, so Fx automatically works well with most Go packages. Basic usage is explained in the package-level example below. If you're new to Fx, start there! Advanced features, including named instances, optional parameters, and value groups, are explained under the In and Out types. To test functions that use the Lifecycle type or to write end-to-end tests of your Fx application, use the helper functions and types provided by the go.uber.org/fx/fxtest package.
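A minimal sketch of the constructor-injection style with plain Go functions, assuming a trivial HTTP server wired into the Fx lifecycle; the constructors are illustrative:

```go
package main

import (
	"context"
	"log"
	"net/http"

	"go.uber.org/fx"
)

// NewMux is a plain constructor; Fx calls it and injects its result wherever
// an *http.ServeMux is needed.
func NewMux() *http.ServeMux {
	mux := http.NewServeMux()
	mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})
	return mux
}

// NewServer hooks the HTTP server into the Fx lifecycle so it starts and
// stops with the application.
func NewServer(lc fx.Lifecycle, mux *http.ServeMux) *http.Server {
	srv := &http.Server{Addr: ":8080", Handler: mux}
	lc.Append(fx.Hook{
		OnStart: func(ctx context.Context) error {
			go func() {
				if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
					log.Println(err)
				}
			}()
			return nil
		},
		OnStop: func(ctx context.Context) error { return srv.Shutdown(ctx) },
	})
	return srv
}

func main() {
	fx.New(
		fx.Provide(NewMux, NewServer),
		// Invoke forces the server to be constructed even though nothing
		// else depends on it.
		fx.Invoke(func(*http.Server) {}),
	).Run()
}
```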
Package pi provides the top-level repository for the GoPi interactive parser system. The code is organized into various sub-packages, dealing with the different stages of parsing, etc. * pi: integrates all the parsing elements into the overall parser framework. * langs: has the parsers for specific languages, including Go (of course), markdown and tex (the latter two are lexer-only) Note that the GUI editor framework for creating and testing parsers is in the Gide package: https://github.com/goki/gide under the piv sub-package.