Package buster provides a generic framework for load testing. Specifically, Buster allows you to run a job at a specific concurrency level and a fixed rate while monitoring throughput and latency. The generic nature of Buster makes it suitable for load testing many different systems—HTTP servers, databases, RPC services, etc.
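Buster's own API isn't shown here, but the core idea — a fixed-rate ticker feeding a bounded pool of workers while latencies are recorded — can be sketched in plain Go. Everything below (job, the worker/rate constants) is illustrative, not buster's actual interface.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // job stands in for the operation under test (HTTP request, DB query, ...).
    func job() error {
        time.Sleep(5 * time.Millisecond)
        return nil
    }

    func main() {
        const (
            workers = 10              // concurrency level
            rate    = 100             // total operations per second
            dur     = 2 * time.Second // test duration
        )

        ticker := time.NewTicker(time.Second / rate)
        defer ticker.Stop()
        stop := time.After(dur)

        var (
            mu        sync.Mutex
            latencies []time.Duration
            wg        sync.WaitGroup
        )
        sem := make(chan struct{}, workers) // bounds in-flight jobs

    loop:
        for {
            select {
            case <-stop:
                break loop
            case <-ticker.C:
                sem <- struct{}{} // blocks if all workers are busy
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    defer func() { <-sem }()
                    start := time.Now()
                    if err := job(); err == nil {
                        mu.Lock()
                        latencies = append(latencies, time.Since(start))
                        mu.Unlock()
                    }
                }()
            }
        }
        wg.Wait()
        fmt.Printf("completed %d ops in %v\n", len(latencies), dur)
    }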
Package tea provides a framework for building rich terminal user interfaces based on the paradigms of The Elm Architecture. It's well-suited for both simple and complex terminal applications, whether inline, full-window, or a mix of both. It's been battle-tested in several large projects and is production-ready. A tutorial is available at https://gitlab.com/hypolas/tools/bubbletea/tree/master/tutorials and example programs can be found at https://gitlab.com/hypolas/tools/bubbletea/tree/master/examples
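For orientation, a minimal Elm-Architecture program looks roughly like the following. The import path and the Run method reflect recent upstream Bubble Tea (github.com/charmbracelet/bubbletea); older versions and forks such as the one linked above may differ slightly.

    package main

    import (
        "fmt"
        "os"

        tea "github.com/charmbracelet/bubbletea"
    )

    // model holds all application state, per The Elm Architecture.
    type model struct{ count int }

    // Init returns an optional startup command; nil means nothing to do.
    func (m model) Init() tea.Cmd { return nil }

    // Update handles incoming messages (key presses, ticks, I/O results).
    func (m model) Update(msg tea.Msg) (tea.Model, tea.Cmd) {
        if key, ok := msg.(tea.KeyMsg); ok {
            switch key.String() {
            case "q", "ctrl+c":
                return m, tea.Quit
            case "+":
                m.count++
            }
        }
        return m, nil
    }

    // View renders the current state as a string.
    func (m model) View() string {
        return fmt.Sprintf("count: %d (press + to increment, q to quit)\n", m.count)
    }

    func main() {
        if _, err := tea.NewProgram(model{}).Run(); err != nil {
            os.Exit(1)
        }
    }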
Package storage contains common tests for storage implementations.
Package mspec is a BDD context/specification testing package for Go(Lang) with a strong emphasis on spec'ing your feature(s) and scenarios first, before any code is written, using as little syntax noise as possible. This leaves you free to think of your project and features as a whole without the distraction of writing any code, with the added benefit of having tests ready for your project. The source documentation is held at https://godoc.org/github.com/eduncan911/mspec (where else?).

* Uses natural language (Given/When/Then)
* Stubbing
* Human-readable outputs
* HTML output (coming soon)
* Use custom Assertions
* Configuration options
* Uses Testify's rich assertions
* Uses Go's built-in testing.T package

Install it with one line of code: There are no external dependencies and it is built against Go's internal packages. The only dependency is that you have [GOPATH set up normally](https://golang.org/doc/code.html). Create a new file to hold your specs. Using Dan North's original BDD definitions, you spec code using the Given/When/Then storyline, similar to: But this is just a static example. Let's take a real example from one of my projects: You represent these thoughts in code like this: Note that `Given`, `when` and `it` all have optional variadic parameters. This allows you to spec things out in as little or as much detail as you want. That's it. Now run it: Print it out and stick it on your office door for everyone to see what you are working on. This is actually colored output in Terminal: It is not uncommon to go back and tweak your stories over time as you talk with your domain experts, modifying exactly the scenarios and specifications that should happen. `GoMSpec` is a testing package for Go that extends Go's built-in testing package. It is modeled after the BDD Feature Specification story workflow such as: Currently it has an included `Expectation` struct that mimics basic assertion behaviors. Future plans may allow for custom assertion packages (like testify). Getting it. Importing it. Writing Specs. Testing it. Which outputs the following: Nice, eh? There is nothing like using a testing package to test itself. There is some nice rich information available.

## Examples

Be sure to check out more examples in the examples/ folder. Or just open the files and take a look. That's the most important part anyways. When evaluating several BDD frameworks, [Pranavraja's Zen](https://github.com/pranavraja/zen) package for Go came close - really close; but it was lacking the more "story" overview I've been accustomed to over the years with [Machine.Specifications](https://github.com/machine/machine.specifications) in C# (.NET land). Do note that there is something to be said for simple testing in Go (and simple coding); therefore, if you are the type to keep it short and sweet and just code, then you may want to use Pranavraja's framework, as it is just the context (Desc) and specs writing. I forked his code and submitted a few bug tweaks at first. But along the way, I started to have grand visions of my soul mate [Machine.Specifications](https://github.com/machine/machine.specifications) (which is called MSpec for short) for BDD testing. The ease of defining complete stories right down to the scenarios without having to implement them intrigued me in C#. It freed me from worrying about implementation details and let me focus on the feature I was writing: What did it need to do? What context was I given to start with? What should it do?
So while using Pranavraja's Zen framework, I kept asking myself: Could I bring those MSpec practices to Go, using a bare-bones framework? Ok, done. And since it was so heavily inspired by Aaron's MSpec project, I kept the name going here: `GoMSpec`. While keeping backwards compatibility with his existing Zen framework, I defined several goals for this package:

* Had to stay simple with Given/When/Then definitions. No complex coding.
* Keep the low syntax noise from the existing Zen package.
* I had to be able to write features, scenarios and specs with no implementation details needed.

That last goal is key and, I think, speaks truly about what BDD is: focus on the story, feature and/or context you are designing - focus on the Behavior! I tended to design my C# code using Machine.Specifications in this BDD style by writing entire stories and grand specs up front - designing the system I was building, or the feature I was extending. In C# land, it's not unheard of for me to hit 50 to 100 specs across a single feature and a few different contexts in an hour or two, before writing any code. At that point, I had everything planned out pretty much the way it should behave. So with this framework, I came up with a simple method name, `NA()`, to keep the syntax noise down. Therefore, you are free to code specs with just a little syntax noise:
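Since the code samples above were stripped from this copy of the docs, here is a hedged reconstruction of what a Given/When/Then spec might look like. The dot import and the Given/When/It/Expect/ToEqual names are assumptions pieced together from the prose, not verified signatures.

    package feature_test

    import (
        "testing"

        . "github.com/eduncan911/mspec" // import path assumed from the godoc link above
    )

    // add is a stand-in for real production code under test.
    func add(a, b int) int { return a + b }

    func Test_Adding_Two_Numbers(t *testing.T) {
        Given(t, "two whole numbers 1 and 2", func(when When) {
            when("adding them together", func(it It) {
                sum := add(1, 2)
                it("should equal 3", func(expect Expect) {
                    expect(sum).ToEqual(3) // assertion helper name assumed
                })
            })
        })
    }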
shortinette is the core framework for managing and automating the process of grading coding bootcamps (Shorts). It provides a comprehensive set of tools for running and testing student submissions across various programming languages. The shortinette package is composed of several sub-packages, each responsible for a specific aspect of the grading pipeline:

`logger`: Handles logging for the framework, including general informational messages, error reporting, and trace logging for feedback on individual submissions. This package ensures that all important events and errors are captured for debugging and auditing purposes.

`requirements`: Validates the necessary environment variables and dependencies required by the framework. This includes checking for essential configuration values in a `.env` file and ensuring that all necessary tools (e.g., Docker images) are available before grading begins.

`testutils`: Provides utility functions for compiling and running code submissions. This includes functions for compiling Rust code, running executables with various options (such as timeouts and real-time output), and manipulating files. The utility functions are designed to handle the intricacies of running untrusted student code safely and efficiently.

`git`: Manages interactions with GitHub, including cloning repositories, managing collaborators, and uploading files. This package abstracts the GitHub API to simplify common tasks such as adding collaborators to repositories, creating branches, and pushing code or data to specific locations in a repository.

`exercise`: Defines the structure and behavior of individual coding exercises. This includes specifying the files that students are allowed to submit, the expected output, and the functions to be tested. The `exercise` package provides the framework for setting up exercises, running tests, and reporting results.

`module`: Organizes exercises into modules, allowing for the grouping of related exercises into a coherent curriculum. The `module` package handles the execution of all exercises within a module, aggregates results, and manages the overall grading process.

`webhook`: Enables automatic grading triggered by GitHub webhooks. This allows for a fully automated workflow where student submissions are graded as soon as they are pushed to a specific branch in a GitHub repository.

`short`: The central orchestrator of the grading process, integrating all sub-packages into a cohesive system. The `short` package handles the setup and teardown of grading environments, manages the execution of modules and exercises, and ensures that all results are properly recorded and reported.
withmock is a tool to assist in mocking code for testing. The basic idea is that you can mark import statements to indicate packages that should be mocked. Then, if you run your test via withmock, mock versions of the marked packages will be generated - and the tests will be run using those packages instead of the real ones. To mark an import for mocking, simply append a comment consisting of just the word mock to the end of the import line in the xxx_test.go file. So if we had the import statement: then we could mark the external package for mocking by changing it to: The mocking is not restricted to external packages, though often we want to keep access to the original package for use in the test code itself. So, keeping the same example, we might want to use a mock version of fmt in the code under test. So, now we change the import to: So, when run, the non-test code will be using the mocked fmt and external packages, and the test code will have the proper fmt, the mocked fmt as mockfmt, and the mocked external package under its own name (which we will assume is ext, for the purposes of this documentation). The generated mock code behaves much like the code generated by gomock's mockgen, particularly when dealing with methods on types, though there are some differences due to the whole-package nature. The first thing to do with a mocked package is to set the controller. This needs to be done before any mocked method or function is called or expectation is set - otherwise the generated code will cause a panic. To set the controller, we use the special mock object returned by the MOCK() function, and call its SetController method: Once you have set the controller, you can set your mock expectations, either using the EXPECT() function for function expectations, or the EXPECT() method for any method expectations. For example, if there was a type called UsefulType, and we were expecting its HandyMethod to be called - followed by a message printed indicating the result - we might set our expectations as follows: And then finally we can call our code under test, passing it our mocked UsefulType instance: And now we just need to wrap our call to "go test", so we run: and gomock and the Go testing framework will do the rest for us ... :D
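The inline samples referenced above are missing from this copy, so the following test sketch reconstructs the flow from the prose: a `// mock` marker on the import, SetController before anything else, then an expectation on the doc's own UsefulType/HandyMethod example. Import paths and the UsefulType construction are assumptions.

    // xxx_test.go
    package thing_test

    import (
        "testing"

        "code.google.com/p/gomock/gomock" // gomock path of the withmock era

        "github.com/somewhere/ext" // mock
    )

    func TestThing(t *testing.T) {
        ctrl := gomock.NewController(t)
        defer ctrl.Finish()

        // The controller must be set before any expectation or mocked call.
        ext.MOCK().SetController(ctrl)

        // Expect HandyMethod on a mocked UsefulType instance, as in the prose.
        // How the instance is constructed is an assumption here.
        u := &ext.UsefulType{}
        u.EXPECT().HandyMethod().Return(true)

        // ... then call the code under test, passing it u ...
    }

    // Run with the wrapped test command described above:
    //     withmock go test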
Package submodule offers a simple DI framework without going overboard. Each service/component in a system is likely to have dependencies on other services/components. Sometimes those dependencies create a tangible problem: to use just a small chunk of the system, you end up initializing the whole system. Submodule was born to solve this problem. Submodule requires you to provide the linkage between dependencies; in short, you need to define what you need. When a part of the system is initializing, submodule will resolve only the dependencies needed for that graph. By doing so, integration tests become easy: you keep all the benefits of default system wiring while refraining from initializing the whole system just to test a single service.
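This is not submodule's actual API — just a conceptual Go sketch of the idea it describes: each component declares its links, and resolving one component initializes only its transitive dependencies, which is what makes narrow integration tests cheap.

    package main

    import "fmt"

    type provider func(deps map[string]any) any

    // graph declares the linkage between components.
    var graph = map[string]struct {
        needs []string
        build provider
    }{
        "config": {nil, func(map[string]any) any { return "dsn://localhost" }},
        "db":     {[]string{"config"}, func(d map[string]any) any { return "db(" + d["config"].(string) + ")" }},
        "users":  {[]string{"db"}, func(d map[string]any) any { return "users->" + d["db"].(string) }},
    }

    // resolve initializes only the named component and its transitive deps.
    func resolve(name string, cache map[string]any) any {
        if v, ok := cache[name]; ok {
            return v
        }
        node := graph[name]
        deps := map[string]any{}
        for _, d := range node.needs {
            deps[d] = resolve(d, cache)
        }
        v := node.build(deps)
        cache[name] = v
        return v
    }

    func main() {
        // A test of "users" pulls in db and config, and nothing else.
        fmt.Println(resolve("users", map[string]any{}))
    }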
Package fx is a framework that makes it easy to build applications out of reusable, composable modules. Fx applications use dependency injection to eliminate globals without the tedium of manually wiring together function calls. Unlike other approaches to dependency injection, Fx works with plain Go functions: you don't need to use struct tags or embed special types, so Fx automatically works well with most Go packages. Basic usage is explained in the package-level example below. If you're new to Fx, start there! Advanced features, including named instances, optional parameters, and value groups, are explained under the In and Out types. To test functions that use the Lifecycle type or to write end-to-end tests of your Fx application, use the helper functions and types provided by the go.uber.org/fx/fxtest package.
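A minimal sketch of the shape of an Fx application — a plain constructor provided to the container, a lifecycle hook for startup/shutdown, and an Invoke to wire a handler:

    package main

    import (
        "context"
        "net"
        "net/http"

        "go.uber.org/fx"
    )

    // NewMux is a plain constructor; Fx injects fx.Lifecycle for us.
    func NewMux(lc fx.Lifecycle) *http.ServeMux {
        mux := http.NewServeMux()
        srv := &http.Server{Addr: ":8080", Handler: mux}
        lc.Append(fx.Hook{
            OnStart: func(ctx context.Context) error {
                ln, err := net.Listen("tcp", srv.Addr)
                if err != nil {
                    return err
                }
                go srv.Serve(ln) // serve in the background once started
                return nil
            },
            OnStop: func(ctx context.Context) error { return srv.Shutdown(ctx) },
        })
        return mux
    }

    func main() {
        fx.New(
            fx.Provide(NewMux),
            fx.Invoke(func(mux *http.ServeMux) {
                mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
                    w.WriteHeader(http.StatusOK)
                })
            }),
        ).Run()
    }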
Package pi provides the top-level repository for the GoPi interactive parser system. The code is organized into various sub-packages dealing with the different stages of parsing:

* pi: integrates all the parsing elements into the overall parser framework.
* langs: has the parsers for specific languages, including Go (of course), markdown and tex (the latter two are lexer-only).

Note that the GUI editor framework for creating and testing parsers is in the Gide package: https://github.com/goki/gide under the piv sub-package.
Package skogul is a framework for receiving, processing and forwarding data, typically metric data or event-oriented data, at high throughput. It is designed to be as agnostic as possible with regards to how it transmits data and how it receives it, and the processors in between need not worry about how the data got there or how it will be treated in the next chain. This means you can use Skogul to receive data on an influxdb-like line-based TCP interface and send it on to postgres - or influxdb - without having to write explicit support; just set up the chain. The guiding principles of Skogul are:

- Make as few assumptions as possible about how data is received
- Be stupid fast

End users should only need to worry about the cmd/skogul tool, which comes fully equipped with self-contained documentation. Adding new logic to Skogul should also be fairly easy. New developers should focus on understanding two things:

1. The skogul.Container data structure - which is the heart of Skogul.
2. The relationship from receiver to handler to sender.

The Container is documented in this very package. Receivers are where data originates within Skogul. The typical Receiver will receive data from the outside world, e.g. by other tools posting data to an HTTP endpoint. Receivers can also be used to "create" data, either test data or, for example, log data. When skogul starts, it will start all receivers that are configured. Handlers determine what is done with the data once received. They are responsible for parsing raw data and optionally transforming it. This is the only place where it is allowed to _modify_ data. Today, the only transformer is the "templater", which allows a collection of metrics which share certain attributes (e.g. all collected at the same time and from the same machine) to provide these shared attributes in a template which the "templater" transformer then applies to all metrics. Other examples of transformations that make sense are:

- Adding a metadata field
- Removing a metadata field
- Removing all but a specific set of fields
- Converting nested metrics to multiple metrics, or flattening them

Once a handler has done its deed, it sends the Container to the sender, and this is where "the fun begins", so to speak. Senders consist of just a data structure that implements the Send() interface. They are not allowed to change the container, but besides that, they can do "whatever". The most obvious example is to send the container to a suitable storage system - e.g., a time series database. So if you want to add support for a new time series database in Skogul, you will write a sender; a minimal sketch follows below. In addition to that, many senders serve only to add internal logic and pass data on to other senders. Each sender should only do one specific thing. For example, if you want to write data both to InfluxDB and MySQL, you need three senders: the "MySQL" and "InfluxDB" senders, and the "dupe" sender, which just takes a list of other senders and sends whatever it receives on to all of them. Today, Senders and Receivers both have an identical "Auto"-system, found in auto.go of the relevant directories. This is how the individual implementations are made discoverable to the configuration system, and how documentation is provided. Documentation for the settings of a sender/receiver is handled as struct tags. Once more parsers/transformers are added, they will likely also use a similar system.
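As a concrete illustration of the sender side, here is a minimal custom sender. It assumes the Sender interface is Send(*skogul.Container) error and that metrics expose Metadata and Data maps as described above; the import path is also an assumption, so check the package source for the exact shapes.

    package debugsender

    import (
        "fmt"

        "github.com/telenornms/skogul" // import path assumed
    )

    // Print writes every metric it receives to stdout. Per the rules above,
    // senders must not modify the container they are given.
    type Print struct{}

    func (p Print) Send(c *skogul.Container) error {
        for _, m := range c.Metrics {
            fmt.Printf("metadata=%v data=%v\n", m.Metadata, m.Data)
        }
        return nil
    }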
Copyright 2015 Realtime Framework.

The ortc package implements the Go version of the Realtime Messaging protocol. If your application has data that needs to be updated in the user’s interface as it changes (e.g. real-time stock quotes or an ever-changing social news feed), Realtime Messaging is the reliable, easy, unbelievably fast, “works everywhere” solution.

Installation:

Below are examples of use of the ortc package:

- Create a new instance of the ortc client:

    client, onConnected, onDisconnected, onException, onMessage, onReconnected, onReconnecting, onSubscribed, onUnsubscribed := ortc.NewOrtcClient()

- Using the channels received on the ortc client:

- Connect to an ortc server:

    client.Connect("YOUR_APPLICATION_KEY", "myToken", "GoApp", "http://ortc-developers.realtime.co/server/2.1", true, false)

- Disconnect from the ortc server:

    client.Disconnect()

- Disable presence on a channel:

    ch := make(chan ortc.PresenceType)

- Enable presence on a channel:

- Get presence on a channel:

- Save authentication:

    permissions := make(map[string][]authentication.ChannelPermissions)
    yellowPermissions := []authentication.ChannelPermissions{}
    yellowPermissions = append(yellowPermissions, authentication.Write)
    yellowPermissions = append(yellowPermissions, authentication.Presence)
    testPermissions := []authentication.ChannelPermissions{}
    testPermissions = append(testPermissions, authentication.Read)
    testPermissions = append(testPermissions, authentication.Presence)
    permissions["yellow:*"] = yellowPermissions
    permissions["test:*"] = testPermissions

- Send a message to a channel:

    client.Send("my_channel", "Hello World!")

- Subscribe to a channel:

    client.Subscribe("my_channel", true)

- Unsubscribe from a channel:

    client.Unsubscribe("my_channel")

More documentation about the Realtime Messaging service (ORTC) can be found at: http://messaging-public.realtime.co/documentation/starting-guide/overview.html
Package cue implements contextual logging with "batteries included". It has thorough test coverage and supports logging to stdout/stderr, file, syslog, and network sockets, as well as hosted third-party logging and error reporting services such as Honeybadger, Loggly, Opbeat, Rollbar, and Sentry. Cue uses atomic operations to compare logging calls to registered collector thresholds. This ensures no-op calls are performed quickly and without lock contention. On a 2015 MacBook Pro, no-op calls take about 16ns/call, meaning tens of millions of calls may be dispatched per second. Uncollected log calls are very cheap. Furthermore, collector thresholds may be altered dynamically at run-time, on a per-collector basis. If debugging logs are needed to troubleshoot a live issue, collector thresholds may be set to the DEBUG level for a short period of time and then restored to their original levels shortly thereafter. See the SetLevel function for details. Logging instances are created via the NewLogger function. A simple convention is to initialize an unexported package logger: Additional context information may be added to the package logger via the log.WithValue and log.WithFields methods: Depending on the collector and log format, output would look something like: Cue simplifies error reporting by logging the given error and message, and then returning the same error value. Hence you can return the log.Error/log.Errorf values in-line: Cue provides Collector implementations for popular error reporting services such as Honeybadger, Rollbar, Sentry, and Opbeat. If one of these collector implementations were registered, the above code would automatically open a new error report, complete with stack trace and context information from the logger instance. See the cue/hosted package for details. Finally, cue provides convenience methods for panic and recovery. Calling Panic or Panicf will log the provided message at the FATAL level and then panic. Calling Recover recovers from panics and logs the recovered value and message at the FATAL level. If a panic is triggered via a cue logger instance's Panic or Panicf methods, Recover recovers from the panic but only emits the single event from the Panic/Panicf method. Cue decouples event generation from event collection. Library and framework authors may generate log events without concern for the details of collection. Event collection is opt-in -- no collectors are registered by default. Event collection, if enabled, should be configured close to a program's main package/function, not by libraries. This gives the event subscriber complete control over the behavior of event collection. Collectors are registered via the Collect and CollectAsync functions. Each collector is registered for a given level threshold. The threshold for a collector may be updated at any time using the SetLevel function. Collect registers fully synchronous event collectors. Logging calls that match a synchronous collector's threshold block until the collector's Collect method returns successfully. This is dangerous if the Collector performs any operations that block or return errors. However, it's simple to use and understand: CollectAsync registers asynchronous collectors. It creates a buffered channel for the collector and starts a worker goroutine to service events. Logging calls return after queuing events to the collector channel. If the channel's buffer is full, the event is dropped and a drop counter is incremented atomically.
This ensures asynchronous logging calls never block. The worker goroutine detects changes in the atomic drop counter and surfaces drop events as collector errors. See the cue/collector docs for details on collector error handling. When asynchronous logging is enabled, Close must be called to flush queued events on program termination. Close is safe to call even if asynchronous logging isn't enabled -- it returns immediately if no events are queued. Note that ctrl+c and kill <pid> terminate Go programs without triggering cleanup code. When using asynchronous logging, it's a good idea to register signal handlers to capture SIGINT (ctrl+c) and SIGTERM (kill <pid>). See the os/signal package docs for details. By default, cue collects a single stack frame for any event that matches a registered collector. This ensures collectors may log the file name, package, and line number for any collected event. SetFrames may be used to alter this frame count, or disable frame collection entirely. See the SetFrames function for details. When using error reporting services, SetFrames should be used to increase the errorFrames parameter from the default value of 1 to a value that provides enough stack context to successfully diagnose reported errors. This example logs to both the terminal (stdout) and to file. If the program receives SIGHUP, the file will be reopened (for log rotation). Additional context is added via the .WithValue and .WithFields Logger methods. The formatting may be changed by passing a different formatter to either collector. See the cue/format godocs for details. The context data may also be formatted as JSON for machine parsing if desired. See cue/format.JSONMessage and cue/format.JSONContext. This example shows how to use error reporting services. This example shows quite a few of the cue features: logging to a file that reopens on SIGHUP (for log rotation), logging colored output to stdout, logging to syslog, and reporting errors to Honeybadger.
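Pulling the above together, here is a hedged sketch of the package-logger convention, in-line error returns, and a shutdown path that flushes asynchronous collectors. NewLogger, Fields, Error, and Close(timeout) follow the descriptions in this document; the import path and exact signatures are assumptions.

    package main

    import (
        "os"
        "os/signal"
        "syscall"
        "time"

        "github.com/bobziuchkovski/cue" // import path assumed
    )

    // Unexported package logger, per the convention described above.
    var log = cue.NewLogger("myapp/main")

    func fetch(userID string) error {
        clog := log.WithFields(cue.Fields{"user": userID})
        clog.Info("fetching user")
        if err := doFetch(userID); err != nil {
            // Error logs the message and returns the same error value,
            // so it can be returned in-line.
            return clog.Error(err, "fetch failed")
        }
        return nil
    }

    func doFetch(string) error { return nil }

    func main() {
        // ... register collectors with cue.Collect / cue.CollectAsync here ...

        sig := make(chan os.Signal, 1)
        signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)

        _ = fetch("42")

        <-sig
        cue.Close(5 * time.Second) // flush queued events before exit
    }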
Package mspec is a BDD context/specification testing package for Go(Lang) with a strong emphasis on spec'ing your feature(s) and scenarios first, before any code is written, using as little syntax noise as possible. This leaves you free to think of your project and features as a whole without the distraction of writing any code, with the added benefit of having tests ready for your project. The source documentation is held at https://godoc.org/github.com/ddspog/mspec (where else?).

* Uses natural language (Given/When/Then)
* Stubbing
* Human-readable outputs
* HTML output (coming soon)
* Use custom Assertions
* Configuration options
* Uses Testify's rich assertions
* Uses Go's built-in testing.T package

Install it with one line of code: There are no external dependencies and it is built against Go's internal packages. The only dependency is that you have [GOPATH set up normally](https://golang.org/doc/code.html). Create a new file to hold your specs. Using Dan North's original BDD definitions, you spec code using the Given/When/Then storyline, similar to: But this is just a static example. Let's take a real example from one of my projects: You represent these thoughts in code like this: Note that `Given`, `when` and `it` all have optional variadic parameters. This allows you to spec things out in as little or as much detail as you want. That's it. Now run it: Print it out and stick it on your office door for everyone to see what you are working on. This is actually colored output in Terminal: It is not uncommon to go back and tweak your stories over time as you talk with your domain experts, modifying exactly the scenarios and specifications that should happen. `GoMSpec` is a testing package for Go that extends Go's built-in testing package. It is modeled after the BDD Feature Specification story workflow such as: Currently it has an included `Expectation` struct that mimics basic assertion behaviors. Future plans may allow for custom assertion packages (like testify). Getting it. Importing it. Writing Specs. Testing it. Which outputs the following: Nice, eh? There is nothing like using a testing package to test itself. There is some nice rich information available.

## Examples

Be sure to check out more examples in the examples/ folder. Or just open the files and take a look. That's the most important part anyways. When evaluating several BDD frameworks, [Pranavraja's Zen](https://github.com/pranavraja/zen) package for Go came close - really close; but it was lacking the more "story" overview I've been accustomed to over the years with [Machine.Specifications](https://github.com/machine/machine.specifications) in C# (.NET land). Do note that there is something to be said for simple testing in Go (and simple coding); therefore, if you are the type to keep it short and sweet and just code, then you may want to use Pranavraja's framework, as it is just the context (Desc) and specs writing. I forked his code and submitted a few bug tweaks at first. But along the way, I started to have grand visions of my soul mate [Machine.Specifications](https://github.com/machine/machine.specifications) (which is called MSpec for short) for BDD testing. The ease of defining complete stories right down to the scenarios without having to implement them intrigued me in C#. It freed me from worrying about implementation details and let me focus on the feature I was writing: What did it need to do? What context was I given to start with? What should it do?
So while using Pranavraja's Zen framework, I kept asking myself: Could I bring those MSpec practices to Go, using a bare-bones framework? Ok, done. And since it was so heavily inspired by Aaron's MSpec project, I kept the name going here: `GoMSpec`. While keeping backwards compatibility with his existing Zen framework, I defined several goals for this package:

* Had to stay simple with Given/When/Then definitions. No complex coding.
* Keep the low syntax noise from the existing Zen package.
* I had to be able to write features, scenarios and specs with no implementation details needed.

That last goal is key and, I think, speaks truly about what BDD is: focus on the story, feature and/or context you are designing - focus on the Behavior! I tended to design my C# code using Machine.Specifications in this BDD style by writing entire stories and grand specs up front - designing the system I was building, or the feature I was extending. In C# land, it's not unheard of for me to hit 50 to 100 specs across a single feature and a few different contexts in an hour or two, before writing any code. At that point, I had everything planned out pretty much the way it should behave. So with this framework, I came up with a simple method name, `NA()`, to keep the syntax noise down. Therefore, you are free to code specs with just a little syntax noise:
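Since the trailing code sample is missing here, this is a hedged sketch of the NA() stub described above — specs written with no implementation, to be filled in later. Names and signatures are assumptions drawn from the prose.

    package feature_test

    import (
        "testing"

        . "github.com/ddspog/mspec" // import path assumed from the godoc link above
    )

    func Test_Upcoming_Feature(t *testing.T) {
        Given(t, "a user with an expired session", func(when When) {
            when("they request a protected page", func(it It) {
                // NA() marks specs that have no implementation yet.
                it("should redirect to the login page", NA())
                it("should preserve the requested URL", NA())
            })
        })
    }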
Package toml provides facilities for decoding and encoding TOML configuration files via reflection. There is also support for delaying decoding with the Primitive type, and querying the set of keys in a TOML document with the MetaData type. The specification implemented: https://github.com/toml-lang/toml The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify whether a file is a valid TOML document. It can also be used to print the type of each key in a TOML document. There are two important types of tests used for this package. The first is contained inside '*_test.go' files and uses the standard Go unit testing framework. These tests are primarily devoted to holistically testing the decoder and encoder. The second type of testing is used to verify the implementation's adherence to the TOML specification. These tests have been factored into their own project: https://github.com/BurntSushi/toml-test The reason the tests are in a separate project is so that they can be used by any implementation of TOML. Namely, it is language agnostic. Example StrictDecoding shows how to detect whether there are keys in the TOML document that weren't decoded into the value given. This is useful for returning an error to the user if they've included extraneous fields in their configuration. Example UnmarshalTOML shows how to implement a struct type that knows how to unmarshal itself. The struct must take full responsibility for mapping the values passed into the struct. The method may be used with interfaces in a struct in cases where the actual type is not known until the data is examined. Example Unmarshaler shows how to decode TOML strings into your own custom data type.
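A minimal decoding example using toml.Decode, which also returns the MetaData mentioned above:

    package main

    import (
        "fmt"

        "github.com/BurntSushi/toml"
    )

    type config struct {
        Title string
        Owner struct {
            Name string
        }
    }

    func main() {
        doc := `
    title = "example"

    [owner]
    name = "Anna"
    `
        var conf config
        // Decode returns MetaData, which can be used to inspect which keys
        // were present and which were left undecoded (strict decoding).
        meta, err := toml.Decode(doc, &conf)
        if err != nil {
            panic(err)
        }
        fmt.Println(conf.Title, conf.Owner.Name, meta.Undecoded())
    }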
Multitemplate is a library that allows you to write html templates in multiple languages, then lets those templates work with each other using either a Rails-like yield/content_for paradigm or a Django-style block/extend paradigm. Multitemplate is built on html/template, so it gets all of the auto-escaping logic from that library. Multitemplate at the moment has 3 languages: the standard Go template syntax, a simplified haml-like language called bham, and a simple mustache implementation. Multitemplate has an open interface for creating new syntaxes, so external languages can be easily used. Yields execute saved templates or blocks. You can add a fallback template to yields, but not fallback content. Yielding with a name will return the first template set for that name, or the content of the first block that had that name. ContentFor will set a template to be executed on a name. This is similar to the template command built in to the go template library, but with a layer of indirection. Blocks are template content that is executed, then saved for a later time. Blocks share names with ContentFor and yields, so a yield might output the content from a block, or a template set with ContentFor. Inherited templates are templates that use the inherits function to declare that, after the template is executed, another template should be executed as well. These templates should only be made up of non-writing functions and blocks. Contexts are the way to use the more advanced features of the multitemplate library. With Contexts, you can set two templates to be executed: a Main template (executed first) and a Layout template (executed after Main). You can also set Templates for Yields and Block content. Since you can't pass RenderArgs in the ExecuteContext, you should put your RenderArgs in the Dot variable. Layouts are templates executed after a Main template. Contexts are the way to define Layouts to be executed. The Main template should set up content that can then be yielded using the Main template. Yielding without a name will cause the main template's content to be output. While multitemplate is available to use as a library in all Go applications, it also includes integration libraries that will integrate multitemplate into external frameworks without requiring the user to learn how to integrate this library. The Revel integration is (as far as I know) a drop-in replacement for the Revel template library. Instructions on how to integrate are available in the godoc for the github.com/acsellers/multitemplate/revel subdirectory, while the integration code is in github.com/acsellers/multitemplate/revel/app/controllers due to how revel deals with modules. The following code demonstrates the common types of yield statements. The first yield will render the template assigned to the stylesheets key, or render the template "include/javascripts.html" if there is not a set argument. The second yield will render the template set on "stylesheets" with the .Stylesheets as the data. The third will render the template set for "sidebar" with the data originally put into ExecuteTemplate. The fourth yield will render the main template. This will be the template set for the Main attribute on the Context struct in this case. app_controller.go layouts/main.html The following code describes two templates that use the inherits function to utilize template inheritance. It works similarly to the yield example.
Note that the inherits call should be the first call of the view; any code before the inherits call may be sent to the writer added to the Execute call. In the interest of security, you should only enclose multiple continuous lines of similar types of content inside a block. The reason for this is that blocks may end up with different escaping rules when originally executed and saved versus where they are output. While multitemplate will catch many cases where a block is asked to be output to a location that is under different escaping rules than the block's original rules, you should still be careful. app_controller.go layouts/main.html app/index.html As both yields and blocks are built on the same underlying mechanisms, they can be combined in interesting ways. Implementation-wise, blocks are like yields that have embedded fallback content, while yields have to have separate template fallbacks. Both Layouts and the Main template can extend other templates. app_controller.go app/index.html layouts/main.html

The template functions are:

- yield: allows for rendering template aliases, or simply rendering nothing. Rendering the Main template without a Main template set is an error.
- content_for: allows you to set a template to be rendered in a block or yield from within a template.
- block: saves content inside the current template to a key. That key can be recalled using yield, or by another block with the same key in the final template. Keys are claimed by the first block to render to them.
- end_block: ends the content area started by block.
- extend: marks that the current template is made up of blocks that will be executed in the context of another template. Template inheritance can be carried to arbitrary levels; you are not limited to using extend only once in a template execution.
- root_dot: the original RenderArgs passed in to the ExecuteTemplate call.
- exec: executes an arbitrary template with the passed name and data.
- fallback: sets a specific template to be rendered in case a yield call finds that there is no content set for the key of the yield.

yield . is ambiguous when dot is currently a string. It could be either a request to output a pre-set template or block, or to render the main template with the dot as the data. Assigning the same key in a Context for both Yields and Content means that the Content will be ignored. Calling content_for and block (in templates) with the same key has lock-out protection within the template functions. In this case, we will use the template named in the Yields map. Within the templates, the rule is: the first to claim the key wins. Any integrations that hide the Context will operate under the assumption that the last claim before template execution should win. Getting an error about a stack overflow during template execution is most likely a template that is yielding itself. The block function has two related functions, define_block and exec_block. If you need to define a block, even when it would normally execute the block (for instance, if you are in the main layout, and wish to ensure a block exists before yielding or executing a template), define_block will save the content of the block and not output the content. This will not override any content already saved for that block name. exec_block is the reverse function: it will force the block to be executed. If you need to start the main template with a block, and you are using a template, exec_block will cause the block to be executed correctly. You should use the standard block call in nearly all situations. A sketch of how these functions fit together follows below.
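The exact template syntax is not shown in this copy of the docs, so the following layout/view pair is only a guess at how yield, content_for, block, and end_block compose, using the function names listed above; consult integration_test.go for the authoritative forms.

    layouts/main.html:

        <head>{{yield "stylesheets"}}</head>
        <body>
          <div id="sidebar">{{yield "sidebar"}}</div>
          {{yield}}  <!-- no name: renders the Main template -->
        </body>

    app/index.html:

        {{content_for "sidebar" "app/sidebar.html"}}
        {{block "stylesheets"}}<link rel="stylesheet" href="app.css">{{end_block}}
        <h1>Main content</h1>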
There are tests in integration_test.go that spell out how yields and blocks interact in all the situations that I could think of. The test cases are spelled out with minimal templates, names for each test case, and a description of the situation that the case is testing. Bham is a beta-quality library. I've tried to fix the bugs that I'm aware of, but I'm sure there are more lurking out there. The Mustache implementation here is alpha quality; it's low on the totem pole for improvements. The first release is 0.1, which has bham and html/template available as first-class languages. Blocks and yields are supported, along with layouts. The second release is 0.5, which adds the helpers library and the super_block and main_block functions, along with a whole bunch of new tests for yields, blocks and their interactions. The third release will be either a 0.6 or a 1.0, adding things I forgot, fixing discovered bugs, and fixing things that need to be fixed. Mustache will get integration tests, function calling, and blocks. If there are relatively few bugs to fix, then this will be 1.0. Releases after 1.0 will add functions or languages. Template languages I'm interested in investigating are: jade/slim, full haml, some sort of lispy thing, handlebars, jinja2, and Razor.
Package spf implements an SPF checker to evaluate whether or not an email message passes a published SPF (Sender Policy Framework) policy. It implements all of the SPF checker protocol as described in RFC 7208, including macros and PTR checks, and passes 100% of the openspf and pyspf test suites. A DNS stub resolver is included, but can be replaced by anything that implements the spf.Resolver interface. The Hook interface can be used to hook into the check_host function to see more details about why a policy passes or fails.
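Typical usage from an SMTP server might look like the sketch below; the function name CheckHostWithSender, its signature, and the import path are assumptions, so check the package godoc before relying on them.

    package main

    import (
        "fmt"
        "net"

        "blitiri.com.ar/go/spf" // import path assumed
    )

    func main() {
        // Evaluate the published policy for the sender's domain against
        // the connecting IP and HELO name.
        ip := net.ParseIP("192.0.2.10")
        result, err := spf.CheckHostWithSender(ip, "mail.example.org", "user@example.org")
        fmt.Println(result, err) // e.g. pass, fail, softfail, neutral, ...
    }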
Package p9p implements a compliant 9P2000 client and server library for use in modern, production Go services. This package differentiates itself in that it has departed from the Plan 9 implementation primitives and better follows idiomatic Go style. The package revolves around the session type, which is an enumeration of raw 9p message calls. A few calls, such as flush and version, have been elided, deferring their usage to the server implementation. Sessions can be trivially proxied through clients and servers. The best place to get started is with Serve. Serve can be provided a connection and a handler. A typical implementation will call Serve as part of a listen/accept loop. As each network connection is created, Serve can be called with a handler for the specific connection. The handler can be implemented with a Session via the Dispatch function or can generate sessions for dispatch in response to client messages. (See cmd/9ps for an example.) On the client side, NewSession provides a 9p session from a connection. After a version negotiation, methods can be called on the session, in parallel, and calls will be sent over the connection. Call timeouts can be controlled via the context provided to each method call. This package has the beginnings of a nice client-server framework for working with 9p. Some of the abstractions aren't entirely fleshed out, but most of this can center around the Handler. Missing from this are a number of tools for implementing 9p servers. The most glaring are directory read and walk helpers. Other, more complex additions might be a system to manage in-memory filesystem trees that expose multi-user sessions. The largest difference between this package and other 9p packages is the simplification of the types needed to implement a server. To avoid confusing bugs and odd behavior, the components are separated by each level of the protocol. One example is that requests and responses are separated and they no longer hold mutable state. This means that framing, transport management, encoding, and dispatching are componentized. Little work will be required to swap out encodings, transports or connection implementations. This package has been wired from top to bottom to support context-based resource management. Everything from startup to shutdown can have timeouts using contexts. Not all close methods are fully in place, but we are very close to having controlled, predictable cleanup for both servers and clients. Timeouts can be very granular or very coarse, depending on the context of the timeout. For example, it is very easy to set a short timeout for a stat call but a long timeout for reading data. Currently, there is no multiversion support. The hooks and functionality are in place to add multi-version support. Generally, the correct place to do this is in the codec. Types, such as Dir, simply need to be extended to support the possibility of extra fields. The real question to ask here is what the role of the version number is in the 9p protocol. It really comes down to the level of support required. Do we just need it at the protocol level, or do handlers and sessions need to behave differently based on negotiated versions? This package has a number of TODOs to make it easier to use. Most of the existing code provides a solid base to work from. Don't be discouraged by the sawdust. In addition, the testing is embarrassingly lacking. With time, we can get full testing going and ensure we have confidence in the implementation.
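A server-side sketch of the listen/accept loop described above. The import path and exact Serve/Handler signatures are assumptions; see cmd/9ps in the repository for the real thing.

    package main

    import (
        "context"
        "log"
        "net"

        p9p "github.com/docker/go-p9p" // import path assumed
    )

    // handler would be built elsewhere, e.g. via the Dispatch function
    // around a Session implementation, as described above (assumed).
    var handler p9p.Handler

    func main() {
        ln, err := net.Listen("tcp", ":5640")
        if err != nil {
            log.Fatal(err)
        }
        for {
            conn, err := ln.Accept()
            if err != nil {
                log.Fatal(err)
            }
            go func(c net.Conn) {
                defer c.Close()
                // Serve handles the 9p conversation for this connection.
                p9p.Serve(context.Background(), c, handler)
            }(conn)
        }
    }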
Package core contains compatibility testing framework core types and functions
Package micro is a pluggable framework for microservices