Package regexp implements regular expression search. The syntax of the regular expressions accepted is the same general syntax used by Perl, Python, and other languages. More precisely, it is the syntax accepted by RE2 and described at https://golang.org/s/re2syntax, except for \C. For an overview of the syntax, run 'go doc regexp/syntax'. The regexp implementation provided by this package is guaranteed to run in time linear in the size of the input. (This is a property not guaranteed by most open source implementations of regular expressions.) For more information about this property, see https://swtch.com/~rsc/regexp/regexp1.html or any book about automata theory. All characters are UTF-8-encoded code points. There are 16 methods of Regexp that match a regular expression and identify the matched text. Their names are matched by this regular expression: Find(All)?(String)?(Submatch)?(Index)?. If 'All' is present, the routine matches successive non-overlapping matches of the entire expression. Empty matches abutting a preceding match are ignored. The return value is a slice containing the successive return values of the corresponding non-'All' routine. These routines take an extra integer argument, n. If n >= 0, the function returns at most n matches/submatches; otherwise, it returns all of them. If 'String' is present, the argument is a string; otherwise it is a slice of bytes; return values are adjusted as appropriate. If 'Submatch' is present, the return value is a slice identifying the successive submatches of the expression. Submatches are matches of parenthesized subexpressions (also known as capturing groups) within the regular expression, numbered from left to right in order of opening parenthesis. Submatch 0 is the match of the entire expression, submatch 1 the match of the first parenthesized subexpression, and so on. If 'Index' is present, matches and submatches are identified by byte index pairs within the input string: result[2*n:2*n+2] identifies the indexes of the nth submatch. The pair for n==0 identifies the match of the entire expression. If 'Index' is not present, the match is identified by the text of the match/submatch. If an index is negative or text is nil, it means that subexpression did not match any string in the input. For 'String' versions an empty string means either no match or an empty match. There is also a subset of the methods that can be applied to text read from a RuneReader: This set may grow. Note that regular expression matches may need to examine text beyond the text returned by a match, so the methods that match text from a RuneReader may read arbitrarily far into the input before returning. (There are a few other methods that do not match this pattern.)
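To make the naming scheme concrete, here is a small sketch (the pattern and input are invented for illustration; the methods are this package's documented API):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        re := regexp.MustCompile(`(\w+):(\d+)`) // two capturing groups

        // FindString: the text of the leftmost match only.
        fmt.Println(re.FindString("a:1 b:2")) // "a:1"

        // FindAllStringSubmatch: every match, each with submatches 0..2.
        fmt.Println(re.FindAllStringSubmatch("a:1 b:2", -1)) // [[a:1 a 1] [b:2 b 2]]

        // FindStringSubmatchIndex: byte-index pairs, where
        // result[2*n:2*n+2] identifies submatch n.
        fmt.Println(re.FindStringSubmatchIndex("a:1")) // [0 3 0 1 2 3]
    }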
Command pigeon generates parsers in Go from a PEG grammar. This version is a fork: https://github.com/fy0/pigeon. From Wikipedia [0]: Its features and syntax are inspired by the PEG.js project [1], while the implementation is loosely based on [2]. The formal presentation of the PEG theory by Bryan Ford is also an important reference [3]. An introductory blog post can be found at [4]. The pigeon tool must be called with PEG input as defined by the accepted PEG syntax below. The grammar may be provided by a file or read from stdin. The generated parser is written to stdout by default. The following options can be specified: If the code blocks in the grammar (see below, section "Code block") are golint- and go vet-compliant, then the resulting generated code will also be golint- and go vet-compliant. The generated code doesn't use any third-party dependency unless code blocks in the grammar require such a dependency. The accepted syntax for the grammar is formally defined in the grammar/pigeon.peg file, using the PEG syntax. What follows is an informal description of this syntax. Identifiers, whitespace, comments and literals follow the same notation as the Go language, as defined in the language specification (http://golang.org/ref/spec#Source_code_representation): The grammar must be Unicode text encoded in UTF-8. New lines are identified by the \n character (U+000A). Space (U+0020), horizontal tabs (U+0009) and carriage returns (U+000D) are considered whitespace and are ignored except to separate tokens. A PEG grammar consists of a set of rules. A rule is an identifier followed by a rule definition operator and an expression. An optional display name - a string literal used in error messages instead of the rule identifier - can be specified after the rule identifier. E.g.: The rule definition operator can be any one of these: A rule is defined by an expression. The following sections describe the various expression types. Expressions can be grouped by using parentheses, and a rule can be referenced by its identifier in place of an expression. The choice expression is a list of expressions that will be tested in the order they are defined. The first one that matches will be used. Expressions are separated by the forward slash character "/". E.g.: Because the first match is used, it is important to think about the order of expressions. For example, in this rule, "<=" would never be used because the "<" expression comes first: The sequence expression is a list of expressions that must all match in that same order for the sequence expression to be considered a match. Expressions are separated by whitespace. E.g.: A labeled expression consists of an identifier followed by a colon ":" and an expression. A labeled expression introduces a variable named with the label that can be referenced in the code blocks in the same scope. The variable will have the value of the expression that follows the colon. E.g.: The variable is typed as an empty interface, and the underlying type depends on the following: For terminals (character and string literals, character classes and the any matcher), the value is []byte. E.g.: For predicates (& and !), the value is always nil. E.g.: For a sequence, the value is a slice of empty interfaces, one for each expression value in the sequence. The underlying types of each value in the slice follow the same rules described here, recursively. E.g.: For a repetition (+ and *), the value is a slice of empty interfaces, one for each repetition.
The underlying types of each value in the slice follow the same rules described here, recursively. E.g.: For a choice expression, the value is that of the matching choice. E.g.: For the optional expression (?), the value is nil or the value of the expression. E.g.: Of course, the type of the value can be anything once an action code block is used. E.g.: An expression prefixed with the ampersand "&" is the "and" predicate expression: it is considered a match if the following expression is a match, but it does not consume any input. An expression prefixed with the exclamation point "!" is the "not" predicate expression: it is considered a match if the following expression is not a match, but it does not consume any input. E.g.: The expression following the & and ! operators can be a code block. In that case, the code block must return a bool and an error. The operator's semantics are the same: & is a match if the code block returns true, ! is a match if the code block returns false. The code block has access to any labeled value defined in its scope. E.g.: An expression followed by "*", "?" or "+" is a match if the expression occurs zero or more times ("*"), zero or one time ("?") or one or more times ("+") respectively. The match is greedy; it will match as many times as possible. E.g.: A literal matcher tries to match the input against a single character or a string literal. The literal may be a single-quoted single character, a double-quoted string or a backtick-quoted raw string. The same rules as in Go apply regarding the allowed characters and escapes. The literal may be followed by a lowercase "i" (outside the ending quote) to indicate that the match is case-insensitive. E.g.: A character class matcher tries to match the input against a class of characters inside square brackets "[...]". Inside the brackets, characters represent themselves and the same escapes as in string literals are available, except that the single- and double-quote escape is not valid, instead the closing square bracket "]" must be escaped to be used. Character ranges can be specified using the "[a-z]" notation. Unicode classes can be specified using the "[\pL]" notation, where L is a single-letter Unicode class of characters, or using the "[\p{Class}]" notation where Class is a valid Unicode class (e.g. "Latin"). As for string literals, a lowercase "i" may follow the matcher (outside the ending square bracket) to indicate that the match is case-insensitive. A "^" as first character inside the square brackets indicates that the match is inverted (it is a match if the input does not match the character class matcher). E.g.: The any matcher is represented by the dot ".". It matches any character except the end of file, thus the "!." expression is used to indicate "match the end of file". E.g.: Code blocks can be added to generate custom Go code. There are three kinds of code blocks: the initializer, the action and the predicate. All code blocks appear inside curly braces "{...}". The initializer must appear first in the grammar, before any rule. It is copied as-is (minus the wrapping curly braces) at the top of the generated parser. It may contain function declarations, types, variables, etc., just like any Go file. Every symbol declared here will be available to all other code blocks. Although the initializer is optional in a valid grammar, it is usually required to generate a valid Go source code file (for the package clause). E.g.: Action code blocks are code blocks declared after an expression in a rule.
Those code blocks are turned into a method on the "*current" type in the generated source code. The method receives any labeled expression's value as argument (as any) and must return two values, the first being the value of the expression (an any), and the second an error. If a non-nil error is returned, it is added to the list of errors that the parser will return. E.g.: Predicate code blocks are code blocks declared immediately after the and "&" or the not "!" operators. Like action code blocks, predicate code blocks are turned into a method on the "*current" type in the generated source code. The method receives any labeled expression's value as argument (as any) and must return two values, the first being a bool and the second an error. If a non-nil error is returned, it is added to the list of errors that the parser will return. E.g.: State change code blocks are code blocks starting with "#". In contrast to action and predicate code blocks, state change code blocks are allowed to modify values in the global "state" store (see below). State change code blocks are turned into a method on the "*current" type in the generated source code. The method is passed any labeled expression's value as an argument (of type any) and must return a value of type error. If a non-nil error is returned, it is added to the list of errors that the parser will return. Note that the parser does NOT backtrack if a non-nil error is returned. E.g.: The "*current" type is a struct that provides four useful fields that can be accessed in action, state change, and predicate code blocks: "pos", "text", "state" and "globalStore". The "pos" field indicates the current position of the parser in the source input. It is itself a struct with three fields: "line", "col" and "offset". Line is a 1-based line number, col is a 1-based column number that counts runes from the start of the line, and offset is a 0-based byte offset. The "text" field is the slice of bytes of the current match. It is empty in a predicate code block. The "state" field is a global store, with backtrack support, of type "map[string]any". The values in the store are tied to the parser's backtracking: in particular, if a rule fails to match, all updates to the state that occurred in the process of matching the rule are rolled back. For a key-value store that is not tied to the parser's backtracking, see the "globalStore". The values in the "state" store are available for read access in action and predicate code blocks; any changes made to the "state" store will be reverted once the action or predicate code block is finished running. To update values in the "state", use state change code blocks ("#{}"). IMPORTANT: The "globalStore" field is a global store of type "map[string]any", which allows storing arbitrary values that are available in action and predicate code blocks for both read and write access. Note that the global store is completely independent of PEG's backtracking mechanism and is therefore not rolled back during backtracking. The global store may be initialized using the GlobalStore function (http://godoc.org/github.com/mna/pigeon/test/predicates#GlobalStore). Be aware that all keys starting with "_pigeon" are reserved for internal use by pigeon and should not be used or modified. Those keys are treated as internal implementation details and therefore no guarantees are given regarding their API stability.
With the option -support-left-recursion, pigeon supports left recursion. E.g.: Indirect recursion is also supported: The implementation is based on the [Left-recursive PEG Grammars][9] article, which links to the [Left Recursion in Parsing Expression Grammars][10] and [Packrat Parsers Can Support Left Recursion][11] papers. References: pigeon supports an extension of the classical PEG syntax called failure labels, proposed by Maidl et al. in their paper "Error Reporting in Parsing Expression Grammars" [7]. The syntax used for the introduced expressions is borrowed from their lpeglabel [8] implementation. This extension makes it possible to signal different kinds of errors and to specify which recovery pattern should handle a given label. With labeled failures it is possible to distinguish between an ordinary failure and an error. Usually, an ordinary failure is produced when the matching of a character fails, and this failure is caught by ordered choice. An error (a non-ordinary failure), in turn, is produced by the throw operator and may be caught by the recovery operator. In pigeon, the recovery expression consists of the regular expression, the recovery expression and a set of labels to be matched. First, the regular expression is tried. If this fails with one of the provided labels, the recovery expression is tried. If this fails as well, the error is propagated. E.g.: To signal a failure condition, the throw expression is used. E.g.: For concrete examples of how to use throw and recover, have a look at the examples "labeled_failures" and "thrownrecover" in the "test" folder. The implementation of the throw and recover operators works as follows: The failure recover expression adds the recover expression for every failure label to the recovery stack and runs the regular expression. The throw expression checks the recovery stack in reverse order for the provided failure label. If the label is found, the respective recovery expression is run. If this expression is successful, the parser continues the processing of the input. If the recovery expression is not successful, the parsing fails and the parser starts to backtrack. If throw and recover expressions are used together with global state, it is the responsibility of the author of the grammar to reset the global state to a valid state during the recovery operation. The parser generated by pigeon exports a few symbols so that it can be used as a package with public functions to parse input text. The exported API is: See the godoc page of the generated parser for the test/predicates grammar for an example documentation page of the exported API: http://godoc.org/github.com/mna/pigeon/test/predicates. Like the grammar used to generate the parser, the input text must be UTF-8-encoded Unicode. The start rule of the parser is the first rule in the PEG grammar used to generate the parser. A call to any of the Parse* functions returns the value generated by executing the grammar on the provided input text, and an optional error. Typically, the grammar should generate some kind of abstract syntax tree (AST), but for simple grammars it may evaluate the result immediately, such as in the examples/calculator example. There are no constraints imposed on the author of the grammar; it can return whatever is needed. When the parser returns a non-nil error, the error is always of type errList, which is defined as a slice of errors ([]error). Each error in the list is of type *parserError. This is a struct that has an "Inner" field that can be used to access the original error.
So if a code block returns some well-known error like: The original error can be accessed this way: By default the parser will continue after an error is returned and will accumulate all errors found during parsing. If the grammar reaches a point where it shouldn't continue, a panic statement can be used to terminate parsing. The panic will be caught at the top-level of the Parse* call and will be converted into a *parserError like any error, and an errList will still be returned to the caller. The divide-by-zero error in the examples/calculator grammar leverages this feature (no special code is needed to handle division by zero; if it happens, the runtime panics and it is recovered and returned as a parsing error). Providing good error reporting in a parser is not a trivial task. Part of it is provided by the pigeon tool, by offering features such as filename, position, expected literals and rule name in the error message, but an important part of good error reporting needs to be done by the grammar author. For example, many programming languages use double-quotes for string literals. Usually, if the opening quote is found, the closing quote is expected, and if none is found, no other rule will match. There's no need to backtrack and try other choices; an error should be added to the list and the match should be consumed. In order to do this, the grammar can look something like this: This is just one example, but it illustrates the idea that error reporting needs to be thought out when designing the grammar. Because the above-mentioned error types (errList and parserError) are not exported, additional steps have to be taken if the generated parser is used as a library package in other packages (e.g. if the same parser is used in multiple command line tools). One possible implementation for exported errors (based on interfaces) and customized error reporting (caret-style formatting of the position where the parsing failed) is available in the json example and its command line tool: http://godoc.org/github.com/mna/pigeon/examples/json Generated parsers have user-provided code mixed with pigeon code in the same package, so there is no package boundary in the resulting code to prevent access to unexported symbols. What is meant to be an implementation detail in pigeon is also available to user code - which doesn't mean it should be used. For this reason, it is important to precisely define what is intended to be the supported API of pigeon, the parts that will be stable in future versions. The "stability" of the version 1.0 API attempts to make a similar guarantee as the Go 1 compatibility [5]. The following lists what part of the current pigeon code falls under that guarantee (features may be added in the future): The pigeon command-line flags and arguments: those will not be removed and will maintain the same semantics. The explicitly exported API generated by pigeon. See [6] for the documentation of this API on a generated parser. The PEG syntax, as documented above. The code blocks (except the initializer) will always be generated as methods on the *current type, and this type is guaranteed to have the fields pos (type position) and text (type []byte). There are no guarantees on other fields and methods of this type. The position type will always have the fields line, col and offset, all defined as int. There are no guarantees on other fields and methods of this type.
The type of the error value returned by the Parse* functions, when not nil, will always be errList defined as a []error. There are no guarantees on methods of this type, other than the fact it implements the error interface. Individual errors in the errList will always be of type *parserError, and this type is guaranteed to have an Inner field that contains the original error value. There are no guarantees on other fields and methods of this type. The above guarantee is given to the version 1.0 (https://github.com/mna/pigeon/releases/tag/v1.0.0) of pigeon, which has entered maintenance mode (bug fixes only). The current master branch includes the development toward a future version 2.0, which intends to further improve pigeon. While the given API stability should be maintained as far as it makes sense, breaking changes may be necessary to be able to improve pigeon. The new version 2.0 API has not yet stabilized and therefore changes to the API may occur at any time. References:
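To make the error-handling description above concrete, here is a hedged sketch. It assumes it is compiled inside the package of a pigeon-generated parser, where ParseFile, errList and *parserError exist as documented; the input file name is invented:

    package parser // a hypothetical pigeon-generated package

    import (
        "fmt"
        "io"
    )

    func handleErrors() {
        _, err := ParseFile("input.txt") // generated Parse* function
        if err != nil {
            list := err.(errList) // a non-nil error is always an errList
            for _, e := range list {
                pe := e.(*parserError) // each element wraps the original error
                if pe.Inner == io.EOF { // the well-known error from a code block
                    fmt.Println("unexpected EOF while parsing")
                }
            }
        }
    }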
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming Http Requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle Http requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...) and restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request,response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request,response pair to the next filter or RouteFunction Container filters are processed before any registered WebService. WebService filters are processed before any Route of a WebService. Route filters are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS Http request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available; in that case, use http.StatusNotFound. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. The request has a valid URL but the method (GET,PUT,POST,...) is not allowed; use http.StatusMethodNotAllowed. The request does not have or has an unknown Accept Header set for this operation; use http.StatusNotAcceptable. The request does not have or has an unknown Content-Type Header set for this operation; use http.StatusUnsupportedMediaType. In addition to setting the correct (error) Http status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance is even more improved when using a preloaded cache. You can also inject your own implementation. This package has the means to produce detailed logging of the complete Http request matching process and filter invocation. Enabling this feature requires you to set an implementation of restful.StdLogger (e.g. a log.Logger instance), such as: The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple. (c) 2012-2015, http://ernestmicklei.com. MIT License
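To ground the WebService, Route and Container concepts described above, here is a hedged sketch (the path, handler body and port are invented; the builder calls are go-restful's documented API):

    package main

    import (
        "net/http"

        "github.com/emicklei/go-restful"
    )

    func findUser(req *restful.Request, resp *restful.Response) {
        id := req.PathParameter("user-id") // read a path parameter
        resp.WriteEntity(map[string]string{"id": id})
    }

    func main() {
        ws := new(restful.WebService)
        ws.Path("/users"). // root path shared by the routes below
            Consumes(restful.MIME_JSON).
            Produces(restful.MIME_JSON)
        ws.Route(ws.GET("/{user-id}").To(findUser)) // method + path -> function
        restful.Add(ws)                             // register with the Default Container
        http.ListenAndServe(":8080", nil)           // Default Container uses http.DefaultServeMux
    }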
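And a minimal sketch of the logging hooks just mentioned (the prefix string is arbitrary):

    package main

    import (
        "log"
        "os"

        "github.com/emicklei/go-restful"
    )

    func main() {
        // Enable detailed trace logging of request matching and filter invocation.
        restful.TraceLogger(log.New(os.Stdout, "[restful] ", log.LstdFlags|log.Lshortfile))

        // Or replace the logger used by the package as a whole.
        restful.SetLogger(log.New(os.Stderr, "", log.LstdFlags))
    }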
Package chi is a small, idiomatic and composable router for building HTTP services. chi requires Go 1.14 or newer. Example: See github.com/go-chi/chi/_examples/ for more in-depth examples. URL patterns allow for easy matching of path components in HTTP requests. The matching components can then be accessed using chi.URLParam(). All patterns must begin with a slash. A simple named placeholder {name} matches any sequence of characters up to the next / or the end of the URL. Trailing slashes on paths must be handled explicitly. A placeholder with a name followed by a colon allows a regular expression match, for example {number:\\d+}. The regular expression syntax is Go's normal regexp RE2 syntax, except that / will never be matched. An anonymous regexp pattern is allowed, using an empty string before the colon in the placeholder, such as {:\\d+}. The special placeholder of asterisk matches the rest of the requested URL. Any trailing characters in the pattern are ignored. This is the only placeholder which will match / characters. Examples:
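As a hedged, minimal sketch of these patterns (the route and handler are invented; NewRouter, Get and URLParam are chi's documented API):

    package main

    import (
        "net/http"

        "github.com/go-chi/chi"
    )

    func main() {
        r := chi.NewRouter()

        // {articleID} matches one path component up to the next slash;
        // chi.URLParam retrieves it by name.
        r.Get("/articles/{articleID}", func(w http.ResponseWriter, req *http.Request) {
            id := chi.URLParam(req, "articleID")
            w.Write([]byte("article " + id))
        })

        http.ListenAndServe(":3000", r)
    }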
regen is a tool to parse and generate random strings from regular expressions. regen works by parsing a regular expression and walking its op tree. It is currently not guaranteed to produce entirely accurate results, but will at least try. Currently, word boundaries are not supported (until I decide how best to randomly insert a word boundary character). Using a word boundary op (\b or \B) will currently cause regen to panic. In addition, line endings are poorly supported right now and EOT markers are treated as the end of string generation. Usage is simple: pass one or more regular expressions to regen on the command line and it will generate a string from each, printing them in the same order as on the command line (separated by newlines): So, if you fancy yourself a Javascript weirdo of some variety, you can at least use regen to write code for eBay: A few command-line options are provided, which you can see by running regen -help.
Package dmg implements parser combinators to build simple and fast parsers. The produced parsers are recursive-descent parsers with infinite lookahead, a.k.a. LL(k), but with support for left-recursive (in fact, any-recursive) grammars. It also supports nullability (again, anywhere). Additionally, dmg provides a set of convenience functions to provide common building blocks for parsers such as the Kleene Closure and a Maybe-Parser, equivalent in functionality to the POSIX regular expression operators `*` and `?`, respectively. A simple, left-recursive parser for arithmetic difference can be built as follows: The above code, when evaluated to the end of a valid input, will produce a left-leaning concrete syntax tree. In order to build an abstract syntax tree, dmg provides the MappingParser, which enables you to write "actions" to manipulate matches. Following with the example: This package is work-in-progress and will grow. Once it reaches a beta state, it'll get a grammar-tree optimizer which will hopefully eliminate any and all back-tracking from the parser implementation, essentially guaranteeing an O(n) time-complexity for unambiguous grammars, as well as scaling elegantly for more complex grammars.
bindata converts any file into manageable Go source code. Useful for embedding binary data into a Go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `NoMemCopy` option will alter the way the output file is generated. It will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but uses the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error. The NoCompress option indicates that the supplied assets are *not* GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. This feature is useful if you do not care for compression, or the supplied resource is already compressed. Doing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`.
This accepts a [regular expression](https://github.com/google/re2/wiki/Syntax) string, which will be used to match a portion of the map keys and function names that should be stripped out. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get: With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line at the beginning of the output file and must follow the build tags syntax specified by the go tool.
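Returning to the NoMemCopy option: here is a hedged illustration of the trade-off (this is not the exact code go-bindata emits; the names are invented, and the snippet only demonstrates the general string-to-slice aliasing technique):

    package main

    import (
        "fmt"
        "reflect"
        "unsafe"
    )

    var data = "hello, embedded world" // stands in for the file contents

    // Default mode: one extra memcopy, but a safe, writable slice.
    func assetSafe() []byte {
        return []byte(data)
    }

    // NoMemCopy mode: no copy, but the slice aliases the read-only
    // string data, so writing to it panics at runtime.
    func assetNoCopy() []byte {
        var b []byte
        hdr := (*reflect.SliceHeader)(unsafe.Pointer(&b))
        hdr.Data = (*reflect.StringHeader)(unsafe.Pointer(&data)).Data
        hdr.Len = len(data)
        hdr.Cap = len(data)
        return b
    }

    func main() {
        fmt.Println(string(assetSafe()), string(assetNoCopy()))
    }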
Package emojiregexp provides a regular expression that matches all Unicode emojis.
Package regexp implements regular expression search. The syntax of the regular expressions accepted is the same general syntax used by Perl, Python, and other languages. More precisely, it is the syntax accepted by RE2 and described at https://golang.org/s/re2syntax, except for \C. For an overview of the syntax, run 'go doc regexp/syntax'. The regexp implementation provided by this package is guaranteed to run in time linear in the size of the input. (This is a property not guaranteed by most open source implementations of regular expressions.) For more information about this property, see https://swtch.com/~rsc/regexp/regexp1.html or any book about automata theory. All characters are UTF-8-encoded code points. There are 16 methods of Regexp that match a regular expression and identify the matched text. Their names are matched by this regular expression: Find(All)?(String)?(Submatch)?(Index)?. If 'All' is present, the routine matches successive non-overlapping matches of the entire expression. Empty matches abutting a preceding match are ignored. The return value is a slice containing the successive return values of the corresponding non-'All' routine. These routines take an extra integer argument, n; if n >= 0, the function returns at most n matches/submatches. If 'String' is present, the argument is a string; otherwise it is a slice of bytes; return values are adjusted as appropriate. If 'Submatch' is present, the return value is a slice identifying the successive submatches of the expression. Submatches are matches of parenthesized subexpressions (also known as capturing groups) within the regular expression, numbered from left to right in order of opening parenthesis. Submatch 0 is the match of the entire expression, submatch 1 the match of the first parenthesized subexpression, and so on. If 'Index' is present, matches and submatches are identified by byte index pairs within the input string: result[2*n:2*n+2] identifies the indexes of the nth submatch. The pair for n==0 identifies the match of the entire expression. If 'Index' is not present, the match is identified by the text of the match/submatch. If an index is negative, it means that subexpression did not match any string in the input. There is also a subset of the methods that can be applied to text read from a RuneReader: This set may grow. Note that regular expression matches may need to examine text beyond the text returned by a match, so the methods that match text from a RuneReader may read arbitrarily far into the input before returning. (There are a few other methods that do not match this pattern.)
This command filters a JUnit file such that only tests with a name matching a regular expression are passed through. By concatenating multiple input files it is possible to merge them into a single file.
Package emailsupport contains some more esoteric routines useful for parsing and handling email. For instance, some baroque regular expressions. No APIs are stable. Make sure you handle your dependencies accordingly. The package creates a number of exported regular expression objects, initialised at init time, pulling up the creation cost to program start. Per the documentation for the `regexp` package, “A Regexp is safe for concurrent use by multiple goroutines.” Each regular expression comes in two forms, `Foo` and `FooUnanchored`. The short name is anchored to the start and end of the string, so that by default if you match a variable against `EmailAddress`, you are confirming that the contents of the variable are an email address, not accidentally only matching that there's something address-like somewhere within. The longer name (I need something briefer which is still clear) provides a pattern which can be used to find a match elsewhere. Each regular expression is available as a pattern string in a form with a `Txt` prefix, which can be used to build larger regular expressions. The pattern is wrapped with `(?:...)` to be a non-capturing group which can be qualified or otherwise treated as a single unit. No regular expression has any capturing groups, letting the caller manage capturing indices. Thus for any `Foo`, this package provides: The list includes: `EmailAddress`: an RFC5321 email address, the part within angle-brackets or as is used in SMTP. This is _not_ an RFC5322 message header email address, with display forms and comments. `EmailDomain`: a domain which can be used in email addresses; this is the base specification form and does not handle internationalisation, though this regexp should be correct to apply against punycode-encoded domains. This does handle embedded IPv4 and IPv6 address literals, but not the General-address-literal which is a grammar hook for future extension (because that inherently can't be handled until defined). `EmailLHS`: the Left-Hand-Side (or "local part") of an email address; this handles unquoted and quoted forms. `EmailAddressOrUnqualified`: either an address or a LHS, this is a form often used in mail configuration files where a domain is implicit. `IPv4Address`, `IPv6Address`: an IPv4 or IPv6 address. `IPv4Netblock`, `IPv6Netblock`, `IPNetblock`: a netblock in CIDR prefix/len notation (used for source ACLs). `IPv4Octet`: a number 0 to 255. The IPv6 address regexp is taken from RFC3986 (the one which gets it right) and is a careful copy/paste and edit of a version which has been used and gradually debugged for years, including in a tool I released called `emit_ipv6_regexp`. The patterns used in `EmailLHS` (and thus also in items which include an email left-hand-side) can be in one of two forms, and selecting between them is a compile-time decision. The rules can be either those from RFC2822 or those from RFC5321. By default, those from RFC5321 are used. Build with a `rfc2822` build-tag to get the older definitions. If a future RFC changes the rules again, then the default patterns in this package may change; the build-tag `rfc5321` is currently unused, but is reserved for the future to force selecting the rules which are now current. It is safe (harmless) to supply that build-tag now (but not together with `rfc2822`).
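A hedged usage sketch follows. It assumes, per the description above, that the exported objects are compiled *regexp.Regexp values; the import path and the exact identifiers EmailAddressUnanchored and TxtEmailAddress are extrapolated from the documented Foo/FooUnanchored/Txt naming scheme:

    package main

    import (
        "fmt"
        "regexp"

        "example.com/emailsupport" // import path assumed for illustration
    )

    func main() {
        // Anchored form: the whole string must be an address.
        fmt.Println(emailsupport.EmailAddress.MatchString("user@example.org"))

        // Unanchored form: find an address embedded in other text.
        fmt.Println(emailsupport.EmailAddressUnanchored.MatchString("see user@example.org!"))

        // Txt form: embed the (non-capturing) pattern in a larger expression.
        re := regexp.MustCompile(`^addr=` + emailsupport.TxtEmailAddress + `$`)
        fmt.Println(re.MatchString("addr=user@example.org"))
    }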
Package pcre provides access to the Perl Compatible Regular Expression library, PCRE. It implements two main types, Regexp and Matcher. Regexp objects store a compiled regular expression. They consist of two immutable parts: pcre and pcre_extra. Compile()/MustCompile() initialize pcre. Calling Study() on a compiled Regexp initializes pcre_extra. Compilation of regular expressions using Compile or MustCompile is slightly expensive, so these objects should be kept and reused, instead of compiling them from scratch for each matching attempt. CompileJIT and MustCompileJIT are way more expensive, because they run Study() after compiling a Regexp, but they tend to give much better performance: http://sljit.sourceforge.net/regex_perf.html Matcher objects keep the results of a match against a []byte or string subject. The Group and GroupString functions provide access to capture groups; both versions work regardless of whether the subject was a []byte or string, but the version with the matching type is slightly more efficient. Matcher objects contain some temporary space and refer to the original subject. They are mutable and can be reused (using Match, MatchString, Reset or ResetString). For details on the regular expression language implemented by this package and the flags defined below, see the PCRE documentation. http://www.pcre.org/pcre.txt
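A hedged sketch of the Regexp/Matcher flow (the import path, the flags arguments and the MatcherString/ResetString signatures are assumptions based on a common Go PCRE binding; Compile, MustCompile, Match*, Reset*, Group and GroupString are the names documented above):

    package main

    import (
        "fmt"

        "github.com/gijsbers/go-pcre" // import path assumed
    )

    func main() {
        // Compile once and reuse; compilation is the expensive step.
        re := pcre.MustCompile(`(\w+)@(\w+)\.com`, 0)

        m := re.MatcherString("mail me at bob@example.com", 0)
        if m.Matches() {
            fmt.Println(m.GroupString(1), m.GroupString(2)) // "bob example"
        }

        // Matchers are mutable and can be reused on a new subject.
        m.ResetString(re, "alice@test.com", 0)
        fmt.Println(m.Matches())
    }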
Package gorilla/mux implements a request router and dispatcher. The name mux stands for "HTTP request multiplexer". Like the standard http.ServeMux, mux.Router matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are: Let's start registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: The names are used to create a map of route variables which can be retrieved calling mux.Vars(): And this is all you need to know about the basic usage. More advanced options are explained below. Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables: There are several other matchers that can be added. To match path prefixes: ...or HTTP methods: ...or URL schemes: ...or header values: ...or query values: ...or to use a custom matcher function: ...and finally, it is possible to combine several matchers in a single route: Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting". For example, let's say we have several URLs that should only match when the host is "www.domain.com". Create a route for that host and get a "subrouter" from it: Then register routes in the subrouter: The three URL paths we registered above will only be tested if the domain is "www.domain.com", because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route. Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register its paths relative to a given subrouter. There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as base for their paths: Now let's see how to build registered URLs. Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name calling Name() on a route. For example: To build a URL, get the route and call the URL() method, passing a sequence of key/value pairs for the route variables. For the previous route, we would do: ...and the result will be a url.URL with the following path: This also works for host variables: All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match. There's also a way to build only the URL host or path for a route: use the methods URLHost() or URLPath() instead. For the previous route, we would do: And if you use subrouters, host and path defined separately can be built as well:
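A hedged sketch tying these pieces together (the path and handler are invented; NewRouter, HandleFunc, Vars, Name, Get and URL are the package's documented API):

    package main

    import (
        "fmt"
        "net/http"

        "github.com/gorilla/mux"
    )

    func main() {
        r := mux.NewRouter()

        // {key} is a route variable, retrieved via mux.Vars.
        r.HandleFunc("/products/{key}", func(w http.ResponseWriter, req *http.Request) {
            vars := mux.Vars(req)
            fmt.Fprintf(w, "product %s", vars["key"])
        }).Name("product")

        // Reverse the named route back into a URL.
        u, _ := r.Get("product").URL("key", "42")
        fmt.Println(u) // /products/42

        http.ListenAndServe(":8000", r)
    }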
Package mux implements a request router and dispatcher. The name mux stands for "HTTP request multiplexer". Like the standard http.ServeMux, mux.Router matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are: Let's start registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: Groups can be used inside patterns, as long as they are non-capturing (?:re). For example: The names are used to create a map of route variables which can be retrieved calling mux.Vars(): Note that if any capturing groups are present, mux will panic() during parsing. To prevent this, convert any capturing groups to non-capturing, e.g. change "/{sort:(asc|desc)}" to "/{sort:(?:asc|desc)}". This is a change from prior versions which behaved unpredictably when capturing groups were present. And this is all you need to know about the basic usage. More advanced options are explained below. Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables: There are several other matchers that can be added. To match path prefixes: ...or HTTP methods: ...or URL schemes: ...or header values: ...or query values: ...or to use a custom matcher function: ...and finally, it is possible to combine several matchers in a single route: Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting". For example, let's say we have several URLs that should only match when the host is "www.example.com". Create a route for that host and get a "subrouter" from it: Then register routes in the subrouter: The three URL paths we registered above will only be tested if the domain is "www.example.com", because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route. Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register its paths relative to a given subrouter. There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as base for their paths: Note that the path provided to PathPrefix() represents a "wildcard": calling PathPrefix("/static/").Handler(...) means that the handler will be passed any request that matches "/static/*". This makes it easy to serve static files with mux: Now let's see how to build registered URLs. Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name calling Name() on a route. For example: To build a URL, get the route and call the URL() method, passing a sequence of key/value pairs for the route variables.
For the previous route, we would do: ...and the result will be a url.URL with the following path: This also works for host and query value variables: All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match. Regex support also exists for matching Headers within a route. For example, we could do: ...and the route will match both requests with a Content-Type of `application/json` and `application/text`. There's also a way to build only the URL host or path for a route: use the methods URLHost() or URLPath() instead. For the previous route, we would do: And if you use subrouters, host and path defined separately can be built as well: Mux supports the addition of middlewares to a Router, which are executed in the order they are added if a match is found, including its subrouters. Middlewares are (typically) small pieces of code which take one request, do something with it, and pass it down to another middleware or the final handler. Some common use cases for middleware are request logging, header manipulation, or ResponseWriter hijacking. Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed to it, and then calls the handler passed as parameter to the MiddlewareFunc (closures can access variables from the context where they are created). A very basic middleware which logs the URI of the request being handled could be written as: Middlewares can be added to a router using `Router.Use()`: A more complex authentication middleware, which maps session token to users, could be written as: Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to.
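A minimal sketch of such a logging middleware, following the pattern described above (only Router.Use is mux-specific; the rest is plain net/http):

    package main

    import (
        "log"
        "net/http"

        "github.com/gorilla/mux"
    )

    // loggingMiddleware logs the request URI, then hands off to the next
    // handler. Not calling next.ServeHTTP would abort the request chain.
    func loggingMiddleware(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            log.Println(r.RequestURI)
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        r := mux.NewRouter()
        r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("hello"))
        })
        r.Use(loggingMiddleware)
        http.ListenAndServe(":8000", r)
    }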
Package restful , a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming Http Requests to a function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handler Http requests from a server. A Route is defined by a HTTP method, an URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and a http.ServeMux for multiplexing http requests. Using the statements "restful.Add(...) and restful.Filter(...)" will register WebServices and Filters to the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirect, set response headers etc. In the restful package there are three hooks into the request,response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request,response pair to the next filter or RouteFunction These are processed before any registered WebService. These are processed before any Route of a WebService. These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If a Http request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go By installing a pre-defined container filter, your Webservice(s) can respond to the OPTIONS Http request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest. 
Despite a valid URI, the resource requested may not be available. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. The request has a valid URL but the method (GET, PUT, POST, ...) is not allowed. The request does not have, or has an unknown, Accept header set for this operation. The request does not have, or has an unknown, Content-Type header set for this operation. In addition to setting the correct (error) HTTP status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to false, the container will recover from panics. Default value is true. If content encoding is enabled then the default strategy for getting new gzip/zlib writers and readers is to use a sync.Pool. Because writers are expensive structures, performance improves further when using a preloaded cache. You can also inject your own implementation. This package has the means to produce detailed logging of the complete HTTP request matching process and filter invocation. Enabling this feature requires you to set an implementation of restful.StdLogger (e.g. a log.Logger instance), as sketched below. The restful.SetLogger() method allows you to override the logger used by the package. By default restful uses the standard library `log` package and logs to stdout. Different logging packages are supported as long as they conform to the `StdLogger` interface defined in the `log` sub-package; writing an adapter for your preferred package is simple. (c) 2012-2015, http://ernestmicklei.com. MIT License
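A plausible way to wire up that logging (prefix and flags chosen for illustration):

    // Route restful's internal log output through a standard log.Logger.
    restful.SetLogger(log.New(os.Stdout, "[restful] ", log.LstdFlags|log.Lshortfile))

    // TraceLogger additionally enables the detailed request matching trace.
    restful.TraceLogger(log.New(os.Stdout, "[restful-trace] ", log.LstdFlags))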
Package phi is a small, idiomatic and composable router for building HTTP services. phi requires Go 1.10 or newer. Example: See github.com/go-phi/phi/_examples/ for more in-depth examples. URL patterns allow for easy matching of path components in HTTP requests. The matching components can then be accessed using phi.URLParam(). All patterns must begin with a slash. A simple named placeholder {name} matches any sequence of characters up to the next / or the end of the URL. Trailing slashes on paths must be handled explicitly. A placeholder with a name followed by a colon allows a regular expression match, for example {number:\\d+}. The regular expression syntax is Go's normal regexp RE2 syntax, except that regular expressions including { or } are not supported, and / will never be matched. An anonymous regexp pattern is allowed, using an empty string before the colon in the placeholder, such as {:\\d+}. The special placeholder of asterisk matches the rest of the requested URL. Any trailing characters in the pattern are ignored. This is the only placeholder which will match / characters. Examples:
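To make those placeholder rules concrete, a few illustrative patterns (the paths are hypothetical):

    /users/{name}         matches /users/jsmith, but not /users/jsmith/settings
    /users/{id:\\d+}       matches /users/42, but not /users/jsmith
    /files/{:\\d+}/view    constrains the component with a regexp without naming it
    /static/*             matches everything under /static/, including further / characters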
Package mux implements a request router and dispatcher. The name mux stands for "HTTP request multiplexer". Like the standard http.ServeMux, mux.Router matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are: Let's start registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: Groups can be used inside patterns, as long as they are non-capturing (?:re). For example: The names are used to create a map of route variables which can be retrieved calling mux.Vars(): Note that if any capturing groups are present, mux will panic() during parsing. To prevent this, convert any capturing groups to non-capturing, e.g. change "/{sort:(asc|desc)}" to "/{sort:(?:asc|desc)}". This is a change from prior versions which behaved unpredictably when capturing groups were present. And this is all you need to know about the basic usage. More advanced options are explained below. Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables: There are several other matchers that can be added. To match path prefixes: ...or HTTP methods: ...or URL schemes: ...or header values: ...or query values: ...or to use a custom matcher function: ...and finally, it is possible to combine several matchers in a single route: Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting". For example, let's say we have several URLs that should only match when the host is "www.example.com". Create a route for that host and get a "subrouter" from it: Then register routes in the subrouter: The three URL paths we registered above will only be tested if the domain is "www.example.com", because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route. Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register their paths relative to a given subrouter. There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as the base for their paths: Note that the path provided to PathPrefix() represents a "wildcard": calling PathPrefix("/static/").Handler(...) means that the handler will be passed any request that matches "/static/*". This makes it easy to serve static files with mux: Now let's see how to build registered URLs. Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name calling Name() on a route. For example: To build a URL, get the route and call the URL() method, passing a sequence of key/value pairs for the route variables.
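A runnable sketch of the registration basics, ending with a named route that the URL-building discussion below can refer to (handlers and addresses are illustrative):

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/gorilla/mux"
    )

    func ArticlesHandler(w http.ResponseWriter, r *http.Request) {
        // mux.Vars returns the route variables for the current request.
        vars := mux.Vars(r)
        fmt.Fprintf(w, "category: %v, id: %v\n", vars["category"], vars["id"])
    }

    func main() {
        r := mux.NewRouter()
        r.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "home")
        })
        // {key} matches anything up to the next slash; {id:[0-9]+} adds a pattern.
        r.HandleFunc("/products/{key}", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "product:", mux.Vars(r)["key"])
        })
        // Named, so its URL can be built ("reversed") later.
        r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticlesHandler).Name("article")
        log.Fatal(http.ListenAndServe(":8000", r))
    }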
For the previous route, we would do: ...and the result will be a url.URL with the following path: This also works for host and query value variables: All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match. Regex support also exists for matching Headers within a route. For example, we could do: ...and the route will match both requests with a Content-Type of `application/json` as well as `application/text`. There's also a way to build only the URL host or path for a route: use the methods URLHost() or URLPath() instead. For the previous route, we would do: And if you use subrouters, host and path defined separately can be built as well: Mux supports the addition of middlewares to a Router, which are executed in the order they are added if a match is found, including its subrouters. Middlewares are (typically) small pieces of code which take one request, do something with it, and pass it down to another middleware or the final handler. Some common use cases for middleware are request logging, header manipulation, or ResponseWriter hijacking. Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed to it, and then calls the handler passed as parameter to the MiddlewareFunc (closures can access variables from the context where they are created). A very basic middleware which logs the URI of the request being handled could be written as: Middlewares can be added to a router using `Router.Use()`: A more complex authentication middleware, which maps session tokens to users, could be written as: Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to.
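Returning to URL building: with the named "article" route from the registration sketch above, reversing might look like this:

    u, err := r.Get("article").URL("category", "technology", "id", "42")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(u.Path) // /articles/technology/42

    // URLHost() and URLPath() build just the host or just the path part.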
Package xurls extracts URLs from plain text using regular expressions.
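For instance, a minimal use might look like this (assuming the v2 module path mvdan.cc/xurls/v2, whose Relaxed() returns a *regexp.Regexp):

    package main

    import (
        "fmt"

        "mvdan.cc/xurls/v2"
    )

    func main() {
        // Relaxed also matches URLs without a scheme; Strict requires one.
        rx := xurls.Relaxed()
        fmt.Println(rx.FindAllString("Do gophers live in golang.org? See https://go.dev too.", -1))
    }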
Package rat implements a PEG packrat parser that takes advantage of Go's unique ability to switch on type to create an interpreted meta-language consisting entirely of Go types. These types serve as tokens that any lexer would create when lexing a higher level meta language such as PEG, PEGN, or regular expressions. Passing these types to rat.Pack compiles (memoizes) them into a grammar of parsing functions, not unlike compilation of regular expressions. The string representations of these structs consist entirely of valid, compilable Go code suitable for parser code generation for parsers of any type, in any language. Simply printing a Grammar instance to a file is enough to generate such a parser. Prefer Pack over Make*. Although the individual Make* methods for each of the supported types have been exported publicly, allowing developers to call them directly from within their own Rule implementations, most code should use Pack instead. Consider it the equivalent of compiling a regular expression.
Package otto is a JavaScript parser and interpreter written natively in Go. http://godoc.org/github.com/robertkrimen/otto Run something in the VM Get a value out of the VM Set a number Set a string Get the value of an expression An error happens Set a Go function Set a Go function that returns something useful Use the functions in JavaScript A separate parser is available in the parser package if you're just interested in building an AST. http://godoc.org/github.com/robertkrimen/otto/parser Parse and return an AST otto You can run (Go) JavaScript from the command line with: http://github.com/robertkrimen/otto/tree/master/otto Run JavaScript by entering some source on stdin or by giving otto a filename: underscore Optionally include the JavaScript utility-belt library, underscore, with this import: For more information: http://github.com/robertkrimen/otto/tree/master/underscore The following are some limitations with otto: Go translates JavaScript-style regular expressions into something that is "regexp" compatible via `parser.TransformRegExp`. Unfortunately, RegExp requires backtracking for some patterns, and backtracking is not supported by the standard Go engine: https://code.google.com/p/re2/wiki/Syntax Therefore, the following syntax is incompatible: A brief discussion of these limitations: "Regexp (?!re)" https://groups.google.com/forum/?fromgroups=#%21topic/golang-nuts/7qgSDWPIh_E More information about re2: https://code.google.com/p/re2/ In addition to the above, re2 (Go) has a different definition for \s: [\t\n\f\r ]. The JavaScript definition, on the other hand, also includes \v, Unicode "Separator, Space", etc. If you want to stop long-running executions (like third-party code), you can use the interrupt channel to do this: Where is setTimeout/setInterval? These timing functions are not actually part of the ECMA-262 specification. Typically, they belong to the `window` object (in the browser). It would not be difficult to provide something like these via Go, but you probably want to wrap otto in an event loop in that case. For an example of how this could be done in Go with otto, see natto: http://github.com/robertkrimen/natto Here is some more discussion of the issue: * http://book.mixu.net/node/ch2.html * http://en.wikipedia.org/wiki/Reentrancy_%28computing%29 * http://aaroncrane.co.uk/2009/02/perl_safe_signals/
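By way of illustration, a compact session exercising those operations (the JavaScript snippets are arbitrary):

    package main

    import (
        "fmt"

        "github.com/robertkrimen/otto"
    )

    func main() {
        vm := otto.New()

        // Run something in the VM.
        vm.Run(`abc = 2 + 2;`)

        // Get a value out of the VM.
        if value, err := vm.Get("abc"); err == nil {
            n, _ := value.ToInteger()
            fmt.Println(n) // 4
        }

        // Set a Go function and use it from JavaScript.
        vm.Set("sayHello", func(call otto.FunctionCall) otto.Value {
            fmt.Printf("Hello, %s.\n", call.Argument(0).String())
            return otto.Value{}
        })
        vm.Run(`sayHello("Xyzzy");`)
    }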
Package chi is a small, idiomatic and composable router for building HTTP services. chi requires Go 1.7 or newer. Example: See github.com/go-chi/chi/_examples/ for more in-depth examples. URL patterns allow for easy matching of path components in HTTP requests. The matching components can then be accessed using chi.URLParam(). All patterns must begin with a slash. A simple named placeholder {name} matches any sequence of characters up to the next / or the end of the URL. Trailing slashes on paths must be handled explicitly. A placeholder with a name followed by a colon allows a regular expression match, for example {number:\\d+}. The regular expression syntax is Go's normal regexp RE2 syntax, except that regular expressions including { or } are not supported, and / will never be matched. An anonymous regexp pattern is allowed, using an empty string before the colon in the placeholder, such as {:\\d+}. The special placeholder of asterisk matches the rest of the requested URL. Any trailing characters in the pattern are ignored. This is the only placeholder which will match / characters. Examples:
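A small runnable sketch of these placeholders (route and port are arbitrary):

    package main

    import (
        "net/http"

        "github.com/go-chi/chi"
    )

    func main() {
        r := chi.NewRouter()
        // {articleID:\d+} constrains the path component with an RE2 regexp.
        r.Get("/articles/{articleID:\\d+}", func(w http.ResponseWriter, r *http.Request) {
            // URLParam returns the matched component by placeholder name.
            w.Write([]byte("article " + chi.URLParam(r, "articleID")))
        })
        http.ListenAndServe(":3333", r)
    }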
Package otto is a JavaScript parser and interpreter written natively in Go. http://godoc.org/github.com/kirillDanshin/otto Run something in the VM Get a value out of the VM Set a number Set a string Get the value of an expression An error happens Set a Go function Set a Go function that returns something useful Use the functions in JavaScript A separate parser is available in the parser package if you're just interested in building an AST. http://godoc.org/github.com/kirillDanshin/otto/parser Parse and return an AST otto You can run (Go) JavaScript from the command line with: http://github.com/kirillDanshin/otto/tree/master/otto Run JavaScript by entering some source on stdin or by giving otto a filename: underscore Optionally include the JavaScript utility-belt library, underscore, with this import: For more information: http://github.com/kirillDanshin/otto/tree/master/underscore The following are some limitations with otto: Go translates JavaScript-style regular expressions into something that is "regexp" compatible via `parser.TransformRegExp`. Unfortunately, RegExp requires backtracking for some patterns, and backtracking is not supported by the standard Go engine: https://code.google.com/p/re2/wiki/Syntax Therefore, the following syntax is incompatible: A brief discussion of these limitations: "Regexp (?!re)" https://groups.google.com/forum/?fromgroups=#%21topic/golang-nuts/7qgSDWPIh_E More information about re2: https://code.google.com/p/re2/ In addition to the above, re2 (Go) has a different definition for \s: [\t\n\f\r ]. The JavaScript definition, on the other hand, also includes \v, Unicode "Separator, Space", etc. If you want to stop long-running executions (like third-party code), you can use the interrupt channel to do this: Where is setTimeout/setInterval? These timing functions are not actually part of the ECMA-262 specification. Typically, they belong to the `window` object (in the browser). It would not be difficult to provide something like these via Go, but you probably want to wrap otto in an event loop in that case. For an example of how this could be done in Go with otto, see natto: http://github.com/robertkrimen/natto Here is some more discussion of the issue: * http://book.mixu.net/node/ch2.html * http://en.wikipedia.org/wiki/Reentrancy_%28computing%29 * http://aaroncrane.co.uk/2009/02/perl_safe_signals/
Package mux implements a request router and dispatcher. The name mux stands for "HTTP request multiplexer". Like the standard http.ServeMux, mux.Router matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are: Let's start registering a couple of URL paths and handlers: Here we register three routes mapping URL paths to handlers. This is equivalent to how http.HandleFunc() works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (http.ResponseWriter, *http.Request) as parameters. Paths can have variables. They are defined using the format {name} or {name:pattern}. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: The names are used to create a map of route variables which can be retrieved calling mux.Vars(): And this is all you need to know about the basic usage. More advanced options are explained below. Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables: There are several other matchers that can be added. To match path prefixes: ...or HTTP methods: ...or URL schemes: ...or header values: ...or query values: ...or to use a custom matcher function: ...and finally, it is possible to combine several matchers in a single route: Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting". For example, let's say we have several URLs that should only match when the host is "www.example.com". Create a route for that host and get a "subrouter" from it: Then register routes in the subrouter: The three URL paths we registered above will only be tested if the domain is "www.example.com", because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route. Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register their paths relative to a given subrouter. There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as the base for their paths: Note that the path provided to PathPrefix() represents a "wildcard": calling PathPrefix("/static/").Handler(...) means that the handler will be passed any request that matches "/static/*". This makes it easy to serve static files with mux: Now let's see how to build registered URLs. Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name calling Name() on a route. For example: To build a URL, get the route and call the URL() method, passing a sequence of key/value pairs for the route variables. For the previous route, we would do: ...and the result will be a url.URL with the following path: This also works for host variables: All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match. Regex support also exists for matching Headers within a route.
For example, we could do: ...and the route will match both requests with a Content-Type of `application/json` as well as `application/text`. There's also a way to build only the URL host or path for a route: use the methods URLHost() or URLPath() instead. For the previous route, we would do: And if you use subrouters, host and path defined separately can be built as well:
Package restful, a lean package for creating REST-style WebServices without magic. A WebService has a collection of Route objects that dispatch incoming HTTP requests to function calls. Typically, a WebService has a root path (e.g. /users) and defines common MIME types for its routes. WebServices must be added to a container (see below) in order to handle HTTP requests from a server. A Route is defined by an HTTP method, a URL path and (optionally) the MIME types it consumes (Content-Type) and produces (Accept). This package has the logic to find the best matching Route and, if found, call its Function. The (*Request, *Response) arguments provide functions for reading information from the request and writing information back to the response. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-user-resource.go with a full implementation. A Route parameter can be specified using the format "uri/{var[:regexp]}" or the special version "uri/{var:*}" for matching the tail of the path. For example, /persons/{name:[A-Z][A-Z]} can be used to restrict values for the parameter "name" to only contain capital alphabetic characters. Regular expressions must use the standard Go syntax as described in the regexp package. (https://code.google.com/p/re2/wiki/Syntax) This feature requires the use of a CurlyRouter. A Container holds a collection of WebServices, Filters and an http.ServeMux for multiplexing HTTP requests. The statements restful.Add(...) and restful.Filter(...) register WebServices and Filters with the Default Container. The Default container of go-restful uses the http.DefaultServeMux. You can create your own Container and create a new http.Server for that particular container. A filter dynamically intercepts requests and responses to transform or use the information contained in the requests or responses. You can use filters to perform generic logging, measurement, authentication, redirects, setting response headers, etc. In the restful package there are three hooks into the request/response flow where filters can be added. Each filter must define a FilterFunction: Use the following statement to pass the request/response pair to the next filter or RouteFunction: These are processed before any registered WebService. These are processed before any Route of a WebService. These are processed before calling the function associated with the Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-filters.go with full implementations. Two encodings are supported: gzip and deflate. To enable this for all responses: If an HTTP request includes the Accept-Encoding header then the response content will be compressed using the specified encoding. Alternatively, you can create a Filter that performs the encoding and install it per WebService or Route. See the example https://github.com/emicklei/go-restful/blob/master/examples/restful-encoding-filter.go By installing a pre-defined container filter, your WebService(s) can respond to the OPTIONS HTTP request. By installing the filter of a CrossOriginResourceSharing (CORS), your WebService(s) can handle CORS requests. Unexpected things happen. If a request cannot be processed because of a failure, your service needs to tell via the response what happened and why. For this reason HTTP status codes exist and it is important to use the correct code in every exceptional situation. If path or query parameters are not valid (content or type) then use http.StatusBadRequest.
Despite a valid URI, the resource requested may not be available. If the application logic could not process the request (or write the response) then use http.StatusInternalServerError. The request has a valid URL but the method (GET, PUT, POST, ...) is not allowed. The request does not have, or has an unknown, Accept header set for this operation. The request does not have, or has an unknown, Content-Type header set for this operation. In addition to setting the correct (error) HTTP status code, you can choose to write a ServiceError message on the response. This package has several options that affect the performance of your service. It is important to understand them and how you can change them. The default router is the RouterJSR311, which is an implementation of its spec (http://jsr311.java.net/nonav/releases/1.1/spec/spec.html). However, it uses regular expressions for all its routes which, depending on your usecase, may consume a significant amount of time. The CurlyRouter implementation is more lightweight and also allows you to use wildcards and expressions, but only if needed. DoNotRecover controls whether panics will be caught to return HTTP 500. If set to true, Route functions are responsible for handling any error situation. Default value is false; it will recover from panics. This has performance implications. SetCacheReadEntity controls whether the request data ([]byte) is cached such that ReadEntity is repeatable. If you expect to read large amounts of payload data, and you do not use this feature, you should set it to false. This package has the means to produce detailed logging of the complete HTTP request matching process and filter invocation. Enabling this feature requires you to set a log.Logger instance such as: (c) 2012-2014, http://ernestmicklei.com. MIT License
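If route matching time matters, switching to the CurlyRouter might look like this (a sketch; both router implementations ship with the package):

    // Use the lighter CurlyRouter on the default container...
    restful.DefaultContainer.Router(restful.CurlyRouter{})

    // ...or on a container of your own.
    container := restful.NewContainer()
    container.Router(restful.CurlyRouter{})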
Package skipper provides an HTTP routing library with flexible configuration as well as a runtime update of the routing rules. Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host. Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters; however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand. Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request: (a plausible session for these steps is sketched after this overview). The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request and forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request. Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request, providing its own response (e.g. the 'static' filter). In fact, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context. Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then match the updated request metadata against the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation. Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions: method; path (optionally with wildcards); path regular expressions; host regular expressions; headers; and header regular expressions. It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation. Filters are applied in order of definition to the request and in reverse order to the response.
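The quickstart steps mentioned above might look like the following session (file name and backend URL are illustrative; Skipper's default listen address :9090 is assumed):

    go get github.com/zalando/skipper/...
    echo 'hello: Path("/hello") -> "https://www.example.org";' > example.eskip
    eskip check example.eskip
    skipper -routes-file example.eskip &
    curl localhost:9090/hello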
They are used to modify request and response attributes, such as headers, or to execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation. Each route has one of the following backends: HTTP endpoint, shunt or loopback. Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.) A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response. A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of internal redirect. Route definitions consist of the following: request matching conditions (predicates), an optional filter chain, and a backend (either an HTTP endpoint or a shunt). The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation. Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth. Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients: - Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller. In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/. - Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper. - etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package. - static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates. Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package. Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests. For details, see: https://godoc.org/github.com/zalando/skipper/circuit.
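In eskip's text form, a definition follows the shape 'name: predicates -> filters -> backend'. Two illustrative routes (names, filter arguments and addresses are made up):

    hello: Path("/hello") && Method("GET")
        -> setResponseHeader("X-Hello", "world")
        -> "https://www.example.org";

    assets: * -> static("/static", "/var/www") -> <shunt>;

The second route matches everything not matched by a more specific route (*), lets the 'static' filter generate the response, and therefore uses the shunt backend.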
Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is wired in the default executable as well, as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter: An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line: Skipper doesn't use dynamically loaded plugins; however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources. To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created that can be referenced in eskip definitions with the name 'Random': To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as: The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router: The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing. Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation. The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by: If more aggressive scaling is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.
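Along the lines of the hellofilter.go example referenced above, a custom filter might be sketched as follows (names are illustrative; the interfaces are those of the 'filters' package):

    package hello

    import "github.com/zalando/skipper/filters"

    type helloSpec struct{}

    type helloFilter struct{ who string }

    // Name is the identifier used to reference the filter in eskip routes.
    func (s *helloSpec) Name() string { return "hello" }

    // CreateFilter turns the raw route arguments into a filter instance.
    func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
        if len(args) != 1 {
            return nil, filters.ErrInvalidFilterParameters
        }
        who, ok := args[0].(string)
        if !ok {
            return nil, filters.ErrInvalidFilterParameters
        }
        return &helloFilter{who: who}, nil
    }

    // Request is a no-op here; filters may also modify the outgoing request.
    func (f *helloFilter) Request(ctx filters.FilterContext) {}

    // Response decorates every response passing through the route.
    func (f *helloFilter) Response(ctx filters.FilterContext) {
        ctx.Response().Header.Set("X-Hello", "Hello, "+f.who+"!")
    }

A route using it could then read: hello: * -> hello("world") -> "https://www.example.org";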
Package regexp implements extended regular expression search.