A thin wrapper for the lmdb C library. These are low-level bindings for the C API. The C documentation should be used as a reference while developing (http://symas.com/mdb/doc/group__mdb.html). With few exceptions, the errors returned by the package API are of type Errno or syscall.Errno. The only errors of type Errno are those defined in lmdb.h; other errno values such as EINVAL are of type syscall.Errno. Most mdb functions/methods can return errors. This example ignores errors for brevity; real code should check all return values.
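A minimal sketch of the error-distinction rule described above; lmdbOperation is a hypothetical stand-in for any call into this package:

    err := lmdbOperation() // hypothetical; any function of this package may return an error
    if err != nil {
        switch err.(type) {
        case Errno:
            // an MDB_* error defined in lmdb.h, e.g. MDB_NOTFOUND
        case syscall.Errno:
            // an operating-system errno such as EINVAL
        default:
            // any other error
        }
    }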
Package hercules contains the functions which are needed to gather various statistics from a Git repository. The analysis is expressed in the form of a tree: there are nodes - "pipeline items" - which require some other nodes to be executed before themselves and in turn provide the data for dependent nodes. There are several service items which do not produce any useful statistics but rather provide the requirements for other items. The top-level items include: - BurndownAnalysis - line burndown statistics for project, files and developers. - CouplesAnalysis - coupling statistics for files and developers. - ShotnessAnalysis - structural hotness and couples, by any Babelfish UAST XPath (functions by default). The typical API usage is to initialize the Pipeline class, then add the required analysis (this call will add all the needed intermediate pipeline items), then link and execute the analysis tree, and finally extract the result; see the sketch below. The actual usage example is cmd/hercules/root.go - the command line tool's code. You can provide additional options via `facts` on initialization, for example to provide your own logger, enable people-tracking, or set a custom tick size. Hercules depends heavily on https://github.com/src-d/go-git and leverages the diff algorithm through https://github.com/sergi/go-diff. Besides, BurndownAnalysis involves File and RBTree. These are low-level data structures which enable incremental blaming. File carries an instance of RBTree and the current line burndown state. RBTree implements the red-black balanced binary tree and is based on https://github.com/yasushi-saito/rbtree. Coupling stats are supposed to be further processed rather than observed directly. labours.py uses Swivel embeddings and visualises them in Tensorflow Projector. Shotness analysis, as well as other UAST-featured items, relies on Babelfish (https://doc.bblf.sh) and requires the server to be running.
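A condensed sketch of that flow. The names below (NewPipeline, DeployItem, Initialize, Commits, Run) follow the project's README and cmd/hercules/root.go, but exact signatures may differ between hercules versions:

    // repository is a *git.Repository obtained via go-git
    pipeline := hercules.NewPipeline(repository)
    // add the required analysis; intermediate items are deployed automatically
    burndownItem := pipeline.DeployItem(&hercules.BurndownAnalysis{}).(hercules.LeafPipelineItem)
    // link the analysis tree; extra options (logger, people-tracking, tick size) go into facts
    facts := map[string]interface{}{}
    if err := pipeline.Initialize(facts); err != nil {
        log.Fatal(err)
    }
    // execute over the commit history and extract the result
    commits, _ := pipeline.Commits(false)
    results, err := pipeline.Run(commits)
    if err != nil {
        log.Fatal(err)
    }
    burndownResult := results[burndownItem] // the BurndownAnalysis result, to be type-asserted by the caller
    _ = burndownResult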
Package beego provides an MVC framework. beego: an open-source, high-performance, modular, full-stack web framework. It is used for rapid development of RESTful APIs, web apps and backend services in Go. beego is inspired by Tornado, Sinatra and Flask with the added benefit of some Go-specific features such as interfaces and struct embedding. More information: http://beego.me
Package iris is a web framework which provides efficient and well-designed tools with a robust set of features to create your awesome and high-performance web application powered by unlimited potentials and portability. Source code and other details for the project are available at GitHub: Looking for further support? Note: This package is under active development status. Each month a new version is released to adapt to the latest web trends and technologies. Iris is a very pluggable ecosystem; the router can be customized by adapting a 'RouterBuilderPolicy && RouterReversionPolicy'. With the power of Iris' router adaptors, developers are able to use any third-party router's path features without any implications to the rest of their API. A developer is able to select between two out-of-the-box powerful routers: Httprouter, a custom version of https://github.com/julienschmidt/httprouter, which is edited to support Iris' subdomains, reverse routing, custom http errors and a lot of features; it should be a bit faster than the original too because of Iris' Context. It uses `/mypath/:firstParameter/path/:secondParameter` and `/mypath/*wildcardParamName`. Gorilla Mux, which is https://github.com/gorilla/mux and supports subdomains, custom http errors, reverse routing, pattern matching via regex and the rest of Iris' features. It uses `/mypath/{firstParameter:any-regex-valid-here}/path/{secondParameter}` and `/mypath/{wildcardParamName:.*}`. Example code: Run All HTTP methods are supported; users can register handlers for the same paths on different methods. The first parameter is the HTTP method, the second parameter is the request path of the route, and the third variadic parameter should contain one or more iris.Handler/HandlerFunc executed in the registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, Iris provides functions for all HTTP methods. The first parameter is the request path of the route, and the second variadic parameter should contain one or more iris.HandlerFunc executed in the registered order when a user requests that specific resource path from the server. Example code: Path parameters' syntax depends on the selected router. This is the only difference between the routers - the registered path form - and the API remains the same for both. Example `gorillamux` code: Example `httprouter` code: A set of routes that are grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is used to group routes; developers can declare an unlimited number of (nested) groups. Example code: With Iris users are able to register their own handlers for http statuses like 404 not found, 500 internal server error and so on. Example code: Custom http errors can also be registered for a specific group of routes. Example code: Static Files Example code: Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Iris supports 5 template engines out-of-the-box, and developers can still use any external golang template engine, as `context.ResponseWriter` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. 
Example code: The view engine supports bundled (https://github.com/jteeuwen/go-bindata) template files too. go-bindata gives you two functions, Asset and AssetNames; these can be set on each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/kataras/iris/tree/v6/_examples/intermediate/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode as they don't need to restart their app on every template edit. Example code: Each one of these template engines has different options located here: https://github.com/kataras/iris/tree/v6/adaptors/view . But you should have a basic idea of the framework by now; we just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the links below: * Examples: https://github.com/iris-contrib/examples * Adaptors: https://github.com/kataras/iris/tree/v6/adaptors * Middleware: https://github.com/kataras/iris/tree/v6/middleware and * https://github.com/iris-contrib/middleware Package iris is a web framework which provides efficient and well-designed tools with a robust set of features to create your awesome and high-performance web application powered by unlimited potentials and portability. For view engines, render engines, sessions, websockets, subdomains, automatic-TLS, context support with 50+ handy http functions, dynamic subdomains, router & routes, parties of subdomains & routes, access control, typescript compiler, basic auth, internationalization, logging, and much more, please visit https://godoc.org/gopkg.in/kataras/iris.v6
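To make the routing description above concrete, here is a hedged sketch using the v6 httprouter adaptor; the names follow the v6 examples linked above, but check them for the exact API:

    app := iris.New()
    app.Adapt(iris.DevLogger()) // optional logger adaptor
    app.Adapt(httprouter.New()) // or gorillamux.New()

    // method, path and a handler; ":name" is a path parameter in httprouter form
    app.Get("/hi/:name", func(ctx *iris.Context) {
        ctx.Writef("hi %s", ctx.Param("name"))
    })

    app.Listen(":8080")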
lmdrouter is a simple-to-use library for writing AWS Lambda functions in Go that listen to events of type API Gateway Proxy Request (represented by the `events.APIGatewayProxyRequest` type of the github.com/aws/aws-lambda-go/events package). The library allows creating functions that can match requests based on their URI, just like an HTTP server that uses the standard https://golang.org/pkg/net/http/#ServeMux (or any other community-built routing library such as https://github.com/julienschmidt/httprouter or https://github.com/go-chi/chi) would. The interface provided by the library is very similar to these libraries and should be familiar to anyone who has written HTTP applications in Go. When building large cloud-native applications, there's a certain balance to strike when it comes to deployment of APIs. On one side of the scale, each API endpoint has its own lambda function. This provides the greatest flexibility, but is extremely difficult to maintain. On the other side of the scale, there can be one lambda function for the entire API. This provides the least flexibility, but is the easiest to maintain. Both are probably not a good idea. With `lmdrouter`, one can create small lambda functions for different aspects of the API. For example, if your application model contains multiple domains (e.g. articles, authors, topics, etc.), you can create one lambda function for each domain, and deploy these independently (e.g. everything below "/api/articles" is one lambda function, everything below "/api/authors" is another function). This is also useful for applications where different teams are in charge of different parts of the API. * Supports all HTTP methods. * Supports middleware functions at a global and per-resource level. * Supports path parameters with a simple ":<name>" format (e.g. "/posts/:id"). * Provides ability to automatically "unmarshal" an API Gateway request to an arbitrary Go struct, with data coming either from path and query string parameters, or from the request body (only JSON requests are currently supported). See the documentation for the `UnmarshalRequest` function for more information. * Provides the ability to automatically "marshal" responses of any type to an API Gateway response (only JSON responses are currently generated). See the MarshalResponse function for more information. * Implements net/http.Handler for local development and general usage outside of an AWS Lambda environment.
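A hedged sketch of the routing and marshalling flow described above; the lmdrouter import path and exact signatures should be checked against the package docs:

    var router = lmdrouter.NewRouter("/api")

    func init() {
        router.Route("GET", "/posts/:id", getPost)
    }

    func getPost(ctx context.Context, req events.APIGatewayProxyRequest) (
        events.APIGatewayProxyResponse, error,
    ) {
        id := req.PathParameters["id"] // ":id" is exposed as a path parameter
        return lmdrouter.MarshalResponse(http.StatusOK, nil, map[string]string{"id": id})
    }

    func main() {
        lambda.Start(router.Handler) // or use the net/http handler outside AWS Lambda
    }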
Package Faygo is a fast and concise Go Web framework that can be used to develop high-performance web apps (especially APIs) with less code; just define a struct Handler and Faygo will automatically bind/verify the request parameters and generate the online API doc. Copyright 2016 HenryLee. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. A trivial example is shown below. run result: StructHandler tag value description: List of supported param value types:
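A hedged sketch of the struct-Handler idea described above; the tag syntax and the Serve signature follow Faygo's README and may not match the current API exactly (the route registration call is omitted because it varies between versions):

    type Index struct {
        Id    int    `param:"<in:path> <required> <desc:ID>"`
        Title string `param:"<in:query> <nonzero>"`
    }

    // Serve is called after Faygo has bound and verified the fields above
    // from the incoming request; returning the struct echoes the bound values.
    func (i *Index) Serve(ctx *faygo.Context) error {
        return ctx.JSON(200, i)
    }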
Package gitfs is a complete solution for static files in Go code. When Go code uses non-Go files, they are not packaged into the binary. The common approach to the problem, as implemented by go-bindata (https://github.com/kevinburke/go-bindata), is to convert all the required static files into Go code, which is eventually compiled into the binary. This library takes a different approach, in which the static files are not required to be "binary-packed", and are not even required to be in the same repository as the Go code. This package enables loading static content from a remote git repository, packing it into the binary if desired, or loading it from a local path during development. The transition from remote repository, to binary-packed content, to local content is completely smooth. *The API is simple and minimalistic*. The `New` method returns a (sub)tree of a Git repository, represented by the standard `http.FileSystem` interface. This object enables anything that is possible to do with a regular filesystem, such as opening a file or listing a directory. Additionally, the ./fsutil package provides enhancements over the `http.FileSystem` object (they can work with any object that implements the interface) such as loading Go templates in the standard way, walking over the filesystem, and applying glob patterns on a filesystem. Supported features: * Loading of specific version/tag/branch. * For debug purposes, the files can be loaded from a local path instead of the remote repository. * Files are loaded lazily by default, or they can be preloaded if required. * Files can be packed into the Go binary using a command line tool. * This project uses the standard `http.FileSystem` interface. * In ./fsutil there are some generally useful tools around the `http.FileSystem` interface. To create a filesystem using the `New` function, provide the Git project with the pattern: `github.com/<owner>/<repo>(/<path>)?(@<ref>)?`. If no `path` is specified, the root of the project will be used. `ref` can be any git branch using `heads/<branch name>` or any git tag using `tags/<tag>`. If the tag is of Semver format, the `tags/` prefix is not required. If no `ref` is specified, the default branch will be used. In the following example, the repository `github.com/x/y` at tag v1.2.3 and internal path "static" is loaded: The variable `fs` implements the `http.FileSystem` interface. Reading a file from the repository can be done using the `Open` method. This function accepts a path, relative to the root of the defined filesystem. The `fs` variable can be used in anything that accepts the standard interface. For example, it can be used for serving static content using the standard library: When used with a private GitHub repository, the GitHub API calls should be instrumented with the appropriate credentials. The credentials can be passed by providing an HTTP client. For example, to use a GitHub token from the environment variable `GITHUB_TOKEN`: For quick development workflows, it is easier and faster to use local static content and not remote content that was pushed to a remote repository. This is enabled by the `OptLocal` option. To use this option only in local development and not in a production system, it can be used as follows: In this example, we stored the value for `OptLocal` in an environment variable. As a result, when running the program with `LOCAL_DEBUG=.` local files will be used, while running without it will result in using the remote files. 
(The value of the environment variable should point to any directory within the github project.) Using gitfs does not mean that files are required to be remotely fetched. When binary packing of the files is needed, a command line tool can pack them for you. To get the tool run: `go get github.com/posener/gitfs/cmd/gitfs`. Run the tool with `gitfs <patterns>`. This generates a `gitfs.go` file in the current directory that contains all the used filesystems' data. This will cause all `gitfs.New` calls to automatically use the packed data, instead of fetching the data at runtime. By default, a test will also be generated with the code. This test fails when the local files are modified without updating the binary content. Use binary-packing with `go generate`: To generate all filesystems used by a project add `//go:generate gitfs ./...` in the root of the project. To generate only a specific filesystem add `//go:generate gitfs $GOFILE` in the file it is being used. An interesting anecdote is that the gitfs command uses itself for generating its own templates. File exclusion can be done by including only specific files using a glob pattern with the `OptGlob` option. This affects local loading of files, remote loading, and binary packing (and may reduce binary size). For example: The ./fsutil package is a collection of useful functions that can work with any `http.FileSystem` implementation. For example, here we will use a function that loads go templates from the filesystem. With gitfs you can open a remote git repository, and load any file, including non-go files. In this example, the README.md file of a remote repository is loaded.
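A short sketch combining the pieces described above, following the README of github.com/posener/gitfs: load a filesystem from a repository (or from local files when LOCAL_DEBUG is set) and serve it over HTTP:

    ctx := context.Background()
    fs, err := gitfs.New(ctx,
        "github.com/x/y/static@v1.2.3",           // project, path and ref as described above
        gitfs.OptLocal(os.Getenv("LOCAL_DEBUG")), // empty value means: use the remote repository
    )
    if err != nil {
        log.Fatal(err)
    }
    http.Handle("/", http.FileServer(fs))
    log.Fatal(http.ListenAndServe(":8080", nil))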
Package CloudForest implements ensembles of decision trees for machine learning in pure Go (golang to search engines). It allows for a number of related algorithms for classification, regression, feature selection and structure analysis on heterogeneous numerical/categorical data with missing values. These include: Breiman and Cutler's Random Forest for Classification and Regression Adaptive Boosting (AdaBoost) Classification Gradient Boosting Tree Regression Entropy and Cost driven classification L1 regression Feature selection with artificial contrasts Proximity and model structure analysis Roughly balanced bagging for unbalanced classification The API hasn't stabilized yet and may change rapidly. Tests and benchmarks have been performed only on embargoed data sets and cannot yet be released. Library documentation is in code and can be viewed with godoc or live at: http://godoc.org/github.com/ryanbressler/CloudForest Documentation of command line utilities and file formats can be found in README.md, which can be viewed formatted on github: http://github.com/ryanbressler/CloudForest Pull requests and bug reports are welcome. CloudForest was created by Ryan Bressler and is being developed in the Shumelivich Lab at the Institute for Systems Biology for use on genomic/biomedical data with partial support from The Cancer Genome Atlas and the Inova Translational Medicine Institute. CloudForest is intended to provide fast, comprehensible building blocks that can be used to implement ensembles of decision trees. CloudForest is written in Go to allow a data scientist to develop and scale new models and analysis quickly instead of having to modify complex legacy code. Data structures and file formats are chosen with use in multi-threaded and cluster environments in mind. Go's support for function types is used to provide an interface to run code as data is percolated through a tree. This method is flexible enough that it can extend the tree being analyzed. Growing a decision tree using Breiman and Cutler's method can be done in an anonymous function/closure passed to a tree's root node's Recurse method: This allows a researcher to include whatever additional analysis they need (importance scores, proximity etc) in tree growth. The same Recurse method can also be used to analyze existing forests to tabulate scores or extract structure. Utilities like leafcount and errorrate use this method to tabulate data about the tree in collection objects. Decision trees are grown with the goal of reducing "Impurity", which is usually defined as Gini Impurity for categorical targets or mean squared error for numerical targets. CloudForest grows trees against the Target interface which allows for alternative definitions of impurity. CloudForest includes several alternative targets: Additional targets can be stacked on top of these targets to add boosting functionality: Repeatedly splitting the data and searching for the best split at each node of a decision tree are the most computationally intensive parts of decision tree learning and CloudForest includes optimized code to perform these tasks. Go's slices are used extensively in CloudForest to make it simple to interact with optimized code. Many previous implementations of Random Forest have avoided reallocation by reordering data in place and keeping track of start and end indexes. In go, slices pointing at the same underlying arrays make this sort of optimization transparent. 
For example a function like: can return left and right slices that point to the same underlying array as the original slice of cases, but these slices should not have their values changed. Functions used while searching for the best split also accept pointers to reusable slices and structs to maximize speed by keeping memory allocations to a minimum. BestSplitAllocs contains pointers to these items and its use can be seen in functions like: For categorical predictors, BestSplit will also attempt to intelligently choose between 4 different implementations depending on user input and the number of categories. These include exhaustive, random, and iterative searches for the best combination of categories implemented with bitwise operations against int and big.Int. See BestCatSplit, BestCatSplitIter, BestCatSplitBig and BestCatSplitIterBig. All numerical predictors are handled by BestNumSplit which relies on go's sorting package. Training a Random Forest is an inherently parallel process and CloudForest is designed to allow parallel implementations that can tackle large problems while keeping memory usage low by writing and using data structures directly to/from disk. Trees can be grown in separate go routines. The growforest utility provides an example of this that uses go routines and channels to grow trees in parallel and write trees to disk as they are finished by the "worker" go routines. The few summary statistics like mean impurity decrease per feature (importance) can be calculated using thread-safe data structures like RunningMean. Trees can also be grown on separate machines. The .sf stochastic forest format allows several small forests to be combined by concatenation and the ForestReader and ForestWriter structs allow these forests to be accessed tree by tree (or even node by node) from disk. For data sets that are too big to fit in memory on a single machine Tree.Grow and FeatureMatrix.BestSplitter can be reimplemented to load candidate features from disk, distributed database etc. By default CloudForest uses a fast heuristic for missing values. When proposing a split on a feature with missing data the missing cases are removed and the impurity value is corrected to use three way impurity which reduces the bias towards features with lots of missing data: Missing values in the target variable are left out of impurity calculations. This provides generally good results at a fraction of the computational cost of imputing data. Optionally, feature.ImputeMissing or featurematrix.ImputeMissing can be called before forest growth to impute missing values to the feature mean/mode, which Breiman [2] suggests as a fast method for imputing values. This forest could also be analyzed for proximity (using leafcount or tree.GetLeaves) to do the more accurate proximity weighted imputation Breiman describes. Experimental support is provided for 3 way splitting which splits missing cases onto a third branch. [2] This has so far yielded mixed results in testing. At some point in the future support may be added for local imputing of missing values during tree growth as described in [3] [1] http://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#missing1 [2] https://code.google.com/p/rf-ace/ [3] http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.aoas/1223908043&page=record In CloudForest data is stored using the FeatureMatrix struct which contains Features. 
The Feature struct implements storage and methods for both categorical and numerical data and calculations of impurity etc. and the search for the best split. The Target interface abstracts the methods of Feature that are needed for a feature to be predictable. This allows for the implementation of alternative types of regression and classification. Trees are built from Nodes and Splitters and stored within a Forest. Tree has a Grow method that implements Breiman and Cutler's method (see the extract above) for growing a tree. A GrowForest method is also provided that implements the rest of the method including sampling cases, but it may be faster to grow the forest to disk as in the growforest utility. Prediction and voting are done using Tree.Vote and CatBallotBox and NumBallotBox which implement the VoteTallyer interface.
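The slice behaviour mentioned above (partitioning without reallocation) is ordinary Go and can be illustrated without CloudForest's API at all:

    // left and right are views into the same backing array as cases, so
    // reordering the elements of either is visible through cases as well
    // and no copy is made.
    cases := []int{4, 7, 1, 9, 3}
    pivot := 2
    left, right := cases[:pivot], cases[pivot:]
    left[0] = 42
    fmt.Println(cases) // [42 7 1 9 3]
    fmt.Println(right) // [1 9 3]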
Package autopprof provides a development-time library to collect pprof profiles from Go programs. This package is experimental and APIs may change.
Package messagebird is an official library for interacting with the MessageBird.com API. The MessageBird API connects your website or application to operators around the world. With our API you can integrate SMS, Chat & Voice. You can find more documentation on the MessageBird developers portal: https://developers.messagebird.com/
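A hedged sketch of sending an SMS: messagebird.New is the client constructor, while the sms sub-package call reflects recent versions of the library and may differ in older ones:

    client := messagebird.New("your-access-key")
    msg, err := sms.Create(
        client,
        "MyCompany",             // originator
        []string{"31612345678"}, // recipients
        "Hello from Go",         // body
        nil,                     // optional parameters
    )
    if err != nil {
        log.Fatal(err)
    }
    log.Println("queued message", msg.ID)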
Package iex provides an API for accessing and using IEX's developer API. https://www.iextrading.com/developer/
Package graphql provides a GraphQL client implementation. For more information, see package github.com/shurcooL/githubv4, which is a specialized version targeting GitHub GraphQL API v4. That package is driving the feature development. For now, see README for more details.
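A minimal sketch of issuing a query with this client; the endpoint and the query shape are examples only:

    client := graphql.NewClient("https://example.com/graphql", nil) // nil uses http.DefaultClient

    var q struct {
        Me struct {
            Name graphql.String
        }
    }
    if err := client.Query(context.Background(), &q, nil); err != nil {
        log.Fatal(err)
    }
    fmt.Println(q.Me.Name)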
update-urls updates GitHub URL docs for each service endpoint. It is meant to be used periodically by go-github repo maintainers to update stale GitHub Developer v3 API documentation URLs. Usage (from go-github directory): When confronted with "PLEASE CHECK MANUALLY AND FIX", the problematic URL needs to be debugged. To debug a specific file, run like this:
Package pinpoint provides the APIs to instrument Go applications for Pinpoint (https://github.com/pinpoint-apm/pinpoint). This package enables you to monitor Go applications using Pinpoint. Go applications must be instrumented manually at the source code level, because Go is a compiled language and does not have a virtual machine like Java. Developers can instrument Go applications using the APIs provided in this package. It has support for instrumenting Go’s built-in http package, database/sql drivers and plug-ins for popular frameworks and toolkits (like Gin and gRPC, ...). In Pinpoint, a transaction consists of a group of Spans. Each span represents a trace of a single logical node where the transaction has gone through. A span records important function invocations and their related data (arguments, return value, etc.) before encapsulating them as SpanEvents in a call-stack-like representation. The span itself and each of its SpanEvents represents a function invocation. Find out more about the concept of Pinpoint at the links below:
Package swagger (2.0) provides a powerful interface to your API Contains an implementation of Swagger 2.0. It knows how to serialize, deserialize and validate swagger specifications. Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability. We created Swagger to help fulfill the promise of APIs. Swagger helps companies like Apigee, Getty Images, Intuit, LivingSocial, McKesson, Microsoft, Morningstar, and PayPal build the best possible services with RESTful APIs. Now in version 2.0, Swagger is more enabling than ever. And it's 100% open source software. More detailed documentation is available at https://goswagger.io. Install: The implementation also provides a number of command line tools to help work with swagger. Currently there is a spec validator tool: To generate a server for a swagger spec document: To generate a client for a swagger spec document: To generate a swagger spec document for a go application: There are several other sub commands available for the generate command You're free to add files to the directories the generated code lands in, but the files generated by the generator itself will be regenerated on following generation runs so any changes to those files will be lost. However extra files you create won't be lost so they are safe to use for customizing the application to your needs. To generate a server for a swagger spec document:
Package bindata converts any file into manageable Go source code. Useful for embedding binary data into a go program. The file data is optionally gzip compressed before being converted to a raw byte slice. The following paragraphs cover some of the customization options which can be specified in the Config struct, which must be passed into the Translate() call. When used with the `Debug` option, the generated code does not actually include the asset data. Instead, it generates function stubs which load the data from the original file on disk. The asset API remains identical between debug and release builds, so your code will not have to change. This is useful during development when you expect the assets to change often. The host application using these assets uses the same API in both cases and will not have to care where the actual data comes from. An example is a Go webserver with some embedded, static web content like HTML, JS and CSS files. While developing it, you do not want to rebuild the whole server and restart it every time you make a change to a bit of javascript. You just want to build and launch the server once. Then just press refresh in the browser to see those changes. Embedding the assets with the `debug` flag allows you to do just that. When you are finished developing and ready for deployment, just re-invoke `go-bindata` without the `-debug` flag. It will now embed the latest version of the assets. The `MemCopy` option will alter the way the output file is generated. If false, it will employ a hack that allows us to read the file data directly from the compiled program's `.rodata` section. This ensures that when we call our generated function, we omit unnecessary memcopies. The downside of this is that it requires dependencies on the `reflect` and `unsafe` packages. These may be restricted on platforms like AppEngine and thus prevent you from using this mode. Another disadvantage is that the byte slice we create is strictly read-only. For most use-cases this is not a problem, but if you ever try to alter the returned byte slice, a runtime panic is thrown. Use this mode only on target platforms where memory constraints are an issue. The default behaviour is to use the old code generation method. This prevents the two previously mentioned issues, but will employ at least one extra memcopy and thus increase memory requirements. For instance, consider the following two examples: This would be the default mode, using an extra memcopy but gives a safe implementation without dependencies on `reflect` and `unsafe`: Here is the same functionality, but uses the `.rodata` hack. The byte slice returned from this example can not be written to without generating a runtime error. The Compress option indicates that the supplied assets are GZIP compressed before being turned into Go code. The data should still be accessed through a function call, so nothing changes in the API. Disabling it is useful if you do not care for compression, or the supplied resource is already compressed; compressing it again would not add any value and may even increase the size of the data. The default behaviour of the program is to use compression. The keys used in the `_bindata` map are the same as the input file name passed to `go-bindata`. This includes the path. In most cases, this is not desirable, as it puts potentially sensitive information in your code base. For this purpose, the tool supplies another command line flag `-prefix`. 
This accepts a portion of a path name, which should be stripped off from the map keys and function names. For example, running without the `-prefix` flag, we get: Running with the `-prefix` flag, we get: With the optional Tags field, you can specify any go build tags that must be fulfilled for the output file to be included in a build. This is useful when including binary data in multiple formats, where the desired format is specified at build time with the appropriate tags. The tags are appended to a `// +build` line in the beginning of the output file and must follow the build tags syntax specified by the go tool.
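On the consumer side the generated accessor looks the same whichever options were used; a small sketch (the asset key is an example and depends on the input paths and the `-prefix` flag described above):

    data, err := Asset("static/index.html")
    if err != nil {
        log.Fatal(err) // asset unknown or, in debug mode, unreadable on disk
    }
    fmt.Printf("loaded %d bytes\n", len(data))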
A push notification server using the Gin framework written in Go (Golang). Details about the gorush project can be found on the GitHub page: The pre-compiled binaries can be downloaded from the release page. Send an Android notification Send an iOS notification The default endpoint is APNs development. Please add the -production flag for the APNs production push endpoint. Run the gorush web server Get the Go status of the API server using the httpie tool: A simple send-iOS-notification example, where the platform value is 1: A simple send-Android-notification example, where the platform value is 2: For more details, see the documentation and example.
Package ebiten provides graphics and input API to develop a 2D game. You can start the game by calling the function Run.
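A minimal sketch of starting a game with Run; the signature below matches the classic ebiten 1.x API this doc describes and differs in v2:

    func update(screen *ebiten.Image) error {
        if ebiten.IsRunningSlowly() {
            return nil // skip drawing when the game is behind schedule
        }
        return ebitenutil.DebugPrint(screen, "Hello, world!")
    }

    func main() {
        // width, height, scale, window title
        if err := ebiten.Run(update, 320, 240, 2, "Hello, world!"); err != nil {
            log.Fatal(err)
        }
    }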
Package goa provides the runtime support for goa microservices. goa service development begins with writing the *design* of a service. The design is described using the goa language implemented by the github.com/goadesign/goa/design/apidsl package. The `goagen` tool consumes the metadata produced from executing the design language to generate service specific code that glues the underlying HTTP server with action specific code and data structures. The goa package contains supporting functionality for the generated code including basic request and response state management through the RequestData and ResponseData structs, error handling via error classes, middleware support via the Middleware data structure as well as decoding and encoding algorithms. The RequestData and ResponseData structs provide access to the request and response state. goa request handlers also accept a context.Context interface as first parameter so that deadlines and cancelation signals may easily be implemented. The request state exposes the underlying http.Request object as well as the deserialized payload (request body) and parameters (both path and querystring parameters). Generated action specific contexts wrap the context.Context, ResponseData and RequestData data structures. They expose properly typed fields that correspond to the request parameters and body data structure descriptions appearing in the design. The response state exposes the response status and body length as well as the underlying ResponseWriter. Action contexts provide action specific helper methods that write the responses as described in the design, optionally taking an instance of the media type for responses that contain a body. Here is an example showing an "update" action corresponding to the following design (extract): The action signature generated by goagen is: where UpdateBottleContext is: and implements: The definitions of the Bottle and UpdateBottlePayload data structures are omitted for brevity. There is one controller interface generated per resource defined via the design language. The interface exposes the controller actions. User code must provide data structures that implement these interfaces when mounting a controller onto a service. The controller data structure should include an anonymous field of type *goa.Controller which takes care of implementing the middleware handling. A goa middleware is a function that takes and returns a Handler. A Handler is the low-level function which handles incoming HTTP requests. goagen generates the handlers code so each handler creates the action specific context and calls the controller action with it. Middleware can be added to a goa service or a specific controller using the corresponding Use methods. goa comes with a few stock middleware that handle common needs such as logging, panic recovery or using the RequestID header to trace requests across multiple services. The controller action methods generated by goagen such as the Update method of the BottleController interface shown above all return an error value. goa defines an Error struct that action implementations can use to describe the content of the corresponding HTTP response. Errors can be created using error classes which are functions created via NewErrorClass. The ErrorHandler middleware maps errors to HTTP responses. Errors that are instances of the Error struct are mapped using the struct fields while other types of errors return responses with status code 500 and the error message in the body. 
The goa design language documented in the dsl package makes it possible to attach validations to data structure definitions. One specific type of validation consists of defining the format that a data structure string field must follow. Examples of formats include email, date-time, hostname, etc. The ValidateFormat function provides the implementation for the format validation invoked from the code generated by goagen. The goa design language makes it possible to specify the encodings supported by the API both as input (Consumes) and output (Produces). goagen uses that information to register the corresponding packages with the service encoders and decoders via their Register methods. The service exposes the DecodeRequest and EncodeResponse methods that implement a simple content type negotiation algorithm for picking the right encoder for the "Content-Type" (decoder) or "Accept" (encoder) request header. Package goa standardizes on structured error responses: a request that fails because of an invalid input or an unexpected condition produces a response that contains a structured error. The error data structures returned to clients contain five fields: an ID, a code, a status, a detail and metadata. The ID is unique for the occurrence of the error; it helps correlate the content of the response with the content of the service logs. The code defines the class of error (e.g. "invalid_parameter_type") and the status the corresponding HTTP status (e.g. 400). The detail contains a message specific to the error occurrence. The metadata contains key/value pairs that provide contextual information (name of parameters, value of invalid parameter etc.). Instances of Error can be created via Error Class functions. See http://goa.design/implement/error_handling.html All instances of errors created via an error class implement the ServiceError interface. This interface is leveraged by the error handler middleware to produce the error responses. The code generated by goagen calls the helper functions exposed in this file when it encounters invalid data (wrong type, validation errors etc.) such as InvalidParamTypeError, InvalidAttributeTypeError etc. These methods return errors that get merged with any previously encountered error via the Error Merge method. The helper functions are error classes stored in global variables. This means your code can override their values to produce arbitrary error responses. goa includes an error handler middleware that takes care of mapping back any error returned by previously called middleware or action handler into HTTP responses. If the error was created via an error class then the corresponding content including the HTTP status is used, otherwise an internal error is returned. Errors that bubble up all the way to the top (i.e. not handled by the error middleware) also generate an internal error response.
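A hedged sketch tying these pieces together: an error class plus a generated-style action context. BottleController, app.UpdateBottleContext, the db field and ToMedia are illustrative stand-ins for what goagen would produce for the design extract above:

    // error classes map returned errors to HTTP responses via the error handler middleware
    var ErrBottleNotFound = goa.NewErrorClass("bottle_not_found", 404)

    func (c *BottleController) Update(ctx *app.UpdateBottleContext) error {
        bottle := c.db.Get(ctx.BottleID) // hypothetical storage lookup
        if bottle == nil {
            return ErrBottleNotFound("bottle not found", "id", ctx.BottleID)
        }
        bottle.Name = ctx.Payload.Name
        return ctx.OK(bottle.ToMedia()) // writes the response described in the design
    }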
Package ec2 provides the client and types for making API requests to Amazon Elastic Compute Cloud. Amazon Elastic Compute Cloud (Amazon EC2) provides resizable computing capacity in the AWS Cloud. Using Amazon EC2 eliminates the need to invest in hardware up front, so you can develop and deploy applications faster. See https://docs.aws.amazon.com/goto/WebAPI/ec2-2016-11-15 for more information on this service. See ec2 package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/ec2/ To contact Amazon Elastic Compute Cloud with the SDK use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon Elastic Compute Cloud client EC2 for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/ec2/#New
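For example, a client can be created from a session and used to call an operation such as DescribeInstances (the region value is an example):

    sess := session.Must(session.NewSession(&aws.Config{
        Region: aws.String("us-west-2"),
    }))
    svc := ec2.New(sess)

    out, err := svc.DescribeInstances(&ec2.DescribeInstancesInput{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(len(out.Reservations), "reservations")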
Package chimesdkvoice provides the API client, operations, and parameter types for Amazon Chime SDK Voice. The Amazon Chime SDK telephony APIs in this section enable developers to create PSTN calling solutions that use Amazon Chime SDK Voice Connectors, and Amazon Chime SDK SIP media applications. Developers can also order and manage phone numbers, create and manage Voice Connectors and SIP media applications, and run voice analytics.
Package whatsapp provides a developer API to interact with the WhatsAppWeb-Servers.
Package kms provides the client and types for making API requests to AWS Key Management Service. AWS Key Management Service (AWS KMS) is an encryption and key management web service. This guide describes the AWS KMS operations that you can call programmatically. For general information about AWS KMS, see the AWS Key Management Service Developer Guide (http://docs.aws.amazon.com/kms/latest/developerguide/). AWS provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, iOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to AWS KMS and other AWS services. For example, the SDKs take care of tasks such as signing requests (see below), managing errors, and retrying requests automatically. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services (http://aws.amazon.com/tools/). We recommend that you use the AWS SDKs to make programmatic API calls to AWS KMS. Clients must support TLS (Transport Layer Security) 1.0. We recommend TLS 1.2. Clients must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes. Requests must be signed by using an access key ID and a secret access key. We strongly recommend that you do not use your AWS account (root) access key ID and secret key for everyday work with AWS KMS. Instead, use the access key ID and secret access key for an IAM user, or you can use the AWS Security Token Service to generate temporary security credentials that you can use to sign requests. All AWS KMS operations require Signature Version 4 (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html). AWS KMS supports AWS CloudTrail, a service that logs AWS API calls and related events for your AWS account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to AWS KMS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the AWS CloudTrail User Guide (http://docs.aws.amazon.com/awscloudtrail/latest/userguide/). For more information about credentials and request signing, see the following: AWS Security Credentials (http://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html) This topic provides general information about the types of credentials used for accessing AWS. Temporary Security Credentials (http://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html) This section of the IAM User Guide describes how to create and use temporary security credentials. Signature Version 4 Signing Process (http://docs.aws.amazon.com/general/latest/gr/signature-version-4.html) This set of topics walks you through the process of signing a request using an access key ID and a secret access key. Of the APIs discussed in this guide, the following will prove the most useful for most applications. You will likely perform actions other than these, such as creating keys and assigning policies, by using the console. Encrypt Decrypt GenerateDataKey GenerateDataKeyWithoutPlaintext See https://docs.aws.amazon.com/goto/WebAPI/kms-2014-11-01 for more information on this service. See kms package documentation for more information. 
https://docs.aws.amazon.com/sdk-for-go/api/service/kms/ To contact AWS Key Management Service with the SDK use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Key Management Service client KMS for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/kms/#New
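For example, the client can be created with New and used to call GenerateDataKey, one of the operations highlighted above (the key alias is a placeholder):

    sess := session.Must(session.NewSession())
    svc := kms.New(sess)

    out, err := svc.GenerateDataKey(&kms.GenerateDataKeyInput{
        KeyId:   aws.String("alias/my-key"),
        KeySpec: aws.String(kms.DataKeySpecAes256),
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("plaintext key: %d bytes, encrypted blob: %d bytes\n",
        len(out.Plaintext), len(out.CiphertextBlob))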
Package iotfleetwise provides the API client, operations, and parameter types for AWS IoT FleetWise. Amazon Web Services IoT FleetWise is a fully managed service that you can use to collect, model, and transfer vehicle data to the Amazon Web Services cloud at scale. With Amazon Web Services IoT FleetWise, you can standardize all of your vehicle data models, independent of the in-vehicle communication architecture, and define data collection rules to transfer only high-value data to the cloud. For more information, see What is Amazon Web Services IoT FleetWise? in the Amazon Web Services IoT FleetWise Developer Guide.
Package route53recoverycluster provides the API client, operations, and parameter types for Route53 Recovery Cluster. Welcome to the Routing Control (Recovery Cluster) API Reference Guide for Amazon Route 53 Application Recovery Controller. With Route 53 ARC, you can use routing control with extreme reliability to recover applications by rerouting traffic across Availability Zones or Amazon Web Services Regions. Routing controls are simple on/off switches hosted on a highly available cluster in Route 53 ARC. A cluster provides a set of five redundant Regional endpoints against which you can run API calls to get or update the state of routing controls. To implement failover, you set one routing control to ON and another one to OFF, to reroute traffic from one Availability Zone or Amazon Web Services Region to another. Be aware that you must specify a Regional endpoint for a cluster when you work with API cluster operations to get or update routing control states in Route 53 ARC. In addition, you must specify the US West (Oregon) Region for Route 53 ARC API calls. For example, use the parameter --region us-west-2 with AWS CLI commands. For more information, see Get and update routing control states using the API in the Amazon Route 53 Application Recovery Controller Developer Guide. This API guide includes information about the API operations for how to get and update routing control states in Route 53 ARC. To work with routing control in Route 53 ARC, you must first create the required components (clusters, control panels, and routing controls) using the recovery cluster configuration API. For more information about working with routing control in Route 53 ARC, see the following: Create clusters, control panels, and routing controls by using API operations. For more information, see the Recovery Control Configuration API Reference Guide for Amazon Route 53 Application Recovery Controller. Learn about the components in recovery control, including clusters, routing controls, and control panels, and how to work with Route 53 ARC in the Amazon Web Services console. For more information, see Recovery control components in the Amazon Route 53 Application Recovery Controller Developer Guide. Route 53 ARC also provides readiness checks that continually audit resources to help make sure that your applications are scaled and ready to handle failover traffic. For more information about the related API operations, see the Recovery Readiness API Reference Guide for Amazon Route 53 Application Recovery Controller. For more information about creating resilient applications and preparing for recovery readiness with Route 53 ARC, see the Amazon Route 53 Application Recovery Controller Developer Guide.
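A hedged sketch of flipping a routing control with this client; real code must also target one of the cluster's Regional endpoints as described above, and the ARN is a placeholder:

    ctx := context.Background()
    cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-west-2"))
    if err != nil {
        log.Fatal(err)
    }
    client := route53recoverycluster.NewFromConfig(cfg)

    _, err = client.UpdateRoutingControlState(ctx, &route53recoverycluster.UpdateRoutingControlStateInput{
        RoutingControlArn:   aws.String("your-routing-control-arn"),
        RoutingControlState: types.RoutingControlStateOn, // set the paired control to Off to reroute traffic
    })
    if err != nil {
        log.Fatal(err)
    }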
Package sdk is the official AWS SDK v2 for the Go programming language. aws-sdk-go-v2 is the Developer Preview for v2 of the AWS SDK for the Go programming language. Look for additional documentation and examples to be added. The best way to get started working with the SDK is to use `go get` to add the SDK to your Go workspace manually. You could also use [Dep] to add the SDK to your application's dependencies. Using [Dep] will simplify your update story and help keep your application pinned to a specific version of the SDK. Hello AWS: This example shows how you can use the v2 SDK to make an API request using the SDK's Amazon DynamoDB client.
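The SDK has since moved past the Developer Preview; the sketch below uses the current v2 style (config.LoadDefaultConfig, NewFromConfig) rather than the preview-era request/Send pattern, so adjust it to whichever release you are on:

    cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithRegion("us-west-2"))
    if err != nil {
        log.Fatal(err)
    }
    client := dynamodb.NewFromConfig(cfg)

    out, err := client.ListTables(context.TODO(), &dynamodb.ListTablesInput{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("tables:", out.TableNames)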
Package m2 provides the API client, operations, and parameter types for AWS Mainframe Modernization. Amazon Web Services Mainframe Modernization provides tools and resources to help you plan and implement migration and modernization from mainframes to Amazon Web Services managed runtime environments. It provides tools for analyzing existing mainframe applications, developing or updating mainframe applications using COBOL or PL/I, and implementing an automated pipeline for continuous integration and continuous delivery (CI/CD) of the applications.
Package cloudfront provides the client and types for making API requests to Amazon CloudFront. This is the Amazon CloudFront API Reference. This guide is for developers who need detailed information about CloudFront API actions, data types, and errors. For detailed information about CloudFront features, see the Amazon CloudFront Developer Guide. See https://docs.aws.amazon.com/goto/WebAPI/cloudfront-2017-10-30 for more information on this service. See cloudfront package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/cloudfront/ To contact Amazon CloudFront with the SDK use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the Amazon CloudFront client CloudFront for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/cloudfront/#New
Package lookoutmetrics provides the API client, operations, and parameter types for Amazon Lookout for Metrics. This is the Amazon Lookout for Metrics API Reference. For an introduction to the service with tutorials for getting started, visit Amazon Lookout for Metrics Developer Guide.
Package codegurureviewer provides the API client, operations, and parameter types for Amazon CodeGuru Reviewer. This section provides documentation for the Amazon CodeGuru Reviewer API operations. CodeGuru Reviewer is a service that uses program analysis and machine learning to detect potential defects that are difficult for developers to find and recommends fixes in your Java and Python code. By proactively detecting and providing recommendations for addressing code defects and implementing best practices, CodeGuru Reviewer improves the overall quality and maintainability of your code base during the code review stage. For more information about CodeGuru Reviewer, see the Amazon CodeGuru Reviewer User Guide. To improve the security of your CodeGuru Reviewer API calls, you can establish a private connection between your VPC and CodeGuru Reviewer by creating an interface VPC endpoint. For more information, see CodeGuru Reviewer and interface VPC endpoints (Amazon Web Services PrivateLink) in the Amazon CodeGuru Reviewer User Guide.
Package iotthingsgraph provides the API client, operations, and parameter types for AWS IoT Things Graph. AWS IoT Things Graph provides an integrated set of tools that enable developers to connect devices and services that use different standards, such as units of measure and communication protocols. AWS IoT Things Graph makes it possible to build IoT applications with little to no code by connecting devices and services and defining how they interact at an abstract level. For more information about how AWS IoT Things Graph works, see the User Guide. The AWS IoT Things Graph service is discontinued.
Package chimesdkmessaging provides the API client, operations, and parameter types for Amazon Chime SDK Messaging. The Amazon Chime SDK messaging APIs in this section allow software developers to send and receive messages in custom messaging applications. These APIs depend on the frameworks provided by the Amazon Chime SDK identity APIs. For more information about the messaging APIs, see Amazon Chime SDK messaging.
Package greengrassv2 provides the API client, operations, and parameter types for AWS IoT Greengrass V2. IoT Greengrass brings local compute, messaging, data management, sync, and ML inference capabilities to edge devices. This enables devices to collect and analyze data closer to the source of information, react autonomously to local events, and communicate securely with each other on local networks. Local devices can also communicate securely with Amazon Web Services IoT Core and export IoT data to the Amazon Web Services Cloud. IoT Greengrass developers can use Lambda functions and components to create and deploy applications to fleets of edge devices for local operation. IoT Greengrass Version 2 provides a new major version of the IoT Greengrass Core software, new APIs, and a new console. Use this API reference to learn how to use the IoT Greengrass V2 API operations to manage components, manage deployments, and core devices. For more information, see What is IoT Greengrass? in the IoT Greengrass V2 Developer Guide.
Package iotdeviceadvisor provides the API client, operations, and parameter types for AWS IoT Core Device Advisor. Amazon Web Services IoT Core Device Advisor is a cloud-based, fully managed test capability for validating IoT devices during device software development. Device Advisor provides pre-built tests that you can use to validate IoT devices for reliable and secure connectivity with Amazon Web Services IoT Core before deploying devices to production. By using Device Advisor, you can confirm that your devices can connect to Amazon Web Services IoT Core, follow security best practices and, if applicable, receive software updates from IoT Device Management. You can also download signed qualification reports to submit to the Amazon Web Services Partner Network to get your device qualified for the Amazon Web Services Partner Device Catalog without the need to send your device in and wait for it to be tested.