Tencent is pleased to support the open source community by making TKEStack available.

Copyright (C) 2012-2019 Tencent. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

https://opensource.org/licenses/Apache-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Demonstrates how to access different information.
Demonstrates how to append cols/rows/sheets.
Demonstrates how to copy information in a sheet.
Demonstrates how to delete information.
Demonstrates how to create/open/save XLSX files.
Demonstrates how to add style formatting.
Demonstrates how to get/set the value of a cell.
Demonstrates how to insert cols/rows.
Demonstrates how to iterate.
Demonstrates how to set options of rows/cols/sheets.
Demonstrates how to open a sheet in streaming mode.
Demonstrates how to update information.
Demonstrates how to walk cells using a callback.
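The example code itself is not reproduced here. As a stand-in, the following is a minimal sketch of the get/set and save operations using the github.com/xuri/excelize/v2 API — an assumption on my part, since the examples listed above may target a different xlsx library:

    package main

    import (
        "fmt"
        "log"

        "github.com/xuri/excelize/v2"
    )

    func main() {
        f := excelize.NewFile() // a new workbook with a default "Sheet1"
        defer f.Close()

        // Set a cell value, then read it back.
        if err := f.SetCellValue("Sheet1", "A1", 42); err != nil {
            log.Fatal(err)
        }
        v, err := f.GetCellValue("Sheet1", "A1")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(v) // "42"

        // Save the workbook to disk.
        if err := f.SaveAs("Book1.xlsx"); err != nil {
            log.Fatal(err)
        }
    }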
godocgo is a repo demonstrating effective Go documentation. I started to write this as a blog post, intending to link to a repo with examples, and then realized that there was no reason I couldn't write the whole post as godoc on the repo, so here it is. This repo serves as a working example of "hey, how do I get something like that to show up in my godoc?" It also has tips on common conventions and idioms used in documenting Go code. Please feel free to send me pull requests if you see errors or places where things could be improved.

To view this godoc on your own machine, just run (if you already have godoc, you can skip the first step) Then you can open a web browser and go to http://localhost:8080/pkg/github.com/natefinch/godocgo/

Alternatively, you can view this on the excellent godoc.org, by going to http://godoc.org/github.com/natefinch/godocgo (note that godoc.org has different styling than the godoc tool, but the content is the same).

One of my favorite things about the Go programming language is godoc, the auto-generated documentation for your code. If you're coming from Javadoc or similar tools, you're probably thinking one of two things (possibly both): An understandable reaction, but let me show you why godoc is different, and how to leverage all its features.

First off, godoc comments are designed from the start to be read in the code. We all know that the code will be read a lot more often than some HTML documentation out on the web, so godoc makes sure to keep its documentation legible for the coder in their text editor. There is not a single angle bracket to be found. Simple conventions replace the HTML/XML tags you often see in other documentation standards.

Paragraphs are delineated by blank lines. Headings like the above are simply a single line without punctuation, with paragraphs before and after it. If you're viewing this using godoc, it will be included in the table of contents at the top of the file. If you're viewing this on godoc.org, there is, sadly, no table of contents.

Below this is pre-formatted text. All that is required for it to be rendered that way is that it be indented from the surrounding comments (you can use tabs or spaces, whatever you like). This is handy for showing code snippets that you don't want godoc to auto-wrap, the way it does normal text.

HTML links will be auto-linked without you having to do anything, like so: http://golang.org

That's it, that's all the formatting you get. But it's enough, and I think we can all be glad to focus on the content and not the formatting of our documentation.

In Go, any comment which immediately precedes Go code is considered a comment about that code. Only comments on exported names will be included in the documentation. By corollary, any comment that is *not* on a line immediately before a line of code is not picked up as documentation. Note that this is important when writing things like build tags and copyright notices at the top of files. Always make sure you have a space between those comments and the package declaration.

Comments exist to tell developers what the code is for and how to use it. They should be concise and to the point. Comments should be full sentences that end with a period. This generally forces you to offer complete thoughts in your comments, rather than trying to make it into shorthand. It's also useful, because then no one has to wonder if you forgot to write the end of that sentence.

Comments in Go traditionally start with the name of the thing they're commenting about.
Thus, if you're commenting on a type Foo, the comment would start "Foo is ...". This makes it clear that the comment is about that name, and can help connect the two if they get separated. Oftentimes the first sentence of a comment is extracted as a short summary, so make it short and useful on its own.

Always comment every exported name in your program. You never know when a little extra information will help someone use your code. Also, it just makes your documentation look complete.

The text you are reading is documentation on the main package of a Go executable. It is simply a comment attached to (immediately preceding) the package <foo> declaration (in this case package main, since this is the application package). It is traditional to put long form documentation in a file called doc.go, without other code in that file, but that is not necessary. A comment on the package declaration in any file will work, and if you have multiple comments, they'll be concatenated together. The first sentence in this file (up to the first period) will be the label next to the link in godoc's index. This should be a short description of what the command does, and traditionally starts with "Command <name-of-command>".

Note that all the tips that apply to code in packages, as demonstrated in package sub below, also apply to commands, I just thought this page was long enough already, so I put everything else in the next package :) To read more of the article, and explore more of what godoc has to offer, click on the link to the package named "sub" below.
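To make these conventions concrete, here is a small, made-up example that combines them — a "Command ..." package comment with a heading line, an indented (pre-formatted) snippet, an auto-linked URL, and a name-first comment on an exported declaration:

    // Command mytool frobnicates its standard input.
    //
    // Usage
    //
    // The heading above is a single line without punctuation. The indented
    // line below renders as pre-formatted text:
    //
    //	mytool < input.txt
    //
    // Links are auto-linked, like so: http://golang.org
    package main

    import "fmt"

    // Frobnicate returns the frobnicated form of s.
    func Frobnicate(s string) string { return s + "!" }

    func main() { fmt.Println(Frobnicate("hello")) }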
Package matroska provides functionality for parsing and demuxing Matroska and WebM media files. Matroska is an open standard, free multimedia container format that can hold an unlimited number of video, audio, picture, or subtitle tracks in one file. It is intended to serve as a universal format for storing common multimedia content, like movies or TV shows. It is based on EBML (Extensible Binary Meta Language), a binary format similar to XML; this package implements the core EBML parsing functionality and builds the Matroska-specific parsing logic on top of it.

The package provides a high-level API for reading Matroska files, extracting track information, and reading media packets. One entry point is the Demuxer struct, which can be created using either NewDemuxer for seekable inputs or NewStreamingDemuxer for non-seekable streams. Another is the MatroskaParser type, created with the NewMatroskaParser function; once created, it can be used to read file information, track information, and media packets.

The package also defines all the data structures and constants used in the Matroska/EBML project. These types are used throughout the project to represent different aspects of Matroska files and serve as the central location for all data type definitions used by other files in the project. Finally, the package contains auxiliary utility functions and types that support the main parsing functionality.
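The example code referenced above is not included here. The following sketch shows roughly how the pieces might fit together; the constructor name comes from the text, but every signature and accessor below is an assumption, not the package's actual API:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "example.com/matroska" // import path assumed
    )

    func main() {
        f, err := os.Open("movie.mkv")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // NewDemuxer is named in the text; its signature here is assumed.
        demuxer, err := matroska.NewDemuxer(f)
        if err != nil {
            log.Fatal(err)
        }

        // Hypothetical track and packet accessors, for illustration only.
        for _, track := range demuxer.Tracks() {
            fmt.Println(track.Number, track.Type)
        }
        for {
            pkt, err := demuxer.ReadPacket()
            if err == io.EOF {
                break
            }
            if err != nil {
                log.Fatal(err)
            }
            fmt.Println(pkt.Track, len(pkt.Data))
        }
    }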
Package swagger (2.0) provides a powerful interface to your API. It contains an implementation of Swagger 2.0: it knows how to serialize, deserialize and validate swagger specifications.

Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability. We created Swagger to help fulfill the promise of APIs. Swagger helps companies like Apigee, Getty Images, Intuit, LivingSocial, McKesson, Microsoft, Morningstar, and PayPal build the best possible services with RESTful APIs. Now in version 2.0, Swagger is more enabling than ever. And it's 100% open source software.

More detailed documentation is available at https://goswagger.io.

Install:

The implementation also provides a number of command line tools to help with working with swagger. Currently there is a spec validator tool: To generate a server for a swagger spec document: To generate a client for a swagger spec document: To generate a swagger spec document for a go application: There are several other sub commands available for the generate command.

You're free to add files to the directories the generated code lands in, but the files generated by the generator itself will be regenerated on following generation runs, so any changes to those files will be lost. However, extra files you create won't be lost, so they are safe to use for customizing the application to your needs.
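The command invocations referred to above are not shown inline here. For orientation, go-swagger's documented CLI usage for those tasks looks like the following (treat exact flags as subject to change between releases):

    swagger validate ./swagger.json

    swagger generate server -f ./swagger.json

    swagger generate client -f ./swagger.json

    swagger generate spec -o ./swagger.json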
Package docx is one of the most functional libraries to read and write .docx (a.k.a. Microsoft Word documents or ECMA-376 Office Open XML) files in Go.
Package skipper provides an HTTP routing library with flexible configuration as well as runtime updates of the routing rules.

Skipper works as an HTTP reverse proxy that is responsible for mapping incoming requests to multiple HTTP backend services, based on routes that are selected by the request attributes. At the same time, both the requests and the responses can be augmented by a filter chain that is specifically defined for each route. Optionally, it can provide a circuit breaker mechanism individually for each backend host.

Skipper can load and update the route definitions from multiple data sources without being restarted. It provides a default executable command with a few built-in filters, however, its primary use case is to be extended with custom filters, predicates or data sources. For further information read 'Extending Skipper'. Skipper took the core design and inspiration from Vulcand: https://github.com/mailgun/vulcand.

Skipper is 'go get' compatible. If needed, create a 'go workspace' first: Get the Skipper packages: Create a file with a route: Optionally, verify the syntax of the file: Start Skipper and make an HTTP request:

The core of Skipper's request processing is implemented by a reverse proxy in the 'proxy' package. The proxy receives the incoming request and forwards it to the routing engine in order to receive the most specific matching route. When a route matches, the request is forwarded to all filters defined by it. The filters can modify the request or execute any kind of program logic. Once the request has been processed by all the filters, it is forwarded to the backend endpoint of the route. The response from the backend goes once again through all the filters in reverse order. Finally, it is mapped as the response of the original incoming request.

Besides the default proxying mechanism, it is possible to define routes without a real network backend endpoint. One of these cases is called a 'shunt' backend, in which case one of the filters needs to handle the request, providing its own response (e.g. the 'static' filter). Actually, filters themselves can instruct the request flow to shunt by calling the Serve(*http.Response) method of the filter context.

Another case of a route without a network backend is the 'loopback'. A loopback route can be used to match a request, modified by filters, against the lookup tree with different conditions and then execute a different route. One example scenario can be to use a single route as an entry point to execute some calculation to get an A/B testing decision and then matching the updated request metadata for the actual destination route. This way the calculation can be executed for only those requests that don't contain information about a previously calculated decision. For further details, see the 'proxy' and 'filters' package documentation.

Finding a request's route happens by matching the request attributes to the conditions in the route's definitions. Such definitions may have the following conditions:

- method
- path (optionally with wildcards)
- path regular expressions
- host regular expressions
- headers
- header regular expressions

It is also possible to create custom predicates with any other matching criteria. The relation between the conditions in a route definition is 'and', meaning that a request must fulfill each condition to match a route. For further details, see the 'routing' package documentation.

Filters are applied in order of definition to the request and in reverse order to the response.
They are used to modify request and response attributes, such as headers, or execute background tasks, like logging. Some filters may handle the requests without proxying them to service backends. Filters, depending on their implementation, may accept/require parameters that are set specifically for the route. For further details, see the 'filters' package documentation.

Each route has one of the following backends: HTTP endpoint, shunt or loopback.

Backend endpoints can be any HTTP service. They are specified by their network address, including the protocol scheme, the domain name or the IP address, and optionally the port number: e.g. "https://www.example.org:4242". (The path and query are sent from the original request, or set by filters.)

A shunt route means that Skipper handles the request alone and doesn't make requests to a backend service. In this case, it is the responsibility of one of the filters to generate the response.

A loopback route executes the routing mechanism on the current state of the request from the start, including the route lookup. This way it serves as a form of an internal redirect.

Route definitions consist of the following:

- request matching conditions (predicates)
- filter chain (optional)
- backend (either an HTTP endpoint or a shunt)

The eskip package implements the in-memory and text representations of route definitions, including a parser. (Note to contributors: in order to stay compatible with 'go get', the generated part of the parser is stored in the repository. When changing the grammar, 'go generate' needs to be executed explicitly to update the parser.) For further details, see the 'eskip' package documentation.

Skipper has filter implementations of basic auth and OAuth2. It can be integrated with tokeninfo based OAuth2 providers. For details, see: https://godoc.org/github.com/zalando/skipper/filters/auth.

Skipper's route definitions are loaded from one or more data sources. It can receive incremental updates from those data sources at runtime. It provides the following data clients:

- Kubernetes: Skipper can be used as part of a Kubernetes Ingress Controller implementation together with https://github.com/zalando-incubator/kube-ingress-aws-controller . In this scenario, Skipper uses the Kubernetes API's Ingress extensions as a source for routing. For a complete deployment example, see more details in: https://github.com/zalando-incubator/kubernetes-on-aws/ .

- Innkeeper: the Innkeeper service implements a storage for large sets of Skipper routes, with an HTTP+JSON API, OAuth2 authentication and role management. See the 'innkeeper' package and https://github.com/zalando/innkeeper.

- etcd: Skipper can load routes and receive updates from etcd clusters (https://github.com/coreos/etcd). See the 'etcd' package.

- static file: package eskipfile implements a simple data client, which can load route definitions from a static file in eskip format. Currently, it loads the routes on startup. It doesn't support runtime updates.

Skipper can use additional data sources, provided by extensions. Sources must implement the DataClient interface in the routing package.

Skipper provides circuit breakers, configured either globally, based on backend hosts or based on individual routes. It supports two types of circuit breaker behavior: open on N consecutive failures, or open on N failures out of M requests. For details, see: https://godoc.org/github.com/zalando/skipper/circuit.
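As an illustration of the route definition format described above (predicates, then a filter chain, then a backend) in eskip's text representation — the filter arguments and backend URL here are made up:

    hello: Path("/hello") && Method("GET")
      -> setResponseHeader("X-Hello", "world")
      -> "https://backend.example.org";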
Skipper can be started with the default executable command 'skipper', or as a library built into an application. The easiest way to start Skipper as a library is to execute the 'Run' function of the current, root package. Each option accepted by the 'Run' function is also wired into the default executable as a command line flag. E.g. EtcdUrls becomes -etcd-urls as a comma separated list. For command line help, enter:

An additional utility, eskip, can be used to verify, print, update and delete routes from/to files or etcd (Innkeeper on the roadmap). See the cmd/eskip command package, and/or enter in the command line:

Skipper doesn't use dynamically loaded plugins, however, it can be used as a library, and it can be extended with custom predicates, filters and/or custom data sources.

To create a custom predicate, one needs to implement the PredicateSpec interface in the routing package. Instances of the PredicateSpec are used internally by the routing package to create the actual Predicate objects as referenced in eskip routes, with concrete arguments. Example, randompredicate.go: In the above example, a custom predicate is created that can be referenced in eskip definitions with the name 'Random':

To create a custom filter we need to implement the Spec interface of the filters package. 'Spec' is the specification of a filter, and it is used to create concrete filter instances, while the raw route definitions are processed. Example, hellofilter.go: The above example creates a filter specification, and in the routes where they are included, the filter instances will set the 'X-Hello' header for each and every response. The name of the filter is 'hello', and in a route definition it is referenced as:

The easiest way to create a custom Skipper variant is to implement the required filters (as in the example above) by importing the Skipper package, and starting it with the 'Run' command. Example, hello.go: A file containing the routes, routes.eskip: Start the custom router:

The 'Run' function in the root Skipper package starts its own listener but it doesn't provide the best composability. The proxy package, however, provides a standard http.Handler, so it is possible to use it in a more complex solution as a building block for routing.

Skipper provides detailed logging of failures, and access logs in Apache log format. Skipper also collects detailed performance metrics, and exposes them on a separate listener endpoint for pulling snapshots. For details, see the 'logging' and 'metrics' packages documentation.

The router's performance depends on the environment and on the used filters. Under ideal circumstances, and without filters, the biggest time factor is the route lookup. Skipper is able to scale to thousands of routes with logarithmic performance degradation. However, this comes at the cost of increased memory consumption, due to storing the whole lookup tree in a single structure. Benchmarks for the tree lookup can be run by:

In case more aggressive scale is needed, it is possible to set up Skipper in a cascade model, with multiple Skipper instances for specific route segments.
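Returning to the custom filter extension point: since hellofilter.go itself is not reproduced here, the following is a hedged sketch of what a filters.Spec implementation along those lines might look like, based on the interfaces of the filters package (details may differ from the actual example):

    package hello

    import "github.com/zalando/skipper/filters"

    type helloSpec struct{}

    type helloFilter struct{ who string }

    // Name returns the filter name referenced in eskip route definitions.
    func (s *helloSpec) Name() string { return "hello" }

    // CreateFilter creates a concrete filter instance for a route, using
    // the arguments given in the route definition.
    func (s *helloSpec) CreateFilter(args []interface{}) (filters.Filter, error) {
        if len(args) != 1 {
            return nil, filters.ErrInvalidFilterParameters
        }
        who, ok := args[0].(string)
        if !ok {
            return nil, filters.ErrInvalidFilterParameters
        }
        return &helloFilter{who: who}, nil
    }

    // Request does nothing for this filter.
    func (f *helloFilter) Request(ctx filters.FilterContext) {}

    // Response sets the 'X-Hello' header on each response.
    func (f *helloFilter) Response(ctx filters.FilterContext) {
        ctx.Response().Header.Set("X-Hello", "Hello, "+f.who+"!")
    }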
Package vix allows you to automate virtual machine operations on most current VMware platform products, especially hosted VMware products such as VMware Workstation, Player, Fusion and Server.

The vSphere API, starting from 5.0, merged the VIX API into the GuestOperationsManager managed object, so we encourage you to use VMware's official Go package for vSphere.

This API supports:

In order for Go to find libvix when running your compiled binary, a govix path has to be added to the LD_LIBRARY_PATH environment variable. Example: Be aware that the previous example assumes $GOPATH only has a single path set.

In order to enable VIX debugging in VMware, you have to set the following setting: For logging, the following are the paths in each operating system:

The VIX library is intended for use by multi-threaded clients. VIX shared objects are managed by the VIX library to avoid conflicts between threads. Clients need only be responsible for protecting user-defined shared data.

As noted in the End User License Agreement, the VIX API allows you to build and distribute your own applications. To facilitate this, the following files are designated as redistributable for the purpose of that agreement: Redistribution of the open source libraries included with the VIX API is governed by their respective open source license agreements. http://blogs.vmware.com/vix/2010/05/redistibutable-vix-api-client-libraries.html
Amahi SPDY is a library built from scratch in the "Go way" for building SPDY clients and servers in the Go programming language. It supports a subset of SPDY 3.1 http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1

Check the source code, examples and overview at https://github.com/amahi/spdy

This library is used in a streaming server/proxy implementation for Amahi, the home and media server (https://www.amahi.org). The goals are reliability, streaming and performance/scalability.

1) Design for reliability means that network connections are assumed to disconnect at any time, especially when it's most inappropriate for the library to handle. This also includes potential issues with bugs within the library itself, so the library tries to handle all crazy errors in the most reasonable way. A client or a server built with this library should be able to run for months and months of reliable operation. It's not there yet, but it will be.

2) Streaming requests, unlike typical HTTP requests (which are short), require working with an arbitrarily large number of open requests (streams) simultaneously, and most of them are flow-constrained at the client endpoint. Streaming clients kind of misbehave too; for example, they open and close many streams rapidly with Range requests to check certain parts of a file. This is common with endpoint clients like VLC or QuickTime (Safari on iOS or Mac OS X). We wrote this library with the goal of making it not just suitable for HTTP serving, but also for streaming.

3) The library was built with performance and scalability in mind, so things have been done using as little blocking and copying of data as possible. It was meant to be implemented in the "Go way", using concurrency extensively and channel communication. The library uses mutexes very sparingly so that handling of errors at all manner of inappropriate times becomes easier. It goes to great lengths to not block, establishing timeouts when network and even channel communication may fail. The library should use very, very little CPU, even in the presence of many streams and sessions running simultaneously.
Package expect provides support for interpreting structured comments in Go source code (including go.mod and go.work files) as test expectations.

[Note: there is an open proposal (golang/go#70229) to deprecate, tag, and delete this package. If accepted, the last version of the package will be available indefinitely but will not receive updates.]

This is primarily intended for writing tests of things that process Go source files, although it does not directly depend on the testing package. Collect notes with the Extract or Parse functions, and use the MatchBefore function to find matches within the lines the comments were on.

The interpretation of the notes depends on the application. For example, the test suite for a static checking tool might use a @diag note to indicate an expected diagnostic: By contrast, the test suite for a source code navigation tool might use notes to indicate the positions of features of interest, the actions to be performed by the test, and their expected outcomes:

Note comments always start with the special marker @, which must be the very first character after the comment opening pair, so //@ or /*@ with no spaces. This is followed by a comma separated list of notes.

A note always starts with an identifier, which is optionally followed by an argument list. The argument list is surrounded with parentheses and contains a comma-separated list of arguments. The empty parameter list and the missing parameter list are distinguishable if needed; they result in an empty list or a nil Args parameter, respectively.

Arguments are either identifiers or literals. The literals supported are the basic value literals of string, float, and integer, plus true, false and nil. All the literals match the standard Go conventions, with all bases of integers, and both quote and backtick strings. There is one extra literal type: a string literal preceded by the identifier "re", which is compiled to a regular expression.
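For instance, a test input file might carry notes like the following, where the note names (diag, hover, mark) and their arguments are defined by the application, as described above:

    package fixture

    func f() {
        x := 1 //@ diag("declared and not used"), mark(firstUse)
        _ = x  //@ hover("x", re"var x")
    }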
Package nzgo is a pure Go language driver for the database/sql package to work with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example:

nzgo defines a simple logger interface. Set logLevel to control logging verbosity and logPath to specify the log file path. By default logging will be enabled with logLevel=Info and the current directory as logPath. You can configure logLevel and logPath (i.e. the log file directory) as per your requirement. There is one more logger configuration parameter, "additionalLogFile", which can be used to set an additional log file. additionalLogFile can be used to enable writing logs to stdout; this can be achieved by simply setting "additionalLogFile=stdout".

Valid values for 'logLevel' are: "OFF", "DEBUG", "INFO" and "FATAL". logLevel=OFF can be used to turn off logging. It will turn off both internal and additionalLogFile logs. These logger configuration parameters should be mentioned in the connection string.

securityLevel is the level of security (SSL/TLS) that the driver uses for the connection to the data store.

onlyUnSecured: The driver does not use SSL.
preferredUnSecured: If the server provides a choice, the driver does not use SSL.
preferredSecured: If the server provides a choice, the driver uses SSL.
onlySecured: The driver does not connect unless an SSL connection is available.

Similarly, the Netezza server has the above security levels. Cases which would fail:

Client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' mode.
Client tries to connect with 'Only secured' or 'Preferred secured' mode while the server is in 'Preferred Unsecured' mode.
Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' mode.
Client tries to connect with 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Preferred Secured' mode.

Below are the securityLevel values you can pass in the connection string:

Use Open to create a database handle with connection parameters: The Go Netezza Driver supports the following connection syntaxes (or data source name formats):

In this case, the application is running from the NPS server itself, so 'localhost' is used. The Golang driver should connect on port 5480 (the Postgres port). The user is admin, the password is password, the database is db1, sslmode is require, and the location of the root certificate file is C:/Users/root31.crt, with securityLevel 'Only Secured session'.

When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported:

Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values:

Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching with the same rules as Postgres. It is an error to provide any other value.

database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter marker '?', as shown below. The first parameter marker in the query is replaced by the first argument, the second parameter marker by the second argument, and so on.

nzgo supports the RowsAffected() method of the Result type in database/sql.
For additional instructions on querying see the documentation for the database/sql package. nzgo also supports transaction queries as specified in the database/sql package: https://github.com/golang/go/wiki/SQLInterface. Transactions are started by calling Begin.

This package returns the following types for values from the Netezza backend:

You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The below query would create a file 'et1.txt' on the remote system from the Netezza table t2, with data delimited by '|'. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
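A minimal sketch of typical usage through database/sql follows, assuming the driver registers under the name "nzgo" and the import path github.com/IBM/nzgo; the connection parameters are illustrative:

    package main

    import (
        "database/sql"
        "fmt"
        "log"

        _ "github.com/IBM/nzgo" // registers the "nzgo" driver (import path assumed)
    )

    func main() {
        connStr := "user=admin password=password dbname=db1 port=5480 sslmode=require securityLevel=preferredSecured logLevel=Info"
        db, err := sql.Open("nzgo", connStr)
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // nzgo uses '?' as its parameter marker; the first marker is
        // replaced by the first argument, and so on.
        rows, err := db.Query("SELECT name FROM employees WHERE dept = ?", "sales")
        if err != nil {
            log.Fatal(err)
        }
        defer rows.Close()

        for rows.Next() {
            var name string
            if err := rows.Scan(&name); err != nil {
                log.Fatal(err)
            }
            fmt.Println(name)
        }
        if err := rows.Err(); err != nil {
            log.Fatal(err)
        }
    }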
Taowm is The Acutely Opinionated Window Manager. It is a minimalist, keyboard driven, low distraction, tiling window manager for someone who uses a computer primarily to run just two GUI programs: a web browser and a terminal emulator.

To install taowm: This will install taowm in your $GOPATH, or under $GOROOT/bin if $GOPATH is empty. Run "go help gopath" to read more about $GOPATH.

Taowm is designed to run from an Xsession session. Add this line to the end of your ~/.xsession file: where the path is wherever "go get" or "go install" wrote to. Again, run "go help gopath" for more information. Log out and log back in with the "Xsession" option. Some systems, such as Ubuntu 12.04 "Precise", do not offer an Xsession option by default. To enable it, create a new file /usr/share/xsessions/custom.desktop that contains:

Taowm starts with each screen divided into two side-by-side frames, outlined in green. Frames can hold windows, but they can also be empty: closing a frame's window will not collapse that frame. The frame that contains the mouse pointer is the focused frame, and its border is brighter than the other frames'. Its window (if it contains one) will have the keyboard focus.

Taowm is primarily keyboard driven, and all keyboard shortcuts involve first holding down the Caps Lock key, similar to how holding down the Control key and hitting the 'N' key in your web browser creates a new browser window. The default Caps Lock behavior, CHANGING ALL TYPED LETTERS TO UPPER CASE, is disabled.

Caps Lock and the Space key will open a new web browser window. Caps Lock and the Enter key will open a new terminal emulator window. Caps Lock and the Shift key and the '|' pipe key will lock the screen. Caps Lock and the Backspace key will close the window in the focused frame. Caps Lock and the Tab key will cycle through the frames.

To quit taowm and return to the log in screen, hold down Caps Lock and the Shift key and hit the Escape key three times in quick succession. Normally, this will quit immediately. Some programs may ask for something before closing, such as a file name to write unsaved data to. In this case, taowm will quit in 60 seconds or whenever all such programs have closed, instead of quitting immediately, and the frame borders will turn red.

If there are more windows than frames, then Caps Lock and the 'D' or 'F' key will cycle through hidden windows. Caps Lock and a number key like '1', '2', etc. will move the 1st, 2nd, etc. window to the focused frame. Caps Lock and the 'A' key will show a list of windows: the one currently in the focused frame is marked with a '+', other windows in other frames are marked with a '-', hidden windows that have not been seen yet are marked with an '@', and hidden windows that have been seen before are unmarked.

In particular, newly created windows will not automatically be shown. Taowm prevents new windows from popping up and 'stealing' keyboard focus, a problem if the password you are typing into your terminal emulator accidentally gets written to a chat window that popped up at the wrong time. Instead, if there isn't an empty frame to accept a new window, taowm keeps that window hidden (and marked with an '@' in the window list) until you are ready to deal with it. If there are any such windows that have not been seen yet, the green frame borders will pulsate to remind you. Selected windows are also marked with a '#'; selection is described below.

Caps Lock and the 'G' key will toggle whether the focused frame occupies the entire screen.
Caps Lock and Shift and the 'G' key will hide the window in the focused frame. Caps Lock and the '-' key or the '=' key will split the current frame horizontally or vertically, respectively, and Caps Lock and Shift and the '+' key will merge a frame to undo a frame split.

A screen contains workspaces like a frame contains windows. Caps Lock and the 'T' key will create a new workspace, hiding the current one. Caps Lock and the 'E' or 'R' key will cycle through hidden workspaces. Caps Lock and Shift and the 'T' key will delete the current workspace, provided that it holds no windows and there is another hidden workspace to switch to. Caps Lock and the 'Q' key will show a list of workspaces (and their windows).

Caps Lock and the '`' key will cycle through the screens. Caps Lock and the F1 key, F2 key, etc. will move the 1st, 2nd, etc. workspace to the current screen.

Caps Lock and the 'S' key will select a window, or unselect a selected window. More than one window may be selected at a time. Caps Lock and Shift and the 'S' key will select or unselect all windows in the current workspace. Caps Lock and the 'W' key will migrate all selected windows to the current workspace and unselect them.

Taowm also provides alternative ways to navigate within a program's window. Caps Lock and the 'H', 'J', 'K' or 'L' keys are equivalent to pressing the Left, Down, Up or Right arrow keys. Similarly, Caps Lock and the 'Y', 'U', 'B' or 'N' keys are equivalent to Home, Page Up, End or Page Down. The 'I' or 'M' keys are equivalent to a mouse wheel scrolling up or down, and the ',' or '.' keys are equivalent to the Backspace or Delete keys.

Taowm provides similar shortcuts for other common actions. Caps Lock and the 'O' or 'P' keys will copy or paste, '/' or Shift-and-'?' will open or close a tab in the current window, 'C' or 'V' will cycle through tabs, 'Z' or 'X' will zoom in or out. By default, these keys will only work with the google-chrome web browser and the gnome-terminal terminal emulator. Making these work with other programs will require some customization.

Customizing the keyboard shortcuts, web browser, terminal emulator, colors, etc., is done by editing config.go and re-compiling (and re-installing): run "go install github.com/nigeltao/taowm". When working on taowm, it can be run in a nested X server such as Xephyr. From the github.com/nigeltao/taowm directory under $GOPATH:

The taowm mailing list is at http://groups.google.com/group/taowm

Taowm is copyright 2013 The Taowm Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
Package gofpdf implements a PDF document generator with high level support for text, drawing and images.

Features:

- UTF-8 support
- Choice of measurement unit, page format and margins
- Page header and footer management
- Automatic page breaks, line breaks, and text justification
- Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images
- Colors, gradients and alpha channel transparency
- Outline bookmarks
- Internal and external links
- TrueType, Type1 and encoding support
- Page compression
- Lines, Bézier curves, arcs, and ellipses
- Rotation, scaling, skewing, translation, and mirroring
- Clipping
- Document protection
- Layers
- Templates
- Barcodes
- Charting facility
- Import PDFs as templates

gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms.

gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs.

This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you.

To install the package on your system, run Later, to receive updates, run

The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples.

If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error().

This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ.

However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation.
Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP.

A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary().

Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples.

Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode.

In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example.

In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts.

The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode.

gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions.
Your change should:

- be compatible with the MIT License
- be properly documented
- be formatted with go fmt
- include an example in fpdf_test.go if appropriate
- conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings
- not diminish test coverage

Pull requests are the preferred means of accepting your changes.

gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below.

This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support of UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage.
Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations.

Roadmap:

- Remove all legacy code page font support; use UTF-8 exclusively
- Improve test coverage as reported by the coverage tool

Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call, where it can be handled by the application.
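The program being described is essentially the canonical gofpdf hello-world, which looks like this (assuming the github.com/jung-kurt/gofpdf import path; error handling is deferred to the output call as explained above):

    package main

    import (
        "log"

        "github.com/jung-kurt/gofpdf"
    )

    func main() {
        pdf := gofpdf.New("P", "mm", "A4", "") // portrait, millimeters, A4, no font directory
        pdf.AddPage()
        pdf.SetFont("Arial", "B", 16)
        pdf.Cell(40, 10, "Hello, world")
        // Any error raised by the calls above surfaces here.
        if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
            log.Fatal(err)
        }
    }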
Package oci provides functionality for storing and retrieving Open Component Model (OCM) components using the Open Container Initiative (OCI) registry format. It implements the OCM repository interface using OCI registries as the underlying storage mechanism.

Package Structure: the package is organized into several key components and subpackages.

Core Repository Implementation: the main repository.go file implements the core OCI repository functionality:

- Component version storage and retrieval
- Resource management
- OCI manifest handling
- Layer management

Every Repository is based on a Resolver, which in turn provides the Resolver.StoreForReference. This means that for every OCI reference, there is a Store implementation that backs it. The Store abstracts OCI operations from the Repository and provides methods for

- fetching OCI descriptors (and checking their existence),
- pushing OCI descriptors, and
- tagging OCI descriptors and resolving those tags.

As long as a store provides these abstractions (which are borrowed from ORAS), the repository will be able to interact with the underlying storage as if it were an OCI registry.

Subpackages:

- access/v1: Provides version 1 of the OCI image access specification
- digest/v1: Handles content addressing and digest operations
- tar/: Manages TAR archive operations for OCI layouts
- ctf/: Common Transport Format Store implementation that can be used to work with CTFs as if they were OCI registries
- integration/: Integration tests

Supporting Types and Utilities:

- LocalBlobMemory: Manages temporary storage of local blobs
- ComponentConfig: Stores component-specific configuration
- ArtifactAnnotation: Handles OCI artifact annotations

Resources are managed through multiple types:

- LocalBlob: For temporary storage of resources
- ResourceBlob: For resource-specific operations (a blob described by an OCM resource)
- DescriptorBlob: For OCI descriptor management (a blob described by an OCI descriptor)

Core Interfaces:

ComponentVersionRepository is the main interface for managing component versions and their resources:

- AddComponentVersion: Stores new component versions
- GetComponentVersion: Retrieves existing component versions
- AddLocalResource: Adds resources to components
- GetLocalResource: Retrieves resources from components

ResourceRepository handles resource operations independently of component versions:

- UploadResource: Uploads resources to the repository
- DownloadResource: Downloads resources from the repository

Resolver maps component references to OCI stores:

- StoreForReference: Resolves references to Store implementations
- ComponentVersionReference: Generates unique references for components

Store provides low-level OCI operations:

- Fetch/Push: Basic blob operations
- Tag/Resolve: Reference management

Resource Management: resources in OCM can be managed in two modes.

LocalResourceCreationModeLocalBlobWithNestedGlobalAccess:

- Creates a local blob access for resources
- Embeds global access information in the local blob
- Provides better isolation and control

LocalResourceCreationModeOCIImage:

- Creates an OCI image layer access for resources
- Used when the resource is embedded without a local blob
- More efficient for OCI-native resources

Usage Example:

Configuration and Options: the package supports flexible configuration through RepositoryOptions.

Media Types: the package defines media types for OCM components.

Annotations: the package uses specific annotations for OCI manifests.

Dependencies: the package relies on several external packages:
Package uplink is the main entrypoint to interacting with Storj Labs' decentralized storage network. Sign up for an account on a Satellite today! https://storj.io/

The fundamental unit of access in the Storj Labs storage network is the Access Grant. An access grant is a serialized structure that is internally comprised of an API Key, a set of encryption key information, and information about which Storj Labs or Tardigrade network Satellite is responsible for the metadata. An access grant is always associated with exactly one Project on one Satellite.

If you don't already have an access grant, you will need to make an account on a Satellite, generate an API Key, and encapsulate that API Key with encryption information into an access grant. If you don't already have an account on a Satellite, first make one at https://storj.io/ and note the Satellite you choose (such as us1.storj.io, eu1.storj.io, etc). Then, make an API Key in the web interface.

The first step to any project is to generate a restricted access grant with the minimal permissions that are needed. Access grants contain all encryption information and they should be restricted as much as possible. To make an access grant, you can create one using our Uplink CLI tool's 'share' subcommand (after setting up the Uplink CLI tool), or you can make one as follows: In the above example, 'serializedAccess' is a human-readable string that represents read-only access to just the "logs" bucket, and is only able to decrypt that one bucket thanks to hierarchical deterministic key derivation.

Note: RequestAccessWithPassphrase is CPU-intensive, and your application's normal lifecycle should avoid it and use ParseAccess where possible instead. To revoke an access grant see the Project.RevokeAccess method.

A common architecture for building applications is to have a single bucket for the entire application to store the objects of all users. In such an architecture, it is of utmost importance to guarantee that users can access only their objects but not the objects of other users. This can be achieved by implementing an app-specific authentication service that generates an access grant for each user by restricting the main access grant of the application. This user-specific access grant is restricted to access the objects only within a specific key prefix defined for the user.

When initialized, the authentication server creates the main application access grant with an empty passphrase as follows. The authentication service does not hold any encryption information about users, so the passphrase used to request the main application access grant does not matter. The encryption keys related to user objects will be overridden in a next step on the client-side. It is important that once set to a specific value, this passphrase never changes in the future. Therefore, the best practice is to use an empty passphrase.

Whenever a user is authenticated, the authentication service generates the user-specific access grant as follows: The userID is something that uniquely identifies the users in the application and must never change. Along with the user access grant, the authentication service should return a user-specific salt. The salt must always be the same for this user. The salt size is 16 or 32 bytes.
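A hedged sketch of the authentication-service side just described, using the storj.io/uplink API as I understand it (the bucket name and prefix scheme are illustrative):

    package auth

    import (
        "context"

        "storj.io/uplink"
    )

    // UserGrant derives a user-specific, serialized access grant from the
    // application's credentials, restricted to the user's key prefix.
    func UserGrant(ctx context.Context, satelliteAddress, apiKey, userID string) (string, error) {
        // The main application access grant uses an empty passphrase,
        // as recommended above.
        appAccess, err := uplink.RequestAccessWithPassphrase(ctx, satelliteAddress, apiKey, "")
        if err != nil {
            return "", err
        }

        // Restrict the grant to the user's prefix in the shared bucket
        // ("app-data" is made up for this sketch).
        userAccess, err := appAccess.Share(
            uplink.FullPermission(),
            uplink.SharePrefix{Bucket: "app-data", Prefix: userID + "/"},
        )
        if err != nil {
            return "", err
        }
        return userAccess.Serialize()
    }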
Once the application receives the user-specific access grant and the user-specific salt from the authentication service, it has to override the encryption key in the access grant, so users can encrypt and decrypt their files with encryption keys derived from their passphrase. The user-specific access grant is now ready to be used by the application.

Once you have a valid access grant, you can open a Project with the access that access grant allows for. Projects allow you to manage buckets and objects within buckets.

A bucket represents a collection of objects. You can upload, download, list, and delete objects of any size or shape. Objects within buckets are represented by keys, where keys can optionally be listed using the "/" delimiter. Note: Objects and object keys within buckets are end-to-end encrypted, but bucket names themselves are not encrypted, so the billing interface on the Satellite can show you bucket line items.

Objects support a couple kilobytes of arbitrary key/value metadata, and arbitrary-size primary data streams with the ability to read at arbitrary offsets. If you want to access only a small subrange of the data you uploaded, you can use `uplink.DownloadOptions` to specify the download range.

Listing objects returns an iterator that allows you to walk through all the items, as in the sketch below.
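Putting the client-side steps together — parsing the grant, overriding the encryption key with one derived from the user's passphrase and salt, opening the project, listing, and a ranged download — a hedged sketch (API names as found in storj.io/uplink; details may differ):

    package main

    import (
        "context"
        "fmt"

        "storj.io/uplink"
    )

    func readUserData(ctx context.Context, serializedAccess, passphrase string, salt []byte, userID string) error {
        access, err := uplink.ParseAccess(serializedAccess)
        if err != nil {
            return err
        }

        // Derive the user's encryption key from passphrase and salt, and
        // override the key in the grant, as described above. The bucket
        // name "app-data" is illustrative.
        key, err := uplink.DeriveEncryptionKey(passphrase, salt)
        if err != nil {
            return err
        }
        if err := access.OverrideEncryptionKey("app-data", userID+"/", key); err != nil {
            return err
        }

        project, err := uplink.OpenProject(ctx, access)
        if err != nil {
            return err
        }
        defer project.Close()

        // Walk the user's objects with the iterator.
        it := project.ListObjects(ctx, "app-data", &uplink.ListObjectsOptions{Prefix: userID + "/"})
        for it.Next() {
            fmt.Println(it.Item().Key)
        }
        if err := it.Err(); err != nil {
            return err
        }

        // Download only the first kilobyte of one object.
        download, err := project.DownloadObject(ctx, "app-data", userID+"/hello.txt",
            &uplink.DownloadOptions{Offset: 0, Length: 1024})
        if err != nil {
            return err
        }
        defer download.Close()
        return nil
    }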
Package gochrome aims to be a complete Chrome DevTools Protocol Viewer implementation. Versioned packages are available. Currently the only version is `tot`, or Tip-of-Tree. Stable versions will be made available in the future. This is beta software and hasn't been well exercised in real-world applications. See https://chromedevtools.github.io/devtools-protocol/

The Chrome DevTools Protocol allows for tools to instrument, inspect, debug and profile Chromium, Chrome and other Blink-based browsers. Many existing projects currently use the protocol. The Chrome DevTools uses this protocol and the team maintains its API.

Instrumentation is divided into a number of domains (DOM, Debugger, Network etc.). Each domain defines a number of commands it supports and events it generates. Both commands and events are serialized JSON objects of a fixed structure. You can either debug over the wire using the raw messages as they are described in the corresponding domain documentation, or use the extension JavaScript API.

The latest (tip-of-tree) protocol (tot) changes frequently and can break at any time. However, it captures the full capabilities of the Protocol, whereas the stable release is a subset. There is no backwards compatibility support guaranteed for the capabilities it introduces.

Resources

Basics: Using DevTools as a protocol client

The Developer Tools front-end can attach to a remotely running Chrome instance for debugging. For this scenario to work, you should start your host Chrome instance with the remote-debugging-port command line switch: Then you can start a separate client Chrome instance, using a distinct user profile: Now you can navigate to the given port from your client and attach to any of the discovered tabs for debugging: http://localhost:9222

You will find the Developer Tools interface identical to the embedded one, and here is why: In this scenario, you can substitute the Developer Tools front-end with your own implementation. Instead of navigating to the HTML page at http://localhost:9222, your application can discover available pages by requesting http://localhost:9222/json and getting a JSON object with information about inspectable pages along with the WebSocket addresses that you could use in order to start instrumenting them. Remote debugging is especially useful when debugging remote instances of the browser or attaching to the embedded devices. Blink port owners are responsible for exposing debugging connections to the external users.

This is especially handy to understand how the DevTools frontend makes use of the protocol. First, run Chrome with the debugging port open: Then, select the Chromium Projects item in the Inspectable Pages list. Now that DevTools is up and fullscreen, open DevTools to inspect it. Cmd-R in the new inspector to make the first restart. Now head to the Network Panel, filter by WebSocket, select the connection and click the Frames tab. Now you can easily see the frames of WebSocket activity as you use the first instance of the DevTools.

To allow Chrome extensions to interact with the protocol, we introduced the chrome.debugger extension API that exposes this JSON message transport interface. As a result, you can not only attach to the remotely running Chrome instance, but also instrument it from its own extension.

The Chrome Debugger Extension API provides a higher level API where the command domain, name and body are provided explicitly in the `sendCommand` call.
This API hides request ids and handles binding of the request with its response, hence allowing `sendCommand` to report the result in the callback function call. One can also use this API in combination with the other Extension APIs. If you are developing a Web-based IDE, you should implement an extension that exposes debugging capabilities to your page, and your IDE will be able to open pages with the target application, set breakpoints there, evaluate expressions in the console, live edit JavaScript and CSS, display the live DOM, network interaction and any other aspect that the Developer Tools are instrumenting today. Opening the embedded Developer Tools will terminate the remote connection and thus detach the extension. https://chromedevtools.github.io/devtools-protocol/#simultaneous

The canonical protocol definitions live in the Chromium source tree (browser_protocol.json and js_protocol.json). They are maintained manually by the DevTools engineering team. These files are mirrored (hourly) on GitHub in the devtools-protocol repo. The declarative protocol definitions are used across tools. Within Chromium, a binding layer is created for the Chrome DevTools to interact with, and separately the protocol is used for Chrome Headless's C++ interface.

What's the protocol_externs file? It's created via generate_protocol_externs.py and is useful for tools using the Closure compiler. The TypeScript story is here. Not yet. See bugger-daemon's third-party docs. See also the endpoints implementation in Chromium.

/json/protocol was added in Chrome 60. The endpoint is exposed as webSocketDebuggerUrl in /json/version. Note the browser in the URL, rather than page. If Chrome was launched with --remote-debugging-port=0 and chose an open port, the browser endpoint is written to both stderr and the DevToolsActivePort file in the browser profile folder.

Yes, as of Chrome 63! See Multi-client remote debugging support. Upon disconnection, the outgoing client will receive a detached event. For example: View the enum of possible reasons. (For reference: the original patch.) After disconnection, some apps have chosen to pause their state and offer a reconnect button.
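As a rough illustration of the page-discovery step described above, here is a small sketch (independent of the gochrome package) that queries the /json endpoint of a Chrome instance started with --remote-debugging-port=9222; the struct fields mirror the documented JSON response:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"net/http"
	)

	// target mirrors the fields of interest in the JSON returned by /json.
	type target struct {
		Title                string `json:"title"`
		URL                  string `json:"url"`
		WebSocketDebuggerURL string `json:"webSocketDebuggerUrl"`
	}

	func main() {
		resp, err := http.Get("http://localhost:9222/json")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		var targets []target
		if err := json.NewDecoder(resp.Body).Decode(&targets); err != nil {
			log.Fatal(err)
		}
		// Each entry carries the WebSocket address used to instrument it.
		for _, t := range targets {
			fmt.Println(t.Title, t.URL, t.WebSocketDebuggerURL)
		}
	}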
Package ctf provides various interfaces and types for working with Common Transport Format (CTF) archives. The Common Transport Format describes a file system structure that can be used for the representation of content (https://github.com/opencontainers/image-spec) of an OCI repository. It can therefore be used to describe a subset of repositories of an OCI registry, with a subset of artifacts, that can then be imported again into any OCI registry.

artifact-index.json: This JSON file describes the contained artifact (versions).

blobs: The blobs directory contains the blobs described by the artifact index as a flat file list. These are layer blobs or artifact blobs for the artifact descriptors. Every file has a filename according to its digest (https://github.com/opencontainers/image-spec/blob/main/descriptor.md#digests), with the algorithm separator character replaced by a dot ("."). Every file SHOULD be referenced, directly or indirectly, in the artifact descriptor by a descriptor according to the OCI Image Specification (https://github.com/opencontainers/image-spec/blob/main/descriptor.md). The artifact index describes the OCI manifests (image manifests and index manifests), which refer to further non-manifest blobs. Files not referenced by the artifacts described by the index SHOULD be ignored.

The FileFormat of a CTF can differ: a directory of an operating system file system or a virtual file system (FormatDirectory), or the content of a TAR archive (uncompressed, FormatTAR, or gzipped, FormatTGZ). The descriptor SHOULD be the first file if stored in an archive.

This package also offers a legacy compatibility layer for the ArtifactSet, an artifact format that was used in the past by the OCM CLI to package local blobs and is no longer recommended. We now recommend packaging in the format of OCI Image Layouts instead, as they are almost identical. See ArtifactSet for more details.
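To make the blob-naming rule concrete, here is a hypothetical helper (not part of the package API) that maps a digest such as "sha256:3b1f..." to the blob filename "sha256.3b1f...":

	package example

	import "strings"

	// blobFileName converts an OCI digest into the flat blob filename used
	// inside the blobs directory: the ":" separating the algorithm from the
	// hex digest is replaced by a ".".
	func blobFileName(digest string) string {
		return strings.Replace(digest, ":", ".", 1)
	}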
Package file is a Go library to open files with file locking, depending on the system. Currently, file locking on the following systems is supported.
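As a sketch of the kind of advisory locking such a library typically wraps on Unix-like systems (this uses the raw flock(2) syscall directly, not this package's API):

	package main

	import (
		"log"
		"os"
		"syscall"
	)

	func main() {
		f, err := os.Open("data.txt")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Take an exclusive advisory lock; this blocks until the lock is
		// available and excludes other cooperating processes.
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
			log.Fatal(err)
		}
		defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)

		// ... read or write while holding the lock ...
	}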
Package gatt provides a Bluetooth Low Energy GATT implementation. GATT (Generic Attribute Profile) is the protocol used to write BLE peripherals (servers) and centrals (clients). This package is a work in progress. The API will change.

As a peripheral, you can create services, characteristics, and descriptors, advertise, accept connections, and handle requests. As a central, you can scan, connect, discover services, and make requests. gatt supports both Linux and OS X.

On Linux: To gain complete and exclusive control of the HCI device, gatt uses HCI_CHANNEL_USER (introduced in Linux v3.14) instead of HCI_CHANNEL_RAW. Those who must use an older kernel may patch in these relevant commits from Marcel Holtmann: Note that because gatt uses HCI_CHANNEL_USER, once gatt has opened the device no other program may access it. Before starting a gatt program, make sure that your BLE device is down: If you have BlueZ 5.14+ (or aren't sure), stop the built-in bluetooth server, which interferes with gatt, e.g.: Because gatt programs administer network devices, they must either be run as root, or be granted appropriate capabilities:

USAGE

See server.go, discoverer.go, and explorer.go in the examples/ directory for writing server or client programs that run on Linux and OS X. Users seeking finer-grained control over the devices, especially on Linux platforms, can see examples/server_lnx.go for the usage of Options, which are platform-specific. See the rest of the docs for other options and finer-grained control.

Note that some BLE central devices, particularly iOS, may aggressively cache results from previous connections. If you change your services or characteristics, you may need to reboot the other device to pick up the changes. This is a common source of confusion and apparent bugs. For an OS X central, see http://stackoverflow.com/questions/20553957.

gatt started life as a port of bleno, to which it is indebted: https://github.com/sandeepmistry/bleno. If you are having problems with gatt, particularly around installation, issues filed with bleno might also be helpful references.

To try out your GATT server, it is useful to experiment with a generic BLE client. LightBlue is a good choice. It is available free for both iOS and OS X.
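For flavor, a rough sketch of a central that scans for nearby peripherals, modeled on the programs in the examples/ directory; the import path and exact callback signatures follow the paypal/gatt lineage and are assumptions that may not match every fork:

	package main

	import (
		"fmt"
		"log"

		"github.com/paypal/gatt" // assumed import path
	)

	func onStateChanged(d gatt.Device, s gatt.State) {
		if s == gatt.StatePoweredOn {
			d.Scan(nil, false) // scan for all peripherals, ignoring duplicates
			return
		}
		d.StopScanning()
	}

	func onDiscovered(p gatt.Peripheral, a *gatt.Advertisement, rssi int) {
		fmt.Println(p.ID(), p.Name(), rssi)
	}

	func main() {
		d, err := gatt.NewDevice()
		if err != nil {
			log.Fatal(err)
		}
		d.Handle(gatt.PeripheralDiscovered(onDiscovered))
		if err := d.Init(onStateChanged); err != nil {
			log.Fatal(err)
		}
		select {} // block while callbacks run
	}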
Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with the License. A copy of the License is located at or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Package firecracker provides a library to interact with the Firecracker API. Firecracker is an open-source virtualization technology that is purpose-built for creating and managing secure, multi-tenant containers and functions-based services. See https://firecracker-microvm.github.io/ for more details. This library requires Go 1.23 or later and can be used with Go modules.

BUG(aws): There are some Firecracker features that are not yet supported by the SDK. These are tracked as GitHub issues with the firecracker-feature label: https://github.com/firecracker-microvm/firecracker-go-sdk/issues?q=is%3Aissue+is%3Aopen+label%3Afirecracker-feature

This library is licensed under the Apache 2.0 License.
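As a rough sketch of what using the SDK looks like, based on the project's README examples; the paths are placeholders, and field names may differ between SDK versions:

	package main

	import (
		"context"
		"log"

		firecracker "github.com/firecracker-microvm/firecracker-go-sdk"
		models "github.com/firecracker-microvm/firecracker-go-sdk/client/models"
	)

	func main() {
		ctx := context.Background()
		cfg := firecracker.Config{
			SocketPath:      "/tmp/firecracker.sock",
			KernelImagePath: "vmlinux",     // placeholder kernel image
			KernelArgs:      "console=ttyS0 reboot=k panic=1 pci=off",
			// A single root drive built from a placeholder rootfs image.
			Drives: firecracker.NewDrivesBuilder("rootfs.ext4").Build(),
			MachineCfg: models.MachineConfiguration{
				VcpuCount:  firecracker.Int64(1),
				MemSizeMib: firecracker.Int64(256),
			},
		}
		m, err := firecracker.NewMachine(ctx, cfg)
		if err != nil {
			log.Fatal(err)
		}
		if err := m.Start(ctx); err != nil {
			log.Fatal(err)
		}
		// Block until the microVM exits.
		if err := m.Wait(ctx); err != nil {
			log.Fatal(err)
		}
	}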
Package cloud is the root of the packages used to access Google Cloud Services. See https://pkg.go.dev/cloud.google.com/go#section-directories for a full list of sub-modules.

All clients in sub-packages are configurable via client options. These options are described here: https://pkg.go.dev/google.golang.org/api/option.

Endpoint configuration is used to specify the URL to which requests are sent. It is used for services that support or require regional endpoints, as well as for other use cases such as testing against fake servers. For example, the Vertex AI service recommends that you configure the endpoint to the location with the features you want that is closest to your physical location or the location of your users. There is no global endpoint for Vertex AI. See Vertex AI - Locations for more details. The following example demonstrates configuring a Vertex AI client with a regional endpoint:

All of the clients support authentication via Google Application Default Credentials, or by providing a JSON key file for a Service Account. See examples below. Google Application Default Credentials (ADC) is the recommended way to authorize and authenticate clients. For information on how to create and obtain Application Default Credentials, see https://cloud.google.com/docs/authentication/production. If you have your environment configured correctly, you will not need to pass any extra information to the client libraries. Here is an example of a client using ADC to authenticate:

You can use a file with credentials to authenticate and authorize, such as a JSON key file associated with a Google service account. Service Account keys can be created and downloaded from https://console.cloud.google.com/iam-admin/serviceaccounts. This example uses the Secret Manager client, but the same steps apply to all other client libraries in this package as well. Example:

In some cases (for instance, you don't want to store secrets on disk), you can create credentials from in-memory JSON and use the WithCredentials option. This example uses the Secret Manager client, but the same steps apply to all other client libraries as well. Note that scopes can be found at https://developers.google.com/identity/protocols/oauth2/scopes, and are also provided in all auto-generated libraries: for example, cloud.google.com/go/secretmanager/apiv1 provides DefaultAuthScopes. Example:

By default, non-streaming methods, like Create or Get, will have a default deadline applied to the context provided at call time, unless a context deadline is already set. Streaming methods have no default deadline and will run indefinitely. To set timeouts or arrange for cancellation, use context. Transient errors will be retried when correctness allows. Here is an example of setting a timeout for an RPC using context.WithTimeout: Here is an example of setting a timeout for an RPC using github.com/googleapis/gax-go/v2.WithTimeout: To arrange for an RPC to be canceled, use context.WithCancel:

Do not attempt to control the initial connection (dialing) of a service by setting a timeout on the context passed to NewClient. Dialing is non-blocking, so timeouts would be ineffective and would only interfere with credential refreshing, which uses the same context.

Regardless of which transport is used, request headers can be set in the same way using callctx.SetHeaders. Here is a generic example:
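This is a minimal sketch; the header name is illustrative:

	package example

	import (
		"context"

		"github.com/googleapis/gax-go/v2/callctx"
	)

	// withHeader attaches a request header to the context; clients send
	// headers set this way regardless of the underlying transport.
	func withHeader(ctx context.Context) context.Context {
		return callctx.SetHeaders(ctx, "x-custom-header", "some-value")
	}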
There are some header keys that Google reserves for internal use and that must not be overwritten. The following header keys are broadly considered reserved and should not be conveyed by client library users unless instructed to do so:

* `x-goog-api-client`
* `x-goog-request-params`

Be sure to check the individual package documentation for other service-specific reserved headers. For example, Storage supports a specific auditing header that is mentioned in that module's documentation.

Google Cloud services respect system parameters that can be used to augment request and/or response behavior. For the most part, they are not needed when using one of the enclosed client libraries. However, those that may be necessary are made available via the callctx package. If not present there, consider opening an issue on that repo to request a new constant.

Connection pooling differs in clients based on their transport. Cloud clients rely on either HTTP or gRPC transports to communicate with Google Cloud. Cloud clients that use HTTP rely on the underlying HTTP transport to cache connections for later re-use. These are cached to the http.MaxIdleConns and http.MaxIdleConnsPerHost settings in http.DefaultTransport by default. For gRPC clients, connection pooling is configurable. Users of Cloud Client Libraries may specify google.golang.org/api/option.WithGRPCConnectionPool as a client option to NewClient calls. This configures the underlying gRPC connections to be pooled and accessed in a round-robin fashion.

Minimal container images like Alpine lack CA certificates. This causes RPCs to appear to hang, because gRPC retries indefinitely. See https://github.com/googleapis/google-cloud-go/issues/928 for more information.

For tips on debugging code that calls into our libraries, check out our Debugging Guide. For tips on how to write tests against code that calls into our libraries, check out our Testing Guide.

Most of the errors returned by the generated clients are wrapped in a github.com/googleapis/gax-go/v2/apierror.APIError and can be further unwrapped into a google.golang.org/grpc/status.Status or google.golang.org/api/googleapi.Error, depending on the transport used to make the call (gRPC or REST). Converting your errors to these types can be a useful way to get more information about what went wrong while debugging. APIError gives access to specific details in the error. The transport-specific errors can still be unwrapped using the APIError.

Semver is used to communicate the stability of the sub-modules of this package. Note that some stable sub-modules do contain packages, and sometimes features, that are considered unstable. If something is unstable, it will be explicitly labeled as such. Example of such a label in an unstable package:

Clients that contain alpha and beta in their import path may change or go away without notice. Clients marked stable will maintain compatibility with future versions for as long as we can reasonably sustain. Incompatible changes might be made in some situations, including:
Package gofpdf implements a PDF document generator with high-level support for text, drawing and images. Features include:

- UTF-8 support
- Choice of measurement unit, page format and margins
- Page header and footer management
- Automatic page breaks, line breaks, and text justification
- Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images
- Colors, gradients and alpha channel transparency
- Outline bookmarks
- Internal and external links
- TrueType, Type1 and encoding support
- Page compression
- Lines, Bézier curves, arcs, and ellipses
- Rotation, scaling, skewing, translation, and mirroring
- Clipping
- Document protection
- Layers
- Templates
- Barcodes
- Charting facility
- Import PDFs as templates

gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general-purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs.

This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you.

To install the package on your system, use go get; later, to receive updates, run it again with the -u flag. The following Go code generates a simple PDF file (a minimal version is sketched at the end of these package notes). See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples.

If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation, since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error().

This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation.
Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP.

A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary().

Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files, they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples.

Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts.

The draw2d package is a two-dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode.

gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions.
Your change should:

- be compatible with the MIT License
- be properly documented
- be formatted with go fmt
- include an example in fpdf_test.go if appropriate
- conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings
- not diminish test coverage

Pull requests are the preferred means of accepting your changes.

gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, and enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage.
Benoit KUGLER contributed support for rectangles with corners of unequal radius, for modification times, and for file attachments and annotations.

Roadmap:

- Remove all legacy code page font support; use UTF-8 exclusively
- Improve test coverage as reported by the coverage tool

Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call, where it can be handled by the application.
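The simple document mentioned earlier in these notes looks roughly like this; it is a minimal sketch assuming the github.com/jung-kurt/gofpdf import path, and it shows how an error from any earlier step surfaces only at output time:

	package main

	import (
		"log"

		"github.com/jung-kurt/gofpdf"
	)

	func main() {
		// Portrait orientation, millimeter units, A4 page size, and an
		// empty font directory (core fonts only).
		pdf := gofpdf.New("P", "mm", "A4", "")
		pdf.AddPage()
		pdf.SetFont("Arial", "B", 16)
		pdf.Cell(40, 10, "Hello, world")
		// Any error set by the calls above is reported here.
		if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
			log.Fatal(err)
		}
	}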
Package swagger (2.0) provides a powerful interface to your API. It contains an implementation of Swagger 2.0: it knows how to serialize, deserialize and validate swagger specifications.

Swagger is a simple yet powerful representation of your RESTful API. With the largest ecosystem of API tooling on the planet, thousands of developers are supporting Swagger in almost every modern programming language and deployment environment. With a Swagger-enabled API, you get interactive documentation, client SDK generation and discoverability. We created Swagger to help fulfill the promise of APIs. Swagger helps companies like Apigee, Getty Images, Intuit, LivingSocial, McKesson, Microsoft, Morningstar, and PayPal build the best possible services with RESTful APIs. Now in version 2.0, Swagger is more enabling than ever. And it's 100% open source software. More detailed documentation is available at https://goswagger.io. Install:

The implementation also provides a number of command line tools to help with working with swagger. Currently there is a spec validator tool: To generate a server for a swagger spec document: To generate a client for a swagger spec document: To generate a swagger spec document for a go application: There are several other sub-commands available for the generate command.

You're free to add files to the directories the generated code lands in, but the files generated by the generator itself will be regenerated on following generation runs, so any changes to those files will be lost. However, extra files you create won't be lost, so they are safe to use for customizing the application to your needs.
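Beyond the CLI, the underlying go-openapi libraries can validate a spec programmatically. A minimal sketch, assuming a swagger.json file in the working directory:

	package main

	import (
		"log"

		"github.com/go-openapi/loads"
		"github.com/go-openapi/strfmt"
		"github.com/go-openapi/validate"
	)

	func main() {
		// Load the spec document, then validate it against the Swagger 2.0
		// schema and semantic rules.
		doc, err := loads.Spec("swagger.json")
		if err != nil {
			log.Fatal(err)
		}
		if err := validate.Spec(doc, strfmt.Default); err != nil {
			log.Fatal(err)
		}
		log.Println("spec is valid")
	}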
Package journal implements WAL-like append-only journals. A journal is split into segments; the last segment is the one being written to.

Intended use cases:

Features:

- Suitable for a large number of very short records; per-record overhead can be as low as 2 bytes.
- Suitable for very large records, too. (In the future, it will be possible to write records in chunks.)
- Fault-resistant.
- Self-healing: verifies the checksums and truncates corrupted data when opening the journal.
- Performant.
- Automatically rotates the files when they reach a certain size.

TODO:

- Trigger rotation based on time (say, each day gets a new segment); basically, limit how old in-progress segments can be.
- Allow rotating a file without writing a new record. (Otherwise rarely-used journals will never get archived.)
- Give the work-in-progress file a prefixed name (W*).
- Auto-commit every N seconds, after K bytes, or after M records.
- Option for millisecond timestamp precision?
- Reading API. (Search based on time and record ordinals.)

Segment files: We always set bit 0 of commit checksums, and we use size*2 when encoding records; so bit 0 of the first byte of an item indicates whether it's a record or a commit.

Timestamps are 32-bit Unix times and have 1-second precision. (The rationale is that the primary use of timestamps is to search logs by time, and that does not require higher precision. For high-frequency logs, with 1-second precision, timestamp deltas will typically fit within 1 byte.)
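To make the framing rule concrete, here is a hypothetical sketch; these helpers are illustrative, not part of the package's API:

	package example

	// isCommit reports whether an item's first decoded value denotes a
	// commit: commit checksums always have bit 0 set, while record sizes
	// are stored as size*2 and so always have bit 0 clear.
	func isCommit(first uint64) bool { return first&1 == 1 }

	// encodeRecordSize and decodeRecordSize show the size*2 convention
	// used when framing records.
	func encodeRecordSize(size uint64) uint64 { return size * 2 }
	func decodeRecordSize(v uint64) uint64    { return v / 2 }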