Package vix allows you to automate virtual machine operations on most current VMware platform products, especially hosted VMware products such as VMware Workstation, Player, Fusion and Server. The vSphere API, starting from 5.0, merged the VIX API into the GuestOperationsManager managed object, so we encourage you to use VMware's official Go package for vSphere. This API supports: In order for Go to find libvix when running your compiled binary, a govix path has to be added to the LD_LIBRARY_PATH environment variable. Example: Be aware that the previous example assumes $GOPATH has only one path set. In order to enable VIX debugging in VMware, you have to set the following setting: For logging, the following are the paths in each operating system: The Vix library is intended for use by multi-threaded clients. Vix shared objects are managed by the Vix library to avoid conflicts between threads. Clients need only be responsible for protecting user-defined shared data. As noted in the End User License Agreement, the VIX API allows you to build and distribute your own applications. To facilitate this, the following files are designated as redistributable for the purpose of that agreement: Redistribution of the open source libraries included with the VIX API is governed by their respective open source license agreements. http://blogs.vmware.com/vix/2010/05/redistibutable-vix-api-client-libraries.html
Amahi SPDY is a library built from scratch in the "Go way" for building SPDY clients and servers in the Go programming language. It supports a subset of SPDY 3.1 http://www.chromium.org/spdy/spdy-protocol/spdy-protocol-draft3-1 Check the source code, examples and overview at https://github.com/amahi/spdy This library is used in a streaming server/proxy implementation for Amahi, the [home and media server](https://www.amahi.org). The goals are reliability, streaming and performance/scalability. 1) Design for reliability means that network connections are assumed to disconnect at any time, especially when it's most inappropriate for the library to handle. This also includes potential issues with bugs within the library, so the library tries to handle all crazy errors in the most reasonable way. A client or a server built with this library should be able to run for months and months of reliable operation. It's not there yet, but it will be. 2) Streaming requests, unlike typical HTTP requests (which are short), require working with an arbitrarily large number of open requests (streams) simultaneously, and most of them are flow-constrained at the client endpoint. Streaming clients tend to misbehave too; for example, they open and close many streams rapidly with Range requests to check certain parts of the file. This is common with endpoint clients like VLC or QuickTime (Safari on iOS or Mac OS X). We wrote this library with the goal of making it not just suitable for HTTP serving, but also for streaming. 3) The library was built with performance and scalability in mind, so things have been done using as little blocking and copying of data as possible. It was meant to be implemented in the "Go way", using concurrency extensively and channel communication. The library uses mutexes very sparingly so that handling of errors at all manner of inappropriate times becomes easier. It goes to great lengths to not block, establishing timeouts when network and even channel communication may fail. The library should use very little CPU, even in the presence of many streams and sessions running simultaneously.
Package nzgo is a pure Go language driver for the database/sql package to work with IBM PDA (aka Netezza). In most cases clients will use the database/sql package instead of using this package directly. For example: nzgo defines a simple logger interface. Set logLevel to control logging verbosity and logPath to specify the log file path. By default logging will be enabled with logLevel=Info and the current directory as logPath. You can configure logLevel and logPath (i.e. the log file directory) as per your requirement. There is one more logger configuration parameter, "additionalLogFile". This parameter can be used to set an additional log file. additionalLogFile can be used to enable writing logs to stdout; this can be achieved by simply setting "additionalLogFile=stdout". Valid values for 'logLevel' are: "OFF", "DEBUG", "INFO" and "FATAL". logLevel=OFF can be used to turn off logging. It will turn off both internal and additionalLogFile logs. These logger configuration parameters should be mentioned in the connection string. The level of security (SSL/TLS) that the driver uses for the connection to the data store: onlyUnSecured: the driver does not use SSL. preferredUnSecured: if the server provides a choice, the driver does not use SSL. preferredSecured: if the server provides a choice, the driver uses SSL. onlySecured: the driver does not connect unless an SSL connection is available. Similarly, the Netezza server has the above securityLevel settings. Cases which would fail: the client tries to connect in 'Only secured' or 'Preferred secured' mode while the server is in 'Only Unsecured' mode; the client tries to connect in 'Only secured' or 'Preferred secured' mode while the server is in 'Preferred Unsecured' mode; the client tries to connect in 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Only Secured' mode; the client tries to connect in 'Only Unsecured' or 'Preferred Unsecured' mode while the server is in 'Preferred Secured' mode. Below are the securityLevel values you can pass in the connection string: Use Open to create a database handle with connection parameters: The Go Netezza Driver supports the following connection syntaxes (or data source name formats): In this case, the application is running from the NPS server itself, so 'localhost' is used. The Golang driver should connect on port 5480 (the postgres port). The user is admin, the password is password, the database is db1, sslmode is require, and the location of the root certificate file is C:/Users/root31.crt, with securityLevel as 'Only Secured session'. When establishing a connection using nzgo you are expected to supply a connection string containing zero or more parameters. Below is a subset of the connection parameters supported by nzgo. The following special connection parameters are supported: Valid values for sslmode are: Use single quotes for values that contain whitespace: A backslash will escape the next character in values: Note that the connection parameter client_encoding (which sets the text encoding for the connection) may be set but must be "UTF8", matching the same rules as Postgres. It is an error to provide any other value. database/sql does not dictate any specific format for parameter markers in query strings, but nzgo uses the Netezza-specific parameter markers, i.e. '?', as shown below. The first parameter marker in the query will be replaced by the first argument, the second parameter marker by the second argument, and so on. nzgo supports the RowsAffected() method of the Result type in database/sql.
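A minimal usage sketch follows. The import path, the driver registration name "nzgo", and the employee table are assumptions for illustration; the connection string mirrors the parameters described above.

	package main

	import (
		"database/sql"
		"fmt"
		"log"

		_ "github.com/IBM/nzgo" // import path assumed; registers the "nzgo" driver
	)

	func main() {
		// Connection parameters as described above (host, port 5480, user,
		// password, database and sslmode).
		dsn := "host=localhost port=5480 user=admin password=password dbname=db1 sslmode=require"
		db, err := sql.Open("nzgo", dsn)
		if err != nil {
			log.Fatal(err)
		}
		defer db.Close()

		// Netezza-specific '?' parameter markers: the first marker is bound
		// to the first argument, the second to the second, and so on.
		// The table and columns are hypothetical.
		rows, err := db.Query("SELECT name, age FROM employee WHERE age > ? AND dept = ?", 30, "sales")
		if err != nil {
			log.Fatal(err)
		}
		defer rows.Close()
		for rows.Next() {
			var name string
			var age int
			if err := rows.Scan(&name, &age); err != nil {
				log.Fatal(err)
			}
			fmt.Println(name, age)
		}
		if err := rows.Err(); err != nil {
			log.Fatal(err)
		}
	}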
For additional instructions on querying see the documentation for the database/sql package. nzgo also supports transaction queries as specified in the database/sql package: https://github.com/golang/go/wiki/SQLInterface. Transactions are started by calling Begin. This package returns the following types for values from the Netezza backend: You can unload data from an IBM Netezza database table on a Netezza host system to a remote client. This unload does not remove rows from the database but instead stores the unloaded data in a flat file (external table) that is suitable for loading back into a Netezza database. The below query would create a file 'et1.txt' on the remote system from Netezza table t2 with data delimited by '|'. See https://www.ibm.com/support/knowledgecenter/en/SSULQD_7.2.1/com.ibm.nz.load.doc/t_load_unloading_data_remote_client_sys.html for more information about external tables.
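A short transaction sketch using the standard database/sql Begin/Commit API mentioned above; it assumes a *sql.DB opened as in the earlier example, and the two-column layout of table t2 is illustrative.

	package nzexample

	import "database/sql"

	// insertRow wraps a single INSERT in a transaction.
	func insertRow(db *sql.DB, id int, label string) error {
		tx, err := db.Begin()
		if err != nil {
			return err
		}
		// Rollback after a successful Commit just returns sql.ErrTxDone,
		// so deferring it here is safe.
		defer tx.Rollback()

		if _, err := tx.Exec("INSERT INTO t2 VALUES (?, ?)", id, label); err != nil {
			return err
		}
		return tx.Commit()
	}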
Taowm is The Acutely Opinionated Window Manager. It is a minimalist, keyboard driven, low distraction, tiling window manager for someone who uses a computer primarily to run just two GUI programs: a web browser and a terminal emulator. To install taowm: This will install taowm in your $GOPATH, or under $GOROOT/bin if $GOPATH is empty. Run "go help gopath" to read more about $GOPATH. Taowm is designed to run from an Xsession session. Add this line to the end of your ~/.xsession file: where the path is wherever "go get" or "go install" wrote to. Again, run "go help gopath" for more information. Log out and log back in with the "Xsession" option. Some systems, such as Ubuntu 12.04 "Precise", do not offer an Xsession option by default. To enable it, create a new file /usr/share/xsessions/custom.desktop that contains: Taowm starts with each screen divided into two side-by-side frames, outlined in green. Frames can frame windows, but they can also be empty: closing a frame's window will not collapse that frame. The frame that contains the mouse pointer is the focused frame, and its border is brighter than other frames. Its window (if it contains one) will have the keyboard focus. Taowm is primarily keyboard driven, and all keyboard shortcuts involve first holding down the Caps Lock key, similar to how holding down the Control key followed by the 'N' key, in your web browser, creates a new browser window. The default Caps Lock behavior, CHANGING ALL TYPED LETTERS TO UPPER CASE, is disabled. Caps Lock and the Space key will open a new web browser window. Caps Lock and the Enter key will open a new terminal emulator window. Caps Lock and the Shift key and the '|' pipe key will lock the screen. Caps Lock and the Backspace key will close the window in the focused frame. Caps Lock and the Tab key will cycle through the frames. To quit taowm and return to the log in screen, hold down Caps Lock and the Shift key and hit the Escape key three times in quick succession. Normally, this will quit immediately. Some programs may ask for something before closing, such as a file name to write unsaved data to. In this case, taowm will quit in 60 seconds or whenever all such programs have closed, instead of quitting immediately, and the frame borders will turn red. If there are more windows than frames, then Caps Lock and the 'D' or 'F' key will cycle through hidden windows. Caps Lock and a number key like '1', '2', etc. will move the 1st, 2nd, etc. window to the focused frame. Caps Lock and the 'A' key will show a list of windows: the one currently in the focused frame is marked with a '+', other windows in other frames are marked with a '-', hidden windows that have not been seen yet are marked with an '@', and hidden windows that have been seen before are unmarked. In particular, newly created windows will not automatically be shown. Taowm prevents new windows from popping up and 'stealing' keyboard focus, a problem if the password you are typing into your terminal emulator accidentally gets written to a chat window that popped up at the wrong time. Instead, if there isn't an empty frame to accept a new window, taowm keeps that window hidden (and marked with an '@' in the window list) until you are ready to deal with it. If there are any such windows that have not been seen yet, the green frame borders will pulsate to remind you. Selected windows are also marked with a '#'; selection is described below. Caps Lock and the 'G' key will toggle the focused frame in occupying the entire screen. 
Caps Lock and Shift and the 'G' key will hide the window in the focused frame. Caps Lock and the '-' key, the '=' key or Shift and the '+' key will split the current frame horizontally, vertically, or merge a frame to undo a frame split respectively. A screen contains workspaces like a frame contains windows. Caps Lock and the 'T' key will create a new workspace, hiding the current one. Caps Lock and the 'E' or 'R' key will cycle through hidden workspaces. Caps Lock and Shift and the 'T' key will delete the current workspace, provided that it holds no windows and there is another hidden workspace to switch to. Caps Lock and the 'Q' key will show a list of workspaces (and their windows). Caps Lock and the '`' key will cycle through the screens. Caps Lock and the F1 key, F2 key, etc. will move the 1st, 2nd, etc. workspace to the current screen. Caps Lock and the 'S' key will select a window, or unselect a selected window. More than one window may be selected at a time. Caps Lock and Shift and the 'S' key will select or unselect all windows in the current workspace. Caps Lock and the 'W' key will migrate all selected windows to the current workspace and unselect them. Taowm also provides alternative ways to navigate within a program's window. Caps Lock and the 'H', 'J', 'K' or 'L' keys are equivalent to pressing the Left, Down, Up or Right arrow keys. Similarly, Caps Lock and the 'Y', 'U', 'B' or 'N' keys are equivalent to Home, Page Up, End or Page Down. The 'I' or 'M' keys are equivalent to a mouse wheel scrolling up or down, and the ',' or '.' keys are equivalent to the Backspace or Delete keys. Taowm provides similar shortcuts for other common actions. Caps Lock and the 'O' or 'P' keys will copy or paste, '/' or Shift-and-'?' will open or close a tab in the current window, 'C' or 'V' will cycle through tabs, 'Z' or 'X' will zoom in or out. By default, these keys will only work with the google-chrome web browser and the gnome-terminal terminal emulator. Making these work with other programs will require some customization. Customizing the keyboard shortcuts, web browser, terminal emulator, colors, etc., is done by editing config.go and re-compiling (and re-installing): run "go install github.com/nigeltao/taowm". When working on taowm, it can be run in a nested X server such as Xephyr. From the github.com/nigeltao/taowm directory under $GOPATH: The taowm mailing list is at http://groups.google.com/group/taowm Taowm is copyright 2013 The Taowm Authors. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
Package gofpdf implements a PDF document generator with high level support for text, drawing and images.

- UTF-8 support
- Choice of measurement unit, page format and margins
- Page header and footer management
- Automatic page breaks, line breaks, and text justification
- Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images
- Colors, gradients and alpha channel transparency
- Outline bookmarks
- Internal and external links
- TrueType, Type1 and encoding support
- Page compression
- Lines, Bézier curves, arcs, and ellipses
- Rotation, scaling, skewing, translation, and mirroring
- Clipping
- Document protection
- Layers
- Templates
- Barcodes
- Charting facility
- Import PDFs as templates

gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. This repository will not be maintained, at least for some unknown duration. But it is hoped that gofpdf has a bright future in the open source world. Due to Go’s promise of compatibility, gofpdf should continue to function without modification for a longer time than would be the case with many other languages. Forks should be based on the last viable commit. Tools such as active-forks can be used to select a fork that looks promising for your needs. If a particular fork looks like it has taken the lead in attracting followers, this README will be updated to point people in that direction. The efforts of all contributors to this project have been deeply appreciated. Best wishes to all of you. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation.
Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions.
Your change should

- be compatible with the MIT License
- be properly documented
- be formatted with go fmt
- include an example in fpdf_test.go if appropriate
- conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings
- not diminish test coverage

Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it. Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support of UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage.
Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations. Roadmap:

- Remove all legacy code page font support; use UTF-8 exclusively
- Improve test coverage as reported by the coverage tool.

Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library. If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
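For orientation, the minimal document described above looks roughly like this; the import path assumes the original github.com/jung-kurt/gofpdf repository, and per the error-handling scheme explained earlier it is enough to check the error returned by the final output call.

	package main

	import (
		"log"

		"github.com/jung-kurt/gofpdf"
	)

	func main() {
		pdf := gofpdf.New("P", "mm", "A4", "") // portrait, millimeters, A4, empty font directory
		pdf.AddPage()
		pdf.SetFont("Arial", "B", 16)
		pdf.Cell(40, 10, "Hello, world")
		if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
			log.Fatal(err)
		}
	}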
Package uplink is the main entrypoint to interacting with Storj Labs' decentralized storage network. Sign up for an account on a Satellite today! https://storj.io/ The fundamental unit of access in the Storj Labs storage network is the Access Grant. An access grant is a serialized structure that is internally comprised of an API Key, a set of encryption key information, and information about which Storj Labs or Tardigrade network Satellite is responsible for the metadata. An access grant is always associated with exactly one Project on one Satellite. If you don't already have an access grant, you will need to make an account on a Satellite, generate an API Key, and encapsulate that API Key with encryption information into an access grant. If you don't already have an account on a Satellite, first make one at https://storj.io/ and note the Satellite you choose (such as us1.storj.io, eu1.storj.io, etc). Then, make an API Key in the web interface. The first step in any project is to generate a restricted access grant with the minimal permissions that are needed. Access grants contain all encryption information and they should be restricted as much as possible. To make an access grant, you can create one using our Uplink CLI tool's 'share' subcommand (after setting up the Uplink CLI tool), or you can make one as follows: In the above example, 'serializedAccess' is a human-readable string that represents read-only access to just the "logs" bucket, and is only able to decrypt that one bucket thanks to hierarchical deterministic key derivation. Note: RequestAccessWithPassphrase is CPU-intensive, and your application's normal lifecycle should avoid it and use ParseAccess where possible instead. To revoke an access grant see the Project.RevokeAccess method. A common architecture for building applications is to have a single bucket for the entire application to store the objects of all users. In such an architecture, it is of utmost importance to guarantee that users can access only their objects but not the objects of other users. This can be achieved by implementing an app-specific authentication service that generates an access grant for each user by restricting the main access grant of the application. This user-specific access grant is restricted to access the objects only within a specific key prefix defined for the user. When initialized, the authentication server creates the main application access grant with an empty passphrase as follows. The authentication service does not hold any encryption information about users, so the passphrase used to request the main application access grant does not matter. The encryption keys related to user objects will be overridden in a later step on the client side. It is important that once set to a specific value, this passphrase never changes in the future. Therefore, the best practice is to use an empty passphrase. Whenever a user is authenticated, the authentication service generates the user-specific access grant as follows: The userID is something that uniquely identifies the users in the application and must never change. Along with the user access grant, the authentication service should return a user-specific salt. The salt must always be the same for this user. The salt size is 16 or 32 bytes.
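The following is a hypothetical sketch of that authentication-service step, using the storj.io/uplink package; the bucket name, per-user prefix layout, and salt-derivation scheme are illustrative assumptions, not something prescribed by the library.

	package auth

	import (
		"crypto/sha256"

		"storj.io/uplink"
	)

	// GenerateUserAccess restricts the application's main access grant to a
	// per-user key prefix and returns it serialized, together with a stable
	// per-user salt.
	func GenerateUserAccess(appAccess *uplink.Access, userID string) (string, []byte, error) {
		userAccess, err := appAccess.Share(
			uplink.FullPermission(),
			uplink.SharePrefix{Bucket: "app-data", Prefix: userID + "/"},
		)
		if err != nil {
			return "", nil, err
		}

		serialized, err := userAccess.Serialize()
		if err != nil {
			return "", nil, err
		}

		// Any scheme works for the salt as long as it yields the same 16 or
		// 32 bytes for a given user forever; hashing the user ID is one way.
		salt := sha256.Sum256([]byte("user-salt/" + userID))
		return serialized, salt[:], nil
	}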
Once the application receives the user-specific access grant and the user-specific salt from the authentication service, it has to override the encryption key in the access grant, so users can encrypt and decrypt their files with encryption keys derived from their passphrase. The user-specific access grant is now ready to be used by the application. Once you have a valid access grant, you can open a Project with the access that the access grant allows for. Projects allow you to manage buckets and objects within buckets. A bucket represents a collection of objects. You can upload, download, list, and delete objects of any size or shape. Objects within buckets are represented by keys, where keys can optionally be listed using the "/" delimiter. Note: Objects and object keys within buckets are end-to-end encrypted, but bucket names themselves are not encrypted, so the billing interface on the Satellite can show you bucket line items. Objects support a couple of kilobytes of arbitrary key/value metadata, and arbitrary-size primary data streams with the ability to read at arbitrary offsets. If you want to access only a small subrange of the data you uploaded, you can use `uplink.DownloadOptions` to specify the download range. Listing objects returns an iterator that allows you to walk through all the items:
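A short listing sketch, assuming an access grant obtained as described above and the "logs" bucket from the earlier example:

	package example

	import (
		"context"
		"fmt"

		"storj.io/uplink"
	)

	// listLogs walks every object key in the "logs" bucket.
	func listLogs(ctx context.Context, access *uplink.Access) error {
		project, err := uplink.OpenProject(ctx, access)
		if err != nil {
			return err
		}
		defer project.Close()

		objects := project.ListObjects(ctx, "logs", &uplink.ListObjectsOptions{
			Recursive: true,
		})
		for objects.Next() {
			fmt.Println(objects.Item().Key)
		}
		return objects.Err()
	}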
Package gochrome aims to be a complete Chrome DevTools Protocol Viewer implementation. Versioned packages are available. Currently the only version is `tot` or Tip-of-Tree. Stable versions will be made available in the future. This is beta software and hasn't been well exercised in real-world applications. See https://chromedevtools.github.io/devtools-protocol/ The Chrome DevTools Protocol allows for tools to instrument, inspect, debug and profile Chromium, Chrome and other Blink-based browsers. Many existing projects currently use the protocol. The Chrome DevTools uses this protocol and the team maintains its API. Instrumentation is divided into a number of domains (DOM, Debugger, Network etc.). Each domain defines a number of commands it supports and events it generates. Both commands and events are serialized JSON objects of a fixed structure. You can either debug over the wire using the raw messages as they are described in the corresponding domain documentation, or use the extension JavaScript API. The latest (tip-of-tree) protocol (tot) changes frequently and can break at any time. However it captures the full capabilities of the Protocol, whereas the stable release is a subset. There is no backwards compatibility support guaranteed for the capabilities it introduces. Resources. Basics: Using DevTools as a protocol client. The Developer Tools front-end can attach to a remotely running Chrome instance for debugging. For this scenario to work, you should start your host Chrome instance with the remote-debugging-port command line switch: Then you can start a separate client Chrome instance, using a distinct user profile: Now you can navigate to the given port from your client and attach to any of the discovered tabs for debugging: http://localhost:9222 You will find the Developer Tools interface identical to the embedded one and here is why: In this scenario, you can substitute the Developer Tools front-end with your own implementation. Instead of navigating to the HTML page at http://localhost:9222, your application can discover available pages by requesting: http://localhost:9222/json and getting a JSON object with information about inspectable pages along with the WebSocket addresses that you could use in order to start instrumenting them. Remote debugging is especially useful when debugging remote instances of the browser or attaching to the embedded devices. Blink port owners are responsible for exposing debugging connections to the external users. This is especially handy to understand how the DevTools frontend makes use of the protocol. First, run Chrome with the debugging port open: Then, select the Chromium Projects item in the Inspectable Pages list. Now that DevTools is up and fullscreen, open DevTools to inspect it. Press Cmd-R in the new inspector to make the first restart. Now head to the Network Panel, filter by WebSocket, select the connection and click the Frames tab. Now you can easily see the frames of WebSocket activity as you use the first instance of the DevTools. To allow Chrome extensions to interact with the protocol, we introduced the chrome.debugger extension API that exposes this JSON message transport interface. As a result, you can not only attach to the remotely running Chrome instance, but also instrument it from its own extension. The Chrome Debugger Extension API provides a higher level API where command domain, name and body are provided explicitly in the `sendCommand` call.
This API hides request ids and handles binding of the request with its response, hence allowing `sendCommand` to report the result in the callback function call. One can also use this API in combination with the other Extension APIs. If you are developing a Web-based IDE, you should implement an extension that exposes debugging capabilities to your page, and your IDE will be able to open pages with the target application, set breakpoints there, evaluate expressions in the console, live edit JavaScript and CSS, display the live DOM, network interaction and any other aspect that Developer Tools is instrumenting today. Opening the embedded Developer Tools will terminate the remote connection and thus detach the extension. https://chromedevtools.github.io/devtools-protocol/#simultaneous The canonical protocol definitions live in the Chromium source tree (browser_protocol.json and js_protocol.json). They are maintained manually by the DevTools engineering team. These files are mirrored (hourly) on GitHub in the devtools-protocol repo. The declarative protocol definitions are used across tools. Within Chromium, a binding layer is created for the Chrome DevTools to interact with, and separately the protocol is used for Chrome Headless’s C++ interface. What’s the protocol_externs file? It’s created via generate_protocol_externs.py and useful for tools using the closure compiler. The TypeScript story is here. Not yet. See bugger-daemon’s third-party docs. See also the endpoints implementation in Chromium. /json/protocol was added in Chrome 60. The endpoint is exposed as webSocketDebuggerUrl in /json/version. Note the browser in the URL, rather than page. If Chrome was launched with --remote-debugging-port=0 and chose an open port, the browser endpoint is written to both stderr and the DevToolsActivePort file in the browser profile folder. Yes, as of Chrome 63! See Multi-client remote debugging support. Upon disconnection, the outgoing client will receive a detached event. For example: View the enum of possible reasons. (For reference: the original patch). After disconnection, some apps have chosen to pause their state and offer a reconnect button.
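As a small illustration in Go of the discovery endpoint described above, the following requests http://localhost:9222/json and prints each inspectable page's title along with the WebSocket address used to attach to it (field names as returned by Chrome):

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"net/http"
	)

	// target mirrors a subset of the fields Chrome returns for each
	// inspectable page.
	type target struct {
		Title                string `json:"title"`
		URL                  string `json:"url"`
		WebSocketDebuggerURL string `json:"webSocketDebuggerUrl"`
	}

	func main() {
		resp, err := http.Get("http://localhost:9222/json")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()

		var targets []target
		if err := json.NewDecoder(resp.Body).Decode(&targets); err != nil {
			log.Fatal(err)
		}
		for _, t := range targets {
			fmt.Printf("%s\n  %s\n", t.Title, t.WebSocketDebuggerURL)
		}
	}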
Package file is a Go library to open files with file locking depending on the system. Currently, file locking on the following systems is supported.
Package gatt provides a Bluetooth Low Energy gatt implementation. Gatt (Generic Attribute Profile) is the protocol used to write BLE peripherals (servers) and centrals (clients). This package is a work in progress. The API will change. As a peripheral, you can create services, characteristics, and descriptors, advertise, accept connections, and handle requests. As a central, you can scan, connect, discover services, and make requests. gatt supports both Linux and OS X. On Linux: To gain complete and exclusive control of the HCI device, gatt uses HCI_CHANNEL_USER (introduced in Linux v3.14) instead of HCI_CHANNEL_RAW. Those who must use an older kernel may patch in these relevant commits from Marcel Holtmann: Note that because gatt uses HCI_CHANNEL_USER, once gatt has opened the device no other program may access it. Before starting a gatt program, make sure that your BLE device is down: If you have BlueZ 5.14+ (or aren't sure), stop the built-in bluetooth server, which interferes with gatt, e.g.: Because gatt programs administer network devices, they must either be run as root, or be granted appropriate capabilities: Usage: See the server.go, discoverer.go, and explorer.go in the examples/ directory for writing server or client programs that run on Linux and OS X. Users, especially on Linux platforms, seeking finer-grained control over the devices can see examples/server_lnx.go for the usage of Option values, which are platform specific. See the rest of the docs for other options and finer-grained control. Note that some BLE central devices, particularly iOS, may aggressively cache results from previous connections. If you change your services or characteristics, you may need to reboot the other device to pick up the changes. This is a common source of confusion and apparent bugs. For an OS X central, see http://stackoverflow.com/questions/20553957. gatt started life as a port of bleno, to which it is indebted: https://github.com/sandeepmistry/bleno. If you are having problems with gatt, particularly around installation, issues filed with bleno might also be helpful references. To try out your GATT server, it is useful to experiment with a generic BLE client. LightBlue is a good choice. It is available free for both iOS and OS X.
Package progressio contains io.Reader and io.Writer wrappers to easily get progress feedback, including speed/sec, average speed, %, time remaining, size, transferred size, ... over a channel in a progressio.Progress object. An important note is that the returned objects implement the io.Closer interface, and you have to close the progressio.ProgressReader and progressio.ProgressWriter objects in order to clean everything up. Usage is pretty simple: A helper function is available that opens a file, determines its size, and wraps its os.File io.Reader object: A wrapper for an io.WriteCloser is available too, but no helper function is available to write to an os.File, since the target size is not known. Usually, wrapping the io.Writer is more accurate, since writing potentially takes up more time and happens last. Usage is similar to wrapping the io.Reader: Note that you can also implement your own formatting. See the String() function implementation or consult the Progress struct layout and documentation.
Package iris provides a beautifully expressive and easy to use foundation for your next website, API, or distributed app. Source code and other details for the project are available at GitHub: 8.5.9 Final The only requirement is the Go Programming Language, at least version 1.8, but 1.9 is highly recommended. Iris takes advantage of the vendor directory feature wisely: https://docs.google.com/document/d/1Bz5-UB7g2uPBdOx-rw5t9MxJwkfpx90cqG9AFL0JAYo. You get truly reproducible builds, as this method guards against upstream renames and deletes. A simple copy-paste and `go get ./...` to resolve two dependencies: https://github.com/kataras/golog and https://github.com/iris-contrib/httpexpect will work forever, even for older versions; the newest version can be retrieved by `go get`, but this file contains documentation for an older version of Iris. Follow the instructions below: 1. install the Go Programming Language: https://golang.org/dl 2. clear your previous `$GOPATH/src/github.com/kataras/iris` folder or create a new one 3. download the Iris v8.5.9 (final): https://github.com/kataras/iris/archive/v8.zip 4. extract the contents of the `iris-v8` folder that's inside the downloaded zip file to your `$GOPATH/src/github.com/kataras/iris` 5. navigate to your `$GOPATH/src/github.com/kataras/iris` folder if you're not already there and open a terminal/command prompt, execute the command: `go get ./...` and you're ready to GO:) Example code: You can start the server(s) listening to any type of `net.Listener` or even `http.Server` instance. The method for initialization of the server should be passed at the end, via the `Run` function. Below you'll see some useful examples: UNIX and BSD hosts can take advantage of the reuse port feature. Example code: That's all about listening; you have full control when you need it. Let's continue by learning how to catch CONTROL+C/COMMAND+C or the unix kill command and shut down the server gracefully. In order to manually manage what to do when the app is interrupted, we have to disable the default behavior with the option `WithoutInterruptHandler` and register a new interrupt handler (globally, across all possible hosts). Example code: Access to all hosts that serve your application can be provided by the `Application#Hosts` field, after the `Run` method. But the most common scenario is that you may need access to the host before the `Run` method; there are two ways to gain access to the host supervisor, read below. The first way is to use the `app.NewHost` to create a new host and use one of its `Serve` or `Listen` functions to start the application via the `iris#Raw` Runner. Note that this way needs an extra import of the `net/http` package. Example Code: The second, and probably easier, way is to use the `host.Configurator`. Note that this method requires an extra import statement of "github.com/kataras/iris/core/host" when using go < 1.9; if you're targeting go1.9 then you can use the `iris#Supervisor` and omit the extra host import. All common `Runners` we saw earlier (`iris#Addr, iris#Listener, iris#Server, iris#TLS, iris#AutoTLS`) accept a variadic argument of `host.Configurator`; these are just `func(*host.Supervisor)`. Therefore the `Application` gives you the right to modify the auto-created host supervisor through these. Example Code: Read more about listening and gracefully shutdown by navigating to: All HTTP methods are supported, and developers can also register handlers for the same paths with different methods.
The first parameter is the HTTP Method, the second parameter is the request path of the route, and the third variadic parameter should contain one or more iris.Handlers, executed in the registered order when a user requests that specific resource path from the server. Example code: In order to make things easier for the user, iris provides functions for all HTTP Methods. The first parameter is the request path of the route, and the second variadic parameter should contain one or more iris.Handlers, executed in the registered order when a user requests that specific resource path from the server. Example code: A set of routes that are being grouped by path prefix can (optionally) share the same middleware handlers and template layout. A group can have a nested group too. `.Party` is being used to group routes; developers can declare an unlimited number of (nested) groups. Example code: iris developers are able to register their own handlers for http statuses like 404 not found, 500 internal server error and so on. Example code: With the help of iris's expressionist router you can build any form of API you desire, with safety. Example code: Iris has first-class support for the MVC pattern; you will not find this stuff anywhere else in the Go world. Example Code: The Iris web framework supports Request data, Models, Persistence Data and Binding with the fastest possible execution. Characteristics: All HTTP Methods are supported; for example, if you want to serve `GET` then the controller should have a function named `Get()`. You can define more than one method function to serve in the same Controller struct. Persistence data inside your Controller struct (share data between requests) via the `iris:"persistence"` tag directly on the field, or Bind using `app.Controller("/" , new(myController), theBindValue)`. Models inside your Controller struct (set at the Method function and rendered by the View) via the `iris:"model"` tag directly on the field, i.e. User UserModel `iris:"model" name:"user"`; the view will recognise it as `{{.user}}`. If the `name` tag is missing then it takes the field's name, in this case `"User"`. Access to the request path and its parameters via the `Path and Params` fields. Access to the template file that should be rendered via the `Tmpl` field. Access to the template data that should be rendered inside the template file via the `Data` field. Access to the template layout via the `Layout` field. Access to the low-level `iris.Context` via the `Ctx` field. Get the relative request path by using the controller's name via `RelPath()`. Get the relative template path directory by using the controller's name via `RelTmpl()`. Flow as you are used to: `Controllers` can be registered to any `Party`, including Subdomains; the Party's begin and done handlers work as expected. Optional `BeginRequest(ctx)` function to perform any initialization before the method execution, useful to call middlewares or when many methods use the same collection of data. Optional `EndRequest(ctx)` function to perform any finalization after any method executed. Inheritance, recursively; see for example our `mvc.SessionController/iris.SessionController`, which has the `mvc.Controller/iris.Controller` as an embedded field and adds its logic to its `BeginRequest`. Source file: https://github.com/kataras/iris/blob/v8/mvc/session_controller.go. Read access to the current route via the `Route` field. Support for more than one input arguments (map to dynamic request path parameters).
Register one or more relative paths and be able to get path parameters, i.e. Response via output arguments, optionally, i.e. Where `any` means everything, from custom structs to standard language types. `Result` is an interface which contains only that function: Dispatch(ctx iris.Context), and Get is the HTTP Method function (it could also be Post, Put, Delete...). Iris has a very powerful and blazing fast MVC support; you can return any value of any type from a method function and it will be sent to the client as expected. * if `string` then it's the body. * if `string` is the second output argument then it's the content type. * if `int` then it's the status code. * if `bool` is false then it throws 404 not found http error by skipping everything else. * if `error` and not nil then (any type) response will be omitted and the error's text with a 400 bad request will be rendered instead. * if `(int, error)` and the error is not nil then the response result will be the error's text with the status code as `int`. * if `custom struct` or `interface{}` or `slice` or `map` then it will be rendered as json, unless a `string` content type is following. * if `mvc.Result` then it executes its `Dispatch` function, so good design patterns can be used to split the model's logic where needed. The example below is not intended to be used in production but it's a good showcase of some of the return types we saw before. Another good example with a typical folder structure, that many developers are used to working with, can be found at: https://github.com/kataras/iris/tree/v8/_examples/mvc/overview. By creating components that are independent of one another, developers are able to reuse components quickly and easily in other applications. The same (or similar) view for one application can be refactored for another application with different data because the view is simply handling how the data is being displayed to the user. If you're new to back-end web development read about the MVC architectural pattern first; a good start is the Wikipedia article: https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller. Follow the examples at: https://github.com/kataras/iris/tree/v8/_examples/#mvc In the previous example, we've seen static routes, groups of routes, subdomains, wildcard subdomains, a small example of a parameterized path with a single known parameter and custom http errors; now it's time to see wildcard parameters and macros. iris, like the net/http std package, registers a route's handlers by a Handler; the iris type of handler is just a func(ctx iris.Context) where the context comes from github.com/kataras/iris/context. Iris has the easiest and the most powerful routing process you have ever met. At the same time, iris has its own interpreter (yes, like a programming language) for the route's path syntax and its dynamic path parameters parsing and evaluation; we call them "macros" for short. How? It calculates its needs and, if no special regexp is needed, it just registers the route with the low-level path syntax; otherwise it pre-compiles the regexp and adds the necessary middleware(s). Standard macro types for parameters: if the type is missing then the parameter's type defaults to string, so {param} == {param:string}. If a function is not found on that type then the "string" type's functions are used. i.e: Besides the fact that iris provides the basic types and some default "macro funcs", you are able to register your own too! Register a named path parameter function: at the func(argument ...)
you can have any standard type; it will be validated before the server starts, so don't worry about performance here; the only thing it runs at serve time is the returning func(paramValue string) bool. Example Code: A path parameter name should contain only alphabetical letters; symbols and numbers (including '_') are NOT allowed. If a route fails to be registered, the app will panic without any warnings if you didn't catch the second return value (error) on .Handle/.Get.... Last, do not confuse ctx.Values() with ctx.Params(). Path parameters' values go to ctx.Params(), and the context's local storage that can be used to communicate between handlers and middleware(s) goes to ctx.Values(); path parameters and the rest of any custom values are separated for your own good. Run Static Files Example code: More examples can be found here: https://github.com/kataras/iris/tree/v8/_examples/beginner/file-server Middleware is just a concept of an ordered chain of handlers. Middleware can be registered globally, per-party, per-subdomain and per-route. Example code: iris is able to wrap and convert any external, third-party Handler you used to use, to your web application. Let's convert the https://github.com/rs/cors net/http external middleware which returns a `next form` handler. Example code: Iris supports 5 template engines out-of-the-box; developers can still use any external golang template engine, as `context/context#ResponseWriter()` is an `io.Writer`. All of these five template engines have common features with a common API, like Layout, Template Funcs, Party-specific layout, partial rendering and more. Example code: The view engine supports bundled (https://github.com/jteeuwen/go-bindata) template files too. go-bindata gives you two functions, asset and assetNames; these can be set on each of the template engines using the `.Binary` func. Example code: A real example can be found here: https://github.com/kataras/iris/tree/v8/_examples/view/embedding-templates-into-app. Enable auto-reloading of templates on each request. Useful while developers are in dev mode, as they have no need to restart their app on every template edit. Example code: Note: In case you're wondering, the code behind the view engines derives from the "github.com/kataras/iris/view" package; access to the engines' variables can be granted by the "github.com/kataras/iris" package too. Each one of these template engines has different options located here: https://github.com/kataras/iris/tree/v8/view . This example will show how to store and access data from a session. You don’t need any third-party library, but if you want you can use any session manager, compatible or not. In this example we will only allow authenticated users to view our secret message on the /secret page. To get access to it, they will first have to visit /login to get a valid session cookie, which logs them in. Additionally they can visit /logout to revoke their access to our secret message. Example code: Running the example: Sessions persistence can be achieved using one (or more) `sessiondb`. Example Code: More examples: In this example we will create a small chat between websockets via the browser. Example Server Code: Example Client (javascript) Code: Running the example: By now you should have a basic idea of the framework; we just scratched the surface. If you enjoy what you just saw and want to learn more, please follow the below links: Examples: Middleware: Home Page:
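To tie the routing pieces together, here is a compact sketch of a static route, a parameterized route using the {name:type} macro syntax, and a custom 404 handler, assuming the v8 import path documented above:

	package main

	import "github.com/kataras/iris"

	func main() {
		app := iris.New()

		app.Get("/", func(ctx iris.Context) {
			ctx.HTML("<h1>Hello from Iris</h1>")
		})

		// {id:int} uses the standard "int" macro type; the evaluated path
		// parameter is available through ctx.Params().
		app.Get("/users/{id:int}", func(ctx iris.Context) {
			id, _ := ctx.Params().GetInt("id")
			ctx.Writef("user id: %d", id)
		})

		// Custom handler for 404 Not Found.
		app.OnErrorCode(iris.StatusNotFound, func(ctx iris.Context) {
			ctx.WriteString("custom 404")
		})

		app.Run(iris.Addr(":8080"))
	}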
Continuation of the byosh and SimpleSNIProxy projects. To ensure that Sniproxy works correctly, it's important to have ports 80, 443, and 53 open. However, on Ubuntu, it's possible that port 53 may be in use by systemd-resolved. To disable systemd-resolved and free up the port, follow these instructions. If you prefer to keep systemd-resolved and just disable the built-in resolver, you can use the following command: The simplest way to install the software is by utilizing the pre-built binaries available on the releases page. Alternatively, there are other ways to install, which include: Using the "go install" command: Using Docker or Podman: Using the installer script: sniproxy can be configured using a configuration file or command line flags. The configuration file is a JSON file, and an example configuration file can be found under config.sample.json. Flags: In this tutorial, we will go over the steps to set up an SNI proxy using Vultr as a service provider. This will allow you to serve multiple SSL-enabled websites from a single IP address.

- A Vultr account. If you don't have one, you can sign up for free using my Vultr referral link

## Step 1: Create a Vultr Server

First, log in to your Vultr account and click on the "Instances" tab in the top menu. Then, click the "+" button to deploy a new server. On the "Deploy New Instance" page, select the following options:

- Choose Server: Choose "Cloud Compute"
- CPU & Storage Technology: Any of the choices should work perfectly fine
- Server Location: Choose the location of the server. This will affect the latency of your website, so it's a good idea to choose a location that is close to your target audience.
- Server Image: Any OS listed there is supported. If you're not sure what to choose, Ubuntu is a good option
- Server Size: Choose a server size that is suitable for your needs. A small or medium-sized server should be sufficient for most SNI proxy setups. Pay attention to the monthly bandwidth usage as well
- "Add Auto Backups": not strictly needed for sniproxy.
- "SSH Keys": choose an SSH key to facilitate logging in later on. You can always use Vultr's built-in console as well.
- Server Hostname: Choose a hostname for your server. This can be any name you like.

After you have selected the appropriate options, click the "Deploy Now" button to create your server.

## Step 2: Install the SNI Proxy

Once your server has been created, log in to the server using SSH or the console. The root password is available under the "Overview" tab in the instances list. Ensure the firewall (firewalld, ufw or iptables) is allowing connectivity to ports 80/TCP, 443/TCP and 53/UDP. For `ufw`, allow these ports with: Once you have a shell in front of you, run the following (assuming you're on Ubuntu 22.04): The above script is an interactive installer; it will ask you a few questions and then install sniproxy for you. It also installs sniproxy as a systemd service and enables it to start on boot. The above wizard will set up the execution arguments for sniproxy. You can edit them by running and then editing the ExecStart line to your liking. For example, if you want to use a different port for HTTP, you can edit the line to
Package clustersql is an SQL "meta"-driver: a clustering, implementation-agnostic wrapper for any backend implementing "database/sql/driver". It does (latency-based) load-balancing and error-recovery over the registered set of nodes. It is assumed that database state is transparently replicated over all nodes by some database-side clustering solution. This driver ONLY handles the client side of such a cluster. This package simply multiplexes the driver.Open() function of sql/driver to every attached node. The function is called on each node, returning the first successfully opened connection. (Any connections opening subsequently will be closed.) If opening does not succeed for any node, the latest error gets returned. Any other errors will be masked by default. However, any given latest error for any attached node will remain exposed through expvar, as well as some basic counters and timestamps. To make use of this kind of clustering, use this package with any backend driver implementing "database/sql/driver" like so: There is currently no way around instantiating the backend driver explicitly. You can perform backend-driver-specific settings such as: Create a new clustering driver with the backend driver. Add nodes, including the driver-specific name format, in this case the Go-MySQL DSN. Here, we add three nodes belonging to a galera (https://mariadb.com/kb/en/mariadb/documentation/replication-cluster-multi-master/galera/) cluster. Make the clusterDriver available to the go sql interface under an arbitrary name. Open the registered clusterDriver with an arbitrary DSN string (not used). Continue to use the sql interface as documented at http://golang.org/pkg/database/sql/ Before using this in production, you should configure your cluster details in config.toml and run Note however, that non-failure of the above is no guarantee for a correctly set-up cluster. Finally, you SHOULD set db.MaxIdleConns and db.MaxOpenConns to a non-zero value. Although the sql driver usually does a good job of doing its own pooling, file descriptors can leak in corner cases (of which this library might constitute an example).
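A sketch of the steps just listed. The constructor and node-registration names (NewDriver, AddNode) and the import path are assumptions based on the description above; the node DSNs use the Go-MySQL driver format.

	package main

	import (
		"database/sql"
		"log"

		"github.com/benthor/clustersql" // import path assumed
		"github.com/go-sql-driver/mysql"
	)

	func main() {
		// Instantiate the backend driver explicitly and wrap it.
		clusterDriver := clustersql.NewDriver(mysql.MySQLDriver{})

		// Add the three galera nodes using Go-MySQL DSNs.
		clusterDriver.AddNode("galera1", "user:password@tcp(dbhost1:3306)/db")
		clusterDriver.AddNode("galera2", "user:password@tcp(dbhost2:3306)/db")
		clusterDriver.AddNode("galera3", "user:password@tcp(dbhost3:3306)/db")

		// Make the clusterDriver available to the sql interface under an
		// arbitrary name; the DSN passed to Open is not used.
		sql.Register("cluster", clusterDriver)
		db, err := sql.Open("cluster", "whatever")
		if err != nil {
			log.Fatal(err)
		}

		// As recommended above, set the pool limits to non-zero values.
		db.SetMaxIdleConns(10)
		db.SetMaxOpenConns(50)

		// Continue using db as documented in database/sql.
	}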
Package preallocate allocates disk space efficiently via syscall (on supported platforms and filesystems) or by writing null bytes. Files opened with O_APPEND are not supported.
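As a rough illustration of the write-null-bytes fallback mentioned above (this is the general technique, not this package's API):

```go
package fallback

import "os"

// writeZeros reserves size bytes by writing zero-filled blocks. A library
// like the one described above would prefer a platform preallocation
// syscall (such as fallocate on Linux) and only fall back to something
// along these lines when that is unavailable.
func writeZeros(f *os.File, size int64) error {
	buf := make([]byte, 64*1024) // zero-filled by make
	for size > 0 {
		n := int64(len(buf))
		if size < n {
			n = size
		}
		w, err := f.Write(buf[:n])
		if err != nil {
			return err
		}
		size -= int64(w)
	}
	return nil
}
```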
Package file is a Go library to open files with file locking depending on the system. Currently, file locking on the following systems is supported.
Package database provides a block and metadata storage database. This package provides a database layer to store and retrieve block data and arbitrary metadata in a simple and efficient manner. The default backend, ffldb, has a strong focus on speed, efficiency, and robustness. It makes use of leveldb for the metadata, flat files for block storage, and strict checksums in key areas to ensure data integrity. A quick overview of the features database provides is as follows: The main entry point is the DB interface. It exposes functionality for transaction-based access and storage of metadata and block data. It is obtained via the Create and Open functions which take a database type string that identifies the specific database driver (backend) to use as well as arguments specific to the specified driver. The Namespace interface is an abstraction that provides facilities for obtaining transactions (the Tx interface) that are the basis of all database reads and writes. Unlike some database interfaces that support reading and writing without transactions, this interface requires transactions even when only reading or writing a single key. The Begin function provides an unmanaged transaction while the View and Update functions provide a managed transaction. These are described in more detail below. The Tx interface provides facilities for rolling back or committing changes that took place while the transaction was active. It also provides the root metadata bucket under which all keys, values, and nested buckets are stored. A transaction can either be read-only or read-write and managed or unmanaged. A managed transaction is one where the caller provides a function to execute within the context of the transaction and the commit or rollback is handled automatically depending on whether or not the provided function returns an error. Attempting to manually call Rollback or Commit on the managed transaction will result in a panic. An unmanaged transaction, on the other hand, requires the caller to manually call Commit or Rollback when they are finished with it. Leaving transactions open for long periods of time can have several adverse effects, so it is recommended that managed transactions are used instead. The Bucket interface provides the ability to manipulate key/value pairs and nested buckets as well as iterate through them. The Get, Put, and Delete functions work with key/value pairs, while the Bucket, CreateBucket, CreateBucketIfNotExists, and DeleteBucket functions work with buckets. The ForEach function allows the caller to provide a function to be called with each key/value pair and nested bucket in the current bucket. As discussed above, all of the functions which are used to manipulate key/value pairs and nested buckets exist on the Bucket interface. The root metadata bucket is the upper-most bucket in which data is stored and is created at the same time as the database. Use the Metadata function on the Tx interface to retrieve it. The CreateBucket and CreateBucketIfNotExists functions on the Bucket interface provide the ability to create an arbitrary number of nested buckets. It is a good idea to avoid a lot of buckets with little data in them as it could lead to poor page utilization depending on the specific driver in use. This example demonstrates creating a new database and using a managed read-write transaction to store and retrieve metadata.
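A sketch of that flow is shown below. The import paths assume a btcsuite-style repository layout, which this document does not confirm; adjust them to wherever this package actually lives.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"

	"github.com/btcsuite/btcd/database"          // import path assumed
	_ "github.com/btcsuite/btcd/database/ffldb"  // register the ffldb backend
	"github.com/btcsuite/btcd/wire"
)

func main() {
	// Create a database in a temporary directory using the ffldb backend.
	dbPath := filepath.Join(os.TempDir(), "exampledb")
	db, err := database.Create("ffldb", dbPath, wire.MainNet)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.RemoveAll(dbPath)
	defer db.Close()

	// Store and read a key under the root metadata bucket inside a managed
	// read-write transaction; commit/rollback is handled automatically.
	err = db.Update(func(tx database.Tx) error {
		meta := tx.Metadata()
		if err := meta.Put([]byte("mykey"), []byte("myvalue")); err != nil {
			return err
		}
		fmt.Printf("mykey = %s\n", meta.Get([]byte("mykey")))
		return nil
	})
	if err != nil {
		fmt.Println(err)
	}
}
```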
This example demonstrates creating a new database, using a managed read-write transaction to store a block, and using a managed read-only transaction to fetch the block.
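A corresponding sketch, continuing from the db handle in the previous example and additionally assuming imports of the btcutil and chaincfg packages along with the StoreBlock/FetchBlock method names; treat it as illustrative only:

```go
// Store the genesis block in a managed read-write transaction, then read
// its serialized bytes back in a managed read-only transaction.
block := btcutil.NewBlock(chaincfg.MainNetParams.GenesisBlock)

err = db.Update(func(tx database.Tx) error {
	return tx.StoreBlock(block)
})
if err != nil {
	fmt.Println(err)
	return
}

err = db.View(func(tx database.Tx) error {
	blockBytes, err := tx.FetchBlock(block.Hash())
	if err != nil {
		return err
	}
	fmt.Printf("fetched %d serialized bytes\n", len(blockBytes))
	return nil
})
if err != nil {
	fmt.Println(err)
}
```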
Package mspec is a BDD context/specification testing package for Go(Lang) with a strong emphasis on spec'ing your feature(s) and scenarios first, before any code is written, using as little syntax noise as possible. This leaves you free to think of your project and features as a whole without the distraction of writing any code, with the added benefit of having tests ready for your project. [![GoDoc](https://godoc.org/github.com/eduncan911/mspec?status.svg)](https://godoc.org/github.com/eduncan911/mspec) holds the source documentation (where else?) * Uses natural language (Given/When/Then) * Stubbing * Human-readable outputs * HTML output (coming soon) * Use custom Assertions * Configuration options * Uses Testify's rich assertions * Uses Go's built-in testing.T package Install it with one line of code: There are no external dependencies and it is built against Go's internal packages. The only dependency is that you have [GOPATH set up normally](https://golang.org/doc/code.html). Create a new file to hold your specs. Using Dan North's original BDD definitions, you spec code using the Given/When/Then storyline similar to: But this is just a static example. Let's take a real example from one of my projects: You represent these thoughts in code like this: Note that `Given`, `when` and `it` all have optional variadic parameters. This allows you to spec things out with as little or as much detail as you want. That's it. Now run it: Print it out and stick it on your office door for everyone to see what you are working on. This is actually colored output in Terminal: It is not uncommon to go back and tweak your stories over time as you talk with your domain experts, modifying exactly the scenarios and specifications that should happen. `GoMSpec` is a testing package for Go that extends Go's built-in testing package. It is modeled after the BDD Feature Specification story workflow such as: Currently it has an included `Expectation` struct that mimics basic assertion behaviors. Future plans may allow for custom assertion packages (like testify). Getting it Importing it Writing Specs Testing it Which outputs the following: Nice eh? There is nothing like using a testing package to test itself. There is some nice rich information available. ## Examples Be sure to check out more examples in the examples/ folder. Or just open the files and take a look. That's the most important part anyway. When evaluating several BDD frameworks, [Pranavraja's Zen](https://github.com/pranavraja/zen) package for Go came close - really close; but it was lacking the more "story" overview I've been accustomed to over the years with [Machine.Specifications](https://github.com/machine/machine.specifications) in C# (.NET land). Do note that there is something to be said for simple testing in Go (and simple coding); therefore, if you are the type to keep it short and sweet and just code, then you may want to use Pranavraja's framework as it is just the context (Desc) and specs writing. I forked his code and submitted a few bug tweaks at first. But along the way, I started to have grand visions of my soul mate [Machine.Specifications](https://github.com/machine/machine.specifications) (which is called MSpec for short) for BDD testing. The ease of defining complete stories right down to the scenarios without having to implement them intrigued me in C#. It freed me from worrying about implementation details and let me just focus on the feature I was writing: What did it need to do? What context was I given to start with?
What should it do? So while using Pranavraja's Zen framework, I kept asking myself: Could I bring those MSpec practices to Go, using a bare-bones framework? Ok, done. And since it was so heavily inspired by Aaron's MSpec project, I kept the name going here: `GoMSpec`. While keeping backwards compatibility with his existing Zen framework, I defined several goals for this package: * Had to stay simple with Given/When/Then definitions. No complex coding. * Keep the low syntax noise from the existing Zen package. * I had to be able to write features, scenarios and specs with no implementation details needed. That last goal above is key and I think is what speaks truly about what BDD is: focus on the story, feature and/or context you are designing - focus on the Behavior! I tended to design my C# code using Machine.Specifications in this BDD style by writing entire stories and grand specs up front - designing the system I was building, or the feature I was extending. In C# land, it's not unheard of for me to hit 50 to 100 specs across a single feature and a few different contexts in an hour or two, before writing any code. At that point, I had everything planned out pretty much the way it should behave. So with this framework, I came up with a simple method name, `NA()`, to keep the syntax noise down. Therefore, you are free to code specs with just a little syntax noise:
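For illustration, a stubbed-out spec might look roughly like the following. The Given/when/it/NA vocabulary follows the description above, but the import path and exact signatures are assumptions; check the package's examples for the real API.

```go
package account_test

import (
	"testing"

	. "github.com/eduncan911/mspec" // import path and dot-import style assumed
)

func Test_Closing_An_Account(t *testing.T) {

	Given(t, "an account in good standing", func(when When) {

		when("the account holder requests closure", func(it It) {

			// NA() marks specs that are written but not yet implemented,
			// so the story can be fleshed out before any code exists.
			it("should refund the remaining balance", NA())

			it("should send a confirmation email", NA())
		})
	})
}
```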
Copyright 2024 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Tries to use the guidelines at https://cloud.google.com/apis/design for the gRPC API where possible. TODO: permissive deadlines for all RPC calls
Package gofpdf implements a PDF document generator with high level support for text, drawing and images. - UTF-8 support - Choice of measurement unit, page format and margins - Page header and footer management - Automatic page breaks, line breaks, and text justification - Inclusion of JPEG, PNG, GIF, TIFF and basic path-only SVG images - Colors, gradients and alpha channel transparency - Outline bookmarks - Internal and external links - TrueType, Type1 and encoding support - Page compression - Lines, Bézier curves, arcs, and ellipses - Rotation, scaling, skewing, translation, and mirroring - Clipping - Document protection - Layers - Templates - Barcodes - Charting facility - Import PDFs as templates gofpdf has no dependencies other than the Go standard library. All tests pass on Linux, Mac and Windows platforms. gofpdf supports UTF-8 TrueType fonts and “right-to-left” languages. Note that Chinese, Japanese, and Korean characters may not be included in many general purpose fonts. For these languages, a specialized font (for example, NotoSansSC for simplified Chinese) can be used. Also, support is provided to automatically translate UTF-8 runes to code page encodings for languages that have fewer than 256 glyphs. To install the package on your system, run Later, to receive updates, run The following Go code generates a simple PDF file. See the functions in the fpdf_test.go file (shown as examples in this documentation) for more advanced PDF examples. If an error occurs in an Fpdf method, an internal error field is set. After this occurs, Fpdf method calls typically return without performing any operations and the error state is retained. This error management scheme facilitates PDF generation since individual method calls do not need to be examined for failure; it is generally sufficient to wait until after Output() is called. For the same reason, if an error occurs in the calling application during PDF generation, it may be desirable for the application to transfer the error to the Fpdf instance by calling the SetError() method or the SetErrorf() method. At any time during the life cycle of the Fpdf instance, the error state can be determined with a call to Ok() or Err(). The error itself can be retrieved with a call to Error(). This package is a relatively straightforward translation from the original FPDF library written in PHP (despite the caveat in the introduction to Effective Go). The API names have been retained even though the Go idiom would suggest otherwise (for example, pdf.GetX() is used rather than simply pdf.X()). The similarity of the two libraries makes the original FPDF website a good source of information. It includes a forum and FAQ. However, some internal changes have been made. Page content is built up using buffers (of type bytes.Buffer) rather than repeated string concatenation. Errors are handled as explained above rather than panicking. Output is generated through an interface of type io.Writer or io.WriteCloser. A number of the original PHP methods behave differently based on the type of the arguments that are passed to them; in these cases additional methods have been exported to provide similar functionality. Font definition files are produced in JSON rather than PHP. A side effect of running go test ./... is the production of a number of example PDFs. These can be found in the gofpdf/pdf directory after the tests complete. Please note that these examples run in the context of a test. 
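The simple document mentioned above can be produced with a sketch along these lines; the import path shown is the classic jung-kurt repository and the output file name is chosen here for illustration, so substitute the fork and path you actually use.

```go
package main

import (
	"log"

	"github.com/jung-kurt/gofpdf"
)

func main() {
	// Portrait orientation, millimeter units, A4 page, default font directory.
	pdf := gofpdf.New("P", "mm", "A4", "")
	pdf.AddPage()
	pdf.SetFont("Arial", "B", 16)
	pdf.Cell(40, 10, "Hello, world")

	// Per the deferred error-handling scheme described above, it is enough
	// to check the error once, when the document is written out.
	if err := pdf.OutputFileAndClose("hello.pdf"); err != nil {
		log.Fatal(err)
	}
}
```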
In order to run an example as a standalone application, you’ll need to examine fpdf_test.go for some helper routines, for example exampleFilename() and summary(). Example PDFs can be compared with reference copies in order to verify that they have been generated as expected. This comparison will be performed if a PDF with the same name as the example PDF is placed in the gofpdf/pdf/reference directory and if the third argument to ComparePDFFiles() in internal/example/example.go is true. (By default it is false.) The routine that summarizes an example will look for this file and, if found, will call ComparePDFFiles() to check the example PDF for equality with its reference PDF. If differences exist between the two files they will be printed to standard output and the test will fail. If the reference file is missing, the comparison is considered to succeed. In order to successfully compare two PDFs, the placement of internal resources must be consistent and the internal creation timestamps must be the same. To do this, the methods SetCatalogSort() and SetCreationDate() need to be called for both files. This is done automatically for all examples. Nothing special is required to use the standard PDF fonts (courier, helvetica, times, zapfdingbats) in your documents other than calling SetFont(). You should use AddUTF8Font() or AddUTF8FontFromBytes() to add a TrueType UTF-8 encoded font. Use the RTL() and LTR() methods to switch between “right-to-left” and “left-to-right” mode. In order to use a different non-UTF-8 TrueType or Type1 font, you will need to generate a font definition file and, if the font will be embedded into PDFs, a compressed version of the font file. This is done by calling the MakeFont function or using the included makefont command line utility. To create the utility, cd into the makefont subdirectory and run “go build”. This will produce a standalone executable named makefont. Select the appropriate encoding file from the font subdirectory and run the command as in the following example. In your PDF generation code, call AddFont() to load the font and, as with the standard fonts, SetFont() to begin using it. Most examples, including the package example, demonstrate this method. Good sources of free, open-source fonts include Google Fonts and DejaVu Fonts. The draw2d package is a two dimensional vector graphics library that can generate output in different forms. It uses gofpdf for its document production mode. gofpdf is a global community effort and you are invited to make it even better. If you have implemented a new feature or corrected a problem, please consider contributing your change to the project. A contribution that does not directly pertain to the core functionality of gofpdf should be placed in its own directory directly beneath the contrib directory. Here are guidelines for making submissions. Your change should - be compatible with the MIT License - be properly documented - be formatted with go fmt - include an example in fpdf_test.go if appropriate - conform to the standards of golint and go vet, that is, golint . and go vet . should not generate any warnings - not diminish test coverage Pull requests are the preferred means of accepting your changes. gofpdf is released under the MIT License. It is copyrighted by Kurt Jung and the contributors acknowledged below. This package’s code and documentation are closely derived from the FPDF library created by Olivier Plathey, and a number of font and image resources are copied directly from it.
Bruno Michel has provided valuable assistance with the code. Drawing support is adapted from the FPDF geometric figures script by David Hernández Sanz. Transparency support is adapted from the FPDF transparency script by Martin Hall-May. Support for gradients and clipping is adapted from FPDF scripts by Andreas Würmser. Support for outline bookmarks is adapted from Olivier Plathey by Manuel Cornes. Layer support is adapted from Olivier Plathey. Support for transformations is adapted from the FPDF transformation script by Moritz Wagner and Andreas Würmser. PDF protection is adapted from the work of Klemen Vodopivec for the FPDF product. Lawrence Kesteloot provided code to allow an image’s extent to be determined prior to placement. Support for vertical alignment within a cell was provided by Stefan Schroeder. Ivan Daniluk generalized the font and image loading code to use the Reader interface while maintaining backward compatibility. Anthony Starks provided code for the Polygon function. Robert Lillack provided the Beziergon function and corrected some naming issues with the internal curve function. Claudio Felber provided implementations for dashed line drawing and generalized font loading. Stani Michiels provided support for multi-segment path drawing with smooth line joins, line join styles, enhanced fill modes, and has helped greatly with package presentation and tests. Templating is adapted by Marcus Downing from the FPDF_Tpl library created by Jan Slabon and Setasign. Jelmer Snoeck contributed packages that generate a variety of barcodes and help with registering images on the web. Jelmer Snoeck and Guillermo Pascual augmented the basic HTML functionality with aligned text. Kent Quirk implemented backwards-compatible support for reading DPI from images that support it, and for setting DPI manually and then having it properly taken into account when calculating image size. Paulo Coutinho provided support for static embedded fonts. Dan Meyers added support for embedded JavaScript. David Fish added a generic alias-replacement function to enable, among other things, table of contents functionality. Andy Bakun identified and corrected a problem in which the internal catalogs were not sorted stably. Paul Montag added encoding and decoding functionality for templates, including images that are embedded in templates; this allows templates to be stored independently of gofpdf. Paul also added support for page boxes used in printing PDF documents. Wojciech Matusiak added support for word spacing. Artem Korotkiy added support for UTF-8 fonts. Dave Barnes added support for imported objects and templates. Brigham Thompson added support for rounded rectangles. Joe Westcott added underline functionality and optimized image storage. Benoit KUGLER contributed support for rectangles with corners of unequal radius, modification times, and for file attachments and annotations. - Remove all legacy code page font support; use UTF-8 exclusively - Improve test coverage as reported by the coverage tool. Example demonstrates the generation of a simple PDF document. Note that since only core fonts are used (in this case Arial, a synonym for Helvetica), an empty string can be specified for the font directory in the call to New(). Note also that the example.Filename() and example.Summary() functions belong to a separate, internal package and are not part of the gofpdf library.
If an error occurs at some point during the construction of the document, subsequent method calls exit immediately and the error is finally retrieved with the output call where it can be handled by the application.
Package multitenancy provides a Go framework for building multi-tenant applications, streamlining tenant management and model migrations. It abstracts multitenancy complexities through a unified, database-agnostic API compatible with GORM. The framework supports two primary multitenancy strategies: The following sections provide an overview of the package's features and usage instructions for implementing multitenancy in Go applications. The package supports multitenancy for both PostgreSQL and MySQL databases, offering three methods for establishing a new database connection with multitenancy support: OpenDB allows opening a database connection using a URL-like DSN string, providing a flexible and easy way to switch between drivers. This method abstracts the underlying driver mechanics, offering a straightforward connection process and a unified, database-agnostic API through the returned *DB instance, which embeds the gorm.DB instance. For constructing the DSN string, refer to the driver-specific documentation for the required parameters and formats. This approach is useful for applications that need to dynamically switch between different database drivers or configurations without changing the codebase. Postgres: MySQL: Open with a supported driver offers a unified, database-agnostic API for managing tenant-specific and shared data, embedding the gorm.DB instance. This method allows developers to initialize and configure the dialect themselves before opening the database connection, providing granular control over the database connection and configuration. It facilitates seamless switching between database drivers while maintaining access to GORM's full functionality. Postgres: MySQL: For direct access to the gorm.DB API and multitenancy features for specific tasks, this approach allows invoking driver-specific functions directly. It provides a lower level of abstraction, requiring manual management of database-specific operations. Prior to version 8, this was the only method available for using the package. Postgres: MySQL: All models must implement driver.TenantTabler, which extends GORM's Tabler interface. This extension allows models to define their table name and indicate whether they are shared across tenants. Public Models: These are models which are shared across tenants. driver.TenantTabler.TableName should return the table name prefixed with 'public.'. driver.TenantTabler.IsSharedModel should return true. Tenant-Specific Models: These models are specific to a single tenant and should not be shared across tenants. driver.TenantTabler.TableName should return the table name without any prefix. driver.TenantTabler.IsSharedModel should return false. Tenant Model: This package provides a TenantModel struct that can be embedded in any public model that requires tenant scoping, enriching it with essential fields for managing tenant-specific information. This structure incorporates fields for the tenant's domain URL and schema name, facilitating the linkage of tenant-specific models to their respective schemas. Before performing any migrations or operations on tenant-specific models, the models must be registered with the DB instance using DB.RegisterModels. Postgres Adapter: Use postgres.RegisterModels to register models. MySQL Adapter: Use mysql.RegisterModels to register models. To ensure data integrity and schema isolation across tenants, gorm.DB.AutoMigrate has been disabled. Instead, use the provided shared and tenant-specific migration methods.
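Before moving on to migrations, here is a rough sketch of the model rules above. The Tenant and Book types are illustrative, the multitenancy import path is a placeholder, and the exact embedding details should be verified against the driver documentation.

```go
package models

import (
	"gorm.io/gorm"

	multitenancy "example.com/your/multitenancy/module" // placeholder: use the package's real import path
)

// Tenant is a public (shared) model. Embedding TenantModel adds the
// domain URL and schema name fields described above.
type Tenant struct {
	multitenancy.TenantModel
}

func (Tenant) TableName() string   { return "public.tenants" } // shared tables carry the 'public.' prefix
func (Tenant) IsSharedModel() bool { return true }

// Book is tenant-specific: unprefixed table name, not shared across tenants.
type Book struct {
	gorm.Model
	Title string
}

func (Book) TableName() string   { return "books" }
func (Book) IsSharedModel() bool { return false }
```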
driver.ErrInvalidMigration is returned if the `AutoMigrate` method is called directly. To ensure tenant isolation and coordinate concurrent migrations, driver-specific locking mechanisms are used. These locks prevent concurrent migrations from interfering with each other, ensuring that only one migration process can run at a time for a given tenant. Consult the driver-specific documentation for more information. After registering models, shared models are migrated using DB.MigrateSharedModels. Postgres Adapter: Use postgres.MigrateSharedModels to migrate shared models. MySQL Adapter: Use mysql.MigrateSharedModels to migrate shared models. After registering models, tenant-specific models are migrated using DB.MigrateTenantModels. Postgres Adapter: Use postgres.MigrateTenantModels to migrate tenant-specific models. MySQL Adapter: Use mysql.MigrateTenantModels to migrate tenant-specific models. When a tenant is removed from the system, the tenant-specific schema and associated tables should be cleaned up using DB.OffboardTenant. Postgres Adapter: Use postgres.DropSchemaForTenant to offboard a tenant. MySQL Adapter: Use mysql.DropDatabaseForTenant to offboard a tenant. DB.UseTenant configures the database for operations specific to a tenant, abstracting database-specific operations for tenant context configuration. This method returns a reset function to revert the database context and an error if the operation fails. Postgres Adapter: Use postgres.SetSearchPath to set the search path for a tenant. MySQL Adapter: Use the mysql.UseDatabase function to set the database for a tenant. For the most part, foreign key constraints work as expected, but there are some restrictions and considerations to keep in mind when working with multitenancy. The following guidelines outline the supported relationships between tables: Between Tables in the Public Schema: From Public Schema Tables to Tenant-Specific Tables: From Tenant-Specific Tables to Public Schema Tables: Between Tenant-Specific Tables: See the example application for a more comprehensive demonstration of the framework's capabilities. Always sanitize input to prevent SQL injection vulnerabilities. This framework does not perform any validation on the database name or schema name parameters. It is the responsibility of the caller to ensure that these parameters are sanitized. To facilitate this, the framework provides the following utilities: For a detailed technical overview of SQL design strategies adopted by the framework, see the STRATEGY.md file.
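Finally, a hedged sketch of the tenant-context flow described above (UseTenant returning a reset function). It assumes a *multitenancy.DB handle opened as in the earlier connection examples and the Book model from the previous sketch; the exact UseTenant signature is assumed, not taken from this document.

```go
// createTenantBook switches the connection into a tenant's context, runs
// tenant-scoped work, and reverts the context on return.
func createTenantBook(ctx context.Context, db *multitenancy.DB) error {
	reset, err := db.UseTenant(ctx, "tenant1") // signature assumed
	if err != nil {
		return err
	}
	defer reset()

	// Operations here run against tenant1's schema/database, e.g. creating
	// a record for the tenant-specific Book model.
	return db.Create(&Book{Title: "A Tenant's Tale"}).Error
}
```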