Package redblacktree provides a pure Go implementation of a red-black tree as described by Thomas H. Cormen et al. in their seminal book Introduction to Algorithms (3rd ed.). This data structure is not multi-goroutine safe.
Package suture provides Erlang-like supervisor trees. This implements Erlang-esque supervisor trees, as adapted for Go. This is intended to be an industrial-strength implementation, but it has not yet been deployed in a hostile environment. (It's headed there, though.) Supervisor Tree -> SuTree -> suture -> holds your code together when it's trying to fall apart. Why use Suture? Suture has 100% test coverage, and is golint clean. This doesn't prove it free of bugs, but it shows I care. A blog post describing the design decisions is available at http://www.jerf.org/iri/post/2930 . To idiomatically use Suture, create a Supervisor which is your top level "application" supervisor. This will often occur in your program's "main" function. Create "Service"s, which implement the Service interface. .Add() them to your Supervisor. Supervisors are also services, so you can create a tree structure here, depending on the exact combination of restarts you want to create. As a special case, when adding Supervisors to Supervisors, the "sub" supervisor will have the "super" supervisor's Log function copied. This allows you to set one log function on the "top" supervisor, and have it propagate down to all the sub-supervisors. This also allows libraries or modules to provide Supervisors without having to commit their users to a particular logging method. Finally, as what is probably the last line of your main() function, call .Serve() on your top level supervisor. This will start all the services you've defined. See the Example for an example, using a simple service that serves out incrementing integers.
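As a sketch of that workflow, the following is a minimal service plus supervisor. It is illustrative only: the Serve()/Stop() method set shown follows the classic suture API (newer major versions use a context-based Service interface), and the Incrementor type and its fields are invented for the example.

	package main

	import (
		"fmt"

		"github.com/thejerf/suture"
	)

	// Incrementor serves out incrementing integers until told to stop.
	type Incrementor struct {
		current int
		next    chan int
		stop    chan struct{}
	}

	func (i *Incrementor) Serve() {
		for {
			select {
			case i.next <- i.current:
				i.current++
			case <-i.stop:
				return
			}
		}
	}

	func (i *Incrementor) Stop() { close(i.stop) }

	func main() {
		supervisor := suture.NewSimple("Supervisor") // top-level "application" supervisor
		service := &Incrementor{next: make(chan int), stop: make(chan struct{})}
		supervisor.Add(service)

		// Serve() is normally the last line of main() and blocks; it runs in a
		// goroutine here only so the example can read a few values and exit.
		go supervisor.Serve()

		fmt.Println(<-service.next) // 0
		fmt.Println(<-service.next) // 1
	}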
Package gographviz provides parsing for the DOT grammar into an abstract syntax tree representing a graph, analysis of the abstract syntax tree into a more usable structure, and writing back of this structure into the DOT format.
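A minimal sketch of that parse/analyse/write-back cycle is below. The function names (ParseString, NewGraph, Analyse) follow the project README; verify them against the version you use.

	package main

	import (
		"fmt"

		"github.com/awalterschulze/gographviz"
	)

	func main() {
		// Parse DOT source into an abstract syntax tree.
		ast, err := gographviz.ParseString(`digraph G { a -> b; }`)
		if err != nil {
			panic(err)
		}

		// Analyse the AST into the more usable Graph structure.
		graph := gographviz.NewGraph()
		if err := gographviz.Analyse(ast, graph); err != nil {
			panic(err)
		}

		// Write the structure back out as DOT.
		fmt.Println(graph.String())
	}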
Package reflow implements the core data structures and (abstract) runtime for Reflow. Reflow is a system for distributed program execution. The programs are described by Flows, which are an abstract specification of the program's execution. Each Flow node can take any number of other Flows as dependent inputs and perform some (local) execution over these inputs in order to compute some output value. Reflow supports a limited form of dynamic dependencies: a Flow may evaluate to a list of values, each of which may be executed independently. This mechanism also provides parallelism. The system orchestrates Flow execution by evaluating the flow in the manner of an abstract syntax tree; see Eval for more details.
Package errortree provides primitives for working with errors in a tree structure. errortree is intended to be used in places where errors are generated from an arbitrary tree structure, such as the validation of a configuration file. This allows additional context about why an error happened to be added in a clean and structured way. errortree fully supports nesting of multiple trees, including simplified retrieval of errors, which, among other things, should help remove repeated boilerplate code from unit tests.
Package sops manages JSON, YAML and BINARY documents to be encrypted or decrypted. This package should not be used directly. Instead, Sops users should install the command line client via `go get -u github.com/getsops/sops/v3/cmd/sops`, or use the decryption helper provided at `github.com/getsops/sops/v3/decrypt`. We do not guarantee API stability for any package other than `github.com/getsops/sops/v3/decrypt`. A Sops document is a Tree composed of a data branch with arbitrary key/value pairs and a metadata branch with encryption and integrity information. In JSON and YAML formats, the structure of the cleartext tree is preserved: keys are stored in cleartext and only values are encrypted. Keeping the keys in cleartext provides better readability when storing Sops documents in version control, and allows for merging competing changes on documents. This is a major difference between Sops and other encryption tools that store documents as encrypted blobs. In BINARY format, the cleartext data is treated as a single blob and the encrypted document is in JSON format with a single `data` key and a single encrypted value. Sops allows operators to encrypt their documents with multiple master keys. Each of the master keys defined in the document is able to decrypt it, allowing users to share documents amongst themselves without sharing keys, or using a PGP key as a backup for KMS. In practice, this is achieved by generating a data key for each document that is used to encrypt all values, and encrypting that data key with each master key defined. Being able to decrypt the data key gives access to the document. The integrity of each document is guaranteed by calculating a Message Authentication Code (MAC) that is stored encrypted by the data key. When decrypting a document, the MAC should be recalculated and compared with the MAC stored in the document to verify that no fraudulent changes have been applied. The MAC covers keys and values as well as their ordering.
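For the common read-side case, the decrypt helper mentioned above can be used directly. A minimal sketch, assuming an encrypted file named secrets.enc.yaml (the filename is made up for the example):

	package main

	import (
		"fmt"

		"github.com/getsops/sops/v3/decrypt"
	)

	func main() {
		// decrypt.File loads an encrypted Sops document and returns the
		// cleartext; the second argument names the input format.
		cleartext, err := decrypt.File("secrets.enc.yaml", "yaml")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", cleartext)
	}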
Package gtree provides tree-structured output and the creation of directories from a tree structure.
Package guinea is a command line interface library. This library operates on a tree-like structure of available commands. In the following example we define a root command with two subcommands. It will most likely be best to define the commands as global variables in your package. After defining the commands, use the run function to execute them. The library will read os.Args to determine which command should be executed and to populate the context passed to it with options and arguments. The user can invoke a program in multiple ways. To let the user call a command with arguments or options, populate the proper lists in the command struct. If you wish to parse the arguments in a different way, simply don't define any options or arguments in the command struct and pass the arguments from the context to your own parsing function.
Package mafsa implements Minimal Acyclic Finite State Automata (MA-FSA) in a space-optimized way as described by Daciuk, Mihov, Watson, and Watson in their paper, "Incremental Construction of Minimal Acyclic Finite-State Automata" (2000). It also implements Minimal Perfect Hashing (MPH) as described by Lucchesi and Kowaltowski in their paper, "Applications of Finite Automata Representing Large Vocabularies" (1992). Unscientifically speaking, this package lets you store large amounts of strings (with Unicode) in memory so that membership queries, prefix lookups, and fuzzy searches are fast. And because minimal perfect hashing is included, you can associate each entry in the tree with more data used by your application. See the README or the end of this documentation for a brief tutorial. MA-FSA structures are a specific type of Deterministic Acyclic Finite State Automaton (DAFSA) which fold equivalent state transitions into each other starting from the suffix of each entry. Typical construction algorithms involve building out the entire tree first, then minimizing the completed tree. However, the method described in the paper above allows the tree to be minimized after every word insertion, provided the insertions are performed in lexicographical order, which drastically reduces memory usage compared to regular prefix trees ("tries"). The goal of this package is to provide a simple, useful, and correct implementation of MA-FSA. Though more complex algorithms exist for removal of items and unordered insertion, these features may be outside the scope of this package. Membership queries are on the order of O(n), where n is the length of the input string, so basically O(1). It is advisable to keep n small since long entries without much in common, especially in the beginning or end of the string, will quickly overrun the optimizations that are available. In those cases, n-gram implementations might be preferable, though these will use more CPU. This package provides two kinds of MA-FSA implementations. One, the BuildTree, facilitates the construction of an optimized tree and allows ordered insertions. The other, MinTree, is effectively read-only but uses significantly less memory and is ideal for production environments where only reads will be occurring. Usually your build process will be separate from your production application, which will make heavy use of reading the structure. To use this package, create a BuildTree and insert your items in lexicographical order: The tree is now compressed to a minimum number of nodes and is ready to be saved. In your production application, then, you can read the file into a MinTree directly: The mt variable is a *MinTree which has the same data as the original BuildTree, but without all the extra "scaffolding" that was required for adding new elements. The package provides some basic read mechanisms.
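The build/read split described above looks roughly like this. The identifiers (New, Insert, Finish, Save, Load, Contains) follow the package README and should be treated as illustrative; the file name is made up.

	package main

	import (
		"fmt"

		"github.com/smartystreets/mafsa"
	)

	func main() {
		// Build phase: insert entries in lexicographical order, then minimize.
		bt := mafsa.New()
		for _, w := range []string{"cities", "city", "pities", "pity"} {
			if err := bt.Insert(w); err != nil { // out-of-order inserts return an error
				panic(err)
			}
		}
		bt.Finish() // compress the tree to its minimal form

		if err := bt.Save("words.mafsa"); err != nil {
			panic(err)
		}

		// Production phase: load the compact, read-only MinTree and query it.
		mt, err := mafsa.Load("words.mafsa")
		if err != nil {
			panic(err)
		}
		fmt.Println(mt.Contains("city")) // true
	}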
Package gochrome aims to be a complete Chrome DevTools Protocol Viewer implementation. Versioned packages are available. Currently the only version is `tot` or Tip-of-Tree. Stable versions will be made available in the future. This is beta software and hasn't been well exercised in real-world applications. See https://chromedevtools.github.io/devtools-protocol/ The Chrome DevTools Protocol allows for tools to instrument, inspect, debug and profile Chromium, Chrome and other Blink-based browsers. Many existing projects currently use the protocol. The Chrome DevTools uses this protocol and the team maintains its API. Instrumentation is divided into a number of domains (DOM, Debugger, Network etc.). Each domain defines a number of commands it supports and events it generates. Both commands and events are serialized JSON objects of a fixed structure. You can either debug over the wire using the raw messages as they are described in the corresponding domain documentation, or use the extension JavaScript API. The latest (tip-of-tree) protocol (tot) changes frequently and can break at any time. However it captures the full capabilities of the Protocol, whereas the stable release is a subset. There is no backwards compatibility support guaranteed for the capabilities it introduces. Resources Basics: Using DevTools as protocol client The Developer Tools front-end can attach to a remotely running Chrome instance for debugging. For this scenario to work, you should start your host Chrome instance with the remote-debugging-port command line switch: Then you can start a separate client Chrome instance, using a distinct user profile: Now you can navigate to the given port from your client and attach to any of the discovered tabs for debugging: http://localhost:9222 You will find the Developer Tools interface identical to the embedded one and here is why: In this scenario, you can substitute the Developer Tools front-end with your own implementation. Instead of navigating to the HTML page at http://localhost:9222, your application can discover available pages by requesting: http://localhost:9222/json and getting a JSON object with information about inspectable pages along with the WebSocket addresses that you could use in order to start instrumenting them. Remote debugging is especially useful when debugging remote instances of the browser or attaching to embedded devices. Blink port owners are responsible for exposing debugging connections to external users. This is especially handy to understand how the DevTools frontend makes use of the protocol. First, run Chrome with the debugging port open: Then, select the Chromium Projects item in the Inspectable Pages list. Now that DevTools is up and fullscreen, open DevTools to inspect it. Cmd-R in the new inspector to make the first restart. Now head to the Network panel, filter by WebSocket, select the connection and click the Frames tab. Now you can easily see the frames of WebSocket activity as you use the first instance of the DevTools. To allow chrome extensions to interact with the protocol, we introduced the chrome.debugger extension API that exposes this JSON message transport interface. As a result, you can not only attach to the remotely running Chrome instance, but also instrument it from its own extension. The Chrome Debugger Extension API provides a higher-level API where command domain, name and body are provided explicitly in the `sendCommand` call.
This API hides request ids and handles binding of the request with its response, hence allowing `sendCommand` to report the result in the callback function call. One can also use this API in combination with the other Extension APIs. If you are developing a Web-based IDE, you should implement an extension that exposes debugging capabilities to your page, and your IDE will be able to open pages with the target application, set breakpoints there, evaluate expressions in console, live edit JavaScript and CSS, display live DOM, network interaction and any other aspect that Developer Tools is instrumenting today. Opening embedded Developer Tools will terminate the remote connection and thus detach the extension. https://chromedevtools.github.io/devtools-protocol/#simultaneous The canonical protocol definitions live in the Chromium source tree: (browser_protocol.json and js_protocol.json). They are maintained manually by the DevTools engineering team. These files are mirrored (hourly) on GitHub in the devtools-protocol repo. The declarative protocol definitions are used across tools. Within Chromium, a binding layer is created for the Chrome DevTools to interact with, and separately the protocol is used for Chrome Headless’s C++ interface. What’s the protocol_externs file? It’s created via generate_protocol_externs.py and is useful for tools using the Closure Compiler. The TypeScript story is here. Not yet. See bugger-daemon’s third-party docs. See also the endpoints implementation in Chromium. /json/protocol was added in Chrome 60. The endpoint is exposed as webSocketDebuggerUrl in /json/version. Note the browser in the URL, rather than page. If Chrome was launched with --remote-debugging-port=0 and chose an open port, the browser endpoint is written to both stderr and the DevToolsActivePort file in the browser profile folder. Yes, as of Chrome 63! See Multi-client remote debugging support. Upon disconnection, the outgoing client will receive a detached event. For example: View the enum of possible reasons. (For reference: the original patch). After disconnection, some apps have chosen to pause their state and offer a reconnect button.
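The page-discovery step described above (requesting http://localhost:9222/json) is easy to reproduce with the standard library alone; the sketch below assumes Chrome is already running with --remote-debugging-port=9222, and decodes only a few of the returned fields.

	package main

	import (
		"encoding/json"
		"fmt"
		"net/http"
	)

	// target mirrors a subset of the fields returned by the /json endpoint.
	type target struct {
		Title                string `json:"title"`
		URL                  string `json:"url"`
		WebSocketDebuggerURL string `json:"webSocketDebuggerUrl"`
	}

	func main() {
		resp, err := http.Get("http://localhost:9222/json")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		var targets []target
		if err := json.NewDecoder(resp.Body).Decode(&targets); err != nil {
			panic(err)
		}
		for _, t := range targets {
			// The WebSocket address is what a protocol client attaches to.
			fmt.Printf("%s (%s)\n  ws: %s\n", t.Title, t.URL, t.WebSocketDebuggerURL)
		}
	}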
Package jin Copyright (c) 2020 eco. License that can be found in the LICENSE file. "your wish is my command" Jin is a comprehensive JSON manipulation tool bundle. All functions are tested with random data with the help of Node.js. All test-path and test-value creation is automated with Node.js. Jin provides parse, interpret, build and format tools for JSON. Third-party packages are only used for the benchmark. No dependencies are needed for the core functions. We benchmarked Jin against other similar packages. In the results, Jin is the fastest (op/ns) and more memory-friendly than the others (B/op). For more information please take a look at the BENCHMARK section below. WHAT IS NEW? 7 new functions tested and added to the package. - GetMap() gets objects as a map[string]string structure with key/value pairs - GetAll() gets only specific keys' values - GetAllMap() gets only specific keys with a map[string]string structure - GetKeys() gets an object's keys as a string array - GetValues() gets an object's values as a string array - GetKeysValues() gets an object's keys and values as separate string arrays - Length() gets the length of a JSON array. 06.04.2020 INSTALLATION And you are good to go. Import and start using. The major difference between parsing and interpreting is that the parser has to read all the data before answering your needs. On the other hand, the interpreter reads only as far as it needs to find the data you asked for. With the parser, once the parse is complete you can access any data in no time. But there is a time cost to parse all the data, and this cost can increase as the data content grows. If you need to access all keys of a JSON document, then we simply recommend you use the Parser. But if you need to access only some keys of a JSON document, then we strongly recommend the Interpreter; it will be much faster and much more memory-friendly than the parser. The Interpreter is the core element of this package; there is no need to create an Interpreter type, just call whichever function you want. First let's look at the general function parameters. We are going to use the Get() function to access the value that the path points to. In this case 'jin'. A path can consist of hard-coded values. The Get() function's return type is []byte, but all other variations of return types are implemented as different functions. For example, if you need "value" as a string, use GetString(). Parser is another alternative for JSON manipulation. We recommend this structure when you need to access all or most of the keys in the JSON document. The Parser constructor needs only one parameter. We can parse it with the Parse() function. Let's look at Parser.Get(). About the path value, see above. All return type variations of Parser.Get() exist, just like for the Interpreter. To return a string use Parser.GetString() like this, Other useful functions of Jin: Add(), AddKeyValue(), Set(), SetKey(), Delete(), Insert(), IterateArray(), IterateKeyValue(), Tree(). Let's look at the IterateArray() function. There are two formatting functions, Flatten() and Indent(). Indent() adds indentation to JSON for nicer visualization and Flatten() removes this indentation. Control functions are a simple and easy way to check the value type at any path. For example, IsArray(). Or you can use GetType(). There are lots of JSON build functions in this package and all of them have their own examples. We just want to mention a couple of them. Scheme is a simple and powerful tool for creating JSON schemes. Testing is very important for this type of package and it shows how reliable the package is. For that reason we use Node.js for unit testing. Let's look at the folder arrangement and working principle.
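A small sketch of the Get()/GetString() calls described above; the signatures shown follow the project README (a []byte document plus a variadic path) and should be verified against the version you use.

	package main

	import (
		"fmt"

		"github.com/ecoshub/jin"
	)

	func main() {
		doc := []byte(`{"user":{"name":"jin","ids":[1,2,3]}}`)

		// Interpreter-style access: read just the value the path points to.
		raw, err := jin.Get(doc, "user", "name")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(raw)) // jin

		// Typed variant of the same lookup.
		name, err := jin.GetString(doc, "user", "name")
		if err != nil {
			panic(err)
		}
		fmt.Println(name) // jin
	}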
- test/ folder: test-json.json, this is a temporary file for testing. All other test cases are copied here under this name so they can be processed by test-case-creator.js. test-case-creator.js is the core path & value creation mechanism. When it is executed with the executeNode() function, it reads the test-json.json file and generates the paths and values from this file's content. With command line arguments it can generate different paths and values. As a result, two files are created by this process: the first of these files is test-json-paths.json and the second is test-json-values.json. test-json-paths.json has all the path values. test-json-values.json has all the values corresponding to those path values. - tests/ folder: every file in this folder is a test case. But that doesn't mean you can't change anything; on the contrary, all test cases are created automatically based on this folder's content. You can add or remove any .json file that you want. All Go-side test-case automation functions are in the core_test.go file. This package was developed with Node.js v13.7.0; please make sure that your machine has a valid version of Node.js before testing. All functions and methods are tested with complicated, randomly generated .json files. Like this, Most JSON packages do not even run properly with this kind of JSON stream. We didn't see such packages as competitors to ourselves, and that's why we didn't even bother to benchmark against them. Benchmark results. - The Benchmark prefix is removed from function names to make room for the results. - The benchmark between 'buger/jsonparser' and 'ecoshub/jin' uses the same payload (JSON test cases) that the 'buger/jsonparser' package uses to benchmark itself. We are currently working on: - Marshal() and Unmarshal() functions. - an http.Request parser/interpreter - Builder functions for http.ResponseWriter If you want to contribute to this work, feel free to fork it. We want to fill this section with contributors.
Package gcnotifier provides a way to receive notifications after every garbage collection (GC) cycle. This can be useful, in long-running programs, to instruct your code to free additional memory resources that you may be using. A common use case for this is when you have custom data structures (e.g. buffers, caches, rings, trees, pools, ...): instead of setting a maximum size to your data structure you can leave it unbounded and then drop all (or some) of the allocated-but-unused slots after every GC run (e.g. sync.Pool drops all allocated-but-unused objects in the pool during GC). To minimize the load on the GC the code that runs after receiving the notification should try to avoid allocations as much as possible, or at the very least make sure that the amount of new memory allocated is significantly smaller than the amount of memory that has been "freed" in response to the notification. GCNotifier guarantees to send a notification after every GC cycle completes. Note that the Go runtime does not guarantee that the GC will run: specifically there is no guarantee that a GC will run before the program terminates. Example implements a simple time-based buffering io.Writer: data sent over dataCh is buffered for up to 100ms, then flushed out in a single call to out.Write and the buffer is reused. If GC runs, the buffer is flushed and then discarded so that it can be collected during the next GC run. The example is necessarily simplistic, a real implementation would be more refined (e.g. on GC flush or resize the buffer based on a threshold, perform asynchronous flushes, properly signal completions and propagate errors, adaptively preallocate the buffer based on the previous capacity, etc.)
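A minimal sketch of the notification loop; the New/AfterGC/Close names follow the project README and are illustrative. runtime.GC() is only forced here so the example terminates.

	package main

	import (
		"fmt"
		"runtime"

		"github.com/CAFxX/gcnotifier"
	)

	func main() {
		gcn := gcnotifier.New()
		defer gcn.Close()

		done := make(chan struct{})
		go func() {
			<-gcn.AfterGC() // one notification per completed GC cycle
			// Drop allocated-but-unused resources here, avoiding new allocations.
			fmt.Println("GC cycle completed")
			close(done)
		}()

		runtime.GC()
		<-done
	}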
Implementation of an R-Way Trie data structure. A Trie has a root Node which is the base of the tree. Each subsequent Node has a letter and children, which are nodes that have letter values associated with them.
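A short sketch of typical use; the import path and method names (Add, Find, Meta, PrefixSearch) follow one common R-Way trie package and are illustrative, so adapt them to the implementation you are actually using.

	package main

	import (
		"fmt"

		"github.com/derekparker/trie"
	)

	func main() {
		t := trie.New()
		t.Add("cat", 1) // each key is stored letter by letter down the tree
		t.Add("car", 2)
		t.Add("dog", 3)

		if node, ok := t.Find("car"); ok {
			fmt.Println("found car, meta:", node.Meta())
		}
		fmt.Println(t.PrefixSearch("ca")) // keys sharing the "ca" prefix
	}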
Package gtree provides tree-structured output.
Package ogdl is used to process OGDL, the Ordered Graph Data Language. OGDL is a textual format to write trees or graphs of text, where indentation and spaces define the structure. Here is an example: The language is simple, either in its textual representation or its number of productions (the specification rules), allowing for compact implementations. OGDL character streams are normally formed by Unicode characters, and encoded as UTF-8 strings, but any encoding that is ASCII transparent is compatible with the specification. See the full spec at http://ogdl.org. To install this package just do: If we have a text file 'config.ogdl' containing: then, will print If the timeout parameter was not present, then the default value (60) will be assigned to 'to'. The default value is optional, but be aware that Int64() will return 0 in case the parameter doesn't exist. The configuration file can be written in a more concise way: The package includes a template processor. It takes an arbitrary input stream with some variables in it, and produces an output stream with the variables resolved out of a Graph object which acts as context. For example (given the previous config file): string(b) is then: Some rules are followed:
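The configuration example above is elided in this text; the following sketch shows its general shape under an assumed config.ogdl file. The FromFile, Get, String and Int64 names follow the package documentation and the file contents are invented, so treat the details as illustrative.

	package main

	import (
		"fmt"

		"github.com/rveen/ogdl"
	)

	func main() {
		// Assume config.ogdl contains:
		//
		//   eth0
		//     ip      192.168.1.1
		//     timeout 20
		g := ogdl.FromFile("config.ogdl")

		ip := g.Get("eth0.ip").String()
		to := g.Get("eth0.timeout").Int64(60) // 60 is the default if the parameter is absent

		fmt.Println("ip:", ip, "timeout:", to)
	}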
Package itc implements the interval tree clock as described in the paper 'Interval Tree Clocks: A Logical Clock for Dynamic Systems' by Paulo Sergio Almeida, Carlos Baquero and Victor Fonte. (http://gsd.di.uminho.pt/members/cbm/ps/itc2008.pdf) Causality tracking mechanisms can be modeled by a set of core operations: fork; event and join, that act on stamps (logical clocks) whose structure is a pair (i, e), formed by an id and an event component that encodes causally known events.
The rbxfile package handles the decoding, encoding, and manipulation of Roblox instance data structures. This package can be used to manipulate Roblox instance trees outside of the Roblox client. Such data structures begin with a Root struct. A Root contains a list of child Instances, which in turn contain more child Instances, and so on, forming a tree of Instances. These Instances can be accessed and manipulated using an API similar to that of Roblox. Each Instance also has a set of "properties". Each property has a specific value of a certain type. Every available type implements the Value interface, and is prefixed with "Value". Root structures can be decoded from and encoded to various formats, including Roblox's native file formats. The two sub-packages "bin" and "xml" provide formats for Roblox's binary and XML formats. Root structures can also be encoded and decoded with the "json" package. Besides decoding from a format, root structures can also be created manually. The best way to do this is through the "declare" sub-package, which provides an easy way to generate root structures.
Package antlr implements the Go version of the ANTLR 4 runtime. ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build parse trees and also generates a listener interface (or visitor) that makes it easy to respond to the recognition of phrases of interest. At version 4.11.x and prior, the Go runtime was not properly versioned for go modules. After this point, the runtime source code to be imported was held in the `runtime/Go/antlr/v4` directory, and the go.mod file was updated to reflect the version of ANTLR4 that it is compatible with (I.E. uses the /v4 path). However, this was found to be problematic, as it meant that with the runtime embedded so far underneath the root of the repo, the `go get` and related commands could not properly resolve the location of the go runtime source code. This meant that the reference to the runtime in your `go.mod` file would refer to the correct source code, but would not list the release tag such as @4.13.1 - this was confusing, to say the least. As of 4.13.0, the runtime is now available as a go module in its own repo, and can be imported as `github.com/antlr4-go/antlr` (the go get command should also be used with this path). See the main documentation for the ANTLR4 project for more information, which is available at ANTLR docs. The documentation for using the Go runtime is available at Go runtime docs. This means that if you are using the source code without modules, you should also use the source code in the new repo. Though we highly recommend that you use go modules, as they are now idiomatic for Go. I am aware that this change will prove Hyrum's Law, but am prepared to live with it for the common good. Go runtime author: Jim Idle jimi@idle.ws ANTLR supports the generation of code in a number of target languages, and the generated code is supported by a runtime library, written specifically to support the generated code in the target language. This library is the runtime for the Go target. To generate code for the go target, it is generally recommended to place the source grammar files in a package of their own, and use the `.sh` script method of generating code, using the go generate directive. In that same directory it is usual, though not required, to place the antlr tool that should be used to generate the code. That does mean that the antlr tool JAR file will be checked in to your source code control though, so you are, of course, free to use any other way of specifying the version of the ANTLR tool to use, such as aliasing in `.zshrc` or equivalent, or a profile in your IDE, or configuration in your CI system. Checking in the jar does mean that it is easy to reproduce the build as it was at any point in its history. Here is a general/recommended template for an ANTLR based recognizer in Go: Make sure that the package statement in your grammar file(s) reflects the go package the generated code will exist in. The generate.go file then looks like this: And the generate.sh file will look similar to this: depending on whether you want visitors or listeners or any other ANTLR options. 
Note that another option here is to generate the code into a From the command line at the root of your source package (location of go.mod) you can then simply issue the command: Which will generate the code for the parser, and place it in the parsing package. You can then use the generated code by importing the parsing package. There are no hard and fast rules on this. It is just a recommendation. You can generate the code in any way and to anywhere you like. Copyright (c) 2012-2023 The ANTLR Project. All rights reserved. Use of this file is governed by the BSD 3-clause license, which can be found in the LICENSE.txt file in the project root.
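The generate.go file referred to above is not reproduced in this text; the sketch below is consistent with the described template, with the package name parser used as an assumption. The companion generate.sh usually wraps an invocation of the ANTLR tool, such as a java -cp call against the checked-in jar with -Dlanguage=Go and your choice of -visitor/-listener options.

	// generate.go — placed alongside the grammar files in the parser package.
	// The go:generate directive delegates code generation to the shell script
	// that runs the ANTLR tool with the Go target.
	package parser

	//go:generate ./generate.sh

Running go generate ./... from the directory containing go.mod is presumably the command the text refers to; it regenerates the recognizer code in place.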
This package is the root package of the govmomi library. The library is structured as follows: The minimal usable functionality is available through the vim25 package. It contains subpackages that contain generated types, managed objects, and all available methods. The vim25 package is entirely independent of the other packages in the govmomi tree -- it has no dependencies on its peers. The vim25 package itself contains a client structure that is passed around throughout the entire library. It abstracts a session and its immutable state. See the vim25 package for more information. The session package contains an abstraction for the session manager that allows a user to log in and log out. It also provides access to the current session (i.e. to determine if the user is in fact logged in). The object package contains wrappers for a selection of managed objects. The constructors of these objects all take a *vim25.Client, which they pass along to derived objects, if applicable. The govc package contains the govc CLI. The code in this tree is not intended to be used as a library. Any functionality that govc contains that _could_ be used as a library function but isn't, _should_ live in a root level package. Other packages, such as "event", "guest", or "license", provide wrappers for the respective subsystems. They are typically not needed in normal workflows so are kept outside the object package.
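As a hedged sketch of getting a logged-in client, the snippet below uses govmomi.NewClient, which wraps the vim25 client and the session login described above; the vCenter URL is a placeholder.

	package main

	import (
		"context"
		"fmt"
		"net/url"

		"github.com/vmware/govmomi"
	)

	func main() {
		ctx := context.Background()

		u, err := url.Parse("https://user:pass@vcenter.example.com/sdk")
		if err != nil {
			panic(err)
		}

		// NewClient creates a vim25 client and logs in via the session manager.
		c, err := govmomi.NewClient(ctx, u, true) // true: skip TLS verification
		if err != nil {
			panic(err)
		}

		// The embedded vim25 client is what the rest of the library consumes.
		fmt.Println("connected to:", c.Client.URL().Host)
	}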
Package toml is a TOML parser and manipulation library. This version supports the specification as described in https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.5.0.md Go-toml can marshal and unmarshal TOML documents from and to data structures. Go-toml can operate on a TOML document as a tree. Use one of the Load* functions to parse TOML data and obtain a Tree instance, then use one of its methods to manipulate the tree. The package github.com/x-hgg-x/go-toml/query implements a system similar to JSONPath to quickly retrieve elements of a TOML document using a single expression. See the package documentation for more information.
Package civil implements types for civil time, a time-zone-independent representation of time that follows the rules of the proleptic Gregorian calendar with exactly 24-hour days, 60-minute hours, and 60-second minutes. Because they lack location information, these types do not represent unique moments or intervals of time. Use time.Time for that purpose.
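Returning to go-toml: a minimal sketch of the Load*/Tree workflow described above. The import path shown is the upstream pelletier/go-toml (the text references a fork that shares its API), and the document contents are invented.

	package main

	import (
		"fmt"

		"github.com/pelletier/go-toml"
	)

	func main() {
		tree, err := toml.Load(`
	[postgres]
	user = "pelletier"
	password = "mypassword"
	`)
		if err != nil {
			panic(err)
		}

		// Values are retrieved from the Tree by dotted path.
		user := tree.Get("postgres.user").(string)
		fmt.Println(user) // pelletier
	}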
Package getfilelist (getfilelist.go): This is a Golang library to retrieve the file list with the folder structure (folder tree) from a specific folder of Google Drive. Install it with: $ go get -u github.com/tanaikech/go-getfilelist More information is available at https://github.com/tanaikech/go-getfilelist
Package btree implements in-memory B-Trees of arbitrary degree. btree implements an in-memory B-Tree for use as an ordered data structure. It is not meant for persistent storage solutions. It has a flatter structure than an equivalent red-black or other binary tree, which in some cases yields better memory usage and/or performance. See some discussion on the matter here: Note, though, that this project is in no way related to the C++ B-Tree implementation written about there. Within this tree, each node contains a slice of items and a (possibly nil) slice of children. For basic numeric values or raw structs, this can cause efficiency differences when compared to equivalent C++ template code that stores values in arrays within the node: These issues don't tend to matter, though, when working with strings or other heap-allocated structures, since C++-equivalent structures also must store pointers and also distribute their values across the heap. This implementation is designed to be a drop-in replacement to gollrb.LLRB trees, (http://github.com/petar/gollrb), an excellent and probably the most widely used ordered tree implementation in the Go ecosystem currently. Its functions, therefore, exactly mirror those of llrb.LLRB where possible. Unlike gollrb, though, we currently don't support storing multiple equivalent values.
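A quick sketch with the classic (pre-generics) API, in which items implement Less and btree.Int is a ready-made int item type; newer releases also offer generic constructors, so treat this as one illustrative form.

	package main

	import (
		"fmt"

		"github.com/google/btree"
	)

	func main() {
		tr := btree.New(32) // degree 32; each node holds a slice of items and children
		for i := 0; i < 10; i++ {
			tr.ReplaceOrInsert(btree.Int(i))
		}

		fmt.Println(tr.Len())             // 10
		fmt.Println(tr.Has(btree.Int(3))) // true
		fmt.Println(tr.Max())             // 9
	}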
Package fenwick provides a list data structure supporting prefix sums. A Fenwick tree, or binary indexed tree, is a space-efficient list data structure that can efficiently update elements and calculate prefix sums in a list of numbers. Compared to a common array, a Fenwick tree achieves better balance between element update and prefix sum calculation – both operations run in O(log n) time – while using the same amount of memory. This is achieved by representing the list as an implicit tree, where the value of each node is the sum of the numbers in that subtree. Compute the sum of the first 4 elements in a list.
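The "sum of the first 4 elements" example reads roughly as follows; the New/Sum/Add names follow the package documentation and should be checked against the version you use.

	package main

	import (
		"fmt"

		"github.com/yourbasic/fenwick"
	)

	func main() {
		l := fenwick.New(1, 2, 3, 4, 5)
		fmt.Println(l.Sum(4)) // 1+2+3+4 = 10

		l.Add(2, 10)          // add 10 to the element at index 2
		fmt.Println(l.Sum(4)) // 20
	}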
Package lunk provides a set of tools for structured logging in the style of Google's Dapper or Twitter's Zipkin. When we consider a complex event in a distributed system, we're actually considering a partially-ordered tree of events from various services, libraries, and modules. Consider a user-initiated web request. Their browser sends an HTTP request to an edge server, which extracts the credentials (e.g., OAuth token) and authenticates the request by communicating with an internal authentication service, which returns a signed set of internal credentials (e.g., signed user ID). The edge web server then proxies the request to a cluster of web servers, each running a PHP application. The PHP application loads some data from several databases, places the user in a number of treatment groups for running A/B experiments, writes some data to a Dynamo-style distributed database, and returns an HTML response. The edge server receives this response and proxies it to the user's browser. In this scenario we have a number of infrastructure-specific events: This scenario also involves a number of events which have little to do with the infrastructure, but are still critical information for the business the system supports: There are a number of different teams all trying to monitor and improve aspects of this system. Operational staff need to know if a particular host or service is experiencing a latency spike or drop in throughput. Development staff need to know if their application's response times have gone down as a result of a recent deploy. Customer support staff need to know if the system is operating nominally as a whole, and for customers in particular. Product designers and managers need to know the effect of an A/B test on user behavior. But the fact that these teams will be consuming the data in different ways for different purposes does not mean that they have to work with different systems. In order to instrument the various components of the system, we need a common data model. We adopt Dapper's notion of a tree to mean a partially-ordered tree of events from a distributed system. A tree in Lunk is identified by its root ID, which is the unique ID of its root event. All events in a common tree share a root ID. In our photo example, we would assign a unique root ID as soon as the edge server received the request. Events inside a tree are causally ordered: each event has a unique ID, and an optional parent ID. By passing the IDs across systems, we establish causal ordering between events. In our photo example, the two database queries from the app would share the same parent ID--the ID of the event corresponding to the app handling the request which caused those queries. Each event has a schema of properties, which allow us to record specific pieces of information about each event. For HTTP requests, we can record the method, the request URI, the elapsed time to handle the request, etc. Lunk is agnostic in terms of aggregation technologies, but two use cases seem clear: real-time process monitoring and offline causational analysis. For real-time process monitoring, events can be streamed to an aggregation service like Riemann (http://riemann.io) or Storm (http://storm.incubator.apache.org), which can calculate process statistics (e.g., the 95th percentile latency for the edge server responses) in real-time.
This allows for adaptive monitoring of all services, with the option of including example root IDs in the alerts (e.g., 95th percentile latency is over 300ms, mostly as a result of requests like those in tree XXXXX). For offline causational analysis, events can be written in batches to batch processing systems like Hadoop or OLAP databases like Vertica. These aggregates can be queried to answer questions traditionally reserved for A/B testing systems. "Did users who were shown the new navbar view more photos?" "Did the new image optimization algorithm we enabled for 1% of views run faster? Did it produce smaller images? Did it have any effect on user engagement?" "Did any services have increased exception rates after any recent deploys?" &tc &tc By capturing the root ID of a particular web request, we can assemble a partially-ordered tree of events which were involved in the handling of that request. All events with a common root ID are in a common tree, which allows for O(M) retrieval for a tree of M events. To send a request with a root ID and a parent ID, use the Event-ID HTTP header: The header value is simply the root ID and event ID, hex-encoded and separated with a slash. If the event has a parent ID, that may be included as an optional third parameter. A server that receives a request with this header can use this to properly parent its own events. Each event has a set of named properties, the keys and values of which are strings. This allows aggregation layers to take advantage of simplifying assumptions and either store events in normalized form (with event data separate from property data) or in denormalized form (essentially pre-materializing an outer join of the normalized relations). Durations are always recorded as fractional milliseconds. Lunk currently provides two formats for log entries: text and JSON. Text-based logs encode each entry as a single line of text, using key="value" formatting for all properties. Event property keys are scoped to avoid collisions. JSON logs encode each entry as a single JSON object.
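The Event-ID header layout described above can be illustrated with the standard library alone; the ID values below are made up (real IDs come from lunk itself), and the third, parent-ID component is optional.

	package main

	import (
		"fmt"
		"net/http"
	)

	func main() {
		const (
			rootID   = "00c0ffee00c0ffee" // hex-encoded root ID (made up)
			eventID  = "0000000000000001" // hex-encoded event ID (made up)
			parentID = "0000000000000002" // optional parent ID (made up)
		)

		req, err := http.NewRequest("GET", "http://example.com/photos", nil)
		if err != nil {
			panic(err)
		}
		req.Header.Set("Event-ID", fmt.Sprintf("%s/%s/%s", rootID, eventID, parentID))

		fmt.Println(req.Header.Get("Event-ID"))
	}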
Package merkletree implements a Merkle Tree capable of storing arbitrary content. A Merkle Tree is a hash tree that provides an efficient way to verify that the contents of a data set are present and untampered with. At its core, a Merkle Tree is a list of items representing the data that should be verified. Each of these items is inserted into a leaf node and a tree of hashes is constructed bottom up using a hash of each node's left and right children's hashes. This means that the root node will effectively be a hash of all other nodes (hashes) in the tree. This property allows the tree to be reproduced and thus verified based on the hash of the root node of the tree. The benefit of the tree structure is that verifying any single content entry in the tree requires only log2(n) steps in the worst case. Creating a new merkletree requires that the type that the tree will be constructed from implements the Content interface. A slice of the Content items should be created and then passed to the NewTree method. t represents the Merkle Tree and can be verified and manipulated with the API methods described below.
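A sketch of the Content interface and tree construction; the interface methods and the NewTree/MerkleRoot/VerifyContent names follow the package README and should be treated as illustrative.

	package main

	import (
		"crypto/sha256"
		"fmt"

		"github.com/cbergoon/merkletree"
	)

	// testContent is a string wrapper satisfying the Content interface.
	type testContent struct{ s string }

	func (c testContent) CalculateHash() ([]byte, error) {
		h := sha256.Sum256([]byte(c.s))
		return h[:], nil
	}

	func (c testContent) Equals(other merkletree.Content) (bool, error) {
		return c.s == other.(testContent).s, nil
	}

	func main() {
		list := []merkletree.Content{
			testContent{"Hello"}, testContent{"Hi"}, testContent{"Hey"},
		}

		t, err := merkletree.NewTree(list)
		if err != nil {
			panic(err)
		}
		fmt.Printf("root: %x\n", t.MerkleRoot())

		// Prove that a single entry is present and untampered with.
		ok, err := t.VerifyContent(testContent{"Hi"})
		if err != nil {
			panic(err)
		}
		fmt.Println("verified:", ok)
	}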
Package mast provides an immutable, versioned, diffable map implementation of a Merkle Search Tree (MST). Masts can be huge (not limited to memory). Masts can be stored in anything, like a filesystem, KV store, or blob store. Masts are designed to be safely concurrently accessed by multiple threads/hosts. And like Merkle DAGs, Masts are designed to be easy to cache and synchronize. - Efficient storage of multiple versions of materialized views - Diffing of versions integrates CDC/streaming - Efficient copy-on-write alternative to Go builtin map Mast is an implementation of the structure described in the awesome paper, "Merkle Search Trees: Efficient State-Based CRDTs in Open Networks", by Alex Auvolat and François Taïani, 2019 (https://hal.inria.fr/hal-02303490/document). MSTs are similar to persistent B-Trees, except node splits are chosen deterministically based on key and tree height, rather than node size, so MSTs eliminate the dependence on insertion/deletion order that B-Trees have for equivalence, making MSTs an interesting choice for conflict-free replicated data types (CRDTs). (MSTs are also simpler to implement than B-Trees, needing less complicated rebalancing, and no rotations.) MSTs are like other Merkle structures in that two instances can easily be compared to confirm equality or find differences, since equal node hashes indicate equal contents. A Mast can be Clone()d for sharing between threads. Cloning creates a new version that can evolve independently, yet shares all the unmodified subtrees with its parent, and as such are relatively cheap to create. The immutable data types in Clojure, Haskell, ML and other functional languages really do make it easier to "reason about" systems; easier to test, provide a foundation to build more quickly on. https://github.com/bodil/im-rs, "Blazing fast immutable collection datatypes for Rust", by Bodil Stokke, is an exemplar: the diff algorithm and use of property testing, are instructive, and Chunks and PoolRefs fill gaps in understanding of Rust's ownership model for library writers coming from GC'd languages.
Package fibheap implements a Fibonacci heap. A Fibonacci heap is a data structure for priority queue operations, consisting of a collection of heap-ordered trees. The amortized time complexity of Fibonacci heap operations is as follows: fetching the minimum is Θ(1), extracting the minimum is O(log n), inserting is Θ(1), decreasing a key is Θ(1), and merging two heaps is Θ(1). In our implementation, we do not additionally track the key value of each element. Therefore, users should be aware that they should not insert elements with the same key into the Fibonacci heap.
Package flatset provides the sorted associative containers 'FlatSet' and 'FlatMultiSet' that store data in contiguous memory instead of a binary tree (similar to C++ std::flat_set and std::flat_multiset). Both the FlatSet and FlatMultiSet implement stable ordering, but the previous indices for values are invalidated after any method that modifies the container. Compared to a binary tree, flatsets are considerably faster to read because they avoid the cache misses caused by pointer dereferencing. On modern CPUs it is also surprisingly fast to insert into a flatset, especially if the collection is small or if you frequently update the end of the flatset. For larger collections you can optimize write operations by using a flatset of pointers to structures instead of values, although you must be careful not to modify the data inside the flatset as it might affect how the data is sorted. This package uses 'range over functions', so it requires Go >= 1.23.