Package hercules contains the functions which are needed to gather various statistics from a Git repository. The analysis is expressed in the form of a tree: there are nodes ("pipeline items") which require other nodes to be executed before them and in turn provide the data for dependent nodes. There are several service items which do not produce any useful statistics but rather provide the requirements for other items. The top-level items include: - BurndownAnalysis - line burndown statistics for project, files and developers. - CouplesAnalysis - coupling statistics for files and developers. - ShotnessAnalysis - structural hotness and couples, by any Babelfish UAST XPath (functions by default). The typical API usage is to initialize the Pipeline class, add the required analyses (this pulls in all the needed intermediate pipeline items), link and execute the analysis tree, and finally extract the results; see the sketch at the end of this overview. The actual usage example is cmd/hercules/root.go - the command line tool's code. You can provide additional options via `facts` on initialization, for example to supply your own logger, enable people-tracking, or set a custom tick size. Hercules depends heavily on https://github.com/src-d/go-git and leverages the diff algorithm through https://github.com/sergi/go-diff. Besides, BurndownAnalysis involves File and RBTree. These are low-level data structures which enable incremental blaming. File carries an instance of RBTree and the current line burndown state. RBTree implements the red-black balanced binary tree and is based on https://github.com/yasushi-saito/rbtree. Coupling stats are supposed to be further processed rather than observed directly. labours.py uses Swivel embeddings and visualises them in TensorFlow Projector. Shotness analysis, as well as the other UAST-featured items, relies on [Babelfish](https://doc.bblf.sh) and requires the server to be running.
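The following is a condensed sketch of that flow, assuming the gopkg.in/src-d/hercules.v10 import path and the DeployItem/Initialize/Commits/Run signatures of that release; the repository path is illustrative, so treat this as a starting point and compare with cmd/hercules/root.go:

    package main

    import (
        "fmt"

        "gopkg.in/src-d/go-git.v4"
        "gopkg.in/src-d/hercules.v10"
    )

    func main() {
        // Open any go-git repository; a plain on-disk clone works.
        repository, err := git.PlainOpen("/path/to/repo")
        if err != nil {
            panic(err)
        }
        pipeline := hercules.NewPipeline(repository)
        // DeployItem pulls in all required intermediate items automatically.
        burndown := pipeline.DeployItem(&hercules.BurndownAnalysis{}).(hercules.LeafPipelineItem)
        // Initialize links the tree; pass a facts map instead of nil to set options.
        if err := pipeline.Initialize(nil); err != nil {
            panic(err)
        }
        commits, err := pipeline.Commits(false) // false: follow all parents
        if err != nil {
            panic(err)
        }
        results, err := pipeline.Run(commits)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", results[burndown])
    }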
Package redblacktree provides a pure Golang implementation of a red-black tree as described by Thomas H. Cormen et al. in their seminal Algorithms book (3rd ed). This data structure is not safe for concurrent use by multiple goroutines.
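A minimal usage sketch, assuming the gods-style API with comparator-based constructors and Put/Get/Remove (verify the import path and names against the package's godoc):

    package main

    import (
        "fmt"

        "github.com/emirpasic/gods/trees/redblacktree"
    )

    func main() {
        tree := redblacktree.NewWithIntComparator() // keys compared as ints
        tree.Put(2, "b")
        tree.Put(1, "a")
        tree.Put(3, "c")
        if value, found := tree.Get(2); found {
            fmt.Println(value) // b
        }
        tree.Remove(1)
        fmt.Println(tree.Size()) // 2
        // Not goroutine safe: guard shared trees with a sync.Mutex.
    }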
Package suture provides Erlang-like supervisor trees. This implements Erlang-esque supervisor trees, as adapted for Go. This is intended to be an industrial-strength implementation, but it has not yet been deployed in a hostile environment. (It's headed there, though.) Supervisor Tree -> SuTree -> suture -> holds your code together when it's trying to fall apart. Why use Suture? Suture has 100% test coverage, and is golint clean. This doesn't prove it free of bugs, but it shows I care. A blog post describing the design decisions is available at http://www.jerf.org/iri/post/2930 . To idiomatically use Suture, create a Supervisor which is your top level "application" supervisor. This will often occur in your program's "main" function. Create "Service"s, which implement the Service interface. .Add() them to your Supervisor. Supervisors are also services, so you can create a tree structure here, depending on the exact combination of restarts you want to create. As a special case, when adding Supervisors to Supervisors, the "sub" supervisor will have the "super" supervisor's Log function copied. This allows you to set one log function on the "top" supervisor, and have it propagate down to all the sub-supervisors. This also allows libraries or modules to provide Supervisors without having to commit their users to a particular logging method. Finally, as what is probably the last line of your main() function, call .Serve() on your top level supervisor. This will start all the services you've defined. See the Example for an example, using a simple service that serves out incrementing integers.
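A condensed version of the incrementing-integers Example mentioned above, assuming the pre-v4 Service interface (Serve/Stop without a context):

    package main

    import (
        "fmt"

        "github.com/thejerf/suture"
    )

    // Incrementor serves out incrementing integers until stopped.
    type Incrementor struct {
        current int
        next    chan int
        stop    chan bool
    }

    func (i *Incrementor) Serve() {
        for {
            select {
            case i.next <- i.current:
                i.current++
            case <-i.stop:
                return
            }
        }
    }

    func (i *Incrementor) Stop() {
        i.stop <- true
    }

    func main() {
        supervisor := suture.NewSimple("Supervisor")
        service := &Incrementor{0, make(chan int), make(chan bool)}
        supervisor.Add(service)
        go supervisor.Serve() // usually the last line of main

        fmt.Println("Got:", <-service.next)
        fmt.Println("Got:", <-service.next)
        supervisor.Stop()
    }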
Package gographviz provides parsing for the DOT grammar into an abstract syntax tree representing a graph, analysis of the abstract syntax tree into a more usable structure, and writing back of this structure into the DOT format.
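A round-trip sketch of the parse, analyse, mutate, write cycle, mirroring the package's documented ParseString/NewGraph/Analyse API:

    package main

    import (
        "fmt"

        "github.com/awalterschulze/gographviz"
    )

    func main() {
        // Parse DOT source into an abstract syntax tree.
        graphAst, err := gographviz.ParseString(`digraph G { a -> b; }`)
        if err != nil {
            panic(err)
        }
        // Analyse the AST into the more usable Graph structure.
        graph := gographviz.NewGraph()
        if err := gographviz.Analyse(graphAst, graph); err != nil {
            panic(err)
        }
        // Mutate the graph and write it back out as DOT.
        graph.AddNode("G", "c", nil)
        graph.AddEdge("b", "c", true, nil)
        fmt.Print(graph.String())
    }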
Package reflow implements the core data structures and (abstract) runtime for Reflow. Reflow is a system for distributed program execution. The programs are described by Flows, which are an abstract specification of the program's execution. Each Flow node can take any number of other Flows as dependent inputs and perform some (local) execution over these inputs in order to compute some output value. Reflow supports a limited form of dynamic dependencies: a Flow may evaluate to a list of values, each of which may be executed independently. This mechanism also provides parallelism. The system orchestrates Flow execution by evaluating the flow in the manner of an abstract syntax tree; see Eval for more details.
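The Flow/Eval machinery itself is large, but the core idea - evaluate dependencies first, then the node, in the manner of an abstract syntax tree - can be shown with a deliberately hypothetical miniature (none of these types belong to reflow's actual API):

    package main

    import "fmt"

    // node is a hypothetical stand-in for a Flow: it takes the results of
    // its dependencies and computes an output value from them.
    type node struct {
        deps []*node
        op   func(inputs []int) int
    }

    // eval walks the dependency tree bottom-up; reflow's Eval is far richer
    // (caching, distribution, and dynamic list-valued flows for parallelism).
    func eval(n *node) int {
        inputs := make([]int, len(n.deps))
        for i, d := range n.deps {
            inputs[i] = eval(d)
        }
        return n.op(inputs)
    }

    func main() {
        leaf := func(v int) *node { return &node{op: func([]int) int { return v }} }
        sum := &node{
            deps: []*node{leaf(1), leaf(2), leaf(3)},
            op: func(in []int) int {
                total := 0
                for _, v := range in {
                    total += v
                }
                return total
            },
        }
        fmt.Println(eval(sum)) // 6
    }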
Package errortree provides primitives for working with errors in a tree structure. errortree is intended to be used in places where errors are generated from an arbitrary tree structure, like the validation of a configuration file. This allows adding additional context as to why an error has happened in a clean and structured way. errortree fully supports nesting of multiple trees, including simplified retrieval of errors, which, among other things, should help remove repeated boilerplate code from unit tests.
Package sops manages JSON, YAML and BINARY documents to be encrypted or decrypted. This package should not be used directly. Instead, Sops users should install the command line client via `go get -u github.com/getsops/sops/v3/cmd/sops`, or use the decryption helper provided at `github.com/getsops/sops/v3/decrypt`. We do not guarantee API stability for any package other than `github.com/getsops/sops/v3/decrypt`. A Sops document is a Tree composed of a data branch with arbitrary key/value pairs and a metadata branch with encryption and integrity information. In JSON and YAML formats, the structure of the cleartext tree is preserved, keys are stored in cleartext and only values are encrypted. Keeping the keys in cleartext provides better readability when storing Sops documents in version control, and allows for merging competing changes on documents. This is a major difference between Sops and other encryption tools that store documents as encrypted blobs. In BINARY format, the cleartext data is treated as a single blob and the encrypted document is in JSON format with a single `data` key and a single encrypted value. Sops allows operators to encrypt their documents with multiple master keys. Each of the master keys defined in the document is able to decrypt it, allowing users to share documents amongst themselves without sharing keys, or to use a PGP key as a backup for KMS. In practice, this is achieved by generating a data key for each document that is used to encrypt all values, and encrypting the data key with each master key defined. Being able to decrypt the data key gives access to the document. The integrity of each document is guaranteed by calculating a Message Authentication Code (MAC) that is stored encrypted by the data key. When decrypting a document, the MAC should be recalculated and compared with the MAC stored in the document to verify that no fraudulent changes have been applied. The MAC covers keys and values as well as their ordering.
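Since `github.com/getsops/sops/v3/decrypt` is the only package with a stability guarantee, a typical consumer looks like this (the file name is illustrative):

    package main

    import (
        "fmt"

        "github.com/getsops/sops/v3/decrypt"
    )

    func main() {
        // The second argument hints the input format: "json", "yaml" or "binary".
        cleartext, err := decrypt.File("secrets.enc.yaml", "yaml")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(cleartext))
    }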
Package gtree provides tree-structured output and the creation of directories from a tree structure.
Package guinea is a command line interface library. This library operates on a tree-like structure of available commands. In the following example we define a root command with two subcommands. It is usually best to define the commands as global variables in your package. After defining the commands, use the Run function to execute them. The library will read os.Args to determine which command should be executed and to populate the context passed to it with options and arguments. The user can invoke a program in multiple ways. To let the user call a command with arguments or options, populate the proper lists in the command struct. If you wish to parse the arguments in a different way, simply don't define any options or arguments in the command struct and pass the arguments from the context to your parsing function.
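A sketch of the described layout; the exact type and function names here (Command, Context, Run) are assumptions inferred from the prose above, so verify them against the package before use:

    package main

    import (
        "fmt"
        "os"

        "github.com/boreq/guinea"
    )

    // Commands defined as global variables, as recommended above.
    var helloCmd = guinea.Command{
        Run: func(c guinea.Context) error {
            fmt.Println("hello")
            return nil
        },
    }

    var rootCmd = guinea.Command{
        Run: func(c guinea.Context) error {
            fmt.Println("try the 'hello' subcommand")
            return nil
        },
        Subcommands: map[string]*guinea.Command{
            "hello": &helloCmd,
        },
    }

    func main() {
        // Run reads os.Args itself to pick the command and build the context.
        if err := guinea.Run(&rootCmd); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }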
Package mafsa implements Minimal Acyclic Finite State Automata (MA-FSA) in a space-optimized way as described by Daciuk, Mihov, Watson, and Watson in their paper, "Incremental Construction of Minimal Acyclic Finite-State Automata" (2000). It also implements Minimal Perfect Hashing (MPH) as described by Lucchesi and Kowaltowski in their paper, "Applications of Finite Automata Representing Large Vocabularies" (1992). Unscientifically speaking, this package lets you store large amounts of strings (with Unicode) in memory so that membership queries, prefix lookups, and fuzzy searches are fast. And because minimal perfect hashing is included, you can associate each entry in the tree with more data used by your application. See the README or the end of this documentation for a brief tutorial. MA-FSA structures are a specific type of Deterministic Acyclic Finite State Automaton (DAFSA) which fold equivalent state transitions into each other starting from the suffix of each entry. Typical construction algorithms involve building out the entire tree first, then minimizing the completed tree. However, the method described in the paper above allows the tree to be minimized after every word insertion, provided the insertions are performed in lexicographical order, which drastically reduces memory usage compared to regular prefix trees ("tries"). The goal of this package is to provide a simple, useful, and correct implementation of MA-FSA. Though more complex algorithms exist for removal of items and unordered insertion, these features may be outside the scope of this package. Membership queries are on the order of O(n), where n is the length of the input string, so basically O(1). It is advisable to keep n small since long entries without much in common, especially in the beginning or end of the string, will quickly overrun the optimizations that are available. In those cases, n-gram implementations might be preferable, though these will use more CPU. This package provides two kinds of MA-FSA implementations. One, the BuildTree, facilitates the construction of an optimized tree and allows ordered insertions. The other, MinTree, is effectively read-only but uses significantly less memory and is ideal for production environments where only reads will be occurring. Usually your build process will be separate from your production application, which will make heavy use of reading the structure. To use this package, create a BuildTree and insert your items in lexicographical order; once finished, the tree is compressed to a minimum number of nodes and is ready to be saved. In your production application you can then read the file into a MinTree directly. The mt variable is a *MinTree which has the same data as the original BuildTree, but without all the extra "scaffolding" that was required for adding new elements. The package provides some basic read mechanisms.
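A build-then-read sketch along the lines of the README tutorial, assuming the New/Insert/Finish/Save/Load surface (the file name is illustrative):

    package main

    import (
        "fmt"

        "github.com/smartystreets/mafsa"
    )

    func main() {
        // Build phase: insertions must arrive in lexicographical order.
        bt := mafsa.New()
        for _, word := range []string{"cities", "city", "pities", "pity"} {
            if err := bt.Insert(word); err != nil {
                panic(err) // e.g. out-of-order insertion
            }
        }
        bt.Finish() // minimize the automaton
        if err := bt.Save("words.mafsa"); err != nil {
            panic(err)
        }

        // Read phase: load the compact MinTree in the production process.
        mt, err := mafsa.Load("words.mafsa")
        if err != nil {
            panic(err)
        }
        fmt.Println(mt.Contains("city")) // true
    }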
Package gochrome aims to be a complete Chrome DevTools Protocol Viewer implementation. Versioned packages are available. Currently the only version is `tot` or Tip-of-Tree. Stable versions will be made available in the future. This is beta software and hasn't been well exercised in real-world applications. See https://chromedevtools.github.io/devtools-protocol/ The Chrome DevTools Protocol allows for tools to instrument, inspect, debug and profile Chromium, Chrome and other Blink-based browsers. Many existing projects currently use the protocol. The Chrome DevTools uses this protocol and the team maintains its API. Instrumentation is divided into a number of domains (DOM, Debugger, Network etc.). Each domain defines a number of commands it supports and events it generates. Both commands and events are serialized JSON objects of a fixed structure. You can either debug over the wire using the raw messages as they are described in the corresponding domain documentation, or use the extension JavaScript API. The latest (tip-of-tree) protocol (tot) changes frequently and can break at any time. However it captures the full capabilities of the Protocol, whereas the stable release is a subset. There is no backwards compatibility support guaranteed for the capabilities it introduces. Resources Basics: Using DevTools as protocol client The Developer Tools front-end can attach to a remotely running Chrome instance for debugging. For this scenario to work, you should start your host Chrome instance with the remote-debugging-port command line switch: Then you can start a separate client Chrome instance, using a distinct user profile: Now you can navigate to the given port from your client and attach to any of the discovered tabs for debugging: http://localhost:9222 You will find the Developer Tools interface identical to the embedded one and here is why: In this scenario, you can substitute the Developer Tools front-end with your own implementation. Instead of navigating to the HTML page at http://localhost:9222, your application can discover available pages by requesting: http://localhost:9222/json and getting a JSON object with information about inspectable pages along with the WebSocket addresses that you could use in order to start instrumenting them. Remote debugging is especially useful when debugging remote instances of the browser or attaching to embedded devices. Blink port owners are responsible for exposing debugging connections to external users. This is especially handy to understand how the DevTools frontend makes use of the protocol. First, run Chrome with the debugging port open: Then, select the Chromium Projects item in the Inspectable Pages list. Now that DevTools is up and fullscreen, open DevTools to inspect it. Cmd-R in the new inspector to make the first restart. Now head to the Network Panel, filter by WebSocket, select the connection and click the Frames tab. Now you can easily see the frames of WebSocket activity as you use the first instance of the DevTools. To allow Chrome extensions to interact with the protocol, we introduced the chrome.debugger extension API that exposes this JSON message transport interface. As a result, you can not only attach to the remotely running Chrome instance, but also instrument it from its own extension. The Chrome Debugger Extension API provides a higher level API where command domain, name and body are provided explicitly in the `sendCommand` call.
This API hides request ids and handles binding of the request with its response, hence allowing `sendCommand` to report the result in the callback function call. One can also use this API in combination with the other Extension APIs. If you are developing a Web-based IDE, you should implement an extension that exposes debugging capabilities to your page, and your IDE will be able to open pages with the target application, set breakpoints there, evaluate expressions in the console, live edit JavaScript and CSS, display the live DOM, network interaction and any other aspect that Developer Tools is instrumenting today. Opening the embedded Developer Tools will terminate the remote connection and thus detach the extension. https://chromedevtools.github.io/devtools-protocol/#simultaneous The canonical protocol definitions live in the Chromium source tree (browser_protocol.json and js_protocol.json). They are maintained manually by the DevTools engineering team. These files are mirrored (hourly) on GitHub in the devtools-protocol repo. The declarative protocol definitions are used across tools. Within Chromium, a binding layer is created for the Chrome DevTools to interact with, and separately the protocol is used for Chrome Headless's C++ interface. What's the protocol_externs file? It's created via generate_protocol_externs.py and is useful for tools using the closure compiler. The TypeScript story is here. Not yet. See bugger-daemon's third-party docs. See also the endpoints implementation in Chromium. /json/protocol was added in Chrome 60. The endpoint is exposed as webSocketDebuggerUrl in /json/version. Note the browser in the URL, rather than page. If Chrome was launched with --remote-debugging-port=0 and chose an open port, the browser endpoint is written to both stderr and the DevToolsActivePort file in the browser profile folder. Yes, as of Chrome 63! See Multi-client remote debugging support. Upon disconnection, the outgoing client will receive a detached event. For example: View the enum of possible reasons. (For reference: the original patch). After disconnection, some apps have chosen to pause their state and offer a reconnect button.
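The discovery endpoint described above is plain HTTP plus JSON, so a client needs nothing beyond the standard library; this sketch lists the inspectable targets of a Chrome started with --remote-debugging-port=9222:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // target captures a few of the fields returned by /json.
    type target struct {
        Title                string `json:"title"`
        URL                  string `json:"url"`
        WebSocketDebuggerURL string `json:"webSocketDebuggerUrl"`
    }

    func main() {
        resp, err := http.Get("http://localhost:9222/json")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        var targets []target
        if err := json.NewDecoder(resp.Body).Decode(&targets); err != nil {
            panic(err)
        }
        for _, t := range targets {
            fmt.Printf("%s (%s)\n  ws: %s\n", t.Title, t.URL, t.WebSocketDebuggerURL)
        }
    }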
Package jin Copyright (c) 2020 eco. License that can be found in the LICENSE file. "your wish is my command" Jin is a comprehensive JSON manipulation tool bundle. All functions are tested with random data with the help of Node.js. All test-path and test-value creation is automated with Node.js. Jin provides parse, interpret, build and format tools for JSON. Third-party packages are only used for the benchmarks. No dependencies are needed for the core functions. We benchmarked Jin against other packages like it. In the results, Jin is the fastest (op/ns) and more memory-friendly than the others (B/op). For more information please take a look at the BENCHMARK section below. WHAT IS NEW? 7 new functions tested and added to the package. - GetMap() gets objects as a map[string]string structure with key-value pairs - GetAll() gets only specific keys' values - GetAllMap() gets only specific keys as a map[string]string structure - GetKeys() gets an object's keys as a string array - GetValues() gets an object's values as a string array - GetKeysValues() gets an object's keys and values as separate string arrays - Length() gets the length of a JSON array. 06.04.2020 INSTALLATION: `go get github.com/ecoshub/jin` and you are good to go. Import and start using. The major difference between parsing and interpreting is that the parser has to read all the data before answering your needs. On the other hand, the interpreter reads only up to the data you need. With the parser, once the parse is complete you can access any data in no time. But there is a time cost to parse all the data, and this cost can increase as the data content grows. If you need to access all keys of a JSON, then we simply recommend you use the Parser. But if you need to access only some keys of a JSON, then we strongly recommend you use the Interpreter; it will be much faster and much more memory-friendly than the parser. The Interpreter is the core element of this package; there is no need to create an Interpreter type, just call whichever function you want. First let's look at the general function parameters. We are going to use the Get() function to access the value the path points to. In this case 'jin'. The path value can consist of hard-coded values. The Get() function's return type is []byte, but all other variations of return types are implemented with different functions. For example, if you need "value" as a string, use GetString(). The Parser is another alternative for JSON manipulation. We recommend this structure when you need to access all or most of the keys in the JSON. The Parser constructor needs only one parameter. We can parse it with the Parse() function. Let's look at Parser.Get(). About the path value, see above. There are all the return type variations of the Parser.Get() function, just like the Interpreter. To return a string, use Parser.GetString() like this. Other useful functions of Jin: Add(), AddKeyValue(), Set(), SetKey(), Delete(), Insert(), IterateArray(), IterateKeyValue() and Tree(). Let's look at the IterateArray() function. There are two formatting functions, Flatten() and Indent(). Indent() adds indentation to JSON for nicer visualization and Flatten() removes this indentation. Control functions are a simple and easy way to check the value types of any path. For example, IsArray(). Or you can use GetType(). There are lots of JSON build functions in this package and all of them have their own examples. We just want to mention a couple of them. Scheme is a simple and powerful tool for creating JSON schemes. Testing is very important for this type of package and shows how reliable it is. For that reason we use Node.js for unit testing. Let's look at the folder arrangement and working principle.
- test/ folder: test-json.json, this is a temporary file for testing. All other test-cases are copied here under this name so they can be processed by test-case-creator.js. test-case-creator.js is the core path & value creation mechanism. It is executed with the executeNode() function. It reads the test-json.json file and generates the paths and values from this file's content. With command line arguments it can generate different paths and values. As a result, two files are created by this process: the first is test-json-paths.json and the second is test-json-values.json. test-json-paths.json has all the path values. test-json-values.json has all the values corresponding to those paths. - tests/ folder: all files in this folder are test-cases. But that doesn't mean you can't change anything; on the contrary, all test-cases are created automatically based on this folder's content. You can add or remove any .json file you want. All Go-side test-case automation functions are in the core_test.go file. This package was developed with Node.js v13.7.0; please make sure that your machine has a valid version of Node.js before testing. All functions and methods are tested with complicated, randomly generated .json files. Most JSON packages don't even run properly with this kind of JSON stream; we didn't see such packages as competitors, which is why we didn't bother to benchmark against them. Benchmark results. - The Benchmark prefix is removed from function names to make room for the results. - The benchmark between 'buger/jsonparser' and 'ecoshub/jin' uses the same payload (JSON test-cases) that the 'buger/jsonparser' package uses to benchmark itself. We are currently working on: - Marshal() and Unmarshal() functions. - an http.Request parser/interpreter - Builder functions for http.ResponseWriter. If you want to contribute to this work, feel free to fork it. We want to fill this section with contributors.
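As a taste of the interpreter-style access described above, here is a sketch assuming the variadic path signatures of Get/GetString (check the package's godoc for the exact forms):

    package main

    import (
        "fmt"

        "github.com/ecoshub/jin"
    )

    func main() {
        data := []byte(`{"user":{"name":"eco","langs":["go","js"]}}`)

        // Interpreter style: read only the path you need, no full parse.
        name, err := jin.GetString(data, "user", "name")
        if err != nil {
            panic(err)
        }
        fmt.Println(name) // eco

        // The plain Get variant returns the raw []byte value.
        langs, err := jin.Get(data, "user", "langs")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(langs)) // ["go","js"]
    }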
Package btree implements in-memory B-Trees of arbitrary degree. btree implements an in-memory B-Tree for use as an ordered data structure. It is not meant for persistent storage solutions. It has a flatter structure than an equivalent red-black or other binary tree, which in some cases yields better memory usage and/or performance. See some discussion on the matter here: Note, though, that this project is in no way related to the C++ B-Tree implementation written about there. Within this tree, each node contains a slice of items and a (possibly nil) slice of children. For basic numeric values or raw structs, this can cause efficiency differences when compared to equivalent C++ template code that stores values in arrays within the node: These issues don't tend to matter, though, when working with strings or other heap-allocated structures, since C++-equivalent structures also must store pointers and also distribute their values across the heap. This implementation is designed to be a drop-in replacement to gollrb.LLRB trees, (http://github.com/petar/gollrb), an excellent and probably the most widely used ordered tree implementation in the Go ecosystem currently. Its functions, therefore, exactly mirror those of llrb.LLRB where possible. Unlike gollrb, though, we currently don't support storing multiple equivalent values.
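A short usage sketch with the v1 Item-based API (btree.Int is a ready-made Item implementation shipped with the package):

    package main

    import (
        "fmt"

        "github.com/google/btree"
    )

    func main() {
        tr := btree.New(32) // degree 32: up to 63 items per node
        for i := 0; i < 10; i++ {
            tr.ReplaceOrInsert(btree.Int(i))
        }
        fmt.Println(tr.Get(btree.Int(3)))   // 3
        fmt.Println(tr.Has(btree.Int(100))) // false
        // In-order traversal, mirroring llrb-style iteration.
        tr.Ascend(func(item btree.Item) bool {
            fmt.Print(item, " ")
            return true
        })
        fmt.Println()
    }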
Package gcnotifier provides a way to receive notifications after every garbage collection (GC) cycle. This can be useful, in long-running programs, to instruct your code to free additional memory resources that you may be using. A common use case for this is when you have custom data structures (e.g. buffers, caches, rings, trees, pools, ...): instead of setting a maximum size to your data structure you can leave it unbounded and then drop all (or some) of the allocated-but-unused slots after every GC run (e.g. sync.Pool drops all allocated-but-unused objects in the pool during GC). To minimize the load on the GC the code that runs after receiving the notification should try to avoid allocations as much as possible, or at the very least make sure that the amount of new memory allocated is significantly smaller than the amount of memory that has been "freed" in response to the notification. GCNotifier guarantees to send a notification after every GC cycle completes. Note that the Go runtime does not guarantee that the GC will run: specifically there is no guarantee that a GC will run before the program terminates. Example implements a simple time-based buffering io.Writer: data sent over dataCh is buffered for up to 100ms, then flushed out in a single call to out.Write and the buffer is reused. If GC runs, the buffer is flushed and then discarded so that it can be collected during the next GC run. The example is necessarily simplistic, a real implementation would be more refined (e.g. on GC flush or resize the buffer based on a threshold, perform asynchronous flushes, properly signal completions and propagate errors, adaptively preallocate the buffer based on the previous capacity, etc.)
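A minimal sketch of the notification loop, assuming the New/AfterGC/Close surface; the timeout only keeps the example from blocking if no cycle completes:

    package main

    import (
        "fmt"
        "runtime"
        "time"

        "github.com/CAFxX/gcnotifier"
    )

    func main() {
        gcn := gcnotifier.New()
        defer gcn.Close()

        runtime.GC() // force a cycle so the example has something to observe
        select {
        case <-gcn.AfterGC():
            // This is where you would drop caches or other
            // allocated-but-unused slots, allocating as little as possible.
            fmt.Println("GC cycle completed")
        case <-time.After(time.Second):
            fmt.Println("no GC notification within 1s")
        }
    }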
* @brief XATMI main package NOTES on finalizers! Note that if we pass a pointer from a finalized object (typed UBF, expression tree or ATMI Context) to C, and the function call for the object is the last one in the object's Go scope, Go might begin to GC the object while the C function has received the pointer in its args. Thus the C side, in the middle of processing, might end up with a destructed object. e.g. c_str := C.BPrintStrC(&u.Buf.Ctx.c_ctx, (*C.UBFH)(unsafe.Pointer(u.Buf.C_ptr))) after entering C.BPrintStrC(), the GC might kill the u object. Thus to avoid this, we create a deferred "no-op" call at the entry of the Go function, with "u.Buf.nop()" at the end of the functions. * @file atmi.go ----------------------------------------------------------------------------- Enduro/X Middleware Platform for Distributed Transaction Processing Copyright (C) 2009-2016, ATR Baltic, Ltd. All Rights Reserved. Copyright (C) 2017-2019, Mavimax, Ltd. All Rights Reserved. This software is released under one of the following licenses: LGPL or Mavimax's license for commercial use. See LICENSE file for full text. * C (as designed by Dennis Ritchie and later authors) language code is licensed under Enduro/X Modified GNU Affero General Public License, version 3. See LICENSE_C file for full text. ----------------------------------------------------------------------------- LGPL license: * This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License, version 3 as published by the Free Software Foundation; * This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License, version 3 for more details. * You should have received a copy of the Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * ----------------------------------------------------------------------------- A commercial use license is available from Mavimax, Ltd contact@mavimax.com ----------------------------------------------------------------------------- * @brief XATMI main package, server API * @file atmisrv.go ----------------------------------------------------------------------------- Enduro/X Middleware Platform for Distributed Transaction Processing Copyright (C) 2009-2016, ATR Baltic, Ltd. All Rights Reserved. Copyright (C) 2017-2019, Mavimax, Ltd. All Rights Reserved. This software is released under one of the following licenses: LGPL or Mavimax's license for commercial use. See LICENSE file for full text. * C (as designed by Dennis Ritchie and later authors) language code is licensed under Enduro/X Modified GNU Affero General Public License, version 3. See LICENSE_C file for full text. ----------------------------------------------------------------------------- LGPL license: * This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License, version 3 as published by the Free Software Foundation; * This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License, version 3 for more details.
* You should have received a copy of the Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * ----------------------------------------------------------------------------- A commercial use license is available from Mavimax, Ltd contact@mavimax.com ----------------------------------------------------------------------------- * @brief TPLOG - text logging and debuging API provided by Enduro/X * @file tplog.go ----------------------------------------------------------------------------- Enduro/X Middleware Platform for Distributed Transaction Processing Copyright (C) 2009-2016, ATR Baltic, Ltd. All Rights Reserved. Copyright (C) 2017-2019, Mavimax, Ltd. All Rights Reserved. This software is released under one of the following licenses: LGPL or Mavimax's license for commercial use. See LICENSE file for full text. * C (as designed by Dennis Ritchie and later authors) language code is licensed under Enduro/X Modified GNU Affero General Public License, version 3. See LICENSE_C file for full text. ----------------------------------------------------------------------------- LGPL license: * This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License, version 3 as published by the Free Software Foundation; * This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License, version 3 for more details. * You should have received a copy of the Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * ----------------------------------------------------------------------------- A commercial use license is available from Mavimax, Ltd contact@mavimax.com ----------------------------------------------------------------------------- * @brief Typed C-Array (binary array) IPC buffer support * @file typed_carray.go ----------------------------------------------------------------------------- Enduro/X Middleware Platform for Distributed Transaction Processing Copyright (C) 2009-2016, ATR Baltic, Ltd. All Rights Reserved. Copyright (C) 2017-2019, Mavimax, Ltd. All Rights Reserved. This software is released under one of the following licenses: LGPL or Mavimax's license for commercial use. See LICENSE file for full text. * C (as designed by Dennis Ritchie and later authors) language code is licensed under Enduro/X Modified GNU Affero General Public License, version 3. See LICENSE_C file for full text. ----------------------------------------------------------------------------- LGPL license: * This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License, version 3 as published by the Free Software Foundation; * This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License, version 3 for more details. 
* You should have received a copy of the Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * ----------------------------------------------------------------------------- A commercial use license is available from Mavimax, Ltd contact@mavimax.com ----------------------------------------------------------------------------- * @brief JSON IPC Buffer support * @file typed_json.go ----------------------------------------------------------------------------- Enduro/X Middleware Platform for Distributed Transaction Processing Copyright (C) 2009-2016, ATR Baltic, Ltd. All Rights Reserved. Copyright (C) 2017-2019, Mavimax, Ltd. All Rights Reserved. This software is released under one of the following licenses: LGPL or Mavimax's license for commercial use. See LICENSE file for full text. * C (as designed by Dennis Ritchie and later authors) language code is licensed under Enduro/X Modified GNU Affero General Public License, version 3. See LICENSE_C file for full text. ----------------------------------------------------------------------------- LGPL license: * This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License, version 3 as published by the Free Software Foundation; * This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License, version 3 for more details. * You should have received a copy of the Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * ----------------------------------------------------------------------------- A commercial use license is available from Mavimax, Ltd contact@mavimax.com ----------------------------------------------------------------------------- * @brief Plain text IPC buffer support * @file typed_string.go ----------------------------------------------------------------------------- Enduro/X Middleware Platform for Distributed Transaction Processing Copyright (C) 2009-2016, ATR Baltic, Ltd. All Rights Reserved. Copyright (C) 2017-2019, Mavimax, Ltd. All Rights Reserved. This software is released under one of the following licenses: LGPL or Mavimax's license for commercial use. See LICENSE file for full text. * C (as designed by Dennis Ritchie and later authors) language code is licensed under Enduro/X Modified GNU Affero General Public License, version 3. See LICENSE_C file for full text. ----------------------------------------------------------------------------- LGPL license: * This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License, version 3 as published by the Free Software Foundation; * This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License, version 3 for more details. 
* You should have received a copy of the Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * ----------------------------------------------------------------------------- A commercial use license is available from Mavimax, Ltd contact@mavimax.com ----------------------------------------------------------------------------- * @brief Unified Buffer Format (UBF) - Key value protocol buffer support * @file typed_ubf.go ----------------------------------------------------------------------------- Enduro/X Middleware Platform for Distributed Transaction Processing Copyright (C) 2009-2016, ATR Baltic, Ltd. All Rights Reserved. Copyright (C) 2017-2019, Mavimax, Ltd. All Rights Reserved. This software is released under one of the following licenses: LGPL or Mavimax's license for commercial use. See LICENSE file for full text. * C (as designed by Dennis Ritchie and later authors) language code is licensed under Enduro/X Modified GNU Affero General Public License, version 3. See LICENSE_C file for full text. ----------------------------------------------------------------------------- LGPL license: * This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License, version 3 as published by the Free Software Foundation; * This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License, version 3 for more details. * You should have received a copy of the Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * ----------------------------------------------------------------------------- A commercial use license is available from Mavimax, Ltd contact@mavimax.com ----------------------------------------------------------------------------- * @brief Unified Buffer Format (UBF) marshal/unmarshal to/from structures * @file typed_ubf_tag.go ----------------------------------------------------------------------------- Enduro/X Middleware Platform for Distributed Transaction Processing Copyright (C) 2009-2016, ATR Baltic, Ltd. All Rights Reserved. Copyright (C) 2017-2019, Mavimax, Ltd. All Rights Reserved. This software is released under one of the following licenses: LGPL or Mavimax's license for commercial use. See LICENSE file for full text. * C (as designed by Dennis Ritchie and later authors) language code is licensed under Enduro/X Modified GNU Affero General Public License, version 3. See LICENSE_C file for full text. ----------------------------------------------------------------------------- LGPL license: * This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License, version 3 as published by the Free Software Foundation; * This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License, version 3 for more details. 
* You should have received a copy of the Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * ----------------------------------------------------------------------------- A commercial use license is available from Mavimax, Ltd contact@mavimax.com ----------------------------------------------------------------------------- * @brief VIEW buffer support - dynamic access * @file typed_view.go ----------------------------------------------------------------------------- Enduro/X Middleware Platform for Distributed Transaction Processing Copyright (C) 2009-2016, ATR Baltic, Ltd. All Rights Reserved. Copyright (C) 2017-2019, Mavimax, Ltd. All Rights Reserved. This software is released under one of the following licenses: LGPL or Mavimax's license for commercial use. See LICENSE file for full text. * C (as designed by Dennis Ritchie and later authors) language code is licensed under Enduro/X Modified GNU Affero General Public License, version 3. See LICENSE_C file for full text. ----------------------------------------------------------------------------- LGPL license: * This program is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License, version 3 as published by the Free Software Foundation; * This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License, version 3 for more details. * You should have received a copy of the Lesser General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA * ----------------------------------------------------------------------------- A commercial use license is available from Mavimax, Ltd contact@mavimax.com -----------------------------------------------------------------------------
Implementation of an R-Way Trie data structure. A Trie has a root Node which is the base of the tree. Each subsequent Node has a letter and children, which are nodes that have letter values associated with them.
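Assuming this is the derekparker/trie-style API (Add with an arbitrary meta value, Find, PrefixSearch), usage looks roughly like:

    package main

    import (
        "fmt"

        "github.com/derekparker/trie"
    )

    func main() {
        t := trie.New()
        t.Add("cat", 1) // each key can carry an arbitrary meta value
        t.Add("car", 2)
        t.Add("dog", 3)

        if node, ok := t.Find("car"); ok {
            fmt.Println(node.Meta()) // 2
        }
        fmt.Println(t.PrefixSearch("ca")) // keys starting with "ca"
    }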
Package gtree provides tree-structured output.
Package ogdl is used to process OGDL, the Ordered Graph Data Language. OGDL is a textual format to write trees or graphs of text, where indentation and spaces define the structure. Here is an example: The language is simple, both in its textual representation and in its number of productions (the specification rules), allowing for compact implementations. OGDL character streams are normally formed by Unicode characters, and encoded as UTF-8 strings, but any encoding that is ASCII transparent is compatible with the specification. See the full spec at http://ogdl.org. To install this package just do: If we have a text file 'config.ogdl' containing a configuration, values can be read out of it as in the sketch below. If the timeout parameter is not present, the default value (60) will be assigned to 'to'. The default value is optional, but be aware that Int64() will return 0 in case the parameter doesn't exist. The configuration file can be written in a more concise way: The package includes a template processor. It takes an arbitrary input stream with some variables in it, and produces an output stream with the variables resolved out of a Graph object which acts as context. For example (given the previous config file): string(b) is then: Some rules are followed.
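A reading sketch matching the config example described above; it assumes the FromFile constructor and path-based getters with an optional default for Int64, so verify against the package's godoc:

    package main

    import (
        "fmt"

        "github.com/rveen/ogdl"
    )

    func main() {
        // config.ogdl is assumed to contain something like:
        //
        //   eth0
        //     ip 192.168.1.10
        //     timeout 20
        g := ogdl.FromFile("config.ogdl")
        ip := g.Get("eth0.ip").String()
        to := g.Get("eth0.timeout").Int64(60) // 60 if the parameter is absent
        fmt.Println(ip, to)
    }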
Package toml is a TOML parser and manipulation library. This version supports the specification as described in https://github.com/toml-lang/toml/blob/master/versions/en/toml-v0.4.0.md Go-toml can marshal and unmarshal TOML documents from and to data structures. Go-toml can operate on a TOML document as a tree. Use one of the Load* functions to parse TOML data and obtain a Tree instance, then one of its methods to manipulate the tree. The package github.com/pelletier/go-toml/query implements a system similar to JSONPath to quickly retrieve elements of a TOML document using a single expression. See the package documentation for more information.
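The tree-oriented usage looks like this (essentially the README example for go-toml's Load and Tree.Get):

    package main

    import (
        "fmt"

        "github.com/pelletier/go-toml"
    )

    func main() {
        tree, err := toml.Load(`
    [postgres]
    user = "pelletier"
    password = "mypassword"`)
        if err != nil {
            panic(err)
        }
        // Direct dotted-path access on the tree...
        user := tree.Get("postgres.user").(string)
        // ...or fetch a subtree first.
        postgres := tree.Get("postgres").(*toml.Tree)
        password := postgres.Get("password").(string)
        fmt.Println(user, password)
    }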
Package itc implements the interval tree clock as described in the paper 'Interval Tree Clocks: A Logical Clock for Dynamic Systems' by Paulo Sergio Almeida, Carlos Baquero and Victor Fonte (http://gsd.di.uminho.pt/members/cbm/ps/itc2008.pdf). Causality tracking mechanisms can be modeled by a set of core operations - fork, event and join - that act on stamps (logical clocks) whose structure is a pair (i, e), formed by an id and an event component that encodes causally known events.
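A fork-event-join sketch; the method set here (NewStamp, Fork, Event, Join) is an assumption inferred from the core operations named above, so confirm the real signatures in the package's godoc:

    package main

    import (
        "fmt"

        "github.com/fgrid/itc" // import path assumed
    )

    func main() {
        a := itc.NewStamp() // the seed stamp
        b := a.Fork()       // split the id between two replicas

        a.Event() // replica A records a local event
        b.Event() // replica B records a concurrent event

        a.Join(b) // merge B's causal history back into A
        fmt.Println(a)
    }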
The rbxfile package handles the decoding, encoding, and manipulation of Roblox instance data structures. This package can be used to manipulate Roblox instance trees outside of the Roblox client. Such data structures begin with a Root struct. A Root contains a list of child Instances, which in turn contain more child Instances, and so on, forming a tree of Instances. These Instances can be accessed and manipulated using an API similar to that of Roblox. Each Instance also has a set of "properties". Each property has a specific value of a certain type. Every available type implements the Value interface, and is prefixed with "Value". Root structures can be decoded from and encoded to various formats, including Roblox's native file formats. The two sub-packages "bin" and "xml" provide formats for Roblox's binary and XML formats. Root structures can also be encoded and decoded with the "json" package. Besides decoding from a format, root structures can also be created manually. The best way to do this is through the "declare" sub-package, which provides an easy way to generate root structures.
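A hand-built tree can also be assembled directly (the declare sub-package mentioned above is the more convenient route); this sketch assumes NewInstance, the Properties map, and the ValueString type:

    package main

    import (
        "fmt"

        "github.com/robloxapi/rbxfile"
    )

    func main() {
        // A Workspace containing a single Part, assembled by hand.
        workspace := rbxfile.NewInstance("Workspace", nil)
        part := rbxfile.NewInstance("Part", workspace) // parented to workspace
        part.Properties["Name"] = rbxfile.ValueString("MyPart")

        root := &rbxfile.Root{Instances: []*rbxfile.Instance{workspace}}
        fmt.Println(len(root.Instances), part.Properties["Name"])
    }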
Package antlr implements the Go version of the ANTLR 4 runtime. ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build parse trees and also generates a listener interface (or visitor) that makes it easy to respond to the recognition of phrases of interest. At version 4.11.x and prior, the Go runtime was not properly versioned for go modules. After this point, the runtime source code to be imported was held in the `runtime/Go/antlr/v4` directory, and the go.mod file was updated to reflect the version of ANTLR4 that it is compatible with (I.E. uses the /v4 path). However, this was found to be problematic, as it meant that with the runtime embedded so far underneath the root of the repo, the `go get` and related commands could not properly resolve the location of the go runtime source code. This meant that the reference to the runtime in your `go.mod` file would refer to the correct source code, but would not list the release tag such as @4.13.1 - this was confusing, to say the least. As of 4.13.0, the runtime is now available as a go module in its own repo, and can be imported as `github.com/antlr4-go/antlr` (the go get command should also be used with this path). See the main documentation for the ANTLR4 project for more information, which is available at ANTLR docs. The documentation for using the Go runtime is available at Go runtime docs. This means that if you are using the source code without modules, you should also use the source code in the new repo. Though we highly recommend that you use go modules, as they are now idiomatic for Go. I am aware that this change will prove Hyrum's Law, but am prepared to live with it for the common good. Go runtime author: Jim Idle jimi@idle.ws ANTLR supports the generation of code in a number of target languages, and the generated code is supported by a runtime library, written specifically to support the generated code in the target language. This library is the runtime for the Go target. To generate code for the go target, it is generally recommended to place the source grammar files in a package of their own, and use the `.sh` script method of generating code, using the go generate directive. In that same directory it is usual, though not required, to place the antlr tool that should be used to generate the code. That does mean that the antlr tool JAR file will be checked in to your source code control though, so you are, of course, free to use any other way of specifying the version of the ANTLR tool to use, such as aliasing in `.zshrc` or equivalent, or a profile in your IDE, or configuration in your CI system. Checking in the jar does mean that it is easy to reproduce the build as it was at any point in its history. Here is a general/recommended template for an ANTLR based recognizer in Go: Make sure that the package statement in your grammar file(s) reflects the go package the generated code will exist in. The generate.go file then looks like this: And the generate.sh file will look similar to this: depending on whether you want visitors or listeners or any other ANTLR options. 
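A plausible reconstruction of the generate.go file described above, with the usual content of generate.sh summarized in comments (the JAR name and options are illustrative; adjust to your grammar and ANTLR version):

    package parser

    // generate.sh, kept next to this file, typically wraps the ANTLR tool JAR:
    //
    //   java -jar ./antlr-4.13.1-complete.jar -Dlanguage=Go -package parser *.g4
    //
    // adding -visitor, -no-listener, etc. as required.

    //go:generate ./generate.sh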
Note that another option here is to generate the code into a separate package, such as a parsing package alongside the grammar package. From the command line at the root of your source package (the location of go.mod) you can then simply issue the command:

	go generate ./...

which will generate the code for the parser and place it in the parsing package. You can then use the generated code by importing the parsing package. There are no hard and fast rules on this; it is just a recommendation, and you can generate the code in any way and to anywhere you like.

Copyright (c) 2012-2023 The ANTLR Project. All rights reserved. Use of this file is governed by the BSD 3-clause license, which can be found in the LICENSE.txt file in the project root.
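Under those assumptions, a consumer of the generated parsing package might look like the following sketch. The module path and the MyGrammar-derived identifiers are hypothetical: ANTLR names the generated constructors and rule methods after your grammar.

	package main

	import (
		"fmt"

		"github.com/antlr4-go/antlr/v4"

		// Hypothetical module path; "parsing" is the package the
		// generated code was placed in.
		"example.com/myproject/parsing"
	)

	func main() {
		// NewMyGrammarLexer/NewMyGrammarParser and the Expr entry rule
		// are illustrative names generated from a grammar called MyGrammar.
		input := antlr.NewInputStream("1 + 2")
		lexer := parsing.NewMyGrammarLexer(input)
		tokens := antlr.NewCommonTokenStream(lexer, antlr.TokenDefaultChannel)
		parser := parsing.NewMyGrammarParser(tokens)
		tree := parser.Expr()
		fmt.Println(tree.ToStringTree(nil, parser))
	}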
Package sops manages JSON, YAML and BINARY documents to be encrypted or decrypted.

This package should not be used directly. Instead, Sops users should install the command line client via `go get -u github.com/getsops/sops/v3/cmd/sops`, or use the decryption helper provided at `github.com/getsops/sops/v3/decrypt`. We do not guarantee API stability for any package other than `github.com/getsops/sops/v3/decrypt`.

A Sops document is a Tree composed of a data branch with arbitrary key/value pairs and a metadata branch with encryption and integrity information. In JSON and YAML formats, the structure of the cleartext tree is preserved: keys are stored in cleartext and only values are encrypted. Keeping the keys in cleartext provides better readability when storing Sops documents in version control, and allows for merging competing changes on documents. This is a major difference between Sops and other encryption tools that store documents as encrypted blobs. In BINARY format, the cleartext data is treated as a single blob and the encrypted document is in JSON format with a single `data` key and a single encrypted value.

Sops allows operators to encrypt their documents with multiple master keys. Each of the master keys defined in the document is able to decrypt it, allowing users to share documents amongst themselves without sharing keys, or to use a PGP key as a backup for KMS. In practice, this is achieved by generating a data key for each document that is used to encrypt all values, and encrypting the data key with each master key defined. Being able to decrypt the data key gives access to the document.

The integrity of each document is guaranteed by calculating a Message Authentication Code (MAC) that is stored encrypted by the data key. When decrypting a document, the MAC is recalculated and compared with the MAC stored in the document to verify that no fraudulent changes have been applied. The MAC covers keys and values as well as their ordering.
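Since `github.com/getsops/sops/v3/decrypt` is the only package with a stability guarantee, a minimal consumer looks like the sketch below; the file name is an example.

	package main

	import (
		"fmt"
		"log"

		"github.com/getsops/sops/v3/decrypt"
	)

	func main() {
		// decrypt.File loads the encrypted document, unwraps the data key
		// with one of the configured master keys, verifies the MAC, and
		// returns the cleartext. "secrets.enc.yaml" is an example path;
		// the second argument names the input format.
		cleartext, err := decrypt.File("secrets.enc.yaml", "yaml")
		if err != nil {
			log.Fatalf("decrypt: %v", err)
		}
		fmt.Printf("%s", cleartext)
	}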