Package intlist supports a string notation for specifying a series of integers. It was written to support a data-driven text file, entered by humans, that contained a mix of individual integers and sequences. The format makes it easy to enter and to visually recognize a run of consecutive integers. Format: Examples: There are two supported use cases: creating an int slice, and using an Iterator to produce the ints as needed. "Parse" parses a string and returns an integer slice. This is useful when a slice is wanted and the result is not too large. The "NewIterator" / "Next" / "Err" functions provide what is needed to iterate through the list of integers. This is especially useful when the resulting list is huge or when it is possible to stop before consuming the whole list. Example of iterator usage:
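A minimal sketch covering both use cases; the import path and the exact Iterator method signatures (Next returning a value plus an ok flag) are assumptions rather than the package's confirmed API:

	package main

	import (
		"fmt"
		"log"

		"example.com/intlist" // hypothetical import path
	)

	func main() {
		// Parse the whole notation into a slice when the result is small.
		nums, err := intlist.Parse("1-5,8,10-12")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(nums) // e.g. [1 2 3 4 5 8 10 11 12]

		// Iterate lazily when the list is huge or may be cut short.
		it := intlist.NewIterator("1-1000000")
		for {
			n, ok := it.Next() // assumed to return (value, ok)
			if !ok {
				break
			}
			if n > 10 {
				break // stop early without expanding the whole list
			}
			fmt.Println(n)
		}
		if err := it.Err(); err != nil {
			log.Fatal(err)
		}
	}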
Package mdtopdf implements a PDF document generator for markdown documents. This package depends on two other packages: * The BlackFriday v2 parser to read the markdown source * The gofpdf package to generate the PDF The tests included here are from the BlackFriday package; see the "testdata" folder. The tests create PDF files, so while they may complete without errors, visual inspection of the created PDFs is the only way to determine whether the tests *really* pass. The tests also create log files that trace the BlackFriday parser callbacks; this is a valuable debugging tool that shows each callback and the data provided with it as the AST is walked. To install the package: In the cmd folder is an example using the package. It demonstrates a number of features. The test PDF was created with this command:
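Separately from the cmd example, a minimal library-usage sketch follows; the NewPdfRenderer argument list shown here (orientation, paper size, output file, trace log) and the import path are assumptions, so check the package's own example for the exact signature:

	package main

	import (
		"log"
		"os"

		"github.com/mandolyte/mdtopdf" // import path assumed
	)

	func main() {
		content, err := os.ReadFile("input.md")
		if err != nil {
			log.Fatal(err)
		}

		// Renderer writing to output.pdf and tracing parser callbacks to trace.log.
		renderer := mdtopdf.NewPdfRenderer("", "", "output.pdf", "trace.log")
		if err := renderer.Process(content); err != nil {
			log.Fatal(err)
		}
	}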
Issue is a client for reading and updating issues in a GitHub project issue tracker. Issue runs the query against the given project's issue tracker and prints a table of matching issues, sorted by issue summary. The default owner/repo is golang/go. If multiple arguments are given as the query, issue joins them by spaces to form a single issue search. These two commands are equivalent: Searches are always limited to open issues. If the query is a single number, issue prints that issue in detail, including all comments. Issue expects to find a GitHub "personal access token" in $HOME/.github-issue-token and will use that token to authenticate to GitHub when reading or writing issue data. A token can be created by visiting https://github.com/settings/tokens/new. The token only needs the 'repo' scope checkbox, and optionally 'private_repo' if you want to work with issue trackers for private repositories. It does not need any other permissions. The -token flag specifies an alternate file from which to read the token. If the -a flag is specified, issue runs as a collection of acme windows instead of a command-line tool. In this mode, the query is optional. If no query is given, issue uses "state:open". There are five kinds of acme windows: issue, issue creation, issue list, search result, and milestone list. The following text forms can be looked for (right clicked on) to open a window (or navigate to an existing one). Executing "New" opens an issue creation window. Executing "Search <query>" opens a new window showing the results of that search. An issue window, opened by loading an issue number, displays full detail about an issue: a header followed by each comment. For example: Executing "Get" reloads the issue data. Executing "Put" updates an issue. It saves any changes to the issue header and, if any text has been entered between the header and the "Reported by" line, posts that text as a new comment. If both succeed, Put then reloads the issue data. The "Closed" and "URL" headers cannot be changed. An issue creation window, opened by executing "New", is like an issue window but displays only an empty issue template: Once the template has been completed (only the title is required), executing "Put" creates the issue and converts the window into an issue window for the new issue. An issue list window displays a list of all open issue numbers and titles. If the project has any open milestones, they are listed in a header line. For example: As in any window, right clicking on an issue number opens a window for that issue. A search result window, opened by executing "Search <query>", displays a list of issues matching a search query. It shows the query in a header line. For example: Executing "Sort" in a search result window toggles between sorting by title and sorting by decreasing issue number. Executing "Bulk" in an issue list or search result window opens a new bulk edit window applying to the displayed issues. If there is a non-empty text selection in the issue list or search result list, the bulk edit window is restricted to issues in the selection. The bulk edit window consists of a metadata header followed by a list of issues, like: The metadata header shows only metadata shared by all the issues. In the above example, all four issues are open and have milestone Go1.4.3, but they have neither common labels nor a common assignee. The bulk edit applies to the issues listed in the window text; adding or removing issue lines changes the set of issues affected by Get or Put operations.
Executing "Get" refreshes the metadata header and issue summaries. Executing "Put" updates all the listed issues. It applies any changes made to the metadata header and, if any text has been entered between the header and the first issue line, posts that text as a comment. If all operations succeed, Put then refreshes the window as Get does. The milestone list window, opened by loading any of the names "milestone", "Milestone", or "Milestones", displays the open project milestones, sorted by due date, along with the number of open issues in each. For example: Loading one of the listed milestone names opens a search for issues in that milestone. The -e flag enables basic editing of issues with editors other than acme. The editor invoked is $VISUAL if set, $EDITOR if set, or else ed. Issue prepares a textual representation of issue data in a temporary file, opens that file in the editor, waits for the editor to exit, and then applies any changes from the file to the actual issues. When <query> is a single number, issue -e edits a single issue. See the “Issue Window” section above. If the <query> is the text "new", issue -e creates a new issue. See the “Issue Creation Window” section above. Otherwise, for general queries, issue -e edits multiple issues in bulk. See the “Bulk Edit Window” section above. The -json flag causes issue to print the results in JSON format using these data structures: If asked for a specific issue, the output is an Issue with Comments. Otherwise, the result is an array of Issues without Comments.
Package radolan parses the DWD RADOLAN / RADVOR radar composite format. This data is available at the Open Data Portal (https://www.dwd.de/DE/leistungen/opendata/opendata.html). The obtained results can be processed and visualized with additional functions. Tested input products and grids: These can be considered to work with sufficient accuracy. In cases where the publicly available format specification is imprecise or contradictory, reverse engineering was used to arrive at a reasonable approach.
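A short sketch of reading one composite file; the import path, the NewComposite constructor, and the At accessor are assumptions based on the description above:

	package main

	import (
		"fmt"
		"log"
		"os"

		"example.com/radolan" // import path assumed
	)

	func main() {
		// Example RADOLAN RW product file name; use whatever file you downloaded.
		f, err := os.Open("raa01-rw_10000-latest-dwd---bin")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		// Parse the composite from the reader (constructor name assumed).
		c, err := radolan.NewComposite(f)
		if err != nil {
			log.Fatal(err)
		}

		// Access a grid value; the accessor and its units depend on the product.
		fmt.Println(c.At(450, 450))
	}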
Package tie provides a Processing-like API for simple and fun drawing, game making, data and algorithm visualization, and generally - art :). To start writing a new sketch, you need to initialize the engine first: Then you need to pass the functions you want to act as the ones listed below, in the right order; only preload, setup, and draw are necessary: To do that, call PassFunctions with the functions you want to use as arguments: The only thing left is launching the engine with: The whole sketch should look something like this: For more examples visit https://github.com/franeklubi/tie-examples/
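A rough sketch of such a sketch file; PassFunctions is taken from the description above, while the import path and the Init and Launch names and arguments are assumptions, so check the examples repository for the real signatures:

	package main

	import "github.com/franeklubi/tie" // import path assumed

	func preload() { /* load images, fonts, etc. */ }
	func setup()   { /* one-time configuration before drawing starts */ }
	func draw()    { /* called every frame to draw the sketch */ }

	func main() {
		// Initialize the engine (function name and arguments assumed).
		tie.Init(800, 600, "my sketch")

		// Pass the sketch callbacks in the required order.
		tie.PassFunctions(preload, setup, draw)

		// Start the engine's main loop (function name assumed).
		tie.Launch()
	}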
Banshee is a real-time anomaly (outlier) detection system for periodic metrics. We use it to monitor our website and RPC service interfaces, including call frequency, response time and exception calls. Our services send statistics to statsd; statsd aggregates them every 10 seconds and broadcasts the results to its backends, including banshee. Banshee analyzes the current metrics against history data, calculates the trending, and alerts us if the trending behaves anomalously. For example, we have an API named get_user; this API's response time (in milliseconds) is reported to banshee from statsd every 10 seconds: Banshee will catch the latest metric 300 and report it as an anomaly. Why don't we just set a fixed threshold instead (e.g. 200ms)? That may also work, but it is tedious to maintain a lot of thresholds. Banshee analyzes metric trendings automatically; it finds the "thresholds" by itself. 1. Designed for periodic metrics. Real-world metrics are usually periodic, and banshee only compares metrics with the same "phase" when detecting. 2. Multiple alerting rule configuration options, to alert via fixed thresholds or via anomalous trendings. 3. Comes with an anomaly visualization webapp and alerting rule admin panels. 4. Requires no extra storage services; banshee handles storage on disk by itself. 1. Go >= 1.4 and godep. 2. Node and gulp. 1. Clone the repo. 2. Build the binary via `make`. 3. Build the static files via `make static`. Usage: Flags: See package config. In order to forward metrics to banshee from statsd, we need to add the npm module statsd-banshee to statsd's backends: 1. Install statsd-banshee on your statsd servers: 2. Add the module statsd-banshee to statsd's backends in config.js: Requires bell.js v2.0+ and banshee v0.0.7+: Banshee has 4 components and they run in the same process: 1. The detector detects incoming metrics against history data and stores the results. 2. The webapp visualizes the detection results and provides panels to manage alerting rules, projects and users. 3. The alerter sends SMS and emails once anomalies are found. 4. The cleaner removes outdated metrics from storage. See package alerter and alerter/exampleCommand. 1. Detection algorithms, see package detector. 2. Detector input net protocol, see package detector. 3. Storage, see package storage. 4. Filter, see package filter. MIT (c) eleme, inc.
Package jin Copyright (c) 2020 eco. License that can be found in the LICENSE file. "your wish is my command" Jin is a comprehensive JSON manipulation tool bundle. All functions are tested with random data with the help of Node.js. All test-path and test-value creation is automated with Node.js. Jin provides parse, interpret, build and format tools for JSON. Third-party packages are only used for the benchmarks; no dependencies are needed for the core functions. We benchmarked Jin against other, similar packages; Jin is the fastest (ns/op) and more memory-friendly than the others (B/op). For more information please take a look at the BENCHMARK section below. WHAT IS NEW? 7 new functions tested and added to the package. - GetMap() gets an object as a map[string]string structure of key-value pairs - GetAll() gets only specific keys' values - GetAllMap() gets only specific keys as a map[string]string structure - GetKeys() gets an object's keys as a string array - GetValues() gets an object's values as a string array - GetKeysValues() gets an object's keys and values as separate string arrays - Length() gets the length of a JSON array. 06.04.2020 INSTALLATION And you are good to go. Import and start using. The major difference between parsing and interpreting is that the parser has to read all the data before answering your request, whereas the interpreter reads only as far as needed to find the data you want. With the parser, once the parse is complete you can access any data with no extra cost, but there is a time cost to parsing all the data, and this cost grows as the data content grows. If you need to access all keys of a JSON document, we simply recommend the Parser. But if you only need to access some keys, we strongly recommend the Interpreter; it will be much faster and much more memory-friendly than the parser. The Interpreter is the core element of this package; there is no need to create an Interpreter type, just call whichever function you want. First let's look at the general function parameters. We are going to use the Get() function to access the value that the path points to. In this case 'jin'. A path can consist of hard-coded values. The Get() function's return type is []byte, but all other return-type variations are implemented as separate functions. For example, if you need "value" as a string, use GetString(). The Parser is another alternative for JSON manipulation. We recommend this structure when you need to access all or most of the keys in the JSON. The Parser constructor needs only one parameter. We can parse it with the Parse() function. Let's look at Parser.Get(). For the path value, see above. All the return-type variations of Parser.Get() exist, just as for the Interpreter. To return a string use Parser.GetString(), like this. Other useful functions of Jin: Add(), AddKeyValue(), Set(), SetKey(), Delete(), Insert(), IterateArray(), IterateKeyValue(), Tree(). Let's look at the IterateArray() function. There are two formatting functions, Flatten() and Indent(). Indent() adds indentation to JSON for nicer visualization and Flatten() removes this indentation. Control functions are a simple and easy way to check the value type at any path. For example, IsArray(). Or you can use GetType(). There are lots of JSON build functions in this package and each of them has its own examples. We just want to mention a couple of them. Scheme is a simple and powerful tool for creating JSON schemes. Testing is a very important thing for this type of package and it shows how reliable it is. For that reason we use Node.js for unit testing. Let's look at the folder arrangement and working principle.
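Before that, here is a brief usage sketch of the Get/GetString and Parser flow described above; the exact variadic path signatures are assumptions, so treat it as illustrative:

	package main

	import (
		"fmt"
		"log"

		"github.com/ecoshub/jin" // path taken from the benchmark reference below
	)

	func main() {
		data := []byte(`{"user":{"name":"eco","langs":["go","js"]}}`)

		// Interpreter style: read only the value the path points to.
		name, err := jin.GetString(data, "user", "name")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(name) // eco

		// Parser style: parse once, then query repeatedly.
		p, err := jin.Parse(data)
		if err != nil {
			log.Fatal(err)
		}
		lang, err := p.GetString("user", "langs", "0")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(lang) // go
	}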
- test/ folder: test-json.json is a temporary file for testing; all other test cases are copied here under this name so they can be processed by test-case-creator.js. test-case-creator.js is the core path & value creation mechanism. It is run via the executeNode() function. It reads the test-json.json file and generates paths and values from this file's content. With command-line arguments it can generate different paths and values. As a result, two files are created by this process: the first is test-json-paths.json and the second is test-json-values.json. test-json-paths.json holds all the path values. test-json-values.json holds all the values corresponding to those paths. - tests/ folder: Every file in this folder is a test case. That doesn't mean you can't change anything; on the contrary, all test cases are created automatically based on this folder's content. You can add or remove any .json file you want. All Go-side test-case automation functions are in the core_test.go file. This package was developed with Node.js v13.7.0; please make sure that your machine has a valid version of Node.js before testing. All functions and methods are tested with complicated, randomly generated .json files, like this: Most JSON packages do not even run properly with this kind of JSON stream. We do not see such packages as competitors, and that is why we did not bother benchmarking against them. Benchmark results. - The Benchmark prefix is removed from function names to make room for the results. - The benchmark between 'buger/jsonparser' and 'ecoshub/jin' uses the same payload (JSON test cases) that the 'buger/jsonparser' package uses for its own benchmarks. We are currently working on: - Marshal() and Unmarshal() functions. - an http.Request parser/interpreter - builder functions for http.ResponseWriter If you want to contribute to this work, feel free to fork it. We want to fill this section with contributors.
Banshee is a real-time anomaly (outlier) detection system for periodic metrics. We use it to monitor our website and RPC service interfaces, including call frequency, response time and exception calls. Our services send statistics to statsd; statsd aggregates them every 10 seconds and broadcasts the results to its backends, including banshee. Banshee analyzes the current metrics against history data, calculates the trending, and alerts us if the trending behaves anomalously. For example, we have an API named get_user; this API's response time (in milliseconds) is reported to banshee from statsd every 10 seconds: Banshee will catch the latest metric 300 and report it as an anomaly. Why don't we just set a fixed threshold instead (e.g. 200ms)? That may also work, but it is tedious to maintain a lot of thresholds. Banshee analyzes metric trendings automatically; it finds the "thresholds" by itself. 1. Designed for periodic metrics. Real-world metrics are usually periodic, and banshee only compares metrics with the same "phase" when detecting. 2. Multiple alerting rule configuration options, to alert via fixed thresholds or via anomalous trendings. 3. Comes with an anomaly visualization webapp and alerting rule admin panels. 4. Requires no extra storage services; banshee handles storage on disk by itself. 1. Go >= 1.5. 2. Node and gulp. 1. Clone the repo. 2. Build the binary via `make`. 3. Build the static files via `make static`. Usage: Flags: See package config. In order to forward metrics to banshee from statsd, we need to add the npm module statsd-banshee to statsd's backends: 1. Install statsd-banshee on your statsd servers: 2. Add the module statsd-banshee to statsd's backends in config.js: Requires bell.js v2.0+ and banshee v0.0.7+: Banshee has 4 components and they run in the same process: 1. The detector detects incoming metrics against history data and stores the results. 2. The webapp visualizes the detection results and provides panels to manage alerting rules, projects and users. 3. The alerter sends SMS and emails once anomalies are found. 4. The cleaner removes outdated metrics from storage. See package alerter and alerter/exampleCommand. Via fabric (http://www.fabfile.org/): See the deploy.py docs for more. Just pull the latest code: Note that the admin storage sqlite3 schema will be auto-migrated. 1. Detection algorithms, see package detector. 2. Detector input net protocol, see package detector. 3. Storage, see package storage. 4. Filter, see package filter. MIT (c) eleme, inc.
Banshee is a real-time anomaly (outlier) detection system for periodic metrics. We use it to monitor our website and RPC service interfaces, including call frequency, response time and exception calls. Our services send statistics to statsd; statsd aggregates them every 10 seconds and broadcasts the results to its backends, including banshee. Banshee analyzes the current metrics against history data, calculates the trending, and alerts us if the trending behaves anomalously. For example, we have an API named get_user; this API's response time (in milliseconds) is reported to banshee from statsd every 10 seconds: Banshee will catch the latest metric 300 and report it as an anomaly. Why don't we just set a fixed threshold instead (e.g. 200ms)? That may also work, but it is tedious to maintain a lot of thresholds. Banshee analyzes metric trendings automatically; it finds the "thresholds" by itself. 1. Designed for periodic metrics. Real-world metrics are usually periodic, and banshee only compares metrics with the same "phase" when detecting. 2. Multiple alerting rule configuration options, to alert via fixed thresholds or via anomalous trendings. 3. Comes with an anomaly visualization webapp and alerting rule admin panels. 4. Requires no extra storage services; banshee handles storage on disk by itself. 1. Go >= 1.5. 2. Node and gulp. 1. Clone the repo. 2. Build the binary via `make`. 3. Build the static files via `make static`. Usage: Flags: See package config. In order to forward metrics to banshee from statsd, we need to add the npm module statsd-banshee to statsd's backends: 1. Install statsd-banshee on your statsd servers: 2. Add the module statsd-banshee to statsd's backends in config.js: Requires bell.js v2.0+ and banshee v0.0.7+: Banshee has 4 components and they run in the same process: 1. The detector detects incoming metrics against history data and stores the results. 2. The webapp visualizes the detection results and provides panels to manage alerting rules, projects and users. 3. The alerter sends SMS and emails once anomalies are found. 4. The cleaner removes outdated metrics from storage. See package alerter and alerter/exampleCommand. Via fabric (http://www.fabfile.org/): See the deploy.py docs for more. Just pull the latest code: Note that the admin storage sqlite3 schema will be auto-migrated. 1. Detection algorithms, see package detector. 2. Detector input net protocol, see package detector. 3. Storage, see package storage. 4. Filter, see package filter. Reference: https://github.com/eleme/banshee/blob/master/intro.md MIT (c) eleme, inc.
go-xvid provides Go bindings to xvidcore from Xvid 1.3.X (which implements the MPEG-4 Part 2, MPEG-4 Visual, ISO/IEC 14496-2 video codec standard). This library can encode a sequence of images into an encoded Xvid stream, decode images from an encoded Xvid stream, and convert images between different color spaces. go-xvid only handles raw Xvid streams. Nearly all commonly found video files are stored in a media container, which encapsulates, but is not itself, a raw Xvid video stream. go-xvid cannot decode or encode container data, so raw video streams must be decapsulated or encapsulated by other means. go-xvid tries not to abbreviate names and identifiers, so that all the names used can easily be searched on the Internet when they are not known. This means that this documentation will not redefine or explain common codec concepts like macroblocks, quantizers, rate-control, and such. Most of the complex configuration structures can be initialized to sane default values in case the user is not familiar with advanced encoding concepts. Before any other function in the package can be called, Init or InitWithFlags must be called once to initialize all internal Xvid state. There is no Close method corresponding to the Init call. As an exception, GetGlobalInfo, which returns general information about the runtime Xvid build, can be called at any time before and after Init. go-xvid defines a specific error type, Error, which is used to represent internal xvidcore errors. Images in go-xvid are stored in the Image structure, which holds both an image color space and its data as an array of planes, which are themselves arrays of data. Each plane has a specific stride. The classic RGBA color space has only one plane and data array, but some color spaces can have up to three. See Image for more information. Images can be converted from one color space to another with the Convert function. go-xvid can decode a sequence of images from a raw encoded Xvid stream. Decoder is the struct used to decode from a stream. A Decoder is created with NewDecoder, which takes a DecoderInit configuration struct to initialize it. Once created, Decoder.Decode can be called in a loop to decode a single frame at a time until the entire stream has been processed. Each decoded frame contains extra statistics returned by Decoder.Decode. When the Decoder is no longer needed, it must be closed with Decoder.Close to free any internal data. go-xvid can encode a sequence of images to a raw encoded Xvid stream. Encoder is the struct used to encode to a stream. An Encoder is created with NewEncoder, which takes an EncoderInit configuration struct to initialize it, which itself should be initialized with NewEncoderInit to sane default values. Once created, Encoder.Encode can be called in a loop to encode a single image at a time until all the images have been processed. Each encoded frame contains extra statistics returned by Encoder.Encode. When the Encoder is no longer needed, it must be closed with Encoder.Close to free any internal data. Plugins are used to read and write internal frame data when encoding. Some standard plugins are defined in the library but custom ones can be created by implementing the Plugin interface. In Xvid, rate-control is achieved by using plugins (for both 1-pass and 2-pass rate-control). You will probably need to use one of these rate-control plugins when encoding (otherwise the smallest quantizer is always used).
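A condensed sketch of the Init / NewDecoder / Decode / Close sequence described above; the import path, the DecoderInit fields, and the exact Decode signature are simplified assumptions:

	package main

	import (
		"log"

		"example.com/xvid" // import path assumed
	)

	func main() {
		// Initialize internal Xvid state once before any other call.
		if err := xvid.Init(); err != nil {
			log.Fatal(err)
		}

		// Create a decoder from a configuration struct (fields assumed).
		decoder, err := xvid.NewDecoder(xvid.DecoderInit{})
		if err != nil {
			log.Fatal(err)
		}
		defer decoder.Close() // free internal data when done

		var frame xvid.Image
		for {
			// Decode one frame at a time until the stream is exhausted
			// (the Decode arguments and end-of-stream signal are assumptions).
			stats, err := decoder.Decode(&frame)
			if err != nil {
				break
			}
			_ = stats // per-frame statistics
		}
	}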
Package chart implements common chart/plot types. The following chart types are available: Chart tries to provide useful defaults and produce nice charts without sacrificing accuracy. The generated charts look good and are highly customizable, but will not match the visual quality of handmade Photoshop charts or the statistical features of charts produced by S or R. Creating charts consists of the following steps: You may change the configuration at any step or render to different outputs. The different chart types and their fields are all simple struct types where the zero value provides suitable defaults. All fields are exported, even if you are not supposed to manipulate them directly or if they are 'output fields'. E.g. the common Data field of all chart types will store the sample data added with one or more Add... methods. Some fields are mere output which expose internal state for your use, like the Data2Screen and Screen2Data functions of the Ranges. Some fields are even input/output fields: e.g. you may set Range.TicSetting.Delta to some positive value which will be used as the spacing between tics on that axis; on the other hand, if you leave Range.TicSetting.Delta at its default 0, you indicate to the plotting routine that it should automatically determine the tic delta, which is then reported back in this field. All charts (except pie/ring charts) contain at least one axis represented by a field of type Range. Axes can be differentiated into the following categories: How the axis is autoscaled can be controlled for both ends of the axis individually by MinMode and MaxMode, which allow fine control of the (auto-)scaling. After setting up the chart and adding data, samples, or functions, you can render the chart to a Graphics output. This process will set several internal fields of the chart. If you reuse the chart, add additional data and output it again, these fields might no longer indicate 'automatic/default' but contain the value calculated in the first output round.
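A small sketch of those steps (create a chart, add data, render to a Graphics output); the scatter-chart fields, the AddDataPair arguments, and the txtg text backend used here are assumptions about the exact API, so treat it as illustrative:

	package main

	import (
		"fmt"

		"github.com/vdobler/chart"      // import path assumed
		"github.com/vdobler/chart/txtg" // text Graphics backend, assumed
	)

	func main() {
		// Set up the chart; zero values of unset fields provide defaults.
		c := chart.ScatterChart{Title: "Sample"}
		c.XRange.Label, c.YRange.Label = "x", "y"

		// Add sample data (method name and style arguments assumed).
		c.AddDataPair("series A",
			[]float64{1, 2, 3, 4},
			[]float64{2.5, 1.0, 4.2, 3.1},
			chart.PlotStylePoints, chart.Style{})

		// Render to a text-based Graphics output for a quick look.
		tg := txtg.New(100, 30)
		c.Plot(tg)
		fmt.Println(tg.String())
	}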
Package applicationdiscoveryservice provides the client and types for making API requests to AWS Application Discovery Service. AWS Application Discovery Service helps you plan application migration projects by automatically identifying servers, virtual machines (VMs), software, and software dependencies running in your on-premises data centers. Application Discovery Service also collects application performance data, which can help you assess the outcome of your migration. The data collected by Application Discovery Service is securely retained in an Amazon-hosted and managed database in the cloud. You can export the data as a CSV or XML file into your preferred visualization tool or cloud-migration solution to plan your migration. For more information, see the Application Discovery Service FAQ (http://aws.amazon.com/application-discovery/faqs/). Application Discovery Service offers two modes of operation. Agentless discovery mode is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless discovery doesn't collect information about software and software dependencies. It also doesn't work in non-VMware environments. We recommend that you use agent-based discovery for non-VMware environments and if you want to collect information about software and software dependencies. You can also run agent-based and agentless discovery simultaneously. Use agentless discovery to quickly complete the initial infrastructure assessment and then install agents on select hosts to gather information about software and software dependencies. Agent-based discovery mode collects a richer set of data than agentless discovery by using Amazon software, the AWS Application Discovery Agent, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of installed software applications, system and process performance, resource utilization, and network dependencies between workloads. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud. Application Discovery Service integrates with application discovery solutions from AWS Partner Network (APN) partners. Third-party application discovery tools can query Application Discovery Service and write to the Application Discovery Service database using a public API. You can then import the data into either a visualization tool or cloud-migration solution. Application Discovery Service doesn't gather sensitive information. All data is handled according to the AWS Privacy Policy (http://aws.amazon.com/privacy/). You can operate Application Discovery Service using offline mode to inspect collected data before it is shared with the service. Your AWS account must be granted access to Application Discovery Service, a process called whitelisting. This is true for AWS partners and customers alike. To request access, sign up for AWS Application Discovery Service here (http://aws.amazon.com/application-discovery/preview/). We send you information about how to get started. This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. 
Alternatively, you can use one of the AWS SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see AWS SDKs (http://aws.amazon.com/tools/#SDKs). This guide is intended for use with the AWS Application Discovery Service User Guide (http://docs.aws.amazon.com/application-discovery/latest/userguide/). See https://docs.aws.amazon.com/goto/WebAPI/discovery-2015-11-01 for more information on this service. See the applicationdiscoveryservice package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/ To contact AWS Application Discovery Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See the aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Application Discovery Service client ApplicationDiscoveryService for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/#New
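For example, a client can be created and used like this (a minimal sketch following the standard aws-sdk-go session pattern; region and credentials come from your own configuration):

	package main

	import (
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/applicationdiscoveryservice"
	)

	func main() {
		// Create a session; credentials are read from the environment or config files.
		sess, err := session.NewSession(&aws.Config{Region: aws.String("us-west-2")})
		if err != nil {
			log.Fatal(err)
		}

		// Create the Application Discovery Service client.
		svc := applicationdiscoveryservice.New(sess)

		// Example call: list configured agents (input fields omitted for brevity).
		out, err := svc.DescribeAgents(&applicationdiscoveryservice.DescribeAgentsInput{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(out)
	}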
Package pprof-garbage writes runtime profiling data in the format expected by the pprof visualization tool. The profile shows estimates for garbage allocations over a given time duration: See https://github.com/golang/go/issues/16629 for more details.
Package cors provides handlers to enable CORS support. Package expvar provides a standardized interface to public variables, such as operation counters in servers. It exposes these variables via HTTP at /debug/vars in JSON format. Operations to set or modify these public variables are atomic. In addition to adding the HTTP handler, this package registers the following variables: The package is sometimes only imported for the side effect of registering its HTTP handler and the above variables. To use it this way, link this package into your program: Package pprof serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool. For more information about pprof, see http://code.google.com/p/google-perftools/. The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/. To use pprof, link this package into your program: If your application is not already running an http server, you need to start one. Add "net/http" and "log" to your imports and the following code to your main function: Then use the pprof tool to look at the heap profile: Or to look at a 30-second CPU profile: Or to look at the goroutine blocking profile: To view all available profiles, open http://localhost:6060/debug/pprof/ in your browser. For a study of the facility in action, visit
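For example, a program that does not already run an HTTP server can expose the profiling endpoints like this:

	package main

	import (
		"log"
		"net/http"
		_ "net/http/pprof" // registers the /debug/pprof/ handlers as a side effect
	)

	func main() {
		// Serve the profiling endpoints on localhost:6060.
		go func() {
			log.Println(http.ListenAndServe("localhost:6060", nil))
		}()

		// ... the rest of the application runs here ...
		select {}
	}

	// Then, from a shell:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	//   go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30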
pprofetheus is a collector for Prometheus that collects CPU profiling data for the current process and exports it as metrics. It can be used to monitor, visualize, and alert on profiling data from any Go process that imports pprofetheus and exports metrics via Prometheus. In order to use pprofetheus in your Prometheus-enabled Go application, you just need to install and then import the package, and set up the collector with Prometheus in your code, e.g. like this:
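A hedged sketch of that setup; the import path, collector constructor, and Start method are assumptions based on the description above, while the registration and handler calls are the standard Prometheus client_golang API:

	package main

	import (
		"log"
		"net/http"

		"github.com/prometheus/client_golang/prometheus"
		"github.com/prometheus/client_golang/prometheus/promhttp"
		"example.com/pprofetheus" // import path assumed; use the actual module path
	)

	func main() {
		// Create the CPU profiling collector (constructor name assumed).
		collector, err := pprofetheus.NewCPUProfileCollector()
		if err != nil {
			log.Fatal(err)
		}

		// Register it like any other Prometheus collector and start collecting.
		prometheus.MustRegister(collector)
		collector.Start() // method name assumed

		// Expose the metrics endpoint.
		http.Handle("/metrics", promhttp.Handler())
		log.Fatal(http.ListenAndServe(":8080", nil))
	}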
Trek is a simple collector and visualization tool for IOTracker data. The setup is as follows: - the iotracker is configured via the iotracker console (note the parameters) - create an account on The Things Network (TTN) - create an application in TTN and add the device with the previously noted parameters - create an MQTT API key Based on the above, the data flows as follows: - the iotracker sends a message via LoRa - one or more LoRa gateways nearby pick the message up and relay it to TTN - TTN forwards it to MQTT subscribers - trek picks up the messages sent via MQTT and stores them in a SQLite DB - messages are visualized in a web UI - the web UI can also be used to send messages to the iotracker to reconfigure it References: - https://docs.iotracker.eu/devices/iot3/ - https://docs.iotracker.eu/configuration/introduction/ - https://www.thethingsnetwork.org/docs/applications/mqtt/quick-start/
Package pprof is a fork of net/http/pprof modified to communicate over a unix socket. --------------------------------------------------------------- Package pprof serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool. For more information about pprof, see http://code.google.com/p/google-perftools/. The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/. To use pprof, link this package into your program: If your application is not already running an http server, you need to start one. Add "net/http" and "log" to your imports and the following code to your main function: Then use the pprof tool to look at the heap profile: Or to look at a 30-second CPU profile: Or to look at the goroutine blocking profile: To view all available profiles, open http://localhost:6060/debug/pprof/ in your browser. For a study of the facility in action, visit
Package geobin.io runs a web server which creates a geobin URL that can receive geo data via POSTs and visualizes it on a map.
Package log is an important part of the application, and having a consistent logging mechanism and structure is mandatory. With several teams writing different components that talk to each other, being able to read each other's logs could be the difference between finding bugs quickly and wasting hours. With the log package in the standard library, we have the ability to create custom loggers that can be configured to write to one or many devices. Since we use syslog to send logging output to a central log repository, our logger can be configured to just write to stdout. This not only simplifies things for us, but will keep each log trace in the correct sequence. This package does not include logging levels. Everything needs to be logged to help trace the code and find bugs. There is no such thing as over-logging. By the time you decide to change the logging level, it is always too late. The question of performance comes up quite a bit. If the only performance issue we see is coming from logging, we are doing very well. I have had these opinions for a long time, but if you want more clarity on the subject, listen to this podcast: Jon Gifford On Logging And Logging Infrastructure: Robert Blumen talks to Jon Gifford of Loggly about logging and logging infrastructure. Topics include logging defined, purposes of logging, uses of logging in understanding the run-time behavior of programs, who produces logs, who consumes logs and for what reasons, software as the consumer of logs, log formats (structured versus free form), log metadata, logging APIs, logging as coding, logging and frameworks, the massive hairball of log file management, modern logging infrastructure in which log records are stored and indexed in a search engine, how searchable logs have transformed the uses of log data, log data and analytics, leveraging the log database for statistical insights, performance and resource issues of logging, whether logs are really different from other data that systems record in databases, and how log visualization gives users insights into their system. The show wraps up with a discussion of open source logging platforms versus commercial SaaS providers. There are two types of tracing lines we need to log. One is a trace line that describes where the program is, what it is doing, and any data associated with that trace. The second is formatted data such as a JSON document or a binary dump of data. Each serves a different purpose, but both exist within the same scope of space and time. The format of each trace line needs to be consistent and helpful, or else the logging will just be noise and ultimately useless. Here is a breakdown of each section and a sample value: Here are examples of how trace lines would show up in the log: In the end, we want to see the flow of most functions starting and completing so we can follow the code in the logs. We want to quickly see and filter errors, which can be accomplished by using a capitalized version of the word ERROR. The context is an important value. The context allows us to extract trace lines for one context over others. Maybe in this case 8890 represents a user id. When there is a need to dump formatted data into the logs, there are three approaches.
If the data can be represented as key/value pairs, you can write each pair on its own line with the DATA tag: When there is a single block of data to dump, it can be written as a single multi-line trace: When special block formatting is required, the Stringer interface can be implemented to format the data in custom ways: The API for the log package is focused on initializing the logger and then provides function abstractions for the different tags we have defined.
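A hedged sketch of the Stringer approach mentioned above. The request type, the DATA tag layout, and the field names are illustrative, not the package's exact format:

    package main

    import (
        "fmt"
        "log"
        "strings"
    )

    // request is a hypothetical type whose dump format we want to control.
    type request struct {
        ID     int
        Method string
        Path   string
    }

    // String implements fmt.Stringer so the value is written as a single,
    // custom-formatted block when handed to the logger.
    func (r request) String() string {
        var b strings.Builder
        fmt.Fprintf(&b, "DATA: id     : %d\n", r.ID)
        fmt.Fprintf(&b, "DATA: method : %s\n", r.Method)
        fmt.Fprintf(&b, "DATA: path   : %s", r.Path)
        return b.String()
    }

    func main() {
        r := request{ID: 8890, Method: "GET", Path: "/users"}
        // %v picks up the Stringer implementation automatically.
        log.Printf("%v\n", r)
    }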
mail account ping (maping) - a utility for checking sets of mail servers (SMTP/IMAPv4). It saves results to a database and can generate an SVG data-visualization matrix from the results. For the moment, please refer to the documentation at https://github.com/nfdesign/maping
Package multimap provides an abstract MultiMap interface. Multimap is a collection that maps keys to values, similar to a map. However, each key may be associated with multiple values. You can visualize the contents of a multimap either as a map from keys to nonempty collections of values: ... or as a single "flattened" collection of key/value pairs. Similar to a map, the operations associated with this data type allow: - the addition of a pair to the collection - the removal of a pair from the collection - the lookup of the values associated with a particular key - the lookup of whether a key, value, or key/value pair exists in this data type.
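An illustrative rendering of such an interface; the method names and signatures are assumptions for the sake of the sketch, not necessarily the package's actual API:

    package multimap

    // Entry is a hypothetical key/value pair, the unit of the "flattened" view.
    type Entry struct {
        Key   interface{}
        Value interface{}
    }

    // MultiMap sketches the operations described above.
    type MultiMap interface {
        // Put adds a key/value pair to the collection.
        Put(key, value interface{})
        // Remove removes a single key/value pair, if present.
        Remove(key, value interface{})
        // Get returns all values associated with key; found is false when
        // the key is absent.
        Get(key interface{}) (values []interface{}, found bool)
        // Contains reports whether the given key/value pair exists.
        Contains(key, value interface{}) bool
        // ContainsKey reports whether the key is associated with any value.
        ContainsKey(key interface{}) bool
        // KeySet returns the distinct keys: the "map" view.
        KeySet() []interface{}
        // Entries returns every key/value pair: the "flattened" view.
        Entries() []Entry
    }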
Package lttb implements the Largest-Triangle-Three-Buckets algorithm for downsampling points. The downsampled data maintains the visual characteristics of the original line using considerably fewer data points. This is a translation of the javascript code at
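A compact, self-contained sketch of the algorithm itself (not the package's code); the Point type and the function name are illustrative:

    package main

    import (
        "fmt"
        "math"
    )

    // Point is an illustrative (x, y) sample.
    type Point struct{ X, Y float64 }

    // downsample keeps the first and last points and, for each of the
    // threshold-2 buckets in between, keeps the point forming the largest
    // triangle with the previously kept point and the average of the next
    // bucket.
    func downsample(data []Point, threshold int) []Point {
        if threshold >= len(data) || threshold < 3 {
            return data
        }
        out := make([]Point, 0, threshold)
        out = append(out, data[0]) // always keep the first point

        size := float64(len(data)-2) / float64(threshold-2)
        a := 0 // index of the last point kept
        for i := 0; i < threshold-2; i++ {
            // Average of the next bucket: the third corner of the triangle.
            ns, ne := int(float64(i+1)*size)+1, int(float64(i+2)*size)+1
            if ne > len(data) {
                ne = len(data)
            }
            var ax, ay float64
            for j := ns; j < ne; j++ {
                ax += data[j].X
                ay += data[j].Y
            }
            ax /= float64(ne - ns)
            ay /= float64(ne - ns)

            // Pick the point of the current bucket with the largest triangle area.
            s, e := int(float64(i)*size)+1, int(float64(i+1)*size)+1
            best, bestArea := s, -1.0
            for j := s; j < e; j++ {
                area := math.Abs((data[a].X-ax)*(data[j].Y-data[a].Y) -
                    (data[a].X-data[j].X)*(ay-data[a].Y))
                if area > bestArea {
                    best, bestArea = j, area
                }
            }
            out = append(out, data[best])
            a = best
        }
        return append(out, data[len(data)-1]) // always keep the last point
    }

    func main() {
        pts := make([]Point, 1000)
        for i := range pts {
            pts[i] = Point{X: float64(i), Y: math.Sin(float64(i) / 20)}
        }
        fmt.Println(len(downsample(pts, 100))) // 100
    }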
Package tracing includes high-level tools for instrumenting your application (and library) code using OpenTelemetry and go-logr. This is done by interconnecting logs and traces, such that critical operations that need to be instrumented start a tracing span using the *TracerBuilder builder. Upon starting a span, the user gives it the context in which it is operating. If the context contains a parent span, the new "child" span and the parent are connected together. Various types of metadata can be registered with the span, for example attributes, status information, and errors. Spans always need to be ended, most commonly using a defer statement right after creation. The context given to the *TracerBuilder might carry a TracerProvider to use for exporting span data, e.g. to Jaeger for visualization, or a logr.Logger to which logs are sent. The context can also carry a LogLevelIncreaser, which correlates log levels to trace depth. The core idea of interconnecting logs and traces is that when some metadata is registered with a span (for example, it starts, ends, or has attributes or errors registered), information about this is also logged. And upon logging something in a function that is executing within a span, it is also registered with the span. This means you have two ways of looking at your application's execution: the "waterfall" visualization of spans in a trace in an OpenTelemetry-compliant UI like Jaeger, or pluggable logging using logr. Additionally, there is a way to output semi-human-readable YAML data based on the trace information, which is useful when you want to unit test a function based on its output trace data using a "golden file" in a testdata/ directory. Let's talk about trace depth and log levels. Consider this example trace (tree of spans): Span A is at depth 0, as this is a "root span". Inside of span A, span B starts, at depth 1 (span B has exactly one parent span). Span B spawns span C at depth 2. Span B ends, and after this span D starts at depth 1, as a child of span A. After D is done executing, span A also ends after a while. Using the TraceEnabler interface, the user can decide which spans are "enabled" and hence sent to the TracerProvider backend, for example Jaeger. By default, spans of any depth are sent to the backing TracerProvider, but this is often not desirable in production. The TraceEnabler can decide whether a span should be enabled based on all the data in tracing.TracerConfig, which includes e.g. the span name, trace depth, and so on. For example, MaxDepthEnabler(maxDepth) allows all traces with depth maxDepth or less, while LoggerEnabler() allows traces as long as the given Logger is enabled. With that, let's take a look at how trace depth correlates with log levels. The LogLevelIncreaser interface, possibly attached to a context, determines how much the log level (verbosity) should increase as an effect of the trace depth increasing. The NoLogLevelIncrease() implementation, for example, never increases the log level even though the trace depth gets arbitrarily deep. However, that is most often not desired, so there is also an NthLogLevelIncrease(n) implementation that raises the log level every n-th increase of trace depth. For example, given the earlier example, the log level (often shortened to "v") is increased as follows for NthLogLevelIncrease(2): As logr.Loggers work, log levels can never be decreased, i.e. become less verbose; they can only be increased.
The logr.Logger backend enables log levels up to a given maximum, configured by the user, similar to how MaxDepthEnabler works. Log output for the example above would look something like: This is of course a somewhat dull example, because only the start/end span events are logged, but it shows the spirit. If span operations like span.Set{Name,Attributes,Status} are executed within the instrumented function, e.g. to record errors, important return values, arbitrary attributes, or a decision, this information is logged automatically, without a need to call log.Info() separately. At the same time, all trace data is nicely visualized in Jaeger :). For convenience, a builder-pattern constructor for the zap logger, compliant with the Logger interface, is provided through the ZapLogger() function and the zaplog sub-directory. In package traceyaml there are utilities for unit testing the traces. In package filetest there are utilities for using "golden" testdata/ files to compare the actual output of loggers, tracers, and general writers against expected output. Both the TracerProviderBuilder and zaplog.Builder support deterministic output for unit tests and examples. The philosophy behind this package is that instrumentable code (functions, structs, and so on) should use the TracerBuilder to start spans, and will from there get a Span and Logger implementation to use. It is safe for libraries used by other consumers to use the TracerBuilder as well; if the user didn't want or request tracing or logging, all calls to the Span and Logger will be discarded. The application owner wanting to (maybe conditionally) enable tracing and logging creates "backend" implementations of TracerProvider and Logger, e.g. using the TracerProviderBuilder and/or zaplog.Builder. These backends control where the telemetry data is sent and how much of it is enabled. These "backend" implementations are either attached specifically to a context or registered globally. Using this setup, telemetry can be enabled even on the fly, using e.g. an HTTP endpoint for debugging a production system. Have fun using this library and happy tracing!
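Since the package builds on OpenTelemetry, the underlying span pattern it wraps looks roughly like the following; this sketch uses only standard OpenTelemetry calls and does not show the package's own *TracerBuilder API:

    package example

    import (
        "context"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/attribute"
        "go.opentelemetry.io/otel/codes"
    )

    // doWork starts a child span from the incoming context, registers metadata,
    // and always ends the span via defer.
    func doWork(ctx context.Context, item string) (err error) {
        ctx, span := otel.Tracer("example").Start(ctx, "doWork")
        defer span.End() // spans always need to be ended

        span.SetAttributes(attribute.String("item", item))

        if err = process(ctx, item); err != nil {
            span.RecordError(err)
            span.SetStatus(codes.Error, "processing failed")
            return err
        }
        span.SetStatus(codes.Ok, "")
        return nil
    }

    func process(ctx context.Context, item string) error { return nil }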
Issue is a client for reading and updating issues in a GitHub project issue tracker. Issue runs the query against the given project's issue tracker and prints a table of matching issues, sorted by issue summary. The default owner/repo is golang/go. If multiple arguments are given as the query, issue joins them by spaces to form a single issue search. These two commands are equivalent: Searches are always limited to open issues. If the query is a single number, issue prints that issue in detail, including all comments. Issue expects to find a GitHub "personal access token" in $HOME/.github-issue-token and will use that token to authenticate to GitHub when reading or writing issue data. A token can be created by visiting https://github.com/settings/tokens/new. The token only needs the 'repo' scope checkbox, and optionally 'private_repo' if you want to work with issue trackers for private repositories. It does not need any other permissions. The -token flag specifies an alternate file from which to read the token. If the -a flag is specified, issue runs as a collection of acme windows instead of a command-line tool. In this mode, the query is optional. If no query is given, issue uses "state:open". There are five kinds of acme windows: issue, issue creation, issue list, search result, and milestone list. The following text forms can be looked for (right-clicked on) to open a window (or navigate to an existing one). Executing "New" opens an issue creation window. Executing "Search <query>" opens a new window showing the results of that search. An issue window, opened by loading an issue number, displays full detail about an issue, a header followed by each comment. For example: Executing "Get" reloads the issue data. Executing "Put" updates an issue. It saves any changes to the issue header and, if any text has been entered between the header and the "Reported by" line, posts that text as a new comment. If both succeed, Put then reloads the issue data. The "Closed" and "URL" headers cannot be changed. An issue creation window, opened by executing "New", is like an issue window but displays only an empty issue template: Once the template has been completed (only the title is required), executing "Put" creates the issue and converts the window into an issue window for the new issue. An issue list window displays a list of all open issue numbers and titles. If the project has any open milestones, they are listed in a header line. For example: As in any window, right-clicking on an issue number opens a window for that issue. A search result window, opened by executing "Search <query>", displays a list of issues matching a search query. It shows the query in a header line. For example: Executing "Sort" in a search result window toggles between sorting by title and sorting by decreasing issue number. Executing "Bulk" in an issue list or search result window opens a new bulk edit window applying to the displayed issues. If there is a non-empty text selection in the issue list or search result list, the bulk edit window is restricted to issues in the selection. The bulk edit window consists of a metadata header followed by a list of issues, like: The metadata header shows only metadata shared by all the issues. In the above example, all four issues are open and have milestone Go1.4.3, but they have neither common labels nor a common assignee. The bulk edit applies to the issues listed in the window text; adding or removing issue lines changes the set of issues affected by Get or Put operations.
Executing "Get" refreshes the metadata header and issue summaries. Executing "Put" updates all the listed issues. It applies any changes made to the metadata header and, if any text has been entered between the header and the first issue line, posts that text as a comment. If all operations succeed, Put then refreshes the window as Get does. The milestone list window, opened by loading any of the names "milestone", "Milestone", or "Milestones", displays the open project milestones, sorted by due date, along with the number of open issues in each. For example: Loading one of the listed milestone names opens a search for issues in that milestone. The -e flag enables basic editing of issues with editors other than acme. The editor invoked is $VISUAL if set, $EDITOR if set, or else ed. Issue prepares a textual representation of issue data in a temporary file, opens that file in the editor, waits for the editor to exit, and then applies any changes from the file to the actual issues. When <query> is a single number, issue -e edits a single issue. See the “Issue Window” section above. If the <query> is the text "new", issue -e creates a new issue. See the “Issue Creation Window” section above. Otherwise, for general queries, issue -e edits multiple issues in bulk. See the “Bulk Edit Window” section above. The -json flag causes issue to print the results in JSON format using these data structures: If asked for a specific issue, the output is an Issue with Comments. Otherwise, the result is an array of Issues without Comments.
The sockdrawer command is an analysis and visualization tool to help you reorganize a complex Go package into several simpler ones. sockdrawer operates on three kinds of graphs at different levels of abstraction. The lowest level is the NODE GRAPH. A node is a package-level declaration of a named entity (func, var, const or type). An entire constant declaration is treated as a single node, even if it contains multiple "specs" each defining multiple names, since constants so grouped are typically closely related; an important special case is an enumerated set data type. Also, we treat each "spec" of a var or type declaration as a single node. Each reference to a package-level entity E forms an edge in the node graph, from the node in which it appears to the node for E; for example, see the small sketch following this overview. Each method declaration depends on its receiver named type; in addition we add an edge from each receiver type to its methods, to ensure that a type and its methods stay together. The node graph is highly cyclic, and obviously all nodes in a cycle must belong to the same package for the package import graph to remain acyclic. So, we compute the second graph, the SCNODE GRAPH. In essence, the scnode graph is the graph of strongly connected components (SCCs) of the (ordinary) node graph. By construction, the scnode graph is acyclic. We optionally perform an optimization at this point, which is to fuse single-predecessor scnodes with their sole predecessor, as this tends to reduce clutter in big graphs. This means that the scnodes are no longer true SCCs; however, the scnode graph remains acyclic. We define a valid PARTITION P of the scnode graph as a mapping from scnodes to CLUSTERS such that the projection of the scnode graph using mapping P is an acyclic graph. This third graph is the CLUSTER GRAPH. Every partition represents a valid refactoring of the original package into hypothetical subpackages, each cluster being a subpackage. Two partitions define the extreme ends of a spectrum: the MINIMAL partition maps every scnode to a single cluster; it represents the status quo, a monolithic package. The MAXIMAL partition maps each scnode to a unique cluster; this breaks the package up into an impractically large number of small fragments. The ideal partition lies somewhere in between. The --clusters=<file> argument specifies a CLUSTERS FILE that constrains the partition algorithm. The file consists of a number of stanzas, each assigning an import path to a cluster ("mypkg/internal/util") and assigning a set of initial nodes ({x, y, z}) to it: Order of stanzas is important: clusters must be declared bottom to top. After each stanza, all nodes transitively reachable (via the node graph) from that cluster are assigned to that cluster, if they have not yet been assigned to some other cluster. Thus we need only mention the root nodes of the cluster, not all its internal nodes. A warning is reported if a node mentioned in a stanza already belongs to a previously defined cluster. There is an implicit cluster, "residue", that holds all remaining nodes after the clusters defined by the file have been processed. Initially, when the clusters file is empty, the residue cluster contains the entire package. (It is logically at the top.) The task for the user is to iteratively define new clusters until the residue becomes empty.
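A small illustration of the node graph described above; the declarations and names here are hypothetical:

    package example

    // Each package-level declaration below is a node in the node graph; each
    // reference to another package-level entity is an edge.

    const limit = 10 // node: limit

    type Counter struct{ n int } // node: Counter

    // Inc is a method node; there is an edge Inc -> Counter (its receiver type)
    // and an edge Counter -> Inc so the type and its methods stay together.
    // The reference to limit adds the edge Inc -> limit.
    func (c *Counter) Inc() {
        if c.n < limit {
            c.n++
        }
    }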
When sockdrawer is run, it analyzes the source package, builds the node graph and the scgraph, loads the clusters file, computes the clusters for every node, and then emits SVG renderings of the three levels of graphs, with nodes color-coded as follows: The graph of all clusters, a DAG, has green nodes; clicking one takes you to the graph over scnodes for that cluster, also a DAG. Each pink node in this graph represents a cyclical bunch of the node graph, collapsed together for ease of viewing. Each blue node here represents a singleton SCC, a single declaration; singleton SCCs are replaced by their sole element for simplicity. Clicking a pink (plural) scnode shows the cyclical portion of the node graph that it represents. (If the fusion optimization was enabled, it may not be fully cyclic.) All of its nodes are blue. Clicking a blue node shows the definition of that node in godoc. (The godoc server's base URL is specified by the --godoc flag.) Initially, all nodes belong to the "residue" cluster. (GraphViz graph rendering can be slow for the first several iterations. A large monitor is essential.) The sockdrawer user's task when decomposing a package into clusters is to identify the lowest-hanging fruit (so to speak) in the residue cluster---a coherent group of related scnodes at the bottom of the graph---and to "snip off" a bunch at the "stem" by appending a new stanza to the clusters file and listing the roots of that bunch in the stanza, and then to re-run the tool. Nodes may be added to an existing stanza if appropriate, but if they are added to a cluster that is "too low", this may create conflicts; keep an eye out for warnings. This process continues iteratively until the residue has become empty and the sets of clusters are satisfactory. The tool prints the assignments of nodes to clusters: the "shopping list" for the refactoring work. Clusters should be split off into subpackages in dependency order, lowest first. The analysis chooses a single configuration, such as linux/amd64. Declarations for other configurations (e.g. windows/arm) will be absent from the node graph. There may be some excessively large SCCs in the node graph that reflect a circularity in the design. For the purposes of analysis, you can break them arbitrarily by commenting out some code, though more thought will be required for a principled fix (e.g. dependency injection).
Package pprof serves runtime profiling data via its HTTP server in the format expected by the pprof visualization tool. For more information about pprof, see http://code.google.com/p/google-perftools/. Unlike net/http/pprof, this package does not register HTTP handlers on import. To use pprof, pass an HTTP ServeMux to Register: Then use the pprof tool to look at the heap profile: Or to look at a 30-second CPU profile: Or to look at the goroutine blocking profile: To view all available profiles, open http://localhost:6060/debug/pprof/ in your browser. For a study of the facility in action, visit
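A minimal sketch of the registration described above; the Register signature is assumed from the doc text (a ServeMux argument), the import path is a placeholder, and the listen address is only illustrative:

    package main

    import (
        "log"
        "net/http"

        "example.com/yourmodule/pprof" // assumed import path for this package
    )

    func main() {
        mux := http.NewServeMux()

        // Unlike net/http/pprof, handlers are only installed when asked for.
        pprof.Register(mux) // assumed: Register(mux *http.ServeMux)

        go func() {
            log.Println(http.ListenAndServe("localhost:6060", mux))
        }()

        // ... the rest of the application ...
        select {}
    }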