Package oam provides the API client, operations, and parameter types for CloudWatch Observability Access Manager. Use Amazon CloudWatch Observability Access Manager to create and manage links between source accounts and monitoring accounts by using CloudWatch cross-account observability. With CloudWatch cross-account observability, you can monitor and troubleshoot applications that span multiple accounts within a Region. Seamlessly search, visualize, and analyze your metrics, logs, and traces in any of the linked accounts without account boundaries. Set up one or more Amazon Web Services accounts as monitoring accounts and link them with multiple source accounts. A monitoring account is a central Amazon Web Services account that can view and interact with observability data generated from source accounts. A source account is an individual Amazon Web Services account that generates observability data for the resources that reside in it. Source accounts share their observability data with the monitoring account. The shared observability data can include metrics in Amazon CloudWatch, logs in Amazon CloudWatch Logs, and traces in X-Ray.
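As with other AWS SDK for Go v2 service packages, you construct a client from a loaded configuration and call its operations. A minimal sketch (ListSinks is assumed here as a representative monitoring-account operation; substitute whichever operations you need):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/oam"
)

func main() {
	ctx := context.TODO()

	// Load region and credentials from the default configuration sources.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}

	// Create the Observability Access Manager client.
	client := oam.NewFromConfig(cfg)

	// Example: list the sinks configured in this Region.
	out, err := client.ListSinks(ctx, &oam.ListSinksInput{})
	if err != nil {
		log.Fatal(err)
	}
	for _, item := range out.Items {
		log.Printf("sink: %v", item)
	}
}
```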
Package graph is a library for creating generic graph data structures and modifying, analyzing, and visualizing them. A graph consists of vertices of type T, which are identified by a hash value of type K. The hash value for a given vertex is obtained using the hashing function passed to New. A hashing function takes a T and returns a K. For primitive types like integers, you may use a predefined hashing function such as IntHash – a function that takes an integer and uses that integer itself as the hash value: For storing custom data types, you need to provide your own hashing function. This example takes a City instance and returns its name as the hash value: Creating a graph using this hashing function will yield a graph of vertices of type City identified by hash values of type string. Adding vertices to a graph of integers is simple. graph.Graph.AddVertex takes a vertex and adds it to the graph. Most functions accept and return only hash values instead of entire instances of the vertex type T. For example, graph.Graph.AddEdge creates an edge between two vertices and accepts the hash values of those vertices. Because this graph uses the IntHash hashing function, the vertex values and hash values are the same. All operations that modify the graph itself are methods of Graph. All other operations are top-level functions provided by this library. For detailed usage examples, take a look at the README.
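A short sketch of the API described above, assuming the library's published import path (error handling abbreviated):

```go
package main

import (
	"fmt"

	"github.com/dominikbraun/graph"
)

// City is a custom vertex type identified by its name.
type City struct {
	Name string
}

func main() {
	// Integer vertices: IntHash uses the integer itself as the hash value,
	// so vertex values and hash values coincide.
	g := graph.New(graph.IntHash)
	_ = g.AddVertex(1)
	_ = g.AddVertex(2)
	_ = g.AddEdge(1, 2) // AddEdge takes the hash values of the two vertices

	// Custom vertices: supply your own hashing function from City to string.
	cityHash := func(c City) string { return c.Name }
	cities := graph.New(cityHash)
	_ = cities.AddVertex(City{Name: "London"})
	_ = cities.AddVertex(City{Name: "Paris"})
	_ = cities.AddEdge("London", "Paris")

	fmt.Println("graphs built")
}
```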
Package chart implements common chart/plot types. The following chart types are available: Chart tries to provide useful defaults and produce nice charts without sacrificing accuracy. The generated charts look good and are highly customizable, but they will not match the visual quality of handmade Photoshop charts or the statistical features of charts produced by S or R. Creating charts consists of the following steps: You may change the configuration at any step or render to different outputs. The different chart types and their fields are all simple struct types where the zero value provides suitable defaults. All fields are exported, even those you are not supposed to manipulate directly and those that are 'output fields'. E.g. the common Data field of all chart types will store the sample data added with one or more Add... methods. Some fields are pure output fields which expose internal state for your use, like the Data2Screen and Screen2Data functions of the Ranges. Some fields are even input/output fields: e.g. you may set Range.TicSetting.Delta to some positive value, which will be used as the spacing between tics on that axis; on the other hand, if you leave Range.TicSetting.Delta at its default 0, you indicate to the plotting routine that it should automatically determine the tic delta, which is then reported back in this field. All charts (except pie/ring charts) contain at least one axis, represented by a field of type Range. Axes can be differentiated into the following categories: How the axis is autoscaled can be controlled for both ends of the axis individually by MinMode and MaxMode, which allow fine control of the (auto-)scaling. After setting up the chart and adding data, samples, and functions, you can render the chart to a Graphics output. This process will set several internal fields of the chart. If you reuse the chart, add additional data, and output it again, these fields might no longer indicate 'automatic/default' but instead contain the values calculated in the first output round.
Package grafana provides the API client, operations, and parameter types for Amazon Managed Grafana. Amazon Managed Grafana is a fully managed and secure data visualization service that you can use to instantly query, correlate, and visualize operational metrics, logs, and traces from multiple sources. Amazon Managed Grafana makes it easy to deploy, operate, and scale Grafana, a widely deployed data visualization tool that is popular for its extensible data support. With Amazon Managed Grafana, you create logically isolated Grafana servers called workspaces. In a workspace, you can create Grafana dashboards and visualizations to analyze your metrics, logs, and traces without having to build, package, or deploy any hardware to run Grafana servers.
Package iotanalytics provides the API client, operations, and parameter types for AWS IoT Analytics. IoT Analytics allows you to collect large amounts of device data, process messages, and store them. You can then query the data and run sophisticated analytics on it. IoT Analytics enables advanced data exploration through integration with Jupyter Notebooks and data visualization through integration with Amazon QuickSight. Traditional analytics and business intelligence tools are designed to process structured data. IoT data often comes from devices that record noisy processes (such as temperature, motion, or sound). As a result the data from these devices can have significant gaps, corrupted messages, and false readings that must be cleaned up before analysis can occur. Also, IoT data is often only meaningful in the context of other data from external sources. IoT Analytics automates the steps required to analyze data from IoT devices. IoT Analytics filters, transforms, and enriches IoT data before storing it in a time-series data store for analysis. You can set up the service to collect only the data you need from your devices, apply mathematical transforms to process the data, and enrich the data with device-specific metadata such as device type and location before storing it. Then, you can analyze your data by running queries using the built-in SQL query engine, or perform more complex analytics and machine learning inference. IoT Analytics includes pre-built models for common IoT use cases so you can answer questions like which devices are about to fail or which customers are at risk of abandoning their wearable devices.
Package lttb implements the Largest-Triangle-Three-Buckets algorithm for downsampling points. The downsampled data maintains the visual characteristics of the original line using considerably fewer data points. This is a translation of the JavaScript code at
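The bucketing logic is compact enough to sketch in full. This is a self-contained illustration of the algorithm, not this package's exact API (its point type and function signature may differ):

```go
package main

import "fmt"

// Point is a single sample in the series to be downsampled.
type Point struct {
	X, Y float64
}

// Downsample reduces data to at most threshold points using the
// Largest-Triangle-Three-Buckets algorithm.
func Downsample(data []Point, threshold int) []Point {
	if threshold >= len(data) || threshold < 3 {
		return data // nothing to do, or not enough buckets to work with
	}

	sampled := make([]Point, 0, threshold)
	sampled = append(sampled, data[0]) // always keep the first point

	// The interior points are divided into threshold-2 buckets.
	bucketSize := float64(len(data)-2) / float64(threshold-2)
	a := 0 // index of the previously selected point

	for i := 0; i < threshold-2; i++ {
		// Current bucket boundaries.
		start := int(float64(i)*bucketSize) + 1
		end := int(float64(i+1)*bucketSize) + 1

		// Average of the next bucket (the last point for the final bucket).
		nextStart := end
		nextEnd := int(float64(i+2)*bucketSize) + 1
		if nextEnd > len(data) {
			nextEnd = len(data)
		}
		var avgX, avgY float64
		for j := nextStart; j < nextEnd; j++ {
			avgX += data[j].X
			avgY += data[j].Y
		}
		n := float64(nextEnd - nextStart)
		avgX, avgY = avgX/n, avgY/n

		// Pick the point in the current bucket that forms the largest triangle
		// with the previously selected point and the next bucket's average.
		// The factor 0.5 of the triangle area is omitted; it does not change the argmax.
		maxArea := -1.0
		maxIdx := start
		for j := start; j < end; j++ {
			area := abs((data[a].X-avgX)*(data[j].Y-data[a].Y) -
				(data[a].X-data[j].X)*(avgY-data[a].Y))
			if area > maxArea {
				maxArea = area
				maxIdx = j
			}
		}
		sampled = append(sampled, data[maxIdx])
		a = maxIdx
	}

	return append(sampled, data[len(data)-1]) // always keep the last point
}

func abs(x float64) float64 {
	if x < 0 {
		return -x
	}
	return x
}

func main() {
	// Downsample a simple series from 100 points to 10.
	data := make([]Point, 100)
	for i := range data {
		data[i] = Point{X: float64(i), Y: float64(i % 7)}
	}
	fmt.Println(len(Downsample(data, 10))) // 10
}
```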
Package duplo provides tools to efficiently query large sets of images for visual duplicates. The technique is based on the paper "Fast Multiresolution Image Querying" by Charles E. Jacobs, Adam Finkelstein, and David H. Salesin, with a few modifications and additions, such as the addition of a width-to-height ratio, the dHash metric by Dr. Neal Krawetz, as well as some histogram-based metrics. Querying the data structure will return a list of potential matches, sorted by the score described in the main paper. The user can make searching for duplicates stricter, however, by filtering based on the additional metrics.
Package detective provides the API client, operations, and parameter types for Amazon Detective. Detective uses machine learning and purpose-built visualizations to help you analyze and investigate security issues across your Amazon Web Services workloads. Detective automatically extracts time-based events such as login attempts, API calls, and network traffic from CloudTrail and Amazon Virtual Private Cloud (Amazon VPC) flow logs. It also extracts findings detected by Amazon GuardDuty. The Detective API primarily supports the creation and management of behavior graphs. A behavior graph contains the extracted data from a set of member accounts, and is created and managed by an administrator account. To add a member account to the behavior graph, the administrator account sends an invitation to the account. When the account accepts the invitation, it becomes a member account in the behavior graph. Detective is also integrated with Organizations. The organization management account designates the Detective administrator account for the organization. That account becomes the administrator account for the organization behavior graph. The Detective administrator account is also the delegated administrator account for Detective in Organizations. The Detective administrator account can enable any organization account as a member account in the organization behavior graph. The organization accounts do not receive invitations. The Detective administrator account can also invite other accounts to the organization behavior graph. Every behavior graph is specific to a Region. You can only use the API to manage behavior graphs that belong to the Region that is associated with the currently selected endpoint. The administrator account for a behavior graph can use the Detective API to do the following: The organization management account can use the Detective API to select the delegated administrator for Detective. The Detective administrator account for an organization can use the Detective API to do the following: An invited member account can use the Detective API to do the following: All API actions are logged as CloudTrail events. See Logging Detective API Calls with CloudTrail (https://docs.aws.amazon.com/detective/latest/adminguide/logging-using-cloudtrail.html). We replaced the term "master account" with the term "administrator account." An administrator account is used to centrally manage multiple accounts. In the case of Detective, the administrator account manages the accounts in their behavior graph.
Package tfortools provides a set of functions that are designed to make it easier for developers to add template based scripting to their command line tools. Command line tools written in Go often allow users to specify a template script to tailor the output of the tool to their specific needs. This can be useful both when visually inspecting the data and also when invoking command line tools in scripts. The best example of this is go list which allows users to pass a template script to extract interesting information about Go packages. For example, prints all the imports of the current package. The aim of this package is to make it easier for developers to add template scripting support to their tools and easier for users of these tools to extract the information they need. It does this by augmenting the templating language provided by the standard library package text/template in two ways: 1. It auto generates descriptions of the data structures passed as input to a template script for use in help messages. This ensures that help usage information is always up to date with the source code. 2. It provides a suite of convenience functions to make it easy for script writers to extract the data they need. There are functions for sorting, selecting rows and columns and generating nicely formatted tables. For example, if a program passed a slice of structs containing stock data to a template script, we could use the following script to extract the names of the 3 stocks with the highest trade volume. The output might look something like this: The functions head, sort, tables and col are provided by this package.
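A rough illustration of the idea using only text/template from the standard library; the helper functions below (sortByVolume, head) are stand-ins for the kind of convenience functions this package provides, not its actual API:

```go
package main

import (
	"os"
	"sort"
	"text/template"
)

type Stock struct {
	Name   string
	Volume int
}

func main() {
	stocks := []Stock{
		{"ACME", 120000}, {"Globex", 450000}, {"Initech", 90000},
		{"Umbrella", 300000}, {"Hooli", 510000},
	}

	// Stand-ins for the convenience functions a tool could expose to scripts.
	funcs := template.FuncMap{
		"sortByVolume": func(s []Stock) []Stock {
			out := append([]Stock(nil), s...)
			sort.Slice(out, func(i, j int) bool { return out[i].Volume > out[j].Volume })
			return out
		},
		"head": func(n int, s []Stock) []Stock {
			if n > len(s) {
				n = len(s)
			}
			return s[:n]
		},
	}

	// The user-supplied script: names of the 3 stocks with the highest volume.
	const script = `{{range head 3 (sortByVolume .)}}{{.Name}}
{{end}}`

	tmpl := template.Must(template.New("stocks").Funcs(funcs).Parse(script))
	if err := tmpl.Execute(os.Stdout, stocks); err != nil {
		panic(err)
	}
}
```

Running this prints Hooli, Globex and Umbrella (the three highest-volume stocks), driven entirely by the user-supplied template script.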
Package mdtopdf implements a PDF document generator for markdown documents. This package depends on two other packages: * The BlackFriday v2 parser to read the markdown source * The fpdf package to generate the PDF The tests included here are from the BlackFriday package. See the "testdata" folder. The tests create PDF files, and thus while the tests may complete without errors, visual inspection of the created PDF is the only way to determine if the tests *really* pass! The tests create log files that trace the BlackFriday parser callbacks. This is a valuable debug tool, showing each callback and the data provided to it as the AST is processed. To install the package: In the cmd folder is an example using the package. It demonstrates a number of features. The test PDF was created with this command:
Package multimap provides an abstract MultiMap interface. Multimap is a collection that maps keys to values, similar to a map. However, each key may be associated with multiple values. You can visualize the contents of a multimap either as a map from keys to nonempty collections of values: ... or as a single "flattened" collection of key-value pairs. Similar to a map, operations associated with this data type allow:

- the addition of a pair to the collection
- the removal of a pair from the collection
- the lookup of a value associated with a particular key
- checking whether a key, value, or key/value pair exists in this data type.
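A minimal generic sketch of such a type, to make the operations concrete; this illustrates the interface described above and is not the package's actual implementation:

```go
package main

import "fmt"

// MultiMap maps keys of type K to zero or more values of type V.
type MultiMap[K comparable, V comparable] struct {
	m map[K][]V
}

func New[K comparable, V comparable]() *MultiMap[K, V] {
	return &MultiMap[K, V]{m: make(map[K][]V)}
}

// Put adds a key/value pair to the collection.
func (mm *MultiMap[K, V]) Put(key K, value V) {
	mm.m[key] = append(mm.m[key], value)
}

// Get returns all values associated with the key.
func (mm *MultiMap[K, V]) Get(key K) []V {
	return mm.m[key]
}

// Remove deletes a single key/value pair, if present.
func (mm *MultiMap[K, V]) Remove(key K, value V) {
	vals := mm.m[key]
	for i, v := range vals {
		if v == value {
			mm.m[key] = append(vals[:i], vals[i+1:]...)
			break
		}
	}
	if len(mm.m[key]) == 0 {
		delete(mm.m, key)
	}
}

// Contains reports whether the exact key/value pair exists.
func (mm *MultiMap[K, V]) Contains(key K, value V) bool {
	for _, v := range mm.m[key] {
		if v == value {
			return true
		}
	}
	return false
}

func main() {
	mm := New[string, int]()
	mm.Put("a", 1)
	mm.Put("a", 2)
	fmt.Println(mm.Get("a"), mm.Contains("a", 2)) // [1 2] true
}
```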
Package xmlwriter provides a fast, non-cached, forward-only way to generate XML data. The API is based heavily on libxml's xmlwriter API [1], which is itself based on C#'s XmlWriter [2]. It offers some advantages over Go's default encoding/xml package and some tradeoffs. You can have complete control of the generated documents and it uses very little memory. There are two styles for interacting with the writer: structured and heap-friendly. If you want a visual representation of the hierarchy of some of your writes in your code and you don't care about a few instances of memory escaping to the heap (and most of the time you won't), you can use the structured API. If you are writing a code generator or your interactions with the API are minimal, you should use the direct API. xmlwriter.Writer{} takes any io.Writer, along with a variable list of options. xmlwriter options are based on Dave Cheney's functional options pattern (https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis): Provided options are: Using the structured API, you might express a small tree of elements like this. These nodes will escape to the heap, but judicious use of this nesting can make certain structures a lot more readable by representing the desired XML hierarchy in the code that produces it: The code can be made even less dense by importing xmlwriter with a prefix: `import xw "github.com/shabbyrobe/xmlwriter"` The same output is possible with the heap-friendly API. This has a lot more stutter and it's harder to tell the hierarchical relationship just by looking at the code, but there are no heap escapes this way: Use whichever API reads best in your code, but favour the latter style in all code generators and performance hotspots. xmlwriter.Writer extends bufio.Writer! Don't forget to flush, otherwise you'll lose data. There are two ways to flush: The EndAllFlush form is just a convenience; it calls EndAll() and Flush() for you. Nodes which can have children can be passed to `Writer.Start()`. This adds them to the stack and opens them, allowing children to be added. Becomes: <foo><bar><baz/></bar></foo> Nodes which have no children, or nodes which can be opened and fully closed with only a trivial amount of information, can be passed to `Writer.Write()`. If written nodes are put onto the stack, they will be popped before Write returns. Becomes: <foo/><bar/><baz/> Block takes a Startable and a variable number of Writable nodes. The Startable will be opened, the Writables will be written, then the Startable will be closed: Becomes: There are several ways to end an element. Choose the End that's right for you! Nodes as they are written can be in three states: StateOpen, StateOpened or StateEnd. StateOpen == "<elem". StateOpened == "<elem>". StateEnd == "<elem></elem>". Node structs are available for writing in the following hierarchy. Nodes which are "Startable" (passed to `writer.Start(n)`) are marked with an S. Nodes which are "Writable" (passed to `writer.Write(n)`) are marked with a W. - xmlwriter.Raw* (W) - xmlwriter.Doc (S) * `xmlwriter.Raw` can be written anywhere, at any time. If a node is in the "open" state but not in the "opened" state, for example you have started an element and written an attribute, writing "raw" will add the content to the inside of the element opening tag unless you call `w.Next()`. Every node has a corresponding NodeKind constant, which can be found by affixing "Node" to the struct name, i.e. "xmlwriter.Elem" becomes "xmlwriter.ElemNode".
These are used for calls to Writer.End(). xmlwriter.Attr{} values can be assigned from any Go primitive like so: xmlwriter supports encoders from the golang.org/x/text/encoding package. UTF-8 strings written from Go will be converted on the fly and the document declaration will be written correctly. To write your XML using the windows-1252 encoder: The document line will look like this:
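A rough sketch of the structured style. The node types and methods are the ones named above, but the constructor and option names (Open, WithIndent) are assumptions, so check the package documentation for the exact spelling:

```go
package main

import (
	"fmt"
	"os"

	xw "github.com/shabbyrobe/xmlwriter"
)

func main() {
	// Constructor and option names are assumptions; see the package docs.
	w := xw.Open(os.Stdout, xw.WithIndent())

	// Structured style: Start pushes startable nodes onto the stack.
	must(w.Start(xw.Doc{}))
	must(w.Start(xw.Elem{Name: "foo"}))
	must(w.Write(xw.Attr{Name: "id", Value: "1"}))
	must(w.Write(xw.Elem{Name: "bar"})) // written and closed in one go: <bar/>

	// Don't forget to flush: EndAllFlush ends every open node, then flushes.
	must(w.EndAllFlush())
	fmt.Println()
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```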
Package codeguruprofiler provides the API client, operations, and parameter types for Amazon CodeGuru Profiler. This section provides documentation for the Amazon CodeGuru Profiler API operations. Amazon CodeGuru Profiler collects runtime performance data from your live applications, and provides recommendations that can help you fine-tune your application performance. Using machine learning algorithms, CodeGuru Profiler can help you find your most expensive lines of code and suggest ways you can improve efficiency and remove CPU bottlenecks. Amazon CodeGuru Profiler provides different visualizations of profiling data to help you identify what code is running on the CPU, see how much time is consumed, and suggest ways to reduce CPU utilization. Amazon CodeGuru Profiler currently supports applications written in all Java virtual machine (JVM) languages and Python. While CodeGuru Profiler supports both visualizations and recommendations for applications written in Java, it can also generate visualizations and a subset of recommendations for applications written in other JVM languages and Python. For more information, see What is Amazon CodeGuru Profiler (https://docs.aws.amazon.com/codeguru/latest/profiler-ug/what-is-codeguru-profiler.html) in the Amazon CodeGuru Profiler User Guide.
Package databrew provides the API client, operations, and parameter types for AWS Glue DataBrew. Glue DataBrew is a visual, cloud-scale data-preparation service. DataBrew simplifies data preparation tasks, targeting data issues that are hard to spot and time-consuming to fix. DataBrew empowers users of all technical levels to visualize the data and perform one-click data transformations, with no coding required.
Package amplifyuibuilder provides the API client, operations, and parameter types for AWS Amplify UI Builder. The Amplify UI Builder API provides a programmatic interface for creating and configuring user interface (UI) component libraries and themes for use in your Amplify applications. You can then connect these UI components to an application's backend Amazon Web Services resources. You can also use the Amplify Studio visual designer to create UI components and model data for an app. For more information, see Introduction (https://docs.amplify.aws/console/adminui/intro) in the Amplify Docs. The Amplify Framework is a comprehensive set of SDKs, libraries, tools, and documentation for client app development. For more information, see the Amplify Framework (https://docs.amplify.aws/) . For more information about deploying an Amplify application to Amazon Web Services, see the Amplify User Guide (https://docs.aws.amazon.com/amplify/latest/userguide/welcome.html) .
Package lookoutvision provides the API client, operations, and parameter types for Amazon Lookout for Vision. This is the Amazon Lookout for Vision API Reference. It provides descriptions of actions, data types, common parameters, and common errors. Amazon Lookout for Vision enables you to find visual defects in industrial products, accurately and at scale. It uses computer vision to identify missing components in an industrial product, damage to vehicles or structures, irregularities in production lines, and even minuscule defects in silicon wafers — or any other physical item where quality is important such as a missing capacitor on printed circuit boards.
Package jin Copyright (c) 2020 eco. License that can be found in the LICENSE file. "your wish is my command" Jin is a comprehensive JSON manipulation tool bundle. All functions are tested with random data with the help of Node.js. All test-path and test-value creation is automated with Node.js. Jin provides parse, interpret, build and format tools for JSON. Third-party packages are only used for the benchmarks; no dependencies are needed for the core functions. We benchmarked Jin against other packages like it. In the results, Jin is the fastest (ns/op) and more memory-friendly than the others (B/op). For more information please take a look at the BENCHMARK section below. WHAT IS NEW? 7 new functions were tested and added to the package. - GetMap() gets objects as a map[string]string structure with key/value pairs - GetAll() gets only specific keys' values - GetAllMap() gets only specific keys as a map[string]string structure - GetKeys() gets an object's keys as a string array - GetValues() gets an object's values as a string array - GetKeysValues() gets an object's keys and values as separate string arrays - Length() gets the length of a JSON array. 06.04.2020 INSTALLATION And you are good to go. Import and start using. The major difference between parsing and interpreting is that a parser has to read all of the data before answering your needs; an interpreter, on the other hand, reads only until it finds the data you need. With a parser, once the parse is complete you can access any data in no time, but there is a time cost to parsing all the data, and this cost increases as the data content grows. If you need to access all keys of a JSON document, we simply recommend using the Parser. But if you only need to access some keys, we strongly recommend the Interpreter; it will be much faster and much more memory-friendly than the parser. The Interpreter is the core element of this package; there is no need to create an Interpreter type, just call whichever function you want. First let's look at the general function parameters. We are going to use the Get() function to access the value that the path points to, in this case 'jin'. The path can consist of hard-coded values. The Get() function's return type is []byte, but all other return-type variations are implemented as separate functions. For example, if you need "value" as a string, use GetString(). The Parser is another alternative for JSON manipulation. We recommend this structure when you need to access all or most of the keys in the JSON. The Parser constructor needs only one parameter. We can parse it with the Parse() function. Let's look at Parser.Get(). For the path value, see above. All the return-type variations of Parser.Get() exist, just like for the Interpreter. To return a string, use Parser.GetString() like this. Other useful functions of Jin: Add(), AddKeyValue(), Set(), SetKey(), Delete(), Insert(), IterateArray(), IterateKeyValue(), Tree(). Let's look at the IterateArray() function. There are two formatting functions, Flatten() and Indent(). Indent() adds indentation to JSON for nicer visualization and Flatten() removes this indentation. Control functions are a simple and easy way to check the value type at any path. For example, IsArray(). Or you can use GetType(). There are lots of JSON build functions in this package and all of them have their own examples; we just want to mention a couple of them. Scheme is a simple and powerful tool for creating JSON schemes. Testing is a very important thing for this type of package and it shows how reliable it is. For that reason we use Node.js for unit testing. Let's look at the folder arrangement and working principle.
- test/ folder: test-json.json is a temporary file for testing. All other test cases are copied here under this name so they can be processed by test-case-creator.js. test-case-creator.js is the core path & value creation mechanism. When it is executed with the executeNode() function, it reads the test-json.json file and generates paths and values from the file's content. With command line arguments it can generate different paths and values. As a result, two files are created by this process: the first is test-json-paths.json and the second is test-json-values.json. test-json-paths.json has all the path values. test-json-values.json has all the values corresponding to those path values. - tests/ folder: every file in this folder is a test case. That doesn't mean you can't change anything; on the contrary, all test cases are created automatically based on this folder's content. You can add or remove any .json file that you want. All Go-side test-case automation functions are in the core_test.go file. This package was developed with Node.js v13.7.0; please make sure that your machine has a valid version of Node.js before testing. All functions and methods are tested with complicated, randomly generated .json files, like this: Most JSON packages do not even run properly with this kind of JSON stream. We didn't see such packages as competitors, and that's why we didn't bother to benchmark against them. Benchmark results. - The Benchmark prefix is removed from function names to make room for the results. - The benchmark between 'buger/jsonparser' and 'ecoshub/jin' uses the same payload (JSON test cases) that the 'buger/jsonparser' package uses for its own benchmarks. We are currently working on: - Marshal() and Unmarshal() functions - an http.Request parser/interpreter - builder functions for http.ResponseWriter If you want to contribute to this work, feel free to fork it. We want to fill this section with contributors.
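A hedged sketch of the two styles described above; the exact signatures (variadic string paths, array indexes given as strings) are assumptions based on this description rather than a verified copy of the API:

```go
package main

import (
	"fmt"

	"github.com/ecoshub/jin"
)

func main() {
	data := []byte(`{"user": {"name": "eco", "langs": ["go", "js"]}}`)

	// Interpreter style: no setup, just point at the path you need.
	name, err := jin.GetString(data, "user", "name")
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // eco

	// Parser style: parse once, then access any path without re-reading.
	prs, err := jin.Parse(data)
	if err != nil {
		panic(err)
	}
	lang, err := prs.GetString("user", "langs", "0")
	if err != nil {
		panic(err)
	}
	fmt.Println(lang) // go
}
```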
Package pprof serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool. For more information about pprof, see http://code.google.com/p/google-perftools/. The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/. To use pprof, link this package into your program: If your application is not already running an http server, you need to start one. Add "net/http" and "log" to your imports and the following code to your main function: Then use the pprof tool to look at the heap profile: Or to look at a 30-second CPU profile: Or to look at the goroutine blocking profile: To view all available profiles, open http://localhost:6060/debug/pprof/ in your browser. For a study of the facility in action, visit
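The conventional setup looks like this; the blank import registers the handlers and the goroutine gives them an HTTP server to live on if the application doesn't already run one:

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/ handlers
)

func main() {
	// Start an HTTP server so the profiling endpoints are reachable.
	go func() {
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}()

	// ... the rest of the application ...
	select {}
}
```

With that running, `go tool pprof http://localhost:6060/debug/pprof/heap` examines the heap profile, and `go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30` collects a 30-second CPU profile.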
Package vision is a repository containing visual processing packages in Go (golang), focused mainly on providing efficient V1 (primary visual cortex) level filtering of images, with the output then suitable as input for neural networks. Two main types of filters are supported: * **Gabor** filters simulate V1 simple-cell responses in terms of an oriented sine wave times a gaussian envelope that localizes the filter in space. This produces an edge detector that detects oriented contrast transitions between light and dark. In general, the main principle of primary visual filtering is to focus on spatial (and temporal) changes, while filtering out static, uniform areas. * **DoG** (difference of gaussian) filters simulate retinal On-center vs. Off-center contrast coding cells -- unlike gabor filters, these do not have orientation tuning. Mathematically, they are a difference between a narrow (center) vs wide (surround) gaussian, of opposite signs, balanced so that a uniform input generates offsetting values that sum to zero. In the visual system, orientation tuning is constructed from aligned DoG-like inputs, but it is more efficient to just use the Gabor filters directly. However, DoG filters capture the "blob" cells that encode color contrasts. The `vfilter` package contains general-purpose filtering code that applies (convolves) any given filter with a visual input. It also supports converting an `image.Image` into a `etensor.Float32` tensor which is the main data type used in this framework. It also supports max-pooling for efficiently reducing the dimensionality of inputs. The `kwta` package provides an implementation of the feedforward and feedback (FFFB) inhibition dynamics (and noisy X-over-X-plus-1 activation function) from the `Leabra` algorithm to produce a k-Winners-Take-All processing of visual filter outputs -- this increases the contrast and simplifies the representations, and is a good model of the dynamics in primary visual cortex.
Package mdtopdf implements a PDF document generator for markdown documents. This package depends on two other packages: * The BlackFriday v2 parser to read the markdown source * The gofpdf package to generate the PDF The tests included here are from the BlackFriday package. See the "testdata" folder. The tests create PDF files, and thus while the tests may complete without errors, visual inspection of the created PDF is the only way to determine if the tests *really* pass! The tests create log files that trace the BlackFriday parser callbacks. This is a valuable debug tool, showing each callback and the data provided to it as the AST is processed. To install the package: In the cmd folder is an example using the package. It demonstrates a number of features. The test PDF was created with this command:
Package applicationdiscoveryservice provides the client and types for making API requests to AWS Application Discovery Service. AWS Application Discovery Service helps you plan application migration projects by automatically identifying servers, virtual machines (VMs), software, and software dependencies running in your on-premises data centers. Application Discovery Service also collects application performance data, which can help you assess the outcome of your migration. The data collected by Application Discovery Service is securely retained in an Amazon-hosted and managed database in the cloud. You can export the data as a CSV or XML file into your preferred visualization tool or cloud-migration solution to plan your migration. For more information, see the Application Discovery Service FAQ (http://aws.amazon.com/application-discovery/faqs/). Application Discovery Service offers two modes of operation. Agentless discovery mode is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless discovery doesn't collect information about software and software dependencies. It also doesn't work in non-VMware environments. We recommend that you use agent-based discovery for non-VMware environments and if you want to collect information about software and software dependencies. You can also run agent-based and agentless discovery simultaneously. Use agentless discovery to quickly complete the initial infrastructure assessment and then install agents on select hosts to gather information about software and software dependencies. Agent-based discovery mode collects a richer set of data than agentless discovery by using Amazon software, the AWS Application Discovery Agent, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of installed software applications, system and process performance, resource utilization, and network dependencies between workloads. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud. Application Discovery Service integrates with application discovery solutions from AWS Partner Network (APN) partners. Third-party application discovery tools can query Application Discovery Service and write to the Application Discovery Service database using a public API. You can then import the data into either a visualization tool or cloud-migration solution. Application Discovery Service doesn't gather sensitive information. All data is handled according to the AWS Privacy Policy (http://aws.amazon.com/privacy/). You can operate Application Discovery Service using offline mode to inspect collected data before it is shared with the service. Your AWS account must be granted access to Application Discovery Service, a process called whitelisting. This is true for AWS partners and customers alike. To request access, sign up for AWS Application Discovery Service here (http://aws.amazon.com/application-discovery/preview/). We send you information about how to get started. This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. 
Alternatively, you can use one of the AWS SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see AWS SDKs (http://aws.amazon.com/tools/#SDKs). This guide is intended for use with the AWS Application Discovery Service User Guide (http://docs.aws.amazon.com/application-discovery/latest/userguide/). See https://docs.aws.amazon.com/goto/WebAPI/discovery-2015-11-01 for more information on this service. See applicationdiscoveryservice package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/ To use AWS Application Discovery Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Application Discovery Service client ApplicationDiscoveryService for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/#New
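A minimal sketch of that pattern (DescribeAgents is used here only as a representative call):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/applicationdiscoveryservice"
)

func main() {
	// Create a session from the environment/shared config, then a service client.
	sess := session.Must(session.NewSession(&aws.Config{
		Region: aws.String("us-west-2"),
	}))
	svc := applicationdiscoveryservice.New(sess)

	// Example call: list the discovery agents registered with the service.
	out, err := svc.DescribeAgents(&applicationdiscoveryservice.DescribeAgentsInput{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
}
```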
Package log is an important part of the application and having a consistent logging mechanism and structure is mandatory. With several teams writing different components that talk to each other, being able to read each others logs could be the difference between finding bugs quickly or wasting hours. With the log package in the standard library, we have the ability to create custom loggers that can be configured to write to one or many devices. Since we use syslog to send logging output to a central log repository, our logger can be configured to just write to stdout. This not only simplifies things for us, but will keep each log trace in correct sequence. This package does not include logging levels. Everything needs to be logged to help trace the code and find bugs. There is no such thing as over logging. By the time you decide to change the logging level, it is always too late. The question of performance comes up quite a bit. If the only performance issue we see is coming from logging, we are doing very well. I have had these opinions for a long time, but if you want more clarity on the subject, listen to this recent podcast: Jon Gifford On Logging And Logging Infrastructure: Robert Blumen talks to Jon Gifford of Loggly about logging and logging infrastructure. Topics include logging defined, purposes of logging, uses of logging in understanding the run-time behavior of programs, who produces logs, who consumes logs and for what reasons, software as the consumer of logs, log formats (structured versus free form), log meta-data, logging APIs, logging as coding, logging and frameworks, the massive hairball of log file management, modern logging infrastructure in which log records are stored and indexed in a search engine, how searchable logs have transformed the uses of log data, log data and analytics, leveraging the log database for statistical insights, performance and resource issues of logging, are logs really different than other data that systems record in databases, and how log visualization gives users insights into their system. The show wraps up with a discussion of open source logging platforms versus commercial SAAS providers. There are two types of tracing lines we need to log. One is a trace line that describes where the program is, what it is doing and any data associated with that trace. The second is formatted data such as a JSON document or binary dump of data. Each serves a different purpose but they both exist within the same scope of space and time. The format of each trace line needs to be consistent and helpful or else the logging will just be noise and ultimately useless. Here is a breakdown of each section and a sample value: Here are examples of how trace lines would show in the log: In the end, we want to see the flow of most functions starting and completing so we can follow the code in the logs. We want to quickly see and filter errors, which can be accomplished by using a capitalized version of the word ERROR. The context is an important value. The context allows us to extract trace lines for one context over others. Maybe in this case 8890 represents a user id. When there is a need to dump formatted data into the logs, there are three approaches.
If the data can be represented as key/value pairs, you can write each pair on its own line with the DATA tag: When there is a single block of data to dump, it can be written as a single multi-line trace: When special block formatting is required, the Stringer interface can be implemented to format data in custom ways: The API for the log package is focused on initializing the logger and then provides function abstractions for the different tags we have defined.
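A minimal sketch of the kind of logger this describes, using only the standard library. The helper names and the exact layout of the trace-line sections are illustrative assumptions, not the package's real API:

```go
package main

import (
	"log"
	"os"
)

// logger writes to stdout only; syslog (or whatever forwards stdout) handles
// shipping the lines to the central log repository in sequence.
var logger = log.New(os.Stdout, "", log.LstdFlags|log.Lmicroseconds)

// Started, Completed and Errorf are illustrative helpers that keep every trace
// line in a consistent shape: context id, function name, tag, then the message.
func Started(context, function string) {
	logger.Printf("%s : %s : Started", context, function)
}

func Completed(context, function string) {
	logger.Printf("%s : %s : Completed", context, function)
}

func Errorf(context, function string, err error, format string, a ...interface{}) {
	logger.Printf("%s : %s : ERROR : %v : "+format,
		append([]interface{}{context, function, err}, a...)...)
}

func main() {
	Started("8890", "main")
	Completed("8890", "main")
}
```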
Package decksh is a little language that generates deck markup. ```decksh``` is a domain-specific language (DSL) for generating [```deck```](https://github.com/ajstarks/deck/blob/master/README.md) markup. ## References and Examples * [```decksh``` overview](https://speakerdeck.com/ajstarks/decksh-a-little-language-for-decks) * [```decksh``` object reference](https://speakerdeck.com/ajstarks/decksh-object-reference) * [Repository of decksh projects and visualizations](https://github.com/ajstarks/deckviz) ## Package use There is a simple function ```Process``` that reads decksh commands from an ```io.Reader``` and writes deck markup to an ```io.Writer```, returning an error. ## Running This repository also contains ```cmd/decksh```, a decksh client command: ```decksh``` reads from the specified input, and writes deck markup to the specified output destination: Typically, ```decksh``` acts as the head of a rendering pipeline: ## Example input This deck script: Text, font, color, caption and link arguments follow Go conventions (surrounded by double quotes). Color formats are: * rgb format "rgb(n,n,n)", for example ```"rgb(128,0,128)"``` * hex "#rrggbb", for example ```"#aa00aa"```, or * [SVG color names](https://www.w3.org/TR/SVG11/types.html#ColorKeywords). Color gradients (used for slide backgrounds and rectangle and square fills) are specified as color1/color2/percent, for example ```"blue/white/90"``` Coordinates, dimensions, scales and opacities are floating point numbers ranging from 0-100 (representing percentages of the canvas width and percent opacity). Some arguments are optional, and if omitted defaults are applied (black for text, gray for graphics, 100% opacity). Canvas size and image dimensions are in pixels. ## Begin or end a deck. ## Begin, end a slide with optional background and text colors. ## Specify the size of the canvas. ## Simple assignments ```id=<number>``` defines a constant, which may then be substituted. For example: ## Assignment operations ```id+=<number>``` increments the value of ```id``` by ```<number>``` ```id-=<number>``` decrements the value of ```id``` by ```<number>``` ```id*=<number>``` multiplies the value of ```id``` by ```<number>``` ```id/=<number>``` divides the value of ```id``` by ```<number>``` ## Binary operations Addition ```id=<id> + number or <id>``` Subtraction ```id=<id> - number or <id>``` Multiplication ```id=<id> * number or <id>``` Division ```id=<id> / number or <id>``` ## Coordinate assignments Assign (x,y) coordinates to the specified identifier. The x coordinate is ```id_x``` and the y coordinate is ```id_y```. The expression within the parentheses may be a constant, variable or binary expression. This code: makes this: ## Polar Coordinates Return the polar coordinate given the center at ```(cx, cy)```, radius ```r```, and angle ```theta``` (in degrees) ## Polar Coordinates (composite) Return the polar coordinates ```(p_x)``` and ```(p_y)``` given the center at ```(cx, cy)```, radius ```r```, and angle ```theta``` (in degrees) ## Area Return the circular area ```a``` for the diameter ```d```. ## Formatted Text Assign a string variable with formatted text (using package fmt floating point format strings) ## Random Number Assign a random number in the specified range ## Square Root Return the square root of a number or expression (```id``` or binary operation) ## Mapping For value ```v```, map the range ```vmin-vmax``` to ```min-max```.
## Loops Loop over ```statements```, with ```x``` starting at ```begin```, ending at ```end``` with an optional ```increment``` (if omitted the increment is 1). Substitution of ```x``` will occur in statements. Loop over ```statements```, with ```x``` ranging over the contents of items within ```[]```. Substitution of ```x``` will occur in statements. Loop over ```statements```, with ```x``` ranging over the contents of ```"file"```. Substitution of ```x``` will occur in statements. ## Include decksh markup from a file places the contents of ```"file"``` inline. ## Functions Functions have a defined ```name``` and arguments, and are specified with statements between the ```def``` and ```edef``` keywords ## Importing function definitions Functions may be imported once, and then called by name. For example, given a file ```redcircle.dsh```: which is referenced: Functions may also be called with the ```func``` keyword: For example, given a file "ftest.dsh" calling the function: produces: ## Data: Make a file makes a file named ```foo.d``` with the lines between ```data``` and ```edata```. ## Grid: Place objects on a grid The first file argument (```"file.dsh"``` above) specifies a file with decksh commands; each item in the file must include the arguments "x" and "y". Normal variable substitution occurs for other arguments. For example if the contents of ```file.dsh``` has six items: The line: creates two rows: three circles and then three squares. ```x, y``` specify the beginning location of the items, ```xskip``` is the horizontal spacing between items, ```yinternal``` is the vertical spacing between items and ```limit``` is the horizontal limit. When the ```limit``` is reached, a new row is created. ## Text Left, centered, end, or block-aligned text or file contents (```x``` and ```y``` are the text's reference point), with optional font ("sans", "serif", "mono", or "symbol"), color and opacity. Text rotated along the specified angle (in degrees) Text on an arc centered at ```(x,y)```, with specified radius, between begin and ending angles (in degrees). If the beginning angle is less than the ending angle the text is rendered counter-clockwise; if the beginning angle is greater than the ending angle, the text is rendered clockwise. Place the contents of "filename" at (x,y). Place the contents of "filename" in a gray box, using a monospaced font. ## Images Plain and captioned, with optional scales, links and caption size. ```(x, y)``` is the center of the image, and ```width``` and ```height``` are the image dimensions in pixels. ## Lists (plain, bulleted, numbered, centered). Optional arguments specify the color, opacity, line spacing, link and rotation (degrees) ### list items, and ending the list ## Graphics Rectangles, ellipses, squares, circles: specify the center location ```(x, y)``` and dimensions ```(w,h)``` with optional color and opacity. The default color and opacity is gray, 100%. In the case of the ```acircle``` keyword, the ```a``` argument is the area, not the diameter. Rounded rectangles are similar, with the added radius for the corners: (solid colors only) For polygons, specify the x and y coordinates as a series of numbers, with optional color and opacity. Note that the coordinates may be either discrete: or use substitution: A combination of constants and substitution is also allowed. For lines, specify the coordinates for the beginning ```(x1,y1)``` and end points ```(x2, y2)```. For horizontal and vertical lines specify the initial point and the length.
Line thickness, color and opacity are optional, with defaults (0.2, gray, 100%). A "pill" shape is a horizontal line with rounded ends. Curve is a quadratic Bezier curve: specify the beginning location ```(bx, by)```, the control point ```(cx, cy)```, and ending location ```(ex, ey)```. For arcs, specify the location of the center point ```(x,y)```, the width and height, and the beginning and ending angles (in degrees). Line thickness, color and opacity are optional, with defaults (0.2, gray, 100%). To make n-sided stars, use the "star" keyword: ```(x,y)``` is the center of the star, ```np``` is the number of points, and ```inner``` and ```outer``` are the sizes of the inner and outer points, respectively. ## Arrows Arrows with optional linewidth, width, height, color, and opacity. Default linewidth is 0.2, default arrow width and height is 3, default color and opacity is gray, 100%. The curve variants use the same syntax for specifying curves. ## Braces Left, right, up and down-facing braces. (x, y) is the location of the point of the brace, (aw, ah) are the width and height of the brace's end curves; ```linewidth```, ```color``` and ```opacity``` are optional (defaults are 0.2, gray, 100%) ## Brackets Left, right, up and down-facing brackets. (x, y) is the location of the center of the bracket. For left and right-facing brackets, ```width``` is the size of the top and bottom portions, and ```height``` is the span of the bracket. For upward and downward-facing brackets, ```width``` is the span of the bracket, and ```height``` is the size of the left and right portions. ```linewidth```, ```color``` and ```opacity``` are optional (defaults are 0.2, gray, 100%) ## Charts Run the dchart (https://github.com/ajstarks/dchart/blob/master/README.md) command with the specified arguments. ## Legend Show a colored legend
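Based on the Process description above, driving decksh from Go might look like the sketch below; the argument order (writer, then reader) is an assumption to verify against the package documentation, and the embedded script is a minimal example:

```go
package main

import (
	"log"
	"os"
	"strings"

	"github.com/ajstarks/decksh"
)

const src = `
deck
	slide "white" "black"
		ctext "hello, world" 50 50 5
		circle 50 25 10 "blue"
	eslide
edeck
`

func main() {
	// Process reads decksh commands and writes deck markup, which can then be
	// piped to a renderer such as pdfdeck or pngdeck.
	if err := decksh.Process(os.Stdout, strings.NewReader(src)); err != nil {
		log.Fatal(err)
	}
}
```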
Banshee is a real-time anomaly (outlier) detection system for periodic metrics. We use it to monitor our website and RPC service interfaces, including call frequency, response time and exception calls. Our services send statistics to statsd; statsd aggregates them every 10 seconds and broadcasts the results to its backends, including banshee. Banshee analyzes the current metrics against historical data, calculates the trend and alerts us if the trend behaves anomalously. For example, we have an API named get_user whose response time (in milliseconds) is reported to banshee from statsd every 10 seconds: Banshee will catch the latest metric 300 and report it as an anomaly. Why not just set a fixed threshold instead (e.g. 200ms)? That may also work, but it is tedious and hard to maintain a lot of thresholds. Banshee analyzes metric trends automatically; it finds the "thresholds" for you. 1. Designed for periodic metrics. Real-world metrics almost always show periodicity, and banshee only compares metrics with the same "phase" when detecting. 2. Multiple alerting-rule configuration options, to alert via fixed thresholds or via anomalous trends. 3. Comes with an anomaly visualization webapp and alerting-rule admin panels. 4. Requires no extra storage services; banshee handles storage on disk by itself. 1. Go >= 1.4 and godep. 2. Node and gulp. 1. Clone the repo. 2. Build the binary via `make`. 3. Build the static files via `make static`. Usage: Flags: See package config. In order to forward metrics to banshee from statsd, we need to add the npm module statsd-banshee to statsd's backends: 1. Install statsd-banshee on your statsd servers: 2. Add the module statsd-banshee to statsd's backends in config.js: Requires bell.js v2.0+ and banshee v0.0.7+: Banshee has 4 components, all running in the same process: 1. Detector detects incoming metrics against history data and stores the results. 2. Webapp visualizes the detection results and provides panels to manage alerting rules, projects and users. 3. Alerter sends SMS and emails once anomalies are found. 4. Cleaner removes outdated metrics from storage. See package alerter and alerter/exampleCommand. 1. Detection algorithms, see package detector. 2. Detector input net protocol, see package detector. 3. Storage, see package storage. 4. Filter, see package filter. MIT (c) eleme, inc.
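Banshee's actual detection lives in package detector; purely as an illustration of the idea of deriving a "threshold" from history instead of hard-coding one, here is a self-contained sketch. It uses a simple mean/standard-deviation heuristic, which is an assumption for illustration and not necessarily banshee's algorithm.

	package main

	import (
		"fmt"
		"math"
	)

	// anomalous reports whether value deviates from the history by more than
	// k standard deviations (a common trending heuristic, assumed here).
	func anomalous(history []float64, value, k float64) bool {
		var sum, sqSum float64
		for _, v := range history {
			sum += v
		}
		mean := sum / float64(len(history))
		for _, v := range history {
			sqSum += (v - mean) * (v - mean)
		}
		std := math.Sqrt(sqSum / float64(len(history)))
		return math.Abs(value-mean) > k*std
	}

	func main() {
		// get_user response times (ms) reported every 10 seconds.
		history := []float64{98, 102, 101, 99, 103, 100, 97, 102}
		fmt.Println(anomalous(history, 300, 3)) // true: 300 is far off the trend
		fmt.Println(anomalous(history, 105, 3)) // false: within the normal band
	}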
mail account ping (maping) - a utility for checking sets of mail servers (SMTP/IMAPv4). It saves results to a database and can generate an SVG data-visualization matrix from the results. For the moment, please refer to the documentation at https://github.com/nfdesign/maping
Package mdtopdf implements a PDF document generator for markdown documents. This package depends on two other packages: * the BlackFriday v2 parser to read the markdown source * the gofpdf package to generate the PDF. The tests included here are from the BlackFriday package; see the "testdata" folder. The tests create PDF files, so while the tests may complete without errors, visual inspection of the created PDFs is the only way to determine whether the tests *really* pass! The tests also create log files that trace the BlackFriday parser callbacks. This is a valuable debugging tool that shows each callback and the data provided to it as the AST is walked. To install the package: In the cmd folder is an example using the package; it demonstrates a number of features. The test PDF was created with this command: Package mdtopdf converts markdown to PDF.
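As a rough sketch of driving the package the way the cmd example does, something like the following should work; the NewPdfRenderer arguments (orientation, paper size, output path, trace-log path), the Process method, and the import path are assumptions based on this description, so check the cmd folder for the authoritative example.

	package main

	import (
		"log"
		"os"

		"github.com/mandolyte/mdtopdf" // assumed import path
	)

	func main() {
		md, err := os.ReadFile("input.md")
		if err != nil {
			log.Fatal(err)
		}
		// Empty orientation/paper-size strings are assumed to select defaults;
		// "trace.log" receives the BlackFriday callback trace mentioned above.
		renderer := mdtopdf.NewPdfRenderer("", "", "output.pdf", "trace.log")
		if err := renderer.Process(md); err != nil {
			log.Fatal(err)
		}
	}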
Package geobin.io runs a web server which creates a geobin url that can receive geo data via POSTs and visualizes it on a map.
Banshee is a real-time anomaly (outlier) detection system for periodic metrics. We use it to monitor our website and RPC service interfaces, including call frequency, response time and exception calls. Our services send statistics to statsd; statsd aggregates them every 10 seconds and broadcasts the results to its backends, including banshee. Banshee analyzes the current metrics against historical data, calculates the trend and alerts us if the trend behaves anomalously. For example, we have an API named get_user whose response time (in milliseconds) is reported to banshee from statsd every 10 seconds: Banshee will catch the latest metric 300 and report it as an anomaly. Why not just set a fixed threshold instead (e.g. 200ms)? That may also work, but it is tedious and hard to maintain a lot of thresholds. Banshee analyzes metric trends automatically; it finds the "thresholds" for you. 1. Designed for periodic metrics. Real-world metrics almost always show periodicity, and banshee only compares metrics with the same "phase" when detecting. 2. Multiple alerting-rule configuration options, to alert via fixed thresholds or via anomalous trends. 3. Comes with an anomaly visualization webapp and alerting-rule admin panels. 4. Requires no extra storage services; banshee handles storage on disk by itself. 1. Go >= 1.5. 2. Node and gulp. 1. Clone the repo. 2. Build the binary via `make`. 3. Build the static files via `make static`. Usage: Flags: See package config. In order to forward metrics to banshee from statsd, we need to add the npm module statsd-banshee to statsd's backends: 1. Install statsd-banshee on your statsd servers: 2. Add the module statsd-banshee to statsd's backends in config.js: Requires bell.js v2.0+ and banshee v0.0.7+: Banshee has 4 components, all running in the same process: 1. Detector detects incoming metrics against history data and stores the results. 2. Webapp visualizes the detection results and provides panels to manage alerting rules, projects and users. 3. Alerter sends SMS and emails once anomalies are found. 4. Cleaner removes outdated metrics from storage. See package alerter and alerter/exampleCommand. Via fabric (http://www.fabfile.org/): see the deploy.py docs for more. To upgrade, just pull the latest code; note that the admin storage sqlite3 schema will be auto-migrated. 1. Detection algorithms, see package detector. 2. Detector input net protocol, see package detector. 3. Storage, see package storage. 4. Filter, see package filter. Reference: https://github.com/eleme/banshee/blob/master/intro.md MIT (c) eleme, inc.
go-xvid provides Go bindings to xvidcore from Xvid 1.3.X (which uses the MPEG-4 Part 2, MPEG-4 Visual, ISO/IEC 14496-2 video codec standard). This library can encode a sequence of images to an encoded Xvid stream, decode images from an encoded Xvid stream, and convert images between different color spaces. go-xvid only handles raw Xvid streams. Nearly all commonly found video files are stored in a media container, which encapsulates, but is not itself, a raw Xvid video stream. go-xvid cannot decode or encode container data, so raw video streams must be decapsulated from (or encapsulated into) containers by other means. go-xvid tries not to abbreviate names and identifiers so that unfamiliar names can easily be searched for on the Internet. This means that this documentation will not redefine or explain common codec concepts like macroblocks, quantizers, rate-control, and such. Most of the complex configuration structures can be initialized to sane default values in case the user is not familiar with advanced encoding concepts. Before any other function in the package can be called, Init or InitWithFlags must be called once to initialize all internal Xvid state. There is no Close method corresponding to the Init call. As an exception, GetGlobalInfo, which returns general information about the runtime Xvid build, can be called at any time before and after Init. go-xvid defines a specific error type, Error, which is used to represent internal xvidcore errors. Images in go-xvid are stored in the Image structure, which holds both the image color space and its data as an array of planes, which are themselves arrays of data. Each plane has a specific stride. The classic RGBA color space has only one plane and data array, but some color spaces can have up to three. See Image for more information. Images can be converted from one color space to another with the Convert function. go-xvid can decode a sequence of images from a raw encoded Xvid stream. Decoder is the struct used to decode from a stream. A Decoder is created with NewDecoder, which takes a DecoderInit configuration struct to initialize it. Once created, Decoder.Decode can be called in a loop to decode a single frame at a time until the entire stream has been processed. Each decoded frame contains extra statistics returned by Decoder.Decode. When the Decoder is no longer needed, it must be closed with Decoder.Close to free any internal data. go-xvid can encode a sequence of images to a raw encoded Xvid stream. Encoder is the struct used to encode to a stream. An Encoder is created with NewEncoder, which takes an EncoderInit configuration struct to initialize it, which itself should be initialized with NewEncoderInit to sane default values. Once created, Encoder.Encode can be called in a loop to encode a single image at a time until all the images have been processed. Each encoded frame contains extra statistics returned by Encoder.Encode. When the Encoder is no longer needed, it must be closed with Encoder.Close to free any internal data. Plugins are used to read and write internal frame data when encoding. Some standard plugins are defined in the library, but custom ones can be created by implementing the Plugin interface. In Xvid, rate-control is achieved by using plugins (for both 1-pass and 2-pass rate-control). You will probably need to use one of these rate-control plugins when encoding (otherwise the smallest quantizer is always used).
Package intlist supports a string notation specifying a series of integers. It was written to support a data-driven text file, entered by humans, that contained a mix of integers and sequences. The format makes it easy to enter and visually recognize a run of consecutive integers. Format: Examples: There are two supported use cases: creating an int slice, and using an Iterator to produce the ints as needed. "Parse" will parse a string and return an integer slice. This is useful when a slice is wanted and the size of the result is not too large. The "NewIterator" / "Next" / "Err" functions provide the functionality necessary to iterate through the list of integers. This is especially useful when the resulting list would be huge or when it is possible to stop before consuming the whole list. Example of iterator usage:
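Since the format specification and the original iterator example are not reproduced above, the following sketch only illustrates the two use cases; the notation ("1,5-8"), the import path, and every signature shown (Parse returning ([]int, error), Next returning a value plus an ok flag) are assumptions, not the package's verified API.

	package main

	import (
		"fmt"

		"example.com/intlist" // hypothetical import path
	)

	func main() {
		// Use case 1: materialize the whole list as a slice.
		ints, err := intlist.Parse("1,5-8") // assumed to yield 1, 5, 6, 7, 8
		if err != nil {
			panic(err)
		}
		fmt.Println(ints)

		// Use case 2: iterate lazily, stopping early if desired.
		it := intlist.NewIterator("1,5-8")
		for {
			n, ok := it.Next() // assumed shape: value plus "more" flag
			if !ok {
				break
			}
			if n > 6 {
				break // stop before consuming the whole list
			}
			fmt.Println(n)
		}
		if err := it.Err(); err != nil {
			panic(err)
		}
	}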
Package multimap provides an abstract MultiMap interface. A multimap is a collection that maps keys to values, similar to a map; however, each key may be associated with multiple values. You can visualize the contents of a multimap either as a map from keys to nonempty collections of values: ... or as a single "flattened" collection of key-value pairs. Similar to a map, the operations associated with this data type allow: - the addition of a pair to the collection - the removal of a pair from the collection - the lookup of the values associated with a particular key - checking whether a key, value or key/value pair exists in this data type.
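The interface itself is abstract, so the following self-contained sketch (a plain map[K][]V, not this package's implementation or API) just illustrates the two views and the operations listed above.

	package main

	import "fmt"

	type MultiMap[K comparable, V comparable] map[K][]V

	// Put adds a key/value pair to the collection.
	func (m MultiMap[K, V]) Put(k K, v V) { m[k] = append(m[k], v) }

	// Remove deletes a single key/value pair, keeping the key's other values.
	func (m MultiMap[K, V]) Remove(k K, v V) {
		vals := m[k]
		for i, x := range vals {
			if x == v {
				m[k] = append(vals[:i], vals[i+1:]...)
				break
			}
		}
		if len(m[k]) == 0 {
			delete(m, k) // keep the "nonempty collections" view consistent
		}
	}

	// Contains reports whether the exact key/value pair exists.
	func (m MultiMap[K, V]) Contains(k K, v V) bool {
		for _, x := range m[k] {
			if x == v {
				return true
			}
		}
		return false
	}

	func main() {
		m := MultiMap[string, int]{}
		m.Put("a", 1)
		m.Put("a", 2)
		m.Put("b", 3)

		// View 1: keys mapped to nonempty collections of values.
		fmt.Println(m) // map[a:[1 2] b:[3]]

		// View 2: a single flattened collection of key/value pairs.
		for k, vals := range m {
			for _, v := range vals {
				fmt.Println(k, v)
			}
		}

		fmt.Println(m.Contains("a", 2)) // true
		m.Remove("a", 2)
		fmt.Println(m.Contains("a", 2)) // false
	}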
pprofetheus is a collector for Prometheus that collects CPU profiling data for the current process and exports it as metrics. It can be used to monitor, visualize, and alert on profiling data from any Go process that imports pprofetheus and exports metrics via Prometheus. In order to use pprofetheus in your Prometheus-enabled Go application, you just need to install it, import the package, and set up the collector with Prometheus in your code, e.g. like this:
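The original code example is not reproduced above; the sketch below shows only the generic client_golang registration pattern, and the pprofetheus import path and constructor name in it are hypothetical placeholders rather than the package's documented API.

	package main

	import (
		"log"
		"net/http"

		"example.com/pprofetheus" // hypothetical import path
		"github.com/prometheus/client_golang/prometheus"
		"github.com/prometheus/client_golang/prometheus/promhttp"
	)

	func main() {
		collector, err := pprofetheus.NewCPUProfileCollector() // hypothetical constructor name
		if err != nil {
			log.Fatal(err)
		}
		prometheus.MustRegister(collector)

		// Expose the registered metrics (including the profiling metrics) to Prometheus.
		http.Handle("/metrics", promhttp.Handler())
		log.Fatal(http.ListenAndServe(":8080", nil))
	}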
Package pprof is a fork of net/http/pprof modified to communicate over a unix socket. --------------------------------------------------------------- Package pprof serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool. For more information about pprof, see http://code.google.com/p/google-perftools/. The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/. To use pprof, link this package into your program: If your application is not already running an http server, you need to start one. Add "net/http" and "log" to your imports and the following code to your main function: Then use the pprof tool to look at the heap profile: Or to look at a 30-second CPU profile: Or to look at the goroutine blocking profile: To view all available profiles, open http://localhost:6060/debug/pprof/ in your browser. For a study of the facility in action, visit
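For illustration of the same pattern, here is a self-contained sketch that serves the standard library's /debug/pprof/ handlers over a unix socket; it uses net/http/pprof rather than this fork (whose import path is not shown here), but the shape of the setup should be similar.

	package main

	import (
		"log"
		"net"
		"net/http"
		_ "net/http/pprof" // registers /debug/pprof/ handlers on http.DefaultServeMux
	)

	func main() {
		ln, err := net.Listen("unix", "/tmp/pprof.sock")
		if err != nil {
			log.Fatal(err)
		}
		defer ln.Close()
		// Profiles can then be fetched through the socket, e.g.
		//   curl --unix-socket /tmp/pprof.sock http://unix/debug/pprof/heap
		// and fed to the pprof tool.
		log.Fatal(http.Serve(ln, nil))
	}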
Package pprof-garbage writes runtime profiling data in the format expected by the pprof visualization tool. The profile shows estimates for garbage allocations over a given time duration: See https://github.com/golang/go/issues/16629 for more details.