Package statsviz allows visualizing Go runtime metrics data in real time in your browser. Register the Statsviz HTTP handlers with your server's http.ServeMux (preferred method): Alternatively, you can register with http.DefaultServeMux: By default, Statsviz is served at http://host:port/debug/statsviz/. This and other settings can be changed by passing some Option to NewServer. If your application is not already running an HTTP server, you need to start one. Add "net/http" and "log" to your imports, and use the following code in your main function: Then open your browser and visit http://localhost:8080/debug/statsviz/. If you want more control over the Statsviz HTTP handlers, use NewServer to obtain a Server instance. Both the Server.Index and Server.Ws methods return an http.HandlerFunc.
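A minimal sketch of the preferred ServeMux registration described above, assuming the statsviz.Register helper and the github.com/arl/statsviz import path (verify against the version you use):

	package main

	import (
		"log"
		"net/http"

		"github.com/arl/statsviz" // assumed import path
	)

	func main() {
		mux := http.NewServeMux()
		// Register the Statsviz handlers on the mux (assumed helper: statsviz.Register).
		if err := statsviz.Register(mux); err != nil {
			log.Fatal(err)
		}
		// The dashboard is then served at http://localhost:8080/debug/statsviz/.
		log.Fatal(http.ListenAndServe(":8080", mux))
	}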
Package oam provides the API client, operations, and parameter types for CloudWatch Observability Access Manager. Use Amazon CloudWatch Observability Access Manager to create and manage links between source accounts and monitoring accounts by using CloudWatch cross-account observability. With CloudWatch cross-account observability, you can monitor and troubleshoot applications that span multiple accounts within a Region. Seamlessly search, visualize, and analyze your metrics, logs, traces, and Application Insights applications in any of the linked accounts without account boundaries. Set up one or more Amazon Web Services accounts as monitoring accounts and link them with multiple source accounts. A monitoring account is a central Amazon Web Services account that can view and interact with observability data generated from source accounts. A source account is an individual Amazon Web Services account that generates observability data for the resources that reside in it. Source accounts share their observability data with the monitoring account. The shared observability data can include metrics in Amazon CloudWatch, logs in Amazon CloudWatch Logs, traces in X-Ray, and applications in Amazon CloudWatch Application Insights.
Package graph is a library for creating generic graph data structures and modifying, analyzing, and visualizing them. A graph consists of vertices of type T, which are identified by a hash value of type K. The hash value for a given vertex is obtained using the hashing function passed to New. A hashing function takes a T and returns a K. For primitive types like integers, you may use a predefined hashing function such as IntHash – a function that takes an integer and uses that same integer as the hash value: For storing custom data types, you need to provide your own hashing function. This example takes a City instance and returns its name as the hash value: Creating a graph using this hashing function will yield a graph of vertices of type City identified by hash values of type string. Adding vertices to a graph of integers is simple. graph.Graph.AddVertex takes a vertex and adds it to the graph. Most functions accept and return only hash values instead of entire instances of the vertex type T. For example, graph.Graph.AddEdge creates an edge between two vertices and accepts the hash values of those vertices. Because this graph uses the IntHash hashing function, the vertex values and hash values are the same. All operations that modify the graph itself are methods of Graph. All other operations are top-level functions of this library. For detailed usage examples, take a look at the README.
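A rough sketch of this flow with a custom City hash (the import path and exact method set are assumed from the library's README; treat as illustrative only):

	package main

	import (
		"fmt"

		"github.com/dominikbraun/graph" // assumed import path
	)

	type City struct {
		Name string
	}

	func main() {
		// The hashing function: a City is identified by its name (a string).
		cityHash := func(c City) string { return c.Name }

		g := graph.New(cityHash)
		_ = g.AddVertex(City{Name: "London"})
		_ = g.AddVertex(City{Name: "Munich"})

		// Edges are created from hash values, not from the City structs themselves.
		if err := g.AddEdge("London", "Munich"); err != nil {
			fmt.Println("add edge:", err)
		}
	}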
Package chart implements common chart/plot types. The following chart types are available: Chart tries to provide useful defaults and produce nice charts without sacrificing accuracy. The generated charts look good and are highly customizable, but they will not match the visual quality of handmade Photoshop charts or the statistical features of charts produced by S or R. Creating charts consists of the following steps: You may change the configuration at any step or render to different outputs. The different chart types and their fields are all simple struct types where the zero value provides suitable defaults. All fields are exported, even if you are not supposed to manipulate them directly or if they are 'output fields'. E.g. the common Data field of all chart types will store the sample data added with one or more Add... methods. Some fields are mere output fields which expose internals for your use, like the Data2Screen and Screen2Data functions of the Ranges. Some fields are even input/output fields: e.g. you may set Range.TicSetting.Delta to some positive value, which will be used as the spacing between tics on that axis; on the other hand, if you leave Range.TicSetting.Delta at its default 0 you indicate to the plotting routine that it should determine the tic delta automatically, and the chosen value is then reported back in this field. All charts (except pie/ring charts) contain at least one axis represented by a field of type Range. Axes can be divided into the following categories: How the axis is autoscaled can be controlled for both ends of the axis individually by MinMode and MaxMode, which allow fine control of the (auto-)scaling. After setting up the chart and adding data, samples, or functions, you can render the chart to a Graphics output. This process will set several internal fields of the chart. If you reuse the chart, add additional data, and output it again, these fields might no longer indicate 'automatic/default' but instead contain the values calculated in the first output round.
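As a small, hedged illustration of the input/output field behaviour described above (the ScatterChart type and import path are assumptions; only the Range.TicSetting.Delta field is taken from this description):

	package main

	import (
		"fmt"

		"github.com/vdobler/chart" // assumed import path
	)

	func main() {
		c := chart.ScatterChart{Title: "Example"}

		// Input use: request a fixed tic spacing of 5 units on the X axis.
		c.XRange.TicSetting.Delta = 5

		// Leaving Delta at its default 0 would instead ask the plotting routine
		// to pick the spacing automatically and report the chosen value back
		// in this same field after rendering.
		fmt.Println(c.XRange.TicSetting.Delta)
	}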
Package grafana provides the API client, operations, and parameter types for Amazon Managed Grafana. Amazon Managed Grafana is a fully managed and secure data visualization service that you can use to instantly query, correlate, and visualize operational metrics, logs, and traces from multiple sources. Amazon Managed Grafana makes it easy to deploy, operate, and scale Grafana, a widely deployed data visualization tool that is popular for its extensible data support. With Amazon Managed Grafana, you create logically isolated Grafana servers called workspaces. In a workspace, you can create Grafana dashboards and visualizations to analyze your metrics, logs, and traces without having to build, package, or deploy any hardware to run Grafana servers.
Package iotanalytics provides the API client, operations, and parameter types for AWS IoT Analytics. IoT Analytics allows you to collect large amounts of device data, process messages, and store them. You can then query the data and run sophisticated analytics on it. IoT Analytics enables advanced data exploration through integration with Jupyter Notebooks and data visualization through integration with Amazon QuickSight. Traditional analytics and business intelligence tools are designed to process structured data. IoT data often comes from devices that record noisy processes (such as temperature, motion, or sound). As a result the data from these devices can have significant gaps, corrupted messages, and false readings that must be cleaned up before analysis can occur. Also, IoT data is often only meaningful in the context of other data from external sources. IoT Analytics automates the steps required to analyze data from IoT devices. IoT Analytics filters, transforms, and enriches IoT data before storing it in a time-series data store for analysis. You can set up the service to collect only the data you need from your devices, apply mathematical transforms to process the data, and enrich the data with device-specific metadata such as device type and location before storing it. Then, you can analyze your data by running queries using the built-in SQL query engine, or perform more complex analytics and machine learning inference. IoT Analytics includes pre-built models for common IoT use cases so you can answer questions like which devices are about to fail or which customers are at risk of abandoning their wearable devices.
Package lttb implements the Largest-Triangle-Three-Buckets algorithm for downsampling points. The downsampled data maintains the visual characteristics of the original line using considerably fewer data points. This is a translation of the JavaScript code at
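A hedged usage sketch, assuming a Point type and an LTTB(data, threshold) function in the style of common Go ports of the algorithm:

	package main

	import (
		"fmt"

		"github.com/dgryski/go-lttb" // assumed import path
	)

	func main() {
		// Build a synthetic series of 1000 points.
		data := make([]lttb.Point, 1000)
		for i := range data {
			data[i] = lttb.Point{X: float64(i), Y: float64(i % 50)}
		}

		// Downsample to at most 100 points while keeping the line's visual shape.
		reduced := lttb.LTTB(data, 100)
		fmt.Println(len(reduced))
	}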
Package duplo provides tools to efficiently query large sets of images for visual duplicates. The technique is based on the paper "Fast Multiresolution Image Querying" by Charles E. Jacobs, Adam Finkelstein, and David H. Salesin, with a few modifications and additions, such as the addition of a width-to-height ratio, the dHash metric by Dr. Neal Krawetz, and some histogram-based metrics. Querying the data structure will return a list of potential matches, sorted by the score described in the main paper. The user can make searching for duplicates stricter, however, by filtering based on the additional metrics.
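A hedged sketch of the add-then-query flow (the New, CreateHash, Add, and Query names and signatures are assumptions based on the package's README; verify before use):

	package main

	import (
		"fmt"
		"image"
		_ "image/jpeg"
		"log"
		"os"

		"github.com/rivo/duplo" // assumed import path
	)

	func decode(path string) image.Image {
		f, err := os.Open(path)
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()
		img, _, err := image.Decode(f)
		if err != nil {
			log.Fatal(err)
		}
		return img
	}

	func main() {
		store := duplo.New()

		// Hash and index a reference image (assumed: CreateHash returns the hash first).
		hash, _ := duplo.CreateHash(decode("a.jpg"))
		store.Add("a.jpg", hash)

		// Query with another image; matches are assumed to come back sorted by score.
		queryHash, _ := duplo.CreateHash(decode("b.jpg"))
		matches := store.Query(queryHash)
		fmt.Println(len(matches))
	}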
Package detective provides the API client, operations, and parameter types for Amazon Detective. Detective uses machine learning and purpose-built visualizations to help you analyze and investigate security issues across your Amazon Web Services (AWS) workloads. Detective automatically extracts time-based events such as login attempts, API calls, and network traffic from CloudTrail and Amazon Virtual Private Cloud (Amazon VPC) flow logs. It also extracts findings detected by Amazon GuardDuty. The Detective API primarily supports the creation and management of behavior graphs. A behavior graph contains the extracted data from a set of member accounts, and is created and managed by an administrator account. To add a member account to the behavior graph, the administrator account sends an invitation to the account. When the account accepts the invitation, it becomes a member account in the behavior graph. Detective is also integrated with Organizations. The organization management account designates the Detective administrator account for the organization. That account becomes the administrator account for the organization behavior graph. The Detective administrator account is also the delegated administrator account for Detective in Organizations. The Detective administrator account can enable any organization account as a member account in the organization behavior graph. The organization accounts do not receive invitations. The Detective administrator account can also invite other accounts to the organization behavior graph. Every behavior graph is specific to a Region. You can only use the API to manage behavior graphs that belong to the Region that is associated with the currently selected endpoint. The administrator account for a behavior graph can use the Detective API to do the following: Enable and disable Detective. Enabling Detective creates a new behavior graph. View the list of member accounts in a behavior graph. Add member accounts to a behavior graph. Remove member accounts from a behavior graph. Apply tags to a behavior graph. The organization management account can use the Detective API to select the delegated administrator for Detective. The Detective administrator account for an organization can use the Detective API to do the following: Perform all of the functions of an administrator account. Determine whether to automatically enable new organization accounts as member accounts in the organization behavior graph. An invited member account can use the Detective API to do the following: View the list of behavior graphs that they are invited to. Accept an invitation to contribute to a behavior graph. Decline an invitation to contribute to a behavior graph. Remove their account from a behavior graph. All API actions are logged as CloudTrail events. See Logging Detective API Calls with CloudTrail. We replaced the term "master account" with the term "administrator account". An administrator account is used to centrally manage multiple accounts. In the case of Detective, the administrator account manages the accounts in their behavior graph.
Package mdtopdf implements a PDF document generator for markdown documents. This package depends on two other packages: * The BlackFriday v2 parser to read the markdown source * The fpdf package to generate the PDF The tests included here are from the BlackFriday package. See the "testdata" folder. The tests create PDF files, and thus while the tests may complete without errors, visual inspection of the created PDF is the only way to determine if the tests *really* pass! The tests create log files that trace the BlackFriday parser callbacks. This is a valuable debug tool, showing each callback and the data provided to it as the AST is walked. To install the package: In the cmd folder is an example using the package. It demonstrates a number of features. The test PDF was created with this command: Package mdtopdf converts markdown to PDF.
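A hedged sketch of driving the converter (the NewPdfRenderer constructor and Process method, and their argument order, are assumed from the project's README and may differ between versions):

	package main

	import (
		"log"
		"os"

		"github.com/mandolyte/mdtopdf" // assumed import path
	)

	func main() {
		md, err := os.ReadFile("README.md")
		if err != nil {
			log.Fatal(err)
		}

		// Assumed arguments: orientation, paper size, output PDF path, tracer log path.
		r := mdtopdf.NewPdfRenderer("", "", "out.pdf", "trace.log")
		if err := r.Process(md); err != nil {
			log.Fatal(err)
		}
	}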
Package tfortools provides a set of functions that are designed to make it easier for developers to add template based scripting to their command line tools. Command line tools written in Go often allow users to specify a template script to tailor the output of the tool to their specific needs. This can be useful both when visually inspecting the data and also when invoking command line tools in scripts. The best example of this is go list which allows users to pass a template script to extract interesting information about Go packages. For example, prints all the imports of the current package. The aim of this package is to make it easier for developers to add template scripting support to their tools and easier for users of these tools to extract the information they need. It does this by augmenting the templating language provided by the standard library package text/template in two ways: 1. It auto generates descriptions of the data structures passed as input to a template script for use in help messages. This ensures that help usage information is always up to date with the source code. 2. It provides a suite of convenience functions to make it easy for script writers to extract the data they need. There are functions for sorting, selecting rows and columns and generating nicely formatted tables. For example, if a program passed a slice of structs containing stock data to a template script, we could use the following script to extract the names of the 3 stocks with the highest trade volume. The output might look something like this: The functions head, sort, tables and col are provided by this package.
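As a hedged illustration of the kind of script this enables, the sketch below pipes a slice of stock structs through sort, head, and table; the argument order of these template functions and the OutputToTemplate helper are assumptions, not taken from this package's documentation:

	package main

	import (
		"log"
		"os"

		"github.com/intel/tfortools" // assumed import path
	)

	type Stock struct {
		Name   string
		Volume int
	}

	func main() {
		stocks := []Stock{{"acme", 100}, {"globex", 300}, {"initech", 200}}

		// Hypothetical script: sort by Volume descending, keep the top 3, print a table.
		script := `{{table (head (sort . "Volume" "dsc") 3)}}`

		// Assumed helper for executing a template script against a value.
		if err := tfortools.OutputToTemplate(os.Stdout, "stocks", script, stocks, nil); err != nil {
			log.Fatal(err)
		}
	}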
Package multimap provides an abstract MultiMap interface. Multimap is a collection that maps keys to values, similar to map. However, each key may be associated with multiple values. You can visualize the contents of a multimap either as a map from keys to nonempty collections of values: ... or a single "flattened" collection of key-value pairs. Similar to a map, the operations associated with this data type allow: - the addition of a pair to the collection - the removal of a pair from the collection - the lookup of the values associated with a particular key - checking whether a key, value, or key/value pair exists in this data type.
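A minimal illustration of these two views using plain Go maps rather than this package's API:

	package main

	import "fmt"

	func main() {
		// View 1: a map from keys to nonempty collections of values.
		grouped := map[string][]string{
			"fruit": {"apple", "pear"},
			"veg":   {"carrot"},
		}

		// View 2: the same contents as a single "flattened" collection of pairs.
		type pair struct{ key, value string }
		var flat []pair
		for k, vs := range grouped {
			for _, v := range vs {
				flat = append(flat, pair{k, v})
			}
		}
		fmt.Println(len(flat)) // 3 key-value pairs
	}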
Package xmlwriter provides a fast, non-cached, forward-only way to generate XML data. The API is based heavily on libxml's xmlwriter API [1], which is itself based on C#'s XmlWriter [2]. It offers some advantages over Go's default encoding/xml package and some tradeoffs. You can have complete control of the generated documents and it uses very little memory. There are two styles for interacting with the writer: structured and heap-friendly. If you want a visual representation of the hierarchy of some of your writes in your code and you don't care about a few instances of memory escaping to the heap (and most of the time you won't), you can use the structured API. If you are writing a code generator or your interactions with the API are minimal, you should use the heap-friendly (direct) API. xmlwriter.Writer{} takes any io.Writer, along with a variable list of options. xmlwriter options are based on Dave Cheney's functional options pattern (https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis): Provided options are: Using the structured API, you might express a small tree of elements like this. These nodes will escape to the heap, but judicious use of this nesting can make certain structures a lot more readable by representing the desired XML hierarchy in the code that produces it: The code can be made even less dense by importing xmlwriter with a prefix: `import xw "github.com/shabbyrobe/xmlwriter"` The same output is possible with the heap-friendly API. This has a lot more stutter and it's harder to tell the hierarchical relationship just by looking at the code, but there are no heap escapes this way: Use whichever API reads best in your code, but favour the latter style in all code generators and performance hotspots. xmlwriter.Writer extends bufio.Writer! Don't forget to flush, otherwise you'll lose data. There are two ways to flush: The EndAllFlush form is just a convenience; it calls EndAll() and Flush() for you. Nodes which can have children can be passed to `Writer.Start()`. This adds them to the stack and opens them, allowing children to be added. Becomes: <foo><bar><baz/></bar></foo> Nodes which have no children, or nodes which can be opened and fully closed with only a trivial amount of information, can be passed to `Writer.Write()`. If written nodes are put onto the stack, they will be popped before Write returns. Becomes: <foo/><bar/><baz/> Block takes a Startable and a variable number of Writable nodes. The Startable will be opened, the Writables will be written, then the Startable will be closed: Becomes: There are several ways to end an element. Choose the End that's right for you! Nodes as they are written can be in three states: StateOpen, StateOpened or StateEnd. StateOpen == "<elem". StateOpened == "<elem>". StateEnd == "<elem></elem>". Node structs are available for writing in the following hierarchy. Nodes which are "Startable" (passed to `writer.Start(n)`) are marked with an S. Nodes which are "Writable" (passed to `writer.Write(n)`) are marked with a W. - xmlwriter.Raw* (W) - xmlwriter.Doc (S) * `xmlwriter.Raw` can be written anywhere, at any time. If a node is in the "open" state but not in the "opened" state, for example you have started an element and written an attribute, writing "raw" will add the content to the inside of the element opening tag unless you call `w.Next()`. Every node has a corresponding NodeKind constant, which can be found by affixing "Node" to the struct name, i.e. "xmlwriter.Elem" becomes "xmlwriter.ElemNode".
These are used for calls to Writer.End(). xmlwriter.Attr{} values can be assigned from any Go primitive like so: xmlwriter supports encoders from the golang.org/x/text/encoding package. UTF-8 strings written from Go will be converted on the fly, and the document declaration will be written correctly. To write your XML using the windows-1252 encoder: The document line will look like this:
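A hedged sketch of the heap-friendly style using only the calls named above (Start, Write, EndAllFlush, and the Doc, Elem, and Attr structs); the Open constructor and exact field names are assumptions:

	package main

	import (
		"fmt"
		"log"
		"strings"

		xw "github.com/shabbyrobe/xmlwriter"
	)

	func main() {
		var sb strings.Builder
		w := xw.Open(&sb) // assumed constructor wrapping an io.Writer

		must := func(err error) {
			if err != nil {
				log.Fatal(err)
			}
		}
		must(w.Start(xw.Doc{}))
		must(w.Start(xw.Elem{Name: "foo"}))
		must(w.Write(xw.Attr{Name: "a", Value: "1"}))
		must(w.Start(xw.Elem{Name: "bar"}))
		must(w.Write(xw.Elem{Name: "baz"}))

		// End every open node and flush the underlying bufio.Writer.
		must(w.EndAllFlush())

		// Expect something like: <?xml version="1.0"?><foo a="1"><bar><baz/></bar></foo>
		fmt.Println(sb.String())
	}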
Package lookoutvision provides the API client, operations, and parameter types for Amazon Lookout for Vision. This is the Amazon Lookout for Vision API Reference. It provides descriptions of actions, data types, common parameters, and common errors. Amazon Lookout for Vision enables you to find visual defects in industrial products, accurately and at scale. It uses computer vision to identify missing components in an industrial product, damage to vehicles or structures, irregularities in production lines, and even minuscule defects in silicon wafers — or any other physical item where quality is important such as a missing capacitor on printed circuit boards.
Package amplifyuibuilder provides the API client, operations, and parameter types for AWS Amplify UI Builder. The Amplify UI Builder API provides a programmatic interface for creating and configuring user interface (UI) component libraries and themes for use in your Amplify applications. You can then connect these UI components to an application's backend Amazon Web Services resources. You can also use the Amplify Studio visual designer to create UI components and model data for an app. For more information, see Introduction in the Amplify Docs. The Amplify Framework is a comprehensive set of SDKs, libraries, tools, and documentation for client app development. For more information, see the Amplify Framework. For more information about deploying an Amplify application to Amazon Web Services, see the Amplify User Guide.
Package databrew provides the API client, operations, and parameter types for AWS Glue DataBrew. Glue DataBrew is a visual, cloud-scale data-preparation service. DataBrew simplifies data preparation tasks, targeting data issues that are hard to spot and time-consuming to fix. DataBrew empowers users of all technical levels to visualize the data and perform one-click data transformations, with no coding required.
Package codeguruprofiler provides the API client, operations, and parameter types for Amazon CodeGuru Profiler. Amazon CodeGuru Profiler collects runtime performance data from your live applications, and provides recommendations that can help you fine-tune your application performance. Using machine learning algorithms, CodeGuru Profiler can help you find your most expensive lines of code and suggest ways you can improve efficiency and remove CPU bottlenecks. Amazon CodeGuru Profiler provides different visualizations of profiling data to help you identify what code is running on the CPU, see how much time is consumed, and find ways to reduce CPU utilization. Amazon CodeGuru Profiler currently supports applications written in all Java virtual machine (JVM) languages and Python. While CodeGuru Profiler supports both visualizations and recommendations for applications written in Java, it can also generate visualizations and a subset of recommendations for applications written in other JVM languages and Python. For more information, see What is Amazon CodeGuru Profiler in the Amazon CodeGuru Profiler User Guide.
Package jin Copyright (c) 2020 eco. License that can be found in the LICENSE file. "your wish is my command" Jin is a comprehensive JSON manipulation tool bundle. All functions are tested with random data with the help of Node.js. All test-path and test-value creation is automated with Node.js. Jin provides parse, interpret, build and format tools for JSON. Third-party packages are used only for the benchmarks; no dependencies are needed for the core functions. We benchmarked Jin against other similar packages. As a result, Jin is the fastest (ns/op) and more memory-friendly than the others (B/op). For more information please take a look at the BENCHMARK section below. WHAT IS NEW? 7 new functions were tested and added to the package. - GetMap() gets objects as a map[string]string structure with key-value pairs - GetAll() gets only specific keys' values - GetAllMap() gets only specific keys as a map[string]string structure - GetKeys() gets an object's keys as a string array - GetValues() gets an object's values as a string array - GetKeysValues() gets an object's keys and values as separate string arrays - Length() gets the length of a JSON array. 06.04.2020 INSTALLATION And you are good to go. Import and start using. The major difference between parsing and interpreting is that the parser has to read all of the data before it can answer your queries, whereas the interpreter reads only up to the data you need. With the parser, once the parse is complete you can access any data in no time, but there is a time cost to parsing all the data and this cost grows with the size of the content. If you need to access all keys of a JSON document, we simply recommend the Parser. But if you only need some keys, we strongly recommend the Interpreter; it will be much faster and much more memory-friendly than the parser. The Interpreter is the core element of this package; there is no need to create an Interpreter type, just call whichever function you want. First let's look at the general function parameters. We are going to use the Get() function to access the value that the path points to, in this case 'jin'. Path values can consist of hard-coded values. The Get() function's return type is []byte, but all other return-type variations are implemented as separate functions. For example, if you need the value as a string, use GetString(). The Parser is another alternative for JSON manipulation. We recommend this structure when you need to access all or most of the keys in the JSON. The Parser constructor needs only one parameter. We can parse it with the Parse() function. Let's look at Parser.Get(). For path values, see above. All return-type variations of Parser.Get() exist, just like for the Interpreter. To return a string, use Parser.GetString() like this. Other useful functions of Jin are Add(), AddKeyValue(), Set(), SetKey(), Delete(), Insert(), IterateArray(), IterateKeyValue() and Tree(). Let's look at the IterateArray() function. There are two formatting functions, Flatten() and Indent(). Indent() adds indentation to JSON for nicer visualization and Flatten() removes this indentation. Control functions are a simple and easy way to check the value type at any path, for example IsArray(); or you can use GetType(). There are lots of JSON build functions in this package and all of them have their own examples. We just want to mention a couple of them. Scheme is a simple and powerful tool for creating JSON schemes. Testing is very important for this type of package and it shows how reliable it is. For that reason we use Node.js for unit testing. Let's look at the folder arrangement and working principle.
- test/ folder: test-json.json is a temporary file for testing. All other test cases are copied here under this name so they can be processed by test-case-creator.js. test-case-creator.js is the core path & value creation mechanism. It is executed with the executeNode() function. It reads the test-json.json file and generates paths and values from this file's content. With command line arguments it can generate different paths and values. As a result, two files are created by this process: the first is test-json-paths.json and the second is test-json-values.json. test-json-paths.json holds all the path values, and test-json-values.json holds all the values corresponding to those paths. - tests/ folder: All files in this folder are test cases. That doesn't mean you can't change anything; on the contrary, all test cases are created automatically based on this folder's content. You can add or remove any .json file you want. All Go-side test-case automation functions are in the core_test.go file. This package was developed with Node.js v13.7.0; please make sure that your machine has a valid version of Node.js before testing. All functions and methods are tested with complicated, randomly generated .json files, like this: Most JSON packages do not even run properly with this kind of JSON stream. We didn't see such packages as competitors, which is why we didn't even bother to benchmark against them. Benchmark results: - The Benchmark prefix is removed from function names to make room for the results. - The benchmark between 'buger/jsonparser' and 'ecoshub/jin' uses the same payload (JSON test cases) that the 'buger/jsonparser' package uses for its own benchmarks. We are currently working on: - Marshal() and Unmarshal() functions - an http.Request parser/interpreter - builder functions for http.ResponseWriter If you want to contribute to this work, feel free to fork it. We want to fill this section with contributors.
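Returning to the Interpreter-style access described above, a hedged sketch (the Get and GetString signatures are assumed to be of the form Get(json []byte, path ...string) and may not match the released API exactly):

	package main

	import (
		"fmt"
		"log"

		"github.com/ecoshub/jin" // assumed import path
	)

	func main() {
		json := []byte(`{"name":{"last":"jin"},"tags":["fast","small"]}`)

		// Assumed signature: Get(json []byte, path ...string) ([]byte, error).
		value, err := jin.Get(json, "name", "last")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(value)) // jin

		// GetString is assumed to return the same lookup as a string; addressing
		// array elements by an index given as a string is also an assumption.
		s, err := jin.GetString(json, "tags", "0")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(s) // fast
	}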
Package pprof serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool. For more information about pprof, see http://code.google.com/p/google-perftools/. The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/. To use pprof, link this package into your program: If your application is not already running an http server, you need to start one. Add "net/http" and "log" to your imports and the following code to your main function: Then use the pprof tool to look at the heap profile: Or to look at a 30-second CPU profile: Or to look at the goroutine blocking profile: To view all available profiles, open http://localhost:6060/debug/pprof/ in your browser. For a study of the facility in action, visit
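A minimal sketch of the side-effect import pattern described above, mirroring the standard net/http/pprof usage (the port is arbitrary):

	package main

	import (
		"log"
		"net/http"
		_ "net/http/pprof" // registers the /debug/pprof/ handlers on http.DefaultServeMux
	)

	func main() {
		// Start an HTTP server so the profiling endpoints are reachable, e.g.
		//   go tool pprof http://localhost:6060/debug/pprof/heap
		log.Println(http.ListenAndServe("localhost:6060", nil))
	}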
Package vision is a repository containing visual processing packages in Go (golang), focused mainly on providing efficient V1 (primary visual cortex) level filtering of images, with the output then suitable as input for neural networks. Two main types of filters are supported: * **Gabor** filters simulate V1 simple-cell responses in terms of an oriented sine wave times a gaussian envelope that localizes the filter in space. This produces an edge detector that detects oriented contrast transitions between light and dark. In general, the main principle of primary visual filtering is to focus on spatial (and temporal) changes, while filtering out static, uniform areas. * **DoG** (difference of gaussian) filters simulate retinal On-center vs. Off-center contrast coding cells -- unlike gabor filters, these do not have orientation tuning. Mathematically, they are a difference between a narrow (center) vs wide (surround) gaussian, of opposite signs, balanced so that a uniform input generates offsetting values that sum to zero. In the visual system, orientation tuning is constructed from aligned DoG-like inputs, but it is more efficient to just use the Gabor filters directly. However, DoG filters capture the "blob" cells that encode color contrasts. The `vfilter` package contains general-purpose filtering code that applies (convolves) any given filter with a visual input. It also supports converting an `image.Image` into a `etensor.Float32` tensor which is the main data type used in this framework. It also supports max-pooling for efficiently reducing the dimensionality of inputs. The `kwta` package provides an implementation of the feedforward and feedback (FFFB) inhibition dynamics (and noisy X-over-X-plus-1 activation function) from the `Leabra` algorithm to produce a k-Winners-Take-All processing of visual filter outputs -- this increases the contrast and simplifies the representations, and is a good model of the dynamics in primary visual cortex.
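A standalone sketch in plain Go (not this package's API) of the difference-of-gaussian idea: a narrow center gaussian minus a wide surround gaussian, each normalized so that a uniform input produces offsetting values that sum to zero:

	package main

	import (
		"fmt"
		"math"
	)

	// gaussian returns an unnormalized 1D gaussian value at distance x for width sigma.
	func gaussian(x, sigma float64) float64 {
		return math.Exp(-(x * x) / (2 * sigma * sigma))
	}

	func main() {
		const size = 9
		center := make([]float64, size)
		surround := make([]float64, size)
		var cSum, sSum float64
		for i := 0; i < size; i++ {
			x := float64(i - size/2)
			center[i] = gaussian(x, 1.0)   // narrow center
			surround[i] = gaussian(x, 2.5) // wide surround
			cSum += center[i]
			sSum += surround[i]
		}

		// Normalize each gaussian to sum to 1, then subtract: the resulting
		// filter responds to local contrast (blobs) and ignores uniform input.
		dog := make([]float64, size)
		var total float64
		for i := 0; i < size; i++ {
			dog[i] = center[i]/cSum - surround[i]/sSum
			total += dog[i]
		}
		fmt.Printf("DoG filter: %v\nsum of weights: %.6f\n", dog, total)
	}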
Package mdtopdf implements a PDF document generator for markdown documents. This package depends on two other packages: * The BlackFriday v2 parser to read the markdown source * The gofpdf package to generate the PDF The tests included here are from the BlackFriday package. See the "testdata" folder. The tests create PDF files, and thus while the tests may complete without errors, visual inspection of the created PDF is the only way to determine if the tests *really* pass! The tests create log files that trace the BlackFriday parser callbacks. This is a valuable debug tool, showing each callback and the data provided to it as the AST is walked. To install the package: In the cmd folder is an example using the package. It demonstrates a number of features. The test PDF was created with this command: Package mdtopdf converts markdown to PDF.
Package applicationdiscoveryservice provides the client and types for making API requests to AWS Application Discovery Service. AWS Application Discovery Service helps you plan application migration projects by automatically identifying servers, virtual machines (VMs), software, and software dependencies running in your on-premises data centers. Application Discovery Service also collects application performance data, which can help you assess the outcome of your migration. The data collected by Application Discovery Service is securely retained in an Amazon-hosted and managed database in the cloud. You can export the data as a CSV or XML file into your preferred visualization tool or cloud-migration solution to plan your migration. For more information, see the Application Discovery Service FAQ (http://aws.amazon.com/application-discovery/faqs/). Application Discovery Service offers two modes of operation. Agentless discovery mode is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless discovery doesn't collect information about software and software dependencies. It also doesn't work in non-VMware environments. We recommend that you use agent-based discovery for non-VMware environments and if you want to collect information about software and software dependencies. You can also run agent-based and agentless discovery simultaneously. Use agentless discovery to quickly complete the initial infrastructure assessment and then install agents on select hosts to gather information about software and software dependencies. Agent-based discovery mode collects a richer set of data than agentless discovery by using Amazon software, the AWS Application Discovery Agent, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of installed software applications, system and process performance, resource utilization, and network dependencies between workloads. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud. Application Discovery Service integrates with application discovery solutions from AWS Partner Network (APN) partners. Third-party application discovery tools can query Application Discovery Service and write to the Application Discovery Service database using a public API. You can then import the data into either a visualization tool or cloud-migration solution. Application Discovery Service doesn't gather sensitive information. All data is handled according to the AWS Privacy Policy (http://aws.amazon.com/privacy/). You can operate Application Discovery Service using offline mode to inspect collected data before it is shared with the service. Your AWS account must be granted access to Application Discovery Service, a process called whitelisting. This is true for AWS partners and customers alike. To request access, sign up for AWS Application Discovery Service here (http://aws.amazon.com/application-discovery/preview/). We send you information about how to get started. This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. 
Alternatively, you can use one of the AWS SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see AWS SDKs (http://aws.amazon.com/tools/#SDKs). This guide is intended for use with the AWS Application Discovery Service User Guide (http://docs.aws.amazon.com/application-discovery/latest/userguide/). See https://docs.aws.amazon.com/goto/WebAPI/discovery-2015-11-01 for more information on this service. See applicationdiscoveryservice package documentation for more information. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/ To use AWS Application Discovery Service with the SDK, use the New function to create a new service client. With that client you can make API requests to the service. These clients are safe to use concurrently. See the SDK's documentation for more information on how to use the SDK. https://docs.aws.amazon.com/sdk-for-go/api/ See aws.Config documentation for more information on configuring SDK clients. https://docs.aws.amazon.com/sdk-for-go/api/aws/#Config See the AWS Application Discovery Service client ApplicationDiscoveryService for more information on creating a client for this service. https://docs.aws.amazon.com/sdk-for-go/api/service/applicationdiscoveryservice/#New
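A hedged sketch of the New-function pattern, following the usual AWS SDK for Go (v1) conventions; DescribeAgents is used only as an example operation, and the region is arbitrary:

	package main

	import (
		"fmt"
		"log"

		"github.com/aws/aws-sdk-go/aws"
		"github.com/aws/aws-sdk-go/aws/session"
		"github.com/aws/aws-sdk-go/service/applicationdiscoveryservice"
	)

	func main() {
		// Create a session and a service client; clients are safe for concurrent use.
		sess := session.Must(session.NewSession(&aws.Config{
			Region: aws.String("us-west-2"),
		}))
		svc := applicationdiscoveryservice.New(sess)

		// Example call: list the discovery agents registered in the account.
		out, err := svc.DescribeAgents(&applicationdiscoveryservice.DescribeAgentsInput{})
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(out)
	}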
Package cors provides handlers to enable CORS support. Package expvar provides a standardized interface to public variables, such as operation counters in servers. It exposes these variables via HTTP at /debug/vars in JSON format. Operations to set or modify these public variables are atomic. In addition to adding the HTTP handler, this package registers the following variables: The package is sometimes only imported for the side effect of registering its HTTP handler and the above variables. To use it this way, link this package into your program: Package pprof serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool. For more information about pprof, see http://code.google.com/p/google-perftools/. The package is typically only imported for the side effect of registering its HTTP handlers. The handled paths all begin with /debug/pprof/. To use pprof, link this package into your program: If your application is not already running an http server, you need to start one. Add "net/http" and "log" to your imports and the following code to your main function: Then use the pprof tool to look at the heap profile: Or to look at a 30-second CPU profile: Or to look at the goroutine blocking profile: To view all available profiles, open http://localhost:6060/debug/pprof/ in your browser. For a study of the facility in action, visit
Package rum provides the API client, operations, and parameter types for CloudWatch RUM. With Amazon CloudWatch RUM, you can perform real-user monitoring to collect client-side data about your web application performance from actual user sessions in real time. The data collected includes page load times, client-side errors, and user behavior. When you view this data, you can see it all aggregated together and also see breakdowns by the browsers and devices that your customers use. You can use the collected data to quickly identify and debug client-side performance issues. CloudWatch RUM helps you visualize anomalies in your application performance and find relevant debugging data such as error messages, stack traces, and user sessions. You can also use RUM to understand the range of end-user impact including the number of users, geolocations, and browsers used.
Package pprof-garbage writes runtime profiling data in the format expected by the pprof visualization tool. The profile shows estimates for garbage allocations over a given time duration: See https://github.com/golang/go/issues/16629 for more details.
Banshee is a real-time anomaly (outlier) detection system for periodic metrics. We use it to monitor our website and RPC service interfaces, including call frequency, response time and exception calls. Our services send statistics to statsd, statsd aggregates them every 10 seconds and broadcasts the results to its backends, including banshee. Banshee analyzes current metrics against history data, calculates the trending and alerts us if the trending behaves anomalously. For example, we have an API named get_user; this API's response time (in milliseconds) is reported to banshee from statsd every 10 seconds: Banshee will catch the latest metric 300 and report it as an anomaly. Why don't we just set a fixed threshold instead (e.g. 200ms)? That may also work, but it is tedious and hard to maintain a lot of thresholds. Banshee analyzes metric trendings automatically; it finds the "thresholds" by itself. 1. Designed for periodic metrics. Real-world metrics almost always show periodicity; banshee only compares metrics with the same "phase" when detecting. 2. Multiple alerting rule configuration options, to alert via fixed thresholds or via anomalous trendings. 3. Comes with an anomaly-visualization webapp and alerting-rule admin panels. 4. Requires no extra storage services; banshee handles storage on disk by itself. 1. Go >= 1.4 and godep. 2. Node and gulp. 1. Clone the repo. 2. Build the binary via `make`. 3. Build the static files via `make static`. Usage: Flags: See package config. In order to forward metrics to banshee from statsd, we need to add the npm module statsd-banshee to statsd's backends: 1. Install statsd-banshee on your statsd servers: 2. Add the module statsd-banshee to statsd's backends in config.js: Requires bell.js v2.0+ and banshee v0.0.7+: Banshee has 4 components and they run in the same process: 1. The detector detects incoming metrics against history data and stores the results. 2. The webapp visualizes the detection results and provides panels to manage alerting rules, projects and users. 3. The alerter sends SMS and emails once anomalies are found. 4. The cleaner cleans outdated metrics from storage. See package alerter and alerter/exampleCommand. 1. Detection algorithms, see package detector. 2. Detector input net protocol, see package detector. 3. Storage, see package storage. 4. Filter, see package filter. MIT (c) eleme, inc.