Package tracing includes high-level tools for instrumenting your application (and library) code using OpenTelemetry and go-logr. It does this by interconnecting logs and traces: critical operations that need to be instrumented start a tracing span using the *TracerBuilder builder. Upon starting a span, the user gives it the context in which it is operating. If the context contains a parent span, the new "child" span and the parent are connected together. Various kinds of metadata can be registered with the span, for example attributes, status information, and potential errors. Spans always need to be ended, most commonly using a defer statement right after creation. The context given to the *TracerBuilder might carry a TracerProvider to use for exporting span data, e.g. to Jaeger for visualization, or a logr.Logger to which logs are sent. The context can also carry a LogLevelIncreaser, which correlates log levels with trace depth. The core idea of interconnecting logs and traces is that when some metadata is registered with a span (for example, it starts, ends, or has attributes or errors registered), that information is also logged. And when something is logged in a function executing within a span, it is also registered with the span. This gives you two ways of looking at your application's execution: the "waterfall" visualization of spans in a trace in an OpenTelemetry-compliant UI like Jaeger, or pluggable logging using logr. Additionally, there is a way to output semi-human-readable YAML data based on the trace information, which is useful when you want to unit-test a function based on its output trace data using a "golden file" in a testdata/ directory. Let's talk about trace depth and log levels. Consider this example trace (tree of spans): Span A is at depth 0, as it is a "root span". Inside span A, span B starts at depth 1 (span B has exactly one parent span). Span B spawns span C at depth 2. Span B ends, and after this span D starts at depth 1, as a child of span A. After D is done executing, span A also ends after a while. Using the TraceEnabler interface, the user can decide which spans are "enabled" and hence sent to the TracerProvider backend, for example Jaeger. By default, spans of any depth are sent to the backing TracerProvider, but this is often not desirable in production. The TraceEnabler can decide whether a span should be enabled based on all the data in tracing.TracerConfig, which includes the span name, trace depth, and so on. For example, MaxDepthEnabler(maxDepth) allows all traces with depth maxDepth or less, while LoggerEnabler() allows traces as long as the given Logger is enabled. With that, let's take a look at how trace depth correlates with log levels. The LogLevelIncreaser interface, possibly attached to a context, controls how much the log level (verbosity) should increase as the trace depth increases. The NoLogLevelIncrease() implementation, for example, never increases the log level, no matter how deep the trace gets. That is most often not desired, so there is also an NthLogLevelIncrease(n) implementation that raises the log level on every n-th increase of trace depth. For the earlier example, the log level (often shortened to "v") is increased as follows for NthLogLevelIncrease(2): As per how logr.Loggers work, log levels can never be decreased, i.e. become less verbose; they can only be increased.
The logr.Logger backend enables log levels up to a given maximum, configured by the user, similar to how MaxDepthEnabler works. Log output for the example above would look something like: This is of course a somewhat dull example, because only the start/end span events are logged, but it shows the spirit. If span operations like span.Set{Name,Attributes,Status} are executed within the instrumented function, e.g. to record errors, important return values, arbitrary attributes, or a decision, this information is logged automatically, without a need to call log.Info() separately. At the same time, all trace data is nicely visualized in Jaeger :). For convenience, a builder-pattern constructor for the zap logger, compliant with the Logger interface, is provided through the ZapLogger() function and the zaplog sub-directory. In package traceyaml there are utilities for unit testing the traces. In package filetest there are utilities for using "golden" testdata/ files to compare the actual output of loggers, tracers, and general writers against expected output. Both the TracerProviderBuilder and zaplog.Builder support deterministic output for unit tests and examples. The philosophy behind this package is that instrumentable code (functions, structs, and so on) should use the TracerBuilder to start spans, and will from there get a Span and Logger implementation to use. It is safe for libraries used by other consumers to use the TracerBuilder as well: if the user didn't want or request tracing or logging, all calls to the Span and Logger are simply discarded. The application owner who wants to (maybe conditionally) enable tracing and logging creates "backend" implementations of TracerProvider and Logger, e.g. using the TracerProviderBuilder and/or zaplog.Builder. These backends control where the telemetry data is sent and how much of it is enabled. They are either attached specifically to a context or registered globally. With this setup, telemetry can even be enabled on the fly, e.g. via an HTTP endpoint, for debugging a production system. Have fun using this library and happy tracing!
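To make the flow above concrete, here is a minimal sketch of an instrumented function. The exact call shape of the TracerBuilder entry point (shown here as a Tracer().Trace(ctx, name) helper returning a context, a span, and a logger) is an assumption made purely for illustration, and the import path of the tracing package itself is omitted for the same reason; the span operations mirror the span.Set{Name,Attributes,Status} and error-recording calls mentioned above.

```go
package example

import (
	"context"
	"fmt"

	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/codes"
)

// ProcessOrder shows the instrumentation pattern; "tracing" stands in for
// this package, and its builder API is assumed rather than quoted.
func ProcessOrder(ctx context.Context, orderID string) (err error) {
	// Start a span in the given context; if ctx already carries a parent
	// span, the new span becomes its child. The context may also carry the
	// TracerProvider, Logger and LogLevelIncreaser backends.
	ctx, span, log := tracing.Tracer().Trace(ctx, "ProcessOrder") // assumed call shape
	// Spans must always be ended; defer right after creation.
	defer span.End()

	// Attributes registered with the span are also logged automatically.
	span.SetAttributes(attribute.String("order.id", orderID))

	if orderID == "" {
		err = fmt.Errorf("empty order id")
		// Errors recorded on the span show up both in Jaeger and in the logs.
		span.RecordError(err)
		span.SetStatus(codes.Error, "validation failed")
		return err
	}
	log.Info("order processed")
	return nil
}
```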
Package geobin.io runs a web server that creates a geobin URL which can receive geo data via POSTs and visualizes that data on a map.
An HTTP client for interacting with the Kubecost Allocation API. For documentation on the Go standard library net/http package, see the following: For documentation on the Kubecost Allocation API, see the following: Package main is a generated GoMock package. Application configuration. For documentation on Viper, see the following: An HTTP server for exposing cost allocation metrics retrieved from Kubecost. Metrics are exposed via an HTTP metrics endpoint. Applications that provide a Prometheus OpenMetrics integration can gather cost allocation metrics from this endpoint to store and visualize the data. Generate Prometheus metrics from configuration. For documentation on the Go client library for Prometheus, see the following: Utility functions.
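As a rough sketch of the metrics-endpoint side, the following uses the Go client library for Prometheus to register a gauge and serve it over HTTP. The metric name, label, and listen address are invented for illustration and are not names defined by this project; in the real exporter the values would be refreshed from the Kubecost Allocation API via net/http.

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
	// A gauge vector holding per-namespace CPU cost, as one example of a
	// cost allocation metric retrieved from Kubecost.
	cpuCost := prometheus.NewGaugeVec(prometheus.GaugeOpts{
		Name: "kubecost_allocation_cpu_cost", // hypothetical metric name
		Help: "CPU cost per namespace as reported by Kubecost.",
	}, []string{"namespace"})
	prometheus.MustRegister(cpuCost)

	// Placeholder value; the exporter would set this from API responses.
	cpuCost.WithLabelValues("default").Set(1.23)

	// Expose all registered metrics on the HTTP metrics endpoint.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```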
Package vision is a repository containing visual processing packages in Go (golang), focused mainly on providing efficient V1 (primary visual cortex) level filtering of images, with the output then suitable as input for neural networks. Two main types of filters are supported: * **Gabor** filters simulate V1 simple-cell responses in terms of an oriented sine wave times a gaussian envelope that localizes the filter in space. This produces an edge detector that detects oriented contrast transitions between light and dark. In general, the main principle of primary visual filtering is to focus on spatial (and temporal) changes, while filtering out static, uniform areas. * **DoG** (difference of gaussian) filters simulate retinal On-center vs. Off-center contrast coding cells -- unlike Gabor filters, these do not have orientation tuning. Mathematically, they are a difference between a narrow (center) vs. wide (surround) gaussian, of opposite signs, balanced so that a uniform input generates offsetting values that sum to zero. In the visual system, orientation tuning is constructed from aligned DoG-like inputs, but it is more efficient to just use the Gabor filters directly. However, DoG filters capture the "blob" cells that encode color contrasts. The `vfilter` package contains general-purpose filtering code that applies (convolves) any given filter with a visual input. It also supports converting an `image.Image` into a `tensor.Float32` tensor which is the main data type used in this framework. It also supports max-pooling for efficiently reducing the dimensionality of inputs. The `kwta` package provides an implementation of the feedforward and feedback (FFFB) inhibition dynamics (and noisy X-over-X-plus-1 activation function) from the `Leabra` algorithm to produce a k-Winners-Take-All processing of visual filter outputs -- this increases the contrast and simplifies the representations, and is a good model of the dynamics in primary visual cortex.
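Since the DoG description above is purely mathematical, here is a small self-contained sketch of that math in Go (it is not part of the vision package's API): a narrow center gaussian minus a wide surround gaussian, each normalized first so that the resulting kernel sums to zero and therefore gives no response to uniform input.

```go
package main

import (
	"fmt"
	"math"
)

// dogKernel returns a (2*radius+1)x(2*radius+1) difference-of-gaussian kernel.
func dogKernel(radius int, sigmaCenter, sigmaSurround float64) [][]float64 {
	size := 2*radius + 1
	center := make([][]float64, size)
	surround := make([][]float64, size)
	k := make([][]float64, size)
	var sumC, sumS float64
	for y := 0; y < size; y++ {
		center[y] = make([]float64, size)
		surround[y] = make([]float64, size)
		k[y] = make([]float64, size)
		for x := 0; x < size; x++ {
			dx, dy := float64(x-radius), float64(y-radius)
			d2 := dx*dx + dy*dy
			center[y][x] = math.Exp(-d2 / (2 * sigmaCenter * sigmaCenter))
			surround[y][x] = math.Exp(-d2 / (2 * sigmaSurround * sigmaSurround))
			sumC += center[y][x]
			sumS += surround[y][x]
		}
	}
	// Normalize each gaussian to sum to 1 before subtracting, so the
	// kernel sums to zero: a uniform input produces offsetting values.
	for y := 0; y < size; y++ {
		for x := 0; x < size; x++ {
			k[y][x] = center[y][x]/sumC - surround[y][x]/sumS
		}
	}
	return k
}

func main() {
	k := dogKernel(3, 1.0, 2.0)
	fmt.Printf("center weight: %+.4f\n", k[3][3]) // positive center, negative surround
}
```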
Package mdtopdf implements a PDF document generator for markdown documents. This package depends on two other packages: * The BlackFriday v2 parser to read the markdown source * The gofpdf package to generate the PDF The tests included here are from the BlackFriday package; see the "testdata" folder. The tests create PDF files, so even if they complete without errors, visual inspection of the created PDF is the only way to determine if the tests *really* pass! The tests also create log files that trace the BlackFriday parser callbacks; this is a valuable debugging tool showing each callback and the data provided to it as the AST is walked. To install the package: In the cmd folder is an example using the package. It demonstrates a number of features. The test PDF was created with this command: Package mdtopdf converts markdown to PDF.
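For orientation, a brief sketch of driving the package from Go code is shown below. The constructor arguments and import path reflect one typical version of the package and may differ from the one you install, so treat this as an outline rather than the definitive API.

```go
package main

import (
	"log"
	"os"

	"github.com/mandolyte/mdtopdf"
)

func main() {
	content, err := os.ReadFile("input.md")
	if err != nil {
		log.Fatal(err)
	}
	// Empty orientation/paper size select the defaults; the last argument is
	// the trace log file recording the BlackFriday parser callbacks.
	// NOTE: argument list may vary between package versions.
	renderer := mdtopdf.NewPdfRenderer("", "", "output.pdf", "trace.log")
	if err := renderer.Process(content); err != nil {
		log.Fatal(err)
	}
}
```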
Package chart implements common chart/plot types. The following chart types are available: Chart tries to provide useful defaults and produce nice charts without sacrificing accuracy. The generated charts look good and are highly customizable, but will not match the visual quality of handmade Photoshop charts or the statistical features of charts produced by S or R. Creating charts consists of the following steps: You may change the configuration at any step or render to different outputs. The different chart types and their fields are all simple struct types where the zero value provides suitable defaults. All fields are exported, even if you are not supposed to manipulate them directly or they are 'output fields'. E.g. the common Data field of all chart types will store the sample data added with one or more Add... methods. Some fields are pure output fields that expose internals for your use, like the Data2Screen and Screen2Data functions of the Ranges. Some fields are even input/output fields: e.g. you may set Range.TicSetting.Delta to some positive value, which will be used as the spacing between tics on that axis; on the other hand, if you leave Range.TicSetting.Delta at its default 0, you indicate to the plotting routine that it should automatically determine the tic delta, which is then reported back in this field. All charts (except pie/ring charts) contain at least one axis, represented by a field of type Range. Axes can be differentiated into the following categories: How the axis is autoscaled can be controlled for both ends of the axis individually by MinMode and MaxMode, which allow fine control of the (auto-)scaling. After setting up the chart and adding data, samples, or functions, you can render the chart to a Graphics output. This process will set several internal fields of the chart. If you reuse the chart, add additional data and output it again, these fields might no longer indicate 'automatic/default' but contain the value calculated in the first output round.
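As a rough illustration of those steps (create a chart, add data, optionally pin an input/output field such as Range.TicSetting.Delta, then render to a Graphics output), here is a hedged sketch. The concrete names ScatterChart, AddDataPair and PlotStylePoints are recalled from typical usage of this package and should be checked against the current API.

```go
package main

import "github.com/vdobler/chart"

func main() {
	c := chart.ScatterChart{Title: "Response time"}

	// Add sample data; it is stored in the chart's Data field.
	x := []float64{1, 2, 3, 4, 5}
	y := []float64{10, 12, 9, 15, 11}
	c.AddDataPair("latency", x, y, chart.PlotStylePoints, chart.Style{})

	// Input/output field: a positive Delta fixes the tic spacing on the
	// x-axis; leaving it at 0 lets the plotting routine pick a value and
	// report it back into this same field after rendering.
	c.XRange.TicSetting.Delta = 1

	// Rendering happens against a Graphics backend (image, SVG, text, ...),
	// e.g. c.Plot(g) for some chart.Graphics g; omitted here.
}
```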
Package graph is a library for creating generic graph data structures and modifying, analyzing, and visualizing them. A graph consists of vertices of type T, which are identified by a hash value of type K. The hash value for a given vertex is obtained using the hashing function passed to New. A hashing function takes a T and returns a K. For primitive types like integers, you may use a predefined hashing function such as IntHash – a function that takes an integer and uses that integer as the hash value at the same time: For storing custom data types, you need to provide your own hashing function. This example takes a City instance and returns its name as the hash value: Creating a graph using this hashing function will yield a graph of vertices of type City identified by hash values of type string. Adding vertices to a graph of integers is simple. graph.Graph.AddVertex takes a vertex and adds it to the graph. Most functions accept and return only hash values instead of entire instances of the vertex type T. For example, graph.Graph.AddEdge creates an edge between two vertices and accepts the hash values of those vertices. Because this graph uses the IntHash hashing function, the vertex values and hash values are the same. All operations that modify the graph itself are methods of Graph. All other operations are top-level functions of this library. For detailed usage examples, take a look at the README.
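The two patterns described above, a graph of integers keyed by IntHash and a graph of a custom City type keyed by its name, look roughly like this compact sketch (error handling elided):

```go
package main

import "github.com/dominikbraun/graph"

type City struct {
	Name string
}

func main() {
	// Integer vertices: hash values and vertex values are identical.
	g := graph.New(graph.IntHash)
	_ = g.AddVertex(1)
	_ = g.AddVertex(2)
	_ = g.AddEdge(1, 2) // AddEdge accepts the hash values of both vertices.

	// Custom vertex type: provide your own hashing function.
	cityHash := func(c City) string { return c.Name }
	cities := graph.New(cityHash)
	_ = cities.AddVertex(City{Name: "London"})
	_ = cities.AddVertex(City{Name: "Munich"})
	_ = cities.AddEdge("London", "Munich") // edges are addressed by hash, not by City value
}
```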
Package mdtopdf implements a PDF document generator for markdown documents. This package depends on two other packages: * The BlackFriday v2 parser to read the markdown source * The fpdf package to generate the PDF The tests included here are from the BlackFriday package; see the "testdata" folder. The tests create PDF files, so even if they complete without errors, visual inspection of the created PDF is the only way to determine if the tests *really* pass! The tests also create log files that trace the BlackFriday parser callbacks; this is a valuable debugging tool showing each callback and the data provided to it as the AST is walked. To install the package: In the cmd folder is an example using the package. It demonstrates a number of features. The test PDF was created with this command: Package mdtopdf converts markdown to PDF.
inertia is a Go package for real-time estimation of a power system's inertia levels. It defines software interfaces for ingesting and reporting data in real time. Unit commitment ("H-constant")-based estimation logic and data interfaces are available in the inertia/uc package. PMU-based estimation methods are planned as future work. System integrators can provide deployment-specific data ingestion code (e.g., developed for use with a specific EMS or historian system) that conforms to the stated data interfaces for the desired estimation method. Once these input interfaces are implemented, ingested data can be automatically processed and reported out via the package's real-time visualization framework. This package provides two off-the-shelf visualization modules in inertia/sink/text and inertia/sink/web, but custom implementations of the [Visualizer] interface can also be used. Multiple Visualizers can be associated with a single real-time data stream, allowing reporting to multiple outputs at the same time, for example logging to a text file while also visualizing results in a web browser.
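Purely as an illustration of the "multiple Visualizers per data stream" idea, here is a hypothetical custom sink. The method name and signature below are invented for this sketch; the real [Visualizer] interface defines its own method set, which should be consulted instead.

```go
package inertiaexample

import "fmt"

// textLineVisualizer is a made-up example type standing in for a custom
// implementation of the package's Visualizer interface.
type textLineVisualizer struct{}

// Visualize is an assumed method name used only for illustration; it simply
// prints the latest inertia estimate.
func (textLineVisualizer) Visualize(estimateMWs float64) {
	fmt.Printf("system inertia estimate: %.1f MW·s\n", estimateMWs)
}

// In the real package, such a visualizer would be registered on the same
// real-time data stream as, for example, the inertia/sink/text and
// inertia/sink/web modules, so all of them report the same estimates.
```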