Package timestamp implements the Time-Stamp Protocol (TSP) as specified in RFC 3161 (Internet X.509 Public Key Infrastructure Time-Stamp Protocol (TSP)).
Package amp provides the API client, operations, and parameter types for Amazon Prometheus Service. Amazon Managed Service for Prometheus is a serverless, Prometheus-compatible monitoring service for container metrics that makes it easier to securely monitor container environments at scale. With Amazon Managed Service for Prometheus, you can use the same open-source Prometheus data model and query language that you use today to monitor the performance of your containerized workloads, and also enjoy improved scalability, availability, and security without having to manage the underlying infrastructure. For more information about Amazon Managed Service for Prometheus, see the Amazon Managed Service for Prometheus User Guide. Amazon Managed Service for Prometheus includes two APIs. Use the Amazon Web Services API described in this guide to manage Amazon Managed Service for Prometheus resources, such as workspaces, rule groups, and alert managers. Use the Prometheus-compatible API to work within your Prometheus workspace.
JSONenums is a tool to automate the creation of methods that satisfy the fmt.Stringer, json.Marshaler, and json.Unmarshaler interfaces. Given the name of a (signed or unsigned) integer type T that has constants defined, jsonenums will create a new self-contained Go source file implementing those interfaces. The file is created in the same package and directory as the package that defines T. It has helpful defaults designed for use with go generate. JSONenums is a simple implementation of a concept and the code might not be the most performant or beautiful to read. For example, given a snippet defining a Pill type with named constants, running the jsonenums command in the same directory will create the file pill_jsonenums.go, in package painkiller, containing definitions of those methods. The String method will translate the value of a Pill constant to the string representation of the respective constant name, so that the call fmt.Print(painkiller.Aspirin) will print the string "Aspirin". Typically this process would be run using go generate. If multiple constants have the same value, the lexically first matching name will be used (in the example, Acetaminophen will print as "Paracetamol"). With no arguments, it processes the package in the current directory. Otherwise, the arguments must name a single directory holding a Go package or a set of Go source files that represent a single Go package. The -type flag accepts a comma-separated list of types so a single run can generate methods for multiple types. The default output file is t_jsonenums.go, where t is the lower-cased name of the first type listed. The suffix can be overridden with the -suffix flag and a prefix may be added with the -prefix flag.
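The original example snippet is elided from this summary. The following is a hedged reconstruction using only the names that appear in the prose (Pill, Aspirin, Paracetamol, Acetaminophen); the exact original constant set is not shown, the go:generate directive's form is an assumption based on the -type flag described above, and the code is rendered as package main (rather than painkiller) purely so it runs standalone:

```go
package main

import "fmt"

// Pill mirrors the example type from the prose; the original snippet itself is
// elided, so this constant set is a sketch, not the original.
type Pill int

//go:generate jsonenums -type=Pill
const (
	Aspirin Pill = iota
	Paracetamol
	Acetaminophen = Paracetamol // same value: prints as "Paracetamol" after generation
)

func main() {
	// Before running jsonenums, Pill has no String method, so %v falls
	// back to the underlying integer value.
	fmt.Printf("%v\n", Aspirin)
	fmt.Println(Acetaminophen == Paracetamol)
}
```

After generation, the same fmt calls would print the constant names instead of integers.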
Sample database-sql demonstrates connecting to a Cloud SQL instance. The application is a Go version of the "Tabs vs Spaces" web app presented at Google Cloud Next 2019, as seen in this video: https://www.youtube.com/watch?v=qVgzP3PsXFw&t=1833s
Package consistent provides a consistent probability-based sampler.
Golang Gonic/Gin starter project, forked from RealWorld https://realworld.io . This project implements CRUD for objects and their relationships, showing how to write a golang/gin app that is small yet perfectly formed.
Package unique provides primitives for sorting slices and removing repeated elements.
Package main starts the example server.
Package example exists as an empty Go module containing all Vecty example dependencies. This keeps example dependencies out of downstream go.mod files for users who are importing Vecty alone.
Package dynsampler contains several sampling algorithms to help you select a representative set of events instead of a full stream. This package is intended to help sample a stream of tracking events, where events are typically created in response to a stream of traffic (for the purposes of logging or debugging). In general, sampling is used to reduce the total volume of events necessary to represent the stream of traffic in a meaningful way. For the purposes of these examples, the "traffic" will be a set of HTTP requests being handled by a server, and an "event" will be a blob of metadata about a given HTTP request that might be useful to keep track of later. A "sample rate" of 100 means that for every 100 requests, we capture a single event and indicate that it represents 100 similar requests. Use the `Sampler` interface in your code. Each sampling algorithm implements the Sampler interface. The following guidelines can help you choose a sampler. Depending on the shape of your traffic, one may serve better than another, or you may need to write a new one! Please consider contributing it back to this package if you do. * If your system has a completely homogeneous stream of requests: use `Static` to use a constant sample rate. * If your system has a steady stream of requests and a well-known, low-cardinality partition key (e.g. http status): use `Static` and override sample rates on a per-key basis (e.g. if you want to sample `HTTP 200/OK` events at a different rate from `HTTP 503/Server Error`). * If your logging system has a strict cap on the rate it can receive events, use `TotalThroughput`, which will calculate sample rates based on keeping *the entire system's* representative event throughput right around (or under) a particular cap.
* If your system has a rough cap on the rate it can receive events and your partitioned keyspace is fairly steady, use `PerKeyThroughput`, which will calculate sample rates based on keeping the event throughput roughly constant *per key/partition* (e.g. per user id). * The best choice for a system with a large key space and a large disparity between the highest-volume and lowest-volume keys is `AvgSampleRateWithMin` - it will increase the sample rate of higher-volume traffic proportionally to the logarithm of the specific key's volume. If total traffic falls below a configured minimum, it stops sampling entirely, to avoid sampling when the traffic is too low to warrant it. * `EMASampleRate` works like `AvgSampleRate`, but calculates sample rates based on a moving average (Exponential Moving Average) of many measurement intervals rather than a single isolated interval. In addition, it can detect large bursts in traffic and will trigger a recalculation of sample rates before the regular interval. Each sampler implementation has additional configuration parameters and a detailed description of how it chooses a sample rate. Some implementations implement `SaveState` and `LoadState` - enabling you to serialize the Sampler's internal state and load it back. This is useful, for example, if you want to avoid losing calculated sample rates between process restarts.
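The `Static` strategy with per-key overrides described above can be sketched as follows. This is a minimal, hypothetical illustration: the package's real `Sampler` interface has more methods and configuration than shown here, and `staticSampler` and its field names are assumptions, not this package's API.

```go
package main

import "fmt"

// Sampler decides, per key, how many similar events one kept event represents.
// A hypothetical one-method slice of the interface described in the docs above.
type Sampler interface {
	GetSampleRate(key string) int
}

// staticSampler returns a fixed default rate with optional per-key overrides,
// mirroring the `Static` strategy (e.g. sample HTTP 200s and 503s differently).
type staticSampler struct {
	defaultRate int
	overrides   map[string]int
}

func (s staticSampler) GetSampleRate(key string) int {
	if r, ok := s.overrides[key]; ok {
		return r
	}
	return s.defaultRate
}

func main() {
	var s Sampler = staticSampler{
		defaultRate: 100,                        // keep 1 in 100 events by default
		overrides:   map[string]int{"503": 1},   // keep every server error
	}
	fmt.Println(s.GetSampleRate("200")) // 100
	fmt.Println(s.GetSampleRate("503")) // 1
}
```

A rate of 100 here means one kept event stands in for 100 similar requests, matching the "sample rate" definition above.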
Sample grpc-ping acts as an intermediary to the ping service.