
py-async-grpc-prometheus
Instrumentation library that provides Prometheus metrics for async gRPC clients and servers, similar to the Java (grpc-ecosystem/java-grpc-prometheus) and Go (grpc-ecosystem/go-grpc-prometheus) libraries. The library currently has metric parity with the Java and Go libraries.
pip install py-async-grpc-prometheus
Client metrics monitoring is done by intercepting the gRPC channel.
from grpc import aio
from prometheus_client import start_http_server

from py_async_grpc_prometheus.prometheus_async_client_interceptor import get_client_interceptors

# Create a channel with the Prometheus client interceptors attached.
channel = aio.insecure_channel("server:6565",
                               interceptors=get_client_interceptors())

# Start an HTTP endpoint to expose the metrics.
metrics_port = 9090  # any free port
start_http_server(metrics_port)
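For context, a minimal end-to-end client sketch is shown below. The ports, the channel_ready() wait, and the commented-out Greeter stub calls are illustrative assumptions, not part of this library:

import asyncio

from grpc import aio
from prometheus_client import start_http_server

from py_async_grpc_prometheus.prometheus_async_client_interceptor import get_client_interceptors


async def main() -> None:
    # Expose client-side metrics; the port is an arbitrary choice.
    start_http_server(9090)
    async with aio.insecure_channel(
        "server:6565", interceptors=get_client_interceptors()
    ) as channel:
        # Call your service through generated stubs here, e.g. (hypothetical):
        # stub = helloworld_pb2_grpc.GreeterStub(channel)
        # await stub.SayHello(helloworld_pb2.HelloRequest(name="you"))
        await channel.channel_ready()  # wait until the channel is connected


if __name__ == "__main__":
    asyncio.run(main())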
Take a look at tests/integration/hello_world/hello_world_client.py for the complete client example.

Server metrics are exposed by adding the interceptor when the gRPC server is started.
from concurrent import futures

from grpc import aio
from prometheus_client import start_http_server

from py_async_grpc_prometheus.prometheus_async_server_interceptor import PromAsyncServerInterceptor
Start the gRPC server with the interceptor; take a look at tests/integration/hello_world/hello_world_server.py for the complete example.
server = aio.server(futures.ThreadPoolExecutor(max_workers=10),
                    interceptors=(
                        PromAsyncServerInterceptor(),
                    ))

# Start an HTTP endpoint to expose the metrics.
metrics_port = 8000  # any free port
start_http_server(metrics_port)
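For completeness, here is a minimal sketch of a full asyncio entry point around the snippet above; the bind address, ports, and the commented-out servicer registration are illustrative assumptions:

import asyncio
from concurrent import futures

from grpc import aio
from prometheus_client import start_http_server

from py_async_grpc_prometheus.prometheus_async_server_interceptor import PromAsyncServerInterceptor


async def serve() -> None:
    server = aio.server(futures.ThreadPoolExecutor(max_workers=10),
                        interceptors=(PromAsyncServerInterceptor(),))
    # Register your generated servicers here, e.g. (hypothetical):
    # helloworld_pb2_grpc.add_GreeterServicer_to_server(Greeter(), server)
    server.add_insecure_port("[::]:6565")
    start_http_server(8000)  # metrics endpoint; port is an arbitrary choice
    await server.start()
    await server.wait_for_termination()


if __name__ == "__main__":
    asyncio.run(serve())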
Prometheus histograms are a great way to measure the latency distribution of your RPCs. However, since it is bad practice to have metrics of high cardinality, the latency-monitoring metrics are disabled by default. To enable them, pass the following flag when initializing the interceptor:
server = aio.server(futures.ThreadPoolExecutor(max_workers=10),
                    interceptors=(
                        PromAsyncServerInterceptor(enable_handling_time_histogram=True),
                    ))
After the call completes, its handling time will be recorded in a Prometheus histogram variable grpc_server_handling_seconds. The histogram variable contains three sub-metrics:

- grpc_server_handling_seconds_count - the count of all completed RPCs by status and method
- grpc_server_handling_seconds_sum - the cumulative time of RPCs by status and method, useful for calculating average handling times
- grpc_server_handling_seconds_bucket - the counts of RPCs by status and method in the respective handling-time buckets; these buckets can be used by Prometheus to estimate SLAs (see the Prometheus documentation on histograms)

Metric names have been updated to be in line with those from https://github.com/grpc-ecosystem/go-grpc-prometheus.
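To make the three sub-metrics concrete, the short standalone sketch below records one observation with prometheus_client (the same client library used for the metrics endpoint) and prints the resulting series; the label names and sample values are illustrative assumptions:

from prometheus_client import CollectorRegistry, Histogram, generate_latest

registry = CollectorRegistry()
handling_seconds = Histogram(
    "grpc_server_handling_seconds",
    "Histogram of RPC handling latency (seconds).",
    ["grpc_service", "grpc_method", "grpc_code"],  # label set is an assumption
    registry=registry,
)
handling_seconds.labels("helloworld.Greeter", "SayHello", "OK").observe(0.004)

# Prints the grpc_server_handling_seconds_bucket{le="..."}, ..._sum and
# ..._count samples, i.e. the three sub-metrics described above.
print(generate_latest(registry).decode())

# Typical PromQL over these series (as comments, for reference):
#   average handling time:
#     rate(grpc_server_handling_seconds_sum[5m])
#       / rate(grpc_server_handling_seconds_count[5m])
#   99th percentile:
#     histogram_quantile(0.99,
#       sum by (le) (rate(grpc_server_handling_seconds_bucket[5m])))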
The legacy metrics are:
To keep these legacy metrics available for backwards compatibility, the legacy flag can be set to True when initialising the server/client interceptors. For example, to enable the server-side legacy metrics:
server = aio.server(futures.ThreadPoolExecutor(max_workers=10),
                    interceptors=(
                        PromAsyncServerInterceptor(legacy=True),
                    ))
To set up a development environment and run the tests:

make initialize-development
make test