Rate Limiting for Python Applications
The aperture-py SDK provides an easy way to integrate your Python applications with FluxNinja Aperture. It enables flow control for fine-grained features inside service code. Refer to the documentation for more details.
Usage
Install SDK
Run the command below to install the SDK:
pip install aperture-py
Create Aperture Client
The next step is to create an Aperture Client instance, which requires the address of the organization created in Aperture Cloud and an API key. You can find both of these details by clicking the Aperture tab in the sidebar menu of Aperture Cloud.
import os

from aperture_sdk.client import ApertureClient, FlowParams

# Placeholder: replace with your organization address from Aperture Cloud
# (or the address of a local Aperture Agent).
default_agent_address = "ORGANIZATION.app.fluxninja.com:443"

agent_address = os.getenv("APERTURE_AGENT_ADDRESS", default_agent_address)
api_key = os.getenv("APERTURE_API_KEY", "")
insecure = os.getenv("APERTURE_AGENT_INSECURE", "true").lower() == "true"

aperture_client = ApertureClient.new_client(
    address=agent_address, insecure=insecure, api_key=api_key
)
Flow Functionality
The created instance can then be used to start a flow:
labels = {
    "user_id": "some_user_id",
    "user_tier": "gold",
    "priority": "100",
}
flow_params = FlowParams(
    check_timeout=timedelta(seconds=200),
    explicit_labels=labels,
)
flow = await aperture_client.start_flow(
    control_point="AwesomeFeature",
    params=flow_params,
)
# Check whether the flow check call itself succeeded.
if not flow.success:
    logger.info("Flow check failed - will fail-open")
if flow.should_run():
    # do actual work
    pass
else:
    # handle flow rejection by Aperture Agent
    flow.set_status(FlowStatus.Error)
res = await flow.end()
if res.get_error():
    logger.error("Error: {}".format(res.get_error()))
elif res.get_flow_end_response():
    logger.info("Flow End Response: {}".format(res.get_flow_end_response()))
await asyncio.sleep(2)
return "", 202
The above code snippet makes a start_flow call to Aperture. For this call, it is important to specify the control point (AwesomeFeature in the example) and FlowParams that align with the policy created in Aperture Cloud.
For request prioritization use cases, it is important to set a higher gRPC deadline. This parameter specifies the maximum duration a request can remain in the queue. For each flow that is started, a should_run decision is made, determining whether to allow the request into the system or to rate limit it. It is important to make the end call after processing each request, to send telemetry data that provides granular visibility into each flow.
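To tie these points together, below is a minimal sketch of a request handler that sets a longer check_timeout (the gRPC deadline) so a prioritized request can wait in the queue, runs the work only when should_run allows it, and always calls end so telemetry is sent. The handle_request and do_work names and the 300-second timeout are illustrative assumptions; only FlowParams, start_flow, should_run, set_status, and end come from the SDK usage shown above.
from datetime import timedelta

from aperture_sdk.client import FlowParams


# Minimal sketch; "handle_request", "do_work", and the 300 s timeout are
# illustrative assumptions, not part of the SDK.
async def handle_request(aperture_client, do_work):
    flow_params = FlowParams(
        # gRPC deadline: maximum time this request may wait in the queue.
        check_timeout=timedelta(seconds=300),
        explicit_labels={"user_tier": "gold", "priority": "100"},
    )
    flow = await aperture_client.start_flow(
        control_point="AwesomeFeature",
        params=flow_params,
    )
    result = None
    try:
        if flow.should_run():
            # Request admitted by Aperture: do the actual work.
            result = await do_work()
        else:
            # Request rate limited: skip the work (the snippet above also
            # sets an error status on the flow in this branch).
            pass
    finally:
        # Always end the flow so telemetry is reported for it.
        await flow.end()
    return result
Wrapping the end call in try/finally ensures the flow is closed and telemetry is sent even if the work raises an exception.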