Judoscale
This is the official Python adapter for Judoscale. You can use Judoscale without it, but this adapter gives you request queue time metrics and job queue time metrics (for supported job processors).
It is recommended to install support for your specific web framework and/or background job library as "extras" to the judoscale PyPI package. This ensures that the check for whether your installed web framework and/or background task processing library is supported happens at dependency resolution time.
Supported web frameworks
Supported job processors
Using Judoscale with Django
Install Judoscale for Django with:
$ pip install 'judoscale[django]'
Add the Judoscale app to settings.py:
INSTALLED_APPS = [
    "judoscale.django",
    # ... your other apps
]
This sets up the Judoscale middleware to capture request queue times.
Optionally, you can customize Judoscale in settings.py:
JUDOSCALE = {
    "LOG_LEVEL": "DEBUG",
}
Once deployed, you will see your request queue time metrics available in the Judoscale UI.
Using Judoscale with Flask
Install Judoscale for Flask with:
$ pip install 'judoscale[flask]'
The Flask support for Judoscale is packaged into a Flask extension. Import the extension class and use it as you normally would in a Flask application:
from judoscale.flask import Judoscale

app = Flask("MyFlaskApp")
app.config.from_object('...')
judoscale = Judoscale(app)

Alternatively, if you use the application factory pattern, create the extension separately and bind it inside the factory:

judoscale = Judoscale()

def create_app():
    app = Flask("MyFlaskApp")
    app.config.from_object('...')
    judoscale.init_app(app)
    return app
This sets up the Judoscale extension to capture request queue times.
Optionally, you can override Judoscale's own configuration via your application's configuration dictionary. The Judoscale Flask extension looks for a top-level "JUDOSCALE" key in app.config, which should be a dictionary; the extension uses it to configure itself as soon as judoscale.init_app() is called.
JUDOSCALE = {
    "LOG_LEVEL": "DEBUG",
}
Note the official recommendations for configuring Flask.
Using Judoscale with FastAPI
Install Judoscale for FastAPI with:
$ pip install 'judoscale[asgi]'
Since FastAPI uses Starlette, an ASGI framework, the integration is packaged into ASGI middleware. Import the middleware class and register it with your FastAPI app:
from fastapi import FastAPI
from judoscale.asgi.middleware import FastAPIRequestQueueTimeMiddleware

app = FastAPI()
app.add_middleware(FastAPIRequestQueueTimeMiddleware)

Or, if you use an application factory:

def create_app():
    app = FastAPI()
    app.add_middleware(FastAPIRequestQueueTimeMiddleware)
    return app
This sets up the Judoscale extension to capture request queue times.
Optionally, you can override Judoscale's configuration by passing extra configuration to the add_middleware method:
app.add_middleware(FastAPIRequestQueueTimeMiddleware, extra_config={"LOG_LEVEL": "DEBUG"})
Other ASGI frameworks
Judoscale also provides middleware classes for Starlette and Quart. You can use them with:
from judoscale.asgi.middleware import StarletteRequestQueueTimeMiddleware
from judoscale.asgi.middleware import QuartRequestQueueTimeMiddleware
If your app uses a framework for which we have not provided a middleware class, but it implements the ASGI spec, you can easily create your own version of the request queue time middleware.
from judoscale.asgi.middleware import RequestQueueTimeMiddleware
class YourFrameworkRequestQueueTimeMiddleware(RequestQueueTimeMiddleware):
    platform = "your_framework"
Then register YourFrameworkRequestQueueTimeMiddleware with your application like you normally would.
Using Judoscale with Celery and Redis
Install Judoscale for Celery with:
$ pip install 'judoscale[celery-redis]'
:warning: NOTE 1: The Judoscale Celery integration currently only works with the Redis broker. The minimum supported Redis server version is 6.0.
:warning: NOTE 2: Using task priorities is currently not supported by judoscale. You can still use task priorities, but judoscale won't see or report metrics on any queues other than the default, unprioritised queue.
Judoscale can automatically scale the number of Celery workers based on the queue latency (the age of the oldest pending task in the queue).
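Conceptually, the latency calculation can be sketched like this (an illustration only, not Judoscale's actual implementation):

```python
import time

# Illustrative sketch: queue latency is the age in seconds of the
# oldest pending task in the queue.
def queue_latency(oldest_enqueued_at, now=None):
    """Return how long, in seconds, the oldest pending task has waited."""
    if now is None:
        now = time.time()
    return max(0.0, now - oldest_enqueued_at)
```

A sustained or growing latency means the workers cannot keep up, which is the signal Judoscale uses to scale up.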
Setting up the integration
To use the Celery integration, import judoscale_celery and call it with the Celery app instance. judoscale_celery should be called after you have set up and configured the Celery instance.
from celery import Celery
from judoscale.celery import judoscale_celery
celery_app = Celery(broker="redis://localhost:6379/0")
judoscale_celery(celery_app)
This sets up Judoscale to periodically calculate and report queue latency for each Celery queue.
If you need to change the Judoscale integration configuration, you can pass a dictionary of Judoscale configuration options to judoscale_celery to override the default Judoscale config variables:
judoscale_celery(celery_app, extra_config={"LOG_LEVEL": "DEBUG"})
An example configuration dictionary accepted by extra_config:
{
    "LOG_LEVEL": "INFO",
    "CELERY": {
        "ENABLED": True,
        "MAX_QUEUES": 20,
        "QUEUES": [],
        "TRACK_BUSY_JOBS": False,
    }
}
:warning: NOTE: Calling judoscale_celery turns on sending task-sent events. This is required for the Celery integration with Judoscale to work.
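For reference, this corresponds to Celery's task_send_sent_event setting; judoscale_celery enables the equivalent behaviour on the app it is given, so you do not need to set it yourself (shown here as an illustrative config fragment):

```python
# celeryconfig.py -- illustrative only; judoscale_celery already
# enables the equivalent of this setting for you.
task_send_sent_event = True
```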
Judoscale with Celery and Flask
Depending on how you've structured your Flask app, you should call judoscale_celery after your application has finished configuring the Celery app instance. If you have followed the Flask guide in the Flask documentation, the easiest place to initialise the Judoscale integration is in the application factory:
def create_app():
    app = Flask(__name__)
    app.config.from_object(...)
    celery_app = celery_init_app(app)
    judoscale_celery(celery_app, extra_config=app.config["JUDOSCALE"])
    return app
Judoscale with Celery and Django
If you have followed the Django guide in the Celery documentation, you should have a module where you initialise the Celery app instance, auto-discover tasks, etc. You should call judoscale_celery after you have configured the Celery app instance:
from celery import Celery
from django.conf import settings
from judoscale.celery import judoscale_celery
app = Celery()
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()
judoscale_celery(app, extra_config=settings.JUDOSCALE)
Using Judoscale with RQ
Install Judoscale for RQ with:
$ pip install 'judoscale[rq]'
Judoscale can automatically scale the number of RQ workers based on the queue latency (the age of the oldest pending task in the queue).
Setting up the integration
To use the RQ integration, import judoscale_rq and call it with an instance of Redis pointing to the same Redis database that RQ uses.
from redis import Redis
from judoscale.rq import judoscale_rq
redis = Redis(...)
judoscale_rq(redis)
This sets up Judoscale to periodically calculate and report queue latency for each RQ queue.
If you need to change the Judoscale integration configuration, you can pass a dictionary of Judoscale configuration options to judoscale_rq to override the default Judoscale config variables:
judoscale_rq(redis, extra_config={"LOG_LEVEL": "DEBUG"})
An example configuration dictionary accepted by extra_config:
{
    "LOG_LEVEL": "INFO",
    "RQ": {
        "ENABLED": True,
        "MAX_QUEUES": 20,
        "QUEUES": [],
        "TRACK_BUSY_JOBS": False,
    }
}
Judoscale with RQ and Flask
The recommended way to initialise Judoscale for RQ is in the application factory:
judoscale = Judoscale()

def create_app():
    app = Flask(__name__)
    app.config.from_object("...")
    app.redis = Redis()
    judoscale.init_app(app)
    judoscale_rq(app.redis)
    return app
Then, in your worker script, make sure that you create an app, which will initialise the Judoscale integration with RQ. Although not required, it's useful to run the worker within the Flask app context. If you have followed the RQ on Heroku pattern for setting up your RQ workers on Heroku, your worker script should look something like this:
from rq.worker import HerokuWorker as Worker
app = create_app()
worker = Worker(..., connection=app.redis)
with app.app_context():
    worker.work()
See the run-worker.py script for reference.
Judoscale with RQ and Django
The Judoscale integration for RQ is packaged into a separate Django app.
You should already have judoscale.django in your INSTALLED_APPS. Next, add the RQ integration app judoscale.rq:
INSTALLED_APPS = [
    "judoscale.django",
    "judoscale.rq",
]
By default, judoscale.rq will connect to the Redis instance specified by the REDIS_URL environment variable. If that is not suitable, you can supply Redis connection information in the JUDOSCALE configuration dictionary under the "REDIS" key.
Accepted formats are:
- a dictionary containing a single key "URL" pointing to a Redis server URL; or
- a dictionary of configuration options corresponding to the keyword arguments of the Redis constructor.
JUDOSCALE = {
    "REDIS": {
        "URL": os.getenv("REDISTOGO_URL"),
        "SSL_CERT_REQS": None,
    }
}

or, using individual connection options:

JUDOSCALE = {
    "REDIS": {
        "HOST": "localhost",
        "PORT": 6379,
        "DB": 0,
        "SSL_CERT_REQS": None,
    }
}
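To illustrate the second format: the uppercase keys correspond to the keyword arguments of the Redis constructor, roughly as if they were lowercased and passed through (a sketch of the idea, not the adapter's actual code):

```python
# Hypothetical helper for illustration: map the uppercase "REDIS"
# config keys onto redis.Redis(...) keyword arguments.
def to_redis_kwargs(config):
    return {key.lower(): value for key, value in config.items()}

# e.g. {"HOST": "localhost", "PORT": 6379, "DB": 0}
# maps to {"host": "localhost", "port": 6379, "db": 0},
# as if calling Redis(host="localhost", port=6379, db=0)
```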
:warning: NOTE: If you are running on Heroku and using any of the Premium plans for Heroku Data for Redis, you will have to turn off SSL certificate verification as per https://help.heroku.com/HC0F8CUS/redis-connection-issues.
If you are using Django-RQ, you can also pull configuration from RQ_QUEUES directly:
RQ_QUEUES = {
    "high_priority": {
        "HOST": "...",
        "PORT": 6379,
        "DB": 0,
    },
}

JUDOSCALE = {
    "REDIS": RQ_QUEUES["high_priority"]
}
:warning: NOTE: Django-RQ enables configuring RQ such that different queues and workers use different Redis instances. Judoscale currently only supports connecting to and monitoring queue latency on a single Redis instance.
Debugging & troubleshooting
If Judoscale is not recognizing your adapter installation or if you're not seeing expected metrics in Judoscale, you'll want to check the logging output. Here's how you'd do that on Heroku.
First, enable debug logging:
heroku config:set JUDOSCALE_LOG_LEVEL=debug
Then, tail your logs while your app initializes:
heroku logs --tail | grep Judoscale
You should see Judoscale collecting and reporting metrics every 10 seconds from every running process. If the issue is not clear from the logs, email help@judoscale.com for support. Please include any logging you've collected and any other behavior you've observed.
Development
This repo includes a sample-apps directory containing apps you can run locally. These apps use the judoscale adapter, but they override API_BASE_URL so they're not connected to the real Judoscale API. Instead, they post API requests to https://requestinspector.com so you can observe the API behavior.
See the README in a sample app for details on how to set it up and run locally.
Contributing
judoscale uses Poetry for managing dependencies and packaging the project. Head over to the installation instructions and install Poetry, if needed.
Clone the repo with:
$ git clone git@github.com:judoscale/judoscale-python.git
$ cd judoscale-python
Verify that you are on a recent version of Poetry:
$ poetry --version
Poetry (version 1.3.1)
Install dependencies with Poetry and activate the virtualenv:
$ poetry install --all-extras
$ poetry shell
Run tests with:
$ pytest