
celery-longterm-scheduler
Schedules celery tasks to run in the potentially far future, using a separate storage backend (currently only redis is supported) in combination with a cronjob.
Add ``longterm_scheduler_backend = 'redis://localhost:6739/1'`` to your celery configuration. (The storage also respects the built-in celery configuration settings ``redis_socket_timeout``, ``redis_socket_connect_timeout`` and ``redis_max_connections``.)

Configure your celery app to use the task class provided by this package::

    MYCELERY = celery.Celery(task_cls=celery_longterm_scheduler.Task)
Set up a cronjob that runs ``celery longterm_scheduler`` periodically (e.g. every 5 minutes).

To schedule a task, call ``mytask.apply_async(args, kwargs, eta=datetime)`` as normal. This returns a normal ``AsyncResult`` object, but only reading the ``.id`` is supported; any other methods or properties may fail explicitly or implicitly.
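A short, hedged example of scheduling a job this way; ``send_newsletter`` is a made-up task and the six-month delay is only illustrative::

    # Sketch: schedule a made-up task far in the future and keep only the id.
    from datetime import datetime, timedelta

    @MYCELERY.task
    def send_newsletter(edition):
        ...

    result = send_newsletter.apply_async(
        args=('2026-01',),
        eta=datetime.utcnow() + timedelta(days=180),  # due in roughly six months
    )
    task_id = result.id  # safe to read; other AsyncResult methods are not supported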
To revoke a scheduled task, call ``celery_longterm_scheduler.get_scheduler(MYCELERY).revoke('mytaskid')`` (we cannot hook into the celery built-in ``AsyncResult.revoke()``, unfortunately). ``revoke()`` returns True on success and False if the given task cannot be found in the storage backend (e.g. because it has already come due and been executed).
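A brief sketch of that revoke call; ``task_id`` is assumed to be the ``.id`` saved when the job was scheduled::

    # Sketch: revoke a job that is still waiting in the scheduler storage.
    scheduler = celery_longterm_scheduler.get_scheduler(MYCELERY)
    if scheduler.revoke(task_id):
        print('Job %s removed before it came due' % task_id)
    else:
        print('Job %s not found; it may already have been executed' % task_id)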
Instead of sending a normal job to the celery broker (with added timing information), scheduling a job this way creates a job entry in the scheduler storage backend. The cronjob then periodically checks the storage for any jobs that are due, and only then sends a normal celery job to the broker.

Why not use the celery built-in ``apply_async(eta=)``? Because you cannot ever really delete a pending job. ``AsyncResult('mytaskid').revoke()`` can only add the task ID to the statedb, where it has to stay forever so the job is recognized as revoked. For jobs that are scheduled to run in six months' time or later, this would create an unmanageable, ever-growing statedb.
Why not use celerybeat? Because it is built for periodic jobs, while we need single-shot jobs. Apart from that, there is not much to gain from the celerybeat implementation, especially since we want to use redis as storage (we already use it as broker and result backend).
celery_longterm_scheduler assumes that it talks to a dedicated redis database. It creates one entry per scheduled job using ``SET jobid job-configuration`` (the job configuration is serialized with JSON) and uses a single sorted set named ``scheduled_task_id_by_time`` that contains the job ids, scored by the unix timestamp (UTC) at which they are due.
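For illustration only, a small sketch (using redis-py directly, which the package itself does not require you to do) of how that layout could be inspected by hand::

    # Sketch: inspect the scheduler's dedicated redis database by hand.
    import json
    import time

    import redis

    r = redis.Redis.from_url('redis://localhost:6739/1')

    # Job ids whose UTC unix timestamp score says they are due now or earlier.
    for job_id in r.zrangebyscore('scheduled_task_id_by_time', 0, time.time()):
        job_configuration = json.loads(r.get(job_id))  # stored via SET jobid job-configuration
        print(job_id, job_configuration)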
The tests use `tox`_ and `py.test`_: install ``tox`` (e.g. via ``pip install tox``) and then simply run ``tox``.

For the integration tests you need to have the redis binary installed (the tests start `their own server`_).

.. _tox: http://tox.readthedocs.io/
.. _py.test: http://pytest.org/
.. _their own server: https://pypi.python.org/pypi/testing.redis