ZSpec
ZSpec is a distributed test runner for RSpec. It consists of a worker, a client, and a Redis store.
The Worker
The workers are pods running on k8s. They run zspec work, which polls Redis for work and uploads the results back to Redis.
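A minimal sketch of that worker loop is below. The Redis key names (zspec:queue, zspec:results) and the result payload shape are assumptions for illustration, not ZSpec's actual internals.

  # Illustrative worker loop; key names and payload shape are assumptions.
  require "redis"
  require "json"
  require "open3"

  redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379"))

  loop do
    # Block until a spec file appears on the work queue.
    _key, spec_file = redis.blpop("zspec:queue", timeout: 30)
    next if spec_file.nil?

    # Run the spec file in isolation and capture its JSON-formatted output.
    output, _status = Open3.capture2("bundle", "exec", "rspec", "--format", "json", spec_file)

    # Upload the result back to Redis for the client to collect.
    redis.rpush("zspec:results", { file: spec_file, result: output }.to_json)
  end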
The Client
The client (in this case Drone) queues up the specs by running zspec queue_specs spec/ scenarios. zspec then kicks off the following steps (sketched after this list):
- calls out to rspec to get the list of specs to run.
- cleans the file paths.
- orders the specs by previous runtime, longest to shortest.
- adds the specs to the Redis queue.
- sets a counter to the number of specs that were added.
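A rough sketch of that queueing flow is below. The key names, the runtime store, and the use of Dir.glob in place of a real rspec dry run are assumptions for illustration only.

  # Illustrative queueing flow; key names and the runtime store are assumptions,
  # and Dir.glob stands in for asking rspec for the spec list.
  require "redis"

  redis = Redis.new

  # Collect the spec files and clean the paths (e.g. strip a leading "./").
  files = Dir.glob("./spec/**/*_spec.rb").map { |f| f.sub(%r{\A\./}, "") }

  # Order by previous runtime, longest to shortest, so slow files start first.
  runtimes = redis.hgetall("zspec:runtimes") # e.g. { "spec/a_spec.rb" => "12.3" }
  files = files.sort_by { |f| -runtimes.fetch(f, "0").to_f }

  # Add the specs to the queue and set the counter of queued specs.
  files.each { |f| redis.rpush("zspec:queue", f) }
  redis.set("zspec:pending_count", files.size)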
The client then runs zspec present, which polls Redis for completed specs. For each non-duplicate completed spec, it stores the result in memory and decrements the counter. Once the counter hits 0, it exits the loop and prints the results.
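A sketch of the present loop, under the same assumed key names and payload shape as above:

  # Illustrative presenter loop; key names and payload shape are assumptions.
  require "redis"
  require "json"

  redis = Redis.new
  seen = {}

  loop do
    _key, payload = redis.blpop("zspec:results", timeout: 30)
    next if payload.nil?

    result = JSON.parse(payload)
    file = result.fetch("file")
    next if seen.key?(file) # ignore duplicate results for the same file

    seen[file] = result
    break if redis.decr("zspec:pending_count") <= 0
  end

  # All expected results are in; print them.
  seen.each_value { |r| puts r.fetch("result") }

In this sketch, duplicates are skipped before the decrement, so a retried worker's second result for the same file cannot end the loop early.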

Having an Issue?
Issue: My ZSpec build is stuck in the "images" state for more than 30 minutes.
Remediation:
- Click the Cancel button on the build in Drone
- Click the Restart button on the build in Drone
FAQ
- Drone produces a report of frequently flaky specs after each test run. How do I reproduce a flaky spec identified in that report?
The unit of work in ZSpec is an individual spec file. Because of ZSpec's architecture, each spec file runs in isolation and is not subject to data pollution from other files. To reproduce a spec, run the file mentioned in the report with rspec in your local development environment.