## Testing
quiv is extensively tested with 108 tests covering the full lifecycle of tasks, jobs, event listeners, progress callbacks, configuration, models, and edge cases. Tests run on every commit via CI across Python 3.10 through 3.14.
```bash
# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=quiv

# Run a specific test file
uv run pytest tests/test_scheduler.py

# Run a single test
uv run pytest tests/test_scheduler.py::test_backpressure_skips_dispatch_when_pool_full
```
### Test architecture

Most tests require a running asyncio event loop for callback dispatch. The `running_main_loop` fixture (in `conftest.py`) spins up an event loop in a background thread and yields it to each test. Every test calls `scheduler.shutdown()` in a `finally` block to clean up threads and temp DB files.
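The fixture pattern described above can be sketched as follows. The helper names here are illustrative, not quiv's actual `conftest.py` code:

```python
# Sketch of a background-loop fixture in the style of running_main_loop.
import asyncio
import threading


def start_background_loop() -> asyncio.AbstractEventLoop:
    """Start an event loop in a daemon thread and return it."""
    loop = asyncio.new_event_loop()
    thread = threading.Thread(target=loop.run_forever, daemon=True)
    thread.start()
    return loop


def stop_background_loop(loop: asyncio.AbstractEventLoop) -> None:
    """Ask the loop to stop, from the test's own thread."""
    loop.call_soon_threadsafe(loop.stop)
```

In pytest form this becomes a `yield` fixture: start the loop, yield it to the test body, then stop it during teardown.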
### What is tested

#### Scheduler lifecycle and configuration

- Mixing `config=QuivConfig(...)` with explicit kwargs raises `ConfigurationError`
- `pool_size <= 0` and `history_retention_seconds < 0` are rejected
- `start()` is idempotent (safe to call multiple times)
- `shutdown()` handles DB cleanup failures gracefully
- Database initialization failure raises `DatabaseInitializationError`
- quiv's internal tables (`quiv_task`, `quiv_job`) do not leak into user SQLModel metadata
#### Task input validation

- Empty `task_name` is rejected
- `interval <= 0` and `delay < 0` are rejected
- `args` must be a `tuple` (not a list)
- `kwargs` must be a `dict` (not a string)
- Unpicklable args (e.g. lambdas) raise `ConfigurationError` with a clear message
#### Task registration and identification

- `add_task()` returns a unique `task_id` (UUID)
- Duplicate `task_name` values are allowed, each getting a distinct `task_id`
- Handlers, progress callbacks, and DB rows are all keyed by `task_id`
- `remove_task()` cleans up handler, progress callback, and DB row
- Removing a non-existent task raises `TaskNotFoundError`
- Deleting a non-existent task from persistence raises `TaskNotFoundError`
#### Task execution

- Sync run-once task executes and produces a `completed` job
- Async run-once task executes via a thread-local event loop
- Sync handler without `_stop_event` or `_progress_hook` still runs correctly
- Failed handler sets job status to `failed`
- `_job_id` is injected as a UUID string when the handler accepts it
- `args` and `kwargs` ordering is preserved through a pickle round-trip (tested with 8 positional args and 5 keyword args)
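The pickle round-trip property from the last bullet is easy to check standalone; this snippet mirrors the 8-positional / 5-keyword shape the tests use:

```python
import pickle

args = (1, 2, 3, 4, 5, 6, 7, 8)
kwargs = {"a": 1, "b": 2, "c": 3, "d": 4, "e": 5}

restored_args = pickle.loads(pickle.dumps(args))
restored_kwargs = pickle.loads(pickle.dumps(kwargs))

# Positional order and dict insertion order both survive the round-trip.
assert restored_args == args
assert list(restored_kwargs) == list(kwargs)
```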
#### Concurrent execution and backpressure

- Same task is never dispatched concurrently (status set to `running` blocks re-dispatch)
- When the thread pool is full, due tasks are deferred to the next tick instead of queued unboundedly
- Deferred tasks execute once a worker becomes available
- `_active_job_count` decrements correctly after job completion
- Late-starting jobs (due to pool saturation) log a warning with the delay
#### Interval scheduling (`fixed_interval`)

- Fixed interval (`fixed_interval=True`): next run is aligned to `start_time + interval`
- Skipped intervals: a 70-second job with a 60-second interval skips to `start_time + 120s`; a 130-second job skips to `start_time + 180s`
- Wait between runs (`fixed_interval=False`): next run is `completion_time + interval`
- Recurring task finalization sets status back to `active` and updates `next_run_at`
- Run-once task finalization deletes the task row
#### Cancellation

- `cancel_job()` returns `True` when a stop event exists, `False` otherwise
- Handler that sets `_stop_event` results in `cancelled` status
- `remove_task()` on a running task cancels its active job
- `shutdown()` cancels all tracked running jobs
#### Progress callbacks

- Async progress callback dispatched on the main event loop via the handler's `_progress_hook`
- Sync progress callback dispatched on the main event loop via `call_soon_threadsafe`
- Async handler with sync progress callback works correctly
- Progress callback registration and clearing via `None`
- Sync callback works without an event loop (runs on the worker thread)
- Async callback works without an event loop (runs in a temporary event loop)
- Failing sync callback is logged, does not crash the job
- Failing async callback is logged, does not crash the job
- Closed main loop does not crash progress dispatch
#### Event listeners

- Invalid event type (non-`Event` enum) raises `ConfigurationError`
- Non-callable callback raises `ConfigurationError`
- Removing an unregistered listener is silently ignored
- `TASK_ADDED`: listener receives `Event` and `Task` with correct `task_name` and `task_id`
- `TASK_REMOVED`: listener receives a snapshot of the task before deletion
- `TASK_PAUSED`: listener receives the task with `paused` status
- `TASK_RESUMED`: listener receives the task with `active` status
- `JOB_STARTED`: listener receives `Task` and `Job` with `running` status
- `JOB_COMPLETED`: listener receives `Job` with `duration_seconds` set and `error_message` as `None`
- `JOB_FAILED`: listener receives `Job` with `error_message` matching the exception
- `JOB_CANCELLED`: listener receives `Job` with `cancelled` status
- Multiple listeners for the same event are all called
- Async listener dispatched on the main event loop
- Failing listener is logged and swallowed; subsequent listeners still run
- Sync listener works without an event loop
- Async listener works without an event loop (temporary event loop)
- Async listener failure without an event loop is caught and logged
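A sketch of the listener-registry behavior described above (validation on registration, failures logged and swallowed). The class and method names are illustrative, and `ValueError` stands in where quiv raises `ConfigurationError`:

```python
import enum
import logging
from collections import defaultdict


class Event(enum.Enum):
    TASK_ADDED = "task_added"
    JOB_COMPLETED = "job_completed"


class ListenerRegistry:
    def __init__(self) -> None:
        self._listeners = defaultdict(list)

    def add_listener(self, event, callback) -> None:
        if not isinstance(event, Event):
            raise ValueError("event must be an Event enum member")
        if not callable(callback):
            raise ValueError("callback must be callable")
        self._listeners[event].append(callback)

    def emit(self, event, payload) -> None:
        for cb in self._listeners[event]:
            try:
                cb(event, payload)
            except Exception:
                # A failing listener is logged; the rest still run.
                logging.getLogger(__name__).exception("listener failed")
```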
#### Handler injection

- `_job_id`, `_stop_event`, and `_progress_hook` are injected when the handler accepts them
- Injection is skipped when the handler signature does not include the parameters
- Handlers with `**kwargs` receive all injected parameters
- Uninspectable callables (e.g. `object()`) are handled gracefully (no injection, no crash)
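The injection rules above can be sketched with `inspect.signature`; the helper name is hypothetical:

```python
import inspect


def select_injected_kwargs(handler, available: dict) -> dict:
    """Pick which of _job_id/_stop_event/_progress_hook a handler can accept."""
    try:
        params = inspect.signature(handler).parameters
    except (TypeError, ValueError):
        return {}  # uninspectable callable: inject nothing, don't crash
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(available)  # **kwargs accepts everything
    return {name: value for name, value in available.items() if name in params}
```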
#### Deserialization safety

- Corrupt pickle data in `args_pickled` raises `ConfigurationError`
- Corrupt pickle data in `kwargs_pickled` raises `ConfigurationError`
- Pickled kwargs that aren't a `dict` raise `ConfigurationError`
- Corrupt pickle in `Task.model_validate()` from a dict falls back to empty defaults
- Corrupt pickle in `Task.model_validate()` from a `TaskDB` object falls back to empty defaults
- Non-standard input types pass through the validator without crashing
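The model validator's fallback behavior can be sketched like this (the strict persistence path raises instead of falling back; the function name is illustrative):

```python
import pickle


def unpickle_args_with_fallback(data):
    """Return a tuple of args, falling back to () on corrupt or odd data."""
    if data is None:
        return ()
    try:
        value = pickle.loads(data)
    except Exception:
        return ()  # corrupt pickle: empty defaults
    return value if isinstance(value, tuple) else ()
```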
#### Datetime normalization

- Naive datetimes are normalized to UTC-aware (treated as UTC)
- Timezone-aware datetimes are converted to UTC
- `None` datetimes pass through unchanged
- `Task` public model normalizes `next_run_at` from `TaskDB`
- `TaskDB` datetimes are normalized on DB load via `@reconstructor`
- `Job` datetimes (`started_at`, `ended_at`) are normalized on DB load via `@reconstructor`
- `Job` with `None` `ended_at` is handled correctly
- `get_all_tasks()` returns UTC-aware `next_run_at` regardless of the configured display timezone
- `get_job()` returns UTC-aware `started_at` and `ended_at`
#### Persistence layer

- `queue_task_for_immediate_run()` raises `TaskNotScheduledError` for a missing task
- `pause_task()` and `resume_task()` raise `TaskNotFoundError` for a missing task
- `mark_task_running()` raises `TaskNotFoundError` for a missing task
- `mark_job_running()` and `finalize_job()` raise `JobNotFoundError` for a missing job
- History cleanup deletes old finished jobs while keeping recent ones
- Job status filtering (`completed`, `failed`) returns correct subsets
- `get_all_tasks(include_run_once=False)` excludes run-once tasks
- Paused tasks are excluded from due-task queries
- Resumed tasks appear in due-task queries
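The history-cleanup rule can be sketched as a pure filter over finished jobs; the field names follow the bullets above, but the function itself is illustrative:

```python
from datetime import datetime, timedelta, timezone


def select_expired_jobs(jobs, retention_seconds, now):
    """Finished jobs whose ended_at is older than the retention window."""
    cutoff = now - timedelta(seconds=retention_seconds)
    return [
        job for job in jobs
        if job["status"] in {"completed", "failed", "cancelled"}
        and job["ended_at"] is not None
        and job["ended_at"] < cutoff
    ]
```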
#### Immediate execution

- `run_task_immediately()` raises `HandlerNotRegisteredError` for an unregistered handler
- `run_task_immediately()` raises `TaskNotScheduledError` when the task row is missing
- `run_task_immediately()` successfully queues a registered task
#### Configuration

- IANA timezone string resolves correctly
- `tzinfo` instance passes through
- Invalid timezone string raises `InvalidTimezoneError`
- Invalid type raises `InvalidTimezoneError`
- `QuivConfig` works without conflict when no explicit kwargs are passed
#### Scheduler loop resilience
- Loop catches and logs exceptions, retries after sleep
- Loop does not crash on persistent errors (verified over multiple iterations)
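The resilience property above amounts to a catch-log-retry loop; a minimal sketch (not quiv's actual loop):

```python
import logging
import time


def scheduler_loop(tick, should_stop, sleep_seconds=1.0) -> None:
    """Log tick failures and keep going rather than crashing the loop."""
    while not should_stop():
        try:
            tick()
        except Exception:
            logging.getLogger(__name__).exception("scheduler tick failed")
        time.sleep(sleep_seconds)
```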
#### Model serialization

- `TaskDB` field serializer unpickles valid bytes for JSON output
- `Task` model serializes args/kwargs as unpickled Python objects
- JSON serialization produces correct output (tested with `model_dump_json()`)
- Time helpers (`next_run_time`, `get_current_time`) return UTC-aware datetimes
- `id_generator()` returns UUID strings