Deployment Overview
Ratchet is a portable, CDI-based job scheduler for Jakarta EE 10/11. It deploys as a set of JAR modules inside your application and runs on any Jakarta EE runtime that provides the services used by the reference implementation.
What You Need
| Component | Requirement | Notes |
|---|---|---|
| Java | 17 or later | Virtual threads available on 21+ |
| Jakarta EE Runtime | 10/11 with CDI, JPA, Interceptors, and Jakarta Concurrency | WildFly, Payara, Open Liberty, GlassFish 8, etc. |
| CDI | 4.0+ | beans.xml with bean-discovery-mode="all" |
| Database | MySQL 8+, PostgreSQL 14+, or MongoDB 6+ | One store module per database |
| Build Tool | Maven 3.8+ | BOM import for version management |
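The CDI row above calls for discovery of all beans. A minimal `beans.xml` satisfying that requirement for CDI 4.0 might look like this (place it in `META-INF` for a JAR or `WEB-INF` for a WAR):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="https://jakarta.ee/xml/ns/jakartaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="https://jakarta.ee/xml/ns/jakartaee
                           https://jakarta.ee/xml/ns/jakartaee/beans_4_0.xsd"
       version="4.0"
       bean-discovery-mode="all">
</beans>
```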
Ratchet Modules
A typical deployment includes three Ratchet JARs:
- ratchet-api — Public API, events, enums, SPI interfaces (Jakarta EE APIs only)
- ratchet — Reference implementation: core engine, CDI integration, polling
- ratchet-store-* — One of ratchet-store-mysql, ratchet-store-postgresql, or ratchet-store-mongodb
Optional modules:
- ratchet-micrometer — Micrometer metrics integration
All versions are managed through the ratchet-bom:
```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>run.ratchet</groupId>
      <artifactId>ratchet-bom</artifactId>
      <version>0.1.0-SNAPSHOT</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
```
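With the BOM imported, the individual modules can be declared without versions. A typical three-module setup (artifact IDs taken from the module list above; the PostgreSQL store is just one of the three options) might look like:

```xml
<dependencies>
  <dependency>
    <groupId>run.ratchet</groupId>
    <artifactId>ratchet-api</artifactId>
  </dependency>
  <dependency>
    <groupId>run.ratchet</groupId>
    <artifactId>ratchet</artifactId>
  </dependency>
  <dependency>
    <groupId>run.ratchet</groupId>
    <artifactId>ratchet-store-postgresql</artifactId>
  </dependency>
</dependencies>
```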
Deployment Scenarios
Single-Node
The simplest deployment: one application instance connected to one database. No clustering configuration needed. Ratchet's polling engine runs inside the application server and executes jobs using its managed thread pool.
This is suitable for:
- Development and testing
- Low-throughput workloads (< 1,000 jobs/hour)
- Applications where high availability is not critical
Multi-Node (Clustered)
Multiple application instances share the same database. Ratchet uses database-level claiming (SELECT ... FOR UPDATE SKIP LOCKED on PostgreSQL/MySQL, atomic document updates on MongoDB) to ensure each job is claimed by exactly one node.
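To make the claiming pattern concrete, a SKIP LOCKED claim in the style described above looks roughly like the following. scheduler_job_queue is Ratchet's hot queue table (listed under Core Tables below), but the column names here are illustrative placeholders, not the real schema:

```sql
-- Illustrative only: claim up to one batch of due jobs. Rows already
-- locked by another node's transaction are skipped rather than waited on,
-- so each job is claimed by exactly one node.
BEGIN;
SELECT id
FROM scheduler_job_queue
WHERE status = 'READY'
  AND run_at <= CURRENT_TIMESTAMP
ORDER BY run_at
LIMIT 50
FOR UPDATE SKIP LOCKED;
-- ...mark the selected rows as claimed by this node, then:
COMMIT;
```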
Recurring scans and destructive startup cleanup are already serialized through store-backed locks and leases. Implement ClusterCoordinator only if you want low-latency cross-node wakeups.
See Cluster Configuration for details.
Containerized
Ratchet runs in Docker or Kubernetes without any special configuration beyond what a standard Jakarta EE application needs. The database runs as a separate container or managed service.
See Docker Deployment and Kubernetes Deployment.
Database Schema
Ratchet ships SQL DDL as plain files — no Flyway or Liquibase dependency is required. The schema files are bundled inside each SQL store module JAR at ddl/:
- ratchet-store-mysql contains ddl/mysql-schema.sql
- ratchet-store-postgresql contains ddl/postgresql-schema.sql
- ratchet-store-mongodb initializes collections and indexes at startup
For SQL stores, apply the schema using whatever mechanism your team prefers: CLI tools, migration frameworks, or application startup scripts. MongoDB bootstraps collections and indexes automatically. See Database Setup for step-by-step instructions.
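As one example of "whatever mechanism your team prefers", the bundled DDL can be extracted straight from the store JAR and fed to the database CLI. The JAR version, host, user, and database names below are placeholders:

```shell
# Extract the DDL bundled in the PostgreSQL store module JAR, then apply it.
unzip -p ratchet-store-postgresql-0.1.0-SNAPSHOT.jar ddl/postgresql-schema.sql > schema.sql
psql -h db.example.com -U ratchet -d ratchet -f schema.sql
```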
Core Tables and Collections
The SQL schema creates these primary tables. MongoDB uses analogous collections created by the store module.
| Table | Purpose |
|---|---|
| ratchet_schema_version | SQL schema migration/checksum tracking |
| scheduler_job | Cold job metadata, payload, and terminal state |
| scheduler_business_key_reservation | Active business-key reservation guard |
| scheduler_job_queue | Hot executable queue table for claim/poll state |
| scheduler_job_tag | Tags for job categorization and querying |
| scheduler_job_execution | Per-attempt execution history with timing and errors |
| scheduler_job_log | Optional per-job log entries if your application persists JobLogLine events |
| scheduler_batch | Batch progress tracking |
| scheduler_batch_metrics | Batch performance metrics |
| scheduler_job_archive | Archived completed/failed jobs |
| scheduler_node | Cluster node heartbeats |
| scheduler_lock | Distributed lock management |
| scheduler_resource_limit | Resource concurrency configuration |
| scheduler_resource_permit | Active resource permits for concurrency control |
| scheduler_workflow_condition | Workflow branching conditions |
| scheduler_dlq_alerts | Dead letter queue alert tracking |
Configuration
Ratchet requires a CDI-produced RatchetOptions bean. If no producer is found, CDI fails deployment with UnsatisfiedResolutionException and the scheduler never starts, which acts as a first-class kill switch for deployments that include the ratchet JAR but do not want it active.
The producer may build options programmatically or read RATCHET_* environment variables and MicroProfile Config via RatchetOptionsFactory.fromEnvironment(). See Configuration for both patterns.
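As a sketch of the environment-driven pattern, a producer might look like the following. RatchetOptions and RatchetOptionsFactory.fromEnvironment() are named above, but their packages (and the producer class name) are assumptions here:

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;

// Hypothetical wiring sketch: the Ratchet import statements are omitted
// because the actual package names are not confirmed by this guide.
@ApplicationScoped
public class SchedulerConfig {

    @Produces
    @ApplicationScoped
    RatchetOptions ratchetOptions() {
        // Reads RATCHET_* environment variables and MicroProfile Config.
        return RatchetOptionsFactory.fromEnvironment();
    }
}
```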
Key configuration areas:
| Area | RatchetOptions path | Default |
|---|---|---|
| Thread pool | execution.maxConcurrency("SINGLE", ...) | 20 |
| Polling | polling.minDelayMs(...) | 2000 |
| Batch size | polling.batchSize(...) | 50 |
| Job retention | maintenance.jobRetentionDays(...) | 90 |
| Clustering / node health | node.heartbeatIntervalSeconds(...) | 10 |
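To make the polling knobs concrete, here is a small self-contained sketch of the general adaptive-backoff pattern pollers of this kind use: back off toward maxDelayMs while the queue is idle, and snap back to minDelayMs as soon as work is found. This illustrates why both bounds matter for your workload; it is not Ratchet's actual polling algorithm.

```java
// Illustrative adaptive poll-delay calculation; NOT Ratchet's internal logic.
final class PollBackoff {
    private final long minDelayMs;
    private final long maxDelayMs;
    private long currentDelayMs;

    PollBackoff(long minDelayMs, long maxDelayMs) {
        this.minDelayMs = minDelayMs;
        this.maxDelayMs = maxDelayMs;
        this.currentDelayMs = minDelayMs;
    }

    /** Returns the delay before the next poll, given whether the last poll found jobs. */
    long nextDelay(boolean foundJobs) {
        if (foundJobs) {
            currentDelayMs = minDelayMs;                               // busy: poll again quickly
        } else {
            currentDelayMs = Math.min(currentDelayMs * 2, maxDelayMs); // idle: back off
        }
        return currentDelayMs;
    }
}
```

With minDelayMs=2000 and a cap of 16000, an idle node doubles its delay each empty poll until it reaches the cap, then resets to 2000 the moment a poll returns jobs.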
See Configuration for the full reference.
Monitoring and Observability
Ratchet provides multiple monitoring integration points:
- Event system — CDI events for job lifecycle (started, completed, failed, DLQ)
- MetricsCollector SPI — Plug in Micrometer or any custom metrics backend
- MicroProfile Health — Implement health checks against the job store
- Store queries — Direct SQL queries or MongoDB queries against Ratchet storage for dashboards
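As an illustration of the event-system integration point, a CDI observer for a failure event might look like the sketch below. JobFailedEvent is a hypothetical name for the "failed" lifecycle event mentioned above, not a confirmed Ratchet API; check the event-system docs for the real types.

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;

// Sketch only: JobFailedEvent and its shape are assumptions.
@ApplicationScoped
public class JobFailureAlerting {

    void onJobFailed(@Observes JobFailedEvent event) {
        // Forward to your logging, paging, or metrics stack here.
        System.err.println("Ratchet job failed: " + event);
    }
}
```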
See Monitoring & Observability for integration guides.
Deployment Checklist
Before going to production:
- Apply or initialize storage — Run schema SQL for MySQL/PostgreSQL; let MongoDB initialize collections and indexes at startup
- Configure the store resource — JNDI-bound, JTA-managed DataSource for SQL stores, or a CDI-produced MongoDatabase for MongoDB
- Set isolation level for SQL stores — MySQL requires READ COMMITTED (not the default REPEATABLE READ)
- Tune polling — Adjust polling.minDelayMs, polling.maxDelayMs, and polling.batchSize for your workload
- Set up retention — Configure maintenance.jobRetentionDays, maintenance.dlqPurgeDays, and maintenance.logRetentionDays to prevent unbounded table growth
- Enable metrics — Wire MetricsCollector to your monitoring stack
- Configure wakeups if needed — If running multiple nodes and you want faster cross-node responsiveness, implement ClusterCoordinator
- Test failover — Verify jobs recover when a node goes down
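The MySQL isolation requirement from the checklist can be satisfied globally or per connection; for MySQL 8, for example:

```sql
-- Globally, in my.cnf / mysqld.cnf:
--   [mysqld]
--   transaction_isolation = READ-COMMITTED

-- Or per session, before Ratchet's transactions run:
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```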
Next Steps
- Installation & Setup — Step-by-step getting started
- Database Setup — Schema application for all stores
- Configuration — Full configuration reference
- Docker Deployment — Containerized deployments
- Kubernetes Deployment — Orchestrated deployments