Deployment Overview

Ratchet is a portable, CDI-based job scheduler for Jakarta EE 10/11. It deploys as a set of JAR modules inside your application and runs on any Jakarta EE runtime that provides the platform services the reference implementation relies on (CDI, JPA, Interceptors, and Jakarta Concurrency).

What You Need

Component           Requirement                                                  Notes
Java                17 or later                                                  Virtual threads available on 21+
Jakarta EE Runtime  10/11 with CDI, JPA, Interceptors, and Jakarta Concurrency   WildFly, Payara, Open Liberty, GlassFish 8, etc.
CDI                 4.0+                                                         beans.xml with bean-discovery-mode="all"
Database            MySQL 8+, PostgreSQL 14+, or MongoDB 6+                      One store module per database
Build Tool          Maven 3.8+                                                   BOM import for version management
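
Note the CDI row: Ratchet's beans are only discovered if bean-discovery-mode="all" is set. A minimal beans.xml for CDI 4.0 (placed in META-INF for a JAR, WEB-INF for a WAR) looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="https://jakarta.ee/xml/ns/jakartaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="https://jakarta.ee/xml/ns/jakartaee
                           https://jakarta.ee/xml/ns/jakartaee/beans_4_0.xsd"
       version="4.0"
       bean-discovery-mode="all">
</beans>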

Ratchet Modules

A typical deployment includes three Ratchet JARs:

ratchet-api          Public API, events, enums, SPI interfaces (Jakarta EE APIs only)
ratchet              Reference implementation — core engine, CDI integration, polling
ratchet-store-*      One of: ratchet-store-mysql, ratchet-store-postgresql, ratchet-store-mongodb

Optional modules:

ratchet-micrometer   Micrometer metrics integration

All versions are managed through the ratchet-bom:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>run.ratchet</groupId>
      <artifactId>ratchet-bom</artifactId>
      <version>0.1.0-SNAPSHOT</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
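
Because the BOM manages versions, the module dependencies themselves omit the version element. For example, a MySQL-backed deployment would declare:

<dependencies>
  <dependency>
    <groupId>run.ratchet</groupId>
    <artifactId>ratchet-api</artifactId>
  </dependency>
  <dependency>
    <groupId>run.ratchet</groupId>
    <artifactId>ratchet</artifactId>
  </dependency>
  <dependency>
    <groupId>run.ratchet</groupId>
    <artifactId>ratchet-store-mysql</artifactId>
  </dependency>
</dependencies>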

Deployment Scenarios

Single-Node

The simplest deployment: one application instance connected to one database. No clustering configuration needed. Ratchet's polling engine runs inside the application server and executes jobs on the runtime's managed thread pool.

This is suitable for:

  • Development and testing
  • Low-throughput workloads (< 1,000 jobs/hour)
  • Applications where high availability is not critical

Multi-Node (Clustered)

Multiple application instances share the same database. Ratchet uses database-level claiming (SELECT ... FOR UPDATE SKIP LOCKED on PostgreSQL/MySQL, atomic document updates on MongoDB) to ensure each job is claimed by exactly one node.
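
The SQL claim query is conceptually shaped like the sketch below; the actual query and column names are Ratchet internals and will differ:

-- Illustrative only: column names here are invented for the example.
-- Rows already locked by another node are skipped, not waited on,
-- so two pollers can never claim the same job.
SELECT id
  FROM scheduler_job_queue
 WHERE status = 'READY'
   AND next_run_at <= CURRENT_TIMESTAMP
 ORDER BY next_run_at
 LIMIT 50
   FOR UPDATE SKIP LOCKED;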

Recurring scans and destructive startup cleanup are already serialized through store-backed locks and leases. Implement ClusterCoordinator only if you want low-latency cross-node wakeups.

See Cluster Configuration for details.

Containerized

Ratchet runs in Docker or Kubernetes without any special configuration beyond what a standard Jakarta EE application needs. The database runs as a separate container or managed service.

See Docker Deployment and Kubernetes Deployment.

Database Schema

Ratchet ships SQL DDL as plain files — no Flyway or Liquibase dependency is required. The schema files are bundled inside each SQL store module JAR at ddl/:

  • ratchet-store-mysql contains ddl/mysql-schema.sql
  • ratchet-store-postgresql contains ddl/postgresql-schema.sql
  • ratchet-store-mongodb initializes collections and indexes at startup

For SQL stores, apply the schema using whatever mechanism your team prefers: CLI tools, migration frameworks, or application startup scripts. MongoDB bootstraps collections and indexes automatically. See Database Setup for step-by-step instructions.
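
For example, assuming your PostgreSQL connection string is in DATABASE_URL, you can stream the bundled DDL straight from the store JAR into psql (adjust the JAR filename to your resolved version):

unzip -p ratchet-store-postgresql-0.1.0-SNAPSHOT.jar ddl/postgresql-schema.sql \
  | psql "$DATABASE_URL"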

Core Tables and Collections

The SQL schema creates these primary tables. MongoDB uses analogous collections created by the store module.

Table                                 Purpose
ratchet_schema_version                SQL schema migration/checksum tracking
scheduler_job                         Cold job metadata, payload, and terminal state
scheduler_business_key_reservation    Active business-key reservation guard
scheduler_job_queue                   Hot executable queue table for claim/poll state
scheduler_job_tag                     Tags for job categorization and querying
scheduler_job_execution               Per-attempt execution history with timing and errors
scheduler_job_log                     Optional per-job log entries if your application persists JobLogLine events
scheduler_batch                       Batch progress tracking
scheduler_batch_metrics               Batch performance metrics
scheduler_job_archive                 Archived completed/failed jobs
scheduler_node                        Cluster node heartbeats
scheduler_lock                        Distributed lock management
scheduler_resource_limit              Resource concurrency configuration
scheduler_resource_permit             Active resource permits for concurrency control
scheduler_workflow_condition          Workflow branching conditions
scheduler_dlq_alerts                  Dead letter queue alert tracking

Configuration

Ratchet requires a CDI-produced RatchetOptions bean. If no producer is found, CDI fails the deployment with an UnsatisfiedResolutionException and the scheduler never starts: a first-class kill-switch for any deployment that bundles the ratchet JARs but does not want the scheduler active.

The producer may build options programmatically or read RATCHET_* environment variables and MicroProfile Config via RatchetOptionsFactory.fromEnvironment(). See Configuration for both patterns.
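
A minimal environment-driven producer might look like the sketch below; the run.ratchet package names are assumptions based on the Maven group ID:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.inject.Produces;

import run.ratchet.RatchetOptions;        // package assumed from the groupId
import run.ratchet.RatchetOptionsFactory; // factory named in this guide

@ApplicationScoped
public class SchedulerConfiguration {

    // Exactly this bean being resolvable is what arms the scheduler;
    // remove the producer and the deployment fails fast instead.
    @Produces
    @ApplicationScoped
    RatchetOptions ratchetOptions() {
        return RatchetOptionsFactory.fromEnvironment();
    }
}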

Key configuration areas:

Area                      RatchetOptions path                        Default
Thread pool               execution.maxConcurrency("SINGLE", ...)    20
Polling                   polling.minDelayMs(...)                    2000
Batch size                polling.batchSize(...)                     50
Job retention             maintenance.jobRetentionDays(...)          90
Clustering / node health  node.heartbeatIntervalSeconds(...)         10
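
Read the path column as nested builder calls. A rough programmatic sketch, assuming a fluent builder API (the builder() entry point and grouping lambdas are assumptions; only the leaf method names come from the table above):

// Sketch only: builder structure is assumed, leaf setters are from the table.
RatchetOptions options = RatchetOptions.builder()
        .execution(ex -> ex.maxConcurrency("SINGLE", 20))
        .polling(p -> p.minDelayMs(2000).batchSize(50))
        .maintenance(m -> m.jobRetentionDays(90))
        .node(n -> n.heartbeatIntervalSeconds(10))
        .build();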

See Configuration for the full reference.

Monitoring and Observability

Ratchet provides multiple monitoring integration points:

  • Event system — CDI events for job lifecycle (started, completed, failed, DLQ); see the observer sketch after this list
  • MetricsCollector SPI — Plug in Micrometer or any custom metrics backend
  • MicroProfile Health — Implement health checks against the job store
  • Store queries — Direct SQL queries or MongoDB queries against Ratchet storage for dashboards
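
As an example of the event system, a CDI observer can turn failures into alerts. JobFailedEvent below is a placeholder name; the real event types are listed in the event system docs:

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.enterprise.event.Observes;

@ApplicationScoped
public class JobFailureAlerter {

    // JobFailedEvent is a hypothetical name standing in for whichever
    // lifecycle event Ratchet fires when a job fails or lands in the DLQ.
    void onJobFailed(@Observes JobFailedEvent event) {
        // Forward to your alerting channel of choice.
        System.err.println("Ratchet job failed: " + event);
    }
}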

See Monitoring & Observability for integration guides.

Deployment Checklist

Before going to production:

  1. Apply or initialize storage — Run schema SQL for MySQL/PostgreSQL; let MongoDB initialize collections and indexes at startup
  2. Configure the store resource — JNDI-bound, JTA-managed DataSource for SQL stores, or a CDI-produced MongoDatabase for MongoDB
  3. Set isolation level for SQL stores — MySQL requires READ COMMITTED (not the default REPEATABLE READ); see the snippet after this checklist
  4. Tune polling — Adjust polling.minDelayMs, polling.maxDelayMs, and polling.batchSize for your workload
  5. Set up retention — Configure maintenance.jobRetentionDays, maintenance.dlqPurgeDays, and maintenance.logRetentionDays to prevent unbounded table growth
  6. Enable metrics — Wire MetricsCollector to your monitoring stack
  7. Configure wakeups if needed — If running multiple nodes and you want faster cross-node responsiveness, implement ClusterCoordinator
  8. Test failover — Verify jobs recover when a node goes down
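
For item 3, on MySQL the isolation level can be checked and switched server-wide as shown below; new connections pick up the GLOBAL value:

-- MySQL defaults to REPEATABLE-READ
SELECT @@global.transaction_isolation;

-- Apply to new connections (use transaction-isolation = READ-COMMITTED
-- in my.cnf to persist across restarts)
SET GLOBAL transaction_isolation = 'READ-COMMITTED';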

Next Steps