Architecture Overview

Ratchet is a portable, CDI-based job scheduler for Jakarta EE 10/11 applications. It provides persistent, cluster-safe background job scheduling with a fluent API -- covering batching, chaining, workflows, and transactional enqueueing out of the box.

Where Ratchet Fits

In a typical Jakarta EE application, Ratchet sits between your business logic and the database. You inject JobSchedulerService, enqueue work using lambda expressions, and Ratchet handles persistence, polling, execution, retries, and lifecycle events.

┌───────────────────────────────────────────────────────┐
│                    Your Application                   │
│                                                       │
│   @Inject JobSchedulerService scheduler;              │
│   scheduler.enqueue(() -> orderService.process(id))   │
│            .withMaxRetries(3)                         │
│            .submit();                                 │
│                                                       │
├───────────────────────────────────────────────────────┤
│                     Ratchet Engine                    │
│                                                       │
│  ┌──────────┐  ┌────────┐  ┌──────────┐  ┌──────────┐ │
│  │  Poller  │  │JobTask │  │ Workflow │  │  Batch   │ │
│  │          │──│        │──│Scheduler │──│ Service  │ │
│  │ Adaptive │  │Executor│  │          │  │          │ │
│  │ Polling  │  │        │  │ Chains + │  │ Parent/  │ │
│  │          │  │ Retry  │  │Conditions│  │  Child   │ │
│  └──────────┘  └────────┘  └──────────┘  └──────────┘ │
│                                                       │
├───────────────────────────────────────────────────────┤
│                      JobStore SPI                     │
│            (composed store SPI, one marker)           │
├─────────────────┬──────────────────┬──────────────────┤
│   MySQL Store   │ PostgreSQL Store │  MongoDB Store   │
│  (SKIP LOCKED)  │  (SKIP LOCKED)   │ (atomic updates) │
└─────────────────┴──────────────────┴──────────────────┘

Module Structure

Ratchet is organized into modules following the Jakarta EE API / RI / TCK pattern:

ratchet/
├── ratchet-api               Public API, SPIs, events, annotations
├── ratchet                   Reference implementation + CDI integration
├── ratchet-store-core        Shared JPA entities (internal, not user-facing)
├── ratchet-store-mysql       MySQL JobStore + DDL
├── ratchet-store-postgresql  PostgreSQL JobStore + DDL
├── ratchet-store-mongodb     MongoDB JobStore + collection/index bootstrap
├── ratchet-tck               Technology Compatibility Kit aggregator (pom)
│   ├── util                  JUnit-only helpers shared across TCK modules
│   ├── store                 Store SPI conformance contracts
│   ├── api                   Public-API conformance contracts (container-free)
│   └── jakarta               Jakarta EE conformance contracts (Arquillian)
├── ratchet-testsuite         Integration tests
└── ratchet-bom               Bill of Materials POM

Module Dependency Graph

               ┌──────────────┐
               │ ratchet-api  │ (EE APIs only)
               └──────┬───────┘
                      │
       ┌──────────────┼─────────────┐
       │              │             │
       ▼              ▼             ▼
┌─────────────┐  ┌──────────┐  ┌──────────┐
│   ratchet   │  │store-core│  │   tck    │
│  (engine +  │  │(internal)│  │          │
│    CDI)     │  └────┬─────┘  └──────────┘
└─────────────┘       │
      ┌───────────────┼─────────────────┐
      │               │                 │
      ▼               ▼                 ▼
┌────────────┐ ┌──────────────┐  ┌──────────────┐
│store-mysql │ │store-postgres│  │store-mongodb │
└────────────┘ └──────────────┘  └──────────────┘

Key design constraint: ratchet-api has zero runtime dependencies beyond Jakarta EE APIs supplied by the runtime. Your application can depend on ratchet-api for event types and annotations without pulling in the engine.
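For example, a module that only observes events can declare the API artifact by itself. The coordinates below are taken from the Minimal Setup snippet later in this page; the version is assumed to be managed by the imported ratchet-bom:

```xml
<dependency>
    <groupId>run.ratchet</groupId>
    <artifactId>ratchet-api</artifactId>
    <!-- version managed by the imported ratchet-bom -->
</dependency>
```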

Core Concepts

Pull-Based Architecture

Ratchet uses a pull model where worker threads poll the database for available jobs. This provides natural backpressure -- workers only claim new jobs when they have capacity. The Poller uses adaptive algorithms to balance responsiveness against database load.
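The adaptive algorithm itself is internal to the Poller, but its general shape can be sketched as follows. Everything here is illustrative: the class name, the doubling strategy, and the bounds are invented for this example, not taken from Ratchet's source.

```java
// Illustrative sketch of adaptive polling: back off exponentially while polls
// come up empty, and snap back to the fast interval as soon as work appears.
public final class AdaptivePollDelay {
    private final long minMillis;
    private final long maxMillis;
    private long currentMillis;

    public AdaptivePollDelay(long minMillis, long maxMillis) {
        this.minMillis = minMillis;
        this.maxMillis = maxMillis;
        this.currentMillis = minMillis;
    }

    /** Report the outcome of a poll; returns the delay before the next poll. */
    public long next(boolean foundJobs) {
        if (foundJobs) {
            currentMillis = minMillis;                              // stay responsive under load
        } else {
            currentMillis = Math.min(currentMillis * 2, maxMillis); // ease off an idle database
        }
        return currentMillis;
    }
}
```

The trade-off this encodes is the one described above: a busy system polls at the minimum interval, while an idle system converges to the maximum and keeps database load near zero.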

Store as the Queue

Unlike message-broker-based schedulers, Ratchet uses your selected store backend as the job queue. SQL stores keep jobs in the scheduler_job table and claim with SELECT ... FOR UPDATE SKIP LOCKED; MongoDB keeps jobs in the scheduler_job collection and claims with atomic document updates. This gives you:

  • Transactional enqueueing -- SQL job creation participates in your existing transaction; MongoDB uses store-level atomic writes
  • Durability -- jobs survive application restarts
  • Visibility -- query job status with standard SQL or MongoDB queries
  • No additional infrastructure -- no Redis, RabbitMQ, or Kafka required
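For the SQL stores, the claim step can be pictured as a JDBC query like the sketch below. The table name scheduler_job comes from the text above; the column names (status, run_at, priority) and the READY state are assumptions for illustration -- consult the store module's shipped DDL for the real schema, and note that a real claim also marks the selected rows as claimed in the same transaction.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Sketch of the SKIP LOCKED claim pattern used by SQL job stores.
public final class SkipLockedClaim {

    static final String CLAIM_SQL =
        "SELECT id FROM scheduler_job " +
        "WHERE status = 'READY' AND run_at <= NOW() " +
        "ORDER BY priority DESC, run_at " +
        "LIMIT ? FOR UPDATE SKIP LOCKED";

    /** Claims up to batchSize jobs; rows locked by other nodes are skipped, not waited on. */
    public static List<Long> claim(Connection conn, int batchSize) throws SQLException {
        List<Long> ids = new ArrayList<>();
        try (PreparedStatement ps = conn.prepareStatement(CLAIM_SQL)) {
            ps.setInt(1, batchSize);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getLong(1));
                }
            }
        }
        return ids;
    }
}
```

The key property is in the last clause: two nodes running this query concurrently never block each other and never claim the same row, which is what makes the table safe to use as a cluster-wide queue.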

Lambda-Based API

Jobs are defined as serializable lambda expressions. Ratchet uses ASM bytecode analysis to extract the target class, method, and arguments from the lambda, then persists this information as a portable payload. At execution time, the method is invoked reflectively on a CDI-managed bean:

// This lambda is analyzed at enqueue time, not executed
scheduler.enqueue(() -> orderService.processOrder(orderId))
         .submit();

// Ratchet extracts: target=OrderService, method=processOrder, args=[orderId]
// At execution time: CDI resolves OrderService, invokes processOrder(orderId)

SPI-Driven Extension

Ratchet separates API contracts from implementation through Service Provider Interfaces. The engine consults SPIs for persistence, invocation resolution, retry logic, security, metrics, logging, configuration, and cluster coordination. Default implementations are provided, and you can replace any of them:

SPI                        Purpose                                   Default
JobStore                   Persistence backend                       MySQL / PostgreSQL / MongoDB modules
JobInvocationResolver      Callback-to-method invocation resolution  ASM bytecode analysis
ResultPersistenceStrategy  Job return-value persistence              JSON metadata with size cap
RatchetOptions             Typed runtime options                     Required CDI producer -- see Configuration
RetryPolicy                Custom retry decisions                    Passthrough (uses job config)
ClassPolicy                Security allowlist                        Empty package allowlist; startup fails fast until you provide one
MetricsCollector           Observability hooks                       No-op
ClusterCoordinator         Cross-node wakeups                        No-op (single-node)

SPIs marked @Incubating may change before 1.0:

SPI                   Purpose                         Default
NodeIdentityProvider  Cluster node identity           Hostname + PID
ExecutorProvider      Thread pool management          Container-managed executors
JobLoggerFactory      Structured job logging          JBoss Logging-backed logger
ErrorSanitizer        Exception message sanitization  Truncate + strip PII

Event System

Ratchet publishes lifecycle events that your application can observe. Events live in ratchet-api so you can depend on event types without pulling in the engine. In CDI environments, use standard @Observes:

public void onJobFailed(@Observes JobFailedEvent event) {
    alertService.notify(event.getJobId(), event.getErrorMessage());
}

In non-CDI environments, register a programmatic listener:

scheduler.addEventListener(event -> {
    if (event instanceof JobDlqEvent dlq) {
        log.severe("Job " + dlq.getJobId() + " moved to DLQ");
    }
});

Minimal Setup

Add Ratchet to your Jakarta EE application:

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>run.ratchet</groupId>
            <artifactId>ratchet-bom</artifactId>
            <version>${ratchet.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>run.ratchet</groupId>
        <artifactId>ratchet</artifactId>
    </dependency>
    <dependency>
        <groupId>run.ratchet</groupId>
        <artifactId>ratchet-store-mysql</artifactId>
        <!-- or ratchet-store-postgresql / ratchet-store-mongodb -->
    </dependency>
</dependencies>

For SQL stores, apply the DDL schema from the store module's ddl/ directory. For MongoDB, let the store module initialize collections and indexes at startup. Then inject and use:

@Inject JobSchedulerService scheduler;

// Fire-and-forget
scheduler.enqueueNow(() -> emailService.sendWelcome(userId));

// Configured job
scheduler.enqueue(() -> reportService.generate(month))
         .withPriority(JobPriority.HIGH)
         .withTimeout(Duration.ofMinutes(30))
         .withMaxRetries(3)
         .withBackoff(BackoffPolicy.EXPONENTIAL, Duration.ofSeconds(10))
         .withTags("reports", "monthly")
         .submit();

// Recurring job via annotation
@Recurring(cron = "0 0 2 * * ?", name = "Nightly Cleanup")
public void cleanup() { ... }
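The EXPONENTIAL backoff selected above typically grows the delay geometrically from the base duration. The sketch below shows the usual arithmetic for reference only; it is not Ratchet's internal implementation, and the cap parameter is an assumption added for illustration.

```java
import java.time.Duration;

// Illustrative exponential-backoff arithmetic: delay = base * 2^(attempt - 1),
// capped so late retries do not wait unreasonably long.
public final class ExponentialBackoff {

    /** Delay before retry number {@code attempt} (1-based), capped at {@code cap}. */
    public static Duration delayFor(int attempt, Duration base, Duration cap) {
        // Clamp the exponent to avoid overflowing the shift for large attempt counts.
        long millis = base.toMillis() << Math.min(attempt - 1, 20);
        return millis >= cap.toMillis() ? cap : Duration.ofMillis(millis);
    }
}
```

With the 10-second base from the example above, attempts 1, 2, and 3 would wait 10s, 20s, and 40s under this scheme.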

What's Next