Job Types

Ratchet classifies jobs into public types (what users see) and internal execution types (what the engine uses). The public JobType enum describes the scheduling pattern. The internal JobExecutionType enum describes the execution role within that pattern.

Public Job Types

The JobType enum appears in events, SPIs, and monitoring. It represents the user-visible category:

| Type | Description | Created By |
|------|-------------|------------|
| SINGLE | One-time execution at a scheduled time | enqueue(), schedule() |
| RECURRING | Automatically rescheduled on a cron expression | scheduleRecurring(), @Recurring |
| BATCH | Coordinated parallel execution of many items | enqueueBatch(), streamingBatch() |
| CHAIN | Sequential multi-step pipeline | then() on JobBuilder |
| WORKFLOW | Conditional branching based on job results | thenOnSuccess(), when(), branch() |
| SYSTEM | Framework-managed internal work | Engine only (not user-creatable) |

Internal Execution Types

The JobExecutionType enum adds granularity the engine needs for routing:

| Execution Type | Maps to Public Type | Role |
|----------------|---------------------|------|
| SINGLE | SINGLE | Standard one-time job |
| RECURRING | RECURRING | Recurring master job |
| BATCH_PARENT | BATCH | Batch coordinator (tracks progress, no work) |
| BATCH_CHILD | BATCH | Individual item within a batch |
| CHAIN_STEP | CHAIN | One step in a sequential chain |
| WORKFLOW_BRANCH | WORKFLOW | Conditional branch job |
| WORKFLOW_JOIN | WORKFLOW | Join point for workflow convergence |
| DLQ_ALERT | SYSTEM | Dead letter queue alert tracking |

This separation lets external observers see clean semantic categories while the engine routes jobs to the correct handler based on their execution role.

SINGLE Jobs

The most common type. A SINGLE job executes exactly once at its scheduled time. It supports all standard features: retries, timeouts, priorities, tags, callbacks, and idempotency keys.

```java
// Immediate execution
scheduler.enqueue(() -> orderService.processOrder(orderId))
    .withPriority(JobPriority.HIGH)
    .withMaxRetries(3)
    .submit();

// Delayed execution
scheduler.schedule(Duration.ofMinutes(30), () -> reminderService.send(userId))
    .submit();
```

After a SINGLE job completes (succeeds or permanently fails), it is eligible for archival. It does not reschedule itself.

RECURRING Jobs

Recurring jobs execute on a cron schedule. Each execution spawns a new job instance, so the recurring "master" persists indefinitely while its individual runs follow the normal lifecycle.

Annotation-Based

The simplest way to create recurring jobs:

```java
@ApplicationScoped
public class MaintenanceService {

    @Recurring(cron = "0 0 2 * * ?", name = "Nightly Cleanup")
    public void performCleanup() {
        // Runs at 2 AM daily
    }

    @Recurring(
        cron = "0 */15 * * * ?",
        zone = "America/New_York",
        maxRetries = 5,
        backoffPolicy = BackoffPolicy.EXPONENTIAL,
        tags = {"health", "monitoring"}
    )
    public void healthCheck(JobContext context) {
        context.logger().info("Running health check");
    }
}
```

At startup, the RecurringJobProcessor CDI bean scans for @Recurring methods, validates them, and registers them with the scheduler. The annotation's id (or auto-generated fully-qualified method name) serves as the business key, ensuring exactly one active master per annotation.
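The "exactly one active master" guarantee can be pictured as registration under a unique key. The sketch below is illustrative only, assuming a simple in-memory registry rather than Ratchet's actual persistence layer; the point is that re-scanning on restart replaces rather than duplicates a master with the same business key.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the business key (annotation id, or the
// auto-generated class + method name) is the unique key under which
// a recurring master is registered, so repeated startup scans are
// idempotent.
public class RecurringRegistry {
    private final Map<String, String> mastersByBusinessKey = new ConcurrentHashMap<>();

    // Returns true if a new master was created, false if one already existed.
    boolean register(String businessKey, String cron) {
        return mastersByBusinessKey.putIfAbsent(businessKey, cron) == null;
    }
}
```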

Programmatic API

```java
scheduler.scheduleRecurring(
        "0 0 * * * ?",
        ZoneId.of("UTC"),
        () -> reportService.generateHourly())
    .withOptions(JobOptions.defaults()
        .withMaxRetries(3)
        .withTimeout(Duration.ofMinutes(10)))
    .withTags(List.of("reports"))
    .withBusinessKey("hourly-report")
    .submit();
```

How Recurring Execution Works

  1. The recurring master job stores the cron expression and timezone
  2. RecurringScheduler calculates the next fire time from the cron expression
  3. When the fire time arrives, a new SINGLE job is created for that execution
  4. After the execution completes, the next fire time is recalculated
  5. This continues until the recurring job is canceled
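The recalculation loop in steps 2-4 can be sketched with plain java.time. A real cron library (e.g. Quartz) handles arbitrary expressions; this illustration hard-codes a "daily at 02:00" schedule, and the class and method names are assumptions, not Ratchet internals.

```java
import java.time.ZoneId;
import java.time.ZonedDateTime;

// Illustration of next-fire-time recalculation: after each run, the next
// fire time is derived from the schedule and the moment just passed.
public class NextFireTime {

    // Next 02:00 in the given zone strictly after 'after'.
    static ZonedDateTime nextDailyAtTwo(ZonedDateTime after) {
        ZonedDateTime candidate =
            after.toLocalDate().atTime(2, 0).atZone(after.getZone());
        return candidate.isAfter(after) ? candidate : candidate.plusDays(1);
    }

    public static void main(String[] args) {
        ZonedDateTime now =
            ZonedDateTime.of(2024, 5, 1, 3, 15, 0, 0, ZoneId.of("UTC"));
        // 03:15 is past today's 02:00, so the next fire is tomorrow at 02:00
        System.out.println(nextDailyAtTwo(now)); // 2024-05-02T02:00Z[UTC]
    }
}
```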

The @Recurring annotation supports these properties:

| Property | Default | Description |
|----------|---------|-------------|
| cron | (required) | Quartz cron expression |
| zone | "UTC" | Timezone for cron evaluation |
| name | method name | Human-readable name |
| id | class + method | Business key for idempotency |
| priority | 5 (NORMAL) | Priority on a 1-10 scale |
| maxRetries | 3 | Retry attempts per execution |
| backoffPolicy | EXPONENTIAL | Backoff between retries |
| backoffDelayMs | 1000 | Base delay for backoff |
| timeoutSeconds | 3600 | Max execution time |
| enabled | true | Whether the recurring job is registered at startup |
| tags | {} | Tags for filtering |

BATCH Jobs

Batch jobs process a collection of items in parallel. Internally, a BATCH_PARENT job is created to track overall progress, and individual BATCH_CHILD jobs are created for each item.

```java
scheduler.enqueueBatch("Import Users")
    .forEach(users, user -> userService.importUser(user))
    .onProgress(ctx -> log.info("{}% complete", ctx.percentDone()))
    .thenOnBatchSuccess(() -> notificationService.sendImportComplete())
    .thenOnBatchFailure(() -> alertService.sendImportFailed())
    .submit();
```

The parent job doesn't execute work itself -- it monitors child job completion. When all children finish, the parent evaluates batch-level workflow conditions and fires the appropriate branches.
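The parent's role can be sketched as pure bookkeeping. This is a hypothetical illustration of the idea, not Ratchet's actual BATCH_PARENT implementation; all names here are assumptions.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a BATCH_PARENT's state: it does no work of its own, only
// tallies child outcomes and decides the batch-level result once every
// child has reported.
public class BatchParentState {
    final int totalChildren;
    final AtomicInteger succeeded = new AtomicInteger();
    final AtomicInteger failed = new AtomicInteger();

    BatchParentState(int totalChildren) { this.totalChildren = totalChildren; }

    // Called by the engine when a BATCH_CHILD finishes.
    void childCompleted(boolean success) {
        (success ? succeeded : failed).incrementAndGet();
    }

    boolean allChildrenDone() {
        return succeeded.get() + failed.get() == totalChildren;
    }

    int percentDone() {
        return 100 * (succeeded.get() + failed.get()) / totalChildren;
    }

    // Batch-level outcome: success only if no child failed permanently.
    boolean batchSucceeded() {
        return allChildrenDone() && failed.get() == 0;
    }
}
```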

See Batches for details on BatchBuilder, StreamingBatchBuilder, and chunk processing.

CHAIN Jobs

Chains execute tasks sequentially. Each step depends on the previous step's completion. Internally, each step is a CHAIN_STEP job linked by the depends_on column.

```java
scheduler.enqueue(() -> validateData())
    .then(() -> transformData())
    .then(() -> loadData())
    .then(() -> sendNotification())
    .submit();
```

How Chains Work Internally

  1. All steps are persisted at submission time
  2. Steps 2-N are created with scheduled_time = 9999-12-31T23:59:59Z (a sentinel value that makes them invisible to the Poller)
  3. When step 1 succeeds, ChainScheduler.scheduleNext() sets step 2's scheduled_time = now, making it eligible for polling
  4. This continues until the final step completes
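The sentinel technique in steps 2-3 can be sketched as follows. The JobRecord type, method names, and in-memory list are illustrative assumptions; in Ratchet the same logic runs against the persisted scheduled_time column.

```java
import java.time.Instant;
import java.util.List;

// Sketch: downstream steps carry a far-future scheduled_time so the
// Poller never selects them; each completed step "unlocks" its successor
// by rewriting that timestamp to now.
public class ChainSentinel {
    // The far-future sentinel described in the docs.
    static final Instant SENTINEL = Instant.parse("9999-12-31T23:59:59Z");

    static class JobRecord {
        Instant scheduledTime = SENTINEL;   // invisible to the Poller
    }

    // Poller-side visibility check: only jobs due now (or earlier) qualify.
    static boolean visibleToPoller(JobRecord job, Instant now) {
        return !job.scheduledTime.isAfter(now);
    }

    // On step N success: make step N+1 eligible for polling.
    static void scheduleNext(List<JobRecord> steps, int completedIndex, Instant now) {
        if (completedIndex + 1 < steps.size()) {
            steps.get(completedIndex + 1).scheduledTime = now;
        }
    }
}
```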

Failure cascading: If any step fails permanently (exhausts retries), all downstream steps are canceled recursively using depth-first traversal.
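A minimal sketch of that cascade, assuming the depends_on relation is available as an adjacency map from a job to its dependents (the representation and names are illustrative, not Ratchet's store):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.Map;

// Depth-first cancellation: every job that transitively depends on the
// permanently failed job is collected for cancellation.
public class CancelDownstream {
    // key: job id, value: ids of jobs whose depends_on points at it
    static List<String> cancelDownstream(String failedId,
                                         Map<String, List<String>> dependents) {
        List<String> canceled = new ArrayList<>();
        Deque<String> stack =
            new ArrayDeque<>(dependents.getOrDefault(failedId, List.of()));
        while (!stack.isEmpty()) {
            String id = stack.pop();
            canceled.add(id);                 // mark CANCELED in the store
            for (String child : dependents.getOrDefault(id, List.of())) {
                stack.push(child);
            }
        }
        return canceled;
    }
}
```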

WORKFLOW Jobs

Workflows extend chains with conditional branching. Instead of always executing the next step, the engine evaluates WorkflowCondition predicates against the job's result to decide which branches fire.

```java
scheduler.enqueue(() -> analyzeData())
    .thenOnSuccess(() -> archiveResults())
    .thenOnFailure(() -> notifyAdmins())
    .whenResult(score -> score > 0.8, () -> triggerHighScoreWorkflow())
    .submit();
```

Workflow branches are stored as WorkflowConditionEntity rows linked to the parent job. When the parent completes, WorkflowScheduler loads all conditions, evaluates them in priority order, and schedules the matching branches.

Multiple branches can fire from a single parent -- unlike chains (which are linear), workflows support fan-out.
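The branch-selection pass can be sketched like this. The Condition record and method names are illustrative assumptions; the point is that every matching condition fires, in priority order, rather than a single successor.

```java
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;

// Sketch of workflow fan-out: evaluate all conditions against the
// parent's result in priority order and schedule every match.
public class WorkflowEval {
    record Condition(int priority, Predicate<Object> matches, String branchJobId) {}

    static List<String> branchesToFire(Object parentResult, List<Condition> conditions) {
        return conditions.stream()
                .sorted(Comparator.comparingInt(Condition::priority))
                .filter(c -> c.matches().test(parentResult))
                .map(Condition::branchJobId)
                .toList();
    }
}
```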

See Workflows for the full condition type catalog and branching patterns.

SYSTEM Jobs

System jobs are framework-managed and cannot be created through the public API. Currently, DLQ_ALERT is the only system execution type; it is used internally to track dead letter queue alerts.

Type Routing in the Engine

When a job completes, the PostExecutionHandler routes the next action based on the job's execution type:

| Execution Type | On Success | On Permanent Failure |
|----------------|------------|----------------------|
| SINGLE | Invoke callbacks | Invoke callbacks, move to DLQ |
| BATCH_CHILD | Update parent progress | Update parent progress (as failure) |
| CHAIN_STEP | Schedule next step | Cancel downstream steps |
| WORKFLOW_BRANCH | Evaluate conditions, schedule matches | Evaluate conditions (FAILURE branches may fire) |
| RECURRING | Calculate next fire time | Calculate next fire time (retry if configured) |

This routing is the reason the internal JobExecutionType exists -- the engine needs to know not just "this is a BATCH job" but "this is a BATCH_CHILD job" to route correctly.
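The dispatch described above amounts to a switch on the execution type. This is a hypothetical sketch, not the actual PostExecutionHandler; the enum values mirror the docs, but the method and action names are assumptions.

```java
// Routing on JobExecutionType rather than the public JobType: BATCH_PARENT
// and BATCH_CHILD both map to BATCH, yet need different post-completion
// actions, which is exactly why the internal enum exists.
public class PostExecutionRouting {
    enum JobExecutionType { SINGLE, RECURRING, BATCH_PARENT, BATCH_CHILD,
                            CHAIN_STEP, WORKFLOW_BRANCH, WORKFLOW_JOIN, DLQ_ALERT }

    static String routeOnSuccess(JobExecutionType type) {
        return switch (type) {
            case SINGLE          -> "invoke-callbacks";
            case BATCH_CHILD     -> "update-parent-progress";
            case CHAIN_STEP      -> "schedule-next-step";
            case WORKFLOW_BRANCH -> "evaluate-conditions";
            case RECURRING       -> "calculate-next-fire-time";
            default              -> "no-op";
        };
    }
}
```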

  • Batches -- BatchBuilder and StreamingBatchBuilder
  • Workflows -- Conditional branching and WorkflowCondition
  • Scheduling -- Immediate, delayed, and cron-based scheduling
  • Job Lifecycle -- State machine for all job types