Database Setup

Ratchet requires a database to persist jobs, execution history, and scheduling metadata. This guide covers setup for all three supported stores.

The SQL stores ship their DDL as plain SQL files bundled inside each store module's JAR. There is no Flyway or Liquibase dependency; you apply the schema using whatever mechanism your team prefers. The MongoDB store initializes its collections and indexes at startup.

PostgreSQL

Create the Database

# Connect as superuser
psql -U postgres

# Create the database and user
CREATE USER ratchet WITH PASSWORD 'your-secure-password';
CREATE DATABASE ratchet OWNER ratchet ENCODING 'UTF8';
GRANT ALL PRIVILEGES ON DATABASE ratchet TO ratchet;

# Connect to the new database
\c ratchet

# Grant schema privileges
GRANT ALL ON SCHEMA public TO ratchet;

Apply the Schema

The DDL file is at ratchet-store-postgresql/src/main/resources/ddl/postgresql-schema.sql in the source, or ddl/postgresql-schema.sql inside the JAR.

# From the source tree
psql -U ratchet -d ratchet -f ratchet-store-postgresql/src/main/resources/ddl/postgresql-schema.sql

# Or extract from the JAR
jar xf ratchet-store-postgresql-0.1.0-SNAPSHOT.jar ddl/postgresql-schema.sql
psql -U ratchet -d ratchet -f ddl/postgresql-schema.sql

Using a migration framework:

# Copy the DDL into your migration directory
cp ddl/postgresql-schema.sql src/main/resources/db/migration/V1__ratchet_schema.sql

# Run with Flyway
flyway migrate

# Or with Liquibase
liquibase update

Verify Installation

\dt scheduler_*
\dt ratchet_schema_version

You should see these scheduler tables plus ratchet_schema_version:

Table                                Purpose
scheduler_job                        Cold job metadata, payload, and terminal state
scheduler_job_queue                  Hot executable queue state
scheduler_business_key_reservation   Active business-key reservation guard
scheduler_job_tag                    Tags for categorization
scheduler_job_execution              Per-attempt execution history
scheduler_job_log                    Optional per-job log entries if your application persists JobLogLine events
scheduler_batch                      Batch progress tracking
scheduler_batch_metrics              Batch performance metrics
scheduler_job_archive                Archived completed/failed jobs
scheduler_node                       Cluster node heartbeats
scheduler_lock                       Distributed locks
scheduler_resource_limit             Resource concurrency config
scheduler_resource_permit            Active resource permits
scheduler_workflow_condition         Workflow branching conditions
scheduler_dlq_alerts                 DLQ alert tracking
ratchet_schema_version               Applied schema migration/checksum tracking

DataSource Configuration

WildFly

# WildFly CLI
/subsystem=datasources/data-source=RatchetDS:add( \
    jndi-name=java:/RatchetDS, \
    driver-name=postgresql, \
    connection-url=jdbc:postgresql://localhost:5432/ratchet, \
    user-name=ratchet, \
    password=your-secure-password, \
    min-pool-size=5, \
    max-pool-size=20, \
    valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker)

Open Liberty

<!-- server.xml -->
<dataSource id="RatchetDS" jndiName="java:/RatchetDS">
    <jdbcDriver libraryRef="postgresLib"/>
    <properties.postgresql serverName="localhost"
                           portNumber="5432"
                           databaseName="ratchet"
                           user="ratchet"
                           password="your-secure-password"/>
    <connectionManager minPoolSize="5" maxPoolSize="20"/>
</dataSource>

<library id="postgresLib">
    <fileset dir="${shared.resource.dir}/jdbc" includes="postgresql-*.jar"/>
</library>

Payara / GlassFish

# asadmin
create-jdbc-connection-pool \
    --datasourceclassname=org.postgresql.ds.PGSimpleDataSource \
    --restype=javax.sql.DataSource \
    --property=serverName=localhost:portNumber=5432:databaseName=ratchet:user=ratchet:password=your-secure-password \
    RatchetPool

create-jdbc-resource --connectionpoolid=RatchetPool java:/RatchetDS

PostgreSQL-Specific Notes

  • Ratchet uses SKIP LOCKED for lock-free job claiming across multiple nodes
  • Generated columns extract target_class and method_name from the JSON payload
  • A partial unique index enforces business key uniqueness for active jobs only (PENDING, RUNNING, PAUSED)
  • The payload column is TEXT rather than JSONB, but you can still query into the payload by casting: payload::jsonb ->> 'target'
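The partial unique index's behavior can be illustrated with a small in-memory sketch (the BusinessKeyGuard class below is hypothetical, not part of the Ratchet API): a second PENDING job with the same business key is rejected, but once the first job reaches a terminal state the key becomes reusable.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class BusinessKeyGuard {
    // The statuses the partial index treats as "active"
    static final Set<String> ACTIVE = Set.of("PENDING", "RUNNING", "PAUSED");

    // business_key -> status of the job currently holding it
    private final Map<String, String> keys = new HashMap<>();

    /** Mirrors the index: inserting fails only while an active job holds the key. */
    public boolean tryReserve(String businessKey, String status) {
        String current = keys.get(businessKey);
        if (current != null && ACTIVE.contains(current)) {
            return false; // duplicate active job: unique violation
        }
        keys.put(businessKey, status);
        return true;
    }

    /** A terminal state drops the row out of the partial index, freeing the key. */
    public void complete(String businessKey) {
        keys.put(businessKey, "COMPLETED");
    }

    public static void main(String[] args) {
        BusinessKeyGuard guard = new BusinessKeyGuard();
        System.out.println(guard.tryReserve("invoice-42", "PENDING")); // true
        System.out.println(guard.tryReserve("invoice-42", "PENDING")); // false
        guard.complete("invoice-42");
        System.out.println(guard.tryReserve("invoice-42", "PENDING")); // true
    }
}
```

This is why the index is partial: a full unique index on business_key would also block resubmitting a job whose predecessor already completed.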

MySQL

Create the Database

mysql -u root -p

CREATE DATABASE ratchet CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
CREATE USER 'ratchet'@'%' IDENTIFIED BY 'your-secure-password';
GRANT ALL PRIVILEGES ON ratchet.* TO 'ratchet'@'%';
FLUSH PRIVILEGES;

Apply the Schema

# From the source tree
mysql -u ratchet -p ratchet < ratchet-store-mysql/src/main/resources/ddl/mysql-schema.sql

# Or extract from the JAR
jar xf ratchet-store-mysql-0.1.0-SNAPSHOT.jar ddl/mysql-schema.sql
mysql -u ratchet -p ratchet < ddl/mysql-schema.sql

Verify Installation

SHOW TABLES LIKE 'scheduler_%';
SHOW TABLES LIKE 'ratchet_schema_version';

You should see the same core tables as PostgreSQL, with MySQL-specific column types (ENUM, JSON, GENERATED ALWAYS columns).

DataSource Configuration

WildFly

/subsystem=datasources/data-source=RatchetDS:add( \
    jndi-name=java:/RatchetDS, \
    driver-name=mysql, \
    connection-url=jdbc:mysql://localhost:3306/ratchet, \
    user-name=ratchet, \
    password=your-secure-password, \
    min-pool-size=5, \
    max-pool-size=20, \
    transaction-isolation=TRANSACTION_READ_COMMITTED, \
    valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker)

Open Liberty

<!-- server.xml -->
<dataSource id="RatchetDS" jndiName="java:/RatchetDS" isolationLevel="TRANSACTION_READ_COMMITTED">
    <jdbcDriver libraryRef="mysqlLib"/>
    <properties.mysql serverName="localhost"
                      portNumber="3306"
                      databaseName="ratchet"
                      user="ratchet"
                      password="your-secure-password"/>
    <connectionManager minPoolSize="5" maxPoolSize="20"/>
</dataSource>

MySQL Isolation Level

MySQL defaults to REPEATABLE READ, which acquires gap locks during SELECT ... FOR UPDATE scans; those gap locks block concurrent inserts and cause lock wait timeouts under production load. Always configure READ COMMITTED isolation via one of:

  • DataSource property: transaction-isolation=TRANSACTION_READ_COMMITTED
  • JDBC URL parameter: ?sessionVariables=transaction_isolation='READ-COMMITTED'
  • persistence.xml: <property name="hibernate.connection.isolation" value="2"/>
  • WildFly -ds.xml: <transaction-isolation>TRANSACTION_READ_COMMITTED</transaction-isolation>
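The value 2 in the persistence.xml option is not arbitrary: hibernate.connection.isolation takes the raw JDBC isolation constant, and READ COMMITTED is defined as 2 on java.sql.Connection. A one-line check confirms it:

```java
import java.sql.Connection;

public class IsolationCheck {
    public static void main(String[] args) {
        // hibernate.connection.isolation expects this raw JDBC constant value
        System.out.println(Connection.TRANSACTION_READ_COMMITTED); // prints 2
    }
}
```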

MySQL-Specific Notes

  • Uses ENUM types for status, job type, and backoff policy columns
  • Uses JSON column type for payload, params, and result data
  • GENERATED ALWAYS AS ... STORED columns extract target_class and method_name from payload JSON
  • active_business_key is a generated column that enforces uniqueness only for active jobs
  • All tables use InnoDB engine with utf8mb4_unicode_ci collation

MongoDB

Create the Database

mongosh

use ratchet

db.createUser({
    user: "ratchet",
    pwd: "your-secure-password",
    roles: [{ role: "readWrite", db: "ratchet" }]
});

Initialize Collections and Indexes

MongoDB does not require a DDL file — the store module creates collections and indexes automatically on startup. However, you can pre-create the same collections and indexes for faster initial startup:

// mongosh
use ratchet;

db.createCollection("scheduler_job");
db.createCollection("scheduler_batch");
db.createCollection("scheduler_batch_metrics");
db.createCollection("scheduler_job_execution");
db.createCollection("scheduler_job_log");
db.createCollection("scheduler_job_archive");
db.createCollection("scheduler_node");
db.createCollection("scheduler_lock");
db.createCollection("scheduler_workflow_condition");
db.createCollection("scheduler_dlq_alerts");
db.createCollection("scheduler_resource_permit");
db.createCollection("scheduler_resource_limit");

// Key indexes
db.scheduler_job.createIndex({ status: 1, priority: -1, scheduled_time: 1 }, { name: "idx_job_poll_composite" });
db.scheduler_job.createIndex({ status: 1, job_type: 1, priority: -1, scheduled_time: 1, _id: 1 }, { name: "idx_job_claim_exec" });
db.scheduler_job.createIndex({ job_type: 1, status: 1, next_fire: 1 }, { name: "idx_job_recurring_composite" });
db.scheduler_job.createIndex({ status: 1, job_type: 1, priority: -1, next_fire: 1, _id: 1 }, { name: "idx_job_claim_recurring" });
db.scheduler_job.createIndex({ idempotency_key: 1 }, { name: "idx_job_idempotency_key", unique: true });
db.scheduler_job.createIndex(
    { business_key: 1 },
    {
        name: "idx_job_active_business_key",
        unique: true,
        partialFilterExpression: {
            status: { $in: ["PENDING", "RUNNING", "PAUSED"] },
            business_key: { $type: "string" }
        }
    }
);
db.scheduler_job.createIndex({ tags: 1 }, { name: "idx_job_tags" });
db.scheduler_job_execution.createIndex({ job_id: 1 }, { name: "idx_execution_job_id" });
db.scheduler_node.createIndex({ heartbeat_ts: 1 }, { name: "idx_node_heartbeat" });
db.scheduler_lock.createIndex({ expires_at: 1 }, { name: "idx_lock_ttl", expireAfterSeconds: 0 });
db.scheduler_job_log.createIndex({ job_id: 1, ts: 1 }, { name: "idx_log_job_ts" });

Connection Configuration

Ratchet does not define its own MongoDB URI property. Configure the connection through your application runtime and expose a MongoDatabase bean. For example:

@Produces
@ApplicationScoped
public MongoDatabase mongoDatabase() {
    return MongoClients.create("mongodb://ratchet:password@localhost:27017")
            .getDatabase("ratchet");
}

Connection Pool Sizing

The connection pool should be sized based on the number of executor threads plus overhead for the polling engine and administrative queries.

Formula

pool_size = worker_threads + polling_threads + admin_overhead

A reasonable starting point:

Executor Threads    Recommended Pool Size
8 (default)         15-20
16                  25-30
32                  40-50
64+                 executor threads * 1.5
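As a sketch, the formula and the 1.5x rule of thumb from the table can be folded into one helper. The polling-thread and admin-overhead counts used here are illustrative assumptions, not Ratchet defaults:

```java
public class PoolSizing {
    /**
     * pool_size = worker_threads + polling_threads + admin_overhead,
     * floored at 1.5x the worker count for large executors (per the table above).
     */
    static int recommendedPoolSize(int workerThreads, int pollingThreads, int adminOverhead) {
        int base = workerThreads + pollingThreads + adminOverhead;
        int scaled = (int) Math.ceil(workerThreads * 1.5);
        return Math.max(base, scaled);
    }

    public static void main(String[] args) {
        System.out.println(recommendedPoolSize(8, 2, 5));  // prints 15
        System.out.println(recommendedPoolSize(64, 2, 5)); // prints 96
    }
}
```

Whatever numbers you land on, keep the total below the database server's connection limit across all application nodes combined.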

PostgreSQL Connection Limits

Check the server's max connections:

SHOW max_connections;  -- Default: 100

Increase if needed:

ALTER SYSTEM SET max_connections = 200;
-- Restart PostgreSQL

MySQL Connection Limits

SHOW VARIABLES LIKE 'max_connections';  -- Default: 151

# my.cnf
[mysqld]
max_connections = 200

Schema Upgrades

Ratchet uses CREATE TABLE IF NOT EXISTS and CREATE INDEX IF NOT EXISTS in its DDL. This means you can safely re-run the schema file against an existing database — it will create any missing tables or indexes without modifying existing ones.

For schema changes between Ratchet versions:

  1. Check the release notes for migration instructions
  2. Back up your database
  3. Apply any migration SQL provided in the release
  4. Re-run the full schema DDL to create any new tables/indexes

Since Ratchet does not bundle a migration framework, you are free to manage schema changes using whatever tool your team already uses (Flyway, Liquibase, manual scripts, etc.).

If you want Ratchet to run its own packaged ddl/migrations/V###__description.sql files at startup, wire the optional SchemaMigrator utility from a lifecycle hook:

@ApplicationScoped
class RatchetSchemaMigrationHook implements SchedulerLifecycleHook {

    @Inject DataSource dataSource;
    @Inject RatchetOptions options;

    @Override
    public void beforeStart() throws Exception {
        if (options.schema().autoMigrate()) {
            new SchemaMigrator(
                    dataSource,
                    options.schema().migrationDialect(),
                    options.schema().migrationPrefix())
                .migrate();
        }
    }
}

SchemaMigrator currently supports mysql and postgresql, uses ratchet_schema_version for checksum tracking, and must run before Poller.init(). The built-in RatchetLifecycle invokes SchedulerLifecycleHook.beforeStart before scheduler startup, so this hook is the safe integration point.

Backup Strategy

PostgreSQL

# Logical backup
pg_dump -Fc -v ratchet > ratchet-backup.dump

# Restore
pg_restore -d ratchet ratchet-backup.dump

MySQL

# Logical backup
mysqldump -u ratchet -p ratchet > ratchet-backup.sql

# Restore
mysql -u ratchet -p ratchet < ratchet-backup.sql

MongoDB

# Backup
mongodump --db ratchet --out /backup/

# Restore
mongorestore --db ratchet /backup/ratchet/

For all databases, schedule regular backups and test restoration periodically. In production, consider point-in-time recovery using WAL archiving (PostgreSQL), binary log (MySQL), or oplog (MongoDB).
