# Docker Compose Test Database Optimization

## The Current Problem
```yaml
# docker-compose.test.yml (CURRENT)
services:
  test-db:
    ports:
      - '5433:5432'                             # ← Single port = single container
    volumes:
      - test_db_data:/var/lib/postgresql/data   # ← Shared volume
```
**Issue:** when parallel test runs happen:

```text
Terminal 1: npm test (connects to localhost:5433)
    ↓
global-setup.js → acquire migration lock
    ↓
Terminal 2: npm test (connects to localhost:5433) ← SAME CONTAINER!
    ↓
global-setup.js → wait for migration lock... (timeout)
    ↓
FAIL ✗ "Migration table is already locked"
```
## Solution 1: Isolated Database Per Test Run (Recommended)

Create a separate container for each parallel test session, using a unique port per run.

### Implementation

Create `scripts/setup-test-db.js`:
```javascript
#!/usr/bin/env node
const { spawnSync } = require('child_process');
const fs = require('fs');
const path = require('path');

// Numeric run ID (last four digits of the timestamp) and a port in 5430-5529
const testRunId = Date.now() % 10000;
const port = 5430 + (testRunId % 100);

console.log(`🔧 Setting up isolated test database on port ${port} (Run ID: ${testRunId})`);

// Create a temporary docker-compose file for this test run
const composeContent = `
version: '3.8'
services:
  test-db-${testRunId}:
    build:
      context: ./backend/database
      dockerfile: Dockerfile
    environment:
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpassword
      POSTGRES_DB: testdb_${testRunId}
      POSTGRES_HOST_AUTH_METHOD: trust
    ports:
      - '${port}:5432'
    volumes:
      - test_db_${testRunId}:/var/lib/postgresql/data
      - /dev/shm:/dev/shm   # Use shared memory (tmpfs) for speed
    networks:
      - chainalign-test-${testRunId}
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U testuser -d testdb_${testRunId}']
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  test_db_${testRunId}:

networks:
  chainalign-test-${testRunId}:
    driver: bridge
`;

const composePath = path.join(__dirname, `docker-compose.test.${testRunId}.yml`);
fs.writeFileSync(composePath, composeContent);

// Start the container
console.log('Starting isolated database container...');
const startResult = spawnSync('docker-compose', ['-f', composePath, 'up', '-d'], {
  stdio: 'inherit',
});
if (startResult.status !== 0) {
  console.error('Failed to start database container');
  process.exit(1);
}

// Poll pg_isready inside the container until Postgres accepts connections
// ("status=running" alone is not enough: the container runs before initdb finishes)
let attempts = 0;
const maxAttempts = 30;
while (attempts < maxAttempts) {
  const healthCheck = spawnSync('docker-compose', [
    '-f', composePath,
    'exec', '-T', `test-db-${testRunId}`,
    'pg_isready', '-U', 'testuser', '-d', `testdb_${testRunId}`,
  ], { encoding: 'utf-8' });

  if (healthCheck.status === 0) {
    console.log(`✅ Database ready on port ${port}`);

    // Write connection details for the test suite; store the full compose
    // path so cleanup works regardless of the working directory
    const envContent = `
DB_HOST=localhost
DB_PORT=${port}
DB_USER=testuser
DB_PASSWORD=testpassword
DB_NAME=testdb_${testRunId}
TEST_COMPOSE_FILE=${composePath}
TEST_RUN_ID=${testRunId}
`;
    fs.writeFileSync('.env.test', envContent);
    process.exit(0);
  }

  attempts++;
  // Synchronous, cross-platform one-second sleep
  Atomics.wait(new Int32Array(new SharedArrayBuffer(4)), 0, 0, 1000);
}

console.error('Database failed to start');
process.exit(1);
```
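Deriving the run ID from the timestamp alone means two sessions started in the same millisecond window would collide on the same port. A minimal sketch of a slightly more collision-resistant variant that also mixes in the process ID (the `testPort` helper is hypothetical, not part of the script above):

```javascript
// Hypothetical helper: mix the process ID into the run ID so two sessions
// launched at the same instant still get different ports.
function testPort(base = 5430, span = 100) {
  const runId = (Date.now() + process.pid) % 10000;
  return { runId, port: base + (runId % span) };
}

const { runId, port } = testPort();
console.log(`run ${runId} -> port ${port}`); // port falls in [5430, 5529]
```

Collisions are still possible (two runs can hash to the same slot); the `up -d` failure path above is what ultimately catches a port already in use.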
### Update package.json
```json
{
"scripts": {
"test:setup": "node scripts/setup-test-db.js",
"test:cleanup": "node scripts/cleanup-test-db.js",
"test": "npm run test:setup && vitest run; npm run test:cleanup",
"test:watch": "npm run test:setup && vitest; npm run test:cleanup"
}
}
```
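One caveat with the `setup && vitest run; cleanup` pattern: the `;` makes the script's exit status that of the *cleanup* step, so CI can report green even when tests failed. A sketch of the shell behavior, using hypothetical stand-in functions rather than the real commands:

```shell
# Stand-ins for the real commands (hypothetical):
run_tests() { return 1; }         # pretend the test run failed
cleanup()   { echo "cleaned"; }

# Capture the test status before cleanup, then report it
run_tests; status=$?
cleanup
echo "exit=$status"               # a wrapper script would `exit $status`
```

In practice this logic usually lives in a small wrapper script (e.g. a hypothetical `scripts/run-tests.sh`) that `npm test` invokes, so the original test status is preserved.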
Create the cleanup script `scripts/cleanup-test-db.js`:

```javascript
#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const { spawnSync } = require('child_process');
const dotenv = require('dotenv');

// Load the config written by setup-test-db.js
if (fs.existsSync('.env.test')) {
  dotenv.config({ path: '.env.test' });

  const testRunId = process.env.TEST_RUN_ID;
  const composeFile = process.env.TEST_COMPOSE_FILE &&
    path.resolve(process.env.TEST_COMPOSE_FILE);

  if (testRunId && composeFile) {
    console.log(`🧹 Cleaning up test database (Run ID: ${testRunId})`);

    // Stop and remove the container and its volume
    spawnSync('docker-compose', ['-f', composeFile, 'down', '-v'], {
      stdio: 'inherit',
    });

    // Remove the temporary compose file and env file
    fs.unlinkSync(composeFile);
    fs.unlinkSync('.env.test');
    console.log('✅ Cleanup complete');
  }
}
```
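If a run is interrupted (Ctrl-C, cancelled CI job), the cleanup script never executes and the container lingers. A shell `trap` on EXIT is one way to guarantee cleanup; a minimal sketch, with an `echo` standing in for the real cleanup command:

```shell
cleanup() { echo "cleanup ran"; }   # stand-in for: node scripts/cleanup-test-db.js
trap cleanup EXIT                   # fires on normal exit and on interruption

echo "tests running"
# ... test commands would go here ...
```

Wrapping the `test` npm script this way means a single Ctrl-C still tears the container down instead of leaving it occupying the port.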
### Benefits

- ✅ **Completely isolated**: each test run has its own database container
- ✅ **No lock conflicts**: different ports = different containers
- ✅ **Parallel-safe**: run 10 test sessions simultaneously
- ✅ **Clean state**: database created fresh for each run
- ✅ **Automatic cleanup**: container destroyed after tests

### Drawbacks

- ⚠️ Slower first run (container startup)
- ⚠️ More Docker overhead (multiple containers)
- ⚠️ Uses more disk space temporarily
## Solution 2: In-Memory Database (Fastest)

Use tmpfs (a RAM disk) instead of a persistent volume for ultra-fast isolation.

### Updated docker-compose.test.yml
```yaml
version: '3.8'
services:
  test-db:
    build:
      context: ./backend/database
      dockerfile: Dockerfile
    environment:
      POSTGRES_USER: testuser
      POSTGRES_PASSWORD: testpassword
      POSTGRES_DB: testdb
      POSTGRES_HOST_AUTH_METHOD: trust
    ports:
      - '5433:5432'
    volumes:
      # ✅ NEW: a per-container tmpfs (in-memory) mount instead of a
      # persistent volume. (Bind-mounting the host's /dev/shm here would
      # share one directory across containers and defeat the isolation.)
      - type: tmpfs
        target: /var/lib/postgresql/data
        tmpfs:
          size: 524288000   # 500 MB max, in bytes
    networks:
      - chainalign-test-net
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U testuser -d testdb']
      interval: 5s
      timeout: 5s
      retries: 5

networks:
  chainalign-test-net:
    driver: bridge
```
Benefits
✅ Ultra-fast - No disk I/O, everything in RAM ✅ Still isolated - tmpfs persists per container lifecycle ✅ Simple - Single container, no new scripts ✅ Clean state - Memory cleared when container stops
Drawbacks
⚠️ Database lost if container crashes
⚠️ Limited by available RAM (configure size limit)
⚠️ Requires docker-compose v3.4+
## Performance Comparison

```text
Migration Time:
- Disk volume: 5-10 seconds
- tmpfs (RAM): 1-2 seconds  ← 5-10x faster!

Test Run Total Time:
- Current (shared disk): 3-4 minutes (lock contention)
- tmpfs isolated:        2-3 minutes (no contention)
- Option 1 (separate):   2-3 minutes (true isolation)
```
## Solution 3: Keep Database Running + Reset Tables (Best for Development)

For faster iteration during development, keep one database running and just reset its data.

### Create scripts/reset-test-db.js
```javascript
#!/usr/bin/env node
const db = require('knex')({
  client: 'pg',
  connection: {
    host: process.env.DB_HOST || 'localhost',
    port: Number(process.env.DB_PORT) || 5433,
    user: process.env.DB_USER || 'testuser',
    password: process.env.DB_PASSWORD || 'testpassword',
    database: process.env.DB_NAME || 'testdb',
  },
  // knex loads seed files from ./seeds by default; set `seeds.directory`
  // if your project keeps them elsewhere
});

async function resetDatabase() {
  console.log('🔄 Resetting test database (keeping structure)...');

  // Get all user tables, skipping knex's migration bookkeeping tables
  const result = await db.raw(`
    SELECT tablename FROM pg_tables
    WHERE schemaname = 'public'
      AND tablename NOT LIKE '%migrations%'
  `);

  // Truncate each table (CASCADE follows foreign keys; schema is kept)
  for (const row of result.rows) {
    try {
      await db.raw(`TRUNCATE TABLE "${row.tablename}" CASCADE`);
      console.log(`  ✓ Truncated ${row.tablename}`);
    } catch (e) {
      console.warn(`  ⚠ Failed to truncate ${row.tablename}: ${e.message}`);
    }
  }

  // Reseed
  console.log('🌱 Reseeding database...');
  await db.seed.run();

  await db.destroy();
  console.log('✅ Database reset complete');
}

resetDatabase().catch((err) => {
  console.error('❌ Reset failed:', err);
  process.exit(1);
});
```
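Truncating tables one by one costs a round-trip per table; PostgreSQL's `TRUNCATE` also accepts a comma-separated table list, which cuts the round-trips and avoids ordering surprises with `CASCADE`. A sketch of building that single statement (pure string assembly; the `quoteIdent` helper is an assumption, following the SQL rule of doubling embedded double quotes):

```javascript
// Quote a SQL identifier: wrap in double quotes, double embedded quotes.
function quoteIdent(name) {
  return `"${name.replace(/"/g, '""')}"`;
}

// Build one TRUNCATE statement covering every table at once.
function buildTruncate(tables) {
  return `TRUNCATE TABLE ${tables.map(quoteIdent).join(', ')} CASCADE`;
}

console.log(buildTruncate(['users', 'orders']));
// → TRUNCATE TABLE "users", "orders" CASCADE
```

Passed to `db.raw(...)`, this runs as a single statement, though the per-table loop above has the advantage of reporting exactly which table fails.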
### Update package.json

```json
{
  "scripts": {
    "db:start": "docker-compose -f docker-compose.test.yml up -d test-db",
    "db:stop": "docker-compose -f docker-compose.test.yml down",
    "db:reset": "node scripts/reset-test-db.js",
    "test": "npm run db:reset && vitest run",
    "test:watch": "npm run db:reset && vitest"
  }
}
```
Benefits
✅ Fastest iteration - Reuse running container ✅ No container overhead - Single persistent database ✅ Simple - Standard docker-compose ✅ Predictable state - Same database structure each run
Drawbacks
⚠️ Need to manually manage database lifecycle
⚠️ Schema changes require docker-compose down -v
⚠️ Shared state between test runs (could cause issues)
⚠️ Manual cleanup if interrupted
## Recommended Approach

### For CI/CD (GitHub Actions, etc.)

Use **Solution 1 (Isolated Databases)**:

- True isolation prevents flakiness
- Parallel test runs supported
- Safe for concurrent deployments

### For Local Development

Use **Solution 3 (Reset Tables)**:

- Fast iteration with `npm run test:watch`
- Simple workflow
- Automatic with `npm test`

### For Speed-Critical Environments

Use **Solution 2 (tmpfs)**:

- Best balance of speed and isolation
- Single config change
- Still provides container isolation
## Implementation Decision Matrix
| Scenario | Solution | Reason |
|---|---|---|
| CI/CD with parallel jobs | #1 (Isolated) | True isolation needed |
| Local dev, watch mode | #3 (Reset) | Fast iteration |
| Speed > everything | #2 (tmpfs) | RAM is faster than disk |
| Limited RAM on machine | #1 (Isolated) | tmpfs might be constrained |
| Running single test | #3 (Reset) | Simplest setup |
## Migration Path

1. **Phase 1 (now):** try Solution 2 (tmpfs): a one-line change with immediate benefit
2. **Phase 2 (if needed):** implement Solution 3 (reset) for dev velocity
3. **Phase 3 (for CI):** implement Solution 1 (isolated) for parallel CI runs
## Commands by Solution

### Solution 1: Isolated Databases

```shell
npm run test:setup   # Creates unique container
npm test             # Runs with automatic cleanup
npm run test:watch   # Watch mode with isolation
```

### Solution 2: tmpfs (In-Memory)

```shell
# Just use the standard commands
npm test
npm run test:watch
```

### Solution 3: Reset Tables

```shell
npm run db:start     # One-time: start database
npm run test         # Run tests (resets data)
npm run test:watch   # Watch mode (resets before each run)
npm run db:stop      # Cleanup
```
## Measuring Impact

Before choosing, benchmark each approach:

```shell
# Time the current setup
time npm test

# After changes, compare:
time npm test   # Should be faster, with no lock errors
```

Expected improvements:

- Current: 3-5 minutes (lock contention)
- Solution 1: 2-3 minutes (isolated)
- Solution 2: 1-2 minutes (tmpfs)
- Solution 3: 2-3 minutes (reset)
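A single `time npm test` gives one noisy sample; averaging a few runs smooths out container-startup and cache effects. A rough sketch (whole-second resolution only; `sleep 1` stands in for `npm test`):

```shell
cmd="sleep 1"     # stand-in for: npm test
runs=2

start=$(date +%s)
for i in $(seq "$runs"); do
  eval "$cmd"
done
end=$(date +%s)

echo "avg=$(( (end - start) / runs ))s"
```

For more rigorous numbers (mean, standard deviation, warmup runs), a dedicated benchmarking tool such as hyperfine is a better fit than a hand-rolled loop.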