Configuration

Configure BunnyDB using environment variables and mirror creation parameters. This page covers all configuration options for running BunnyDB in development and production environments.

Environment Variables

BunnyDB is configured through environment variables. Set these in your Docker Compose file, Kubernetes manifests, or shell environment.

Catalog Database

The catalog database stores BunnyDB’s metadata including peers, mirrors, users, and logs.

| Variable | Default | Description |
|---|---|---|
| BUNNY_CATALOG_HOST | catalog | Catalog PostgreSQL hostname |
| BUNNY_CATALOG_PORT | 5432 | Catalog PostgreSQL port |
| BUNNY_CATALOG_USER | postgres | Catalog database username |
| BUNNY_CATALOG_PASSWORD | bunnydb | Catalog database password |
| BUNNY_CATALOG_DATABASE | bunnydb | Catalog database name |

Example:

BUNNY_CATALOG_HOST=catalog-db.example.com
BUNNY_CATALOG_PORT=5432
BUNNY_CATALOG_USER=bunny_admin
BUNNY_CATALOG_PASSWORD=secure-catalog-password
BUNNY_CATALOG_DATABASE=bunnydb_catalog

The catalog database is created automatically on first startup if it doesn’t exist. BunnyDB will run migrations to set up the required schema.
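
Before the first startup, you can confirm the catalog host is reachable with psql (assuming psql is installed, using the example credentials above; connect to the postgres database since the catalog database may not exist yet):

# Verify catalog connectivity before starting BunnyDB
PGPASSWORD=secure-catalog-password psql \
  -h catalog-db.example.com -p 5432 -U bunny_admin \
  -d postgres -c 'SELECT version();'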

Temporal

BunnyDB uses Temporal for workflow orchestration. The API and worker connect to a Temporal server.

| Variable | Default | Description |
|---|---|---|
| TEMPORAL_HOST_PORT | temporal:7233 | Temporal server host:port |
| TEMPORAL_NAMESPACE | default | Temporal namespace to use |

Example:

TEMPORAL_HOST_PORT=temporal.example.com:7233
TEMPORAL_NAMESPACE=production

⚠️ Temporal must be running before starting BunnyDB. Use the included docker-compose.yml or connect to an existing Temporal cluster.

Worker

The worker process executes replication workflows.

| Variable | Default | Description |
|---|---|---|
| BUNNY_WORKER_TASK_QUEUE | bunny-worker | Temporal task queue name |

Example:

BUNNY_WORKER_TASK_QUEUE=production-worker-queue

The task queue name must match between the API and worker. Multiple workers can share the same task queue for horizontal scaling.

Authentication

Configure JWT-based authentication and the default admin user.

| Variable | Default | Description |
|---|---|---|
| BUNNY_JWT_SECRET | Auto-generated | Secret key for signing JWT tokens |
| BUNNY_ADMIN_USER | admin | Default admin username |
| BUNNY_ADMIN_PASSWORD | admin | Default admin password |

Example:

BUNNY_JWT_SECRET=your-very-long-random-secret-key-here-min-32-chars
BUNNY_ADMIN_USER=superadmin
BUNNY_ADMIN_PASSWORD=strong-password-here

🚫 Security Critical: In production, always set a strong BUNNY_JWT_SECRET and change the default admin password. The auto-generated JWT secret is not persistent across container restarts.

Generating a Secure JWT Secret

Use a cryptographically secure random string of at least 32 characters:

openssl rand -base64 32

Mirror Configuration Parameters

When creating a mirror via the API, these parameters control snapshot and CDC behavior.

Snapshot Parameters

Control the initial snapshot phase where existing data is copied to the destination.

| Parameter | Type | Default | Description |
|---|---|---|---|
| snapshot_num_rows_per_partition | number | 500000 | Number of rows per partition during parallel snapshot |
| snapshot_max_parallel_workers | number | 4 | Maximum parallel workers per table snapshot |
| snapshot_num_tables_in_parallel | number | 4 | Number of tables to snapshot simultaneously |
| do_initial_snapshot | boolean | true | Whether to perform the initial snapshot before CDC |

Example:

{
  "snapshot_num_rows_per_partition": 1000000,
  "snapshot_max_parallel_workers": 8,
  "snapshot_num_tables_in_parallel": 2,
  "do_initial_snapshot": true
}
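
As a sketch of how these parameters might be passed at mirror creation time: the request below assumes the API listens on port 8112 (as in the Docker Compose example later on), and the endpoint path, the "name" field, and the token variable are illustrative assumptions rather than a confirmed API shape.

# Hypothetical endpoint path; the snapshot parameters are the documented ones
curl -X POST http://localhost:8112/api/v1/mirrors \
  -H "Authorization: Bearer $BUNNY_JWT" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "prod_analytics",
        "snapshot_num_rows_per_partition": 1000000,
        "snapshot_max_parallel_workers": 8,
        "snapshot_num_tables_in_parallel": 2,
        "do_initial_snapshot": true
      }'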

Tuning Snapshot Performance

Small databases (< 1GB)

  • snapshot_num_rows_per_partition: 500000
  • snapshot_max_parallel_workers: 4
  • snapshot_num_tables_in_parallel: 4

Medium databases (1-100GB)

  • snapshot_num_rows_per_partition: 1000000
  • snapshot_max_parallel_workers: 8
  • snapshot_num_tables_in_parallel: 2

Large databases (> 100GB)

  • snapshot_num_rows_per_partition: 2000000
  • snapshot_max_parallel_workers: 16
  • snapshot_num_tables_in_parallel: 1

⚠️ Higher parallelism increases load on the source and destination databases. Monitor CPU, memory, and I/O during the snapshot to avoid impacting production workloads.
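
As a concrete example, the large-database preset above expressed as mirror parameters:

{
  "snapshot_num_rows_per_partition": 2000000,
  "snapshot_max_parallel_workers": 16,
  "snapshot_num_tables_in_parallel": 1,
  "do_initial_snapshot": true
}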

CDC Parameters

Control the continuous replication phase.

| Parameter | Type | Default | Description |
|---|---|---|---|
| cdc_sync_interval_seconds | number | 60 | How often to poll for new changes (seconds) |
| cdc_batch_size | number | 10000 | Maximum changes per batch |

Example:

{
  "cdc_sync_interval_seconds": 30,
  "cdc_batch_size": 5000
}

Tuning CDC Performance

Low-latency replication

  • cdc_sync_interval_seconds: 10-30
  • cdc_batch_size: 1000-5000
  • Trade-off: More frequent polling, higher overhead

High-throughput replication

  • cdc_sync_interval_seconds: 60-300
  • cdc_batch_size: 10000-50000
  • Trade-off: Larger batches, higher latency

Balanced (default)

  • cdc_sync_interval_seconds: 60
  • cdc_batch_size: 10000

Smaller cdc_sync_interval_seconds reduces replication lag but increases database load. Larger cdc_batch_size improves throughput but may cause longer transaction times on the destination.
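
As a rough capacity check, sustained throughput is bounded by about cdc_batch_size / cdc_sync_interval_seconds, assuming one full batch is applied per interval: at the defaults that is 10000 / 60 ≈ 166 changes/sec, so a source that writes faster than this will accumulate lag until you raise the batch size or poll more often.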

Publication and Slot Names

Optionally specify custom names for PostgreSQL logical replication resources.

| Parameter | Type | Default | Description |
|---|---|---|---|
| publication_name | string | Auto-generated | PostgreSQL publication name on source |
| replication_slot_name | string | Auto-generated | PostgreSQL replication slot name on source |

Example:

{
  "publication_name": "bunny_pub_prod_analytics",
  "replication_slot_name": "bunny_slot_prod_analytics"
}

If not specified, BunnyDB auto-generates names based on the mirror name. Custom names are useful for managing multiple BunnyDB instances or integrating with existing replication setups.
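
To avoid name collisions, you can list the publications and slots that already exist on the source (the LIKE pattern assumes the bunny_ prefix used in the example above):

-- Existing publications and replication slots on the source
SELECT pubname FROM pg_publication;
SELECT slot_name, plugin, active
FROM pg_replication_slots
WHERE slot_name LIKE 'bunny%';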

Source Database Requirements

The source PostgreSQL database must be configured for logical replication.

Required PostgreSQL Settings

Add these to postgresql.conf on the source database:

# Enable logical replication
wal_level = logical
 
# Replication slots (one per mirror)
max_replication_slots = 10
 
# WAL senders (one per active replication slot)
max_wal_senders = 10
 
# Optional: Increase WAL size for high-traffic databases
wal_keep_size = 1GB

After changing these settings, restart PostgreSQL:

sudo systemctl restart postgresql
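
If you cannot edit postgresql.conf directly (for example, on a host that only exposes a SQL interface), the same settings can be applied with ALTER SYSTEM; note that wal_level, max_replication_slots, and max_wal_senders still require a restart to take effect:

ALTER SYSTEM SET wal_level = 'logical';
ALTER SYSTEM SET max_replication_slots = 10;
ALTER SYSTEM SET max_wal_senders = 10;
ALTER SYSTEM SET wal_keep_size = '1GB';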

Verify Configuration

-- Check wal_level
SHOW wal_level;
-- Should return: logical
 
-- Check replication slots capacity
SHOW max_replication_slots;
SHOW max_wal_senders;
 
-- View active replication slots
SELECT * FROM pg_replication_slots;

User Permissions

The replication user needs these permissions:

-- Create replication user
CREATE USER bunny_replication WITH REPLICATION PASSWORD 'secure-password';
 
-- Grant permissions on database
GRANT CONNECT ON DATABASE production TO bunny_replication;
 
-- Grant schema permissions
GRANT USAGE ON SCHEMA public TO bunny_replication;
 
-- Grant table permissions (for all tables to replicate)
GRANT SELECT ON ALL TABLES IN SCHEMA public TO bunny_replication;
 
-- Grant permissions for future tables
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT ON TABLES TO bunny_replication;
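
To confirm the role was created with the replication attribute:

-- rolreplication should be t
SELECT rolname, rolreplication
FROM pg_roles
WHERE rolname = 'bunny_replication';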

pg_hba.conf

Allow replication connections in pg_hba.conf:

# Allow replication from BunnyDB worker
host    replication     bunny_replication    10.0.0.0/8    md5
host    production      bunny_replication    10.0.0.0/8    md5

Reload PostgreSQL after changes:

sudo systemctl reload postgresql

Destination Database Requirements

The destination database has minimal requirements:

  • PostgreSQL 10 or later (same version as source recommended)
  • User with CREATE, INSERT, UPDATE, DELETE permissions on destination schemas
  • Sufficient disk space for replicated data

Destination User Permissions

-- Create destination user
CREATE USER bunny_destination WITH PASSWORD 'secure-password';
 
-- Grant database permissions
GRANT CONNECT ON DATABASE analytics TO bunny_destination;
 
-- Grant schema permissions
GRANT CREATE, USAGE ON SCHEMA public TO bunny_destination;
 
-- Grant table permissions
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO bunny_destination;
 
-- Grant permissions for future tables (BunnyDB creates tables during snapshot)
ALTER DEFAULT PRIVILEGES IN SCHEMA public
  GRANT SELECT, INSERT, UPDATE, DELETE ON TABLES TO bunny_destination;

Production Recommendations

Security

🚫 Critical Security Settings:

  1. Set a strong BUNNY_JWT_SECRET (32+ random characters)
  2. Change default admin password via BUNNY_ADMIN_PASSWORD
  3. Use SSL/TLS for all database connections (ssl_mode: require or higher)
  4. Deploy BunnyDB in a private network, not exposed to the public internet
  5. Use strong passwords for all database users
  6. Enable PostgreSQL SSL: ssl = on in postgresql.conf

SSL Configuration

Configure SSL for peer connections:

{
  "name": "production-db",
  "host": "prod-db.example.com",
  "port": 5432,
  "user": "bunny_replication",
  "password": "secure-password",
  "database": "production",
  "ssl_mode": "verify-full"
}

SSL modes (in order of security):

  • disable - No SSL (development only)
  • prefer - Use SSL if available
  • require - Require SSL connection
  • verify-ca - Require SSL and verify CA certificate
  • verify-full - Require SSL and verify hostname (most secure)
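
Before configuring a strict ssl_mode on a peer, you can confirm the server accepts such connections with psql (the CA certificate path below is an example):

# \conninfo prints the negotiated SSL state on success
psql "host=prod-db.example.com port=5432 dbname=production \
      user=bunny_replication sslmode=verify-full \
      sslrootcert=/etc/ssl/certs/prod-db-ca.pem" -c '\conninfo'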

Resource Sizing

Small deployment (< 10GB, < 100 changes/sec)

  • API: 512MB RAM, 1 CPU
  • Worker: 1GB RAM, 2 CPU
  • Catalog DB: 512MB RAM, 1 CPU

Medium deployment (10-100GB, 100-1000 changes/sec)

  • API: 1GB RAM, 2 CPU
  • Worker: 2GB RAM, 4 CPU
  • Catalog DB: 1GB RAM, 2 CPU

Large deployment (> 100GB, > 1000 changes/sec)

  • API: 2GB RAM, 4 CPU
  • Worker: 4GB+ RAM, 8+ CPU (scale horizontally with multiple workers)
  • Catalog DB: 2GB RAM, 4 CPU

Batch Size Tuning

Monitor these metrics to tune cdc_batch_size:

  • Too small (< 1000): High overhead, frequent polls, low throughput
  • Optimal (1000-50000): Balanced latency and throughput
  • Too large (> 50000): Long transactions, destination lock contention, high memory

Signs your batch size is too large:

  • Destination database lock timeouts
  • High memory usage on worker
  • Long transaction durations (> 30 seconds)
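
One way to check for the long-transaction symptom is to watch open transactions on the destination:

-- Open transactions ordered by age; long-running rows during CDC apply
-- suggest the batch size is too large
SELECT pid, now() - xact_start AS xact_age, state, left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_age DESC;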

Monitoring

Set up monitoring for:

  • Replication lag (LSN difference between source and mirror)
  • Batch throughput (changes/second)
  • Error count and error rate
  • Replication slot disk usage on source
  • Worker CPU and memory usage

See Monitoring for detailed guidance.
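
For slot disk usage specifically, this query reports how much WAL each slot is retaining on the source (pg_current_wal_lsn and pg_wal_lsn_diff are available on PostgreSQL 10+):

SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;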

Docker Compose Example

Complete docker-compose.yml with production-ready settings:

version: '3.8'
 
services:
  catalog:
    image: postgres:14
    environment:
      POSTGRES_USER: bunny_catalog
      POSTGRES_PASSWORD: secure-catalog-password
      POSTGRES_DB: bunnydb
    volumes:
      - catalog-data:/var/lib/postgresql/data
    networks:
      - bunnydb
 
  temporal:
    image: temporalio/auto-setup:latest
    environment:
      - DB=postgresql
      - DB_PORT=5432
      - POSTGRES_USER=temporal
      - POSTGRES_PWD=temporal-password
      - POSTGRES_SEEDS=temporal-db
    depends_on:
      - temporal-db
    networks:
      - bunnydb
 
  temporal-db:
    image: postgres:14
    environment:
      POSTGRES_USER: temporal
      POSTGRES_PASSWORD: temporal-password
    volumes:
      - temporal-data:/var/lib/postgresql/data
    networks:
      - bunnydb
 
  bunny-api:
    image: bunnydb/api:latest
    ports:
      - "8112:8112"
    environment:
      BUNNY_CATALOG_HOST: catalog
      BUNNY_CATALOG_PORT: 5432
      BUNNY_CATALOG_USER: bunny_catalog
      BUNNY_CATALOG_PASSWORD: secure-catalog-password
      BUNNY_CATALOG_DATABASE: bunnydb
      TEMPORAL_HOST_PORT: temporal:7233
      TEMPORAL_NAMESPACE: default
      BUNNY_WORKER_TASK_QUEUE: bunny-worker
      BUNNY_JWT_SECRET: your-very-long-random-secret-key-here-min-32-chars
      BUNNY_ADMIN_USER: admin
      BUNNY_ADMIN_PASSWORD: strong-admin-password
    depends_on:
      - catalog
      - temporal
    networks:
      - bunnydb
 
  bunny-worker:
    image: bunnydb/worker:latest
    environment:
      BUNNY_CATALOG_HOST: catalog
      BUNNY_CATALOG_PORT: 5432
      BUNNY_CATALOG_USER: bunny_catalog
      BUNNY_CATALOG_PASSWORD: secure-catalog-password
      BUNNY_CATALOG_DATABASE: bunnydb
      TEMPORAL_HOST_PORT: temporal:7233
      TEMPORAL_NAMESPACE: default
      BUNNY_WORKER_TASK_QUEUE: bunny-worker
    depends_on:
      - catalog
      - temporal
    networks:
      - bunnydb
    deploy:
      replicas: 2  # Run multiple workers for high availability
 
volumes:
  catalog-data:
  temporal-data:
 
networks:
  bunnydb:
    driver: bridge
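
Bring the stack up and confirm all services start cleanly:

docker compose up -d
docker compose ps
docker compose logs -f bunny-api   # watch startup and catalog migrations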

Environment Variables Reference

Quick reference table of all environment variables:

| Variable | Component | Default | Required |
|---|---|---|---|
| BUNNY_CATALOG_HOST | API, Worker | catalog | No |
| BUNNY_CATALOG_PORT | API, Worker | 5432 | No |
| BUNNY_CATALOG_USER | API, Worker | postgres | No |
| BUNNY_CATALOG_PASSWORD | API, Worker | bunnydb | No |
| BUNNY_CATALOG_DATABASE | API, Worker | bunnydb | No |
| TEMPORAL_HOST_PORT | API, Worker | temporal:7233 | No |
| TEMPORAL_NAMESPACE | API, Worker | default | No |
| BUNNY_WORKER_TASK_QUEUE | API, Worker | bunny-worker | No |
| BUNNY_JWT_SECRET | API | Auto-generated | No (Yes for production) |
| BUNNY_ADMIN_USER | API | admin | No |
| BUNNY_ADMIN_PASSWORD | API | admin | No (Change for production) |