I recently upgraded my Rails app to Rails 8 and moved from Heroku to DigitalOcean with Kamal 2. My app runs quite a few background jobs, and after seeing poor performance with jobs running on the same server via the Solid Queue Puma plugin, I decided to dedicate a separate Droplet solely to job processing. However, I'm running into issues when deploying.
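For context, the Puma plugin I mean is Solid Queue's, which the stock Rails 8 config/puma.rb enables behind an env var (I'm quoting the default template line here rather than my own file, since I haven't customised it); I've stopped setting SOLID_QUEUE_IN_PUMA now that jobs should run on their own Droplet:

# config/puma.rb (Rails 8 default): run Solid Queue's supervisor inside Puma
plugin :solid_queue if ENV["SOLID_QUEUE_IN_PUMA"]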
Below is my deploy.yml file:
deploy_timeout: 60

# ssh:
#   log_level: debug

# Name of your application. Used to uniquely configure containers.
service: myapp

# Name of the container image.
image: myapp/myapp

# Deploy to these servers.
servers:
  web:
    - web_app_ip
  job:
    hosts:
      - new_droplet_ip
    cmd: bin/jobs

# Enable SSL auto certification via Let's Encrypt and allow for multiple apps on a single web server.
# Remove this section when using multiple web servers and ensure you terminate SSL at your load balancer.
#
# Note: If using Cloudflare, set encryption mode in SSL/TLS setting to "Full" to enable CF-to-app encryption.
proxy:
  ssl: true
  host: myapp.com
  # Proxy connects to your container on port 80 by default.
  # app_port: 3000

# Credentials for your image host.
registry:
  # Specify the registry server, if you're not using Docker Hub
  # server: registry.digitalocean.com / ghcr.io / ...
  username: username

  # Always use an access token rather than real password (pulled from .kamal/secrets).
  password:
    - KAMAL_REGISTRY_PASSWORD

# Configure builder setup.
builder:
  arch: amd64

  # Pass in additional build args needed for your Dockerfile.
  # args:
  #   RUBY_VERSION: <%= File.read('.ruby-version').strip %>

# Inject ENV variables into containers (secrets come from .kamal/secrets).
#
env:
  clear:
    DB_HOST: DB_HOST
  secret:
    - RAILS_MASTER_KEY

# Aliases are triggered with "bin/kamal <alias>". You can overwrite arguments on invocation:
# "bin/kamal logs -r job" will tail logs from the first server in the job section.
#
aliases:
  console: app exec --interactive --reuse "bin/rails console"
  shell: app exec --interactive --reuse "bash"
  logs: app logs -f
  dbc: app exec --interactive --reuse "bin/rails dbconsole"

# Use a different ssh user than root
#
# ssh:
#   user: app

# Use a persistent storage volume.
#
volumes:
  - "volume:/rails/storage"

# Bridge fingerprinted assets, like JS and CSS, between versions to avoid
# hitting 404 on in-flight requests. Combines all files from new and old
# version inside the asset_path.
#
# asset_path: /app/public/assets

# Configure rolling deploys by setting a wait time between batches of restarts.
#
# boot:
#   limit: 10 # Can also specify as a percentage of total hosts, such as "25%"
#   wait: 2

# Use accessory services (secrets come from .kamal/secrets).
#
# accessories:
#   db:
#     image: mysql:8.0
#     host: 192.168.0.2
#     port: 3306
#     env:
#       clear:
#         MYSQL_ROOT_HOST: '%'
#       secret:
#         - MYSQL_ROOT_PASSWORD
#     files:
#       - config/mysql/production.cnf:/etc/mysql/my.cnf
#       - db/production.sql:/docker-entrypoint-initdb.d/setup.sql
#     directories:
#       - data:/var/lib/mysql
#   redis:
#     image: valkey/valkey:8
#     host: 192.168.0.2
#     port: 6379
#     directories:
#       - data:/data
EDIT:
When deploying and running kamal logs -r job, I see the following error:
2025-01-20T15:30:49.548170344Z /usr/local/bundle/ruby/3.3.0/gems/activerecord-8.0.1/lib/active_record/connection_adapters/sqlite3_adapter.rb:512:in `table_structure': Could not find table 'solid_queue_recurring_tasks' (ActiveRecord::StatementInvalid)
I suspect the issue is related to the database configuration or missing migrations on the job Droplet. Any help ensuring that the job container connects to the correct database and has all the necessary tables would be greatly appreciated.
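For reference, I haven't customised the database setup, so I'm assuming the stock Rails 8 SQLite layout where Solid Queue gets its own database (this is a sketch of the default production section, abridged to primary and queue, not a copy of my actual config/database.yml):

production:
  primary:
    <<: *default
    database: storage/production.sqlite3
  queue:
    <<: *default
    database: storage/production_queue.sqlite3
    migrations_paths: db/queue_migrate

If that's right, together with the default config.solid_queue.connects_to = { database: { writing: :queue } } in production.rb, the solid_queue_* tables would come from db/queue_schema.rb rather than normal migrations, which might be relevant to why they're missing on the job Droplet.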