We use the delayed_job gem, and I’m wondering what I need to do to make it run in a containerized environment.
As part of our current deployment process, we issue a delayed_job restart command each time we deploy. This (re)launches a process that stays running and handles the tasks assigned to it. Since this is a separate process, does it need to be defined as a separate service, similar to the way Aptible recommends running cron jobs?
We do recommend running delayed_job as a separate service (or as a separate app altogether), as opposed to running it in the background next to your web app server.
The reason is that if delayed_job is running in the background and crashes, your Container will not exit, so Enclave’s Container Recovery will not be able to restart it for you. Running delayed_job in the foreground, as its own service, avoids this problem. It also makes it easier to scale your web and background containers independently.
From delayed_job’s docs, it appears that to run it in the foreground you’d simply set up a separate service whose command is rake jobs:work.
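For instance, assuming your services are defined in a Procfile (the web command here is purely illustrative), it could look something like:

web: bundle exec rails server -b 0.0.0.0 -p 3000
worker: bundle exec rake jobs:work

Each entry becomes its own service, so the worker containers can be scaled independently of web.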
Does this help? Let me know if you have any follow-up questions!
Hi Thomas,
Thanks for your reply. I was able to define a service to execute the rake task, but I cannot get the container to stay alive: it starts, finds no work to do, and quickly exits.
In our current server-based environment, we use the script below, called with bin/delayed_job restart on each deploy. That restarts a persistent process, and that process lets the periodic tasks run as scheduled. (From what I’ve read, the script is equivalent to calling rake jobs:work, except that it can also daemonize the worker, which is what you want on a traditional server.)
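For reference, the stock bin/delayed_job script created by the delayed_job generator looks roughly like this:

#!/usr/bin/env ruby

# Load the Rails environment, then hand the start/stop/restart argument
# to delayed_job's daemon wrapper.
require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
require 'delayed/command'
Delayed::Command.new(ARGV).daemonize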
I can’t figure out why, in the containerized context, this same script (called with bin/delayed_job start) will not create a persistent process. I’ve tried it with and without the .daemonize, and in neither case does the process persist. I must be missing something…
Indeed, in the context of an Enclave service (or, more generally, a Docker container), you do want to define your service commands so that they run in the foreground; you do not want to run them in the background.
In a containerized context, the command specified when launching a container is expected to run in the foreground. If the command exits (e.g., if it daemonizes into the background), then the container itself exits, killing the daemon process with it.
So, I recommend defining this worker service command as rake jobs:work (or bundle exec rake jobs:work) instead of using the daemonizing script you mentioned.
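One more thing that may help: the jobs:work task also honors the QUEUE / QUEUES environment variables, so you could dedicate a worker service to specific queues (the queue names below are just examples):

QUEUES=mailers,default bundle exec rake jobs:work

That also gives you a way to scale different kinds of background work independently.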
Okay… so I could have sworn that I tried that before, but I guess not, because just rake jobs:work does keep the container running…
The issue I’m having now is logging. I can confirm that the delayed_job service is necessary for certain work, such as sending a registration confirmation email (if I scale that container to zero, the email doesn’t get sent), but the log stream from the container does not show this work being done, and I’d like to be able to capture that output. I’ve followed this example I came across:
desc "switch rails logger to stdout"
task :verbose => [:environment] do
Rails.logger = Logger.new(STDOUT)
end
desc "switch rails logger log level to debug"
task :debug => [:environment, :verbose] do
Rails.logger.level = Logger::DEBUG
end
and then changed the service definition to rake verbose jobs:work, but no dice. I’ve also added
Delayed::Worker.logger = Logger.new(STDOUT)
to my production.rb file, but again it didn’t help.
(Oddly, though, I am able to see messages from STDERR in the logs, but nothing from STDOUT.)
Probably something easy, but since I’m not a developer, it’s all Greek to me.
That was the ticket. I think I couldn’t see work being logged until work was actually being thrown at it by my supercronic container.
Also, I added STDOUT.sync = true to my production.rb, and that makes the output show up in Kibana in (close to) real time rather than sitting in a buffer and coming over in chunks.
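In case it helps anyone else, here’s a rough sketch of the kind of production.rb settings involved (illustrative, not a verbatim copy of our file):

# config/environments/production.rb (sketch)
Rails.application.configure do
  # ... existing production settings ...

  # Flush STDOUT as soon as it is written, so log lines reach the
  # container's log stream (and Kibana) in near real time instead of
  # arriving in buffered chunks.
  STDOUT.sync = true

  # Route Rails' own logs to STDOUT (the rake :verbose task above does
  # the same thing at worker startup).
  config.logger = ActiveSupport::Logger.new(STDOUT)
end

# delayed_job writes through its own logger, so point that at STDOUT too.
Delayed::Worker.logger = Logger.new(STDOUT)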