whenever + delayed_job with cron job in starting worker - ruby-on-rails-3

I am learning cron jobs and delayed jobs, and I want to send emails using a background job; for that I'm using the delayed_job gem. I don't want to start the worker manually by running the rake jobs:work command; instead I want to set this rake task up as a cron job, so that whenever a user logs in to the dashboard the command is fired and a mail is sent to their address. Following is my code:
Sending mail method
def dashboard
  @user = User.find(params[:id])
  UserMailer.delay.initial_email(@user)
end
UserMailer
def initial_email(user)
  @user = user
  mail(:to => user.email, :subject => "Welcome to my website!")
end
For the cron job I am using the whenever gem, so what should I write in my schedule.rb file so that when I log in to the dashboard I get a mail without running the worker manually?

DelayedJob is supposed to be running all the time in the background so it doesn't need to be fired up.
The worker agent checks the queue to see if any tasks need to be performed, and runs them.
It's pretty much like a 2nd instance of your application that runs in the background checking for tasks that need to run.
So you should start the worker agent with script/delayed_job start and let it run all the time. You can use a separate tool like monit or god to monitor your worker agent and make sure it is always running.
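If you still want whenever involved, the most it should do is make sure the worker comes back after a server reboot. A sketch of such a schedule.rb (the deploy path and environment are assumptions; adjust them for your server):

```ruby
# config/schedule.rb -- a sketch, not a drop-in file.
# /path/to/app is a placeholder for your actual deploy path.
every :reboot do
  command "cd /path/to/app && RAILS_ENV=production script/delayed_job start"
end
```

Note this only restarts the worker at boot; it does not (and should not) start a worker per login, since the always-running worker already picks up jobs queued by UserMailer.delay.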

Related

Sidekiq : how to restart sidekiq when deploying project to server?

Here is my restart script for sidekiq
def restart
  process_list.each do |p|
    process_stop p
    process_start p
  end
end
When I deploy code to production, this script executes and restarts every process.
But now I want to restart Sidekiq without affecting jobs that are already running.
In my case, I am sending mails using Sidekiq.
For example, suppose I am sending 100,000 mails and the batch is still running when I deploy. Many mails have already been sent, and after the restart they will be sent again.
How can I fix this issue?
Thanks
Each mail should be a separate job.

rufus-scheduler and delayed_job on Heroku: why use a worker dyno?

I'm developing a Rails 3.2.16 app and deploying to a Heroku dev account with one free web dyno and no worker dynos. I'm trying to determine if a (paid) worker dyno is really needed.
The app sends various emails. I use delayed_job_active_record to queue those and send them out.
I also need to check a notification count every minute. For that I'm using rufus-scheduler.
rufus-scheduler seems able to run a background task/thread within a Heroku web dyno.
On the other hand, everything I can find on delayed_job indicates that it requires a separate worker process. Why? If rufus-scheduler can run a daemon within a web dyno, why can't delayed_job do the same?
I've tested the following for running my every-minute task and working off delayed_jobs, and it seems to work within the single Heroku web dyno:
config/initializers/rufus-scheduler.rb
require 'rufus-scheduler'
require 'delayed/command'

s = Rufus::Scheduler.singleton
s.every '1m', :overlap => false do # Every minute
  Rails.logger.info ">> #{Time.now}: rufus-scheduler task started"
  # Check for pending notifications and queue to delayed_job
  User.send_pending_notifications
  # Work off delayed_jobs without a separate worker process
  Delayed::Worker.new.work_off
end
This seems so obvious that I'm wondering if I'm missing something. Is this an acceptable way to handle the delayed_job queue without the added complexity and expense of a separate worker process?
Update
As @jmettraux points out, Heroku will idle an inactive web dyno after an hour. I haven't set it up yet, but let's assume I'm using one of the various keep-alive methods to keep it from sleeping: Easy way to prevent Heroku idling?
According to this
https://blog.heroku.com/archives/2013/6/20/app_sleeping_on_heroku
your dyno will go to sleep if it hasn't serviced requests for an hour. No dyno, no scheduling.
This could help as well: https://devcenter.heroku.com/articles/clock-processes-ruby
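The clock-process approach from that article looks roughly like this (a deployment sketch, not runnable on its own; the schedule and task body are placeholders):

```ruby
# Procfile
#   clock: bundle exec ruby clock.rb
#
# clock.rb -- runs in its own dyno, so it is unaffected by web-dyno idling
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '1m' do
  # enqueue or perform the periodic work here,
  # e.g. User.send_pending_notifications
end

scheduler.join # keep the clock process alive
```

The trade-off is that the clock process is a separate dyno, so it does not escape the cost question that motivated the single-dyno setup above.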

Rails delayed job fail after close session

I use the delayed_job gem to handle my email deliveries. It works fine in development and I am very happy with it. However, after deploying to the server, when I run:
RAILS_ENV=production script/delayed_job start
it works: I've checked the log file and the database, everything is fine, and I receive the mails just as expected. However, once I exit my session on the server, nothing happens anymore.
I've checked my database using Sequel Pro and seen that delayed_job creates a row in the DB, and after the time in the run_at column the row disappears, but no mail is received. When I log in again, the delayed_job process is still running and there is nothing strange in the log, but I just don't receive the emails I'm supposed to. I can't keep myself logged in all the time. Without delayed_job I can send mail the traditional way and it works properly, just slowly. Why does delayed_job fail after I log out of the server?
This is my delayed job setting in the config/initializers/delay_job.rb
require "bcrypt"
Delayed::Worker.max_attempts = 5
Delayed::Worker.delay_jobs = !Rails.env.test?
Delayed::Worker.destroy_failed_jobs = false
P.S. I am not sure whether this has anything to do with the standalone Passenger: I have to use a different version of Rails, so I run a standalone Passenger on port 3002.
I think I've found the solution.
After reading through this https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-jobs_are_silently_removed_from_the_database
I soon realized I might have missed the require "bcrypt" in the configuration file.
I use RVM and have many gemsets, but only this particular gemset has the bcrypt-ruby gem. delayed_job might have been using the global or default gemset after I logged out of the system, so I installed bcrypt-ruby in all the gemsets, restarted the standalone Passenger, and it works!
Still, I don't really know what the connection is between bcrypt and delayed_job.

How to run delayed_job's queued tasks on Heroku?

I'm currently using the delayed_job gem to queue and run background tasks in my application. In the local system, I can just use rake jobs:work to run the queued tasks. However, when I deploy my app onto Heroku, I do not want to continue using the rake command. Instead, I want the rake command to be called automatically. Is there a way to do so, without paying for a worker in Heroku?
I use cron without problems (with Django). All you need to do is configure, as a scheduled task, the same command you would otherwise execute after heroku run.
Remember that cron time counts as worker time; be sure the command ends.
No, you can't do it without a worker.
The earlier point saying you need a worker is right; however, you do have free worker hours. There are 750 free hours per month (http://www.heroku.com/pricing#1-0). Given that a 31-day month is 744 hours, you have at least 6 free worker hours to use each month.
If you use the workless gem (https://github.com/lostboy/workless), it will spin up the worker only when needed (i.e. when jobs are waiting in delayed_job), then shut it down again. This works perfectly for my app, and 6 hours of background processing time a month is more than enough for my requirements.

How to set up Scheduler add-on at Heroku

Coming from PHP, I am accustomed to setting up cron to hit a URL that I want to run automatically at an interval.
Now I am trying to set up the Scheduler add-on at Heroku and I have a little problem - I created the file lib/tasks/scheduler.rake and set up everything possible in the admin section on Heroku, but:
I am a bit confused about how it all works - for example, these lines are in lib/tasks/scheduler.rake:
desc "This task is called by the Heroku scheduler add-on"
task :update_feed => :environment do
  puts "Updating feed..."
  NewsFeed.update
  puts "done."
end

task :send_reminders => :environment do
  User.send_reminders
end
What does task :update_feed mean? Will this action run at the scheduled hour? And which controller does this action belong to? For example, what if I need to run the action my_action in the home controller every day? Should I just put my_action there instead of update_feed?
With cron calling an HTTP action (using curl or wget, say), you are scheduling an HTTP request; the request then causes the PHP action to run, and that action's code does the actual work.
With heroku scheduler, you are skipping all the http request stuff and action stuff, and can put the logic/work into the rake task directly (or put it in a regular ruby class or model and invoke that from the task body).
This works because the rake task is loading up the full rails environment (the :environment dependency part of the task definition does this), so inside the rake task body, you have access to your rails app models, required gems, application configuration, everything - just like inside a controller or model class in rails.
What's also nice, if you are on Cedar, is that the scheduler invokes tasks in a one-off dyno, so your app's main dyno is not occupied by a task run by the scheduler - which is not the case when you use the cron -> HTTP request -> controller action pattern.
If you tell me more about what you are trying to do, I can give more specific advice, but in general I usually have the task's logic defined in a plain ruby class in the lib directory, or as a class method on a model, and that is what would be called from the task body (as in the example code you cite above).