I am using delayed_job in a Rails application. I want to notify Airbrake whenever a delayed job fails. I checked on GitHub and learned about the failure method.
I want to send the last_error attribute of the failed delayed job to Airbrake, something like this:
class ParanoidNewsletterJob < NewsletterJob
  def perform
  end

  def failure
    Airbrake.notify(:message => self.last_error, :error_class => self.handler)
  end
end
But it gives me the following runtime error:
undefined method `last_error' for #<struct ParanoidNewsletterJob>
Please help me figure out how I can notify Airbrake of the last_error of a failed delayed_job.
Many Thanks!!
You should be able to pass the job to the failure method and then extract the last_error from the job, i.e.:
def failure(job)
  Airbrake.notify(:message => job.last_error, :error_class => job.handler)
end
This should work fine:
def failure(job)
  Airbrake.notify(:message => job.error, :error_class => job.error.class, :backtrace => job.error.backtrace)
end
There are two ways you can achieve what you want:
1. A job-specific approach that applies only to the job type you care about, by implementing the failure method with the job as a parameter. The job will contain error and last_error. This is what the other answers describe.
2. A global approach, where a plugin is developed and applied to every job type. This is desirable if all jobs need to be monitored. The plugin is registered and performs actions around various events in a job's lifecycle. For example, below is a plugin that updates last_error if we want to process it before it is stored to the database:
require 'delayed_job'

class ErrorDelayedJobPlugin < Delayed::Plugin
  def self.update_last_error(event, job)
    begin
      unless job.last_error.nil?
        job.last_error = job.last_error.gsub("\u0000", '') # Replace null byte
        job.last_error = job.last_error.encode('UTF-8', invalid: :replace, undef: :replace, replace: '')
      end
    rescue => e
      # swallow cleanup errors so the failure handling itself never raises
    end
  end

  callbacks do |lifecycle|
    lifecycle.around(:failure) do |worker, job, *args, &block|
      update_last_error(:around_failure, job)
      block.call(worker, job)
    end
  end
end
Basically it will be called whenever any job fails. For details on how these callbacks work, you can refer to A plugin to update last_error in Delayed Job.
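To have the plugin picked up, it also needs to be registered with the worker, for example (the initializer path is just a common convention):
# e.g. in config/initializers/delayed_job.rb
Delayed::Worker.plugins << ErrorDelayedJobPlugin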
I'm trying to get the provider_job_id inside an ActiveJob job, but it is coming back as nil.
I'm using delayed_job as the queue adapter:
class Application < Rails::Application
  config.active_job.queue_adapter = :delayed_job
end
While debugging I found that provider_job_id is available inside the after_enqueue callback, but it is nil inside the before_perform callback.
class MyJob < ApplicationJob
  def perform(email)
    # get provider_job_id for another purpose
    jid = self.provider_job_id # this is nil
  end
end
The job is queued as:
job = MyJob.set(wait: 30.minutes).perform_later('dummy@example.com')
jid = job.provider_job_id # getting provider_job_id here
In my case, I'm calling the job to perform immediately after it is enqueued (in a specific case), using delayed_job as below:
Delayed::Job.find(jid).invoke_job
Does anyone have an idea why provider_job_id is nil inside the job class?
Also, how can we trigger an Active Job to be invoked immediately once it is enqueued, like delayed_job allows?
I can see a fix was applied for the sidekiq adapter here,
but for the delayed_job adapter it is not there here.
I can get the job ID from the before_enqueue callback, e.g.:
before_enqueue do |job|
  puts "Before Enqueue JOB ID: #{job.job_id}"
end
BUT I want to get it in the perform method as well.
Please Help!
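One possible workaround, sketched under the assumption that the delayed_job adapter serializes the ActiveJob payload (including its job_id) into the handler column, is to look the backing Delayed::Job record up from inside perform:
class MyJob < ApplicationJob
  def perform(email)
    # ActiveJob's own job_id is available here even though provider_job_id is nil.
    # Looking the Delayed::Job row up via the serialized handler is an assumption
    # about how the adapter stores the payload, not documented API.
    delayed_job = Delayed::Job.where('handler LIKE ?', "%#{job_id}%").first
    jid = delayed_job && delayed_job.id
    # ... use jid for the other purpose ...
  end
end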
I have two rake tasks that I'd like to run nightly. I'd like them to log to one file. I thought this would do the trick (got it here: Rake: logging any task executing):
application.rb
module Rake
  class Task
    alias_method :origin_invoke, :invoke if method_defined?(:invoke)

    def invoke(*args)
      @logger = Logger.new('rake_tasks_log.log')
      @logger.info "#{Time.now} -- #{name} -- #{args.inspect}"
      origin_invoke(args)
    end
  end
end
and then in the rakefile:
task :hello do
  @logger.warn "Starting Hello task"
  puts "Hello World!"
  puts "checking connection"
  checkConnection
  puts "done checking"
  @logger.debug "End hello rake task"
end
But when I run the task I get:
private method 'warn' called for nil:NilClass
I've tried a couple of flavors of that logging call (@, @@, no @) to no avail, and have read several threads on here about it. The rubyonrails.org site doesn't mention logging in rake tasks. The tasks that I'm invoking are fairly complex (about 20-40 minutes to complete), so I really want to know what went wrong if they fail. For DRY reasons I'd prefer to create the logger object only once.
Unless you're wrapping everything in giant begin/rescue blocks and catching errors that way, the best way to log errors is to capture all output from stdout and stderr with something like:
rake your:job >> /var/log/rake.log 2>&1
You could also set your Rails environment to use the system logger.
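As a rough sketch of that second suggestion, Ruby's standard library ships a syslog-backed logger that Rails can be pointed at (the 'my_app' identifier is just a placeholder):
# config/environments/production.rb (sketch; 'my_app' is a placeholder)
require 'syslog/logger'
config.logger = ActiveSupport::TaggedLogging.new(Syslog::Logger.new('my_app'))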
I ended up solving this (or at least well enough) by making a "log" task and depending on that in other tasks. It's not really ideal, since it means having to include that dependency in any new task, but I only have a few tasks so this will do fine. I'm aware that there is a "file" task, but it didn't seem to want to work on Windows, so I chose this approach because it seems more cross-platform and more explicit.
I need a logger object because I am passing that object into some method calls in the [...] sections. There's enough begin/rescue/end in there that writing to the output stream wouldn't work (I think).
#log_file = "log/tasks.log"
directory "log"
task :check_log => ["log"] do
log = #log_file
puts 'checking log existence'
if not FileTest.exists? ("./#{log}")
puts 'creating log file'
File.open(log, 'w')
end
end
task :check_connection => [:check_log] do
begin
conn = Mongo::Connection.new
[...]
end
end
task :nightly_tasks => [:check_connection, :environment ] do
for i in 1..2
logger.warn "#########################"
end
[...]
logger.warn "nightly tasks complete"
end
def logger
##logger ||= Logger.new( File.join(Rails.root, #log_file) )
end
I'd like ExceptionNotifier to send out an email when an exception happens in a delayed job, just like for other exceptions. How can I achieve that?
I do this with Rails 3.2.6, delayed_job 3.0.3 and the exception_notification 2.6.1 gem:
# In config/environments/production.rb or config/initializers/delayed_job.rb

# Optional but recommended for less future surprises.
# Fail at startup if method does not exist instead of later in a background job
[[ExceptionNotifier::Notifier, :background_exception_notification]].each do |object, method_name|
  raise NoMethodError, "undefined method `#{method_name}' for #{object.inspect}" unless object.respond_to?(method_name, true)
end

# Chain delayed job's handle_failed_job method to do exception notification
Delayed::Worker.class_eval do
  def handle_failed_job_with_notification(job, error)
    handle_failed_job_without_notification(job, error)
    # only actually send mail in production
    if Rails.env.production?
      # rescue if ExceptionNotifier fails for some reason
      begin
        ExceptionNotifier::Notifier.background_exception_notification(error)
      rescue Exception => e
        Rails.logger.error "ExceptionNotifier failed: #{e.class.name}: #{e.message}"
        e.backtrace.each do |f|
          Rails.logger.error "  #{f}"
        end
        Rails.logger.flush
      end
    end
  end

  alias_method_chain :handle_failed_job, :notification
end
It's probably a good idea to load this code in all environments, to catch errors after a bundle update etc. before they reach production. I do this by having a config/initializers/delayed_job.rb file, but you could duplicate the code for each config/environments/* environment.
Another tip is to tune the delayed_job config a bit, as with the defaults you may get a lot of duplicate exception mails when a job fails.
# In config/initializers/delayed_job_config.rb
Delayed::Worker.max_attempts = 3
Update: I had some problems with the delayed_job daemon silently exiting, and it turned out to happen when ExceptionNotifier failed to send mail and no one rescued the exception. The code now rescues and logs those exceptions.
Adding to @MattiasWadman's answer: since exception_notification 4.0 there's a new way to send a manual notification. So instead of:
ExceptionNotifier::Notifier.background_exception_notification(error)
use
ExceptionNotifier.notify_exception(error)
Another way to handle exceptions (put this in an initializer):
class DelayedErrorHandler < Delayed::Plugin
  callbacks do |lifecycle|
    lifecycle.around(:invoke_job) do |job, *args, &block|
      begin
        block.call(job, *args)
      rescue Exception => e
        # ...Process exception here...
        raise e
      end
    end
  end
end

Delayed::Worker.plugins << DelayedErrorHandler
alias_method_chain no longer exists in Rails 5.
Here's the new (proper) way to do this, using Ruby 2's Module#prepend:
# In config/initializers/delayed_job.rb
module CustomFailedJob
  def handle_failed_job(job, error)
    super
    ExceptionNotifier.notify_exception(error, data: {job: job})
  end
end

class Delayed::Worker
  prepend CustomFailedJob
end
For exception_notification 3.0.0 change:
ExceptionNotifier::Notifier.background_exception_notification(error)
to:
ExceptionNotifier::Notifier.background_exception_notification(error).deliver
A simpler and updated answer:
# Chain delayed job's handle_failed_job method to do exception notification
Delayed::Worker.class_eval do
  def handle_failed_job_with_notification(job, error)
    handle_failed_job_without_notification(job, error)
    ExceptionNotifier.notify_exception error,
      data: {job: job, handler: job.handler} rescue nil
  end

  alias_method_chain :handle_failed_job, :notification
end
And test it in the console with:
Delayed::Job.enqueue((JS = Struct.new(:a) { def perform; raise 'here'; end }).new(1))
Let's say I have delayed_job running in the background. Tasks can be scheduled or run immediately (some are long tasks, some are not).
If a task takes too long, a user should be able to cancel it. Is that possible in delayed_job? I checked the docs and can't seem to find a terminate method or anything similar. They only provide a hook for cancelling delayed_job itself (thus cancelling all tasks; I need to cancel just one particular running task).
UPDATE
My boss (who's a great programmer, btw) suggested using Ruby threading for this feature of ours. Is this possible? Like creating a new thread per task and killing that thread while it's running?
Something like:
t1 = Thread.new(task.run)
self.delay.t1.join (?) -- still reading up on threads, so correct me if I'm wrong
then to stop it I'll just use t1.stop (?) -- again, don't know yet
Is this possible? Thanks!
It seems that my boss hit the spot, so here's what we did (please tell us if there's any chance this is bad practice so I can bring it up):
First, we have a Job model that has a def execute! method (which runs what it's supposed to do).
Next, we have a delayed_job worker in the background, listening for new jobs. When you create a job, you can schedule it to run immediately or on certain days (we use rufus for this one).
When a job is created, it checks whether it's supposed to run immediately. If it is, it adds itself to the delayed_job queue. The execute function creates a Thread, so each job has its own thread.
A user in the UI can see whether a job is running (if there's a started_at and no finished_at). If it IS running, there's a button to cancel it. Cancelling it just sets the job's canceled_at to Time.now.
While the job is running, it also checks whether it has a canceled_at or whether Time.now is > finished_at. If so, it kills the thread.
Voila! We've tested it for one job and it seems to work. Now the only problem is scaling...
If you see any problems with this, please say so in the comments, or give more suggestions if you have any :) I hope this helps someone too!
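A rough, hypothetical sketch of the design described above; the Job model, its started_at/finished_at/canceled_at columns and the execute! method are taken from the description, while the polling loop and the run_the_actual_task placeholder are assumptions:
class Job < ActiveRecord::Base
  def execute!
    update_attributes(:started_at => Time.now)

    worker_thread = Thread.new do
      run_the_actual_task   # placeholder for whatever the job really does
    end

    # Poll for cancellation while the task thread is alive
    while worker_thread.alive?
      sleep 1
      if reload.canceled_at.present?
        worker_thread.kill  # stop the running task
        break
      end
    end

    update_attributes(:finished_at => Time.now)
  end

  def cancel!
    # what the UI's cancel button triggers
    update_attributes(:canceled_at => Time.now)
  end
end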
Delayed::Job is an ActiveRecord::Base model, so you can query it just like you normally would, e.g. Delayed::Job.all(:conditions => {:last_error => nil}).
Delayed::Job objects have a handler field which contains a serialized version of the method or job that you're attempting to run. This object is accessed via their #payload_object method, which loads the object in question.
You can combine these two capabilities to make queryable job workers. For instance, if you have a User model, and the user has a paperclip'ed :avatar, then you can write a method to delete unprocessed jobs like so:
class User < ActiveRecord::Base
  has_attached_file :avatar, PaperclipOptions.new(:avatar)

  before_create :process_avatar_later

  def process_avatar_later
    filename = Rails.root.join('tmp/avatars_for_processing/', self.id)
    open(filename, 'w') { |file| file << self.avatar.to_file }
    Delayed::Job.enqueue(WorkAvatar.new(self.id, filename))
    self.avatar = nil
  end

  def cancel_future_avatar_processing
    WorkAvatar.future_jobs_for_user(self.id).each(&:destroy)
    # ummm... tell them to reupload their avatar, I guess?
  end

  class WorkAvatar < Struct.new(:user_id, :path)
    def user
      @user ||= User.find(self.user_id)
    end

    def self.all_jobs
      # the serialized job lives in the handler column
      Delayed::Job.scoped(:conditions => 'handler LIKE "%WorkAvatar%"')
    end

    def self.future_jobs_for_user(user_id)
      all_jobs.scoped(:conditions => {:locked_at => nil}).select do |job|
        job.payload_object.user_id == user_id
      end
    end

    def perform
      user.avatar = File.open(path, 'rb')
      user.save
    end
  end
end
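Usage would then be something along these lines (purely illustrative; some_user_id is a placeholder):
# Hypothetical usage: drop any queued-but-unlocked avatar jobs for a user
user = User.find(some_user_id)
user.cancel_future_avatar_processing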
It's possible someone has made a plugin to make jobs queryable like this. Perhaps searching GitHub would be fruitful.
Note also that you'd have to work with whatever process monitoring tools you have in order to cancel running job worker processes, if you want to cancel a job that already has locked_at and locked_by set.
You can wrap the task into a Timeout statement.
require 'timeout'

class TaskWithTimeout < Struct.new(:parameter)
  def perform
    Timeout.timeout(10) do
      # ...
    end
  rescue Timeout::Error => e
    # the task took longer than 10 seconds
  end
end
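For completeness, such a job would be enqueued the usual delayed_job way, e.g. (the argument is just a placeholder):
Delayed::Job.enqueue(TaskWithTimeout.new('some_parameter'))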
No, there's no way to do this. If you're concerned about a runaway job, you should definitely wrap it in a timeout as Simone suggests. However, it sounds like you're in search of something more, and I'm unclear on your end goal.
There will never be a way for a user to have a "cancel" button, since this would involve finding a method to communicate directly with the worker process running the job. It would be possible to add a signal handler to the worker so that you could do something like kill -USR1 pid to have it abort the job it's currently working on and move on. Would that accomplish your goal?
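A very rough sketch of that signal-handler idea (this is not delayed_job API; the flag polling inside the job is an assumption about how the long-running work is structured):
# Hypothetical sketch: trap USR1 in the worker process and let long-running
# job code poll the flag so it can abort early.
module AbortSignal
  @requested = false

  Signal.trap('USR1') do
    # keep the trap handler minimal: just flip the flag
    @requested = true
  end

  def self.requested?
    @requested
  end
end

class LongRunningTask < Struct.new(:items)
  def perform
    items.each do |item|
      break if AbortSignal.requested?  # aborted via `kill -USR1 <worker pid>`
      # ... process one item ...
    end
  end
end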