How do I get all my rake tasks to write to the same log file? - ruby-on-rails-3

I have two rake tasks that I'd like to run nightly. I'd like them to log to one file. I thought this would do the trick (got it here: Rake: logging any task executing):
application.rb
module Rake
  class Task
    alias_method :origin_invoke, :invoke if method_defined?(:invoke)

    def invoke(*args)
      @logger = Logger.new('rake_tasks_log.log')
      @logger.info "#{Time.now} -- #{name} -- #{args.inspect}"
      origin_invoke(*args)
    end
  end
end
and then in the rakefile:
task :hello do
  @logger.warn "Starting Hello task"
  puts "Hello World!"
  puts "checking connection "
  checkConnection
  puts "done checking"
  @logger.debug "End hello rake task"
end
But when I run the task I get:
private method 'warn' called for nil:NilClass
I've tried a couple of flavors of that call to logging (@, @@, no @) to no avail, and I've read several threads on here about it. The rubyonrails.org site doesn't mention logging in rake tasks.
The tasks I'm invoking are fairly complex (about 20-40 minutes to complete), so I really want to know what went wrong if they fail. For DRY reasons I'd prefer to create the logger object only once.

Unless you're wrapping everything in giant begin/rescue blocks and catching errors that way, the best way to log errors is to capture everything the task writes to stdout and stderr with something like:
rake your:job >> /var/log/rake.log 2>&1
(Note the order: the file redirection has to come before 2>&1 for stderr to end up in the file too.)
You could also set your Rails environment to use the system logger.
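If you go the system-logger route, something like this could work (a minimal sketch, assuming Ruby's stdlib Syslog::Logger; 'myapp' is a placeholder program name):
# config/initializers/syslog.rb (illustrative)
require 'syslog/logger'

# Send all Rails logging to the OS syslog, tagged 'myapp'
Rails.logger = Syslog::Logger.new('myapp')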

I ended up solving this (or at least well enough) by making a "log" task and depending on it in the other tasks. It's not ideal, since it means including that dependency in any new task, but I only have a few tasks so this will do fine. I'm aware that there is a "file" task, but it didn't seem to want to work on Windows, so I chose this approach because it seems more cross-platform and more explicit.
I need a logger object because I am passing it into some method calls in the [...] sections. There's enough begin/rescue/end in there that writing to the output stream wouldn't work (I think).
@log_file = "log/tasks.log"

directory "log"

task :check_log => ["log"] do
  log = @log_file
  puts 'checking log existence'
  if not FileTest.exists?("./#{log}")
    puts 'creating log file'
    File.open(log, 'w')
  end
end

task :check_connection => [:check_log] do
  begin
    conn = Mongo::Connection.new
    [...]
  end
end

task :nightly_tasks => [:check_connection, :environment] do
  for i in 1..2
    logger.warn "#########################"
  end
  [...]
  logger.warn "nightly tasks complete"
end

def logger
  @@logger ||= Logger.new( File.join(Rails.root, @log_file) )
end

Related

Cucumber - perform ActiveJob `perform_later` jobs immediately

I have many jobs that call other nested jobs using perform_later. However, in some Cucumber tests, I'd like to execute those jobs immediately so the rest of the test can proceed.
I thought it would be enough to add
# features/support/active_job.rb
World(ActiveJob::TestHelper)
and to call jobs like this in a step definition file:
perform_enqueued_jobs do
  # call a step that calls MyJob.perform_later(*args)
end
However, I run into something like this:
undefined method `perform_enqueued_jobs' for #<ActiveJob::QueueAdapters::AsyncAdapter:0x007f98fd03b900> (NoMethodError)
What am I missing / doing wrong?
I switched to the :test adapter in tests and it worked out for me:
# config/environments/test.rb
config.active_job.queue_adapter = :test

# features/support/env.rb
World(ActiveJob::TestHelper)
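With the :test adapter in place, a step definition can then wrap the enqueuing code in perform_enqueued_jobs (a sketch; the step text and job argument here are made up):
# features/step_definitions/my_job_steps.rb (illustrative)
When(/^the job runs$/) do
  perform_enqueued_jobs do
    MyJob.perform_later('some-arg') # performed immediately instead of queued
  end
end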
It would seem that as long as you call .perform_now inside the Cucumber step, it works even when there are nested jobs that call .perform_later inside:
# support/active_job.rb
World(ActiveJob::TestHelper)

# my_job_steps.rb
Given(/^my job starts$/) do
  MyJob.perform_now(logger: 'stdout')
end

# jobs/my_job.rb
...
MyNestedJob.perform_later(*args) # is triggered during the step
...
Also, in my environments/test.rb file I didn't write anything concerning ActiveJob; the default worked fine. I believe the default adapter in tests is :inline, so calling .perform_later or .perform_now shouldn't matter.

Access the last_error in failure method of Delayed Job Rails

I am using delayed_job in a Rails application. I want to notify Airbrake whenever a delayed job fails. I checked on GitHub and learned about the failure method.
I want to send the last_error attribute of the failed delayed job to Airbrake, something like this:
class ParanoidNewsletterJob < NewsletterJob
  def perform
  end

  def failure
    Airbrake.notify(:message => self.last_error, :error_class => self.handler)
  end
end
But it gives me the following runtime error:
undefined method `last_error' for #<struct ParanoidNewsletterJob>
Please help me figure out how I can notify Airbrake of the last_error of a failed delayed_job.
Many Thanks!!
You should be able to pass the job to the failure method, and then extract the last_error from the job. i.e.
def failure(job)
  Airbrake.notify(:message => job.last_error, :error_class => job.handler)
end
This should work fine:
def failure(job)
  Airbrake.notify(:message => job.error, :error_class => job.error.class, :backtrace => job.error.backtrace)
end
There are two ways you can achieve what you want:
A job-specific method which only applies to one type of job, by implementing the failure method with the job as its parameter. The job will contain error and last_error. This is what the other answers are about.
A global option, where a plugin is registered and applied to every job type. This is desirable if all jobs need to be monitored; the plugin can perform actions around various events in a job's lifecycle. For example, below is a plugin that updates last_error, for when we want to process it before it's stored to the database:
require 'delayed_job'

class ErrorDelayedJobPlugin < Delayed::Plugin
  def self.update_last_error(event, job)
    begin
      unless job.last_error.nil?
        job.last_error = job.last_error.gsub("\u0000", '') # Replace null byte
        job.last_error = job.last_error.encode('UTF-8', invalid: :replace, undef: :replace, replace: '')
      end
    rescue => e
    end
  end

  callbacks do |lifecycle|
    lifecycle.around(:failure) do |worker, job, *args, &block|
      update_last_error(:around_failure, job)
      block.call(worker, job)
    end
  end
end
Basically, update_last_error will be called whenever any job fails. For details on how these lifecycle callbacks work, you can refer to A plugin to update last_error in Delayed Job.
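Note that the plugin still has to be registered with the worker before it takes effect, e.g. in an initializer (a one-line sketch, assuming the class above):
# config/initializers/delayed_job.rb
Delayed::Worker.plugins << ErrorDelayedJobPlugin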

Rails 3.2.x: How to change logging levels without restarting the application

I would like to change the logging levels of a running Rails 3.2.x application without restarting the application. My intent is to use it to do short-time debugging and information gathering before reverting it to the usual logging level.
I also understand that the levels in ascending order are debug, info, warn, error, and fatal, and that production servers log info and higher, while development logs debug and higher.
I understand that I can run
Rails.logger.level = :debug # or :info, :warn, :error, :fatal
Will this change the logging level immediately?
If so, can I do this by writing a Rake task to adjust the logging level, or do I need to support this by adding a route? For example in config/routes.rb:
match "/set_logging_level/:level/:secret" => "logcontroller#setlevel"
and then setting the level in the LogController. (:level is the logging level, and :secret, which is shared between client and server, is something to prevent random users from tweaking the log levels.)
Which is more appropriate, rake task or /set_logging_level?
Why don't you use operating system signals for that? For example, on UNIX the USR1 and USR2 signals are free for your application to use:
config/initializers/signals.rb:
trap('USR1') do
  Rails.logger.level = Logger::DEBUG
end

trap('USR2') do
  Rails.logger.level = Logger::WARN
end
Then just do this:
kill -SIGUSR1 pid
kill -SIGUSR2 pid
Just make sure you don't override your server's own signals; each server uses various signals for things like log rotation, killing child processes, terminating, and so on.
In Rails console, you can simply do:
Rails.logger.level = :debug
Now all executed code will run with this log level.
Since you have to change the level in the running Rails instance, a simple rake task will not work. I would go with the dedicated route.
Instead of a shared secret, I would use the app's standard user authentication (if your app has users) and restrict access to admin/super users.
In your LogController, try this:
def setlevel
  begin
    Rails.logger.level = Logger.const_get(params[:level].upcase)
  rescue
    logger.info("Logging level #{params[:level]} not supported")
  end
end
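With the route from the question in place, the level could then be flipped with a plain HTTP request (hypothetical host and secret):
curl http://localhost:3000/set_logging_level/debug/mysecret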
You can also use gdb to attach to the running process, set the Rails logger to debug level, and then detach. I have created the following one-liner to do this for my puma process:
gdb attach $(pidof puma) -ex 'call(rb_eval_string("Rails.logger.level = Logger::DEBUG"))' -ex detach -ex quit
NOTE: pidof returns multiple pids in descending order, so if you have multiple processes with the same name, this will only run against the first one pidof returns; the others are discarded by the gdb attach command with the message "Excess command line arguments ignored. (26762)". You can safely ignore that if you only care about the first process returned by pidof.
Using rufus-scheduler, I created this schedule:
scheduler.every 1.second do
  file_path = "#{Rails.root}/tmp/change_log_level.#{Process.pid}"
  if File.exists? file_path
    log_level = File.open(file_path).read.strip
    case log_level
    when "INFO"
      Rails.logger.level = Logger::INFO
      Rails.logger.info "Changed log_level to INFO"
    when "DEBUG"
      Rails.logger.level = Logger::DEBUG
      Rails.logger.info "Changed log_level to DEBUG"
    end
    File.delete file_path
  end
end
Then the log level can be changed by creating a file under tmp/change_log_level.PID, where PID is the process id of the Rails process. You can create a rake/capistrano task to create these files, allowing you to quickly switch the log level of a running production server.
Just remember to start rufus in the worker threads if you are using Unicorn or similar.
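For example, to flip a Rails process with pid 12345 to debug output (pid and path are illustrative; the file goes under the app's tmp directory):
echo DEBUG > tmp/change_log_level.12345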

How to make ExceptionNotifier work with delayed_job in Rails 3?

I'd like ExceptionNotifier to send out an email when an exception happens in a delayed job, just like for other exceptions. How can I achieve that?
I do this with Rails 3.2.6, delayed_job 3.0.3 and the exception_notification 2.6.1 gem:
# In config/environments/production.rb or config/initializers/delayed_job.rb

# Optional, but recommended for fewer future surprises:
# fail at startup if the method does not exist, instead of later in a background job
[[ExceptionNotifier::Notifier, :background_exception_notification]].each do |object, method_name|
  raise NoMethodError, "undefined method `#{method_name}' for #{object.inspect}" unless object.respond_to?(method_name, true)
end

# Chain delayed_job's handle_failed_job method to do exception notification
Delayed::Worker.class_eval do
  def handle_failed_job_with_notification(job, error)
    handle_failed_job_without_notification(job, error)

    # only actually send mail in production
    if Rails.env.production?
      # rescue in case ExceptionNotifier itself fails for some reason
      begin
        ExceptionNotifier::Notifier.background_exception_notification(error)
      rescue Exception => e
        Rails.logger.error "ExceptionNotifier failed: #{e.class.name}: #{e.message}"
        e.backtrace.each do |f|
          Rails.logger.error "  #{f}"
        end
        Rails.logger.flush
      end
    end
  end

  alias_method_chain :handle_failed_job, :notification
end
It's probably a good idea to load this code in all environments, to catch errors after a bundle update etc. before they reach production. I do this by having one config/initializers/delayed_job.rb file, but you could duplicate the code for each config/environments/* environment.
Another tip is to tune the delayed_job config a bit, as by default you may get a lot of duplicate exception mails when a job fails:
# In config/initializers/delayed_job_config.rb
Delayed::Worker.max_attempts = 3
Update: I had some problems with the delayed_job daemon silently exiting, and it turned out to happen when ExceptionNotifier failed to send mail and nothing rescued the exception. The code above now rescues and logs those errors.
Adding to @MattiasWadman's answer: since exception_notification 4.0 there's a new way to handle manual notifications. So instead of:
ExceptionNotifier::Notifier.background_exception_notification(error)
use
ExceptionNotifier.notify_exception(error)
Another way to handle exceptions (put as an initializer):
class DelayedErrorHandler < Delayed::Plugin
  callbacks do |lifecycle|
    lifecycle.around(:invoke_job) do |job, *args, &block|
      begin
        block.call(job, *args)
      rescue Exception => e
        # ...process exception here...
        raise e
      end
    end
  end
end

Delayed::Worker.plugins << DelayedErrorHandler
alias_method_chain no longer exists in Rails 5.
Here's the new (proper) way to do this, using Ruby 2's Module#prepend:
# In config/initializers/delayed_job.rb
module CustomFailedJob
  def handle_failed_job(job, error)
    super
    ExceptionNotifier.notify_exception(error, data: { job: job })
  end
end

class Delayed::Worker
  prepend CustomFailedJob
end
For exception_notification 3.0.0 change:
ExceptionNotifier::Notifier.background_exception_notification(error)
to:
ExceptionNotifier::Notifier.background_exception_notification(error).deliver
Simpler and updated answer:
# Chain delayed_job's handle_failed_job method to do exception notification
Delayed::Worker.class_eval do
  def handle_failed_job_with_notification(job, error)
    handle_failed_job_without_notification(job, error)
    ExceptionNotifier.notify_exception(error,
      data: { job: job, handler: job.handler }) rescue nil
  end

  alias_method_chain :handle_failed_job, :notification
end
And test it in the console with:
Delayed::Job.enqueue((JS = Struct.new(:a) { def perform; raise 'here'; end }).new(1))

Is it possible to terminate an already running delayed job using Ruby Threading?

Let's say I have delayed_job running in the background. Tasks can be scheduled or run immediately (some are long tasks, some are not).
If a task is too long, a user should be able to cancel it. Is that possible in delayed_job? I checked the docs and can't seem to find a terminate method or anything similar. They only provide a hook for cancelling delayed_job itself (thus cancelling all tasks); I need to cancel just one particular running task.
UPDATE
My boss (who's a great programmer, btw) suggested using Ruby threading for this feature of ours. Is this possible? Like creating a new thread per task and killing that thread while it's running?
Something like:
t1 = Thread.new { task.run }
self.delay.t1.join (?) -- still reading up on threads, so correct me if I'm wrong
Then to stop it I'll just use t1.stop (?) -- again, don't know yet.
Is this possible? Thanks!
It seems my boss hit the spot, so here's what we did (please tell us if this might be bad practice so I can bring it up):
First, we have a Job model that has a def execute! method (which runs what it's supposed to do).
Next, we have a delayed_job worker in the background listening for new jobs. When you create a job, you can schedule it to run immediately or on certain days (we use rufus for this one).
When a job is created, it checks whether it's supposed to run immediately. If it is, it adds itself to the delayed_job queue. The execute function creates a Thread, so each job has its own thread.
A user in the UI can see whether a job is running (if there's a started_at and no finished_at). If it IS running, there's a button to cancel it. Cancelling just sets the job's canceled_at to Time.now.
While the job is running, it also checks whether it has a canceled_at or whether Time.now is > finished_at. If so, it kills the thread.
Voila! We've tested it for one job and it seems to work. Now the only problem is scaling...
If you see any problems with this, please say so in the comments, or give more suggestions if you have them :) I hope this helps someone too!
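For reference, here is a minimal sketch of the pattern described above. The started_at/canceled_at/finished_at columns come from the answer, but run_payload and the one-second polling interval are assumptions:
class Job < ActiveRecord::Base
  def execute!
    update_attributes!(:started_at => Time.now)
    worker = Thread.new { run_payload } # the job's actual work (assumed method)
    # Thread#join(1) returns nil until the thread finishes, so poll once per second
    until worker.join(1)
      if reload.canceled_at
        worker.kill # cancel was requested from the UI
        break
      end
    end
    update_attributes!(:finished_at => Time.now)
  end
end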
Delayed::Job is an ActiveRecord::Base model, so you can query it just like you normally would, e.g. Delayed::Job.all(:conditions => { :last_error => nil }).
Delayed::Job objects have a payload field which contains a serialized version of the method or job that you're attempting to run. This object is accessed via their #payload_object method, which loads the object in question.
You can combine these two capabilities to make queryable job workers. For instance, if you have a User model and the user has a paperclip'ed :avatar, you can write a method to delete unprocessed jobs like so:
class User < ActiveRecord::Base
  has_attached_file :avatar, PaperclipOptions.new(:avatar)

  before_create :process_avatar_later

  def process_avatar_later
    filename = Rails.root.join('tmp/avatars_for_processing/', self.id)
    open(filename, 'w') { |file| file << self.avatar.to_file }
    Delayed::Job.enqueue(WorkAvatar.new(self.id, filename))
    self.avatar = nil
  end

  def cancel_future_avatar_processing
    WorkAvatar.future_jobs_for_user(self.id).each(&:destroy)
    # ummm... tell them to reupload their avatar, I guess?
  end

  class WorkAvatar < Struct.new(:user_id, :path)
    def user
      @user ||= User.find(self.user_id)
    end

    def self.all_jobs
      Delayed::Job.scoped(:conditions => 'payload like "%WorkAvatar%"')
    end

    def self.future_jobs_for_user(user_id)
      all_jobs.scoped(:conditions => { :locked_at => nil }).select do |job|
        job.payload_object.user_id == user_id
      end
    end

    def perform
      user.avatar = File.open(path, 'rb')
      user.save
    end
  end
end
It's possible someone has made a plugin for making jobs queryable like this; perhaps searching GitHub would be fruitful.
Note also that you'd have to work with whatever process monitoring tools you have in order to kill the worker process executing a job, if you want to cancel a job that already has locked_at and locked_by set.
You can wrap the task in a Timeout block:
require 'timeout'

class TaskWithTimeout < Struct.new(:parameter)
  def perform
    Timeout.timeout(10) do
      # ...
    end
  rescue Timeout::Error => e
    # the task took longer than 10 seconds
  end
end
No, there's no way to do this. If you're concerned about a runaway job, you should definitely wrap it in a timeout as Simone suggests. However, it sounds like you're in search of something more, and I'm unclear on your end goal.
There will never be a way for a user to have a "cancel" button, since that would require a way to communicate directly with the worker process running the job. It would, however, be possible to add a signal handler to the worker so that you could do something like kill -USR1 pid to have it abort the job it's currently working on and move on. Would that accomplish your goal?
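For example, something along these lines might work (entirely illustrative: JobAborted, the initializer path, and raising into the main thread are all assumptions about how the worker runs jobs):
# config/initializers/delayed_job_signals.rb (hypothetical)
class JobAborted < StandardError; end

trap('USR1') do
  # Interrupt whatever the worker's main thread is currently doing;
  # delayed_job should record the exception as a failed attempt and move on.
  Thread.main.raise JobAborted, 'job aborted via USR1'
end
Then kill -USR1 <worker pid> would abort the currently running job.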