I have many jobs that call other nested jobs using perform_later. However, in some Cucumber tests, I'd like to execute those jobs immediately so I can proceed with the rest of the tests.
I thought it would be enough to add
# features/support/active_job.rb
World(ActiveJob::TestHelper)
And to call the jobs like this in a step definition file:
perform_enqueued_jobs do
# call step that calls MyJob.perform_later(*args)
end
However, I run into something like this:
undefined method `perform_enqueued_jobs' for #<ActiveJob::QueueAdapters::AsyncAdapter:0x007f98fd03b900> (NoMethodError)
What am I missing / doing wrong?
I switched to the :test adapter in tests and it worked out for me:
# config/environments/test.rb
config.active_job.queue_adapter = :test
# features/support/env.rb
World(ActiveJob::TestHelper)
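With the :test adapter in place, perform_enqueued_jobs from ActiveJob::TestHelper becomes available in step definitions. A minimal sketch, where MyJob and the step name are placeholders:

```ruby
# features/step_definitions/my_job_steps.rb
# Sketch, assuming a MyJob class; the step name is made up.
When(/^my job runs$/) do
  perform_enqueued_jobs do
    MyJob.perform_later('some arg')
    # Jobs enqueued inside the block (including nested perform_later
    # calls made while they run) are executed before the block returns.
  end
end
```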
It seems that as long as you call .perform_now inside the Cucumber step, it works even when nested jobs are enqueued with .perform_later inside:
#support/active_job.rb
World(ActiveJob::TestHelper)
#my_job_steps.rb
Given(/^my job starts$/) do
MyJob.perform_now(logger: 'stdout')
end
#jobs/my_job.rb
...
MyNestedJob.perform_later(*args) # is triggered during the step
...
Also, I didn't write anything concerning ActiveJob in my environments/test.rb file; the default was working fine. I believe the default adapter in tests is :inline, so whether you call _later or _now shouldn't matter.
So I need to stop a running Job in Sidekiq (3.1.2) programmatically, not a scheduled one. I did read the API documentation but didn't really find anything about cancelling running jobs. Is this possible with sidekiq?
If this is not directly possible, my idea was to circumvent it by raising an exception in the job when I send the signal, then deleting the job from the retry set. This is clearly not optimal, though.
Thanks in advance
Correct, the only way to stop a job is for the job to stop itself. Your application must implement that logic.
https://github.com/mperham/sidekiq/wiki/FAQ#how-do-i-cancel-a-sidekiq-job
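As a sketch of what "the job stops itself" can look like, here is a plain-Ruby stand-in where a hash plays the role of the cancel flag; in a real app you would typically set and check a Redis key keyed by the jid, as the FAQ suggests. All names here are made up:

```ruby
# A hash stands in for Redis; in Sidekiq you'd check a redis key per jid.
CANCELLED = {}

def cancel!(jid)
  CANCELLED[jid] = true
end

def cancelled?(jid)
  CANCELLED.fetch(jid, false)
end

# The "worker" checks the flag between units of work and stops itself.
def perform(jid, steps)
  done = 0
  steps.times do
    break if cancelled?(jid)
    done += 1
    cancel!(jid) if done == 3 # simulate an external cancel signal mid-run
  end
  done
end

puts perform('abc123', 10) # stops after the cancel signal
```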
If you know the long-running job's thread ID, it's possible to terminate it from another task:
class ThreadLightly
  include Sidekiq::Worker

  def perform(tid)
    puts "I'm %s, and I'll be terminating TID: %s..." % [self.class, tid]
    Thread.list.each do |t|
      if t.object_id.to_s == tid
        puts "Goodbye %s!" % t
        t.exit
      end
    end
  end
end
You can trigger it from the sidekiq_pusher:
bundle exec ./pusher.rb ThreadLightly $YOURJOBSTHREADID
You'll need to log Thread.current.object_id from each job, since the UI doesn't show it. Also, if you run distributed Sidekiqs, you'll need to run this task until it lands on the same instance.
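The thread-lookup part of this is plain Ruby and can be sketched outside Sidekiq; here a sleeping thread stands in for the long-running job:

```ruby
# A sleeping thread stands in for a long-running job.
job = Thread.new { sleep }
tid = job.object_id.to_s # the value you'd log via Thread.current.object_id

# Find the thread by its object_id and terminate it, as in the worker above.
target = Thread.list.find { |t| t.object_id.to_s == tid }
target.exit if target
target.join

puts target.alive? # false: the "job" has been terminated
```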
I am testing my Delayed::Job using Rspec.
In my rspec_controller:
it "queues up delayed job and fires" do
  setup
  expect {
    post :create, {:job => valid_attributes}
  }.to change(Delayed::Job, :count).by(2)
  Delayed::Worker.new.work_off.should == [2, 0]
end
Delayed::Job.count passes as expected, but Delayed::Worker.new.work_off returns [0, 0], indicating 0 successes and 0 failures even though there are 2 jobs.
How should I debug to find out why work_off doesn't fire the jobs?
Edit: The 2 jobs that are supposed to run have their run_at set in the future. Does work_off fire jobs that are not meant to run immediately?
Although this could be an older question, there's one parameter that's not well documented; try using
Delayed::Worker.new(quiet: false).work_off
to debug the result of your background jobs; this could help you find out whether the fact that they're supposed to run in the future is messing with the assertion itself.
EDIT: Don't forget to remove the quiet: false when you're done, otherwise your tests will always output the results of the background jobs.
The construct
Delayed::Worker.new.work_off
immediately processes everything that is in the DJ queue, and in the same thread as the caller (it doesn't spawn a separate worker thread). But this doesn't explain why you're not getting [2, 0] for a result.
To answer your original question 'How should I debug to find out why work_off doesn't fire the jobs?', I suggest you use the callback hooks to trace the lifecycle of the jobs. Add a comment if you need to be shown how to do that... :)
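As a reference point for those hooks: delayed_job invokes lifecycle methods on the payload object when they are defined, which is one way to trace each job. A sketch (MyJobPayload and the log messages are made up):

```ruby
class MyJobPayload
  def perform
    # the actual work
  end

  # delayed_job calls these hooks if the payload object defines them.
  def before(job)
    Rails.logger.info "starting job #{job.id}"
  end

  def error(job, exception)
    Rails.logger.error "job #{job.id} raised: #{exception.message}"
  end

  def failure(job)
    Rails.logger.error "job #{job.id} failed permanently"
  end
end
```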
I have a controller action that creates 2 background jobs to be run at a future date.
I am trying to test that the background jobs get run
# spec/controllers/job_controller_spec.rb
setup
post :create, {:job => valid_attributes}
Delayed::Job.count.should == 2
Delayed::Worker.logger = Rails.logger
#Delayed::Worker.new.work_off.should == [2,0]
Delayed::Worker.new.work_off
Delayed::Job.count.should == 0 # this is where it fails
This is the error:
1) JobsController POST create with valid params queues up delayed job and fires
Failure/Error: Delayed::Job.count.should == 0
expected: 0
got: 2 (using ==)
For some reason it seems like it is not firing.
You can try using
Delayed::Worker.new(quiet: false).work_off
to debug the result of your background jobs; this could help you find out whether the fact that they're supposed to run in the future is messing with the assertion itself.
Don't forget to remove the quiet: false when you're done, otherwise your tests will always output the results of the background jobs.
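If the jobs really are scheduled with a future run_at, one hedged workaround in the spec is to push run_at into the past before calling work_off, since the worker only picks up jobs that are due:

```ruby
# In the spec, after the jobs have been enqueued:
Delayed::Job.update_all(run_at: 1.minute.ago)
Delayed::Worker.new.work_off.should == [2, 0]
```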
For some reason, a simple piece of decorator code fails on my production machine but runs fine on development[1].
I dumbed it down, and found that the following is the simplest failing piece:
Spree::Variant.class_eval do
def price=(value)
self.price = normalize_number(value)
end
end
Failing with SystemStackError (stack level too deep):
Debugging shows me that, indeed, the method keeps calling itself: self.price = invokes price= again.
What is a usual Rails/Ruby pattern to tackle this? What I want is:
When attribute_foo=(bar) gets called, delegate it to my custom code, where I can run the passed bar through a small piece of custom code. Then assign that altered bar to attribute_foo.
[1]:
The only difference is the Ruby patch version and the fact that the production machine has 64-bit version, versus 32bit-version on dev: ruby 1.8.7 (2011-02-18 patchlevel 334) [x86_64-linux].
The solution was simple: just use write_attribute.
Spree::Variant.class_eval do
def price=(value)
write_attribute(:price, normalize_number(value))
end
end
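The underlying recursion issue is not Rails-specific. A plain-Ruby sketch with a hypothetical Product class (and a made-up normalization) shows why assigning through the setter recurses, and how writing the underlying storage directly, the plain-Ruby analog of write_attribute, avoids it:

```ruby
class Product
  attr_reader :price

  # Writing `self.price = ...` in here would call this very method again
  # and blow the stack. Writing the instance variable directly (the
  # plain-Ruby analog of write_attribute) does not recurse.
  def price=(value)
    @price = value.to_s.tr(',', '.').to_f # made-up normalization
  end
end

item = Product.new
item.price = "12,50"
puts item.price
```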
I have two rake tasks that I'd like to run nightly. I'd like them to log to one file. I thought this would do the trick (got it here: Rake: logging any task executing):
application.rb
module Rake
  class Task
    alias_method :origin_invoke, :invoke if method_defined?(:invoke)

    def invoke(*args)
      @logger = Logger.new('rake_tasks_log.log')
      @logger.info "#{Time.now} -- #{name} -- #{args.inspect}"
      origin_invoke(*args)
    end
  end
end
and then in the rakefile:
task :hello do
  @logger.warn "Starting Hello task"
  puts "Hello World!"
  puts "checking connection"
  checkConnection
  puts "done checking"
  @logger.debug "End hello rake task"
end
But when I run the task I get:
private method 'warn' called for nil:NilClass
I've tried a couple of flavors of that call to logging (@, @@, no @) to no avail, and I've read several threads on here about it. The rubyonrails.org site doesn't mention logging in rake tasks. The tasks I'm invoking are fairly complex (about 20-40 minutes to complete), so I'll really want to know what went wrong if they fail. For DRY reasons, I'd prefer to create the logger object only once.
Unless you're wrapping everything in giant begin/rescue's and catching errors that way, the best way to log errors is to catch all output from stderr and stdout with something like:
rake your:job >> /var/log/rake.log 2>&1
You could also set your Rails environment to use the system logger as well.
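For what it's worth, the alias-and-wrap idea from the question is sound plain Ruby once instance variables are spelled consistently; a sketch with a made-up class standing in for Rake::Task:

```ruby
class FakeTask # stands in for Rake::Task
  def invoke(*args)
    "invoked with #{args.inspect}"
  end
end

class FakeTask
  alias_method :origin_invoke, :invoke

  # Wrap the original method, record the call, then delegate to it.
  def invoke(*args)
    (@calls ||= []) << args
    origin_invoke(*args)
  end

  attr_reader :calls
end

t = FakeTask.new
puts t.invoke(1, 2) # the wrapper ran, then delegated to the original
puts t.calls.length
```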
I ended up solving this (or at least well enough) by making a "log" task and depending on that in other tasks. Not really ideal, since that means having to include that dependency in any new task, but I have only a few tasks so this will do fine. I'm aware that there is a "file" task but it didn't seem to want to work in Windows, so I chose this because it seems to be more cross platform and it's more explicit.
I need a logger object because I am passing that object into some method calls in the [...] sections. There's enough begin/rescue/end in there that writing to the output stream wouldn't work (I think).
@log_file = "log/tasks.log"

directory "log"

task :check_log => ["log"] do
  log = @log_file
  puts 'checking log existence'
  unless FileTest.exists?("./#{log}")
    puts 'creating log file'
    File.open(log, 'w') {}
  end
end
task :check_connection => [:check_log] do
  begin
    conn = Mongo::Connection.new
    [...]
  end
end
task :nightly_tasks => [:check_connection, :environment] do
  for i in 1..2
    logger.warn "#########################"
  end
  [...]
  logger.warn "nightly tasks complete"
end
def logger
  @@logger ||= Logger.new(File.join(Rails.root, @log_file))
end