I am trying to figure out how I can access the status of a background job using Sidekiq.
To check whether the job has finished, I check the size of the queue.
class PagesWorker
  include Sidekiq::Worker
  include Sidekiq::Status::Worker
  sidekiq_options queue: "high"

  def perform(ifile)
    system "rake db:..."
    # What do the following do? In what scenario would I need to set these?
    total 100
    at 5, "Almost done"
    store vino: 'veritas'
    vino = retrieve :vino
  end
end
class PagesController
  def submit
    job_id = SomeWorker.perform_async(file)
    flash[:notice] = "#{job_id} is being processed."
  end

  def finished
    queue = Sidekiq::Queue.new('high')
    if queue.size == 0
      flash[:success] = "Job is finished."
    elsif queue.size > 0
      flash[:notice] = "Job is being processed."
    end
  end
end
It prints "Job is finished" from the start (while the job is still running) because the queue size is always zero. Specifically, on the myrailsapp/sidekiq page I see the following:
The number of processed jobs increments by one every time I submit a job, which is correct.
The 'high' queue is there but its size is always 0, which isn't correct.
And there are no jobs listed under the queue, which again isn't correct.
However, the job is being processed and finishes successfully. Could the reason the job does not appear on the sidekiq page be that it finishes in less than a minute? Or maybe because the worker runs a rake process?
I have also checked the following in the terminal, while the process is running:
Sidekiq::Status::complete? job_id #returns false
Sidekiq::Status::status(job_id) #returns nothing
Sidekiq::Status::get_all job_id #returns {"update_time"=>"1458063970", "total"=>"100", "at"=>"5", "message"=>"Almost done"}, and after a while returns {}
To sum up, my queue is empty, my jobs list is empty, but my job is running. How is this possible? What can I do to track the status of my job?
EDIT
I use the job status to check whether it is finished or not.
Controller:
def finished
  job_id = params[:job_id]
  if Sidekiq::Status::complete? job_id == true
    flash[:notice] = "Job is finished."
  elsif Sidekiq::Status::complete? job_id == false
    flash[:notice] = "Job is being processed."
  end
end
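As an aside, Ruby's operator precedence makes the comparison above fragile: Sidekiq::Status::complete? job_id == true is parsed as complete?(job_id == true), so the method receives a boolean instead of the job id. A self-contained sketch (with a stand-in complete? method, not sidekiq's real one) shows the difference:

```ruby
# Stand-in for Sidekiq::Status::complete?, used only to demonstrate the parse.
def complete?(arg)
  arg == "abc123" # pretend "abc123" is the only finished job id
end

job_id = "abc123"

intended = complete?(job_id)        # the call you meant to make
actual   = complete? job_id == true # parsed as complete?(job_id == true)

puts intended # true
puts actual   # false
```

Writing the call with explicit parentheses, if Sidekiq::Status::complete?(job_id), avoids the problem entirely.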
Form:
= form_tag({action: :submit}, multipart: true) do # submit is my controller method as defined above
  = file_field_tag 'file'
  = submit_tag 'submit'
= flash[:notice] # defined above as "#{job_id} is being processed."
- job_id = flash[:notice].split(" ")[0]
= form_tag({action: :finished}) do
  = hidden_field_tag 'job_id', job_id
  = submit_tag 'check'
Routes:
resources :pages do
  collection do
    post 'submit'
    post 'finished'
  end
end
But Sidekiq::Status::complete? job_id never becomes true. I tail the log file to see when the process finishes, then click the check button, but I still get that it is being processed. Why?
As far as I understand, you are using the sidekiq-status gem, so you can retrieve the status as described here:
https://github.com/utgarda/sidekiq-status#retrieving-status
If you just want to know whether the job is queued, you can stop the running sidekiq process, enqueue the job with the sidekiq client, and then list the queues in redis-cli:
$ redis-cli
> select 0 # (or whichever namespace Sidekiq is using)
> keys * # (just to get an idea of what you're working with)
> smembers queues
> lrange queues:app_queue 0 -1
> lrem queues:app_queue -1 "payload"
From there you can see whether the job is queued. Once you start the sidekiq process again, you will see the job being processed. You can also add a puts in the perform method, and that output will appear in sidekiq's logs.
Related
I'm trying to get the provider_job_id inside the ActiveJob, but it is nil.
Using delayed_job as queue adapter
class Application < Rails::Application
  config.active_job.queue_adapter = :delayed_job
end
While debugging I found that provider_job_id is set inside the after_enqueue callback, but it is nil inside the before_perform callback.
class MyJob < ApplicationJob
  def perform(email)
    # get provider_job_id for another purpose
    jid = self.provider_job_id # this is nil
  end
end
Job is queued as
job = MyJob.set(wait: 30.minutes).perform_later('dummy#example.com')
jid = job.provider_job_id # getting provider_job_id here
In my case, in one specific scenario I invoke the job immediately after it is enqueued, using delayed_job like this:
Delayed::Job.find(jid).invoke_job
Does anyone have an idea why provider_job_id is nil inside the job class?
Also, how can we trigger an Active Job to run immediately once it is enqueued, the way delayed_job can?
I can see a fix was applied for the sidekiq adapter here,
but there is no equivalent for delayed_job here.
I have an application which uses Sidekiq. The web server process will sometimes put a job on Sidekiq, but I won't necessarily have the worker running. Is there a utility which I could call from the Rails console which would pull one job off the Redis queue and run the appropriate Sidekiq worker?
Here's a way that you'll likely need to modify to get the job you want (maybe like g8M suggests above), but it should do the trick:
> job = Sidekiq::Queue.new("your_queue").first
> job.klass.constantize.new.perform(*job.args)
If you want to delete the job:
> job.delete
Tested on sidekiq 5.2.3.
I wouldn't try to hack sidekiq's API to run jobs manually, since it could leave some unwanted internal state behind, but I believe the following code would work:
# Fetch the queue
queue = Sidekiq::Queue.new # default queue
# OR
# queue = Sidekiq::Queue.new(:my_queue_name)

# Fetch the job
# job = queue.first
# OR
job = queue.find do |job|
  meta = job.args.first
  # => {"job_class"=>"MyJob", "job_id"=>"1afe424a-f878-44f2-af1e-e299faee7e7f", "queue_name"=>"my_queue_name", "arguments"=>["Arg1", "Arg2", ...]}
  meta['job_class'] == 'MyJob' && meta['arguments'].first == 'Arg1'
end

# Remove the job from the queue so it doesn't get processed twice
job.delete

meta = job.args.first
klass = meta['job_class'].constantize
# => MyJob

# Performs the job without going through the job framework; does not count as a performed job, and so on.
klass.new.perform(*meta['arguments'])
# OR
# Perform the job through ActiveJob's API so it counts as a performed job, and so on.
# klass.new(*meta['arguments']).perform_now
Please let me know if this doesn't work or if someone knows a better way to do this.
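The deserialization step in the snippet above can be illustrated without Redis or Rails at all. Here is a self-contained sketch using plain Ruby's Object.const_get in place of ActiveSupport's constantize (the MyJob class and its arguments are made up for illustration):

```ruby
# A throwaway job class standing in for a real ActiveJob worker.
class MyJob
  def perform(*args)
    "performed with #{args.join(', ')}"
  end
end

# The wrapper hash ActiveJob stores as the Sidekiq job's first argument.
meta = { "job_class" => "MyJob", "arguments" => ["Arg1", "Arg2"] }

# constantize is ActiveSupport; Object.const_get does the same job here.
klass  = Object.const_get(meta["job_class"])
result = klass.new.perform(*meta["arguments"])
puts result # => "performed with Arg1, Arg2"
```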
I am using delayed_job in a Rails application. I want to notify Airbrake whenever a delayed job fails. I checked on GitHub and learned about the failure method.
I want to send the last_error attribute of the failed delayed job to Airbrake, something like this:
class ParanoidNewsletterJob < NewsletterJob
  def perform
  end

  def failure
    Airbrake.notify(:message => self.last_error, :error_class => self.handler)
  end
end
But it gives me the following runtime error:
undefined method `last_error' for #<struct ParanoidNewsletterJob>
Please help me figure out how I can notify Airbrake the last_error of a failed delayed_job.
Many Thanks!!
You should be able to pass the job to the failure method, and then extract the last_error from the job. i.e.
def failure(job)
  Airbrake.notify(:message => job.last_error, :error_class => job.handler)
end
This should work fine:
def failure(job)
  Airbrake.notify(:message => job.error, :error_class => job.error.class, :backtrace => job.error.backtrace)
end
There are two ways you can achieve what you want:
A job-specific approach, which applies only to the job type you choose, by implementing the failure method with the job as its parameter. The job contains error and last_error. This is what the other answers describe.
A global approach, where a plugin is registered to apply to every job type. This is desirable if all jobs need to be monitored. The plugin can perform actions around various events in a job's lifecycle. For example, below is a plugin that updates last_error, in case we want to process it before it is stored to the database:
require 'delayed_job'

class ErrorDelayedJobPlugin < Delayed::Plugin
  def self.update_last_error(event, job)
    unless job.last_error.nil?
      job.last_error = job.last_error.gsub("\u0000", '') # Replace null bytes
      job.last_error = job.last_error.encode('UTF-8', invalid: :replace, undef: :replace, replace: '')
    end
  rescue => e
    # Swallow sanitization errors so the failure callback itself cannot fail
  end

  callbacks do |lifecycle|
    lifecycle.around(:failure) do |worker, job, *args, &block|
      update_last_error(:around_failure, job)
      block.call(worker, job)
    end
  end
end
Basically, it will be called whenever any job fails. For details on how these callbacks work, you can refer to A plugin to update last_error in Delayed Job.
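The sanitization itself can be tried on a plain string, independent of delayed_job (the error text below is made up for illustration):

```ruby
# An error message containing a null byte, which e.g. Postgres text columns reject.
raw = "PG::Error: failure\u0000 while saving"

clean = raw.gsub("\u0000", '') # strip null bytes
# Replace any invalid/undefined byte sequences so the string is valid UTF-8.
clean = clean.encode('UTF-8', invalid: :replace, undef: :replace, replace: '')

puts clean # => "PG::Error: failure while saving"
```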
I have a controller action that creates 2 background jobs to be run at a future date.
I am trying to test that the background jobs get run
# spec/controllers/job_controller_spec.rb
setup
post :create, {:job => valid_attributes}
Delayed::Job.count.should == 2
Delayed::Worker.logger = Rails.logger
#Delayed::Worker.new.work_off.should == [2,0]
Delayed::Worker.new.work_off
Delayed::Job.count.should == 0 # this is where it fails
This is the error:
1) JobsController POST create with valid params queues up delayed job and fires
   Failure/Error: Delayed::Job.count.should == 0
     expected: 0
          got: 2 (using ==)
For some reason it seems like it is not firing.
You can try using
Delayed::Worker.new(quiet: false).work_off
to debug the result of your background jobs. This could help you find out whether the fact that they're supposed to run in the future is interfering with the assertion itself.
Don't forget to remove quiet: false when you're done, otherwise your tests will always print the output of the background jobs.
Let's say I have delayed_job running in the background. Tasks can be scheduled or run immediately (some are long tasks, some are not).
If a task takes too long, a user should be able to cancel it. Is that possible in delayed_job? I checked the docs and can't seem to find a terminate method or anything similar. They only provide a hook for when delayed_job itself is cancelled (thus cancelling all tasks; I need to cancel just one running task).
UPDATE
My boss (who's a great programmer, btw) suggested using Ruby threading for this feature of ours. Is this possible? Like creating a new thread per task and killing that thread while it's running?
something like:
t1 = Thread.new { task.run }
self.delay.t1.join (?) -- still reading up on threads, so correct me if I'm wrong
Then to stop it I'll just use t1.kill (?) -- again, I don't know yet.
Is this possible? Thanks!
It seems my boss hit the spot, so here's what we did (please tell us if there's any chance this is bad practice, so I can bring it up):
First, we have a Job model that has def execute! (which runs what it's supposed to do).
Next, we have a delayed_job worker in the background, listening for new jobs. When you create a job, you can schedule it to run immediately or on a recurring schedule (we use rufus for this one).
When a job is created, it checks whether it's supposed to run immediately. If it is, it adds itself to the delayed_job queue. The execute function creates a Thread, so each job has its own thread.
A user in the UI can see whether a job is running (it has a started_at and no finished_at). If it IS running, there's a button to cancel it. Cancelling it just sets the job's canceled_at to Time.now.
While the job is running, it also checks whether it has a canceled_at set, or whether Time.now > finished_at. If so, it kills the thread.
Voila! We've tested it for one job and it seems to work. Now the only problem is scaling...
If you see any problems with this, please say so in the comments, or give more suggestions :) I hope this helps someone too!
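The cancel-flag idea above can be sketched in plain Ruby, with a local variable standing in for the job record's canceled_at column (timings shortened so the sketch runs in well under a second):

```ruby
canceled_at = nil # stands in for the job record's canceled_at column

worker = Thread.new do
  100.times do
    break if canceled_at # the job checks the flag between slices of work
    sleep 0.01           # stand-in for a slice of the real work
  end
  :stopped
end

sleep 0.05
canceled_at = Time.now # what the UI's cancel button does
worker.join

puts worker.value # the thread exits early instead of finishing 100 iterations
```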
Delayed::Job is an ActiveRecord::Base model, so you can query it just like you normally would, e.g. Delayed::Job.all(:conditions => {:last_error => nil}).
Delayed::Job objects have a payload field which contains a serialized version of the method or job that you're attempting to run. This object is accessed via their #payload_object method, which loads the object in question.
You can combine these two capabilities to make queryable job workers. For instance, if you have a User model and the user has a paperclip'ed :avatar, you can write a method to delete unprocessed jobs like so:
class User < ActiveRecord::Base
  has_attached_file :avatar, PaperclipOptions.new(:avatar)
  before_create :process_avatar_later

  def process_avatar_later
    filename = Rails.root.join('tmp/avatars_for_processing/', self.id)
    open(filename, 'w') { |file| file << self.avatar.to_file }
    Delayed::Job.enqueue(WorkAvatar.new(self.id, filename))
    self.avatar = nil
  end

  def cancel_future_avatar_processing
    WorkAvatar.future_jobs_for_user(self.id).each(&:destroy)
    # ummm... tell them to reupload their avatar, I guess?
  end

  class WorkAvatar < Struct.new(:user_id, :path)
    def user
      @user ||= User.find(self.user_id)
    end

    def self.all_jobs
      Delayed::Job.scoped(:conditions => 'payload LIKE "%WorkAvatar%"')
    end

    def self.future_jobs_for_user(user_id)
      all_jobs.scoped(:conditions => {:locked_at => nil}).select do |job|
        job.payload_object.user_id == user_id
      end
    end

    def perform
      user.avatar = File.open(path, 'rb')
      user.save
    end
  end
end
It's possible someone has made a plugin that makes jobs queryable like this; searching GitHub might be fruitful.
Note also that if you want to cancel a job that already has locked_at and locked_by set, you'd have to use whatever process-monitoring tools you have to kill the worker process that is currently executing it.
You can wrap the task in a Timeout statement.
require 'timeout'

class TaskWithTimeout < Struct.new(:parameter)
  def perform
    Timeout.timeout(10) do
      # ...
    end
  rescue Timeout::Error => e
    # the task took longer than 10 seconds
  end
end
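A runnable variant of the same pattern, with the limit shortened to 0.2 seconds so the rescue path can be seen immediately (SlowTask and its return values are made up for illustration):

```ruby
require 'timeout'

class SlowTask
  def perform
    Timeout.timeout(0.2) do
      sleep 1 # stand-in for work that exceeds the limit
      :finished
    end
  rescue Timeout::Error
    :timed_out # the task took longer than allowed
  end
end

puts SlowTask.new.perform # => timed_out
```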
No, there's no way to do this. If you're concerned about a runaway job, you should definitely wrap it in a timeout as Simone suggests. However, it sounds like you're in search of something more, though I'm unclear on your end goal.
There will never be a way for a user to have a "cancel" button, since this would involve finding a way to communicate directly with the worker process running the job. It would be possible to add a signal handler to the worker so that you could do something like kill -USR1 pid to have it abort the job it's currently working on and move on. Would this accomplish your goal?