I have a Rails app with a worker (Worker App) that I want another Rails app (Requester App) to invoke. One option is to create a controller action on the Worker App that the Requester App can post to.
Is there a way to directly add the job to the Worker App's Redis server? I know I can just push the value into the redis server, but I'm not sure what format it should be in and I haven't found the documentation for it. Is this even possible, or is Resque doing a bunch of stuff I don't know about?
Looking at the Resque code you can push a job into the queue by doing the following:
Resque.push('my_queue', 'class' => 'MyQueue', 'args' => [ 123, 'bar'])
That will push a job onto the my_queue queue for the MyQueue class to perform.
Here's the piece of code of interest:
https://github.com/resque/resque/blob/master/lib/resque.rb#L142-L159
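To answer the "what format" part of the question: the payload Resque stores is plain JSON. This is a sketch based on reading the Resque source linked above; the key names shown are Resque's defaults, assuming no custom Redis namespace:

```ruby
require 'json'

# Resque keeps the set of known queue names in the 'resque:queues' set,
# and each queue's jobs as JSON strings in the list 'resque:queue:<name>'.
# The payload is just the job class name and its arguments:
payload = JSON.generate('class' => 'MyQueue', 'args' => [123, 'bar'])
puts payload  # => {"class":"MyQueue","args":[123,"bar"]}

# With the redis gem, the equivalent raw operations would be roughly:
#   redis.sadd('resque:queues', 'my_queue')
#   redis.rpush('resque:queue:my_queue', payload)
```

Using Resque.push is still safer, since it keeps you insulated from any changes to this internal format.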
Related
I would like to know if it's possible to create an infinite job/task (kind of like an Azure Worker Role) with Hangfire. I would like to queue emails in an Azure Queue using a Hangfire scheduled job (every 4 hours) and then run an infinite Hangfire fire-and-forget job/task (started when the website starts) to process (dequeue) each email and send it to Amazon SES every 200 milliseconds (5 emails per second). The infinite job needs to be up all the time in order to process newly queued emails.
So, performance-wise, is it okay to do that? How do I manage potential errors that could stop the infinite job, and how do I restart it if needed? In the infinite loop job, should I also create a new fire-and-forget task for each email, so that each email can be retried individually? My Hangfire server will be hosted in an Azure Website.
Finally, I'm doing all this because Amazon SES cannot queue emails. My Amazon AWS subscription authorizes me to send 15 emails per second.
Hope it's clear,
Thank you,
I'm not sure why you would want to use Hangfire for this purpose; Azure WebJobs seem to be quite a nice fit, since they can process messages in an Azure Queue out of the box.
https://azure.microsoft.com/en-us/documentation/articles/websites-dotnet-webjobs-sdk-storage-queues-how-to/#trigger
You can then enqueue a Hangfire job when the queue is filled with something.
public static void ProcessQueueMessage([QueueTrigger("emailqueue")] string email, TextWriter logger)
{
    BackgroundJob.Enqueue(() => SendEmail(email));
}
I have a listener object (using the Listen gem) that I'm adding to a constant within an initializer:
LISTENER = Listen.to(REPORT, ERROR, SENT) do |modified, added, removed|
  Communicator.notify(added)
end
LISTENER.ignore! /\.swp/
LISTENER.ignore /\.DS_Store/
I put a little admin interface around this functionality, and I display the status of the listener in the view.
In my deployment, I have a Utility instance where all my background jobs run. I may have 1 or many app servers spinning at one time, so I only want this listener to listen on the Utility instance. Sidekiq is my background processor. Therefore, in my admin interface I enlist a simple Sidekiq worker to spin up this listener.
When I obtain the status of the listener, it says it isn't running, but the process is there. This, of course, is because the app server is attempting to get the listener status from the constant on the application server!
How can I get the status of the object on the Sidekiq server?
Rails 4.2 has implemented GlobalID, and here is a good blog post outlining ActiveJob that covers using GlobalID (see the "You can pass live objects! OMG" section).
I know this is an old post but I just came across it and thought this answer might help someone.
Sidekiq and application servers do not share memory; even the application server instances do not share memory with each other. You will have to use a database to share this information between Sidekiq and your application.
Add the status of the listener to the database from your Sidekiq process, and read that value in your application server when handling the request.
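A minimal sketch of that pattern, assuming a hypothetical ListenerStatus ActiveRecord model with a boolean running column (the model and worker names are illustrative, not from the original post):

```ruby
# In the Sidekiq worker running on the Utility instance, persist the
# status after starting the listener:
class ListenerWorker
  include Sidekiq::Worker

  def perform
    listener = Listen.to(REPORT, ERROR, SENT) do |_modified, added, _removed|
      Communicator.notify(added)
    end
    listener.start
    ListenerStatus.first_or_create.update(running: true)
  end
end

# In the admin controller on any app server, read the shared record
# instead of the in-process constant:
#   @listener_running = ListenerStatus.first&.running
```

Remember to also clear the flag (or write a heartbeat timestamp) when the listener stops, or the stored status can go stale.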
It seems that you want to use Distributed Ruby (DRb): http://www.ruby-doc.org/stdlib-2.1.0/libdoc/drb/rdoc/DRb.html.
dRuby allows methods to be called in one Ruby process upon a Ruby object located in another Ruby process, even on another machine. References to objects can be passed between processes. Method arguments and return values are dumped and loaded in marshalled format. All of this is done transparently to both the caller of the remote method and the object that it is called upon.
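As a minimal self-contained illustration (both ends run in one script here; in practice the service would wrap the real listener on the Utility instance and the client would run in the app server):

```ruby
require 'drb/drb'

# A stand-in for the object holding listener state on the worker side.
class StatusService
  def listening?
    true # the real worker would delegate to the Listen listener here
  end
end

# Start a DRb server on an ephemeral local port...
DRb.start_service('druby://localhost:0', StatusService.new)

# ...and call it remotely, as the app server would:
client = DRbObject.new_with_uri(DRb.uri)
puts client.listening?  # => true

DRb.stop_service
```

Note that this couples the two processes at runtime: if the worker process is down, the remote call raises, so the database approach above is often the more robust option.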
I'm using Resque to run some jobs in the background; a client (user) initiates these by performing an action in the web app in their browser.
The problem is it takes several seconds for the action to be triggered. How can I speed it up? I need Resque to respond more instantly.
I'm using the default setup and configuration, nothing modified.
Are there any configuration guidelines or suggestions from the field to make Resque respond faster?
I'm running with 1 worker and small queues, 1 or 2 jobs at a time.
Resque workers check the queue every 5 seconds by default. Taken from the Resque page on GitHub:
start
loop do
  if job = reserve
    job.process
  else
    sleep 5 # Polling frequency = 5
  end
end
shutdown
Under "Polling frequency" it then says:
You can pass an INTERVAL option which is a float representing the polling frequency.
The default is 5 seconds, but for a semi-active app you may want to use a smaller value.
$ INTERVAL=0.1 QUEUE=file_serve rake environment resque:work
Also, you could have a look at something like beanstalkd instead; you can watch this RailsCast about it.
I am building an app that is using Twilio and we need to call Twilio's server a certain period after the call starts.
I'm using RoR and right now it's on Heroku, but we can move it elsewhere if need be.
I've looked at Delayed Jobs and Cron jobs but I don't know enough about it to know which route I should take.
It seems like delayed jobs and cron jobs are usually recurring (not very accurate with timing) and are started with rake, not by a user action, right?
What tool will give me minute accuracy for this use case?
Twilio Evangelist here.
I like to approach this with Redis-backed worker queues. Redis is a key-value store, which we can use with Resque (pronounced 'rescue') and Resque-Scheduler to provide a queueing mechanism. For example, we can have the application respond to user interaction by creating a call using the Twilio REST API. Then we can queue a worker task that will do something with the call a specified amount of time later. Here, I'm just going to get the status of the call after a few seconds (instead of waiting for the status callback when the call completes).
Heroku has a really good walkthrough on using Resque, and there is also an excellent RailsCasts episode on Resque with Rails. A data add-on called Redis Cloud is available from Heroku to provide a Redis server, although for development I run a local server on my computer.
I've created a simple app based on the Heroku tutorial and the RailsCasts episode. It has a single controller and a single model. I'm using a callback on the model to create an outbound Twilio call, such that when a new Call record is created, we initiate the outbound Twilio call:
class Call < ActiveRecord::Base
  # Twilio functionality is in a concern...
  include TwilioDialable

  after_create do |call|
    call.call_sid = initiate_outbound call.to,
                                      call.from,
                                      "http://example.com/controller/action"
    call.save
    call
  end
end
I'm using a concern to initiate the call, as I may want to do that from many different places in my app:
module TwilioDialable
  extend ActiveSupport::Concern
  include Twilio::REST

  def initiate_outbound(to, from, url)
    client = Twilio::REST::Client.new ENV['TW_SID'], ENV['TW_TOKEN']
    client.account.calls.create(to: to, from: from, url: url).sid
  end
end
This simply creates the outbound Twilio call, and returns the call SID so I can update my model.
Now that the call has been placed, I want to check on its status. As we want the call to ring a little first, we'll check it in 15 seconds. In my controller I use the Resque.enqueue_in method from Resque-Scheduler to manage the timing for me:
Resque.enqueue_in(15.seconds, MessageWorker, :call => @call.id)
This is instructing Resque to wait 15 seconds before performing the task. It is expecting to call a Resque worker class called MessageWorker, and will pass a hash with the parameters: {:call => @call.id}. To make sure this happens on time, I set RESQUE_SCHEDULER_INTERVAL to 0.1 seconds, so it will be very precise. The MessageWorker itself is fairly simple: it finds the call record, uses the call SID to get the updated status, and saves this to the database:
class MessageWorker
  @queue = :message_queue

  def self.perform(params)
    # Get the call ID to do some work with...
    call = Call.find(params["call"])
    # Set up a Twilio client helper with credentials from the environment...
    client = Twilio::REST::Client.new ENV['TW_SID'], ENV['TW_TOKEN']
    # Get the current status of the call from the Twilio REST API...
    status = client.account.calls.get(call.call_sid).status
    # Update my call object.
    call.status = status
    call.save
  end
end
When I ran this locally, I had to start the Redis server, the Resque worker rake task, and the Resque scheduler rake task, and I am using Foreman and Unicorn to run the Rails app. You can configure all of this in your Procfile for running on Heroku. The tutorial on Heroku is a great walkthrough for setting this up.
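For reference, the Procfile for that setup might look something like this (process names and the queue name are illustrative; the resque:scheduler task comes from the resque-scheduler gem):

```
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
resque: bundle exec rake environment resque:work QUEUE=message_queue
scheduler: bundle exec rake environment resque:scheduler
```

Locally, foreman start will bring up all three processes alongside your local Redis server.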
But now I can create a call and have it dial out to a phone. In the meantime I can refresh the calls/show/:id page and see the value magically updated when the worker process runs 15 seconds later.
I also find that resque-web is really useful for debugging Resque, as it makes it really easy to see what is happening with your tasks.
Hope this helps!
I am using Heroku to host a small app. It's running a screenscraper using Mechanize and Nokogiri for every search request, and this takes about 3 seconds to complete.
Does the screenscraper block anyone else who wants to access the app at that moment? In other words, is it in holding mode for only the current user or for everyone?
If you have only one Heroku dyno, then yes, it is the case that other users would have to wait in line.
Background jobs are for cases like yours where there is some heavy processing to be done. The process running Rails doesn't do the hard work up front; instead it triggers a background job to do it and responds quickly, freeing itself up to respond to other requests.
The data processed by the background job is viewed later: perhaps a few requests later, or whenever the job is done, loaded in by JavaScript.
Definitely. Because a dyno is single-threaded, if it's busy scraping then other requests will be queued until the dyno is free, or ultimately timed out if they wait for more than 30 seconds.
Anything that relies on an external service is best run through a worker via DJ (Delayed Job), even sending an email: your controller puts the message into the queue and returns the user to a screen, then the mail is picked up by DJ and delivered, so the user doesn't have to wait for the mailing process to complete.
Install the New Relic gem to see what your queue length is doing.
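As a sketch of that pattern with Delayed Job (the controller, model, and mailer names here are hypothetical, and .delay is DJ's standard way to defer a method call):

```ruby
class MessagesController < ApplicationController
  def create
    @message = Message.create(message_params)
    # .delay queues the delivery instead of sending it inline, so the
    # request returns immediately and a DJ worker sends the mail later.
    NotificationMailer.delay.message_received(@message.id)
    redirect_to @message, notice: 'Message sent!'
  end
end
```

Passing the record id rather than the object keeps the serialized job small and avoids stale data when the worker picks it up.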
John.
If you're interested in a full-service worker system, we're looking for beta testers of our Heroku app store app, SimpleWorker.com. It's integrated tightly with Heroku and offers a number of advantages over DJ.
It's as simple as a few lines of code to send your work off to our elastic worker cloud, and you can take advantage of massive parallel processing because we do not limit the number of workers.
Shoot me a message if you're interested.
Chad