Resque on Heroku not running properly - ruby-on-rails-3

I'm using Resque + RedisToGo for my background jobs, and I am having trouble running background tasks on Heroku. It's strange: the first time I run a background job, it gets executed without any issues, but when I run it again, it doesn't execute. I would have to run "heroku restart" every other time for a background job to complete successfully...
For example, if I have the following code in my Resque background worker:
module EncodeSong
  @queue = :encode_song

  def self.perform(media_id, s3_file_url)
    puts 'foobar'
    media = Media.find(media_id)
    puts 'media id is supposed to be here'
    puts media.id
    puts s3_file_url
  end
end
In "heroku console", I do:
Resque.enqueue(EncodeSong, 26, 'http://actual_bucket_name.s3.amazonaws.com/unencoded/users/1/songs/test.mp3')
In "heroku logs", I can see the four "puts" outputs when I run the above code for the first time. But running it a second time only prints 'foobar'; the other "puts" are not displayed...
I suspect that it is not able to run "media = Media.find(media_id)" the second time around. This only happens on the production server on Heroku; on our local development machine, we do not experience this problem. Does anyone know what is wrong?
PS. I have tried enabling Heroku logs but still don't get any useful output. I have followed this tutorial to set up Heroku with Resque.

It seems like your job is failing (you're probably right about the 'media = Media.find' line). Look through the Resque admin interface for failed jobs; there you'll find the backtrace of each failed job.
Hope this helps.
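If the admin interface isn't handy, the failure backend can also be inspected from "heroku console": `Resque::Failure.count` and `Resque::Failure.all(offset, count)` return the recorded failures as hashes. As a sketch, a small helper to summarize one such hash (the exact keys shown are an assumption based on Resque's default Redis failure backend) might look like:

```ruby
# Summarize one failure entry as returned by Resque::Failure.all.
# The entry layout (payload/exception/error keys) follows Resque's
# default Redis failure backend; treat it as an assumption.
def summarize_failure(entry)
  payload = entry.fetch("payload", {})
  args = Array(payload["args"]).join(", ")
  "#{payload['class']}(#{args}) failed: #{entry['exception']} - #{entry['error']}"
end
```

Something like `Resque::Failure.all(0, 20).each { |f| puts summarize_failure(f) }` would then print a one-line summary per failed job, including the exception class, which should reveal whether `Media.find` is raising.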

Related

Running sidekiq jobs against more than one database?

I have one Rails app which uses different databases depending on the domain name (i.e. it supports multiple websites). This works by loading up different environments, without issue.
I am trying to figure out how to run the same set of Sidekiq jobs for each of them.
Sidekiq runs on a worker-server instance.
I have tried running a second instance of Sidekiq on the command line of the worker, giving it a different pidfile, logfile, environment and config file.
Problem 1: In the dashboard, all recurring tasks listed in the first instance's config file are gone, and only the task from my second instance's config file appears on the recurring jobs tab.
Problem 2: If I try to enqueue that job, I get uninitialized constant JofProductUpdateLive. I am guessing this is because I defined the class in app/jobs/jof_product_update_live.rb on the worker, and it is being looked for on the master server?
Problem 3: If my theory about the error is correct and I place that file on the master server, it seems to me it will run with environment/db1, and I'm not sure how to run it with db2/environment2.
I'm seeking any advice on how to set something like this up, as I have tried every idea that came my way, so far with zero success. I have also combed through every forum I could find on Sidekiq, to no avail.
Thanks for any help!
Check out the Apartment gem and apartment-sidekiq.
https://github.com/influitive/apartment-sidekiq
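As I understand those gems, you declare the tenants (databases/schemas) once, and apartment-sidekiq records the enqueuing tenant with each job so the worker switches back into it before running. A minimal sketch, assuming hypothetical tenant names and reusing the job class from the question:

```ruby
# config/initializers/apartment.rb -- tenant names are placeholders
Apartment.configure do |config|
  config.tenant_names = %w[site_db1 site_db2]
end

# Enqueue the same job once per tenant; apartment-sidekiq's middleware
# is expected to re-enter the right tenant inside the worker process.
Apartment.tenant_names.each do |tenant|
  Apartment::Tenant.switch(tenant) do
    JofProductUpdateLive.perform_async
  end
end
```

This keeps a single Sidekiq process and a single job class, instead of one Sidekiq instance per database.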

inittab respawn of Node.js too fast

So I am trying to keep my Node server on an embedded computer running when it is out in the field. This led me to leveraging inittab's respawn action. Here is the line I added to inittab:
node:5:respawn:node /path/to/node/files &
I know for a fact that when I start up this Node application from the command line, it does not get to the bottom of the main body and console.log "done" until a good 2-3 seconds after I issue the command.
So I feel like in that 2-3 second window the OS just keeps firing off respawns of the Node app. In fact, I see in the error logs that the kernel ends up killing off a bunch of node processes because it is running out of memory, and I also get the "'node' process respawning too fast: will suspend for 5 minutes" message.
I tried wrapping this in a script; that didn't work. I know I can use crontab, but that only checks every minute. Am I doing something wrong, or should I take a different approach altogether?
Any and all advice is welcome!
TIA
Surely too late for you, but in case someone else runs into this problem: try removing the & from the command invocation.
What happens is that when the command goes to the background (because of the &), the parent (init) sees that it exited, and respawns it. The result is a storm of new instantiations of your command.
Worse, you mention embedded, so I guess you are using BusyBox, whose init won't rate-limit the respawning as other implementations would. So the respawning will only end when the system is out of memory.
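In other words, the entry needs to keep node in the foreground so init can supervise it directly. With the & removed, the line from the question becomes:

```
node:5:respawn:node /path/to/node/files
```

init now only respawns the process when node itself actually exits.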
inittab is overkill for this. What I found I needed was a process monitor. I found one that is lightweight and effective, and it has some good reports of working well out in the field: http://en.wikipedia.org/wiki/Process_control_daemon
Using it entails configuring the daemon to start and monitor your Node.js application for you.
That is a solution that works from the OS side.
Another way to do it is as follows: if you are trying to keep Node.js running, as I was, there are several Node.js modules meant to keep other Node.js apps running, among them forever and respawn. I chose respawn.
This method entails starting one app, written in Node.js, that uses the respawn module to start and monitor the actual Node.js app you wanted to keep running in the first place.
Of course, the downside is that if the Node.js engine (V8) goes down altogether, then both your monitoring and monitored processes go down with it. But it's better than nothing!
PCD would be the ideal option: it would probably only go down if the OS goes down, and if the OS goes down then hopefully one has a watchdog in place to reboot the device/hardware.
Niko

Running Rails code/initializers but not via Rake

I keep running into a recurring issue with my application. Basically, I have certain code that I want to run when the server first starts up, to check whether certain things have been defined (e.g. a schedule, particular columns in the database, the existence of files) and then act accordingly.
However, I definitely don't want this code to run when I'm starting a Rake task (or doing a 'generate', etc.). For example, I don't want the database fields to be checked under Rake, because the Rake task might be the very migration that defines those fields. Another example: I have a dynamic schedule for Resque, but I don't want to load it when starting the Resque workers. And so on and so forth...
And I definitely need the Rake tasks to be loading the environment!
Is there any way of determining how the application has been loaded? I do want to run the code when it's loaded via 'rails server', Apache/Passenger, the console, etc., but not at other times.
If not, where or how could you define this code to ensure it is only executed in the manner described above?
The easiest way is to check an environment variable in your initialization code, with something like
if ENV['need_complex_init']
  do_complex_init
end
and start the application with need_complex_init=1 rails s.
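A small helper makes the flag check explicit and easy to test. Here is a minimal sketch; the variable name follows the answer above, while the set of accepted truthy values is an assumption:

```ruby
# Decide whether the expensive startup checks should run, based on an
# environment flag such as `need_complex_init=1 rails s`.
# Accepts "1", "true", "yes" (case-insensitive) as truthy -- an
# illustrative convention, not something Rails mandates.
def complex_init_needed?(env = ENV)
  %w[1 true yes].include?(env["need_complex_init"].to_s.downcase)
end
```

In the initializer you would then write `do_complex_init if complex_init_needed?`; Rake tasks and generators, started without the flag, skip the checks automatically.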

Lua CGI with lighttpd: 500 internal server error

I'm running Lua CGI scripts with lighttpd on an embedded device. The web client attempts to run three scripts via POST every 3 seconds.
Most of the time this works, but from time to time I get a 500 internal server error, as if the server failed to run the script, even though nothing has changed, and in 'top' I see that the CPU is idle most of the time.
I'm new to web development; any ideas?
If I were trying to solve this problem I would start with:
1) Look in /var/log/lighttpd/error.log to see what lighttpd reports when the failure occurs.
2) Write a very simple Lua CGI script that does something traceable, like touching a file with the current Unix time as its name, and hit it every 3 seconds instead of your script. This will help you figure out whether the problem is in the CGI layer or in your script.
3) Run your script outside CGI repeatedly, in a loop, to see if it ever fails.

Delayed job not executed despite disappearing from delayed_jobs table immediately

I implemented a delayed job in my Rails application, following the instructions.
I start the Rails app
I click on a link that launches the delayed job
The job is visible in the database's delayed_jobs table
I run rake jobs:work
The job disappears from the table, so I guess the job has been performed
BUT, problem: /tmp/job.log has not been written (which is what the job should have done)
The job:
class CsvImportJob
  def perform
    File.open('/tmp/job.log', 'w') { |f| f.write("Hello") }
  end
end
The call:
job = CsvImportJob.new
job.delay.perform
Nothing in the logs.
The rake jobs:work terminal says nothing after its start message:
[Worker(host:nico pid:25453)] Starting job worker
Nothing happens either when I launch the job while rake jobs:work is running.
In contrast, when the line "hello".delay.length is executed, delayed_job processes it and the message "String#length completed after 0.0556 - 1 jobs processed at 3.6912 j/s, 0 failed" does appear.
See https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-undefined_method_xxx_for_class in the documentation.
Even the delayed_job author doesn't know the reason; it somehow depends on the web server you run it on. Try the wiki's recommendation.
See also delayed_job: NoMethodError
I'm a little late to the party, but also look at:
https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-jobs_are_silently_removed_from_the_database
Your case doesn't sound like a YAML deserialization error, but (as the wiki suggests) you might set Delayed::Worker.destroy_failed_jobs = false so that failed jobs stay in the table and you can examine the cause of the error.
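For reference, that setting goes in an initializer; a minimal sketch (the file path is conventional, not mandated):

```ruby
# config/initializers/delayed_job.rb
# Keep failed jobs in the delayed_jobs table instead of deleting them,
# so their last_error column can be inspected after a failure.
Delayed::Worker.destroy_failed_jobs = false
```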
Update:
As I think about it: are you sure that the CsvImportJob class is known to the worker task? That is, is csv_import_job.rb defined in one of the "well known" directories for a Rails class? If not, the worker task won't be able to deserialize it, which would lead to exactly the behavior you're seeing.
If for some reason csv_import_job.rb is not in a well-known directory, you can always require it from an initialization file; that should fix the problem.
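A sketch of that workaround, assuming a purely hypothetical lib/jobs location (substitute wherever the file actually lives):

```ruby
# config/initializers/load_jobs.rb -- make the job class loadable by the
# worker when its file is not in an autoloaded directory.
require Rails.root.join("lib", "jobs", "csv_import_job")
```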