Clockwork for Heroku scheduled task is causing NameError - ruby-on-rails-3

This is my first time using the Ruby on Rails 'clockwork' library. I'm getting the following error when my scheduled job tries to execute:
ERROR -- : uninitialized constant Delayed (NameError)
Here's the code in the job causing the error:
every(1.day, 'Queueing scheduled job', :at => '22:40') { Delayed::Job.enqueue ScheduledJob.new }
I followed Heroku's guide for using 'clockwork' (https://devcenter.heroku.com/articles/clock-processes-ruby), but I'm not entirely sure how the scheduled job is supposed to know what task to execute. Does it know simply because the task itself resides in lib/tasks?
My n00bie gut tells me that the NameError on Delayed is raised at the point where I should be identifying the task to run.
Any insight into this would be very much appreciated!

I don't know if you still have this problem. Do you have gem 'delayed_job_active_record' in your Gemfile? Have you followed the installation steps from here: https://github.com/collectiveidea/delayed_job/
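If the clock process never boots the Rails app, the Delayed constant won't exist when the block fires. Heroku's guide has the clock file require the app's boot and environment files before defining the schedule; a minimal sketch of such a clock.rb, assuming a standard Rails layout (adjust the relative requires to wherever your clock file actually lives):

# lib/clock.rb -- sketch: boot the Rails app so that Delayed::Job (from
# delayed_job_active_record) and ScheduledJob are defined when the block runs.
require File.expand_path('../../config/boot',        __FILE__)
require File.expand_path('../../config/environment', __FILE__)
require 'clockwork'

module Clockwork
  every(1.day, 'Queueing scheduled job', :at => '22:40') do
    Delayed::Job.enqueue ScheduledJob.new
  end
end

With the environment loaded, the job doesn't need to live in lib/tasks to be found: Delayed::Job.enqueue serializes the ScheduledJob instance into the delayed_jobs table, and the worker (rake jobs:work or a worker dyno) deserializes it and calls its perform method, as long as the ScheduledJob class itself is loadable by the worker.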

Related

Cron is not Working in Alfresco

I have written a cron job, defined in the scheduled-action-services-context.xml file, that should run every 30 minutes.
However, it is not working, and when I check the log I can find only this error.
My cron job also uses a Lucene search, so I believe the error is related to that. Kindly help me fix it. Here is the error:
ERROR [quartz.core.JobRunShell] [DefaultScheduler_Worker-8] Job jobGroup.jobD threw an unhandled Exception:
org.alfresco.repo.search.impl.lucene.LuceneQueryParserException: 03020086
The error log you show is most likely the reason your scheduled action is not working properly. In fact, it seems that the action is properly scheduled, but it then fails to complete because you provided an invalid Lucene query. Without the query itself or any other detail, such as the relevant Spring config or the action implementation, I can only tell you to:
double-check the Lucene query
verify that the error appears in the log precisely when you would expect your action to run

Delayed job not executed despite disappearing from delayed_jobs table immediately

I implemented a delayed job in my Rails application, following the instructions.
I start the Rails app
I click on a link that launches the delayed job
The job is visible in the database's delayed_jobs table
I run rake jobs:work
The job disappears from the table, so I guess the job has been performed
BUT, the problem: /tmp/job.log was not written (which is what the job should have done)
The job:
class CsvImportJob
  def perform
    File.open('/tmp/job.log', 'w') { |f| f.write("Hello") }
  end
end
The call:
job = CsvImportJob.new
job.delay.perform
Nothing in the logs.
The rake jobs:work terminal says nothing after its start message:
[Worker(host:nico pid:25453)] Starting job worker
Nothing happens either when I launch the job while rake jobs:work is running.
In contrast, when the line "hello".delay.length is executed, delayed_job processes it and the message String#length completed after 0.0556 - 1 jobs processed at 3.6912 j/s, 0 failed does appear.
See https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-undefined_method_xxx_for_class in the documentation.
Even the delayed_job author doesn't know the reason. It somehow depends on the web server you run it on. Try the wiki's recommendation.
See also delayed_job: NoMethodError
I'm a little late to the party, but also look at:
https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-jobs_are_silently_removed_from_the_database
Your case doesn't sound like a YAML deserialization error, but (as the wiki suggests) you might set Delayed::Worker.destroy_failed_jobs = false so the failed job stays in the table and you can examine the cause of the error.
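A minimal sketch of that setting, assuming it goes in a Rails initializer (the file name is hypothetical):

# config/initializers/delayed_job_config.rb
# Keep failed jobs in the delayed_jobs table so their last_error column
# can be inspected, instead of the row silently disappearing.
Delayed::Worker.destroy_failed_jobs = false
Delayed::Worker.max_attempts = 1  # assumption: fail fast while debugging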
Update:
As I think about it: are you sure that the CsvImportJob class is known to the worker process? That is, is csv_import_job.rb defined in one of the "well known" directories for a Rails class? If not, then the worker won't be able to deserialize it, which will lead to exactly the behavior you're seeing.
If for some reason csv_import_job.rb is not in a well-known directory, you can always require it from an initializer -- that should fix the problem.
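For example (a sketch; the path is an assumption, adjust it to wherever the class actually lives):

# config/initializers/load_jobs.rb
# In Rails 3, lib/ is not autoloaded by default, so require the job class
# explicitly so that both the web process and the worker can deserialize it.
require Rails.root.join('lib', 'csv_import_job').to_s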

Mysql2 Error MySQL server has gone away

I am getting this error occasionally. I have read some solutions on Stack Overflow, but they were about Rails 2 or the mysql gem. Any help will be appreciated.
ActiveRecord::StatementInvalid (Mysql2::Error: MySQL server has gone away
There are numerous causes for this error; see the page below for the possible ones. Perhaps your packet size is set too small.
http://dev.mysql.com/doc/refman/5.0/en/gone-away.html
I got this error while trying to import a large file through seeds.rb with rake db:seed by calling one statement:
ActiveRecord::Base.connection.execute(IO.read("path/to/file.sql"))
And I kept on getting ActiveRecord::StatementInvalid (Mysql2::Error: MySQL server has gone away...
SOLUTION
I resolved this by a combination of two things:
Add reconnect: true to the database specification in database.yml
Read the SQL file and execute the statements individually, like this:
f = File.new('path/to/file.sql')
while statements = f.gets("") do
  ActiveRecord::Base.connection.execute(statements)
end
I had to modify the SQL file to remove some comments -- they made ActiveRecord throw errors for some reason -- but that resolved my problem. A sketch of how that could be folded into the loop is below.
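The following is a sketch of the same loop that also skips full-line SQL comments, assuming the statements in the file are separated by blank lines (which is what f.gets("") splits on):

File.open('path/to/file.sql') do |f|
  while (statement = f.gets("")) do  # "" = paragraph mode: read up to the next blank line
    # Drop lines that are only comments; they can confuse the execute call.
    cleaned = statement.lines.reject { |line| line.strip.start_with?('--') }.join
    ActiveRecord::Base.connection.execute(cleaned) unless cleaned.strip.empty?
  end
end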
I experience exactly the same issue when I run rake db:reset in my development environment, but I never see this error when I run rake db:migrate:reset && rake db:seed.
It is very strange, but this may throw some light on the issue. I would be glad if my post leads to a solution somehow.
Maybe the server you are hosted on is overloaded, and in some cases the MySQL server cannot execute a query. Ask your hosting provider about performance monitoring tools, or tell them about this problem directly. This error message should be enough for them to give you an answer.

How do I allow a watchr script to be in the scope of my ActiveRecord models

I have a watchr script running on my Ruby on Rails 3.1 app and inside the script I need to make a call like: game = Game.find(0)
except that whenever the script is executed I receive this error: uninitialized constant Watchr::Script::EvalContext::Game (NameError)
I'm assuming that I have to require something at the beginning of the script, but I'm not sure what. In case it matters, the script is located at /data/xmlwatcher.watchr
The best way I found to do this was to put everything that touches the database into a Rake task and invoke that task with Rake::Task[].invoke. Inside the Rake task I invoke Rake::Task['environment'] first, and then it works.
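A sketch of that setup, with hypothetical file and task names (the watch pattern and the model call are only placeholders):

# lib/tasks/xml_watcher.rake
task :process_game_xml do
  Rake::Task['environment'].invoke  # boots Rails so ActiveRecord models are in scope
  game = Game.find(0)
  # ... work with the model here ...
end

# data/xmlwatcher.watchr
require 'rake'
load 'Rakefile'  # assumes the watchr process is started from the app root

watch('data/.*\.xml') do |match|
  Rake::Task['process_game_xml'].invoke
end

Note that Rake only runs an invoked task once per process, so a long-lived watchr script may need to call Rake::Task['process_game_xml'].reenable before invoking it again.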

Resque on Heroku not running properly

I'm using Resque + RedisToGo for my background jobs, and I am having trouble running background tasks on Heroku. It's strange: the first time I run a background job, it executes without any issues, but when I run it again, it doesn't execute. I have to run heroku restart every other time for a background job to complete successfully.
For example, if I have the following code in my Resque background action:
module EncodeSong
  @queue = :encode_song

  def self.perform(media_id, s3_file_url)
    puts 'foobar'
    media = Media.find(media_id)
    puts 'media id is supposed to be here'
    puts media.id
    puts s3_file_url
  end
end
In "heroku console", I do:
Resque.enqueue(EncodeSong, 26, 'http://actual_bucket_name.s3.amazonaws.com/unencoded/users/1/songs/test.mp3')
In "heroku logs", I can see the 4 "puts", when I run the above code for the first time. But running it a second time, only returns 'foobar'. The other "puts" are not displayed.....
I suspect that it is not able to run "media = Media.find(media_id)", the second time around. This is only happening on the production server on Heroku. On our local development machine, we do not experience this problem.... Anyone know what is wrong?
P.S. I have tried enabling Heroku logging but still don't get any useful output. I followed this tutorial to set up Heroku with Resque.
It seems like your job is failing (you're probably right about the media = Media.find line). Look through the Resque admin interface for failed jobs; there you'll find the backtrace of each failed job.
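If you'd rather check from heroku console than the web UI, Resque's failure backend can be queried directly; a quick sketch:

Resque::Failure.count      # number of failed jobs
Resque::Failure.all(0, 5)  # first five failures; each hash includes
                           # 'exception', 'error', 'backtrace' and 'payload'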
Hope this helps.