I have a Rails 3.2.14 app with a rake task that listens for GPS coordinates. I'm implementing a new method of collecting GPS data that doesn't require a rake task to listen, so I'm trying to kill the rake task(s) that are spun up on my production server.
I did a ps aux | grep rake to get a list of the rake instances I want to kill, and issued a kill "pid" and even the ugly kill -9 "pid", but the rake tasks keep respawning. There are three instances of the rake task running that I need to kill. Is there a better way to kill these rake tasks than what I'm doing? I've also tried killall -9 rake, but it says rake: no process found.
Any thoughts on how to stop this task would be greatly appreciated.
Actually, I was able to kill the rake processes with a Capistrano task I wrote a while back. That killed all of the instances; now I just need to remove the task from my app entirely.
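For anyone hitting the same thing, the server-side part of such a task can be as simple as the sketch below. The gps:listen task name is a placeholder for whatever your listener task is actually called, and pkill -f matches against the full command line.
# list the matching rake processes with their parent pids (a task that keeps
# respawning usually has a supervisor such as god or monit as its parent);
# 'gps:listen' is a placeholder task name
ps -eo pid,ppid,cmd | grep '[g]ps:listen'
# kill every process whose full command line matches the rake task
pkill -9 -f 'rake gps:listen'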
This is my command:
bundle exec rake resque:work QUEUE="*" --trace
I want to run this command on my server as a background process.
Please help me.
A method I often use is:
nohup bundle exec rake resque:work QUEUE="*" --trace > rake.out 2>&1 &
This will keep the task running even if you exit your shell. Then if I want to just observe trace output live, I do:
tail -f rake.out
And you can examine rake.out at any time.
If you need to kill it before completion, you can find it with ps and kill the pid.
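For example, something along these lines (adjust the pattern to whatever command you backgrounded; the bracketed first character keeps grep itself out of the results):
# find the backgrounded worker and note its pid
ps aux | grep '[r]esque:work'
kill <pid>   # replace <pid> with the number from the ps output
# or match and signal it in one step
pkill -f 'rake resque:work'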
Just in case somebody finds this 4 years later: there is an elegant way of doing this now. For example, if you want to run Sidekiq in the background you can do:
bundle exec sidekiq -e production -d -L ./log/sidekiq.log
The -d flag daemonizes the process so it runs in the background, but you will also need -L to provide a logfile, otherwise it will refuse to daemonize.
Tested with bundler version 1.15.4
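Once it is daemonized, you can check that it is up and follow the logfile you passed with -L, for example:
# confirm the daemonized sidekiq process is running
ps aux | grep '[s]idekiq'
# follow the logfile given via -L
tail -f ./log/sidekiq.log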
Update Oct 2019.
While the command still works in general, the specific one above will no longer work for Sidekiq 6.0+; you'll need to use Upstart or systemd if you use Linux: https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
I'm getting occasional "file too short" messages when running bundle exec rake:
rake aborted!
/var/lib/jenkins/.rvm/gems/ruby-1.9.3-p327/bundler/gems/amatch-0f95f4ce269f/lib/amatch_ext.so: file too short - /var/lib/jenkins/.rvm/gems/ruby-1.9.3-p327/bundler/gems/amatch-0f95f4ce269f/lib/amatch_ext.so
Is there a way to make bundler more fault-tolerant and try to re-run when it encounters these spurious failures?
Why might they be happening in the first place? Multiple processes may be executing rake tasks simultaneously - can this corrupt rvm's gem repository, and if so how do I avoid the problem?
If you use it from multiple processes, then use bundle --standalone, assuming every process is run from a different path. If they all use the same path, you could try bundle --path /path/for/gems$$/ (the $$ will be replaced with the process pid), but --path is a remembered option, so this will not help: only the last run will be visible in that directory.
The best option would be to limit the number of runs that happen at the same time.
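On Linux, one simple way to do that is to serialize the invocations behind a lock file with flock(1). This is only a sketch and assumes the jobs can afford to wait on each other; the lock file path is arbitrary.
# every job that uses the same lock file runs one at a time, so only one
# bundle/rake run touches the shared gem directory at any moment
flock /tmp/bundle-rake.lock bundle exec rake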
Another option would be modifying GEM_HOME at runtime, but this can get complicated with Jenkins, so most likely it would not work:
# save the current gem home and point GEM_HOME at a private temporary copy
OLD_GEM_HOME="$GEM_HOME"
export GEM_HOME="$(mktemp -d)"
cp -r "$OLD_GEM_HOME/." "$GEM_HOME/"
bundle install
# other commands
# clean up the temporary copy and restore the original gem home
rm -rf "$GEM_HOME"
export GEM_HOME="$OLD_GEM_HOME"
I start a worker with:
rake environment resque:work RAILS_ENV=development VVERBOSE=1 QUEUE=* LOGFILE=/Users/matteo/workspace/APP/log/resque.log --trace
I get the output:
** Invoke environment (first_time)
** Execute environment
** Invoke resque:work (first_time)
** Invoke resque:preload (first_time)
** Invoke resque:setup (first_time)
** Execute resque:setup
** Execute resque:preload
** Invoke resque:setup
** Execute resque:work
A quick ps tells me the process is up and running.
Now I do NOT have an instance of Redis running. A quick ps auxwwww | grep redis-server confirms that.
Shouldn't the worker fail?
I downloaded the resque code and put a breakpoint in worker.rb:
(rdb:1) eval redis
#<Redis::Namespace:0x007f868cb57880 @namespace=:resque, @redis=#<Redis client v2.2.2 connected to redis://127.0.0.1:6379/1 (Redis v0.07)>>
How is this possible when nothing is running on that port?
Thanks for any help
The answer is: "No, workers cannot run without a copy of Redis running".
The problem was caused by the fakeredis gem. Even though the gem was not included in the development environment, the developer was requiring "redis/connection/memory" and therefore using a fakeredis instance.
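If you suspect the same thing in your own app, a quick way to check is to look for the gem and the stray require (run from the project root):
# is fakeredis anywhere in the bundle?
bundle list | grep fakeredis
# where is the in-memory connection being required?
grep -rn 'redis/connection/memory' .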
Resque definitely requires Redis to work; although it may appear that your queue is up, it will fail to process jobs without Redis.
You can view the status of your queues by going to this URL after booting up your server:
http://localhost:3000/resque/overview
That is, if you have the following in your Gemfile:
gem 'resque', :require => "resque/server"
Or, if you don't have that set up, run resque-web in the console to get the admin interface accessible at localhost:3000/overview.
If you don't get an error and can access that page without a redis error, redis must be running somewhere, perhaps as a service on bootup?
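A direct way to check whether a Redis server is actually up is to ping it:
# prints PONG if a server is listening on the default host/port
redis-cli ping
# or look for the server process
ps aux | grep '[r]edis-server'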
For basic redis/resque setup see here - http://railscasts.com/episodes/271-resque?view=asciicast
I need to monitor my delayed_job worker with god. It starts perfectly, but when I want to stop it using "sudo god stop dj" it says
Sending 'stop' command
The following watches were affected:
dj-0
But the worker is still running (it processes tasks, etc.).
I looked through sites providing their god configs for delayed_job, and a stop command wasn't specified there. Do I need to specify a stop task in my god config or something?
I start delayed_job with w.start = "cd #{rails_root} && QUEUE=work_server1 bundle exec rake -f #{rails_root}/Rakefile RAILS_ENV=#{environment} --trace jobs:work"
I've solved this problem. The reason was that with "bundle exec" two processes were spawned and god was monitoring the wrong one. So I upgraded rake so that I no longer need "bundle exec", and now it works.
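If you hit the same symptom, comparing what god thinks it is watching with what is actually running makes the mismatch visible. A rough check (the grep pattern depends on how you start the worker):
# the watches god knows about and their states
sudo god status
# the processes actually running the jobs; under "bundle exec" there can be a
# wrapper process plus the real rake worker underneath it
ps -eo pid,ppid,cmd | grep '[j]obs:work'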
On my local machine I can do
QUEUES=a,b,c,d rake resque:work
And it processes those queues in that order. However, on Heroku Cedar I add this to my Procfile:
worker: QUEUES=a,b,c,d exec bundle exec rake resque:work
And it crashes the app on deploy. I'm probably missing something dumb, but I'm stumped.
PS I prefix the command with exec because of a bug with resque not properly decrementing the worker count.
You shouldn't need the initial exec. The entry should look like this:
worker: bundle exec rake resque:work QUEUE=a,b,c,d
Use hone's fork to properly clean up workers when they quit. In your Gemfile:
gem 'resque', git: 'https://github.com/hone/resque.git', branch: 'heroku', require: 'resque/server'
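After deploying, you can make sure a worker dyno is running and watch it boot with the Heroku CLI:
# scale up a worker dyno if you don't have one yet
heroku ps:scale worker=1
# check dyno status and tail the logs to see the worker start
heroku ps
heroku logs --tail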