I'm using Redis with Sidekiq in my Rails app, and I'm seeing some odd behavior:
lots of connection resets, and the time spent in Redis is often above 1 second per request.
I'm running Rails 4.2.7.1, Ruby 2.3.1,
Sidekiq 4.2.8, and
the redis gem at 3.2.1 (recently downgraded from 3.3.2).
It's running on Heroku.
Any ideas would be greatly appreciated.
Please upgrade to Sidekiq 4.2.9.
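If your Gemfile pins Sidekiq to an exact version, the upgrade is a one-line change (the pin shown here is just the version from the answer above; your Gemfile may use a looser constraint):

```ruby
# Gemfile
gem 'sidekiq', '4.2.9'
```

Then run `bundle update sidekiq` and redeploy.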
I'm having some trouble using Airflow 1.9.0 with the CeleryExecutor and Redis as the broker.
I need to run a job that takes more than six hours to complete, and I'm losing my Celery workers.
Looking at the Airflow code on GitHub, there is a hard-coded configuration (the broker's visibility_timeout):
https://github.com/apache/incubator-airflow/blob/d760d63e1a141a43a4a43daee9abd54cf11c894b/airflow/config_templates/default_celery.py#L31
How can I work around this?
This is configurable in airflow.cfg under the [celery_broker_transport_options] section.
See the commit that added this option: https://github.com/apache/incubator-airflow/commit/be79f87f36b6b99649e0a1f6ab92b41640b3beaa
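A sketch of the relevant airflow.cfg section; the timeout value below is illustrative (it just needs to exceed your longest-running task, in seconds):

```ini
[celery_broker_transport_options]
# Airflow's hard-coded default for Redis was 21600 (6 hours),
# which is why workers were lost on the >6 hour job.
# 25200 = 7 hours, as an example.
visibility_timeout = 25200
```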
I have a Rails app deployed on two EC2 instances with Nginx and Capistrano. For background jobs, I use Sidekiq with Redis. I have 50 GB of memory on the server, and I've set Sidekiq's concurrency (max_pool_size) to 50. I want to use one instance as a dedicated server for Sidekiq. How can I do that?
You can't have just a Rails app on one instance and a bare Sidekiq server on the other. What you can have is a Rails app that connects to a second instance which also holds a copy of your Rails app but runs only Sidekiq: the one thing you'd do differently on the Sidekiq server is not start rails, and instead start only sidekiq.
In my opinion, you're complicating things by setting it up like this. You're better off running both on the same server and having a separate ElastiCache Redis instance that your Sidekiq workers connect to.
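Whichever topology you choose, both instances just need to point Sidekiq at the same Redis endpoint. A minimal sketch of the initializer, assuming the endpoint is supplied via REDIS_URL (the ElastiCache hostname shown is a placeholder):

```ruby
# config/initializers/sidekiq.rb
redis_config = {
  url: ENV.fetch('REDIS_URL', 'redis://my-cluster.abc123.cache.amazonaws.com:6379/0')
}

Sidekiq.configure_server { |config| config.redis = redis_config }
Sidekiq.configure_client { |config| config.redis = redis_config }
```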
I was wondering why we need a Redis server to run CKAN.
If it's needed, why? And how do I configure it with CKAN?
P.S.
I'm running my CKAN instance on RHEL 7.
Update: Redis has been a requirement since CKAN 2.7, which introduced a new system for asynchronous background jobs that relies on Redis. You can configure the Redis connection with the ckan.redis.url option.
Redis is not required for the current version of CKAN (2.6.2 at the time of this writing); it's not even mentioned in the CKAN 2.6.2 documentation.
However, the upcoming 2.7 release will require Redis for its new asynchronous background job system. It will be configured using the new ckan.redis.url option.
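For 2.7+, the setting goes in your CKAN configuration file. A sketch, assuming a local Redis on the default port (the path and database number are illustrative):

```ini
# e.g. /etc/ckan/default/production.ini
ckan.redis.url = redis://localhost:6379/0
```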
I just started evaluating Redis. I'm using Redis 2.8.19, which is the latest stable release; Redis 2.9 is still unstable, and Redis 3.0 is only available as a developer preview (not recommended for production). I was trying to set up a Redis cluster, so I changed my redis.conf and appended
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
and started my Redis server with
src/redis-server ./redis.conf
it gave me the following error:
* FATAL CONFIG FILE ERROR *
Reading the configuration file, at line 2
'cluster-enabled yes'
Bad directive or wrong number of arguments
I googled the error and learned that my version (2.8.19) does not support clustering, but I was unable to find any such statement in the Redis docs. My question is simple: does Redis 2.8.19 support Redis Cluster configuration, or do I have to upgrade to Redis 2.9 or Redis 3.0? I'm evaluating Redis because I need to deploy it in production. Please advise.
Redis Cluster support is only available in versions >= 3.0.0. Redis 3.0.0 will be released as stable in a matter of days, and it's a good idea to use it if you want Cluster. The cluster support is considered stable, but for it to be considered mature we want to see adoption. By the way, there is already at least one very large site using it in production. Currently the most sensible thing to do if you need Redis Cluster is to test it for your use case and, if it looks great, use it.
Redis Cluster is supported only in Redis 3.0+ (which is now stable). I have written a simple API called "Simple Redis Cluster Client" which can be used with Redis versions below 3.0 to run in a cluster-like mode. It's not precisely a cluster; it just distributes keys among Redis nodes based on the key's hash code. You can have a look at https://github.com/prash-mi/simple-redis-cluster-client
Cluster support for Redis only arrived in v3; v2.8.19 doesn't do clustering.
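For context, the client-side sharding approach mentioned above boils down to picking a node deterministically from a hash of the key. A minimal Ruby sketch of the idea (the node addresses are placeholders, and Zlib.crc32 here merely stands in for whatever hash function a real library uses):

```ruby
require 'zlib'

# Deterministically maps each key to one of several Redis nodes.
# This is client-side sharding, not a real Redis Cluster: there is
# no failover and no resharding when nodes are added or removed.
class SimpleSharder
  def initialize(nodes)
    @nodes = nodes
  end

  # The same key always lands on the same node.
  def node_for(key)
    @nodes[Zlib.crc32(key) % @nodes.size]
  end
end

sharder = SimpleSharder.new(['redis://10.0.0.1:6379', 'redis://10.0.0.2:6379'])
sharder.node_for('user:42')  # always returns the same node for this key
```

The trade-off is the one noted above: you get horizontal key distribution on pre-3.0 Redis, but none of Redis Cluster's redundancy or automatic slot migration.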
I have a Rails 3.2.20 app into which I'm introducing Resque to send mail and SMS alerts as background jobs. I have everything set up properly in development, and now I'm preparing to merge my branch and push to staging and then to production for testing. But I have a few questions.
1.) How much memory will I need to run Resque in production (approximately)? I understand that starting a Resque worker loads the full environment. I'm a bit tight on memory and don't want any issues. I'll be using a single Resque worker, as our email/SMS traffic is very light and we don't mind queues backing up for a few seconds to a minute. I know this question is vague, but I'd like to get a feel for the memory footprint Resque requires.
2.) I will have Redis running, but I need to figure out how to start a Resque worker on deployment, as well as kill the existing worker. I've come up with the following, which I would add as a Capistrano after-deploy action.
task :resque_restart do
  # the [r]esque pattern keeps grep from matching its own process
  run "kill $(ps aux | grep '[r]esque' | awk '{print $2}')"
  run "cd #{current_path}; bundle exec rake resque:work QUEUE=*"
end
I haven't actually tested this with Capistrano 2 (which I'm using), but I have tested the commands manually: the first command does kill all the Resque rake tasks, and the second starts up the worker with all queues enabled.
I'm not sure if this is the best way to go, so I'd really like some feedback on this simple Capistrano task.
3.) What is the best way to monitor my Resque rake task? For instance, if it crashes or catches a signal to terminate, how can I have it restarted so the app doesn't break and the worker rake task is always running?
Thanks in advance for any advice or guidance you can provide.
It really depends on the size of your app. In my experience, a single Resque worker generally isn't much larger than your app's footprint. However, if your Resque worker instantiates a lot of large objects, the size of the Resque process can grow very quickly.
Check out the capistrano-resque gem. It provides all this functionality for you, plus more. :)
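A sketch of what that looks like in deploy.rb with capistrano-resque under Capistrano 2 (the host, queue name, and worker count are placeholders; check the gem's README for the exact options in your version):

```ruby
# deploy.rb
require 'capistrano-resque'

role :resque_worker, 'worker.example.com'
set :workers, { '*' => 1 }  # one worker, all queues

after 'deploy:restart', 'resque:restart'
```

This replaces the hand-rolled kill/start task: the gem provides resque:start, resque:stop, and resque:restart tasks and tracks worker PIDs for you.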
There are several options for this. A lot of people have followed something similar to this post about running Resque in production using the God gem. Personally, I've used a process similar to what is described in this post using Monit. Monit can be a bit of a pain to set up, so I'd strongly recommend checking out the God gem.
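For a feel of the God approach, here is a minimal sketch of a God config that keeps one Resque worker alive (the name, directory, and queue are placeholders to adapt to your deploy layout):

```ruby
# config/resque.god -- loaded with `god -c config/resque.god`
God.watch do |w|
  w.name  = 'resque-worker'
  w.dir   = '/var/www/myapp/current'                 # your app's current release
  w.start = 'bundle exec rake resque:work QUEUE=*'
  w.keepalive                                        # restart the process if it dies
end
```

God will then restart the worker whenever the process exits, which covers question 3 (crashes and termination signals) without any custom supervision code.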