I have a Rails app deployed on 2 EC2 instances with Nginx and Capistrano. For background jobs, I use Sidekiq with Redis. I have 50 GB of memory on the server, and I have set Sidekiq's concurrency and the max_pool_size to 50. I want to use one instance as a dedicated server for Sidekiq. How shall I do that?
You can't have just a Rails app on one instance and a bare Sidekiq server on the other. What you can have is a Rails app that connects to a second instance which also has a copy of your Rails app and runs the Sidekiq server. The only thing you'd have to do on the Sidekiq server is not run the Rails server; run only sidekiq.
In my opinion, you're complicating things by setting it up like this. You're better off having both on the same server and running a separate ElastiCache Redis instance to which Sidekiq can connect.
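If you do go the dedicated-worker route anyway, here is a minimal sketch of the Capistrano side, assuming the capistrano-sidekiq gem; the host names are placeholders:

# config/deploy/production.rb -- deploy the app to both instances
server "web1.example.com",    user: "deploy", roles: %w[app web]
server "worker1.example.com", user: "deploy", roles: %w[app worker]

# config/deploy.rb -- start/stop Sidekiq only on the worker instance
set :sidekiq_roles, :worker

Both instances receive the code on deploy; Nginx and the Rails server run on the web role, while only the worker role runs the sidekiq process.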
I have some Redis instances running on Ubuntu servers. Is it possible to update the version of Redis installed on those servers from 3.2.9 to 3.2.13 (https://raw.githubusercontent.com/antirez/redis/3.2/00-RELEASENOTES) without causing any downtime for users? Users currently interact with the Redis instances running on the servers through a proxy.
On Rails code deployment, Sidekiq is restarted, and we would like to remove the Sidekiq-specific Redis cache from the instance before it restarts.
This is what we want to achieve:
1. sidekiq:stop
2. connect to the remote Redis pointed to by Sidekiq
3. select the database (say, SELECT 1)
4. remove the cache (say, FLUSHALL)
How should I automate this via Capistrano?
You can flush the Sidekiq queues by calling them directly, or in their own rake task, at your step #3:
Sidekiq::ScheduledSet.new.clear # clear the scheduled jobs
Sidekiq::RetrySet.new.clear # clear any queued retries
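To automate this from Capistrano, here is a minimal sketch, assuming Capistrano 3 and the capistrano-sidekiq gem; the rake task name and the hook name are assumptions:

# lib/tasks/sidekiq_flush.rake
namespace :sidekiq do
  desc "Clear Sidekiq's scheduled and retry sets in Redis"
  task flush_sets: :environment do
    require "sidekiq/api"
    Sidekiq::ScheduledSet.new.clear  # scheduled jobs
    Sidekiq::RetrySet.new.clear      # queued retries
  end
end

# config/deploy.rb -- run the flush after sidekiq:stop, before Sidekiq starts again
after "sidekiq:stop", :flush_sidekiq_sets do
  on roles(:app) do
    within release_path do
      execute :bundle, :exec, :rake, "sidekiq:flush_sets", "RAILS_ENV=#{fetch(:rails_env)}"
    end
  end
end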
I am working on a Ruby on Rails app with MongoDB. My app is deployed on Heroku, and for delayed jobs I am using Amazon EC2. The things I have doubts about:
1) How do I connect, from Amazon EC2, to the Mongo database that is used by the app on Heroku?
2) When I run delayed jobs, how will they get to the Amazon server, and what changes do I have to make to the app? It would help if somebody could point me to a tutorial for this.
If you want to make your EC2 instance visible to your application on Heroku, you need to grant Heroku's AWS account access to your instance's security group on Amazon. There are instructions in Heroku's documentation that explain how to connect to external services like this.
https://devcenter.heroku.com/articles/dynos#connecting-to-external-services
In the case of MongoDB running on its default port, you'd want to do something like this:
$ ec2-authorize YOURGROUP -P tcp -p 27017 -u 098166147350 -o default
As for how to handle your delayed jobs running remotely on the EC2 instance, you might find this article from the Artsy engineering team helpful. It sounds like they developed a fairly similar setup.
http://artsy.github.io/blog/2012/01/31/beyond-heroku-satellite-delayed-job-workers-on-ec2/
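For the database connection itself, here is a sketch of what the EC2 worker needs, assuming you copy the Heroku app's Mongo connection string to the instance (the MONGODB_URI variable here is an assumption):

# On the EC2 worker, point the driver at the same database the Heroku app uses
require "mongo"
client = Mongo::Client.new(ENV.fetch("MONGODB_URI"))
client.database.collection_names  # sanity check: the worker sees the app's data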
The Rails application I'm currently working on is hosted on Amazon EC2 servers. It's using Resque for running background jobs, and there are 2 such instances (a would-be production and a stage). I've also mounted the Resque monitoring web app at the /resque route (on stage only).
Here is my question:
Why are there workers from multiple hosts registered within my stage system, and how can I avoid this?
Some additional details:
I see workers from apparently 3 different machines, but I've only managed to identify 2 of them: the stage (obviously) and the production. The third has another address format (it starts with domU), and I haven't any clue what it could be.
It looks like you're sharing a single Redis server across multiple Resque environments.
The best way to do this safely is to use separate Redis servers, separate Redis databases, or separate namespaces. The redis-namespace gem can be used with Resque to isolate each environment's Resque queues and worker data.
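A minimal sketch of such an initializer, assuming the redis-namespace gem; the URL and namespace are placeholders:

# config/initializers/resque.rb
require "redis-namespace"
redis = Redis.new(url: ENV.fetch("REDIS_URL", "redis://localhost:6379"))
# Keep each environment's queues and worker registrations separate
Resque.redis = Redis::Namespace.new("resque:#{Rails.env}", redis: redis)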
I can't really help you with what the unknown one is, but I had something similar happen when moving hosts and having DNS names change. The only way I found to clear out the old ones was to stop all workers on the machine, fire up IRB, require 'resque', and look at Resque.workers. This will list all the workers Resque knows about, which in your case will include about 20 bogus ones. You can then do:
Resque.workers.each { |worker| worker.unregister_worker }
This should prune all the not-really-there workers and get you back to a proper display of the real workers.
I have a Procfile like so:
web: bundle exec rails server -p $PORT
em: script/eventmachine
The em process fires up an EventMachine server (start_server on port ENV['PORT']), and my web process occasionally needs to communicate with it.
My question is: how does the web process know what port to communicate with it on? If I understand Heroku correctly, it assigns you a random port when the process starts up (and the port can change if the ps is killed or restarted). Thanks!
According to Heroku documentation,
Two processes running on the same dyno can communicate over TCP/IP using whatever ports they want.
Two processes running on different dynos cannot communicate over TCP/IP at all. They need to use memcached, or the database, or one of the Heroku plugins, to communicate.
Processes are isolated and cannot communicate directly with each other.
http://www.12factor.net/processes
There are, however, a few other ways. One is to use a backing service such as Redis or Postgres to act as an intermediary; another is to use a FIFO (named pipe) to communicate.
http://en.wikipedia.org/wiki/FIFO
It is a good thing that your processes are isolated and share nothing, but you do need to architect your application slightly differently to accommodate this.
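As an illustration of the backing-service approach, here is a sketch that uses a Redis list as the channel between the two processes; the queue name and REDIS_URL are assumptions:

require "redis"

# In the web process: push a message for the em process
redis = Redis.new(url: ENV.fetch("REDIS_URL"))
redis.rpush("em_inbox", "refresh")

# In the em process: block until a message arrives, then handle it
redis = Redis.new(url: ENV.fetch("REDIS_URL"))
loop do
  _list, message = redis.blpop("em_inbox")
  # react to the message here
end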
I'm reading this while on my commute to work, so I haven't tried anything with it (sorry), but this looks relevant and potentially awesome.
https://blog.heroku.com/archives/2013/5/2/new_dyno_networking_model