I need some help connecting the Resque web UI (Rack config.ru) to a Redis server with AUTH enabled.
I'm using Resque + Unicorn + Nginx, mostly installed via apt-get (Debian) and gem install.
Basically, Unicorn loads resque-web (via Rack) using the standard config.ru from:
http://etagwerker.wordpress.com/2011/06/27/how-to-setup-resque-web-with-nginx-and-unicorn/
#!/usr/bin/env ruby
# Put this in /var/www/resque-web/config.ru
require 'logger'
$LOAD_PATH.unshift ::File.expand_path(::File.dirname(__FILE__) + '/lib')
require 'resque/server'
Resque::Server.use Rack::Auth::Basic do |username, password|
  password == '{{password}}' # password
end
# Set the RESQUE_CONFIG env variable if you've a `resque.rb` or similar
# config file you want loaded on boot.
if ENV['RESQUE_CONFIG'] && ::File.exists?(::File.expand_path(ENV['RESQUE_CONFIG']))
  load ::File.expand_path(ENV['RESQUE_CONFIG'])
end
use Rack::ShowExceptions
run Resque::Server.new
I'm trying to find out how to connect this to a Redis server with AUTH, per the documentation here: http://redis.io/topics/security (configured in /etc/redis/redis.conf).
This Rack configuration seems to only connect to a "vanilla" Redis server using the defaults (localhost on the standard port 6379). How do I specify the Redis connection so I can pass the user/password in the format below?
redis://user:PASSWORD@redis-server:6379
I've tried using ENV['RESQUE_CONFIG'] to load up a resque.rb file
require 'resque'
Resque.redis = Redis.new(:password => '{{password}}')
This gets pulled in via /etc/unicorn/resque-web.conf:
# Put this in /etc/unicorn/resque-web.conf
RAILS_ROOT=/var/www/resque-web
RAILS_ENV=production
RESQUE_CONFIG=/var/www/resque-web/config/resque.rb
but it's still not working.
BTW, everything works without Redis AUTH, using the "vanilla" localhost Redis connection.
Try this:
redis_client = Redis.new(:url => "redis://user:PASSWORD@redis-server:6379")
and then do this:
Resque.redis = redis_client
Hope this helps.
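As a sanity check on the URL format: the separator before the host is `@`, not `#`. Ruby's stdlib URI can build and parse such a URL before it is handed to Redis.new (the host `redis-server` and the password below are placeholders):

```ruby
require 'uri'

# Build the AUTH URL from parts; host and password are placeholders.
uri = URI::Generic.build(
  :scheme   => 'redis',
  :userinfo => 'user:s3cret',
  :host     => 'redis-server',
  :port     => 6379
)
url = uri.to_s  # note the '@' between the credentials and the host

# Parsing it back recovers the credentials Redis.new(:url => url) will use.
parsed = URI.parse(url)
```

If the password contains characters like `@` or `:`, it needs to be percent-encoded before going into the URL.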
I have to deploy a rails app after the server had a problem, and the IP Address has changed.
I've updated the IP Address in deploy/production.rb, and also git's remote branches, to the correct value, namely 192.168.30.24, but as you can see from the following output, the deployment is failing due to trying to connect over 192.168.30.23.
Where is Capistrano retrieving 192.168.30.23 from?
INFO [fa83a838] Running /usr/bin/env git remote update as code@192.168.30.24
DEBUG [fa83a838] Command: cd /var/www/paperless_office/repo && ( export RBENV_ROOT="~/.rbenv" RBENV_VERSION="2.3.0" GIT_ASKPASS="/bin/echo" GIT_SSH="/tmp/paperless_office/git-ssh.sh" ; /usr/bin/env git remote update )
DEBUG [fa83a838] Fetching origin
DEBUG [fa83a838] ssh: connect to host 192.168.30.23 port 22: No route to host
Capfile
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
require 'capistrano/rbenv'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
production.rb as follows:
role :app, %w{192.168.30.24}
role :web, %w{192.168.30.24}
role :db, %w{192.168.30.24}
server '192.168.30.24', user: 'code', roles: %w{web app}
after 'deploy:publishing', 'deploy:restart'
Thanks
Fixed this by removing the remote repo that Capistrano builds, so that on the next deploy, it was rebuilt using the correct IP Address.
I was deploying to /var/www/app_name so the repo to remove was /var/www/app_name/repo
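If this recurs, the cleanup can also be scripted so a changed remote doesn't require a manual login. A sketch, assuming Capistrano 3's DSL and the /var/www/app_name layout above (the file and task names are made up):

```ruby
# lib/capistrano/tasks/reset_repo.cap -- hypothetical file/task name
namespace :deploy do
  desc 'Remove the cached bare repo so the next deploy re-clones from repo_url'
  task :reset_repo do
    on roles(:app) do
      # deploy_to is /var/www/app_name in this setup, so this removes
      # /var/www/app_name/repo, the mirror Capistrano fetches through.
      execute :rm, '-rf', "#{fetch(:deploy_to)}/repo"
    end
  end
end
```

Run it once with cap production deploy:reset_repo before the next deploy.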
I'm trying to set up Rails on my site via ssh. When everything is set up, I start the server with rails server and I get:
=> Rails 5.0.1 application starting in development on http://localhost:3000
=> Run rails server -h for more startup options
Puma starting in single mode...
* Version 3.6.2 (ruby 2.3.3-p222), codename: Sleepy Sunday Serenity
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://localhost:3000
Use Ctrl-C to stop
So far so good, but when I point my browser at the server's IP address on port 3000, the browser just hangs instead of displaying the Rails smoke test page.
Since I can't type any more commands in that terminal, I open a new one and log in via:
ssh -i /path/to/cloud.key user_name@XXX.XXX.XXX.XXX
I think I've seen it work before, but now it's timing out:
ssh: connect to host xxx.xxx.xxx.xxx port 22: Operation timed out
I found similar problems resolved on Stack Overflow, but none of the solutions worked for public key authentication; when I try them (ssh user_name@XXX.XXX.XXX.XXX), I get Permission denied (publickey).
So I want to learn why my browser is hanging (whether I need to install nginx or apache2, configure Puma, etc.), and/or why my attempts to log into a second ssh session are failing.
Any help for this one?
Ubuntu Server 16.04
Rails 5.0.1
Ruby 2.3.0p0
You can't point your browser at the IP because the server is binding only to localhost. Start it with rails s -b 0.0.0.0 so it listens on all interfaces.
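The difference is which interface the server socket binds to. A minimal stdlib sketch of the same idea (no Rails involved):

```ruby
require 'socket'

# A server bound to 127.0.0.1 only listens on the loopback interface,
# so remote browsers can never reach it. Binding to 0.0.0.0 listens on
# every interface, which is what `rails s -b 0.0.0.0` asks Puma to do.
loopback = TCPServer.new('127.0.0.1', 0)  # port 0 picks a free port
anyaddr  = TCPServer.new('0.0.0.0', 0)

loopback_ip = loopback.addr[3]  # the address the socket is bound to
anyaddr_ip  = anyaddr.addr[3]

loopback.close
anyaddr.close
```

The second ssh session hanging is a separate issue (firewall or security group), unrelated to the Rails bind address.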
I'm using Terraform to automate the build-out of an AWS EC2 based Docker host, and then using its remote-exec provisioner to download a Dockerfile, build the image, and run it.
I'd hoped to integrate this with Serverspec but am struggling to work out two things:
The best way to pass the external dns of the newly created AWS EC2 instance to Serverspec.
How to configure the SSH options for Serverspec so that it executes correctly on an Amazon Linux AMI using the ec2-user account.
I would normally connect to the EC2 instance using a pre-defined key pair and never use a password; however, Serverspec seems to run commands on the server via sudo -p (i.e. prompting for a password).
Any advice much appreciated.
Contents of spec_helper.rb
require 'serverspec'
require 'net/ssh'
set :ssh_options, :user => 'ec2-user'
I'm also using an edited Rakefile as follows to force the correct EC2 external DNS (masked):
require 'rake'
require 'rspec/core/rake_task'
hosts = %w(
ec2-nn-nn-nn-nnn.eu-west-1.compute.amazonaws.com
)
set :ssh_options, :user => 'ec2-user'
task :spec => 'spec:all'
namespace :spec do
  task :all => hosts.map { |h| 'spec:' + h.split('.')[0] }
  hosts.each do |host|
    short_name = host.split('.')[0]
    role = short_name.match(/[^0-9]+/)[0]
    desc "Run serverspec to #{host}"
    RSpec::Core::RakeTask.new(short_name) do |t|
      ENV['TARGET_HOST'] = host
      t.pattern = "spec/Nexus/*_spec.rb"
    end
  end
end
You could make the address an output in Terraform. The Terraform documentation gives an example doing just that to get the public DNS of an AWS instance, named web in this case:
output "address" {
  value = "${aws_instance.web.public_dns}"
}
Then you can get this value from the command line after a terraform apply with terraform output address.
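To avoid hard-coding the masked hostname in the Rakefile at all, the rake side could shell out to Terraform for that output. A sketch, assuming the Rakefile runs from the Terraform working directory:

```ruby
# In the Rakefile, replace the hard-coded hosts list with Terraform's output.
# Depending on the Terraform version, `terraform output address` may wrap the
# value in quotes (newer releases offer -raw instead); stripping quotes here
# handles both cases.
address = `terraform output address`.strip.gsub('"', '')
hosts = [address]
```

From there the existing hosts.each loop generates the per-host spec tasks unchanged.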
You can set the sudo password with the config option :sudo_password. If the ec2-user can run sudo without a password, set this to ''. (See this blog post for an example.) Or pass it in the SUDO_PASSWORD environment variable, described here: http://serverspec.org/tutorial.html
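Putting both parts together, a spec_helper.rb along these lines (a sketch; the key path is an assumption, and :sudo_password matches the option described above) drives everything over the key pair as ec2-user:

```ruby
require 'serverspec'
require 'net/ssh'

set :backend, :ssh
set :host, ENV['TARGET_HOST']
set :ssh_options,
    :user         => 'ec2-user',
    :keys         => ['~/.ssh/my-ec2-key.pem'],  # hypothetical key path
    :auth_methods => ['publickey']

# ec2-user has passwordless sudo on Amazon Linux, so an empty sudo
# password keeps Serverspec from blocking on the `sudo -p` prompt.
set :sudo_password, ''
```

With TARGET_HOST set by the rake task above, each spec run targets the freshly created instance.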
Is there any way to use one Redis instance (for background jobs) for multiple Rails applications?
EDIT:
If I use the same Redis for all the applications, then Redis will hold jobs queued from different applications, which raises the issue that one app's Resque may process another app's job.
As specified in the documentation, you can set up Resque to use a namespace when connecting to Redis, like this:
Resque.configure do |config|
  # Set the redis connection. Takes any of:
  #   String - a redis url string (e.g., 'redis://host:port')
  #   String - 'hostname:port[:db][/namespace]'
  #   Redis - a redis connection that will be namespaced :resque
  #   Redis::Namespace - a namespaced redis connection that will be used as-is
  #   Redis::Distributed - a distributed redis connection that will be used as-is
  #   Hash - a redis connection hash (e.g. {:host => 'localhost', :port => 6379, :db => 0})
  config.redis = 'localhost:6379:alpha/high'
end
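So for the multi-app case, each Rails app can point at the same server but its own namespace, e.g. in an initializer (a sketch; the app and namespace names are placeholders):

```ruby
# config/initializers/resque.rb in app "alpha" (name is a placeholder)
require 'resque'

# host:port:db/namespace -- a distinct namespace per application keeps
# each app's workers from seeing the other apps' queues.
Resque.redis = 'localhost:6379:0/alpha'

# A second app would use e.g. 'localhost:6379:0/beta' with the same server.
```

Workers started inside each app then only pull jobs from that app's namespace.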
How do I remove application-specific Redis cache entries via Capistrano prior to a restart? Redis is running on a remote machine, and the Redis client need not be installed on the machine that performs the deployment.
As long as Capistrano can run any command upon deployment, just remove the cache key(s) with redis-cli on a host that has it:
role :redisserver, "127.0.0.1"
...
namespace :deploy do
  ...
  before "deploy:restart", "deploy:reset_redis_cache"
  task :reset_redis_cache, :roles => :redisserver do
    run "redis-cli DEL cachekey"
  end
  ...
end