Is there any way to use one Redis instance (for background jobs) for multiple Rails applications?
EDIT:
If I use the same Redis for all the applications, then that Redis will have jobs queued from different applications, which raises the concern that one app's Resque may process another app's job.
As specified in the documentation, you can set up Resque to use a namespace when connecting to Redis like this:
Resque.configure do |config|
  # Set the redis connection. Takes any of:
  #   String - a redis url string (e.g., 'redis://host:port')
  #   String - 'hostname:port[:db][/namespace]'
  #   Redis - a redis connection that will be namespaced :resque
  #   Redis::Namespace - a namespaced redis connection that will be used as-is
  #   Redis::Distributed - a distributed redis connection that will be used as-is
  #   Hash - a redis connection hash (e.g. {:host => 'localhost', :port => 6379, :db => 0})
  config.redis = 'localhost:6379:alpha/high'
end
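So one way to share a single Redis between several Rails apps is to give each app its own namespace. Here is a minimal sketch, with hypothetical app names, using the redis-namespace gem that Resque already depends on:

# config/initializers/resque.rb in the first app (names are illustrative)
require 'resque'   # resque already pulls in redis-namespace

redis = Redis.new(:host => 'localhost', :port => 6379)
Resque.redis = Redis::Namespace.new(:app_alpha, :redis => redis)

# ...and in the second app, only the namespace changes:
# Resque.redis = Redis::Namespace.new(:app_beta, :redis => redis)

Each worker then only sees the queues inside its own namespace on the shared server.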
Related
When configuring Redis 6 with ACLs in a cluster environment, an additional user must be created (assuming the default user is not desired or does not have access to the PSYNC command). What are the exact commands that must be assigned to this user?
There is a small note about ACL rules for Sentinel and Replicas in the documentation indicating that Sentinel needs:
AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF,
CONFIG, CLIENT, EXEC
and replicas need:
PSYNC, REPLCONF, PING
My best guess is to combine the two (dropping the duplicated CLIENT) for a command set of:
AUTH, CLIENT, SUBSCRIBE, SCRIPT, PUBLISH, PING, INFO, MULTI, SLAVEOF,
CONFIG, EXEC, PSYNC, REPLCONF
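If it helps, that combined set can be granted to a dedicated user in one go. This is only a sketch with a hypothetical user name and password; on Redis 6.2+ you may also need channel permissions (e.g. allchannels), since Sentinel relies on Pub/Sub:

ACL SETUSER repl-user on >some-strong-password +auth +client +subscribe +script +publish +ping +info +multi +slaveof +config +exec +psync +replconf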
Excerpt from redis.conf which indicates "and/or other commands needed for replication":
# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
masterauth mymasterpassword
#
# However this is not enough if you are using Redis ACLs (for Redis version
# 6 or greater), and the default user is not capable of running the PSYNC
# command and/or other commands needed for replication. In this case it's
# better to configure a special user to use with replication, and specify the
# masteruser configuration as such:
#
masteruser mymasteruser
#
# When masteruser is specified, the replica will authenticate against its
# master using the new AUTH form: AUTH <username> <password>.
I am getting this error when I run celery beat -S redbeat.RedBeatScheduler.
beat raised exception : ConnectionError('Error -2 connecting to redis-sentinel:26379. Name or service not known.',)
How can I create a service_name and password in redis-sentinel?
I am not trying to use Redis as a message broker. I am using celery-redbeat to store celerybeat data in a redis-sentinel cluster, following this page: https://pypi.org/project/celery-redbeat/
and this configuration:
redbeat_redis_url = 'redis-sentinel://redis-sentinel:26379/0'
redbeat_redis_options = {
    'sentinels': [('192.168.1.1', 26379),
                  ('192.168.1.2', 26379),
                  ('192.168.1.3', 26379)],
    'socket_timeout': 0.1,
}
I tried 192.168.1.1:26379 instead of redis-sentinel:26379, but when the master node in the redis-sentinel cluster goes down, beat goes down too.
redbeat_redis_url = 'redis-sentinel://192.168.1.1:26379/0'
redbeat_redis_options = {
    'sentinels': [('192.168.1.2', 26379),
                  ('192.168.1.3', 26379)],
    'socket_timeout': 0.1,
}
Unless you have redis-sentinel in your /etc/hosts file, it will not be able to resolve to a correct IP address. You may try to replace redis-sentinel with an IP address of your Redis server. Furthermore, this does not look like a proper Redis Sentinel configuration; the Redis Configuration section explains how to connect to Redis Sentinel, so please read it.
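As a rough sketch of a Sentinel-aware configuration (assuming your Sentinel master group is named mymaster, and that your celery-redbeat version accepts the service_name and password options; check the PyPI page linked above):

redbeat_redis_url = 'redis-sentinel://192.168.1.1:26379/0'
redbeat_redis_options = {
    'sentinels': [('192.168.1.1', 26379),
                  ('192.168.1.2', 26379),
                  ('192.168.1.3', 26379)],
    'service_name': 'mymaster',
    'password': 'your-redis-password',
    'socket_timeout': 0.1,
}

Listing all sentinels lets the client ask a surviving sentinel for the current master instead of depending on a single host.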
I am trying to run Airflow with Celery and Redis as the broker, but jobs are getting stuck in the waiting state.
Airflow is running locally and I am using the example DAGs for testing purposes.
executor = CeleryExecutor
sql_alchemy_conn = mysql://root@localhost/airflow
sql_alchemy_pool_size = 5
[celery]
# This section only applies if you are using the CeleryExecutor in
# [core] section above
# The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor
# The concurrency that will be used when starting workers with the
# "airflow worker" command. This defines the number of task instances that
# a worker will take, so size up your workers based on the resources on
# your worker box and the nature of your tasks
celeryd_concurrency = 16
# When you start an airflow worker, airflow starts a tiny web server
# subprocess to serve the workers local log files to the airflow main
# web server, who then builds pages and sends them to users. This defines
# the port on which the logs are served. It needs to be unused, and open
# visible from the main web server to connect into the workers.
worker_log_server_port = 8793
# The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
# a sqlalchemy database. Refer to the Celery documentation for more
# information.
broker_url = redis://localhost:6379/0
# Another key Celery setting
celery_result_backend = redis://localhost:6379/0
# Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
# it `airflow flower`. This defines the port that Celery Flower runs on
flower_port = 5555
# Default queue that tasks get assigned to and that worker listen on.
default_queue = default
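For what it's worth, with the CeleryExecutor a task only leaves the queued/waiting state once a Celery worker is consuming from the queue, so it is worth confirming that each of the processes the comments above refer to is actually running. A minimal checklist using the Airflow 1.x CLI referenced in those comments:

airflow webserver     # the UI
airflow scheduler     # queues task instances onto the Celery broker
airflow worker        # consumes tasks from the 'default' queue
airflow flower        # optional: Celery monitoring UI on flower_port 5555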
How do I remove a Redis-specific application cache via Capistrano prior to a restart? Redis is running on a remote machine, and the Redis client need not be installed on the machine which performs the deployment.
As long as Capistrano can run any command upon deployment, just remove the cache key(s) with redis-cli:
role :redisserver, "127.0.0.1"
...
namespace :deploy do
  ...
  before "deploy:restart", "deploy:reset_redis_cache"

  task :reset_redis_cache, :roles => :redisserver do
    run "redis-cli DEL cachekey"
  end
  ...
end
UPD. added role reference
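Alternatively, if you would rather run the command from one of the app servers instead of on the Redis box itself, redis-cli can be pointed at the remote host; the hostname and key below are placeholders:

task :reset_redis_cache, :roles => :app do
  run "redis-cli -h redis.example.com -p 6379 DEL cachekey"
end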
Need some help connecting the Resque web UI (Rack config.ru) to a Redis server with AUTH.
Using Resque + Unicorn + Nginx, and installed most of it using apt-get install (Debian) and gem install.
So basically Unicorn loads up resque-web (via Rack) using the standard config.ru:
http://etagwerker.wordpress.com/2011/06/27/how-to-setup-resque-web-with-nginx-and-unicorn/
#!/usr/bin/env ruby
# Put this in /var/www/resque-web/config.ru
require 'logger'
$LOAD_PATH.unshift ::File.expand_path(::File.dirname(__FILE__) + '/lib')
require 'resque/server'

Resque::Server.use Rack::Auth::Basic do |username, password|
  password == '{{password}}' # password
end

# Set the RESQUE_CONFIG env variable if you've a `resque.rb` or similar
# config file you want loaded on boot.
if ENV['RESQUE_CONFIG'] && ::File.exists?(::File.expand_path(ENV['RESQUE_CONFIG']))
  load ::File.expand_path(ENV['RESQUE_CONFIG'])
end

use Rack::ShowExceptions
run Resque::Server.new
I'm trying to find out how to connect this to a Redis server with AUTH, per the documentation here: http://redis.io/topics/security (basically set in /etc/redis/redis.conf).
This Rack configuration seems to only connect to a "vanilla" Redis server using the defaults (localhost with the standard 6379 port) -- how do I specify the Redis connection so I can pass the user/pass in the format below?
redis://user:PASSWORD@redis-server:6379
I've tried using ENV['RESQUE_CONFIG'] to load up a resque.rb file
require 'resque'
Resque.redis = Redis.new(:password => '{{password}}')
This gets pulled in via /etc/unicorn/resque-web.conf:
# Put this in /etc/unicorn/resque-web.conf
RAILS_ROOT=/var/www/resque-web
RAILS_ENV=production
RESQUE_CONFIG=/var/www/resque-web/config/resque.rb
but it's still not really working
BTW, everything works without the Redis AUTH and just using the "vanilla" localhost Redis connection
Try this:
redis_client = Redis.new(:url => "redis://user:PASSWORD@redis-server:6379")
and then do this:
Resque.redis = redis_client
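If the server only uses requirepass (password-only AUTH with no ACL username, which is how AUTH works before Redis 6), the user portion of the URL can simply be left empty:

redis_client = Redis.new(:url => "redis://:PASSWORD@redis-server:6379")
Resque.redis = redis_client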
Hope this helps.