I'm having a little trouble understanding this config segment:
Sidekiq.configure_client do |config|
  config.redis = { size: SIDEKIQ_THREADS }
end
Sidekiq.configure_server do |config|
  config.options['concurrency'] = SIDEKIQ_THREADS
end
It looks like the client (the part of Sidekiq that enqueues jobs into Redis) can be configured with a number of connections to Redis. Does this make enqueueing jobs faster in some way?
What does the concurrency option do?
The client can be multithreaded, e.g. when jobs are enqueued from a multithreaded web server such as Puma. In that case it helps to have a pool of connections that can be shared by all the threads. The concurrency option, on the other hand, sets how many worker threads the Sidekiq server process runs, i.e. how many jobs it can execute in parallel.
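To see why the pool size is usually matched to the thread count, here is a toy sketch in Python (TinyPool and the size of 5 are illustrative assumptions, not Sidekiq's actual pool): a semaphore caps how many threads can hold a "connection" at once, just as the size option caps Sidekiq's Redis connection pool.

```python
import threading
import time

class TinyPool:
    """Toy connection pool: 'size' plays the role of Sidekiq's redis size option."""
    def __init__(self, size):
        self._sem = threading.Semaphore(size)

    def with_conn(self, fn):
        with self._sem:  # blocks when all connections are checked out
            return fn()

pool = TinyPool(size=5)
in_use = 0
peak = 0
lock = threading.Lock()

def job():
    def use():
        global in_use, peak
        with lock:
            in_use += 1
            peak = max(peak, in_use)
        time.sleep(0.01)  # pretend to talk to Redis
        with lock:
            in_use -= 1
    pool.with_conn(use)

threads = [threading.Thread(target=job) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

If the pool were smaller than the number of threads, threads would queue for a connection; the semaphore guarantees `peak` never exceeds the pool size.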
Has anyone tried Action Cable with Active Job using an adapter other than async?
When I use Active Job (with Sidekiq) to broadcast messages to clients, it does not send data to any of them. This makes sense, because Sidekiq runs as a separate process and has no connections to the Action Cable clients.
When I switch Active Job to the async adapter it works, which also makes sense, because then the jobs run inside Puma.
Any idea how we can use Sidekiq, or any adapter that reads jobs from Redis, to send messages to all connected clients?
Thanks
Well, it may be late, but I solved this by using the Redis adapter for Action Cable. In cable.yml:
adapter: redis
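For reference, a fuller cable.yml sketch (the URL and channel_prefix values are assumptions for your environment):

```yaml
production:
  adapter: redis
  url: redis://localhost:6379/1
  channel_prefix: myapp_production
```

With the redis adapter, broadcasts go through Redis pub/sub, so a Sidekiq process can reach clients connected to the Puma process.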
I am a novice to Celery, Redis, and RabbitMQ.
Currently I'm using RabbitMQ as a message broker, with nothing set in the configuration (with Django and MySQL).
I am wondering if it's possible to use Redis as a result backend while keeping RabbitMQ as the message broker.
The only thing I know is to add a setting like CELERY_RESULT_BACKEND = "redis".
Yes, it's possible. Just set:
CELERY_RESULT_BACKEND = "redis://:<password>@<hostname>:<port>/<db_number>"
replacing <password>, <hostname>, <port> and <db_number> (note the @ between the password and the hostname).
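Putting it together in Django settings, a minimal sketch (the broker URL, credentials and db numbers are placeholders, not values from your setup):

```python
# RabbitMQ stays the message broker...
BROKER_URL = "amqp://guest:guest@localhost:5672//"
# ...while Redis stores task results.
CELERY_RESULT_BACKEND = "redis://:mypassword@localhost:6379/0"
```

The broker and the result backend are configured independently in Celery, so mixing RabbitMQ and Redis like this is a supported combination.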
In my production environment there are 7 parallel servers. I use Redis as an email queue, like this:
$this->getRedis()->lpush('mailsQueue', serialize($mail));
And the daemon that is listening to the queue:
do {
    $mail = $this->getRedis()->rpop('mailsQueue');
    if ($mail) {
        // sending an email
    }
    usleep(1000);
} while (true);
It works pretty well when the daemon runs as a single instance. But in production each of the 7 servers has its own daemon service. This causes a problem: sometimes an email is sent a couple of times, because more than one daemon service loads the same email from the "mailsQueue" list.
How can I make sure that an element popped with "rpop" is loaded only once, regardless of how many daemon services I have running?
Huge thanks for every help!
Weird, I would have thought that rpop would be atomic. You should be able to use MULTI to force a transaction so that no one else can interfere with that key.
http://redis.io/topics/transactions
All the commands in a transaction are serialized and executed sequentially. It can never happen that a request issued by another client is served in the middle of the execution of a Redis transaction. This guarantees that the commands are executed as a single isolated operation.
More info:
https://github.com/StackExchange/StackExchange.Redis/blob/master/Docs/Transactions.md
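In fact, every single Redis command, including RPOP, is atomic on its own: the server executes commands one at a time, so two daemons can never receive the same element from one RPOP each. A local analogue of that guarantee, with a lock playing the role of Redis's serialized command execution (MiniQueue is purely illustrative, not a Redis client):

```python
import threading
from collections import deque

class MiniQueue:
    """Local stand-in for a Redis list: pop is atomic under a lock,
    mirroring how Redis serializes RPOP calls across clients."""
    def __init__(self, items):
        self._items = deque(items)
        self._lock = threading.Lock()

    def rpop(self):
        with self._lock:
            return self._items.pop() if self._items else None

queue = MiniQueue(range(10000))
seen = []
seen_lock = threading.Lock()

def worker():
    # One "daemon": pop until the queue is drained.
    while True:
        item = queue.rpop()
        if item is None:
            break
        with seen_lock:
            seen.append(item)

# Seven concurrent consumers, like the seven production daemons.
threads = [threading.Thread(target=worker) for _ in range(7)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every element was delivered exactly once.
assert len(seen) == 10000 and len(set(seen)) == 10000
```

So if emails are still sent twice, the duplication likely comes from the application side, e.g. a mail being pushed onto the queue again after it was already sent.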
I am using Jedis client for connecting to my Redis server. The following are the settings I'm using for connecting with Jedis (using apache common pool):
JedisPoolConfig poolConfig = new JedisPoolConfig();
poolConfig.setTestOnBorrow(true);
poolConfig.setTestOnReturn(true);
poolConfig.setMaxIdle(400);
// Tests whether connections are dead during idle periods
poolConfig.setTestWhileIdle(true);
poolConfig.setMaxTotal(400);
// configuring it for some good max value so that timeout don't occur
poolConfig.setMaxWaitMillis(120000);
So far with these settings I'm not facing any reliability issues (I can always get a Jedis connection when I want one), but I am seeing a certain lag in Jedis performance.
Can anyone suggest further optimizations for achieving higher performance?
You have 3 tests configured:
TestOnBorrow - sends a PING request when you ask for a resource.
TestOnReturn - sends a PING when you return a resource to the pool.
TestWhileIdle - sends periodic PINGs from idle resources in the pool.
While it is nice to know your connections are still alive, the onBorrow PING requests waste an RTT before every command, and the other two tests waste valuable Redis resources. In theory a connection can go bad even after a successful PING, so you should catch connection exceptions in your code and handle them regardless. If your network is stable and you do not see many drops, remove those tests and handle this scenario in your exception catches only.
Also, by setting MaxIdle == MaxTotal, there will be no eviction of resources from your pool (good or bad? it depends on your usage). And when your pool is exhausted, an attempt to get a resource will end up timing out after 2 minutes of waiting for a free one.
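The "handle it in your exception catches" approach can be sketched like this (in Python for brevity; FlakyClient and get_with_retry are hypothetical illustrations, not the Jedis API, and real code would re-create the connection on failure):

```python
class ConnectionError(Exception):
    """Stands in for a client's connection-failure exception."""
    pass

class FlakyClient:
    """Hypothetical client whose first calls fail, standing in for a
    connection that went bad even after passing a PING health check."""
    def __init__(self, fail_times):
        self._fails_left = fail_times

    def get(self, key):
        if self._fails_left > 0:
            self._fails_left -= 1
            raise ConnectionError("socket reset")
        return "value-of-" + key

def get_with_retry(client, key, retries=3):
    # Instead of paying a PING round trip before every command
    # (testOnBorrow), retry the command when the connection proves dead.
    for attempt in range(retries):
        try:
            return client.get(key)
        except ConnectionError:
            if attempt == retries - 1:
                raise
            # in real code: discard this connection and take a fresh one

client = FlakyClient(fail_times=2)
print(get_with_retry(client, "user:1"))  # → value-of-user:1
```

The failure path now costs something only when a connection actually breaks, instead of adding a health-check round trip to every single operation.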
I use redis-py to talk to Redis, and our environment uses twemproxy as a Redis proxy. But it looks like the client pipeline doesn't work when connecting through twemproxy.
import redis
client = redis.StrictRedis(host=host, port=port, db=0)
pipe = client.pipeline()
pipe.smembers('key')
print(pipe.execute())
It throws an exception when the execute method is called:
redis.exceptions.ConnectionError: Socket closed on remote end
Does the client pipeline simply not work in a twemproxy environment, or is this a redis-py issue?
That's because twemproxy does not support all Redis commands.
Here is the list of actually supported commands: https://github.com/twitter/twemproxy/blob/master/src/proto/nc_redis.c
A redis-py pipeline wraps its commands in a MULTI/EXEC transaction by default, and twemproxy does not support those commands. Try disabling the transaction:
pipe = client.pipeline(transaction=False)