I went through the documentation (https://docs.moodle.org/310/en/Redis_cache_store) and followed the steps to configure the AWS ElastiCache Redis server as a cache store, and the stores show as being in the Ready state.
But when I went to the Test performance section, the Redis store was reported as not ready. How can I find out why this is happening? Does it indicate that the AWS ElastiCache Redis server has not been configured properly?
And is there a way to check whether my Moodle application is actually being served through the Redis cache or not?
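One way to narrow this down is to check, from the Moodle web server itself, whether the ElastiCache endpoint is reachable and accepting commands at all; a connection timeout here usually points to a security group or VPC issue rather than a Moodle setting. A minimal redis-py sketch, where the hostname is a placeholder for your ElastiCache endpoint and the TLS setting depends on whether in-transit encryption is enabled:

```python
import redis

# Hypothetical ElastiCache endpoint; replace with your own.
ENDPOINT = "my-moodle-cache.xxxxxx.0001.use1.cache.amazonaws.com"

r = redis.Redis(
    host=ENDPOINT,
    port=6379,
    socket_connect_timeout=5,  # fail fast if a security group blocks the port
    # ssl=True,                # uncomment if in-transit encryption is enabled
)

print(r.ping())                               # True means the store is reachable
print(r.info("server")["redis_version"])      # basic sanity check on the server

# If Moodle is actually writing to the store, its cache keys should show up.
# The key prefix depends on what you configured in the Moodle cache store settings.
for key in r.scan_iter(count=100):
    print(key)
    break
```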
Originally I was trying to install RediSearch on top of AWS ElastiCache, but it seems they don't support modules in their managed service (which makes sense).
So now I am looking into running RediSearch on a separate EC2 instance inside my VPC, which would let me use it without having to install it in ElastiCache directly.
Is this possible?
Thanks!
RediSearch is available as a service on Redis Enterprise Cloud from Redis Labs on AWS, Azure and GCP.
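If you do go the self-managed route, i.e. Redis with the RediSearch module loaded on an EC2 instance inside the same VPC, a quick redis-py sketch to confirm the module is loaded and queryable from your application host could look like the following. The private IP, index name, and key prefix are all placeholders, and the index creation will error if run twice; this is just a connectivity/feature check, not production code.

```python
import redis

# Hypothetical private IP of the EC2 instance running Redis + RediSearch.
r = redis.Redis(host="10.0.1.25", port=6379, decode_responses=True)

# MODULE LIST shows whether the search module is actually loaded.
print(r.execute_command("MODULE", "LIST"))

# Create a tiny index over hashes with the "doc:" prefix, then run a search.
r.execute_command(
    "FT.CREATE", "idx:docs", "ON", "HASH",
    "PREFIX", "1", "doc:",
    "SCHEMA", "title", "TEXT", "body", "TEXT",
)
r.hset("doc:1", mapping={"title": "hello", "body": "redisearch on ec2"})
print(r.execute_command("FT.SEARCH", "idx:docs", "redisearch"))
```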
I am trying to move away from a single AWS ElastiCache (Redis) server as the Celery broker to a Redis cluster. The trouble is that nowhere in the Celery or redis-py documentation can I find a way to connect to an AWS Redis cluster.
redis-py, which Celery uses to communicate with the Redis server, can be configured to use Redis Sentinel, but AWS does not support it (at least I did not find Sentinel support in the AWS ElastiCache documentation).
So is there a way to communicate with the ElastiCache Redis cluster using redis-py, or is there a way to instruct Celery to use redis-py-cluster (a separate project)?
ElastiCache should give you a configuration endpoint address that you can use to connect from Celery. Just use that endpoint in the broker_url and/or result_backend setting.
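As a rough illustration (the endpoint below is made up; use whatever the ElastiCache console shows for your replication group), the Celery settings would look something like:

```python
# celeryconfig.py -- hypothetical ElastiCache endpoint, replace with your own.
broker_url = "redis://my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com:6379/0"
result_backend = broker_url
```

If cluster mode is disabled, ElastiCache exposes a primary endpoint instead of a configuration endpoint; it goes in the same place in the URL.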
How can I get all connected clients of a Redis cluster?
I am using AWS ElastiCache Redis in non-cluster mode, with Redisson as my Redis client.
My use case:
I need to run specific code from only one connected Redis client.
Thanks
Redis has commands for client information, such as CLIENT LIST; check out this page.
You can also check this page for the commands Redisson does not support yet.
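The question uses Redisson, but as an illustration of the same two ideas in Python/redis-py: listing the connected clients with CLIENT LIST, and using a short-lived SET NX EX flag so that only one connected client runs a given piece of code. The endpoint, key name, TTL, and run_the_job() helper are all placeholders.

```python
import redis

r = redis.Redis(host="my-cache.xxxxxx.0001.use1.cache.amazonaws.com", port=6379)

# Equivalent of CLIENT LIST: one dict per connected client (addr, name, age, ...).
for client in r.client_list():
    print(client["addr"], client.get("name"))

# "Run this code from only one client": whoever wins the SET NX becomes the
# runner; the EX ttl ensures the flag expires if that client dies mid-way.
if r.set("jobs:nightly-report:leader", "me", nx=True, ex=300):
    run_the_job()   # placeholder for the code only one client should run
else:
    pass            # some other client already holds the flag
```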
I am currently setting up the infrastructure for an app in AWS. The app is written in Django and uses Redis for some transactions. High availability is key for this application, and I am having a hard time getting my head around how to configure Redis for high availability.
Application-level changes are not an option.
Ideally I would like a Redis setup that I can write to and read from, and that replicates and scales when required.
The current setup is a Redis fail-over scenario: HAProxy --> Redis Master --> Replica Slave.
Could someone help me understand the various options, and how to scale Redis for high availability?
Use an AWS ElastiCache Redis cluster with Multi-AZ. It provides automatic fail-over and an endpoint for accessing the master node.
If the master goes down, AWS routes your endpoint to another node. Everything happens automatically; you don't have to do anything.
Just make sure that if your application caches DNS-to-IP lookups, the TTL is set to around 60 seconds instead of the default.
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoFailover.html
Thanks,
KS
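On the client side, a minimal redis-py sketch of "point at the primary endpoint and let the failover happen"; the endpoint is a placeholder and the timeout values are illustrative, chosen so the app notices quickly when the DNS entry moves to the new master rather than hanging on a dead connection:

```python
import redis

# Hypothetical Multi-AZ replication group primary endpoint.
r = redis.Redis(
    host="my-app.xxxxxx.ng.0001.use1.cache.amazonaws.com",
    port=6379,
    socket_connect_timeout=2,   # fail fast while a failover is in progress
    socket_timeout=2,
    retry_on_timeout=True,      # retry once after reconnecting instead of failing
    health_check_interval=30,   # ping idle connections before reusing them
)

r.set("healthcheck", "ok", ex=60)
print(r.get("healthcheck"))
```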
The Redis cache offered by Cloud Foundry has a small capacity, i.e. 16 MB.
I know Redis has a FLUSHALL command that deletes all the keys in the cache. How can I do the same thing in Cloud Foundry?
You can recreate and rebind the service as you wish, unless you have some specific configuration that cannot be migrated. (I assume services provisioned on CF.com are created the same way.)
Sending FLUSHALL through the Redis tunnel is another option, if you have vmc and the caldecott gem installed as well as a Redis client available locally. Would you mind sharing the error you get when you cannot connect to it?
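Once the tunnel is up, it typically reports a local port and password for the Redis service; flushing the cache from the local end is then a one-liner. The port and password below are placeholders for whatever your tunnel actually reports:

```python
import redis

# Values reported by the vmc/caldecott tunnel for your Redis service; placeholders.
r = redis.Redis(host="127.0.0.1", port=10000, password="tunnel-password")

r.flushall()        # removes every key in the 16 MB cache
print(r.dbsize())   # should now print 0
```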