How to monitor Redis clusters

I can monitor a single Redis instance through redis-stats, but if I want to monitor a Redis cluster I need to specify each server name, and whenever I add more servers to the cluster I have to add them to redis-stats as well. Is there any utility that lets me monitor all servers in a cluster?

You can use Datadog or New Relic to monitor each Redis node, and then you can build a proper dashboard as well.
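Alternatively, a small script can discover every node in the cluster from a single seed node and pull a few INFO metrics from each, so newly added servers show up automatically. A minimal sketch in Python with redis-py (the seed address and the chosen metrics are assumptions, not part of the original question):

    from redis import Redis
    from redis.cluster import RedisCluster

    # Any known node is enough; the cluster client discovers the rest.
    rc = RedisCluster(host="10.0.0.1", port=6379)  # hypothetical seed node

    for node in rc.get_nodes():
        info = Redis(host=node.host, port=node.port).info()
        print(f"{node.host}:{node.port} "
              f"role={info['role']} "
              f"used_memory={info['used_memory_human']} "
              f"clients={info['connected_clients']} "
              f"ops/sec={info['instantaneous_ops_per_sec']}")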

Related

What is the difference between Redis Replicated setup, Redis Cluster setup, Redis Sentinel setup and Redis with Master with Slave only? [REDISSON]

I've read https://github.com/redisson/redisson and found out that there are several setups:
Redis Replicated setup (including support of AWS ElastiCache and Azure Redis Cache)
Redis Cluster setup (including support of AWS ElastiCache Cluster and Azure Redis Cache)
Redis Sentinel setup
Redis with Master with Slave only
I am not a big expert in clusters and I don't understand the difference between these setups.
Could you briefly explain the differences?
Disclaimer: I am an AWS employee.
I do not know how Redis Replicated Setup is different from Redis in Master-Slave mode. Maybe they mean cross-region replication?
In any case, I can try and explain setups I know about:
Redis with Master with Slave only - a single-shard setup where you create a primary together with one or more secondary (slave) replicas (let's hope the PC police won't arrest me). This setup is used to improve the durability of your in-memory store. It's not advised to use your secondaries for reads, because such a setup has only eventual-consistency guarantees and your replica reads may be stale (depending on the replication lag).
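As an illustration, a minimal Python/redis-py sketch of this single-shard layout (the hostnames are hypothetical): writes go to the primary, and a read from the secondary may briefly return a stale value because replication is asynchronous.

    import redis

    primary = redis.Redis(host="redis-primary.internal", port=6379)    # hypothetical host
    secondary = redis.Redis(host="redis-replica.internal", port=6379)  # hypothetical host

    primary.set("user:42:plan", "premium")   # write to the master
    print(secondary.get("user:42:plan"))     # may still be the old value for a moment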
Redis Cluster setup - the setup supported by cloud providers such as AWS ElastiCache. In this setup your workload can be spread horizontally across multiple shards, and each shard may have its own secondary replicas. Your client library must support this setup, since it requires maintaining multiple connections to several nodes at the client level. Moreover, there are some locality rules you need to follow in order to use cluster mode efficiently:
Keys with foo{<shard>}bar notation will be routed to their shard according to what is stored inside curly brackets.
You cannot use mset, mget and other multi-key commands across shards. You can still use these commands if their keys contain the same {shard} part (see the sketch after these points).
There are additional cluster-mode admin commands that are exposed by Redis, but they are usually hijacked and hidden from users by cloud providers, since the providers use them to manage the Redis cluster themselves.
Redis Cluster has the ability to migrate part of your workload between shards. However, it is still obliged to preserve correctness with respect to the {shard} notation. Since your client library is responsible for fetching data from a specific shard, it must handle the "MOVED" response, with which a shard may redirect it to another node.
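The hash-tag rule from the points above can be sketched like this in Python with redis-py (the seed address and key names are assumptions): keys that share the same text inside {} map to the same slot, so multi-key commands work on them, while keys without a common tag generally land on different slots and the server rejects the multi-key call with a CROSSSLOT error.

    from redis.cluster import RedisCluster

    rc = RedisCluster(host="10.0.0.1", port=6379)  # hypothetical seed node

    # Same {user:42} tag -> same slot -> multi-key commands are allowed.
    rc.set("{user:42}:name", "Alice")
    rc.set("{user:42}:email", "alice@example.com")
    print(rc.mget("{user:42}:name", "{user:42}:email"))

    # Keys with different tags usually live on different slots; an MGET across
    # them is rejected by the server with a CROSSSLOT error (how the client
    # surfaces that depends on the library).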
Redis Sentinel setup - uses an additional server that provides service-discovery functionality for Redis clusters. It is not strictly required and, I believe, is less popular among users. It serves as a single source of truth regarding each node's health and state, and provides monitoring, management, and service-discovery functions for managing your Redis cluster. Many Redis client libraries provide the option of connecting to Redis Sentinel nodes in order to achieve automatic service discovery and a seamless failover flow. One of the reasons this setup is less popular is that cloud services like AWS ElastiCache provide it out of the box.
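For example, with redis-py the client asks the Sentinel nodes who the current master is instead of hard-coding its address, and it reconnects to the new master after a failover (the addresses and the master name "mymaster" are assumptions):

    from redis.sentinel import Sentinel

    sentinel = Sentinel(
        [("10.0.0.1", 26379), ("10.0.0.2", 26379), ("10.0.0.3", 26379)],  # hypothetical Sentinels
        socket_timeout=0.5,
    )

    master = sentinel.master_for("mymaster", socket_timeout=0.5)   # for writes
    replica = sentinel.slave_for("mymaster", socket_timeout=0.5)   # for reads (may be stale)

    master.set("greeting", "hello")
    print(replica.get("greeting"))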

Is it required to put a load balancer in front of a Redis cluster?

I am using Redis Cluster on 3 Linux servers (CentOS 7). I have a standard configuration, i.e. 6 nodes: 3 master instances and 3 slave instances (each master has one slave) distributed over these 3 Linux servers. I am using this setup for my web application for data caching and HTTP response caching. My aim is to read from the primary and write to the secondary, i.e. read operations should not fail or be delayed.
Now I would like to ask: is it necessary to configure a load balancer in front of my 3 Linux servers so that my web application's requests to the Redis cluster instances are distributed properly across these Redis servers? Or is the Redis cluster itself able to handle the load distribution?
If yes, then please mention a reference link for configuring this. I have checked the official Redis Cluster documentation, but it does not specify anything regarding a load balancer setup.
If you're running Redis in "Cluster Mode" you don't need a load balancer. Your Redis client (assuming it's any good) should contact Redis for a list of which slots are on which nodes when your application starts up. It will hash keys locally (in your application) and send requests directly to the node which owns the slot for that key, which avoids the extra round trip to Redis that would result in a MOVED response.
You should be able to configure your client to do reads on slaves and writes on masters - or to do both reads and writes only on masters. In addition to configuring your client, if you want to do reads on slaves, check out the READONLY command: https://redis.io/commands/readonly.
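As a sketch, redis-py's cluster client can be told to allow replica reads; it then issues READONLY on the replica connections for you (the seed address is an assumption):

    from redis.cluster import RedisCluster

    rc = RedisCluster(
        host="10.0.0.1", port=6379,   # hypothetical seed node
        read_from_replicas=True,      # reads may be served by slaves (READONLY is sent for you)
    )

    rc.set("page:/home", "<cached html>")   # routed to the master that owns the slot
    print(rc.get("page:/home"))             # may be answered by that shard's replica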

How to configure a Redis DB to be read-only?

I have a Redis database (no cluster, no replica).
How can I configure it to be read-only, so that clients cannot modify it?
I do not want to set up replication or a cluster.
Thanks in advance.
There is no such configuration for Redis master. Only replicas can be configured as read-only. If you have control over the Redis client library used by your clients, you can change it to expose only read methods to the clients.
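A minimal sketch of that last idea in Python with redis-py (the class and the set of exposed commands are purely illustrative): wrap the real client and forward read commands only, so code that receives the wrapper cannot modify the database.

    import redis

    class ReadOnlyRedis:
        """Exposes only read commands of an underlying redis.Redis client."""

        def __init__(self, **kwargs):
            self._r = redis.Redis(**kwargs)

        def get(self, name):
            return self._r.get(name)

        def mget(self, *names):
            return self._r.mget(*names)

        def exists(self, *names):
            return self._r.exists(*names)

        # set, delete, flushdb, ... are simply not exposed.

    client = ReadOnlyRedis(host="localhost", port=6379)
    print(client.get("some-key"))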

High-availability Redis?

I am currently setting up the infrastructure for an app in AWS. The app is written in Django and uses Redis for some transactions. High availability is key for this application, and I am having a hard time getting my head around how to configure Redis for high availability.
Application level changes are not an option.
Ideally I would like to have a Redis setup to which I can write and read, and which I can replicate and scale when required.
Current Setup is a Redis Fail-over scenario with HAProxy --> Redis Master --> Replica Slave.
Could someone help me understand the various options and how to scale Redis for high availability?
Use an AWS ElastiCache Redis cluster with Multi-AZ. It provides automatic failover and an endpoint to access the master node.
If the master goes down, AWS routes your endpoint to another node. Everything happens automatically; you don't have to do anything.
Just make sure that if you are doing DNS-to-IP caching in your application, it's set to 60 seconds or so instead of the default.
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/AutoFailover.html
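As a sketch, a Django/Python service would connect through the ElastiCache primary endpoint; after a Multi-AZ failover AWS repoints that DNS name at the new master, so a client that re-resolves DNS when it opens new connections picks up the new node (the hostname below is hypothetical, and the timeouts are just examples):

    import redis

    r = redis.Redis(
        host="my-redis.xxxxxx.ng.0001.use1.cache.amazonaws.com",  # hypothetical primary endpoint
        port=6379,
        socket_timeout=2,
        socket_connect_timeout=2,
        retry_on_timeout=True,        # retry on timeouts instead of failing immediately
        health_check_interval=30,     # periodically verify connections and reconnect if stale
    )

    r.set("session:abc", "payload")
    print(r.get("session:abc"))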

Is automatic failover built into Redis 2.8?

I am planning on adding Redis to our application as a session and cache store. I have been looking at how to make Redis highly available on an on-premise hosted solution.
The standard approach appears to be to set up Redis as a 3 node replica and use Sentinel for the monitoring and automatic failover.
Redis 2.8 introduces Redis cluster. Does that mean it brings in automatic failover etc and we no longer need to use Sentinel?
No, Cluster and Failover are different scenarios. Also Cluster is in 3.0, not 2.8.
The standard (and minimum) setup for HA is a master and one slave (aka "a pod"), with a separate set of three nodes which run Sentinel and monitor the pod.
This is to ensure failover of the server. However, either your client library has to support using Sentinel to discover the master and reconnect on failure, or you implement it in your code, or you set up a TCP load balancer plus a Sentinel monitoring daemon that updates your load balancer configuration when a failover occurs, at which point the client code doesn't know or care about Sentinel.
Cluster isn't there to provide HA; it is there for server-side sharding of data. For Cluster you're looking at 6-7 nodes minimum (3 masters, 3 slaves, 1 spare), as well as Cluster support in the client and restrictions on commands and Lua scripts which need to access multiple keys.
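To illustrate the multi-key/Lua restriction: every key a script touches must map to the same slot, which in practice means giving related keys a common hash tag. A hedged Python/redis-py sketch (the key names and seed address are assumptions):

    from redis.cluster import RedisCluster

    rc = RedisCluster(host="10.0.0.1", port=6379)  # hypothetical seed node

    # Atomically move "points" between two keys of the same user. Both keys carry
    # the {user:7} tag, so they map to the same slot and the script is accepted.
    script = """
    local v = tonumber(redis.call('GET', KEYS[1]) or '0')
    redis.call('SET', KEYS[1], v - tonumber(ARGV[1]))
    redis.call('INCRBY', KEYS[2], ARGV[1])
    return v
    """

    rc.set("{user:7}:balance", 100)
    rc.set("{user:7}:spent", 0)
    rc.eval(script, 2, "{user:7}:balance", "{user:7}:spent", 25)
    # Keys without a shared tag would make the server reject the script (CROSSSLOT).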