Configure HAProxy for Redis with different auth keys - redis

I have a Redis cluster of three instances, powered by Redis Sentinel and running as [master, slave, slave].
An HAProxy instance is also running to route traffic to the master node; the two slaves are read-only and are used by other applications.
It was easy to configure HAProxy to select the master node when the same auth key was used for all instances, but now every instance has its own auth key, different from the others.
listen redis-16
    bind ip_address:6379 name redis
    mode tcp
    default_backend bk_redis_16

backend bk_redis_16
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send AUTH\ auth_key\r\n
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server R1 ip_address:6379 check inter 1s
    server R2 ip_address:6380 check inter 1s
    server R3 ip_address:6381 check inter 1s
So the above configuration works only when one password is shared across {R1,R2,R3}. How can HAProxy be configured for different passwords?
I mean, how can HAProxy use the matching auth key for each server, like the following:
R1 : abc
R2 : klm
R3 : xyz

You have two primary options:
Set up an HA Proxy config for each set of servers which have different passwords.
Set up HA Proxy to not use auth but rather pass all connections through transparently.
You have other problems with the setup you list. Your read-only slaves will not have a role of "master". Thus even if you could assign each a different password, your check would refuse the connection. Also, in the case of a partition your check will allow split-brain conditions.
When using HA Proxy in front of a Sentinel managed Redis pod[1], if you try to have HA Proxy figure out where to route connections, you must have HA Proxy check all Sentinels to ensure that the instance it routes to is the one the majority of Sentinels agree is the master. Otherwise you can suffer from split-brain, where two or more instances report themselves as the master. There is actually a moment after a failover when you can see this happen.
If your master goes down and a slave is promoted, when the master comes back up it will report itself as master until Sentinel detects the master and reconfigures it to be a slave. During this time your HA Proxy check will send writes to the original master. These writes will be lost when Sentinel reconfigures it to be a slave.
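For illustration, the master the majority agrees on can be read from the Sentinels directly; a minimal sketch, assuming three Sentinels on the default port 26379 and a monitored master named mymaster (both placeholders):

# Ask each Sentinel which instance it currently believes is the master;
# only an address reported by a majority should receive writes.
for s in sentinel1 sentinel2 sentinel3; do
    redis-cli -h "$s" -p 26379 SENTINEL get-master-addr-by-name mymaster
done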
For the case of option 1:
You can either run a separate configured instance of HA Proxy or you can set up front ends and multiple back ends (paired up). Personally I'd go with multiple instances of HA Proxy as it allows you to manage them without interference with each other.
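A minimal sketch of the paired front end/back end variant, using the auth keys from the question (abc for R1, klm for R2); the addresses and bind ports are placeholders, and note the role expectation differs between the master-facing and the read-only sections, per the caveat above:

listen redis-r1
    bind ip_address:6390
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send AUTH\ abc\r\n
    tcp-check expect string +OK
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    server R1 ip_address:6379 check inter 1s

listen redis-r2
    bind ip_address:6391
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check send AUTH\ klm\r\n
    tcp-check expect string +OK
    tcp-check send info\ replication\r\n
    tcp-check expect string role:slave
    server R2 ip_address:6380 check inter 1s

R3 gets a third section of the same shape with its own key (xyz).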
For the case of option 2:
You'll need to glue Sentinel's notification mechanism to HA Proxy being reconfigured. This can easily be done using a script triggered on Sentinel to reach out and reconfigure HA Proxy on the switch-master event. The details on doing this are at http://redis.io/topics/sentinel and more directly at the bottom of the example file found at http://download.redis.io/redis-stable/sentinel.conf
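As a sketch of that glue (not a drop-in solution): register a client-reconfig-script in sentinel.conf, and have the script rewrite HA Proxy's back end and reload it when Sentinel fires it on a master switch. The master name, script path, config path, and server-line name below are all assumptions:

sentinel client-reconfig-script mymaster /usr/local/bin/haproxy-switch.sh

#!/bin/sh
# /usr/local/bin/haproxy-switch.sh (hypothetical)
# Sentinel invokes reconfig scripts as:
#   <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
TO_IP=$6
TO_PORT=$7
# Repoint the single server line (named redis-master here) at the new master
sed -i "s/^\( *server redis-master \).*/\1${TO_IP}:${TO_PORT} check inter 1s/" /etc/haproxy/haproxy.cfg
systemctl reload haproxy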
In a Redis Pod + Sentinel setup with direct connectivity the clients are able to gather the information needed to determine where to connect to. When you place a non-transparent proxy in between them your proxy needs to be able to make those decisions - or have them made for it when topology changes occur - on behalf of the client.
Note: what you describe is not a Redis cluster, it is a replication setup. A Redis cluster is entirely different. I use the term "pod" to apply to a replication based setup.

Related

How to setup Redis cluster behind a load balancer?

We want to set up Redis 6.2 clustering behind an LB. There are only master nodes and there is no Redis Sentinel being used. Each cluster-enabled Redis instance is running on a different host with the same configuration (e.g. all of them are configured with port 6379). Is this possible with some port configuration on the LB such that a unique port on the LB maps to a unique_ip:6379?
Our idea is to use a cluster-aware Redis client like Lettuce RedisClusterClient, which would issue CLUSTER NODES/SLOTS commands or react to MOVED/ASK redirection. It would also take care of splitting a pipeline across separate connections based on the slot for each command.
It seems like this is not possible to achieve if the same port is used on all Redis hosts. Using https://docs.redis.com/latest/rs/networking/cluster-lba-setup/ as a guide, the best we could manage was to configure each Redis with a unique port, set cluster-announce-ip to the virtual IP (which points to the LB), and then manually make sure that the same port is used on the LB as on the Redis host. With this, the CLUSTER SLOTS and MOVED responses from the Redis hosts could be correctly acted upon by the client. But this complicates our setup whenever a Redis host has to be added or removed.
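(For illustration, the workaround described above amounts to a per-node redis.conf roughly like the following, where the VIP and port are placeholders.)

# Unique per Redis host, mirrored on the LB listener
port 7001
cluster-enabled yes
# The virtual IP that points at the LB
cluster-announce-ip 10.0.0.100
# Must match the LB listener port for this node
cluster-announce-port 7001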
You can use Route 53 if you're on AWS to achieve this.
Create a setup like this:
Add all hosts (IP addresses) in Route 53 as A records and set the TTL to a small value like 30 seconds. Route 53 will return one of these Redis IP addresses; using this endpoint, Redis clients like Lettuce or Jedis will discover all the Redis nodes.
You can use any other DNS system as well; the record type should be A.
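For example, a sketch of creating one such multi-value A record with a 30-second TTL via the AWS CLI; the hosted zone ID, record name, and addresses are placeholders:

aws route53 change-resource-record-sets \
    --hosted-zone-id Z0000000EXAMPLE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "redis.example.internal",
          "Type": "A",
          "TTL": 30,
          "ResourceRecords": [
            {"Value": "10.0.0.11"},
            {"Value": "10.0.0.12"},
            {"Value": "10.0.0.13"}
          ]
        }
      }]
    }'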

Redis 6 TLS Support and Redis Sentinel

I would like to set up a basic 3-node Redis Sentinel setup using the new TLS features of Redis 6. Unfortunately, it doesn't seem like Redis 6 Sentinel is smart enough to speak TLS to clients.
Does anyone know of a way to do this, or if it's not possible, if there are any mentions online about adding support for this in the future? It seems a shame to have these nice TLS features and not be able to use them with Redis' own tools.
I am aware that in the past people have used Stunnel to do this. With TLS support added to Redis, I am only interested in doing this if it can be done without third-party additions.
My setup:
3 Redis servers (6.0-rc, last pulled last week), running TLS with the test certs as specified in the Redis docs - one master and 2 replicas
3 Sentinels (6.0-rc, also last pulled last week), not running TLS on their ports (I would like to, but that's a secondary problem)
What I've Tried:
Pointing Sentinel to the Redis TLS port - this results in lots of TLS errors in Redis' logs about incorrect TLS version received, as Sentinel is not speaking TLS to Redis. Since it fails, Sentinel thinks the master is down.
Adding "https://" in the Sentinel config in front of the master IP - this results in Sentinel refusing to run, saying it can't find the master hostname.
Adding TLS options to Sentinel - this results in Sentinel trying to talk TLS on its ports, but not to clients, which doesn't help. I couldn't find any options specifically about making Sentinel speak TLS to clients.
Pointing Sentinel to the Redis not-TLS port (not ideal, I would rather only have the TLS port open) - this results in Sentinel reporting the wrong (not-TLS) port for the master to the simple Python client I'm testing with (it literally just tries to get master info from Sentinel) - I want the client to talk to Redis over TLS for obvious reasons
Adding the "replica-announce-port" directive to Redis with Sentinel still pointed to the not-TLS port - this fails in 2 ways: the master port is still reported incorrectly as the not-TLS port (seems to be because the master is not a replica and so the directive does not apply), and Sentinel now thinks the replicas are both down (because the TLS port is reported, replicas are auto discovered, and it can't speak to the replicas on the TLS port).
I am aware of this StackOverflow question (Redis Sentinel and TLS) - it is old and asks about Redis 4, so it's not the same.
I did figure this out and forgot to post the answer earlier: The piece I was missing was that I needed to set the tls-replication yes option on both the Redis and Sentinel servers.
Previously, I had only set it on the Redis servers, as they were the only ones that needed to do replication over TLS. But for some reason, that particular option is what is needed to actually make Sentinel speak TLS to Redis.
So overall, for TLS options, both sides of the equation needed:
tls-port <port>
port 0
tls-auth-clients yes
tls-ca-cert-file <file>
tls-key-file <file>
tls-cert-file <file>
tls-replication yes
Try adding the tls-port option to sentinel.conf, as it seems to enable TLS support in general; the same is stated in the documentation. For me, the two statements below, added to sentinel.conf on top of the rest of the TLS configuration, actually did the trick.
tls-port 26379
port 0
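Putting both answers together, a minimal TLS section for sentinel.conf might look like this (a sketch; the certificate paths are placeholders):

port 0
tls-port 26379
tls-cert-file /etc/redis/tls/sentinel.crt
tls-key-file /etc/redis/tls/sentinel.key
tls-ca-cert-file /etc/redis/tls/ca.crt
tls-auth-clients yes
tls-replication yes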

Aerospike: How to find from any aerospike server which clients are accessing it?

We had multiple clients configured to talk to this cluster of aerospike nodes. Now that we have removed the configuration from all the clients we are aware of, there are still some read/write requests coming to this cluster, as shown in the AMC.
I looked at the log file generated in /var/log/aerospike/aerospike.log, but could not get any information.
Update
The netstat command mentioned in the answer by @kporter shows the number of connections, with statuses ESTABLISHED, TIME_WAIT, CLOSE_WAIT, etc. But that does not mean those connections are currently being used for get/set operations. How do I get the IPs from which Aerospike operations are currently being done?
Update 2 (Solved)
As mentioned in the comments to @kporter's answer, a tcpdump command on the culprit client showed packets still being sent to the Aerospike cluster that was no longer referenced in the config file. This was happening even while the AMC of that cluster showed no read/write TPS.
I later found that this stopped after restarting the nginx service on the client. Note that the config file on the client now references a new Aerospike cluster, and packets sent to that cluster did not stop after the nginx restart. This is weird, but it worked.
Clients connect to Aerospike over port 3000:
The following command, when run on the server nodes, will show the addresses of hosts connecting to the server over port 3000.
netstat --tcp --numeric-ports | grep 3000
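Since an ESTABLISHED connection alone does not prove live get/set traffic (see the update above), a packet capture on the service port shows which source IPs are actively sending. A sketch using tcpdump:

# Live TCP traffic on the Aerospike service port, without name resolution
tcpdump -i any -nn tcp port 3000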

How can you disable protected mode in Redis 3.2.6 Sentinel?

I have attempted everything recommended by the following error message:
(error) DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
My /etc/redis/sentinel.conf:
daemonize yes
sentinel myid XXX
sentinel monitor master XXX 6379 2
sentinel down-after-milliseconds master 60000
sentinel config-epoch master 0
protected-mode no
bind 0.0.0.0
port 26379
EDIT: My /etc/redis/redis.conf:
port 6379
bind 0.0.0.0
protected-mode no
I've also tried adding sentinel auth-pass master XXX.
My entire backend is on private subnets. I'm VPN'd into my datacenter behind the firewall, coming from the same private network, and I can still only connect locally without getting that frustrating error message.
Server Environment: Debian 8, Redis 3.2.6
Client Environment: Ubuntu 16.10, redis-cli 3.2.1
Redis instances: 3
Sentinel instances: 3
I've done not just one, but 3/4 of the things suggested (didn't set the command-line flags). Does anyone have any guidance or ideas? I'm clearly missing something that I've been unable to figure out from the error message, documentation, Stackoverflow, Google, and trial & error. I figured I'd post a question here first, before diving into the source code.
Any help is appreciated. Thanks!
... and, yes, I've restarted the daemons after configuration changes. :)
https://www.reddit.com/r/redis/comments/3zv85m/new_security_feature_redis_protected_mode/
As you know we got several problems from unprotected Redis instances exposed to the internet. I covered the reason why a restrictive binding to 127.0.0.1 by default may be a usability concern and, even worse, may not fix the problem (hey, just comment the "bind" statement and restart!) in my blog post.
The same blog post introduced an attack that was heavily used by script kiddies to break into Redis instances (serious security researchers were already able to do this, I guess).
So I finally decided to do something before Redis 3.2 official release: Protected mode is the result and will be merged into 3.2 RC2.
The feature is already available in the unstable branch, introduced by this commit. This is how it works.
If and only if:
Protected mode is enabled (this is the default both in the configuration file and in the configless default).
AND IF No AUTH password is configured.
AND IF No "bind" directive is used in order to restrict Redis to certain interfaces.
Then Redis only accepts connections from the loopback IPv4 and IPv6 addresses. External connections are accepted just for the time to send the client an error that makes the user aware of what is happening:
> PING
(error) DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients.
In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions:
Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent.
Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server.
If you started the server manually just for testing, restart it with the --protected-mode no option.
Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
This should protect against errors in a reasonable way while providing users with a clue instead of a connection refused. Please share your feedback so that we can make changes to this feature if needed, before it gets merged into Redis 3.2 RC2. Thanks.
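For reference, option 1 from the quoted text looks like this when run on a Redis server; per the warning above, it must be run from the same host the server runs on:

# From the loopback interface on the server itself
redis-cli -h 127.0.0.1 -p 6379 CONFIG SET protected-mode no
# Persist the change to redis.conf
redis-cli -h 127.0.0.1 -p 6379 CONFIG REWRITE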

activemq master not giving up on network failure

I have an activemq installation with master / slave failover.
Master and Slave are synced using the lease-database-locker
Master and Slave run on 2 different machines and the database is located on a third machine.
Failover and client reconnection works properly on a forced shutdown of the master broker. The slave is taking over properly and the clients reconnect due to their failover setting.
The problems start, if I simulate a network outage on the master broker only. This is done by using an iptables Drop Rule for packages going to the database on the master.
The master now realizes that it cannot connect to the database any longer. The slave starts up, since its network connection is still alive.
It seems from the logs that the clients still try to reconnect to the non-responding master.
To my understanding, the master should inform the clients that there is no connection anymore. The clients should fail over and reconnect to the slave.
But this is not happening.
The clients do reconnect to the slave if I reestablish the db connection by re-enabling the network connection to the db for the master. The master then gives up being the master.
I have set a queryTimeout on the lease-database-locker.
I have set updateClusterClients=true for the transport connector.
I have set a validationQueryTimeout of 10s on the db connection.
I have set a testOnBorrow for the db connection
Is there a way to force the master to inform the clients to failover in this particular case ?
After some digging I found the trick.
The broker was not informing the clients due to a missing ioExceptionHandler configuration.
The documentation can be found here
http://activemq.apache.org/configurable-ioexception-handling.html
I needed to specify
<bean id="ioExceptionHandler" class="org.apache.activemq.util.LeaseLockerIOExceptionHandler">
<property name="stopStartConnectors"><value>true</value></property>
<property name="resumeCheckSleepPeriod"><value>5000</value></property>
</bean>
and tell the broker to use the Handler
<broker xmlns="http://activemq.apache.org/schema/core" ....
ioExceptionHandler="#ioExceptionHandler" >
In order to produce an error on network outages I also had to set a queryTimeout on the lease query:
<jdbcPersistenceAdapter dataDirectory="${activemq.base}/data" dataSource="#mysql-ds-db01-st" lockKeepAlivePeriod="3000">
    <locker>
        <lease-database-locker lockAcquireSleepInterval="10000" queryTimeout="8" />
    </locker>
</jdbcPersistenceAdapter>
This will produce an SQL exception if the query takes too long due to a network outage.
I did test the network by dropping packages to the database using an iptables rule:
/sbin/iptables -A OUTPUT -p tcp --destination-port 13306 -j DROP
Sounds like your client doesn't have the address of the slave in its URI, so it doesn't know where to reconnect to. The master broker doesn't inform the client where the slave is, as it doesn't know there is a slave (or slaves) or where that slave might be on the network; and even if it did, that would be unreliable depending on what conditions caused the master broker to drop in the first place.
You need to provide the client with the connection information for the master and the slave in the failover URI.
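For example, a failover URI naming both brokers might look like this (host names and port are placeholders):

failover:(tcp://master-host:61616,tcp://slave-host:61616)?randomize=false

With randomize=false the client tries the listed brokers in order, preferring the master while it is reachable.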