Redis: OK to put on a multi-homed system?

We want to use a single Redis server for servers that span two subnets.
If we put Redis on just subnet A, the servers on B will have to go through a router to reach Redis.
Our thought is to make the Redis server multi-homed (multiple NICs), attached to both subnets A and B.
1) Will this work?
2) Will Redis then attach to both IPs?
Thanks!

You can provide the bind address in the Redis configuration file (the bind directive).
If you comment out that line and do not provide a bind address, Redis will listen on its port on all interfaces (i.e. it will listen on 0.0.0.0).
I have not tried it, but I would say a configuration with two addresses should work.
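For example, a minimal sketch of the relevant redis.conf line, assuming the two NICs get the hypothetical addresses 10.0.1.5 (subnet A) and 10.0.2.5 (subnet B):
# hypothetical NIC addresses; adjust to your actual subnets
bind 10.0.1.5 10.0.2.5 127.0.0.1
Note that listing multiple addresses on one bind line requires Redis 2.8 or later (see the "Redis bind to more than one IP" question below).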

Related

How to setup Redis cluster behind a load balancer?

We want to set up Redis 6.2 clustering behind a LB. There are only master nodes and there is no Redis Sentinel being used. Each cluster-enabled Redis instance is running on a different host with the same configuration (eg. all of them are configured with port 6379). Is this possible with some port configuration on the LB such that a unique port on an LB maps to a unique_ip:6379?
Our idea is to use a cluster-aware Redis client like Lettuce RedisClusterClient, which would issue CLUSTER NODES/SLOTS commands or react to MOVED/ASK redirection. It would also take care of splitting a pipeline across separate connections based on the slot for each command.
It seems like this is not possible to achieve if the same port is used on all Redis hosts. Using https://docs.redis.com/latest/rs/networking/cluster-lba-setup/ as a guide, the best we could manage was to configure each Redis with a unique port, set cluster-announce-ip to the virtual IP (which points to the LB), and then manually make sure that the same port is used on the LB as on the Redis host. With this, the CLUSTER SLOTS and MOVED responses from the Redis hosts could be correctly acted upon by the client. But this complicates our setup whenever a Redis host has to be added or removed.
If you're on AWS, you can use Route 53 to achieve this.
Create an A record setup like this:
Add all hosts (IP addresses) in Route 53 and set the TTL to a small value such as 30 seconds. Route 53 will return one of these Redis IP addresses; using this endpoint, Redis clients like Lettuce or Jedis will discover all the Redis nodes.
You can use any other DNS system as well; the record type should be A.
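As a rough sketch, the resulting record set could look like this (hostname, zone and addresses are purely illustrative):
; hypothetical A records with a 30 second TTL
redis.example.internal.  30  IN  A  10.0.0.11
redis.example.internal.  30  IN  A  10.0.0.12
redis.example.internal.  30  IN  A  10.0.0.13
A cluster-aware client such as Lettuce only needs one of these addresses as a seed; it discovers the rest of the topology via CLUSTER NODES/SLOTS as described in the question.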

Redis - Bind interfaces does not work (Connection refused)!

I'm trying to configure Redis (redis.conf, bind parameter) to accept access only from certain IPs. In my case I want to enable access for the loopback interface (127.0.0.1/::1) and for the IP 192.168.56.101 (192.168.56.102 is the IP of the Redis server). According to all the documentation that I have read so far, the configuration below should work...
bind 127.0.0.1 ::1 192.168.56.101
... but that's not what happens.
I've tried several other configurations...
bind 127.0.0.1 192.168.56.101 ::1
bind 127.0.0.1 192.168.56.101
bind 192.168.56.101
bind 192.168.56.0
bind 192.168.0.0
... and nothing works. =|
The only configuration that worked was this...
bind 0.0.0.0
But, this configuration opens access to any ip!
NOTE: The protected-mode parameter (redis.conf) is set to no.
Any idea what might be happening?
REFERENCE:
Redis bind to more than one IP
https://redis.io/topics/security
http://download.redis.io/redis-stable/redis.conf
FURTHER QUESTION:
How could I enable access for an IP range (bind parameter)? Something like...
bind 192.168.56.0
... or...
bind 192.168.56.0/24
In these examples any machine with an IP starting with "192.168.56" would have access to the Redis server.
@Carl Dacosta
@Jacky
Thanks!
I think you misunderstand the bind configuration and IP whitelisting.
The bind configuration specifies the interface addresses that Redis listens on. If you bind Redis to the loopback interface, only local clients can access Redis. If you want other hosts to access Redis, you have to bind Redis to all network interfaces (i.e. 0.0.0.0) or to specific network interfaces.
What you need is an IP whitelist, which lists the IP addresses that are allowed to access Redis. AFAIK, so far, Redis DOES NOT support that (correct me if I'm wrong).
There are other solutions to limit access to Redis (all of these solutions require Redis NOT to bind only to the loopback interface).
Limit access by authentication
You can use the requirepass configuration to set a password for Redis. Only clients with the password can access Redis.
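A minimal sketch (the password below is just a placeholder):
# redis.conf: require clients to authenticate before running commands
requirepass some-long-random-password
Clients then have to send AUTH some-long-random-password before issuing other commands.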
Limit access by OS utility
On Linux, you can use iptables to control network access. With this utility, you can allow only specified hosts to access the port that Redis binds to.
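A minimal sketch, assuming the addresses from your question (192.168.56.101 as the only allowed remote client) and the default Redis port 6379:
# allow the trusted client, then drop everyone else on the Redis port
iptables -A INPUT -p tcp --dport 6379 -s 192.168.56.101 -j ACCEPT
iptables -A INPUT -p tcp --dport 6379 -j DROP
This also covers your FURTHER QUESTION: iptables accepts CIDR ranges, e.g. -s 192.168.56.0/24, to allow the whole subnet.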

Aerospike: How to find from any aerospike server which clients are accessing it?

We had multiple clients configured to talk to this cluster of Aerospike nodes. Even though we have removed the configuration from all the clients we are aware of, there are still some read/write requests coming to this cluster, as shown in AMC.
I looked at the log file generated in /var/log/aerospike/aerospike.log, but could not get any information.
Update
The netstat command as mentioned in the answer by @kporter shows the number of connections, with statuses like ESTABLISHED, TIME_WAIT, CLOSE_WAIT etc. But that does not mean those connections are currently being used for get/set operations. How do I get the IPs from which Aerospike operations are currently being done?
Update 2 (Solved)
As mentioned in the comments on @kporter's answer, a tcpdump command on the culprit client showed packets still being sent to the Aerospike cluster which was no longer referenced in the config file. This was happening even while the AMC of that cluster did not show any more read/write TPS.
I later found that this stopped after restarting the nginx service on the client. Please note that the config file on the client now references a new Aerospike cluster, and packets sent to that cluster did not stop after the nginx restart. This is weird, but it worked.
Clients connect to Aerospike over port 3000:
The following command, when run on the server nodes, will show the addresses of hosts connecting to the server over port 3000.
netstat --tcp --numeric-ports | grep 3000
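A variation (a sketch, assuming GNU netstat output where column 5 is the foreign address) that keeps only established connections and prints the unique client address:port pairs:
netstat --tcp --numeric-ports | grep ':3000 ' | grep ESTABLISHED | awk '{print $5}' | sort -u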

Configure HAProxy for Redis with different Auth keys

I have a Redis cluster of three instances, powered by Redis Sentinel, running as [master, slave, slave].
Also, an HAProxy instance is running to route traffic to the master node; the two slaves are read-only and are used by other applications.
It was very easy to configure HAProxy to select the master node when the same auth key was used for all instances, but now every instance has its own auth key, different from the others.
listen redis-16
bind ip_address:6379 name redis
mode tcp
default_backend bk_redis_16
backend bk_redis_16
# mode tcp
option tcp-check
tcp-check connect
tcp-check send AUTH\ auth_key\r\n
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send info\ replication\r\n
tcp-check expect string role:master
tcp-check send QUIT\r\n
tcp-check expect string +OK
server R1 ip_address:6379 check inter 1s
server R2 ip_address:6380 check inter 1s
server R3 ip_address:6381 check inter 1s
So the above configuration works only when we have a single password across {R1, R2, R3}. How do I configure HAProxy for different passwords?
I mean, how do I make HAProxy use each server's own auth key, like the following:
R1 : abc
R2 : klm
R3 : xyz
You have two primary options:
Set up an HA Proxy config for each set of servers which have different passwords.
Set up HA Proxy to not use auth but rather pass all connections through transparently.
You have other problems with the setup you list. Your read-only slaves will not have a role of "master". Thus even if you could assign each a different password, your check would refuse the connection. Also, in the case of a partition your check will allow split-brain conditions.
When using HA Proxy in front of a Sentinel-managed Redis pod[1], if you want HA Proxy to figure out where to route connections, you must have HA Proxy check all Sentinels to ensure that the Redis instance the majority of Sentinels have agreed on is indeed the master. Otherwise you can suffer from split-brain, where two or more instances report themselves as the master. There is actually a moment after a failover when you can see this happen.
If your master goes down and a slave is promoted, when the master comes back up it will report itself as master until Sentinel detects the master and reconfigures it to be a slave. During this time your HA Proxy check will send writes to the original master. These writes will be lost when Sentinel reconfigures it to be a slave.
For the case of option 1:
You can either run a separately configured instance of HA Proxy for each password, or you can set up multiple front ends and back ends (paired up) in a single instance. Personally I'd go with multiple instances of HA Proxy, as it allows you to manage them without them interfering with each other.
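A rough sketch of the single-instance variant, reusing the auth keys from the question (the address and frontend port are illustrative); repeat the pattern on other frontend ports for R2/klm and R3/xyz:
listen redis-r1
bind ip_address:6379 name redis-r1
mode tcp
option tcp-check
tcp-check connect
tcp-check send AUTH\ abc\r\n
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check send info\ replication\r\n
tcp-check expect string role:master
tcp-check send QUIT\r\n
tcp-check expect string +OK
server R1 ip_address:6379 check inter 1s
Keep in mind the caveat above: in a Sentinel-managed replication setup only one of the three instances will report role:master at any given time, so the checks for the two read-only slaves will fail by design.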
For the case of option 2:
You'll need to glue Sentinel's notification mechanism to HA Proxy being reconfigured. This can easily be done using a script triggered on Sentinel to reach out and reconfigure HA Proxy on the switch-master event. The details on doing this are at http://redis.io/topics/sentinel and more directly at the bottom of the example file found at http://download.redis.io/redis-stable/sentinel.conf
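The hook in sentinel.conf looks roughly like this (the master name and script path are placeholders; the script itself is something you write to rewrite and reload the HA Proxy configuration):
# sentinel.conf: run this script after a failover changes the master
sentinel client-reconfig-script mymaster /usr/local/bin/reconfigure-haproxy.sh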
In a Redis Pod + Sentinel setup with direct connectivity the clients are able to gather the information needed to determine where to connect to. When you place a non-transparent proxy in between them your proxy needs to be able to make those decisions - or have them made for it when topology changes occur - on behalf of the client.
Note: what you describe is not a Redis cluster, it is a replication setup. A Redis cluster is entirely different. I use the term "pod" to apply to a replication based setup.

Redis bind to more than one IP

In the redis.conf the normal setting is
bind 127.0.0.1
I want redis to listen to another ip too (say my local development address)
I tried
bind 127.0.0.1, 123.33.xx.xx
but this does not work. I cannot find anything relevant in the documentation or by googling. I hope someone can help.
Binding to multiple IPs is indeed possible since Redis 2.8. Just separate each IP by whitespace (not commas).
bind 127.0.0.1 123.33.xx.xx
Source: Official default config
This answer is not outdated and will work for both older and newer versions.
The common misunderstanding is that the bind setting does not list client machines' addresses; it lists the server-side interfaces through which connections are accepted. In your example, if your local development (client) address is 123.33.xx.xx, that doesn't mean you have to put exactly that address in the binding; in fact, if the address doesn't belong to one of the Redis server's own interfaces, the Redis service will not start.
So if ifconfig on your Redis server machine shows that you have some network interface similar to this:
eth0 Link encap:Ethernet HWaddr 00:0c:...
inet addr:192.168.1.110 Bcast:192.168.1.255 Mask:255.255.255.0
you can put the interface's address 192.168.1.110 as a binding, and every request to Redis that passes through this interface should succeed.
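As a quick sanity check (assuming the default port 6379), a client on that network can then run:
redis-cli -h 192.168.1.110 -p 6379 ping
which should reply with PONG if the binding is correct and nothing else is blocking the port.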
Since:
--[ Redis 2.8 Release Candidate 1 (2.7.101) ] Release date: 18 Jul 2013
you can:
[NEW] Ability to bind multiple IP addresses.
Cheers!!
Edit: it seems that the correct way is still a single bind line with one or more IPs separated by spaces.
This way:
bind 127.0.0.1 10.150.220.121
EDIT: This is an outdated answer. Please check the newer answers for the solution.
You cannot set Redis to listen on multiple specific interfaces. If multiple interfaces are required, just remove the bind line.
As @taro pointed out, use a firewall to restrict access.
I tried finding that answer too. As it stands, it's not possible to do this; I found this out while searching for how to bind on multiple (but not all) interfaces. This is what turned up: http://code.google.com/p/redis/issues/detail?id=497, stating it will not be supported by Redis itself.
In conjunction with HAProxy, that makes it impossible to put it in front of Redis in one go. You need to use a different port, another interface, or choose to bind on one IP.
The only way this worked for me was by adding separate lines:
bind 111.222.33.44
bind 127.0.0.1 ::1
bind 127.0.0.1 192.168.152.2
Note: I have to put 127.0.0.1 first, otherwise the 192.x address will not be bound at system boot. However, another systemctl restart redis afterwards will suffice -- might this be a bug? (Debian 10 and Redis 5.0.3)
For macOS Homebrew installation, make sure you are editing /usr/local/etc/redis.conf instead of the template file: /usr/local/Cellar/redis/6.2.6/.bottle/etc/redis.conf