I have a twemproxy_sentinel setup that listens on its default port 22122 and forwards requests to the underlying Redis servers running on ports 6380 and 6381.
Every now and then, port 22122 becomes unavailable, so clients using Redis cannot connect; a telnet to the port closes instantly. All I need to do is /etc/init.d/nutcracker restart and things go back to normal. All along, the sentinel and Redis services keep working; only the twemproxy seems to get cut off. Right up to the restart, the nutcracker service is still running (ps shows it), and the logs show no indication of anything failing.
I'm not sure why this happens. I've dug through the logs of the Redis servers, the Redis sentinel, and twemproxy, looked into /var/log/messages, and tried to ensure file-max wasn't limiting the number of ports being opened.
I wonder where I can start looking into why things go down.
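For anyone hitting the same thing, these are the kinds of checks that can narrow it down (a sketch; the process name nutcracker and port 22122 match my setup, adjust for yours):

# is anything still listening on the proxy port?
ss -ltn | grep 22122

# how many file descriptors is nutcracker actually holding open?
ls /proc/$(pgrep -o nutcracker)/fd | wc -l

# and what per-process limit is it running under?
grep 'open files' /proc/$(pgrep -o nutcracker)/limits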
I realized I had overlooked that file-max doesn't necessarily allow nutcracker itself to open that many file descriptors; it merely raises the system-wide ceiling. Things went back to normal after actually raising nutcracker's own per-process limit so it can open more.
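For reference, a minimal sketch of the distinction (the values, and the assumption that nutcracker runs as a user named "nutcracker", are mine; adjust for your system):

# system-wide ceiling, which is what I had originally checked
sysctl fs.file-max

# the per-process limit nutcracker actually runs under can be raised via
# /etc/security/limits.conf, e.g.:
nutcracker  soft  nofile  65536
nutcracker  hard  nofile  65536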
I'm running HAProxy as a TCP load balancer in front of an on-prem Kubernetes cluster. I have set up a small app on each cluster node which returns HTTP 200 when the node is considered healthy. One of the health checks it performs is to query the Kube API and verify the node's status according to Kubernetes itself. Now, if for some reason the Kube API goes down, all nodes will be considered unhealthy at the same time, even though the applications running on the workers are still available.
I'd like to set up HAProxy in such a way that whenever all worker nodes are down according to the health check, HAProxy just assumes they are all alive. If indeed all nodes are down, whether or not traffic is forwarded doesn't matter. If the reason they're all down is that some shared component doesn't respond, just blindly sending traffic will at least keep the service going.
I've searched the HAProxy reference for an option that does this, but I can't seem to find one. I think I should be able to get this behaviour by registering each worker node twice, once regularly and once with the backup option specified. Adding allbackups to the backend would then make it so that if all regular worker nodes are down, all worker nodes are used as backups. That would look like this:
backend workers
    mode tcp
    option httpchk HEAD /
    option allbackups
    server worker-001-1 <address-1> check port 32000
    server worker-001-2 <address-2> check port 32000
    server worker-001-1-backup <address-1> backup
    server worker-001-2-backup <address-2> backup
While this solution seems to work, it feels very hacky. Is there a cleaner way to do this? Is there an option I missed in the reference?
Thanks!
I found a more suitable solution in this answer: https://serverfault.com/a/923624/255500
It boils down to using backend switching rules and creating two backends for each group of clusters:
frontend ingress
    bind *:80 name http
    bind *:443 name https
    bind *:30000-32767 name nodeports
    mode tcp
    default_backend workers
    use_backend workers_backup if { nbsrv(workers) eq 0 }

backend workers
    mode tcp
    option httpchk HEAD /
    server worker-001-1 <address-1> check port 32000
    server worker-001-2 <address-2> check port 32000

backend workers_backup
    mode tcp
    server worker-001-1 <address-1> no-check
    server worker-001-2 <address-2> no-check
Once backend workers has zero servers up, backend workers_backup will be used. It's still registering each node twice, but I think this is the better solution.
Is it possible that you're trying to solve the wrong problem? If the nodes report as unhealthy whenever the Kube API is unavailable, shouldn't you focus on making the Kube API highly available instead?
In this article, they describe a way to create a highly available control plane. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
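The load-balanced control plane described there can itself sit behind a small HAProxy TCP frontend. A sketch, with names and addresses that are purely illustrative:

frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend apiservers

backend apiservers
    mode tcp
    option tcp-check
    server control-plane-1 10.0.0.11:6443 check
    server control-plane-2 10.0.0.12:6443 check
    server control-plane-3 10.0.0.13:6443 check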
I was recently doing some investigation into issues I'm facing on my Redis clusters, and saw that I have many connections sticking around indefinitely, despite having been idle for a long time.
After some investigation, I found that I have these two settings on my cluster:
timeout 300
tcp-keepalive 0
The stale connections that aren't going away are PUB/SUB client connections (StackExchange.Redis clients, in fact, but that's beside the point), and PUB/SUB clients do not respect the timeout configuration. As such, tcp-keepalive seems to be the only other configuration that can ensure these connections get cleaned up over time.
So I applied this setting to all the nodes:
redis-trib.rb call 127.0.0.1:6001 config set tcp-keepalive 300
At this point I went home, and I came back the next morning, assuming the stale connections would have been disposed of properly. Sadly, I was greeted by all the same connections.
My question is: is there any way, from the Redis server side, to dispose of these connections gracefully after they've been established? Is it expected that applying the tcp-keepalive configuration after connections are already established and old means they will never be disposed of?
The only solution I've found, besides restarting the Redis servers, is to do a bit of scripting around the CLIENT KILL command (sketched below), which is doable, but I was hoping for something configuration-based to handle this.
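This is roughly the scripting I have in mind (a sketch; the port and the one-hour idle threshold are assumptions, and it would need to be run against each node):

# find clients idle for more than an hour and kill them by address
redis-cli -p 6001 client list |
  awk '{
    delete f
    for (i = 1; i <= NF; i++) { split($i, kv, "="); f[kv[1]] = kv[2] }
    if (f["idle"] + 0 > 3600) print f["addr"]
  }' |
  while read -r addr; do
    redis-cli -p 6001 client kill addr "$addr"
  done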
Thanks in advance for any insight here!
I have a Redis Cluster that clients connect to via HAProxy with a virtual IP. The Redis cluster has three nodes (each node sharing its server with a running sentinel instance).
My question is: when a client gets a "MOVED" error/message from a cluster node upon sending a request, does it bypass HAProxy the second time it connects, since the MOVED message provided it with an IP:port? If not, how does HAProxy know to send it to the correct node the second time?
I just need to understand how this works under the hood.
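For concreteness, this is the kind of reply I'm asking about (the key and addresses are illustrative):

127.0.0.1:6379> GET foo
(error) MOVED 12182 192.168.0.13:6379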
If you want to use HAProxy in front of Redis Cluster nodes, you will need to either:
Set up an HAProxy instance for each master/slave pair, and wire up something to update HAProxy when a failover happens. You would probably also have to intercept the topology-related commands and responses to insert the virtual IPs rather than the IPs the nodes themselves report.
Customize HAProxy to teach it how to be the cluster-aware Redis client, so the actual client doesn't know about the cluster at all. This means teaching it the Redis protocol, storing the cluster's topology information, and selecting the node to query based on the key(s) being accessed by the consumer code.
With Redis Cluster, the client must be able to access every node in the cluster. Of the two options above, Option 2 is the "easier" one, but at this point I wouldn't recommend either.
Conceivably you could use the VIP as a "first place to get the topology info" address, but I suspect you'd see serious issues develop, as that original IP would not be one of the ones properly reported as a node handling data. You could instead use round-robin DNS and avoid that problem, or pass the built-in "here is a list of cluster IPs (or names?)" list in the initial connection configuration.
Your simplest route, and the one least likely to be problematic, is to go "full native": give your clients full and direct access to every node in the cluster and don't use HAProxy at all.
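To illustrate the "full native" route: a cluster-aware client is given several node addresses up front, discovers the rest of the topology itself, and follows MOVED redirections internally. A sketch using StackExchange.Redis as one such client (addresses are illustrative):

using StackExchange.Redis;

// connect directly to a few cluster nodes; the client discovers the rest
var muxer = ConnectionMultiplexer.Connect(
    "192.168.0.10:6379,192.168.0.13:6379,192.168.0.14:6379");
var db = muxer.GetDatabase();
db.StringSet("foo", "bar");   // routed to whichever node owns the key's slot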
I've been using ServiceStack PooledRedisClientManager with success. I'm now adding Twemproxy into the mix and have 4 Redis instances fronted with Twemproxy running on a single Ubuntu server.
This has caused problems in light load tests (100 users) connecting to Redis through ServiceStack. I've tried the original PooledRedisClientManager and BasicRedisClientManager; both give the error "No connection could be made because the target machine actively refused it".
Is there something I need to do to get these two to play nicely together? This is the Twemproxy config:
alpha:
  listen: 0.0.0.0:12112
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  timeout: 400
  server_retry_timeout: 30000
  server_failure_limit: 3
  server_connections: 1000
  servers:
   - 0.0.0.0:6379:1
   - 0.0.0.0:6380:1
   - 0.0.0.0:6381:1
   - 0.0.0.0:6382:1
I can connect to each one of the Redis server instances individually; it just fails going through Twemproxy.
I haven't used twemproxy before but I would say your list of servers is wrong. I don't think you are using 0.0.0.0 correctly.
Your servers would need to be (for your local testing):
servers:
 - 127.0.0.1:6379:1
 - 127.0.0.1:6380:1
 - 127.0.0.1:6381:1
 - 127.0.0.1:6382:1
You use 0.0.0.0 in the listen directive to tell twemproxy to listen on all available network interfaces on the server. This means twemproxy will try to listen on:
the loopback address 127.0.0.1 (localhost),
your private IP (e.g. 192.168.0.1), and
your public IP (e.g. 134.xxx.50.34)
When you are specifying servers, the server config needs to know the actual address it should connect to; 0.0.0.0 doesn't make sense there, it needs a real value. So when you move to separate Redis machines, you will want to use the private IP of each machine, like this:
servers:
 - 192.168.0.10:6379:1
 - 192.168.0.13:6379:1
 - 192.168.0.14:6379:1
 - 192.168.0.27:6379:1
Obviously your IP addresses will be different; you can use ifconfig to determine the IP on each machine. It may be worth using hostnames instead if your IPs are not statically assigned, for example as sketched below.
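Something like this (the hostnames are placeholders; note that, as far as I know, twemproxy resolves them once at startup, so a DNS change still requires a restart):

servers:
 - redis-01.internal:6379:1
 - redis-02.internal:6379:1
 - redis-03.internal:6379:1
 - redis-04.internal:6379:1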
Update:
As you have said you are still having issues, I would make these recommendations:
Remove auto_eject_hosts: true. If you were getting some connectivity at first and then, after a while, ended up with none, it's because something caused twemproxy to think there was a problem with the Redis hosts and eject them.
So eventually, when your ServiceStack client connects to twemproxy, there are no hosts left to pass the request to, and you get the error No connection could be made because the target machine actively refused it.
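Concretely, your config section would become something like this (server_retry_timeout and server_failure_limit only matter when ejection is enabled, so they can go too):

alpha:
  listen: 0.0.0.0:12112
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: false
  redis: true
  timeout: 400
  server_connections: 1000
  servers:
   - 127.0.0.1:6379:1
   - 127.0.0.1:6380:1
   - 127.0.0.1:6381:1
   - 127.0.0.1:6382:1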
Do you actually have enough RAM to stress test your local machine this way? You are running at least 4 instances of Redis, which need real memory to store their values. Twemproxy consumes a large amount of memory to buffer the requests it passes to Redis, and this memory pool is never released; see here for more information. Your ServiceStack app will consume memory as well, more so in Debug mode. On top of that you'll probably have Visual Studio or another IDE open, the stress-test application itself, and your operating system, plus whatever background processes and other applications you haven't closed.
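A couple of quick checks while the test runs can make this concrete (the port is taken from your config; adjust as needed):

# overall memory left on the box
free -h

# what each Redis instance is actually holding
redis-cli -p 6379 info memory | grep used_memory_human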
A good practice is to try to run tests on isolated hardware as far as possible. If it is not possible, then the system must be monitored to check the benchmark is not impacted by some external activity.
You should read the Redis article here about benchmarking.
As you are using this in a localhost situation, use the BasicRedisClientManager, not the PooledRedisClientManager. For example:
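A minimal sketch (the port is the twemproxy listen port from your config; the key name is arbitrary):

using ServiceStack.Redis;

// point the client manager at twemproxy, not at the individual Redis instances
var redisManager = new BasicRedisClientManager("127.0.0.1:12112");
using (var client = redisManager.GetClient())
{
    client.SetValue("test:key", "hello");
    var value = client.GetValue("test:key");
}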
According to the git commit messages, ServiceStack has recently added failover support. I initially assumed this meant that I could pull one of my Redis instances down and my pooled client manager would handle the failover elegantly, trying to connect to one of my alternate Redis instances. Unfortunately, my code just bugs out and says that it can't connect to the initial Redis instance.
I am currently running instances of Redis 2.6.12 on Windows, with the master at port 6379 and a slave at 6380, and with sentinels set up to automatically promote the slave to master if the master goes down. I am currently instantiating my client manager like this:
PooledRedisClientManager pooledClientManager =
    new PooledRedisClientManager(
        new string[] { "localhost:6379" },
        new string[] { "localhost:6380" });
where the first array is read-write hosts (for the master), and the second array is read-only hosts (for the slave).
When I terminate the master at port 6379, the sentinels promote the slave to master. But when I then run my C# code, instead of failing over to port 6380, it simply breaks and returns the error "could not connect to redis Instance at localhost:6379".
Is there a way around this, or will failover simply not work the way I want it to?
PooledRedisClientManager.FailoverTo allows you to reset which hosts are read/write and which are read-only, and it restarts the factory. This allows for a quick transition without needing to recreate clients.
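A minimal sketch of calling it after the sentinels have promoted the slave (the exact overloads may differ between ServiceStack.Redis versions, and wiring this to a sentinel notification is left out):

// localhost:6380 has just been promoted to master
pooledClientManager.FailoverTo(
    new[] { "localhost:6380" },   // new read-write hosts
    new[] { "localhost:6379" });  // read-only, once the old master rejoins as a slave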