What is the default TCP keepalive in Redis?

What is the default tcp-keepalive for Redis 3 if I don't specify it? I commented out the tcp-keepalive option in the redis.conf file.
# A reasonable value for this option is 60 seconds.
#tcp-keepalive 0

The default is 0.
You can verify this by running CONFIG GET tcp-keepalive:
127.0.0.1:6379> CONFIG GET tcp-keepalive
1) "tcp-keepalive"
2) "0"
or by looking at the source code.

It depends on the Redis version. From the documentation:
"…
TCP keepalive
Recent versions of Redis (3.2 or greater) have TCP keepalive (SO_KEEPALIVE socket option) enabled by default and set to about 300 seconds. This option is useful in order to detect dead peers (clients that cannot be reached even if they look connected). Moreover, if there is network equipment between clients and servers that need to see some traffic in order to take the connection open, the option will prevent unexpected connection closed events.
…"

Redis 6 TLS Support and Redis Sentinel

I would like to set up a basic 3-node Redis Sentinel setup using the new TLS features of Redis 6. Unfortunately, it doesn't seem like Redis 6 Sentinel is smart enough to speak TLS to clients.
Does anyone know of a way to do this, or if it's not possible, if there are any mentions online about adding support for this in the future? It seems a shame to have these nice TLS features and not be able to use them with Redis' own tools.
I am aware that in the past people have used Stunnel to do this. With TLS support added to Redis, I am only interested in doing this if it can be done without third-party additions.
My setup:
3 Redis servers (6.0-rc, last pulled last week), running TLS with the test certs as specified in the Redis docs - one master and 2 replicas
3 Sentinels (6.0-rc, also last pulled last week), not running TLS on their ports (I would like to, but that's a secondary problem)
What I've Tried:
Pointing Sentinel to the Redis TLS port - this results in lots of TLS errors in Redis' logs about incorrect TLS version received, as Sentinel is not speaking TLS to Redis. Since it fails, Sentinel thinks the master is down.
Adding "https://" in the Sentinel config in front of the master IP - this results in Sentinel refusing to run, saying it can't find the master hostname.
Adding TLS options to Sentinel - this results in Sentinel trying to talk TLS on its ports, but not to clients, which doesn't help. I couldn't find any options specifically about making Sentinel speak TLS to clients.
Pointing Sentinel to the Redis not-TLS port (not ideal, I would rather only have the TLS port open) - this results in Sentinel reporting the wrong (not-TLS) port for the master to the simple Python client I'm testing with (it literally just tries to get master info from Sentinel) - I want the client to talk to Redis over TLS for obvious reasons
Adding the "replica-announce-port" directive to Redis with Sentinel still pointed to the not-TLS port - this fails in 2 ways: the master port is still reported incorrectly as the not-TLS port (seems to be because the master is not a replica and so the directive does not apply), and Sentinel now thinks the replicas are both down (because the TLS port is reported, replicas are auto discovered, and it can't speak to the replicas on the TLS port).
I am aware of this StackOverflow question (Redis Sentinel and TLS) - it is old and asks about Redis 4, so it's not the same.
I did figure this out and forgot to post the answer earlier: The piece I was missing was that I needed to set the tls-replication yes option on both the Redis and Sentinel servers.
Previously, I had only set it on the Redis servers, as they were the only ones that needed to do replication over TLS. But for some reason, that particular option is what is needed to actually make Sentinel speak TLS to Redis.
So overall, for TLS, both sides of the equation needed the following options (a combined sketch follows the list):
tls-port <port>
port 0
tls-auth-clients yes
tls-ca-cert-file <file>
tls-key-file <file>
tls-cert-file <file>
tls-replication yes
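For concreteness, a minimal sketch of how that might look in the two config files, assuming the test certificates from the Redis TLS docs live under /etc/redis/tls/ (paths and port numbers are illustrative only):
# redis.conf (master and replicas)
port 0
tls-port 6379
tls-cert-file /etc/redis/tls/redis.crt
tls-key-file /etc/redis/tls/redis.key
tls-ca-cert-file /etc/redis/tls/ca.crt
tls-auth-clients yes
tls-replication yes
# sentinel.conf (each Sentinel)
port 0
tls-port 26379
tls-cert-file /etc/redis/tls/sentinel.crt
tls-key-file /etc/redis/tls/sentinel.key
tls-ca-cert-file /etc/redis/tls/ca.crt
tls-auth-clients yes
tls-replication yes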
Try adding the tls-port option to sentinel.conf, as it seems to enable TLS support in general; the same is stated in the documentation. For me, the two statements below, added to sentinel.conf on top of the rest of the TLS configuration, actually did the trick.
tls-port 26379
port 0

how many total connection or max connections are available in Redis Server?

How many total connections, or at most how many connections, can be present in Redis?
How many connections are busy?
How many connections are free, waiting for requests?
Which commands or configuration do I need to look at to answer the above questions?
I am asking about total/max connections, not clients.
Clients ARE connections. Redis doesn't know if two connections are from the same client.
Current
info clients
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
Maximum
config get maxclients
1) "maxclients"
2) "4064"
If you want to change maxclients, you may do so in the conf file, or at runtime with the command config set maxclients <val>, but note that this value is limited by the available file descriptors, so run an appropriate ulimit -n <val> first.
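For example, a rough sketch (the hostname, path, and numbers are illustrative; the ulimit has to be raised in the shell that starts redis-server, not in a separate session):
root@redishost:~# ulimit -n 65536
root@redishost:~# redis-server /etc/redis/redis.conf
and then, at runtime:
127.0.0.1:6379> CONFIG SET maxclients 20000
OK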
As of writing this, in Redis 2.6 the default limit is 10000 clients, but it can be overridden in redis.conf.
If the number we require is more than the maximum number of file descriptors the process is allowed to open, then Redis sets the maximum number of clients/connections to what it can realistically handle.
Read more about it in the Redis documentation on client handling.

RabbitMQ Binary memory consumption

According to the images below (RabbitMQ 3.6.6-1), I am wondering where all the memory counted as "Binaries" is being used, since it doesn't show up as the same memory usage in the "Binary references" breakdown.
Can anyone enlighten me?
I suspect something needs to be "Cleaned up"... but what?
This big consumption of "Binaries" can also be seen on machines with 4 queues and no messages...
EDIT 17/07/2017:
We have found that this is mainly due to the fact that we open and close multiple connections to rabbitmq, which somehow does not seem to free up the memory in a clean way.
The fact that the biggest part of the used memory isn't associated with any particular messages (the "binary references" part of your screenshot) suggests this memory is being used by operating system resources not directly managed by RabbitMQ. My biggest suspect would be open connections.
To test this theory, you can run netstat and see if you get similar results (assuming you're running rabbitmq on the default port - 5672):
root@rabbitmqhost:~# netstat -ntpo | grep -E ':5672\>'
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name Timer
tcp6 0 0 xxx.xxx.xxx.xxx:5672 yyy.yyy.yyy.yyy:57656 ESTABLISHED 27872/beam.smp off (0.00/0/0)
tcp6 0 0 xxx.xxx.xxx.xxx:5672 yyy.yyy.yyy.yyy:49962 ESTABLISHED 27872/beam.smp off (0.00/0/0)
tcp6 0 0 xxx.xxx.xxx.xxx:5672 yyy.yyy.yyy.yyy:56546 ESTABLISHED 27872/beam.smp off (0.00/0/0)
tcp6 0 0 xxx.xxx.xxx.xxx:5672 yyy.yyy.yyy.yyy:50726 ESTABLISHED 27872/beam.smp off (0.00/0/0)
⋮
The interesting part is the last column showing "off". This indicates these connections aren't using keepalives, which means they'll just dangle there eating up resources if a client dies without a chance to close them gracefully.
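For comparison, a connection with keepalives enabled would show a keepalive timer in that column instead, along the lines of (numbers will vary):
tcp6 0 0 xxx.xxx.xxx.xxx:5672 yyy.yyy.yyy.yyy:50726 ESTABLISHED 27872/beam.smp keepalive (42.17/0/0)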
There are two ways to avoid this problem:
TCP keepalives
application protocol heartbeats
TCP Keepalives
These are handled by the kernel. Whenever a connection sees no packets for a certain amount of time, the kernel sends some probes to see if the other side is still there.
Since the current Linux (e.g. 4.12) timeout defaults are pretty high (7200 seconds + 9 probes every 75 seconds > 2 hours), rabbitmq does not use them by default.
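To check what your kernel is currently using, something like this should work (the output shown reflects the typical defaults mentioned above):
root@rabbitmqhost:~# sysctl net.ipv4.tcp_keepalive_time net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes
net.ipv4.tcp_keepalive_time = 7200
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9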
To activate them, you have to add the keepalive option to your rabbitmq.config:
[
  {rabbit, [
    ⋮
    {tcp_listen_options, [
      ⋮
      {keepalive, true},
      ⋮
    ]},
    ⋮
  ]},
  ⋮
].
and probably lower the timeouts to some more sensible values. Something like this might work (but of course YMMV):
root@rabbitmqhost:~# sysctl net.ipv4.tcp_keepalive_intvl=30
root@rabbitmqhost:~# sysctl net.ipv4.tcp_keepalive_probes=3
root@rabbitmqhost:~# sysctl net.ipv4.tcp_keepalive_time=60
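Note that sysctl changes made this way do not survive a reboot; to make them persistent, you would typically also put them into /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/), for example:
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_time = 60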
Application protocol heartbeats
These are handled by the actual messaging protocols (e.g. AMQP, STOMP, MQTT), but require the client to opt in. Since each protocol is different, you have to check the documentation to set it up in your client applications.
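For AMQP, the broker also suggests a heartbeat interval during connection negotiation. As a sketch in the same classic Erlang-style rabbitmq.config used above (the 30-second value is illustrative, and clients may propose a different value during negotiation):
[
  {rabbit, [
    {heartbeat, 30}
  ]}
].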
Conclusion
The safest option, from the perspective of avoiding dangling resources, is TCP keepalives, since you don't have to rely on your client applications behaving.
However, they are less versatile and, if misconfigured on a high-throughput but "bursty" system, may lead to worse performance, since false positives will cause reconnects.
Application protocol heartbeats are the more fine-grained option if you need to avoid this problem while also keeping your system's performance, but they require more coordination, in the sense that clients have to opt in and choose their own sensible timeouts.
Since you can never be 100% sure your clients won't die without gracefully closing connections, enabling TCP keepalives as a fallback (even with higher timeouts) might also be a good idea.

How to use SSH with an unstable internet connection?

Sometimes, I'm forced to use ssh over an unstable internet connection.
ping some.doma.in
PING some.doma.in (x.x.x.x): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
64 bytes from x.x.x.x: icmp_seq=3 ttl=44 time=668.824 ms
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
Request timeout for icmp_seq 6
Request timeout for icmp_seq 7
64 bytes from x.x.x.x: icmp_seq=8 ttl=44 time=719.034 ms
Is there a way to use tools to increase the reliability of TCP connections (above all SSH)?
I imagine something like an SSH proxy that runs on a machine with a decent connection, receives UDP packets, orders them using a higher-layer protocol, forwards them to the destination server using ssh, and relays the replies back to the origin.
Or are there any ssh command line switches to enable more data redundancy or anything else to avoid "broken pipes"?
Or maybe a client-server application that uses the BitTorrent network to distribute packets and allows commands to be forwarded to ssh back and forth (high latency but high reliability).
// I tried screen and stuff but sometimes the connection is just too unreliable to enable efficient working.
Cheers and thx in advance!
After some more research and some luck, I stumbled upon mosh.
http://mosh.mit.edu
It's amazing. A client-server implementation using UDP and lots of small little things (like echo prediction). Everyone should use it.
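A minimal invocation looks just like ssh (assuming mosh is installed on both the client and the server, and the server's UDP ports 60000-61000 are reachable):
mosh user@some.doma.in
It logs in over regular ssh, starts mosh-server on the remote side, and then switches to its own UDP-based protocol, so the session survives dropped packets, changing IPs, and long periods without connectivity.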

Force a router to keep an idle UDP port open

A client opens a UDP connection to my server. After some time (10 minutes to 24 hours) the server needs to send data back to the client, but it finds that the client's UDP port is closed!
After testing, we found that the client still has the UDP port open, but the router (NAT) closed the port, probably due to inactivity.
Is there any way to force the router to keep the UDP port open without sending keep-alive packets (server or client side)?
Is there anything like that in ICMP?
Thank you.
I had the same problem and found this solution, not for the router, but for the server:
Try configuring the keepalive.
The way to do it depends on which service/program/OS you are using.
For example, using OpenSSH on the client, you have to add/configure these lines in the file ~/.ssh/config or /etc/ssh/ssh_config:
ServerAliveInterval 30
ServerAliveCountMax 60
On the server (where I made the change), add/configure these lines in the file /etc/ssh/sshd_config:
ClientAliveInterval 30
ClientAliveCountMax 60
Of course it depends on the operating system, etc., but the idea is to configure the keepalive right in the service, so that the periodic probes keep the NAT mapping from expiring.
Good luck!