Aerospike: How to find, from any Aerospike server, which clients are accessing it?

We had multiple clients configured to talk to this cluster of Aerospike nodes. Even after removing the configuration from all the clients we are aware of, some read/write requests are still reaching this cluster, as shown in AMC.
I looked at the log file at /var/log/aerospike/aerospike.log, but could not find any useful information.
Update
The netstat command mentioned in the answer by @kporter shows the number of connections, with statuses such as ESTABLISHED, TIME_WAIT, and CLOSE_WAIT. But that does not mean those connections are currently being used for get/set operations. How do I get the IPs from which Aerospike operations are currently being performed?
Update 2 (Solved)
As mentioned in the comments on @kporter's answer, a tcpdump command on the culprit client showed packets still being sent to the Aerospike cluster that was no longer referenced in the config file. This was happening even while the AMC for that cluster showed no read/write TPS.
I later found that this stopped after restarting the nginx service on the client. Note that the config file on the client now references a new Aerospike cluster, and packets sent to that cluster did not stop after the nginx restart. This is weird, but it worked.
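For reference, a capture along these lines on the suspect client is enough to expose the stray traffic (the interface and filter are illustrative; 3000 is the default Aerospike service port):

tcpdump -nn dst port 3000

Any packets still leaving for the old cluster's node IPs identify the culprit.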

Clients connect to Aerospike over port 3000:
The following command, when run on the server nodes, will show the addresses of hosts connecting to the server over port 3000.
netstat --tcp --numeric-ports | grep 3000
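If you only want the peer addresses of currently established connections, one refinement (assuming the standard Linux netstat layout, where the foreign address is the fifth column) is:

netstat --tcp --numeric-ports | grep 3000 | grep ESTABLISHED | awk '{print $5}' | sort | uniq -c

This prints each connected client address with a count of its open connections.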

Related

How to setup Redis cluster behind a load balancer?

We want to set up Redis 6.2 clustering behind a LB. There are only master nodes and there is no Redis Sentinel being used. Each cluster-enabled Redis instance is running on a different host with the same configuration (e.g. all of them are configured with port 6379). Is this possible with some port configuration on the LB such that a unique port on the LB maps to a unique_ip:6379?
Our idea is to use a cluster-aware Redis client like Lettuce's RedisClusterClient, which would issue CLUSTER NODES/SLOTS commands and react to MOVED/ASK redirections. It would also take care of splitting a pipeline across separate connections based on the slot for each command.
It seems like this is not possible to achieve if the same port is used on all Redis hosts. Using https://docs.redis.com/latest/rs/networking/cluster-lba-setup/ as a guide, the best we could manage was to configure each Redis with a unique port, set cluster-announce-ip to the virtual IP (which points to the LB), and then manually make sure that the same port is used on the LB as on the Redis host. With this, the CLUSTER SLOTS and MOVED responses from the Redis hosts could be correctly acted upon by the client. But this complicates our setup whenever a Redis host has to be added or removed.
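For context, the workaround described above amounts to something like this in each host's redis.conf (the port and virtual IP are hypothetical):

port 7001
cluster-enabled yes
cluster-announce-ip 10.0.0.100
cluster-announce-port 7001

with the LB configured to forward 10.0.0.100:7001 to this particular host.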
If you're on AWS, you can use Route 53 to achieve this.
Create a setup like this:
Add all hosts (IP addresses) in Route 53 and set the TTL to a small value, like 30 seconds or so. Route 53 will return one of these Redis IP addresses; using this endpoint, Redis clients like Lettuce or Jedis will discover all the Redis nodes.
You can use any other DNS system as well; the record type should be A.
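For illustration, the resulting record set would look roughly like this (hypothetical zone and addresses, TTL of 30 seconds):

redis.example.com.  30  IN  A  10.0.0.11
redis.example.com.  30  IN  A  10.0.0.12
redis.example.com.  30  IN  A  10.0.0.13

Clients then use redis.example.com:6379 as their single seed endpoint and discover the rest of the cluster from there.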

Syslog-ng to Syslog-ng over TLS - destination not writing to disk

I'm trying to configure a syslog-ng server to send all of the logs that it receives to another syslog-ng server over TLS. Both are running RHEL 7. Everything seems to be working from an encryption and cert perspective: I'm not seeing any error messages in the logs, an openssl s_client test connection works successfully, and I can see the packets coming in over the port that I'm using for TLS, but nothing is being written to disk on the second syslog-ng server. Here's a summary of the config on the syslog server that I'm trying to send the logs to:
source:
source s_encrypted_syslog {
    syslog(
        ip(0.0.0.0) port(1470) transport("tls")
        tls(
            key-file("/etc/syslog-ng/key.d/privkey.pem")
            cert-file("/etc/syslog-ng/cert.d/servercert.pem")
            peer-verify(optional-untrusted)
        )
    );
};
#changing to trusted once issue is fixed
destination:
destination d_syslog_facility_f {
    file("/mnt/syslog/$LOGHOST/log/$R_YEAR-$R_MONTH-$R_DAY/$HOST_FROM/$HOST/$FACILITY.log"
        dir-owner("syslogng") dir-group("syslogng") owner("syslogng") group("syslogng"));
};
log setting:
log { source (s_encrypted_syslog); destination (d_syslog_facility_f); };
syslog-ng is currently running as root to rule out permission issues. SELinux is currently set to permissive. I've tried increasing the verbosity of the syslog-ng logs and turned on debugging, but nothing jumps out at me as far as errors or issues go. The odd thing is, I have a very similar config on the first syslog-ng server, and it's receiving and storing logs just fine.
Also, I should note that there could be some small typos in the config above, as I'm not able to copy and paste it. syslog-ng lets me start the service with no errors with the config that I currently have loaded. It's simply not writing the data that it's receiving to the destination that I have specified.
It happens quite often that the packet filter prevents a connection to the syslog port, in your case port 1470. In that case the server starts up successfully, and you might even be able to connect using openssl s_client on the same host, but the client will not be able to establish a connection to the server.
Please check that you can actually connect to the server from the client computer (e.g. via openssl s_client, or at least with something like netcat or telnet).
If the connection works, another issue might be that the client is not routing messages to this encrypted destination. syslog-ng only performs the TLS handshake as messages are being sent; with no messages routed, the connection would stay open without actually exchanging packets at the TCP level.
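To verify reachability from the client side, a test along these lines should do (the server address is a placeholder):

openssl s_client -connect syslog-server.example.com:1470

A successful run prints the server's certificate chain; a hang or a refused connection points at the packet filter.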
Couple of troubleshooting tips:
You can check if there is a connection between the client and the server with "netstat -antp | grep syslog-ng" on the server or the client. You should see connections in the ESTABLISHED state on both sides of the connection (with local/remote addresses switched of course).
Check that your packet filter lets port 1470 connections through. You are most likely using iptables, try reviewing your ruleset and see if port 1470 on TCP is allowed to pass in the INPUT chain. You could try adding a "LOG" rule right before the default rule to see if the packets are dropped at that level. If you already have LOG rules, you might check the kernel logs of the server to see if that LOG rule produced any messages.
You can also confirm if there's traffic with tcpdump on the server (e.g. tcpdump -pen port 1470). If you write the traffic dump to a file (e.g. the -w argument to tcpdump, along with -s 0 to avoid truncation), then this dump file can be analyzed with wireshark to see if the negotiation takes place. You should at the very least see a "Client Hello" and a "Server Hello" packet which are not encrypted at the beginning of the handshake.
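Collected as commands, the tips above might look like this (the log prefix and capture path are illustrative):

# 1. look for ESTABLISHED connections on both the server and the client
netstat -antp | grep syslog-ng
# 2. log packets reaching TCP port 1470 at the top of the INPUT chain
iptables -I INPUT -p tcp --dport 1470 -j LOG --log-prefix "syslog-tls: "
# 3. capture the handshake to a file for later inspection in Wireshark
tcpdump -pen -s 0 -w /tmp/syslog-tls.pcap port 1470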

Redis 6 TLS Support and Redis Sentinel

I would like to set up a basic 3-node Redis Sentinel setup using the new TLS features of Redis 6. Unfortunately, it doesn't seem like Redis 6 Sentinel is smart enough to speak TLS to clients.
Does anyone know of a way to do this, or if it's not possible, if there are any mentions online about adding support for this in the future? It seems a shame to have these nice TLS features and not be able to use them with Redis' own tools.
I am aware that in the past people have used Stunnel to do this. With TLS support added to Redis, I am only interested in doing this if it can be done without third-party additions.
My setup:
3 Redis servers (6.0-rc, last pulled last week), running TLS with the test certs as specified in the Redis docs - one master and 2 replicas
3 Sentinels (6.0-rc, also last pulled last week), not running TLS on their ports (I would like to, but that's a secondary problem)
What I've Tried:
Pointing Sentinel to the Redis TLS port - this results in lots of TLS errors in Redis' logs about incorrect TLS version received, as Sentinel is not speaking TLS to Redis. Since it fails, Sentinel thinks the master is down.
Adding "https://" in the Sentinel config in front of the master IP - this results in Sentinel refusing to run, saying it can't find the master hostname.
Adding TLS options to Sentinel - this results in Sentinel trying to talk TLS on its ports, but not to clients, which doesn't help. I couldn't find any options specifically about making Sentinel speak TLS to clients.
Pointing Sentinel to the non-TLS Redis port (not ideal; I would rather only have the TLS port open) - this results in Sentinel reporting the wrong (non-TLS) port for the master to the simple Python client I'm testing with (it literally just tries to get master info from Sentinel). I want the client to talk to Redis over TLS for obvious reasons.
Adding the "replica-announce-port" directive to Redis with Sentinel still pointed to the not-TLS port - this fails in 2 ways: the master port is still reported incorrectly as the not-TLS port (seems to be because the master is not a replica and so the directive does not apply), and Sentinel now thinks the replicas are both down (because the TLS port is reported, replicas are auto discovered, and it can't speak to the replicas on the TLS port).
I am aware of this StackOverflow question (Redis Sentinel and TLS) - it is old and asks about Redis 4, so it's not the same.
I did figure this out and forgot to post the answer earlier: The piece I was missing was that I needed to set the tls-replication yes option on both the Redis and Sentinel servers.
Previously, I had only set it on the Redis servers, as they were the only ones that needed to do replication over TLS. But for some reason, that particular option is what is needed to actually make Sentinel speak TLS to Redis.
So overall, for TLS options, both sides of the equation needed:
tls-port <port>
port 0
tls-auth-clients yes
tls-ca-cert-file <file>
tls-key-file <file>
tls-cert-file <file>
tls-replication yes
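Concretely, the TLS stanza on each side might look like this (the paths are placeholders; use port 6379 on the Redis servers and 26379 on the Sentinels):

port 0
tls-port 6379
tls-cert-file /etc/redis/tls/redis.crt
tls-key-file /etc/redis/tls/redis.key
tls-ca-cert-file /etc/redis/tls/ca.crt
tls-auth-clients yes
tls-replication yes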
Try adding the tls-port option to sentinel.conf, as it seems to enable TLS support in general; the same is stated in the documentation. For me, the two statements below, added to sentinel.conf on top of the rest of the TLS configuration, actually did the trick.
tls-port 26379
port 0

How can you disable protected mode in Redis 3.2.6 Sentinel?

I have attempted everything recommended by the following error message:
(error) DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
My /etc/redis/sentinel.conf:
daemonize yes
sentinel myid XXX
sentinel monitor master XXX 6379 2
sentinel down-after-milliseconds master 60000
sentinel config-epoch master 0
protected-mode no
bind 0.0.0.0
port 26379
EDIT: My /etc/redis/redis.conf:
port 6379
bind 0.0.0.0
protected-mode no
I've also tried adding sentinel auth-pass master XXX.
My entire backend is on private subnets. I'm VPN'd into my datacenter behind the firewall, coming from the same private network, and I can still only connect locally without getting that frustrating error message.
Server Environment: Debian 8, Redis 3.2.6
Client Environment: Ubuntu 16.10, redis-cli 3.2.1
Redis instances: 3
Sentinel instances: 3
I've done not just one, but three of the four things suggested (I didn't set the command-line flag). Does anyone have any guidance or ideas? I'm clearly missing something that I've been unable to figure out from the error message, the documentation, Stack Overflow, Google, and trial & error. I figured I'd post a question here first, before diving into the source code.
Any help is appreciated. Thanks!
... and, yes, I've restarted the daemons after configuration changes. :)
https://www.reddit.com/r/redis/comments/3zv85m/new_security_feature_redis_protected_mode/
As you know, we got several problems from unprotected Redis instances exposed to the internet. I covered the reason why a restrictive binding to 127.0.0.1 by default may be a usability concern and, even worse, may not fix the problem (hey, just comment the "bind" statement and restart!) in my blog post.
The same blog post introduced an attack that was heavily used by script kiddies to break into Redis instances (serious security researchers were already able to do this, I guess).
So I finally decided to do something before Redis 3.2 official release: Protected mode is the result and will be merged into 3.2 RC2.
The feature is already available in the unstable branch, introduced by this commit. This is how it works.
If and only if:
Protected mode is enabled (this is the default both in the configuration file and in the configless default).
AND IF No AUTH password is configured.
AND IF No "bind" directive is used in order to restrict Redis to certain interfaces.
Then Redis only accepts connections from the loopback IPv4 and IPv6 addresses. External connections are accepted just long enough to send the client an error that makes the user aware of what is happening:
> PING
(error) DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients.
In this mode, connections are only accepted from the loopback interface. If you want to connect from external computers to Redis, you may adopt one of the following solutions:
Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent.
Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server.
If you started the server manually just for testing, restart it with the --protected-mode no option.
Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
This should protect against errors in a reasonable way while providing users with a clue instead of a connection refused. Please share your feedback so that we can make changes to this feature if needed, before it gets merged into Redis 3.2 RC2. Thanks.
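For reference, option 1 applied to a Redis server from its own host looks like this (standard port assumed; note that this targets redis-server, while Sentinel picks up protected-mode from sentinel.conf as shown above):

redis-cli -h 127.0.0.1 -p 6379 CONFIG SET protected-mode no
redis-cli -h 127.0.0.1 -p 6379 CONFIG REWRITE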

Clustering doesn't work with mod_cluster on JBoss AS7 - Stateful Application

I'm going to explain my situation.
Background:
I'm running three virtual machines with Debian Jessie on OpenNebula, one as master and the other two as slaves. On them I've installed JBoss AS 7.1 and mod_cluster 1.2.
Goal:
Run a stateful app, so that when I shut down the master server, the cluster allows me to continue using the app with a shared session and maintained variable values.
I followed this guide with the given web application.
Errors:
I can't access the app directly at http://master/cluster-demo/ as in the guide above; I have to specify the port (8330 for server-three).
When I shut down server-three, the slaves notice that the server is down, but the session is not shared and the application is no longer accessible. This is the output on the slave when I shut down server-three on the master.
Configuration Files
I attach my configuration files:
/opt/jboss/domain/configuration/domain.xml
/opt/jboss/httpd/httpd/conf/httpd.conf
/opt/jboss/domain/configuration/host.xml in the master
/opt/jboss/domain/configuration/host.xml in the slaves
Answer
mod_cluster does not have anything in common with the messaging (JMS, HornetQ) subsystems. The mod_cluster setting also does not have anything in common with the clustering subsystem, i.e. Infinispan and its workhorse, JGroups.
What the AS7 mod_cluster subsystem does is listen for UDP multicast advertisement messages emitted by the Apache HTTP Server mod_cluster modules. When it receives such a message, it registers itself with your Apache HTTP Server load balancer. From that moment, your registered AS7 "worker" node keeps sending specialized HTTP messages (via TCP), informing the Apache HTTP Server about:
its name (jvmRoute or generated)
its current load
its deployments, i.e. application contexts
aliases etc.
When there are no worker nodes registered with your Apache HTTP Server balancer, there are no contexts, hence there is nowhere to forward your requests to.
According to the configuration you posted, you rely on UDP multicast messages being sent to/received from 224.0.1.105:23364.
OpenNebula, firewall and UDP multicast
It is possible that OpenNebula doesn't allow UDP multicast between hosts, or that your iptables are blocking it. Try this:
use curl on your worker host to access the balancer host -- exactly the VirtualHost where you have the directive EnableMCPMReceive defined.
if it doesn't work, you must fix iptables, selinux, httpd's allow/deny and such
if it works, it's a good sign that worker can talk to the balancer
go to your AS7 XML, the modcluster subsystem, and add an attribute to the config: <mod-cluster-config advertise-socket="modcluster" proxy-list="your-httpd-address:port"> -- the one you've just tried with curl
now it should work even without UDP multicast
if you would like to debug your UDP multicast settings in Open Nebula, give it a shot with Advertize.java
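The curl check from the first step can be as simple as this (the address and port are placeholders for the VirtualHost carrying the EnableMCPMReceive directive):

curl -v http://your-httpd-address:6666/

Any HTTP response at all proves TCP connectivity from the worker to the balancer; a timeout points at iptables or OpenNebula networking.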
1.2.0 is too old, do not use vulnerable code
Please, do not use mod_cluster 1.2.0 with your Apache HTTP Server. That version is completely obsolete and contains serious bugs, including a code-injection CVE and a severe performance issue. Download mod_cluster 1.3.1.Final for httpd 2.4.x, or build your own from the sources if you need httpd 2.2.x support. If you happen to need any help with that, ask.