Is there any way to Remove all Redis Client Connections? - redis

Is there any way to Remove all Redis Client Connections with one command?
I know that it's possible to kill a connection by IP:PORT:
CLIENT KILL addr:port
I also found that this has been possible since Redis 2.8.12, but I couldn't find anything about killing all connections at once.

CLIENT KILL can receive a TYPE argument that is one of three connection types: normal, slave and pubsub.
You can kill all open connections by sending the following three commands:
CLIENT KILL TYPE normal
CLIENT KILL TYPE slave
CLIENT KILL TYPE pubsub
Note that you can skip the latter two if you do not use them (slave and pubsub connections).
You can also add SKIPME no to make it a kamikaze killer that closes its own connection as well.
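For illustration, the filters can be combined on a single command line; a minimal sketch (with one-shot redis-cli each command runs on its own short-lived connection, so SKIPME no mostly matters when you issue these from a client connection you are keeping open):
# Kill all connections of each type, including the one issuing the command (SKIPME no).
redis-cli CLIENT KILL TYPE normal SKIPME no
redis-cli CLIENT KILL TYPE slave SKIPME no
redis-cli CLIENT KILL TYPE pubsub SKIPME no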

So SHUTDOWN is definitely the easiest way, especially in dev.
However, although Redis doesn't have a CLIENT KILL * variant, you can script it. AFAIR you could even do it in Lua, but I checked now and CLIENT LIST errors there, so I'm guessing that's changed. Still, it is fairly easy to do this with the CLI - this appears to do the trick:
redis-cli CLIENT LIST | cut -d ' ' -f 2 | cut -d = -f 2 | awk '{ print "CLIENT KILL " $0 }' | redis-cli
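If you prefer something more readable than the one-liner, the same idea works as a small loop; a sketch, assuming the default host/port and no authentication:
# Extract the addr= field from CLIENT LIST and issue one CLIENT KILL per address.
redis-cli CLIENT LIST \
  | awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^addr=/) { sub(/^addr=/, "", $i); print $i } }' \
  | while read -r addr; do
      redis-cli CLIENT KILL "$addr"
    done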

You can use the following command to check your connection numbers:
netstat -an | grep :6379 | grep ESTABLISHED | wc -l
Then try the Redis CLIENT KILL command to kill connections:
http://redis.io/commands/client-kill
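If you prefer to check from inside Redis instead of with netstat, INFO clients reports the same number:
# connected_clients is the number of client connections currently open
redis-cli INFO clients | grep connected_clients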

Related

redis-cli --pipe yields MOVED errors when bulk uploading to Elasticache with cluster-mode enabled

I am trying to use redis-cli --pipe to bulk upload some commands to my AWS Elasticache for redis cluster. The commands come from parsing a file via a custom awk command, which helps generate some HSET commands. The awk command is in a custom shell script. When my Elasticache for redis server had cluster-mode disabled, doing something like the following worked like a charm:
sh script_containing_awk.sh $FILE_TO_PARSE | redis-cli -h <Primary_endpoint> -p <port> --tls --cacert <path/to/cert> --pipe
Due to an internal project requirement, the Elasticache for Redis server has been re-created with cluster-mode enabled, and hence I am adding the -c flag to the above command to specify as such.
I see the following results when trying to work with my Elasticache for Redis server with cluster-mode enabled:
I can connect to the cluster via the configuration endpoint no problem!
Single command uploads work (e.g. redis-cli -h <config_endpoint> -p <port> -c --tls --cacert <path/to/certs> SET key value)
It would be extremely convenient to just pipe output from my script to the cli:
sh script_containing_awk.sh $FILE_TO_PARSE | redis-cli -h <config_endpoint> -p <port> -c --tls --cacert <path/to/cert> --pipe
but adding the --pipe flag results in "MOVED" errors.
I have tried modifying the script to include hash tags (e.g. HSET {user1}:hash field1 val1 field2 val2 ...) to force keys into the same cluster slot, but I still get the "MOVED" errors, and since I am attempting to bulk upload millions of keys I don't think they would all fit in the same slot anyway.
Does anyone have experience getting --pipe to work with cluster-mode enabled Redis/Elasticache?
Thanks!
I am sure you understand that the core difference between Cluster Mode Disabled and Cluster Mode Enabled is that the total set of key slots is split across shards.
Just to put it in context:
CMD - let's say we have a 4-node cluster with 1 primary and 3 replicas.
If we have 100 key slots, all 100 key slots are present on every node. Three of the nodes serve read-only commands and one node serves all commands.
CME - let's say we have 4 nodes split into 2 shards, with 1 primary and 1 replica each.
We can look at them as logical sub-clusters, i.e. they hold different sets of key slots, ideally a 50-50 split.
Now, the MOVED message is not necessarily an error.
When you connect to the configuration endpoint, by default you are connected to one of the primary nodes (chosen at random, at first).
When you send a command, the client sends it to that node, and the node decides whether it owns the hash slot needed to serve that command.
As explained here, if the node does not own the hash slot your client is looking for, it redirects you with a MOVED message.
So, I would assume MOVED messages are somewhat expected with CME clusters.
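As for getting --pipe to work: redis-cli --pipe simply streams the payload to the node it is connected to and does not follow MOVED redirections, so one workaround is to split the input per shard and pipe each piece to the primary that owns those slots. A rough sketch, not a drop-in solution - the awk filter, the shard_*.txt file names and the splitting itself (e.g. by hash tag) are assumptions:
# Discover each shard's primary from CLUSTER NODES, then pipe a pre-split file to it.
redis-cli -h <config_endpoint> -p <port> -c --tls --cacert <path/to/cert> CLUSTER NODES \
  | awk '$3 ~ /master/ { split($2, a, "@"); print a[1] }' \
  | while IFS=: read -r host hport; do
      # shard_${host}.txt is assumed to contain only keys whose slots live on this shard
      redis-cli -h "$host" -p "$hport" --tls --cacert <path/to/cert> --pipe < "shard_${host}.txt"
    done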

Autokill broken reverse ssh tunnels

I have one server which is behind a NAT and a firewall, and another server in a different location that is accessible via a domain. The server behind the NAT and firewall runs in a cloud environment and is designed to be disposable, i.e. if it breaks we can simply redeploy it with a single script - in this case OpenStack using a heat template. When that server fires up, it runs the following command to create a reverse SSH tunnel to the server outside the NAT and firewall, which allows us to connect via port 8080 on that server. The issue I am having is that if the SSH tunnel gets broken (the server goes down, maybe), the tunnel remains on the remote side, meaning that when we re-deploy the heat template to launch the server again it can no longer bind that port unless I first kill the stale ssh process on the server outside the NAT.
here is the command I am using currently to start the reverse tunnel:
sudo ssh -f -N -T -R 9090:localhost:80 user@example.com
I had a similar issue, and fixed it this way:
First, at the server, I created in the home directory a script called .kill_tunnel_ssh.sh with these contents:
# this finds the process listening on port 9090, extracts its PID and kills it
sudo netstat -ltpun | grep 9090 | grep 127 | awk -F ' ' '{print $7}' | awk -F '/' '{print $1}' | xargs kill -9
Then, at the client, I created a script called connect_ssh.sh with these contents:
# this opens an ssh connection, runs the script .kill_tunnel_ssh.sh and exits
ssh user@remote.com "./.kill_tunnel_ssh.sh"
# this opens an ssh connection establishing the reverse tunnel
ssh user@remote.com -R 9090:localhost:80
Now, I always use connect_ssh.sh to open the SSH connection, instead of using the ssh command directly.
It requires the user at the remote host to have sudo configured to run the netstat command without asking for a password.
Maybe (probably) there is a better way to accomplish it, but that is working for me.
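One commonly used alternative is to let the tunnel monitor and restart itself instead of killing the stale process by hand; a sketch using autossh plus ssh keep-alives (assumes autossh is installed on the client behind the NAT):
# -M 0 disables autossh's extra monitoring port and relies on ssh keep-alives instead.
# ExitOnForwardFailure makes ssh exit (so autossh restarts it) if the remote port is still taken.
autossh -M 0 -f -N -T \
  -o "ServerAliveInterval 30" \
  -o "ServerAliveCountMax 3" \
  -o "ExitOnForwardFailure yes" \
  -R 9090:localhost:80 user@example.com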

ssh v2 maximum compression (xzip/7zip)

I am on a slow dial-up connection, but I have root access to a fast server.
Currently I use ssh v2 to connect to the server with Compression enabled in ~/.ssh/config. However, this only uses gzip level 6 (as mentioned here: https://serverfault.com/questions/388658/ssh-compression/).
However, it is possible to use better algorithms (like xz with -9e or 7zip with -mx=9) using pipes, as mentioned here: https://serverfault.com/a/586739/506887. The example in that answer:
ssh ${SSH_USER}@${SSH_HOST} "
echo 'string to be compressed' | gzip -9
" | zcat | echo -
compresses a single string using gzip and pipes on the remote server.
1) I would like to do this (compress with xz) for all traffic. How can that be done?
2) To save data, when I run Firefox on my client, I use a SOCKS v5 proxy with ssh to take advantage of compression:
ssh -D 8123 -C -v -N root@myserver
and I point Firefox to socks://localhost:8123. Again, this uses gzip level 6. Can this example be similarly modified to use xz or 7zip?
I am aware that the bandwidth advantage of using xz vs gzip may not be significant for a single connection, but I am hoping the bandwidth savings will accumulate to a significant amount over a period of time.
Thanks
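For one-off file transfers (as opposed to the SOCKS tunnel, which is harder to wrap this way), the same pipe idea extends naturally to xz; a minimal sketch, assuming xz is installed on both ends (the file path is just a placeholder):
# Compress on the fast server with xz -9e, decompress locally.
ssh root@myserver "xz -9e -c /var/log/big.log" | xz -d > big.log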

IPython interface to distributed Dask workers over ssh yields "Connection refused"

Today, I thought I would attempt to get to know my workers better by spawning an IPython kernel on them. Doing so seemed easy enough using the handy
client.start_ipython_workers()
I was able to get the connection information, and then wrote a script to dump it to JSON. I then configured some port forwarding to connect to the worker, however the worker client does not seem to accept connections.
connect channel 2: open failed: connect failed: Connection refused
It is possible that I still have some configuration problems with ssh, however I have been successfully connecting to my Jupyter notebook kernel through a similar channel. Is there some reason why the worker would be blocking connections?
winfo = client.start_ipython_workers()
for worker in winfo.keys():
    winfo[worker]['key'] = winfo[worker]['key'].decode('utf8')
    with open(os.path.join('/home/centos/kernels/', 'kernel-' + winfo[worker].pop('ip') + '.json'), 'w+') as f:
        winfo[worker]['ip'] = '127.0.0.1'
        json.dump(winfo[worker], f, indent=2)
#!/bin/bash
for port in $(cat $2 | grep '_port' | grep -o '[0-9]\+'); do
    echo "establishing tunnel to $port"
    ssh $1 -f -N -L $port:127.0.0.1:$port
done

redis bulk import using --pipe

I'm trying to import one million lines of redis commands, using the --pipe feature.
redis_version:2.8.1
cat file.txt | redis-cli --pipe
This results in the following error:
Error reading from the server: Connection reset by peer
Does anyone know what I'm doing wrong?
file.txt contains, for example,
lpush name joe
lpush name bob
edit: I now see there's probably a special format(?) for using pipe mode - http://redis.io/topics/protocol
The first point is that the parameters have to be double-quoted. The documentation is somewhat misleading on this point.
So a working syntax is:
lpush "name" "joe"
lpush "name" "bob"
The second point is that each line has to end with \r\n, not just \n. To fix that, you just have to convert your file with the unix2dos command,
like: unix2dos file.txt
Then you can import your file using cat file.txt | src/redis-cli --pipe
This worked for me.
To use the pipe mode (a.k.a bulk loading, or mass insertion) you must indeed provide your commands directly in Redis protocol format.
The corresponding Redis protocol for LPUSH name joe is:
*3
$5
LPUSH
$4
name
$3
joe
Or as a quoted string: "*3\r\n$5\r\nLPUSH\r\n$4\r\nname\r\n$3\r\njoe\r\n".
This is what your input file must contain.
Redis documentation includes a Ruby sample to help you generate the protocol: see gen_redis_proto.
A Python sample is available e.g. in the redis-tools package.
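If you just want something quick on the command line, the same protocol can be generated from the shell. A minimal bash sketch - the gen_redis_proto name is borrowed from the docs, the implementation here is only an illustration and assumes arguments with no embedded newlines and ASCII content (lengths are counted in characters):
# Emit one command in RESP format: *<argc>, then $<len> and the value for each argument.
gen_redis_proto() {
  printf '*%d\r\n' "$#"
  local arg
  for arg in "$@"; do
    printf '$%d\r\n%s\r\n' "${#arg}" "$arg"
  done
}
# Usage: generate the protocol for the two commands from the question and pipe it in.
{ gen_redis_proto LPUSH name joe; gen_redis_proto LPUSH name bob; } | redis-cli --pipe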
There are existing tools that convert client commands directly to redis wire protocol messages. Example:
redis-mass my-client-script.txt | redis-cli --pipe
https://golanglibs.com/dig_in/redis-mass
https://github.com/almeida/redis-mass
There are two possible causes.
The first thing to check is whether you are exceeding the maxclients limit.
You can check it using the 'info clients' and 'config get maxclients' Redis commands.
On my desktop the result is below.
127.0.0.1:6379> info clients
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "2"
Then I tried to use the pipe command; below is the result.
[localhost redis-2.8.1]$ cat test.txt | ./src/redis-cli --pipe
All data transferred. Waiting for the last reply...
Error reading from the server: Connection reset by peer
If your result is the same, you have to change the maxclients setting in your redis.conf file.
The second thing to check is the ulimit setting.
Changing the ulimit requires root privileges; check the link below:
How do I change the number of open files limit in Linux?
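If maxclients is the problem, the fix is to raise it; a quick way to inspect the current situation (the new value itself goes in redis.conf, and the effective limit is also capped by the process's open-file ulimit):
# Check the configured client limit and how many clients are currently connected.
redis-cli CONFIG GET maxclients
redis-cli INFO clients | grep connected_clients
# Then set e.g. "maxclients 10000" in redis.conf and restart Redis.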
This error happens because the timeout set in Redis is the default, 0. You need to configure this timeout value via redis-cli using the commands below.
To connect to the Redis server:
redis-cli -h <host> -p <port> -a <password>
To view the configured timeout value:
the command line config get timeout shows which timeout value is configured on the Redis server.
To set a new value for the Redis timeout:
the command line config set timeout 120 sets the timeout to 2 minutes. Set the Redis timeout to be as long as your execution needs.
I hope this answer helps you. Cyu!!!
You can use the following command to import your file's data into Redis:
cat file.txt | xargs -L1 redis-cli