filtering redis monitor lpush - redis

I have a box that has a lot of redis stuff going on. I converted my code from writing to a channel to doing an lpush. To see my output I used redis-cli's monitor command. Is there a way to filter the monitor command to show only the lpush key I'm interested in? There's a lot going on on the server, so I can't catch my output.
For reference, I used to do redis-cli subscribe channelname, but this does not work for lpush.

I'd grep it:
$ redis-cli monitor | grep -i "lpush"
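If there's still too much noise, narrow it further to the key you're pushing to (myqueue here is a placeholder for your key name):
$ redis-cli monitor | grep -i "lpush" | grep "myqueue"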

How to automate processes done through screen /dev/ttyUSB0 using paramiko

I have some devices being managed via an RPi. I'm able to SSH to the RPi using PuTTY and send custom commands to those devices, but to do so I need to go through screen /dev/ttyUSB0 115200. I would like to automate this process with a Python script using paramiko.
So far I've succeeded in establishing a connection to the host in the usual way:
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, 22, username, pw)
The problem comes in with the commands that I would normally issue via screen. Trying to use ssh.exec_command('screen /dev/ttyUSB0 115200', get_pty=True) followed by something like ssh.exec_command('my command') doesn't work and returns a bash command not recognized error. I've since tried to use a channel, using the following format:
chan = ssh.invoke_shell()
chan.settimeout(3)
chan.send('screen /dev/ttyUSB0 115200\n')
time.sleep(1)
chan.send('my command')
time.sleep(1)
if chan.recv_ready():
    print(chan.recv(9999).decode('ascii'))
In PuTTY, sending the command would return a series of strings in the terminal. My goal is to capture those strings and print them through Python. When I use the channel approach I don't receive any notice of unrecognized commands; however, the received data is limited to my input commands (expected) and some ANSI escape codes. The data I'm looking for does not get returned.
I'm a complete noob when it comes to networking and Linux so I've been feeling my way through the darkness with this. Based on the data I am receiving from the channel, I suspect I'm not connecting to the CLI in the same way I do using a screen in PuTTY. How might I go about doing this or sidestepping the screen entirely?
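For completeness, here is the whole script I'm testing with, consolidated (a minimal sketch: host, credentials and 'my command' are placeholders, and I poll recv() in a loop in case a single recv_ready() check fires too early):
import socket
import time

import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect('192.168.1.10', 22, 'pi', 'password')  # placeholders

chan = ssh.invoke_shell()
chan.settimeout(3)

chan.send('screen /dev/ttyUSB0 115200\n')
time.sleep(1)
chan.send('my command\n')  # the serial device may expect \r\n line endings
time.sleep(1)

# Poll for output until nothing more arrives within the timeout window.
output = b''
while True:
    try:
        data = chan.recv(4096)
        if not data:
            break
        output += data
    except socket.timeout:
        break

print(output.decode('ascii', errors='replace'))
ssh.close()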

Permissions that need to be assigned for a RabbitMQ monitoring user

What permissions do I need to assign to a piece of software that will monitor my RabbitMQ server? The software agent should monitor most of the metrics explained and recommended in this document.
I think I have to create a user, e.g. monitoring, and then give this user access to all virtual hosts which contain resources that should be monitored.
I think when creating the user I have to assign it the tag monitoring, which is a predefined tag.
What I do not understand is what RegEx I need to assign to configure, write and read. The documentation contains a table with the permissions on resources.
I think a monitoring software should not be able to create or delete resources (configure permission), nor should it be able to ADD messages to a queue or READ and ACK messages from a queue. But, for example, it should be able to read the number of messages waiting in a queue, to alert if a queue has a growing number of messages which are not retrieved.
Could anybody explain, what permissions and settings are required for such a monitoring user?
Here is a quick guide, from beginning to end, for setting up RabbitMQ queue monitoring.
1) Create an account:
rabbitmqctl add_user monitoring password
2) Add monitoring tag (you can read more about RabbitMQ tags here https://www.rabbitmq.com/management.html)
rabbitmqctl set_user_tags monitoring monitoring
3) Now get the names of your virtual hosts:
rabbitmqctl list_vhosts
4) Add permissions for the monitoring user on a virtual host:
rabbitmqctl set_permissions -p Some_Virtual_Host monitoring "" "" ""
5) Check if access is granted successfully:
curl -s -u monitoring:password http://localhost:15672/api/queues | jq
Look at the "messages" parameter
Optional) You can publish a fake message from the command line:
rabbitmqadmin publish --vhost=Some_Virtual_Host exchange=some_exchange routing_key=outgoing_routing_key payload="hello world"
Look at the "messages" again!
Tip: make sure to enable rabbitmq_management plugin in your RabbitMQ build to be able to execute these queries.
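On a typical install that is just (assuming the standard rabbitmq-plugins tool is on your PATH):
rabbitmq-plugins enable rabbitmq_management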
Figured that out myself with some testing. If someone is interested:
Create an account with monitoring tag
Add that account to EVERY vhost that should be monitored and add empty strings ("") to configure, write and read permissions.
With a nice bash script you can then, for example, get the number of messages in every queue:
curl -u username:password \
--silent \
http://<ServerOrIP>:15672/api/queues/<vhostname> | jq '.[] | .name, .messages'
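Building on that, here is a minimal sketch of the alerting idea from the question (server, credentials, vhost and threshold are placeholders; %2F is the URL-encoded default vhost "/"):
#!/bin/bash
# Warn when any queue on the vhost holds more than THRESHOLD messages.
THRESHOLD=1000
curl -s -u monitoring:password "http://localhost:15672/api/queues/%2F" |
jq -r '.[] | "\(.name) \(.messages // 0)"' |
while read -r name messages; do
  if [ "$messages" -gt "$THRESHOLD" ]; then
    echo "WARNING: queue $name has $messages messages"
  fi
done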
What is jq? An explanation is missing from the answers provided above: jq is a lightweight command-line JSON processor, used here to pretty-print and filter the JSON returned by the management API.
On RHEL/CentOS, the jq command is in the EPEL repository.
https://www.cyberithub.com/how-to-install-jq-json-processor-on-rhel-centos-7-8/
# yum --enablerepo=epel install jq
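As a one-line illustration of what jq does (the sample JSON here is made up):
$ echo '[{"name":"q1","messages":3}]' | jq '.[0].messages'
3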

How does scp manage to handle Ctrl+C in sink mode

I'm curious about how scp handles a situation when a binary file contains escape sequences, in particular the Ctrl+C ("\x03") character, from the programmer's point of view.
I have already tried starting it in sink mode and sending it a "\0x03" character, but it clearly exited upon receiving it:
$ perl -e 'print "\x03"'|xsel
$ scp -t /tmp/somefile.txt
^C
$
However, transferring a binary file that contains the same character doesn't fail, though I believed that it should.
I have also tried to read the source code of the scp.c:source function to see if it attempts to perform any character escaping, but to my surprise it doesn't appear to.
The short answer is that the source scp instance communicates with the sink instance through a clean data pipe. Any byte values can be sent through the pipe, and no bytes receive any special treatment. This is the expected behavior for an ssh "shell" or "exec" channel with no PTY.
For the longer answer, I'm going to restrict this to OpenSSH scp running on unix systems. I assume that's the scenario that you're talking about.
The special behavior of keystrokes like Ctrl-C (interrupt the process) or Ctrl-D (end of file) is provided by the TTY interface. Programs like the ssh server use a unix feature called PTYs (pseudo-ttys) to provide a TTY interface for network clients. When you run a command like scp -t ... within an interactive session, you're running it in the context of a TTY (or PTY), and it's the TTY which would convert a typed Ctrl-C into an interrupt signal. Processes can disable this behavior, but the scp program doesn't do that, because it doesn't need to.
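You can see this mapping at the TTY level in any interactive shell (output trimmed; exact settings will vary by system):
$ stty -a | grep intr
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; ...
It's this intr character, not anything scp does, that turns a typed Ctrl-C into an interrupt signal.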
When you run scp in the normal way, like scp /some/local/file user@host:/some/remote/dir, the scp process that you launch runs a copy of ssh to make the connection to the remote system. The actual command that it runs will be something like this (simplified):
ssh user@host "scp -t /some/remote/dir"
In other words, it starts a copy of ssh which connects to the remote system and runs another copy of scp.
When ssh is run in this fashion, specifying a command to run on the remote system, it doesn't request a PTY for the channel by default. So the two scp instances will communicate with each other through a clean data pipe with no PTY involved.
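You can verify the no-PTY behavior yourself (user@host is a placeholder; the pts number will vary): with a command argument, ssh allocates no PTY unless you force one with -t:
$ ssh user@host 'tty'
not a tty
$ ssh -t user@host 'tty'
/dev/pts/3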

sftp fails with 'message too long' error

My Java program uses ssh/sftp for transferring files onto Linux machines (obviously...), and my library for doing so is JSch (though it's not to blame).
Now, some of these Linux machines have shell login startup scripts, which tragically cause the ssh/sftp connection to fail, with the following message:
Received message too long 1349281116
After briefly reading about it, it's clearly a known ssh design issue (not a bug - see here): whatever the login scripts print is read by the sftp client as protocol data, so the first bytes of that output get interpreted as an absurdly large packet length. And all suggested solutions are on the ssh-server side (i.e. disable scripts which output messages during shell login).
My question - is there on option to avoid this issue on client side?
Check your .bashrc and .bash_profile on the server, remove anything that can echo. For now, comment the lines out.
Try again. You should not be seeing this message again.
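Typical offenders are unconditional output lines like these (hypothetical examples):
# in the server's ~/.bashrc or ~/.bash_profile
echo "Welcome to $(hostname)!"   # breaks sftp/scp
fortune                          # breaks sftp/scp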
Put the following at the top of the file ~/.bashrc of the user you log in as on the remote machine:
# If not running interactively, don't do anything, just return early from .bashrc
[[ $- == *i* ]] || return
This will just exit early from the .bashrc instead of sourcing the entire file, which you do not want when performing an scp or sftp onto that target remote machine. Depending on the shell of that remote user, make this edit in ~/.bashrc, ~/.bash_profile, ~/.profile or similar.
I got that error too, during an sftp get call in a bash script.
And judging from the OP's error message, which was similar to mine, it looks like the -B option of the sftp command was set. Although a buffer size of 1349281116 bytes is "a bit" too high.
In my case I had also set the buffer size explicitly (with "good intentions"), which caused the same error message, followed by my set value.
Removing the forced value and letting sftp run with the default of 32K solved the problem for me.
-B buffer_size
Specify the size of the buffer that sftp uses when transferring
files. Larger buffers require fewer round trips at the cost of
higher memory consumption. The default is 32768 bytes.
In case it turns out to be the same issue, that would serve as a client-side solution.
NOTE: I had to fix .bashrc output on the remote hosts, not the host that's issuing the scp or sftp command.
Here's a quick-n'-dirty solution, but it seems to work, also on binary files. All credit goes to uvgroovy.
Given file 'some-file.txt', just do:
cat some-file.txt | ssh root@1.1.1.1 /bin/bash -c "cat > /root/some-new-file.txt"
Still, if anyone know a sftp/scp built-in way to do so on client side, it'll be great.

How do I reattach to a detached mosh session?

How do I reattach to a detached mosh session or otherwise get rid of
Mosh: You have a detached Mosh session on this server (mosh [XXXX]).
i.e. what's the mosh equivalent of
screen -D -R
or possibly
screen -wipe
Furthermore, where can this answer be found in documentation?
For security reasons, you cannot reattach; see https://github.com/keithw/mosh/issues/394
To kill the detached session, use the PID number displayed in that message (that's the 'XXXX' part). For example, if you see --
Mosh: You have a detached Mosh session on this server (mosh [12345]).
then you can run this command:
kill 12345
Also, to close all mosh connections you can:
kill `pidof mosh-server`
Note that if you are currently connected via mosh, this last command will also disconnect you.
To my amazement, I used CRIU (https://criu.org) to checkpoint and restart a mosh client and it worked.
Shocking.
Find your mosh-client's PID:
$ ps -ef | grep mosh
Then, install CRIU according to their instructions.
Then, checkpoint it like this:
$ mkdir checkpoint
$ sudo ./criu dump -D checkpoint -t PID --shell-job
Then, restore it:
$ sudo ./criu restore -D checkpoint --shell-job
And, there it is. Your mosh client is back.
One thing to note, however: this will NOT survive a reboot of your laptop (which is the whole point of what we're trying to protect against), because mosh uses a monotonic clock to track time on the client side, and that clock doesn't work across reboots. The restored binary will resume, but its sequence numbers will be out of sync with the version that was checkpointed, and communication will stop.
In order to fix this, you need to tell mosh to stop doing that. Download the mosh source code, then edit this file:
cd mosh
vim configure.ac
Then, search for GETTIME and comment out that line.
Then do:
autoreconf # or ./autogen.sh if you've just cloned it for the first time
./configure
make
make install
After that, your CRIU-checkpointed mosh client sessions will survive reboots.
(Obviously you'd need to write something to perform the checkpoints regularly enough to be useful. But, that's an exercise for the reader).
I realize this is an old post, but there is a very simple solution to this, as suggested by Keith Winstein, mosh author, here: https://github.com/mobile-shell/mosh/issues/394
"Well, first off, if you want the ability to attach to a session from multiple clients (or after the client dies), you should use screen or tmux. Mosh is a substitute (in some cases) for SSH, not for screen. Many Mosh users use it together with screen and like it that way."
Scenario: I'm logged into a remote server via mosh. I've then run screen and have a process running in the screen session, htop, for example. I lose connection (laptop battery dies, lose network connection, etc.). I connect again via mosh and get that message on the server,
Mosh: You have a detached Mosh session on this server (mosh [XXXX]).
All I have to do is kill the prior mosh session
kill XXXX
and reattach to the screen session, which still exists.
screen -r
Now, htop (or whatever process was running) is back just as it was, without interruption. This is especially useful for running upgrades or other processes that would leave the server in a messy, unknown state if suddenly interrupted. I assume you can do the same with tmux, though I have not tried it. I believe this is what Annihilannic and eskhool were suggesting.
As an addition to Varta's answer, I use the following command to close all mosh connections except the current one:
pgrep mosh-server | grep -v $(ps -o ppid --no-headers $$) | xargs kill
As @varta pointed out, the mosh owners are very against reattaching from different clients for security reasons. So if your client is gone (e.g. you restarted your laptop), your only option is to kill the sessions.
To kill only detached sessions you can use the following line (which I have as an alias in my .bashrc).
who | grep -v 'via mosh' | grep -oP '(?<=mosh \[)(\d+)(?=\])' | xargs kill
That command depends on the fact that who lists connected users including mosh sessions, only attached mosh sessions have "via mosh", and that mosh sessions have their pid in square brackets. So it finds the pids for just the detached mosh sessions and passes them to kill using xargs.
Here is an example who result for reference:
$ who
theuser pts/32 2018-01-03 08:39 (17X.XX.248.9 via mosh [193891])
theuser pts/17 2018-01-03 08:31 (17X.XX.248.9 via mosh [187483])
theuser pts/21 2018-01-02 18:52 (mosh [205286])
theuser pts/44 2017-12-21 13:58 (:1001.0)
An alternative is to use the mosh-server environment variable MOSH_SERVER_SIGNAL_TMOUT. You can set it to something like 300 in your .bashrc on the server side. Then if you do a pkill -SIGUSR1 mosh-server it will only kill mosh-servers that have not been connected in the last 300 seconds (the others will ignore the SIGUSR1). More info in the mosh-server man page. I am using the command above because, once aliased, it seems simpler to me.
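For example (a sketch; 300 seconds is just the value suggested above):
# in ~/.bashrc on the server, so mosh-server inherits it at startup:
export MOSH_SERVER_SIGNAL_TMOUT=300
# later, to reap only sessions idle/detached for at least that long:
pkill -SIGUSR1 mosh-server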
Note, as mentioned by @Annihilannic, if you are using tmux/screen inside your mosh sessions then those tmux/screen sessions are still around after you kill the mosh sessions. So you can still attach to them (so you really don't lose much by killing the mosh sessions themselves).
The answers here claiming that killing mosh-server is the only option are largely obsolete, as we can use criu and reptyr to recover and reattach arbitrary processes.
Not to mention that nowadays we can pkill -USR1 mosh-server to kill only detached sessions in a clean and safe way, without resorting to parsing fragile who output or cumbersome commands to avoid killing our own session.
Next to the criu answer from Michael R. Hines, there is the slightly more "light-weight" reptyr, which can be used to reattach processes started by mosh-server (i.e. not mosh-server itself). I typically use
pstree -p <mosh-server PID>
to list the tree of processes under the detached mosh-server, and then
reptyr PID
to reattach the desired process to my current terminal. After repeating the procedure for all processes I care about, I
kill -USR1 <mosh-server PID>
taking care to kill only sessions I know are mine (shared system).
Use the ps command to get the list of running tasks, or use ps -ef | grep mosh.
Kill the mosh PID using this command:
kill <pid>
Also, to close all mosh connections you can run:
kill `pidof mosh-server`
Note that if you are currently connected via mosh, this will also disconnect you.