I have a read-only remote filesystem that stores logs.
I use ssh -t to run grep queries on these logs. Sometimes the queries take too long and cause the ssh connection to time out.
Is there some way to stream the stdout back and keep ssh connection alive?
Example command:
ssh -t my-host.com "cd /path/to/my/folder ; find ./ -name '*' -print0 | xargs -0 -n1 -P8 zgrep -B 5 -H 'My search string'" > search_result.txt
Thanks
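One option, not from the original thread but a standard OpenSSH client feature, is to enable keepalives so the connection is not dropped as idle while the query runs. A sketch using the example command above (the interval and count values are arbitrary):
ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=10 -t my-host.com "cd /path/to/my/folder ; find ./ -name '*' -print0 | xargs -0 -n1 -P8 zgrep -B 5 -H 'My search string'" > search_result.txt
ServerAliveInterval sends a keepalive probe every 30 seconds and ServerAliveCountMax gives up after 10 unanswered probes; this only helps if the drop is caused by an idle timeout somewhere between client and server.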
Related
I have to collect measurement files from different servers, so I used scp command to retrieve them.
But in case the remote server hangs or doesn't respond, I need to close the connection and put a 0 in my measurement file.
Is there any option in the scp command that allows me to close the connection after 10 seconds, for example?
for serv in $SERV_LIST
do
    echo "--- Working on server: $serv ---"
    trc_file=`ssh user@$serv "$(typeset -f collectSTATS); collectSTATS $serv $DATE $LastRunTime"`
    scp user@$serv:/tmp/result_rechHM2_$serv.tmp /home/voms/HDB2/result_rechHM2_$serv.tmp > /dev/null 2>&1
    deleteFile=`ssh voms@$serv "rm /tmp/result_rechHM2_$serv.tmp 2> /dev/null"`
    if [ -f /home/voms/HDB2/result_rechHM2_* ]
    then
        cat /home/voms/HDB2/result_rechHM2_* >> /home/voms/HDB2/TraceRecharge.log
        rm -rf /home/voms/HDB2/result_rechHM2_*
    fi
done
When the ssh or scp command fails with no response, I only want to wait 10 seconds.
We just use ssh -o ConnectTimeout=5.
It resolved my problem.
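For reference, a sketch of how that option could be applied inside the loop above; note that ConnectTimeout only bounds the connection setup, so a server that accepts the connection and then hangs may still need something like the timeout(1) utility. Writing a 0 into TraceRecharge.log on failure is an assumption about what "put a 0 in my measurement file" means:
# abort the connection attempt after 10 seconds and record 0 on failure
if ! scp -o ConnectTimeout=10 user@$serv:/tmp/result_rechHM2_$serv.tmp /home/voms/HDB2/result_rechHM2_$serv.tmp > /dev/null 2>&1
then
    echo "0" >> /home/voms/HDB2/TraceRecharge.log
fi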
When using scp and entering the password interactively, the file copy progress is sent to the console, but there is no console output when using sshpass in a script to scp files.
$ sshpass -p [password] scp [file] root@[ip]:/[dir]
It seems sshpass is suppressing or hiding the console output of scp. Is there a way to enable the sshpass scp output to console?
After
sudo apt-get install expect
the file send-files.exp works as desired:
#!/usr/bin/expect -f
spawn scp -r $FILES $DEST
match_max 100000
expect "*?assword:*"
send -- "12345\r"
expect eof
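Note that $FILES and $DEST inside an expect script are Tcl variables, so they have to be defined somewhere. A sketch of one way to do that, taking them from the command line (the argument handling is an assumption, not part of the original answer):
#!/usr/bin/expect -f
# usage sketch: ./send-files.exp /local/dir user@host:/remote/dir
set FILES [lindex $argv 0]
set DEST  [lindex $argv 1]
spawn scp -r $FILES $DEST
match_max 100000
expect "*?assword:*"
send -- "12345\r"
expect eof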
Not exactly what was desired, but better than silence:
SSHPASS="12345" sshpass -e scp -v -r $FILES $DEST 2>&1 | grep -v debug1
Note that -e is considered a bit safer than -p.
Output:
Executing: program /usr/bin/ssh host servername, user username, command scp -v -t /src/path/dst_file.txt
OpenSSH_6.6.1, OpenSSL 1.0.1i-fips 6 Aug 2014
Authenticated to servername ([10.11.12.13]:22).
Sending file modes: C0600 590493 src_file.txt
Sink: C0600 590493 src_file.txt
Transferred: sent 594696, received 2600 bytes, in 0.1 seconds
Bytes per second: sent 8920671.8, received 39001.0
In this way:
output=$(sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root 2>&1)
echo "Output = $output"
you redirect the console output into the variable output.
Or, if you only want to see the console output of the scp command, just add the -v flag to your sshpass command:
sshpass -p $PASSWD scp -v $filename root@192.168.8.1:/root
Setup:
Local *nix machine with a SQL script script.sql (Postgres).
Remote machine remote (Debian 7) with Postgres.
I can SSH in as some_user, who is a sudoer.
Anything with Postgres needs to be done as postgres user.
The server only listens on localhost:5432.
How do I execute script.sql on remote without copying it there first?
This works well:
ssh -t some_user@remote 'sudo -u postgres psql -c "COMMANDS FOO BAR"'
The -t flag means that sudo will ask for some_user's password correctly on the local terminal.
One thing remains, to be able to pipe script.sql to psql. This does not work:
ssh -t some_user@remote 'sudo -u postgres psql' < script.sql
It fails with the message:
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: no tty present and no askpass program specified
Edit: simplified example
Postgres and psql don't seem to figure much in the problem. The following code has the same issues:
ssh some_user@remote xargs sudo ls < input_file
The problem seems to be: we need to send two inputs to sudo, the password (which needs a tty) and the stdin to pass on to ls.
Edit: even simpler
ssh localhost xargs sudo ls < input_file
sudo: no tty present and no askpass program specified
Adding -t does not work:
$ ssh -t localhost xargs sudo ls < input_file
Pseudo-terminal will not be allocated because stdin is not a terminal.
sudo: no tty present and no askpass program specified
Adding another -t does not work either:
$ ssh -t -t localhost xargs sudo ls < input_file
<content of input_file>
<waiting on a prompt>
ssh -T some_user@remote "sudo -u postgres psql -f-" < script.sql
"-f-" will read the script from STDIN. Just redirect the file in there, and there you go.
Don't bother with the -t option to ssh; you don't need a full terminal for this.
ssh -T ${user}@${ip} sudo -u postgres DEBIAN_FRONTEND=noninteractive psql -f- < test.sql
Use DEBIAN_FRONTEND=noninteractive (or your distribution's equivalent) to resolve the "no tty present" error.
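The -T form assumes sudo can run without prompting on the remote side. If it still insists on a password, one workaround (an assumption here, not something stated in the answers above) is a sudoers rule that lets some_user run psql as postgres without a password, after which stdin is free to carry the script:
# on the remote host, added via visudo (sketch)
some_user ALL=(postgres) NOPASSWD: /usr/bin/psql
# then from the local machine; -n makes sudo fail instead of prompting
ssh -T some_user@remote 'sudo -n -u postgres psql -f-' < script.sql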
Instead of moving, I want to copy all my keys from one particular db to another.
Is this possible in Redis, and if yes, then how?
If you can't use MIGRATE COPY because of your Redis version (2.6), you might want to copy each key separately, which takes longer but doesn't require you to log in to the machines themselves, and allows you to move data from one database to another.
Here's how I copy all keys from one database to another (but without preserving TTLs):
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=0
target_host=localhost
target_port=6379
target_db=1
#copy all keys without preserving ttl!
redis-cli -h $source_host -p $source_port -n $source_db keys \* | while read key; do
echo "Copying $key"
redis-cli --raw -h $source_host -p $source_port -n $source_db DUMP "$key" \
| head -c -1 \
| redis-cli -x -h $target_host -p $target_port -n $target_db RESTORE "$key" 0
done
Keys are not going to be overwritten; if you want that, delete those keys before copying, or simply flush the whole target database before starting.
Copies all keys from database number 0 to database number 1 on localhost.
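If the TTLs matter, the same loop can be extended to look up each key's remaining TTL with PTTL and pass it to RESTORE instead of 0. This is a sketch building on the answer above, not part of it:
#copy all keys, preserving the remaining ttl
redis-cli -h $source_host -p $source_port -n $source_db keys \* | while read key; do
  ttl=$(redis-cli -h $source_host -p $source_port -n $source_db pttl "$key")
  # PTTL returns -1 for "no expiry" and -2 for a missing key; treat both as no expiry
  if [ "$ttl" -lt 0 ]; then ttl=0; fi
  echo "Copying $key (ttl ${ttl}ms)"
  redis-cli --raw -h $source_host -p $source_port -n $source_db DUMP "$key" \
    | head -c -1 \
    | redis-cli -x -h $target_host -p $target_port -n $target_db RESTORE "$key" "$ttl"
done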
redis-cli --scan | xargs redis-cli migrate localhost 6379 '' 1 0 copy keys
If you use the same server/port you will get a timeout error but the keys seem to copy successfully anyway. GitHub Redis issue #1903
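If some of the keys already exist in the target database, MIGRATE stops with a BUSYKEY error; adding the REPLACE option overwrites them instead. A sketch based on the one-liner above:
redis-cli --scan | xargs redis-cli migrate localhost 6379 '' 1 0 copy replace keys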
redis-cli -a $source_password -p $source_port -h $source_ip keys \* | while read key; do
    echo "Copying $key"
    redis-cli --raw -a $source_password -h $source_ip -p $source_port -n $dbname DUMP "$key" | head -c -1 | redis-cli -x -a $destination_password -h $destination_IP -p $destination_port RESTORE "$key" 0
done
Latest solution:
Use the RIOT open-source command line tool provided by Redislabs to copy the data.
Reference: https://developer.redis.com/riot/riot-redis/cookbook.html#_performing_migration
GitHub project link: https://github.com/redis-developer/riot
How to install: https://developer.redis.com/riot/riot-redis/
# Source Redis db
SH=test1-redis.com
SP=6379
# Target Redis db
TH=test1-redis.com
TP=6379
# Copy from db0 to db1 (standalone Redis db, Or cluster mode disabled)
#
riot-redis -h $SH -p $SP --db 0 replicate -h $TH -p $TP --db 1 --batch 10000 \
--scan-count 10000 \
--threads 4 \
--reader-threads 4 \
--reader-batch 500 \
--reader-queue 2000 \
--reader-pool 4
RIOT is quicker, supports multithreading, and works well for cross-environment Redis data copies (AWS ElastiCache, Redis OSS, and Redislabs).
Not directly. I would suggest using the always convenient redis-rdb-tools package (from Sripathi Krishnan) to extract the data from a normal rdb dump and reinject it into another instance.
See https://github.com/sripathikrishnan/redis-rdb-tools
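A sketch of what that can look like: the rdb tool from that package can emit the keys as Redis protocol, which redis-cli can replay into the target instance (the dump path, db number, and target host below are placeholders):
# convert db 0 of an rdb dump into RESP protocol and pipe it into the target
rdb --command protocol --db 0 /var/redis/dump.rdb | redis-cli -h target-host -p 6379 --pipe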
As far as I understand, you need to copy keys from a particular DB (e.g. 5) to another particular DB, say 10. If that is the case, you can use the redis database dumper (https://github.com/r043v/rdd). As per the documentation it has a switch (-d) to select a database for the operation, but it didn't work for me, so here is what I did:
1.) Edit the rdd.c file and look for the int main(int argc, char **argv) function
2.) Change the DB as per your requirement
3.) Compile the src with make
4.) Dump all keys using ./rdd -o "save.rdd"
5.) Edit the rdd.c file again and change the DB
6.) Make again
7.) Import by using ./rdd "save.rdd" -o insert -s "IP" -p "Port"
I know this is old, but for those of you coming here from Google:
I just published a command line interface utility to npm and github that allows you to copy keys that match a given pattern (even *) from one Redis database to another.
You can find the utility here:
https://www.npmjs.com/package/redis-utils-cli
Try using DUMP to first dump all the keys, and then RESTORE the same on the target.
If migrating keys inside of the same redis engine, then you might use internal command MOVE for that (pipelining for more speed):
#!/bin/bash
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=4
target_db=0
total=$(redis-cli -h $source_host -p $source_port -n $source_db keys \* | sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | wc -c)
#copy all keys without preserving ttl!
time redis-cli -h $source_host -p $source_port -n $source_db keys \* | \
sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | \
pv -s $total | \
redis-cli -h $source_host -p $source_port -n $source_db >/dev/null
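For a single key, the command the sed pipeline is generating is just MOVE, which can also be run by hand (the key name is a placeholder):
# moves "mykey" from db 4 to db 0 on the same instance; the key is not copied,
# and the move fails if the key already exists in the target db
redis-cli -n 4 MOVE mykey 0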
I'm having a problem with xargs and Wget when run as shell scripts in an AppleScript app. I want Wget to run 4 parallel processes in the background. The problem: basically, when I try to run the process in the background with
cat urls.txt | xargs -P 4 -n 1 /usr/local/bin/wget -q -E -b 1> NUL 2> NUL
a Wget process is apparently started for each URL passed in from the .txt file. This is too burdensome on the user's memory. When I run it in the foreground, however, with something like:
cat urls.txt | xargs -P 4 -n 1 /usr/local/bin/wget -q -E
I seem to get the four parallel Wget processes I need. Does anybody know how to get this script to run in the background with only 4 processes? I'm a bit of a novice, and I'm afraid I can't figure out why backgrounding the process causes this change.
You might run xargs in the background instead:
cat urls.txt | xargs -P4 -n1 wget -q &
Or if you want to return control to the AppleScript, disown the xargs process:
do shell script "cat urls.txt | xargs -P4 -n1 /usr/local/bin/wget -q & disown $!"
As far as I can tell, I have solved the problem with
cat urls.txt | (xargs -P4 -n1 wget -q -E >/dev/null 2>&1) &
There may well be a better solution, though...