Unix df -k output in CSV format - ssh

I'm trying to create a shell script that gathers server stats from 100 servers and loads the details into a table. I first create a parameter file listing all the servers, then connect to each server through ssh and run df -k. ssh keys are already set up.
The issue I'm facing is that I'm not able to associate a server name with each result; I want the server name added as a column to the df -k output.
Also, the output cannot be loaded into a table as-is, because there is no delimiter and the columns are not consistently separated by tabs or spaces. I have tried sed and various other options, but no luck.
#!/bin/ksh
PARMFILE=/opt/sdw/scripts/db_scripts/server_stats.txt
value=$(<server_list1.txt)
echo "$value"
sourceservers=`grep = $PARMFILE | cut -d= -f2`
#Input array passed as parameter file to the script
set -A array_value $value
vLen=${#array_value[@]}
echo $vLen
for(( j=0; j<$vLen; j++))
do
#echo "${array_value[$j]}"
#ssh -q "${array_value[$j]}"; df -k
ssh -q "${array_value[$j]}" 'df -h' >> df.out
ssh -q "${array_value[$j]}" df -h | column -t >> df1.out
ssh -q "${array_value[$j]}" df -k | tr -s " " | sed 's/ /, /g' | sed '1 s/, / /g' | column -t >> df3.out
rc=$?
[[ $rc -ne 0 ]] && echo "Failure, errno $rc, cannot connect to host ${array_value[$j]}" >> sshfailed.list
done
Output
Filesystem              Size  Used  Avail  Use%  Mounted on
/dev/mapper/vg00-lvol3  1.5G  434M  923M   32%   /
Desired output
Filesystem, Size, Used, Avail, Use%, Mounted on, Servername
/dev/mapper/vg00-lvol3, 1.5G, 434M, 923M, 32%, /, br724

put "${array_value[$j]}" in a variable like $server for better readability.
then in sed, do a substitution like sed "s/Mounted on/Mounted on $server/g"
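Putting that together, a minimal sketch (untested, reusing the array from the script above; df.csv is just a placeholder output file): print a fixed CSV header once, then let awk append the server name to every data row. df -P (POSIX format) keeps long device names from wrapping onto a second line, which would otherwise break the columns:
echo "Filesystem, Size, Used, Avail, Use%, Mounted on, Servername" > df.csv
for server in "${array_value[@]}"; do
    # NR > 1 skips each server's own df header line
    ssh -q "$server" df -kP | awk -v host="$server" '
        NR > 1 { print $1", "$2", "$3", "$4", "$5", "$6", "host }
    ' >> df.csv
done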

Related

GNU Parallel -q option causing BCP "unknown option" errors (different string quotes on local vs remote hosts)

Seeing very strange behavior when using GNU Parallel to distribute export jobs using bcp from mssql-tools. It appears that when using the -q option for parallel, strings are interpreted differently on the local host than on remote hosts.
Running the exports as a plain loop over the files on the local host, the bcp processes throw no errors.
However, when distributing the file exports with parallel, the bcp processes executing on the local host throw
/opt/mssql-tools/bin/bcp: unknown option
errors, while those executing on remote hosts (via a --sshloginfile param) finish successfully. The basic code being run looks like...
# setting some vars to pass
TO_SERVER_ODBCDSN="-D -S MyMSSQLServer"
TO_SERVER_IP="-S 172.18.54.22"
DB="$dest_db" #TODO: enforce being more careful with this value
TABLE="$tablename" # MUST exist beforehand, case matters
USER=$(tail -n+1 $source_home/mssql-creds.txt | head -1)
PASSWORD=$(tail -n+2 $source_home/mssql-creds.txt | head -1)
DATAFILES="/some/path/to/files/"
TARGET_GLOB="*.tsv"
RECOMMEDED_IMPORT_MODE='-c' # makes a HUGE difference, see https://stackoverflow.com/a/16310219/8236733
DELIMITER="\\\t" # (currently not used) DO NOT use format like "'\t'", nested quotes seem to cause hard-to-catch error, want "\t" literal
....
bcpexport() {
filename=$1
TO_SERVER_ODBCDSN=$2
DB=$3
TABLE=$4 # MUST exist beforehand, case matters
USER=$5
PASSWORD=$6
RECOMMEDED_IMPORT_MODE=$7 # makes a HUGE difference, see https://stackoverflow.com/a/16310219/8236733
DELIMITER=$8 # not currently used
WORKDIR=$9
LOGDIR=${10}
....
/opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" \
$TO_SERVER_ODBCDSN \
-U $USER -P $PASSWORD \
-d $DB \
$RECOMMEDED_IMPORT_MODE
-t "\t" \
-e ${localfile}.bcperror.log
}
export -f bcpexport
parallelization_pernode=5
parallel -q -j $parallelization_pernode \
--sshloginfile $source_home/parallel-nodes.txt \
--env bcpexport \
bcpexport {} "$TO_SERVER_ODBCDSN" $DB $TABLE $USER $PASSWORD $RECOMMEDED_IMPORT_MODE $DELIMITER $workingdir $logdir \
::: $DATAFILES/$TARGET_GLOB #from hdfs nfs gateway
Looking at the bash interpretation of the processes (by running ps -aux | grep bcp on the hosts that parallel is given in the --sshloginfile) for the remote hosts we see...
/bin/bash -c bcpexport() { ... /opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" $TO_SERVER_ODBCDSN -U $USER -P $PASSWORD -d $DB $RECOMMEDED_IMPORT_MODE; -t "\t" -e ${localfile}.bcperror.log; ...
for the local host, the bash interpretation is...
/bin/bash -c bcpexport() { ... /opt/mssql-tools/bin/bcp "$TABLE" in "$localfile" $TO_SERVER_ODBCDSN -U $USER -P $PASSWORD -d $DB $RECOMMEDED_IMPORT_MODE; -t "\t" -e ${localfile}.bcperror.log; ...
that is, they look the same.
My current thought is that the "\t" in the bcp command is being interpreted in a problematic way. Debugging parallel without vs with the -q option we see...
$ parallel -j 5 --sshloginfile ./parallel-nodes.txt echo "Number {}: Running on \`hostname\`: \t" ::: 1 2 3 4 5
Number 4: Running on HW04.ucera.local: t
Number 1: Running on HW04.ucera.local: t
Number 2: Running on HW03.ucera.local: t
Number 5: Running on HW03.ucera.local: t
Number 3: Running on HW02.ucera.local: t
$ parallel -q -j 5 --sshloginfile ./parallel-nodes.txt echo "Number {}: Running on \`hostname\`: \t" ::: 1 2 3 4 5
Number 1: Running on `hostname`:
Number 4: Running on `hostname`:
Number 3: Running on `hostname`: \t
Number 2: Running on `hostname`: \t
Number 5: Running on `hostname`: \t
The bcp command needs the literal "\t", not "t" (and I suspect several other strings are being corrupted similarly; also, I believe \t is the default for bcp anyway, but this is just an example and I want to keep \t for code clarity). I'm not sure how to get this for both local and remote nodes, or even why this behavior differs between remote and local.
Basically, I need the strings to be exactly the same for both local and remote hosts, even if the strings have spaces or escape characters in them (note: I think this used to not be the case; I have older scripts on other machines that don't have this problem).
Not sure if this counts more as a parallel problem or a bcp problem (currently thinking something is going wrong with the -q option in parallel, but not sure). Does anyone have any debugging suggestions or fixes? Ideas of what could be happening?
Firstly, the reason why hostname is not expanded is due to -q: it quotes the backtick so that it is not expanded.
Secondly, I think what you are seeing is the different behaviour of the built-in echo versus /bin/echo. The built-in echo depends on the shell. Here I compare echo \\\\t in different shells:
$ parallel --onall --tag -S sh#lo,bash#lo,csh#lo,tcsh#lo,ksh#lo,zsh#lo echo \\\\t ::: a
bash#lo \t a
tcsh#lo a
sh#lo a
ksh#lo \t a
zsh#lo a
csh#lo \t a
That does not, however, get you closer to a solution. If I were you, I would use env_parallel to copy the environment variables. And if the login shell on the remote systems is not the same as your shell, then set PARALLEL_SHELL to force using that shell.
So:
#!/bin/bash
env_parallel --session
# setting some vars to pass
TO_SERVER_ODBCDSN="-D -S MyMSSQLServer"
:
:
PARALLEL_SHELL=bash env_parallel -q -j $parallelization_pernode ...
(no need to use either --env or 'export -f' when using 'env_parallel --session')
# Cleanup (not needed if this is the last line in the script)
env_parallel --end-session
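As a quick sanity check that quoting is now consistent (reusing the node file from the question), something along these lines should print the literal \t for every job, local and remote alike, since bash's built-in echo does not interpret backslash escapes without -e:
PARALLEL_SHELL=bash parallel -q --sshloginfile ./parallel-nodes.txt \
    echo "Number {}: \t" ::: 1 2 3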

How to read lines from .txt file into this bash script?

I have this bash script which connects to a PostgreSQL db and performs a query. I would like to be able to read lines from a .txt file into the query as parameters. What is the best way to do that? Your assistance is greatly appreciated! My example code is below; however, it is not working.
#!/bin/sh
query="SELECT ci.NAME_VALUE NAME_VALUE FROM certificate_identity ci WHERE ci.NAME_TYPE = 'dNSName' AND reverse(lower(ci.NAME_VALUE)) LIKE reverse(lower('%.$1'));"
(echo $1; echo $query | \
psql -t -h crt.sh -p 5432 -U guest certwatch | \
sed -e 's:^ *::g' -e 's:^*\.::g' -e '/^$/d' | \
sed -e 's:*.::g';) | sort -u
Considering that the file has only one SQL query per line:
while read -r line; do echo "${line}" | "your code to run psql here"; done < file_with_query.sql
That means: read the content of file_with_query.sql line by line, and do something with each line.
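Applied to this script, the pattern could look like the following sketch (domains.txt is a hypothetical input file with one domain per line):
#!/bin/sh
# run the certwatch query once for each domain listed in domains.txt
while read -r domain; do
    query="SELECT ci.NAME_VALUE FROM certificate_identity ci WHERE ci.NAME_TYPE = 'dNSName' AND reverse(lower(ci.NAME_VALUE)) LIKE reverse(lower('%.$domain'));"
    echo "$query" | psql -t -h crt.sh -p 5432 -U guest certwatch
done < domains.txt | sort -u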

Extract unique IPs from live tcpdump capture

I am using the following command to output IPs from live tcpdump capture
sudo tcpdump -nn -q ip -l | awk '{print $3; fflush(stdout)}' >> ips.txt
I get the following output
192.168.0.100.50771
192.168.0.100.50770
192.168.0.100.50759
I need two things:
Extract only the IPs, not the ports.
Generate a file with unique IPs, no duplicates, and sorted if possible.
Thank you in advance
To extract unique IPs from tcpdump you can use:
awk '{ ip = gensub(/([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+).*/,"\\1","g",$3); if(!d[ip]) { print ip; d[ip]=1; fflush(stdout) } }' YOURFILE
So your command to see unique IPs live would be:
sudo tcpdump -nn -q ip -l | awk '{ ip = gensub(/([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)(.*)/,"\\1","g",$3); if(!d[ip]) { print ip; d[ip]=1; fflush(stdout) } }'
This prints each IP as soon as it appears, so the output cannot be sorted on the fly. If you want the IPs sorted, save the output to a file and then use the sort tool:
sudo tcpdump -nn -q ip -l | awk '{ ip = gensub(/([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)(.*)/,"\\1","g",$3); if(!d[ip]) { print ip; d[ip]=1; fflush(stdout) } }' > IPFILE
sort -n -t . -k 1,1 -k 2,2 -k 3,3 -k 4,4 IPFILE
Example output:
34.216.156.21
95.46.98.113
117.18.237.29
151.101.65.69
192.168.1.101
192.168.1.102
193.239.68.8
193.239.71.100
202.96.134.133
NOTE: make sure you are using gawk. It doesn't work with mawk.
While I'm a huge Awk fan, it's worthwhile having alternatives. Consider this example using cut:
tcpdump -n ip | cut -d ' ' -f 3 | cut -d '.' -f 1-4 | sort | uniq
This is a version using match (works on macOS):
sudo tcpdump -nn -q ip -l | \
awk '{match($3,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/); \
ip = substr($3,RSTART,RLENGTH); \
if (!seen[ip]++) print ip }'
In case you want to pre-filter the input, you could use something like:
sudo tcpdump -nn -q ip -l | \
awk '$3 !~ /^(192\.168\.|10\.|172\.1[6789]\.|172\.2[0-9]\.|172\.3[01]\.)/ \
{match($3,/[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+/); \
ip = substr($3,RSTART,RLENGTH); \
if (!seen[ip]++) print ip }'
sudo tcpdump -n ip | cut -d ' ' -f 3 | cut -d '.' -f 1-4 | awk '!x[$0]++'
is the command that did it for me. Simple and elegant: !x[$0]++ is the classic awk idiom that prints a line only the first time it is seen, so duplicates are dropped without sorting.

Slick SourceCodeGenerator From SQL File

Is there a way to use the Slick SourceCodeGenerator to generate source code from a file of SQL CREATE statements? I know there is a way to connect to a DB and read in the schema, but I want to cut out that step and just give it the file. Please advise.
Slick reads metadata via JDBC. If you find a JDBC driver that can do that from a SQL file, you may be in luck. Otherwise, why not use an H2 in-memory database? It has compatibility modes for various SQL dialects, although they are limited. Another option would be to use something like https://github.com/bgranvea/mysql2h2-converter first to produce an H2-compatible schema file.
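To illustrate the in-memory H2 route, here is a hedged sketch of pointing Slick's standalone code generator at an H2 database that loads the SQL file on connect via H2's INIT=RUNSCRIPT option (jar names, the output folder, and the package are placeholders; assumes Slick 3.x, whose generator takes profile, JDBC driver, URL, output folder, and package as arguments):
java -cp slick-codegen.jar:slick.jar:h2.jar:scala-library.jar \
    slick.codegen.SourceCodeGenerator \
    slick.jdbc.H2Profile \
    org.h2.Driver \
    "jdbc:h2:mem:schema;INIT=runscript from 'create.sql'" \
    ./generated-src \
    com.example.tables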
We used the following script to load a SQL schema from a MySQL database, convert it to an H2-compatible format, and then use it in-memory for tests. You should be able to adapt it.
#!/bin/sh
echo ''
export IP=192.168.1.123
export user=foobar
export password=secret
export database=foobar
ping -c 1 $IP &&\
echo "" &&\
echo "Server is reachable"
# dump mysql schema for debuggability (ignore in git)
# convert the mysql to h2db using the converter.
## disable foreign key checks at the beginning and enable them at the end, to prevent foreign key errors
echo "SET FOREIGN_KEY_CHECKS=0;" > foobar-mysql.sql
## Dump the Db structure and remove the auto_increment so as to set the id column back to 1
mysqldump --compact -u $user -h $IP -d $database -p$password\
|sed 's/CONSTRAINT `_*/CONSTRAINT `/g' \
|sed 's/KEY `_*/KEY `/g' \
|sed 's/ AUTO_INCREMENT=[0-9]*//' \
>> foobar-mysql.sql
echo "SET FOREIGN_KEY_CHECKS=1;" >> foobar-mysql.sql &&\
java -jar mysql2h2-converter.jar foobar-mysql.sql \
|perl -0777 -pe 's/([^`]),/\1,\n /g' \
|perl -0777 -pe 's/\)\);/)\n);/g' \
|perl -0777 -pe 's/(CREATE TABLE [^\(]*\()/\1\n /g' \
|sed 's/UNSIGNED/unsigned/g' \
|sed 's/float/real/' \
|sed "s/\(int([0-9]*).*\) DEFAULT '\(.*\)'/\1 DEFAULT \2/" \
|sed "s/tinyint(1)/boolean/" \
> foobar-h2.sql
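# print the file name if the converted file does not end with a trailing newline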
perl -ne 'print "$ARGV\n" if /.\z/' -- foobar-h2.sql

Copy all keys from one db to another in redis

Instead of moving them, I want to copy all my keys from one particular db to another.
Is this possible in Redis, and if yes, how?
If you can't use MIGRATE COPY because of your Redis version (2.6), you might want to copy each key separately, which takes longer but doesn't require you to log in to the machines themselves, and allows you to move data from one database to another.
Here's how I copy all keys from one database to another (but without preserving TTLs):
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=0
target_host=localhost
target_port=6379
target_db=1
#copy all keys without preserving ttl!
redis-cli -h $source_host -p $source_port -n $source_db keys \* | while read key; do
echo "Copying $key"
redis-cli --raw -h $source_host -p $source_port -n $source_db DUMP "$key" \
| head -c -1 \
| redis-cli -x -h $target_host -p $target_port -n $target_db RESTORE "$key" 0
done
Keys are not going to be overwritten, in order to do that, delete those keys before copying or simply flush the whole target database before starting.
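For example, to wipe the target database first (destructive, so double-check the database number; same connection variables as above):
redis-cli -h $target_host -p $target_port -n $target_db flushdb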
Copies all keys from database number 0 to database number 1 on localhost.
redis-cli --scan | xargs redis-cli migrate localhost 6379 '' 1 0 copy keys
If you use the same server/port you will get a timeout error but the keys seem to copy successfully anyway. GitHub Redis issue #1903
redis-cli -a $source_password -p $source_port -h $source_ip keys \* | while read key; do
    echo "Copying $key"
    redis-cli --raw -a $source_password -h $source_ip -p $source_port -n $dbname DUMP "$key" | head -c -1 | redis-cli -x -a $destination_password -h $destination_IP -p $destination_port RESTORE "$key" 0
done
Latest solution:
Use the RIOT open-source command line tool provided by Redislabs to copy the data.
Reference: https://developer.redis.com/riot/riot-redis/cookbook.html#_performing_migration
GitHub project link: https://github.com/redis-developer/riot
How to install: https://developer.redis.com/riot/riot-redis/
# Source Redis db
SH=test1-redis.com
SP=6379
# Target Redis db
TH=test1-redis.com
TP=6379
# Copy from db0 to db1 (standalone Redis db, Or cluster mode disabled)
#
riot-redis -h $SH -p $SP --db 0 replicate -h $TH -p $TP --db 1 --batch 10000 \
--scan-count 10000 \
--threads 4 \
--reader-threads 4 \
--reader-batch 500 \
--reader-queue 2000 \
--reader-pool 4
RIOT is quicker, supports multithreading, and works well for cross-environment Redis data copies (AWS ElastiCache, Redis OSS, and Redislabs).
Not directly. I would suggest using the always-convenient redis-rdb-tools package (from Sripathi Krishnan) to extract the data from a normal rdb dump and reinject it into another instance.
See https://github.com/sripathikrishnan/redis-rdb-tools
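For example, a sketch of that approach (paths and the target host are placeholders; rdb is the tool installed by the package, and redis-cli --pipe feeds it the generated protocol stream):
rdb --command protocol /var/redis/6379/dump.rdb | redis-cli -h target-host -p 6379 --pipe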
As far as I understand, you need to copy keys from one particular DB (e.g. 5) to another particular DB, say 10. If that is the case, you can use the redis database dumper (https://github.com/r043v/rdd). Although per the documentation it has a switch (-d) to select a database for the operation, it didn't work for me, so here is what I did:
1.) Edit the rdd.c file and look for the int main(int argc, char *argv[]) function
2.) Change the DB as per your requirement
3.) Compile the source with make
4.) Dump all keys using ./rdd -o "save.rdd"
5.) Edit the rdd.c file again and change the DB
6.) Run make again
7.) Import using ./rdd "save.rdd" -o insert -s "IP" -p "Port"
I know this is old, but for those of you coming here from Google:
I just published a command-line utility to npm and GitHub that allows you to copy keys that match a given pattern (even *) from one Redis database to another.
You can find the utility here:
https://www.npmjs.com/package/redis-utils-cli
Try using DUMP to first dump all the keys and then RESTORE the same.
If migrating keys inside the same Redis instance, you can use the internal MOVE command for that (with pipelining for more speed):
#!/bin/bash
#set connection data accordingly
source_host=localhost
source_port=6379
source_db=4
target_db=0
total=$(redis-cli -h $source_host -p $source_port -n $source_db keys \* | sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | wc -c)
#move all keys (note: MOVE removes each key from the source db)
time redis-cli -h $source_host -p $source_port -n $source_db keys \* | \
sed 's/^/MOVE /g' | sed 's/$/ '$target_db'/g' | \
pv -s $total | \
redis-cli -h $source_host -p $source_port -n $source_db >/dev/null