Variable within a bash script - variables

As Debian no longer produces a boot log, I'm attempting to produce one from 'journalctl' (which, I admit, may or may not be the correct course!).
The following script (logdate.sh) works:
#!/bin/sh
DATE=$(zenity --entry \
--title="Produce Log For Chosen Date" \
--text="Enter date (Mmm nn, e.g. Jul 08)" \
--entry-text="")
journalctl > /home/john/log && grep -i "${DATE}" /home/john/log >> /home/john/Logs/log_"${DATE}".txt
rm /home/john/log
if zenity --question \
--text="Do you wish to view the log?"
then pluma /home/john/Logs/log_"${DATE}".txt
else zenity --info \
--text="Log not displayed"
fi
The variable DATE is passed from the first routine to the second without problem.
However, when modifying the code (using zenity's calendar) to ensure that users cannot enter an incorrect date format, it appears that the variable DATE is not found in the 2nd routine. Code:
#!/bin/sh
DATE="$(zenity --calendar --date-format="%b %d" \
--title="Select a Date" \
--text="Click on a date to select that date.")"
if [[ $DATE != "" ]]
then journalctl > /home/john/log && grep -i "${DATE}" log >> /home/john/Logs/log_'${DATE}'.txt
rm /home/john/log
else zenity --info --text="No date selected"
fi
if zenity --question \
--text="Do you wish to view the log?"
then pluma /home/john/Logs/log_"${DATE}".txt
else zenity --info --text="Log not displayed"
fi
The error message received - 'bash: /home/john/Logs/log_${DATE}.txt: ambiguous redirect' - suggests that DATE is not recognised/picked up by the second routine.
I've tried putting the 2nd & 3rd routines into another script, called from the first script, with the variable DATE being 'exported' from the 1st, but the result is exactly the same.

OK, it's SOLVED!
Amended
'#!/bin/sh'
to:
'#!/bin/bash'
The first script is obviously not bothered by the omission of 'ba'!
Sorry to have wasted anyone's time. It's my age.
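For the record, the likely mechanism: on Debian, /bin/sh is dash, which does not support the bash-only [[ ... ]] test. A minimal sketch of a POSIX-compatible check that works under either shebang (the DATE value here is a stand-in for the zenity output):

```shell
#!/bin/sh
# POSIX-compatible emptiness test: [ ] instead of the bash-only [[ ]]
DATE="Jul 08"   # stand-in for the zenity --calendar output
if [ -n "$DATE" ]; then
    echo "date selected: $DATE"
else
    echo "No date selected"
fi
```

With [ -n "$DATE" ] in place of [[ $DATE != "" ]], the script no longer cares which shell interprets it.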


How to get the last X minutes of log messages

Here is my log format:
127.0.0.1 user-identifier test [23/Jan/2018:16:45:22 -0700] [WARN ] message
127.0.0.1 user-identifier test [23/Jan/2018:16:55:23 -0700] [WARN ] message
127.0.0.1 user-identifier test [23/Jan/2018:17:00:24 -0700] [WARN ] message
And I use this to get the last x min of log:
awk -v d1="$(date --date="-60 min" "+[%d/%m/%Y:%H:%M:%S")" -v d2="$(date "+[%d/%m/%Y:%H:%M:%S")" '$0 > d1 && $0 < d2' log.log
But it doesn't seem to work, because my log lines don't start with the date. How should it be written for my log format? Thanks
If you're running the script in the same TZ as whatever's producing the logs, then all you need is:
$ cat tst.awk
BEGIN { FS="[[ /:]+" }
{
mthNr = (index("JanFebMarAprMayJunJulAugSepOctNovDec",$5)+2)/3
time = sprintf("%04d%02d%02d%02d%02d%02d", $6, mthNr, $4, $7, $8, $9)
}
time > tgt
which will work with any awk and you'd execute as:
awk -v tgt="$(date --date='-60 min' +'%Y%m%d%H%M%S')" -f tst.awk
using the version of date you're already using that supports those arguments.
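To sanity-check the month-lookup arithmetic, the script can be exercised with one sample line from the question and a target timestamp chosen to be in the past, so the line should be printed:

```shell
# Sample line from the question, with tgt well before the log timestamp.
cat > tst.awk <<'EOF'
BEGIN { FS="[[ /:]+" }
{
  mthNr = (index("JanFebMarAprMayJunJulAugSepOctNovDec",$5)+2)/3
  time = sprintf("%04d%02d%02d%02d%02d%02d", $6, mthNr, $4, $7, $8, $9)
}
time > tgt
EOF
echo '127.0.0.1 user-identifier test [23/Jan/2018:16:45:22 -0700] [WARN ] message' |
  awk -v tgt="20180101000000" -f tst.awk
```

The FS of "[[ /:]+" makes $4..$9 the day, month name, year, hour, minute and second, and the index() trick maps "Jan" to 1, "Feb" to 2, and so on.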
Easiest solution I can see to this would be to parse your log, live, adding a new date field which is more compatible with simple arithmetic comparisons. For example, leave the following running all the time:
tail -0F /path/to/logfile | while read line; do
[[ $line =~ ^([^[]+\[)([^]]+)(.*) ]]
printf '%s %s%s%s\n' \
$(date -j -f '%d/%b/%Y:%T %z' "${BASH_REMATCH[2]}" '+%s') \
"${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}" "${BASH_REMATCH[3]}"
done >> /path/to/epochlogfile
Note that I'm using BSD date, so I've got control over my input date format using -f. You appear to be using GNU coreutils' date command, so you'll need to figure out how to adapt the options to suit. Perhaps something like:
tail -0F /path/to/logfile | while read line; do
[[ $line =~ ^([^[]+\[)([^]]+)(.*) ]]
printf '%s %s%s%s\n' \
$(d="${BASH_REMATCH[2]}"; d="${d/:/ }"; d="${d//\// }"; date -d "$d" '+%s') \
"${BASH_REMATCH[1]}" "${BASH_REMATCH[2]}" "${BASH_REMATCH[3]}"
done >> /path/to/epochlog
If your awk is GNU awk, you might opt to assume that tail -F will always output a log entry at the same time as is referenced by the timestamp. In this case, you don't need to parse the date, and you could simplify this with something like:
tail -0F /path/to/logfile | gawk '{print systime(),$0}' >> /path/to/epochlog
The systime() function is a gawk extension which returns the current epoch second. Just to reiterate, these times will reflect when the log entry reached the tail command rather than the time logged by your application.
Of course, even better than leaving it running would be to create your log using a searchable date in the first place. You haven't said what's creating this log, so I can't make any specific suggestions in that area.
Once you've got your replacement log file, you can search with something like this:
#!/usr/bin/env bash
case $(uname -s) in
Linux) date_opts=( --date="-60 min" ) ;;
*BSD|Darwin) date_opts=( -v-60M ) ;;
*) echo "No."; exit 1 ;;
esac
start=$(date "${date_opts[@]}" '+%s')
awk -v start="$start" '$1 > start' /path/to/epochlogfile
I skipped your d2 date condition because, well, it's now. And there's no reason this search script needs to be bash, it could be POSIX easily enough. I'm just lazy. By now you probably understand this enough to add it again if it's important to you.
Disclaimer: Untested. YMMV. May contain nuts.

Retrieving process id using sshcmd on unix

I want to retrieve the process id when my code successfully starts the job, but it's returning null.
I am starting the job using sshcmd, creating a log of the sshcmd output, and then trying to retrieve the process id into new_process_id using sshcmd. If I get new_process_id I will show it; otherwise I will show the output collected in the log file. But I am getting null in new_process_id.
remote_command="nohup J2EEServer/config/AMSS/scripts/${batch_job} & "
sshcmd -q -u ${login_user} -s ${QA_HOST} "$remote_command" > /tmp/nohup_${batch_job} 2>&1
remote_command=$(ps -ef | grep ${login_user} | grep $batch_job | grep -v grep | awk '{print $2}');
new_process_id=`sshcmd -q -u ${login_user} -s ${QA_HOST} "$remote_command"`
runstatus=`grep Synchronized. /tmp/nohup_${batch_job}`
if [[ $runstatus != "" ]]
then
new_process_id=`cat /tmp/nohup_${batch_job}`
fi
echo $new_process_id
The second assignment to remote_command doesn't store a command to run remotely - the $(...) substitution runs that ps pipeline immediately, on your local machine, and stores its output.
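The difference can be demonstrated locally; here sh -c stands in for the remote shell that sshcmd would invoke, and the variable names are illustrative:

```shell
# A plain string assignment keeps the command text for later execution;
# $(...) runs the command immediately, on the local machine.
cmd_string='echo ran-later'     # stays a string until something executes it
cmd_output=$(echo ran-now)      # runs right now, locally; stores the output
echo "$cmd_output"
sh -c "$cmd_string"             # stand-in for: sshcmd ... "$cmd_string"
```

To run the ps pipeline remotely, it has to be passed to sshcmd as a quoted string, not executed with $(...) beforehand.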
Some other hints: If you are making a second, unrelated variable, give it another name. It will avoid unnecessary confusion.
What you are attempting to do next with runstatus - overwriting a variable you've already set but never used - is totally unclear to me.

Bash while read : output issue

Updated:
Initial issue:
Having a while read loop printing every line that is read
Answer: Put a done <<< "$var"
Subsequent issue:
I may need some explanations about some shell code:
I have this:
temp_ip=$($mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
That gets results looking like this:
<ip1> <site1>
<ip2> <site2>
<ip3> <site3>
<ip4> <site4>
up to 5000 ip_address
I did a "while loop":
while [ `find $proc_dir -name snmpproc* | wc -l` -ge "$max_proc_snmpget" ];do
{
echo "sleeping, fping in progress";
sleep 1;
}
done
temp_ip=$($mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
while read ip codesite;do
{
sendSNMPGET $ip $snmp_community $code_site &
}
done<<<"$temp_ip"
And the sendSNMPGET function is :
sendSNMPGET() {
touch $procdir/snmpproc.$$
hostname=`snmpget -v1 -c $2 $1 sysName.0`
if [ "$hostname" != "" ]
then
echo "hi test"
fi
rm -f $procdir/snmpproc.$$
}
The $max_proc_snmpget is set to 30
At execution, the read is OK - no more printing on screen - but the child processes seem to become disoriented:
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
hi
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
./scan-snmp.sh: fork: Resource temporarily unavailable
Why can't it handle this?
If temp_ip contains the name of a file that you want to read, then use:
done<"$temp_ip"
In your case, it appears that temp_ip is not a file name but contains the actual data that you want. In that case, use:
done<<<"$temp_ip"
Take care that the variable is placed inside double-quotes. That protects the data against the shell's word splitting which would result in the replacement of new line characters with spaces.
More details
In bash, an expression like <"$temp_ip" is called a redirection. In this case it means that the while loop will get its standard input from the file named by $temp_ip.
The expression <<<"$temp_ip" is called a here string. In this case, it means that the while loop will get its standard input from the data in the variable $temp_ip.
More information on both redirection and here strings in man bash.
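A self-contained sketch of the quoted here-string feeding read line by line (the addresses are made up, standing in for the mysql output; this requires bash, matching the question):

```shell
# Made-up data standing in for the mysql output
temp_ip='10.0.0.1 siteA
10.0.0.2 siteB'
while read -r ip codesite; do
    echo "ip=$ip site=$codesite"
done <<< "$temp_ip"
```

Each line of the variable becomes one iteration, with the first word in ip and the rest in codesite.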
Or you can parse the output of your initial command directly:
$mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);" | \
while read ip codesite
do
...
done
If you want to improve the performance and run some of the 5,000 SNMPGETs in parallel, I would recommend using GNU Parallel like this:
$mysql --skip-column-names -h $db_address -u $db_user -p$db_passwd $db_name -e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);" | parallel -k -j 20 -N 2 sendSNMPGET {1} $snmp_community {2}
The -k will keep the parallel output in order. The -j 20 will run up to 20 SNMPGETs in parallel at a time. The -N 2 means take 2 parameters from the mysql output per job (i.e. ip and codesite). {1} and {2} are your ip and codesite parameters.
http://www.gnu.org/software/parallel/
I propose to not store the result value but use it directly:
while read ip code_site
do
sendSNMPGET "$ip" "$snmp_community" "$code_site" &
done < <(
"$mysql" --skip-column-names -h "$db_address" -u "$db_user" -p"$db_passwd" "$db_name" \
-e "select ip_routeur,code_site from $db_vtiger_table where $db_vtiger_table.ip_routeur NOT IN (select ip from $db_erreur_table);")
This way you start the mysql command in a subshell and use its output as input to the while loop (similar to piping which here also is an option).
But I see some problems with that code: if you really start each sendSNMPGET command in the background, you will very quickly put a massive load on your computer. For each line read, another background process is started. This can slow your machine down to the point where it is rendered useless.
I propose to not run more than 20 background processes at a time.
As you don't seem to have liked my answer with GNU Parallel, I'll show you a very simplistic way of doing it in parallel without needing to install that...
#!/bin/bash
MAX=8
j=0
while read ip code
do
(sleep 5; echo $ip $code) & # Replace this with your SNMPGET
((j++))
if [ $j -eq $MAX ]; then
echo -n Pausing with $MAX processes...
j=0
wait
fi
done < file
wait
This starts up to 8 processes (you can change it) and then waits for them to complete before starting another 8. You have already been shown how to feed your mysql stuff into the loop by other respondents in the second to last line of the script...
The key to this is the wait which will wait for all started processes to complete.
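The effect of wait can be seen in isolation (timing-based, so only approximate):

```shell
# Three sleeps run concurrently in the background; wait blocks until
# all of them have exited.
start=$(date +%s)
sleep 1 &
sleep 1 &
sleep 1 &
wait
end=$(date +%s)
echo "elapsed: $((end - start))s"   # about 1s, not 3s, since they ran in parallel
```

This is the same mechanism the batching script relies on: wait after each group of MAX jobs caps how many run at once.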

Grep query in C shell script not performing properly

When I run the grep command on the command prompt, the output is correct. However, when I run it as part of a script, I only get partial output. Does anyone know what is wrong with this programme?
#!/bin/csh
set res = `grep -E "OPEN *(OUTPUT|INPUT|I-O|EXTEND)" ~/work/lst/TXT12UPD.lst`
echo $res
Your wildcard is probably being processed by the shell rather than being passed intact to grep.
Try escaping the * with a \ (i.e. \*).
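The pattern itself is fine; under a POSIX shell, where the double quotes protect the * from globbing, it can be checked against invented sample data:

```shell
# Invented sample data; the quoted pattern reaches grep intact here.
printf 'OPEN OUTPUT rec-file\nCLOSE rec-file\nOPEN INPUT master\nMOVE A TO B\n' > sample.lst
grep -E "OPEN *(OUTPUT|INPUT|I-O|EXTEND)" sample.lst
```

Only the two OPEN lines match, since "OPEN *" means "OPEN" followed by zero or more spaces and then one of the alternatives.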

awk doesn't work in hadoop's mapper

This is my hadoop job:
hadoop streaming \
-D mapred.map.tasks=1\
-D mapred.reduce.tasks=1\
-mapper "awk '{if(\$0<3)print}'" \ # doesn't work
-reducer "cat" \
-input "/user/***/input/" \
-output "/user/***/out/"
this job always fails, with an error saying:
sh: -c: line 0: syntax error near unexpected token `('
sh: -c: line 0: `export TMPDIR='..../work/tmp'; /bin/awk { if ($0 < 3) print } '
But if I change the -mapper into this:
-mapper "awk '{print}'"
it works without any error. What's the problem with the if(..) ?
UPDATE:
Thanks @paxdiablo for your detailed answer.
What I really want to do is filter out data whose 1st column is greater than x, before piping the input data to my custom bin. So the -mapper actually looks like this:
-mapper "awk -v x=$x{if($0<x)print} | ./bin"
Is there any other way to achieve that?
The problem's not with the if per se, it's to do with the fact that the quotes have been stripped from your awk command.
You'll realise this when you look at the error output:
sh: -c: line 0: `export TMPDIR='..../work/tmp'; /bin/awk { if ($0 < 3) print } '
and when you try to execute that quote-stripped command directly:
pax> echo hello | awk {if($0<3)print}
bash: syntax error near unexpected token `('
pax> echo hello | awk {print}
hello
The reason the {print} one works is because it doesn't contain the shell-special ( character.
One thing you might want to try is to escape the special characters to ensure the shell doesn't try to interpret them:
{if\(\$0\<3\)print}
It may take some effort to get the correctly escaped string but you can look at the error output to see what is generated. I've had to escape the () since they're shell sub-shell creation commands, the $ to prevent variable expansion, and the < to prevent input redirection.
Also keep in mind that there may be other ways to filter depending on your needs, ways that can avoid shell-special characters. If you specify what your needs are, we can possibly help further.
For example, you could create a shell script (eg, pax.sh) to do the actual awk work for you:
#!/bin/bash
awk -v x="$1" '{ if ($1 < x) print }'
then use that shell script in the mapper without any special shell characters:
hadoop streaming \
-D mapred.map.tasks=1 -D mapred.reduce.tasks=1 \
-mapper "pax.sh 3" -reducer "cat" \
-input "/user/***/input/" -output "/user/***/out/"
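As a quick local check of the wrapper's filtering logic (no hadoop needed; the input numbers are invented):

```shell
# Local stand-in for the mapper: keep lines whose first field is
# numerically below the threshold passed as $1.
cat > pax.sh <<'EOF'
#!/bin/bash
awk -v x="$1" '{ if ($1 < x) print }'
EOF
chmod +x pax.sh
printf '1\n2\n3\n4\n' | ./pax.sh 3   # prints 1 and 2
```

Because the shell-special characters live inside the script file, the hadoop -mapper string stays plain and nothing needs escaping.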