Can anyone please help me understand the script below, starting from the "typeset" part? What are the while and if statements in the script doing?
# Set effective dates for tasks
set -A EDATE `sqlplus -s / << ENDSQL
set pages 0 feed off
set timing off
alter session set nls_date_format='DD-MM-YYYY';
select sysdate + 42, sysdate + 51, sysdate + 50 from dual;
ENDSQL`
# Check effective dates set
# ${EDATE[0]} = SYSDATE + 42 for tasks NORMALISED
# ${EDATE[1]} = SYSDATE + 51 for tasks SUBTOTAL, SUBTOTAL_RAT
# ${EDATE[2]} = SYSDATE + 50 for tasks NORMALISED_EV,CHARGE
typeset -i C=0
while [[ $C -lt 3 ]] ; do
if [[ -z "${EDATE[C]}" ]] ; then
echo "FAILED TO SET ROTATE PARTITION TASKS EFFECTIVE DATE! PLEASE CHECK."
sms "${SV_SL}" "Failed to set Rotate Partition Tasks effective date. Please check."
exit -1
fi
let C+=1
done
In that line you've used the typeset builtin, which is used to set attributes on variables; typeset -i C=0 declares C as an integer and initialises it to 0. You can use 'declare' as well (in bash). Both are nearly equivalent.
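For example, a quick sketch of what the integer attribute does (declare -i behaves the same way in bash):
typeset -i n=0   # n may only hold integers
n+=1             # += on an integer variable is arithmetic addition: n is now 1
n="2+3"          # assignments are evaluated arithmetically: n is now 5
echo $n          # prints 5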
In your if statement line you've included a semicolon before then. It isn't wrong, but it's only needed because then sits on the same line; put then on its own line and the semicolon can be dropped. The usual if..else syntax in unix is given below.
The same goes for the while loop's do: move it to the next line and the semicolon disappears.
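Both forms are equivalent; the semicolon merely stands in for the newline:
if [[ $C -lt 3 ]] ; then echo "still counting" ; fi
if [[ $C -lt 3 ]]
then
echo "still counting"
fi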
If..else statement
x=5
y=20
if [ "$x" = "$y" ]
then
echo "x is equal to y"
else
echo "x is not equal to y"
fi
Above code's output: x is not equal to y
For the while loop
a=0
while [ $a -lt 5 ]
do
echo $a
a=`expr $a + 1`
done
Above code's output:
0
1
2
3
4
-lt 5 means "less than 5": the loop keeps running while the value is below 5.
The -z test used in the if condition checks whether a variable or string is null (empty).
i.e.
$ ip=""
$ [ -n "$ip" ] && echo "ip is not null"
$ [ -z "$ip" ] && echo "ip is null"
Above code's output: ip is null.
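Putting it all together for your script (the comments are my annotations, not part of the original):
typeset -i C=0                     # C is an integer counter
while [[ $C -lt 3 ]] ; do          # visit array slots 0, 1 and 2
if [[ -z "${EDATE[C]}" ]] ; then   # true if that slot came back empty from sqlplus
echo "FAILED TO SET ROTATE PARTITION TASKS EFFECTIVE DATE! PLEASE CHECK."
sms "${SV_SL}" "Failed to set Rotate Partition Tasks effective date. Please check."
exit 1                             # note: exit -1 is out of range; most shells report it as 255
fi
let C+=1                           # arithmetic increment, same as C=C+1
done
So the loop simply verifies that all three effective dates were returned by sqlplus, and aborts with an alert if any slot is empty.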
Related
I have a shell script that runs a count in sql, but when I run the script via crontab the query doesn't take enough time and the script finishes quickly with no results, while it should take about 25 min as it counts millions of rows. When I run it manually it gives the right results. What is the reason? This is a sample of the script:
#!/bin/bash
COUNT=`sqlplus -s username/pass@192.168.1.10:1521/oracl << EOF
SELECT count(*) AS COUNT from table1 where
trunc(generation_timestamp)=trunc(sysdate-1);
EXIT;
EOF`
if [[ $COUNT -ne 0 ]];
then sleep 0
else echo "No Data in table1"> /home/data/file.txt
fi
0 * * * * sh /home/date/count.sh
I have a script bash to add users from a .txt file.
It is really simple:
name firstname uid gid
space separated values
I want to check with awk if each row contains 4 fields. If yes I want to return 1, if not return 0.
file=/my/file.txt
awk=$(awk -F' ' '{res = (NF != 4) ? 0 : 1; print res}' $file)
echo $awk
Right now, awk returns 1 for each row, but I want it to return 1 or 0 at the end, not for each line in the file.
On UNIX you'll return 0 in case of success and !=0 in case of an error. For me it makes more sense to return 0 when all records have 4 fields and 1 when not all records have 4 fields.
To achieve that, use exit:
awk 'NF!=4{exit 1}' file
FYI: awk will exit with 0 by default.
If you want to use it in a shell conditional:
#!/bin/bash
if ! awk 'NF!=4{exit 1}' file ; then
echo "file is invalid"
fi
PS: -F' ' in your example is superfluous because ' ' is the default field delimiter.
You can use:
awk 'res = NF!=4{exit} END{exit !res}' file
This will exit with 1 if all rows have 4 columns; otherwise it will exit with 0.
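A quick check of both cases (my own test input, not from the question):
$ printf 'a b c d\n' | awk 'res = NF!=4{exit} END{exit !res}'; echo $?
1
$ printf 'a b c\n' | awk 'res = NF!=4{exit} END{exit !res}'; echo $?
0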
Subtle changes to your script would do
result=$(awk -F' ' 'BEGIN{flag=1}NF!=4{flag=0;exit}END{print flag}' "$file")
[ ${result:-0} -eq 0 ] && echo "Problematic entries found in file"
The approach
set the flag to 1 hoping that every record would contain 4 fields.
check if record actually contains 4 fields, if not set flag to zero and exit.
And exit would skip the rest of the input and go to the END rule.
print the flag and store it in result.
Check the result and proceed with the action course.
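For example, with a hypothetical users.txt whose last row has only 3 fields:
printf 'alice a 1001 100\nbob b 1002 100\ncarol c 1003\n' > users.txt
file=users.txt
result=$(awk -F' ' 'BEGIN{flag=1}NF!=4{flag=0;exit}END{print flag}' "$file")
[ ${result:-0} -eq 0 ] && echo "Problematic entries found in file"
This prints "Problematic entries found in file".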
Experts,
I have the following text in an xml file (there will be 20,000 rows in the file).
<record record_no = "1" error_code="101">&quot;21006041&quot;;&quot;28006041&quot;;&quot;34006211&quot;;&quot;43&quot;;&quot;101210-0001&quot;
Here is how I need the result for each row to be and append to new file.
"21006041";"28006041";"34006211";"43";"101210-0001";101
Here is what I need to do to get the above result.
I replaced &quot; with "
remove <record record_no = "1" error_code="
Get the text 101 (it can have any value in this position)
append to the last.
Here is what I have been trying.
BEGIN { FS=OFS=";" }
/<record/ {
gsub(/&quot;/,"\"")
gsub(/'/,"")
gsub(/.*="|">.*/,"",$1)
$(NF+1)=$1;
$1="";
print $0;
}
This should do the trick.
awk -F'">' -v OFS=';' '{gsub(/<record record_no = \"[0-9]+\" error_code="/,""); gsub(/&quot;/,"\""); print $2,$1}'
The strategy is to:
split the string at the closing characters of the xml element, ">
remove the first part of the xml element, including the attribute names, leaving only the error code.
replace all &quot; xml entities with ".
print the two FS sections in reverse order.
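For example, on the sample record from the question:
$ echo '<record record_no = "1" error_code="101">&quot;21006041&quot;;&quot;28006041&quot;;&quot;34006211&quot;;&quot;43&quot;;&quot;101210-0001&quot;' |
awk -F'">' -v OFS=';' '{gsub(/<record record_no = \"[0-9]+\" error_code="/,""); gsub(/&quot;/,"\""); print $2,$1}'
"21006041";"28006041";"34006211";"43";"101210-0001";101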
Test it out with the following data generation script. The script will generate 500 files of 20,000 lines each, with records of random length, some with dashes in the values.
#!/bin/bash
recCount=0
for h in {1..500};
do
for i in {1..20000};
do
((recCount++))
error=$(( RANDOM % 998 + 1 ))
record="<record record_no = "'"'"${recCount}"'"'" error_code="'"'"${error}"'"'">"
upperBound=$(( RANDOM % 4 + 5 ))
for (( k=0; k<${upperBound}; k++ ));
do
randomVal=$(( RANDOM % 99999999 + 1))
record+="&quot;${randomVal}"
if [[ $((RANDOM % 4)) == 0 ]];
then
randomVal=$(( RANDOM % 99999999 + 1))
record+="-${randomVal}"
fi
record+="&quot;"
if [[ $k != $(( ${upperBound} - 1 )) ]];
then
record+=";"
fi
done;
echo "${record}" >> "file-${h}.txt"
done;
done;
On my laptop I get the following performance.
$ time cat file-*.txt | awk -F'">' -v OFS=';' '{gsub(/<record record_no = \"[0-9]+\" error_code="/,""); gsub(/&quot;/,"\""); print $2,$1}' > result
real 0m18.985s
user 0m17.673s
sys 0m2.697s
As an added bonus, here is the "equivalent" command in sed:
sed -e 's|\(&quot;\)|"|g' -e 's|^.*error_code="\([^>]\+\)">\(.\+\).*$|\2;\1|g'
Much slower, although the strategy is the same. Two expressions are used. First replace all &quot; xml entities with ". Then capture the error code and group all the characters (.+) after ">. Display the remembered patterns in reverse order: \2;\1
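The same sample record through the sed version gives the same shape of result:
$ echo '<record record_no = "1" error_code="101">&quot;21006041&quot;;&quot;43&quot;' |
sed -e 's|\(&quot;\)|"|g' -e 's|^.*error_code="\([^>]\+\)">\(.\+\).*$|\2;\1|g'
"21006041";"43";101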
Timing statistics:
$ time cat file-* | sed -e 's|\(&quot;\)|"|g' -e 's|^.*error_code="\([^>]\+\)">\(.\+\).*$|\2;\1|g' > result.sed
real 5m59.576s
user 5m56.136s
sys 0m9.850s
Is this too thick:
$ awk -F"&quot;+" -v OFS='";"' -v dq='"' '{gsub(/^.*="|">$/,"",$1);print dq""$2,$4,$6,$8,$10dq";"$1}' test.in
"21006041";"28006041";"34006211";"43";"101210-0001";101
I have a log file (which is the output from running a python script).
The log file has the list of variables that I want to pass to a shell script. How do I accomplish this?
Example
Log file has the following content. It has the variables x, y, z
Contents of file example.log:
2016-06-07 15:28:12.874 INFO x = (10, 11, 12)
2016-06-07 15:28:12.874 INFO y = case when id =1 then gr8 else ok end
2016-06-07 15:28:12.874 INFO z = 2016-06-07
I want the shell script to read the variables and use them in the shell program.
Sample shell
shell.ksh
Assign variables
var1 = read_value_x from example.log
var2 = read_value_y from example.log
Is there a generic shell function that I can use to read the log and parse the variable values
Thanks
PMV
Here's how you can do it reasonably efficiently in ksh, for smallish files:
# Read into variables $var1, $var2, ...
n=0
while IFS='=' read -r unused value; do
typeset "var$((++n))=${value# }"
done < example.log
# Enumerate the variables created.
# Equivalent to: `echo "$var1"`, `echo "$var2"`, ...
for (( i = 1; i <= n; ++i)); do
eval echo \"\$'var'$i\"
done
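With the example.log above, this should print something like:
(10, 11, 12)
case when id =1 then gr8 else ok end
2016-06-07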
Read the log file and use a regex to find each variable's line, then assign it to a shell variable:
var1=$(awk -F " = " '$1 ~ /[x]$/' < file.log)
var2=$(awk -F " = " '$1 ~ /[y]$/' < file.log)
The awk command above uses the delimiter " = ", and the regex checks whether $1 ends with x or y; if it does, the matching line is assigned to the relevant variable.
If you only want the second part (the value after " = ") in the variable:
var1=$(awk -F " = " '$1 ~ /[x]$/{print $2}' < file.log)
var2=$(awk -F " = " '$1 ~ /[y]$/{print $2}' < file.log)
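Usage, assuming the log shown in the question is saved as file.log:
$ var1=$(awk -F " = " '$1 ~ /[x]$/{print $2}' < file.log)
$ echo "$var1"
(10, 11, 12)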
Hi, I need to parse a number from sql output:
COUNT(*)
----------
924
140
173
583
940
77
6 rows selected.
If the first line is less than 10 I want to create an empty file.
The problem is I don't know how to parse it; the numbers keep changing (from 0 to ca. 10,000).
The question is very unclear so I'll make some assumptions. You get the output above from sql, either to a file or to stdout, and you would like to test whether the first line containing digits is less than 10. Correct?
This is one way to do it.
sed -n '3p' log | awk '{ print ($1 < 10) ? "true" : "false" }'
sed is used to print the 3rd line from your example;
this is then piped into awk, which makes the comparison.
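For example, feeding it the sample output from the question:
$ printf 'COUNT(*)\n----------\n924\n140\n' | sed -n '3p' | awk '{ print ($1 < 10) ? "true" : "false" }'
false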
...or putting it together in bash
#!/bin/bash
while read variable;
do
if [[ "$variable" =~ ^[0-9]+$ ]]
then
break
fi
done < input
if [ "$variable" -lt 10 ]
then
echo 'less than 10'
# add your code here, eg
# touch /path/to/file/to/be/created
fi
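For instance, if the sql output is saved in a file named input and the script above as check.sh (both names are just placeholders), a first count below 10 triggers the message:
$ printf 'COUNT(*)\n----------\n6\n140\n' > input
$ bash check.sh
less than 10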