how to parse a number from sql output - sql

Hi, I need to parse a number from SQL output:
COUNT(*)
----------
924
140
173
583
940
77
6 rows selected.
If the first line is less than 10, I want to create an empty file.
The problem is I don't know how to parse it; the numbers keep changing (from 0 to roughly 10,000).

The question is very unclear, so I'll make some assumptions: you get the output above from SQL, either in a file or on stdout, and you would like to test whether the first line containing digits is less than 10. Correct?
This is one way to do it:
sed -n '3p' log | awk '{ print ($1 < 10) ? "true" : "false" }'
sed is used to print the 3rd line from your example;
this is then piped into awk, which makes the comparison.
...or, putting it together in bash:
#!/bin/bash
while read variable
do
    if [[ "$variable" =~ ^[0-9]+$ ]]
    then
        break
    fi
done < input

if [ "$variable" -lt 10 ]
then
    echo 'less than 10'
    # add your code here, e.g.
    # touch /path/to/file/to/be/created
fi
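For completeness, a minimal sketch of the same idea as a single awk call, which also creates the empty file the question asks for (it reuses the log file name and the touch path from the answer above):
awk '/^[0-9]+$/ { if ($1 < 10) system("touch /path/to/file/to/be/created"); exit }' log
It finds the first line consisting only of digits, creates the file if that number is below 10, and exits without reading the rest of the output.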

Related

How do I print every nth entry of the mth column, starting from a particular line of a file?

Consider the following data in a file file.txt:
$
$
$
FORCE 10 30 40
* 1 5 4
FORCE 11 20 22
* 2 3 0
FORCE 19 25 10
* 16 12 8
.
.
.
I want to print every 2nd element of the third column, starting from line 4, resulting in:
30
20
25
I have tried:
cat file.txt | sed 's/\|/ /' | awk 'NR%2==4 {print $3}'
However, this does not result in anything being printed, and no errors are generated either.
You might use awk, checking that the row number is > 3 and then checking for an even row number with NR%2==0.
Note that you don't have to use cat:
awk 'NR > 3 && NR%2==0 {
    print $3
}' file.txt
Output
30
20
25
Using sed
$ sed -En '4~2s/([^ \t]*[ \t]+){2}([^ \t]*).*/\2/p' input_file
30
20
25
I have tried:
cat file.txt | sed 's/\|/ /' | awk 'NR%2==4 {print $3}'
However, this does not result in anything being printed, and no errors are generated either.
You do not need cat when using GNU sed, as it can read the file on its own; in this case it would be sed 's/\|/ /' file.txt.
You should consider whether you need that part at all: your sample input does not contain a pipe character, so the substitution would do nothing. You might also drop it if the lines holding the values you want to print do not contain that character.
The output is empty because NR%2==4 never holds: the remainder of a division by x is always smaller than x (in the particular case of %2, only two values are possible, 0 and 1).
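A quick way to see this, using the sample file above:
$ awk '{ print NR, NR % 2 }' file.txt | head -4
1 1
2 0
3 1
4 0
NR%2 only ever yields 0 or 1, so a condition such as NR%2==4 can never match.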
This might work for you (GNU sed):
sed -nE '4~2s/^((\S+)\s*){3}.*/\2/p' file
Turn off implicit printing by setting the -n option, and reduce backslashes in regexps by turning on -E.
From the fourth line and then every second line thereafter, capture the third column and print it.
N.B. The \2 back reference holds the last value it captured, which in conjunction with the {3} means the third column.
Alternative:
sed -n '4,${s/^\(\(\S\+\)\s*\)\{3\}.*/\2/p;n}' file

What is the easiest way to obtain a value by looking for a specific header with awk?

I want to obtain the value that is under the header count (2500),
but the line number in the file can change.
I tried with this:
value=$(awk 'NR==6' $log_file)
echo $value
Also
value=$(awk 'NR==1' | awk '/^[0-9]+/ { print }' $log_file)
echo $value
log_file
PC220 - Production on Feb 13
Connected ...
CONNECT to DB
count
--------------------
2500
1 rows affected
Good bye!
User "BILL" disconnect successfully.
FIN
I want to get the value 2500 that is below the count header.
This awk should work for you:
awk '$0 == "count" {n = NR+2} NR == n {print; exit}' file
2500
Just save the line number when we see the count record, and print the value when we reach the 2nd record after it.
Shorter with sed:
$ value=$(sed -n '/^count$/{n;n;p;q}' file)
Find the header, next line, next line, print, quit.
You can find a line that is equal to count and then get the first field of the second line below it with:
awk '$0 == "count"{getline;getline;print $1;exit}' $log_file
See the demo:
#!/bin/bash
s='PC220 - Production on Feb 13
Connected ...
CONNECT to DB
count
--------------------
2500
1 rows affected
Good bye!
User "BILL" disconnect successfully.
FIN'
awk '$0 == "count"{getline;getline;print $1;exit}' <<< "$s"
# => 2500
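A variant that does not depend on the value sitting exactly two lines below the header, as a sketch assuming the value is the first purely numeric line after the count header:
awk 'f && /^[0-9]+$/ { print; exit } $0 == "count" { f = 1 }' "$log_file"
Here f is a flag set once the header line is seen; the first all-digit line after it is printed and awk exits.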

Find duplicates and give sum of values in column next to it (UNIX) (with solution -> need faster way)

I am writing a script for bioinformatics use. I have a file with 2 columns, in which column A holds a number and column B a specific string. I need a script that will search the file for the string in column B; if any duplicates are found, the numbers in column A should be added up, the duplicates removed, and only one line should remain, with column A holding the sum and column B the string.
I have written something that does exactly that, but because I am not really a programmer I am sure there is a much faster way. My files sometimes contain 500k lines and my code takes too long for such files. Please have a look at it and see what I could change to speed things up. Also, I can't use uniq because for that I'd also have to use sort, but the order of the lines has to stay the way it is!
13 ABCD
15 BGDA
12 ABCD
10 BGDA
10 KLMN
17 BGDA
should become
25 ABCD
42 BGDA
10 KLMN
This does it but for a file with 500k lines it takes too long:
for AASEQUENCE in file.txt;
do
    # see how many lines the file has and save that number in $LN
    LN="$(wc -l $AASEQUENCE | cut -d " " -f 1)"
    for ((i=1;i<=${LN};i++));
    do
        # Create a variable that will have just the string from column B
        # and save it in $STRING
        STRING="$(cut -f2 $AASEQUENCE | head -n $i | tail -n 1 | cut -f1)";
        # Create $UNIQ: a variable that will have number+string of that
        # line. This will be used in the ELSE-statement: IF there are no
        # duplicates of the string, it will just be added to the
        # output file without further processing.
        UNIQ="$(head -n $i $AASEQUENCE | tail -n 1)"
        for DUPLICATE in $AASEQUENCE;
        do
            # Create a variable that will hold the number of lines
            # of duplicates. IF it is 1, the IF-statement will jump to the ELSE
            # part as there are no duplicates.
            VAR="$(grep -w "${STRING}" $DUPLICATE | wc -l)"
            # Now add up all the numbers from column A that have $STRING in
            # column B.
            TOTALCOUNT="$(grep -w "${STRING}" $DUPLICATE | cut -f1 | awk '{SUM += $1} END {print SUM}')"
            # Create a file that the single line can be put into.
            touch MERGED_`basename $AASEQUENCE`
            # The IF-statement checks if the AA occurs more than once.
            # If it does, a second IF-statement checks if this AA-sequence has
            # already been added.
            # If it hasn't been added, it will be; if not, nothing happens.
            ALREADYMATCHED="$(grep -w "${STRING}" MERGED_`basename $AASEQUENCE` | wc -l)"
            if [[ "$VAR" > 1 ]]; then
                if [[ "$ALREADYMATCHED" != 0 ]]; then
                    paste <(echo "$TOTALCOUNT") <(echo "$STRING") --delimiters ' ' >> MERGED_`basename $AASEQUENCE`
                fi
            else
                echo $UNIQ >> MERGED_`basename $AASEQUENCE`
            fi
        done;
    done;
done;
P.S.: When I have fileA.txt fileB.txt ... and use file*, the loop still always stops after the first file. Any suggestions why?
Maybe a pure awk solution?
$ cat > in
13 ABCD
15 BGDA
12 ABCD
10 BGDA
10 KLMN
17 BGDA
$ awk '{dc[$2] += $1} END{for (seq in dc) {print dc[seq], seq}}' in
25 ABCD
42 BGDA
10 KLMN
$
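Note that for (seq in dc) does not guarantee any particular output order. Since the question asks to keep the original line order, here is a sketch of a variant that remembers the order in which each string first appears:
awk '!seen[$2]++ { order[++n] = $2 }
     { sum[$2] += $1 }
     END { for (i = 1; i <= n; i++) print sum[order[i]], order[i] }' in
For the sample above this prints 25 ABCD, 42 BGDA, 10 KLMN, in first-appearance order.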

Print every second consecutive field in two columns - awk

Assume the following file:
#zvview.exe
#begin Present/3
77191.0000 189.320100 0 0 3 0111110 16 1
-8.072430+6-8.072430+6 77190 0 1 37111110 16 2
37 2 111110 16 3
8.115068+6 0.000000+0 8.500000+6 6.390560-2 9.000000+6 6.803440-1111110 16 4
9.500000+6 1.685009+0 1.000000+7 2.582780+0 1.050000+7 3.260540+0111110 16 5
37 2 111110 16 18
What I would like to do is print, in two columns, the fields after line 6. This can be done using NR. The tricky part is the following: the fields should be laid out two per row, with every second field going into the second column, and an E should be added before the sign, so that the output file will look like this:
8.115068E+6 0.000000E+0
8.500000E+6 6.390560E-2
9.000000E+6 6.803440E-1
9.500000E+6 1.685009E+0
1.000000E+7 2.582780E+0
1.050000E+7 3.260540E+0
From the output file you can see that I want to keep only the first 10 characters of $6 (length($6)=10).
How is it possible to do this in awk?
You can do it all in awk, but it is perhaps easier with the Unix toolset:
$ sed -n '6,7p' file | cut -c2-66 | tr ' ' '\n' | pr -2ats' '
8.115068+6 0.000000+0
8.500000+6 6.390560-2
9.000000+6 6.803440-1
9.500000+6 1.685009+0
1.000000+7 2.582780+0
1.050000+7 3.260540+0
Here is an awk-only solution for comparison:
$ awk 'NR>=6 && NR<=7{$6=substr($6,1,10);
for(i=1;i<=6;i+=2) {f[++c]=$i;s[c]=$(i+1)}}
END{for(i=1;i<=c;i++) print f[i],s[i]}' file
8.115068+6 0.000000+0
8.500000+6 6.390560-2
9.000000+6 6.803440-1
9.500000+6 1.685009+0
1.000000+7 2.582780+0
1.050000+7 3.260540+0
Perhaps a shorter version:
$ awk 'NR>=6 && NR<=7{$6=substr($6,1,10);
for(i=1;i<=6;i+=2) print $i FS $(i+1)}' file
8.115068+6 0.000000+0
8.500000+6 6.390560-2
9.000000+6 6.803440-1
9.500000+6 1.685009+0
1.000000+7 2.582780+0
1.050000+7 3.260540+0
To convert the format to standard scientific notation, you can pipe the result to sed or embed something similar in the awk script (using gsub):
... | sed 's/[+-]/E&/g'
8.115068E+6 0.000000E+0
8.500000E+6 6.390560E-2
9.000000E+6 6.803440E-1
9.500000E+6 1.685009E+0
1.000000E+7 2.582780E+0
1.050000E+7 3.260540E+0
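The gsub embedding mentioned above could look roughly like this, as a sketch building on the shorter awk version (it assumes, as in lines 6 and 7 of the sample, that the selected fields have no leading minus sign, which would otherwise also get an E):
awk 'NR>=6 && NR<=7 { $6=substr($6,1,10)
     for (i=1; i<=6; i+=2) { pair = $i FS $(i+1); gsub(/[+-]/, "E&", pair); print pair } }' file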
With GNU awk for FIELDWIDTHS:
$ cat tst.awk
BEGIN { FIELDWIDTHS="9 2 9 2 9 2 9 2 9 2 9 2" }
NR>5 && NR<8 {
    for (i=1;i<NF;i+=4) {
        print $i "E" $(i+1), $(i+2) "E" $(i+3)
    }
}
$ awk -f tst.awk file
8.115068E+6 0.000000E+0
8.500000E+6 6.390560E-2
9.000000E+6 6.803440E-1
9.500000E+6 1.685009E+0
1.000000E+7 2.582780E+0
1.050000E+7 3.260540E+0
If you really want to get rid of the leading blanks, there are various ways to do it (the simplest being gsub(/ /,"",$<field number>) on the relevant fields), but I left them in because the above allows your output to line up properly if/when your numbers start with a -, as they do on line 4 of your sample input.
If you don't have GNU awk, get it as you're missing a LOT of extremely useful functionality.
I combined @karafka's answer with substr, and the following does the trick:
awk 'NR>=6 && NR<=7{$6=substr($6,1,10);for(i=1;i<=6;i+=2) print substr($i,1,8) "E" substr($i,9) FS substr($(i+1),1,8) "E" substr($(i+1),9)}' file
and the output is
8.115068E+6 0.000000E+0
8.500000E+6 6.390560E-2
9.000000E+6 6.803440E-1
9.500000E+6 1.685009E+0
1.000000E+7 2.582780E+0
1.050000E+7 3.260540E+0

awk associative array grows fast

I have a file that assigns numbers to md5sums as follows:
0 0000001732816557DE23435780915F75
1 00000035552C6F8B9E7D70F1E4E8D500
2 00000051D63FACEF571C09D98659DC55
3 0000006D7695939200D57D3FBC30D46C
4 0000006E501F5CBD4DB56CA48634A935
5 00000090B9750D99297911A0496B5134
6 000000B5AEA2C9EA7CC155F6EBCEF97F
7 00000100AD8A7F039E8F48425D9CB389
8 0000011ADE49679AEC057E07A53208C1
Another file contains three md5sums on each line, as follows:
00000035552C6F8B9E7D70F1E4E8D500 276EC96E149571F8A27F4417D7C6BC20 9CFEFED8FB9497BAA5CD519D7D2BB5D7
00000035552C6F8B9E7D70F1E4E8D500 44E48C092AADA3B171CE899FFC6943A8 1B757742E1BF2AA5DB6890E5E338F857
What I want to do is replace the first and third md5sums in the second file with the integers from the first file. Currently I am trying the following awk script:
awk '{OFS="\t"}FNR==NR{map[$2]=$1;next}
{print map[$1],$2,map[$3]}' mapping.txt relation.txt
The problem is that the script needs more than 16 GB of RAM even though the first file is only 5.7 GB on disk.
If you don't have enough memory to store the first file, then you need to write something like this to look up the 1st file for each value in the 2nd file:
awk 'BEGIN{OFS="\t"}
{
    val1 = val3 = ""
    while ( (getline line < "mapping.txt") > 0 ) {
        split(line,flds)
        if (flds[2] == $1) {
            val1 = flds[1]
        }
        if (flds[2] == $3) {
            val3 = flds[1]
        }
        if ( (val1 != "") && (val3 != "") ) {
            break
        }
    }
    close("mapping.txt")
    print val1,$2,val3
}' relation.txt
It will be slow. You could add a cache of N getline-d lines to speed it up if you like.
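A rough sketch of that caching idea, building on the structure of the answer above (the lookup helper and the MAX cache cap are assumptions, not part of the original answer):
awk '
BEGIN { OFS = "\t"; MAX = 100000 }            # cap on cached mappings (assumed value)
function lookup(md5,    line, flds) {
    if (md5 in cache)
        return cache[md5]
    while ( (getline line < "mapping.txt") > 0 ) {
        split(line, flds)
        if (!(flds[2] in cache) && cached < MAX) {   # remember mappings already scanned
            cache[flds[2]] = flds[1]
            cached++
        }
        if (flds[2] == md5) { close("mapping.txt"); return flds[1] }
    }
    close("mapping.txt")
    return ""
}
{ print lookup($1), $2, lookup($3) }
' relation.txt
Repeated md5sums then hit the in-memory cache instead of forcing another full scan of mapping.txt.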
This problem could be solved as follows (file1.txt is the file with the integers and md5sums, while file2.txt is the file with the three columns of md5sums):
#!/bin/sh
# First sort each of file 1 and the first and third columns of file 2 by MD5
awk '{ print $2 "\t" $1}' file1.txt | sort >file1_n.txt
# Before we sort the file 2 columns, we number the rows so we can put them
# back into the original order later
cut -f1 file2.txt | cat -n - | awk '{ print $2 "\t" $1}' | sort >file2_1n.txt
cut -f3 file2.txt | cat -n - | awk '{ print $2 "\t" $1}' | sort >file2_3n.txt
# Now do a join between them, extract the two columns we want, and put them back in order
join -t' ' file2_1n.txt file1_n.txt | awk '{ print $2 "\t" $3}' | sort -n | cut -f2 >file2_1.txt
join -t' ' file2_3n.txt file1_n.txt | awk '{ print $2 "\t" $3}' | sort -n | cut -f2 >file2_3.txt
cut -f2 file2.txt | paste file2_1.txt - file2_3.txt >file2_new1.txt
For a case where file1.txt and file2.txt are each 1 million lines long, this solution and Ed Morton's awk-only solution take about the same length of time on my system. My system would take a very long time to solve the problem for 140 million lines, regardless of the approach used but I ran a test case for files with 10 million lines.
I had assumed that a solution that relied on sort (which automatically uses temporary files when required) should be faster for large numbers of lines because it would be O(N log N) runtime, whereas a solution that re-reads the mapping file for each line of the input would be O(N^2) if the two files are of similar size.
Timing results
My assumption with respect to the performance relationship of the two candidate solutions turned out to be faulty for the test cases that I've tried. On my system, the sort-based solution and the awk-only solution took similar (within 30%) amounts of time to each other for each of 1 million and 10 million line input files, with the awk-only solution being faster in each case. I don't know if that relationship will hold true when the input file size goes up by another factor of more than 10, of course.
Strangely, the 10 million line problem took about 10 times as long to run with both solutions as the 1 million line problem, which puzzles me as I would have expected a non-linear relationship with file length for both solutions.
If the size of a file causes awk to run out of memory, then either use another tool, or another approach entirely.
The sed command might succeed with much less memory usage. The idea is to read the index file and create a sed script which performs the remapping, and then invoke sed on the generated sedscript.
The bash script below is an implementation of this idea. It includes some STDERR output to help track progress; I like to produce progress-tracking output for problems with large data sets or other kinds of time-consuming processing.
This script has been tested on a small set of data; it may work on your data. Please give it a try.
#!/bin/bash
# md5-indexes.txt
# 0 0000001732816557DE23435780915F75
# 1 00000035552C6F8B9E7D70F1E4E8D500
# 2 00000051D63FACEF571C09D98659DC55
# 3 0000006D7695939200D57D3FBC30D46C
# 4 0000006E501F5CBD4DB56CA48634A935
# 5 00000090B9750D99297911A0496B5134
# 6 000000B5AEA2C9EA7CC155F6EBCEF97F
# 7 00000100AD8A7F039E8F48425D9CB389
# 8 0000011ADE49679AEC057E07A53208C1
# md5-data.txt
# 00000035552C6F8B9E7D70F1E4E8D500 276EC96E149571F8A27F4417D7C6BC20 9CFEFED8FB9497BAA5CD519D7D2BB5D7
# 00000035552C6F8B9E7D70F1E4E8D500 44E48C092AADA3B171CE899FFC6943A8 1B757742E1BF2AA5DB6890E5E338F857
# Goal replace field 1 and field 3 with indexes to md5 checksums from md5-indexes
md5_indexes='md5-indexes.txt'
md5_data='md5-data.txt'
talk()  { echo 1>&2 "$*" ; }
talkf() { printf 1>&2 "$@" ; }
track() {
    local var="$1" interval="$2"
    local val
    eval "val=\$$var"
    if (( interval == 0 || val % interval == 0 )); then
        shift 2
        talkf "$@"
    fi
    eval "(( $var++ ))"    # increment the counter
}
# Build a sedscript to translate all occurrences of the 1st & 3rd MD5 sums into their
# corresponding indexes
talk "Building the sedscript from the md5 indexes.."
sedscript=/tmp/$$.sed
linenum=0
lines=`wc -l <$md5_indexes`
interval=$(( lines / 100 ))
while read index md5sum ; do
    track linenum $interval "..$linenum"
    echo "s/^[[:space:]]*[[:<:]]$md5sum[[:>:]]/$index/" >>$sedscript
    echo "s/[[:<:]]$md5sum[[:>:]]\$/$index/" >>$sedscript
done <$md5_indexes
talk ''
sedlength=`wc -l <$sedscript`
talkf "The sedscript is %d lines\n" $sedlength
cmd="sed -E -f $sedscript -i .bak $md5_data"
talk "Invoking: $cmd"
$cmd
changes=`diff -U 0 $md5_data.bak $md5_data | tail +3 | grep -c '^+'`
talkf "%d lines changed in $md5_data\n" $changes
exit
Here are the two files:
cat md5-indexes.txt
0 0000001732816557DE23435780915F75
1 00000035552C6F8B9E7D70F1E4E8D500
2 00000051D63FACEF571C09D98659DC55
3 0000006D7695939200D57D3FBC30D46C
4 0000006E501F5CBD4DB56CA48634A935
5 00000090B9750D99297911A0496B5134
6 000000B5AEA2C9EA7CC155F6EBCEF97F
7 00000100AD8A7F039E8F48425D9CB389
8 0000011ADE49679AEC057E07A53208C1
cat md5-data.txt
00000035552C6F8B9E7D70F1E4E8D500 276EC96E149571F8A27F4417D7C6BC20 9CFEFED8FB9497BAA5CD519D7D2BB5D7
00000035552C6F8B9E7D70F1E4E8D500 44E48C092AADA3B171CE899FFC6943A8 1B757742E1BF2AA5DB6890E5E338F857
Here is the sample run:
$ ./md5-reindex.sh
Building the sedscript from the md5 indexes..
..0..1..2..3..4..5..6..7..8
The sedscript is 18 lines
Invoking: sed -E -f /tmp/83800.sed -i .bak md5-data.txt
2 lines changed in md5-data.txt
Finally, the resulting file:
$ cat md5-data.txt
1 276EC96E149571F8A27F4417D7C6BC20 9CFEFED8FB9497BAA5CD519D7D2BB5D7
1 44E48C092AADA3B171CE899FFC6943A8 1B757742E1BF2AA5DB6890E5E338F857