Run awk in parallel - awk

I have the code below, which works successfully. It is used to parse and clean log files (very large in size) and split them into smaller output files. The output filename is the first 2 characters of each line; if either of those 2 characters is a special character, it is replaced with a '_' so that there is no illegal character in the filename.
This takes about 12-14 minutes to process 1 GB worth of logs (on my laptop). Can it be made faster?
Is it possible to run this in parallel? I am aware I could do }' "$FILE" &. However, I tested that and it does not help much. Is it possible to ask awk to output in parallel - what is the equivalent of print $0 >> Fpath & ?
Any help will be appreciated.
Sample log file
"email1#foo.com:datahere2
email2#foo.com:datahere2
email3#foo.com datahere2
email5#foo.com;dtat'ah'ere2
wrongemailfoo.com
nonascii#row.com;data.is.junk-Œœ
email3#foo.com:datahere2
Expected Output
# cat em
email1#foo.com:datahere2
email2#foo.com:datahere2
email3#foo.com:datahere2
email5#foo.com:dtat'ah'ere2
email3#foo.com:datahere2
# cat errorfile
wrongemailfoo.com
nonascii#row.com;data.is.junk-Œœ
Code:
#!/bin/sh
pushd "_test2" > /dev/null
for FILE in *
do
awk '
BEGIN {
FS=":"
}
{
gsub(/^[ \t"'\'']+|[ \t"'\'']+$/, "")
$0=gensub("[,|;: \t]+",":",1,$0)
if (NF>1 && $1 ~ /^[[:alnum:]_.+-]+#[[:alnum:]_.-]+\.[[:alnum:]]+$/ && $0 ~ /^[\x00-\x7F]*$/)
{
Fpath=tolower(substr($1,1,2))
Fpath=gensub("[^[:alnum:]]","_","g",Fpath)
print $0 >> Fpath
}
else
print $0 >> "errorfile"
}' "$FILE"
done
popd > /dev/null

Look up the man page for the GNU tool named parallel if you want to run things in parallel, but we can vastly improve the execution speed just by improving your script.
Your current script makes 2 mistakes that greatly impact efficiency:
Calling awk once per file instead of once for all files, and
Leaving all output files open while the script is running, so awk has to manage them.
You currently, essentially, do:
for file in *; do
awk '
{
Fpath = substr($1,1,2)
Fpath = gensub(/[^[:alnum:]]/,"_","g",Fpath)
print > Fpath
}
' "$file"
done
If you do this instead it'll run much faster:
sort * |
awk '
{ curr = substr($0,1,2) }
curr != prev {
close(Fpath)
Fpath = gensub(/[^[:alnum:]]/,"_","g",curr)
prev = curr
}
{ print > Fpath }
'
Having said that, you're manipulating your input lines before figuring out the output file names so - this is untested but I THINK your whole script should look like this:
#!/usr/bin/env bash
pushd "_test2" > /dev/null
awk '
{
gsub(/^[ \t"'\'']+|[ \t"'\'']+$/, "")
sub(/[,|;: \t]+/, ":")
if (/^[[:alnum:]_.+-]+#[[:alnum:]_.-]+\.[[:alnum:]]+:[\x00-\x7F]+$/) {
print
}
else {
print > "errorfile"
}
}
' * |
sort -f -t':' -k1,1 |
awk '
{ curr = tolower(substr($0,1,2)) }
curr != prev {
close(Fpath)
Fpath = gensub(/[^[:alnum:]]/,"_","g",curr)
prev = curr
}
{ print > Fpath }
'
popd > /dev/null
Note the use of $0 instead of $1 in the scripts - that's another performance improvement, because awk only does field splitting (which of course takes time) if you reference specific fields in your script. Note also the tolower() and sort -f, which keep the original requirement that bucket names are lowercase while still keeping each bucket contiguous in the sorted stream.
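If you want to see that field-splitting cost for yourself, here is an illustrative comparison (bigfile stands in for one of your logs; in gawk at least, the first program never references a field so no splitting is done, while the second forces a split on every line):
time awk '{ n++ } END { print n }' bigfile
time awk '{ if ($1 != "") n++ } END { print n }' bigfile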

Assuming multiple cores are available, the simple way to run in parallel is to use xargs. Depending on your configuration, try 2, 3, 4, 5, ... until you find the optimal number. This assumes that there are multiple input files, and that there is NO single file that is much larger than all the other files.
Notice the added fflush() so that lines will not be split. This has some negative performance impact, but it is required, assuming you want the individual input files to get merged into a single set of output files. It is possible to work around this problem by splitting each file separately and then merging the combined files - a sketch of that follows the split.awk version below.
#! /bin/sh
pushd "_test2" > /dev/null
ls * | xargs --max-procs=4 -L1 awk '
BEGIN {
FS=":"
}
{
gsub(/^[ \t"'\'']+|[ \t"'\'']+$/, "")
$0=gensub("[,|;: \t]+",":",1,$0)
if (NF>1 && $1 ~ /^[[:alnum:]_.+-]+#[[:alnum:]_.-]+\.[[:alnum:]]+$/ && $0 ~ /^[\x00-\x7F]*$/)
{
Fpath=tolower(substr($1,1,2))
Fpath=gensub("[^[:alnum:]]","_","g",Fpath)
print $0 >> Fpath
fflush(Fpath)
}
else
{
print $0 >> "errorfile"
fflush("errorfile")
}
}'
popd > /dev/null
From a practical point of view you might want to put the awk program into a script file of its own, e.g., split.awk
#! /usr/bin/awk -f
BEGIN {
FS=":"
}
{
gsub(/^[ \t"'\'']+|[ \t"'\'']+$/, "")
$0=gensub("[,|;: \t]+",":",1,$0)
if (NF>1 && $1 ~ /^[[:alnum:]_.+-]+#[[:alnum:]_.-]+\.[[:alnum:]]+$/ && $0 ~ /^[\x00-\x7F]*$/)
{
Fpath=tolower(substr($1,1,2))
Fpath=gensub("[^[:alnum:]]","_","g",Fpath)
print $0 >> Fpath
}
else
print $0 >> "errorfile"
}
And then the 'main' code will look like below, which is easier to manage.
ls * | xargs --max-procs=4 -L1 awk -f split.awk
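And here is a rough, untested sketch of the split-then-merge workaround mentioned above. It assumes _test2 initially contains only log files and that split.awk sits next to this driver script (one level above _test2); the job*/merged directory names are made up for illustration. Each job writes into its own directory, so no fflush() is needed, and the per-job buckets are concatenated at the end:
#! /bin/sh
pushd "_test2" > /dev/null
i=0
for FILE in *; do
i=$((i+1))
mkdir -p "job$i"
# each job gets a private directory, so jobs never share output files
( cd "job$i" && awk -f ../../split.awk "../$FILE" ) &
done
wait
# merge the per-job buckets into a single set of output files
mkdir -p merged
for f in job*/*; do
cat "$f" >> "merged/$(basename "$f")"
done
popd > /dev/null
For brevity this starts one background job per file; in practice you would throttle the number of jobs with xargs --max-procs as above.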


How can I use sed to generate an awk file?

How do I write sed commands to generate an awk file.
Here is my problem:
For example, I have a text file, A.txt which contains a word on each line.
app#
#ple
#ol#
The # indicates where the word is anchored: app# shows that the word starts with 'app', #ple shows that the word ends with 'ple', and #ol# shows that the word has 'ol' in the middle of the word.
I have to generate an awk file from sed commands which reads in another file, B.txt (which contains a word on each line), and increments the variables start, end and middle.
How do I write sed commands whereby, for each line in the text file A.txt, it will generate awk code, i.e.
{ if ($1 ~ /^app/) start++ }
For example, if I input the other file, B.txt with these words into the awk script,
application
people
bold
cold
The output would be; start = 1, end = 1, middle = 2.
I'd use ed over sed for this, actually.
A quick script that creates A.awk from A.txt and runs it on B.txt:
#!/bin/sh
ed -s A.txt <<'EOF'
1,$ s!^#\(.*\)#$!$0 ~ /.+\1.+/ { middle++ }!
1,$ s!^#\(.*\)!$0 ~ /\1$/ { end++ }!
1,$ s!^\(.*\)#!$0 ~ /^\1/ { start++ }!
0 a
#!/usr/bin/awk -f
BEGIN { start = end = middle = 0 }
.
$ a
END { printf "start = %d, end = %d, middle = %d\n", start, end, middle }
.
w A.awk
EOF
# awk -f A.awk B.txt would work too, but this demonstrates a self-contained awk script
chmod +x A.awk
./A.awk B.txt
Running it:
$ ./translate.sh
start = 1, end = 1, middle = 2
$ cat A.awk
#!/usr/bin/awk -f
BEGIN { start = end = middle = 0 }
$0 ~ /^app/ { start++ }
$0 ~ /ple$/ { end++ }
$0 ~ /.+ol.+/ { middle++ }
END { printf "start = %d, end = %d, middle = %d\n", start, end, middle }
Note: This assumes that the middle patterns shouldn't match at the start or end of a line.
But here's an attempt using sed to create A.awk, putting all the sed commands in a file, as trying to do this as a one-liner using -e and getting all the escaping right is not something I feel up to at the moment:
Contents of makeA.sed:
s!^#\(.*\)#$!$0 ~ /.+\1.+/ { middle++ }!
s!^#\(.*\)!$0 ~ /\1$/ { end++ }!
s!^\(.*\)#!$0 ~ /^\1/ { start++ }!
1 i\
#!/usr/bin/awk -f\
BEGIN { start = end = middle = 0 }
$ a\
END { printf "start = %d, end = %d, middle = %d\\n", start, end, middle }
Running it:
$ sed -f makeA.sed A.txt > A.awk
$ awk -f A.awk B.txt
start = 1, end = 1, middle = 2
Off the top of my head, and not tested:
s!^#\(.*\)#$!$1 ~ /\1/ { middle++; next }!
s!^#\(.*\)$!$1 ~ /\1$/ { end++; next }!
s!^\(.*\)#$!$1 ~ /^\1/ { start++; next }!
The construct \(.*\) matches any text and saves it in a back-reference, then \1 recalls the back-reference. The #...# middle case has to be tested first, and the patterns are anchored, so that a line like #ol# is not mistaken for a start or end pattern (and so that no substitution can match the output of an earlier one). The next in the generated awk rules prevents the later rules from matching a line once an earlier one already has.
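Applied to the sample A.txt, those substitutions should produce rules like the following (the BEGIN/END boilerplate from the answers above still needs to be wrapped around them to initialise and print the counters):
$1 ~ /^app/ { start++; next }
$1 ~ /ple$/ { end++; next }
$1 ~ /ol/ { middle++; next }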

How to merge lines using awk command so that there should be specific fields in a line

I want to merge some rows in a file so that each line contains exactly 22 fields separated by ~.
Input file looks like this.
200269~7414~0027001~VALTD~OM3500~963~~~~716~423~2523~Y~UN~~2423~223~~~~A~200423
2269~744~2701~VALD~3500~93~~~~76~423~223~Y~
UN~~243~223~~~~A~200123
209~7414~7001~VALD~OM30~963~~~
~76~23~2523~Y~UN~~223~223~~~~A~123
and So on
The first line looks fine. The 2nd and 3rd lines need to be merged so that the result is a line with 22 fields. The 4th and 5th lines should be merged, and so on.
Expected output:
200269~7414~0027001~VALTD~OM3500~963~~~~716~423~2523~Y~UN~~2423~223~~~~A~200423
2269~744~2701~VALD~3500~93~~~~76~423~223~Y~UN~~243~223~~~~A~200123
209~7414~7001~VALD~OM30~963~~~~76~23~2523~Y~UN~~223~223~~~~A~123
The file has 10 GB of data, but the code I wrote (using a while loop) takes too much time to execute. How can I solve this problem using awk/sed commands?
Code Used:
IFS=$'\n'
set -f
while read line
do
count_tild=`echo $line | grep -o '~' | wc -l`
if [ $count_tild == 21 ]
then
echo $line
else
checkLine
fi
done < file.txt
function checkLine
{
current_line=$line
read line1
next_line=$line1
new_line=`echo "$current_line$next_line"`
count_tild_mod=`echo $new_line | grep -o '~' | wc -l`
if [ $count_tild_mod == 21 ]
then
echo "$new_line"
else
line=$new_line
checkLine
fi
}
Using only the shell for this is slow, error-prone, and frustrating. Try Awk instead.
awk -F '~' 'NF==1 { next } # Hack; see below
NF<22 {
for(i=1; i<=NF; i++) f[++a]=$i }
a==22 {
for(i=1; i<=a; ++i) printf "%s%s", f[i], (i==22 ? "\n" : "~")
a=0 }
NF==22
END {
if(a) for(i=1; i<=a; i++) printf "%s%s", f[i], (i==a ? "\n" : "~") }' file.txt > file.new
This assumes that consecutive lines with too few fields will always add up to exactly 22 when you merge them. You might want to check this assumption (or perhaps accept this answer and ask a new question with more and better details). Or maybe just add something like
a>22 {
print FILENAME ":" FNR ": Too many fields " a >"/dev/stderr"
exit 1 }
The NF==1 block is a hack to bypass the weirdness of the "and So on" line in your sample, which contains no ~ at all.
Your attempt contained multiple errors and inefficiencies; for a start, try http://shellcheck.net/ to diagnose many of them.
$ cat tst.awk
BEGIN { FS="~" }
{
gsub(/[[:space:]]+/,"")
$0 = prev $0
if ( NF == 22 ) {
print
prev = ""
}
else {
prev = $0
}
}
$ awk -f tst.awk file
200269~7414~0027001~VALTD~OM3500~963~~~~716~423~2523~Y~UN~~2423~223~~~~A~200423
2269~744~2701~VALD~3500~93~~~~76~423~223~Y~UN~~243~223~~~~A~200123
209~7414~7001~VALD~OM30~963~~~~76~23~2523~Y~UN~~223~223~~~~A~123
The assumption above is that you never have more than 22 fields on 1 line nor do you exceed 22 in any concatenation of the contiguous lines that are each less than 22 fields, just like you show in your sample input.
You can try this awk
awk '
BEGIN {
FS=OFS="~"
}
{
while(NF<22) {
if(NF==0)
break
a=$0
if(getline<=0)
break
$0=a$0
}
if(NF!=0)
print
}
' infile
or this sed
sed -E '
:A
# a record with 21 "~" separators is complete: branch to B
s/((.*~){21})([^~]*)/\1\3/
tB
# otherwise append the next input line and test again
N
bA
:B
# join the collected pieces by deleting the embedded newlines
s/\n//g
' infile

Insert a line at the end of an ini section only if it doesn't exist

I have an smb.conf INI file which is overwritten whenever it is edited with a certain GUI tool, wiping out a custom setting. This means I need a cron job to ensure that one particular section in the file contains a certain option=value pair, and to insert it at the end of the section if it is missing.
Example
Ensure that hosts deny=192.168.23. exists within the [myshare] section:
[global]
printcap name = cups
winbind enum groups = yes
security = user
[myshare]
path=/mnt/myshare
browseable=yes
enable recycle bin=no
writeable=yes
hosts deny=192.168.23.
[Another Share]
invalid users=nobody,nobody
valid users=nobody,nobody
path=/mnt/share2
browseable=no
Long-winded solution using awk
After a long time struggling with sed, I concluded that it might not be the right tool for the job. So I moved over to awk and came up with this:
#!/bin/sh
file="smb.conf"
tmp="smb.conf.tmp"
section="myshare"
opt="hosts deny=192.168.23."
awk '
BEGIN {
this_section=0;
opt_found=0;
}
# Match the line where our section begins
/^[ \t]*\['"$section"'\][ \t]*$/ {
this_section=1;
print $0;
next;
}
# Match lines containing our option
this_section == 1 && /^[ \t]*'"$opt"'[ \t]*$/ {
opt_found=1;
}
# Match the following section heading
this_section == 1 && /^[ \t]*\[.*$/ {
this_section=0;
if (opt_found != 1) {
print "\t'"$opt"'";
}
}
# Print every line
{ print $0; }
END {
# In case our section is the very last in the file
if (this_section == 1 && opt_found != 1) {
print "\t'"$opt"'";
}
}
' $file > $tmp
# Overwrite $file only if $tmp is different
diff -q $file $tmp > /dev/null 2>&1
if [ $? -ne 0 ]; then
mv $tmp $file
# reload smb.conf here
else
rm $tmp
fi
I can't help feeling that this is a long script to achieve a simple task. Is there a more efficient/elegant way to insert a property in an ini file using basic shell tools like sed and awk?
Consider using Python 3's configparser:
#!/usr/bin/python3
import sys
from configparser import ConfigParser
cfg = ConfigParser()
cfg.read(sys.argv[1])
cfg['myshare']['hosts deny'] = '192.168.23.'
with open(sys.argv[1], 'w') as f:
    cfg.write(f)
To be called as ./filename.py smb.conf (i.e., the first parameter is the file to change).
Note that comments are not preserved by this. However, since a GUI overwrites the config and doesn't preserve custom options, I suspect that comments are already nuked and that this is not a worry in your case.
Untested, should work though
awk -v T="hosts deny=192.168.23." 'x&&$0~T{x=0}x&&/^ *\[[^]]+\]/{print "\t\t"T;x=0}
/^ *\[myshare\]/{x++}1' file
This solution is a bit awkward. It uses the INI section header as the record separator. This means that there is an empty record before the first header, so when we match the header we're interested in, we have to read the next record to handle that INI section. Also, there are some printf commands because the records still contain leading and trailing newlines.
awk -v RS='[[][^]]+[]]' -v str="hosts deny=192.168.23." '
{printf "%s", $0; printf "%s", RT}
RT == "[myshare]" {
getline
printf "%s", $0
if (index($0, str) == 0) print str
printf "%s", RT
}
' smb.conf
RS is the awk variable that contains the regex to split the text into records.
RT is the awk variable that contains the actual text of the current record separator.
With GNU awk for a couple of extensions:
$ cat tst.awk
index($0,str) { found = 1 }
match($0,/^\s*\[([^]]+).*/,a) {
if ( (name == tgt) && !found ) { print indent str }
name = a[1]
found = 0
}
{ print; indent=gensub(/\S.*/,"",1) }
END { if ( (name == tgt) && !found ) print indent str }
$ awk -v tgt="myshare" -v str="hosts deny=192.168.23." -f tst.awk file
[global]
printcap name = cups
winbind enum groups = yes
security = user
[myshare]
path=/mnt/myshare
browseable=yes
enable recycle bin=no
writeable=yes
hosts deny=192.168.23.
[Another Share]
invalid users=nobody,nobody
valid users=nobody,nobody
path=/mnt/share2
browseable=no
$ awk -v tgt="myshare" -v str="fluffy bunny" -f tst.awk file
[global]
printcap name = cups
winbind enum groups = yes
security = user
[myshare]
path=/mnt/myshare
browseable=yes
enable recycle bin=no
writeable=yes
hosts deny=192.168.23.
fluffy bunny
[Another Share]
invalid users=nobody,nobody
valid users=nobody,nobody
path=/mnt/share2
browseable=no

How to rewrite a Awk script to process several files instead of one

I am writing a report tool which processes the source files of some application and produces a report table with two columns: one containing the name of the file, and the other containing the word TODO if the file contains a call to some deprecated function deprecated_function, or DONE otherwise.
I used awk to prepare this report and my shell script looks like
report()
{
find . -type f -name '*.c' \
| xargs -n 1 awk -v deprecated="$1" '
BEGIN { status = "DONE" }
$0 ~ deprecated{ status = "TODO" }
END {
printf("%s|%s\n", FILENAME, status)
}'
}
report "deprecated_function"
The output of this script looks like
./plop-plop.c|DONE
./fizz-boum.c|TODO
This works well, but I would like to rewrite the awk script so that it supports several input files instead of just one - so that I can remove the -n 1 argument to xargs. The only solutions I could figure out involve a lot of bookkeeping, because we need to track the changes of FILENAME and use the END event to catch the end of the last file.
awk -v deprecated="$1" '
BEGIN { status = "DONE" }
oldfilename != FILENAME {
if (oldfilename) printf("%s|%s\n", oldfilename, status);
status = "DONE";
oldfilename = FILENAME;
}
$0 ~ deprecated{ status = "TODO" }
END {
printf("%s|%s\n", FILENAME, status)
}'
Maybe there is a cleaner and shorter way to handle this.
I am using FreeBSD's awk and am looking for solutions compatible with this tool.
This will work in any modern awk:
awk -v deprecated="$1" -v OFS='|' '
$0 ~ deprecated{ dep[FILENAME] }
END {
for (i=1;i<ARGC;i++)
print ARGV[i], (ARGV[i] in dep ? "TODO" : "DONE")
}
' file1 file2 ...
Any time you need to produce a report for all files and don't have GNU awk for ENDFILE, you MUST loop through ARGV[] in the END section (or loop through it in BEGIN and populate a different array for END section processing). Anything else will fail if you have empty files.
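For comparison, when GNU awk is available, its ENDFILE block removes that bookkeeping entirely. A minimal sketch (gawk 4.0+ only, so not an option for the FreeBSD awk this question asks about):
gawk -v deprecated="$1" '
BEGINFILE { status = "DONE" }
$0 ~ deprecated { status = "TODO" }
ENDFILE { printf "%s|%s\n", FILENAME, status }
' file1 file2 ...
BEGINFILE and ENDFILE fire once per input file, even for empty files.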
Your awk script could be something like this:
awk -v deprecated="$1" '
FNR==1 {if(file) print file "|" (f?"TODO":"DONE"); file=FILENAME; f=0}
$0 ~ deprecated {f=1}
END {print file "|" (f?"TODO":"DONE")}' file1.c file2.c # etc.
The logic is fairly similar to your program so hopefully it's all clear. FNR is the record number of the current file, which I'm using to detect the start of a new file. Admittedly there's some repetition in the END block but I don't think it's a big deal. You could always use a function if you wanted to.
Testing it out:
$ cat f1.c
int deprecated_function()
{
// some deprecated stuff
}
$ cat f2.c
int good_function()
{
// some good stuff
}
$ find -name "f?.c" -print0 | xargs -0 awk -v deprecated="deprecated" 'FNR==1 {if(file) print file "|" (f?"TODO":"DONE"); file=FILENAME; f=0} $0 ~ deprecated {f=1} END {print file "|" (f?"TODO":"DONE")}'
./f2.c|DONE
./f1.c|TODO
I have used -print0 and the -0 switch to xargs so that both programs with work file names separated by null bytes "\0" rather than spaces. This means that you won't run into problems with spaces in file names.

awk output format for average

I am computing the average of many values and printing it using awk with the following script.
for j in `ls *.txt`; do
for i in emptyloop dd cp sleep10 gpid forkbomb gzip bzip2; do
echo -n $j $i" "; cat $j | grep $i | awk '{ sum+=$2} END {print sum/NR}'
done;
echo ""
done
but the problem is that it prints the value as 1.2345e+05, which I do not want; I want it to print the values as plain rounded numbers, but I am unable to find where to pass the output format.
EDIT: using {print "average,%3d = ",sum/NR} in place of {print sum/NR} does not help, because it prints "average,%3d 1.2345e+05".
You need printf instead of simply print. Print is a much simpler routine than printf is.
for j in *.txt; do
for i in emptyloop dd cp sleep10 gpid forkbomb gzip bzip2; do
awk -v "i=$i" -v "j=$j" '$0 ~ i {sum += $2} END {printf j, i, "average %6d", sum/NR}' "$j"
done
echo
done
You don't need ls - a glob will do.
Useless use of cat.
Quote all variables when they are expanded.
It's not necessary to use echo - AWK can do the job.
It's not necessary to use grep - AWK can do the job.
If you're getting numbers like 1.2345e+05 then %6d might be a better format string than %3d. Use printf in order to use format strings - print doesn't support them.
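To see the difference concretely, here is a small illustrative example (the number is made up): print falls back to OFMT, "%.6g" by default, for non-integer values, which is where the scientific notation comes from, while printf uses whatever format you give it:
$ awk 'BEGIN { x = 1234500.5; print x; printf "%d\n", x }'
1.2345e+06
1234500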
The following all-AWK script might do what you're looking for and be quite a bit faster. Without seeing your input data I've made a few assumptions, primarily that the command name being matched is in column 1.
awk '
BEGIN {
cmdstring = "emptyloop dd cp sleep10 gpid forkbomb gzip bzip2";
n = split(cmdstring, cmdarray);
for (i = 1; i <= n; i++) {
cmds[cmdarray[i]]
}
}
$1 in cmds {
sums[$1, FILENAME] += $2;
counts[$1, FILENAME]++
files[FILENAME]
}
END {
for (file in files) {
for (cmd in cmds) {
if (counts[cmd, file])
printf "%s %s %6d\n", file, cmd, sums[cmd, file]/counts[cmd, file]
}
}
}' *.txt