Awk processes the files line by line. Assuming each line operation has no dependency on other lines, is there any way to make awk process multiple lines at a time in parallel?
Is there any other text processing tool which automatically exploits parallelism and processes the data quicker?
The only project that attempted to provide a parallel implementation of awk was parallel-awk, but it looks like that project is dead now.
Otherwise, one way to parallelize awk is to split your input into chunks and process them in parallel. However, splitting the input is itself single-threaded, which may defeat the performance goal; the main issue is that the standard split command cannot split at line boundaries without reading each and every line.
If you have GNU split available, or a version that supports the -n l/* option, here is one optimized way to process your file in parallel, assuming you have 8 vCPUs:
inputfile=input.txt
outputfile=output.txt
script=script.awk
count=8
split -n l/$count "$inputfile" /tmp/_pawk$$
for file in /tmp/_pawk$$*; do
  awk -f "$script" "$file" > "${file}.out" &
done
wait
cat /tmp/_pawk$$*.out > "$outputfile"
rm /tmp/_pawk$$*
You can use GNU Parallel for this purpose.
Suppose you are summing the numbers in a big file:
cat rands20M.txt | awk '{s+=$1} END {print s}'
With GNU Parallel you can split the work across multiple awk processes: --pipe chops stdin into blocks, each awk prints a partial sum, and a final awk adds the partial sums together:
cat rands20M.txt | parallel --pipe awk \'{s+=\$1} END {print s}\' | awk '{s+=$1} END {print s}'
I am writing an awk script whose output needs to be sorted.
I am able to collect the desired (unsorted) output in an awk array. I tried the following code to sort the array, and it works, but I don't know why, or whether it is the expected behavior.
Sample Input to the question:
Ram,London,200
Alex,London,500
David,Birmingham,300
Axel,Mumbai,150
John,Seoul,450
Jen,Tokyo,600
Sarah,Tokyo,630
The expected output should be:
Birmingham,300
London,700
Mumbai,150
Seoul,450
Tokyo,1230
The following script is required to show each city name along with the cumulative total of the integers in the third field.
BEGIN{
    FS = ","
    OFS = ","
}
{
    if($2 in arr){
        arr[$2]+=$3;
    }else{
        arr[$2]=$3;
    }
}
END{
    for(i in arr){
        print i,arr[i] | "sort"
    }
}
The following code is in question:
for(i in arr){
    print i,arr[i] | "sort"
}
The output of the print is piped to sort, which is an external shell command.
So, how does this output travel from awk to bash?
Is this the expected behavior or a mere side effect?
Is there a better awk way to do it? I have tried asort and asorti already, but they exist in gawk, not in plain awk.
PS: I am specifically trying to write a .awk file for the task, without using bash commands. Please suggest accordingly.
Addressing your specific questions in order:
So, how does this output travel from awk to bash?
A pipe to a spawned process.
Is this the expected behavior or a mere side effect?
Expected
Is there a better awk way to do it? I have tried asort and asorti already, but they exist in gawk, not in plain awk.
Yes, pipe the output of the whole awk command to sort.
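For example, assuming the END rule just prints normally instead of piping to sort, and the script is saved as script.awk (a sketch; the input file name is hypothetical):
awk -f script.awk file | sort -t, -k1,1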
PS: I am specifically trying to write a .awk file for the task, without using bash commands. Please suggest accordingly.
See https://web.archive.org/web/20150928141114/http://awk.info/?Sorting for the implementation of a few common sorting algorithms in awk. See also https://rosettacode.org/wiki/Category:Sorting_Algorithms.
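As an illustration of that approach, here is a minimal sketch in plain (POSIX) awk that replaces the END rule of your script with an insertion sort over the array indices; it is an untested sketch, not code from the linked pages:
END{
    n = 0
    for (city in arr)
        idx[++n] = city
    # simple insertion sort of the city names held in idx
    for (i = 2; i <= n; i++) {
        v = idx[i]
        for (j = i - 1; j >= 1 && idx[j] > v; j--)
            idx[j + 1] = idx[j]
        idx[j + 1] = v
    }
    for (i = 1; i <= n; i++)
        print idx[i], arr[idx[i]]
}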
With respect to the question in your comments:
Since a process is spawned to sort from within the loop in the END rule, I was confused about whether this calls sort on a single line, with the spawned process dying thereafter and a new sort process being spawned in the next iteration of the loop.
The spawned process won't die until your awk script terminates or you call close("sort").
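For illustration, a minimal sketch of closing that pipe explicitly once the loop is done (equivalent in effect to letting awk close it when the script exits):
END{
    for (i in arr)
        print i, arr[i] | "sort"
    close("sort")   # flush the pipe and wait for the single sort process to finish
}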
Could you please try changing your sort to sort -t',' -k1 in your code. Since your delimiter is a comma, you need to tell sort that your delimiter is different from the default, which is whitespace.
Also, you could remove the if/else block from your main block and use only arr[$2]+=$3. Keep the rest of the code as it is, apart from the sort change mentioned above.
I am on mobile so couldn't paste all the code, but the explanation should help you here.
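Putting those two suggestions together, the whole script would look something like this (a sketch based on the description above, not the original poster's code):
BEGIN { FS = OFS = "," }
{ arr[$2] += $3 }
END {
    for (i in arr)
        print i, arr[i] | "sort -t',' -k1"
}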
What I would suggest is piping the output of awk to sort rather than worrying about piping the output within the END rule. While GNU awk provides asorti() to sort the contents of an array, in this case, since it is just the output you want sorted, a single pipe to sort after your awk script completes is all you need, e.g.
$ awk -F, -v OFS=, '{a[$2]+=$3}END{for(i in a)print i, a[i]}' file | sort
Birmingham,300
London,700
Mumbai,150
Seoul,450
Tokyo,1230
And since it is a single pipe of the output, you incur no per-iteration overhead for the subshell required by the pipe.
If you want to avoid the pipe altogether and you have bash, you can simply use process substitution with redirection, e.g.
$ sort < <(awk -F, -v OFS=, '{a[$2]+=$3}END{for(i in a)print i, a[i]}' file)
(same result)
If you have GNU awk, then asorti() will sort a by index and you can place the sorted array in a new array b and then output the sorted results within the END rule, e.g.
$ awk -F, -v OFS=, '{a[$2]+=$3}END{asorti(a,b);for(i in b)print b[i], a[b[i]]}' file
Birmingham,300
London,700
Mumbai,150
Seoul,450
Tokyo,1230
Naively, I wanted to parse 50 files using awk, so I did the following:
zcat dir_with_50files/* > huge_file
cat huge_file | awk '{parsing}'
Of course, this was terrible because it would spend time creating a file, then consume a whole bunch of memory to pass along to awk.
Then a coworker showed me that I could do this.
zcat dir_with_50files/filename{0..50} | awk '{parsing}'
I was amazed that I would get the same results without the memory consumption.
ps aux also showed that the two commands ran in parallel. I was confused about what was happening and this SO answer partially answered my question.
https://stackoverflow.com/a/1072251/6719378
But if the pipe starts the second command once a certain amount of data has been buffered, why does my naive approach consume so much more memory than the second approach?
Is it because I am using cat on a single file compared to loading multiple files?
You can reduce peak memory usage by running zcat file by file, for example:
for f in dir_with_50files/*
do
  zcat "$f" | awk '{parsing}' >> Result.File
done
# or
find dir_with_50files/ -exec zcat {} \; | awk '{parsing}' >> Result.File
But whether this works depends on your parsing:
it is fine for modifying, deleting, or copying lines when each record is independent of previous ones (e.g. sub(/foo/, "bar"))
it is bad for counters (e.g. List[$2]++) or anything that relates records across files (e.g. NR != FNR {...} or !List[$2]++ {...})
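For example, a counter such as List[$2]++ is reset for every file in the loop above, so each chunk of output only reflects a single file; to keep such state global you need a single awk reading the whole stream, e.g. (a sketch, with the field and parsing as placeholders):
zcat dir_with_50files/* | awk '{ List[$2]++ } END { for (k in List) print k, List[k] }'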
I have a command which decodes binary logs to ASCII format.
From the ASCII file, I need to grep some patterns using awk and print them.
How can this be done?
What I have tried is the following in a shell script, and it does not work:
command > file.txt | awk /pattern/ | sed/pattern/
Also, I need the command to continuously decode the file and keep printing patterns as the file is updated.
Thanks in advance
command to continuously decode file and keep printing patterns
The first question is exactly how continuously manifests itself. Most log files grow by being appended to -- for our purpose here, by some unknown external process -- and are periodically rotated. If you're going to continuously decode them, you're going to have to keep track of log rotation.
Can command continuously decode, or do you intend to re-run command periodically, picking up where you left off? If the latter, you might instead try some variation of:
cat log | command | awk
If that can't be done, you'll have to record where each iteration terminates, something like:
touch pos
while [ -f pos ]
do
  command | awk -v status=pos -f script.awk >> output || rm pos
done
where script.awk skips input until NR equals the value of the number in the pos file. It then processes lines until EOF, and overwrites pos with its final NR. On error, it calls exit 1, and the file is removed, and the loop terminates.
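A minimal sketch of what that script.awk might look like, assuming the pos filename is passed in via -v status=pos as in the loop above, with the per-line logic left as a placeholder:
BEGIN {
    last = 0
    if ((getline last < status) <= 0)   # empty or missing pos file: start from the beginning
        last = 0
    close(status)
}
NR <= last { next }                     # skip lines already processed by earlier runs
/pattern/ { print }                     # placeholder: the real pattern-matching logic goes here
END {
    print NR > status                   # record where this run stopped
    close(status)
}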
I recommend you ignore sed, and put all the pattern matching logic in one awk script. It will be easier to understand and cheaper to execute.
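For instance, instead of chaining grep and sed, a single awk script can match and transform in one pass (the patterns and actions here are hypothetical placeholders):
# patterns.awk - replace the patterns and actions with your own
/ERROR/   { sub(/foo/, "bar"); print "error: " $0 }
/TIMEOUT/ { print "timeout: " $0 }
It would then be run as: command | awk -f patterns.awk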
Say I have a file where the patterns reside, e.g. patterns.txt, and I know that each pattern will be matched at most once in another file, patterns_copy.txt, which in this case, to keep things simple, is just a copy of patterns.txt.
If I run
grep -m 1 --file=patterns.txt patterns_copy.txt > output.txt
I get only one line. I guess it's because the -m flag stops the whole matching process as soon as the first match is found.
What I would like to achieve is to have each pattern in patterns.txt matched only once, and then let grep move to the next pattern.
How do I achieve this?
Thanks.
Updated Answer
I have now had a chance to integrate what I was thinking about awk into the GNU Parallel concept.
I used /usr/share/dict/words as my patterns file and it has 235,000 lines in it. Using BenjaminW's code in another answer, it took 141 minutes, whereas this code gets that down to 11 minutes.
The difference here is that there are no temporary files and awk can stop once it has found all 8 of the things it was looking for...
#!/bin/bash
# Create a bash function that GNU Parallel can call to search for 8 things at once
doit() {
# echo Job: $9
# In following awk script, read "p1s" as a flag meaning "p1 has been seen"
awk -v p1="$1" -v p2="$2" -v p3="$3" -v p4="$4" -v p5="$5" -v p6="$6" -v p7="$7" -v p8="$8" '
$0 ~ p1 && !p1s {print; p1s++;}
$0 ~ p2 && !p2s {print; p2s++;}
$0 ~ p3 && !p3s {print; p3s++;}
$0 ~ p4 && !p4s {print; p4s++;}
$0 ~ p5 && !p5s {print; p5s++;}
$0 ~ p6 && !p6s {print; p6s++;}
$0 ~ p7 && !p7s {print; p7s++;}
$0 ~ p8 && !p8s {print; p8s++;}
{if(p1s+p2s+p3s+p4s+p5s+p6s+p7s+p8s==8)exit}
' patterns.txt
}
export -f doit
# Next line effectively uses 8 cores at a time to each search for 8 items
parallel -N8 doit {1} {2} {3} {4} {5} {6} {7} {8} {#} < patterns.txt
Just for fun, here is what it does to my CPU - blue means maxed out, and see if you can see where the job started in the green CPU history!
Other Thoughts
The above benefits from the fact that the input files are relatively well sorted, so it is worth looking for 8 things at a time because they are likely close to each other in the input file, and I can therefore avoid the overhead associated with creating one process per sought term. However, if your data are not well sorted, that may mean that you waste a lot of time looking further through the file than necessary to find the next 7, or 6 other items. In that case, you may be better off with this:
parallel grep -m1 "{}" patterns.txt < patterns.txt
Original Answer
Having looked at the size of your files, I now think awk is probably not the way to go, but GNU Parallel may be. I tried parallelising the problem in two ways.
Firstly, I search for 8 items at a time in a single pass through the input file so that I have less to search through with the second set of greps that use the -m 1 parameter.
Secondly, I do as many of these "8-at-a-time" greps in parallel as I have CPU cores.
I use the GNU Parallel job number {#} as a unique temporary filename, and only create 16 (or however many CPU cores you have) temporary files at a time. The temporary files are prefixed ss (for sub-search) so they can all be deleted easily enough when testing.
The speedup seems to be a factor of about 4 times on my machine. I used /usr/share/dict/words as my test files.
#!/bin/bash
# Create a bash function that GNU Parallel can call to search for 8 things at once
doit() {
# echo Job: $9
# Make a temp filename using GNU Parallel's job number which is $9 here
TEMP=ss-${9}.txt
grep -E "$1|$2|$3|$4|$5|$6|$7|$8" patterns.txt > $TEMP
for i in $1 $2 $3 $4 $5 $6 $7 $8; do
grep -m1 "$i" $TEMP
done
rm $TEMP
}
export -f doit
# Next line effectively uses 8 cores at a time to each search for 8 items
parallel -N8 doit {1} {2} {3} {4} {5} {6} {7} {8} {#} < patterns.txt
You can loop over your patterns like this (assuming you're using Bash):
while read -r line; do
grep -m 1 "$line" patterns_copy.txt
done < patterns.txt > output.txt
Or, in one line:
while read -r line; do grep -m 1 "$line" patterns_copy.txt; done < patterns.txt > output.txt
For parallel processing, you can start the processes as background jobs:
while read -r line; do
grep -m 1 "$line" patterns_copy.txt &
read -r line && grep -m 1 "$line" patterns_copy.txt &
# Repeat the previous line as desired
wait # Wait for greps of this loop to finish
done < patterns.txt > output.txt
This is not really elegant, as each iteration waits for the slowest grep to finish, but it should still be faster than just one grep per loop.
I have a question about awk processing; I have the file below:
cat test.txt
/home/shhh/
abc.c
/home/shhh/2/
def.c
gthjrjrdj.c
/kernel/sssh
sarawtera.c
wrawrt.h
wearwaerw.h
My goal is to join each directory line with the filenames that follow it, producing full paths such as /home/shhh/abc.c.
This is the command I got from someone:
cat test.txt | awk '/^\/.*/{path=$0}/^[a-zA-Z]/{printf("%s/%s\n",path,$0);}'
It works, but I do not understand how to interpret it step by step.
Could you teach me how to interpret it?
Result:
/home/shhh//abc.c
/home/shhh/2//def.c
/home/shhh/2//gthjrjrdj.c
/kernel/sssh/sarawtera.c
/kernel/sssh/wrawrt.h
/kernel/sssh/wearwaerw.h
What you probably want is the following:
$ awk '/^\//{path=$0}/^[a-zA-Z]/ {printf("%s/%s\n",path,$0)}' file
/home/shhh//abc.c
/home/shhh/2//def.c
/home/shhh/2//gthjrjrdj.c
/kernel/sssh/sarawtera.c
/kernel/sssh/wrawrt.h
/kernel/sssh/wearwaerw.h
Explanation
/^\//{path=$0} on lines starting with a /, store it in the path variable.
/^[a-zA-Z]/ {printf("%s/%s\n",path,$0)} on lines starting with a letter, print the stored path together with the current line.
Note you can also say
awk '/^\//{path=$0; next} {printf("%s/%s\n",path,$0)}' file
Some comments
cat file | awk '...' is better written as awk '...' file.
You don't need the ; at the end of a block {} if you are executing just one command. It is implicit.
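As an aside, the double slashes in the output shown earlier (e.g. /home/shhh//abc.c) come from directory lines that already end in /. If that matters, a small (untested) variation strips a trailing slash before storing the path:
awk '/^\//{ path=$0; sub(/\/$/, "", path); next } { printf("%s/%s\n", path, $0) }' file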