GNU parallel used with xargs and awk

I have two large tab-separated files, A.tsv and B.tsv. They look like this (the headers are not actually in the files):
A.tsv:
ID AGE
User1 18
...
B.tsv:
ID INCOME
User4 49000
...
I want to select the list of IDs in A such that 10 <= AGE <= 20, then select the rows in B that match that list, and I want to use the GNU parallel tool. My attempt is in two steps:
cat A.tsv | parallel --pipe -q awk '{ if ($2 >= 10 && $2 <= 20) print $1}' > list.tsv
cat list.tsv | parallel --pipe -q xargs -I% awk 'FNR==NR{a[$1];next}($1 in a)' % B.tsv > result.tsv
The first step works, but the second one fails with an error like:
awk: cannot open User1 (No such file or directory)
How can I fix this? And does this method even work if A.tsv and list.tsv are 2-3 times bigger than the available memory?
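The error comes from xargs: -I% substitutes each incoming ID into the awk argument list, so awk treats User1 as a filename to open. The second step needs neither xargs nor parallel; the usual fix is a single two-file awk join (a minimal sketch, assuming tab-separated input and that list.tsv fits in memory; only the ID list is held in an array while B.tsv is streamed):
awk 'FNR==NR { a[$1]; next } $1 in a' list.tsv B.tsv > result.tsv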

$ for I in $(seq 8 2 22); do echo -e "User$I\t$I" >> A.txt; done; cat A.txt
User8 8
User10 10
User12 12
User14 14
User16 16
User18 18
User20 20
User22 22
$ for I in $(seq 8 2 22); do echo -e "User$I\t100${I}00" >> B.txt; done; cat B.txt
User8 100800
User10 1001000
User12 1001200
User14 1001400
User16 1001600
User18 1001800
User20 1002000
User22 1002200
$ cat A.txt | parallel --pipe -q awk '{if ($2 >= 10 && $2 <= 20) print $1}' > list.txt
$ cat B.txt | parallel --pipe -q grep -f list.txt
User10 1001000
User12 1001200
User14 1001400
User16 1001600
User18 1001800
User20 1002000
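One caveat with grep -f: the list entries are treated as regexes and matched anywhere in the line, so an ID like User1 would also match User10 and User18. For exact ID matching, fixed-string whole-word matching is safer; a variant of the last step (grep -F reads the patterns as fixed strings, -w matches whole words only):
$ cat B.txt | parallel --pipe -q grep -F -w -f list.txt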

I know this question (yes, I saw it).
My solution: only xargs and awk, a single line with no intermediate file, and you don't need to install a new tool:
awk '{if ($2 >= 10 && $2 <= 20) print $1}' A.tsv | xargs -I myItem awk --assign quebuscar=myItem '$1==quebuscar {print}' B.tsv
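Note that this re-runs awk over all of B.tsv once per matching ID, which gets slow when the list is long. If the point of GNU parallel was to bound memory while using every core, something along these lines should work (a sketch, untested here: --pipepart splits B.tsv into chunks, each awk job loads list.tsv as a lookup table and filters its own chunk, and -k keeps the output in input order):
parallel -k --pipepart -a B.tsv --block 100M -q \
  awk 'FNR==NR { a[$1]; next } $1 in a' list.tsv - > result.tsv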

Related

awk cannot select column with empty value

I am trying to select a column that has missing values. Here is my input file, separated by tabs (blank cells are empty fields):
1	2	3
4		5
	6	
7	8	
		9
I am trying to select the first column; the output should look like
1
4
7
with blank lines for the empty cells, so the length of my column would be 5 in this case.
I have tried
awk '$1!=""{print $1}' ./demo.txt
but it returns
1
4
6
7
9
Can anybody help with this? I am new to awk.
You can use cut:
$ cut -f 1 file # the default delimiter is a tab
Or with sed:
$ sed 's/[[:blank:]].*$//' file
Or awk:
$ awk '{sub(/[[:blank:]].*$/,"")}1' file
Or:
$ awk 'BEGIN{FS=OFS="\t"} {print $1}' file
All those print the first column and all five lines (blank or not):
1
4

7

(the blank lines correspond to the empty cells in rows 3 and 5.)
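When debugging this kind of problem it helps to make the tabs visible first; with GNU coreutils, for example:
$ cat -A demo.txt   # tabs show as ^I, line ends as $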
Tell awk to use a tab (\t) as the input field delimiter (-F):
$ awk -F'\t' '{ print $1 }' demo.txt
1
4
7
If you want to print multiple columns, maintaining the same delimiter for output, another approach using the FS and OFS variables:
$ awk 'BEGIN { FS=OFS="\t" } { print $1,$3 }' demo.txt
1	3
4	5

7	
	9
With sed something like:
sed 's/^\([^[:blank:]]*\).*/\1/' demo.txt
Using FIELDWIDTHS in GNU awk (the * width meaning "the rest of the record" needs a recent gawk, 4.2+), you can do this for fixed-width data:
awk 'BEGIN {FIELDWIDTHS = "4 4 *"} {print $1}' file
1
4
7
For demo purposes:
awk 'BEGIN {FIELDWIDTHS = "4 4 *"} {print NR ":", $1}' file
1: 1
2: 4
3:
4: 7
5:
If they're all single digits in the 1st column:
echo \
'1 2 3
4 5
6
7 8
9' |
mawk NF=1 FS= |
gcat -n
1 1
2 4
3
4 7
5
That's literally all you need. To play it safe, do
nawk NF=1 FS='[[:space:]]'   # overly verbose, so-called "proper" POSIX form
gawk NF=1 FS='[ \t]'         # suffices unless the input happens to have
                             # uncommon bytes like \013 (\v) or \014 (\f)
Or a very fringe way of fudging NF (the assignment FS="[ \t]" evaluates to that string, whose numeric value is 0, and NF ^= 0 means NF = NF^0 = 1):
mawk 'NF ^= FS="[ \t]"'

Instead of null print zero in awk

I have a problem. This is my script:
#!/bin/bash
for index in {1..100}   # I run this script on 100 files; that is why I use a for loop
do
    sort -k2,2 -k1,1 eq9_x4_$index.ndx |
    uniq -c |
    uniq -f2 -c |
    awk '($1==1 && $2==4) {inner+=6}
         ($1==2 && $2==1) {inner+=3; outer+=3}
         ($1==2 && $2==2) {inner+=2; outer+=4}
         ($1==3 && $2==1) {inner+=1; outer+=5}
         ($1==4 && $2==1) {outer+=6}
         END {print inner, outer}' >> inner_outer_water_bridges_x4.txt
done
It counts water bridges and prints the sums (inner and outer).
This is part of my output file. Instead of this:
9 15
2 16
8 10
4 14
6
5 25
2 10
6
I want to have this:
9 15
2 16
0 0
8 10
4 14
0 6
5 25
2 10
6 0
How can I do this? Is there any good solution in awk?
Try the following, with a ternary operator. I couldn't test it, since only code samples were provided here:
#!/bin/bash
for index in {1..100}   # I run this script on 100 files; that is why I use a for loop
do
    sort -k2,2 -k1,1 eq9_x4_$index.ndx |
    uniq -c |
    uniq -f2 -c |
    awk '($1==1 && $2==4) {inner+=6}
         ($1==2 && $2==1) {inner+=3; outer+=3}
         ($1==2 && $2==2) {inner+=2; outer+=4}
         ($1==3 && $2==1) {inner+=1; outer+=5}
         ($1==4 && $2==1) {outer+=6}
         END {print (inner?inner:0), (outer?outer:0)}' >> inner_outer_water_bridges_x4.txt
done
If you are dealing solely with integers, you might harness printf the following way:
END{printf "%d %d\n", inner, outer}
(tested in GNU Awk 5.0.1)
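That works because %d forces a numeric conversion, and an unset awk variable converts to 0. A quick check, together with the equivalent var+0 idiom (both are standard awk behavior):
$ awk 'BEGIN {printf "%d %d\n", inner, outer}'
0 0
$ awk 'BEGIN {print inner+0, outer+0}'
0 0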

awk + How do I find duplicates in a column?

How do I find duplicates in a column?
$ head countries_lat_long_int_code3.csv | cat -n
1 country,latitude,longitude,name,code
2 AD,42.546245,1.601554,Andorra,376
3 AE,23.424076,53.847818,United Arab Emirates,971
4 AF,33.93911,67.709953,Afghanistan,93
5 AG,17.060816,-61.796428,Antigua and Barbuda,1
6 AI,18.220554,-63.068615,Anguilla,1
7 AL,41.153332,20.168331,Albania,355
8 AM,40.069099,45.038189,Armenia,374
9 AN,12.226079,-69.060087,Netherlands Antilles,599
10 AO,-11.202692,17.873887,Angola,244
For instance this has duplicates in the 5th column.
5 AG,17.060816,-61.796428,Antigua and Barbuda,1
6 AI,18.220554,-63.068615,Anguilla,1
How do I view all the others in this file?
I know I can do this:
awk -F, 'NR>1{print $5}' countries_lat_long_int_code3.csv | sort
And I can eyeball it and see if there are any duplicates, but is there a better way?
Or I can do this:
Find out how many there are in total:
$ awk -F, 'NR>1{print $5}' countries_lat_long_int_code3.csv | sort | wc -l
210
Find out how many unique values there are:
$ awk -F, 'NR>1{print $5}' countries_lat_long_int_code3.csv | sort | uniq | wc -l
183
Therefore there are at most 27 (210-183) duplicates.
EDIT1
My desired output would be something as follows: all the columns, but showing only the rows that have duplicates:
5 AG,17.060816,-61.796428,Antigua and Barbuda,1
6 AI,18.220554,-63.068615,Anguilla,1
This will give you the duplicated codes:
awk -F, 'a[$5]++{print $5}'
If you're only interested in the count of duplicate codes:
awk -F, 'a[$5]++{count++} END{print count}'
To print the duplicated rows, try this:
awk -F, '$5 in a{print a[$5]; print} {a[$5]=$0}'
This will print the whole row for duplicates found in column $5 (skipping each first occurrence, since a[$5]++ is still 0 the first time a code is seen):
awk -F, 'a[$5]++{print $0}'
This is the least memory-aggressive approach I can come up with:
$ cat infile
country,latitude,longitude,name,code
AD,42.546245,1.601554,Andorra,376
AE,23.424076,53.847818,United Arab Emirates,971
AF,33.93911,67.709953,Afghanistan,93
AG,17.060816,-61.796428,Antigua and Barbuda,1
AI,18.220554,-63.068615,Anguilla,1
AL,41.153332,20.168331,Albania,355
AM,40.069099,45.038189,Armenia,374
AN,12.226079,-69.060087,Netherlands Antilles,599
AO,-11.202692,17.873887,Angola,355
$ awk -F\, '$NF in a{if (a[$NF]!=0){print a[$NF];a[$NF]=0}print;next}{a[$NF]=$0}' infile
AG,17.060816,-61.796428,Antigua and Barbuda,1
AI,18.220554,-63.068615,Anguilla,1
AL,41.153332,20.168331,Albania,355
AO,-11.202692,17.873887,Angola,355
NOTE: I have included another duplicate for testing purposes.
If you just want to print a value that repeats in the file only once, add this at the end of the awk:
awk ... ... | sort | uniq -u
That will print only the unique values, in alphabetical order.
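A two-pass variant reads the file twice but prints every row of each duplicated group, including the first occurrence (same sample infile as above):
$ awk -F, 'NR==FNR {cnt[$5]++; next} cnt[$5] > 1' infile infile
AG,17.060816,-61.796428,Antigua and Barbuda,1
AI,18.220554,-63.068615,Anguilla,1
AL,41.153332,20.168331,Albania,355
AO,-11.202692,17.873887,Angola,355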

Reading a file from line 4 to the end

I want to read a file from line 4 to the very end. Is there any way to do this with awk or something similar?
This sed command will do:
sed -n '4,$p' file.txt
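Equivalently with sed, but without -n, deleting lines 1 to 3 instead of printing from line 4:
sed '1,3d' file.txt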
Or using awk:
awk 'NR>=4' file.txt
Or using tail:
tail +4 file.txt
awk 'NR >= 4 {print $0}'
For example
$> seq 101 110 | awk 'NR >= 4 {print $0}'
104
105
106
107
108
109
110
tail +4 filename will serve your purpose; see the tail man page for more.
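Note that the bare tail +4 form is obsolete syntax and is not accepted by every modern implementation (GNU tail, for example, typically tries to open +4 as a file); the portable spelling is:
tail -n +4 filename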
Here's a method (it can depend on the type of shell you use; bash should work):
tmpvar=`cat a_file | wc -l`; tail -$((tmpvar-3)) a_file
(The last N-3 lines of an N-line file are exactly lines 4 to the end.)
Here's another method that should work in more shells:
cat -n a_file | awk '{if ($1 >= 4) print $2}'
(Note: this prints only the first whitespace-separated field after the line number.)

Print all but the first three columns

Too cumbersome:
awk '{print " "$4" "$5" "$6" "$7" "$8" "$9" "$10" "$11" "$12" "$13}' things
awk '{for(i=1;i<4;i++) $i="";print}' file
use cut
$ cut -f4-13 file
or if you insist on awk and $13 is the last field
$ awk '{$1=$2=$3="";print}' file
else
$ awk '{for(i=4;i<=13;i++)printf "%s ",$i;printf "\n"}' file
A solution that does not add extra leading or trailing whitespace:
awk '{ for(i=4; i<NF; i++) printf "%s",$i OFS; if(NF) printf "%s",$NF; printf ORS}'
### Example ###
$ echo '1 2 3 4 5 6 7' |
awk '{for(i=4;i<NF;i++)printf"%s",$i OFS;if(NF)printf"%s",$NF;printf ORS}' |
tr ' ' '-'
4-5-6-7
Sudo_O proposes an elegant improvement using the ternary operator i==NF?ORS:OFS:
$ echo '1 2 3 4 5 6 7' |
awk '{ for(i=4; i<=NF; i++) printf "%s",$i (i==NF?ORS:OFS) }' |
tr ' ' '-'
4-5-6-7
EdMorton gives a solution preserving the original whitespace between fields:
$ echo '1 2 3 4 5 6 7' |
awk '{ sub(/([^ ]+ +){3}/,"") }1' |
tr ' ' '-'
4---5----6-7
BinaryZebra also provides two awesome solutions:
(these solutions even preserve trailing spaces from the original string)
$ echo -e ' 1 2\t \t3 4 5 6 7 \t 8\t ' |
awk -v n=3 '{ for ( i=1; i<=n; i++) { sub("^["FS"]*[^"FS"]+["FS"]+","",$0);} } 1 ' |
sed 's/ /./g;s/\t/->/g;s/^/"/;s/$/"/'
"4...5...6.7.->.8->."
$ echo -e ' 1 2\t \t3 4 5 6 7 \t 8\t ' |
awk -v n=3 '{ print gensub("["FS"]*([^"FS"]+["FS"]+){"n"}","",1); }' |
sed 's/ /./g;s/\t/->/g;s/^/"/;s/$/"/'
"4...5...6.7.->.8->."
The solution given by larsr in the comments is almost correct:
$ echo '1 2 3 4 5 6 7' |
awk '{for (i=3;i<=NF;i++) $(i-2)=$i; NF=NF-2; print $0}' | tr ' ' '-'
3-4-5-6-7
This is the fixed and parametrized version of larsr's solution:
$ echo '1 2 3 4 5 6 7' |
awk '{for(i=n;i<=NF;i++)$(i-(n-1))=$i;NF=NF-(n-1);print $0}' n=4 | tr ' ' '-'
4-5-6-7
All other answers before Sep-2013 are nice but add extra spaces:
Example of answer adding extra leading spaces:
$ echo '1 2 3 4 5 6 7' |
awk '{$1=$2=$3=""}1' |
tr ' ' '-'
---4-5-6-7
Example of answer adding extra trailing space
$ echo '1 2 3 4 5 6 7' |
awk '{for(i=4;i<=13;i++)printf "%s ",$i;printf "\n"}' |
tr ' ' '-'
4-5-6-7-------
Try this:
awk '{ $1=""; $2=""; $3=""; print $0 }'
The correct way to do this is with an RE interval, because it lets you simply state how many fields to skip and retains the inter-field spacing of the remaining fields.
E.g., to skip the first 3 fields without affecting the spacing between the remaining fields, given the input format we seem to be discussing in this question, is simply:
$ echo '1 2 3 4 5 6' |
awk '{sub(/([^ ]+ +){3}/,"")}1'
4 5 6
If you want to accommodate leading spaces and whitespace other than blanks, but again with the default FS, then it's:
$ echo ' 1 2 3 4 5 6' |
awk '{sub(/[[:space:]]*([^[:space:]]+[[:space:]]+){3}/,"")}1'
4 5 6
If you have an FS that's an RE you can't negate in a character set, you can convert it to a single char first (RS is ideal if it's a single char, since an RS CANNOT appear within a field; otherwise consider SUBSEP), then apply the RE interval substitution, then convert to the OFS. E.g., if chains of "."s separate the fields:
$ echo '1...2.3.4...5....6' |
awk -F'[.]+' '{gsub(FS,RS);sub("([^"RS"]+["RS"]+){3}","");gsub(RS,OFS)}1'
4 5 6
Obviously if OFS is a single char AND it can't appear in the input fields you can reduce that to:
$ echo '1...2.3.4...5....6' |
awk -F'[.]+' '{gsub(FS,OFS); sub("([^"OFS"]+["OFS"]+){3}","")}1'
4 5 6
Then you have the same issue as with all the loop-based solutions that reassign the fields: the FSs are converted to OFSs. If that's an issue, you need to look into GNU awk's patsplit() function.
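For completeness, a gawk-only sketch using patsplit(), which captures the fields and the separators into parallel arrays so the original separators can be re-emitted (patsplit(str, flds, fieldpat, seps) is a real gawk built-in; the loop around it is just illustrative):
$ echo '1...2.3.4...5....6' |
gawk '{
    n = patsplit($0, f, /[^.]+/, s)   # f[i] = field i, s[i] = separator after it
    out = ""
    for (i = 4; i <= n; i++)
        out = out f[i] (i < n ? s[i] : "")
    print out
}'
4...5....6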
Pretty much all the answers currently add either leading spaces, trailing spaces, or some other separator issue. To select from the fourth field onward, where the separator is whitespace and the output separator is a single space, using awk:
awk '{for(i=4;i<=NF;i++)printf "%s",$i (i==NF?ORS:OFS)}' file
To parametrize the starting field you could do:
awk '{for(i=n;i<=NF;i++)printf "%s",$i (i==NF?ORS:OFS)}' n=4 file
And also the ending field, capping m at NF so that shorter lines still end with a newline:
awk '{e = (m > NF ? NF : m); for (i=n; i<=e; i++) printf "%s", $i (i==e ? ORS : OFS)}' n=4 m=10 file
awk '{$1=$2=$3="";$0=$0;$1=$1}1'
Input
1 2 3 4 5 6 7
Output
4 5 6 7
echo 1 2 3 4 5 | awk '{ for (i=4; i<=NF; i++) print $i }'   # prints one field per line
Another way to avoid using the print statement:
$ awk '{$1=$2=$3=""}sub("^"FS"*","")' file
In awk, when a condition is true, print is the default action.
I can't believe nobody offered plain shell:
while read -r a b c d; do echo "$d"; done < file
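This works because read assigns the leftover words, with their intervening separators intact, to the last variable named, so d receives everything from field 4 to the end of the line:
$ echo '1 2 3 4 5 6 7' | { read -r a b c d; echo "$d"; }
4 5 6 7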
Options 1 to 3 have issues with multiple whitespace (but they are simple).
That is the reason options 4 and 5 were developed; they process multiple whitespace with no problem.
Of course, if option 4 or 5 is used with n=0, both will preserve any leading whitespace, as n=0 means no splitting.
Option 1
A simple cut solution (works with single delimiters):
$ echo '1 2 3 4 5 6 7 8' | cut -d' ' -f4-
4 5 6 7 8
Option 2
Forcing awk to recalculate the record sometimes solves the problem of added leading spaces (works with some versions of awk):
$ echo '1 2 3 4 5 6 7 8' | awk '{ $1=$2=$3="";$0=$0;} NF=NF'
4 5 6 7 8
Option 3
Printing each field formatted with printf gives more control:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ for (i=n+1; i<=NF; i++){printf("%s%s",$i,i==NF?RS:OFS);} }'
4 5 6 7 8
However, all the previous answers change all the FS between fields to OFS. Let's build a couple of solutions for that.
Option 4
A loop with sub to remove fields and delimiters is more portable, and doesn't trigger a change of FS to OFS:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ for(i=1;i<=n;i++) { sub("^["FS"]*[^"FS"]+["FS"]+","",$0);} } 1 '
4 5 6 7 8
NOTE: The "^["FS"]*" is to accept an input with leading spaces.
Option 5
It is quite possible to build a solution that does not add extra leading or trailing whitespace, and preserves existing whitespace, using the gensub function from GNU awk, like this:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ print gensub("["FS"]*([^"FS"]+["FS"]+){"n"}","",1); }'
4 5 6 7 8
It may also be used to swap the two parts of the line around a field count n:
$ echo ' 1 2 3 4 5 6 7 8 ' |
awk -v n=3 '{ a=gensub("["FS"]*([^"FS"]+["FS"]+){"n"}","",1);
b=gensub("^(.*)("a")","\\1",1);
print "|"a"|","!"b"!";
}'
|4 5 6 7 8 | ! 1 2 3 !
Of course, in such case, the OFS is used to separate both parts of the line, and the trailing white space of the fields is still printed.
Note: ["FS"]* is used to allow leading spaces in the input line.
Cut has a --complement flag that makes it easy (and fast) to delete columns. The resulting syntax is analogous to what you want to do, making the solution easier to read and understand. Complement also works for the case where you would like to delete non-contiguous columns. (--complement is a GNU coreutils extension.)
$ foo='1 2 3 %s 5 6 7'
$ echo "$foo" | cut --complement -d' ' -f1-3
%s 5 6 7
$
Perl solution which does not add leading or trailing whitespace:
perl -lane 'splice @F,0,3; print join " ",@F' file
The Perl @F autosplit array starts at index 0, while awk fields start with $1.
Perl solution for comma-delimited data:
perl -F, -lane 'splice @F,0,3; print join ",",@F' file
Python solution:
python -c "import sys;[sys.stdout.write(' '.join(line.split()[3:]) + '\n') for line in sys.stdin]" < file
For me, the most compact solution that complies with the request is
$ a='1 2\t \t3 4 5 6 7 \t 8\t '
$ echo -e "$a" | awk -v n=3 '{while (i<n) {i++; sub($1 FS"*", "")}; print $0}'
And if you have more lines to process, as for instance in file foo.txt, don't forget to reset i to 0:
$ awk -v n=3 '{i=0; while (i<n) {i++; sub($1 FS"*", "")}; print $0}' foo.txt
I was annoyed enough by the first highly upvoted but wrong answer to write a reply there, and since the wrong answers here are marked as such, here is my bit. I do not like the proposed solutions, as I can see no reason to make the answer so complex.
I have a log where, after $5 (an IP address), there can be more text or no text at all. I need everything from the IP address to the end of the line, should there be anything after $5. In my case, this is actually within an awk program, not an awk one-liner, so awk must solve the problem. When I try to remove the first 4 fields using the old, nice-looking, and most upvoted but completely wrong answer:
echo " 7 27.10.16. Thu 11:57:18 37.244.182.218 one two three" | awk '{$1=$2=$3=$4=""; printf "[%s]\n", $0}'
it spits out a wrong and useless response (I added the [] to demonstrate):
[ 37.244.182.218 one two three]
Instead, if columns are fixed width until the cut point and awk is needed, the correct and quite simple answer is:
echo " 7 27.10.16. Thu 11:57:18 37.244.182.218 one two three" | awk '{printf "[%s]\n", substr($0,28)}'
which produces the desired output:
[37.244.182.218 one two three]
I've found this other possibility; maybe it could be useful too:
awk 'BEGIN {OFS=ORS="\t" }; {for(i=1; i<14; i++) print $i " "; print $NF "\n" }' your_file
Note: for tabular data, from column $1 to $14.
Use cut:
cut -d <delimiter character> -f <first column number>-<last column number> <file name>
e.g.: if you have file1 containing car.is.nice.equal.bmw,
run cut -d . -f1-3 file1 and it will print car.is.nice
This isn't very far from some of the previous answers, but does solve a couple of issues:
cols.sh:
#!/bin/bash
awk -v s="$1" '{for(i=s; i<=NF; i++) printf "%-5s", $i; print "" }'
Which you can now call with an argument that will be the starting column:
$ echo "1 2 3 4 5 6 7 8 9 10 11 12 13 14" | ./cols.sh 3
3 4 5 6 7 8 9 10 11 12 13 14
Or:
$ echo "1 2 3 4 5 6 7 8 9 10 11 12 13 14" | ./cols.sh 7
7 8 9 10 11 12 13 14
This is 1-indexed; if you prefer zero indexed, use i=s + 1 instead.
Moreover, if you would like two arguments, for the starting index and the ending index, change the file to:
#!/bin/bash
awk -v s="$1" -v e="$2" '{for(i=s; i<=e; i++) printf "%-5s", $i; print "" }'
For example:
$ echo "1 2 3 4 5 6 7 8 9 10 11 12 13 14" | ./cols.sh 7 9
7 8 9
The %-5s aligns the result as 5-character-wide columns; if this isn't enough, increase the number, or use %s (with a space) instead if you don't care about alignment.
An awk printf-based solution that avoids the % problem, and is unique in that it outputs nothing (not even a newline) if there are fewer than 4 columns to print:
awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
Testing:
$ x='1 2 3 %s 4 5 6'
$ echo "$x" | awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
%s 4 5 6
$ x='1 2 3'
$ echo "$x" | awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
$ x='1 2 3 '
$ echo "$x" | awk 'NF > 3 { for(i=4; i<NF; i++) printf("%s ", $(i)); print $(i) }'
$