awk: gsub /pattern1/, but not /pattern1pattern2/ - awk

In my work I have to solve a simple problem: change pattern1 to newpattern, but only when it is not followed by pattern2 or pattern3:
"pattern1 pattern1pattern2 pattern1pattern3 pattern1pattern4" → "newpattern pattern1pattern2 pattern1pattern3 newpatternpattern4"
Here is my solution, but I don't like it; I suppose there must be a more elegant and easier way to do this?
$ echo 'pattern1 pattern1pattern2 pattern1pattern3 pattern1pattern4' | awk '
{gsub(/pattern1pattern2/, "###", $0)
gsub(/pattern1pattern3/, "%%%", $0)
gsub(/pattern1/, "newpattern", $0)
gsub(/###/, "pattern1pattern2", $0)
gsub(/%%%/, "pattern1pattern3", $0)
print}'
newpattern pattern1pattern2 pattern1pattern3 newpatternpattern4
So, the sample input file:
pattern1 pattern1pattern2 aaa_pattern1pattern3 pattern1pattern4 pattern1pattern2pattern1
The sample output file should be:
newpattern pattern1pattern2 aaa_pattern1pattern3 newpatternpattern4 pattern1pattern2newpattern

This is trivial in perl, using a negative lookahead:
perl -pe 's/pattern1(?!pattern[23])/newpattern/g' file
Substitute all matches of pattern1 that are not followed by pattern2 or pattern3.
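With the sample input above, this gives:
$ echo 'pattern1 pattern1pattern2 aaa_pattern1pattern3 pattern1pattern4 pattern1pattern2pattern1' | perl -pe 's/pattern1(?!pattern[23])/newpattern/g'
newpattern pattern1pattern2 aaa_pattern1pattern3 newpatternpattern4 pattern1pattern2newpattern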
If for some reason you need to do it in awk, then here's one way you could go about it:
{
    out = ""
    replacement = "newpattern"
    while (match($0, /pattern1/)) {
        if (substr($0, RSTART + RLENGTH) ~ /^pattern[23]/) {
            out = out substr($0, 1, RSTART + RLENGTH - 1)
        }
        else {
            out = out substr($0, 1, RSTART - 1) replacement
        }
        $0 = substr($0, RSTART + RLENGTH)
    }
    print out $0
}
Consume the input while pattern1 matches and build the string out, inserting the replacement when the part after each match isn't pattern2 or pattern3. Once there are no more matches, print the string that has been built so far, followed by whatever is left in the input.
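If that block is saved as, say, noafter.awk (the file name is just for illustration), a run on the longer sample input looks like:
$ echo 'pattern1 pattern1pattern2 aaa_pattern1pattern3 pattern1pattern4 pattern1pattern2pattern1' | awk -f noafter.awk
newpattern pattern1pattern2 aaa_pattern1pattern3 newpatternpattern4 pattern1pattern2newpattern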

With GNU awk for the 4th arg to split():
$ cat tst.awk
{
    split($0,flds,/pattern1(pattern2|pattern3)/,seps)
    for (i=1; i in flds; i++) {
        printf "%s%s", gensub(/pattern1/,"newpattern","g",flds[i]), seps[i]
    }
    print ""
}
$ awk -f tst.awk file
newpattern pattern1pattern2 aaa_pattern1pattern3 newpatternpattern4 pattern1pattern2newpattern
With other awks you can do the same with a while(match()) loop:
$ cat tst.awk
{
    while ( match($0,/pattern1(pattern2|pattern3)/) ) {
        tgt = substr($0,1,RSTART-1)
        gsub(/pattern1/,"newpattern",tgt)
        printf "%s%s", tgt, substr($0,RSTART,RLENGTH)
        $0 = substr($0,RSTART+RLENGTH)
    }
    gsub(/pattern1/,"newpattern",$0)
    print
}
$ awk -f tst.awk file
newpattern pattern1pattern2 aaa_pattern1pattern3 newpatternpattern4 pattern1pattern2newpattern
but obviously the gawk solution is simpler and more concise so, as always, get gawk!

An awk solution. Nice question. Basically it does 2 gensubs:
$ cat tst.awk
{
    for (i=1; i<=NF; i++) {
        s=gensub(/pattern1/, "newpattern", "g", $i);
        t=gensub(/(newpattern)(pattern(2|3))/, "pattern1\\2", "g", s);
        $i=t
    }
}1
Testing:
echo "pattern1 pattern1pattern2 aaa_pattern1pattern3 pattern1pattern4 pattern1pattern2pattern1" | awk -f tst.awk
newpattern pattern1pattern2 aaa_pattern1pattern3 newpatternpattern4 pattern1pattern2newpattern
However, this will fail whenever the input already contains something like newpatternpattern2. But that's not what the OP's input examples suggest, I guess.
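To illustrate that caveat:
$ echo 'newpatternpattern2 pattern1' | awk -f tst.awk
pattern1pattern2 newpattern
The pre-existing newpatternpattern2 gets incorrectly rewritten to pattern1pattern2.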

Related

AWK: How to auto-increment a number?

I have a file. The file content is:
20210126000880000003|3|33.00|20210126|15:30
1|20210126000000000000000000002207|1220210126080109|1000|100000000000000319|100058110000000325|402041000012|402041000012|PT07|621067000000123645|收款方户名|2021-01-26|2021-01-26|10.00|TN|NCS|12|875466
2|20210126000000000000000000002208|1220210126080110|1000|100000000000000319|100058110000000325|402041000012|402041000012|PT06|621067000000123645|收款方户名|2021-01-26|2021-01-26|20.00|TN|NCS|12|875466
3|20210126000000000000000000002209|1220210126080111|1000|100000000000000319|100058110000000325|402041000012|402041000012|PT08|621067000000123645|收款方户名|2021-01-26|2021-01-26|3.00|TN|NCS|12|875466
I use this awk command:
awk -F"|" 'NR==1{print $1};FNR==2{print $2,$3}' testfile
Get the following result:
20210126000880000003
20210126000000000000000000002207 1220210126080109
I want the number to auto-increase:
awk -F"|" 'NR==1{print $1+1};FNR==2{print $2+1,$3+1}' testfile
But I get the following result:
20210126000880001024
20210126000000000944237587726336 1220210126080110
My question: how can I make the number auto-increment? I hope for this result:
20210126000880000003
20210126000000000000000000002207|1220210126080109
-------------------------------------------------
20210126000880000004
20210126000000000000000000002208|1220210126080110
--------------------------------------------------
20210126000880000005
20210126000000000000000000002209|1220210126080111
How can I auto-increment it?
Thanks!
You may try this GNU awk command (-M enables arbitrary-precision arithmetic, so the large numbers don't lose precision):
awk -M 'BEGIN {FS=OFS="|"} NR == 1 {hdr = $1; next} NF>2 {print ++hdr; print $2, $3; print "-------------------"}' file
20210126000880000004
20210126000000000000000000002207|1220210126080109
-------------------
20210126000880000005
20210126000000000000000000002208|1220210126080110
-------------------
20210126000880000006
20210126000000000000000000002209|1220210126080111
-------------------
A more readable version:
awk -M 'BEGIN {
FS=OFS="|"
}
NR == 1 {
hdr = $1
next
}
NF > 2 {
print ++hdr
print $2, $3
print "-------------------"
}' file
Here is a POSIX awk solution that doesn't need -M:
awk 'BEGIN {FS=OFS="|"} NR == 1 {hdr = $1; next} NF>2 {"echo " hdr " + 1 | bc" | getline hdr; print hdr; print $2, $3; print "-------------------"}' file
20210126000880000004
20210126000000000000000000002207|1220210126080109
-------------------
20210126000880000005
20210126000000000000000000002208|1220210126080110
-------------------
20210126000880000006
20210126000000000000000000002209|1220210126080111
-------------------
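If you would rather not spawn bc for every record, another option in any POSIX awk is to increment the number as a string, digit by digit, so no precision is ever lost. A sketch (the script name incr.awk and the function name incr_str are just for illustration):
$ cat incr.awk
BEGIN { FS = OFS = "|" }
NR == 1 { hdr = $1; next }
NF > 2 {
    hdr = incr_str(hdr)
    print hdr
    print $2, $3
    print "-------------------"
}
# Add 1 to a non-negative decimal string, carrying digit by digit,
# so numbers of any length stay exact.
function incr_str(s,    i, d, carry, out) {
    carry = 1
    out = ""
    for (i = length(s); i >= 1; i--) {
        d = substr(s, i, 1) + carry
        carry = (d > 9 ? 1 : 0)
        out = (d % 10) out
    }
    if (carry) out = "1" out
    return out
}
This should print the same output as the -M version above.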
Anubhava has the best solution but for older versions of GNU awk that don't support -M (big numbers) you can try the following:
awk -F\| 'NR==1 { print $1;hed=$1;hed1=substr($1,(length($1)-1));next; } !/^$/ {print $2" "$3 } /^$/ { print "--------------------------------------------------";printf "%s%02d\n",substr(hed,1,(length(hed)-length(hed1))),++hed1 }' testfile
Explanation:
awk -F\| 'NR==1 {                      # Set the field delimiter to | and process the first line
    print $1;                          # Print the first field
    hed=$1;                            # Save the first field in the variable hed
    hed1=substr($1,(length($1)-1));    # Set the counter variable hed1 to the last two digits of hed ($1)
    next;
}
!/^$/ {
    print $2" "$3                      # Where the line is not blank, print the second field, a space and the third field
}
/^$/ {
    print "--------------------------------------------------";            # Where the line is blank, print the separator
    printf "%s%02d\n",substr(hed,1,(length(hed)-length(hed1))),++hed1      # then print the unchanged part of the header, followed by the zero-padded, incremented counter
}' testfile

printing information of two files according to a specific field

I have two files. I need to print information as in the example below, when the first field exists in both files and is equal.
file 1
20;"aaaaaa";99292929
24;"fsfdfa";42933294
30;"fsdsff";23832299
38;"fjsdjl";62673777
file 2
13;"fsdffsdfs";2272777
20;"ffuiiii";23728877
30;"wdwfsdh";8882817
40;"sfjslll";82371111
expected result:
file1;20;"aaaaaa";99292929;file2;20;"ffuiiii";23728877
file1;30;"fsdsff";23832299;file2;30;"wdwfsdh";8882817
I tried with:
awk 'FNR==NR{a[$1]=$1;next} $1 in a' file2 file1 > newfile
The logic is OK, but I can't print the fields that I want.
awk will help:
awk -F ';' 'NR==FNR{rec[$1]=FILENAME FS $0}
NR>FNR{
if($1 in rec){
print rec[$1] FS FILENAME FS $0
}
}' file{1..2}
should do.
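With the two sample files, that should print:
file1;20;"aaaaaa";99292929;file2;20;"ffuiiii";23728877
file1;30;"fsdsff";23832299;file2;30;"wdwfsdh";8882817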
$ cat tst.awk
BEGIN { FS=OFS=";" }
{ $0 = FILENAME FS $0 }
NR==FNR { a[$2] = $0; next }
$2 in a { print a[$2], $0 }
$ awk -f tst.awk file1 file2
file1;20;"aaaaaa";99292929;file2;20;"ffuiiii";23728877
file1;30;"fsdsff";23832299;file2;30;"wdwfsdh";8882817

AWK command to simulate full outer join and then compare

Hello guys, I need help building an awk command which can simulate a full outer join and then compare values.
Say
cat File1
1|A|B
2|C|D
3|E|F
cat File2
1|A|X
2|C|D
3|Z|F
Assumptions
the first column in both files is the key field, so there are no duplicates
both files are expected to have the same structure
no limit on the number of fields
Now, if I run the awk command
awk -F'|' ........... File1 File2 > output
Output format
<Key>|<File1.column1>|<File2.column1>|<Matched/Mismatched>|<File1.column2>|<File2.column2>|<Matched/Mismatched>|<File1.column3>|<File2.column3>|<Matched/Mismatched>
cat output
1|A|A|MATCHED|B|X|MISMATCHED
2|C|C|MATCHED|D|D|MATCHED
3|E|Z|MISMATCHED|F|F|MATCHED
Thank You
$ awk -v OFS=\| -F\| 'NR==FNR{for(i=2;i<=NF;i++)a[$1][i]=$i;next}{printf "%s",$1;for(i=2;i<=NF;i++){printf"%s|%s|%s",a[$1][i],$i,a[$1][i]==$i?"matched":"mismatched"}printf"\n"}' file1 file2
1|A|A|matched|B|X|mismatched
2|C|C|matched|D|D|matched
3|E|Z|mismatched|F|F|matched
BEGIN {
    OFS="|"; FS="|"
}
NR==FNR {                  # for the first file
    for(i=2;i<=NF;i++)     # fill array with "non-key" fields
        a[$1][i]=$i        # and use the "key" field as an index
    next
}
{
    printf "%s",$1
    for(i=2;i<=NF;i++) {   # use the key field to match and print
        printf"|%s|%s|%s",a[$1][i],$i,a[$1][i]==$i?"matched":"mismatched"
    }
    printf"\n"             # sugar on the top
}
perhaps easier with an assist from join:
$ join -t'|' file1 file2 |
awk -F'|' -v OFS='|' '{n="MIS"; m="MATCHED";
m1=($2!=$4?n:"")m;
m2=($3!=$5?n:"")m;
print $1,$2,$4,m1,$3,$5,m2}'
1|A|A|MATCHED|B|X|MISMATCHED
2|C|C|MATCHED|D|D|MATCHED
3|E|Z|MISMATCHED|F|F|MATCHED
for an unspecified number of fields you need more awk:
$ join -t'|' file1 file2 |
awk -F'|' '{c=NF/2; printf "%s", $1;
for(i=2;i<=c+1;i++) printf "|%s|%s|%s", $i,$(i+c),($i!=$(i+c)?"MIS":"")"MATCHED";
print ""}'
$ cat tst.awk
BEGIN { FS=OFS="|" }
NR==FNR {
    for (i=2; i<=NF; i++) {
        a[$1,i] = $i
    }
    next
}
{
    printf "%s%s", $1, OFS
    for (i=2; i<=NF; i++) {
        printf "%s%s%s%s%s%s", a[$1,i], OFS, $i, OFS, (a[$1,i]==$i ? "" : "MIS") "MATCHED", (i<NF ? OFS : ORS)
    }
}
$ awk -f tst.awk file1 file2
1|A|A|MATCHED|B|X|MISMATCHED
2|C|C|MATCHED|D|D|MATCHED
3|E|Z|MISMATCHED|F|F|MATCHED

print unique lines based on field

I would like to print unique lines based on the first field: keep the first occurrence of each line and remove the other, duplicate occurrences.
Input.csv
10,15-10-2014,abc
20,12-10-2014,bcd
10,09-10-2014,def
40,06-10-2014,ghi
10,15-10-2014,abc
Desired Output:
10,15-10-2014,abc
20,12-10-2014,bcd
40,06-10-2014,ghi
I have tried the command below, but it is incomplete:
awk 'BEGIN { FS = OFS = "," } { !seen[$1]++ } END { for ( i in seen) print $0}' Input.csv
Looking for your suggestions ...
You put your test for "seen" in the action part of the script instead of the condition part. Change it to:
awk -F, '!seen[$1]++' Input.csv
Yes, that's the whole script:
$ cat Input.csv
10,15-10-2014,abc
20,12-10-2014,bcd
10,09-10-2014,def
40,06-10-2014,ghi
10,15-10-2014,abc
$
$ awk -F, '!seen[$1]++' Input.csv
10,15-10-2014,abc
20,12-10-2014,bcd
40,06-10-2014,ghi
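If the idiom looks cryptic: !seen[$1]++ is true only the first time a given first field is seen, and the default action for a true condition is to print the line. A longhand equivalent, just for illustration:
$ awk -F, '{ if (!($1 in seen)) { print; seen[$1] = 1 } }' Input.csv
10,15-10-2014,abc
20,12-10-2014,bcd
40,06-10-2014,ghi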
This should give you what you want:
awk -F, '{ if (!($1 in a)) a[$1] = $0; } END '{ for (i in a) print a[i]}' input.csv
There's a typo in the syntax there; corrected (and with -F, so the first field really is the comma-separated key):
awk -F, '{ if (!($1 in a)) a[$1] = $0; } END { for (i in a) print a[i]}' input.csv
Note that for (i in a) does not guarantee input order, unlike the !seen[$1]++ approach above.

How to print out a specific field in AWK?

A very simple question to which I found no answer. How do I print out a specific field in awk?
awk '/word1/' will print out the whole line, when I need just word1. Or I need a chain of patterns (word1 + word2) to be printed from the text.
Well, if the pattern is a single word (which you want to print and which can't contain FS, the input field separator), why not:
awk -v MYPATTERN="INSERT_YOUR_PATTERN" '$0 ~ MYPATTERN { print MYPATTERN }' INPUTFILE
If your pattern is a regex:
awk -v MYPATTERN="INSERT_YOUR_PATTERN" '$0 ~ MYPATTERN { print gensub(".*(" MYPATTERN ").*","\\1","1",$0) }' INPUTFILE
If your pattern must be checked in every single field:
awk -v MYPATTERN="INSERT_YOUR_PATTERN" '$0 ~ MYPATTERN {
for (i=1;i<=NF;i++) {
if ($i ~ MYPATTERN) { print "Field " i " in " NR " row matches: " MYPATTERN }
}
}' INPUTFILE
Modify any of the above to your taste.
The fields in awk are represented by $1, $2, etc:
$ echo this is a string | awk '{ print $2 }'
is
$0 is the whole line, $1 is the first field, $2 is the next field ( or blank ),
$NF is the last field, $( NF - 1 ) is the 2nd to last field, etc.
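For example, printing the last and second-to-last fields:
$ echo this is a string | awk '{ print $NF, $(NF-1) }'
string a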
EDIT (in response to comment).
You could try:
awk '/crazy/{ print substr( $0, match( $0, "crazy" ), RLENGTH )}'
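For example:
$ echo 'this is crazy stuff' | awk '/crazy/{ print substr( $0, match( $0, "crazy" ), RLENGTH )}'
crazy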
I know you can do this with awk;
an alternative would be:
sed -nr "s/.*(PATTERN_TO_MATCH).*/\1/p" file
or you can use grep -o
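grep -o prints only the matching part of each matching line, e.g. (with -E for the same extended-regex syntax as the sed example):
grep -Eo "PATTERN_TO_MATCH" file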
Something like this perhaps:
awk '{split("bla1 bla2 bla3",a," "); print a[1], a[2], a[3]}'
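That splits a literal string, though. To split the actual input line, something like this (just a sketch) prints the second whitespace-separated word of each line:
awk '{ n = split($0, a, " "); if (n >= 2) print a[2] }' INPUTFILE
although for plain whitespace-separated fields, print $2 does the same thing.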