Awk column with pattern array - awk

Is it possible to do this, but use an actual array of strings where it says "array"?
array=(cat
dog
mouse
fish
...)
awk -F "," '{ if ( $5!="array" ) { print $0; } }' file
I would like to use spaces in some of the strings in my array.
I would also like to be able to match partial matches, so "snow" in my array would match "snowman"
It should be case sensitive.
Example csv
s,dog,34
3,cat,4
1,african elephant,gd
A,African Elephant,33
H,snowman,8
8,indian elephant,3k
7,Fish,94
...
Example array
snow
dog
african elephant
Expected output
s,dog,34
H,snowman,8
1,african elephant,gd
Cyrus posted this, which works well, but it doesn't allow spaces in the array strings and won't match partial matches.
echo "${array[@]}" | awk 'FNR==NR{len=split($0,a," "); next} {for(i=1;i<=len;i++) {if(a[i]==$2){next}} print}' FS=',' - file

The brief approach using a single regexp for all array contents:
$ array=('snow' 'dog' 'african elephant')
$ printf '%s\n' "${array[@]}" | awk -F, 'NR==FNR{r=r s $0; s="|"; next} $2~r' - example.csv
s,dog,34
1,african elephant,gd
H,snowman,8
Or if you prefer string comparisons:
$ cat tst.sh
#!/usr/bin/env bash
array=('snow' 'dog' 'african elephant')
printf '%s\n' "${array[@]}" |
awk -F',' '
NR==FNR {
    array[$0]
    next
}
{
    for (val in array) {
        if ( index($2,val) ) { # or $2 ~ val for a regexp match
            print
            next
        }
    }
}
' - example.csv
$ ./tst.sh
s,dog,34
1,african elephant,gd
H,snowman,8

This prints every line of the CSV file except those whose fifth column exactly matches an element of the array:
echo "${array[@]}" | awk 'FNR==NR{len=split($0,a," "); next} {for(i=1;i<=len;i++) {if(a[i]==$5){next}} print}' FS=',' - file

Related

Split file into good and bad data

I have a file file1.txt; the data looks like below:
HDR|2016-10-24
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE|1
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME
TRL|11
Now I want to create two sets of files, good and bad. Good should be where all 29 separators (pipes) are present. Where there are fewer or more than 29 separators, the line should go into the bad file.
IN_FILE=$1
FNAME=`echo $IN_FILE | cut -d"." -f1 | awk '{$1 = substr($1, 1, 26)} 1'`
DFNAME=$FNAME"_Data.txt"
DGFNAME=$FNAME"_Good.txt"
DBFNAME=$FNAME"_Bad.txt"
TFNAME=$FNAME"_Trl.txt"
cat $IN_FILE | awk -v DGFNM="$DGFNAME" -v DBFNM="$DBFNAME" '
{ {FS="|"}
split($0, chars, "|")
if(chars[1]=="DTL")
{
NSEP=`awk -F\| '{print NF}'`
if [ "$NSEP" = "29" ]
then
print substr($0,5) >> DGFNM
else
print $0 >> DBFNM
fi
}
}'
But I am getting an error on this:
awk: cmd. line:5: NSEP=`awk -F\| {print
awk: cmd. line:5: ^ invalid char '`' in expression
Looks like you want:
awk -F'|' -v DGFNM="$DGFNAME" -v DBFNM="$DBFNAME" '
$1 == "DTL" {
    if (NF == 29) {
        print substr($0, 5) > DGFNM
    } else {
        print > DBFNM
    }
}
' "$IN_FILE"
Your code has two main problems:
it uses shell syntax (such as backtick command substitution and [ ... ] tests) inside an awk script, which is not supported.
it performs operations explicitly that awk performs implicitly by default.
Also:
it is best to avoid all-uppercase variable names - both in the shell and in awk scripts - because they can conflict with reserved (built-in or environment) variables.
As @tripleee points out in a comment, you can pass filenames directly to awk (as in the above code) - no need for cat and a pipeline.
In essence:
$ awk -F\| 'NF==30 {print > "good.txt"; next}{print > "bad.txt"}' file1.txt
29 separators means 30 fields, so just check NF.
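To double-check the expected count, you can print the number of fields per line of the sample file first (assuming the same file1.txt):
awk -F'|' '{ print NF, $1 }' file1.txt
NF is always one more than the number of separators on the line.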

AWK command to simulate full outer join and then compare

Hello guys, I need help building an awk command which can simulate a full outer join and then compare values.
Say
cat File1
1|A|B
2|C|D
3|E|F
cat File2
1|A|X
2|C|D
3|Z|F
Assumptions
first column in both the files is the key field so no duplicates
both the files are expected to have same structure
No limit on the number of fields
Now, if I run the awk command
awk -F'|' ........... File1 File2 > output
Output format
<Key>|<File1.column1>|<File2.column1>|<Matched/Mismatched>|<File1.column2>|<File2.column2>|<Matched/Mismatched>|<File1.column3>|<File2.column3>|<Matched/Mismatched>
cat output
1|A|A|MATCHED|B|X|MISMATCHED
2|C|C|MATCHED|D|D|MATCHED
3|E|Z|MISMATCHED|F|F|MATCHED
Thank You
$ awk -v OFS=\| -F\| 'NR==FNR{for(i=2;i<=NF;i++)a[$1][i]=$i;next}{printf "%s",$1;for(i=2;i<=NF;i++){printf"|%s|%s|%s",a[$1][i],$i,a[$1][i]==$i?"matched":"mismatched"}printf"\n"}' file1 file2
1|A|A|matched|B|X|mismatched
2|C|C|matched|D|D|matched
3|E|Z|mismatched|F|F|matched
BEGIN {
    OFS="|"; FS="|"
}
NR==FNR {                       # for the first file
    for(i=2;i<=NF;i++)          # fill array with "non-key" fields
        a[$1][i]=$i             # and use the "key" field as an index
    next
}
{
    printf "%s",$1
    for(i=2;i<=NF;i++) {        # use the key field to match and print
        printf "|%s|%s|%s",a[$1][i],$i,(a[$1][i]==$i?"matched":"mismatched")
    }
    printf "\n"                 # sugar on the top
}
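Note that a[$1][i] is an array of arrays, which needs GNU awk 4.0 or later. Saved as, say, compare.awk (the file name here is just for illustration), it would be run as:
awk -f compare.awk file1 file2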
Perhaps easier with an assist from join:
$ join -t'|' file1 file2 |
awk -F'|' -v OFS='|' '{n="MIS"; m="MATCHED";
m1=($2!=$4?n:"")m;
m2=($3!=$5?n:"")m;
print $1,$2,$4,m1,$3,$5,m2}'
1|A|A|MATCHED|B|X|MISMATCHED
2|C|C|MATCHED|D|D|MATCHED
3|E|Z|MISMATCHED|F|F|MATCHED
For an unspecified number of fields, a bit more awk is needed:
$ join -t'|' file1 file2 |
awk -F'|' '{c=NF/2; printf "%s", $1;
for(i=2;i<=c+1;i++) printf "|%s|%s|%s", $i,$(i+c),($i!=$(i+c)?"MIS":"")"MATCHED";
print ""}'
$ cat tst.awk
BEGIN { FS=OFS="|" }
NR==FNR {
    for (i=2; i<=NF; i++) {
        a[$1,i] = $i
    }
    next
}
{
    printf "%s%s", $1, OFS
    for (i=2; i<=NF; i++) {
        printf "%s%s%s%s%s%s", a[$1,i], OFS, $i, OFS, (a[$1,i]==$i ? "" : "MIS") "MATCHED", (i<NF ? OFS : ORS)
    }
}
$ awk -f tst.awk file1 file2
1|A|A|MATCHED|B|X|MISMATCHED
2|C|C|MATCHED|D|D|MATCHED
3|E|Z|MISMATCHED|F|F|MATCHED

Edit header file with awk

I have a file that is whitespace-separated; I need to convert it so that:
header = tab separated,
records = " ; " separated (space-semicolon-space).
What I'm doing now is:
cat ${original} | awk 'END {FS=" "} { for(i=1; i<=NR; i++) {if (i==1) { OFS="\t"; print $0; } else { OFS=";" ;print $0; }}}' > ${new}
But it is only partly working. First, it produces millions of lines, while the original file has about 90000.
Second, the header, which should be modified here:
if (i==1) { OFS="\t"; print $0; }
is not modified at all.
Another option would be to use sed; I can get the job done partially, but again the header remains untouched:
cat ${original} | sed 's/\t/ ;/g' > ${new}
This line will change all the separators in the file:
awk -F'\t' -v OFS=";" '$1=$1' file
This will leave the header untouched:
awk -F'\t' -v OFS=";" 'NR>1{$1=$1}1' file
This will only change the header line:
awk -F'\t' -v OFS=";" 'NR==1{$1=$1}1' file
You could paste some example input to let us know why your header was not modified.
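If the goal is exactly what was asked - whitespace-separated input, tab-separated header, " ; "-separated records - a minimal sketch along the same lines could be (original and new stand in for your file names):
awk '{ OFS = (NR==1 ? "\t" : " ; "); $1=$1 } 1' "${original}" > "${new}"
Reassigning a field ($1=$1) forces awk to rebuild the record with the current OFS.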

awk command to split nth field

I am learning AWK and was trying some exercises on built-in string functions.
Here's my exercise:
I have a file containing the following:
RecordType:83
1,2,3,a|x|y|z,4,5
And my desired output is as below:
RecordType:83
1,2,3,a,4,5
1,0,0,x,4,5
1,0,0,y,4,5
1,0,0,z,4,5
I wrote an awk command for the above output.
awk -F',' '$1 ~ /RecordType:83/{print $0}
$1 == 1{
split($4,splt,"|")
for(i in splt)
{
if(i==1)
print $1,$2,$3,splt[i],$5,$6
else
print $1,0,0,splt[i],$5,$6
}
}' OFS=, file_name
The above command looks so clumsy. Is there any way of shortening it?
Thanks in advance
The shortest possible one-liner I could manage:
awk -F, 'NR>1{n=split($4,a,"|");for(;i++<n;){$4=a[i];print;$2=$3=0}}NR==1' OFS=, file
RecordType:83
1,2,3,a,4,5
1,0,0,x,4,5
1,0,0,y,4,5
1,0,0,z,4,5
The much more readable script (recommended):
BEGIN {
    FS=OFS=","                          # Comma delimiter
}
NR==1 {                                 # If the first line in file
    print $0                            # Print the whole line
    next                                # Skip to next line
}
{
    n=split($4,a,"|")                   # Split field four on |
    for(i=1;i<=n;i++)                   # For each sub-field
        print $1,(i==1?$2 OFS $3:"0" OFS "0"),a[i],$5,$6   # Print the output
}
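Saved as, say, split.awk (a name chosen here just for illustration), it can be run with:
awk -f split.awk file_name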
Another short one-liner:
awk -F, -v OFS="," 'NR>1{n=split($4,a,"|");while(++i<=n){$4=a[i];print;$2=$3=0}}NR==1' file
with your example:
kent$ awk -F, -v OFS="," 'NR>1{n=split($4,a,"|");while(++i<=n){$4=a[i];print;$2=$3=0}}NR==1' file
RecordType:83
1,2,3,a,4,5
1,0,0,x,4,5
1,0,0,y,4,5
1,0,0,z,4,5

How to print out a specific field in AWK?

A very simple question, to which I found no answer. How do I print out a specific field in awk?
awk '/word1/' will print out the whole line, when I need just word1. Or I need a chain of patterns (word1 + word2) to be printed out from the text.
Well, if the pattern is a single word (which you want to print and which can't contain FS, the input field separator), why not:
awk -v MYPATTERN="INSERT_YOUR_PATTERN" '$0 ~ MYPATTERN { print MYPATTERN }' INPUTFILE
If your pattern is a regex:
awk -v MYPATTERN="INSERT_YOUR_PATTERN" '$0 ~ MYPATTERN { print gensub(".*(" MYPATTERN ").*","\\1","1",$0) }' INPUTFILE
If your pattern must be checked in every single field:
awk -v MYPATTERN="INSERT_YOUR_PATTERN" '$0 ~ MYPATTERN {
    for (i=1;i<=NF;i++) {
        if ($i ~ MYPATTERN) { print "Field " i " in row " NR " matches: " MYPATTERN }
    }
}' INPUTFILE
Modify any of the above to your taste.
The fields in awk are represented by $1, $2, etc:
$ echo this is a string | awk '{ print $2 }'
is
$0 is the whole line, $1 is the first field, $2 is the next field (or blank if there isn't one),
$NF is the last field, $(NF-1) is the second-to-last field, etc.
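For example, with the same input:
$ echo this is a string | awk '{ print $NF, $(NF-1) }'
string a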
EDIT (in response to comment).
You could try:
awk '/crazy/{ print substr( $0, match( $0, "crazy" ), RLENGTH )}'
I know you can do this with awk; an alternative would be:
sed -nr "s/.*(PATTERN_TO_MATCH).*/\1/p" file
Or you can use grep -o.
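For example:
$ echo "this is crazy stuff" | grep -o 'crazy'
crazy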
Something like this perhaps:
awk '{split("bla1 bla2 bla3",a," "); print a[1], a[2], a[3]}'