Split file into good and bad data - awk

I have a file file1.txt with data like the below:
HDR|2016-10-24
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE|1
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME
TRL|11
Now I want to create two sets of files, good and bad. A line should go to the good file when all 29 separators (which are pipes) are present; where there are fewer or more than 29 separators, it should go into the bad file.
IN_FILE=$1
FNAME=`echo $IN_FILE | cut -d"." -f1 | awk '{$1 = substr($1, 1, 26)} 1'`
DFNAME=$FNAME"_Data.txt"
DGFNAME=$FNAME"_Good.txt"
DBFNAME=$FNAME"_Bad.txt"
TFNAME=$FNAME"_Trl.txt"
cat $IN_FILE | awk -v DGFNM="$DGFNAME" -v DBFNM="$DBFNAME" '
{ {FS="|"}
split($0, chars, "|")
if(chars[1]=="DTL")
{
NSEP=`awk -F\| '{print NF}'`
if [ "$NSEP" = "29" ]
then
print substr($0,5) >> DGFNM
else
print $0 >> DBFNM
fi
}
}'
But I am getting an error on this:
awk: cmd. line:5: NSEP=`awk -F\| {print
awk: cmd. line:5: ^ invalid char '`' in expression

Looks like you want:
awk -F'|' -v DGFNM="$DGFNAME" -v DBFNM="$DBFNAME" '
$1 == "DTL" {
    if (NF == 29) {
        print substr($0, 5) > DGFNM
    } else {
        print > DBFNM
    }
}
' "$IN_FILE"
Your code has two main problems:
it uses shell syntax (such as `...` command substitution and [ ... ]) inside an awk script, which is not supported.
it performs operations explicitly that awk performs implicitly by default.
Also:
it is best to avoid all-uppercase variable names - both in the shell and in awk scripts - because they can clash with environment variables and awk's built-in variables.
As @tripleee points out in a comment, you can pass filenames directly to awk (as in the above code) - no need for cat and a pipeline.
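Putting those points together, a cleaned-up version of the whole wrapper might look like this (a sketch, reusing your filename convention; note it checks NF == 30, i.e. 29 pipe separators, to match the stated requirement):
#!/usr/bin/env bash
in_file=$1
fname=$(echo "$in_file" | cut -d"." -f1 | awk '{$1 = substr($1, 1, 26)} 1')
good_file="${fname}_Good.txt"
bad_file="${fname}_Bad.txt"

awk -F'|' -v good="$good_file" -v bad="$bad_file" '
$1 == "DTL" {
    if (NF == 30) {                  # 29 "|" separators = 30 fields
        print substr($0, 5) > good   # drop the leading "DTL|"
    } else {
        print > bad
    }
}
' "$in_file"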

In essence:
$ awk -F\| 'NF==30 {print > "good.txt"; next}{print > "bad.txt"}' file1.txt
29 separators means 30 fields, so just check NF.
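Note that this also sends the HDR and TRL lines to bad.txt, since they have far fewer than 30 fields. If you only want to classify the DTL records (as in your original script), a sketch restricted to them could be:
awk -F'|' '$1 == "DTL" { print > (NF == 30 ? "good.txt" : "bad.txt") }' file1.txt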

Awk column with pattern array

Is it possible to do this but use an actual array of strings where it says "array"?
array=(cat
dog
mouse
fish
...)
awk -F "," '{ if ( $5!="array" ) { print $0; } }' file
I would like to use spaces in some of the strings in my array.
I would also like to be able to match partial matches, so "snow" in my array would match "snowman"
It should be case sensitive.
Example csv
s,dog,34
3,cat,4
1,african elephant,gd
A,African Elephant,33
H,snowman,8
8,indian elephant,3k
7,Fish,94
...
Example array
snow
dog
african elephant
Expected output
s,dog,34
H,snowman,8
1,african elephant,gd
Cyrus posted this, which works well, but it doesn't allow spaces in the array strings and won't match partial matches.
echo "${array[@]}" | awk 'FNR==NR{len=split($0,a," "); next} {for(i=1;i<=len;i++) {if(a[i]==$2){next}} print}' FS=',' - file
The brief approach using a single regexp for all array contents:
$ array=('snow' 'dog' 'african elephant')
$ printf '%s\n' "${array[@]}" | awk -F, 'NR==FNR{r=r s $0; s="|"; next} $2~r' - example.csv
s,dog,34
1,african elephant,gd
H,snowman,8
Or if you prefer string comparisons:
$ cat tst.sh
#!/usr/bin/env bash
array=('snow' 'dog' 'african elephant')
printf '%s\n' "${array[@]}" |
awk -F',' '
NR==FNR {
    array[$0]
    next
}
{
    for (val in array) {
        if ( index($2,val) ) { # or $2 ~ val for a regexp match
            print
            next
        }
    }
}
' - example.csv
$ ./tst.sh
s,dog,34
1,african elephant,gd
H,snowman,8
This prints no line from the csv file whose column 5 exactly matches an element of the array:
echo "${array[@]}" | awk 'FNR==NR{len=split($0,a," "); next} {for(i=1;i<=len;i++) {if(a[i]==$5){next}} print}' FS=',' - file

Can someone help me get the average of a column using awk, with a condition on another column?

awk -F, '{if ($2 == 0) awk '{ total += $3; count++ } END { print total/count }' CLN_Tapes_LON; }' /tmp/CLN_Tapes_LON
awk: {if ($2 == 0) awk {
awk: ^ syntax error
bash: count++: command not found
Just for fun, let's look at what's wrong with your original version and transform it into something that works, step by step. Here's your initial version (I'll call it version 0):
awk -F, '{if ($2 == 0) awk '{ total += $3; count++ } END { print total/count }' CLN_Tapes_LON; }' /tmp/CLN_Tapes_LON
The -F, sets the field separator to be the comma character, but your later comment seems to indicate that the columns (fields) are separated by spaces. So let's get rid of it; whitespace-separation is what awk expects by default. Version 1:
awk '{if ($2 == 0) awk '{ total += $3; count++ } END { print total/count }' CLN_Tapes_LON; }' /tmp/CLN_Tapes_LON
You seem to be attempting to nest a call to awk inside your awk program? There's almost never any call for that, and this wouldn't be the way to do it anyway. Let's also get rid of the mismatched quotes while we're at it: note in passing that you cannot nest single quotes inside another pair of single quotes that way: you'd have to escape them somehow. But there's no need for them at all here. Version 2:
awk '{if ($2 == 0) { total += $3; count++ } END { print total/count } }' /tmp/CLN_Tapes_LON
This is close but not quite right: the END block is only executed when all lines of input are finished processing: it doesn't make sense to have it inside an if. So let's move it outside the braces. I'm also going to tighten up some whitespace. Version 3:
awk '{if ($2==0) {total+=$3; count++}} END{print total/count}' /tmp/CLN_Tapes_LON
Version 3 actually works, and you could stop here. But awk has a handy way of specifying that a block of code should run only against lines that match a condition: 'condition { code }'. So yours can more simply be written as:
awk '$2==0 {total+=$3; count++} END{print total/count}' /tmp/CLN_Tapes_LON
... which, of course, is pretty much exactly what John1024 suggested.
$ awk '$2 == 0 { total += $3; count++;} END { print total/count; }' CLN_Tapes_LON
3
This assumes that your input file looks like:
$ cat CLN_Tapes_LON
CLH040 0 3
CLH041 0 3
CLH042 0 3
CLH043 0 3
CLH010 1 0
CLH011 1 0
CLH012 1 0
CLH013 1 0
CLH130 1 40
CLH131 1 40
CLH132 1 40
CLH133 1 40
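One thing worth guarding against, in case your real data might contain no rows with 0 in column 2: total/count in the END block would then divide by zero. A sketch with a guard:
awk '$2 == 0 { total += $3; count++ } END { if (count) print total / count; else print "no matching rows" }' CLN_Tapes_LON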
Thought I'd try to do this without awk. Awk is clearly the better choice, but it's still a one-liner.
bc<<<"($(grep ' 0 ' file|tee >(wc -l>i)|cut -d\ -f3|tr '\n' '+')0)/"$(<i)
3
It extracts lines with a 0 in the second column with grep. This is passed to tee, which sends it both to wc -l to count the lines and to cut to extract the third column. tr replaces the newlines with "+", and the resulting sum is put over the number of lines (i.e., "12 / 4"). This is then passed to bc.

Edit header file with awk

I have a whitespace-separated-value file; I need to convert it into:
header=tab separated,
records=" ; " separated (space-semicolon-space)
What I'm doing now is:
cat ${original} | awk 'END {FS=" "} { for(i=1; i<=NR; i++) {if (i==1) { OFS="\t"; print $0; } else { OFS=";" ;print $0; }}}' > ${new}
But it is only partly working. First, it produces millions of lines, while the original file has about 90000.
Second, the header, which should be modified here:
if (i==1) { OFS="\t"; print $0; }
is not modified at all.
Another option would be to use sed; I can get that job done partially, but again the header remains untouched:
cat ${original} | sed 's/\t/ ;/g' > ${new}
This line should change all the separators in the file:
awk -F'\t' -v OFS=";" '$1=$1' file
This will leave the header untouched:
awk -F'\t' -v OFS=";" 'NR>1{$1=$1}1' file
This will only change the header line:
awk -F'\t' -v OFS=";" 'NR==1{$1=$1}1' file
You could paste some example input to let us know why your header was not modified.
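For what the question literally asks (a tab-separated header and " ; "-separated data rows from whitespace-separated input), a sketch along these lines might work, reusing the $original and $new variables from your script:
awk '
NR == 1 { OFS = "\t" }    # header line: join fields with tabs
NR > 1  { OFS = " ; " }   # data lines: join fields with " ; "
{ $1 = $1; print }        # touch $1 so awk rebuilds the record with the new OFS
' "${original}" > "${new}"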

awk command to split nth field

I am learning AWK and was trying some exercises on built-in string functions.
Here's my exercise:
I have a file containing as below
RecordType:83
1,2,3,a|x|y|z,4,5
And my desired output is as below:
RecordType:83
1,2,3,a,4,5
1,0,0,x,4,5
1,0,0,y,4,5
1,0,0,z,4,5
I wrote an awk command for the above output.
awk -F',' '$1 ~ /RecordType:83/{print $0}
$1 == 1{
split($4,splt,"|")
for(i in splt)
{
if(i==1)
print $1,$2,$3,splt[i],$5,$6
else
print $1,0,0,splt[i],$5,$6
}
}' OFS=, file_name
The above command looks so clumsy. Is there any way of minimizing the command?
Thanks in advance
The shortest possible one-liner I could manage:
awk -F, 'NR>1{n=split($4,a,"|");for(;i++<n;){$4=a[i];print;$2=$3=0}}NR==1' OFS=, file
RecordType:83    
1,2,3,a,4,5
1,0,0,x,4,5
1,0,0,y,4,5
1,0,0,z,4,5
The much more readable script (recommended):
BEGIN {
FS=OFS="," # Comma delimiter
}
NR==1 { # If the first line in file
print $0 # Print the whole line
next # Skip to next line
}
{
n=split($4,a,"|") # Split field four on |
for(i=1;i<=n;i++) # For each sub-field
print $1,i==1?$2OFS$3:"0"OFS"0",a[i],$5,$6 # Print the output
}
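If you save that readable version as, say, tst.awk (the file name is just an assumption), you would run it with:
awk -f tst.awk file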
Another, shorter one-liner:
awk -F, -v OFS="," 'NR>1{n=split($4,a,"|");while(++i<=n){$4=a[i];print;$2=$3=0}}NR==1' file
with your example:
kent$ awk -F, -v OFS="," 'NR>1{n=split($4,a,"|");while(++i<=n){$4=a[i];print;$2=$3=0}}NR==1' file
RecordType:83
1,2,3,a,4,5
1,0,0,x,4,5
1,0,0,y,4,5
1,0,0,z,4,5

How to print out a specific field in AWK?

A very simple question, to which I found no answer. How do I print out a specific field in awk?
awk '/word1/' will print out the whole line, when I need just word1. Or I need a chain of patterns (word1 + word2) to be printed out from a text.
Well, if the pattern is a single word (which you want to print, and which can't contain FS, the input field separator), why not:
awk -v MYPATTERN="INSERT_YOUR_PATTERN" '$0 ~ MYPATTERN { print MYPATTERN }' INPUTFILE
If your pattern is a regex:
awk -v MYPATTERN="INSERT_YOUR_PATTERN" '$0 ~ MYPATTERN { print gensub(".*(" MYPATTERN ").*","\\1","1",$0) }' INPUTFILE
If your pattern must be checked in every single field:
awk -v MYPATTERN="INSERT_YOUR_PATTERN" '$0 ~ MYPATTERN {
    for (i=1;i<=NF;i++) {
        if ($i ~ MYPATTERN) { print "Field " i " in " NR " row matches: " MYPATTERN }
    }
}' INPUTFILE
Modify any of the above to your taste.
The fields in awk are represented by $1, $2, etc:
$ echo this is a string | awk '{ print $2 }'
is
$0 is the whole line, $1 is the first field, $2 is the next field (or blank),
$NF is the last field, $(NF - 1) is the 2nd-to-last field, etc.
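For example (continuing with the same sample line):
$ echo this is a string | awk '{ print $NF, $(NF-1) }'
string a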
EDIT (in response to comment).
You could try:
awk '/crazy/{ print substr( $0, match( $0, "crazy" ), RLENGTH )}'
I know you can do this with awk; an alternative would be:
sed -nr "s/.*(PATTERN_TO_MATCH).*/\1/p" file
Or you can use grep -o.
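For example (with a hypothetical pattern and file name), it prints only the matched part of each matching line:
grep -o 'word1' INPUTFILE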
Something like this perhaps:
awk '{split("bla1 bla2 bla3",a," "); print a[1], a[2], a[3]}'