awk: print selected columns plus the text in double quotes

I have this log file:
11/01/2023 (17:01) [INFO] => root : "get()" from wrapper.py (line:156) in get()
11/01/2023 (17:01) [INFO] => root : "get() : 200 " from wrapper.py (line:166) in get()
11/01/2023 (17:01) [ERROR] => root : "<!> initialisation error : Expecting value: line 1 column 1 (char 0)" from main.py (line:453) in <module>()
With awk, I want to get, for example, columns $1 and $3 AND the text in double quotes, like this:
11/01/2023 [INFO] "get()"
11/01/2023 [INFO] "get() : 200 "
11/01/2023 [ERROR] "<!> initialisation error : Expecting value: line 1 column 1 (char 0)"
For the columns, that part is OK:
awk '{print $1, $3}' mylog.log
And for the quoted text I have this, but then I lose the other column(s), like $1 and $3:
$ awk -F\" '{print $2}' mylog.log
get()
get() : 200
<!> initialisation error : Expecting value: line 1 column 1 (char 0)
Does anyone have an idea, please?
Thanks
F.

Using GNU awk, you can set FPAT so that a field is either a double-quoted string or any other non-quoted, non-whitespace run:
awk -v FPAT='"[^"]*"|[^"[:blank:]]+' '{print $1, $3, $7}' file
11/01/2023 [INFO] "get()"
11/01/2023 [INFO] "get() : 200 "
11/01/2023 [ERROR] "<!> initialisation error : Expecting value: line 1 column 1 (char 0)"
Or use this solution, which works in any version of awk:
awk 'match($0, /"[^"]*"/) {
print $1, $3, substr($0, RSTART, RLENGTH)
}' file
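As a sanity check, the match()-based version can be run on one sample line from the question; the second command below is an optional extra (not part of the answer above) that shifts the substr() offsets by one character on each side to drop the surrounding quotes:

```shell
# Feed one sample line from the question through the portable match()
# solution; the second awk prints the quoted text without the quotes.
line='11/01/2023 (17:01) [INFO] => root : "get() : 200 " from wrapper.py (line:166) in get()'

with_quotes=$(printf '%s\n' "$line" |
  awk 'match($0, /"[^"]*"/) { print $1, $3, substr($0, RSTART, RLENGTH) }')

without_quotes=$(printf '%s\n' "$line" |
  awk 'match($0, /"[^"]*"/) { print $1, $3, substr($0, RSTART + 1, RLENGTH - 2) }')

echo "$with_quotes"
echo "$without_quotes"
```

Note that the trailing space inside "get() : 200 " is preserved, since it is part of the quoted text.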

Related

Remove pattern from a column if it is present in another one

I have this file:
>AX-89916436-Affx-G-[A/G]
TTGTCCGAGAGTGACGTCAATCCGCA
>AX-89916437-Affx-A-[A/G]
TGTGTGGAAACTCCG
>AX-89916438-Affx-C-[A/C]
GAAGTACGGTAACAT
>AX-89916440-Affx-T-[G/T]
AGTTGATGGTGTATGTGTGTCTTT
From the last field, [X/X], I would like to remove the letter that is already present in the 4th field, to get something like this:
>AX-89916436-Affx-G-A
TTGTCCGAGAGTGACGTCAATCCGCA
>AX-89916437-Affx-A-G
TGTGTGGAAACTCCG
>AX-89916438-Affx-C-A
GAAGTACGGTAACAT
>AX-89916440-Affx-T-G
AGTTGATGGTGTATGTGTGTCTTT
So far I have this (non-working) attempt:
awk -F'-' '
match($0, /\[[A-Z]\/[A-Z]]/) {m = substr($0, RSTART, RLENGTH); if(/^>/ && $NF~/m/); print ... }'
$ awk 'BEGIN{FS=OFS="-"} />/{gsub("[][/]",""); sub($(NF-1),"",$NF)}1' file
>AX-89916436-Affx-G-A
XXXXXXX
>AX-89916437-Affx-A-G
XXXXXXXXXXX
>AX-89916438-Affx-C-A
XXXXXXX
>AX-89916440-Affx-T-G
XXXXXXX
Here is another awk:
awk 'BEGIN {FS=OFS="-"} NF>1 {gsub("[][/" $(NF-1) "]", "", $NF) } 1' file
>AX-89916436-Affx-G-A
XXXXXXX
>AX-89916437-Affx-A-G
XXXXXXXXXXX
>AX-89916438-Affx-C-A
XXXXXXX
>AX-89916440-Affx-T-G
XXXXXXX
With your shown samples, please try the following awk code. A simple explanation: set FS and OFS to -, and in the main section check whether the line starts with > and the 5th field matches the regex \[[A-Z]\/[A-Z]]; if so, use gsub to remove from the 5th field whatever characters are present in the 4th field. The trailing 1 is awk's idiomatic way of printing the current (edited or unedited) line.
awk '
BEGIN{ FS=OFS="-" }
/^>/ && $5 ~ /\[[A-Z]\/[A-Z]]/{
gsub("[][/"$4"]", "", $5)
}
1' Input_file
Using sed
$ sed -E s'#([A-Z])-\[(\1|([A-Z]))/(\1|([A-Z]))]#\1-\3\5#' input_file
>AX-89916436-Affx-G-A
TTGTCCGAGAGTGACGTCAATCCGCA
>AX-89916437-Affx-A-G
TGTGTGGAAACTCCG
>AX-89916438-Affx-C-A
GAAGTACGGTAACAT
>AX-89916440-Affx-T-G
AGTTGATGGTGTATGTGTGTCTTT
You can use
awk 'BEGIN{FS=OFS="-"} /^>/ && $5 ~ /\[[A-Z]\/[A-Z]]/{gsub("[][/"$4"]", "", $5);}1' file
Details:
BEGIN{FS=OFS="-"} - set input/output field separator to -
/^>/ && $5 ~ /\[[A-Z]\/[A-Z]]/ - if the string starts with > and Field 5 contains [ + uppercase letter + / + uppercase letter + ] substring...
{gsub("[][/"$4"]", "", $5);} - then remove from Field 5 ], [, / and Field 4 chars
1 - fires the default print action.
See the online demo:
#!/bin/bash
s='>AX-89916436-Affx-G-[A/G]
XXXXXXX
>AX-89916437-Affx-A-[A/G]
XXXXXXXXXXX
>AX-89916438-Affx-C-[A/C]
XXXXXXX
>AX-89916440-Affx-T-[G/T]
XXXXXXX'
awk 'BEGIN{FS=OFS="-"} /^>/ && $5 ~ /\[[A-Z]\/[A-Z]]/{gsub("[][/"$4"]", "", $5);}1' <<< "$s"
Output:
>AX-89916436-Affx-G-A
XXXXXXX
>AX-89916437-Affx-A-G
XXXXXXXXXXX
>AX-89916438-Affx-C-A
XXXXXXX
>AX-89916440-Affx-T-G
XXXXXXX
much better now :
>AX-89916436-Affx-G-A
TTGTCCGAGAGTGACGTCAATCCGCA
>AX-89916437-Affx-A-G
TGTGTGGAAACTCCG
>AX-89916438-Affx-C-A
GAAGTACGGTAACAT
>AX-89916440-Affx-T-G
AGTTGATGGTGTATGTGTGTCTTT

Awk column with pattern array

Is it possible to do this, but use an actual array of strings where it says "array"?
array=(cat
dog
mouse
fish
...)
awk -F "," '{ if ( $5!="array" ) { print $0; } }' file
I would like to use spaces in some of the strings in my array.
I would also like to be able to match partial matches, so "snow" in my array would match "snowman"
It should be case sensitive.
Example csv
s,dog,34
3,cat,4
1,african elephant,gd
A,African Elephant,33
H,snowman,8
8,indian elephant,3k
7,Fish,94
...
Example array
snow
dog
african elephant
Expected output
s,dog,34
H,snowman,8
1,african elephant,gd
Cyrus posted this, which works well, but it doesn't allow spaces in the array strings and won't do partial matches.
echo "${array[@]}" | awk 'FNR==NR{len=split($0,a," "); next} {for(i=1;i<=len;i++) {if(a[i]==$2){next}} print}' FS=',' - file
The brief approach using a single regexp for all array contents:
$ array=('snow' 'dog' 'african elephant')
$ printf '%s\n' "${array[@]}" | awk -F, 'NR==FNR{r=r s $0; s="|"; next} $2~r' - example.csv
s,dog,34
1,african elephant,gd
H,snowman,8
Or if you prefer string comparisons:
$ cat tst.sh
#!/bin/env bash
array=('snow' 'dog' 'african elephant')
printf '%s\n' "${array[@]}" |
awk -F',' '
NR==FNR {
array[$0]
next
}
{
for (val in array) {
if ( index($2,val) ) { # or $2 ~ val for a regexp match
print
next
}
}
}
' - example.csv
$ ./tst.sh
s,dog,34
1,african elephant,gd
H,snowman,8
This omits every line of the csv file whose column 5 matches an element of the array, and prints the rest:
echo "${array[@]}" | awk 'FNR==NR{len=split($0,a," "); next} {for(i=1;i<=len;i++) {if(a[i]==$5){next}} print}' FS=',' - file
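A self-contained version of the regex-union approach from the first answer (with the array flattened to one pattern per line in a temporary file, and the sample csv reduced to a few rows) can be tried like this:

```shell
# Build the pattern list and a small csv, then join the patterns with "|"
# into one regex and keep any row whose second column matches it.
printf '%s\n' 'snow' 'dog' 'african elephant' > patterns.txt
cat > example.csv <<'EOF'
s,dog,34
3,cat,4
1,african elephant,gd
A,African Elephant,33
H,snowman,8
EOF

out=$(awk -F, 'NR==FNR{r=r s $0; s="|"; next} $2~r' patterns.txt example.csv)
echo "$out"
rm -f patterns.txt example.csv
```

As required, the match is case-sensitive (African Elephant is dropped) and partial (snow matches snowman).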

Split file into good and bad data

I have a file file1.txt the data is like below
HDR|2016-10-24
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME|DNIS_CODE|1
DTL|10000|SRC_ORD_ID|SRC_ORD_TYPE_CD|SRC_ORD_STAT_CD|SRC_ACCT_ID|SRC_DISC_RSN_CD|1858-11-17|1858-11-18|1858-11-19|1858-11-20|1858-11-21|1858-11-22|ORD_STATUS_CD|ORDER_CREA_USER_ID|REGION_NM|STATE_CD|ORDER_TYPE|BILL_NAME|FEED_TYPE_CD|101|CREA_APPLN_NAME|BILL_TELE_NUM|CUST_CD|DIGITAL_LIFE_FLAG|CUSTOMER_TYPE_CD|VENDOR_NAME|SITE_NAME
TRL|11
Now I want to create two files, good and bad. A line should go to the good file when all 29 separators (pipes) are present; when there are fewer or more than 29 separators, it should go to the bad file.
IN_FILE=$1
FNAME=`echo $IN_FILE | cut -d"." -f1 | awk '{$1 = substr($1, 1, 26)} 1'`
DFNAME=$FNAME"_Data.txt"
DGFNAME=$FNAME"_Good.txt"
DBFNAME=$FNAME"_Bad.txt"
TFNAME=$FNAME"_Trl.txt"
cat $IN_FILE | awk -v DGFNM="$DGFNAME" -v DBFNM="$DBFNAME" '
{ {FS="|"}
split($0, chars, "|")
if(chars[1]=="DTL")
{
NSEP=`awk -F\| '{print NF}'`
if [ "$NSEP" = "29" ]
then
print substr($0,5) >> DGFNM
else
print $0 >> DBFNM
fi
}
}'
But I am getting some error on this.
awk: cmd. line:5: NSEP=`awk -F\| {print
awk: cmd. line:5: ^ invalid char '`' in expression
Looks like you want:
awk -F'|' -v DGFNM="$DGFNAME" -v DBFNM="$DBFNAME" '
$1 == "DTL" {
if (NF == 29) {
print substr($0, 5) > DGFNM
} else {
print > DBFNM
}
}
' "$IN_FILE"
Your code has two main problems:
it uses shell syntax (such as `....` and [ ... ]) inside an awk script, which is not supported.
it performs operations explicitly that awk performs implicitly by default.
Also:
it is best to avoid all-uppercase variable names - both in the shell and in awk scripts - because they can conflict with reserved variables.
As @tripleee points out in a comment, you can pass filenames directly to awk (as in the above code) - no need for cat and a pipeline.
In essence:
$ awk -F\| 'NF==30 {print > "good.txt"; next}{print > "bad.txt"}' file1.txt
29 separators means 30 fields, so just check NF.
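The same NF test can be verified on a scaled-down sample, with 3 fields standing in for the real 30:

```shell
# Three records: the first has the "right" number of fields (3 here,
# 30 in the real data); the other two are short and long respectively.
printf '%s\n' 'a|b|c' 'a|b' 'a|b|c|d' > sample.txt

awk -F'|' 'NF==3 {print > "good.txt"; next} {print > "bad.txt"}' sample.txt

good=$(cat good.txt)
bad=$(cat bad.txt)
echo "$good"
echo "$bad"
rm -f sample.txt good.txt bad.txt
```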

How to count spaces between columns

How can I count the number of spaces (16) between S1, and // in the following line:
S1,                // name
One way:
awk -F '//' '{ n = gsub(/ /, "", $1); print n }'
Test:
echo 'S1,                // name' | awk -F '//' '{ n = gsub(/ /, "", $1); print n }'
Results:
16
If you really want awk then you can build on the following.
$ echo "S1, // name" | awk '{x=gsub(/ /," ",$0); print x}'
17
gsub returns the number of replacements made. Obviously this regex will also find and count other spaces but you get the point.
Or try something like this:
echo "S1, // name" |
awk -F[,/] ' { for (i=1;i<=NF;i++) print "$"i " is \""$i"\" of length, " length($i);}'
Test:
$ echo "S1, // name" | awk -F[,/] ' { for (i=1;i<=NF;i++) print "$"i " is \""$i"\" of length, " length($i);}'
$1 is "S1" of length, 2
$2 is " " of length, 16
$3 is "" of length, 0
$4 is " name" of length, 5
Count all spaces between S1, and // only with awk:
$ echo 'S1,                // name' | awk -F'[,/]' '{print length($2)}'
16
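To see why length($2) is exactly the space count, the sample line can be rebuilt with printf (using the 16 spaces the question describes) and measured:

```shell
# %16s with an empty argument pads to 16 spaces; -F'[,/]' then makes
# that run of spaces the whole of field 2.
line=$(printf 'S1,%16s// name' '')
n=$(printf '%s\n' "$line" | awk -F'[,/]' '{print length($2)}')
echo "$n"
```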
Or a method based on fedorqui's comment:
$ echo 'S1,                // name' | grep -Po '(?<=S1,) *(?=//)' | wc -L
16
Pure bash
x='S1,                // name'
x=${x#S1,}
x=${x%//*}
echo ${#x}
16

Formatting specific output from dmidecode

I was searching for a way to format output from dmidecode in a specific way, and I found the following article, which just about does what I need:
http://www.experts-exchange.com/Programming/Languages/Scripting/Shell/Q_27770556.html
I modified some of the fields that I need from the code in the answer above; this shows awk creating CSV output, with quotes, from dmidecode:
dmidecode -t 17 | awk -F: '/Size|Locator|Speed|Manufacturer|Serial Number|Part Number/{sub(/^ */,"",$2);s=sprintf("%s,\"%s\"",s,$2)}/^Memory/{print s;s=""}END{print s}' |sed -e 's/,//' | grep -iv "no module" | tr -d ' '
"4096MB","CPU0","DIMM01","1066MHz","Samsung","754C2C33","M393B5273CH0-YH9"
I need tabbed, no quotes
4096MB CPU0 DIMM01 1066MHz Samsung 754C2C33 M393B5273CH0-YH9
I am still trying to get my head around awk and would appreciate anyone showing me the appropriate modifications.
(I fixed my code above; I had previously pasted non-working syntax.)
From the link you posted, I saved the data in a file called file.txt. I noticed that records are separated by blank lines, so I used the following awk code:
awk 'BEGIN { FS=":"; OFS="\t" } /Size|Locator|Speed|Manufacturer|Serial Number|Part Number/ { gsub(/^[ \t]+/,"",$2); line = (line ? line OFS : "") $2 } /^$/ { print line; line="" }' file.txt
Results:
2048 MB XMM1 Not Specified 1333 MHz JEDEC ID 8106812F HMT125U6BFR8C-H9
No Module Installed XMM2 Not Specified Unknown JEDEC ID
2048 MB XMM3 Not Specified 1333 MHz JEDEC ID 7006C12F HMT125U6BFR8C-H9
No Module Installed XMM4 Not Specified Unknown JEDEC ID
4096 kB SYSTEM ROM Not Specified Unknown Not Specified Not Specified Not Specified
Your command line would now look like this:
dmidecode -t 17 | awk 'BEGIN { FS=":"; OFS="\t" } /Size|Locator|Speed|Manufacturer|Serial Number|Part Number/ { gsub(/^[ \t]+/,"",$2); line = (line ? line OFS : "") $2 } /^$/ { print line; line="" }' | grep -iv "no module"
EDIT: to also strip the internal space from values such as 2048 MB and 1333 MHz:
dmidecode -t 17 | awk 'BEGIN { FS=":"; OFS="\t" } /Size|Locator|Speed|Manufacturer|Serial Number|Part Number/ { if ($2 ~ /MB$|MHz$/) { gsub(/[ \t]+/,"",$2) } gsub(/^[ \t]+/,"",$2); line = (line ? line OFS : "") $2 } /^$/ { print line; line="" }' | grep -iv "no module"