I have a bash script calling awk. I want to test whether ukeys contains any element that is also in the array kaggr, and if so set display=1.
My problem is how to skip the array test when keywords or ukeys is empty. What can I do?
awk -v ukeys="$keys" -v beg_ere="$beg_ere" -v pn_ere="$pn_ere" -v end_ere="$end_ere" \
'$0 ~ beg_ere {
    title=gensub(beg_ere, "\\2", 1, $0);
    subtitle=gensub(beg_ere, "\\3", 1, $0);
    keywords=gensub(beg_ere, "\\4", 1, $0);
    nk = split(keywords, kaggr, ",");
    nu = split(ukeys, uaggr, ",");
    for (i in uaggr) {
        found=0;
        for (j in kaggr) {
            if (uaggr[i] == kaggr[j]) { found=1; break; }
        }
        if (found == 1) { display=1; break; }
    }
    next
}
$0 ~ end_ere { display=0 ; print "" }
display { sub(pn_ere, "") ; print }
' "$filename"
I am passing keys="resource" to match values in keywords. But when keys is empty, I do not want to match anything in keywords.
Just to clarify: if either "ukeys" or "keywords" is empty, you want to skip the for loop? Would this approach solve your issue?
awk -v ukeys="$keys" -v beg_ere="$beg_ere" -v pn_ere="$pn_ere" -v end_ere="$end_ere" \
'$0 ~ beg_ere {
    title=gensub(beg_ere, "\\2", 1, $0);
    subtitle=gensub(beg_ere, "\\3", 1, $0);
    keywords=gensub(beg_ere, "\\4", 1, $0);
    nk = split(keywords, kaggr, ",");
    nu = split(ukeys, uaggr, ",");
    if (length(ukeys) > 0 && length(keywords) > 0) { # if either string is empty, skip the comparison
        for (i in uaggr) {
            found=0;
            for (j in kaggr) {
                if (uaggr[i] == kaggr[j]) {
                    found=1; break
                }
            }
            if (found == 1) {
                display=1; break
            }
        }
    }
    next
}
$0 ~ end_ere { display=0 ; print "" }
display { sub(pn_ere, "") ; print }
' "$filename"
I'm trying to parse two csv files that contain thousands of rows. The data is to be matched and appended based solely on the data in the first column. I currently have it parsing the files and outputting to 3 files:
1 - key matched
2 - file1 only
3 - file2 only
The issue I am having is that once it makes one match it moves on to the next line rather than finding the other entries. For the data in question I would rather output multiple lines containing some duplicates than miss any data. (The name column, for instance, varies depending on who entered the data.)
INPUT FILES
file1.csv
topic,group,name,allow
fishing,boaties,dave,yes
fishing,divers,steve,no
flying,red,luke,yes
walking,red,tom,yes
file2.csv
Resource,name,email,funny
fishing,frank,frank#home.com,no
swiming,lee,lee#wallbanger.com,no
driving,lee,lee#wallbanger.com,no
CURRENT OUTPUT
key matched
topic,group,name,allow,Resource,name,email,funny
fishing,divers,steve,no,fishing,frank,frank#home.com,no
file1_only
topic,group,user,allow
fishing,divers,steve,no
flying,red,luke,yes
walking,red,tom,yes
file2_only
Resource,user,email,funny
swiming,lee,lee#wallbanger.com,no
driving,lee,lee#wallbanger.com,no
Expected Output
key matched
topic,group,name,allow,Resource,name,email,funny
fishing,divers,steve,no,fishing,frank,frank#home.com,no
fishing,boaties,dave,yes,fishing,frank,frank#home.com,no
file1_only
topic,group,user,allow
flying,red,luke,yes
walking,red,tom,yes
file2_only
Resource,user,email,funny
swiming,lee,lee#wallbanger.com,no
driving,lee,lee#wallbanger.com,no
So for every key in file1 column 1, it needs to output/append every matching row from file2 column 1.
This is my current awk filter. I'm guessing I need to add a loop, if possible?
BEGIN { FS=OFS="," }
FNR==1 { next }
{ key = $1 }
NR==FNR {
    file1[key] = $0
    next
}
key in file1 {
    print file1[key], $0 > "./out_combined.csv"
    delete file1[key]
    next
}
{
    print > "./out_file2_only.csv"
}
END {
    for (key in file1) {
        print file1[key] > "./out_file1_only.csv"
    }
}
$ cat tst.awk
BEGIN { FS=OFS="," }
FNR==1 {
    if ( NR==FNR ) {
        file1hdr = $0
    }
    else {
        print file1hdr > "./out_file1_only.csv"
        print > "./out_file2_only.csv"
        print file1hdr, $0 > "./out_combined.csv"
    }
    next
}
{ key = $1 }
NR==FNR {
    file1[key,++cnt[key]] = $0
    next
}
{
    file2[key]
    if ( key in cnt ) {
        for ( i=1; i<=cnt[key]; i++ ) {
            print file1[key,i], $0 > "./out_combined.csv"
        }
    }
    else {
        print > "./out_file2_only.csv"
    }
}
END {
    for ( key in cnt ) {
        if ( !(key in file2) ) {
            for ( i=1; i<=cnt[key]; i++ ) {
                print file1[key,i] > "./out_file1_only.csv"
            }
        }
    }
}
$ awk -f tst.awk file1.csv file2.csv
$ head out_*
==> out_combined.csv <==
topic,group,name,allow,Resource,name,email,funny
fishing,boaties,dave,yes,fishing,frank,frank#home.com,no
fishing,divers,steve,no,fishing,frank,frank#home.com,no
==> out_file1_only.csv <==
topic,group,name,allow
flying,red,luke,yes
walking,red,tom,yes
==> out_file2_only.csv <==
Resource,name,email,funny
swiming,lee,lee#wallbanger.com,no
driving,lee,lee#wallbanger.com,no
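The part that makes the difference is that file1 rows are stored per key with a counter, as file1[key,1], file1[key,2], ..., and nothing is deleted after the first match, so every file1 row for a key gets joined with every file2 row for that key. Stripped of the header and file handling, the idiom is just this (a minimal sketch over a hypothetical whitespace-delimited input whose key is the first field):

# remember every row sharing a key, then print them grouped by key
{ rows[$1, ++cnt[$1]] = $0 }
END {
    for (key in cnt)
        for (i = 1; i <= cnt[key]; i++)
            print key ":", rows[key, i]
}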
NR>NRMIN{
    if($3 == "Leu") {
        if($4 == "CD1" || $4 == "HD11" || $4 == "HD12" || $4 == "HD13") {
            next;
        }
    }
    elseif($3 == "Val") {
        if($4 == "CD1" || $4 == "HD11" || $4 == "HD12" || $4 == "HD13") {
            next;
        }
    }
    else {
        print;
    }
}
I intend to selectively print lines of a space-delimited file.
Please let me know why the above code gives an error when run as gawk -f FILE_Modifier.awk NRMIN = 90 FILE > NEWFILE
Error Message
gawk: FILE_Modifier.awk:7: elseif($3 == "Val") {
gawk: FILE_Modifier.awk:7: ^ syntax error
gawk: FILE_Modifier.awk:12: else {
gawk: FILE_Modifier.awk:12: ^ syntax error
There is no elseif in awk; it is spelled else if, as two separate words. Anyway, you can rewrite the script as just:
awk -v nrmin=90 '(NR > nrmin) && !(($3 ~ /^(Leu|Val)$/) && ($4 ~ /^(CD1|HD11|HD12|HD13)$/))' file
Don't use all-upper-case variable names, to avoid clashes with builtin names, and do set variables up front using -v unless you have a specific reason not to.
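If you'd rather keep the original if/else layout, here is a sketch using the correct else if spelling, invoked as gawk -v nrmin=90 -f FILE_Modifier.awk FILE > NEWFILE. Note that, unlike the original's trailing else { print }, this also prints Leu/Val lines whose fourth field isn't one of the listed atoms, which matches the one-liner above:

NR > nrmin {
    if ($3 == "Leu") {
        if ($4 == "CD1" || $4 == "HD11" || $4 == "HD12" || $4 == "HD13") next
    } else if ($3 == "Val") {
        if ($4 == "CD1" || $4 == "HD11" || $4 == "HD12" || $4 == "HD13") next
    }
    print
}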
this script is supposed to read in csv in the following format
Name,Date,ID,Number
John Smith,09/05/2015,s,999-999-99
Mike Smith,09/06/2015,s,989-979-99
Fred Smith,09/03/2015,s,781-999-99
The first line is a header and is supposed to be skipped. When the script runs, every .csv file seems to move to the GoodFile directory, which I think is a false positive. I fudged the validation steps, e.g. for the 3rd one I entered QE instead of S or E (it has to be S or E), and it doesn't even hit this code; I am not sure why:
for(linenum = 1; linenum <nr; linenum++) {
    if (length(dataArr[linenum,3]) == 0){
        printf "Failed 3rd a validation"
        exit 1
#!/bin/sh
for file in test/*.csv ; do
    awk -F',' '
    # skip the header and blank lines
    NR = 1 || NF == 0 {next}
    # save the data into a 2d array called dataArr
    { for (i=1; i <= NF; i++) dataArr[++nr,i] = $i }
    END {
        STATUS = "GOOD"
        # verify column 1
        for( linenum=1; linenum <= nr; linenum++) {
            if (length(dataArr[linenum,1]) == 0){
                printf "Failed 1st validation"
                exit 1
            }
        }
        printf "file: %s, verify column 1, STATUS: %s\n", FILENAME, STATUS
        # verify column 2
        for(linenum = 1; linenum <nr; linenum++) {
            if (length(dataArr[linenum,2]) == 0){
                printf "Failed 2nd a validation"
                exit 1
            }
            if ((dataArr[linenum,2]) !~ /^(0[1-9]|1[012])[- /.](0[1-9]|[12][0-9]|3[01])[- /.](19|20)[0-9][0-9]$/){
                printf "Failed 2nd b validation"
                exit 1
            }
        }
        # verify column 3
        for(linenum = 1; linenum <nr; linenum++) {
            if (length(dataArr[linenum,3]) == 0){
                printf "Failed 3rd a validation"
                exit 1
            }
            # has to be either S or E
            if ((dataArr[linenum,3]) !~ /^[SE]$/){
                printf "Failed 3rd b validation"
                exit 1
            }
        }
        # verify column 4
        for(linenum = 1; linenum <nr; linenum++) {
            # length has to be between 9 AND 11
            if ((length(dataArr[linenum,4])) < 9 || (length(dataArr[linenum,4]) > 11)){
                printf "Failed 4th validation"
                exit 1
            }
        }
    }' "$file"
    if [[ $? -eq 0 ]]; then
        # "good" status
        mv ${file} test1/goodFile
    else
        # "bad" status
        mv ${file} test1/badFile
    fi
done
The reason every file looks "good" is this line:
NR = 1 || NF == 0 {next}
That single = is an assignment, not a comparison: the pattern assigns 1 to NR and evaluates as true for every record, so next fires on every line, dataArr is never populated, and the validation loops in END have nothing to check. It needs to be NR == 1. Beyond that, you don't need to save the file in an array at all; all you need is:
awk -F',' '
# skip the header and blank lines
NR == 1 || NF == 0 {next}
$1 == "" { fails1++ }
$2 == "" { fails2a++ }
$2 !~ /^(0[1-9]|1[012])[- /.](0[1-9]|[12][0-9]|3[01])[- /.](19|20)[0-9][0-9]$/ { fails2b++ }
$3 == "" { fails3a++ }
$3 !~ /^[SE]$/ { fails3b++ }
length($4) < 9 || length($4) > 11 { fails4++ }
END {
if (fails1) { print "Failed 1st validation"; exit 1 }
if (fails2a) { print "Failed 2nd a validation"; exit 1 }
if (fails2b) { print "Failed 2nd b validation"; exit 1 }
if (fails3a) { print "Failed 3rd a validation"; exit 1 }
if (fails3b) { print "Failed 3rd b validation"; exit 1 }
if (fails4) { print "Failed 4th validation"; exit 1 }
}' "$file"
To print the failure messages to stderr instead of stdout, btw, the portable way would be:
if (fails4) { print "Failed 4th validation" | "cat>&2"; exit 1 }
Here's the version if you don't care which error is reported first when the file contains multiple errors:
awk -F',' '
# skip the header and blank lines
NR == 1 || NF == 0 {next}
$1 == "" { print "Failed 1st validation"; exit 1 }
$2 == "" { print "Failed 2nd a validation"; exit 1 }
$2 !~ /^(0[1-9]|1[012])[- /.](0[1-9]|[12][0-9]|3[01])[- /.](19|20)[0-9][0-9]$/ { print "Failed 2nd b validation"; exit 1 }
$3 == "" { print "Failed 3rd a validation"; exit 1 }
$3 !~ /^[SE]$/ { print "Failed 3rd b validation"; exit 1 }
length($4) < 9 || length($4) > 11 { print "Failed 4th validation"; exit 1 }
' "$file"
I have an awk file named throughput.awk to calculate throughput from trace files in NS-2.
BEGIN {
    FS="[[:space:]]|_"
}
{
    action = $1;
    node_id = $4;
    time = $2;
    dest = $6;
    app = $10;
    pkt_size = $11;
    if ( action == "r" && dest == "MAC" && app == "cbr" && time > 10 && (node_id == 1)) {
        sum_ = sum_ + pkt_size;
    }
}
END {
}
What I want is to calculate each node's throughput, for multiple nodes, from the Tcl script, maybe like this:
for {set node 1} {$node < $N} {incr node} {
    exec awk -f throughput.awk test.tr
}
so that the "node" value used inside the awk script can be changed from Tcl. How do I do that?
Just use the -v parameter:
for {set node 1} {$node < $N} {incr node} {
    exec awk -v node=$node -f throughput.awk test.tr
}
And inside awk
if ( action == "r" && dest == "MAC" && app == "cbr" && time > 10 && (node_id == node)) {
    sum_ = sum_ + pkt_size;
}
Before the "=", node is the name of the variable inside awk; its value ($node) is the node value from Tcl.
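If you also want the script to report a number per node, the empty END block is the place for it; a minimal sketch (it only prints the accumulated total from sum_ -- turning that into an actual throughput figure depends on your packet-size units and measurement window, so adjust to your simulation):

END {
    # hypothetical report: total cbr payload received at this node after t=10s
    printf "node %s: %d bytes\n", node, sum_
}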