syntax error in awk 'if-else if-else' with multiple operations - awk

There's a weird thing with awk conditional statements:
when running awk 'if-else if-else' with a single operation after each condition, it works fine as below:
awk 'BEGIN {a=30; \
if (a==10) print "a = 10"; \
else if (a == 20) print "a = 20"; \
else print "a = 30"}'
output:
a = 30
However, when running awk 'if-else if-else' with multiple operations (properly braced) after 'else if', a syntax error occurs:
awk 'BEGIN {a=30; \
if (a==10) print "a = 10"; \
else if (a == 20) {print "a = 20"; print "b = 20"}; \
else print "a = 30"}'
output:
awk: cmd. line:4: else print "a = 30"}
awk: cmd. line:4: ^ syntax error
Can anyone tell me whether this is an awk limitation that intrinsically disallows multiple operations in such cases, or just a syntax error of mine that can be corrected?
P.S. I looked through all the relevant posts about awk 'if else' syntax errors, but none of them addresses this issue.

Remove the semicolon after the closing brace at the end of the third line: that semicolon terminates the whole if statement, so the following 'else' has no 'if' to attach to.
awk 'BEGIN {a=30; \
if (a==10) print "a = 10"; \
else if (a == 20) {print "a = 20"; print "b = 20"} \
else print "a = 30"}'
Output: a = 30
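To make the fix easy to test, here is the same program written as a multi-line sketch without the backslash continuations (run with gawk here, but any POSIX awk should agree): a plain newline after the closing brace lets the final else still bind to the chain.

```shell
# No semicolon after the braced "else if" block; a newline is enough.
awk 'BEGIN {
  a = 30
  if (a == 10) print "a = 10"
  else if (a == 20) { print "a = 20"; print "b = 20" }
  else print "a = 30"
}'
```

This prints "a = 30", as in the corrected one-liner above.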

Show sort ordinal numbers as count in a list

I have the following input file
system info com1 command set
system info com2 command set
system command set1 information description test
system command 21-22 information description pass
system command T1-T2-T3 information description file
system command commonset information description info
and the following command
awk '/system command/&&/information description/ { gsub("\"","") ; print ++ OFS") Command = " $3}' inputfile.txt
gives me the following output
1) Command = set1
2) Command = 21-22
3) Command = T1-T2-T3
4) Command = commonset
Is there a way to count my command list not with plain numbers but with ordinal numbers, so as to get output like this?
1st) Command = set1
2nd) Command = 21-22
3rd) Command = T1-T2-T3
4th) Command = commonset
There is no built-in function to produce the ordinal suffix. We need to get our hands dirty and write this code:
awk 'function ordinal(i, mod, str) {mod = i%10; str=i; if (i~/1[1-3]$/) str=str "th"; else if (mod==1) str=str "st"; else if (mod==2) str=str "nd"; else if (mod==3) str=str "rd"; else str=str "th"; return str;} /system command/&&/information description/ { gsub(/"/,"") ; print ordinal(++i) ") Command = " $3}' file
1st) Command = set1
2nd) Command = 21-22
3rd) Command = T1-T2-T3
4th) Command = commonset
Expanded form:
awk '
function ordinal(i, mod, str) {
mod = i%10
str = i
if (i~/1[1-3]$/) # for numbers ending in 11, 12, 13
str = str "th"
else if (mod==1)
str = str "st"
else if (mod==2)
str = str "nd"
else if (mod==3)
str = str "rd"
else
str = str "th"
return str
}
/system command/&&/information description/ {
gsub(/"/,"")
print ordinal(++i) ") Command = " $3
}' file
An alternative way to implement the above function would be:
function ordinal(num, idx, sfxs) {
split("st nd rd th",sfxs," ")
idx = ( (num ~ /[123]$/) && (num !~ /1[123]$/) ? num % 10 : 4 )
return num sfxs[idx]
}
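As a quick sanity check of the alternative function, it can be driven from a BEGIN block; a sketch (the upper bound of 13 is arbitrary, chosen to cover the 11th/12th/13th special cases):

```shell
awk 'function ordinal(num, idx, sfxs) {
  split("st nd rd th", sfxs, " ")
  # 11, 12, 13 take "th" despite ending in 1, 2, 3
  idx = ( (num ~ /[123]$/) && (num !~ /1[123]$/) ? num % 10 : 4 )
  return num sfxs[idx]
}
BEGIN { for (n = 1; n <= 13; n++) print ordinal(n) }'
```

This prints 1st, 2nd, 3rd, 4th, ..., 10th, 11th, 12th, 13th, one per line.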

manipulate input file and create a new file using awk

Input
1473697,5342715,256,0.3
1473697,7028427,256,0.1
1473697,5342716,256,0.3
1473697,5342715,257,0.3
1473697,7028427,257,0.1
1473610,7028427,256,0.1
1473610,5342715,256,0.3
1473610,7028422,256,0.1
Output
1473697,256,5342715 0.3 7028427 0.1 5342716 0.3
1473697,257,5342715 0.3 7028427 0.1
1473610,256,7028427 0.1 5342715 0.3 7028422 0.1
Both FS and OFS are ",".
Is there a way to find the unique combinations of columns 1 and 3,
then print one line per combination with the details from columns 2 and 4?
It took a while to figure out what you want, but I think you're looking for:
awk '!a[$1 $3] {a[$1 $3] = $1","$3","}
{a[$1 $3] = a[$1 $3] " " $2 " " $4}
END {for(i in a) print a[i]}' FS=, input-file
or
awk '{a[$1","$3] = a[$1","$3] " " $2 " " $4}
END {for(i in a) print i","a[i]}' FS=, input-file
There are many variations on the theme.
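Two caveats worth noting: for (i in a) visits keys in an unspecified order, and in the first variant $1 $3 concatenates the two fields with no separator, so different field pairs could in principle produce the same key. A sketch that keeps FS in the key and preserves the order in which keys first appear (the question's sample input is piped in via a here-document for illustration):

```shell
awk -F, '!(($1 FS $3) in a) { keys[++n] = $1 FS $3 }    # remember first-seen order
         { a[$1 FS $3] = a[$1 FS $3] " " $2 " " $4 }    # append col 2 and col 4
         END { for (k = 1; k <= n; k++)
                 print keys[k] "," substr(a[keys[k]], 2) }' <<'EOF'
1473697,5342715,256,0.3
1473697,7028427,256,0.1
1473697,5342716,256,0.3
1473697,5342715,257,0.3
1473697,7028427,257,0.1
1473610,7028427,256,0.1
1473610,5342715,256,0.3
1473610,7028422,256,0.1
EOF
```

The substr(..., 2) trims the leading space so the output matches the requested format exactly, starting with 1473697,256,5342715 0.3 ...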

Compare Two Files Using awk Script

I need to validate two files using a condition; each record in the files is comma-separated.
File1
Prakash,10,20,3(Field Index)
File2
10,25,100
20,25,200
30,20,300
Reading the field index from File1 (its last field, 3), I need to sum the corresponding column of File2 (i.e. 100 + 200 + 300).
$ cat > file1
Prakash,10,20,3
$ awk -F, '
NR == FNR {f = $NF; next} # last field of the first file
{ sum += $f } # we're in the 2nd file here since NR != FNR
END { print "sum of field index " f " is " sum }
' file1 file2
sum of field index 3 is 600
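The whole exchange can be reproduced end to end; a sketch using a scratch directory (the file names and mktemp usage are my own, not from the answer):

```shell
# Build the two sample files from the question in a scratch directory.
tmp=$(mktemp -d)
printf 'Prakash,10,20,3\n' > "$tmp/file1"
printf '10,25,100\n20,25,200\n30,20,300\n' > "$tmp/file2"

# NR == FNR holds only while reading the first file: grab its last field
# as the target column number, then sum that column of the second file.
awk -F, '
  NR == FNR { f = $NF; next }
  { sum += $f }
  END { print "sum of field index " f " is " sum }
' "$tmp/file1" "$tmp/file2"

rm -r "$tmp"
```

This prints "sum of field index 3 is 600".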

awk: printing lines side by side when the first field is the same in the records

I have a file containing lines like
a x1
b x1
q xq
c x1
b x2
c x2
n xn
c x3
I would like to test on the first field of each line, and if there is a match I would like to append the matching lines to the first such line. The output should look like
a x1
b x1 b x2
q xq
c x1 c x2 c x3
n xn
any help will be greatly appreciated
Using awk you can do this:
awk '{arr[$1]=arr[$1]?arr[$1] " " $0:$0} END {for (i in arr) print arr[i]}' file
n xn
a x1
b x1 b x2
c x1 c x2 c x3
q xq
To preserve input ordering:
$ awk '
{
if ($1 in vals) {
prev = vals[$1] " "
}
else {
prev = ""
keys[++k] = $1
}
vals[$1] = prev $0
}
END {
for (k=1;k in keys;k++)
print vals[keys[k]]
}
' file
a x1
b x1 b x2
q xq
c x1 c x2 c x3
n xn
What I ended up doing. (The answers by Ed Morton and Jonte are obviously more elegant.)
First I saved the 1st column of the input file in a separate file.
awk '{print $1}' input.file.txt > tmp0
Then I saved a copy of the input file with the duplicate-$1 lines removed.
awk 'BEGIN { FS = "\t" }; !x[$1]++ { print $0}' input_file.txt > tmp1
Then I saved all the lines whose $1 field is a duplicate.
awk 'BEGIN { FS = "\t" }; x[$1]++ { print $0}' input_file.txt >tmp2
Then saved the $1 fields of the non-duplicate file (tmp1).
awk '{ print $1}' tmp1 > tmp3
I used a for loop to pull lines from the duplicates file (tmp2) and the deduplicated file (tmp1) into an output file.
for i in $(cat tmp3)
do
if [ $(grep -w $i tmp0 | wc -l) = 1 ] #test for single instance in the 1st col of input file
then
echo "$(grep -w $i tmp1)" >> output.txt #if single then pull that record from no dupes
else
echo -e "$(grep -w $i tmp1) \t $(grep -w $i tmp2 | awk '{
printf $0"\t" }; END { printf "\n" }')" >> output.txt # if not single then pull that record from no_dupes first then all the records from dupes in a single line.
fi
done
Finally remove the tmp files
rm tmp* # remove all the tmp files

c shell while loop stack not that deep

I am new to C shell. I have a problem using the while loop; the error message is "directory stack not that deep". Here is my while loop.
set i = 1
while ($i < =10)
echo $i
end
EDIT:
I solved the problem by removing the space between '<' and '='.
set i = 1
while ($i <=10)
echo $i
end
OP answered his own question: the less-than-or-equal operator is spelled <=, not < =. Note that the loop as written still never changes i, so it will print 1 forever; add an increment such as "@ i++" inside the loop body to make it terminate.