NSIS: multiple conditions in a DoWhile loop (scripting)

I'm trying to find out if several processes exist.
C++:
while (cond1 || cond2) {
...
}
How can I implement it using NSIS? I need something like this:
${DoWhile} cond1 or cond2
...
${Loop}
or even this
${DoWhile} true
${If} cond1
${OrIf} cond2
...
${EndIf}
${Loop}

You can use a Do+Loop without a condition:
!include LogicLib.nsh
${Do}
  ${If} $1 <> 0
  ${OrIf} $2 <> 0
    # ...
  ${Else}
    ${Break}
  ${EndIf}
${Loop}
Using a label works as well:
loop:
  ${If} $1 <> 0
  ${OrIf} $2 <> 0
    # ...
    Goto loop
  ${EndIf}

Related

syntax error in awk 'if-else if-else' with multiple operations

There's a weird thing with awk conditional statements:
when running an awk 'if-else if-else' with a single operation after each condition, it works fine, as below:
awk 'BEGIN {a=30; \
if (a==10) print "a = 10"; \
else if (a == 20) print "a = 20"; \
else print "a = 30"}'
output:
a = 30
However, when running awk 'if-else if-else' with multiple operations (properly braced) after 'else if', a syntax error occurs:
awk 'BEGIN {a=30; \
if (a==10) print "a = 10"; \
else if (a == 20) {print "a = 20"; print "b = 20"}; \
else print "a = 30"}'
output:
awk: cmd. line:4: else print "a = 30"}
awk: cmd. line:4: ^ syntax error
Can anyone tell if this is an awk issue that intrinsically doesn't allow multiple operations in such cases, or if it's just my syntax error that could be corrected?
P.S. I looked through all the relevant posts on awk 'if else' syntax errors, but none of them addresses this issue.
Remove the semicolon at the end of the third line, after the closing brace. In awk, as in C, a semicolon there terminates the if statement, so the else that follows no longer has an if to attach to:
awk 'BEGIN {a=30; \
if (a==10) print "a = 10"; \
else if (a == 20) {print "a = 20"; print "b = 20"} \
else print "a = 30"}'
Output: a = 30
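The difference can be reproduced in isolation (a minimal sketch; any POSIX awk should behave the same way):

```shell
# Orphaned 'else': the ';' after '}' terminates the if statement, so awk
# rejects the 'else' that follows and exits non-zero.
awk 'BEGIN { if (1) { print "then" }; else print "else" }' 2>/dev/null \
  || echo "syntax error"

# Without the stray semicolon the same shape parses and runs fine.
awk 'BEGIN { if (0) { print "then" } else print "else" }'
```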

Looping through combinations of selected strings in specific columns and counting their occurrence

I have
A 34 missense fixed
A 33 synonymous fixed
B 12 synonymous var
B 34 missense fixed
B 34 UTR fixed
B 45 missense var
TRI 4 synonymous var
TRI 4 intronic var
3 3 synonymous fixed
I want to output the counts of the combinations missense && fixed, missense && var, synonymous && fixed, synonymous && var, for each element in $1:
missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 0
TRI 0 0 0 1
3 0 0 1 0
I can do it this way with 4 individual commands, selecting each combination and concatenating the outputs:
awk -F'\t' '($3~/missense/ && $4~/fixed/)' file | awk -F'\t' '{count[$1"\t"$3"\t"$4]++} END {for (word in count) print word"\t"count[word]}' > out
But I would like to do this for all combinations at once. I've tried some variations of this but have not been able to make it work:
awk print a[i] -v delim=":" -v string='missense:synonymous:fixed:var' 'BEGIN {n = split(string, a, delim); for (i = 1; i <= n-2; ++i) {count[xxxx}++}} END ;for (word in count) print word"\t"count[word]}
You may use this awk with multiple arrays to hold different counts:
awk -v OFS='\t' '
{ keys[$1] }
/missense fixed/   { ++mf[$1] }
/missense var/     { ++mv[$1] }
/synonymous fixed/ { ++sf[$1] }
/synonymous var/   { ++sv[$1] }
END {
    print "-\tmissensefixed\tmissensevar\tsynonymousfixed\tsynonymousvar"
    for (i in keys)
        print i, mf[i]+0, mv[i]+0, sf[i]+0, sv[i]+0
}
' file | column -t
- missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 1
TRI 0 0 0 1
3 0 0 1 0
I have used column -t for tabular output only.
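Since patterns like /missense fixed/ match anywhere in the record, they could in principle also hit text in other columns. A slightly stricter variant (a sketch, assuming whitespace-separated input as in the sample) compares the fields directly; pipe it through column -t for display:

```shell
# Recreate the sample input from the question.
cat > file <<'EOF'
A 34 missense fixed
A 33 synonymous fixed
B 12 synonymous var
B 34 missense fixed
B 34 UTR fixed
B 45 missense var
TRI 4 synonymous var
TRI 4 intronic var
3 3 synonymous fixed
EOF

# Count each $3/$4 combination per $1 using exact field comparisons.
awk -v OFS='\t' '
  { keys[$1] }
  $3 == "missense"   && $4 == "fixed" { ++mf[$1] }
  $3 == "missense"   && $4 == "var"   { ++mv[$1] }
  $3 == "synonymous" && $4 == "fixed" { ++sf[$1] }
  $3 == "synonymous" && $4 == "var"   { ++sv[$1] }
  END {
    print "-", "missensefixed", "missensevar", "synonymousfixed", "synonymousvar"
    for (i in keys)
      print i, mf[i]+0, mv[i]+0, sf[i]+0, sv[i]+0
  }' file
```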
GNU awk supports arrays of arrays, so if that's the awk you have, you can count your records with something as simple as num[$1][$3$4]++. The most complex part is the final human-friendly printing:
$ cat foo.awk
{ num[$1][$3$4]++ }
END {
    printf(" missensefixed missensevar synonymousfixed synonymousvar\n");
    for (r in num)
        printf("%3s%14d%12d%16d%14d\n", r, num[r]["missensefixed"],
               num[r]["missensevar"], num[r]["synonymousfixed"], num[r]["synonymousvar"])
}
$ awk -f foo.awk data.txt
missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 1
TRI 0 0 0 1
3 0 0 1 0
Using any awk in any shell on every Unix box with an assist from column to convert the tab-separated awk output to a visually tabular display if you want it:
$ cat tst.awk
BEGIN {
OFS = "\t"
numTags = split("missensefixed missensevar synonymousfixed synonymousvar",tags)
}
{
keys[$1]
cnt[$1,$3 $4]++
}
END {
for (tagNr=1; tagNr<=numTags; tagNr++) {
tag = tags[tagNr]
printf "%s%s", OFS, tag
}
print ""
for (key in keys) {
printf "%s", key
for (tagNr=1; tagNr<=numTags; tagNr++) {
tag = tags[tagNr]
val = cnt[key,tag]
printf "%s%d", OFS, val
}
print ""
}
}
$ awk -f tst.awk file
missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 1
TRI 0 0 0 1
3 0 0 1 0
$ awk -f tst.awk file | column -s$'\t' -t
missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 1
TRI 0 0 0 1
3 0 0 1 0
I'd highly recommend you always give every column a header string, though, so you don't make further processing of the data harder (e.g. reading it into Excel and sorting on headers). So if I were you I'd add printf "key" (or something else that more accurately identifies that column's contents) as the first line of the END section, i.e. on a line immediately before the first for loop, so the first column gets a header too:
$ awk -f tst.awk file | column -s$'\t' -t
key missensefixed missensevar synonymousfixed synonymousvar
A 1 0 1 0
B 1 1 0 1
TRI 0 0 0 1
3 0 0 1 0
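Concretely, the suggested change is one extra printf at the top of the END block. A self-contained sketch of the modified tst.awk (recreating the data file here; pipe the result to column -s$'\t' -t as above for a tabular display):

```shell
cat > file <<'EOF'
A 34 missense fixed
A 33 synonymous fixed
B 12 synonymous var
B 34 missense fixed
B 34 UTR fixed
B 45 missense var
TRI 4 synonymous var
TRI 4 intronic var
3 3 synonymous fixed
EOF

cat > tst.awk <<'EOF'
BEGIN {
    OFS = "\t"
    numTags = split("missensefixed missensevar synonymousfixed synonymousvar",tags)
}
{
    keys[$1]
    cnt[$1,$3 $4]++
}
END {
    printf "key"                  # header for the first (key) column
    for (tagNr=1; tagNr<=numTags; tagNr++) {
        tag = tags[tagNr]
        printf "%s%s", OFS, tag
    }
    print ""
    for (key in keys) {
        printf "%s", key
        for (tagNr=1; tagNr<=numTags; tagNr++) {
            tag = tags[tagNr]
            val = cnt[key,tag]
            printf "%s%d", OFS, val
        }
        print ""
    }
}
EOF

# Header line plus one tab-separated row per key.
awk -f tst.awk file
```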

Postgres update with case causing error "could not determine data type of parameter $3"

I have an issue with our Golang program when executing the SQL statement below (using the standard "database/sql" package).
sqlStatement := `
UPDATE user_posts SET
content = $2,
post_image = CASE WHEN ($3 IS NULL) THEN post_image ELSE $3 END,
updated_at = NOW()
WHERE id = $1
`
Found the answer: Postgres cannot infer the data type of parameter $3 when it is used only in an IS NULL comparison (an explicit cast such as $3::text would also resolve this). Comparing against an empty string instead gives the parameter a concrete type; fixed as below:
sqlStatement := `
UPDATE user_posts SET
content = $2,
post_image = CASE WHEN $3 != '' THEN $3 ELSE post_image END,
updated_at = NOW()
WHERE id = $1
`

awk: printing lines side by side when the first field is the same in the records

I have a file containing lines like
a x1
b x1
q xq
c x1
b x2
c x2
n xn
c x3
I would like to test on the first field in each line, and if there is a match I would like to append the matching lines to the first line. The output should look like
a x1
b x1 b x2
q xq
c x1 c x2 c x3
n xn
Any help will be greatly appreciated.
Using awk you can do this:
awk '{arr[$1]=arr[$1]?arr[$1] " " $0:$0} END {for (i in arr) print arr[i]}' file
n xn
a x1
b x1 b x2
c x1 c x2 c x3
q xq
To preserve input ordering:
$ awk '
{
if ($1 in vals) {
prev = vals[$1] " "
}
else {
prev = ""
keys[++k] = $1
}
vals[$1] = prev $0
}
END {
for (k=1;k in keys;k++)
print vals[keys[k]]
}
' file
a x1
b x1 b x2
q xq
c x1 c x2 c x3
n xn
What I ended up doing. (The answers by Ed Morton and Jonte are obviously more elegant.)
First I saved the 1st column of the input file in a separate file.
awk '{print $1}' input.file.txt > tmp0
Then I saved the input file with the lines that have duplicate values in the $1 field removed.
awk 'BEGIN { FS = "\t" }; !x[$1]++ { print $0}' input_file.txt > tmp1
Then I saved all the lines with a duplicate $1 field.
awk 'BEGIN { FS = "\t" }; x[$1]++ { print $0}' input_file.txt >tmp2
Then I saved the $1 fields of the non-duplicate file (tmp1).
awk '{ print $1}' tmp1 > tmp3
I used a for loop to pull lines from the duplicates file (tmp2) and the duplicates-removed file (tmp1) into an output file.
for i in $(cat tmp3)
do
if [ $(grep -w $i tmp0 | wc -l) = 1 ] #test for single instance in the 1st col of input file
then
echo "$(grep -w $i tmp1)" >> output.txt #if single then pull that record from no dupes
else
echo -e "$(grep -w $i tmp1) \t $(grep -w $i tmp2 | awk '{ printf $0"\t" }; END { printf "\n" }')" >> output.txt # otherwise pull that record from no-dupes first, then all the records from dupes, on a single line
fi
done
Finally remove the tmp files
rm tmp* # remove all the tmp files

How to select lines in which the sum of the negative numbers in the line is equal to or less than -3 (with awk)?

I have a sample file like this:
probeset_id submitted_id chr snp_pos alleleA alleleB 562_201 562_202 562_203 562_204 562_205 562_206 562_207 562_208 562_209 562_210
AX-75448119 Chr1_41908741 1 41908741 T C 0 -1 0 -1 0 0 0 0 0 -1
AX-75448118 Chr1_41908545 1 41908545 T A 2 -1 2 2 2 -1 -1 2 2 0
AX-75448118 Chr1_41908545 1 41908545 T A 1 2 -1 2 2 -1 2 -1 2 0
and I want to exclude the lines that have a sum of negative numbers equal to or less than -3. I know how to calculate the sum of the negative numbers and print it, with this code:
awk 'BEGIN{sum=0} NR >=2 {for (i=7;i<=NF;i++) if ($i ~ /^-/) sum += $i; print $1,$2,$3,$4,$5,$6,sum; sum=0}' test.txt > out.txt
But I don't want to do this; I just want to calculate the sum of the negative numbers and then select the lines where it is less than or equal to -3.
These are the commands that I wrote:
awk 'BEGIN{sum=0} NR >=2 {for (i=7;i<=NF;i++) if ($i ~ /^-/) sum += $i; sum=0}' test.txt | awk 'sum <= -3' > out.txt
I get no errors but the out.txt file is empty!
awk 'BEGIN{sum=0} NR >=2 {for (i=7;i<=NF;i++) if ($i ~ /^-/) sum += $i; if sum >= -3 pritn R; sum=0}' test.txt | wc -l
for which I get:
^ syntax error
And how can I make sure that the first line (the header) is also in my output file? I would like to have this output:
probeset_id submitted_id chr snp_pos alleleA alleleB 562_201 562_202 562_203 562_204 562_205 562_206 562_207 562_208 562_209 562_210
AX-75448119 Chr1_41908741 1 41908741 T C 0 -1 0 -1 0 0 0 0 0 -1
AX-75448118 Chr1_41908545 1 41908545 T A 2 -1 2 2 2 -1 -1 2 2 0
Try this:
awk '
NR == 1 {
    print
    next
}
{
    negsum = 0
    for (i = 7; i <= NF; i++) {
        if ($i < 0) {
            negsum += $i
        }
    }
}
negsum <= -3' test.txt
Your first try fails because you use two different invocations of awk. These are two different programs being run: the first never prints anything, so the second receives no input at all, and in any case the second knows nothing about the sum variable in the first, so it uses the default value sum = 0 (and 0 <= -3 is false).
The second try just has a mis-spelling. You used pritn instead of print.
What you describe can be easier to code with proper formatting. (Not that I'd always resort to using an editor when scripting awk...)
The first condition (NR == 1) just ensures we print the first line as is.
awk '
NR == 1 { print }
NR >= 2 {
sum = 0;
for (i=7;i<=NF;i++) {
if ($i < 0)
sum += $i;
}
if (sum <= -3)
print;
}
' test.txt > out.txt
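A quick check against the question's sample (assuming whitespace-separated fields; note that each of the three data rows contains three -1 entries, so each sums its negatives to exactly -3 and passes the <= -3 filter):

```shell
cat > test.txt <<'EOF'
probeset_id submitted_id chr snp_pos alleleA alleleB 562_201 562_202 562_203 562_204 562_205 562_206 562_207 562_208 562_209 562_210
AX-75448119 Chr1_41908741 1 41908741 T C 0 -1 0 -1 0 0 0 0 0 -1
AX-75448118 Chr1_41908545 1 41908545 T A 2 -1 2 2 2 -1 -1 2 2 0
AX-75448118 Chr1_41908545 1 41908545 T A 1 2 -1 2 2 -1 2 -1 2 0
EOF

# Header passes through via NR == 1; each data row here has a
# negative-field sum of -3, so all three survive the filter
# (4 output lines in total).
awk '
NR == 1 { print; next }
{
    sum = 0
    for (i = 7; i <= NF; i++)
        if ($i < 0)
            sum += $i
}
sum <= -3
' test.txt
```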