Execute command if player has bow with certain tag - minecraft

I have these commands and I want it to only explode the arrow if the player has a bow with the "explode" tag.
Here are the commands (courtesy of AntVenom)
"*" Means that all blocks under it have that property
[Repeat,*Unconditional,*Always Active]
/execute at #e[type=Arrow] ~ ~ ~ run /scoreboard players set #e[c=1,r=0,type=Arrow] inGround 1 {inGround:1b}
[*Chain]
/execute at #e[type=Arrow] ~ ~ ~ run /particle lava ~ ~ ~ .5 .5 .5 .5 30 .5
/execute at #e[type=Arrow,score_inGround_min=1] ~ ~ ~ run /summon PrimedTnt ~ ~1 ~ {Fuse:0}
/kill #e[type=Arrow,score_inGround_min=1]
I could use https://mcstacker.net/ but I'm confused about what to put where.
Any help?

Try execute as @a[nbt={SelectedItem:{id:"minecraft:bow",tag:{explode:1b}}}] run say hi
To give yourself the bow, do give @s bow{explode:1b}
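To tie that back to the arrow commands, here is a hedged sketch (assuming 1.13+ syntax; the explodeArrow tag name and the inGround NBT tests are illustrative choices, not from the original commands):
# Tag fresh arrows fired by players holding the "explode" bow
execute as @a[nbt={SelectedItem:{id:"minecraft:bow",tag:{explode:1b}}}] at @s run tag @e[type=arrow,distance=..2,nbt={inGround:0b}] add explodeArrow
# Once a tagged arrow lands, replace it with instantly detonating TNT
execute at @e[type=arrow,tag=explodeArrow,nbt={inGround:1b}] run summon tnt ~ ~1 ~ {Fuse:0}
kill @e[type=arrow,tag=explodeArrow,nbt={inGround:1b}]
These would run in the same repeat/chain layout as the original commands.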

grep early stop with one match per pattern

Say I have a file where the patterns reside, e.g. patterns.txt. And I know that all the patterns will only be matched once in another file patterns_copy.txt, which in this case to make matters simple is just a copy of patterns.txt.
If I run
grep -m 1 --file=patterns.txt patterns_copy.txt > output.txt
I get only one line. I guess that's because the -m flag stopped the whole matching process once the first lines of the two files matched.
What I would like to achieve is to have each pattern in patterns.txt matched only once, and then let grep move to the next pattern.
How do I achieve this?
Thanks.
Updated Answer
I have now had a chance to integrate what I was thinking about awk into the GNU Parallel concept.
I used /usr/share/dict/words as my patterns file and it has 235,000 lines in it. Using BenjaminW's code in another answer, it took 141 minutes, whereas this code gets that down to 11 minutes.
The difference here is that there are no temporary files and awk can stop once it has found all 8 of the things it was looking for...
#!/bin/bash
# Create a bash function that GNU Parallel can call to search for 8 things at once
doit() {
    # echo Job: $9
    # In following awk script, read "p1s" as a flag meaning "p1 has been seen"
    awk -v p1="$1" -v p2="$2" -v p3="$3" -v p4="$4" -v p5="$5" -v p6="$6" -v p7="$7" -v p8="$8" '
        $0 ~ p1 && !p1s {print; p1s++;}
        $0 ~ p2 && !p2s {print; p2s++;}
        $0 ~ p3 && !p3s {print; p3s++;}
        $0 ~ p4 && !p4s {print; p4s++;}
        $0 ~ p5 && !p5s {print; p5s++;}
        $0 ~ p6 && !p6s {print; p6s++;}
        $0 ~ p7 && !p7s {print; p7s++;}
        $0 ~ p8 && !p8s {print; p8s++;}
        {if(p1s+p2s+p3s+p4s+p5s+p6s+p7s+p8s==8)exit}
    ' patterns.txt
}
export -f doit
# Next line effectively uses 8 cores at a time to each search for 8 items
parallel -N8 doit {1} {2} {3} {4} {5} {6} {7} {8} {#} < patterns.txt
Just for fun, here is what it does to my CPU - blue means maxed out, and see if you can see where the job started in the green CPU history!
Other Thoughts
The above benefits from the fact that the input files are relatively well sorted, so it is worth looking for 8 things at a time because they are likely close to each other in the input file, and I can therefore avoid the overhead associated with creating one process per sought term. However, if your data are not well sorted, that may mean that you waste a lot of time looking further through the file than necessary to find the next 7, or 6 other items. In that case, you may be better off with this:
parallel grep -m1 "{}" patterns.txt < patterns.txt
Original Answer
Having looked at the size of your files, I now think awk is probably not the way to go, but GNU Parallel maybe is. I tried parallelising the problem two ways.
Firstly, I search for 8 items at a time in a single pass through the input file so that I have less to search through with the second set of greps that use the -m 1 parameter.
Secondly, I do as many of these "8-at-a-time" greps in parallel as I have CPU cores.
I use the GNU Parallel job number {#} as a unique temporary filename, and only create 16 (or however many CPU cores you have) temporary files at a time. The temporary files are prefixed ss (for sub-search) so they can all be deleted easily enough when testing.
The speedup seems to be a factor of about 4 times on my machine. I used /usr/share/dict/words as my test file.
#!/bin/bash
# Create a bash function that GNU Parallel can call to search for 8 things at once
doit() {
    # echo Job: $9
    # Make a temp filename using GNU Parallel's job number which is $9 here
    TEMP=ss-${9}.txt
    grep -E "$1|$2|$3|$4|$5|$6|$7|$8" patterns.txt > $TEMP
    for i in $1 $2 $3 $4 $5 $6 $7 $8; do
        grep -m1 "$i" $TEMP
    done
    rm $TEMP
}
export -f doit
# Next line effectively uses 8 cores at a time to each search for 8 items
parallel -N8 doit {1} {2} {3} {4} {5} {6} {7} {8} {#} < patterns.txt
You can loop over your patterns like this (assuming you're using Bash):
while read -r line; do
    grep -m 1 "$line" patterns_copy.txt
done < patterns.txt > output.txt
Or, in one line:
while read -r line; do grep -m 1 "$line" patterns_copy.txt; done < patterns.txt > output.txt
For parallel processing, you can start the processes as background jobs:
while read -r line; do
    grep -m 1 "$line" patterns_copy.txt &
    read -r line && grep -m 1 "$line" patterns_copy.txt &
    # Repeat the previous line as desired
    wait # Wait for greps of this loop to finish
done < patterns.txt > output.txt
This is not really elegant, as each iteration waits for the slowest grep to finish, but it should still be faster than running just one grep per iteration.
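If the per-batch wait bothers you, a hedged alternative is GNU xargs with -P, which keeps a fixed number of greps running instead of batching (this assumes GNU xargs and one regex-safe pattern per line):
# Run up to 4 greps at a time; each exits after its first match (-m 1).
# "--" guards against patterns that begin with a dash.
xargs -d '\n' -P 4 -I{} grep -m 1 -- {} patterns_copy.txt < patterns.txt > output.txt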

Can't make AWK process an argument correctly

I have begun using AWK to reduce the number of lines in CSV files I use. My files usually have between 60,000 and 300,000 lines and I reduce them to 5,000 lines using this (with the number 38 changed as necessary):
awk ’NR % 38 == 0’ input.csv > output.csv
This works, by using the 2nd argument as the input file and the 3rd argument as the output file.
I'm trying to use the first argument "$1" to replace the number 38. I cannot make AWK use the argument in this way, however. Below is what I'm trying to accomplish...
sh reduce.sh 1000 input.csv output.csv
#!/bin/bash
#script name is reduce.sh
awk ’NR % $1 == 0’ $2 > $3
Thanks in advance for any help.
First thing, your quotes are wrong: they look like fancy "smart" quotes. Make sure they are plain single quotes:
awk ’NR % $1 == 0’ $2 > $3
# ..^............^
Next, bash variables are not expanded inside single quotes. The best way to pass a shell variable to awk is with the -v option:
awk -v step="$1" 'NR % step == 0' "$2" > "$3"
Last, always quote your variables.
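Putting the pieces together, the corrected reduce.sh might look like this:
#!/bin/bash
# reduce.sh -- keep every Nth line of a CSV
# usage: sh reduce.sh 1000 input.csv output.csv
awk -v step="$1" 'NR % step == 0' "$2" > "$3"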

how to call a variable using awk in a for loop? [duplicate]

How do I iterate over a range of numbers in Bash when the range is given by a variable?
I know I can do this (called "sequence expression" in the Bash documentation):
for i in {1..5}; do echo $i; done
Which gives:
1
2
3
4
5
Yet, how can I replace either of the range endpoints with a variable? This doesn't work:
END=5
for i in {1..$END}; do echo $i; done
Which prints:
{1..5}
for i in $(seq 1 $END); do echo $i; done
edit: I prefer seq over the other methods because I can actually remember it ;)
The seq method is the simplest, but Bash has built-in arithmetic evaluation.
END=5
for ((i=1;i<=END;i++)); do
    echo $i
done
# ==> outputs 1 2 3 4 5 on separate lines
The for ((expr1;expr2;expr3)) construct works just like for (expr1;expr2;expr3) in C and similar languages, and like other ((expr)) cases, Bash treats the expressions as arithmetic.
discussion
Using seq is fine, as Jiaaro suggested. Pax Diablo suggested a Bash loop to avoid calling a subprocess, with the additional advantage of being more memory friendly if $END is very large. Zathrus spotted a typical bug in the loop implementation, and also hinted that since i is a text variable, continuous conversions to and from numbers are performed, with an associated slow-down.
integer arithmetic
This is an improved version of the Bash loop:
typeset -i i END
let END=5 i=1
while ((i<=END)); do
    echo $i
    …
    let i++
done
If the only thing that we want is the echo, then we could write echo $((i++)).
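For example, a condensed version of the loop above (a minimal sketch):
typeset -i i END
let END=5 i=1
while ((i<=END)); do echo $((i++)); done    # prints 1 through 5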
ephemient taught me something: Bash allows for ((expr;expr;expr)) constructs. Since I've never read the whole man page for Bash (like I've done with the Korn shell (ksh) man page, and that was a long time ago), I missed that.
So,
typeset -i i END # Let's be explicit
for ((i=1;i<=END;++i)); do echo $i; done
seems to be the most memory-efficient way (it won't be necessary to allocate memory to consume seq's output, which could be a problem if END is very large), although probably not the “fastest”.
the initial question
eschercycle noted that the {a..b} Bash notation works only with literals; true, according to the Bash manual. One can overcome this obstacle with a single (internal) fork() without an exec() (as is the case with calling seq, which, being another executable image, requires a fork+exec):
for i in $(eval echo "{1..$END}"); do
Both eval and echo are Bash builtins, but a fork() is required for the command substitution (the $(…) construct).
Here is why the original expression didn't work.
From man bash:
Brace expansion is performed before any other expansions, and any characters special to other expansions are preserved in the result. It is strictly textual. Bash does not apply any syntactic interpretation to the context of the expansion or the text between the braces.
So, brace expansion is something done early as a purely textual macro operation, before parameter expansion.
Shells are highly optimized hybrids between macro processors and more formal programming languages. In order to optimize the typical use cases, the language is made rather more complex and some limitations are accepted.
Recommendation
I would suggest sticking with POSIX¹ features. This means using for i in <list>; do if the list is already known; otherwise, use while or seq, as in:
#!/bin/sh
limit=4
i=1; while [ $i -le $limit ]; do
    echo $i
    i=$(($i + 1))
done
# Or -----------------------
for i in $(seq 1 $limit); do
    echo $i
done
1. Bash is a great shell and I use it interactively, but I don't put bash-isms into my scripts. Scripts might need a faster shell, a more secure one, a more embedded-style one. They might need to run on whatever is installed as /bin/sh, and then there are all the usual pro-standards arguments. Remember shellshock, aka bashdoor?
The POSIX way
If you care about portability, use the example from the POSIX standard:
i=2
end=5
while [ $i -le $end ]; do
    echo $i
    i=$(($i+1))
done
Output:
2
3
4
5
Things which are not POSIX:
(( )) without dollar, although it is a common extension as mentioned by POSIX itself.
[[. [ is enough here. See also: What is the difference between single and double square brackets in Bash?
for ((;;))
seq (GNU Coreutils)
{start..end}, and that cannot work with variables as mentioned by the Bash manual.
let i=i+1: POSIX 7, Section 2 "Shell Command Language", does not contain the word let, and it fails under bash --posix 4.3.42
the dollar sign in i=$(($i+1)) might be required, but I'm not sure. POSIX 7, Section 2.6.4 "Arithmetic Expansion", says:
If the shell variable x contains a value that forms a valid integer constant, optionally including a leading plus or minus sign, then the arithmetic expansions "$((x))" and "$(($x))" shall return the same value.
but reading it literally, that does not imply that $((x+1)) expands, since x+1 is not a variable.
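In practice, common shells accept both spellings; a quick check (observed behavior in dash and bash --posix, not a guarantee from the standard):
x=5
echo "$((x + 1))"    # 6
echo "$(($x + 1))"   # 6 as well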
You can use
for i in $(seq $END); do echo $i; done
Another layer of indirection:
for i in $(eval echo {1..$END}); do
⋮
I've combined a few of the ideas here and measured performance.
TL;DR Takeaways:
seq and {..} are really fast
for and while loops are slow
$( ) is slow
for (( ; ; )) loops are slower
$(( )) is even slower
Worrying about N numbers in memory (seq or {..}) is silly (at least up to 1 million).
These are not conclusions. You would have to look at the C code behind each of these mechanisms to draw conclusions; this is more about how we tend to use each of them when looping over code. Most single operations are close enough in speed that it isn't going to matter in most cases. But a mechanism like for (( i=1; i<=1000000; i++ )) is visibly many operations, and many more operations per loop than for i in $(seq 1 1000000). That may not be obvious to you, which is why doing tests like this is valuable.
Demos
# show that seq is fast
$ time (seq 1 1000000 | wc)
1000000 1000000 6888894
real 0m0.227s
user 0m0.239s
sys 0m0.008s
# show that {..} is fast
$ time (echo {1..1000000} | wc)
1 1000000 6888896
real 0m1.778s
user 0m1.735s
sys 0m0.072s
# Show that for loops (even with a : noop) are slow
$ time (for i in {1..1000000} ; do :; done | wc)
0 0 0
real 0m3.642s
user 0m3.582s
sys 0m0.057s
# show that echo is slow
$ time (for i in {1..1000000} ; do echo $i; done | wc)
1000000 1000000 6888896
real 0m7.480s
user 0m6.803s
sys 0m2.580s
$ time (for i in $(seq 1 1000000) ; do echo $i; done | wc)
1000000 1000000 6888894
real 0m7.029s
user 0m6.335s
sys 0m2.666s
# show that C-style for loops are slower
$ time (for (( i=1; i<=1000000; i++ )) ; do echo $i; done | wc)
1000000 1000000 6888896
real 0m12.391s
user 0m11.069s
sys 0m3.437s
# show that arithmetic expansion is even slower
$ time (i=1; e=1000000; while [ $i -le $e ]; do echo $i; i=$(($i+1)); done | wc)
1000000 1000000 6888896
real 0m19.696s
user 0m18.017s
sys 0m3.806s
$ time (i=1; e=1000000; while [ $i -le $e ]; do echo $i; ((i=i+1)); done | wc)
1000000 1000000 6888896
real 0m18.629s
user 0m16.843s
sys 0m3.936s
$ time (i=1; e=1000000; while [ $i -le $e ]; do echo $((i++)); done | wc)
1000000 1000000 6888896
real 0m17.012s
user 0m15.319s
sys 0m3.906s
# even a noop is slow
$ time (i=1; e=1000000; while [ $((i++)) -le $e ]; do :; done | wc)
0 0 0
real 0m12.679s
user 0m11.658s
sys 0m1.004s
If you need a zero-padded prefix, you might like this:
for ((i=7;i<=12;i++)); do echo `printf "%2.0d\n" $i |sed "s/ /0/"`;done
that will yield
07
08
09
10
11
12
If you're on BSD / OS X you can use jot instead of seq:
for i in $(jot $END); do echo $i; done
This works fine in bash:
END=5
i=1; while [[ $i -le $END ]]; do
    echo $i
    ((i = i + 1))
done
There are many ways to do this; however, the ones I prefer are given below.
Using seq
Synopsis from man seq
$ seq [-w] [-f format] [-s string] [-t string] [first [incr]] last
Syntax
Full command
seq first incr last
first is the starting number in the sequence [optional, default: 1]
incr is the increment [optional, default: 1]
last is the last number in the sequence
Example:
$ seq 1 2 10
1
3
5
7
9
Only with first and last:
$ seq 1 5
1
2
3
4
5
Only with last:
$ seq 5
1
2
3
4
5
Using {first..last..incr}
Here first and last are mandatory and incr is optional
Using just first and last
$ echo {1..5}
1 2 3 4 5
Using incr
$ echo {1..10..2}
1 3 5 7 9
You can use this even for characters, like below:
$ echo {a..z}
a b c d e f g h i j k l m n o p q r s t u v w x y z
I know this question is about bash, but - just for the record - ksh93 is smarter and implements it as expected:
$ ksh -c 'i=5; for x in {1..$i}; do echo "$x"; done'
1
2
3
4
5
$ ksh -c 'echo $KSH_VERSION'
Version JM 93u+ 2012-02-29
$ bash -c 'i=5; for x in {1..$i}; do echo "$x"; done'
{1..5}
This is another way:
end=5
for i in $(bash -c "echo {1..${end}}"); do echo $i; done
If you want to stay as close as possible to the brace-expression syntax, try out the range function from bash-tricks' range.bash.
For example, all of the following will do the exact same thing as echo {1..10}:
source range.bash
one=1
ten=10
range {$one..$ten}
range $one $ten
range {1..$ten}
range {1..10}
It tries to support the native bash syntax with as few "gotchas" as possible: not only are variables supported, but the often-undesirable behavior of invalid ranges being supplied as strings (e.g. for i in {1..a}; do echo $i; done) is prevented as well.
The other answers will work in most cases, but they all have at least one of the following drawbacks:
Many of them use subshells, which can harm performance and may not be possible on some systems.
Many of them rely on external programs. Even seq is a binary which must be installed to be used, must be loaded by bash, and must contain the program you expect, for it to work in this case. Ubiquitous or not, that's a lot more to rely on than just the Bash language itself.
Solutions that do use only native Bash functionality, like @ephemient's, will not work on alphabetic ranges, like {a..z}; brace expansion will. The question was about ranges of numbers, though, so this is a quibble.
Most of them aren't visually similar to the {1..10} brace-expanded range syntax, so programs that use both may be a tiny bit harder to read.
@bobbogo's answer uses some of the familiar syntax, but does something unexpected if the $END variable is not a valid range "bookend" for the other side of the range. If END=a, for example, an error will not occur and the verbatim value {1..a} will be echoed. This is the default behavior of Bash as well; it is just often unexpected.
Disclaimer: I am the author of the linked code.
These are all nice but seq is supposedly deprecated and most only work with numeric ranges.
If you enclose your for loop in double quotes, the start and end variables will be dereferenced when you echo the string, and you can ship the string right back to BASH for execution. $i needs to be escaped with \'s so it is NOT evaluated before being sent to the subshell.
RANGE_START=a
RANGE_END=z
echo -e "for i in {$RANGE_START..$RANGE_END}; do echo \\${i}; done" | bash
This output can also be assigned to a variable:
VAR=`echo -e "for i in {$RANGE_START..$RANGE_END}; do echo \${i}; done" | bash`
The only "overhead" this should generate should be the second instance of bash so it should be suitable for intensive operations.
Replace {} with (( )):
tmpstart=0
tmpend=4
for (( i=$tmpstart; i<=$tmpend; i++ )); do
    echo $i
done
Yields:
0
1
2
3
4
If you're doing shell commands and you (like I) have a fetish for pipelining, this one is good:
seq 1 $END | xargs -I {} echo {}
If you don't want to use seq, eval, jot, the arithmetic-expansion format (e.g. for ((i=1;i<=END;i++))), or other loops (e.g. while), and you don't want printf and are happy with echo only, then this simple workaround might fit your budget:
a=1; b=5; d='for i in {'$a'..'$b'}; do echo -n "$i"; done;'; echo "$d" | bash
PS: My bash doesn't have the seq command anyway.
Tested on Mac OS X 10.6.8, Bash 3.2.48
This works in Bash and Korn, and can also go from higher to lower numbers. It's probably not the fastest or prettiest, but it works well enough. It handles negatives too.
function num_range {
    # Return a range of whole numbers from beginning value to ending value.
    # >>> num_range start end
    # start: Whole number to start with.
    # end: Whole number to end with.
    typeset s e v
    s=${1}
    e=${2}
    if (( ${e} >= ${s} )); then
        v=${s}
        while (( ${v} <= ${e} )); do
            echo ${v}
            ((v=v+1))
        done
    elif (( ${e} < ${s} )); then
        v=${s}
        while (( ${v} >= ${e} )); do
            echo ${v}
            ((v=v-1))
        done
    fi
}
function test_num_range {
    num_range 1 3 | egrep "1|2|3" | assert_lc 3
    num_range 1 3 | head -1 | assert_eq 1
    num_range -1 1 | head -1 | assert_eq "-1"
    num_range 3 1 | egrep "1|2|3" | assert_lc 3
    num_range 3 1 | head -1 | assert_eq 3
    num_range 1 -1 | tail -1 | assert_eq "-1"
}

What would I do to match a value of just '/' in the "Mounted on" column?

Assume the following output:
➜ ~ df -kl
Filesystem 1024-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk1 487401624 207950512 279195112 43% 52051626 69798778 43% /
/dev/disk2s2 732238672 242656088 489582584 34% 60664020 122395646 33% /Volumes/Backup Drive
I would like to extract '43%' (column %iused) from the output above. What would I do to match '/'? I get the feeling I need to escape it. In the past I matched a specific string (i.e. CPU usage) without any issue. I would use something like:
top -l 1 | awk '/CPU usage:/ {print $3}'
But the '/' is giving me trouble. Any ideas?
From your comments it sounds like this might be what you want:
df -kl | awk '$NF=="/"{ print $8 }'
If not, do edit your question to clarify.
Should be simple! Try this:
df -kl | awk '/^\// { print $5 }'
We tell it to find lines where the line starts with a slash. We specify the slash by escaping it.
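A hedged variation, if you would rather locate the %iused column by its header than hard-code a field number (this assumes the header row shown in the question, where "Mounted on" splits into two fields so the data columns still line up):
df -kl | awk 'NR==1 { for (i=1; i<=NF; i++) if ($i == "%iused") col = i; next }
              $NF == "/" { print $col }'
For the example output above, this prints 43%.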

Scripts Linux - awk

I have a small problem: I need to explain what this awk does.
I need to write a script that monitors whether the system is overloaded (CPU, RAM) and writes a message.
I have this:
if [[ $(bc <<< "$(top -b -n1 | grep ^Cpu | awk -F': ' '{print $2}' | awk -F% '{print $1}') >= 100") -eq 1 ]]; then
    echo '...'
fi
This is for the CPU. Can anybody explain to me what the awk does in this example? And what would the awk be for RAM?
The first awk invocation will print the second token on any line, where tokens are separated by the two-character string ": " (a colon followed by a space).
The second will print the first token on any line, where tokens are separated by a percent sign (%).
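For instance, with a made-up CPU line:
$ echo 'Cpu(s): 12.3%us, 4.5%sy' | awk -F': ' '{print $2}'
12.3%us, 4.5%sy
$ echo '12.3%us, 4.5%sy' | awk -F% '{print $1}'
12.3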
To get the used memory on a Linux system:
free | awk '/Mem:/ {print $3;}'
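Building on that, here is a sketch of the RAM half of the check (the 90% threshold is an arbitrary choice, and the column positions assume a free layout like the one in this thread):
used=$(free | awk '/Mem:/ {print $3}')
total=$(free | awk '/Mem:/ {print $2}')
if [ $((100 * used / total)) -ge 90 ]; then
    echo 'RAM overloaded'
fi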
This is a very fragile script based upon what looks like an old version of top. It is very very easy to inspect this piece of script though - so, let's go through it. We start with the following:
top -b -n1
Which (reading the manual for top) places top into batch mode (meaning that instead of wanting to play interactively with top, we want to send output to another command) and outputs with 1 iteration. That will get us output like the following:
$ top -b -n1
top - 10:48:33 up 1 day, 22:51, 3 users, load average: 1.21, 1.27, 1.03
Tasks: 262 total, 2 running, 260 sleeping, 0 stopped, 0 zombie
%Cpu(s): 14.5 us, 5.2 sy, 11.3 ni, 67.3 id, 1.6 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem: 8124692 total, 6722112 used, 1402580 free, 384188 buffers
KiB Swap: 4143100 total, 430656 used, 3712444 free. 2909664 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11012 user1 20 0 426412 14436 5740 R 97.1 0.2 19:27.98 dleyna-renderer
4579 root 20 0 286480 152924 31152 S 13.0 1.9 24:15.49 Xorg
1 root 20 0 185288 4892 3352 S 0.0 0.1 0:02.52 systemd
2 root 20 0 0 0 0 S 0.0 0.0 0:00.02 kthreadd
3 root 20 0 0 0 0 S 0.0 0.0 0:02.77 ksoftirqd/0
5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
7 root 20 0 0 0 0 S 0.0 0.0 1:32.00 rcu_sched
When we pipe this to grep ^Cpu ... well it looks like this is where we discover some breakage that indicates that the version of top that we are using in this answer may be different in output from the version the original script expected. It looks like the intent is to match on ^%Cpu instead. Here is the corrected piece:
$ top -b -n1 | grep ^%Cpu
%Cpu(s): 14.6 us, 5.2 sy, 11.2 ni, 67.3 id, 1.6 wa, 0.0 hi, 0.1 si, 0.0 st
The next piece of the pipe is to just get rid of the '%Cpu(s): ' piece:
$ top -b -n1 | grep ^%Cpu | awk -F': ' '{print $2}'
15.1 us, 5.0 sy, 10.8 ni, 67.4 id, 1.6 wa, 0.0 hi, 0.1 si, 0.0 st
And then the next piece... awk -F% '{print $1}' -- doesn't make sense again for this answer's version of top, as the script is looking to print what's to the left of a % sign -- and there is no % in our output. So... we are left wondering where we need to go from here.
From the rest of the script... the result of the pipeline is compared to 100... so, I assume the version of top that the script was meant to parse had a percentage of CPU utilization total in the first column... in our version of top output is all broken out with much more granularity. Here is the breakdown for the immediately preceding output:
15.1% -- spent in normal priority user/applications
5.0% -- spent in system/kernel
10.8% -- spent in low priority batch or daemon jobs
67.4% -- spent "idle"
1.6% -- spent waiting for I/O to complete
0.0% -- spent in servicing HW interrupts
0.1% -- spent in servicing software interrupts
0.0% -- spent stolen by another VM running on the HW
------------------------------------------------------
100.0% -- Total
... So, on modern Linux systems, top provides a lot more information, and maybe we need to look at the problem differently. In this case, we could look at (idle * 10) as the metric -- as in shell, we only have integer math and comparison available to us. So, we will adjust the script a little... and while we are at it, let's get rid of the grep in the pipeline as that can just as easily be done by awk as well:
$ top -b -n1 | awk -F, '/^%Cpu/ {print $4}'
67.8 id
Now let's adjust it so that it gives us just the idle value multiplied by 10:
$ top -b -n1 | awk -F, '/^%Cpu/ { sub(/id/,"",$4); print $4*10 }'
678
Ok, the next part of the original script uses bc to see if we are 100% utilized. As we now are looking at idle rather than utilization, we want the opposite of the original script. Also, we don't need the complication of bc now that the output is scaled to integer. Let's just use shell in our comparison?
$ if [ $(top -b -n1 | awk -F, '/^%Cpu/ { sub(/id/,"",$4); print $4*10 }') -le 0 ]; then echo '...'; fi
And that is it.
This answer was all to show how the code works -- how to interpret and parse the output of top through a pipeline, how to go about the task of figuring out what a piece of script does, and how to go about repairing a fragile/broken script. However, the original script is not only fragile but is pretty much broken by design. The metric we use to detect an overloaded system is more like the "load average" that is found at the first line of the output of the top command, or even better that can be parsed from the output of the uptime command.
A way to detect overload may be to look at the load average divided by the number of CPUs. The number of CPUs can easily be found by parsing /proc/cpuinfo:
$ grep ^processor /proc/cpuinfo | wc -l
4
Here is one example where 400% load over 15 minutes is considered to be the continuous loading threshold:
load=$(uptime | awk -F, '{ print $(NF) * 1.0 }')
proc=$(grep ^processor /proc/cpuinfo | wc -l)
plod=$(awk "BEGIN { x = 100 * $load / $proc; print int(x) + int(x+x)%2 }")
if [ $plod -gt 400 ]; then echo '...'; fi
Note: int(x) + int(x+x)%2 is a rounding function (round half up for positive values).
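A quick sanity check of that rounding trick:
$ awk 'BEGIN { x = 4.4; print int(x) + int(x+x)%2 }'
4
$ awk 'BEGIN { x = 4.5; print int(x) + int(x+x)%2 }'
5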
For the amount of free memory on the system, I like schtever's answer -- except that I would use column 4 rather than column 3 and check that for low memory.