How to stop zsh from continuing a background job

Observe the following behaviour (which I want to stop happening):
> cat /dev/zero
[1] + 36461 suspended cat /dev/zero
> bg
[1] + 36461 continued cat /dev/zero
> cat
[1] + 36461 running cat /dev/zero
I'm not sure why typing cat resumes the old cat; I want it to start a new process instead.

It seems that the zsh option AUTO_RESUME is on.
AUTO_RESUME (-W)
Treat single word simple commands without redirection as candidates for resumption of an existing job.
-- ZSHOPTIONS(1)
You could avoid this behavior globally with setopt no_autoresume.
Or, just for this command, you could make it not be a "single word" command. In this case, you can do that by prefixing a precommand modifier, for example command or -, like this:
> cat /dev/zero
[1] + 6241 suspended cat /dev/zero
> bg
[1] + 6241 continued cat /dev/zero
> cat
[1] + 6241 running cat /dev/zero
[1] + 6241 suspended cat /dev/zero
> - cat
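If you go the global route instead, the corresponding ~/.zshrc fragment (a config sketch, using the option name from the ZSHOPTIONS(1) excerpt above) is just:

```shell
# ~/.zshrc: never treat single-word commands as candidates for job resumption
setopt no_autoresume    # equivalently: unsetopt autoresume
```

After this, typing cat while a suspended cat job exists starts a fresh process rather than resuming job %1.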

Related

Iterate bamboo Repositories via ssh task

I have a plan in Bamboo with n repositories. Via an SSH task I would like to iterate through each of them. The total number of repositories is dynamic.
Bamboo variables are explained here:
https://confluence.atlassian.com/bamboo/bamboo-variables-289277087.html
My approaches look like this:
touch test.txt
#get count
echo "count ${bamboo.planRepository[#]}\n" >> test.text
for repo in "${bamboo.planRepository[#]}"
do
echo "${repo.name}\n" > test.txt
done
START=1
END=5
i=$START
while [[ $i -le $END ]]
do
printf "${bamboo.planRepository.${i}.name}\n" > test.txt
((i = i + 1))
done
I'm not familiar with SSH script syntax, and nothing worked.
Any suggestions how to do this?
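For what it's worth, Bamboo commonly exposes its variables to script tasks as environment variables with the dots replaced by underscores (e.g. bamboo_planRepository_1_name); the exact names are an assumption here, so check with env | grep bamboo inside a real task. A sketch under that assumption, with stand-in values so it runs outside Bamboo:

```shell
# Stand-in values so the sketch runs outside Bamboo; inside a real script
# task these would already be present in the environment.
export bamboo_planRepository_1_name="repoA"
export bamboo_planRepository_2_name="repoB"

i=1
while :; do
  var="bamboo_planRepository_${i}_name"
  name=$(eval "printf '%s' \"\${$var}\"")   # indirect lookup, POSIX sh
  [ -z "$name" ] && break                   # stop at the first missing index
  echo "$name"
  i=$((i + 1))
done
```

Because the loop stops at the first unset index, it adapts automatically to however many repositories the plan has.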

export temp from Witty Pi 2 to a logfile

I have installed a Witty Pi 2 on my RPi 3, and I want to export the temp from it to a specific file.
I can run a script called wittyPi.sh, and then I need to press 8 or Ctrl+C to quit:
>>> Current temperature: 33.50°C / 92.3°F
>>> Your system time is: Sat 01 Jul 2017 20:29:46 CEST
>>> Your RTC time is: Sat 01 Jul 2017 20:29:46 CEST
Now you can:
1. Write system time to RTC
2. Write RTC time to system
3. Synchronize time
4. Schedule next shutdown [25 15:04:00]
5. Schedule next startup [25 15:05:00]
6. Choose schedule script
7. Reset data...
8. Exit
What do you want to do? (1~8)
All I want is to export the first line.
I tried
sudo ./wittyPi.sh | grep Current | awk '{ print $4 }' > temp.log
but it asks me for a number first, and only then gives the temp in temp.log.
Is it possible to insert some extra code to generate the Ctrl+C, or something similar, at the end?
Just use a here string to provide the input:
$ cat tst.sh
echo "Type something:" >&2
read foo
echo "$foo"
$ ./tst.sh <<<stuff | sed 's/u/X/'
Type something:
stXff
and if your shell doesn't support here strings then use a here document instead:
$ ./tst.sh <<EOF | sed 's/u/X/'
> stuff
> EOF
Type something:
stXff
So you'd do (you never need grep when you're using awk):
sudo ./wittyPi.sh <<<8 | awk '/Current/{ print $4 }' > temp.log
or:
sudo ./wittyPi.sh <<<8 | awk 'NR==1{ print $4 }' > temp.log
Maybe a better way is to take a look at the get_temperature() function in the "utilities.sh" file, and see how it is implemented. It only involves some I2C communications.
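If you want to rehearse the here-string pipeline without the hardware attached, you can run it against a stand-in script (fake_witty.sh below is purely hypothetical; it only mimics the menu behavior described above):

```shell
# Create a stand-in that prints a reading and then waits for a menu choice,
# just like wittyPi.sh does.
cat > fake_witty.sh <<'EOF'
#!/bin/sh
echo ">>> Current temperature: 33.50°C / 92.3°F"
printf "What do you want to do? (1~8) "
read choice
EOF
chmod +x fake_witty.sh

# Same idea as with the real script: feed "8" (Exit) on stdin, keep field 4.
./fake_witty.sh <<< 8 | awk '/Current/{ print $4 }' > temp.log
```

temp.log then contains just the temperature field, with no interactive prompt left hanging.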

Exit when the result is ready and do not wait for the rest of the job?

I want to exit immediately when the result has been provided and not wait for the rest of the jobs. I provide three examples using different approaches, i.e. awk, head, and read. I want to exit after the '1' is shown in the following example, without waiting for the sleep, but none of them work. Can anyone help me?
(echo 1; sleep 10; seq 10) | head -n 1
(echo 1; sleep 10; seq 10) | awk -e 'NR==1{print $1;exit}'
(echo 1; sleep 10; seq 10) | ./test.sh
where the test.sh is the following:
while read -r -d $'\n' x
do
echo "$x"
exit
done
Refactor Using Bash Process Substitution
I want to exit after the '1' is shown in the following example without waiting for sleep.
By default, Bash shell pipelines wait for each pipeline segment to complete before processing the next segment of the pipeline. This is usually the expected behavior, because otherwise your commands wouldn't be able to act on the completed output of each pipeline element. For example, how could sort do its job in a pipeline if it didn't have all the data available at once?
In this specific case, you can do what you want, but you have to refactor your code so that awk is reading from process substitution rather than a pipe. For example:
$ time awk -e 'NR==1 {print $1; exit}' < <(echo 1; sleep 10; seq 10)
1
real 0m0.004s
user 0m0.001s
sys 0m0.002s
From the timings, you can see that the process exits when awk does. This may not be how you want to do it, but it certainly does what you want to accomplish with a minimum of fuss. Your mileage with non-Bash shells may vary.
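The same refactor should also work for the head variant from the question, since head likewise exits as soon as it has its line (a sketch, assuming Bash for the process substitution):

```shell
# head reads one line from the process substitution and exits immediately;
# the rest of the command list keeps running in the background without
# blocking the shell.
head -n 1 < <(echo 1; sleep 10; seq 10)
```

As with the awk version, the shell does not wait for the process substitution to finish, so the prompt comes back right away.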
Asynchronous Pipelines
Asynchronous pipelines are not really a generic solution, but using one works sufficiently to accomplish your goals for the given use case. The following returns immediately:
$ { echo 1 & sleep 10 & seq 10 & } | awk -e 'NR==1 {print $1; exit}'
1
because the commands in the command list are run asynchronously. When you run commands asynchronously in Bash:
The shell does not wait for the command to finish, and the return status is 0 (true).
However, note that this only appears to do what you want. Your other commands (e.g. sleep and seq) are actually still running in the background. You can validate this with:
$ { echo 1 & sleep 10 & seq 10 & } | awk -e 'NR==1 {print $1; exit}'; pgrep sleep
1
14921
As you can see, this allows awk to process the output of echo without waiting for the entire list of commands to complete, but it doesn't really short-circuit the execution of the command list. Process substitution is still likely to be the right solution, but it's always good to know you have alternatives.

grep early stop with one match per pattern

Say I have a file where the patterns reside, e.g. patterns.txt. And I know that all the patterns will only be matched once in another file patterns_copy.txt, which in this case to make matters simple is just a copy of patterns.txt.
If I run
grep -m 1 --file=patterns.txt patterns_copy.txt > output.txt
I get only one line. I guess it's because the -m flag stopped the whole matching process once the first line of the two files matched.
What I would like to achieve is to have each pattern in patterns.txt matched only once, and then let grep move to the next pattern.
How do I achieve this?
Thanks.
Updated Answer
I have now had a chance to integrate what I was thinking about awk into the GNU Parallel concept.
I used /usr/share/dict/words as my patterns file and it has 235,000 lines in it. Using BenjaminW's code in another answer, it took 141 minutes, whereas this code gets that down to 11 minutes.
The difference here is that there are no temporary files and awk can stop once it has found all 8 of the things it was looking for...
#!/bin/bash
# Create a bash function that GNU Parallel can call to search for 8 things at once
doit() {
# echo Job: $9
# In following awk script, read "p1s" as a flag meaning "p1 has been seen"
awk -v p1="$1" -v p2="$2" -v p3="$3" -v p4="$4" -v p5="$5" -v p6="$6" -v p7="$7" -v p8="$8" '
$0 ~ p1 && !p1s {print; p1s++;}
$0 ~ p2 && !p2s {print; p2s++;}
$0 ~ p3 && !p3s {print; p3s++;}
$0 ~ p4 && !p4s {print; p4s++;}
$0 ~ p5 && !p5s {print; p5s++;}
$0 ~ p6 && !p6s {print; p6s++;}
$0 ~ p7 && !p7s {print; p7s++;}
$0 ~ p8 && !p8s {print; p8s++;}
{if(p1s+p2s+p3s+p4s+p5s+p6s+p7s+p8s==8)exit}
' patterns.txt
}
export -f doit
# Next line effectively uses 8 cores at a time to each search for 8 items
parallel -N8 doit {1} {2} {3} {4} {5} {6} {7} {8} {#} < patterns.txt
Just for fun, here is what it does to my CPU - blue means maxed out, and see if you can see where the job started in the green CPU history!
Other Thoughts
The above benefits from the fact that the input files are relatively well sorted, so it is worth looking for 8 things at a time because they are likely close to each other in the input file, and I can therefore avoid the overhead associated with creating one process per sought term. However, if your data are not well sorted, that may mean that you waste a lot of time looking further through the file than necessary to find the next 7, or 6 other items. In that case, you may be better off with this:
parallel grep -m1 "{}" patterns.txt < patterns.txt
Original Answer
Having looked at the size of your files, I now think awk is probably not the way to go, but GNU Parallel maybe is. I tried parallelising the problem two ways.
Firstly, I search for 8 items at a time in a single pass through the input file so that I have less to search through with the second set of greps that use the -m 1 parameter.
Secondly, I do as many of these "8-at-a-time" greps in parallel as I have CPU cores.
I use the GNU Parallel job number {#} as a unique temporary filename, and only create 16 (or however many CPU cores you have) temporary files at a time. The temporary files are prefixed ss (for sub-search) so they can all be deleted easily enough when testing.
The speedup seems to be a factor of about 4 times on my machine. I used /usr/share/dict/words as my test files.
#!/bin/bash
# Create a bash function that GNU Parallel can call to search for 8 things at once
doit() {
# echo Job: $9
# Make a temp filename using GNU Parallel's job number which is $9 here
TEMP=ss-${9}.txt
grep -E "$1|$2|$3|$4|$5|$6|$7|$8" patterns.txt > $TEMP
for i in $1 $2 $3 $4 $5 $6 $7 $8; do
grep -m1 "$i" $TEMP
done
rm $TEMP
}
export -f doit
# Next line effectively uses 8 cores at a time to each search for 8 items
parallel -N8 doit {1} {2} {3} {4} {5} {6} {7} {8} {#} < patterns.txt
You can loop over your patterns like this (assuming you're using Bash):
while read -r line; do
grep -m 1 "$line" patterns_copy.txt
done < patterns.txt > output.txt
Or, in one line:
while read -r line; do grep -m 1 "$line" patterns_copy.txt; done < patterns.txt > output.txt
For parallel processing, you can start the processes as background jobs:
while read -r line; do
grep -m 1 "$line" patterns_copy.txt &
read -r line && grep -m 1 "$line" patterns_copy.txt &
# Repeat the previous line as desired
wait # Wait for greps of this loop to finish
done < patterns.txt > output.txt
This is not really elegant as for each loop it will wait for the slowest grep to finish, but should still be faster than just one grep per loop.
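If you'd rather avoid one grep per pattern altogether, a single-pass awk sketch of the same idea (each pattern matches at most once and is dropped after its first hit) could look like this, shown here with small illustrative files:

```shell
# Illustrative inputs (stand-ins for the real patterns.txt / patterns_copy.txt)
printf 'apple\nbanana\n' > patterns.txt
printf 'apple pie\napple tart\nbanana split\n' > patterns_copy.txt

# First pass loads the patterns; in the second pass each line consumes at most
# one pattern, and awk exits early once no patterns remain.
awk 'NR==FNR { pat[$0]; n++; next }
     { for (p in pat) if ($0 ~ p) { print; delete pat[p]; n--; break }
       if (n == 0) exit }' patterns.txt patterns_copy.txt > output.txt
```

Like the -m 1 loop, this reads the data file only until every pattern has been seen once, but it does so in a single process.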

how to call a variable using awk in a for loop? [duplicate]

How do I iterate over a range of numbers in Bash when the range is given by a variable?
I know I can do this (called "sequence expression" in the Bash documentation):
for i in {1..5}; do echo $i; done
Which gives:
1
2
3
4
5
Yet, how can I replace either of the range endpoints with a variable? This doesn't work:
END=5
for i in {1..$END}; do echo $i; done
Which prints:
{1..5}
for i in $(seq 1 $END); do echo $i; done
edit: I prefer seq over the other methods because I can actually remember it ;)
The seq method is the simplest, but Bash has built-in arithmetic evaluation.
END=5
for ((i=1;i<=END;i++)); do
echo $i
done
# ==> outputs 1 2 3 4 5 on separate lines
The for ((expr1; expr2; expr3)) construct works just like for (expr1; expr2; expr3) in C and similar languages, and like other ((expr)) cases, Bash treats the expressions as arithmetic.
discussion
Using seq is fine, as Jiaaro suggested. Pax Diablo suggested a Bash loop to avoid calling a subprocess, with the additional advantage of being more memory-friendly if $END is very large. Zathrus spotted a typical bug in the loop implementation, and also hinted that since i is a text variable, continual conversions to and from numbers are performed, with an associated slow-down.
integer arithmetic
This is an improved version of the Bash loop:
typeset -i i END
let END=5 i=1
while ((i<=END)); do
echo $i
…
let i++
done
If the only thing that we want is the echo, then we could write echo $((i++)).
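Putting that together, the whole loop body collapses to a single line (Bash/ksh arithmetic, as above):

```shell
# echo $((i++)) prints the current value, then increments it
i=1 END=5
while ((i <= END)); do echo $((i++)); done
```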
ephemient taught me something: Bash allows for ((expr;expr;expr)) constructs. Since I've never read the whole man page for Bash (like I've done with the Korn shell (ksh) man page, and that was a long time ago), I missed that.
So,
typeset -i i END # Let's be explicit
for ((i=1;i<=END;++i)); do echo $i; done
seems to be the most memory-efficient way (it won't be necessary to allocate memory to consume seq's output, which could be a problem if END is very large), although probably not the “fastest”.
the initial question
eschercycle noted that the {a..b} Bash notation works only with literals; true, according to the Bash manual. One can overcome this obstacle with a single (internal) fork() without an exec() (as is the case with calling seq, which being another image requires a fork+exec):
for i in $(eval echo "{1..$END}"); do
Both eval and echo are Bash builtins, but a fork() is required for the command substitution (the $(…) construct).
Here is why the original expression didn't work.
From man bash:
Brace expansion is performed before any other expansions, and any characters special to other expansions are preserved in the result. It is strictly textual. Bash does not apply any syntactic interpretation to the context of the expansion or the text between the braces.
So, brace expansion is something done early as a purely textual macro operation, before parameter expansion.
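You can see the ordering directly: brace expansion has already run (and given up, since $END is not a literal) by the time parameter expansion happens, while eval forces a second full pass:

```shell
END=5
echo {1..$END}        # brace expansion runs first, sees a non-literal: {1..5}
eval echo {1..$END}   # $END is already 5 by eval time, so a second pass expands fully
```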
Shells are highly optimized hybrids between macro processors and more formal programming languages. In order to optimize the typical use cases, the language is made rather more complex and some limitations are accepted.
Recommendation
I would suggest sticking with POSIX[1] features. This means using for i in <list>; do, if the list is already known; otherwise, use while or seq, as in:
#!/bin/sh
limit=4
i=1; while [ $i -le $limit ]; do
echo $i
i=$(($i + 1))
done
# Or -----------------------
for i in $(seq 1 $limit); do
echo $i
done
[1] Bash is a great shell and I use it interactively, but I don't put bash-isms into my scripts. Scripts might need a faster shell, a more secure one, a more embedded-style one. They might need to run on whatever is installed as /bin/sh, and then there are all the usual pro-standards arguments. Remember shellshock, aka bashdoor?
The POSIX way
If you care about portability, use the example from the POSIX standard:
i=2
end=5
while [ $i -le $end ]; do
echo $i
i=$(($i+1))
done
Output:
2
3
4
5
Things which are not POSIX:
(( )) without dollar, although it is a common extension as mentioned by POSIX itself.
[[. [ is enough here. See also: What is the difference between single and double square brackets in Bash?
for ((;;))
seq (GNU Coreutils)
{start..end}, and that cannot work with variables as mentioned by the Bash manual.
let i=i+1: POSIX 7 2. Shell Command Language does not contain the word let, and it fails on bash --posix 4.3.42
the dollar sign in $(($i+1)) might be required, but I'm not sure. POSIX 7 2.6.4 Arithmetic Expansion says:
If the shell variable x contains a value that forms a valid integer constant, optionally including a leading plus or minus sign, then the arithmetic expansions "$((x))" and "$(($x))" shall return the same value.
but reading it literally that does not imply that $((x+1)) expands since x+1 is not a variable.
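That quoted guarantee is easy to check for the simple variable case; both spellings below should agree in any POSIX shell:

```shell
# $((x + 1)) and $(($x + 1)) evaluate identically for a plain integer variable
x=41
echo "$((x + 1)) $(($x + 1))"
```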
You can use
for i in $(seq $END); do echo $i; done
Another layer of indirection:
for i in $(eval echo {1..$END}); do
⋮
I've combined a few of the ideas here and measured performance.
TL;DR Takeaways:
seq and {..} are really fast
for and while loops are slow
$( ) is slow
for (( ; ; )) loops are slower
$(( )) is even slower
Worrying about N numbers in memory (seq or {..}) is silly (at least up to 1 million.)
These are not conclusions. You would have to look at the C code behind each of these to draw conclusions. This is more about how we tend to use each of these mechanisms for looping over code. Most single operations are close enough in speed that it's not going to matter in most cases. But a mechanism like for (( i=1; i<=1000000; i++ )) is visibly many operations, and many more operations per loop than you get from for i in $(seq 1 1000000). And that may not be obvious to you, which is why doing tests like this is valuable.
Demos
# show that seq is fast
$ time (seq 1 1000000 | wc)
1000000 1000000 6888894
real 0m0.227s
user 0m0.239s
sys 0m0.008s
# show that {..} is fast
$ time (echo {1..1000000} | wc)
1 1000000 6888896
real 0m1.778s
user 0m1.735s
sys 0m0.072s
# Show that for loops (even with a : noop) are slow
$ time (for i in {1..1000000} ; do :; done | wc)
0 0 0
real 0m3.642s
user 0m3.582s
sys 0m0.057s
# show that echo is slow
$ time (for i in {1..1000000} ; do echo $i; done | wc)
1000000 1000000 6888896
real 0m7.480s
user 0m6.803s
sys 0m2.580s
$ time (for i in $(seq 1 1000000) ; do echo $i; done | wc)
1000000 1000000 6888894
real 0m7.029s
user 0m6.335s
sys 0m2.666s
# show that C-style for loops are slower
$ time (for (( i=1; i<=1000000; i++ )) ; do echo $i; done | wc)
1000000 1000000 6888896
real 0m12.391s
user 0m11.069s
sys 0m3.437s
# show that arithmetic expansion is even slower
$ time (i=1; e=1000000; while [ $i -le $e ]; do echo $i; i=$(($i+1)); done | wc)
1000000 1000000 6888896
real 0m19.696s
user 0m18.017s
sys 0m3.806s
$ time (i=1; e=1000000; while [ $i -le $e ]; do echo $i; ((i=i+1)); done | wc)
1000000 1000000 6888896
real 0m18.629s
user 0m16.843s
sys 0m3.936s
$ time (i=1; e=1000000; while [ $i -le $e ]; do echo $((i++)); done | wc)
1000000 1000000 6888896
real 0m17.012s
user 0m15.319s
sys 0m3.906s
# even a noop is slow
$ time (i=1; e=1000000; while [ $((i++)) -le $e ]; do :; done | wc)
0 0 0
real 0m12.679s
user 0m11.658s
sys 0m1.004s
If you need zero-padded numbers, you might like this:
for ((i=7; i<=12; i++)); do printf "%02d\n" "$i"; done
that will yield
07
08
09
10
11
12
If you're on BSD / OS X you can use jot instead of seq:
for i in $(jot $END); do echo $i; done
This works fine in bash:
END=5
i=1 ; while [[ $i -le $END ]] ; do
echo $i
((i = i + 1))
done
There are many ways to do this, however the ones I prefer is given below
Using seq
Synopsis from man seq
$ seq [-w] [-f format] [-s string] [-t string] [first [incr]] last
Syntax
Full command
seq first incr last
first is starting number in the sequence [is optional, by default:1]
incr is increment [is optional, by default:1]
last is the last number in the sequence
Example:
$ seq 1 2 10
1 3 5 7 9
Only with first and last:
$ seq 1 5
1 2 3 4 5
Only with last:
$ seq 5
1 2 3 4 5
Using {first..last..incr}
Here first and last are mandatory and incr is optional
Using just first and last
$ echo {1..5}
1 2 3 4 5
Using incr
$ echo {1..10..2}
1 3 5 7 9
You can use this even for characters like below
$ echo {a..z}
a b c d e f g h i j k l m n o p q r s t u v w x y z
I know this question is about bash, but - just for the record - ksh93 is smarter and implements it as expected:
$ ksh -c 'i=5; for x in {1..$i}; do echo "$x"; done'
1
2
3
4
5
$ ksh -c 'echo $KSH_VERSION'
Version JM 93u+ 2012-02-29
$ bash -c 'i=5; for x in {1..$i}; do echo "$x"; done'
{1..5}
This is another way:
end=5
for i in $(bash -c "echo {1..${end}}"); do echo $i; done
If you want to stay as close as possible to the brace-expression syntax, try out the range function from bash-tricks' range.bash.
For example, all of the following will do the exact same thing as echo {1..10}:
source range.bash
one=1
ten=10
range {$one..$ten}
range $one $ten
range {1..$ten}
range {1..10}
It tries to support the native bash syntax with as few "gotchas" as possible: not only are variables supported, but the often-undesirable behavior of invalid ranges being supplied as strings (e.g. for i in {1..a}; do echo $i; done) is prevented as well.
The other answers will work in most cases, but they all have at least one of the following drawbacks:
Many of them use subshells, which can harm performance and may not be possible on some systems.
Many of them rely on external programs. Even seq is a binary which must be installed to be used, must be loaded by bash, and must contain the program you expect, for it to work in this case. Ubiquitous or not, that's a lot more to rely on than just the Bash language itself.
Solutions that do use only native Bash functionality, like @ephemient's, will not work on alphabetic ranges, like {a..z}; brace expansion will. The question was about ranges of numbers, though, so this is a quibble.
Most of them aren't visually similar to the {1..10} brace-expanded range syntax, so programs that use both may be a tiny bit harder to read.
@bobbogo's answer uses some of the familiar syntax, but does something unexpected if the $END variable is not a valid range "bookend" for the other side of the range. If END=a, for example, an error will not occur and the verbatim value {1..a} will be echoed. This is the default behavior of Bash, as well--it is just often unexpected.
Disclaimer: I am the author of the linked code.
These are all nice but seq is supposedly deprecated and most only work with numeric ranges.
If you enclose your for loop in double quotes, the start and end variables will be dereferenced when you echo the string, and you can ship the string right back to Bash for execution. $i needs to be escaped with a backslash so it is NOT evaluated before being sent to the subshell.
RANGE_START=a
RANGE_END=z
echo -e "for i in {$RANGE_START..$RANGE_END}; do echo \${i}; done" | bash
This output can also be assigned to a variable:
VAR=`echo -e "for i in {$RANGE_START..$RANGE_END}; do echo \\${i}; done" | bash`
The only "overhead" this should generate should be the second instance of bash so it should be suitable for intensive operations.
Replace {} with (( )):
tmpstart=0;
tmpend=4;
for (( i=$tmpstart; i<=$tmpend; i++ )) ; do
echo $i ;
done
Yields:
0
1
2
3
4
If you're doing shell commands and you (like I) have a fetish for pipelining, this one is good:
seq 1 $END | xargs -I {} echo {}
If you don't want to use seq, eval, jot, or the arithmetic-expansion form (e.g. for ((i=1;i<=END;i++))), or other loops (e.g. while), and you don't want printf and are happy with echo only, then this simple workaround might fit your budget:
a=1; b=5; d='for i in {'$a'..'$b'}; do echo -n "$i"; done;'; echo "$d" | bash
PS: My bash doesn't have the seq command anyway.
Tested on Mac OSX 10.6.8, Bash 3.2.48
This works in Bash and Korn shell, and can also go from higher to lower numbers. It's probably not the fastest or prettiest, but it works well enough. It handles negatives, too.
function num_range {
# Return a range of whole numbers from beginning value to ending value.
# >>> num_range start end
# start: Whole number to start with.
# end: Whole number to end with.
typeset s e v
s=${1}
e=${2}
if (( ${e} >= ${s} )); then
v=${s}
while (( ${v} <= ${e} )); do
echo ${v}
((v=v+1))
done
elif (( ${e} < ${s} )); then
v=${s}
while (( ${v} >= ${e} )); do
echo ${v}
((v=v-1))
done
fi
}
function test_num_range {
num_range 1 3 | egrep "1|2|3" | assert_lc 3
num_range 1 3 | head -1 | assert_eq 1
num_range -1 1 | head -1 | assert_eq "-1"
num_range 3 1 | egrep "1|2|3" | assert_lc 3
num_range 3 1 | head -1 | assert_eq 3
num_range 1 -1 | tail -1 | assert_eq "-1"
}