nodemon run in background stops on terminal input - jobs

I want to use nodemon, but still have the use of my terminal, so I ran
$ nodemon &> site.log &
But as soon as I type a single character at the prompt, nodemon stops with this message.
[1] + 45260 suspended (tty input) nodemon &> site.log
What's going on? How can I make this stop happening?
I'm running zsh on macOS.
EDIT:
I found this answer, which explains it perfectly: apparently nodemon tries to read from stdin, and Unix systems will suspend background processes that try to read from stdin. So my question now becomes:
How do I get nodemon to stop reading from stdin? And, more generally, is there a way to get an arbitrary process to stop reading from stdin?

I figured it out. I have to redirect the input from /dev/null.
$ nodemon < /dev/null &> site.log &
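More generally, the same trick works for any program that gets suspended with a "tty input" message when backgrounded: redirect its stdin from /dev/null so it never tries to read from the terminal. A minimal sketch, where somecommand and the log file name are just placeholders:
# detach stdin so the background job is never suspended waiting for tty input
somecommand < /dev/null &> somecommand.log &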

Exit code from docker-compose breaking while loop

I've got a case: there's a WordPress project where I'm supposed to create a script that updates plugins and commits the source changes to a separate branch. While doing this I ran into a strange issue.
Input variable:
akimset,4.0.3
all-in-one-wp-migration,6.71
What I wanted to do was iterate over each line of this variable:
while read -r line; do
echo $line
done <<< "$variable"
and this piece of code worked perfectly fine, but when I added the docker-compose logic everything started to act weirdly:
while read -r line; do
docker-compose run backend echo $line
done <<< "$variable"
now only one line was executed, after which the script exited with 0 and stopped iterating. I found a workaround with:
echo $variable > file.tmp
for line in $(cat file.tmp); do
docker-compose run backend echo $line
done
and that works perfectly fine and iterates over each line. Now my question is: why? Zsh and shell scripting can be a bit mysterious, and running into edge cases like this one isn't anything new for me, but I'm wondering why a successfully executed command broke the input stream.
The problem with this
while read -r line; do
docker-compose run backend echo $line
done <<< "$variable"
is that docker-compose run allocates a pseudo-TTY. After the first execution of docker-compose run (the first loop iteration), it reads from the loop's standard input, using up the next lines as its own input.
You have to pass the -T parameter to docker-compose run in order to keep it from allocating a pseudo-TTY. A working version is:
while read -r line; do
docker-compose run -T backend echo $line
done <<< "$variable"
Update
The above solution is for Docker version 18 and docker-compose version 1.17. For newer versions the -T parameter is not working, but you can try:
-d instead of -T to run the container in background (detached) mode, BUT then you will not see stdout in the terminal.
If you have docker-compose v1.25.0, add the parameter stdin_open: false to the service in your docker-compose.yml.
I was able to solve the same problem by using a different loop:
for line in $(echo "$variable")
do
docker-compose run backend echo $line
done
I ran into a nearly identical problem about a year ago, though the shell was bash (the command/problem was also slightly different, but it applied to your issue). I ended up writing the script in zsh.
I'm not certain what's going on, but it's not actually the exit code (you can confirm by running the following):
variable=$'akimset,4.0.3\nall-in-one-wp-migration,6.71'
while read line; do docker-compose run backend print "$line"; print "$?"; done <<<($variable)
... which yielded ...
(akimset,4.0.3
0
(I'm not at all sure where the ( came from and perhaps solving that would answer why this problem happens)
Working Script
for line in "${(f)variable}"; do
docker-compose run backend echo "$line"
done
The (f) flag tells zsh to split on newlines; the "${(f)variable}" is in quotes so that any blank lines aren't lost. If the variable contains escape sequences that you want to keep from being converted to their corresponding values (something that I often need when reading file contents from a variable), make the flags (fV).

Intellij Idea - ignoring non-zero exit code of external tool

I'm using an external tool to run the fuser -k 1099 command before actually launching my run configuration.
But if the external tool returns a non-zero status, the build configuration stops. That is perfectly correct, but I cannot find any way to ignore the failure. If it were plain bash, I'd do something like fuser -k 1099 || true. But in IDEA, that seems not to be possible.
Any ideas?
You can use /bin/bash as the program and the following as the arguments:
-c 'fuser -k 1099; true'
This way the exit code of the tool will be always zero.
The correct answer was not working for me (see my comment under it). I then found a solution, which is to create a script that exits with 0. Here it is under Windows (let us call it KillMyExeNoError.bat):
taskkill /IM my.exe /F
exit /B 0
Then put C:\Path\To\KillMyExeNoError.bat in Program and leave Arguments empty.
Maybe under Linux you need to put bash in Program and /path/to/script.sh in Arguments.
Not the best solution, since it would be good not to have to create a separate script, but at least it works.
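For the Linux case mentioned above, a minimal wrapper script could look like the sketch below (the fuser -k 1099 call mirrors the original question; the script name and location are up to you). Point Program at bash and Arguments at the script path, as described.
#!/bin/bash
# Kill whatever holds port 1099, but always exit 0 so the
# IDEA run configuration is not aborted when nothing was killed.
fuser -k 1099 || true
exit 0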

How to make ffmpeg exit when Input is broken

I have written a bash script to keep an ffmpeg command up and running:
#!/bin/bash
while :
do
echo `ffmpeg -re -i http://domain.com/index400.m3u8 -vcodec copy -acodec copy -f mpegts udp://127.0.0.1:10000?pkt_size=1316`
done
The problem is, sometimes the input is broken, yet ffmpeg does not exit when that happens so that it can be restarted by the above script. Instead, the same process keeps running even though it is not transferring any packets to the UDP address (the output), and I need to manually go into the terminal and kill it (kill -9 #processID).
I need a way to make ffmpeg kill its own process whenever the input is broken.
Appreciate your help.

Run a php script in background on debian (Apache)

I'm trying to make push notifications work on my Debian VPS (Apache2, MySQL).
I use a php script from this tutorial (http://www.raywenderlich.com/3525/apple-push-notification-services-tutorial-part-2).
Basically, the script is put in an infinite loop that checks a MySQL table for new records every couple of seconds. The tutorial says it should be run as a background process.
// This script should be run as a background process on the server. It checks
// every few seconds for new messages in the database table push_queue and
// sends them to the Apple Push Notification Service.
//
// Usage: php push.php development &
So I have four questions.
How do I start the script from the terminal? What should I type? The script location on the server is:
/var/www/development_folder/scripts/push2/push.php
How can I kill it if I need to (without having to restart Apache)?
Since the push notification is essential, I need a way to check if the script is running.
The code (from the tutorial) calls a function if something goes wrong:
function fatalError($message)
{
writeToLog('Exiting with fatal error: ' . $message);
exit;
}
Maybe I can put something in there to restart the script? But it would also be nice to have a cron job or something that checks every 5 minutes or so whether the script is running, and starts it if it isn't.
4 - Can I make the script automatically start after an Apache or MySQL restart? If the server crashes or something else happens that needs an Apache restart?
Thanks a lot in advance
You could run the script with the following command:
nohup php /var/www/development_folder/scripts/push2/push.php > /dev/null &
The nohup means that the command should not quit (it ignores the hangup signal) when you e.g. close your terminal window. If you don't care about this you could just start the process with "php /var/www/development_folder/scripts/push2/push.php &" instead. PS! nohup logs the script output to a file called nohup.out by default; if you do not want this, just add > /dev/null as I've done here. The & at the end means that the process will run in the background.
I would only recommend starting the push script like this while you test your code. The script should be run as a daemon at system-startup instead (see 4.) if it's important that it runs all the time.
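If you would rather keep the script's output for debugging instead of discarding it, you can redirect both stdout and stderr to a log file; the log path below is just an example:
nohup php /var/www/development_folder/scripts/push2/push.php > /var/log/push.log 2>&1 &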
Just type
ps ax | grep push.php
and you will get the process id (pid). It will look something like this:
4530 pts/3 S 0:00 php /var/www/development_folder/scripts/push2/push.php
The pid is the first number you'll see. You can then run the following command to kill the script:
kill -9 4530
If you run ps ax | grep push.php again the process should now be gone.
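On most Linux systems you can also do the lookup and the kill in one step with pkill, which matches against the full command line:
pkill -f push.php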
I would recommend that you make a cronjob that checks if the php-script is running, and if not, starts it. You could do this with ps ax and grep checks inside your shell script. Something like this should do it:
if ! ps ax | grep -v grep | grep 'push.php' > /dev/null
then
nohup php /var/www/development_folder/scripts/push2/push.php > /dev/null &
else
echo "push-script is already running"
fi
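Save that check as an executable script and add a crontab entry for it, for example every 5 minutes (the script path here is just an example):
# added via: crontab -e
*/5 * * * * /usr/local/bin/check_push.sh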
If you want the script to start up after booting up the system you could make a file in /etc/init.d (e.g. /etc/init.d/mypushscript) with something like this inside:
php /var/www/development_folder/scripts/push2/push.php
(You should probably have a lot more in this file.)
You would also need to run the following commands:
chmod +x /etc/init.d/mypushscript
update-rc.d mypushscript defaults
to make the script start at boot time. I have not tested this, so please do more research before making your own init script!
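For illustration only, a bare-bones init script could follow the usual start/stop skeleton below. This is an untested sketch; a real init script should add an LSB header and proper pid handling.
#!/bin/sh
# /etc/init.d/mypushscript - minimal sketch, not production ready
case "$1" in
  start)
    nohup php /var/www/development_folder/scripts/push2/push.php > /dev/null 2>&1 &
    ;;
  stop)
    pkill -f push.php
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
exit 0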

How do I execute a command every time after ssh'ing from one machine to another?

How do I execute a command every time after ssh'ing from one machine to another?
e.g
ssh mymachine
stty erase ^H
I'd rather just have "stty erase ^H" execute every time after my ssh connection completes.
This command can't simply go into my .zshrc file; i.e. for local sessions, I can't run the command (it screws up my keybindings). But I need it run for my remote sessions.
Put the commands in ~/.ssh/rc
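For this question, that file would contain just the one command; ~/.ssh/rc is run by sshd at each login, before your shell starts (see the screen/byobu caveat further down):
# ~/.ssh/rc - run by sshd for every incoming ssh login
stty erase ^H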
You can put something like this into your shell's startup file:
if [ -n "$SSH_CONNECTION" ]
then
stty erase ^H
fi
The -n test determines whether SSH_CONNECTION is set, which happens only when you are logged in via SSH.
If you're logging into a *nix box with a shell, why not put it in your shell startup?
.bashrc or .profile in most cases.
Assuming a linux target, put it in your .profile
Try adding the command below to the end of your ~/.bashrc. It will be executed upon logoff. Do you want this command only executed when logging off an ssh session? What about local sessions, etc.?
trap 'stty erase ^H; exit 0' 0
You probably could set up a .logout file from /etc/profile using this same pattern as well.
An answer for us, screen/byobu users:
geocar's solution will not work, as screen will complain that "Must be connected to a terminal." (This is probably caused by the fact that .ssh/rc is processed before the shell is started. See the LOGIN PROCESS section of man 8 sshd.)
Robert's solution is better here, but since screen and byobu open their own bash instance, we need to avoid infinite recursion. So here is an adjusted, byobu-friendly version:
## RUN BYOBU IF SSH'D ##
## '''''''''''''''''' ##
# (but only if this is a login shell)
if shopt -q login_shell
then
if [ -n "$SSH_CONNECTION" ]
then
byobu
exit
fi
fi
Note that I also added exit after byobu, since IMO if you use byobu in the first place, you normally don't want to do anything outside of it.