Run a php script in background on debian (Apache)

I'm trying to make push notifications work on my Debian VPS (Apache2, MySQL).
I use a php script from this tutorial (http://www.raywenderlich.com/3525/apple-push-notification-services-tutorial-part-2).
Basically, the script is put in an infinite loop that checks a MySQL table for new records every couple of seconds. The tutorial says it should be run as a background process.
// This script should be run as a background process on the server. It checks
// every few seconds for new messages in the database table push_queue and
// sends them to the Apple Push Notification Service.
//
// Usage: php push.php development &
So I have four questions.
1 - How do I start the script from the terminal? What should I type? The script location on the server is:
/var/www/development_folder/scripts/push2/push.php
2 - How can I kill it if I need to (without having to restart Apache)?
3 - Since the push notifications are essential, I need a way to check if the script is running.
The code (from the tutorial) calls a function if something goes wrong:
function fatalError($message)
{
    writeToLog('Exiting with fatal error: ' . $message);
    exit;
}
Maybe I can put something in there to restart the script? But it would also be nice to have a cron job or something that checks every 5 minutes or so if the script is running, and starts it if it isn't.
4 - Can I make the script automatically start after an Apache or MySQL restart? Or if the server crashes or something else happens that requires an Apache restart?
Thanks a lot in advance

You could run the script with the following command:
nohup php /var/www/development_folder/scripts/push2/push.php > /dev/null &
The nohup means that the command should not quit (it ignores the hangup signal) when you e.g. close your terminal window. If you don't care about this you could just start the process with "php /var/www/development_folder/scripts/push2/push.php &" instead. PS! By default nohup logs the script output to a file called nohup.out; if you do not want this, just add > /dev/null as I've done here. The & at the end means that the process will run in the background.
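If you would rather keep the output for debugging, you could redirect it to a log file instead of /dev/null (the log path here is just an example):
nohup php /var/www/development_folder/scripts/push2/push.php >> /var/log/push.log 2>&1 &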
I would only recommend starting the push script like this while you test your code. The script should be run as a daemon at system-startup instead (see 4.) if it's important that it runs all the time.
Just type
ps ax | grep push.php
and you will get the process id (pid). It will look something like this:
4530 pts/3 S 0:00 php /var/www/development_folder/scripts/push2/push.php
The pid is the first number you'll see. You can then run the following command to kill the script:
kill -9 4530
If you run ps ax | grep push.php again the process should now be gone.
I would recommend that you make a cronjob that checks if the php-script is running, and if not, starts it. You could do this with ps ax and grep checks inside your shell script. Something like this should do it:
if ! ps ax | grep -v grep | grep 'push.php' > /dev/null
then
    nohup php /var/www/development_folder/scripts/push2/push.php > /dev/null &
else
    echo "push-script is already running"
fi
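If you save that check as a small shell script, e.g. /usr/local/bin/check_push.sh (the name and location are just an example), a crontab entry could run it every 5 minutes:
*/5 * * * * /bin/sh /usr/local/bin/check_push.sh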
If you want the script to start up after booting up the system you could make a file in /etc/init.d (e.g. /etc/init.d/mypushscript) with something like this inside:
php /var/www/development_folder/scripts/push2/push.php
(You should probably have a lot more in this file.)
You would also need to run the following commands:
chmod +x /etc/init.d/mypushscript
update-rc.d mypushscript defaults
to make the script start at boot-time. I have not tested this so please do more research before making your own init script!
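For orientation only, a very rough sketch of what /etc/init.d/mypushscript could contain (the pidfile path is an assumption, and a proper Debian init script would also need LSB headers):
#!/bin/sh
case "$1" in
  start)
    # start the push script in the background and remember its pid
    nohup php /var/www/development_folder/scripts/push2/push.php > /dev/null 2>&1 &
    echo $! > /var/run/mypushscript.pid
    ;;
  stop)
    # stop it again using the recorded pid
    kill $(cat /var/run/mypushscript.pid) && rm -f /var/run/mypushscript.pid
    ;;
  *)
    echo "Usage: /etc/init.d/mypushscript {start|stop}"
    exit 1
    ;;
esac
exit 0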

Related

monitor bash script execution using monit

We have just started using monit for process monitoring and are pretty new to it. I have a bash script at /home/ubuntu/launch_example.sh that runs continuously. Is it possible to monitor it with monit? Monit should start the script if the bash script terminates. What should the syntax be? I tried the syntax below, but not all of the commands are executed as the ubuntu user (the shell script calls some python scripts, for example).
check process launch_example
  matching "launch_example"
  start program = "/bin/bash -c '/home/ubuntu/launch_example.sh'"
    as uid ubuntu and gid ubuntu
  stop program = "/bin/bash -c '/home/ubuntu/launch_example.sh'"
    as uid ubuntu and gid ubuntu
The simple answer is "no". Monit is just for monitoring and is not some kind of supervisor/process manager. So if you want to monitor your long running executable, you have to wrap it.
check process launch_example with pidfile /run/launch.pid
  start program = "/bin/bash -c 'nohup /home/ubuntu/launch_example.sh &'"
    as uid ubuntu and gid ubuntu
  stop program = "/bin/bash -c 'kill $(cat /run/launch.pid)'"
    as uid ubuntu and gid ubuntu
This quick'n'dirty way also needs an additional line in your launch_example.sh to write the pidfile (pidfile matching should always be preferred over string matching) - it could be just the first line after the shebang. It simply writes the current process ID to the pidfile. Nothing fancy here ;)
echo $$ > /run/launch.pid
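In other words, the top of launch_example.sh could look roughly like this (the rest of the script stays unchanged):
#!/bin/bash
# write our own pid so monit can find the process via the pidfile
echo $$ > /run/launch.pid
# ... existing script body follows ...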
In fact, it's not even hard to convert your script into a systemd unit; here is an example of how to do that. User binding, restarts, pidfile, and "start-on-boot" can then be managed through systemd (e.g. start program = "/usr/bin/systemctl start my_unit").
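As a rough sketch (the unit name, user, and paths are just assumptions), such a unit saved as /etc/systemd/system/launch_example.service could look like:
[Unit]
Description=launch_example long-running script

[Service]
ExecStart=/bin/bash /home/ubuntu/launch_example.sh
User=ubuntu
Group=ubuntu
Restart=on-failure

[Install]
WantedBy=multi-user.target
After systemctl daemon-reload and systemctl enable --now launch_example, systemd handles start-on-boot and restarts without any pidfile handling.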

Is it possible to run local servers on AWS-CodeBuild?

Good Morning,
I'm using CodeBuild to test my application, and I was wondering if it's possible to run a local server inside a build.
I created an NPM script to start a local server, but every time I run the tests, CodeBuild passes through the command without waiting.
I searched the AWS documentation and they say to use the "nohup" command, but it doesn't work for me.
Just to be clear, my expectation is that CodeBuild runs the command, waits for it to finish, and proceeds to the next command without closing the open server.
Any of you guys have an idea?
Command:
- nohup yarn start-server
Start a background process and wait for it to complete later:
nohup sleep 30 & echo $! > pidfile
…
wait $(cat pidfile)
Start a background process and do not wait for it to ever complete:
nohup sleep 30 & disown $!
Start a background process and kill it later:
nohup sleep 30 & echo $! > pidfile
…
kill $(cat pidfile)
https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-background-tasks.html
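Applied to the buildspec above, the commands could look roughly like this (yarn test is only a placeholder for however you run your tests):
- nohup yarn start-server & echo $! > pidfile
- yarn test
- kill $(cat pidfile)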

Raspberry Pi Zero W start script on startup

I'm creating a time-lapse camera to attach to a pair of glasses.
I tried out my script (python 3) and it works, but I need to get it to work on startup, since it will be powered by a power bank and I can't start the script manually because of that. (The filename is "GlassCam.py", in the folder named "GlassCam".)
This is what I've tried in the command line:
sudo nano ~/.config/lxsession/LXDE-pi/autostart
then, in the file that opens, I added
sudo /usr/bin/python3 /home/pi/GlassCam/GlassCam.py
(control+x and then y to save)
yet it won't start when I reboot it or shut it down and plug it back in
Try creating a cron job: sudo crontab -e and then add your script at the end of the file (to be executed on reboot) like this: @reboot python /home/pi/GlassCam/GlassCam.py &
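Since your script uses python 3, a variant of that line with the full interpreter path and a log file (the log path is just an example) makes failures easier to debug:
@reboot /usr/bin/python3 /home/pi/GlassCam/GlassCam.py >> /home/pi/GlassCam/cron.log 2>&1 &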

Can't terminate process which is launched at bootup with at daemon

I have a fooinit.rt process launched at boot from /etc/init.d/boot.local.
Here is the boot.local file:
...
/bin/fooinit.rt &
...
I create an at job in order to kill fooinit.rt; it is triggered from C code. I also wrote a stop script in which kill -9 `pidof fooinit.rt` is written.
Here is the stop script:
#!/bin/sh
proc_file="/tmp/gdg_list$$"
ps -ef | grep $USER > $proc_file
echo "Stop script is invoked!!"
suff=".rt"
pid=`fgrep "$suff" $proc_file | awk '{print $2}'`
echo "pid is '$pid'"
rm $proc_file
When the at job timer expires, the 'kill -9 pid' (of fooinit.rt) command cannot terminate the fooinit.rt process!!
I checked the pid number that is printed, and the sentence "Stop script is invoked!!" shows up OK!
Here is the "at" job command in the C code (I verified that the stop script is called 1 min later):
...
case 708: /* There is a trigger signal here*/
{
result = APP_RES_PRG_OK;
system("echo '/sbin/stop' | at now + 1 min");
}
...
On the other hand, it works properly when launching fooinit.rt manually from the shell as an ordinary command (not from /etc/init.d/boot.local). In that case kill -9 works and terminates the fooinit.rt process.
Do you have any idea why kill -9 cannot terminate the fooinit.rt process if it is launched from /etc/init.d/boot.local?
Your solution is built around a race condition. There is no guarantee it will kill the right process (an unknowable amount of time can pass between the ps call and the attempt to make use of the pid), plus it's also vulnerable to a tmp exploit: someone could create a few thousand symlinks under /tmp called "gdg_list[1-32767]" that point to /etc/shadow and your script would overwrite /etc/shadow if it runs as root.
Another potential problem is the setting of $USER -- have you made sure it's correct? Your at job will be called as the user your C program runs as, which may not be the same user your fooinit.rt runs as.
Also, your script doesn't include a kill command at all.
A much cleaner way of doing this would be to run your fooinit.rt under some process supervisor like runit and use runit to shut it down when it's no longer needed. That avoids the pid bingo as well as the /tmp attack vector.
But even using pkill -u username -f fooinit.rt would be less racy than the script you provided.
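For illustration, a runit service for it can be as small as one executable run script (the service name fooinit here is just an example), e.g. /etc/sv/fooinit/run:
#!/bin/sh
# runit supervises this process and restarts it if it dies
exec /bin/fooinit.rt
Once that service directory is linked into runit's scan directory, the process can be stopped cleanly with sv stop fooinit instead of grepping ps output.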

running same script over many machines

I have set up a few EC2 instances, which all have a script in the home directory. I would like to run the script simultaneously across each EC2 instance, i.e. without going through a loop.
I have seen csshX for OS X for interactive terminal usage... but was wondering what the command-line code is to execute commands like
ssh user@ip.address . test.sh
to run the test.sh script across all instances since...
csshX user@ip.address.1 user@ip.address.2 user@ip.address.3 . test.sh
does not work...
I would like to do this from the command line as I would like to automate this process by adding it into a shell script.
And for bonus points... if there is a way to send a message back to the machine sending the command when it has completed running the script, that would be fantastic.
Will it be good enough to have a master shell script that runs all these things in the background? E.g.,
#!/bin/sh
pidlist="ignorethis"
for ip in ip1 ip2
do
    ssh user@$ip . test.sh &
    pidlist="$pidlist $!" # get the process number of the last forked process
done
# Now all processes are running on the remote machines, and we want to know
# when they are done.
# (EDIT) It's probably better to use the 'wait' shell built-in; that's
# precisely what it seems to be for.
while true
do
    sleep 1
    alldead=true
    for pid in $pidlist
    do
        if kill -0 $pid > /dev/null 2>&1
        then
            alldead=false
            echo some processes alive
            break
        fi
    done
    if $alldead
    then
        break
    fi
done
echo all done.
It will not be exactly simultaneous, but it should kick off the remote scripts in parallel.
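A shorter variant using the wait built-in mentioned in the edit above could look like this:
#!/bin/sh
for ip in ip1 ip2
do
    ssh user@$ip . test.sh &
done
# wait blocks until every backgrounded ssh has exited
wait
echo all done.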