Is it possible to run local servers on AWS-CodeBuild? - aws-codebuild

Good Morning,
I'm using CodeBuild to test my application,
I was wondering if it's possible to run a local server inside a build.
I created an NPM script to start a local server, but every time I run the tests, CodeBuild passes through the command without waiting.
I searched the AWS documentation, which says to use the "nohup" command, but it doesn't work for me.
Just to be clear, my expectation is that CodeBuild runs the command, waits for the server to come up, and proceeds to the next command without closing the running server.
Any of you guys have an idea?
Command:
- nohup yarn start-server

Start a background process and wait for it to complete later:
nohup sleep 30 & echo $! > pidfile
…
wait $(cat pidfile)
Start a background process and do not wait for it to ever complete:
nohup sleep 30 & disown $!
Start a background process and kill it later:
nohup sleep 30 & echo $! > pidfile
…
kill $(cat pidfile)
https://docs.aws.amazon.com/codebuild/latest/userguide/build-env-ref-background-tasks.html
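Applied to the question above, the same pattern would look something like this in the buildspec commands (a sketch, assuming the server listens on port 3000, curl is available in the build image, and the tests run via yarn test; adjust all three to your project):

- nohup yarn start-server > server.log 2>&1 & echo $! > pidfile
- until curl -sf http://localhost:3000 > /dev/null; do sleep 1; done
- yarn test
- kill $(cat pidfile)

The key detail is the trailing & after the nohup command: it detaches the server so the build can move on to the next command, while the pidfile lets you stop the server cleanly once the tests finish.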

Related

Allow job to run "reboot" command without causing failure

We have a large number of runners running a large number of jobs in one of our Gitlab CI/CD pipelines.
Each of these runners has a concurrency of 1, and they are of executor type shell.
[EDIT] These runners are AWS EC2 instances using Amazon Linux 2.
After certain jobs in the pipeline have completed, I would like them to run a reboot command to restart the runner.
However, some of these jobs will be tests. Currently, when I run the reboot command, the job fails. Obviously I can allow_failure so that the job passes, but this then means we have no way of determining whether or not the actual test has passed.
Originally, my test job looked like this:
after_script:
  - sleep 1 && reboot
I have also tried the following variations:
after_script:
  - sleep 15 && reboot
  - exit 0
after_script:
  - (sleep 15 ; reboot) &
  - exit 0
I've also tried running a shell script with the same contents.
All of these result in the same problem - ERROR: Job failed (system failure): aborted: terminated.
Can anyone think of a clever way round this?
In the end, I had to run this in a screen:
sudo screen -dm bash -c 'sleep 5; shutdown -r now;'
This allowed me, in a Gitlab CI pipeline, to run this as a script element, and immediately afterwards execute an exit command, like this:
after_script:
  - sudo screen -dm bash -c 'sleep 5; shutdown -r now;'
  - exit 0
This way, if a test fails - the job fails. If a test passes, the job passes. No need for allow_failure.
Unfortunately... I'm unsure of how to then contend with artifacts, which are collected after the after_script commands run. If anyone has any ideas about that one, please add a comment here.
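As an aside, since the runners here are Amazon Linux 2 (a systemd-based distribution), a transient systemd timer can achieve the same detachment without screen; a minimal sketch, assuming the runner user can sudo systemd-run:

after_script:
  - sudo systemd-run --on-active=5 systemctl reboot
  - exit 0

systemd-run registers the delayed reboot with the system manager itself, so it survives the job's process tree being cleaned up, which is exactly what screen -dm was providing above.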

How to use screen to issue a command in the background over an ssh session

I am used to using Linux terminals and nohup over ssh to issue commands that keep running in the background even after logging out of the ssh session. For some reason nohup seems to be broken in the latest macOS. For that reason I am trying to execute this small sample script using the screen command.
sleep 10
echo "this is my test file" > testfile
This file is saved as testscript. And then I issue the following command:
ssh sohaib@localhost screen -dm sh testscript
However nothing happens. screen just exits quietly without writing to the file testfile.
If I run this without ssh it works as desired. What am I doing wrong here?
The issue is that after your script exits, the screen exits. The -dm is for daemons, i.e., scripts that keep running.
Showing the screen exiting after 10 seconds:
On remote host (file is executable):
ttucker@merlin:~$ cat test.sh
#!/bin/bash
echo derp > /tmp/test.txt
sleep 10
Command on local machine:
[ttucker@localhost ~]$ ssh ttucker@merlin 'screen -dmS my_screen ~/test.sh'
After running it:
On remote machine, a few seconds after screen is running:
ttucker@merlin:~$ screen -ls
There is a screen on:
23141.my_screen (11/21/2016 07:05:11 PM) (Detached)
1 Socket in /var/run/screen/S-ttucker.
On remote machine, over 10 seconds later:
ttucker@merlin:~$ screen -ls
No Sockets found in /var/run/screen/S-ttucker.
Modifying the script to keep running, and so, keeping the screen up:
If you are really running a script that needs to stay up you can do the following in the script:
#!/bin/bash
while true; do
    # Do something
    sleep 10
done
This will do something, wait 10 seconds, then loop again.
Or, detaching the screen manually:
You can ssh to the remote machine, run screen, start your command, and then detach by pressing Ctrl+A followed by D (hold Ctrl, hit A, release, then hit D). You can now exit the SSH session and the screen will stay running.
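If you instead want the session to stick around after a short script finishes (for example, to read its output), one common trick is to drop into an interactive shell when the script exits; a sketch, reusing the test.sh from above:

ssh ttucker@merlin 'screen -dmS my_screen bash -c "~/test.sh; exec bash"'

The exec bash replaces the finished script with an interactive shell, so screen -ls will keep showing the session until you close it yourself.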

Can't terminate a process launched at boot with the at daemon

I have a fooinit.rt process launched at boot (/etc/init.d/boot.local).
Here is the boot.local file:
...
/bin/fooinit.rt &
...
I create an at job, triggered from C code, in order to kill fooinit.rt,
and I wrote a stop script in which a 'kill -9' on the pid of fooinit.rt is supposed to run.
Here is the stop script:
#!/bin/sh
proc_file="/tmp/gdg_list$$"
ps -ef | grep $USER > $proc_file
echo "Stop script is invoked!!"
suff=".rt"
pid=`fgrep "$suff" $proc_file | awk '{print $2}'`
echo "pid is '$pid'"
rm $proc_file
When the at job timer expires, the 'kill -9 pid' (of fooinit.rt) command cannot terminate the fooinit.rt process!
I checked the pid number printed, and the sentence "Stop script is invoked!!" appears, so that part is OK.
Here is the at job command in the C code (I verified that the stop script is called one minute later):
...
case 708: /* There is a trigger signal here */
{
    result = APP_RES_PRG_OK;
    system("echo '/sbin/stop' | at now + 1 min");
}
...
On the other hand, it works properly when fooinit.rt is launched manually from a shell as an ordinary command (not from /etc/init.d/boot.local); then kill -9 works and terminates the fooinit.rt process.
Do you have any idea why kill -9 cannot terminate the fooinit.rt process if it is launched from /etc/init.d/boot.local?
Your solution is built around a race condition: there is no guarantee it will kill the right process, since an unknowable amount of time can pass between the ps call and the attempt to make use of the pid. It is also vulnerable to a /tmp exploit: someone could create a few thousand symlinks under /tmp called "gdg_list[1-32767]" that point to /etc/shadow, and your script would overwrite /etc/shadow if it runs as root.
Another potential problem is the setting of $USER -- have you made sure it's correct? Your at job will be called as the user your C program runs as, which may not be the same user your fooinit.rt runs as.
Also, your script doesn't include a kill command at all.
A much cleaner way of doing this would be to run your fooinit.rt under some process supervisor like runit and use runit to shut it down when it's no longer needed. That avoids the pid bingo as well as the /tmp attack vector.
But even using pkill -u username -f fooinit.rt would be less racy than the script you provided.
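One simple way to avoid parsing ps at all is to record the pid at launch time and kill exactly that; a minimal sketch, assuming the boot script runs as root so it can write under /var/run:

# in /etc/init.d/boot.local
/bin/fooinit.rt &
echo $! > /var/run/fooinit.pid

# in the stop script
kill "$(cat /var/run/fooinit.pid)" && rm -f /var/run/fooinit.pid

Because $! is the pid of the process just backgrounded, there is no window in which some other process can be mistaken for fooinit.rt.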

running same script over many machines

I have setup a few EC2 instances, which all have a script in the home directory. I would like to run the script simultaneously across each EC2 instance, i.e. without going through a loop.
I have seen csshX for OSX for interactive terminal usage... but was wondering what the command-line invocation is to execute commands like
ssh user@ip.address . test.sh
to run the test.sh script across all instances since...
csshX user@ip.address.1 user@ip.address.2 user@ip.address.3 . test.sh
does not work...
I would like to do this over the commandline as I would like to automate this process by adding it into a shell script.
and for bonus points... if there is a way to send a message back to the machine issuing the command when the script has completed, that would be fantastic.
will it be good enough to have a master shell script that runs all these things in the background? e.g.,
#!/bin/sh
pidlist="ignorethis"
for ip in ip1 ip2
do
    ssh user@$ip . test.sh &
    pidlist="$pidlist $!" # get the process number of the last forked process
done
# Now all processes are running on the remote machines, and we want to know
# when they are done.
# (EDIT) It's probably better to use the 'wait' shell built-in; that's
# precisely what it seems to be for.
while true
do
    sleep 1
    alldead=true
    for pid in $pidlist
    do
        if kill -0 $pid > /dev/null 2>&1
        then
            alldead=false
            echo some processes alive
            break
        fi
    done
    if $alldead
    then
        break
    fi
done
echo all done.
it will not be exactly simultaneous, but it should kick off the remote scripts in parallel.
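Using the wait built-in mentioned in the edit above, the polling loop collapses to a few lines; a sketch under the same assumptions (host list and script invocation adjusted to your setup):

#!/bin/sh
for ip in ip1 ip2
do
    ssh user@$ip . test.sh &
done
# wait blocks until every backgrounded ssh has exited
wait
echo all done.

Note that a bare wait discards the individual exit codes; if you need per-host results, capture each pid and call wait $pid separately.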

Run a php script in background on debian (Apache)

I'm trying to make push notifications work on my Debian VPS (Apache 2, MySQL).
I use a php script from this tutorial (http://www.raywenderlich.com/3525/apple-push-notification-services-tutorial-part-2).
Basically, the script is put in an infinite loop that checks a MySQL table for new records every couple of seconds. The tutorial says it should be run as a background process.
// This script should be run as a background process on the server. It checks
// every few seconds for new messages in the database table push_queue and
// sends them to the Apple Push Notification Service.
//
// Usage: php push.php development &
So I have four questions.
1. How do I start the script from the terminal? What should I type? The script location on the server is:
/var/www/development_folder/scripts/push2/push.php
2. How can I kill it if I need to (without having to restart Apache)?
3. Since the push notification is essential, I need a way to check if the script is running.
The code (from the tutorial) calls a function if something goes wrong:
function fatalError($message)
{
    writeToLog('Exiting with fatal error: ' . $message);
    exit;
}
Maybe I can put something in there to restart the script? But it would also be nice to have a cron job or something that checks every 5 minutes or so whether the script is running, and starts it if it isn't.
4. Can I make the script start automatically after an Apache or MySQL restart? If the server crashes or something else happens that needs an Apache restart?
Thanks a lot in advance
You could run the script with the following command:
nohup php /var/www/development_folder/scripts/push2/push.php > /dev/null &
nohup means that the command should not quit (it ignores the hangup signal) when you e.g. close your terminal window. If you don't care about this, you could just start the process with "php /var/www/development_folder/scripts/push2/push.php &" instead. PS! nohup logs the script output to a file called nohup.out by default; if you do not want this, just add > /dev/null as I've done here. The & at the end means that the process will run in the background.
I would only recommend starting the push script like this while you test your code. The script should be run as a daemon at system-startup instead (see 4.) if it's important that it runs all the time.
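If you would rather keep the script's output for debugging than discard it, redirect it to a log file instead of /dev/null; a sketch, assuming /var/log/push.log is writable by the user starting the script:

nohup php /var/www/development_folder/scripts/push2/push.php >> /var/log/push.log 2>&1 &

The 2>&1 sends errors to the same file, and >> appends across restarts rather than truncating.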
Just type
ps ax | grep push.php
and you will get the process id (pid). It will look something like this:
4530 pts/3 S 0:00 php /var/www/development_folder/scripts/push2/push.php
The pid is the first number you'll see. You can then run the following command to kill the script:
kill -9 4530
If you run ps ax | grep push.php again the process should now be gone.
I would recommend that you make a cronjob that checks if the php-script is running, and if not, starts it. You could do this with ps ax and grep checks inside your shell script. Something like this should do it:
if ! ps ax | grep -v grep | grep 'push.php' > /dev/null
then
    nohup php /var/www/development_folder/scripts/push2/push.php > /dev/null &
else
    echo "push-script is already running"
fi
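Saved as an executable script (say /usr/local/bin/check_push.sh, a hypothetical path), that check can then be run every 5 minutes via cron:

# added with crontab -e for the user that should own the process
*/5 * * * * /usr/local/bin/check_push.sh

This covers question 3: if the push script ever dies, it will be restarted within 5 minutes.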
If you want the script to start up after booting the system, you could make a file in /etc/init.d (e.g. /etc/init.d/mypushscript) with something like this inside:
php /var/www/development_folder/scripts/push2/push.php
(You should probably have a lot more in this file.)
You would also need to run the following commands:
chmod +x /etc/init.d/mypushscript
update-rc.d mypushscript defaults
to make the script start at boot-time. I have not tested this so please do more research before making your own init script!
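For reference, a minimal skeleton for /etc/init.d/mypushscript might look like the following. Treat it as a sketch only (the script path is taken from the question, the pidfile location is an assumption), and heed the warning above about researching proper init scripts:

#!/bin/sh
# Minimal init script sketch for the push daemon.
SCRIPT=/var/www/development_folder/scripts/push2/push.php
PIDFILE=/var/run/mypushscript.pid

case "$1" in
  start)
    # Detach the PHP loop and remember its pid.
    nohup php "$SCRIPT" > /dev/null 2>&1 &
    echo $! > "$PIDFILE"
    ;;
  stop)
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  status)
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
    then
      echo "push script is running (pid $(cat "$PIDFILE"))"
    else
      echo "push script is not running"
    fi
    ;;
  *)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac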