Is it possible to debug a Gitlab CI build interactively?

I have a build in Gitlab CI that takes a long time (10mins+) to run, and it's very annoying to wait for the entire process every time I need to experiment / make changes. It seems like surely there's a way to access some sort of shell during the build process and run commands interactively instead of placing them all in a deploy script.
I know it's possible to run Gitlab CI tests locally, but I can't seem to find a way to access the deploy while it's running, even after scouring the docs.
Am I out of luck or is there a way to manually control this lengthy build?

I have not found a clean way to do this so far, but here is how I do it:
Start the build locally: gitlab-runner exec docker your_build_name
Kill gitlab-runner with Ctrl+C right after the docker image has been built. You can add sleep 1m as the first script line to give yourself enough time to kill gitlab-runner.
Note: gitlab-runner creates a docker container and then deletes it once the job is done; killing the runner ensures the container is still there. I know of no other alternative for now.
Manually log into the container: docker exec -i -t <instance-id/tag-name> bash
Run your script commands manually.
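To make the workflow concrete, here is a rough sketch of how it fits together; the job name your_build_name and the deploy.sh step are just placeholder examples, not part of the original setup:

# .gitlab-ci.yml (hypothetical job)
your_build_name:
  image: alpine:3.19
  script:
    - sleep 1m      # buys time to Ctrl+C gitlab-runner while the container is still alive
    - ./deploy.sh   # the long-running work you actually want to experiment with

# on your machine
gitlab-runner exec docker your_build_name    # start the job locally
# Ctrl+C once the image is built, then:
docker ps                                    # find the container the runner created
docker exec -i -t <container-id> bash        # open a shell and run the script steps by hand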

Related

Allow job to run "reboot" command without causing failure

We have a large number of runners running a large number of jobs in one of our Gitlab CI/CD pipelines.
Each of these runners has a concurrency of 1, and they are of executor type shell.
[EDIT] These runners are AWS EC2 instances using Amazon Linux 2.
After certain jobs in the pipeline have completed, I would like them to run a reboot command to restart the runner.
However, some of these jobs will be tests. Currently, when I run the reboot command, the job fails. Obviously I can allow_failure so that the job passes, but this then means we have no way of determining whether or not the actual test has passed.
Originally, my test job looked like this:
after_script:
- sleep 1 && reboot
I have also tried the following variations:
after_script:
- sleep 15 && reboot
- exit 0
after_script:
- (sleep 15 ; reboot ) &
- exit 0
I've also tried running a shell script with the same contents.
All of these result in the same problem - ERROR: Job failed (system failure): aborted: terminated.
Can anyone think of a clever way round this?
In the end, I had to run this in a screen:
sudo screen -dm bash -c 'sleep 5; shutdown -r now;'
This allowed me, in a Gitlab CI pipeline, to run this as a script element, and immediately afterwards execute an exit command, like this:
after_script:
- sudo screen -dm bash -c 'sleep 5; shutdown -r now;'
- exit 0
This way, if a test fails - the job fails. If a test passes, the job passes. No need for allow_failure.
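For completeness, the whole job then reads roughly like this (the job name and the test command are placeholders, not the real pipeline):

test_job:
  script:
    - ./run_tests.sh    # if the tests fail here, the job fails as usual
  after_script:
    - sudo screen -dm bash -c 'sleep 5; shutdown -r now;'
    - exit 0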
Unfortunately... I'm unsure how to then contend with artifact handling, which happens after the after_script commands. If anyone has any ideas about that one, please add a comment here.

How to start my server process forever through gitlab ci/cd (.gitlab-ci.yml)

I want to start my server, but gitlab-runner will kill it after the timeout (1 hour)
my .gitlab-ci.yml:
build:
  script:
    - gradle build
    - sudo gradle run &
Please, can anyone help me run this so that the runner does not kill it?
Gitlab terminates the runner, and thereby all subprocesses, after the timeout. There's no way around that.
That is aside from the question of why you think you need to abuse the CI runner to execute something perpetually.

"docker run -dti" with a dumb terminal

updated: added the missing docker attach.
Hi, I am trying to run a docker container with -dti, but I cannot access it with a terminal set to dumb. Is there a way to change this? (TERM is currently set to xterm, even though my ssh client is dumb.)
example:
create the container
docker run -dti --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs
cd /my-folder
npm install -g gulp
The output of the last command always contains ASCII escape characters to move the cursor.
I have tried "export TERM=dumb" inside the running container, but it does not work.
Is there a way to run this using a dumb terminal?
I am running this from a script on another computer, via (dumb) ssh.
The -t flag is what sets TERM (see https://docs.docker.com/engine/reference/run/#env-environment-variables); however, removing it affects the command prompt (the prompt is not shown).
Possible solution 1: remove the -t and keep the -i. To see when a command has completed, echo out a known token (ENDENDEND), i.e.
docker run -di --name test -v /my-folder alpine /bin/ash
docker attach test
apk --update add nodejs;echo ENDENDEND
cd /my-folder;echo ENDENDEND
npm install -g gulp;echo ENDENDEND
Not pretty, but it works (there are no escape characters in the results).
Possible solution 2: use the journal. Docker can log to the Linux journal, and this output can be gathered as commands are executed in the container. (I have yet to fully test this one out; however, the log seems to give a nicer record of what happened.)
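Untested, but the rough idea with the journald log driver would be something like this (container name test reused from the example above):

docker run -di --log-driver=journald --name test -v /my-folder alpine /bin/ash
# in another shell, follow what the container writes as commands run
journalctl -f CONTAINER_NAME=test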
update:
Yep -t is the problem.
However, if you want to see the entire process when running a command, maybe this way is better:
docker run -di --name test -v/my-folder alpine /bin/ash
docker exec -it test /bin/ash
Finally, you need to kill the container once all the jobs have finished.
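For example, something like this cleans it up:

docker rm -f test    # stop and remove the container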
docker run -d means "Run container in background and print container ID", not "start the container as a daemon".
I was hitting this issue on OSX running Docker; I had to do two things to stop the terminal/ASCII/ANSI escape sequences:
remove the "t" option from the docker run command (from docker run -it ... to docker run -i ...)
make sure to force the bash or sh shell when running the command from a script file on OSX, rather than the default zsh
Also:
the escape sequences were not always visible in the terminal
even so, they still usually caused content corruption, even with sed brought to bear
they were always shown in my editor
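A sketch of what such a script might look like, reusing the example container, volume and packages from the question:

#!/bin/bash
# no -t, so docker does not allocate a pseudo-terminal and no escape sequences appear
docker run -di --name test -v /my-folder alpine /bin/ash
docker exec -i test /bin/ash -c 'apk --update add nodejs'
docker exec -i test /bin/ash -c 'cd /my-folder && npm install -g gulp'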

Docker container exit(0) when using docker run command, but works with docker start command

I am attempting to dockerize a GUI app and have had some success. If I build the Dockerfile into an image and then perform a docker run --name testcontainer testimage, it appears that the process begins but then abruptly stops. I then check the container with docker ps to confirm no containers are running. Then I check docker ps -a and can see that it exited with status code exit(0). Then if I run the command docker start testcontainer, it appears to start the ENTRYPOINT command again, but this time it is able to continue and the GUI pops up.
My best guess is that I think that when I run the docker run command, the process begins but might be forked into a background process, causing the container to exit since the foreground process has ended. Although that could be way off because you would think the docker start command would result in the same outcome. I was thinking of trying to force the process to stay in the foreground, but do not know how to do that. Any suggestions?
UPDATE: I edited my Dockerfile to use supervisord to manage the starting of the GUI app. Now my docker run command will start supervisor, which will start my GUI app, and it works. Some things to note about this are that supervisor shows:
INFO spawned: myguiapp with pid 7
INFO success: myguiapp entered RUNNING state
INFO: exited: myguiapp (exit status 0; expected)
Supervisor and the container are still running at this point, which seems to indicate that the main process kicks off a child process. Since supervisor is still running, my container stays up and the GUI app does show up and I can use it. When I close the GUI, supervisor reports:
CRIT reaped unknown pid 93
Supervisor remains running, causing the container NOT to close. So I have to CTRL-C to kill supervisor. I'd rather not use supervisor, but if I need to, I would like for supervisor to close itself gracefully when that child process ends. If I could figure out how to get my container or supervisor to track child processes of the main process, then I think this would be solved.
The first issue is probably because your application requires a tty and you are not allocating a pseudo tty. Try running your container like this:
docker run -t --name testcontainer testimage
When you do a docker start the second time around it somehow allocates the pseudo-tty and the process keeps on running. I tried it myself. I couldn't find this info anywhere in the Docker docs though.
Also, if your UI is interactive you would want:
docker run -t -i --name testcontainer testimage

Proper way to automatically start and expose ssh when running my app container

I have containers with python apps and I need them to automatically start and expose ssh when running them. I know it's against Docker's best practices, but right now I don't have any other solution. I'd be interested to know the best way to automatically run an additional service in a docker container anyway.
Since Docker will only start one process, installing sshd isn't enough. There are apparently multiple options to deal with it:
use a process manager like Monit or Supervisor
use the ENTRYPOINT option
append a command (service sshd start, for instance) at the end of /etc/bash.bashrc (see this answer)
Option 1 seems overkill to me. Also I suppose I'll have to run the container with a cmd calling the process manager instead of bash or my python app: not exactly what I want.
I don't know how to use Option 2 for such a case. Should I write a custom script that starts sshd and then runs the provided command, if any? What should this script look like?
Option 3 is very straightforward but quite dirty. Also it won't work if I run the container with a command other than /bin/bash.
What's the best solution and how to set it up ?
You mention that option 1 seems like overkill. Why is it overkill? Supervisor is very simple to configure and will basically do what you want.
First, write a supervisor config file that starts your python app and sshd:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:pythonapp]
command=/path/to/python myapp.py -x args etc etc
Call that file supervisord.conf and commit it somewhere in your repo. In your Dockerfile, copy that file to the container as one of the container build steps, expose the ports for SSH and your app (if needed) and set the CMD to start supervisord:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
This is clean and easy to understand. It's how I run multiple processes in a container when needed. It is even suggested in the Docker docs as a nice solution.
If you don't want to use a process manager, you can wrap your actual container command inside a shell script that runs sudo service ssh start and then executes your actual command.
#!/bin/bash
# start sshd in the background, then run the app in the foreground
sudo service ssh start
python myapp.py -x args blah blah
This will start up ssh as a daemon, and then your python app will start up after.
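A minimal sketch of how that wrapper could be wired into the image, assuming it is saved as start.sh next to the Dockerfile:

# Dockerfile excerpt (hypothetical file names)
COPY start.sh /start.sh
RUN chmod +x /start.sh
EXPOSE 22
CMD ["/start.sh"]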
Yes, we can configure supervisord for multiple processes in a container. If you want to use openssh-server, we can configure Supervisor like below:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
in the supervisord.conf file.
We can add the supervisord.conf file to the docker image by updating the Dockerfile:
RUN apt update && apt install -y supervisor openssh-server
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
Reference: Gotechnies