I need to reboot a machine (e.g., call Restart-Computer -Force) in a GitLab CI script, reestablish contact with the runner, and continue executing the script. The runners are tied to specific machines and execute PowerShell scripts (shell executor). How can I do this, and ensure that anything that happens after calling Restart-Computer occurs after the machines are rebooted?
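One shape this can take (a sketch only, with illustrative job names and tag, assuming the gitlab-runner Windows service is set to start automatically at boot): split the script at the reboot into two jobs on the same tagged runner, let the first job schedule the restart a few seconds out so it can finish and report success, and put everything that must run after the reboot into the second job, which waits in the queue until the runner reconnects.

# .gitlab-ci.yml sketch; "win-build-01" is a hypothetical tag pinning both jobs
# to the same machine
stages:
  - before_reboot
  - after_reboot

prepare_and_reboot:
  stage: before_reboot
  tags: [win-build-01]
  script:
    # schedule the reboot 15 s out; calling Restart-Computer -Force inline
    # would kill the job (and the runner) before it could report success
    - shutdown /r /t 15

continue_after_reboot:
  stage: after_reboot
  tags: [win-build-01]
  script:
    - Write-Host "machine is back up, continuing"

Because stages run in order, the second job is only dispatched after the first succeeds, and it cannot start until the rebooted machine's runner polls the server again.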
Related
I have a gitlab-ci.yml file which I have inherited, and I have a local GitLab server running on my laptop. I have managed to create several GitLab runners and kick off this inherited pipeline -- which gets immediately stuck. The error I am getting is:
...because you don't have any active runners online or available with any of these tags assigned to them: sometag
I have pieced together that the gitlab-ci.yml file references several tags, and if there is a runner with a given tag, that runner will pick up my pipeline --- but why do I need this control (or hassle, more like it)? What does it matter which runner runs my pipeline? Do I need to closely examine the gitlab-ci.yml file and, based on that, create a special runner for it?
After I modified my runners and gave them the missing tags, I am still getting the same error. Looking at the runner API results, where it says "online" the value shown is "null". What does that mean? How do I make this runner "online"?
There may be several runners, each with a different executor set up and thus different functionality. So the best practice is to use tags in the gitlab-ci.yml file to run jobs on a particular runner.
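For example (the job name is illustrative; "sometag" is the tag from the error above), a job states the tags of the runner it needs:

# .gitlab-ci.yml: this job will only be picked up by a runner carrying "sometag"
some_job:
  tags:
    - sometag
  script:
    - echo "running on the tagged runner"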
To bring your runner online, go to the server where that particular gitlab-runner is installed and restart the gitlab-runner service using gitlab-runner restart, either as the user that installed the runner or as root.
Sometimes it can happen that you have changed or added tags for the runner in the GitLab UI, but the same tags have not been saved in the config.toml file. (The config.toml file stores the gitlab-runner configuration. More details here: https://docs.gitlab.com/runner/configuration/advanced-configuration.html)
So, in this particular case, you have to go to the server where the gitlab-runner is installed, modify the tags in the config.toml file, and then restart the gitlab-runner service. If everything goes well, you will see the runner online in the GitLab UI.
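On the runner's machine that usually comes down to a few commands (assuming a Linux service installation; run them as the user that installed the runner, or as root):

gitlab-runner status     # is the service running at all?
gitlab-runner verify     # can the registered runners authenticate against the server?
gitlab-runner restart    # reload config.toml and reconnect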
I'm new to Gitlab (and YML files, and just about everything else in this project ...), and want to add some script to the build pipeline to automate release file generation.
I added a few script lines to the branch's release section in gitlab-ci.yml, and can see in the console output in the GitLab UI that they're generally doing what I want. But after a bit of head scratching I now realise the build itself is executed within a Docker instance, which explains why I can't see the release file on the GitLab host machine.
Is there a way of executing script from the YML file on the Gitlab host, as opposed to within the build container?
It depends on the executor type of the Gitlab Runner where the scripts are executed.
If the jobs are executed on a runner with a docker executor, each job will be run in a new container.
If the jobs are executed on a runner with a Shell executor, the jobs will be executed directly on the runner machine.
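The executor is set per runner in its config.toml entry; a trimmed, illustrative example:

# config.toml: the executor line decides where job scripts run (all values here
# are placeholders)
[[runners]]
  name = "host-shell-runner"
  url = "https://gitlab.example.com/"
  token = "REDACTED"
  executor = "shell"    # jobs run directly on this machine; "docker" would run
                        # each job in a container and need a [runners.docker] section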
BTW, it is not advised to have your runners on the same machine where you host your GitLab server, especially runners with a Shell executor. It compromises resiliency and scalability, and a CI/CD job could break the entire host and make your GitLab server unavailable.
We upgraded from GitLab 7.11.4 to 9 in one fell swoop (by accident). Now we are trying to get CI set up the way it used to run for us before. I understand that CI is an integrated thing now.
One of my coworkers got a multi-runner thing going. The running command looks like so:
/usr/bin/gitlab-ci-multi-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --syslog --user gitlab-runner
But previously we had 1 runner for each project and we had a user associated for each project. So, if we have 2 projects called "portal" and "engine", we would have users created thusly:
gitlab-runner-fps-portal
gitlab-runner-fps-engine
And being users, they would have home folders like:
/home/gitlab-runner-fps-portal
/home/gitlab-runner-fps-engine
In the older version of CI, you'd have a config.yml with the url of CI and the runners token. Now you have config.toml.
I want to "divorce" the engine runner from this multi setup which runs under user "gitlab-runner" and have its own runner that runs under "gitlab-runner-fps-engine".
Easy to do? Right now, since all of this Docker business is new to us, we're continuing to use "shell" as our executor in GitLab, if that information is useful.
There are at least two ways you can do it:
Register a specific runner in each of the projects and disable the shared runners (see the sketch after this list).
Use tags to specify the job must be run on a specific runner. This way you can have some CI jobs run on your defined environment while others (like lint for example) can be run on tagged shared runners.
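For the first option, setting up a dedicated runner under its own user could look roughly like this (service name, user, and paths are illustrative; the binary was still called gitlab-ci-multi-runner in that release):

# register a runner against the "engine" project (prompts for its token)
sudo gitlab-ci-multi-runner register

# install and start a second service instance under the dedicated user
sudo gitlab-ci-multi-runner install \
    --service gitlab-runner-fps-engine \
    --user gitlab-runner-fps-engine \
    --working-directory /home/gitlab-runner-fps-engine \
    --config /etc/gitlab-runner-fps-engine/config.toml
sudo gitlab-ci-multi-runner start --service gitlab-runner-fps-engine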
Jenkins version: 1.574
I created a simple job which performs the following:
Using "Execute shell script on remote host using SSH" as one of the BUILD steps, I'm just calling a shell script. This shell script performs stop and start operations on Tomcat to restart an application on the target machine.
I have a valid username, password, port defined for the target SSH server in Jenkins Global settings.
I saw this behavior: when I run a Jenkins job and call the restart script (which gets the application name as parameter $1), it works fine, but as soon as the "Execute shell script on remote host using SSH" step completes, the new process dies on the remote/target application server.
If I run the script from the target/remote server itself, everything works fine and the new process/PID stays alive indefinitely. But when I run the same script from Jenkins, even though I don't see any errors and everything works as expected, the new process dies as soon as the SSH step completes and control returns to the next BUILD step in the Jenkins job, or the job finishes.
I saw a few posts/blogs and tried setting BUILD_ID=dontKillMe in the Jenkins job (in various places, i.e. Prepare Environment variables and also using Inject Environment variables). When the job's particular build# is complete, I can see that the Environment Variables for that build# do say BUILD_ID=dontKillMe as the value (instead of the default Timestamp tag value).
I tried putting nohup before calling the restart script, i.e.,
nohup restart_tomcat.sh "${app}"
I also tried:
BUILD_ID=dontKillMe nohup restart_tomcat.sh "${app}"
This doesn't give any error and creates a nohup.out file on the remote server. (I'm not worried about that file: the restart_tomcat.sh script creates its own LOG file, which I cat after the script completes. The cat is done in another "Execute shell script on remote host using SSH" build step, and it successfully shows the log file created by the restart script.)
I don't know what I'm missing at this point, but as soon as the restart_tomcat.sh step is complete, the new PID/process on the remote/target server dies.
How can I fix this?
I've been through this myself.
On my first iteration, before I knew about Jenkins ProcessTreeKiller, I ended up just daemonizing Tomcat. The Apache Tomcat documentation includes a section on running as a daemon.
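For reference, Tomcat ships a commons-daemon wrapper for exactly this; a minimal sketch, assuming the jsvc binary has been built for your platform:

# see the "running as a daemon" section of the Tomcat documentation
$CATALINA_HOME/bin/daemon.sh start
$CATALINA_HOME/bin/daemon.sh stop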
You can also try disabling the ProcessTreeKiller for your whole Jenkins instance, if it's relatively small (read the first link for information).
The BUILD_ID=dontKillMe has to be visible to the shell, so it belongs on your command line, not in the Jenkins global configuration or job parameters.
BUILD_ID=dontKillMe restart_tomcat.sh "${app}" should have worked without problems.
You can also try nohup restart_tomcat.sh "${app}" & with the & at the end.
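Combining those two suggestions, the remote command would look something like this (the script path is illustrative):

# inside the "Execute shell script on remote host using SSH" build step
BUILD_ID=dontKillMe nohup /opt/scripts/restart_tomcat.sh "${app}" > restart.out 2>&1 &
# the redirections matter over SSH: the session cannot close cleanly while the
# background process still holds the terminal's output streams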
My solution (it worked after trying everything else) on Ubuntu 14.04 (Trusty Tahr) (Amazon EC2), Jenkins 1.601:
Exec command: (setsid COMMAND < /dev/null > /dev/null 2>&1 &);
Exec in PTY: DISABLED
// Example COMMAND=socat TCP4-LISTEN:1337,fork TCP4:127.0.0.1:1338
I created this Transfer as my last one.
#!/bin/ksh
export BUILD_ID=dontKillMe
I added the above line to the start of my script and the issue was resolved.
Hello people.
I'm using Jenkins as a CI server and I need to run some performance tests using JMeter. I've set up the plugin and configured my workspace and everything works OK, but I have to do some steps manually and I want a bit more automation.
Currently I have some small programs on a remote server. These programs perform some specific validations, for instance (just to explain): validating e-mail addresses, phone numbers, etc.
So, before I run the build in Jenkins, I have to manually start the program (file.sh) I want:
I have to use PuTTY (or any other SSH client) to connect to the server and then run, for instance, the command
./email_validation.sh
The JMeter test then runs correctly, and when the test is done I have to manually "shut down" the program I started. What I want is to start the program I need from the Jenkins configuration (not manually outside Jenkins, but in an "execute shell" or "execute remote shell using ssh" build step).
I have tried to start it, but it gets stuck, because when the Jenkins build reaches the command
./email_validation.sh
the build stops and waits for the command to finish before continuing with the remaining build steps. But obviously I need this step not to finish until the test has executed.
Is there a way to achieve this? Thanks
Run your command as a background process by adding the & symbol at the end of the command and use the nohup command in case the parent process gets a hangup signal, e.g.
nohup /path/to/email_validation.sh &
If the script produces any output, it will go by default to the file nohup.out in the current directory when the script was launched.
You can kill the process at the end of the build by running:
pkill -f email_validation.sh
(The -f flag makes pkill match against the full command line; without it, pkill compares the pattern only to the process name, which is truncated to 15 characters, so email_validation.sh would never match.)
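Put together as Jenkins "Execute shell" build steps (paths are illustrative):

# build step 1: start the validator in the background and record its PID
nohup /path/to/email_validation.sh > /tmp/email_validation.log 2>&1 &
echo $! > /tmp/email_validation.pid

# intervening build step(s): run the JMeter test plan

# final build step: stop the validator once the test is done
kill "$(cat /tmp/email_validation.pid)"    # or: pkill -f email_validation.sh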