How to get private executors to communicate with our SCM - CloudBees

we are using jenkins-cli for private executors like this:
Command on our private build machine (linux box):
java -jar jenkins-cli.jar -s [cloudbees_url] -i /var/build/.ssh/id_rsa on-premise-executor -fsroot /var/build -labels myLabel -executors 1
But when I kick off a job on this machine, I get a permission denied error in the build:
conq: repository access denied. deployment key is not associated with the requested repository.
fatal: Could not read from remote repository.
Do you know how to add the keys necessary for the build machines to talk to our SCM? (We are using Bitbucket.)

First, try running the same commands that the Jenkins job would run, as the user who started the slave.
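For example, a quick way to check whether the key the executor is using is actually accepted by Bitbucket is to test the SSH connection and a clone as that user. A rough sketch (the build user, key path and repository name are placeholders):
sudo -u build ssh -i /var/build/.ssh/id_rsa -T git@bitbucket.org
sudo -u build git clone git@bitbucket.org:yourteam/yourrepo.git /tmp/clone-test
If the same "deployment key is not associated with the requested repository" message appears here, the public half of that key still needs to be added as an access key for that specific repository in Bitbucket.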

Related

How to run script on build host (not docker container) during gitlab CI pipeline

I'm new to Gitlab (and YML files, and just about everything else in this project ...), and want to add some script to the build pipeline to automate release file generation.
I added a few script lines to the branch's release section in .gitlab-ci.yml, and can see in the console output in the Gitlab UI that they're generally doing what I want. But after a bit of head scratching I now realise the build itself is executed within a Docker container, which explains why I can't see the release file on the Gitlab host machine.
Is there a way of executing script from the YML file on the Gitlab host, as opposed to within the build container?
It depends on the executor type of the Gitlab Runner where the scripts are executed.
If the jobs are executed on a runner with a Docker executor, each job will run in a new container.
If the jobs are executed on a runner with a Shell executor, the jobs will be executed directly on the runner machine.
By the way, it is not advised to have your runners on the same machine that hosts your Gitlab server, especially runners with a Shell executor: it compromises resiliency and scalability, and a CI/CD job could break the entire host and make your Gitlab server unavailable.
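If you do want certain jobs to run directly on a host (ideally a separate machine rather than the Gitlab server itself), you can register an additional runner with the Shell executor and target it via a tag. A rough sketch, where the URL, registration token and tag are placeholders:
# register a runner that executes jobs directly on this machine
sudo gitlab-runner register \
  --non-interactive \
  --url https://gitlab.example.com/ \
  --registration-token YOUR_TOKEN \
  --executor shell \
  --description "shell-runner" \
  --tag-list host-jobs
Jobs that should run on that machine can then select it with a matching tags: entry in .gitlab-ci.yml.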

Using Gitlab deploy keys with write access

I am currently running CE version 8.17.4 and am attempting to set up a deploy key with write access (available as of 8.16) so that my runner instance can commit build artifacts back to the repository. I took the following steps to set this up:
On the runner instance, I generated the ssh keypair with the command: 
sudo ssh-keygen -t rsa -C "label" -b 4096
The generated keypair was saved to /home/gitlab-runner/.ssh/id_rsa and password protected.
Within Gitlab, I created a public deploy key from the admin console and pasted the contents of id_rsa.pub into the appropriate field and verified that the key fingerprints matched. I checked the "Write access allowed" box. 
In the private project that I wished to enable repository access from the runner, I enabled the newly created public deploy key.
This is a LaTeX document repository, so in the .gitlab-ci.yml file, I issue the following script after building the PDF:
after_script:
  - "git commit -am 'autobuild PDF'"
  - "git push origin master"
When the changes were committed, the build ran successfully on the runner up until the git push origin master command, and this error was thrown:
fatal: Authentication failed for 'http://gitlab-ci-token:xxxxxxxxxxxxxxxx@host/project.git/'
Ok. A couple questions:
If the deploy key is just an SSH key, shouldn't it be connecting on the secure port or does this matter? I haven't found much documentation on using this new write-permission deploy key feature, so am I missing something in the steps I took above?
Do I need to include [ci skip] in the commit message to avoid looping CI builds? I saw this concern come up in the original issue tickets for this feature, but did not see whether this step was required or not. 
Thanks for any help!
Jawad's comment worked for me: you need to force the push over SSH. For example:
git remote add ssh_remote git@host:user/project.git
git push ssh_remote HEAD:dev
Thanks, Jawad.
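Combining that with the [ci skip] question above, the after_script commands could look roughly like this (a sketch, not the poster's exact configuration; the host, project path and target branch are placeholders, and [ci skip] in the commit message stops the push from triggering another pipeline):
git config user.email "ci@example.com"
git config user.name "GitLab CI"
git commit -am "autobuild PDF [ci skip]"
git remote add ssh_remote git@host:user/project.git
git push ssh_remote HEAD:master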

Running Jenkins tests in Docker containers built from a Dockerfile in the codebase

I want to deploy a continuous integration platform based on Jenkins. As I have various kinds of projects (PHP / Symfony, Node, Angular, …) and as I want these tests to run both locally and on Jenkins, I was thinking about using Docker containers.
The process I’m aiming for is :
A merge request is opened on Github / Gitlab
A webhook notifies Jenkins of the merge request
Jenkins pulls the repo, builds the containers and runs a shell script to execute the tests
Once the tests are finished, Jenkins retrieves the results from one of the containers (through a shared volume) and processes them.
I do not want Jenkins to be in a container.
With this kind of process, I'm hoping to be able to run the tests very easily on each developer machine with something like a docker-compose up and then, in one of the containers, ./tests all.
I’m not very familiar with Jenkins. I’ve read a lot of documentation, but most of them suggested to define Jenkins slaves for each kind of projects beforehand. I would like everything to be as dynamic as possible and require as less configuration on Jenkins as possible.
I would appreciate a description of your test process if you have ever implemented something similar. If you think what I’m aiming for is impossible, I would also appreciate if you could explain to me why.
A setup I suggest is Docker in Docker.
The base is a derived Docker image that extends the jenkins:2.x image by adding a Docker command-line client.
Jenkins is started as a container, with its home folder (e.g. /var/jenkins_home, mounted from the Docker host) and the Docker socket file mounted, so that Docker containers can be started from Jenkins build jobs:
docker run -d --name jenkins -v /var/jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock ... <yourDerivedJenkinsImage>
To check if this setup is working, just execute the following command after starting the Jenkins container:
docker exec jenkins docker version
If the "docker version" output does NOT show:
Is the docker daemon running on this host?
Everythin is fine.
In your build jobs, you could configure the process you mentioned above. Let Jenkins simply check out the repository. The repository should contain your build and test scripts.
Use a freestyle build job with a shell execution. A shell execution could look like this:
docker run --rm --volumes-from jenkins <yourImageToBuildAndTestTheProject> bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh
This command simply starts a new container (to build and/or test your project) with the volumes from jenkins, which means the cloned repository will be available under $WORKSPACE. So if you run "bash $WORKSPACE/<pathToYourProjectWithinTheGitRepository>/build.sh", your project will be built within a container of "yourImageToBuildAndTestTheProject". After that, you could start other containers for integration tests, or combine this with "docker-compose" by installing it on the derived Jenkins image.
The advantage is the minimal configuration effort within Jenkins: only the SCM configuration for cloning the Git repository is required. Since each Jenkins job uses the Docker client directly, you can use one or more Docker images per project to build and/or test, without further Jenkins configuration.
If you need additional configuration, e.g. SSH keys or Maven settings, just put them on the Docker host and start the Jenkins container with additional volumes that contain those configuration files.
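For example, the Jenkins container could be started with extra read-only mounts for such files (a sketch; the host paths /opt/ci/ssh and /opt/ci/maven-settings are placeholders):
docker run -d --name jenkins \
  -v /var/jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /opt/ci/ssh:/var/ci/ssh:ro \
  -v /opt/ci/maven-settings:/var/ci/maven-settings:ro \
  <yourDerivedJenkinsImage>
Because the build containers are started with --volumes-from jenkins, those mounts are visible inside them as well.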
Using this Docker option within the shell execution of your build jobs:
--volumes-from jenkins
automatically makes the workspace and those configuration files available in each of your build containers.

Jenkins: Publish over SSH after failed build

I am trying to use the Publish Over SSH plugin to publish many kinds of build artifact to an external server. Examples of build artifacts are compiled builds, XML output from testing, and JSON output from linting.
If testing or linting results in errors, the build will fail or be marked unstable. In the case of a failed build, the Publish Over SSH plugin will not copy the build artifacts, writing to the console:
SSH: Current build result is [FAILURE], not going to run.
I see no reason why I wouldn't want to publish this information if it exists, and I would like to continue to report errors as build failures. So, is there any way to force Jenkins to publish build artifacts even if the job is marked as a failure?
I thought I could use the Flexible Publish plugin to force this, by wrapping the Publish Over SSH step in an "always" condition, but this gave the same output as before on a build failure.
I can think of a couple of work-arounds:
a) Store the build status in an environment variable; force the status to SUCCESS; perform the publish step; then restore the build status from the environment variable using java -jar jenkins-cli.jar set-build-status $STORED_STATUS
OR
b) Write a bash script to perform the publishing step manually using SSH, cutting out the Publish Over SSH plugin altogether
Before I push forward with either of these solutions (neither of which I like), is there any piece of configuration that I'm missing?
The solution I ended up with was to copy the files manually with rsync/ssh in a post-build script. I configured this in my Jenkins Job Builder YAML like so:
- publisher:
    name: publish-to-archive
    publishers:
      - post-tasks:
          - matches:
              - log-text: ".*"
            script: |
              ssh -i ${{HOME}}/.ssh/id_rsa jenkins@archiver "mkdir -p {archive_path}"
              rsync -Pravdtze "ssh -i ${{HOME}}/.ssh/id_rsa" {source_path} jenkins@archiver:{archive_path}
Quoting old hooky on jenkinsci-users:
How can I force Publish Over SSH to work even if the build has been marked a failure?
Use "Send files or execute commands over SSH after the build runs" in the configuration section "Build environment":
Job configuration / Build Environment / Send files or execute commands over SSH after the build runs
instead of using a post-build or build-step.

Jenkins & TestNG start browsers

Is it possible to make Jenkins use an actual browser instead of a headless browser? I am running some tests written in TestNG (using Selenium WebDriver). When I run the testng.xml file in Eclipse, the browser starts and the tests run. But when I use Jenkins and run the tests with Maven, it doesn't start any browsers.
If your Jenkins is hosted on a Windows machine, there are some special configurations you should know about for services that are allowed to interact with the desktop.
By the way, the easiest way to see the browsers running is to start Jenkins from the command line:
java -jar jenkins.war
On Linux you could use the same command, or use the Xvfb plugin to run the browsers in the background.
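For reference, this is roughly what the Xvfb plugin automates per build; a manual sketch (the display number and resolution are arbitrary):
# start a virtual framebuffer X server and point the tests at it
Xvfb :99 -screen 0 1920x1080x24 &
export DISPLAY=:99
mvn test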
Hope this helps.
In addition to this, the main reason for the browser not launching is JNLP (Java Network Launch Protocol); when we execute the WAR directly, we can interact with the desktop applications.
Using Selenium Grid will allow you to execute the tests on Jenkins but open the browser on a remote slave.
To achieve this you need to create an instance of RemoteWebDriver rather than ChromeDriver, IEDriver, etc.
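For example, a minimal grid can be started with the standalone server (a sketch; the host name, port and selenium-server version are placeholders):
# on the machine that coordinates the tests
java -jar selenium-server-standalone.jar -role hub
# on a slave that has browsers and a display
java -jar selenium-server-standalone.jar -role node -hub http://hub-host:4444/grid/register
The tests then create a RemoteWebDriver pointed at http://hub-host:4444/wd/hub.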
For Linux: if Jenkins is running as a daemon, you can specify an active display to connect to and run your browser on it. Check which displays you can connect to:
ps e | grep -Po " DISPLAY=[\.0-9A-Za-z:]* " | sort -u
My output is:
DISPLAY=:10.0
DISPLAY=:2
DISPLAY=:2.0
Then go to your Jenkins project -> Configure -> Build and add the following line above your main build configuration through "Add build step -> Execute shell":
/bin/bash -c "export DISPLAY=:10"
Edited: I've encountered the issue again recently. To resolve it:
I've given the jenkins user the ability to interact with my current user's desktop:
xhost +si:localuser:jenkins
So if I connect to my Linux system through SSH using the jenkins user's credentials, export my current user's display (export DISPLAY=:10) and run, for example, google-chrome or firefox inside PuTTY's terminal, they launch on my current user's desktop.
After this I checked that I could run the "mvn test" command inside the workspace/MyTests folder from PuTTY, so that it starts the browser and executes the tests.
At the end I created a simple script in my current user's home directory:
vi ~/.startup.sh
#!/bin/bash
xhost +si:localuser:jenkins
and added it to my Xfce4 GUI: Application -> Settings -> Session and Startup -> Application Autostart, specifying the command field as:
sh -c $HOME/.startup.sh
This is because the script should only run once the desktop is loaded, in order to share my current user's display with the jenkins user. After a reboot and connecting to this server through RDP, the desktop loads with the xhost command applied. After this, Jenkins can interact with the desktop even when I close the RDP connection, as long as the current user's session stays alive.
I've removed the build step in my Jenkins project configuration that stated "Invoke top-level Maven targets". It could not start my browsers.
I've changed my "Add build step -> Execute shell" to:
export DISPLAY=:10
cd /var/lib/jenkins/workspace/MyTests
mvn test
I've tried Grid also, turning selenium-server-standalone -hub and -node into daemons, but it was slower than launching browsers with WebDriver in this way.