I have set up GitLab on an Ubuntu server and it's working fine. I access GitLab at http://123.456.789.100 with these GitLab login details:
username: admin@local.host
password: 123456
Then I wanted to set up GitLab CI to test the code before it gets merged. I set up GitLab CI by following the link and have everything configured except the Runners. When I run ssh git@<your gitlab url> for the Runner, I hit an error:
ssh git@123.456.789.100
It prompts me for a password:
git@123.456.789.100's password:
I entered the password (123456) that I use to log into the GitLab server, and then I get this error:
Permission denied, please try again.
But I can reach the GitLab CI web interface at http://123.456.789.100:8081 (I configured GitLab CI to listen on port 8081). When I entered the GitLab server's username and password, admin@local.host and 123456, I got "Invalid credentials".
What have I done wrong?
When you put git@ in the ssh command, it's actually trying to use the git user on the machine that is running GitLab (rather than some GitLab-controlled user).
The easiest fix for this would be to create an SSH key on the runner in question and then add that SSH key to GitLab. That will allow the runner to access the GitLab instance and clone any repositories you need.
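A minimal sketch of that, run on the runner machine (the key path is the default and the IP is the placeholder from this question):
# on the runner: generate a key pair without a passphrase
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
# print the public key, then add it to the GitLab user's SSH keys page
cat ~/.ssh/id_rsa.pub
# verify: this should greet you by name instead of asking for a password
ssh -T git@123.456.789.100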
For more help getting started, see this page on Configuring GitLab runners.
Related
I am new to GitLab and I want to create a GitLab CI pipeline which builds a Docker image and pushes it to my Docker Hub. I created a free account in GitLab and created a simple pipeline. Below is my pipeline.
Below are the environment variables (note: I'm pushing to Docker Hub).
But it says login failed, even though the username and password are correct.
Do I need to create a token?
Is the branch running the pipeline protected? Protected variables are only exposed to pipelines on protected branches and tags in GitLab.
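If the variables do reach the job, a typical login step inside the job's script might look like the sketch below; the variable names DOCKERHUB_USERNAME / DOCKERHUB_PASSWORD and the image name are assumptions, not taken from the question:
# log in non-interactively using CI/CD variables, then build and push
echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin
docker build -t "$DOCKERHUB_USERNAME/myapp:latest" .
docker push "$DOCKERHUB_USERNAME/myapp:latest"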
I'm running a CI machine on AppCenter and need to allow read/write access to a private BitBucket repository but I can't figure out how to do this.
My approach is to create an SSH key and, during CI builds, add the private key to the machine's ssh-agent using ssh-add -K (it's a Mac machine).
I've created an SSH key on my local computer (Mac) using ssh-keygen and uploaded the .pub key to Bitbucket. Then, as my CI runs, I try to take the private key and add it to the ssh-agent, but I'm prompted to enter a password and can't figure out how to supply it in a non-interactive shell.
Is this the right approach to grant access to Bitbucket in CI? If so, how can I add an SSH key without being prompted for a password?
Scripts are in Ruby or Bash.
The repo contains certificates used for Fastlane Match
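For reference, one way to avoid the prompt with this approach, assuming the key is dedicated to CI, is to generate it with an empty passphrase so ssh-add never has to ask for one (the file name is a placeholder):
# generate a CI-only key with no passphrase
ssh-keygen -t rsa -b 4096 -N "" -f ci_deploy_key
# during the build: start an agent and add the key non-interactively
eval "$(ssh-agent -s)"
ssh-add ci_deploy_key
# the matching ci_deploy_key.pub is what goes into the Bitbucket repository's access keys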
Answering my own question...
I ended up using a Bitbucket App Password and cloning via HTTPS. I think there has to be a better way, but this works for my needs at the moment.
I needed access from my CI to a private Bitbucket certificates repo to use with Fastlane Match. The value for git_url in my Matchfile that allows me to clone the repo is:
git_url "https://{BITBUCKET_USER}:{BITBUCKET_APP_PASSWORD}@bitbucket.org/{BITBUCKET_USER}/{REPO}.git"
You can obtain a Bitbucket app password by clicking your profile (avatar) -> Settings -> App passwords.
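The same credentials also work for a plain clone in a CI script; a sketch, assuming BITBUCKET_USER, BITBUCKET_APP_PASSWORD and REPO are provided as CI secrets/variables:
# clone over HTTPS using the app password instead of an SSH key
git clone "https://${BITBUCKET_USER}:${BITBUCKET_APP_PASSWORD}@bitbucket.org/${BITBUCKET_USER}/${REPO}.git"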
Using the new Bitbucket Pipelines feature, how can I SSH into my staging box from the docker container it spins up?
The last step in my pipeline is an .sh file that deploys the necessary code on staging; however, because my staging box uses public key authentication and doesn't know about the Docker container, the SSH connection is denied.
Is there any way around this without using password authentication over SSH (which is causing me issues as well, since the connection keeps choosing to authenticate with the public key instead)?
Bitbucket Pipelines can use a Docker image you've created, with the SSH client set up, to run your builds, as long as it's hosted on a publicly accessible container registry.
Create a Docker image.
Create a Docker image with your ssh key available somewhere. The image also needs to have the host key for your environment(s) saved under the user the container will run as. This is normally the root user but may be different if you have a USER command in your Dockerfile.
You could copy in an already populated known_hosts file, or configure the file dynamically at image build time with:
RUN ssh-keyscan your.staging-host.com >> /root/.ssh/known_hosts
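A hedged sketch of the preparation on your own machine before building such an image; the key name and host are placeholders, and the COPY destinations assume the container runs as root:
# generate a dedicated deploy key with no passphrase
ssh-keygen -t rsa -b 4096 -N "" -f deploy_key
# collect (and verify!) the staging host's key, to be baked into the image
ssh-keyscan your.staging-host.com > known_hosts
# in the Dockerfile, COPY deploy_key to /root/.ssh/id_rsa and known_hosts to
# /root/.ssh/known_hosts, with permissions 600 on the private key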
Publish the image
Publish your image to a registry that Pipelines can reach over the internet but that is kept private. You can host your own or use a service like Docker Hub.
Configure Pipelines
Configure pipelines to build with your docker image.
If you use Docker Hub:
image:
  name: account-name/java:8u66
  username: $USERNAME
  password: $PASSWORD
  email: $EMAIL
Or your own external registry:
image:
  name: docker.your-company-name.com/account-name/java:8u66
Restrict access on your hosts
You don't want SSH keys that can access your hosts flying around the world, so I would also restrict these deploy keys so they can only run your deploy commands.
The authorized_keys file on your staging host:
command="/path/to/your/deploy-script",no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-dss AAAAC8ghi9ldw== deploy@bitbucket
Unfortunately, Bitbucket doesn't publish an IP list you could restrict access to, since they use shared infrastructure for Pipelines. If they happen to be running on AWS, then Amazon does publish IP ranges.
from="10.5.0.1",command="",no-... etc
Also remember to date them and expire them from time to time. I know SSH keys don't enforce dates, but it's a good idea to do it anyway.
You can now set up SSH keys under the pipeline settings, so you no longer need a private Docker image just to store SSH keys. The key is also kept out of your source code, so you don't have it in your repo either.
Under
Settings -> Pipelines -> SSH keys
You can either provide a key pair or generate a new one. The private key is made available to the Docker container that runs your build (via its SSH configuration), and you are given a public key to put in the ~/.ssh/authorized_keys file on your host. The page also asks for an IP or hostname so it can record the host's fingerprint in known_hosts for the container.
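With that in place, the deploy step itself can be an ordinary ssh call in the pipeline's script; a minimal sketch, where the deploy user is an assumption and the host and script path reuse the placeholders from the earlier example:
# the injected pipeline SSH key and known_hosts entry make this non-interactive
ssh deploy@your.staging-host.com '/path/to/your/deploy-script'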
Also, Bitbucket has published IP addresses you can whitelist, if necessary, for the Docker containers being spun up. They are currently:
34.236.25.177/32
34.232.25.90/32
52.203.14.55/32
52.202.195.162/32
52.204.96.37/32
52.54.90.98/32
34.199.54.113/32
34.232.119.183/32
35.171.175.212/32
On my Jenkins server, I have a project that checks some source code out of SVN and runs a build script called MakeInstaller.ps1 that gets checked out along with the source code. It's pretty straightforward, and this part is working great.
What's not working great is that part of MakeInstaller.ps1 attempts to do an svn export of a specific revision of some other source code, but it doesn't have the credentials to connect to SVN. When I run the build scripts on my PC this is totally fine, because TortoiseSVN has the credentials cached.
Jenkins has my SVN credentials already, but that's only used when Jenkins checks out the source code, not when my build script attempts to check out the source code.
I've tried:
Installing the TortoiseSVN command-line tools on the machine that's running Jenkins. This fixed my initial "svn.exe not found" error, but it has no way of knowing my credentials for this server.
I even logged into the SVN server using TortoiseSVN on that machine to try to get it to cache the credentials, but it's still not working. I'm guessing it's because the Jenkins service does not run under the same user that I was logged in as.
I feel like I'm heading down the wrong path, though. It seems a bit odd to have TortoiseSVN installed on that machine alongside whatever SVN client the Jenkins SVN plug-in is already using.
My question: Is there a way to do an svn export from inside a build script in Jenkins and have it use the credentials that Jenkins already knows about? Can I use the same svn.exe that the Jenkins SVN plug-in is using? I really don't want to include the credentials in the build script itself.
Most of the similar questions I found involved the initial check-out failing due to bad credentials, not a check-out that happens as part of the build script.
In the end, I did something like "Jenkins: Access global passwords in PowerShell".
I had to use EnvInject because the Credentials Binding plugin gave me an error when I tried to save the project. This involved adding another copy of my SVN password to Jenkins and hardcoding the Jenkins username into my build script, but it's working.
Now the build script attempts to authenticate with a dummy svn info command. If I'm running the script on my local machine, that works and it proceeds without a username or password. If the dummy svn info command fails, it logs in with --username Jenkins and --password $env:svnpassword, where the svnpassword environment variable is provided by EnvInject.
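As a rough shell rendering of that logic (the original script is PowerShell; REPO_URL and REVISION are placeholders, while the Jenkins username and the svnpassword variable come from the description above):
# try cached credentials first with a dummy 'svn info'
if svn info "$REPO_URL" --non-interactive > /dev/null 2>&1; then
    svn export -r "$REVISION" "$REPO_URL" exported --non-interactive
else
    # fall back to the credentials injected by EnvInject
    svn export -r "$REVISION" "$REPO_URL" exported \
        --username Jenkins --password "$svnpassword" --non-interactive
fi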
Jenkins keeps using the default "jenkins" user when executing builds. My build requires a number of SSH calls. However, these SSH calls fail with host verification exceptions because I haven't been able to place the public key for this user on the target server.
I don't know where the default "jenkins" user is configured, and therefore I can't generate the required public key to place on the target server.
Any suggestions for either;
A way to force Jenkins to use a user I define
A way to enable SSH for the default Jenkins user
Fetch the password for the default 'jenkins' user
Ideally I would like to be able to do all of the above; any help is greatly appreciated.
Solution: I was able to access the default Jenkins user via an SSH connection from the target server. Once I was logged in as the jenkins user, I was able to generate the public/private RSA keys, which then allowed password-free access between the servers.
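In short, the steps were roughly the following, run as the jenkins user on the Jenkins machine (the remote user and host are placeholders):
# generate a key pair for the jenkins user (no passphrase)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
# copy the public key to the target server so future SSH calls are password-free
ssh-copy-id deploy@target-server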
Because when you have numerous slave machines it can be hard to anticipate which of them a build will execute on, rather than explicitly calling ssh I highly suggest using the existing Jenkins plug-ins for executing remote commands over SSH:
Publish Over SSH - execute SSH commands or transfer files over SCP/SFTP.
SSH - execute SSH commands.
The default 'jenkins' user is the system user running your Jenkins instance (master or slave). Depending on your installation, this user may have been created either by the install scripts (deb/rpm/pkg etc.) or manually by your administrator. It may or may not be called 'jenkins'.
To find out which user your Jenkins instance is running as, open http://$JENKINS_SERVER/systemInfo, available from your Manage Jenkins menu.
There you will find your user.home and user.name. E.g. in my case on a Mac OS X master:
user.home /Users/Shared/Jenkins/Home/
user.name jenkins
Once you have that information, you will need to log onto that Jenkins server as the user running Jenkins and ssh into those remote servers to accept the SSH fingerprints.
An alternative (that I've never tried) would be to use a custom Jenkins job to accept those fingerprints by, for example, running the following command in an SSH build task:
ssh -o "StrictHostKeyChecking no" your_remote_server
This last tip is of course completely unacceptable from a pure security point of view :)
So one might make a "job" which writes the host keys as a constant, like:
echo "....." > ~/.ssh/known_hosts
just filling in the dots from ssh-keyscan -t rsa {ip}, after you have verified it.
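A slightly more direct variant of the same idea, with a placeholder IP, and still assuming you verify the key out of band before trusting it:
# append the verified host key instead of pasting it by hand
ssh-keyscan -t rsa 203.0.113.10 >> ~/.ssh/known_hosts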
That's correct: pipeline jobs will normally run as the user jenkins, which means SSH access needs to be set up for this account for it to work in pipeline jobs. People have all sorts of complex build environments, so it seems like a fair requirement.
As stated in one of the answers, each installation can be configured differently, so check under "System Information" (or similar) in "Manage Jenkins" on the web UI. There should be a user.home and a user.name entry for the home directory and the username respectively. On my CentOS installation these are "/var/lib/jenkins/" and "jenkins".
The first thing to do is to get shell access as the jenkins user (in our case). Because this is an auto-generated service account, a login shell is not enabled by default. Assuming you can log in as root, or preferably some other user (in which case you'll need to prepend sudo), switch to jenkins as follows:
su -s /bin/bash jenkins
Now you can verify that it's really jenkins and that the home directory is the one you expect:
whoami
echo $HOME
If these don't match what you see in the configuration, do not proceed.
All is good so far, let's check what keys we already have:
ls -lah ~/.ssh
There may already be keys there that were created with the hostname. See if you can use them:
ssh-copy-id user#host_ip_address
If there's an error, you may need to generate new keys:
ssh-keygen
Accept the default values and use an empty passphrase; if it prompts about existing keys in the home directory, take care not to overwrite ones that are still in use. Now you can run ssh-copy-id again.
It's a good idea to test it with something like
ssh user#host_ip_address ls
If it works, so should ssh, scp, rsync, etc. in the Jenkins jobs. Otherwise, check the console output for the error messages and try those exact commands in the shell, as done above.