I run my own Conan server and want to automatically upload packages generated by CI. When I use conan upload it prompts me for a username and password. Is there a way to automate this process?
Yes, there are a couple of ways to do it:
Using the command conan user myuser -p mypassword you can "log in" to the remote, so the local cache will store a temporary token to authenticate against the server and subsequent commands will not require it. Note that this token can expire; check the docs (e.g. for conan_server). Also, if you are managing more remotes, there is one login per remote (add -r=myremote to the above command for each one).
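For instance, a CI job could log in to each remote up front before uploading (a rough sketch; the remote name, URL and credentials are placeholders, and the password would normally come from a CI secret):
conan remote add myremote https://conan.example.com   # skip if the remote is already configured
conan user ciuser -p "$CONAN_CI_PASSWORD" -r=myremote  # stores the auth token in the local cache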
There are environment variables you can use for this: CONAN_LOGIN_USERNAME and CONAN_PASSWORD, with _REMOTENAME suffixes for different remotes. Have a look in the docs. This is probably the way to go for CI, so the password is not plain text in the CI scripts; most CI services allow for encrypted/secret variables in the configuration. Furthermore, these variables allow an automatic re-login in case of expired tokens, which can happen if the expiry is set to short times and the builds are very long.
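As a sketch, for a remote named myremote it might look like this in the CI environment (to the best of my knowledge the per-remote variables use the upper-cased remote name, so double-check the docs for your Conan version; the secret variable name is just an example):
export CONAN_LOGIN_USERNAME_MYREMOTE=ciuser
export CONAN_PASSWORD_MYREMOTE="$CI_SECRET_CONAN_PASSWORD"   # injected as a protected/encrypted CI variable
conan upload "mypackage/1.0@myuser/stable" -r=myremote --all --confirm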
Related
I need to get a file from a private GitLab in a script (actually a Yocto recipe, if it matters).
Issuing https://gitlab2server.com/api/v4/projects/53/packages/generic/paCKAGE/21.08.16/FILE.tar.xz in a browser works fine, but wget <same URL> fails with a "401 Unauthorized".
I can get around the problem with curl --header "PRIVATE-TOKEN: xxxx" ... but that means encoding my private token into a shell script, which doesn't seem right.
To access a regular git repo I can use git clone git:... and it works because of the uploaded keys.
Using the equivalent scp gitlab2server.com:/api/v4/... . does not work either, failing with "Permission denied (publickey)".
What is the right way to do this?
Ideally I would have ssh (actually scp, of course) access using pre-shared keys to fetch the files. I would hate to put large binaries into the git repo just to be able to access them.
The only way to authenticate with the GitLab API (including the Package API here) is using a personal access token, or the CI_JOB_TOKEN environment variable if running within GitLab CI/CD. CI_JOB_TOKEN is one of the Predefined Variables available to every CI/CD Pipeline Job and holds a non-admin token.
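So rather than hard-coding the token, read it from the environment. A rough sketch using the URL from the question (JOB-TOKEN works inside a GitLab CI/CD job; PRIVATE-TOKEN works with a personal access token elsewhere, here exported as GITLAB_TOKEN, which is a name I'm assuming rather than a GitLab built-in):
# inside a GitLab CI/CD job
curl --header "JOB-TOKEN: $CI_JOB_TOKEN" -O "https://gitlab2server.com/api/v4/projects/53/packages/generic/paCKAGE/21.08.16/FILE.tar.xz"
# outside CI, with a personal access token
wget --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "https://gitlab2server.com/api/v4/projects/53/packages/generic/paCKAGE/21.08.16/FILE.tar.xz"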
I know this has been asked many times because of the complete mess Google has made with authentication, but I can't find an answer. I'm trying to create a CI pipeline that can use service account credentials from a file. I want to be able to run it locally or from a server. From what I've read, gcloud inexplicably ignores the GOOGLE_APPLICATION_CREDENTIALS env var, so I have to globally set my creds with the following, meaning I can kiss goodbye to any kind of parallelisation:
gcloud auth activate-service-account --key-file=$(GOOGLE_APPLICATION_CREDENTIALS)
Surely it must be possible to run multiple commands in parallel with different SA credentials?
Also, the above approach ignores the project ID specified in the key file, so gcloud tries to target the last project ID I personally set for myself.
Is there a solution to this ridiculousness? I'm looking for a non-interactive, non-destructive (i.e. won't trash my personal creds) way of calling gcloud in parallel with different service accounts and automatically using their project IDs. Is this possible?
Well it actually is possible with this:
CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=$(GOOGLE_CREDENTIALS_FILE) \
CLOUDSDK_CORE_PROJECT=$(GCP_PROJECT) \
gcloud run deploy --allow-unauthenticated $(CLOUD_RUN_CONFIG) --image $(GCR_DOCKER_IMAGE)
It's a shame the docs are so poor it's taken me forever to find this info. Why gcloud doesn't just use the same env vars as all the libraries will remain a mystery to everyone outside Google...
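For what it's worth, here is a rough sketch of how two deployments could then run in parallel with different service accounts, each invocation getting its own credentials and project through the environment (file names, service names and project IDs are placeholders):
CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=sa-one.json CLOUDSDK_CORE_PROJECT=project-one \
  gcloud run deploy service-one --image gcr.io/project-one/image-one --allow-unauthenticated &
CLOUDSDK_AUTH_CREDENTIAL_FILE_OVERRIDE=sa-two.json CLOUDSDK_CORE_PROJECT=project-two \
  gcloud run deploy service-two --image gcr.io/project-two/image-two --allow-unauthenticated &
wait   # wait for both background gcloud invocations to finish
If the parallel invocations fight over the shared config directory, pointing CLOUDSDK_CONFIG at a per-job directory should isolate them as well.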
Is there any way to provide a username and password for git pull as command line arguments? In svn there was something like:
svn up --no-auth-cache --username $SVN_USER --password $SVN_PASSWORD
Is there any equivalent of this in git? I can't store the credentials on the filesystem.
Basically, I have a script that runs builds for multiple correlated projects. Because the script is on a shared server and is to be run by different users, I can't store the credentials on the server. I don't want to prompt the user each time either, because the script fetches data from multiple SVN/Git repositories with a single username/password, so I want to read the credentials once in the script and then pass them to the git pull or svn up commands.
If you're using HTTPS, a solution might be in this answer:
The insecure way is to include the credentials in the URL you're pulling, https://user:password@server.com/path/to/repo. Apparently, your credentials end up as plain text in the .git folder and/or in log files.
The secure way is to configure a "credential helper" in git. Then it will remember the credentials once they're used. It will store the credentials securely on the machine, but if you use the system-wide configuration they will apply to all users. For example, with msysgit on Windows I'd use the wincred helper: git config --system credential.helper wincred. My understanding is that --system turns the credential helper on for all repositories and all users on the system, so you'll have to decide if this is okay for your server. Disclaimer: I've only used --global before.
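A variant of the credential-helper approach that may fit the "read the credentials once in a script" requirement: git also accepts a one-shot, inline helper, so the credentials can come from shell variables without being stored on disk (a sketch; GIT_USER and GIT_PASS are placeholder names the script would fill in):
export GIT_USER GIT_PASS   # the helper runs in a subshell, so the variables must be exported
git -c credential.helper='!f() { echo "username=${GIT_USER}"; echo "password=${GIT_PASS}"; }; f' pull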
I haven't seen better options for your situation, but some of the real git gurus might chime in.
Jenkins keeps using the default "jenkins" user when executing builds. My build requires a number of SSH calls. However these SSH calls fail with host verification exceptions, because I haven't been able to place the public key for this user on the target server.
I don't know where the default "jenkins" user is configured and therefore can't generate the required public key to place on the target server.
Any suggestions for any of the following:
A way to force Jenkins to use a user i define
A way to enable SSH for the default Jenkins user
Fetch the password for the default 'jenkins' user
Ideally I would like to be able to do both; any help greatly appreciated.
Solution: I was able to access the default Jenkins user with an SSH request from the target server. Once I was logged in as the jenkins user I was able to generate the public/private RSA keys, which then allowed for password-free access between servers.
Because when you have numerous slave machines it can be hard to anticipate which of them a build will be executed on, rather than calling ssh explicitly I highly suggest using the existing Jenkins plug-ins for executing remote commands over SSH:
Publish Over SSH - execute SSH commands or transfer files over SCP/SFTP.
SSH - execute SSH commands.
The default 'jenkins' user is the system user running your jenkins instance (master or slave). Depending on your installation, this user may have been created either by the install scripts (deb/rpm/pkg etc.) or manually by your administrator. It may or may not be called 'jenkins'.
To find out which user your jenkins instance is running under, open http://$JENKINS_SERVER/systemInfo, available from your Manage Jenkins menu.
There you will find your user.home and user.name. E.g. in my case on a Mac OS X master:
user.home /Users/Shared/Jenkins/Home/
user.name jenkins
Once you have that information you will need to log onto that jenkins server as the user running jenkins and ssh into those remote servers to accept the ssh fingerprints.
An alternative (that I've never tried) would be to use a custom jenkins job to accept those fingerprints by for example running the following command in a SSH build task:
ssh -o "StrictHostKeyChecking no" your_remote_server
This last tip is of course completely unacceptable from a pure security point of view :)
So one might make a "job" which writes the host keys as a constant, like:
echo "....." > ~/.ssh/known_hosts
just fill in the dots with the output of ssh-keyscan -t rsa {ip}, after you have verified it.
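In practice you can also append the scanned key directly instead of copy-pasting it (your_remote_server is a placeholder, and you should still verify the fingerprint out of band before trusting it):
ssh-keyscan -t rsa your_remote_server >> ~/.ssh/known_hosts   # pin the host key for non-interactive ssh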
That's correct, pipeline jobs will normally run as the user jenkins, which means that SSH access needs to be set up for this account for it to work in pipeline jobs. People have all sorts of complex build environments, so it seems like a fair requirement.
As stated in one of the answers, each individual configuration could be different, so check under "System Information" or similar, in "Manage Jenkins" on the web UI. There should be a user.home and a user.name for the home directory and the username respectively. On my CentOS installation these are "/var/lib/jenkins/" and "jenkins".
The first thing to do is to get shell access as user jenkins, in our case. Because this is an auto-generated service account, a login shell is not enabled by default. Assuming you can log in as root, or preferably as some other user (in which case you'll need to prepend sudo), switch to jenkins as follows:
su -s /bin/bash jenkins
Now you can verify that it's really jenkins and that you entered the right home directory:
whoami
echo $HOME
If these don't match what you see in the configuration, do not proceed.
All is good so far, let's check what keys we already have:
ls -lah ~/.ssh
There may already be keys in there, created with the hostname in the comment. See if you can use them:
ssh-copy-id user@host_ip_address
If there's an error, you may need to generate new keys:
ssh-keygen
Accept the default values and no passphrase if it prompts you; the new keys will be added under the home directory, without overwriting existing ones. Now you can run ssh-copy-id again.
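If you would rather do this non-interactively (e.g. from a provisioning script), something like the following should work, assuming there is no existing key at that path that you care about:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # empty passphrase, default key location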
It's a good idea to test it with something like
ssh user@host_ip_address ls
If it works, so should ssh, scp, rsync etc. in the Jenkins jobs. Otherwise, check the console output to see the error messages and try those exact commands on the shell as done above.
I have my Hudson CI server set up. I have a CVS repo that I can only check out via ssh. But I see no way to convince Hudson to check out via ssh. I tried all sorts of options when supplying my connection string.
Has anyone done this? I gotta think it has been done.
If I still remember CVS correctly, you have to set the CVS_RSH environment variable to ssh. I suspect you need to set this so that your Tomcat process inherits the value.
You can check Hudson system information to see exactly what environment variables the JVM is seeing (and passes along to the build.)
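A minimal sketch of what that could look like, assuming you can edit the environment of whatever launches Tomcat/Hudson (the CVSROOT and user below are placeholders):
export CVS_RSH=ssh
# quick manual check from the same account before involving Hudson:
cvs -d :ext:builduser@cvs.example.com:/cvsroot checkout mymodule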
I wrote up an article that tackles this; you can find it here:
http://www.openscope.net/2011/01/03/configure-ssh-authorized-keys-for-cvs-access/
Essentially you want to set up passphraseless ssh keys for your build user. This will allow authentication to occur without the need to work out some kind of way to key in your password.
<edit> i.e. Essentially the standard .ssh key client & server install/exchange.
http://en.wikipedia.org/wiki/Secure_Shell#Key_management
for the jenkins user account:
install user key (public & private part) in ~/.ssh (generate it fresh or use existing user key)
on cvs server:
install user key (public part) in ~/.ssh
add to authorized_keys
back on jenkins user account:
access cvs from command-line as jenkins user and accept remote host key (to known_hosts)
* note: any time the remote server changes its key/IP you will need to manually access CVS and accept the key again *
</edit>
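A condensed sketch of that key exchange, run as the jenkins user (hostnames, user and key paths are placeholders; ssh-copy-id covers the authorized_keys step if it's available on your system):
ssh-keygen -t rsa                       # generate the key pair under ~/.ssh, or reuse an existing one
ssh-copy-id builduser@cvs.example.com   # install the public key into authorized_keys on the CVS server
ssh builduser@cvs.example.com true      # accept the host key into known_hosts on first connect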
There's another way to do it, but you have to manually log in from the build machine to your cvs server and keep the ssh session open so hudson/jenkins can piggyback on the connection. Seemed kind of pointless to me though, since you want your CI server to be as hands-off as possible.