Moved a gitlab instance, now having problems talking to gitolite - ssh

I had a working GitLab instance until a few weeks ago, when we had to move all the user directories to another disk because of resource constraints. I've gone through and fixed all the paths I could find, and my GitLab instance is up and running again. Git appears to be working, and I pass the GitLab self-diagnostic test.
However, from a remote client that's previously worked, I get prompted to provide the git user's password, which suggests an ssh problem.
Looking in my .gitolite stuff (conf/gitolite.conf & the keydir), things look in order. My public key is in the keydir, and the rights are assigned in the gitolite.conf correctly.
EDIT: gitolite public keys were in the .ssh/authorized_keys file and the protections were as created by gitolite setup.
What am I missing?

My public key is in the keydir, and the rights are assigned in the gitolite.conf correctly.
This isn't enough.
For ssh not to ask you for a password, you need to check that your ~gitlab/.ssh/authorized_keys is complete (with the gitolite public keys in it, and with the right protections).
Check out the gitolite setup command (for gitolite V3).
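As a minimal sketch (assuming gitolite v3 installed under the git user's home with standard paths; adjust the account name and paths to your install), re-running gitolite setup rebuilds ~/.ssh/authorized_keys from keydir/ and restores its permissions:
su - git                        # the account gitolite is installed under
gitolite setup                  # regenerates ~/.ssh/authorized_keys from the keys in keydir/
ls -l ~/.ssh/authorized_keys    # should be owned by that user and not writable by anyone else
After that, ssh git@yourserver info from a client whose key is in keydir/ should list the repositories it can access instead of prompting for a password.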

Related

Gitlab server: giving access to only certain ssh keys rather than any key that the user uploads

So, I am new to the GitLab server. Now, what I want to achieve is this:
Allow access to repositories only from certain SSH keys. There is a limited number of machines and a limited number of users, so if a user adds an SSH key outside these sets of keys, the repo should not be cloneable there. Because my team is small, I am okay with adding those public keys to the accounts myself.
I am fine with the idea of SSH access, but currently, as an admin, I lose the ability to conveniently track or choose which SSH keys can access my repo. Can I prevent users from adding SSH keys?
Is there any other way to ensure this? Would HTTPS access with IP whitelisting work instead of SSH?
GitLab was, in the beginning (2011), based upon gitolite, but switched to its own mechanism in 2013.
Nowadays, it is best to declare a GitLab project private and add users to said project: that way you won't have to manage SSH or HTTPS access: any user who is not part of that project won't be able to see it/clone it (HTTPS or SSH).
In other words, repository access is no longer based on SSH keys (not for years), but is based on project visibility.
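For example, here is a hedged sketch using the GitLab REST API (the project ID 42, user ID 123, host name, and $GITLAB_TOKEN are placeholders; the same settings are also available in the project's web UI):
# make the project private
curl --request PUT --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
     "https://gitlab.example.com/api/v4/projects/42?visibility=private"
# add a user as a Developer (access_level 30), so only explicit members can see or clone the project
curl --request POST --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
     --data "user_id=123&access_level=30" \
     "https://gitlab.example.com/api/v4/projects/42/members"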
The OP adds:
even if a user is part of a project, he should only be able to clone the project on certain remote machines.
That is not a Git or GitLab feature, which means you need:
to restrict Git protocols on GitLab to SSH only
change the gitlab-shell SSH forced command script in order to allow commands only coming from some IPs
There is a "restrict group access by IP address" feature since GitLab 12.0 (June 2019), but... only in GitLab Ultimate (meaning: "not free").
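For illustration of the underlying OpenSSH mechanism only (gitlab-shell manages authorized_keys itself, so this is not a supported GitLab setting; the key and addresses below are made up): OpenSSH's from= option restricts which source addresses a given key may connect from:
from="10.0.0.0/24,203.0.113.17" ssh-ed25519 AAAAC3Nza...rest-of-key alice@laptop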

GitLab: certificate issue, missing SSH public key

I can't get my head around how this works and what I need to do.
I have a GitLab account and successfully generated a private and public key pair in order to get access to it. I followed all the steps described at https://gitlab.com/help/ssh/README#generating-a-new-ssh-key-pair . Now I have created a new project and want to synchronize the GitLab project with the one I created locally. Because I have access to the machine I used to create the key pair, I simply copied the public key from that machine (located in its ~/.ssh folder) to the machine I am currently working on (into its ~/.ssh folder). But it doesn't have any effect; I can't even execute the git clone command.
~> git clone git@gitlab.com:[myUser]/[myProject].git
Cloning into 'gate-controller'...
git@gitlab.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I tried to figure out the reason and executed
~> ssh -vT git@gitlab.com
but to be honest I can't interpret the response. I don't see any reference in it to my public key file in the .ssh folder.
Could you please help me solve the issue and understand what the problem is?
Many thanks in advance.
UPDATE:
You need the private key on any machine you're attempting to pull/push from. When authenticating with a service that holds your public key (which any Git host like GitHub, GitLab, etc. will have), it is your private key that does the authenticating.
You can read more about SSH (which Git uses when you don't use HTTPS auth) and PKI (Public Key Infrastructure) here: https://www.ssh.com/pki/
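As a hedged sketch (assuming the key pair is ~/.ssh/id_ed25519 / id_ed25519.pub on the original machine; substitute your real file names and host), copy the private key to the new machine, lock down its permissions, and test:
# on the new machine
scp original-machine:~/.ssh/id_ed25519 ~/.ssh/id_ed25519
chmod 600 ~/.ssh/id_ed25519
ssh -T git@gitlab.com            # should greet you by username instead of "Permission denied (publickey)"
Generating a fresh key pair on the new machine and adding its public key to your GitLab profile works just as well, and avoids moving private keys between machines.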

Mercurial: How to post-hook push to Bitbucket?

We have a 'master' Mercurial server on our network that we use for a local staging box. Our team does all of our pushes and pulls to/from this one box. I'm having trouble with the implementation I'm using, but I'm also second guessing whether what I want to do is even a good idea...
We also want to start using BitBucket, but only as a secondary server. I'd like to use a hook to automatically push to Bitbucket, but I can't get it working right...
Here's the HGRC from the 'master' repo:
[hooks]
changegroup =
changegroup.update = hg update
changegroup.bitbucket = hg push ssh://hg@bitbucket.org/account/repo
If I manually fire off the above push, everything works perfectly. However, as a hook it fails:
warning: changegroup.bitbucket hook exited with status 255
I followed this guide to get SSH working: Set up SSH for Git and Mercurial on Mac OSX/Linux
I get my keys generated, I run ssh-agent, and I ssh-add the key. But ssh-agent doesn't seem to be doing anything, and as soon as I exit the SSH session it seems to leave memory. Additionally, when I test it out with ssh -Tv hg@bitbucket.org it prompts me for my password. I thought the whole point of this was for it not to do that?
But taking a step back, maybe this is a terrible idea to begin with. If I give my public key to Bitbucket, wouldn't that theoretically mean that if someone got a hold of it, they could SSH in to my box without a password?
And if so, what alternative do I have to forward commits to Bitbucket? I'd rather not use HTTPS because it would require putting our Bitbucket password in plain text in the .hg/hgrc file...
Maybe there's some more obvious way to do this that I'm missing? For the developers, I'd rather keep things the way they are now (everyone push to master) instead of reconfiguring everyone's developer box to have a private key and to push to bitbucket instead...
As always, thanks for any help you guys can provide.
Woah, there are a lot of questions there. I'll hit a few of 'em:
But ssh-agent doesn't seem to be doing anything, and as soon as I exit the SSH session it seems to leave memory.
You're correct: ssh-agent is for interactive sessions, not for automation. In most setups it's killed when you log out, but even if that weren't the case it wouldn't work the way you imagine, because when someone does that hg push they're running a new, non-interactive session that wouldn't have access to the ssh-agent anyway.
Additionally, when I test it out with ssh -Tv hg@bitbucket.org it prompts me for my password.
Testing it out like that isn't valid. That's saying "I want to log into an interactive session at bitbucket with the username hg", but that's not what they authorize you to do. If you send them your public key they let you login as the user hg only for the purposes of doing hg non-interactive commands.
If I give my public key to Bitbucket, wouldn't that theoretically mean that if someone got a hold of it, they could SSH in to my box without a password?
No, public keys are meant to be public -- you can list anyone's on GitHub, for example. The public key just says "anyone who has the private key that matches this is authorized to...", so any site that asks for your private key is run by crooks, but any site that wants your public key is just offering you a way to use something better than a password.
One thing you may be missing about hooks is "who" the hook runs as. When people push to your "centralish" repo over ssh, the hook runs as their unix user, and if they push over http, the hook runs as the web server's user.
If you had:
a private ssh key with no password on it
the public key matching that private key set up on Bitbucket
the unix user running the hook using that private key for access to bitbucket.org
then what you're trying to do would work.
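A minimal sketch of that setup (the host alias, key file name, and account/repo path are placeholders; the key file must be readable by whatever unix user the hook ends up running as):
# ~/.ssh/config for the user(s) the hook runs as
Host bitbucket-mirror
    HostName bitbucket.org
    User hg
    IdentityFile ~/.ssh/bitbucket_mirror_key    # passphrase-less private key; its public half is registered on Bitbucket
    IdentitiesOnly yes
# .hg/hgrc of the master repo
[hooks]
changegroup.bitbucket = hg push ssh://bitbucket-mirror/account/repo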

Is there an easy way to use more than one private ssh key on the same gitolite client?

I have a machine running gitolite that is used both for code repos and for SparkleShare. The problem is that SparkleShare creates its own key pair; that key pair authenticates first, and has no permissions on the code repos, so gitolite terminates without trying any other pairs.
I'm thinking that I may need to figure out how to either tell Sparkleshare to use my original key, or write an alias that forces gitolite to use the correct private key--something I'm not sure is even possible.
Never having used SparkleShare, I am not quite sure of its requirements, but I read some of the documentation to try to get a feel for how it interacts with Git. It looks like it is designed to publish and pull data through a Git repository (it describes using “your own server”, Github, and Gitorious for data storage/transfer/sync/whatever).
In the following I am assuming that you want to serve both your SparkleShare repository and other non-SparkleShare repositories through the same Gitolite installation (so that you can use Gitolite to control access to both kinds of repositories).
It seems to me that it will probably work just fine with a Gitolite-hosted repository if you follow Gitolite’s rules for giving access instead of the generic “Git over SSH” that is described in SparkleShare’s “use your own server” documentation.
In particular, do not use ssh-copy-id, or cat keyfile >> .ssh/authorized_keys to install public keys into the Gitolite user’s .ssh/authorized_keys. This effectively gives the owners of those public keys “administrative access” to the Gitolite installation (e.g. the ability to completely delete the Gitolite installation and anything else stored under that account). Instead, you should add users through Gitolite to grant new SparkleShare users access to a Gitolite-hosted repository (make and push changes in your gitolite-admin clone: put the user’s public key into keydir/newusername.pub and add newusername to the repository’s access lists in conf/gitolite.conf). You can even have multiple SSH keys associated with a single Gitolite user if you think that is the way to go.
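A minimal sketch of that workflow (newusername and our-sparkleshare are placeholders for your actual user and repository names):
# in your clone of the gitolite-admin repository
cp /path/to/newusername.pub keydir/newusername.pub
# in conf/gitolite.conf, grant access, e.g.:
#   repo our-sparkleshare
#       RW+ = newusername
git add keydir/newusername.pub conf/gitolite.conf
git commit -m "Give newusername access to our-sparkleshare"
git push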
If you find that you absolutely must still have users with both “full access” keys (no command=) and Gitolite-managed keys (keys with command=, managed through keydir/ in the Gitolite admin repository) in the same account’s .ssh/authorized_keys, then you may find that you can force ssh clients to supply only certain specified keys via the IdentitiesOnly parameter (see ssh_config(5)).
Assuming that you can access Gitolite through Git URLs like git@server.example.com:projectA.git, configure each client like this:
Host sparkleshare
    User git
    HostName server.example.com
    IdentityFile ~/sparkleshare/ssh_key    # SparkleShare's private key; adjust to wherever SparkleShare stores it
    IdentitiesOnly yes

Host gitolite
    User git
    HostName server.example.com
    IdentityFile ~/.ssh/id_rsa             # or the user's normal, non-SparkleShare key
    IdentitiesOnly yes
In SparkleShare, set “my own server” to sparkleshare (or git@sparkleshare if it demands a user part) and set the “folder name” to our-sparkleshare.git (whatever the “Gitolite path” to the repository is, not the “full server site path”, since access will be going through Gitolite and it expects paths relative to its REPO_BASE setting).
For non-SparkleShare access, use Git URLs like gitolite:projectA.git

How to get Hudson CI to check out CVS projects over SSH?

I have my Hudson CI server set up. I have a CVS repo that I can only check out via SSH, but I see no way to convince Hudson to check out via SSH. I tried all sorts of options when supplying my connection string.
Has anyone done this? I gotta think it has been done.
If I still remember CVS correctly, you have to set the CVS_RSH environment variable to ssh. I suspect you need to set it in such a way that your Tomcat process inherits the value.
You can check Hudson's system information to see exactly which environment variables the JVM is seeing (and passes along to the build).
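A hedged sketch (the CVS host, repository path, and module name are placeholders; where you export the variable depends on how Tomcat/Hudson is started, e.g. its init script or startup environment):
export CVS_RSH=ssh
cvs -d :ext:builduser@cvs.example.com:/cvsroot checkout mymodule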
I wrote up an article that tackles this; you can find it here:
http://www.openscope.net/2011/01/03/configure-ssh-authorized-keys-for-cvs-access/
Essentially you want to set up passphraseless ssh keys for your build user. This will allow authentication to occur without the need to work out some kind of way to key in your password.
<edit> i.e. Essentially the standard .ssh key client & server install/exchange.
http://en.wikipedia.org/wiki/Secure_Shell#Key_management
for the jenkins user account:
install user key (public & private part) in ~/.ssh (generate it fresh or use existing user key)
on cvs server:
install user key (public part) in ~/.ssh
add to authorized_keys
back on jenkins user account:
access cvs from the command line as the jenkins user and accept the remote host key (into known_hosts); see the sketch after this block
* note any time remote server changes key/ip you will need to manually access cvs and accept key again *
</edit>
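A condensed sketch of those steps (user names, host, and repository path are placeholders):
# on the build machine, as the jenkins user
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa       # passphrase-less key pair
ssh-copy-id builduser@cvs.example.com          # appends the public key to authorized_keys on the CVS server
ssh builduser@cvs.example.com true             # first connection: accept the host key into known_hosts
CVS_RSH=ssh cvs -d :ext:builduser@cvs.example.com:/cvsroot checkout mymodule   # verify a non-interactive checkout works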
There's another way to do it, but you have to manually log in from the build machine to your CVS server and keep the SSH session open so Hudson/Jenkins can piggyback on the connection. Seemed kinda pointless to me though, since you want your CI server to be as hands-off as possible.