GitLab server: giving access to only certain SSH keys rather than any key that the user uploads - ssh

So, I am new to the GitLab server. What I want to achieve is this:
Allow access to repositories only with certain SSH keys. There are a limited number of machines and a limited number of users, so if a user adds an SSH key outside this set of keys, the repo should not clone there. Because my team size is small, I am okay with adding those public keys to the accounts myself.
I am fine with the idea of SSH access, but currently, as an admin, I lose the freedom to conveniently track or choose which SSH keys can access my repo. Can I disable users from adding SSH keys?
Is there any other way to ensure this? Would it work to drop SSH access and instead use HTTPS with IP whitelisting?

GitLab was, in the beginning (2011), based upon gitolite, but switched to its own mechanism in 2013.
Nowadays, it is best to declare a GitLab project private and add users to said project: that way you won't have to manage SSH or HTTPS access: any user who is not part of that project won't be able to see it/clone it (HTTPS or SSH).
In other words, repository access is no longer based on SSH keys (not for years), but is based on project visibility.
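For illustration, both steps can also be done through the GitLab REST API rather than the web UI. This is a minimal sketch, assuming API v4; the host, token, project ID, and user ID are placeholders:
# make the project private
curl --request PUT --header "PRIVATE-TOKEN: <your_token>" \
  --data "visibility=private" \
  "https://gitlab.example.com/api/v4/projects/<project_id>"
# add a user as a member (access_level=30 is the Developer role)
curl --request POST --header "PRIVATE-TOKEN: <your_token>" \
  --data "user_id=<user_id>&access_level=30" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/members"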
The OP adds:
even if a user is part of a project, he should only be able to clone the project on certain remote machines.
That is not a Git or GitLab feature, which means you need:
to restrict Git protocols on GitLab to SSH only;
to change the gitlab-shell SSH forced-command script so that it only allows commands coming from certain IPs.
There has been a "restrict group access by IP address" feature since GitLab 12.0 (June 2019), but... only in GitLab Ultimate (meaning: "not free").
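To illustrate the second point above, here is a hedged sketch of an IP-filtering wrapper used as the forced command in ~git/.ssh/authorized_keys; the wrapper path, allowed addresses, and gitlab-shell location (a typical Omnibus path) are all assumptions you would adapt to your install:
#!/bin/sh
# /usr/local/bin/gitlab-shell-ip-check (hypothetical wrapper)
# SSH_CONNECTION is "client_ip client_port server_ip server_port"
CLIENT_IP=${SSH_CONNECTION%% *}
case "$CLIENT_IP" in
  10.0.0.5|10.0.0.6|192.168.1.10)   # allowed machines (example addresses)
    exec /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell "$@" ;;
  *)
    echo "Access denied from $CLIENT_IP" >&2
    exit 1 ;;
esac
Each authorized_keys entry would then use command="/usr/local/bin/gitlab-shell-ip-check key-<id>" instead of calling gitlab-shell directly.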

Related

How do I tell the GitHub CLI to use a specific SSH key?

I have various GitHub accounts and for each account I have SSH set up. So under ~/.ssh I have a public and private key for each account.
I want to use the GitHub CLI, but I am not sure how I can tell the CLI to use a particular SSH key.
In case it is relevant, this is what I get when I run ssh-add -l:
Example Scenario
I want to run gh repo create on GitHub account B, but for some reason, the repo got created on GitHub account A. Is there a way I can tell gh what account to use?
If you have different GitHub users, the gh CLI won't be very effective. As @phd pointed out, commands like gh repo create require logging in via an auth token: https://cli.github.com/manual/gh_auth_login
Switching contexts between accounts (i.e. github.com/user1 and github.com/user2) definitely doesn't seem supported, so you'd have to hack around it by logging in and out every time you switched.
But which SSH key git should use can be configured easily enough with some combination of ~/.ssh/config, .gitconfig, and/or setting the GIT_SSH_COMMAND environment variable before running git commands.
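A minimal sketch of the ~/.ssh/config approach; the host aliases and key paths are examples, not anything the gh CLI itself defines:
# ~/.ssh/config: one alias per GitHub account
Host github-user1
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_ed25519_user1
  IdentitiesOnly yes
Host github-user2
  HostName github.com
  User git
  IdentityFile ~/.ssh/id_ed25519_user2
  IdentitiesOnly yes
Then clone with git clone git@github-user1:user1/repo.git, or override a single command with GIT_SSH_COMMAND="ssh -i ~/.ssh/id_ed25519_user2 -o IdentitiesOnly=yes" git push.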

Generate key files to connect to Bitbucket in Vagrant boxes

We use Vagrant boxes for development. For every project or small snippet we simply start a new box and provision it with Ansible. This is working fantastic; however, we do get into trouble when connecting to a private Bitbucket repository within a bower install run.
The solution we have now is to generate a new key (ssh-keygen), accept all defaults (pressing <return>, <return>, <return>) and then grab the public key (cat ~/.ssh/id_rsa.pub). Copy it, go to Bitbucket, view your account and add this new ssh key. And repeat for every new box you instantiate.
We have to do this because of some closed source packages (hosted on Bitbucket) we install via Bower. We do have another experience, which is much better: composer (php's package manager) and private Github repositories. With that setup, you have to enter your username/password/2fa token via the command line and an OAuth token is generated for you. This works great.
So, is there a way we can mitigate this Bower/Bitbucket/SSH issue? For obvious reasons I don't want to provision the boxes with a standard private key, but there has to be another solution.
While I'm not sure that my situation is as complex as yours (I'm not using Ansible or Bower), I solved this problem by using Vagrant's SSH agent forwarding. This blog post gives you the details on how to get it working:
Cloning from GitHub in Vagrant using SSH agent forwarding
So as long as each of the developers has access to the Bitbucket repos from their local machines, it should work.
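In short (a sketch, assuming the approach from that post): put config.ssh.forward_agent = true in the project's Vagrantfile, then check that the host's key reaches the box via the agent. The key path is an example:
# on the host: make sure the key Bitbucket knows about is loaded in the agent
ssh-add ~/.ssh/id_rsa
ssh-add -l
# inside the box (after `vagrant ssh`): the forwarded agent should now answer for Bitbucket
ssh -T git@bitbucket.org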

Can I change gerrit authentication type from openid to ldap?

We in our team are planning to use Gerrit. So, to get introduced, I set up a server, used OpenID for authentication, and created some test users and test projects in it.
Now we are ready to use it. But we actually prefer LDAP for real use.
So, can I change my authentication system from OpenID to LDAP? What will happen to the current users then?
I also want to clear the test projects and changes. How can I do that?
Can I completely delete the existing Gerrit setup and initiate a fresh setup on the same machine? (I tried extracting the jar in a different folder, but I faced some problems with it.)
I am using Ubuntu 12.04 as my server.
Please help.
Delete the database (you're not using the H2 database anymore, but some MySQL or PostgreSQL server, right?) plus the directory where Gerrit is running (the -d parameter, see the docs). Additionally, remove the git repos if you configured them to be located at a different path.
Then all your data is gone and you can start from scratch.
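A hedged sketch of that sequence; the site directory, database name, and repository path below are examples and need to be adapted to your installation:
# stop Gerrit first (the control script lives in the site directory)
/home/gerrit/review_site/bin/gerrit.sh stop
# drop the database (skip if you were still on embedded H2; that lives inside the site dir)
mysql -u root -p -e "DROP DATABASE reviewdb;"
# remove the site directory (the -d parameter) and the repos, if stored elsewhere
rm -rf /home/gerrit/review_site
rm -rf /home/gerrit/git
# re-initialize from scratch
java -jar gerrit.war init -d /home/gerrit/review_site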

Is there an easy way to use more than one private ssh key on the same gitolite client?

I have a machine running gitolite that is used both for code repos and for SparkleShare. The problem is that SparkleShare creates its own key pair; that key pair authenticates first, and has no permissions on the code repos, so gitolite terminates without trying any other pairs.
I'm thinking that I may need to figure out how to either tell SparkleShare to use my original key, or write an alias that forces gitolite to use the correct private key--something I'm not sure is even possible.
Never having used SparkleShare, I am not quite sure of its requirements, but I read some of the documentation to try to get a feel for how it interacts with Git. It looks like it is designed to publish and pull data through a Git repository (it describes using “your own server”, Github, and Gitorious for data storage/transfer/sync/whatever).
In the following I am assuming that you want to serve both your SparkleShare repository and other non-SparkleShare repositories through the same Gitolite installation (so that you can use Gitolite to control access to both kinds of repositories).
It seems to me that it will probably work just fine with a Gitolite-hosted repository if you follow Gitolite’s rules for giving access instead of the generic “Git over SSH” that is described in SparkleShare’s “use your own server” documentation.
In particular, do not use ssh-copy-id, or cat keyfile >> .ssh/authorized_keys to install public keys into the Gitolite user’s .ssh/authorized_keys. This effectively gives the owners of those public keys “administrative access” to the Gitolite installation (e.g. the ability to completely delete the Gitolite installation and anything else stored under that account). Instead, you should add users through Gitolite to grant new SparkleShare users access to a Gitolite-hosted repository (make and push changes in your gitolite_admin clone: put the user’s public key into keydir/newusername.pub and add newusername to the repository’s access lists in conf/gitolite.conf). You can even have multiple SSH keys associated with a single Gitolite user if you think that is the way to go.
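A minimal sketch of that workflow; the server name, user name, and repository name are placeholders:
git clone git@server.example.com:gitolite-admin
cd gitolite-admin
cp /path/to/newusername.pub keydir/newusername.pub
# in conf/gitolite.conf, grant access, e.g.:
#   repo our-sparkleshare
#       RW+ = newusername
git add keydir/newusername.pub conf/gitolite.conf
git commit -m "Give newusername access to our-sparkleshare"
git push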
If you find that you absolutely must still have users with both “full access” keys (no command=) and Gitolite-managed keys (keys with command=, managed through keydir/ in the Gitolite admin repository) in the same account’s .ssh/authorized_keys, then you may find that you can force ssh clients to supply only certain specified keys via the IdentitiesOnly parameter (see ssh_config(5)).
Assuming that you can access Gitolite through Git URLs like git@server.example.com:projectA.git, then configure each client like this:
Host sparkleshare
  User git
  HostName server.example.com
  IdentityFile ~/sparkleshare/pub_key   # SparkleShare's key (placeholder path)
  IdentitiesOnly yes
Host gitolite
  User git
  HostName server.example.com
  IdentityFile ~/.ssh/id_rsa            # or the user's normal, non-SparkleShare key
  IdentitiesOnly yes
In SparkleShare, set “my own server” to sparkleshare (or git@sparkleshare if it demands a user part) and set the “folder name” to our-sparkleshare.git (whatever the “Gitolite path” to the repository is, not the “full server site path”, since access will be going through Gitolite and it expects paths relative to its REPO_BASE setting).
For non-SparkleShare access, use Git URLs like gitolite:projectA.git

How to get Hudson CI to check out CVS projects over SSH?

I have my Hudson CI server set up. I have a CVS repo from which I can only check out stuff via ssh. But I see no way to convince Hudson to check out via ssh. I tried all sorts of options when supplying my connection string.
Has anyone done this? I gotta think it has been done.
If I still remember CVS correctly, you have to set the CVS_RSH environment variable to ssh. I suspect you need to set this so that your Tomcat process inherits this value.
You can check Hudson system information to see exactly what environment variables the JVM is seeing (and passes along to the build.)
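For reference, this is what that looks like on the command line; the server, account, and module names are examples, and the same CVS_RSH value has to be visible to the Tomcat/Hudson process (e.g. set in its startup environment):
export CVS_RSH=ssh
# the :ext: access method runs the transport program named by CVS_RSH
cvs -d :ext:builduser@cvs.example.com:/var/lib/cvsroot checkout mymodule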
I wrote up an article that tackles this; you can find it here:
http://www.openscope.net/2011/01/03/configure-ssh-authorized-keys-for-cvs-access/
Essentially you want to set up passphraseless ssh keys for your build user. This will allow authentication to occur without the need to work out some kind of way to key in your password.
Edit: i.e. essentially the standard SSH key client & server install/exchange.
http://en.wikipedia.org/wiki/Secure_Shell#Key_management
for the jenkins user account:
install user key (public & private part) in ~/.ssh (generate it fresh or use existing user key)
on cvs server:
install user key (public part) in ~/.ssh
add to authorized_keys
back on jenkins user account:
access cvs from command-line as jenkins user and accept remote host key (to known_hosts)
Note: any time the remote server changes its key/IP, you will need to manually access CVS and accept the key again.
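A condensed sketch of those steps, run as the jenkins/hudson user; the host and account names are examples:
# generate a passphraseless key for the build user (skip if one already exists)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# install the public part into authorized_keys on the CVS server's account
ssh-copy-id builduser@cvs.example.com
# connect once so the server's host key lands in known_hosts
ssh builduser@cvs.example.com true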
There's another way to do it, but you have to manually log in from the build machine to your CVS server and keep the SSH session open so Hudson/Jenkins can piggyback on the connection. Seemed kinda pointless to me though, since you want your CI server to be as hands-off as possible.