gitolite with non-default port - ssh

To clone a repository managed by gitolite, one usually uses the following syntax:
git clone gitolite@server:repository
This tells the SSH client to connect to port 22 of server, using gitolite as the user name. When I try it with the port number:
git clone gitolite@server:22:repository
Git complains that the repository 22:repository is not available. What syntax should be used if the SSH server uses a different port?

The “SCP style” Git URL syntax (user@server:path) does not support including a port. To include a port, you must use an ssh:// “Git URL”. For example:
ssh://gitolite@server:2222/repository
Note: compared to gitolite@server:repository, this presents a slightly different repository path to the remote end (the absolute /repository instead of the relative path repository); Gitolite accepts both types of paths, but other systems may vary.
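For example, to clone over the non-default port, or to point an existing clone at the port-qualified URL (the port 2222 here is just a stand-in for whatever your server actually uses):
git clone ssh://gitolite@server:2222/repository
git remote set-url origin ssh://gitolite@server:2222/repository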
An alternative is to use a Host entry in your ~/.ssh/config (see your ssh_config(5) manpage). With such an entry, you can create an “SSH host nickname” that incorporates the server name/address, the remote user name, and the non-default port number (as well as any other SSH options you might like):
Host gitolite
User gitolite
HostName server
Port 2222
Then you can use very simple Git URLs like gitolite:repository.
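For example, with the Host entry above in place, both plain SSH and Git resolve the nickname automatically:
ssh gitolite
git clone gitolite:repository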
If you have to document (and/or configure) this for multiple people, I would go with ssh:// URLs, since there is no extra configuration involved.
If this is just for you (especially if you might end up accessing multiple repositories from the same server), it might be nice to have the SSH host nickname to save some typing.

It is explained in great detail here: https://github.com/sitaramc/gitolite/blob/pu/doc/ssh-troubleshooting.mkd#_appendix_4_host_aliases
Using a "host" para in ~/.ssh/config lets you nicely encapsulate all this within ssh and give it a short, easy-to-remember, name. Example:
host gitolite
user git
hostname a.long.server.name.or.annoying.IP.address
port 22
identityfile ~/.ssh/id_rsa
Now you can simply use the one word gitolite (which is the host alias we defined here) and ssh will infer all those details defined under it -- just say ssh gitolite and git clone gitolite:reponame and things will work.

Related

Setting up pyCharm pro for remote ssh

I have PyCharm Pro and I need to set up a remote repo on it via SSH. There are instructions available for doing this, but my requirements are slightly different from what I have seen available. My organisation has SSH set up via Cloudflare, and all those configs are already put in .ssh/config. It includes hostname, IdentityFile, username, and so on. If I need to access my remote machine, I just do ssh <HOST-NAME> from the terminal. My config looks like this:
Host <HOST-NAME>
HostName <HOST-NAME>.<ORGANISATION>.net
ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h
IdentityFile ~/.ssh/id_rsa
User <USER-NAME>
The way to set up PyCharm usually includes instructions like specifying host, port, username, etc., so that PyCharm can essentially run
ssh username@host_ip_address. However, I just need it to run ssh <HOST-NAME> so that it can access everything else using the config file that is already set up.
Is there a way to make this happen?
I use PyCharm Pro with my own .ssh/config, a jump-proxy pseudo hostname, and certificates, so my setup is slightly different: I run ssh pseudo-hostname, with no user either.
In PyCharm I set a valid username and port (default 22) because in my case they get passed on.
Your setup ignores the username and port, no? If so, you can use placeholder values.
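If you are unsure which values to enter, one way to check what OpenSSH itself resolves for a host is the -G flag (available on any reasonably recent OpenSSH), which prints the effective configuration after applying ~/.ssh/config; the user and port lines it reports are the ones worth mirroring in PyCharm's dialog:
ssh -G <HOST-NAME>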

SSH Config for using a jumpserver

I have a problem with the configuration of a jump server in ssh_config.
So far I was able to configure the connection for a server as follows:
Host myhost*
ProxyCommand ssh -q myjumpserver nc %h %p
So I could reach via ssh myhost1 or ssh myhost2 directly my two servers.
Now unfortunately something has changed and I should call my servers with the following syntax:
ssh user@newjump@myhost1
(yes, 2x @ is correct and only this notation works)
Newjump is not a jumpserver in the real sense, so I can't use the syntax above.
Does anyone have an idea how I can configure this?
Since I have >30 servers and these groupwise always have the same naming scheme, it would be nice if I could generalize it as above. But in case of doubt I would also implement a solution for each individual server.
What is important to me is that I only have to type ssh myhost1 to reach this host. A solution via alias is out of the question, because then I would have to adapt numerous scripts.
I solved it with:
Host myhost1
Hostname newjump
User user@myhost1
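Since the User value has to embed the final hostname, this doesn't collapse into a single wildcard stanza; generalizing to the question's >30 servers means repeating the pattern per host (a sketch; whether ssh_config token expansion could shorten this depends on the OpenSSH version, so the explicit form is the safe one):
Host myhost2
Hostname newjump
User user@myhost2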

CanonicalHostname and ssh connections through a jumphost

Say I have an internal subdomain named internal.sub.domain.net with hosts like foo.internal.sub.domain.net and bar.internal.sub.domain.net. Furthermore, these hosts are only accessible from the outside world through the jumphost internal.sub.domain.net. Thus, the following ~/.ssh/config allows me to directly connect to either of the internal hosts from outside:
Host foo bar
HostName %h.internal.sub.domain.net
ProxyJump internal.sub.domain.net
This is fine for a subdomain with just only a couple of hosts, but it quickly becomes rather unmaintainable if the number of hosts is large and / or changes occasionally. But wildcards may come to the rescue:
Host *.internal.sub.domain.net
ProxyJump internal.sub.domain.net
This avoids the maintenance issue, but forces you to always specify the fully qualified hostname to connect, which is rather tedious. From looking at the ssh_config man-page, the CanonicalizeHostname and CanonicalDomains options seem to fix that:
CanonicalizeHostname always
CanonicalDomains internal.sub.domain.net
Host *.internal.sub.domain.net
ProxyJump internal.sub.domain.net
This would only work if the name lookup for the host to be connected to succeeds, but as these hosts are internal by definition, it is no surprise that name resolution fails.
A not really helpful but very illustrative hack is to fake successful name resolutions by just adding all the internal hosts as aliases for e.g. 127.0.0.1 to /etc/hosts, i.e. adding the following line to /etc/hosts:
127.0.0.1 foo.internal.sub.domain.net bar.internal.sub.domain.net
With that line in place, the last ~/.ssh/config works like a charm. But apart from the fact that this would be quite a hack, it only shifts the maintenance issue from ~/.ssh/config to /etc/hosts.
As the described scenario should not be so uncommon, is there a way to make it work? To phrase it in one sentence again:
I want to be able to ssh to all internal hosts that live in internal.sub.domain.net, i.e. that are only accessible through the internal.sub.domain.net jumphost, without having to list each of these hosts somewhere (as they may frequently be added to or removed from the internal domain), and without being forced to always type their fully qualified hostnames.
If you are invoking ssh from a shell, you could define a short variable for the internal domain and append that to the relevant hostnames:
e.g. in your ~/.bashrc or similar:
i=".internal.sub.domain.net"
Then, on the command line:
ssh foo$i
ssh bar$i
At least until a better solution comes along. It's not perfect but it's only 2 extra characters on the command line.
Update: A better solution has come along.
A better solution, by Peter Stuge on the openssh mailing list, is to have something like this in your config file:
Host *.i
ProxyCommand ssh -W %hnternal.sub.domain.net:%p jumphost
This enables the use of abbreviated hostnames like foo.i (rather than foo$i in my first answer): %h expands to the name exactly as typed, so foo.i becomes foo.internal.sub.domain.net once the rest of the domain is appended. Being able to use . rather than $ is better because it doesn't require the use of the shift key.
However, this is limited to a single remote jumphost whose internal domain starts with i. I think that internal domains that start with internal. are common. If you need to connect to multiple such internal domains, the above needs a slight (but very ugly) tweak. I apologize in advance if anyone's eyes start to bleed.
For example, if you need to connect to these internal domains:
internal.a.com
internal.b.net
Then the above needs to be changed to something like this:
Host *.a
ProxyCommand sh -c "H=%h; exec sh -c \"exec ssh -W \${H%%.a}.internal.a.com:%p jumphosta\""
Host *.b
ProxyCommand sh -c "H=%h; exec sh -c \"exec ssh -W \${H%%.b}.internal.b.net:%p jumphostb\""
Believe me, I wish it could be simpler but that's the shortest thing that I could get to work (without using a separate shell script or environment variables).
If a username is supplied on the command line, it only applies to the jumped-to host. Your local username will be used by default for the jumphost.
If you need to use a username for the jumphost that isn't the same as your local username, add it in the ProxyCommand (e.g. jumpuser@jumphosta).
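On newer clients the same abbreviation trick can be written without a nested ProxyCommand at all: HostName accepts the %h token (the name exactly as typed), and ProxyJump (OpenSSH 7.3+) replaces the ssh -W invocation. A sketch under those assumptions:
Host *.i
HostName %hnternal.sub.domain.net
ProxyJump internal.sub.domain.net
With that, ssh foo.i expands to foo.internal.sub.domain.net and is routed through the jumphost.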

How to configure Ansible to use my local SSH configuration?

I'm testing out Ansible and am already stuck on a fairly simple thing. I configured my /etc/ansible/hosts to contain the IP of my server:
[web]
1.2.3.4
Now, connecting to it with ansible all -vvvv -m ping fails since my ~/.ssh/config for the specified server uses a custom port, a key file not in the default location, etc. How do I tell Ansible to reuse my SSH configuration for the connection?
It's a little esoteric, so it's understandable you missed it. This is from the Ansible Inventory page:
If you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon. Ports listed in your SSH config file won’t be used with the paramiko connection but will be used with the openssh connection.
To make things explicit, it is suggested that you set them if things are not running on the default port
So, in your case:
[web]
1.2.3.4:9000
(Using 9000 as your alt port, of course)
Ansible falls back to paramiko when the control machine's OpenSSH is too old to support ControlPersist, as on older CentOS/RHEL.
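If that fallback is what's biting you, you can also select the OpenSSH-based connection explicitly, since that one does read ~/.ssh/config; a sketch using Ansible's documented -c flag:
ansible all -c ssh -m ping
or persistently, in ansible.cfg:
[defaults]
transport = ssh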
What tedder42 said, plus there are other, slightly more advanced ways of defining your SSH settings on a per-host basis:
[web]
1.2.3.4 ansible_ssh_port=9000
This only makes sense if you're also using the other ansible_ssh special variables like ansible_ssh_user and ansible_ssh_host.
ansible_ssh_host is helpful if you want to be able to refer to a server by a name of your choosing instead of its IP.
[web]
apache ansible_ssh_host=1.2.3.4 ansible_ssh_port=9000
If you end up with multiple hosts using the same alternative SSH port, you can make use of Ansible's group variable function.
[web]
1.2.3.4
5.6.7.8
9.10.11.12
[web:vars]
ansible_ssh_port=9000
Now Ansible will use port 9000 on all three of the hosts in the web group.
Understanding how to organize your inventory data goes a long way to your success with Ansible.

A way to specify a different host in an SSH tunnel from the host in use

I am trying to set up an SSH tunnel to access Beanstalk (to bypass an annoying proxy server).
I can get this to work, but with one caveat: I have to map my Beanstalk host URL (username.svn.beanstalkapp.com) in my hosts file to 127.0.0.1 (and use the ip in place of the domain when setting up the tunnel).
The reason (I think) is that I am creating the tunnel using the local SSH instance (on Snow Leopard), and if I use localhost or 127.0.0.1 when talking to Beanstalk, it rejects the authorisation credentials. I believe this is because Beanstalk uses the hostname specified in a request to determine which account the username / password combination should be checked against. If localhost is used, I think this information is missing (in some manner which Beanstalk requires) from the requests.
At the moment I dig the IP for username.svn.beanstalkapp.com, map username.svn.beanstalkapp.com to 127.0.0.1 in my hosts file, then for the tunnel I use the command:
ssh -L 8080:ip:443 -p 22 -l tom -N 127.0.0.1
I can tell Subversion that the repo is located at:
https://username.svn.beanstalkapp.com:8080/repo-name
This uses my tunnel and the username and password are accepted.
So, my question is if there is an option when setting up the SSH tunnel which would mean I wouldn't have to use my hosts file workaround?
I would add an entry to your hosts file that maps the hostname you need to 127.0.0.1 and then use that hostname to connect through your tunnel.
Update
The hosts file is IMO your best option.
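Putting the pieces together as described above (names as in the question; the hosts entry satisfies Beanstalk's hostname-based account lookup while the tunnel carries the traffic to Beanstalk's real IP):
# /etc/hosts
127.0.0.1 username.svn.beanstalkapp.com
# tunnel: local port 8080 forwards to ip (Beanstalk's real address), port 443
ssh -L 8080:ip:443 -p 22 -l tom -N 127.0.0.1
# then point Subversion at https://username.svn.beanstalkapp.com:8080/repo-name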