Can I set a timeout for a hostname in ~/.ssh/config? - ssh

If I create a host alias in ~/.ssh/config like the following one:
host myhost
hostname myhost.mysubnet.mynet.com
hostname myhost.mynet.com
user igordcard
I am able to connect to myhost.mysubnet.mynet.com perfectly using ssh myhost. However, when this hostname is not reachable, it will never connect. Is there a way to set a timeout for the first hostname so that ssh automatically tries the second one, and so on? If not, is there any other way to achieve almost the same effect?
Thank you.

This answer reflects the exact implementation of the workaround I decided to use, based on the page linked by damienfrancois and the answer by pcm on that page.
Under ~/.bashrc I've added the following at the end:
# switch to an alternative ssh config file when connected to a 192.168.* network
nets=$(hostname -I)
if [[ "$nets" == *192.168.* ]]
then
    alias ssh='ssh -F ~/.ssh/config.alt'
fi
This gets the list of IP addresses assigned to the connected network interfaces and matches a specific network prefix against that list to check whether we are connected to a particular network; if so, an alias for ssh is set that uses an alternative ssh config file.
This workaround is obviously subject to failure, for instance when the host moves from network to network but these networks share the same network prefix. However, for my specific case (and many others), this is enough.
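As a sketch of what the two config files might contain (which hostname belongs in which file, and the file names themselves, are assumptions based on the question above):
# ~/.ssh/config (default, used outside the 192.168.* network)
host myhost
hostname myhost.mynet.com
user igordcard

# ~/.ssh/config.alt (used when connected to the 192.168.* network)
host myhost
hostname myhost.mysubnet.mynet.com
user igordcard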

Related

SSH Config for using a jumpserver

I have a problem with the configuration of a jump server in ssh_config.
So far I was able to configure the connection for a server as follows:
Host myhost*
ProxyCommand ssh -q myjumpserver nc %h %p
So I could reach my two servers directly via ssh myhost1 or ssh myhost2.
Now unfortunately something has changed and I should call my servers with the following syntax:
ssh user#newjump#myhost1
(yes 2x # is correct and only this notation works)
Newjump is not a jumpserver in the real sense, so I can't use the syntax above.
Does anyone have an idea how I can configure this?
Since I have >30 servers and each group of them follows the same naming scheme, it would be nice if I could generalize it as above. But if need be, I would also implement a solution for each individual server.
What is important to me is that I only have to type ssh myhost1 to reach this host. A solution via alias is out of the question, because then I would have to adapt numerous scripts.
I solved it with:
Host name1
Hostname newjump
User user#host1
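For the other servers, the same pattern can simply be repeated, one block per target; a sketch, assuming newjump expects the literal login name user#<target host> (adjust to whatever exact form it requires):
Host myhost1
Hostname newjump
User user#myhost1

Host myhost2
Hostname newjump
User user#myhost2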

CanonicalHostname and ssh connections through a jumphost

Say I have an internal subdomain named internal.sub.domain.net with hosts like foo.internal.sub.domain.net and bar.internal.sub.domain.net. Furthermore, these hosts are only accessible from the outside world through the jumphost internal.sub.domain.net. Thus, the following ~/.ssh/config allows me to directly connect to either of the internal hosts from outside:
Host foo bar
HostName %h.internal.sub.domain.net
ProxyJump internal.sub.domain.net
This is fine for a subdomain with just a couple of hosts, but it quickly becomes rather unmaintainable if the number of hosts is large and/or changes occasionally. But wildcards may come to the rescue:
Host *.internal.sub.domain.net
ProxyJump internal.sub.domain.net
This avoids the maintenance issue, but forces one to always specify the fully qualified hostname to connect, which is rather tedious. From looking at the ssh_config man-page, the CanonicalizeHostname and CanonicalDomains options seem to fix that:
CanonicalizeHostname always
CanonicalDomains internal.sub.domain.net
Host *.internal.sub.domain.net
ProxyJump internal.sub.domain.net
But this only works if the name lookup for the host to be connected to succeeds. As these hosts are internal by definition, it is no surprise that name resolution fails.
A not really helpful but very illustrative hack is to fake successful name resolutions by just adding all the internal hosts as aliases for e.g. 127.0.0.1 to /etc/hosts, i.e. adding the following line to /etc/hosts:
127.0.0.1 foo.internal.sub.domain.net bar.internal.sub.domain.net
With that line in place, the last ~/.ssh/config works like a charm. But apart from the fact that this would be quite a hack, it only shifts the maintenance issue from ~/.ssh/config to /etc/hosts.
As the described scenario should not be so uncommon, is there a way to make it work? To phrase it in one sentence again:
I want to be able to ssh to all internal hosts that live in internal.sub.domain.net, i.e. that are only accessible through the internal.sub.domain.net jumphost, without having to list each of these hosts somewhere (as they may frequently be added to or removed from the internal domain) and without being forced to always type their fully qualified hostnames.
If you are invoking ssh from a shell, you could define a short variable for the internal domain and append that to the relevant hostnames:
e.g. in your ~/.bashrc or similar:
i=".internal.sub.domain.net"
Then, on the command line:
ssh foo$i
ssh bar$i
At least until a better solution comes along. It's not perfect but it's only 2 extra characters on the command line.
Update: A better solution has come along.
A better solution, by Peter Stuge on the openssh mailing list, is to have something like this in your config file:
Host *.i
ProxyCommand ssh -W %hnternal.sub.domain.net:%p jumphost
This enables the use of abbreviated hostnames like foo.i (rather than foo$i in my first answer). Being able to use . rather than $ is better because it doesn't require the use of the shift key.
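To see why this works: with ssh foo.i (and the default port 22), %h expands to foo.i, so the command that actually runs is:
ssh -W foo.internal.sub.domain.net:22 jumphost
i.e. the trailing i of the abbreviated name becomes the first letter of internal.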
However, this is limited to a single remote jumphost whose internal domain starts with i. I think that internal domains that start with internal. are common. If you need to connect to multiple such internal domains, the above needs a slight (but very ugly) tweak. I apologize in advance if anyone's eyes start to bleed.
For example, if you need to connect to these internal domains:
internal.a.com
internal.b.net
Then the above needs to be changed to something like this:
Host *.a
ProxyCommand sh -c "H=%h; exec sh -c \"exec ssh -W \${H%%.a}.internal.a.com:%p jumphosta\""
Host *.b
ProxyCommand sh -c "H=%h; exec sh -c \"exec ssh -W \${H%%.b}.internal.b.net:%p jumphostb\""
Believe me, I wish it could be simpler but that's the shortest thing that I could get to work (without using a separate shell script or environment variables).
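For example, ssh foo.a sets H=foo.a and the suffix-stripping expansion removes the trailing .a, so the inner command ends up being equivalent to (again assuming the default port):
ssh -W foo.internal.a.com:22 jumphosta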
If a username is supplied on the command line, it only applies to the jumped-to host. Your local username will be used by default for the jumphost.
If you need to use a username for the jumphost that isn't the same as your local username, add it in the ProxyCommand (e.g. jumpuser@jumphosta).

Ansible ask to verify the ssh fingerprint and fails to ssh into newly created ec2 instance

I am creating ec2 instances and configuring them using ansible scripts. I have used
[ssh_connection]
pipelining=true
in my ansible.cfg file but it still asks to verify the ssh fingerprint; when I type yes and press enter, it fails to log in to the instance.
Just to let you know, I am using an ansible dynamic inventory and hence am not storing IPs or DNS names in the hosts file.
Any help will be much appreciated.
TIA
Pipelining doesn't have any effect on authentication - it bundles up individual module calls into one bigger file to transfer over once a connection has been established.
In order not to stop execution and prompt you to accept the SSH key, you need to disable strict host key checking, not enable pipelining.
You can set that by exporting ANSIBLE_HOST_KEY_CHECKING=False or set it in ansible.cfg with:
[defaults]
host_key_checking=False
The latter is probably better for your use case, because it's persistent.
Note that even though this is a setting that deals with ssh connections, it is in the [defaults] section, not the [ssh_connection] one.
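Putting both settings together, a minimal ansible.cfg could look like this (pipelining is unrelated to the prompt and only affects performance):
[defaults]
host_key_checking=False

[ssh_connection]
pipelining=true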
The fact that when you type yes you fail to log in makes it seem like this might not be your only problem, but you haven't given enough information to solve the rest.
If you're still having connection issues after disabling host key checking, edit the question to add the output of your SSHing into the instance manually, alongside the output of an ansible play with -vvv for verbose output.
First steps to look through when troubleshooting (a sketch of the relevant variable settings follows this list):
What are the differences between when I connect and when Ansible does?
Is the ansible_ssh_user set to the right user for the ec2 instance?
Is the ansible_ssh_private_key_file the same as the private part of the keypair you assigned the instance on creation?
Is ansible_ssh_host set correctly by whatever is generating your dynamic inventory?
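For reference, this is roughly how those variables might be set; the user name, key path, and file location here are placeholders, not values from the question:
# group_vars/all.yml
ansible_ssh_user: ec2-user
ansible_ssh_private_key_file: ~/.ssh/my-ec2-keypair.pem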
I think you can find the answer here: ansible ssh prompt known_hosts issue
Basically, when you run ansible-playbook, you will need to set the environment variable:
ANSIBLE_HOST_KEY_CHECKING=False
Make sure you have your private key added (ssh-add your_private_key).
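For example (the inventory script and playbook names are just placeholders):
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ec2.py playbook.yml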

gitolite with non-default port

To clone a repository managed by gitolite, one usually uses the following syntax:
git clone gitolite@server:repository
This tells the SSH client to connect to port 22 of server using gitolite as the user name. When I try it with the port number:
git clone gitolite@server:22:repository
Git complains that the repository 22:repository is not available. What syntax should be used if the SSH server uses a different port?
The “SCP style” Git URL syntax (user@server:path) does not support including a port. To include a port, you must use an ssh:// “Git URL”. For example:
ssh://gitolite@server:2222/repository
Note: As compared to gitolite@server:repository, this presents a slightly different repository path to the remote end (the absolute /repository instead of the relative path repository); Gitolite accepts both types of paths, but other systems may vary.
An alternative is to use a Host entry in your ~/.ssh/config (see your ssh_config(5) manpage). With such an entry, you can create an “SSH host nickname” that incorporates the server name/address, the remote user name, and the non-default port number (as well as any other SSH options you might like):
Host gitolite
User gitolite
HostName server
Port 2222
Then you can use very simple Git URLs like gitolite:repository.
If you have to document (and/or configure) this for multiple people, I would go with ssh:// URLs, since there is no extra configuration involved.
If this is just for you (especially if you might end up accessing multiple repositories from the same server), it might be nice to have the SSH host nickname to save some typing.
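For comparison, the two approaches look like this on the command line (assuming port 2222 as above):
git clone ssh://gitolite@server:2222/repository
git clone gitolite:repository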
It is explained in great detail here: https://github.com/sitaramc/gitolite/blob/pu/doc/ssh-troubleshooting.mkd#_appendix_4_host_aliases
Using a "host" para in ~/.ssh/config lets you nicely encapsulate all this within ssh and give it a short, easy-to-remember, name. Example:
host gitolite
user git
hostname a.long.server.name.or.annoying.IP.address
port 22
identityfile ~/.ssh/id_rsa
Now you can simply use the one word gitolite (which is the host alias we defined here) and ssh will infer all those details defined under it -- just say ssh gitolite and git clone gitolite:reponame and things will work.

Can I forward env variables over ssh?

I work with several different servers, and it would be useful to be able to set some environment variables such that they are active on all of them when I SSH in. The problem is, the contents of some of the variables contain sensitive information (hashed passwords), and so I don't want to leave it lying around in a .bashrc file -- I'd like to keep it only in memory.
I know that you can use SSH to forward the DISPLAY variable (via ForwardX11) or an SSH Agent process (via ForwardAgent), so I'm wondering if there's a way to automatically forward the contents of arbitrary environment variables across SSH connections. Ideally, something I could set in a .ssh/config file so that it would run automatically when I need it to. Any ideas?
You can, but it requires changing the server configuration.
Read the entries for AcceptEnv in sshd_config(5) and SendEnv in ssh_config(5).
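A minimal sketch, assuming a variable named MYSECRET (the server-side change requires root and an sshd reload):
# client side, in ~/.ssh/config
Host *
SendEnv MYSECRET

# server side, in /etc/ssh/sshd_config
AcceptEnv MYSECRET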
update:
You can also pass them on the command line:
ssh foo@host "FOO=foo BAR=bar doz"
Regarding security, note that anybody with access to the remote machine will be able to see the environment variables passed to any running process.
If you want to keep that information secret it is better to pass it through stdin:
cat secret_info | ssh foo@host remote_program
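If the remote program expects the value in an environment variable rather than on its own stdin, one variation is to read it into a variable on the remote side first (remote_program and SECRET are placeholders):
printf '%s\n' "$SECRET" | ssh foo@host 'IFS= read -r SECRET; export SECRET; exec remote_program'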
You can't do it automatically (except for $DISPLAY which you can forward with -X along with your Xauth info so remote programs can actually connect to your display) but you can use a script with a "here document":
ssh ... <<EOF
export FOO="$FOO" BAR="$BAR" PATH="\$HOME/bin:\$PATH"
runRemoteCommand
EOF
The unescaped variables will be expanded locally and the result transmitted to the remote side. So the PATH will be set with the remote value of $HOME.
THIS IS A SECURITY RISK: don't transmit sensitive information like passwords this way, because anyone on the same computer can see the environment variables of every process.
Something like:
ssh user@host bash -c "set -e; $(env); . thescript.sh"
...might work (untested).
It's a bit of a hack, but if you cannot change the server config for some reason, it might do the job.