iTerm2: ssh into multiple hosts at once

I have multiple hosts with names like this:
hostname-asd87
hostname-sdf09
hostname-yxv28
hostname-czv14
hostname-efv32
hostname-xav99
I want to ssh into all of them at once using a wildcard or something similar, like this:
ssh hostname-*
and then get a separate pane for each session. How would this be possible?
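ssh itself has no wildcard expansion, so the host list has to be enumerated on the local side. One possible approach, sketched below with the hostnames from the question, is to open one tmux pane per host; iTerm2 can display a tmux session natively with tmux -CC. This is only a sketch, not an iTerm2-specific feature:
#!/usr/bin/env bash
# Sketch: one tmux pane per host, running ssh in each.
hosts=(hostname-asd87 hostname-sdf09 hostname-yxv28 hostname-czv14 hostname-efv32 hostname-xav99)
tmux new-session -d -s multi "ssh ${hosts[0]}"
for h in "${hosts[@]:1}"; do
  tmux split-window -t multi "ssh $h"
  tmux select-layout -t multi tiled      # rebalance so panes don't get too small
done
tmux set-window-option -t multi synchronize-panes on   # optional: type into all panes at once
tmux attach -t multi                                   # or: tmux -CC attach -t multi (iTerm2 native panes)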

Related

Execute two ssh-forwardings with different remotes in one command

I have two remotes, A and B defined in my ssh-config. I have a program running on each and I'm running two port-forwarding commands to my localhost.
ssh A -NL 8888:node:8888 and ssh B -NL 9999:node:9999
How can I execute these in one bash command? I've tried using & or && and it doesn't work. The forwarding to A has to be established first, since B connects via A with ProxyJump, A uses 2FA, and I'm using ControlMaster for A. If possible I'd like to be able to cancel both with Ctrl+C.
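One possible shape for this, assuming the A and B aliases and the ControlMaster setup from the question, is a small wrapper script: establish A first with -f (ssh backgrounds itself only after the 2FA prompt succeeds, leaving the control socket up), then start B, and tear both down on Ctrl+C. This is a sketch, not a tested one-liner:
#!/usr/bin/env bash
# Establish the forwarding to A first; -f backgrounds ssh after authentication,
# so the ControlMaster socket is available for B's ProxyJump.
ssh -fNL 8888:node:8888 A
# Start B in the background and remember its PID.
ssh -NL 9999:node:9999 B &
b_pid=$!
# On Ctrl+C, stop B's forwarding and ask the A control master to exit.
trap 'kill "$b_pid" 2>/dev/null; ssh -O exit A' INT TERM
wait "$b_pid"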

CanonicalHostname and ssh connections through a jumphost

Say I have an internal subdomain named internal.sub.domain.net with hosts like foo.internal.sub.domain.net and bar.internal.sub.domain.net. Furthermore, these hosts are only accessible from the outside world through the jumphost internal.sub.domain.net. Thus, the following ~/.ssh/config allows me to directly connect to either of the internal hosts from outside:
Host foo bar
HostName %h.internal.sub.domain.net
ProxyJump internal.sub.domain.net
This is fine for a subdomain with only a couple of hosts, but it quickly becomes unmaintainable if the number of hosts is large and/or changes occasionally. Wildcards may come to the rescue:
Host *.internal.sub.domain.net
ProxyJump internal.sub.domain.net
This avoids the maintenance issue, but it forces you to always specify the fully qualified hostname to connect, which is rather tedious. From looking at the ssh_config man page, the CanonicalizeHostname and CanonicalDomains options seem to fix that:
CanonicalizeHostname always
CanonicalDomains internal.sub.domain.net
Host *.internal.sub.domain.net
ProxyJump internal.sub.domain.net
However, this only works if the name lookup for the host to be connected to succeeds. As these hosts are internal by definition, it is no surprise that name resolution fails.
A not really helpful but quite illustrative hack is to fake successful name resolution by adding all the internal hosts as aliases for e.g. 127.0.0.1 to /etc/hosts, i.e. adding the following line:
127.0.0.1 foo.internal.sub.domain.net bar.internal.sub.domain.net
With that line in place, the last ~/.ssh/config works like a charm. But apart from the fact that this would be quite a hack, it only shifts the maintenance issue from ~/.ssh/config to /etc/hosts.
As the described scenario should not be that uncommon, is there a way to make it work? To phrase it in one sentence:
I want to be able to ssh to any internal host living in internal.sub.domain.net (i.e. hosts that are only reachable through the internal.sub.domain.net jumphost) without having to list each of these hosts somewhere, as they may frequently be added to or removed from the internal domain, and without being forced to always type their fully qualified hostnames.
If you are invoking ssh from a shell, you could define a short variable for the internal domain and append that to the relevant hostnames:
e.g. in your ~/.bashrc or similar:
i=".internal.sub.domain.net"
Then, on the command line:
ssh foo$i
ssh bar$i
At least until a better solution comes along. It's not perfect but it's only 2 extra characters on the command line.
Update: A better solution has come along.
A better solution, by Peter Stuge on the openssh mailing list, is to have something like this in your config file:
Host *.i
ProxyCommand ssh -W %hnternal.sub.domain.net:%p jumphost
This enables the use of abbreviated hostnames like foo.i (rather than foo$i in my first answer). Being able to use . rather than $ is better because it doesn't require the use of the shift key.
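To make the substitution explicit: ssh foo.i matches Host *.i, %h expands to foo.i, and (assuming the default port 22) the ProxyCommand that actually runs is:
ssh -W foo.internal.sub.domain.net:22 jumphost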
However, this is limited to a single remote jumphost whose internal domain starts with i (which is probably common, since many internal domains start with internal.). If you need to connect to multiple such internal domains, the above needs a slight (but very ugly) tweak. I apologize in advance if anyone's eyes start to bleed.
For example, if you need to connect to these internal domains:
internal.a.com
internal.b.net
Then the above needs to be changed to something like this:
Host *.a
ProxyCommand sh -c "H=%h; exec sh -c \"exec ssh -W \${H%%.a}.internal.a.com:%p jumphosta\""
Host *.b
ProxyCommand sh -c "H=%h; exec sh -c \"exec ssh -W \${H%%.b}.internal.b.net:%p jumphostb\""
Believe me, I wish it could be simpler but that's the shortest thing that I could get to work (without using a separate shell script or environment variables).
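For illustration, with the first entry above, ssh foo.a sets H=foo.a, ${H%%.a} strips the trailing .a, and (assuming port 22) the command that finally runs is:
ssh -W foo.internal.a.com:22 jumphosta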
If a username is supplied on the command line, it only applies to the jumped-to host. Your local username will be used by default for the jumphost.
If you need to use a username for the jumphost that isn't the same as your local username, add it in the ProxyCommand (e.g. jumpuser@jumphosta).

How to tell Ansible to use a key

I'm not very familiar with Ansible.
The problem I have at the moment is the following:
I have a master/nodes environment with multiple nodes.
Ansible needs to access my nodes but can't:
SSH Error: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I'm able to SSH from my master to each node but only by using a key:
ssh -i key-to-node.pem centos@ec2...
Is it possible to setup something to allow ansible to connect to the created hosts?
You can define your pem file in your ansible.cfg:
private_key_file=key-to-node.pem
If you don't have one, create one at the same location where your playbook is, or in /etc/ansible/ansible.cfg.
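A minimal sketch of such an ansible.cfg (the key path and remote_user value are assumptions, not taken from the question); note that private_key_file belongs in the [defaults] section:
[defaults]
private_key_file = /path/to/key-to-node.pem
remote_user = centos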
If you have different keys for your hosts, you can also define the key in your inventory:
ansible_ssh_private_key_file=key-to-node.pem
Also, if you had configured ssh to work without explicitly passing the private key file (in your ~/.ssh/config), Ansible would work automatically.
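For example, a hypothetical ~/.ssh/config entry on the master (the host pattern and key path are assumptions) that would let both plain ssh and Ansible reach the nodes without extra flags:
Host ec2-*.compute.amazonaws.com
User centos
IdentityFile ~/.ssh/key-to-node.pem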
Adding an example from the OpenShift page, as mentioned in the comments.
I personally have never configured it this way (I have set up everything via ~/.ssh/config), but according to the docs it should work like this:
[masters]
master.example.com ansible_ssh_private_key_file=1.pem
# host group for nodes, includes region info
[nodes]
node1.example.com openshift_node_labels="{'region': 'primary', 'zone': 'east'}" ansible_ssh_private_key_file=2.pem
Alternatively, since you have multiple nodes and maybe the same key for all of them, you can define a separate nodes:vars section:
[nodes:vars]
ansible_ssh_private_key_file=2.pem
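Putting both together, a hypothetical inventory with a per-host key for the master and a shared key for all nodes could look like this:
[masters]
master.example.com ansible_ssh_private_key_file=1.pem

[nodes]
node1.example.com
node2.example.com

[nodes:vars]
ansible_ssh_private_key_file=2.pem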

Set ssh to connect to specific users on specific servers

I would like to not have to type the username I connect as when I ssh to a server.
If there were a way to make ssh connect to servers x and y with username z, that would be good. It still has to connect to all other servers with my own username.
That seems like the only possibility right now, because I don't want to do any of the following:
change my username
make an alias for ssh (ssh = ssh -l z)
create a script that is specific for a single server.
I'm starting to think this isn't really possible in the way I want.
Create a ~/.ssh/config file or edit your existing one,
and add an entry like this:
Host foo
HostName foo.bar.com
Port 22
User username
Then you can just type ssh foo.
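For the exact case in the question (only servers x and y should use username z, while every other server keeps your own username), a sketch would be the following, assuming x and y resolve as typed; otherwise add a HostName line per host:
Host x y
User z
Anything not matched by a Host block simply falls through to ssh's defaults, i.e. your local username.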

How to configure Ansible to use my local SSH configuration?

I'm testing out Ansible and am already stuck on a fairly simple thing. I configured my /etc/ansible/hosts to contain the IP of my server:
[web]
1.2.3.4
Now, connecting to it with ansible all -vvvv -m ping fails since my ~/.ssh/config for the specified server uses a custom port, a key file not in the default location, etc. How do I tell Ansible to reuse my SSH configuration for the connection?
It's a little esoteric, so it's understandable you missed it. This is from the Ansible Inventory page:
If you have hosts that run on non-standard SSH ports you can put the port number after the hostname with a colon. Ports listed in your SSH config file won’t be used with the paramiko connection but will be used with the openssh connection.
To make things explicit, it is suggested that you set them if things are not running on the default port.
So, in your case:
[web]
1.2.3.4:9000
(Using 9000 as your alt port, of course)
Ansible falls back to paramiko on systems with a dated version of OpenSSH, such as older CentOS/RHEL releases.
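If Ansible has fallen back to paramiko (which, as quoted above, ignores the ports in your SSH config), you can force the OpenSSH-based connection explicitly; a minimal example:
ansible all -c ssh -vvvv -m ping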
What tedder42 said, plus there are other, slightly more advanced ways of defining your SSH configuration on a per-host basis.
[web]
1.2.3.4 ansible_ssh_port=9000
This only makes sense if you're also using the other ansible_ssh special variables like ansible_ssh_user and ansible_ssh_host.
ansible_ssh_host is helpful if you want to be able to refer to a server by a name of your choosing instead of its IP.
[web]
apache ansible_ssh_host=1.2.3.4 ansible_ssh_port=9000
If you end up with multiple hosts that use the same alternative SSH port, you can make use of Ansible's group variable feature.
[web]
1.2.3.4
5.6.7.8
9.10.11.12
[web:vars]
ansible_ssh_port=9000
Now Ansible will use port 9000 for all three of the hosts in the web group.
Understanding how to organize your inventory data goes a long way toward your success with Ansible.
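If the hosts in a group also share a non-default user or key, group variables can carry those as well; a hypothetical extension of the example above (the values are made up):
[web:vars]
ansible_ssh_port=9000
ansible_ssh_user=deploy
ansible_ssh_private_key_file=~/.ssh/custom_key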