Running an MPI job on a non-standard ssh port

I'm setting up to run some parallel jobs using MPICH 3.2, and I tried to test the configuration (3 nodes, named Ruby, Sapphire and Onyx (master)) using the example program cpi provided with the installation. When I tried to run the job, I got the following error:
ssh: connect to host Ruby_Slave port 22: No route to host
Host key verification failed.
Ruby is running ssh on a non-standard port, which I think might be the problem. Is there any way to specify the port used for ssh in MPI?
Edit 1: my current ssh config contains:
Host Sapphire
HostName 10.42.43.11
Port 22
PasswordAuthentication no
EnableSSHKeysign yes
RSAAuthentication yes
PubkeyAuthentication yes

To my knowledge, you can't specify the ssh port in MPI itself.
You can, however, tell SSH which port to use, on a machine-by-machine basis, in .ssh/config. The user configuration file is (usually) located at ~/.ssh/config and the system-wide configuration file at /etc/ssh/ssh_config.
Here's an example configuration:
Host 192.168.0.101
Port 5101
Host 192.168.0.102
Port 5102
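Applied to the question's setup, a hypothetical entry for the failing node might look like the following; the address and port here are placeholders, so substitute Ruby's actual IP and whatever port its sshd really listens on:
Host Ruby_Slave
HostName 10.42.43.12
Port 2222
MPICH's launcher just shells out to ssh, so it should pick this entry up automatically.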
Also take a look at man ssh_config.

Related

Ubuntu Jump Host in Open Telekom Cloud not working as expected

Currently, I have built a small datacenter environment in OTC with Terraform, based on Ubuntu 20.04 images.
The idea is to have a jump host, during the setup phase and for operational purposes, that allows spontaneous access to service frontends via ssh proxy jumps without permanently routing them to the public net.
Basic setup works fine so far - I can access the jump host with ssh, and from there I can ssh to the internal machines once I put the private key onto the jump host. So, cloud-wise the security seems to be fine. The key pair is generated with ed25519, and I use the same key for the jump host and the internal servers (for now).
What I cannot achieve is the proxy jump as a chained command from my outside machine.
On the jump host, I set AllowTcpForwarding to "yes" in /etc/ssh/sshd_config and restarted ssh and sshd services.
My current local ssh config looks like this:
Host otc
User ubuntu
Hostname <FloatingIP-Address>
Port 22
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
IdentityFile= ~/.ssh/ssh_access
ControlPath ~/.ssh/cm-%r@%h:%p
ControlMaster auto
ControlPersist 10m
Host 10.*
User ubuntu
Port 22
IdentityFile=~/.ssh/ssh_access
ProxyJump otc
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
I can use this config to reach the jump host with ssh otc.
What I would expect is that I could use e.g. ssh 10.0.0.56 to reach an internal host without further ado. Likewise, I should be able to use commands like ssh -L 8080:10.0.0.56:8080 10.0.0.56 -N to map an internal server's port to a localhost port on my external machine. This is how I have managed it successfully in other public-cloud hosting scenarios.
All I get is:
Stdio forwarding request failed: Session open refused by peer
kex_exchange_identification: Connection closed by remote host
Journal on the Jump host says:
Jul 30 07:19:04 dev-nc-o-bastion sshd[2176]: refused local port forward: originator 127.0.0.1 port 65535, target 10.0.0.56 port 22
What I checked as well:
ufw is off on the Jump Host.
replaced ProxyJump configuration with ProxyCommand
So I am at the end of my knowledge. Has anyone a hint what else could be the reason? Any help welcome!
OK, the cause is found (but not yet fully explained).
My local ssh settings allow multiplexed connections (ControlMaster auto), which causes the creation of a unix socket file for the ControlPath in ~/.ssh.
I had to log in to the jump host to set AllowTcpForwarding in the first place.
After restarting sshd, I returned to the local machine, and the failure occurred when trying to forward to the remote internal machine.
After deleting the socket file in ~/.ssh, the connection can now be established as needed. Obviously, the persistent tunnel was not affected by the restarted daemon on the jump host and simply refused to follow the new directive.
This cost me two days. On the bright side, I learned a lot about ssh :o
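For anyone hitting the same thing: instead of deleting the socket file by hand, OpenSSH can query and shut down a stale control master directly (a sketch, using the otc host alias from the config above):
ssh -O check otc   # ask the control master whether it is still alive
ssh -O exit otc    # tell the master to exit, which removes the socket
After that, the next ssh otc starts a fresh master that respects the jump host's new sshd settings.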

GitLab CI runner with SSH ProxyJump

I have the following settings in my /etc/ssh/ssh_config file:
Host serverA
User idA
Host serverB
User idB
ProxyJump serverA
I’ve also copied the public keys, so if I locally run ssh serverB I’m correctly connected to serverB as idB through serverA.
Now, here’s my runner configuration in /etc/gitlab-runner/config.toml:
[[runners]]
name = "ssh-runner-1"
url = "http://my-cicd-server"
token = "xxxxxxxxxxxxxxxx"
executor = "ssh"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.ssh]
user = "idB"
host = "serverB"
identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
When I run a CI/CD job on this runner, I get a "connection refused" error:
ERROR: Preparation failed: ssh command Connect() error: ssh Dial() error: dial tcp xxx.xxx.xxx.xxx:22: connect: connection refused
I conclude that the ProxyJump configuration is not applied, and since the machine with the runner can’t directly connect to serverB, I get denied access.
How can I configure the runner to apply the proxy jump configuration?
The GitLab runner uses a Go-based SSH client. It does not respect your system SSH configuration and does not offer the same configurability as the standard ssh client (usually OpenSSH) that ships with most operating system distributions.
The Go client does not support the ProxyJump configuration.
Your best bet is probably to configure a tunneled connection whose entry point does not require unsupported SSH configuration options.
Local port forwarding
One way might be to open a local port-forwarding tunnel, then in your GitLab configuration, specify the host as localhost and port as the forwarded port.
For example:
Open the tunnel: local port 2222 forwards to port 22 on ServerB via an ssh connection through ServerA:
ssh -L 2222:ServerB:22 -N ServerA
Configure runner to use the tunnel:
...
[runners.ssh]
host = "localhost"
port = 2222
...
With this approach, you may have to write some automation on your server to restore the tunnel in the event it is broken. How you might do this depends on your operating system and preferred service manager, or you can use a tool like autossh.
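For instance, a hypothetical autossh invocation for this setup (the serverA/serverB names match the question; the keepalive values are arbitrary):
# -M 0 disables autossh's monitor port and relies on ssh's own keepalives
autossh -M 0 -f -N -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 2222:serverB:22 serverA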
This is basically how the ProxyJump configuration works under the hood.
IP/Port forwarding system
A similar approach would be to have your jump system automatically forward connections to the desired destination. This might be something like a software firewall rule (e.g. iptables NAT rules) so that the forwarding occurs transparently. Then simply tell the runner to target ServerA and the traffic will be transparently moved to ServerB.
This approach is more reliable, since you won't have to do anything to keep the tunnel alive if it ever drops. However, the configuration is much more complex and requires a static IP for ServerB.
For example, on ServerA, assuming the IP of ServerB is 10.10.10.10, the following iptables configuration could be used:
iptables -t nat -A PREROUTING -p tcp --dport 2222 -j DNAT --to-destination 10.10.10.10:22
iptables -t nat -A POSTROUTING -j MASQUERADE
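Note that DNAT to another host also requires kernel IP forwarding to be enabled on ServerA, which might look like this on Linux:
sysctl -w net.ipv4.ip_forward=1   # enable routing of forwarded packets (not persistent across reboots)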
Then the GitLab runner configuration:
...
[runners.ssh]
host = "ServerA"
port = 2222
...
Lastly, it may also be useful to know that disable_strict_host_key_checking is an undocumented configuration option for the runner, in case you need it.

Use ssh over port forwarded connection

My organisation makes us connect to our AWS environments through a "bastion" host, so my OpenSSH ~/.ssh/config file looks a bit like this:
Host bastion.*.c1.some.com
User bastionuser
ProxyCommand none
StrictHostKeyChecking no
ForwardAgent yes
Host *.c1.some.com 12.345.* 456.12.1.*
User awsuser
StrictHostKeyChecking no
ForwardAgent yes
ProxyCommand ~/.ssh/proxy_command.sh %h %p
I want to use the ssh client built into the CLion IDE to connect to my AWS environment, but it does not support this kind of configuration.
Can I set up a port forward using OpenSSH and then establish an ssh connection over that tunnel from within CLion?
I was able to set up a port forward using PuTTY, and afterwards I was able to establish a second ssh connection over the port forward using IntelliJ. For some reason I couldn't establish the second ssh connection over an OpenSSH port forward, perhaps because the Git Bash environment is sandboxed or something?
Presumably this will also work with any other SSH client that doesn't support tunneling out of the box.
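For reference, the equivalent tunnel with command-line OpenSSH would look something like this (the host names are made up to match the config above; the IDE is then pointed at localhost:2222):
ssh -N -L 2222:somehost.c1.some.com:22 bastion.eu.c1.some.com
# then configure the IDE's ssh client to connect to localhost, port 2222, as user awsuser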

SSH Connection Being Refused When I'm Remote, but not Local (Port Forwarding Already Enabled)

I set up SSH on my Ubuntu server (running XMonad) and generated a key for my laptop that I use to connect to my home server. I also went into my wireless router and forwarded port 22 for SSH use. I can SSH fine when I'm at home using the standard:
ssh user@ipaddress
However when I'm outside of my local network I get this error:
ssh: connect to host xxx.xx.xx.xxx port 22: Connection refused
Everything I read says I need to either a) check that port 22 is forwarded (which it is) or b) check that sshd is actually running on my Ubuntu server (which it is).
Any ideas what is preventing my SSH from working when I'm remote?
Add the following lines to your ssh user config file if they aren't there; you can create the config file as shown below:
vi ~/.ssh/config
Host *
ServerAliveInterval 300
Change the permissions as below:
chmod 600 ~/.ssh/config
Restart the daemon. Hope this helps.
https://serverfault.com/a/371563/617303
For me, this was the cause.
In your /etc/ssh/sshd_config (or /etc/ssh/ssh_config on the client), check to make sure GSSAPI authentication is disabled (set to no):
GSSAPIAuthentication no
Then restart the service or machine.
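On a systemd-based Ubuntu server, that restart might look like this (a sketch; ssh is the usual service name there):
sudo sshd -t                # validate the config for syntax errors first
sudo systemctl restart ssh  # restart the SSH daemon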

Vagrant forwarding ssh from remote server

I set up Vagrant to run a VM on a host OS. What I would like to do is be able to ssh from other machines directly into the Vagrant VM (i.e., I shouldn't have to ssh into the host and then vagrant ssh into the Vagrant VM).
Currently, I can ssh without using vagrant ssh from the host OS using ssh vagrant@127.0.0.1 -p 2222. However, if I run the same command (replacing 127.0.0.1 with the host's IP address), I get "ssh connect to host XXXXX port 2222: Connection refused".
I tried adding my own port forwarding rule to vagrant:
config.vm.network :forwarded_port, guest: 22, host: 2222
But that doesn't allow an ssh connection from either the host machine or any other machine in the network. Additionally, I spent a while with config.ssh in the Vagrant docs, but I think most of those parameters specify which port the Vagrant VM runs ssh on.
I really don't think this should be that difficult. Does anyone know what I might be doing wrong, or what I should do differently to ssh into a vagrant vm from a remote server?
If you don't want to switch the network to public, you can override the default ssh port forwarding like this:
config.vm.network :forwarded_port, guest: 22, host: 2222, host_ip: "0.0.0.0", id: "ssh", auto_correct: true
This will forward guest 22 port to 2222 on your host machine and will be available from any ip, so you can access it outside your local machine.
Since v1.2.3, Vagrant port forwarding binds to 127.0.0.1 by default, so only local connections are allowed.
You got "Connection refused" because the port forwarding was NOT bound to your network interfaces (e.g. eth0, wlan0). Port 2222 on your host is not even open to hosts in the same network (the loopback interface is not accessible to other hosts).
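A quick way to check which address the forwarded port is bound to on the host (a sketch, assuming a Linux host with iproute2):
ss -tln | grep 2222
# 127.0.0.1:2222 means only local connections; 0.0.0.0:2222 means reachable from the network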
If you want to SSH directly to the Vagrant VM from a remote host (in the same LAN), the best and easiest way is to use Public Network (VirtualBox's Bridged networking mode).
Add the following to your Vagrantfile and do a vagrant reload.
It should bridge through one of the public network interfaces. You should be able to get the IP address after the VM is up: vagrant ssh into it and run ifconfig -a or ip addr to find the address to ssh to from remote hosts.
Sample Vagrantfile
config.vm.network :public_network # 2nd interface bridged mode
or, more advanced, you can set the default network interface for the public network:
config.vm.network "public_network", :bridge => 'en1: Wi-Fi (AirPort)'
See the Vagrant Public Network docs for more.
You can also add another rule to Vagrantfile like the following:
config.vm.network :forwarded_port, guest: 1234, host: 22
Connect to Vagrant with the default port (2222) and edit /etc/ssh/sshd_config; then, below Port 22, add the port previously configured as guest, resulting in:
...
Port 22 #Uncomment this line if it's commented
Port 1234
...
Finally, restart the ssh daemon or do vagrant reload (if you edited the Vagrantfile while the VM was running, you have to reload it), and now you can connect to Vagrant using the 'host' port (22 in my case) from outside the host computer.
You can't remove the default port, because Vagrant would hang when starting up.
Use vagrant share --ssh
Vagrant now has a service for registering a Vagrant VM for remote SSH access automatically.
See here: https://www.vagrantup.com/docs/share/ssh.html
You call vagrant share --ssh.
This generates an SSH key (encrypted and password-protected), uploads it to a Hashicorp server, and returns a silly global box name (e.g. "rambunctious-deer-3496").
Then everybody who has a Hashicorp Atlas account, knows the box name, knows the password for the key, and has Vagrant installed(!) can perform remote SSH to the box via vagrant connect --ssh BOXNAME.
Vagrant takes care of all the admin stuff behind the scenes (here are some details).
Works as advertised.
I guess this will even work if the Vagrant host (not merely the VM) is behind a NAT.
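Put together, the workflow is roughly (BOXNAME stands for the generated name above):
# on the machine hosting the VM
vagrant share --ssh
# on the remote machine (needs Vagrant and an Atlas account)
vagrant connect --ssh BOXNAME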
Limitations:
vagrant share sessions expire (currently after 8 hours)
expect some latency, because all traffic is (presumably) routed through the Atlas server
I have seen my remote connections close (for no obvious reason) after I had not used them for maybe 15 minutes.