Is there a way to merge several Host entries in the ssh config file into one?
Such as
$ vim ~/.ssh/config
Host sshtest1
    HostName 192.168.1.11
    User ec2-user
Host sshtest2
    HostName 192.168.1.12
    User ec2-user
into
Host sshtest?
    HostName 192.168.1.1?
    User ec2-user
I gave it a try, but got this error:
ssh: Could not resolve hostname 192.168.1.1?: nodename nor servname provided, or not known
Suppose I have 9 nodes; I don't want to repeat those stanzas.
You can merge the common lines like this:
Host sshtest1
    HostName 192.168.1.11
Host sshtest2
    HostName 192.168.1.12
Host *
    User ec2-user
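Note that ? and * are pattern characters only in the Host line, where they match the alias you type on the command line; HostName takes a literal value, which is why ssh tried to resolve 192.168.1.1? verbatim. If you don't want the User applied to every host, a narrower pattern works too. A minimal sketch for the 9-node case:
Host sshtest1
    HostName 192.168.1.11
Host sshtest2
    HostName 192.168.1.12
# ... one stanza per node, up to sshtest9 ...
Host sshtest?
    User ec2-user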
Related
How can you write the following setup in an ssh config?
### The Bastion Host
Host bastion-host-nickname
    HostName bastion-hostname

### The Remote Host
Host remote-host-nickname
    HostName remote-hostname
    ProxyJump bastion-host-nickname

### The Remote Host VM
Host remote-host-vm-nickname
    HostName remote-vm-hostname
    ????
I have a bastion server through which my remote-host can be reached via ssh. This connection works as expected. On my remote-host there are a few virtual machines (the remote host VMs) that can also be reached via ssh.
AllowTcpForwarding is disabled in the sshd_config of the remote-host, so neither an SSH tunnel nor a ProxyCommand can be used; both fail with the error message "... administratively prohibited". The sshd_config should stay that way.
My preferred approach is to connect to the remote-host and execute the following command there:
ssh -t -i keyfile user@remote-vm-hostname "whoami"
How can I describe this ssh command in my ssh_config, in particular so that the inner ssh command is executed on my remote-host?
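One way to encode exactly that two-step approach in ~/.ssh/config is the RemoteCommand option. This is only a sketch: it assumes OpenSSH 7.6 or newer on the client, and it reuses the names from the question (keyfile, user, remote-vm-hostname):
Host remote-host-vm-nickname
    # connect to the remote host itself, through the bastion ...
    HostName remote-hostname
    ProxyJump bastion-host-nickname
    # ... then start the inner ssh there, so no TCP forwarding is involved
    RequestTTY yes
    RemoteCommand ssh -t -i keyfile user@remote-vm-hostname
With this, ssh remote-host-vm-nickname opens a session on the remote-host that immediately hops on to the VM.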
I have a Vagrantfile that looks like the one below.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.ssh.host = "0.0.0.0"
  config.ssh.password = "foo"
end
I want to access my box via an SSH tool. When I try to connect with a command like ssh vagrant@0.0.0.0:22, I get an error that says ssh: Could not resolve hostname 0.0.0.0:22: nodename nor servname provided, or not known.
According to the Vagrant documentation, it is possible to set a hostname and a password via the host and password properties of the config's ssh object.
I am sure that the current Vagrant box is up and running. I can even access Apache on port 80 if I set the forwarded_port attribute:
config.vm.network "forwarded_port", guest: 80, host: 80, host_ip: "127.0.0.1"
That's not valid ssh command-line syntax. Use the -p parameter to specify the port. For example:
This works: ssh -p 2222 -i /path/to/identity/file vagrant@localhost
This does not: ssh -i /path/to/identity/file vagrant@localhost:2222
Note: port 22 is the default port for ssh; if you're connecting to port 22, you don't have to specify it at all.
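For a running box, Vagrant can also print a ready-made ssh config stanza (host alias, port, user, and identity file) that you can append to a config file and then use with plain ssh; the alias is usually default:
$ vagrant ssh-config >> ~/.ssh/config
$ ssh default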
I've looked at the FAQ, but there is no answer for my task.
How can I access a bastion (jump box) host using a password with Ansible? We are not considering SSH keys. What would the SSH config (or Ansible config) look like in this situation?
For instance, using SSH keys, the configuration looks like this:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user#gateway.example.com"'
How can I achieve the same result using a password?
You can use the ProxyJump ssh option, which does not require netcat/nc to be installed on the jump host.
In the ~/.ssh/config file of the user you are running the Ansible commands as, add something like this:
Host jumphost
    # use the actual IP address or FQDN here
    HostName 1.1.1.1

# everything except the jump host itself goes through it
Host * !jumphost
    ProxyJump jumphost
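ProxyJump itself works with password authentication (ssh simply prompts for the jump host's password), but an unattended Ansible run needs the password fed in non-interactively. One possibility, assuming sshpass is installed on the control machine and the password is kept in a (preferably vaulted) variable; jumphost_password is a made-up name:
ansible_ssh_common_args: '-o ProxyCommand="sshpass -p {{ jumphost_password }} ssh -W %h:%p -q user@gateway.example.com"'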
In my ~/.ssh/config file, I have configured the bastion host like this:
##### Private hosts access through bastion host #####
Host bastion-host
    HostName 52.13.2.54
    ForwardAgent yes

Host 10.10.*
    ProxyCommand ssh -q bastion-host nc -q0 %h %p
Then I can run Ansible directly against the hosts in the private subnet. Hope that helps.
I have written an ssh config file that specifies a typical jump-server setup:
Host host1
    HostName 11.11.11.11
    User useroo
    IdentityFile some/key/file

Host host2
    HostName 192.11.11.10
    User useroo
    IdentityFile some/other/key
    ProxyCommand ssh -W %h:%p host1
I can successfully connect with ssh host2 when I save this as ~/.ssh/config. However, if I save the config somewhere else as xy_conf, calling ssh -F xy_conf host2 results in an error saying
ssh: Could not resolve hostname host1: Name or service not known
ssh_exchange_identification: Connection closed by remote host
Is this the expected behavior? How else can I set this config temporarily? I don't want to set it as ~/.ssh/config.
OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8, OpenSSL 1.0.1f 6 Jan 2014
Using a different location for the ssh config affects only the first ssh call, but not the second one (spawned by the ProxyCommand). You need to pass the same -F argument to the second ssh too:
ProxyCommand ssh -F xy_conf -W %h:%p host1
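Applied to the config above, the host2 stanza would look like this (a sketch; note that a relative path such as xy_conf is resolved from the directory ssh is started in, so an absolute path is safer):
Host host2
    HostName 192.11.11.10
    User useroo
    IdentityFile some/other/key
    ProxyCommand ssh -F xy_conf -W %h:%p host1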
I have some problems with the ssh ProxyCommand. The authentication on the proxy works fine, but the login to the remote host fails. The problem seems to be that the proxy tries to log in with my local rsa key and not with the key stored on the proxy. Is there a way to fix this?
This is what I want:
Local -- local rsa --> Proxy -- proxy rsa --> host
The config file I use:
Host 192.168.178.32
    HostName 192.168.178.32
    User user
    Port 22
    IdentityFile ~/.ssh/id_rsa.pub

Host 192.168.178.30
    HostName 192.168.178.30
    User user
    Port 22
    IdentityFile /home/user/.ssh/id_rsa.pub
    ProxyCommand ssh -W %h:%p -F ssh_config -p 22 192.168.178.32
The problem seems to be, that the proxy tries to login with my local rsa_key and not with the key stored on the proxy.
Yes, it does, and that is by design: you don't want to copy private keys over to the proxies, so the ProxyCommand always authenticates from your local host.
There are two ways out:
Copy the key to your local host and configure it to be used.
Don't use ProxyCommand and simply chain two plain ssh calls:
ssh -t proxy ssh host
This way the second hop uses the keys stored on the proxy.
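A minimal sketch of the first option, assuming you copy the proxy's private key to ~/.ssh/proxy_id_rsa on the local machine (the file names here are made up; note also that IdentityFile should point at the private key, not the .pub file):
Host 192.168.178.32
    User user
    IdentityFile ~/.ssh/id_rsa
Host 192.168.178.30
    User user
    IdentityFile ~/.ssh/proxy_id_rsa
    ProxyCommand ssh -W %h:%p 192.168.178.32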