I will try to describe the issue in points:
I use salt-sproxy + pillar on my local machine
I can use it to manage network gear that I can reach directly from my local Ubuntu machine
I have some devices which are accessible only via a jumpbox
so to access them:
My machine ssh ---> jumpbox ssh ---> router
By adding the jumpbox info to ~/.ssh/config, I can access my router directly, i.e. the jumpbox is transparent to me.
Using Salt, I am not able to access these devices.
So it seems that Salt isn't reading ~/.ssh/config to know how to reach my router.
Is there a variable I can define in the Salt master file, or in a pillar file, to make this work?
BR
Mou
salt-ssh should read your ~/.ssh/config (via OpenSSH), but clearly salt-sproxy doesn't.
However, you should be able to specify ProxyJump as an SSH option, as stated in the documentation about the SSH roster, since salt-sproxy also reads rosters; see the sketch below.
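For example, a minimal roster entry might look like the following. This is only a sketch: the device name, address, user, and jumpbox host are placeholders, and ssh_options is the roster field that passes extra -o options to the underlying SSH call:
router1:
  host: 192.0.2.10
  user: admin
  ssh_options:
    - ProxyJump=me@jumpbox.example.com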
Related
I have PyCharm Pro and I need to set up a remote repo on it via SSH. There are instructions available for doing this, but my requirements are slightly different from what I have seen. My organisation has SSH set up via Cloudflare, and all those configs are already put in .ssh/config. It includes hostname, IdentityFile, username, and so on. If I need to access my remote machine, I just do ssh <HOST-NAME> from the terminal. My config looks like this -
Host <HOST-NAME>
HostName <HOST-NAME>.<ORGANISATION>.net
ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h
IdentityFile ~/.ssh/id_rsa
User <USER-NAME>
The way to set up PyCharm usually involves instructions like specifying host, port, username etc. so that PyCharm can essentially run
ssh username@host_ip_address. However, I just need it to run ssh <HOST-NAME> so that it can access everything else using the config file that is already set up.
Is there a way to make this happen?
I use PyCharm Pro with my own .ssh/config, a jump-proxy pseudo-hostname, and certificates, so my setup is slightly different. I run "ssh pseudo-hostname", so there is no user either.
In PyCharm I set a valid username and port (default 22) because in my case they get passed on.
Your setup ignores the username and port, doesn't it? If so, you can use placeholder values.
I searched around reverse proxies, but they do not seem to answer my problem.
I want to connect from anywhere to a local server without changing anything in the local firewall.
My idea is to establish an SSH connection from the local server to a VM instance on GCloud and then, from anywhere, connect to this VM instance and reuse that SSH connection.
Is it possible?
You can use serveo.net link
ssh -o ServerAliveInterval=60 -R CUSTOMDOMAIN:80:localhost:YOURPORT serveo.net
You can access your localhost from anywhere, and it's free of cost.
By using the URL CUSTOMDOMAIN.serveo.net you can access your localhost server from anywhere in the world. (Here CUSTOMDOMAIN means you can use your own custom subdomain.)
An alternative to Serveo is ngrok link
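If you prefer to run your own VM instance, as in the original idea, a plain reverse tunnel works the same way. A minimal sketch, assuming the VM is reachable as user@vm.example.com and the local server's SSH listens on port 22 (all names and ports here are placeholders):
# On the local server: keep a reverse tunnel open to the VM
ssh -N -o ServerAliveInterval=60 -R 2222:localhost:22 user@vm.example.com
# From anywhere: log into the VM, then hop through the tunnel
ssh user@vm.example.com
ssh -p 2222 localuser@localhost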
I'm sorry to have to ask this question, but I feel like I've tried every answer so far on SO with no luck.
I have my local machine and my remote server. Jenkins is up and running on my server.
If I open up a terminal and do something like scp /path/to/file user@server:/path/to/wherever then my ssh works fine without requiring a password.
If I run this command inside of my Jenkins job I get 'Host Key Verification Failed'
So I know my SSH is working correctly the way I want, but why can't I get Jenkins to use this SSH key?
Interesting thing is, it did work fine when I first set up Jenkins and the key, then I think I restarted my local machine, or restarted Jenkins, then it stopped working. It's hard to say exactly what caused it.
I've also tried several options regarding ssh-agent and ssh-add but those don't seem to work.
I verified the local machine's .pub is on the server in the /user/.ssh folder and is also in the authorized_keys file. The folder is owned by user.
Any thoughts would be much appreciated and I can provide more info about my problem. Thanks!
Update:
Per Kenster's suggestion I did su - jenkins, then ssh server, and it asked me to add to known hosts. So I thought this was a step in the right direction. But the same problem persisted afterward.
Something I did not notice before: I can ssh server without a password when using my myUsername account. But if I switch to the jenkins user, it asks me for my password when I do ssh server.
I also tried ssh-keygen -R server as suggested to no avail.
Try
su jenkins
ssh-keyscan YOUR-HOSTNAME >> ~/.ssh/known_hosts
The SSH Slaves Plugin doesn't support ECDSA. The command above should add an RSA key for the ssh-slave.
Host Key Verification Failed
ssh is complaining about the remote host key, not the local key that you're trying to use for authentication.
Every SSH server has a host key which is used to identify the server to the client. This helps prevent clients from connecting to servers which are impersonating the intended server. The first time you use ssh to connect to a particular host, ssh will normally prompt you to accept the remote host's host key, then store the key locally so that ssh will recognize the key in the future. The widely used OpenSSH ssh program stores known host keys in a file .ssh/known_hosts within each user's home directory.
In this case, one of two things is happening:
The user ID that Jenkins is using to run these jobs has never connected to this particular remote host before, and doesn't have the remote host's host key in its known_hosts file.
The remote host key has changed for some reason, and it no longer matches the key which is stored in the Jenkins user's known_hosts file.
You need to update the known_hosts file for the user which jenkins is using to run these ssh operations. You need to remove any old host key for this host from the file, then add the host's new host key to the file. The simplest way is to use su or sudo to become the Jenkins user, then run ssh interactively to connect to the remote server:
$ ssh server
If ssh prompts you to accept a host key, say yes, and you're done. You don't even have to finish logging in. If it prints a big scary warning that the host key has changed, run this to remove the existing host from known_hosts:
$ ssh-keygen -R server
Then rerun the ssh command.
One thing to be aware of: you can't use a passphrase when you generate a key that you're going to use with Jenkins, because it gives you no opportunity to enter such a thing (seeing as it runs automated jobs with no human intervention).
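For example, a passphrase-less key for this purpose could be generated like so (the file name is just an illustration):
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/jenkins_id_rsa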
I have enabled two factor authentication for ssh using duosecurity (using this playbook https://github.com/CoffeeAndCode/ansible-duo ).
How can I use Ansible to manage the server now? The SSH calls fail at gathering facts because of this. I want the person running the playbook to enter the two-factor code before the playbook is run.
Disabling two-factor for the deployment user is a possible solution, but it creates a security issue which I would like to avoid.
It's a hack, but you can tunnel a non-2fac Ansible SSH connection through a 2fac-enabled SSH connection.
Overview
We will set up two users: ansible will be the user Ansible will use. It should be authenticated in a way that's supported by Ansible (i.e., not 2fac). This user will be restricted so it cannot connect from anywhere but 127.0.0.1, so it is not accessible from outside the machine.
The second user, ansible_tunnel will be open to the outside world, but will be authenticated by two factors, and will only allow tunneling of SSH connections to the local machine.
You must be able to configure 2-factor authentication only for some users (not all).
Some info on SSH tunnels.
On the target machine:
Create two users: ansible and ansible_tunnel
Put your public key in ~/.ssh/authorized_keys of both users
Set the shell of ansible_tunnel to /bin/false, or lock the user - it will be used for tunneling exclusively, not running commands
Add the following to /etc/ssh/sshd_config:
AllowTcpForwarding no
AllowUsers ansible@127.0.0.1 ansible_tunnel
Match User ansible_tunnel
AllowTcpForwarding yes
PermitOpen 127.0.0.1:22
ForceCommand echo 'This account can only be used for tunneling SSH sessions'
Setup 2-factor authentication only for ansible_tunnel
Restart sshd
On the machine running Ansible:
Before running Ansible, run the following (on the Ansible machine, not the target):
ssh -N -L 8022:127.0.0.1:22 ansible_tunnel@<host>
You will be authenticated using two factors.
Once the tunnel is up (check with netstat), run Ansible with ansible_ssh_user=ansible, ansible_ssh_port=8022 and ansible_ssh_host=localhost.
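For example, an inventory entry for this setup might look like the following (the host alias is a placeholder; only the three variables above matter):
server1 ansible_ssh_host=localhost ansible_ssh_port=8022 ansible_ssh_user=ansible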
Recap
Only ansible_tunnel can connect from the outside, and it will be authenticated using two factors
Once the tunnel is set up, connecting to port 8022 on the local machine is the same as connecting to sshd on the remote machine
We're allowing ansible to connect over SSH only when it is done through the localhost, so only connections that are tunneled are allowed
Scale
This will not scale well for multiple servers, due to the need to open a separate tunnel for each machine, which requires manual action. However, if you've chosen two-factor authentication for your servers, you're already willing to do some manual work to connect to each server, and this solution only adds a little overhead with some script-wrapping (see the sketch below).
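A minimal wrapper could look like this; it is only a sketch, and the host name, inventory, and playbook names are placeholders:
#!/bin/sh
# Open the 2fac tunnel in the background (prompts for the second factor),
# run Ansible through it, then close the tunnel.
ssh -f -N -L 8022:127.0.0.1:22 ansible_tunnel@server.example.com
ansible-playbook -i inventory site.yml \
  -e "ansible_ssh_host=localhost ansible_ssh_port=8022 ansible_ssh_user=ansible"
pkill -f 'ssh -f -N -L 8022:127.0.0.1:22'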
[EDITED TO ADD]
Bonus
For convenience, we may want to log into the maintenance account directly to do some manual work, without going through the process of setting up a tunnel. We can configure SSH to require 2fac authentication in this case, while maintaining the ability to connect without 2fac through the tunnel:
# All users must authenticate using two factors
AuthenticationMethods publickey,keyboard-interactive
# Allow both maintenance user and tunnel user with no restrictions
AllowUsers ansible ansible_tunnel
# The maintenance user is allowed to authenticate using a single factor only
# when connecting from a local address - it should be impossible to connect to
# this user using a single factor from the outside (the only way to do that is
# having an existing access to the machine, or use the two-factor tunnel)
Match User ansible Address 127.0.0.1
AuthenticationMethods publickey
I can use Ansible with SSH and 2FA using the ControlMaster feature of ssh and Ansible.
My local SSH client is configured to dump a ControlPath socket for connection multiplexing. Ansible is configured to use the same socket.
Local ssh client
This configuration enables multiplexing for all connections. I personally store this configuration in ~/.ssh/config:
Host *
ControlMaster auto
ControlPath ~/.ssh/master-%r@%h:%p.socket
ControlPersist 1m
When a connection is established, a socket appears in the $HOME/.ssh directory. This socket persists during one minute after disconnection.
Configure ansible
Ansible is configured to re-use the local socket.
Add this in your ansible configuration file (for instance, ~/.ansible.cfg):
[ssh_connection]
control_path=~/.ssh/master-%%r@%%h:%%p.socket
Note the double % for variable substitution.
Usage
Connect to your server using the regular ssh command (ssh user@server) and perform 2FA;
Launch your ansible command as usual.
Step 2 must be performed within the ControlPersist window, or keep an ssh connection open in one terminal while you launch the ansible command in another.
You can also force the connection to close when you no longer need it, using: ssh -O exit user@server.
Note that if you open a third terminal and run ssh user@server, you will not be asked for credentials: the connection established in step 1 will be reused.
Drawbacks
In case of bad network conditions
Sometimes, when you lose the connection, the socket persists, and every subsequent connection hangs. You must manually close this connection using ssh -O exit user@server. This is the only known drawback of this method.
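You can also check whether the master connection is still alive before launching Ansible (user@server is a placeholder here):
ssh -O check user@server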
References:
Ansible parameter ANSIBLE_SSH_CONTROL_PATH
About SSH multiplexing (a very old blog post which made me discover ssh multiplexing): https://blog.scottlowe.org/2015/12/11/using-ssh-multiplexing/
Solution using a Bastion Host
Even using an ssh bastion host it took me quite a while to get this working. In case it helps anyone else, here's what I came up with. It uses the ControlMaster ssh config options and since ansible uses regular ssh it can be configured to use the same ssh features and re-use the connection to the bastion host regardless of how many connections it opens to remote hosts. I've seen these Control options recommended in general (presumably for performance reasons if you have a lot of hosts) but not in the context of 2FA to a bastion host.
With this approach you don't need any sshd config changes, so you'll want AuthenticationMethods publickey,keyboard-interactive as the only authentication method setting on the bastion server, and publickey only for all your other servers that you're proxying through the bastion to get to. Since the bastion host is the only one that accepts external connections from the internet, it's the only one that requires 2FA, and internal hosts rely on agent forwarding for public key authentication but don't use 2FA.
On the client, I created a new ssh config file for my ansible environment in the top-level directory that I run ansible from (so sibling of ansible.cfg) called ssh.config. It contains:
Host bastion-persistent-connection
HostName <bastion host>
ForwardAgent yes
IdentityFile ~/.ssh/my-key
ControlMaster auto
ControlPath ~/.ssh/ansible-%r@%h:%p
ControlPersist 10m
Host 10.0.*.*
ProxyCommand ssh -W %h:%p bastion-persistent-connection -F ./ssh.config
IdentityFile ~/.ssh/my-key
Then in ansible.cfg I have:
[ssh_connection]
ssh_args = -F ./ssh.config
A few things to note:
My private subnet in this case is 10.0.0.0/16 which maps to the host wildcard option above. The bastion proxies all ssh connections to servers on this subnet.
This is a bit brittle in that I can only run my ssh or ansible commands in this directory, because of the ProxyCommand passing the local path to this config file. Unfortunately I don't think there's an ssh variable that maps to the current config file being used so that I could pass the same config file to the ProxyCommand automatically. Depending on your environment it might be better to use an absolute path for this.
The one gotcha is it makes running ansible more complex. Unfortunately, from what I can tell ansible has no support whatsoever for 2FA. So if you have no existing ssh connection to the bastion, ansible will print out Verification code: once for every private server it's connecting to, but it's not actually listening for the input so no matter what you do the connections will fail.
So I first run: ssh -F ssh.config bastion-persistent-connection
This creates the socket file in ~/.ssh/ansible-*, and the local ssh process will close and remove that socket after the configured time (which I have set to 10m).
Once the socket is open I can run ansible commands like normal, e.g. ansible all -m ping and they succeed.
To clone a repository managed by gitolite, one usually uses the following syntax:
git clone gitolite@server:repository
This tells the SSH client to connect to port 22 of server using gitolite as the user name. When I try it with the port number:
git clone gitolite@server:22:repository
Git complains that the repository 22:repository is not available. What syntax should be used if the SSH server uses a different port?
The “SCP style” Git URL syntax (user@server:path) does not support including a port. To include a port, you must use an ssh:// “Git URL”. For example:
ssh://gitolite#server:2222/repository
Note: As compared to gitolite#server:repository, this presents a slightly different repository path to the remote end (the absolute /repository instead of the relative path repository); Gitolite accepts both types of paths, other systems may vary.
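For instance, with the example port above you could clone directly or repoint an existing clone (2222 is just the example port value from above):
git clone ssh://gitolite@server:2222/repository
git remote set-url origin ssh://gitolite@server:2222/repository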
An alternative is to use a Host entry in your ~/.ssh/config (see your ssh_config(5) manpage). With such an entry, you can create an “SSH host nickname” that incorporates the server name/address, the remote user name, and the non-default port number (as well as any other SSH options you might like):
Host gitolite
User gitolite
HostName server
Port 2222
Then you can use very simple Git URLs like gitolite:repository.
If you have to document (and/or configure) this for multiple people, I would go with ssh:// URLs, since there is no extra configuration involved.
If this is just for you (especially if you might end up accessing multiple repositories from the same server), it might be nice to have the SSH host nickname to save some typing.
It is explained in great detail here: https://github.com/sitaramc/gitolite/blob/pu/doc/ssh-troubleshooting.mkd#_appendix_4_host_aliases
Using a "host" para in ~/.ssh/config lets you nicely encapsulate all this within ssh and give it a short, easy-to-remember, name. Example:
host gitolite
user git
hostname a.long.server.name.or.annoying.IP.address
port 22
identityfile ~/.ssh/id_rsa
Now you can simply use the one word gitolite (which is the host alias we defined here) and ssh will infer all those details defined under it -- just say ssh gitolite and git clone gitolite:reponame and things will work.