Vagrant startup with sshpass and port forward not working - ssh

I am trying to set up Vagrant for my development environment, but am having problems getting Vagrant to automatically connect to a remote server on startup.
In my Vagrantfile I have the following line:
config.vm.provision "shell", path: "vagrant/startup.sh", run: "always"
In my startup.sh I have the following:
#!/usr/bin/env bash
sshpass -p '*******' ssh -fN -L 389:XXX.XXX.XXX.XXX:389 ******@********.*******.**.**
The provision runs on startup, but no ports are getting forwarded.
If I SSH into the box and run the command manually, it just returns without any errors, but doesn't work. I can only get it to work if I don't use sshpass.
P.S. Please don't tell me about the insecurity of sshpass; this is only used for LAN connections.
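For what it's worth, a debugging sketch (the username/host placeholder below stands in for the masked values above; this is an assumption, not a confirmed fix): local port 389 is privileged, so a non-root user cannot bind it, and -f backgrounds ssh before the error is visible. ExitOnForwardFailure makes ssh fail loudly instead, and an unprivileged local port such as 1389 sidesteps the root requirement:
sshpass -p '*******' ssh -fN \
    -o ExitOnForwardFailure=yes \
    -L 1389:XXX.XXX.XXX.XXX:389 user@remote-host
# With ExitOnForwardFailure=yes, ssh exits non-zero when the -L forward
# cannot be established, so the provision script can detect the failure.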

Multiple jumps ssh tunnel, one command line

I'm currently connecting my local machine (MobaXterm) to the target by running commands on my local machine, on pivotone, and on pivottwo. This is the flow of data:
mobaxterm <--- pivotone <--- pivottwo <--- target
These are the commands that I run on each machine:
local(mobaxterm)
ssh -L 5601:127.0.0.1:5601 root@pivotone
pivotone
ssh -L 5601:127.0.0.1:5601 root@pivottwo
pivottwo
ssh -L 5601:127.0.0.1:5601 root@target
I was wondering if I could do the same, but with just one command on my MobaXterm machine?
You don't need the -L option to manage jump hosts.
ssh -J root@pivotone,root@pivottwo root@target
You can automate this in your .ssh/config file:
Host target
  ProxyJump root@pivotone,root@pivottwo
Then you can simply run
ssh root@target
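Note that -J composes with -L, so if the goal is still to expose the target's port 5601 locally (as in the original commands), both options can go in the single command:
ssh -J root@pivotone,root@pivottwo -L 5601:127.0.0.1:5601 root@target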

Ansible command fails with 'Failed to connect to the host via ssh' but succeeds after doing 'ansible all -m ping' - why?

This is on an Ubuntu 16.10 Linux VM (host) going to an EC2 Ubuntu instance (client).
I do this command:
sudo ansible-playbook deploy.yml -vvv
And get:
fatal: [web1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh.", "unreachable": true}
Yet if I do this immediately (seconds) afterward:
ansible all -m ping
The previous command works!
Is it something to do with ControlPersist=60s, like no more commands can be issued? Confusing.
It seems like this may be a known Ansible bug where SSH connections intermittently fail and give a poor error message. I'm on Ansible 2.1.1, the same version that many people in this bug report are on:
https://github.com/ansible/ansible/issues/15706
So upgrading Ansible would probably get around the error. Or use the workaround I discovered: run ansible <your hosts> -m ping first.
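If the ControlPersist sockets are the suspect, one way to test that theory (a sketch, assuming Ansible's default control path of ~/.ansible/cp) is to disable connection sharing for a single run:
# Run once with connection multiplexing disabled; if the failure disappears,
# the ControlPersist sockets are implicated. ANSIBLE_SSH_ARGS replaces the
# default ssh_args, which include ControlMaster/ControlPersist.
ANSIBLE_SSH_ARGS='-o ControlMaster=no' ansible-playbook deploy.yml -vvv
# Stale sockets can also be cleared outright:
rm -rf ~/.ansible/cp/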

vagrant ssh agent forwarding only works for inline commands?

I've added agent forwarding to my Vagrantfile, and now when I run vagrant ssh -c 'ssh-add -l' I see a list of SSH keys, which is correct. However, when I run vagrant ssh to connect and then run ssh-add -l, I don't see any keys! It looks like the forwarding only works for commands included as part of the initial call; otherwise it doesn't forward them.
What is going on here? How do I get it to forward the keys consistently, for all ssh connections to vagrant?
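A diagnostic sketch (assuming config.ssh.forward_agent = true is set in the Vagrantfile): the forwarded agent is exposed through the SSH_AUTH_SOCK environment variable, so comparing it in both modes shows whether the socket ever reached the interactive session:
vagrant ssh -c 'echo $SSH_AUTH_SOCK'   # prints a socket path when forwarding works
vagrant ssh                            # then, inside the interactive session:
echo $SSH_AUTH_SOCK                    # empty output means no agent reached this shell
sudo -E ssh-add -l                     # note: sudo drops SSH_AUTH_SOCK unless -E preserves it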

SSH Error: Permission denied (publickey,password) in Ansible

I am new to Ansible and am trying to implement it. I have tried every approach I could find on the Internet, and every related question, but I still can't resolve the error. How can I fix it?
I installed Ansible on my MacBook Pro. I created a VM whose IP address is 10.4.1.141; the host's IP address is 10.4.1.140.
I tried to connect to my VM from the host via SSH. It connected with the following command:
ssh user@10.4.1.141
And I got the shell access. This means my SSH connection is working fine.
Now I tried the following command for Ansible:
ansible all -m ping
And the content of /etc/ansible/hosts is 10.4.1.141.
Then it shows the following error:
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
Then I tried creating a config file in the .ssh/ folder on the host machine, but the error is still the same.
The content of the config file is:
IdentityFile ~/.ssh/id_rsa
which is the path to my private key.
Then I ran the same command ansible all -m ping and got the same error again.
When I tried another command,
ansible all -m ping -u user --ask-pass
Then it asked for the SSH password. I gave it (I am very sure the password is correct), but I got this error:
10.4.1.141 | FAILED => FAILED: Authentication failed.
This is the log using -vvvv:
<10.4.1.141> ESTABLISH CONNECTION FOR USER: rajatg
<10.4.1.141> REMOTE_MODULE ping
<10.4.1.141> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/rajatg/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 10.4.1.141 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && echo $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007'
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
I am still not able to figure out what the problem is. Asking here is my last resort after doing all my research. This is the link I referred to.
I fixed the issue. The problem was in my /etc/ansible/hosts file.
The content written in /etc/ansible/hosts was 10.4.1.141. But when I changed it to rajat@10.4.1.141, the issue got fixed.
If you log in with ssh user@10.4.1.141:
Option 1
Then make sure that in your hosts file inside /etc/ansible you have:
[server01]
10.4.1.141
Then within /etc/ansible run:
ansible all -m ping -u user --ask-pass
Option 2
If you want to log in without typing the SSH password, then in your hosts file inside /etc/ansible add:
[server01]
10.4.1.141 ansible_ssh_pass=xxx ansible_ssh_user=user
Then within /etc/ansible run:
ansible all -m ping
For me it worked both ways.
In my case, I have multiple private keys in my .ssh directory.
Here is how I fixed it: tell Ansible to use a specific private key:
ansible-playbook -i ../../inventory.ini --private-key=~/.ssh/id_rsa_ansiadmin update.yml
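The same thing can be made permanent in the inventory instead of being passed on the command line (a sketch; the [webserver] group name is illustrative):
[webserver]
10.4.1.141 ansible_ssh_private_key_file=~/.ssh/id_rsa_ansiadmin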
The previous solutions didn't work for me, unfortunately (DevOps layman here!). But the one below did.
Change your inventory file to:
[webserver]
10.4.1.141 ansible_user=ubuntu
ansible webserver --private-key pem_file.pem -m ping
Running the command with -vvvv helped me debug it further.
Reference: Failed to connect to the host via ssh: Permission denied (publickey,password) #19584
If you execute Ansible with sudo, for example
sudo ansible -m ping all
keep in mind that the public key for root has to be on the server you want to reach as well, not only the public key of your non-root user. Otherwise, you get the error message above.
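A quick way to check which identity such a sudo run actually picks up (sudo invokes ssh as root, so root's ~/.ssh is what counts) is to run ssh verbosely under sudo:
sudo ssh -v user@10.4.1.141 exit   # the debug output lists the identity files tried as root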
Most of these issues happen when connecting to Ubuntu machines listed in the hosts file.
Solution: Ansible needs to be told which user to connect as, because Ubuntu does not enable a root login by default.
For the hosts file:
[Test-Web-Server]
10.192.168.10 ansible_ssh_pass=foo ansible_ssh_user=foo
The problem lies in the inventory file.
vi /etc/ansible/hosts
It should be:
[webserver]
192.###.###.### ansible_ssh_user=user ansible_ssh_pass=pass
I have fixed this issue as well.
My issue was also in my hosts file, /etc/ansible/hosts.
I changed my hosts file from
172.28.2.101
to
name-of-server-in-ssh-config
I had IP addresses in the hosts file. Since I have SSH configurations already set up for names, I do not need to use a variable or username in front of the hosts.
[name-stg-web]
server-name-stg-web[01:02]
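For reference, this approach relies on ~/.ssh/config entries of roughly this shape (the user and key path here are illustrative):
Host server-name-stg-web01
  HostName 172.28.2.101
  User deploy
  IdentityFile ~/.ssh/id_rsa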
What first worked for me was to hardcode the target machine's root password in /etc/ansible/hosts, like this:
[load_balancers_front]
loadbalancer1 ansible_host=xxx.xxx.xxx.xxx ansible_user=root ansible_password=root_password_in_target
But of course it is not recommended to do this, because of the security issues.
Then I figured out a solution from the docs:
ssh-agent bash (see the Ansible docs)
and then
ssh-add /my/private/ssh-key
After this, my hosts file looks like this, and ansible all -m ping works fine:
[load_balancers_front]
loadbalancer1 ansible_host=xxx.xxx.xxx.xxx ansible_user=root
Mentioning the username in the /etc/ansible/hosts file can also resolve the issue.
sudo vim /etc/ansible/hosts
[test-server]
ip_address ansible_user="remote pc's username"
[jenkinsserver]
publicdnsname ansible_user=ubuntu ansible_ssh_private_key_file=ubuntu.cer
Over the years, some operating systems have tightened their SSH requirements and no longer accept older RSA and DSA keys. The message Permission denied (publickey,password) may therefore indicate that the OS needs a stronger SSH key than the default id_rsa.
Use the following command to generate a new key:
ssh-keygen -t ecdsa -f ~/.ssh/id_ecdsa -N ""
Ensure that the server has the option
PubkeyAuthentication yes
in /etc/ssh/sshd_config or /etc/openssh/sshd_config.
Some other options may be required as well (read the documentation of your OS first), for example:
Protocol 2
PermitRootLogin without-password
AuthorizedKeysFile /etc/openssh/authorized_keys/%u /etc/openssh/authorized_keys2/%u .ssh/authorized_keys .ssh/authorized_keys2
Do not forget to restart the sshd service to apply the changes.
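For example, on most systemd-based distributions:
sudo systemctl restart sshd    # on some distributions the unit is named "ssh"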
Copy the new key to the server with ssh-copy-id -i ~/.ssh/id_ecdsa user@host (the target host is required as the final argument); then you can connect to the remote server using Ansible.
On the host machine, install sshpass with the command below:
sudo apt install sshpass -y
and use this command to ping:
ansible all -i slaves.txt -m ping -u test --ask-pass
It will prompt for an interactive password entry, where you enter the password of the slave machine.
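For completeness, slaves.txt here is just an inventory file; a minimal sketch of what it might contain (hosts are illustrative):
[slaves]
192.168.56.101
192.168.56.102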

How to create a cloud9 SSH workspace with dreamhost VPS

I have already installed Node.js (v0.10.30) and npm. I'm able to establish an SSH connection between my Mac and the DreamHost VPS via the terminal, but I can't do it in Cloud9. Can someone help me, please?
./server.js -p 8080 -l 0.0.0.0 -a :
--settings Settings file to use
--help Show command line options.
-t Start in test mode
-k Kill tmux server in test mode
-b Start the bridge server - to receive commands from the cli [default: false]
-w Workspace directory
--port Port
--debug Turn debugging on
--listen IP address of the server
--readonly Run in read only mode
--packed Whether to use the packed version.
--auth Basic Auth username:password
--collab Whether to enable collab.
--no-cache Don't use the cached version of CSS
So you can use your own VPS; just change 0.0.0.0 to your server's IP.
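Putting the listed options together, a hypothetical invocation (the IP, credentials, and workspace path are placeholders):
./server.js -p 8080 -l 203.0.113.10 -a myuser:mypassword -w ~/my-project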