I've got a list of accounts/machines for which I need to check that I have working access.
So far I have written a for loop around the list.
But when I run ssh for a machine where my .ssh2 public key is not there yet, I get:
> /usr/bin/ssh someuser@somemachine groups
Password:
and it hangs waiting for the password.
How do I invoke ssh to fail instead of asking for a password?
I looked at the -n option, but that only redirects stdin for the remote command; it doesn't stop the password prompt.
I tried redirecting stdin from /dev/null, but ssh still detects the tty and asks anyway.
It turns out this works to stop ssh from trying to ask for the password:
/usr/bin/ssh -o AllowedAuthentications=publickey ...
Looks like there's a pretty good answer here: Send close signal to SSH immediately on password prompt
To wit,
ssh -o PasswordAuthentication=no -q user@somemachine
These both work for me on Ubuntu:
ssh -o PasswordAuthentication=no -q user@somemachine
ssh -o BatchMode=yes -q user@somemachine
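Either of these drops cleanly out of a loop when the key is not accepted. For the original use case, a minimal sketch (hosts.txt and someuser are placeholder names, not from the question):
while read -r host; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 -q "someuser@$host" true; then
    echo "$host: key-based access OK"
  else
    echo "$host: no key-based access (or host unreachable)"
  fi
done < hosts.txt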
To check whether a user has sudo access (on multiple servers at once), I'm using the following command:
echo -e "$Password" | ssh -tt -q $Username@$Server "sudo -S -p '' echo ok" 2>&1
This approach seems to work, but only if the password is accepted. If you are (for whatever reason) asked for the password again, the command hangs, and the whole script with it.
Is there a way to force this command to end, if the password is not accepted?
The command is not hanging. It is waiting.
You have specified that the sudo command should read the password from stdin (-S) and not prompt the user to enter a password (-p ''). If you enter the wrong password, sudo will wait for you to try again -- by default, three times.
I cannot find any option to sudo -- either on the command line or in the sudo.conf config file -- that will allow you to ask only once for the password and then exit.
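One workaround, not from the original answer, is to put an upper bound on the whole command with coreutils timeout, so a rejected password cannot block the script indefinitely (the 15-second limit is an arbitrary choice):
echo -e "$Password" | timeout 15 ssh -tt -q "$Username@$Server" "sudo -S -p '' echo ok" 2>&1 || echo "$Server: sudo check failed or timed out"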
On my linux server I run the command:
sshpass -p 'password' rsync -avz /source/folder/ root@192.168.x.x:/dest/folder
When I run the command without sshpass, it prompts me to confirm the host's authenticity and then asks for the password.
I need some equivalent to "-o StrictHostKeyChecking=no" (which I use for ssh) that will allow me to run this with no prompts or errors.
Everything I found from googling was about ssh throwing the error, not rsync.
If you want to connect to a new server whose host key is not yet in your ~/.ssh/known_hosts, you should not skip this security check, but rather store the server host key in known_hosts manually, verify that it is correct, and then let the automatic check do its job.
The simplest way to populate known_hosts with the server host key is:
ssh-keyscan server-ip >> ~/.ssh/known_hosts
After that, you should not need to use the StrictHostKeyChecking=no workaround.
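If you want to actually verify the key before appending it, a slightly more careful sketch (file names are placeholders) is to fetch it into a temporary file, print its fingerprint, and compare that against a fingerprint obtained out of band (for example from the server console):
ssh-keyscan server-ip > /tmp/server-hostkey
ssh-keygen -lf /tmp/server-hostkey        # print the fingerprint(s) to compare
cat /tmp/server-hostkey >> ~/.ssh/known_hosts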
This is the command that works without printing errors:
sshpass -p "yourpassword" rsync -rvz -e 'ssh -o StrictHostKeyChecking=no -p 22' --progress root@111.111.111.111:/backup/origin /backup/destination/
I found the following command at cyberciti. This allowed me to do exactly what I needed.
$ rsync --rsh="sshpass -p myPassword ssh -o StrictHostKeyChecking=no -l username" server.example.com:/var/www/html/ /backup/
By default, sshpass looks for "assword" as the password prompt indicator. But ssh can present a different prompt, such as:
Enter passphrase for key '/home/user/.ssh/private_user_key':
So, try adding the '-P' parameter:
sshpass -p "yourpassword" -P 'Enter passphrase for key' rsync 111.111.111.111:/backup/origin /backup/destination/
You can set the path to your private key in your ssh config file (e.g. /home/user/.ssh/config) or pass it with the -e parameter, like this:
sshpass -p "yourpassword" -P 'Enter passphrase for key' rsync -e 'ssh -i /home/user/.ssh/private_user_key' 111.111.111.111:/backup/origin /backup/destination/
More info about the default password prompt indicator (from sshpass -V):
$ sshpass -V
sshpass 1.06
(C) 2006-2011 Lingnu Open Source Consulting Ltd.
(C) 2015-2016 Shachar Shemesh
This program is free software, and can be distributed under the terms of the GPL
See the COPYING file for more information.
Using "assword" as the default password prompt indicator.
I wrote a script to bring up several VMs using Vagrant, which I then have to provision with Ansible. Unfortunately my host is a Windows machine, so I thought I could solve the issue by putting all the VMs into a VPN and then provisioning them from another machine in the same VPN.
In theory, it works... I can ssh into the other machines without trouble. But when I run my ansible playbook, ansible fails.
At first I got the message "ssh: connect to host 10.1.2.100 [10.1.2.100] port 22: No route to host" when running ansible with -vvvv
This was in the evening, and I was very tired, and that error didn't recur the following morning. Not sure if it had something to do with the VM I'm deploying from being rebooted in the meantime, or the receiving machine being destroyed and brought up again since then. In any case, the problem has not gone away.
Results now, after recreating both VMs:
# ansible-playbook -i vms -k -u vagrant vms.yml -vvvv
result:
<10.1.2.100> ESTABLISH SSH CONNECTION FOR USER: vagrant <10.1.2.100>
SSH: EXEC sshpass -d14 ssh -C -vvv -o ServerAliveInterval=50 -o
User=vagrant -o ConnectTimeout=10 -tt 10.1.2.100 '( umask 22 && mkdir
-p "$( echo $HOME/.ansible/tmp/ansible-tmp-1455781388.36-25193904947084 )" && echo
"$( echo $HOME/.ansible/tmp/ansible-tmp-1455781388.36-25193904947084
)" )' fatal: [10.1.2.100]: FAILED! => {"failed": true, "msg": "ERROR!
Using a SSH password instead of a key is not possible because Host Key
checking is enabled and sshpass does not support this. Please add
this host's fingerprint to your known_hosts file to manage this
host."}
So far so clear. I ssh into the other instance to add it to the known hosts. This works without any trouble.
Back to ansible, I try the same command again. The result now is:
<10.1.2.100> ESTABLISH SSH CONNECTION FOR USER: vagrant <10.1.2.100>
SSH: EXEC sshpass -d14 ssh -C -vvv -o ServerAliveInterval=50 -o
StrictHostKeyChecking=no -o User=vagrant -o ConnectTimeout=10 -tt
10.1.2.100 '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916 )" &&
echo "$( echo
$HOME/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916 )" )'
<10.1.2.100> PUT /tmp/tmpXQKa8Z TO
/home/vagrant/.ansible/tmp/ansible-tmp-1455782149.99-271768166468916/setup
<10.1.2.100> SSH: EXEC sshpass -d14 sftp -b - -C -vvv -o
ServerAliveInterval=50 -o StrictHostKeyChecking=no -o User=vagrant -o
ConnectTimeout=10 '[10.1.2.100]' fatal: [10.1.2.100]: UNREACHABLE! =>
{"changed": false, "msg": "ERROR! SSH Error: data could not be sent to
the remote host. Make sure this host can be reached over ssh",
"unreachable": true}
Well, I made sure the host was reachable by ssh, thank you very much! Ansible still can't get through, and I'm about to get a brain tumor from thinking of things that might be the problem.
Any suggestions what might be the problem?
This issue was reported here, with some workarounds:
https://github.com/ansible/ansible/issues/15321
The consensus seems to be either to (a) use ansible_password or (b) pass -u username in the connection parameters. However, any number of things can disrupt an SSH connection in ways that make it look "unreachable" to higher-level apps, so I recommend going through each of the steps outlined in that ticket.
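If you knowingly want to keep using password authentication against hosts that are not yet in known_hosts, the usual workaround is to turn off Ansible's host key checking; a sketch using standard Ansible settings (not taken from the ticket):
# ansible.cfg
[defaults]
host_key_checking = False
# or per invocation:
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i vms -k -u vagrant vms.yml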
I'm new to Ansible. I set up an Ubuntu virtual machine using Vagrant. I'm able to ssh into the machine using ssh vagrant@172.16.23.228. I have created an ssh key with the same password as the VM, added it to the agent, and specified the path in my hosts file.
After following the instructions here, I started to receive the following errors when running this command (ansible all --inventory-file=hosts.ini --module-name ping -u vagrant -vvvv):
Not sure what I'm missing from my set-up; what else do I need to check?
<172.16.23.228> ESTABLISH CONNECTION FOR USER: vagrant
<172.16.23.228> REMOTE_MODULE ping
<172.16.23.228> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/user/.ansible/cp/ansible-ssh-%h-%p-%r" -o Port=22 -o IdentityFile="~Users/user/.ssh/onemachine_rsa" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 172.16.23.228 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1451080871.59-247915080664557 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1451080871.59-247915080664557 && echo $HOME/.ansible/tmp/ansible-tmp-1451080871.59-247915080664557'
172.16.23.228 | FAILED => SSH Error: tilde_expand_filename: No such user Users
while connecting to 172.16.23.228:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
My hosts file looks like:
[testserver]
172.16.23.228 ansible_ssh_port=22 ansible_ssh_user=vagrant ansible_ssh_private_key_file=~Users/user/.ssh/onemachine_rsa
What you're doing can work, but I highly recommend using the built-in Ansible provisioner in Vagrant. It will make your life easier and improve your Vagrant skills at the same time. And if you need to execute any shell scripts, use the shell provisioner.
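For reference, a minimal Vagrantfile snippet for the built-in Ansible provisioner looks roughly like this (the box name and playbook.yml are placeholders):
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "playbook.yml"
  end
end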
Providing this answer for the benefit of those, like me, who arrive later at the party. Recent Vagrant versions install a per-machine private key in a local directory instead of using the admittedly insecure key that used to be shared by every VM. You'll have to create an ansible_hosts file like this one:
[vagrantboxes]
jessie ansible_ssh_port=2222 ansible_ssh_host=127.0.0.1
[vagrantboxes:vars]
ansible_ssh_user=vagrant
ansible_ssh_private_key_file=.vagrant/machines/default/virtualbox/private_key
The important part is the last line, which provides the path to the actual private key used for the virtual machine that has been started up from this particular directory.
The path to your ansible_ssh_private_key_file is incorrect. Try ansible_ssh_private_key_file=~/.ssh/onemachine_rsa instead. The tilde in this case expands to the home directory of your user on the local machine you're running ansible from.
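With that change, the inventory entry would read:
[testserver]
172.16.23.228 ansible_ssh_port=22 ansible_ssh_user=vagrant ansible_ssh_private_key_file=~/.ssh/onemachine_rsa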
I need to do rsync by ssh and want to do it automatically without the need of passing password for ssh manually.
Use "sshpass" non-interactive ssh password provider utility
On Ubuntu
sudo apt-get install sshpass
Command to rsync
/usr/bin/rsync -ratlz --rsh="/usr/bin/sshpass -p password ssh -o StrictHostKeyChecking=no -l username" src_path dest_path
You should use a keyfile without a passphrase for scripted ssh logins. This is obviously a security risk; take care that the keyfile itself is adequately secured.
Instructions for setting up passwordless ssh access
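In short, the setup looks like this (a sketch; the key path and remote host are placeholders):
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ""      # empty passphrase for unattended use
ssh-copy-id -i ~/.ssh/id_rsa.pub username@remote_host
rsync -avz /source/folder/ username@remote_host:/dest/folder      # no password prompt now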
You can avoid the password prompt on the rsync command by setting the environment variable RSYNC_PASSWORD to the password you want to use, or by using the --password-file option.
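Note that both of these only apply when connecting to an rsync daemon (rsync:// or host::module syntax), not to rsync over ssh. A sketch, assuming a daemon module named backup:
echo 'my-rsync-password' > ~/.rsync-pass
chmod 600 ~/.rsync-pass
rsync -av --password-file=$HOME/.rsync-pass rsync://username@server/backup/ /local/dest/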
I got it to work like this:
sshpass -p "password" rsync -ae "ssh -p remote_port_ssh" /local_dir remote_user#remote_host:/remote_dir
If you can't use public/private keys, you can use expect:
#!/usr/bin/expect
# Start rsync and feed it the password when prompted.
spawn rsync SRC DEST
expect "password:"
send "PASS\n"
expect eof
# wait returns {pid spawn_id os_error_flag status}; status is rsync's exit code.
if {[lindex [wait] 3] != 0} {
    puts "rsync failed"
    exit 1
}
exit 0
You will need to replace SRC and DEST with your normal rsync source and destination parameters, and replace PASS with your password. Just make sure this file is stored securely!
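For example, if you save the script as rsync-pass.exp (a placeholder name), lock down its permissions and run it directly:
chmod 700 rsync-pass.exp
./rsync-pass.exp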
The following works for me:
SSHPASS='myPassword'
/usr/bin/rsync -a -r -p -o -g --progress --modify-window=1 --exclude /folderOne -s -u --rsh="/usr/bin/sshpass -p $SSHPASS ssh -o StrictHostKeyChecking=no -l root" source-path myDomain:dest-path >&2
I had to install sshpass first.
Use an ssh key.
Look at ssh-keygen and ssh-copy-id.
After that you can use rsync this way:
rsync -a --stats --progress --delete /home/path server:path
Another interesting possibility:
generate an RSA or DSA key pair (as described above)
put the public key on the host (as described above)
run:
rsync --partial --progress --rsh="ssh -i dsa_private_file" host_name@host:/home/me/d .
Note the -i dsa_private_file option, which points to your RSA/DSA private key.
Basically, this approach is very similar to the one described by @Mad Scientist; however, you do not have to copy your private key to ~/.ssh. In other words, it is useful for ad-hoc tasks (one-time passwordless access).
Automatically entering the password for the rsync command is difficult. My simple solution to avoid the problem is to mount the folder to be backed up, then use a local rsync command to back up the mounted folder.
mount -t cifs //server/source/ /mnt/source-tmp -o username=Username,password=password
rsync -a /mnt/source-tmp /media/destination/
umount /mnt/source-tmp
The official solution (and others) were incomplete when I first visited, so I came back years later to post this alternate approach, in case anyone else winds up here intending to use a public/private key pair:
Execute this from the target backup machine, which pulls from source to target backup
rsync -av --delete -e 'ssh -p 59333 -i /home/user/.ssh/id_rsa' user@10.9.9.3:/home/user/Server/ /home/keith/Server/
Execute this from the source machine, which sends from source to target backup
rsync -av --delete -e 'ssh -p 59333 -i /home/user/.ssh/id_rsa' /home/user/Server/ user@10.9.9.3:/home/user/Server/
And, if you are not using an alternate port for ssh, then consider the more elegant examples below:
Execute this from the target backup machine, which pulls from source to target backup:
sudo rsync -avi --delete user@10.9.9.3:/var/www/ /media/sdb1/backups/www/
Execute this from the source machine, which sends from source to target backup:
sudo rsync -avi --delete /media/sdb1/backups/www/ user@10.9.9.3:/var/www/
If you are still getting prompted for a password, then you need to check your ssh configuration in /etc/ssh/sshd_config and verify that the users on the source and target machines each have the other's public ssh key (sent over with ssh-copy-id user@10.9.9.3).
(Again, this is for using ssh key-pairs without a password, as an alternate approach, and not for passing the password over via a file.)
Though you've already implemented it by now,
you can also use any expect implementation (you'll find alternatives in Perl and Python: pexpect, paramiko, etc.).
I use a VBScript file for doing this on the Windows platform; it serves me very well.
set shell = CreateObject("WScript.Shell")
shell.run"rsync -a Name#192.168.1.100:/Users/Name/Projects/test ."
WScript.Sleep 100
shell.SendKeys"Your_Password"
shell.SendKeys "{ENTER}"
Exposing a password in a command is not safe, especially in a bash script; it is much better to work with keyfiles.
Create keys on your host with ssh-keygen and copy the public key with ssh-copy-id user@hostname.example.com, then run rsync adding the option -e "ssh -i $HOME/.ssh/(your private key)" to force rsync to use an ssh connection via the private key that you created earlier.
Example:
rsync -avh --exclude '$LOGS' -e "ssh -i $HOME/.ssh/id_rsa" --ignore-existing $BACKUP_DIR $DESTINATION_HOST:$DESTINATION_DIR;
Here's a secure solution using a gpg encrypted password.
1. Create a .secret file containing your password in the same folder as your rsync script using the command:
echo 'my-very-secure-password' > .secret
Note that the file is hidden by default for extra security.
2. Encrypt your password file using the following gpg command and follow the prompts:
gpg -c .secret
This will create another file named .secret.gpg. Your password is now encrypted.
3. Delete the plain-text password file:
rm .secret
4. Finally, in your rsync script, use gpg and sshpass as follows:
gpg -dq .secret.gpg | sshpass rsync -avl --mkpath /home/john user_name@x.x.x.x:/home
The example syncs the entire home folder of the user named john to a remote server with IP x.x.x.x.
Following the idea posted by Andrew Seaford, this is done using sshfs:
echo "SuperHardToGuessPass:P" | sshfs -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no user#example.com:/mypath/ /mnt/source-tmp/ -o workaround=rename -o password_stdin
rsync -a /mnt/source-tmp/ /media/destination/
umount /mnt/source-tmp