scp + error Name or service not known + custom port - ssh

I have read a lot of posts about this problem but I still cannot solve it on my side.
I have a server I usually connect to like this:
$ ssh user@xxx.xx.xx.xxx -p yy
user = not root
xxx.xx.xx.xxx = IPv4 address of my server
yy = custom port for ssh
The connection works well.
I am trying to copy a folder from my local machine (Ubuntu) to the server (Ubuntu 14.04) like this:
$ scp -r -p /home/user/my/folder/ ssh://user@xxx.xx.xx.xxx:yy/home/user/my/folder/on/server/
I get this error:
ssh: Could not resolve hostname ssh: Name or service not known
lost connection
I guess the connection itself works, so what could be going on? A problem with the permissions on the folder?
For information, my local machine has both an IPv4 and an IPv6 address. Could that be the cause?
Thank you in advance for any help.
jb

Check the manual page for scp. It describes the usage of scp with all its switches and options:
scp [...] [-P port] [[user@]host1:]file1 ... [[user@]host2:]file2
Your command should be:
$ scp -r -p -P yy /home/user/my/folder/ user@xxx.xx.xx.xxx:/home/user/my/folder/on/server/
Note that the port is passed as -P yy, you don't write ssh:// in front of the user, and the host is separated from the remote path by a colon (:).
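If you copy to this server often, a ~/.ssh/config entry saves you from repeating the port and user. A minimal sketch, assuming a hypothetical alias myserver and the same placeholders as above:
Host myserver
    HostName xxx.xx.xx.xxx
    User user
    Port yy
With that in place, scp -r -p /home/user/my/folder/ myserver:/home/user/my/folder/on/server/ picks up the user and the port automatically.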

You don't need "ssh://".
Here scp believes ssh is the name of the server you want to copy to. That's what the message says: "Could not resolve hostname ssh".
Try:
$ scp -r -p -P yy /home/user/my/folder/ user@xxx.xx.xx.xxx:/home/user/my/folder/on/server/

Related

scp: "Host key verification failed. lost connection" when attempting to copy files from remote server to WSL

I have a user at a remote server, let's call it remote_user@remote_server.
I also have a user on my WSL2 Ubuntu, let's call it wsl_user@<localhost>.
When I tried the command scp -v -o StrictKeyChecking=no remote_user@remote_server:/path/to/file.txt wsl_user@<localhost>:/path/to/directory on my host computer, it asked for the remote server's password (which successfully authenticates), but then it outputs
Host key verification failed.
lost connection
when I use localhost as <localhost>.
I have tried using both the IP address of the host computer and the IP address of the WSL2 instance, but both just hang and then fail with Connection timed out.
P.S.: I can ssh into both of them.
Well, I somehow circumvented the problem by breaking it into smaller commands, i.e.
scp -v remote_user@remote_server:/path/to/file.txt file.txt \
&& scp -v file.txt wsl_user@localhost:/path/to/directory \
&& rm file.txt
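If your scp supports the -3 flag (OpenSSH does), a single command can do the same remote-to-remote copy routed through the local machine, without the temporary file. A sketch using the same placeholders:
scp -3 -v remote_user@remote_server:/path/to/file.txt wsl_user@localhost:/path/to/directory
With -3 the data passes through the local host, so each remote end only has to be reachable from your machine, not from each other.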

Connecting to a remote server from local machine via ssh-tunnel

I am running Ansible on my machine, and my machine does not have SSH access to the remote machine: port 22 connections originating from the local machine are blocked by the institute firewall. But I have access to another machine (ssh-tunnel) through which I can log in to the remote machine. Is there a way to run an Ansible playbook from the local machine against the remote hosts?
In other words, is it possible to make Ansible/ssh connect to the remote machine via ssh-tunnel without actually logging in to ssh-tunnel? The connection would just pass through the tunnel.
The alternative would be to install Ansible on ssh-tunnel and run the plays from there, but that is not the desired solution.
Please let me know if this is possible.
There are two ways to achieve this without installing Ansible on the ssh-tunnel machine.
Solution #1:
Use these variables in your inventory:
[remote_machine]
remote ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='username' ansible_ssh_private_key_file='/home/user/private_key'
I hope the parameters above are self-explanatory; if you need help, please ask in the comments.
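Solution #1 assumes a local port forward is already listening on 127.0.0.1:2222. A minimal sketch of setting one up, where x.x.x.x is the ssh-tunnel machine and y.y.y.y is the remote machine as reachable from it (both placeholders):
ssh -f -N -L 2222:y.y.y.y:22 username@x.x.x.x
After that, ssh -p 2222 username@127.0.0.1 should land on the remote machine, and so will Ansible.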
Solution #2:
Create a ~/.ssh/config file and add the following parameters:
####### Access to the private server through ssh-tunnel/bastion ########
Host ssh-tunnel-server
    HostName x.x.x.x
    StrictHostKeyChecking no
    User username
    ForwardAgent yes

Host private-server
    HostName y.y.y.y
    StrictHostKeyChecking no
    User username
    ProxyCommand ssh -q ssh-tunnel-server nc -q0 %h %p
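On OpenSSH 7.3 or newer, the same hop can be expressed with ProxyJump instead of the nc-based ProxyCommand. A sketch of the equivalent block:
Host private-server
    HostName y.y.y.y
    User username
    ProxyJump ssh-tunnel-server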
Hope that helps; if you need any help, feel free to ask.
There is no need to install Ansible on the jump or remote servers; Ansible is an SSH-only tool. :-)
First make sure you can get it working directly with an SSH tunnel.
From your local machine (Local_A), you can log in to the remote machine (Remote_B) via a jump box (Jump_C).
Log in to Local_A and run:
ssh -f user@remote_B -L 2000:Jump_C:22 -N
The other options are:
-f tells ssh to background itself after it authenticates, so you don't have to sit around running something on the remote server for the tunnel to remain alive.
-N says that you want an SSH connection, but you don't actually want to run any remote commands. If all you're creating is a tunnel, then including this option saves resources.
-L [bind_address:]port:host:hostport
Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side.
There will be a password challenge unless you have set up DSA or RSA keys for a passwordless login.
There are lots of documents teaching you how to do the ssh tunnel.
Then try the ansible command below from Local_A:
ansible -vvvv remote_B -m shell -a 'hostname -f' --ssh-extra-args="-L 2000:Jump_C:22"
You should see the remote_B hostname. Let me know the result.
Let's say you can ssh into x.x.x.x from your local machine, and ssh into y.y.y.y from x.x.x.x, while y.y.y.y is the target of your ansible playbook.
inventory:
[target]
y.y.y.y
playbook.yml:
---
- hosts: target
  tasks: ...
Run:
ansible-playbook --ssh-common-args="-o ProxyCommand='ssh -W %h:%p root@x.x.x.x'" -i inventory playbook.yml
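The same ProxyCommand can also live in the inventory via ansible_ssh_common_args (available since Ansible 2.0), so the command line stays short. A sketch reusing the placeholders above:
[target]
y.y.y.y

[target:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p root@x.x.x.x"'
Then a plain ansible-playbook -i inventory playbook.yml should route through x.x.x.x.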

Ansible percent expand

I have an ansible playbook which connects to a virtual machine via a non-standard ssh port (forwarded to localhost) and a different user than the host user (vagrant).
The ssh port is specified in the ansible inventory:
[vms]
localhost:2222
The username is given on the command line to ansible-playbook:
ansible-playbook -i <inventory from above> <some playbook> -u vagrant
The communication with the VM works correctly, however, %p always expands to 22 and %r to the host username.
Consequently, I cannot flush the SSH connection (for the user's changed group membership to take effect) like this:
- name: flush the ssh connection
  command: ssh -o ControlPath="~/.ansible/cp/ansible-ssh-%h-%p-%r" -O stop {{ inventory_hostname }}
  delegate_to: 127.0.0.1
Am I making a silly mistake somewhere? Alternatively, is there a different way to flush the SSH connection?
The percent tokens are not expanded by Ansible, but by ssh itself later on.
Sorry, I forgot to add the most important part.
Using
command: ssh -o ControlPath=[...] -O stop {{inventory_hostname}}
will use the default port, because you didn't specify one on the command line. You would also have to pass the port to "flush" the connection this way:
command: ssh -o ControlPath=[...] -O stop -p {{inventory_port}} {{inventory_hostname}}
But I don't think it is needed: Ansible should clean up the connections when the playbook ends, and I don't see any other reason to do this.
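If the reason for flushing is a changed group membership in the middle of a play, newer Ansible (2.3 and later) has a built-in way to drop the control connection without hand-rolling ssh -O stop. A sketch:
# reset the ssh connection so the new group membership takes effect
- meta: reset_connection
This makes Ansible close and reopen its persistent SSH connection to the current host before the next task.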

Using dispy with port forwarding via ssh tunnel

I have dispynode running on a remote server. I'm trying to open an SSH tunnel from my computer (the client) and configure the dispy JobCluster to use this tunnel, but it's not working. Am I not configuring this right? Here's how I'm doing it:
(P.S. I don't have deep knowledge of distributed and parallel computing or networking, I'm a civil engineer, so please excuse me if I don't always use the right technical words.)
SSH tunnel:
plink -v -ssh -L 61:localhost:21 user@myserver.net
This forwards connections to port 61 to localhost:21 on the server where dispynode is running.
dispynode:
sudo dispynode.py -d --ext_ip_addr localhost -p 21 -i localhost
This will listen on port 21 and transmit using localhost, which leads back through the tunnel to the client.
with this dispy JobCluster code on the client:
cluster = dispy.JobCluster(runCasterDispyWorker,
                           nodes=[('localhost', 61)],
                           ip_addr='localhost',
                           ext_ip_addr='localhost',
                           port=61,
                           node_port=21,
                           recover_file='recover.rec',
                           )
When I launch the dispy script I get the following error in the command prompt from which I opened the SSH tunnel:
Opening connection to localhost:21 for forwarding from 127.0.0.1:64027
Forwarded port closed
At least I guess this means that dispy is trying to use the opened SSH tunnel, but I'm not sure what's happening server side. It seems that dispynode receives nothing.
Running a quick traffic capture with tcpdump on the server confirms it. For some unknown reason, the port changes to 64027.
I have also tried to open two SSH tunnels simultaneously:
One for client-to-server communication:
plink -v -ssh -L 61:localhost:21 user@myserver.net
One for server-to-client communication:
plink -v -ssh -R 20:localhost:60 user@myserver.net
but with no luck. I'm not even sure whether it is better to use remote forwarding or local forwarding.
I tried this solution that the developer of dispy himself suggested, but it didn't work for me:
http://sourceforge.net/p/dispy/discussion/1771151/thread/bcad6eaa/
Is the configuration I used above wrong? Should I use remote or local forwarding? Why does the port change automatically, could it be because my company's firewall is blocking the connection on the ports I'm trying to use? Has anyone managed to run dispy through an SSH tunnel before?
This worked for me. It should work for you:
SSH tunnel (I'm using PuTTY's plink.exe to create the tunnel):
plink -v -ssh -R 51347:localhost:51347 [username on server]@[server's public IP or domain name] -pw [user password on server] -N
dispynode (running on the server, Linux):
sudo dispynode.py -d --ext_ip_addr [public IP or domain name of server]
JobCluster (dispy client):
import os
import dispy, logging

def Worker():
    os.system('echo hello')  # prints hello on the server running dispynode
    return 0

cluster = dispy.JobCluster(Worker,
                           nodes=['IP public or domain name of server'],
                           ext_ip_addr='localhost',
                           recover_file='recoverdispy.rec',
                           )
job = cluster.submit()
print("waiting for job completion")
job()  # wait for the job to finish
print('status: %s\nstdout: %s\nstderr: %s\nexception: %s' % (job.status, job.stdout, job.stderr, job.exception))
Try this piece of code. Make sure the required ports are allowed through your firewall.
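For reference, if you are not on Windows, the plink remote forward above should have an OpenSSH equivalent along these lines (same placeholders):
ssh -N -R 51347:localhost:51347 [username on server]@[server's public IP or domain name]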

SSH Error: Permission denied (publickey,password) in Ansible

I am new to Ansible and I am trying to set it up. I tried all the approaches I could find on the Internet and read all the related questions, but I still can't resolve the error. How can I fix it?
I installed Ansible on my MacBook Pro. I created a VM whose IP address is 10.4.1.141; the host IP address is 10.4.1.140.
I tried to connect to my VM using the host via SSH. It connected by the following command:
ssh user#10.4.1.141
And I got the shell access. This means my SSH connection is working fine.
Now I tried the following command for Ansible:
ansible all -m ping
And the content of /etc/ansible/hosts is 10.4.1.141.
Then it shows the following error:
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to rerun the command using -vvvv, which prints SSH debug output to help diagnose the issue.
Then I tried creating a config file in the ~/.ssh/ folder on the host machine, but the error is still the same.
The content of the config file is:
IdentityFile ~/.ssh/id_rsa
which is the path to my private key.
Then I ran the same command ansible all -m ping and got the same error again.
When I tried another command,
ansible all -m ping -u user --ask-pass
Then it asked for the SSH password. I gave it (I am very sure the password is correct), but I got this error:
10.4.1.141 | FAILED => FAILED: Authentication failed.
This is the log using -vvvv:
<10.4.1.141> ESTABLISH CONNECTION FOR USER: rajatg
<10.4.1.141> REMOTE_MODULE ping
<10.4.1.141> EXEC ssh -C -tt -vvv -o ControlMaster=auto -o ControlPersist=60s -o ControlPath="/Users/rajatg/.ansible/cp/ansible-ssh-%h-%p-%r" -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 10.4.1.141 /bin/sh -c 'mkdir -p $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && chmod a+rx $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007 && echo $HOME/.ansible/tmp/ansible-tmp-1445512455.7-116096114788007'
10.4.1.141 | FAILED => SSH Error: Permission denied (publickey,password).
while connecting to 10.4.1.141:22
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
I am still not able to figure out what the problem is. Asking here is my last resort after doing all my research. This is the link I referred to.
I fixed the issue. The problem was in my /etc/ansible/hosts file.
The content of /etc/ansible/hosts was 10.4.1.141. When I changed it to rajat@10.4.1.141, the issue was fixed.
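Equivalently, you can keep the bare IP in the inventory and set the user with a variable instead of the user@host prefix. A sketch with the same username (the group name is arbitrary):
[webserver]
10.4.1.141 ansible_ssh_user=rajat
On Ansible 2.0 and later the same variable is also available as ansible_user.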
If you log in with ssh user@10.4.1.141:
Option 1
Then make sure that in your hosts file inside /etc/ansible you have:
[server01]
10.4.1.141
Then within /etc/ansible run:
ansible all -m ping -u user --ask-pass
Option 2
If you want to log in without typing the SSH password, then in your hosts file inside /etc/ansible you add:
[server01]
10.4.1.141 ansible_ssh_pass=xxx ansible_ssh_user=user
Then within /etc/ansible run:
ansible all -m ping
For me it worked both ways.
My case is that I have multiple private keys in my ~/.ssh.
Here is how I fixed it, by telling Ansible to use a specific private key:
ansible-playbook -i ../../inventory.ini --private-key=~/.ssh/id_rsa_ansiadmin update.yml
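The key can also be pinned per host in the inventory, so the command-line flag is not needed every time. A sketch, assuming the same key path and a placeholder host name:
[webserver]
your.host.example ansible_ssh_private_key_file=~/.ssh/id_rsa_ansiadmin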
The previous solutions didn't work for me, unfortunately (DevOps layman here!).
But the one below worked for me.
Change your inventory file to:
[webserver]
10.4.1.141 ansible_user=ubuntu
ansible webserver --private-key pem_file.pem -m ping
Running the command with -vvvv helped me to debug it further.
Reference: Failed to connect to the host via ssh: Permission denied (publickey,password) #19584
If you execute Ansible with sudo, for example
sudo ansible -m ping all
please keep in mind that the public key for root has to be on the server you want to reach as well, not only the public key of your non-root user. Otherwise, you get the error message above as well.
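An alternative that avoids running Ansible itself under sudo is to connect as your normal user and escalate on the target with become. A sketch:
ansible all -m ping -b --ask-become-pass
This way ssh still uses your own key, and privilege escalation happens only on the remote host.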
Most of these issues happen when connecting to Ubuntu machines in hosts.
Solution: Ansible needs to be told which user to connect as, because Ubuntu doesn't have a default root user.
For the hosts file:
[Test-Web-Server]
10.192.168.10 ansible_ssh_pass=foo ansible_ssh_user=foo
The problem lies in the inventory file.
vi /etc/ansible/hosts
It should be:
[webserver]
192.###.###.### ansible_ssh_user=user ansible_ssh_pass=pass
I have fixed this issue as well.
My issue was also in my hosts file, /etc/ansible/hosts.
I changed my hosts file from
172.28.2.101
to
name-of-server-in-ssh-config
I had IP addresses in the hosts file. Since I have SSH configurations already set up for names, I do not need to use a variable or username in front of the hosts.
[name-stg-web]
server-name-stg-web[01:02]
What first worked for me was to hardcode the target machine's root password in /etc/ansible/hosts like this:
[load_balancers_front]
loadbalancer1 ansible_host=xxx.xxx.xxx.xxx ansible_user=root ansible_password=root_password_in_target
But of course this is not recommended, for security reasons.
Then, I figured out a solution from the docs by doing:
ssh-agent bash --> read here
and then
ssh-add /my/private/ssh-key
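The same agent setup as a small sketch, with a verification step (the key path is the placeholder from above):
eval "$(ssh-agent -s)"       # start an agent in the current shell (roughly equivalent to 'ssh-agent bash')
ssh-add /my/private/ssh-key  # load the key into the agent
ssh-add -l                   # verify the key is loaded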
After this, my hosts file looks like this and ansible all -m ping works fine:
[load_balancers_front]
loadbalancer1 ansible_host=xxx.xxx.xxx.xxx ansible_user=root
Mentioning the username in the /etc/ansible/hosts file can also resolve the issue.
sudo vim /etc/ansible/hosts
[test-server]
ip_address ansible_user="remote pc's username"
[jenkinsserver]
publicdnsname ansible_user=ubuntu private_key=ubuntu.cer
Over the years, some operating systems have started requiring stronger SSH keys and no longer accept RSA and DSA keys. The message Permission denied (publickey,password) may therefore indicate that the OS needs a stronger SSH key instead of id_rsa.
Use the following command to generate new key:
ssh-keygen -t ecdsa -f ~/.ssh/id_ecdsa -N ""
Ensure that the server has the option
PubkeyAuthentication yes
in /etc/ssh/sshd_config or /etc/openssh/sshd_config.
Some other options may be required as well (read the documentation of your OS first), for example:
Protocol 2
PermitRootLogin without-password
AuthorizedKeysFile /etc/openssh/authorized_keys/%u /etc/openssh/authorized_keys2/%u .ssh/authorized_keys .ssh/authorized_keys2
Do not forget to restart the sshd service to apply the changes.
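How you restart sshd depends on the distribution; on a systemd-based system it is usually something like:
sudo systemctl restart sshd
(on some distributions the unit is called ssh instead of sshd).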
Copy the new key to the server with ssh-copy-id -i ~/.ssh/id_ecdsa user@server, then you can connect to the remote server using Ansible.
On the host machine you should install sshpass with the command below:
sudo apt install sshpass -y
and use this command to ping:
ansible all -i slaves.txt -m ping -u test --ask-pass
It will give you an interactive password prompt, where you enter the password of the slave machine.