Passwordless connection to a remote server from a Vagrant VM through ssh (rsync)

I'd like to run an rsync command from a Vagrant VM to a remote server (to push files) without needing a password.
So the machines involved are: host, guest VM, and remote.
The host is authorized on the remote via authorized_keys; however, when I run the rsync command from the VM I am asked for a password.
Is there a way to get passwordless rsync from the VM using the keys of the already-authorized host?
I'd like to avoid copying a new authorized key to the remote every time I create a VM.
Also, putting my server's password in the Vagrantfile is not an option.

Use SSH agent forwarding via ssh-agent. Follow these steps:
On your host machine:
ssh-add PATH_TO_KEY
vagrant ssh
In the Vagrant box, edit your ~/.ssh/config:
Host name_or_ip_of_remote
ForwardAgent yes
Now try to connect to the remote from the Vagrant box:
ssh name_or_ip_of_remote
It should work without a password. Since rsync uses ssh under the hood, it will work without a password too.
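Note: depending on your Vagrant version, you may also need config.ssh.forward_agent = true in your Vagrantfile so that vagrant ssh carries the host's agent into the guest. Once the agent is reachable inside the guest, the push is a plain rsync over ssh (a minimal sketch; the user and paths are placeholders):
rsync -avz /vagrant/build/ deploy_user@name_or_ip_of_remote:/var/www/app/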

Related

How to use the KeePassXC SSH Agent in a Linux terminal?

I have KeePass configured on a Windows machine, and SSH connections with KeeAgent work great in Windows Terminal and PuTTY. Now I have a new machine with Linux and I'm trying to configure KeePassXC to use SSH connections with the SSH agent in the terminal. I think I did everything in KeePassXC correctly (added a new entry, turned on the SSH agent, added the SSH key, etc.). When I type ssh-add -l in the terminal I can see that the SSH key is loaded. But when I try to log in to the server I'm still asked for a password... Please help.
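A first thing worth checking (a minimal diagnostic sketch; the server name is a placeholder) is whether the shell session actually sees KeePassXC's agent socket and whether the key is offered at all:
echo $SSH_AUTH_SOCK                  # should point at the agent socket KeePassXC serves
ssh-add -l                           # the loaded key should be listed, as observed
ssh -v user@your-server 2>&1 | grep -iE 'offering|identity'
If the key is offered but the server still asks for a password, the matching public key is probably missing from ~/.ssh/authorized_keys on the server.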

Setting up PyCharm Pro for remote ssh

I have PyCharm Pro and I need to set up a remote repo on it via ssh. There are instructions available for doing this, but my requirements are slightly different from what I have seen. My organisation has ssh set up via Cloudflare, and all those configs are already in .ssh/config: hostname, IdentityFile, username, and so on. If I need to access my remote machine, I just do ssh <HOST-NAME> from the terminal. My config looks like this:
Host <HOST-NAME>
HostName <HOST-NAME>.<ORGANISATION>.net
ProxyCommand /usr/local/bin/cloudflared access ssh --hostname %h
IdentityFile ~/.ssh/id_rsa
User <USER-NAME>
The way to set up PyCharm usually involves specifying host, port, username, etc., so that PyCharm can essentially run
ssh username@host_ip_address. However, I just need it to run ssh <HOST-NAME> so that it picks up everything else from the config file that is already set up.
Is there a way to make this happen?
I use PyCharm Pro with my own .ssh/config, a jump-proxy pseudo hostname, and certificates, so my setup is slightly different: I just run "ssh pseudo-hostname", with no user either.
In PyCharm I set a valid username and port (default 22) because in my case they get passed on.
Your setup ignores the username and port, no? If so, you can use placeholder values.
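To see exactly what OpenSSH itself resolves for the alias, independent of PyCharm, ssh -G (available since OpenSSH 6.8) prints the effective configuration; <HOST-NAME> here is the alias from the question:
ssh -G <HOST-NAME> | grep -iE 'hostname|^user |^port |identityfile|proxycommand'
If the alias resolves correctly there, entering <HOST-NAME> as the host in PyCharm with the same username should let the OpenSSH config, including the ProxyCommand, do the rest, provided your PyCharm version reads the OpenSSH config (newer versions offer an "OpenSSH config and authentication agent" authentication type).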

Error Public Key when trying to ssh into Google Cloud Platform VM

I had been using VSCode's Remote-SSH to access my virtual machines running on Google Cloud. This had been working perfectly fine until I made a snapshot of my most recent instance and created a new instance from it on a larger VM. Now when I try to connect (through any method) I get: "Permission denied (publickey).". I have spent countless hours deleting, re-adding, and recreating my ssh keys to no avail. Before, I simply ran "gcloud compute config-ssh" and this created a working config file, but now this no longer works. Please help, I have tried everything and there is simply no way for me to ssh. On the website I can click the SSH button to open up their shell, but I cannot do it from my terminal.
The problem may be that VSCode is not offering your SSH private key during the connection. You can point to your private key by adding an IdentityFile option to the matching Host entry in your SSH configuration file:
Host vm_name
HostName external_ip
IdentityFile /path/to/ssh_private_key
Port port_number
Here is the long story, if you or someone else needs more information.
You can start over from scratch to make sure your SSH keys are not compromised and are not the origin of the problem.
Create SSH Key
First, create new ssh keys. On the computer you will use to access your remote host (your Google VM instance), open your terminal or cmd and go to the ssh folder to generate the keys.
My ssh config and keys are under my user directory: /home/my_user/.ssh on Linux or C:\Users\my_user\.ssh on Windows.
Then cd to one of these paths, depending on which system you are using at the moment.
Linux:
cd /home/my_user/.ssh
Windows:
cd C:\Users\my_user\.ssh
Command to generate an SSH key:
ssh-keygen -t rsa -f my_ssh_key -C user
my_ssh_key: the name of your key; use whatever name helps you identify it.
user: must be the user that you want to use to connect to your Google VM instance.
This will generate a private key named my_ssh_key and a public key named my_ssh_key.pub.
Alternatively, stay wherever you are in the filesystem and pass the absolute path where the keys should be generated:
Linux:
ssh-keygen -t rsa -f /home/my_user/.ssh/my_ssh_key -C user
Windows:
ssh-keygen -t rsa -f C:\Users\my_user\.ssh\my_ssh_key -C user
Copy the public key into your Google Cloud VM's authorized_keys file:
/home/my_user/.ssh/authorized_keys
Note: do not overwrite any public key that already exists; just append to the authorized_keys file.
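If you still have some way into the VM (for example the SSH button in the Cloud Console), appending can be done like this (a sketch; run on the VM, pasting the content of my_ssh_key.pub in place of the placeholder):
echo 'ssh-rsa AAAA... user' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys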
Add a new ssh Host entry for the remote connection
Click on the Remote SSH indicator in the VS Code status bar, click on the Remote-SSH: Open Configuration File option, and choose your ssh configuration file to add another SSH entry for the remote connection.
The config file must be under the SSH directory, the same path used in the key-generation step.
Linux:
/home/my_user/.ssh/config
Windows:
C:\Users\my_user\.ssh\config
To add another Host, write the following, making the appropriate changes:
Host vm_name
HostName external_ip
IdentityFile /path/to/ssh_private_key
Port port_number
vm_name: an alias for connecting with the ssh command in a convenient way; it can be whatever you want.
external_ip: the external IP of your Google VM instance; you can get it from the VM instances panel at https://console.cloud.google.com/
IdentityFile: the path to your private SSH key, the generated file that does not have the .pub extension.
Linux:
/home/my_user/.ssh/my_ssh_key
Windows:
C:\Users\my_user\.ssh\my_ssh_key
Port: the SSH port number of your Google VM instance; 22 is the default port.
Now just choose this host to connect to your Google VM instance.
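From a terminal you can test the new entry directly (using the alias from the config above); if this logs in, the VS Code Remote-SSH connection should work with the same entry:
ssh vm_name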
For more details about SSH settings on Google Cloud Platform: https://cloud.google.com/compute/docs/instances/adding-removing-ssh-keys#linux-and-macos_1

SSH to GitHub not working

SSH has been working fine for the last few weeks since I got my new PC. I've had no problems but today I started getting:
ssh: connect to host github.com port 22: resource temporarily unavailable
I did some googling and found that there is a common issue with WSL which sometimes causes this, but I'm unable to SSH from my bash shell, or from cmd/powershell.
This is the part that confuses me: if I do ssh -T git@192.30.253.113 I am prompted for the passphrase to my key, it successfully authenticates and responds with "Hi alexmk92! You've successfully authenticated".
Great, that at least proves that my firewall isn't blocking SSH on port 22. But why does git@github.com throw the resource error? My initial thought was that this could be a DNS problem.
So I tried configuring my network adapter to use Google's DNS servers (8.8.8.8 and 8.8.4.4); I even configured the IPv6 DNS servers just in case. Following this I did an ipconfig /flushdns, attempted to connect via git@github.com again, and BAM, the same result; however, git@192.30.253.113 still works.
I'm guessing another potential cause is that github.com is behind a load balancer and one of the IPs in the cluster could be blacklisted somewhere on my machine? I'm just pulling guesses out of thin air now; any help would be greatly appreciated, this is driving me insane.
After some further Googling it turned out that my machine did not have a hosts entry for github.com and it was unable to automatically resolve it.
In Windows Subsystem for Linux I created an ssh config file:
touch ~/.ssh/config
(for some reason the base Ubuntu 18.04 distro from the Windows marketplace didn't have one). I then had to make sure the file permissions were correct:
chmod 755 ~/.ssh/config
Once the file was created, I edited it with
sudo nano ~/.ssh/config
and added github.com as a Host.
Host github.com
Hostname ssh.github.com
Port 22
Upon saving, I ran
sudo /etc/init.d/ssh restart
and attempted
ssh -T git@github.com
Everything now seems to be working.
In my case my ISP did not allow ssh, so it was not working from both cmd and WSL. I got around it using a VPN.
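Besides a VPN, GitHub also documents running SSH over the HTTPS port, which often gets through such blocks (the ssh.github.com/443 endpoint is from GitHub's own docs):
Host github.com
Hostname ssh.github.com
Port 443
User git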
To have a successful SSH connection to GitHub, your SSH key has to be imported into GitHub:
Open Git Bash or a terminal
Run the command ssh-keygen
Accept all the default options
A private and a public key get generated in the folder <user_home>/.ssh/
Log in to GitHub.com
Navigate to account settings
Choose "SSH and GPG keys" from the side navigation bar
Click "New SSH key"
Copy and paste the public key content from <user_home>/.ssh/id_rsa.pub
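Once the key is added, a quick verification (assuming the default key path):
cat ~/.ssh/id_rsa.pub     # this is the content to paste into GitHub
ssh -T git@github.com     # should greet you by username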

Connecting to a remote server from local machine via ssh-tunnel

I am running Ansible on my machine, and my machine does not have ssh access to the remote machine: port 22 connections originating from the local machine are blocked by the institute firewall. But I have access to a machine (ssh-tunnel) through which I can log in to the remote machine. Is there a way to run an Ansible playbook from the local machine against the remote hosts?
In other words, is it possible to make Ansible/ssh connect to the remote machine via ssh-tunnel without actually logging in to ssh-tunnel, with the connection just passing through the tunnel?
The other option would be to install Ansible on ssh-tunnel and run the plays from there, but that would not be a desirable solution.
Please let me know if this is possible.
There are two ways to achieve this without installing Ansible on the ssh-tunnel machine.
Solution#1:
Use these variables in your inventory:
[remote_machine]
remote ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user='username' ansible_ssh_private_key_file='/home/user/private_key'
These settings point Ansible at the local end of an SSH tunnel on 127.0.0.1:2222; if you need help with the parameters, please ask in the comments.
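For that inventory to work, a tunnel has to be listening on local port 2222 and forwarding to the private server's SSH port (a sketch; x.x.x.x is the tunnel host and y.y.y.y the private server, matching Solution#2 below):
ssh -f -N -L 2222:y.y.y.y:22 username@x.x.x.x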
Solution#2:
Create a ~/.ssh/config file and add the following parameters:
####### Access to the Private Server through ssh-tunnel/bastion ########
Host ssh-tunnel-server
HostName x.x.x.x
StrictHostKeyChecking no
User username
ForwardAgent yes
Host private-server
HostName y.y.y.y
StrictHostKeyChecking no
User username
ProxyCommand ssh -q ssh-tunnel-server nc -q0 %h %p
Hope that helps; if you need anything else, feel free to ask.
There is no need to install Ansible on the jump or remote servers; Ansible is an ssh-only tool. :-)
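With Solution#2 in place, the private server can be named directly in the inventory, and the ssh that Ansible spawns picks up the ProxyCommand from ~/.ssh/config automatically:
[remote_machine]
private-server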
First make sure the SSH tunnel works directly.
From your local machine (Local_A), you can log in to the remote machine (Remote_B) via the jump box (Jump_C).
On Local_A, set up the tunnel (note: the forward goes through Jump_C to remote_B, since remote_B is not directly reachable):
ssh -f user@Jump_C -L 2000:remote_B:22 -N
The options are:
-f tells ssh to background itself after it authenticates, so you don't have to sit around running something on the remote server for the tunnel to remain alive.
-N says that you want an SSH connection, but you don't actually want to run any remote commands. If all you're creating is a tunnel, then including this option saves resources.
-L [bind_address:]port:host:hostport
Specifies that the given port on the local (client) host is to be forwarded to the given host and port on the remote side.
There will be a password challenge unless you have set up DSA or RSA keys for a passwordless login.
There are lots of documents teaching you how to do the ssh tunnel.
Then, with the tunnel up, point Ansible at its local end, e.g. with an inventory entry in the style of Solution#1 above, and run from Local_A:
[remote_machine]
remote_B ansible_ssh_host=127.0.0.1 ansible_ssh_port=2000 ansible_ssh_user='user'
ansible -vvvv remote_B -m shell -a 'hostname -f'
You should see the remote_B hostname. Let me know the result.
Let's say you can ssh into x.x.x.x from your local machine, and ssh into y.y.y.y from x.x.x.x, while y.y.y.y is the target of your ansible playbook.
inventory:
[target]
y.y.y.y
playbook.yml
---
- hosts: target
  tasks: ...
Run:
ansible-playbook --ssh-common-args="-o ProxyCommand='ssh -W %h:%p root@x.x.x.x'" -i inventory playbook.yml
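On a recent OpenSSH (7.3+), the same hop can be written with ProxyJump instead of ProxyCommand (an equivalent sketch, same hosts as above):
ansible-playbook --ssh-common-args='-J root@x.x.x.x' -i inventory playbook.yml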