I was going to do rsync, but rsync over SSH needs to have the private key on the second EC2 instance. I'm concerned about copying my private SSH key to the server. That can't be safe, right?
Is there another possibility, e.g. somehow getting authentication via my computer? If it's only a little auth check at the beginning of each sync, I don't mind that.
Or can I securely sync files between EC2 instances without the private key?
Thanks for your input,
MrB
You needn't use your EC2 keys to set up SSH between the two EC2 instances. Have a look at this guide: http://ask-leo.com/how_can_i_automate_an_sftp_transfer_between_two_servers.html
A simple outline of the process: let's say you want to transfer files from Server1 to Server2. You create a new key pair for your user on Server1 (note that this is different from the key you downloaded to access your EC2 instance, Server1 in this case). Then load the public part into Server2's authorized_keys and you should be able to set up SSH.
If the rsync process is going to run under a different user, then you will have to set up the SSH keys for that user instead.
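As a rough sketch of those steps (the user name syncuser and host name server2 are placeholders, and ssh-copy-id assumes you can already log in to Server2 some other way, e.g. with the EC2 key from your own machine):
$ ssh-keygen -t ed25519 -f ~/.ssh/sync_key -N ""          # new key pair on Server1, separate from your EC2 .pem
$ ssh-copy-id -i ~/.ssh/sync_key.pub syncuser@server2     # appends the public part to Server2's authorized_keys
$ rsync -av -e "ssh -i ~/.ssh/sync_key" /data/ syncuser@server2:/backups/
If ssh-copy-id isn't available, you can paste the contents of sync_key.pub into ~/.ssh/authorized_keys on Server2 by hand.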
HTH
Just create a snapshot of the volume that contains your modified files, create a new volume from that snapshot, and attach it to your outdated instance after detaching the outdated volume.
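If you want to do the same thing from the AWS CLI, it looks roughly like this (every ID and the device name below are placeholders, and the new volume has to be created in the instance's availability zone):
$ aws ec2 create-snapshot --volume-id vol-SOURCE --description "modified files"
$ aws ec2 create-volume --snapshot-id snap-FROM-ABOVE --availability-zone us-east-1a
$ aws ec2 detach-volume --volume-id vol-OUTDATED
$ aws ec2 attach-volume --volume-id vol-NEW --instance-id i-OUTDATED --device /dev/sdf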
About the Question
I want to create a hierarchy of servers: there is a bastion host in one VPC that allows SSH connections only from my local IP address. Then there is one more instance (let's call it B) in another VPC that accepts connections only from the bastion host. Finally, there is a set of instances in a third VPC, all of which accept SSH connections only from B. So it looks like:
local -----> bastion host -----> Instance B -----> all other instances
In addition to this configuration, I don't want to add a private SSH key to any of the instances in the cloud, for security reasons. I only want to store the private key on my local machine, which I will use to SSH into the bastion host.
Approaches tried till now
Generated an SSH key pair.
Added the public key to the Compute Engine metadata section so that it's available to all the instances in the project.
Tried to use SSH agent forwarding to implement this, but I am only able to reach Instance B; beyond that I get a "Permission denied (publickey)" error.
I want to know how I can implement this scenario so that I can reach Instance B's terminal and then access all the other instances as described. Is it possible to do this with only one SSH key pair? Any help would be greatly appreciated.
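For what it's worth, here is a minimal sketch of the kind of chained setup being described, assuming OpenSSH with agent forwarding and ProxyJump; the host names and addresses are placeholders, and the private key stays on the local machine only:
# ~/.ssh/config on the local machine
Host bastion
    HostName BASTION_PUBLIC_IP
    User myuser
    IdentityFile ~/.ssh/id_ed25519
    ForwardAgent yes
Host instance-b
    HostName INSTANCE_B_PRIVATE_IP
    User myuser
    ProxyJump bastion
    ForwardAgent yes
$ ssh-add ~/.ssh/id_ed25519                # load the key into the local agent
$ ssh instance-b                           # local -> bastion -> B, no key copied to the cloud
myuser@instance-b$ ssh OTHER_INSTANCE_IP   # works because the agent is forwarded, not copied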
When I try to access the VM through SSH, it gets stuck at transferring SSH keys to the SSH servers.
Already checked the solutions mentioned at:
https://groups.google.com/forum/#!topic/gce-discussion/zJS0qFFQYlM
but they didn't work.
These messages are shown on the screen when I try to access the VM through SSH:
The key transfer to project metadata is taking an unusually long time. Transferring instead to instance metadata may be faster, but will transfer the keys only to this VM. If you wish to SSH into other VMs from this VM, you will need to transfer the keys accordingly.
Click here to transfer the key to instance metadata. Note that this setting is persistent and needs to be disabled in the Instance Details page once enabled.
You can drastically improve your key transfer times by migrating to OS Login.
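As a side note, OS Login (mentioned in the last message) can usually be switched on with a single metadata flag via gcloud; the VM name and zone below are placeholders:
$ gcloud compute project-info add-metadata --metadata enable-oslogin=TRUE                            # project-wide
$ gcloud compute instances add-metadata my-vm --zone us-central1-a --metadata enable-oslogin=TRUE    # or for just one VM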
I'm trying to clone an EC2 instance so that I can test some things. I created an AMI and launched an instance, and it seems to be running OK. However, I cannot connect to it with ssh or PuTTY.
My live instance, which I'm making a copy of, has various users who can all log in happily with their private keys. But they cannot log in to the cloned instance with the exact same credentials. I just get:
Disconnected: No supported authentication methods available (server sent: publickey)
Is there more to do than to just change the IP address from the live instance to the cloned instance?
I also cannot connect as ec2-user, using the private key I created during launch. One slight quirk of my live server is that I had to change the AuthorizedKeysFile setting in /etc/ssh/sshd_config in order to deal with some SFTP problems I was having. Is this likely to have messed up the connection for the cloned server? Surely all the settings are identical?
The answer was to do with the AuthorizedKeysFile setting after all. I undid the edit I made in /etc/ssh/sshd_config, took another snapshot, made another AMI, launched another instance and all was well. I didn't even need to restart the sshd service, so this didn't mess up my configuration on my live server.
I'm not entirely sure why this caused a problem, but the lesson here is that EC2 seems to need AuthorizedKeysFile set to the default location, or I guess it doesn't know where to look for the public key.
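For reference, the stock directive on a typical EC2 Linux image looks roughly like this (the exact default varies a little by distribution, so treat it as an assumption). A likely explanation is that cloud-init writes the key pair selected at launch into ~/.ssh/authorized_keys, so if sshd is told to look somewhere else, that key is never found:
# /etc/ssh/sshd_config
AuthorizedKeysFile .ssh/authorized_keys
$ sudo /usr/sbin/sshd -t    # check that the config parses before restarting the service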
I want to offer a backup storage service to some of my friends. I have a QNAP NAS and want to make it accessible across the internet so my friends can back up their files to it.
I think rsync is the best protocol for this, but I want to know how to make it secure. I can start the rsync server and configure my router to forward the port, but then the data goes across the net unencrypted. I can use SSH instead. But how do I set things up so that they cannot log in to the machine, or at least cannot see the files that others have stored on there? I basically want to sandbox them.
I've been searching the net a lot and have found plenty of information about setting up a personal rsync server to back up your personal stuff, but I have not been able to find anything about the use case I described above.
You don't need to set up an rsync server (rsyncd) - you can just use SSH (which rsync uses by default as its transport) and the remote rsync will be started automatically. Create an account on your server for each user, and then they can simply back up like this, e.g.
$ rsync -av /path/to/local/files username@your_server:path/to/backups/
So all you need to do other than creating user accounts is to open port 22 for incoming ssh traffic.
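A rough sketch of the per-user setup, assuming a Linux-style shell on the NAS; the user name friend1 and the /share/backups path are placeholders:
$ sudo adduser friend1                          # one account per friend
$ sudo mkdir -p /share/backups/friend1
$ sudo chown friend1:friend1 /share/backups/friend1
$ sudo chmod 700 /share/backups/friend1         # nobody else can read this directory
Keeping each backup directory at mode 700 is also a cheap way to stop the friends from browsing each other's files.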
I am using the AWS Java SDK to launch EC2 instances (running Ubuntu 12.04) and run a distributed tool on them. The tool uses Open MPI for message passing between the nodes, and Open MPI uses SSH to connect the nodes to each other.
The problem is that the EC2 instances don't authenticate each other for SSH connections by default. This tutorial shows how to set up SSH by generating keys and adding them to the nodes. However, when I tried to add the generated key to the slaves using the command
$ scp /home/mpiuser/.ssh/id_dsa.pub mpiuser@slave1:.ssh/authorized_keys
I still got permission denied. Also, after generating new keys, I was not able to log in using the ".pem" key that I got from Amazon.
I am not experienced with SSH keys, but I would like to have some way of configuring each EC2 instance (when it is first created) to authenticate the others, for example by copying a key into each of them. Is this possible, and how could it be done?
P.S.: I can connect to each instance once it is launched and can execute any commands on them over SSH.
I found the solution: I added the Amazon private key (.pem) to the image (AMI) that I use to create the EC2 instances, and I changed the /etc/ssh/ssh_config file by adding a new identity file
IdentityFile /path/to/the/key/file
This made SSH recognize the .pem private key when it tries to connect to any other EC2 instance created with the same key.
I also changed StrictHostKeyChecking to no, which stopped the "authenticity of host xxx can't be established" message that requires user interaction before proceeding with connecting to that host.
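Put together, the relevant part of /etc/ssh/ssh_config would look roughly like this (the key path is the same placeholder as above; note that disabling StrictHostKeyChecking also removes SSH's protection against man-in-the-middle attacks, so it is a trade-off):
# /etc/ssh/ssh_config - outgoing SSH connections from instances built from this AMI
Host *
    IdentityFile /path/to/the/key/file
    StrictHostKeyChecking no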