I have some servers launched using Terraform in Azure.
Now I need to change the SSH keys of some of the servers.
Can I do it using Terraform?
Will Terraform re-launch the server, and will the data be lost, if the key is changed in the Terraform script and applied?
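For context, before applying I was planning to run a plan and check what Terraform intends to do with the VM resource. This is only a rough sketch of that check, and the exact output wording depends on the Terraform version:
terraform plan
# In the plan output, look at the VM resource: "-/+ destroy and then create replacement"
# (or "forces replacement" next to the changed attribute) means the server would be rebuilt,
# while "~ update in-place" means the key change would be applied without re-creating it.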
Thanks in advance
Regards
Jayakrishnan
I'm a newbie with GCP and I need your help. These are the steps I took:
(1) I set up Google Cloud firewall rules to allow SSH on port 22, and I can SSH to my instance (CentOS 7) correctly.
(2) When I connect to my instance, I run a firewall script, and after that I cannot SSH to my instance anymore. It seems that the script blocks the SSH port even though I allow it in VPC Network > Firewall rules.
(3) Now I cannot connect to my instance at all, including via "Open in browser window" in the SSH menu on the GCP console.
Is there any way to connect to my instance? Please help.
Thanks in advance.
Bom
You have probably blocked the SSH port by changing the firewall configuration inside the VM.
So you can consider 2 options:
1) Recreate the VM if there is no sensitive data on it, or not too much work spent on the existing setup.
2) Detach the boot disk and reuse it on another instance, in order to change the firewall configuration files.
Check the official docs, "Use your disk on a new instance", for that:
gcloud compute instances delete $PROB_INSTANCE \
    --keep-disks=boot
gcloud compute instances create new-instance \
    --disk name=$BOOT_DISK,boot=yes,auto-delete=no
gcloud compute ssh new-instance
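Once you have a shell again, re-allowing SSH in the in-guest firewall on CentOS 7 could be as simple as the following. This is only a sketch, assuming the script you ran uses firewalld; adapt it if the script manipulated iptables directly:
sudo firewall-cmd --permanent --add-service=ssh   # re-allow the ssh service in the permanent configuration
sudo firewall-cmd --reload                        # apply the permanent rules to the running firewall
sudo systemctl status sshd                        # check that the SSH daemon itself is still running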
Hope it will help you.
I am getting an error while trying to SSH to my gcloud instance.
I have removed the old SSH key and generated a new one, and tried to connect, but the problem remains.
Please share your suggestions.
Check whether port 22 is open in the firewall for that specific instance. You can follow this document to manage your firewall rules. You can also try connecting via the serial console instead. The issue you are facing could arise for many different reasons, so it is worth trying different troubleshooting steps for SSH connectivity.
If you created the new SSH key properly, then check whether you added the key to your instance or project-wide metadata. This article is a good read.
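For reference, those checks can be run from Cloud Shell or anywhere the gcloud CLI is installed. The instance name and zone below are placeholders:
gcloud compute firewall-rules list                                      # confirm a rule allowing tcp:22 applies to the instance's network/tags
gcloud compute instances add-metadata my-instance --zone us-central1-a \
    --metadata serial-port-enable=TRUE                                  # enable interactive serial console access
gcloud compute connect-to-serial-port my-instance --zone us-central1-a  # log in via the serial console instead of SSH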
From the following Data Pipeline ShellCommandWith (S)FTP Sample:
The sample relies on having public key authentication configured to access the SFTP server.
How do I configure public key authentication so that my Amazon Data Pipeline's ShellCommandActivity can access an on-prem server via SFTP and upload files to S3?
What command do I put in my ShellCommandActivity to test if it can talk to my on-prem SFTP server?
Since you don't want to expose your password in your pipeline definition, the sample assumes you set up passwordless ssh login for your ftp server.
You can learn more about that here: http://www.linuxproblem.org/art_9.html
Please make sure that is allowed in your organization's security rules or check with someone in your IT department if you are unsure.
If you are allowed and able to set that up, you can test it by running the sample. It has an sftp command that gets executed as part of the ShellCommandActivity.
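A minimal sketch of that setup, assuming a passphrase-less key is acceptable in your environment; the user name and host name below are placeholders:
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa_sftp           # key pair dedicated to the pipeline, no passphrase
ssh-copy-id -i ~/.ssh/id_rsa_sftp.pub user@sftp.example.com     # add the public key to the SFTP server (this one-time copy still needs an existing way to authenticate)
echo "ls" | sftp -i ~/.ssh/id_rsa_sftp user@sftp.example.com    # should list the remote directory without prompting for a password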
I am using the AWS Java SDK to launch EC2 instances (running Ubuntu 12.04) and run a distributed tool on them. The tool uses OpenMPI for message passing between the nodes, and OpenMPI uses SSH to connect the nodes to each other.
The problem is that the EC2 instances don't authenticate each other for SSH connections by default. This tutorial shows how to set up SSH by generating keys and adding them to the nodes. However, when I tried to add the generated key to the slaves using the command
$ scp /home/mpiuser/.ssh/id_dsa.pub mpiuser@slave1:.ssh/authorized_keys
I still got permission denied. Also, after generating new keys, I was not able to log in using the ".pem" key that I got from Amazon.
I am not experienced with SSH keys, but I would like some way of configuring each EC2 instance (when it is first created) to authenticate the others, for example by copying a key into each of them. Is this possible, and how could it be done?
P.S.: I can connect to each instance once it is launched and can execute any commands on them over SSH.
I found the solution: I added the Amazon private key (.pem) to the image (AMI) that I use to create the EC2 instances, and I changed the /etc/ssh/ssh_config file by adding a new identity file:
IdentityFile /path/to/the/key/file
This made SSH recognize the .pem private key when it tries to connect to any other EC2 instance created with the same key.
I also changed StrictHostKeyChecking to no, which stops the "authenticity of host xxx can't be established" prompt that otherwise requires user interaction before connecting to that host.
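Put together, the relevant part of /etc/ssh/ssh_config ends up looking something like the stanza below. The Host pattern and key path are examples; narrowing the pattern to your own hosts is safer than applying it to everything:
Host *
    # example pattern: applies to every host; restrict it to your cluster's addresses if possible
    IdentityFile /path/to/the/key/file
    StrictHostKeyChecking no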
I was going to do rsync, but rsync over SSH needs to have the private key on the second EC2 instance. I'm concerned about copying my private SSH key to the server. That can't be safe, right?
Is there another possibility, e.g. somehow getting authentication via my computer? If it's only a little auth check at the beginning of each sync, I don't mind that.
Or can I securely sync files between EC2 instances without the private key?
Thanks for your input,
MrB
You needn't use your EC2 keys to set up SSH between the two EC2 instances. Have a look at this guide - http://ask-leo.com/how_can_i_automate_an_sftp_transfer_between_two_servers.html .
A simple outline of the process: let's say you want to transfer files from Server1 to Server2. You basically create a new key for your user on Server1 (note this is different from the key you downloaded to access your EC2 instance, Server1 in this case). Then load the public part into Server2's authorized_keys and you should be able to set up SSH.
If the user that the rsync process is going to run under is not your own user, then you will have to set up the SSH keys for that user instead.
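As a rough sketch (the user names, host names and paths below are placeholders):
# On Server1, as the user that will run rsync:
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_sync -N ""                       # new key pair, separate from your EC2 .pem
ssh-copy-id -i ~/.ssh/id_rsa_sync.pub ubuntu@server2                        # appends the public key to ~/.ssh/authorized_keys on Server2 (this one-time copy still needs an existing way to authenticate)
rsync -avz -e "ssh -i ~/.ssh/id_rsa_sync" /data/ ubuntu@server2:/backup/    # test the sync over SSH; it should not ask for a password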
HTH
Just create a snapshot of the volume that contains your modified files, create a new volume from that snapshot, and attach it to your outdated instance after detaching the outdated volume.
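Assuming this is an EBS-backed EC2 setup, the equivalent AWS CLI steps look roughly like this; every ID, the availability zone and the device name are placeholders:
aws ec2 create-snapshot --volume-id vol-SOURCE --description "files to move"       # snapshot the volume that holds the modified files
aws ec2 create-volume --snapshot-id snap-FROMABOVE --availability-zone us-east-1a  # create a new volume from that snapshot, in the instance's AZ
aws ec2 detach-volume --volume-id vol-OUTDATED                                     # detach the outdated volume from the instance
aws ec2 attach-volume --volume-id vol-NEW --instance-id i-OUTDATEDINSTANCE --device /dev/sdf  # attach the new volume in its place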