My goal is to eliminate private-key authentication.
I have an Android application in which I connect to my Google Cloud virtual instance and run a couple of scripts. For testing purposes I currently have the private key on the phone, so I can connect to Google Cloud and do what I described. My question is:
How could I eliminate the private key, making the connection less secure but easier to test on more phones?
My final goal is to send an email to an email address and have that run the scripts on the Google Cloud virtual instance. Is this possible?
Thanks
You will need to edit the SSH configuration file and set PubkeyAuthentication to no in /etc/ssh/sshd_config.
Then you need to restart the SSH server in order to load the new configuration. For Ubuntu: sudo service ssh restart; for CentOS: sudo service sshd restart.
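Note that if key-based logins are switched off you will usually also need to allow another login method, or you may lock yourself out; a minimal sketch of the relevant /etc/ssh/sshd_config lines, assuming password login is the fallback you want, would be:
# in /etc/ssh/sshd_config
PubkeyAuthentication no
PasswordAuthentication yes
# then reload the configuration (Ubuntu; use sshd on CentOS)
sudo service ssh restart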
Related
So I'm using ssh-agent on my host, and we use Vagrant, VirtualBox and other standard tools to set up our environment. At one point we need to provision new machines (we build appliances) and push our key into each appliance. So far we had been leaving our keys unprotected and just copying our id_rsa(.pub)? onto the virtual machine. It suited us!
Recently I started using ssh-agent on my main machine to hold my keys with a passphrase (much better, obviously), but from time to time I seem to have problems running ssh-copy-id to copy my identity to the appliance.
Most of the time I have to just copy my protected key files to the VM and then it works, but I'm sure I was able at one point to leave the virtual machine clean of any SSH keys and still use ssh-copy-id to push my ssh-agent-provided identity to the VM. It doesn't seem to work half of the time.
Am I crazy? What could be the root cause of this? How do I solve this problem and ensure that my protected SSH identity is forwarded from my local agent to the VM and to any other appliance properly?
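For what it's worth, a quick way to check what the agent is actually offering, and to push a specific identity, is roughly the following (the user and host names are placeholders):
# list the public keys the agent currently holds
ssh-add -L
# push one specific public key to the appliance
ssh-copy-id -i ~/.ssh/id_rsa.pub user@appliance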
I have a machine that I want to set up an SFTP connection to. The SSH server is running properly, I can ssh into it from my client computer, and I can SFTP in from my smartphone. I'm just a bit confused about how to properly configure the ~/.netrc file. The server computer is running Ubuntu; the client computer is running OS X.
Here are my main requirements for what I'm trying to configure:
Alias. I don't have a DNS name for the computer I'm connecting to, just the IP address. ~/.ssh/config is great because it basically assigns aliases to connections and then specifies the hostname, port, etc. (see the sketch after this question). Looking at the man page for ~/.netrc, I don't see a way to do this.
Private Key. This SFTP connection is validated using a private key. I don't see anything in the ~/.netrc man page about how to specify the key.
If ~/.netrc is the wrong way to go, what alternatives would be better?
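For reference, the kind of ~/.ssh/config entry described above looks roughly like this (the alias, IP, port, user and key path are all placeholders); sftp honours it, so "sftp myserver" would then work without a ~/.netrc entry:
# ~/.ssh/config
Host myserver
    HostName 192.0.2.10
    Port 22
    User myuser
    IdentityFile ~/.ssh/id_rsa_sftp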
Before asking this question I looked through Google and tried different alternatives, none of which were successful for me, sadly. I'm a little above the noob level. What I want is basically to host a WordPress site on a Google Cloud Debian machine.
I was doing fine installing services through their SSH access until I got to the point where I installed an FTP service and wanted to access it from a remote computer (my own). I only got as far as:
Status: Waiting to retry...
Status: Connecting to 104.197.183.19...
Response: fzSftp started
Command: open "root#104.197.183.19" 22
Error: Connection timed out
Error: Could not connect to server
I kept on looking and trying new approaches until I found the gcloud documentation for FTP, but it is not aimed at newcomers, so my questions are:
Where do I input the gcloud commands, on my own computer or in the SSH console (the Google Cloud machine)?
Do I need to use gcloud for remote FTP access, or can I do it entirely from my own computer and their SSH machine?
Do I really need to add an SSH authorization file to FileZilla, or is there a way I can disable that check on my VPS so it lets me sign in with just a username and a password?
What I already tried and what didn't work for me:
gcloud documentation for SSH and FTP
Google Cloud documentation for setting up a WordPress site
Many others
Basically, what I need in short is to manage to access the VPS through FTP so I can continue with my learning. I've been stuck there for two days.
To get access to a user's public area, i.e. public_html:
Go to the account's cPanel area and under Security > SSH Access you can import a key file.
You can use PuTTYgen to make one; you will need both a private and a public key.
Paste the keys into the boxes.
You may get a warning message about the private key; this is OK.
Go to Manage under the public key and authorize it.
Or
Make one using the interface in cPanel and download both keys.
Then in FileZilla
Host: IP of server
Protocol: SFTP
Logon Type: Key File
Key File: the PPK you made.
(If you asked cPanel to make the file, select the one that does not end in .pub and FileZilla will convert it to a .ppk file for you.)
After clicking Connect you should be in (a command-line equivalent is sketched below).
If you still have an error, make sure the SSH port (22) is open in your firewalls, both in Google Cloud (cloud.google.com > Networks) and in the WHM > LFD/CSF plugin.
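If you want to sanity-check the connection outside FileZilla, a rough command-line equivalent (the user name and key path are placeholders, and the OpenSSH sftp client expects an OpenSSH-format key rather than a .ppk) is:
# connect over SFTP using an explicit private key
sftp -i ~/.ssh/my_cpanel_key user@104.197.183.19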
Use the SSH File Transfer Protocol (SFTP).
There is no need to install an FTP service.
Use WinSCP to connect over SFTP.
The recommended way of transferring files to a Unix-based Google Compute Engine VM is via the gcloud compute copy-files command. For this, please install the Google Cloud SDK. Then, run a command such as the following:
gcloud compute copy-files --zone=<Compute Engine zone> /path/to/local/file.txt <Compute Engine instance name>:/path/to/destination/file.txt
If you'd like to use FileZilla, you'll have to configure it for access. The SSH daemon on Compute Engine VMs is set up for key-based authentication. This forum post indicates how this is possible in FileZilla. The catch is that you need to put your public key on the VM, which can be a little tricky. gcloud compute copy-files and gcloud compute ssh take care of this for you, which is why they are the recommended method.
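As a rough illustration, both of the following handle the key setup for you; the zone, instance name and paths are placeholders:
# copy a local file up to the instance
gcloud compute copy-files ./file.txt my-instance:/home/me/file.txt --zone us-central1-a
# open an SSH session, or run a single command remotely
gcloud compute ssh my-instance --zone us-central1-a --command "ls -l /home/me"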
I am using the AWS Java SDK to launch EC2 instances (running Ubuntu 12.04) and run a distributed tool on them. The tool uses OpenMPI for message passing between the nodes, and OpenMPI uses SSH to connect the nodes to each other.
The problem is that the EC2 instances don't authenticate each other for SSH connections by default. This tutorial shows how to set up SSH by generating keys and adding them to the nodes. However, when I tried to add the generated key to the slaves using the command
$ scp /home/mpiuser/.ssh/id_dsa.pub mpiuser@slave1:.ssh/authorized_keys
I still got permission denied. Also, after generating new keys, I was no longer able to log in using the ".pem" key that I got from Amazon.
I am not experienced with SSH keys, but I would like to have some way of configuring each EC2 instance (when it is first created) to authenticate the others, for example by copying a key onto each of them. Is this possible, and how could it be done?
P.S.: I can connect to each instance once it is launched and can execute any commands on them over SSH.
I found the solution. I added the Amazon private key (.pem) to the image (AMI) that I use to create the EC2 instances, and I changed the /etc/ssh/ssh_config file by adding a new identity file:
IdentityFile /path/to/the/key/file
This makes SSH use the .pem private key when it tries to connect to any other EC2 instance created with the same key.
I also changed StrictHostKeyChecking to no, which stopped the "authenticity of host xxx can't be established" message that requires user interaction before connecting to that host.
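Put together, the client-side /etc/ssh/ssh_config entries described above would look something like this (the key path is a placeholder, and disabling host-key checking gives up protection against man-in-the-middle attacks):
Host *
    # the shared .pem key baked into the AMI
    IdentityFile /path/to/the/key/file
    # skip the "authenticity of host ..." prompt
    StrictHostKeyChecking no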
I get this error message when trying to connect with ssh.
Disconnected: No supported authentication methods available (server sent: publickey,gssapi-keyex,gssapi-with-mic)
I created an instance (CentOS), generated my webserver.pem, imported that into PuTTYgen, and output a .ppk.
I have seen that it may be a permissions issue with ~/.ssh on the server, but how can I change the permissions on the server without SSH access to it? Is there another way to connect that I am not aware of? I am quite new to the Amazon EC2 stuff.
I am on a Windows system right now, using PuTTY.
My security groups were incorrect. I remade the instance with the correct security groups.
The steps below worked for me (a scripted version is sketched after them).
Edit the sshd_config file: sudo vi /etc/ssh/sshd_config.
Search for PasswordAuthentication.
If it is no, change it to yes. For me it was commented out; if so, uncomment it.
Restart the sshd service: sudo systemctl restart sshd.service.
Done.
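A non-interactive sketch of the same change, assuming GNU sed and the stock sshd_config layout (back the file up first):
# uncomment PasswordAuthentication if needed and set it to yes
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd.service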
These are the basic steps, generally, when working with a public cloud and trying to create a virtual machine and connect to it:
Create a Virtual Cloud Network / Virtual Private Cloud.
Create an Internet Gateway and ensure the route table for the VCN has an entry routing internet-bound traffic (destination 0.0.0.0/0) to the internet gateway.
Create a virtual machine (Linux in this case), ensure it has a public IP (the VM must be created in a public subnet), and download the key pair (in this example it was in PEM format).
Create a security group and ensure there is an ingress rule with source 0.0.0.0/0, protocol TCP, destination port 22.
Associate the VM with the security group at the VNIC level, either when creating the VM or after creation.
From Oracle Cloud documentation -
Just having an internet gateway alone does not expose the instances in the VCN's subnets directly to the internet. The following requirements must also be met:
The internet gateway must be enabled (by default, the internet gateway is enabled upon creation). The subnet must be public. The subnet must have a route rule that directs traffic to the internet gateway. The subnet must have security list rules that allow the traffic (and each instance's firewall must allow the traffic). The instance must have a public IP address.
Now, connecting to the VM using PuTTY, you are basically doing:
ssh -i private_key user@ip_address
a. Use PuTTYgen and load the private PEM key that you downloaded. Once it is successfully imported, save the private key (optionally with a passphrase) as a PPK on your local machine (for example "your_pvt_key_name.ppk"); a command-line equivalent is sketched at the end of this answer.
b. Use PuTTY to connect to the VM's public IP. Make sure the private key is provided for authentication: in Connection -> SSH -> Auth, browse for "your_pvt_key_name.ppk", then go back to Session and open the connection. If the VM is in a public subnet with the correct route table entry, you should see the login screen. If the VM is not reachable from the internet, it won't connect!
c. Once you see the login screen comes the most important step, and the probable cause of the error above: log in with the correct user name, such as "ec2-user" on AWS or "opc" on OCI. Using an incorrect user name results in this error:
No supported authentication methods available (server sent: publickey,gssapi-keyex,gssapi-with-mic)
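As mentioned in step (a), the PEM-to-PPK conversion can also be done with the puttygen command-line tool where it is available (file names are placeholders):
# convert the downloaded PEM private key into PuTTY's .ppk format
puttygen your_pvt_key_name.pem -o your_pvt_key_name.ppk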