I'm running into a weird issue with the Google Cloud VM interface. I'm working with my team on the same Google Cloud VM project, each with our own instances.
The problem: I am unable to SSH into my instance, yet I am able to SSH into my teammates' instances. Whenever I SSH using the Google Cloud online interface, the SSH keys never transfer properly. Despite deleting and recreating keys for my computer, I always get Permission denied (publickey) (I'm even getting this in the Google Cloud Shell). Even stranger: my teammates are able to SSH into my instance. This is a new phenomenon I hadn't encountered a month ago when I first used the VM successfully.
Can anyone provide me with insight into how to diagnose the issue, and even better, a solution? I can provide debug information if you'd find it useful.
Here is the output when using the verbosity flag (screenshot).
Here is the output from Armando's recommendation of running systemctl status google-guest-agent (screenshot: check ownership status).
Here is the output from Anthony's recommendation of recreating the keys all in one line in the gcloud shell (screenshot).
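For reference, this is roughly how I'm generating the output above (the instance name and zone are placeholders for mine):
gcloud compute ssh my-instance --zone us-central1-a --ssh-flag="-vvv"
sudo systemctl status google-guest-agent
sudo journalctl -u google-guest-agent --since "1 hour ago"
(The last two have to be run on the VM itself, e.g. by a teammate who can still get in.)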
I'm working on an AWS CDK Pipeline with a source repository in AWS CodeCommit.
I set up the pipeline to trigger on pushes to a specific branch of the repository.
And I used an SSH connection (IAM user > Security credentials > SSH keys for AWS CodeCommit) to pull/push the source code from/to the repository.
It worked well for 2~3 months.
But today it stopped suddenly.
I searched some references but I'm still confused.
As far as I know, I can't set allowed hosts on CodeCommit myself...
Below is a capture from when I tried to find a clue...
I don't know SSH well. Could you give me a hint if you can see the reason here?
I replaced the SSH public key under IAM users > Security credentials, but no luck.
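For reference, my ~/.ssh/config entry and the connection test look roughly like this (the key ID, key file name, and region are placeholders):
Host git-codecommit.*.amazonaws.com
  User APKAEXAMPLEKEYID
  IdentityFile ~/.ssh/codecommit_rsa
ssh -v git-codecommit.ap-northeast-2.amazonaws.com
The User value has to be the SSH key ID shown next to the uploaded key in IAM, not the IAM user name.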
And if someone knows why this happened suddenly, please let me know.
(Could the cause be too many pushes in a short time?)
FYI, I waited 30 minutes and tried again, but no luck...
Q1. Could you give me a hint about what I should do with that capture?
Q2. Why did this happen suddenly?
It started working again after 1 day 😂
I installed Docker Machine, and then created a new docker-machine on Windows 10.
Now I run docker-machine ls to see the list of docker machines.
Now I run the following command
docker-machine start hypervdockermachine
Now I am stuck at this
Waiting for SSH to be available...
Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded
I have seen the GitHub issue here, but it's not clear what to do.
Is there a way to solve this problem? I am not good at SSH.
UPDATE
I just found a workaround.
You can run the above commands with git bash.
Most important, you must run git bash as admin. Else you will end up scratching your head.
Even the basic
docker-machine ls
will not show anything without being an admin.
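So in an elevated Git Bash, the flow looks roughly like this (same machine name as above):
docker-machine ls
docker-machine start hypervdockermachine
eval $(docker-machine env hypervdockermachine)
docker ps
The eval line points the docker client at the machine; docker ps is just a quick check that the connection works.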
Finally, if you are seeing the following error:
Unable to query docker version: Get https://192.168.0.105:2376/v1.15/version: x509: certificate signed by unknown authority
Then you have to look at this issue.
docker-machine regenerate-certs yourdockermachinename
If needed, use the --force option.
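With --force that would be (same placeholder machine name as above):
docker-machine regenerate-certs --force yourdockermachinename
docker-machine env yourdockermachinename
The second command is just a quick check that the new certs are accepted.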
I ran into the same problem after I moved .docker to partition D: and created a symlink at C:\Users\username\.docker, following this SO answer. I removed the old machines and configured new ones, and tried to regenerate the certs as suggested in the OP's workaround, but the problem was not solved.
After googling, I found this OpenSSH wiki page
and suspected that the cause of the problem was related to permissions.
So I was able to solve the problem by trying two different things:
Delete .ssh (source)
Fix the permissions on D:\path\to\.docker, allowing only SYSTEM, Administrators, and my user to have full control access (source). These permissions were the same as those defined for .docker when it was under C:\Users\username\, but moving the folder to another partition made it inherit different permissions. To avoid dealing with it too much, I kept inheritance enabled and changed the permissions directly on D: rather than on the .docker folder.
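For reference, roughly equivalent permissions can be set from an elevated command prompt with icacls; the path is a placeholder, and unlike what I did above this removes inheritance instead of keeping it:
icacls "D:\path\to\.docker" /inheritance:r
icacls "D:\path\to\.docker" /grant:r "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F" "%USERNAME%:(OI)(CI)F" /T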
I just started with the Google Cloud Platform and created my first VM instance (Debian).
It all worked in a pretty straightforward way: I hit the SSH button next to my instance and it opened up a command-line interface in a new browser window. My username was the handle (pre-@) part of my gmail.
However, I wanted to use Terminal on my macbook as a CLI for accessing it. Looking at the guides, this seemed to be a long, convoluted process. I followed this process (detailed below) but now I can only access some new account on the VM; the username is my full gmail address this time (but with underscores replacing non-alphabet characters, so like the original but with "_gmail_com" tacked on to the end). I can no longer access the original account, which seems to be the proper account with admin privileges. Note that I can sudo into the root account and open up the directories and files owned by the original account, but this seems very dumb.
I've tried posting in the forum for this stuff, Google's group for Google Compute, "gce-discussion", but my posts are held for approval for some reason. It's as though Google are just hoping I cave and pay for technical support.
My aim is to have a Python session running a Discord bot that keeps running after I log off. It'd also be good to be able to serve up files (images) via HTTP.
Thank you for any help you can be!
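(In case it matters, what I'm hoping to run once I can log in properly is something like this; bot.py and the port are placeholders. The first line keeps the bot running after I log out, the second serves the current directory over HTTP.)
nohup python3 bot.py > bot.log 2>&1 &
python3 -m http.server 8000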
The steps I followed in the convoluted process given in the guide are as follows:
I created an SSH keypair (private and public)
I downloaded and installed the Google SDK to get the gcloud CLI application
I issued the gcloud command to set the public key up on my instance
it had me log in at a google page (OAuth-like thing)
I started an SSH session on Terminal, invoking the file containing my private key, trying with different permutations of options
finally got it to connect and log in using my-handle_gmail_com (i.e. the second username on my instance)
when I tried to access the SSH from within the Google Cloud Platform page, the browser-based CLI automatically logged me into this same second account, "my-handle_gmail_com". So now I have no access to the original.
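(If it helps, my understanding is that the Linux account gets created from the username part of the uploaded key entry, so one thing I'm wondering about is forcing the original short username explicitly; the key file, username, external IP, instance name, and zone below are placeholders.)
ssh -i ~/.ssh/my-gcp-key my-handle@203.0.113.10
gcloud compute ssh my-handle@my-instance --zone us-central1-a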
Thanks!
I have created 4 instances in two separate instance groups based on two vm templates.
Initially I was using the "SSH" button within the Google Cloud console, and I noticed it would actually work only about 40% of the time. I would often have to stop/restart the machines in order for the SSH to work. After a day or so, the SSH button stops working altogether. I figured this was just a silly bug, and that having actual SSH keys and logging in via normal SSH would work fine.
Well today I configured normal ssh keys, and I was getting the following on 3 of 4 instances:
Permission denied (publickey).
I logged into the cloud console and clicked the SSH button on all 4 instances and lo and behold only 1 of 4 works.
So my question is... why am I having to keep rebooting instances just to keep SSH working? I have never had this problem on any other cloud server before.
Note: I created a base ubuntu from their available images, and built a generic server, then used that as the base template and forked it to create the other 2 instance group templates.
I am thinking that the ssh daemon might be crashing, but how the heck can I tell, and how can I fix it?
I took the silence from the community as an indicator that the problem was only affecting myself. It turns out the stock image I had chosen to start as a base template had a buggy SSH daemon. It was a fairly quick process to rebuild my templates off of a different stock image, and since then I have had no problems connecting to my machines via ssh.
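For anyone who lands here with the same symptoms, a couple of ways to check whether sshd is actually alive without needing a working SSH session (the instance name and zone are placeholders):
gcloud compute instances get-serial-port-output my-instance --zone us-central1-a
sudo systemctl status ssh
sudo journalctl -u ssh --since "1 hour ago"
The last two have to be run on the instance itself, e.g. through the serial console or during a window when SSH briefly works.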
I'm pretty new to the Gcloud environment, but getting the hang of it.
With our first project live on an instance, I've been shuffling some static IPs, instances, and snapshots around for an optimal deployment workflow. But I can't understand what's going on now:
I have two instances (i.e.) live-1 and dev-2.
Now I can connect to live-1 using gcloud compute ssh live-1 and it's okay.
When I try to connect to dev-2 using gcloud compute ssh dev-2, it logs me in to live-1.
The first time I tried to ssh to dev-2 it took longer than usual. After that it just connects me to the wrong instance immediately.
The goal was (as you might've guessed) to copy the live environment to a testing one. I did create an image of live-1 and cloned it to set up dev-2. In my earlier experience trying this, it worked as expected.
Whenever I use the Compute Console in the browser and use the online SSH tool from the instance list, it does connect to dev-2 properly. But on my local machine, using the aforementioned command connects me to live-1.
I already removed the IP for dev-2 from my known hosts, figuring it's cached somewhere, but no luck. What am I missing here?
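(For completeness, this is how I cleared the cached host key and double-checked which external IP each instance has; the IP is a placeholder:)
ssh-keygen -R 203.0.113.10
gcloud compute instances list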
Edit: I found out just now that the instances are separate, though 'named' the same; if I log in to dev-2, I do see myuser@live-1: in the shell, but it appears to be a separate instance. I created a dummy file on the supposed dev-2, and it doesn't show up on the actual live-1 machine.
So this is very confusing; I rely on the user@host prefix in front of every shell line to know where and what I'm actually working on, and having two instances with the same hostname but different environments defeats that.
Ok, it was dead simple. Just run sudo hostname [desiredhostname] in the terminal, and restart it.
So in my case I logged in to dev-2 and ran sudo hostname dev-2.
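One caveat, assuming a systemd-based image like the stock Debian/Ubuntu ones: a hostname set this way may not survive a reboot. To make it persistent, I'd use:
sudo hostnamectl set-hostname dev-2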