AWS CodeCommit SSH connection stopped working suddenly (it was working fine before) - ssh

I'm working on an AWS CDK Pipeline with a source repository in AWS CodeCommit.
I set the pipeline to trigger on pushes to a specific branch of the repository.
I use an SSH connection (IAM user > Security credentials > SSH keys for AWS CodeCommit) to pull from and push to the repository.
It worked well for 2~3 months, but today it suddenly stopped.
I searched for references but I'm confused.
As far as I know, I can't configure the allowed hosts on the CodeCommit side myself...
Below is a capture from when I tried to find a clue.
I don't know SSH well. Could you give me a hint if you can see the reason here?
I replaced the SSH public key under IAM users > Security credentials, but no luck.
If anyone knows why this happened so suddenly, please let me know.
(Could too many pushes in a short time be the cause?)
FYI, I waited 30 minutes and tried again, but still no luck...
Q1. Could you give me a hint about what I should do, based on that capture?
Q2. Why did this happen so suddenly?
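For anyone debugging the same symptom, a minimal sketch of how to check the SSH side of a CodeCommit connection is shown below. The key ID, region, and key path are placeholders; the User value should be the SSH key ID shown next to the uploaded public key in the IAM console.

    # ~/.ssh/config
    Host git-codecommit.*.amazonaws.com
      User APKAEXAMPLEKEYID              # SSH key ID from IAM > Security credentials (placeholder)
      IdentityFile ~/.ssh/codecommit_rsa # private key matching the uploaded public key (placeholder path)

    # Test the connection with verbose output; a healthy setup prints
    # "You have successfully authenticated over SSH" before closing.
    ssh -v git-codecommit.us-east-1.amazonaws.com

If the verbose output shows your key being offered but rejected, the problem is usually the key/ID pairing or the key's status in IAM; if the connection never gets that far, it is more likely a network- or service-side issue, which would also fit it recovering on its own a day later.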

Update: it started working again after 1 day 😂

Related

Google Cloud SSH Strange Failure

I'm running into a weird issue with the Google Cloud VM interface. I'm working with my team on the same Google Cloud VM project, each with our own instances.
The problem: I am unable to SSH into my instance, yet I am able to SSH into my teammates' instances. Whenever I SSH using the Google Cloud online interface, the SSH keys never transfer properly. Despite deleting and recreating keys for my computer, I always get Permission denied (publickey) (I'm even getting this in the Google Cloud Shell). Even stranger: my teammates are able to SSH into my instance. This is a new phenomenon I hadn't encountered a month ago when I first used the VM successfully.
Can anyone provide me with insight as to how to diagnose the issue, and even better, a solution? I can provide debug information if you'd find it useful.
Here is the output when using the verbosity flag: [screenshot: output using verbosity flag]
Here is the output from Armando's recommendation of running systemctl status google-guest-agent: [screenshot: check ownership status]
Here is the output from Anthony's recommendation of recreating the keys all in one line: [screenshot: recreating keys in the gcloud shell]
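A few checks that may help narrow this down, sketched with placeholder INSTANCE/ZONE names and assuming the gcloud CLI is installed and authenticated:

    # Verbose client-side log of the key exchange and which keys are offered:
    gcloud compute ssh INSTANCE --zone ZONE --ssh-flag="-vvv"

    # Which public keys the instance actually has in its metadata:
    gcloud compute instances describe INSTANCE --zone ZONE --format="yaml(metadata)"

    # Whether the project uses OS Login (keys are then managed per user, not in instance metadata):
    gcloud compute project-info describe --format="yaml(commonInstanceMetadata)"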

AWS-Amplify with Cognito

Following this tutorial from AWS.
This looks awesome but I am running into some (newbie?) issues:
a. At 17:14, I don't get an option to name my project; it goes straight to the next set of questions shown.
b. At 25:09, when I do the amplify push, there is nothing listed under Category, Resource name, Operation, or Provider plugin. Needless to say, nothing gets created on the Cognito side in AWS. Only the S3 bucket was created, but (I think because of issue a) it has a funky name.
Did anybody else run into this issue? What am I missing?
Note: I have done the configure step, and the S3 bucket is getting created, but it seems like the amplify-cli is behaving differently for me compared to the video.
Answering my own question, in case anybody else runs into this issue:
(a) is still an issue. In the case of (b), the way I fixed it was to do an additional step: amplify add hosting, and then amplify push. When I did that, the Cognito user pool was also created.
Feels like the CLI will be very useful, but it is still a little rough.
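A rough sketch of the sequence that ended up working here, assuming amplify configure and amplify init have already been run (command names only; the interactive answers are omitted):

    amplify status        # shows the Category / Resource name / Operation / Provider plugin table
    amplify add hosting   # the extra step that unblocked things in this case
    amplify push          # after this, the Cognito user pool was created as well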

google cloud ssh inconsistent

I have created 4 instances in two separate instance groups based on two vm templates.
Initially I was using the "SSH" button within the Google Cloud console, and I noticed that it would only actually work about 40% of the time. I would often have to stop/restart the machines in order for SSH to work. After a day or so, the SSH button stops working entirely. I figured this was just a silly bug, and that having actual SSH keys and logging in via normal SSH would work fine.
Well, today I configured normal SSH keys, and I was getting the following on 3 of the 4 instances:
Permission denied (publickey).
I logged into the cloud console and clicked the SSH button on all 4 instances, and lo and behold, only 1 of the 4 works.
So my question is... why am I having to keep rebooting instances just to keep SSH working? I have never had this problem on any other cloud server before.
Note: I created a base ubuntu from their available images, and built a generic server, then used that as the base template and forked it to create the other 2 instance group templates.
I am thinking that the ssh daemon might be crashing, but how the heck can I tell, and how can I fix it?
I took the silence from the community as an indicator that the problem was only affecting myself. It turns out the stock image I had chosen to start as a base template had a buggy SSH daemon. It was a fairly quick process to rebuild my templates off of a different stock image, and since then I have had no problems connecting to my machines via ssh.
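For anyone hitting the same symptom, one way to check whether sshd is actually up when SSH itself is unavailable is the serial console, sketched here with placeholder INSTANCE/ZONE names:

    # Dump the boot log (no SSH needed):
    gcloud compute instances get-serial-port-output INSTANCE --zone ZONE

    # Or attach to the serial console interactively (it must be enabled first):
    gcloud compute instances add-metadata INSTANCE --zone ZONE --metadata serial-port-enable=TRUE
    gcloud compute connect-to-serial-port INSTANCE --zone ZONE

    # Once logged in on the console, check the daemon:
    sudo systemctl status ssh    # the unit is "sshd" on some distros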

gcloud compute ssh connects shows wrong instance name

I'm pretty new to the Gcloud environment, but getting the hang of it.
With our first project live on an instance, I've been shuffling some static IPs, instances, and snapshots around for an optimal deployment workflow. But I can't understand what's going on now:
I have two instances, e.g. live-1 and dev-2.
Now I can connect to live-1 using gcloud compute ssh live-1 and it's okay.
When I try to connect to dev-2 using gcloud compute ssh dev-2, it logs me in to live-1.
The first time I tried to ssh to dev-2 it took longer than usual. After that it just connects me to the wrong instance immediately.
The goal was (as you might've guessed) to copy the live environment to a testing one. I did create an image of live-1 and cloned it to set up dev-2. When I tried this earlier, it worked as expected.
Whenever I use the Compute Console in the browser and use the online SSH tool from the instance list, it does connect to dev-2 properly. But on my local machine, the aforementioned command connects me to live-1.
I already removed the IP for dev-2 from my known hosts, figuring it's cached somewhere, but no luck. What am I missing here?
Edit: I found out just now that the instances are separate even though they are 'named' the same; if I log in to dev-2, I do see myuser@live-1: in the shell, but it appears to be a separate instance. I created a dummy file on the supposed dev-2, and it doesn't show up on the actual live-1 machine.
So this is very confusing; I rely on the user@host prefix in front of every shell line to know where and what I'm actually working on, and having two instances with the same name but different environments is confusing.
Ok, it was dead simple. Just run sudo hostname [desiredhostname] in the terminal, and restart it.
So in my case I logged in to dev-2 and ran sudo hostname dev-2.
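As a follow-up note: sudo hostname only changes the name for the running session. On a systemd-based image, a sketch of the more persistent version would be the following, though on GCE the guest environment may restore the hostname from instance metadata at the next reboot, so treat this as a cosmetic fix:

    sudo hostnamectl set-hostname dev-2   # persists across reboots on systemd images
    hostnamectl                           # verify the static hostname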

Amazon S3 suddenly stopped working with EC2 but working from localhost

Creating folders and uploading files to my S3 bucket stopped working.
The remote server returned an error: (403) Forbidden.
Everything seemed to work previously, and I did not change anything recently.
After days of testing, I see that I am able to create folders in my bucket from localhost, but the same code doesn't work on the EC2 instance.
I must resolve the issue ASAP.
Thanks
diginotebooks
Does your EC2 instance have a role? If yes, what is this role? Is it possible that someone detached or modified a policy that was attached to it?
If your instance doesn't have a role, how do you upload files to S3? Using the AWS CLI tools? Same questions for the IAM profile used.
If you did not change anything, are you using the same IAM credentials on the server and on localhost? It may be related to that.
Just random thoughts...
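To follow up on those thoughts, a minimal sketch of how to check which identity the instance is actually using (run on the EC2 instance, AWS CLI assumed; the bucket name and file are placeholders, and the metadata call assumes IMDSv1 is enabled):

    # Which identity the SDK/CLI resolves to (role, user, etc.):
    aws sts get-caller-identity

    # Whether an instance profile/role is attached, and its name:
    curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

    # Reproduce the failing call directly:
    aws s3 cp test.txt s3://YOUR-BUCKET/test.txt

If the identity from the instance differs from the one on localhost, comparing the policies attached to each is usually the quickest way to explain the 403.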