AWS Amplify with Cognito

I'm following this tutorial from AWS.
This looks awesome, but I am running into some (newbie?) issues:
a. At 17:14 I don't get an option to name my project; it goes straight to the next set of questions shown.
b. At 25:09, when I do the amplify push, there is nothing under Category, Resource name, Operation, or Provider Plugin. Needless to say, nothing gets created on the Cognito side in AWS. Only the S3 bucket was created, but (I think due to (a)) it has a funky name.
Did anybody else run into this issue? What am I missing?
Note: I have done the configure, and the S3 bucket is getting created, but it seems like the amplify-cli is behaving differently for me compared to the video.

Answering my own question, in case anybody else runs into this issue:
(a) is still an issue. In the case of (b), the way I fixed it was to do an additional step: amplify add hosting, and then amplify push. When I did that, the Cognito user pool was also created.
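In CLI terms, the working sequence was:

    amplify add hosting    # the extra step that was missing
    amplify push           # now the Cognito user pool gets created too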
Feels like the CLI will be very useful, but is still a little rough.

How to disable encryption of Grafana Loki logs fed to S3

This could be a very dumb question to start off with, so I apologise in advance, but skimming through the documentation I didn't find a way to control (in config) the encryption of logs being fed to S3 buckets. I have a setup where Grafana Loki logs are fed to S3 (collected by fluent-bit from pods, since all of this is deployed in EKS). I have absolutely no problem viewing logs via the Grafana UI, and logs are properly stored in S3 as well, but when I download files from within the bucket they are encrypted.
Is there a config flag I missed, is there more I need to do to get rid of these encrypted logs, or is there really nothing that can be done in this situation?
I hope I have shared enough information to present the situation/question, but in case it's not clear please feel free to ask. Thanks in advance for any help!
I tried playing around with some config items like sse_encryption: false, but it didn't seem to have any effect. I also tried toggling the insecure flag, but I believe that has more to do with TLS.
The file downloaded from S3 looks like the attached screenshot.
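For reference, the flags mentioned above sit under Loki's storage_config.aws block. A minimal sketch with a placeholder bucket URL; note that sse_encryption only affects how new chunks are written, so objects already in the bucket keep whatever encryption they were stored with:

    storage_config:
      aws:
        # placeholder region/bucket
        s3: s3://us-east-1/my-loki-chunks
        # ask S3 for server-side encryption when writing chunks (off here)
        sse_encryption: false
        # 'insecure' toggles TLS to the endpoint, as noted above
        insecure: false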

AWS CodeCommit SSH connection stopped working suddenly (it worked well before)

I'm working on an AWS CDK Pipeline with a source repository in AWS CodeCommit.
I set up the pipeline to run on pushes to a specific branch in the repository.
I used an SSH connection (IAM user > Security credentials > SSH keys for AWS CodeCommit) to pull/push the source code from/to the repository.
It worked well for 2~3 months, but today it suddenly stopped.
I searched some references but am confused.
As far as I know, I can't set allowed hosts on CodeCommit myself...
Below is a capture from which I tried to find a clue.
I don't know much about SSH. Could you give me a hint if you can see the reason here?
I replaced the SSH public key under IAM users > Security credentials, but no luck.
And if someone knows why this happened suddenly, please let me know.
(Could the cause be too many pushes in a short time?)
FYI, I waited 30 minutes and tried again, but no luck...
Q1. Could you give me a hint about what I should do with that capture?
Q2. Why did this happen suddenly?
Update: it is working again after 1 day 😂
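For anyone debugging the same symptom: the client-side setup CodeCommit expects is an ~/.ssh/config entry that pairs the SSH key ID from the IAM console with the matching private key. A sketch with placeholder values:

    # ~/.ssh/config
    Host git-codecommit.*.amazonaws.com
      # User is the SSH key ID shown under IAM > Security credentials,
      # not the IAM username
      User APKAEXAMPLEKEYID
      IdentityFile ~/.ssh/codecommit_rsa

A quick test is ssh git-codecommit.us-east-1.amazonaws.com (adjust the region), which should print a success message when the key pair and key ID line up.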

Mount S3 bucket as an NFS share on an EC2 instance

Long-time reader, but I've usually been able to find the answers I've been looking for in existing posts; this time I've not been able to.
I am essentially teaching myself AWS CDK from scratch. I've only really just started with it, so not finding anything that helps me on my mission may be a result of not knowing enough yet to be asking the right questions... so please bear with me.
Thus far I've used the AWS CDK with Python to create a stack which creates an S3 bucket and also fires up an EC2 instance with an AWS file storage gateway AMI loaded on it (so running Amazon Linux). This deploys and runs fine; however, now I'd like to programmatically set up the S3 bucket to be accessed via an NFS share on the EC2 instance. From what I've seen I'd assumed this is, or should be, fairly trivial, but I keep getting a bit lost in documentation and internet hunts, and I'm not quite sure I'm looking in the right places or asking search engines the right questions to unlock the path to achieve this.
It looks like I should be able to script something up to make it happen when the instance starts, using user-data, but I'm a bit lost. Is anyone able to throw me some crumbs to follow to find a good way of achieving this, or a better way of achieving what I want (which is basically accessing the S3 bucket contents as though they are files on an EC2 instance)? If it's trivial enough, just tell me how to do it.
Much appreciated :)
Dan
You are on the right track. user_data can be used for that.
I don't have full code to give you, as it's use-case specific (e.g. which OS are you using?), but the user_data would have to download and install s3fs:
s3fs allows Linux and macOS to mount an S3 bucket via FUSE. s3fs preserves the native object format for files, allowing use of other tools like the AWS CLI.
However, S3 is an object storage system, and it can't really be mounted on an instance the way you would mount NFS or EBS storage. But with s3fs-fuse you can mimic such behavior, and for some use cases it will be sufficient.
So what you can do is set up the user_data script through the console, verify that it works, and then basically copy and paste it into CDK. It's more of a trial-and-see approach, but this is the best way to learn.
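To make that concrete, here is a minimal sketch in CDK (v1) Python, assuming Amazon Linux 2 and s3fs-fuse rather than the Storage Gateway AMI; all names are placeholders:

    from aws_cdk import core, aws_ec2 as ec2, aws_s3 as s3

    class BucketMountStack(core.Stack):
        def __init__(self, scope, id, **kwargs):
            super().__init__(scope, id, **kwargs)

            bucket = s3.Bucket(self, "DataBucket")
            vpc = ec2.Vpc(self, "Vpc", max_azs=1)
            instance = ec2.Instance(
                self, "MountHost",
                vpc=vpc,
                instance_type=ec2.InstanceType("t3.micro"),
                machine_image=ec2.AmazonLinuxImage(
                    generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX_2),
            )
            # let the instance profile read/write the bucket
            bucket.grant_read_write(instance.role)

            # install s3fs-fuse at boot and mount the bucket
            instance.user_data.add_commands(
                "amazon-linux-extras install -y epel",  # s3fs-fuse lives in EPEL
                "yum install -y s3fs-fuse",
                "mkdir -p /mnt/data",
                # iam_role=auto -> use the instance profile credentials
                f"s3fs {bucket.bucket_name} /mnt/data -o iam_role=auto -o allow_other",
            )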

Does Serverless, Inc ever see my AWS credentials?

I would like to start using serverless-framework to manage lambda deploys at my company, but we handle PHI so security’s tight. Our compliance director and CTO had concerns about passing our AWS key and secret to another company.
When doing a serverless deploy, do AWS credentials ever actually pass through to Serverless, Inc?
If not, can someone point me to where in the code I can prove that?
Thanks!
Running serverless deploy isn't just one call, it's many.
AWS example (oversimplification):
Check if the deployment S3 bucket already exists
Create the S3 bucket
Upload packages to the S3 bucket
Call CloudFormation
Check CloudFormation stack status
Get info of created resources (e.g. endpoint URLs of created APIs)
And those calls can change depending on what you are doing and what you have done before.
The point I'm trying to make is that these calls, which contain your credentials, are not all located in one place, and if you want to do a full code review of Serverless Framework and all its dependencies, have fun with that.
But under the hood, we know that it's actually using the JavaScript aws-sdk (go check out the package.json), and we know what endpoints that uses: {service}.{region}.amazonaws.com.
So to prove to your employers that nothing with your credentials is going anywhere except AWS, you can just run a serverless deploy with Wireshark running (other network packet analyzers are available). That way you can spot anything that's not going to amazonaws.com.
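If Wireshark feels heavyweight, a capture-and-inspect sketch with tcpdump and tshark does the same job (assumes both tools are installed, and a reasonably recent tshark for the tls.* field name):

    # record outbound TLS traffic for the duration of the deploy
    sudo tcpdump -i any -n 'tcp port 443' -w deploy.pcap &
    CAPTURE_PID=$!

    serverless deploy

    sudo kill "$CAPTURE_PID"

    # list every hostname contacted during the deploy (from the TLS SNI field);
    # apart from the telemetry hosts discussed below, everything should end in amazonaws.com
    tshark -r deploy.pcap -Y tls.handshake.extensions_server_name \
      -T fields -e tls.handshake.extensions_server_name | sort -u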
But wait, why are calls being made to serverless.com and serverlessteam.com when I run a deploy?
Well, that's just tracking some stats, and you can see what they track here. But if you are uber-paranoid, this can be turned off with serverless slstats --disable.

How do I create a SQL connection to my app and upload it to Google Cloud?

Thanks for getting back to me. Sorry for the late reply; it was bed-time here. I need to connect the Cloud SQL database that I have created to my application that is in App Engine. I tried to follow the online tutorials, but when I apply such info and then run gcloud app deploy, it returns a connection error. Please help. Also clarify here: when I execute the gcloud app deploy command, I assume it takes my local files to Google Cloud, where I would see the entire folder and files of my project on the project I was deploying, but I am seeing the old version of my project while the presentation has changed to the latest version. Also, one last thing: how can I link a domain name from http://domain.google.com to my app on http://cloud.google.com? Please help, I am dying of stress; I have been trying here.
Given that you haven't provided any information about what settings you are using or what error was returned, it is impossible to know what kind of problem you are running into.
I suggest taking a look at the "Connecting to App Engine" page here. It should answer a lot of your questions around connecting from an App Engine app.
I see two questions here.
1.
I need to connect the Cloud SQL database that I have created to my application that is in App Engine. I tried to follow the online tutorials, but when I apply such info and then run gcloud app deploy, it returns a connection error. Please help. Also clarify here: when I execute the gcloud app deploy command, I assume it takes my local files to Google Cloud, where I would see the entire folder and files of my project on the project I was deploying, but I am seeing the old version of my project while the presentation has changed to the latest version.
I see your problem here to be with Cloud SQL and GAE connectivity. Depending on whether you use GAE Standard or Flex, and Cloud SQL MySQL or Postgres, the steps vary. The documentation is quite clear here though.
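As a concrete example for one common combination (GAE Standard with Cloud SQL MySQL), App Engine exposes the instance as a unix socket; the connection sketch looks roughly like this, with a placeholder instance connection name and credentials:

    import os

    import pymysql  # for Postgres you would use psycopg2 instead

    # App Engine provides the socket at /cloudsql/<PROJECT>:<REGION>:<INSTANCE>
    conn = pymysql.connect(
        unix_socket="/cloudsql/my-project:us-central1:my-instance",
        user="dbuser",
        password=os.environ["DB_PASS"],  # keep secrets out of source control
        db="mydb",
    )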
2.
Also, one last thing: how can I link a domain name from http://domain.google.com to my app on http://cloud.google.com? Please help, I am dying of stress; I have been trying here.
This is going to be super simple: go to the GCP Cloud Console, navigate to GAE > Settings > Custom Domains, and click "Add a custom domain". Enter the domain name you want to link; when you click Continue, you will be shown the steps for verifying domain ownership and pointing the DNS to GAE.
This is documented properly by the GCP folks at https://cloud.google.com/appengine/docs/standard/python/mapping-custom-domains
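The same mapping can also be scripted with gcloud; a sketch with a placeholder domain (gcloud walks you through ownership verification if the domain isn't verified yet):

    # verify domain ownership (launches the Search Console flow if needed)
    gcloud domains verify example.com

    # map the verified domain to the App Engine app in the current project
    gcloud app domain-mappings create example.com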
If you are using GAE Standard or Flex, a possible result of the command gcloud app deploy is:
"An app.yaml (or appengine-web.xml) file is required to deploy this directory as an App Engine App" - check these links:
https://cloud.google.com/appengine/docs/flexible/python/configuring-your-app-with-app-yaml
https://cloud.google.com/appengine/docs/flexible/python/writing-application-logs
MySQL and Postgres connection:
https://cloud.google.com/sql/docs/mysql/connect-app-engine
https://cloud.google.com/sql/docs/postgres/connect-app-engine
Sometimes it's easiest to share the app.yaml to replicate the app correctly.
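For instance, a minimal sketch of a Flex app.yaml wired to a Cloud SQL instance; the runtime and instance connection name are placeholders, and Standard uses a different mechanism, as the docs above explain:

    runtime: python
    env: flex

    # expose the Cloud SQL instance to the app as a unix socket under /cloudsql/
    beta_settings:
      cloud_sql_instances: my-project:us-central1:my-instance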