How to protect Amazon S3 via Basic Authentication - amazon-s3

I am new to S3 and am wondering how I could protect access to S3 or CloudFront via Basic Authentication, or by installing a private certificate into Chrome that allows access. Is there anything like this?

It is not possible to use Basic Authentication with Amazon S3 or Amazon CloudFront.
Amazon S3 access can be controlled via one or more of:
Access Control Lists (ACLs) at the object level
Amazon S3 Bucket Policy
AWS Identity and Access Management (IAM) Policy
Amazon CloudFront has its own method of controlling access via signed URLs and signed cookies.
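For the signed-URL side, here is a minimal sketch using the AWS SDK for JavaScript v3; the distribution domain, key pair ID, and key file path are placeholder values you would substitute with your own:

```typescript
// Sketch: generating a CloudFront signed URL (AWS SDK for JavaScript v3).
// The domain, key pair ID, and key path below are placeholders.
import { getSignedUrl } from "@aws-sdk/cloudfront-signer";
import { readFileSync } from "node:fs";

const signedUrl = getSignedUrl({
  url: "https://d111111abcdef8.cloudfront.net/private/report.pdf", // object served via CloudFront
  keyPairId: "K2JCJMDEHXQW5F", // public key ID registered with the distribution
  privateKey: readFileSync("./private_key.pem", "utf8"), // matching private key
  dateLessThan: new Date(Date.now() + 3600 * 1000).toISOString(), // link expires in 1 hour
});

console.log(signedUrl); // hand this out instead of the raw object URL
```

Anyone without a valid signed URL (or signed cookie) is refused, which covers the "only authorized users" goal without Basic Authentication.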

Related

S3 hosted website only accessible via Private Endpoint

I have a task to host a website on S3 which is only accessible via a private link.
I created the website, and I am able to access it using the public link:
Link --> http://mywebsite.com.s3-website-us-east-1.amazonaws.com
I also created a VPC interface endpoint to access the bucket privately over the VPN. I got the DNS name from the Interface endpoint as
*.vpce-xxxxx-xxx.s3.us-east-1.vpce.amazonaws.com
I did an nslookup on mywebsite.com.vpce-xxxxx-xxx.s3.us-east-1.vpce.amazonaws.com and it returns the correct IP addresses of the ENIs.
When I try to access the website through the VPC interface endpoint, I get an error that the bucket does not exist. What am I doing wrong?
I am using this URL to access the bucket.
Link : http://mywebsite.com.vpce-xxxxx-xxx.s3.us-east-1.vpce.amazonaws.com
For this POC my bucket policy is wide open so there is no restriction on the bucket policy.
Looks like this isn't supported.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/privatelink-interface-endpoints.html#accessing-bucket-and-aps-from-interface-endpoints
AWS PrivateLink for Amazon S3 does not support the following:
Federal Information Processing Standard (FIPS) endpoints
Website endpoints
Legacy global endpoints
Using CopyObject API or UploadPartCopy API between buckets in different AWS Regions
Transport Layer Security (TLS) 1.0
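The REST (non-website) interface does work through an interface endpoint, so if the requirement is only private access rather than static-website semantics, one hedged workaround is to fetch objects through the endpoint with the SDK instead of the website URL. A sketch, with the endpoint DNS name, bucket, and key as placeholders:

```typescript
// Sketch: reading an object through an S3 interface endpoint (REST API,
// not the website endpoint). Endpoint, bucket, and key are placeholders.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({
  region: "us-east-1",
  // Bucket-type DNS name taken from the interface endpoint (placeholder IDs).
  endpoint: "https://bucket.vpce-xxxxx-xxx.s3.us-east-1.vpce.amazonaws.com",
});

const response = await s3.send(
  new GetObjectCommand({ Bucket: "mywebsite.com", Key: "index.html" })
);
console.log(await response.Body?.transformToString());
```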

EC2 instance launched in S3 endpoint subnet unable to list bucket objects with endpoint bucket policy

I have created an S3 endpoint and added it to the route table of a subnet.
The subnet has a route to the internet and I am able to open the AWS console.
Next, a bucket is created with a bucket policy limiting access to it to the VPC endpoint.
I have an IAM user which has full permissions on this bucket.
When I access the S3 bucket through the S3 console webpage there is an 'Access Denied' error, but I am able to upload files to the bucket.
Does the S3 endpoint imply that access will only be through the AWS CLI/SDKs, and that console access is limited?
Does the S3 endpoint imply that access will only be through the AWS CLI/SDKs, and that console access is limited?
My understanding is that any calls made from the AWS Console will not use the endpoint set up within the VPC, even if you're accessing the console from an EC2 instance within the VPC. This is because the UI within the AWS Console does not access the S3 API endpoint directly, but instead goes through a proxy to reach it.
If you need to access the S3 bucket via the AWS Console, you'll need to amend your bucket policy.
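As a hedged illustration of that amendment, the sketch below keeps the Deny for traffic that does not arrive via the VPC endpoint but exempts one named principal, so that principal can still use the console; the bucket name, endpoint ID, account ID, and user name are placeholders:

```typescript
// Sketch: bucket policy that denies access outside the VPC endpoint,
// except for one IAM principal (so console access keeps working).
// Bucket, vpce ID, account ID, and user name are placeholders.
import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "DenyOutsideVpceExceptNamedUser",
      Effect: "Deny",
      Principal: "*",
      Action: "s3:*",
      Resource: ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"],
      Condition: {
        // Both conditions must match for the Deny to apply, so requests
        // from the named user (e.g. via the console) are not denied here.
        StringNotEquals: { "aws:SourceVpce": "vpce-0123456789abcdef0" },
        ArnNotEquals: {
          "aws:PrincipalArn": "arn:aws:iam::123456789012:user/console-admin",
        },
      },
    },
  ],
};

const s3 = new S3Client({ region: "us-east-1" });
await s3.send(
  new PutBucketPolicyCommand({
    Bucket: "my-bucket",
    Policy: JSON.stringify(policy),
  })
);
```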

How can we use DynamoDB local to save the Cognito IDs of authenticated users

Is there any specific way to create a Cognito identity pool in an Amazon DynamoDB local setup? I have already set up the JavaScript shell, created several tables, and queried them. I need to provide an authenticated-mode user login (Facebook, Amazon, Google) for my Node application. I found several tutorials about how to set this up using AWS DynamoDB, but I need to know how I can do it using a local DynamoDB without accessing AWS DynamoDB.
Amazon DynamoDB local doesn't validate credentials, so it doesn't matter how you set up the Amazon Cognito identity pool or the roles for the pool. You will be able to interact with the CognitoCredentials object the same way whether you are using Amazon DynamoDB or DynamoDB local.
Note, however, that you will not be able to validate fine-grained access control unless you use the full service, again because DynamoDB local doesn't validate credentials.
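To make that concrete, here is a minimal sketch; the identity pool ID, region, and local port are placeholders, and the Cognito call itself still goes to the real Cognito service even though DynamoDB local ignores the resulting credentials:

```typescript
// Sketch: Cognito identity pool credentials against DynamoDB local.
// DynamoDB local accepts any credentials, so the same client code runs
// unchanged against the real service. IDs below are placeholders.
import { DynamoDBClient, ListTablesCommand } from "@aws-sdk/client-dynamodb";
import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";

const client = new DynamoDBClient({
  region: "us-east-1",
  endpoint: "http://localhost:8000", // DynamoDB local's default endpoint
  credentials: fromCognitoIdentityPool({
    clientConfig: { region: "us-east-1" },
    identityPoolId: "us-east-1:11111111-2222-3333-4444-555555555555",
  }),
});

// DynamoDB local will not check whether these credentials are authorized.
console.log(await client.send(new ListTablesCommand({})));
```

Removing the endpoint override is the only change needed when you move to the real service, which is where fine-grained access control actually gets enforced.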

AWS S3 only allow CloudFront access

So in order to make S3 objects accessible only through CloudFront, the instructions are to go into your CloudFront distribution settings, then Origins, and set Restrict Bucket Access to Yes. I also selected Yes, Update Bucket Policy.
I then go into my S3 bucket and see that the CloudFront access policy is in place, and that the only permission present on the bucket is access for my user account.
However, I can still access S3 bucket objects with their respective S3 URLs.
The catch is that the objects were created with read permissions for everyone, but shouldn't the bucket policy, and even the CloudFront policy, trump the independent object permissions?
I would recommend taking a look at the Using ACLs and Bucket Policies Together page of the S3 documentation.
With existing Amazon S3 ACLs, a grant always provides access to a bucket or object. When using policies, a deny always overrides a grant.
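So the objects' public-read ACL grants still apply on their own; nothing in the bucket policy explicitly denies them. One hedged fix is to reset each object's ACL to private, so access flows only through the bucket policy and the CloudFront grant. A sketch, with bucket and key as placeholders:

```typescript
// Sketch: resetting a public-read object ACL to private so the object is
// reachable only via the CloudFront bucket-policy grant. Names are placeholders.
import { S3Client, PutObjectAclCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
await s3.send(
  new PutObjectAclCommand({
    Bucket: "my-bucket",
    Key: "images/photo.jpg",
    ACL: "private", // removes the "everyone can read" ACL grant
  })
);
```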

How to share Amazon AWS credentials (S3, EC2, etc)?

I have a personal Amazon account which I use to do a lot of shopping. I also recently linked this account to AWS. Now at work, some guys are doing experiments with AWS using my account. How can I let them access the admin console, etc., without giving them my Amazon credentials? I am not willing to share my Amazon shopping history or other things I use on Amazon, just the cloud services such as EC2 and S3.
What they need is access to the full admin console, and any monitoring tools on AWS.
Use AWS Identity and Access Management (IAM).
AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.
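As a hedged sketch of that setup (the user name, password, and policy choice are illustrative; in practice you would likely scope permissions more tightly than AdministratorAccess):

```typescript
// Sketch: creating an IAM user with console access and admin permissions,
// so coworkers never need your personal Amazon credentials.
import {
  IAMClient,
  CreateUserCommand,
  CreateLoginProfileCommand,
  AttachUserPolicyCommand,
} from "@aws-sdk/client-iam";

const iam = new IAMClient({ region: "us-east-1" });

await iam.send(new CreateUserCommand({ UserName: "coworker" }));
await iam.send(
  new CreateLoginProfileCommand({
    UserName: "coworker",
    Password: "Temp#Password123", // placeholder; rotated at first sign-in
    PasswordResetRequired: true,
  })
);
await iam.send(
  new AttachUserPolicyCommand({
    UserName: "coworker",
    PolicyArn: "arn:aws:iam::aws:policy/AdministratorAccess",
  })
);
```

They then sign in to the console through your account's IAM sign-in URL with their own user name and password, and your personal Amazon credentials never leave your hands.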