CloudFront giving temporary Access Denied - amazon-s3

I have this scenario:
1. User initiates a process
2. Server constructs and uploads a file to S3
3. User attempts to access the file through CloudFront
#2 and #3 are asynchronous, so it's possible for the user to attempt to access the file before it has been uploaded to S3.
When I try to access the file through CloudFront, I get an Access Denied error.
However, I can verify that the file exists on S3 and is accessible.
If I wait a minute or two, the Access Denied error is gone and I can access my file through CloudFront. What's going on?

Eventual consistency seems to be my problem.
All CloudFront distributions have it, and the S3 US East (N. Virginia) region has it as well.
What likely makes the error linger is that when CloudFront asks S3 for an object that doesn't exist yet, S3 answers with an error, and CloudFront caches that error response for a while (the error caching minimum TTL, five minutes by default) before retrying the origin.
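One way to avoid hitting the cached error at all is to make step #3 wait until the object is actually visible before the user is given the CloudFront URL. A minimal sketch in Python with boto3 (the bucket, key, and distribution domain are made-up placeholders):

import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def wait_for_object(bucket, key, attempts=8):
    """Poll S3 with exponential backoff until the object is visible."""
    delay = 0.5
    for _ in range(attempts):
        try:
            s3.head_object(Bucket=bucket, Key=key)
            return True  # object exists; safe to hand out the CloudFront URL
        except ClientError as err:
            # 404: not there yet; 403: a HEAD without list permission looks the same
            if err.response["Error"]["Code"] not in ("404", "403"):
                raise
            time.sleep(delay)
            delay *= 2
    return False

if wait_for_object("my-bucket", "reports/output.pdf"):
    print("https://d1234abcdef.cloudfront.net/reports/output.pdf")

boto3 also ships a built-in waiter that does essentially the same polling: s3.get_waiter("object_exists").wait(Bucket=..., Key=...).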

Related

AWS S3 bucket - How to disable read access even for administrators

I would like to disable read access to S3 objects even for the administrator (e.g., when using the admin console).
I have a client that writes and reads objects in the S3 bucket. Only that client should have access, and no one else (including the admin).
Is this possible?
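For what it's worth, the usual pattern here is an explicit Deny in the bucket policy that exempts only the client's principal, since an explicit Deny overrides any Allow an administrator's IAM policies grant. A hedged sketch (boto3; the account ID, role, and bucket name are invented), with the caveat that the account root user can always remove the bucket policy itself:

import json
import boto3

# Deny object access to every principal except the client's role.
# IAM administrators are still denied because an explicit Deny beats
# any Allow; only the root user can undo this by deleting the policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-private-bucket/*",
        "Condition": {
            "StringNotEquals": {
                "aws:PrincipalArn": "arn:aws:iam::111122223333:role/client-role"
            }
        },
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="example-private-bucket", Policy=json.dumps(policy)
)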

S3 Replication - s3:PutReplicationConfiguration

I have been attempting to introduce S3 bucket replication into my existing project's stack. I kept getting an 'API: s3:PutBucketReplication Access Denied' error in CloudFormation when updating my stack through my CodeBuild/CodePipeline project after adding the replication rule on the source bucket plus the S3 replication role. For testing, I added full S3 permissions (s3:*) to the CodeBuild role for all resources ("*"), as well as full S3 permissions on the S3 replication role; again I got the same result.
Additionally, I tried running a stand-alone, stripped-down version of the CloudFormation template (so not updating my existing application infrastructure stack), which creates the buckets (source + target) and the S3 replication role. It was deployed through CloudFormation while logged in with my Admin role via the console, and again I got the same error as when attempting the deployment with my CodeBuild role in CodePipeline.
As a last-ditch sanity check, again logged in with my admin role for the account, I attempted to perform the replication setup manually on buckets I had created using the S3 console, and I got the error below:
You don't have permission to update the replication configuration
You or your AWS admin must update your IAM permissions to allow s3:PutReplicationConfiguration, and then try again. Learn more about Identity and access management in Amazon S3 API response
Access Denied
I confirmed that my role has full S3 access across all resources. This message suggests to me that the s3:PutReplicationConfiguration permission may somehow be different from other S3 permissions, perhaps needing to be configured with root access to the account or something?
Also, it seems strange that CloudFormation reports the s3:PutBucketReplication permission, whereas the S3 console error references s3:PutReplicationConfiguration. There doesn't seem to be an IAM action named s3:PutBucketReplication (ref: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html), only s3:PutReplicationConfiguration.
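For reference, the stack update ultimately boils down to roughly this API call, which is what the s3:PutReplicationConfiguration action gates. A minimal sketch (boto3; the bucket names and role ARN are placeholders, and versioning must already be enabled on both buckets):

import boto3

s3 = boto3.client("s3")

# Roughly what CloudFormation performs when the replication rule changes,
# so any organization-level Deny on s3:PutReplicationConfiguration fails
# this call regardless of how broad your IAM allows are.
s3.put_bucket_replication(
    Bucket="my-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = replicate all objects
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::my-target-bucket"},
        }],
    },
)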
Have you checked for a permission boundary? Is this in a corporate Control Tower environment or a stand-alone account?
Deny always wins, so if a permission boundary (or anything else in the policy evaluation chain) excludes some actions, you can run into issues like this even when you have explicitly allowed them.
It turns out that the required permission (s3:PutReplicationConfiguration) was being blocked by a preventive Control Tower guardrail placed on the OU the AWS account lives in. Unfortunately, this Deny is not visible from anywhere within the AWS account, since it is enforced at the organization level (as a service control policy) rather than through any permission boundary or IAM policy. It took some investigation by our internal IT team to trace the Deny back to the guardrail.
https://docs.aws.amazon.com/controltower/latest/userguide/elective-guardrails.html#disallow-s3-ccr

Orphaned AWS S3 Bucket Cannot Be Deleted

After making some changes for an AWS-hosted static website, I deleted an S3 bucket through the AWS console. However, the bucket is now orphaned: although it is no longer listed in the console, I can still reach what is left of it through the CLI and through the URI.
When I try to recreate a www bucket with the same name, the AWS console returns the following error:
Bucket already exists
The bucket with issues has a www prefix, so now I have two different versions (www and non-www) of the same website.
The problem URIs are:
www.michaelrieder.com and www.michaelrieder.com.s3-eu-west-1.amazonaws.com
I made many failed attempts to delete the bucket using the aws s3 CLI utility. I tried aws s3 rb --force, aws s3 rm, and any other command I thought might remotely work.
I need to delete and recreate the bucket with exactly the same name so I can get www website redirection working correctly, as AWS enforces static-website naming conventions strictly (the bucket name must match the site's hostname).
For example, when I execute this aws s3 CLI command:
aws s3 rb s3://www.michaelrieder.com --force --debug
A typical CLI error message is:
An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I thought it might be a cache-related issue and that the bucket would flush itself after a period of time, but the issue has persisted for over 48 hours.
It seems to be a permissions issue, but I cannot find a way to change the phantom bucket's permissions, or any way to delete the bucket or even its individual objects, since I do not have access to the bucket via the AWS console or the aws s3 CLI.
Appreciate any ideas. Please help.
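One thing worth ruling out: if versioning was ever enabled on the bucket, aws s3 rb --force does not remove old object versions or delete markers, and those leftovers can keep the bucket alive. A sketch of that cleanup (boto3; it assumes your credentials still own the bucket; if the AccessDenied means the bucket name has since been claimed by another account, no client-side command will help):

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("www.michaelrieder.com")

# Delete every object version and delete marker, which `aws s3 rb --force`
# skips, then delete the now-empty bucket itself.
bucket.object_versions.delete()
bucket.delete()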

Error when uploading file with fineuploader using the "no server-side code" approach

To upload files to S3, I'm following the tutorial at http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/
I got the first few errors out of the way, but I'm stuck on this one:
Failed to load resource: The certificate for this server is invalid.
You might be connecting to a server that is pretending to be
“videoupload.mobdesignapps.fr.s3.amazonaws.com” which could put your
confidential information at risk.
I'm hosting the site on S3 and need to upload files to another S3 bucket.
Any clue appreciated.
Found the answer in this question: Amazon S3 Bucket naming convention causes conflict with certificate
Essentially, the bucket name must not contain any periods (".") to work with the default S3 SSL certificate: the wildcard certificate covers only *.s3.amazonaws.com, so a dotted name like videoupload.mobdesignapps.fr.s3.amazonaws.com introduces extra subdomain levels that the wildcard cannot match.
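If renaming the bucket isn't possible, path-style addressing avoids the certificate mismatch for server-side access, because the bucket name moves from the hostname into the URL path. A boto3 sketch (not a Fine Uploader fix; browser uploads still need a dot-free bucket name or a custom certificate), with made-up file and key names:

import boto3
from botocore.config import Config

# Path-style requests go to https://s3.<region>.amazonaws.com/<bucket>/<key>,
# so the dotted bucket name never has to match the wildcard certificate.
s3 = boto3.client(
    "s3",
    region_name="eu-west-1",
    config=Config(s3={"addressing_style": "path"}),
)
s3.upload_file("clip.mp4", "videoupload.mobdesignapps.fr", "clips/clip.mp4")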

How to upload files directly to Amazon S3 from a remote server?

Is it possible to upload a file to S3 from a remote server?
The remote server is basically a URL-based file server. For example, http://example.com/1.jpg serves the image. The server doesn't do anything else, and I can't run code on it.
Is it possible to have another server tell S3 to upload the file from http://example.com/1.jpg?
upload from http://example.com/1.jpg
server -------------------------------------------> S3 <-----> example.com
If you can't run code on the server or execute requests from it then, no, you can't do this. You will have to download the file to a server or computer that you own and upload it from there.
You can see the operations you can perform on Amazon S3 at http://docs.amazonwebservices.com/AmazonS3/latest/API/APIRest.html
Checking the operations for both the REST and SOAP APIs, you'll see there's no way to give Amazon S3 a remote URL and have it grab the object for you. All of the PUT requests require the object's data to be provided as part of the request, meaning the server or computer that initiates the request needs to have the data.
I had a similar problem in the past where I wanted to download my users' Facebook thumbnails and upload them to S3 for use on my site. The way I did it was to download the image from Facebook into memory on my server, then upload it to Amazon S3; the whole round trip took under 2 seconds. After the upload to S3 completed, I wrote the bucket/key to a database.
Unfortunately there's no other way to do it.
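A sketch of that download-then-upload hop in Python (requests plus boto3; the bucket and key are placeholders):

import boto3
import requests

# Pull the remote file into memory, then hand the bytes straight to S3.
resp = requests.get("http://example.com/1.jpg", timeout=10)
resp.raise_for_status()

boto3.client("s3").put_object(
    Bucket="my-bucket",
    Key="images/1.jpg",
    Body=resp.content,
    ContentType=resp.headers.get("Content-Type", "image/jpeg"),
)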
I think the suggestion provided is quite good: you can scp the file to an EC2 instance and push it into the S3 bucket from there. Using the .pem file gives you passwordless authentication, and a PHP script can validate the file extensions and pass the file as an argument to the scp command.
The only problem with this solution is that you must have an instance in AWS. You can't use this approach if your website is hosted with another hosting provider and you are trying to upload files straight to an S3 bucket.
Technically it's possible using AWS Signature Version 4. Assuming your remote server plays the client role, you could prepare a signed upload form on the main server and send the form fields to the remote server for it to POST with curl. Detailed example here.
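A sketch of the signing side using boto3's presigned-POST helper (the bucket and key are invented); the server that runs this needs AWS credentials, but the machine that performs the upload needs nothing except curl:

import boto3

# The credentialed server generates signed form fields; any machine can then
# upload with a plain HTTP POST, no AWS SDK or keys required.
post = boto3.client("s3").generate_presigned_post(
    Bucket="example-upload-bucket",
    Key="incoming/1.jpg",
    ExpiresIn=3600,  # signature validity in seconds
)

# Send post["url"] and post["fields"] to the remote machine, which runs
# roughly: curl -F <field>=<value> ... -F file=@1.jpg <url>
print(post["url"])
print(post["fields"])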
You can use the scp command from a terminal:
1) Using the terminal, go to the directory containing the file you want to transfer to the server.
2) Type this:
scp -i yourAmazonKeypairPath.pem fileNameThatYouWantToTransfer.php ec2-user@ec2-00-000-000-15.us-west-2.compute.amazonaws.com:
N.B. Add "ec2-user@" before the ec2-... hostname you got from the EC2 console; forgetting it is such an easy mistake to make! (Note that this copies the file to an EC2 instance, not directly to S3.)
3) Your file will be uploaded and the progress will be shown. When it reaches 100%, you are done!