I am using CloudFront to serve assets stored in S3. Most of the files work fine, but some do not, specifically my fonts.
I am completely stumped as to why:
https://xxxxxx.cloudfront.net/assets/application-xxxxxxx.js
returns fine, but
https://xxxxxx.cloudfront.net/assets/fontawesome-webfont.woff?v=3.1.0
returns:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>xxxxxx</RequestId>
<HostId>xxxxxx</HostId>
</Error>
Does anyone know why this is? I suspect it has to do with CORS, but I am using the CORS configuration specified in this answer. And the request is returned as forbidden in all browsers, not just Firefox.
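For reference, the kind of CORS rule I mean looks roughly like the following, sketched here with boto3 (the bucket name is a placeholder, and this is not necessarily identical to the linked answer's configuration):

import boto3

# Sketch: a typical S3 CORS configuration for serving webfonts cross-origin.
# The bucket name and allowed origins are placeholders -- adjust to your setup.
s3 = boto3.client("s3")
s3.put_bucket_cors(
    Bucket="my-asset-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["*"],           # or restrict to your site's origin
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)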
Any help would be greatly appreciated.
After you fix your S3 permissions, you then need to invalidate CloudFront's cache of error messages from that URL.
Or wait 24 hours. Whichever.
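If it helps, the invalidation can also be kicked off from code; a minimal sketch with boto3 (the distribution ID is a placeholder):

import time
import boto3

# Sketch: invalidate CloudFront's cached copies (including cached error responses)
# for the affected paths. The distribution ID is a placeholder.
cloudfront = boto3.client("cloudfront")
cloudfront.create_invalidation(
    DistributionId="EXXXXXXXXXXXXX",
    InvalidationBatch={
        "Paths": {
            "Quantity": 1,
            "Items": ["/assets/fontawesome-webfont.woff*"],
        },
        "CallerReference": str(time.time()),  # any unique string
    },
)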
In my case, invalidating the cache and updating the assets wasn't enough; I had to create a whole new CloudFront distribution.
I'd suggest doing that: create a new CloudFront distribution for your assets, point your server at it, and delete the old one.
I am trying to upload a file larger than 40MB, but it fails and I get the error below:
<Error>
<Code>EntityTooLarge</Code>
<Message>Your proposed upload exceeds the maximum allowed size</Message>
<ProposedSize>41945391</ProposedSize>
<MaxSizeAllowed>41943040</MaxSizeAllowed>
<RequestId>yyy</RequestId>
<HostId>xxx</HostId>
</Error>
I contacted Amazon and they confirmed that they haven't put any restriction on our bucket.
I am using the ng-file-upload directive to upload the file. Has anyone had this problem using the ng-file-upload Angular directive while uploading files larger than 40MB?
I have checked the .js files in the directive and can't see anything that checks the size, but I want to double-check in case I am missing something.
Thanks in advance.
The problem was in the policy signature we were creating: in it, we were setting max_size to 40MB.
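For anyone hitting the same limit: if the POST policy is generated server-side, the cap lives in the content-length-range condition. A rough sketch with boto3 (not necessarily how we built ours; the bucket and key are placeholders):

import boto3

# Sketch: generate a browser-upload (POST) policy whose content-length-range
# allows files up to 100 MB instead of 40 MB. Bucket and key are placeholders.
s3 = boto3.client("s3")
post = s3.generate_presigned_post(
    Bucket="my-upload-bucket",
    Key="uploads/big-file.bin",
    Conditions=[
        ["content-length-range", 0, 100 * 1024 * 1024],  # min and max size in bytes
    ],
    ExpiresIn=3600,
)
# post["url"] and post["fields"] are what the browser-side form
# (e.g. ng-file-upload) submits along with the file.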
I am working with MediaWiki.
I want to change the upload directory path to AWS S3. I tried these two extensions, but I am getting some warning messages,
and I don't know whether these extensions are working correctly:
https://www.mediawiki.org/wiki/Extension:LocalS3Repo and
https://www.mediawiki.org/wiki/Extension:AWS
If anybody is working with these extensions, or has achieved this in some other way,
please explain how.
I have been successfully using the method described here, though in step 6, rather than using an Apache rewrite, I changed the image paths in LocalSettings.php.
(It was quite a lot of work though, and I never figured out a way to set the Cache-Control and Expires headers on the files, which was the real reason why I wanted to do it to begin with.)
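On the Cache-Control point: if the files are pushed to S3 outside of MediaWiki, the headers can at least be set at upload time. A rough boto3 sketch (bucket, key, and dates are placeholders; I have not wired this into the extensions themselves):

from datetime import datetime
import boto3

# Sketch: upload a file with Cache-Control and Expires stored as object metadata,
# so S3 serves those headers with the file. Bucket, key, and dates are placeholders.
s3 = boto3.client("s3")
s3.upload_file(
    "images/example.png",
    "my-wiki-bucket",
    "images/example.png",
    ExtraArgs={
        "ContentType": "image/png",
        "CacheControl": "public, max-age=31536000",
        "Expires": datetime(2030, 1, 1),
    },
)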
I have an mp3 file on S3 (and have experienced this with many other mp3 files) that is not playing in Chrome (and other browsers as well: FF, Safari, etc.). The network dialog in Chrome shows a pending request that is seemingly never responded to by S3; however, if I do a wget to the URL, I get an immediate response.
Additionally, if I serve the exact same file off of a server running nginx, I can access the URL in Chrome instantaneously. I know that S3 supports byte-range requests, so there should be no issue with Chrome's byte-range queries. Additionally, I've verified that the file is accessible and that its content type is audio/mpeg.
Here is the file in question:
http://s3.amazonaws.com/josh-tmdbucket/23/talks/ffcc525a0761cd9e7023ab51c81edb781077377d.mp3
Here is a screenshot of the network requests in chrome for that URL:
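For anyone debugging something similar, one quick sanity check is to send a small Range request directly to S3 and inspect the status and headers; a sketch using Python's requests library against the URL above:

import requests

# Sketch: ask S3 for just the first kilobyte and inspect the response headers.
# A healthy byte-range response should come back as 206 Partial Content.
url = "http://s3.amazonaws.com/josh-tmdbucket/23/talks/ffcc525a0761cd9e7023ab51c81edb781077377d.mp3"
resp = requests.get(url, headers={"Range": "bytes=0-1023"}, timeout=10)
print(resp.status_code)                   # expect 206 if range requests are honored
print(resp.headers.get("Content-Type"))   # expect audio/mpeg
print(resp.headers.get("Content-Range"))  # e.g. bytes 0-1023/<total size>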
I solved this by creating a CloudFront distribution. You need to create a distribution for your bucket. For example, if you have a bucket named example-bucket, go to CloudFront and click on Create Distribution. Your bucket will appear under Origin Domain Name as example-bucket.s3.amazonaws.com.
Now you can use the distribution's domain name (the one ending in cloudfront.net) to load the content.
This worked for me, but I am not sure if it will work for others.
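For completeness, the same setup scripted with boto3 looks roughly like this (the bucket name and IDs are placeholders, and the console's defaults may differ slightly):

import time
import boto3

# Sketch: put a CloudFront distribution in front of an S3 bucket, roughly what
# the console steps above do. "example-bucket" and the IDs are placeholders.
cloudfront = boto3.client("cloudfront")
response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "Serve example-bucket through CloudFront",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "s3-example-bucket",
                    "DomainName": "example-bucket.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-example-bucket",
            "ViewerProtocolPolicy": "allow-all",
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "MinTTL": 0,
        },
    }
)
# The address to load content from is the distribution's domain name:
print(response["Distribution"]["DomainName"])  # e.g. dxxxxxxxxxxxx.cloudfront.net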
I had the exact same issue with files.
The original URL looked like this:
https://my-bucket-name.s3-eu-west-1.amazonaws.com/EIR.mp4
I added a CloudFront distribution and it solved all my issues.
The URL changed only a bit:
https://my-bucket-name.s3.amazonaws.com/EIR.mp4
(but you can modify it a little while creating the distribution, or even set up your own DNS if you wish).
I have a CloudFront distribution that serves files from AWS S3.
Check the following files:
http://s3.amazonaws.com/kekanto_static/css/site.v70.css
http://s3.amazonaws.com/kekanto_static/css/site.v71.css
If you take a look at the headers, you'll realize that BOTH files contain the entry:
Content-Encoding: gzip
Both are properly gzipped in the S3 bucket as it should be.
But when served from CloudFront, the Content-Encoding header is not coming through:
http://img.kekanto.com/css/site.v70.css
http://img.kekanto.com/css/site.v71.css
Any idea why this is happening?
I also checked the Cloudfront endpoints:
http://d1l2emnu9r9dtc.cloudfront.net/css/site.v70.css
http://d1l2emnu9r9dtc.cloudfront.net/css/site.v71.css
And the problem remains the same.
EDIT:
It looks like after an invalidation everything is working properly again, so you will be unable to test the URIs I've given.
All I can think of is that the S3 bucket makes the file available, and the headers only become available a short while later, which sometimes causes the headers to get skipped.
How can I prevent this? Other than making my upload process sleep for a while before letting my web servers know there's a new version?
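One thing that might help: if the gzipped body and its headers are written in a single PUT, there is no window in which the object exists without its metadata. A boto3 sketch of what I mean (the local file name is a placeholder; the bucket and key match the examples above):

import gzip
import boto3

# Sketch: gzip the CSS and upload it in one request, with Content-Encoding and
# Content-Type set as part of the same PUT, so the object is never available
# without its headers. The local file name is a placeholder.
s3 = boto3.client("s3")
with open("site.v71.css", "rb") as f:
    body = gzip.compress(f.read())
s3.put_object(
    Bucket="kekanto_static",
    Key="css/site.v71.css",
    Body=body,
    ContentType="text/css",
    ContentEncoding="gzip",
)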
I created a bucket on Amazon S3 and went to the URL (which is a URL I need to put in an initializer in my Rails app):
https://mtest73.s3.amazonaws.com/
and got this message
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
<BucketName>mtest73</BucketName>
<RequestId>9FBDCC50303F4306</RequestId>
<HostId>
owG6PSSjvcS7QZwEMKzTjMnYiwclXkRG7QGIF/Ly+jc8mHnmbvWHXqitDzfjnzgM
</HostId>
</Error>
However, in the Amazon console I've even uploaded a small file to this bucket.
Is there a reason for this? I thought it might be saying the bucket doesn't exist for security reasons, but if there's something I've done wrong, it might be why I can't get my Rails application to work...
As @JohnFlatness and @pjotr pointed out, I wrote the wrong URL:
https://mtest73.s3.amazonaws.com/
It should have been
https://73mtest.s3.amazonaws.com/
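A quick way to catch this kind of typo is to list the buckets the credentials can actually see, for example with boto3:

import boto3

# Sketch: print the bucket names visible to these credentials, to catch a
# transposed or misspelled bucket name before wiring it into an initializer.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])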