I am trying to upload a file larger than 40MB, but it fails and I get the error below:
<Error>
<Code>EntityTooLarge</Code>
<Message>Your proposed upload exceeds the maximum allowed size</Message>
<ProposedSize>41945391</ProposedSize>
<MaxSizeAllowed>41943040</MaxSizeAllowed>
<RequestId>yyy</RequestId>
<HostId>xxx</HostId>
</Error>
I contacted Amazon, and they confirmed that they haven't put any restriction on our bucket.
I am using the ng-file-upload directive to upload the file. Has anyone had this problem using the ng-file-upload Angular directive while uploading files larger than 40MB?
I have checked the .js files in the directive above and can't see anything that checks the size, but I want to double-check in case I am missing something.
Thanks in advance.
The problem was in the policy signature we were creating: in it, we were setting the maximum size to 40MB.
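For anyone hitting the same thing, here is a rough sketch (Ruby, with a placeholder bucket name, key prefix and limits, not our actual values) of the kind of POST policy document that gets signed for browser uploads. The content-length-range condition is what enforces the cap, so raising its upper bound lifts the 40MB limit:
require 'json'
require 'base64'

# Placeholder policy document; bucket, key prefix and limits are examples only.
policy = {
  "expiration" => (Time.now.utc + 3600).strftime("%Y-%m-%dT%H:%M:%SZ"),
  "conditions" => [
    { "bucket" => "example-upload-bucket" },
    ["starts-with", "$key", "uploads/"],
    { "acl" => "private" },
    # 41943040 bytes = 40MB was the old cap; this raises it to 100MB.
    ["content-length-range", 0, 104857600]
  ]
}

# The Base64-encoded policy is what gets signed with the AWS secret key
# and handed to the browser-side uploader along with the signature.
encoded_policy = Base64.strict_encode64(policy.to_json)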
I have a 1 GB zip file in an S3 bucket. After downloading it, I can't seem to unzip it. It always says:
End-of-central-directory signature not found. Either this file is not
a zipfile, or it constitutes one disk of a multi-part archive. In the
latter case the central directory and zipfile comment will be found on
the last disk(s) of this archive.
Later, I downloaded it again, using s3cmd this time. It says:
WARNING: MD5 signatures do not match: computed=384c9a702c2730a6b46d21606137265d, received="b42099447c7a1a390d8e7e06a988804b-18"
Is there any S3 limitation I need to know about, or is this a bug?
This question seems dead, but I'll answer it for anyone landing here:
Amazon S3's multipart uploads (the kind suitable for big files) produce ETag values that no longer match the file's MD5, so if you're using the ETag as a checksum (as it seems from your received MD5), it won't work.
The best you can do for validation is to make sure a Content-MD5 header is added to every part of your multipart upload, ensuring the file does not get corrupted during upload, and to add your own MD5 metadata field for checking the data after download.
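As a rough sketch (Ruby, and assuming you know the part size that was used for the upload, e.g. 8MB), this is how that "...-18" style ETag is derived, so you can recompute it locally and compare it with what S3 reports:
require 'digest'

# MD5 each part, then MD5 the concatenated binary digests and append "-<part count>".
def multipart_etag(path, part_size = 8 * 1024 * 1024)
  part_digests = []
  File.open(path, 'rb') do |file|
    while (chunk = file.read(part_size))
      part_digests << Digest::MD5.digest(chunk)
    end
  end
  "#{Digest::MD5.hexdigest(part_digests.join)}-#{part_digests.size}"
end

puts multipart_etag('archive.zip')   # compare with the ETag S3 returned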
Thanks @ergoithz for reminding me that I had this question :) The problem is already fixed; the AWS SDK for Node.js was the culprit. Apparently it cannot upload large files using stream data from fs.createReadStream(), so I switched to Knox, where it worked perfectly.
I use Responsive Filemanager for several websites that I host. I have the latest version (9.6.6) installed, and I also use the TinyMCE plugin for jQuery TinyMCE version 4, but my problem occurs with the standalone file manager as well as the plugin, so I doubt this is important.
Anyhow, my problem is the following: everything seems to work just fine when I upload files smaller than exactly 2 megabytes. Using a dummy file generator, I have been able to generate a PDF file of exactly 2097152 bytes, which uploads fine, and a PDF file of 2097153 bytes, which doesn't upload.
Responsive Filemanager always says the upload went fine (with both the Standard Uploader and the Java uploader), but any file bigger than 2097152 bytes doesn't actually get uploaded.
Here's a video demonstrating precisely what the problem is: https://youtu.be/NDtZHS6FYvg
Since my RF config allows files up to 100MB (see the entire config here: http://pastebin.com/H9fvh1Pg), I'm guessing it might be something with my server settings? I'm using XAMPP for Windows. Could it be that there are some settings in my Apache config or something like that which block uploads over HTTP bigger than 2MB?
Thank you for your help!
EDIT: fixed typos and added links + a video showing the problem.
I managed to find the solution to my own problem. I couldn't believe some sort of bug would cause any file even slightly bigger than 2 MB to fail, so after a while I figured out it had to be something with the server itself, and indeed, in php.ini I found the following line:
upload_max_filesize = 2M
Changing this to a bigger number fixed the problem for me. It would be nice if Responsive Filemanager had a way of informing the user that the upload did not actually complete successfully due to a php.ini server setting, but ah well...
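For reference, these are the two php.ini directives involved (example values; post_max_size must be at least as large as upload_max_filesize, or uploads will still be cut off). Restart Apache after changing them:
upload_max_filesize = 100M
post_max_size = 100M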
You just need to change the Responsive Filemanager config file, i.e. config.php:
'MaxSizeUpload' => 10,
Just change the MaxSizeUpload variable and check.
I am using CloudFront to serve assets stored in S3. Most of the files work fine, but some do not, specifically my fonts.
I am completely stumped as to why:
https://xxxxxx.cloudfront.net/assets/application-xxxxxxx.js
returns fine, but
https://xxxxxx.cloudfront.net/assets/fontawesome-webfont.woff?v=3.1.0
returns:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>xxxxxx</RequestId>
<HostId>xxxxxx</HostId>
</Error>
Does anyone know why this is? I suspect it has to do with CORS, but I am using the CORS configuration specified in this answer. And the request is returned as forbidden in all browsers, not just Firefox.
Any help would be greatly appreciated.
After you fix your S3 permissions, you then need to invalidate CloudFront's cache of error messages from that URL.
Or wait 24 hours. Whichever.
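For example, with the aws-sdk-cloudfront gem (just a sketch; the distribution ID below is a placeholder), the cached error response can be flushed like this:
require 'aws-sdk-cloudfront'

client = Aws::CloudFront::Client.new(region: 'us-east-1')
client.create_invalidation(
  distribution_id: 'EXAMPLEDISTID',             # placeholder distribution ID
  invalidation_batch: {
    caller_reference: Time.now.to_i.to_s,       # must be unique per request
    paths: { quantity: 1, items: ['/assets/fontawesome-webfont.woff*'] }
  }
)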
When you invalidate the cache, you can't just update the assets. I had to create a whole new CloudFront distribution.
I'd suggest doing that: create a new CloudFront distribution, point your server to it, and delete your old one.
I created a bucket on Amazon S3 and went to the URL (which is a URL I need to put in an initializer in my Rails app):
https://mtest73.s3.amazonaws.com/
and got this message
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>NoSuchBucket</Code>
<Message>The specified bucket does not exist</Message>
<BucketName>mtest73</BucketName>
<RequestId>9FBDCC50303F4306</RequestId>
<HostId>
owG6PSSjvcS7QZwEMKzTjMnYiwclXkRG7QGIF/Ly+jc8mHnmbvWHXqitDzfjnzgM
</HostId>
</Error>
However, in the Amazon console I've even uploaded a small file to this bucket.
Is there a reason for this? I thought it might be saying the bucket doesn't exist for security reasons, but if there's something I've done wrong, it might be why I can't get my Rails application to work...
As @JohnFlatness and @pjotr pointed out, I wrote the wrong URL:
https://mtest73.s3.amazonaws.com/
It should have been
https://73mtest.s3.amazonaws.com/
I am trying to upload some static data to my AWS S3 account.
I am using the aws/s3 gem for this purpose.
I have a simple upload button on my webpage which hits the controller, where it creates the AWS connection and tries uploading data to AWS S3.
The connection to AWS is successful; however, while trying to store data in S3, I always get the following error: Errno::EPIPE: Broken pipe.
I tried running the same piece of code from s3sh (the S3 shell) and I am able to execute all the calls properly.
Am I missing something here? I have been facing this issue for quite some time now.
My config is: Ruby 1.8, Rails 3, Mongrel, S3 bucket region US.
Any help will be great.
I think the broken pipe error could mean a lot of things. I was experiencing it just now and it was because the bucket name in my s3.yml configuration file didn't match the name of the bucket I created on Amazon (typo).
So for people running into this answer in the future, it could be something as silly and simple as that.
In my case the problem was with the file size. S3 puts a 5GB limit on a single PUT upload. Chopping the file up into several 500MB files worked for me.
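On a Unix-like machine, splitting and later reassembling the file can be as simple as this (file names are just examples):
split -b 500m backup.tar backup.tar.part-
# upload the pieces, download them later, then reassemble:
cat backup.tar.part-* > backup.tar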
I also had this issue uploading my application.css, which had a compiled file size > 1.1MB. I set the fog region with:
config.fog_region = 'us-west-2'
and that seems to have fixed the issue for me...
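In case it helps, here is a sketch of the initializer that setting lives in. This assumes the asset_sync gem; the bucket name, region and credentials below are placeholders, not my actual values:
AssetSync.configure do |config|
  config.fog_provider          = 'AWS'
  config.fog_directory         = 'example-assets-bucket'   # placeholder bucket
  config.fog_region            = 'us-west-2'               # must match the bucket's region
  config.aws_access_key_id     = ENV['AWS_ACCESS_KEY_ID']
  config.aws_secret_access_key = ENV['AWS_SECRET_ACCESS_KEY']
end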