On my site I upload media files. My uploader splits each upload into smaller chunks, and once the upload is complete, the original file is recreated by merging the chunks.
The issue I run into is that when Cloudflare is enabled, each chunk request takes an awfully long time. An example is displayed here: http://testnow.ga
The file is chunked every 5 MB: each chunk is saved on the server, the server responds to the client's AJAX request, and the next 5 MB upload request starts. The waiting time (TTFB) in this particular case ranges anywhere from 2 to 10 seconds. When the chunk size is 50 MB, for example, the wait can be up to two minutes.
How can I speed up this process with Cloudflare? How can I exclude that specific /upload URL so it doesn't go through Cloudflare?
PS: the reason I'm not asking Cloudflare directly is that I did, a week ago and again a few days ago, and haven't gotten a response yet. Thanks!
One option is to submit the data to a subdomain. In Cloudflare, grey-cloud that DNS entry; the data is then sent directly to your server, bypassing Cloudflare.
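For illustration, here is a minimal sketch of the client side of that approach. The real client is browser-side AJAX, but the idea is the same: the main site stays orange-clouded while the chunks go to a hypothetical grey-clouded subdomain (upload.example.com) with a hypothetical /upload endpoint that accepts one chunk per POST.

import os
import requests  # third-party HTTP client

CHUNK_SIZE = 5 * 1024 * 1024  # 5 MB, matching the chunk size described above
# Hypothetical grey-clouded subdomain: its DNS record points straight at the
# origin server, so these requests never pass through Cloudflare's proxy.
UPLOAD_URL = "https://upload.example.com/upload"

def upload_in_chunks(path):
    total = os.path.getsize(path)
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            # One request per chunk; the server appends the chunks and merges
            # them into the original file once the last one arrives.
            resp = requests.post(
                UPLOAD_URL,
                files={"chunk": (os.path.basename(path), chunk)},
                data={"index": index, "total_size": total},
            )
            resp.raise_for_status()
            index += 1

upload_in_chunks("video.mp4")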
My website is consuming much more bandwidth than it is supposed to. From Webalizer or AWStats in WHM/cPanel I can monitor the bandwidth usage and which types of files (jpg, png, php, css, etc.) are consuming the bandwidth, but I can't get any specific file names. My assumption is that the bandwidth is being consumed by referral spamming, but from the "Visitors" page of cPanel I can only see the last 1000 hits. Is there any way to see which image or CSS file is consuming the bandwidth?
If there is a particular file which you think is consuming the most bandwidth, you can use the apachetop tool.
yum install apachetop
then run
apachetop -f /var/log/apache2/domlogs/website_name-ssl.log
Replace website_name with the domain you wish to inspect.
It basically reads the entries from the domlogs (which record the requests served for each website; you can read more about domlogs here).
This shows, in real time, which files are being requested the most, and may give you an idea of whether a particular image, PHP file, etc. is receiving the most requests.
Domlogs are a way to find out which file is being requested, and by which bot or client. Your initial investigation may start from this point.
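If apachetop is not available, a rough per-file bandwidth summary can be pulled from the same domlog with a short script. This is only a sketch: it assumes the standard Apache combined log format (response size is the field after the status code), and the log path is the same hypothetical one used in the command above; adjust both for your setup.

import re
from collections import Counter

# Hypothetical log path; point this at your own domlog, as in the apachetop command.
LOG = "/var/log/apache2/domlogs/website_name-ssl.log"

# Combined log format: ... "METHOD /path HTTP/1.1" status bytes ...
pattern = re.compile(r'"[A-Z]+ (\S+) [^"]*" \d{3} (\d+)')

bytes_per_path = Counter()
with open(LOG) as f:
    for line in f:
        m = pattern.search(line)
        if m:
            path, size = m.group(1), int(m.group(2))
            bytes_per_path[path] += size

# Top 20 paths by total bytes served
for path, total in bytes_per_path.most_common(20):
    print(f"{total / 1024 / 1024:8.1f} MB  {path}")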
I read Amazon s3: direct upload vs presigned url and was wondering when to use a direct upload from the backend to S3 vs. a presigned URL.
I understand that the direct upload requires extra bandwidth (user -> server -> S3), but I believe it's more secure. Do the savings in bandwidth with the presigned URL justify the slight drawback in security (i.e. with stuff like user messages)?
I am also checking the file types on the backend (via magic numbers), which I think is incompatible with presigned URLs. Should this reason alone rule out using presigned URLs?
In addition, I have a file size limit of 5 MB (not sure if this is considered large?). Would there be a significant difference in terms of performance and scalability (i.e. thousands to millions of files sent per hour) between using presigned URLs vs. direct upload?
Your question sounds like you're asking for an opinion, so here is mine:
It depends on how secure you need it to be and what you consider safe. I was wondering about the same questions, and I believe that in my case it is all secured by SSL encryption anyway (which is enough for me), so I prefer to save my server's bandwidth and memory usage.
Again, it depends on your own system requirements. In any case, if an upload fails, S3 returns an error cause after the failed request. If checking the file type is a must and checking it on your backend is the only way to do it, you already have your answer.
In a scenario with millions of files (close to 5 MB each) being sent every hour, I would recommend uploading directly to S3 via presigned URLs, because receiving and resending every file through your own server would mean a lot of RAM usage.
There are a few more advantages of uploading directly to S3, as you can read here.
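For reference, generating a presigned upload URL is only a few lines with boto3. This is a sketch with a hypothetical bucket and key; note that a presigned POST can at least enforce the 5 MB size limit server-side via a condition, while content checks like magic numbers would still have to happen after the fact, e.g. by a process that inspects the object once it lands in S3.

import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/key; the browser then POSTs the file straight to S3,
# so it never passes through (or consumes RAM on) your backend.
post = s3.generate_presigned_post(
    Bucket="my-upload-bucket",
    Key="uploads/user-123/message-attachment.bin",
    Conditions=[
        # S3 itself rejects anything over 5 MB, so the size limit is
        # enforced even though the upload bypasses your server.
        ["content-length-range", 0, 5 * 1024 * 1024],
    ],
    ExpiresIn=300,  # URL is only valid for 5 minutes
)

# post["url"] and post["fields"] are handed to the client, which submits
# a multipart/form-data POST directly to S3.
print(post["url"], post["fields"])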
I am trying to upload files that are larger than 100 MB through Cloudflare's network.
I want everything to run through Cloudflare's network because I don't want my website's IP to be known to the world.
Plupload can be used to chunk files before uploading them to the server.
This is what it says on Plupload's home page.
Upload in Chunks
Files that have to be uploaded can be small or huge - about several gigabytes in size. In such cases standard upload may fail, since browsers still cannot handle it properly. We slice the files in chunks and send them out one by one. You can then safely collect them on the server and combine into original file.
As a bonus this way you can overcome a server's constraints on uploaded file sizes, if any.
The last part is what catches my eye.
So can I use Plupload to bypass the 100 MB limit set by Cloudflare?
I've tested this out, and you can get past Cloudflare's limit by using Plupload's chunking. Cloudflare rejects a single upload request over 100 MB, so if we chunk the file into, say, 90 MB pieces, each request only sends 90 MB through Cloudflare, and that's not an issue.
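To illustrate the server side of that answer, here is a minimal sketch of a chunk receiver that reassembles the pieces. It assumes a hypothetical Flask endpoint and that the client (e.g. Plupload with chunking enabled) sends the usual chunk, chunks, and name form fields alongside the file part; each individual request stays under Cloudflare's 100 MB cap even though the final file is larger.

import os
from flask import Flask, request

app = Flask(__name__)
UPLOAD_DIR = "/tmp/uploads"  # hypothetical storage location

@app.route("/upload", methods=["POST"])
def upload():
    # Plupload's chunking sends 'chunk' (index), 'chunks' (total) and 'name'
    # with each request; the request body stays under Cloudflare's per-request
    # limit as long as the configured chunk size does.
    chunk = int(request.form.get("chunk", 0))
    chunks = int(request.form.get("chunks", 1))
    name = os.path.basename(request.form.get("name", "upload.bin"))
    part = request.files["file"]

    os.makedirs(UPLOAD_DIR, exist_ok=True)
    target = os.path.join(UPLOAD_DIR, name)

    # Append each chunk in order; chunk 0 truncates any previous attempt.
    mode = "wb" if chunk == 0 else "ab"
    with open(target, mode) as out:
        out.write(part.read())

    done = (chunk == chunks - 1)
    return {"received": chunk + 1, "of": chunks, "complete": done}

if __name__ == "__main__":
    app.run()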
Yes, chunking your uploads can work; I used ResumableJS to get around the upload limit.
The issue we are dealing with is moving or copying files from one folder on the Dropbox server to another folder on the Dropbox server.
The API requires sending a request for each file separately, which takes way too long.
Maybe you provide some kind of batch request so I could move more than one file per request?
I also know about the ability to move an entire folder's contents, but that doesn't work in our case, because we only need to move a subset of the files.
If we try to push many requests at once through several connections, we get 'Server Unavailable' or 'File Locked' errors and have to repeat the requests.
TL;DR:
Moving 1000 files that are already on the Dropbox server takes over 30 minutes.
What possible solutions do you have to increase the performance?
The Dropbox API now provides a batch endpoint for moving files. You can find the documentation here:
https://www.dropbox.com/developers/documentation/http/documentation#files-move_batch
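A rough sketch of what a call to that batch endpoint looks like over plain HTTP with Python's requests. The paths here are hypothetical, and the API has since gained a move_batch_v2 variant with a slightly different response shape, so check the linked documentation for the current endpoint, its async job-status counterpart, and any per-batch entry limits.

import requests

ACCESS_TOKEN = "YOUR_DROPBOX_ACCESS_TOKEN"  # placeholder

# Move many files in one request instead of one request per file.
# Hypothetical example paths; large batches may need to be split.
entries = [
    {"from_path": f"/incoming/file{i}.jpg", "to_path": f"/archive/file{i}.jpg"}
    for i in range(1000)
]

resp = requests.post(
    "https://api.dropboxapi.com/2/files/move_batch",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"entries": entries, "autorename": False},
)
resp.raise_for_status()
result = resp.json()

# Large batches run asynchronously: Dropbox returns an async_job_id that
# can be polled via the corresponding /check endpoint until the job completes.
print(result)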
I am running a simple server app to receive uploads from a fine-uploader web client. It is based on the fine-uploader Java example and is running in Tomcat6 with Apache sitting in front of it and using ProxyPass to route the requests. I am running into an occasional problem where the upload gets to 100% but ultimately fails. In the server logs, as well as on the client, I can see that Apache is timing out on the proxy with a 502 error.
After trying and seeing this myself, I realized the problem occurs with really large files. The Java server app was taking longer than 30 seconds to reassemble the chunks into a single file, so Apache would kill the connection and stop waiting. I have increased Apache's Timeout to 300 seconds, which should largely correct the problem, but the potential remains.
Any ideas on other ways to handle this so that the connection between Apache and Tomcat is not killed while the app is assembling the chunks on the server? I am currently using 2 MB chunks and was thinking maybe I should use a larger chunk size. Perhaps with fewer chunks to assemble, the server code could do it faster. I could test that, but unless the speedup is dramatic it seems like the potential for problems remains, and we would just be waiting for a large enough upload to come along to trigger them.
It seems like you have two options:
1. Remove the timeout in Apache.
2. Delegate the chunk-combination effort to a separate thread, and return a response to the request as soon as possible.
With the latter approach, you will not be able to let Fine Uploader know if the chunk combination operation failed, but perhaps you can perform a few quick sanity checks before responding, such as determining if all chunks are accessible.
There's nothing Fine Uploader can do here, the issue is server side. After Fine Uploader sends the request, its job is done until your server responds.
As you mentioned, it may be reasonable to increase the chunk size or make other changes to speed up the chunk combination operation to lessen the chance of a timeout (if #1 or #2 above are not desirable).
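The server in the question is Java/Tomcat, but option #2 is easy to sketch in a language-agnostic way. Here is a rough Python illustration (hypothetical file layout and function names) of handing the combination off to a background worker after a quick sanity check that every chunk is present, so the HTTP response goes back well before Apache's proxy timeout fires.

import os
import shutil
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)  # background combiner

def combine_chunks(chunk_dir, total_chunks, target_path):
    # Runs outside the request thread, so the client is not kept waiting.
    with open(target_path, "wb") as out:
        for i in range(total_chunks):
            with open(os.path.join(chunk_dir, f"part_{i}"), "rb") as part:
                shutil.copyfileobj(part, out)
    shutil.rmtree(chunk_dir)  # clean up the chunk directory when done

def handle_final_chunk_request(chunk_dir, total_chunks, target_path):
    # Quick sanity check before responding: are all chunks actually there?
    missing = [i for i in range(total_chunks)
               if not os.path.exists(os.path.join(chunk_dir, f"part_{i}"))]
    if missing:
        return {"success": False, "missing_chunks": missing}

    # Kick off the (potentially slow) combination and respond immediately.
    executor.submit(combine_chunks, chunk_dir, total_chunks, target_path)
    return {"success": True, "status": "combining in background"}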