Slow server time to first byte (improve server response time) - Apache

I'm running a WordPress site on Amazon EC2 (m1.medium).
I'm using a CDN to serve files and the W3 Total Cache plugin to improve performance (I cannot use page caching because I'm serving dynamic content and using PHP sessions).
I'm also using mod_deflate to keep performance good.
The problem I have is that from time to time the server response time is very slow. Looking at the server monitor I see nothing unusual, and CPU stays under 40%.
Sometimes the first byte is sent after 1.5 seconds, and sometimes it can take 6-8 seconds.
What can I do here?
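A first step is to measure where the time actually goes. Server-side, Apache can log how long each request took via the %D format specifier (microseconds); comparing that against the TTFB the client sees tells you whether the delay is inside PHP/WordPress or in the network path. A sketch (standard Apache directives; the log name is just an example):

```apache
# httpd.conf - add per-request duration (%D, in microseconds) to the access log
LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
CustomLog "logs/access_timed_log" timed
```

Client-side, `curl -o /dev/null -s -w '%{time_starttransfer}\n' https://example.com/` prints the TTFB for a single request. If %D is large whenever TTFB is large, the time is being spent in WordPress/PHP rather than in transit.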

Related

Prevent Cloudflare 524 on long running scripts

It seems Cloudflare times out after 100 seconds of not receiving a response from the server. I have some scripts that take longer than that to run. Is there anything within Cloudflare that can be used to bypass that limit, e.g. page rules? Or is there another way around it without having to recode things, set up bypass routes, or run on a separate server?

Cloudflare Page Speed HTTP Headers

I've set Cloudflare up and it's working great. The only problem I have is that this keeps coming up in PageSpeed Insights:
"Setting an expiry date or a maximum age in the HTTP headers for static resources instructs the browser to load previously downloaded resources from local disk rather than over the network."
I've set the cache at Cloudflare to 4 days and PageSpeed is still picking up on this. Is there a bit of code I'm missing here?
https://developers.google.com/speed/pagespeed/insights/?url=https%3A%2F%2Fwww.vouchertoday.uk
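PageSpeed is looking at the Cache-Control/Expires headers the browser receives, and Cloudflare's edge cache TTL does not by itself change what your origin tells the browser. One common fix is to set expiry headers at the origin; a hedged Apache example (assumes mod_expires is enabled, and the MIME types and lifetimes are illustrative):

```apache
# .htaccess - send far-future Expires/Cache-Control for static assets
<IfModule mod_expires.c>
  ExpiresActive On
  ExpiresByType image/png              "access plus 1 month"
  ExpiresByType image/jpeg             "access plus 1 month"
  ExpiresByType text/css               "access plus 1 week"
  ExpiresByType application/javascript "access plus 1 week"
</IfModule>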

How to ignore Cloudflare for my uploader?

On my site I upload media files. My uploader splits these uploads into smaller chunks, and once the upload is complete, the original is recreated by merging the chunks.
The issue I run into is that when Cloudflare is enabled, each chunk request takes an awfully long time. An example is displayed here: http://testnow.ga
The file is chunked every 5 MB. Each chunk is saved on the server, an AJAX response is sent back to the client, and then the next 5 MB upload request starts. The waiting time (TTFB) in this particular case ranges anywhere from 2 to 10 seconds. When the chunk size is 50 MB, for example, the waiting can be up to two minutes.
How can I speed up this process with Cloudflare? How can I make that specific /upload URL bypass Cloudflare entirely?
PS: the reason I'm not asking at Cloudflare is that I did a week ago, and again a few days ago, and haven't gotten a response yet. Thanks!
One option is to use a subdomain to submit the data to. At Cloudflare, grey-cloud that DNS entry; data is then sent directly to your server, bypassing Cloudflare.

Server timeout when re-assembling the uploaded file

I am running a simple server app to receive uploads from a Fine Uploader web client. It is based on the fine-uploader Java example and runs in Tomcat 6, with Apache sitting in front of it and using ProxyPass to route the requests. I am running into an occasional problem where the upload gets to 100% but ultimately fails. In the server logs, as well as on the client, I can see that Apache is timing out on the proxy with a 502 error.
After trying this and seeing it myself, I realized the problem occurs with really large files: the Java server app was taking longer than 30 seconds to reassemble the chunks into a single file, so Apache would kill the connection and stop waiting. I have increased Apache's Timeout to 300 seconds, which should largely correct the problem, but the potential remains.
Any ideas on other ways to handle this so that the connection between Apache and Tomcat is not killed while the app is assembling the chunks on the server? I am currently using 2 MB chunks and was thinking maybe I should use a larger chunk size. Perhaps with fewer chunks to assemble, the server code could do it faster. I could test that, but unless the speedup is dramatic, the potential for problems remains, and it will just be a matter of waiting for a large enough upload to come along to trigger them.
It seems like you have two options:
1. Remove the timeout in Apache.
2. Delegate the chunk-combination effort to a separate thread, and return a response to the request as soon as possible.
With the latter approach, you will not be able to let Fine Uploader know if the chunk combination operation failed, but perhaps you can perform a few quick sanity checks before responding, such as determining if all chunks are accessible.
There's nothing Fine Uploader can do here, the issue is server side. After Fine Uploader sends the request, its job is done until your server responds.
As you mentioned, it may be reasonable to increase the chunk size or make other changes to speed up the chunk combination operation to lessen the chance of a timeout (if #1 or #2 above are not desirable).
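The second option can be sketched as follows. This is a Python illustration of the idea (the server in the question is Java, where an ExecutorService plays the same role); the function names are hypothetical: run a quick sanity check, hand the slow combination to a background worker, and respond to the final chunk request immediately.

```python
import os
import shutil
from concurrent.futures import ThreadPoolExecutor

# Background worker pool for chunk combination (pool size is an arbitrary choice).
combiner = ThreadPoolExecutor(max_workers=2)

def combine_chunks(chunk_paths, target_path):
    """Concatenate chunk files into the final upload (runs off the request thread)."""
    with open(target_path, "wb") as out:
        for path in chunk_paths:
            with open(path, "rb") as chunk:
                shutil.copyfileobj(chunk, out)

def handle_final_chunk(chunk_paths, target_path):
    """Called when the last chunk arrives: sanity-check, then respond right away."""
    # Quick sanity check before responding: are all chunks accessible?
    if not all(os.path.exists(p) for p in chunk_paths):
        return {"success": False, "error": "missing chunk"}
    # Delegate the slow combination to a separate thread and return immediately,
    # so the proxy timeout between Apache and the app server never fires.
    combiner.submit(combine_chunks, chunk_paths, target_path)
    return {"success": True}
```

The trade-off is exactly the one noted above: by the time the combination fails (if it does), the HTTP response has already been sent, so only the cheap pre-checks can be reported back to Fine Uploader.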

'Content-Length too long' when uploading file using Tornado

Using a slightly modified version of this Tornado upload app on my development machine, I get the following error from the Tornado server and a blank page whenever I try to upload large files (100MB+):
[I 130929 07:45:44 httpserver:330] Malformed HTTP request from
127.0.0.1: Content-Length too long
There is no problem uploading files up to ~20MB.
So I'm wondering: is there a particular file upload limit in the Tornado web server? Or does it have something to do with the machine's available memory? And whatever the reason is, how can I overcome this problem?
Tornado has a configurable limit on upload size (defaulting to 10MB). You can increase the limit by passing max_buffer_size to the HTTPServer constructor (or Application.listen). However, since Tornado (version 3.1) reads the entire upload body into a single contiguous string in memory, it's dangerous to make the limit too high. One popular alternative is the nginx upload module.
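A minimal sketch of raising the limit, assuming a recent Tornado (the handler, port, and 200 MB figure are illustrative, not from the question):

```python
import tornado.httpserver
import tornado.ioloop
import tornado.web

class UploadHandler(tornado.web.RequestHandler):
    def post(self):
        # Tornado has already buffered the full body in memory at this point.
        self.write("received %d bytes" % len(self.request.body))

app = tornado.web.Application([(r"/upload", UploadHandler)])

# Raise the request-body limit from the 10 MB default to 200 MB.
# Tornado still buffers the whole body in RAM, so keep this value sane.
server = tornado.httpserver.HTTPServer(app, max_buffer_size=200 * 1024 * 1024)

if __name__ == "__main__":
    server.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```

For uploads much larger than available memory, offloading to something like the nginx upload module (as mentioned above) avoids buffering the body in the Python process at all.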