Should I use nginx for my image hosting site with high traffic?

A friend suggested I use nginx for my site, http://www.imgzzz.com . It's an image hosting site with loads of traffic.
Currently I'm on a VPS (CentOS 5.4 x64) running Apache.
Most views are on image pages. So far, to decrease the load on the server, I cache almost all data: user details, image names, paths, category details, etc.
Still, I have to run about three SQL queries on every page view:
Adding views
Displaying views
Adding users' ad views for the ads shown to them
Considering the traffic from social media sites like Digg and StumbleUpon, concurrent online users peak at around 1,500-2,500, so you can get an idea of the PHP queries per second. Sometimes this causes the server to lag.
The rest of the content on image pages is static. So, do you suggest nginx, or is there a better option for my server?
Thanks in advance :)
Edit: This is a custom system, not a CMS.

I would recommend using nginx as your static file server. I run nginx for this and it works great, and I can vouch for a lot of other people I know who use it. It's fast and reliable.
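This is not from the original answer, just a minimal sketch of that split: nginx serves the image files straight from disk and proxies everything else to the existing Apache instance. The paths, the backend port (8080), and the file extensions are assumptions to adapt to your setup.

```nginx
server {
    listen 80;
    server_name www.imgzzz.com;

    # Serve image files directly from the filesystem with long cache headers.
    location ~* \.(jpe?g|png|gif)$ {
        root /var/www/imgzzz;   # assumed document root for the image files
        expires 30d;            # lets browsers cache the images
        access_log off;         # skip logging for static hits
    }

    # Hand the dynamic PHP pages to Apache, assumed to listen on port 8080.
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Apache would then be reconfigured to listen only on 127.0.0.1:8080 so that nginx is the public-facing server.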

Related

Is PageSpeed Insights bypassing Google CDN cache?

We're using Google Cloud Platform to host a WordPress site:
Google Load Balancer with CDN -> Instance Group with single VM -> Nginx + WordPress
From step 1 (only the VM with WordPress, no cache) to the last step (the whole setup with Load Balancer and CDN), I could progressively see the improvement when testing locally from my browser and from GTmetrix. But PageSpeed Insights always showed little improvement.
Now we're proud of an impressive 98/97 score in GTmetrix (woah!), but PSI still shows we're pretty average, especially on mobile (a range of 45-55).
Problem: we're concerned about our page ranking in Google, so we'd like to make PSI happy as well. Also... our client won't understand that we made an improvement while PSI still shows that score.
I was digging and found a few weird things about PSI:
When we adjusted cache-control in nginx, it was correctly detected by the local browser and GTmetrix, but the Serve static assets with an efficient cache policy section in PSI showed the old values for a few days.
The homepage has a background video hosted in 3 formats (mp4, webm, ogv). Clients are supposed to request only one of them (my browser and GTmetrix do), but PSI actually requests all 3 of them. I can see them in the Avoid enormous network payloads section.
When a client requests our homepage, only the GET / request reaches our backend server (which is the expected behaviour), and the rest of the static assets are served from the CDN. But when testing from PSI, all requests reach our backend server. I can see them in the nginx access log.
So... those 3 points are giving us a worse score in PSI (point 1 suddenly fixed itself yesterday, days after we changed cache-control), but as far as I understand, none of them should be happening. Is there something else I am missing?
Thanks in advance to those who can shed some light on this.
but PSI still shows we're pretty average, especially on mobile (a range of 45-55).
PSI defaults to showing you a mobile score on a simulated throttled connection. If you look at the desktop tab, the result is comparable to GTmetrix (which uses the same engine, Lighthouse, under the hood without throttling, so it gives similar results on desktop).
Sorry to tell you, but the site really is only average on mobile. Test it by going to the Performance tab in developer tools and enabling 'Network: Fast 3G' and 'CPU: 4x slowdown' in the throttling options.
The site also seems very heavy on JavaScript computation for some reason; PSI simulates a slower CPU, so this is another factor. One script takes nearly 1 second to evaluate.
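If you want to reproduce the PSI numbers outside the browser, here is a rough sketch using the Lighthouse CLI (assuming Node.js is installed and a recent Lighthouse version; the URL is a placeholder):

```bash
# Install the Lighthouse CLI once (the same engine PSI uses).
npm install -g lighthouse

# Default run: mobile emulation with simulated throttling, like PSI's mobile score.
lighthouse https://www.example.com --output=html --output-path=./mobile-report.html

# Desktop preset, roughly comparable to GTmetrix / the PSI desktop tab.
lighthouse https://www.example.com --preset=desktop --output=html --output-path=./desktop-report.html
```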
Serve static assets with an efficient cache policy in PSI showed the old values for a few days.
This is far more likely to be a config issue than a PSI issue. PSI always runs from an empty cache. Perhaps the rollout across all CDN edge locations is slow for some reason, and PSI was requesting from a different edge than you were?
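For reference, the kind of nginx rule being discussed looks roughly like this (the location pattern and the one-year lifetime are assumptions, not the asker's actual config):

```nginx
# Long-lived caching for static assets; "expires" makes nginx send both an
# Expires header and a matching Cache-Control: max-age value.
location ~* \.(css|js|jpe?g|png|gif|webp|mp4|webm|ogv)$ {
    expires 365d;
    access_log off;
}
```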
Videos - but PSI actually requests all 3 of them. I can see them in the Avoid enormous network payloads section.
Do not confuse what you see here with what Google actually used to run your test. This section is calculated separately, from all the assets it can download, not from the run data gathered by loading the page in a headless browser.
Also, these assets are the same for desktop and mobile, so it could be that for some reason it is using one asset for the mobile test and another for the desktop test.
Either way, it does indeed look like a bug, but it will not affect your score, as that is calculated in other ways.
all requests reach our backend server
This points to a similar problem as point 1 - are you sure your CDN has fully deployed? Either that, or you have a rule for a certain user agent, or a robots rule, that bypasses your CDN. Most likely a robots rule needs updating.
What can you do?
Double-check your config, deployment, etc. Ensure it has propagated to all CDN edge locations and that all of the DNS routing is working as expected (a quick header check is sketched after this list).
Check that you don't have rules set for robots; I notice the site is 'noindex', so perhaps you have something set up while you are testing that is interfering.
Run an 'Audit' from Developer Tools in Google Chrome - this uses exactly the same engine as PSI. It may give you better results, as it uses your actual browser rather than a headless browser. Although, for me, this stops the videos loading at all, so something strange is happening there.
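One quick way to sanity-check the first two points is to look at the response headers yourself. A rough sketch with curl (the asset path and the Lighthouse-like user agent string are illustrative; which cache headers appear depends on your CDN):

```bash
# Fetch only the headers for a static asset through the public hostname.
curl -sI https://www.example.com/wp-content/uploads/sample.jpg \
  | grep -iE 'cache-control|expires|age|via'

# Repeat with a Lighthouse-like user agent to see whether any user-agent or
# robots rule changes the caching behaviour for PSI's requests.
curl -sI -A "Chrome-Lighthouse" https://www.example.com/wp-content/uploads/sample.jpg \
  | grep -iE 'cache-control|expires|age|via'
```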

Web pages stuck on "waiting for...", but then load fast

My website has a huge database, yet the front page with all its content is only 1.2 MB - small. I changed servers because I wanted it to load faster. On the old server, when I opened the URL, the page rendered progressively, from top to bottom. But on the new server I see "waiting for www..." in the bottom left of my browser for 4-5 seconds, and then the page displays very quickly. When I remove one big MySQL query, it is fine.
I downloaded the Boilerplate .htaccess and set the FastCGI application option in the Plesk hosting settings, but nothing changed.
Any ideas?
If this issue is caused by a MySQL query, then I suggest you tune your MySQL service with the MySQLTuner script. We have used it for some of our clients and it works well. But make sure your MySQL service uptime is more than 24 hours, so that you get accurate suggestions in the script output.
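Running MySQLTuner is a short job. A sketch, assuming Perl is available on the server and you can supply the MySQL admin credentials when prompted:

```bash
# Download and run the MySQLTuner script, then read its "Recommendations" section.
wget http://mysqltuner.pl/ -O mysqltuner.pl
perl mysqltuner.pl

# Apply suggested my.cnf changes one at a time and re-test. It is also worth
# running EXPLAIN on the "one big MySQL query" to see whether it is missing an index.
```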

Website crashing when too many requests happen at once

We have a website which advertises a competition on TV each month. When the advert runs, the website gets around 4,000 submissions, and this causes it to crash.
The website runs on SilverStripe and is hosted on Apache.
I have read about queuing, and this sounds like the solution, but I have spoken to the SilverStripe developer and the server admin, and each says the other needs to make it happen.
My question is: should the queuing be done on the website or the server?
To help SilverStripe handle lots of requests, you can install the Static Publish Queue module:
http://www.silverstripe.org/introducing-the-static-publish-queue-module/
Your developer would implement that on the website.
This will create a flat version of your website that is served to users. This greatly reduces server load.
What kind of server are you running? You can get many different types these days... for example, some do load balancing, which might help prevent the crashing.
Also, there are plenty of third-party applications you could integrate with to help with job queuing, like http://www.iron.io/ or http://aws.amazon.com/sqs/.
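The original answer doesn't spell this out, but as a rough sketch of the queuing idea with SQS (assuming the AWS SDK for PHP is installed via Composer; the queue URL and region are placeholders), the submission handler would only enqueue the entry, and a separate worker would write it to the database at its own pace:

```php
<?php
// Sketch only: push each competition entry onto an SQS queue instead of
// writing it to the database during the traffic spike.
require 'vendor/autoload.php';

use Aws\Sqs\SqsClient;

$sqs = new SqsClient([
    'region'  => 'eu-west-1',   // assumption: your queue's region
    'version' => 'latest',
]);

$sqs->sendMessage([
    // Placeholder queue URL - use the one SQS gives you.
    'QueueUrl'    => 'https://sqs.eu-west-1.amazonaws.com/123456789012/competition-entries',
    'MessageBody' => json_encode($_POST),   // the submitted entry
]);

// A separate worker (cron job or queue runner) then reads messages off the
// queue and inserts them into the database at a rate the server can handle.
```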
Another option is to find an existing SilverStripe module... I had a really quick look on GitHub and found one that might do the job you require:
https://github.com/silverstripe-australia/silverstripe-queuedjobs
Let me know how you get on :)
There is a good overview of all the areas to look at for SilverStripe performance at http://www.silverstripe.org/improving-silverstripe-performance/.

How to optimize an intranet website

I have created a website (PHP, MySQL) for my campus intranet. The campus network has a proxy and web servers, but for testing purposes I've used a PC in my workgroup as the server, running XAMPP 1.7.7. When I visit the website from a different PC in the same workgroup, it takes more than 30 seconds to load the index page or any other page.
The web browser used is Firefox, and it is set to bypass the proxy server for local addresses. The index.php page is only 5 KB in size. In index.php I destroy the current session if there is one, open a database connection to retrieve the latest 3 news items, and include two external CSS files. Fewer than 5 images (less than 5 KB in total) are used on every page. XAMPP is on its default settings.
Is there something I can do to optimize and decrease the loading time? Your opinions are welcome.
That is very slow; your page is not that large. If the page loads fast locally, don't bother trying to optimize it further. How long does it take to copy a file across the workgroup? Or, better yet, to FTP a file to your 'server'?
You could use tracert (http://support.microsoft.com/kb/162326) to get a better idea of where the slowdown is.
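A quick sketch of those checks from a client PC in the workgroup (Windows command prompt; "xampp-host" is a placeholder for the server's hostname or IP, and curl may need to be installed separately):

```
:: Basic reachability and per-hop latency to the XAMPP machine.
ping xampp-host
tracert xampp-host

:: If curl is available, time the index page to separate name lookup and
:: connection delay from PHP/MySQL processing time.
curl -o NUL -s -w "lookup %{time_namelookup}s connect %{time_connect}s total %{time_total}s\n" http://xampp-host/index.php
```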
More generally speaking, the ideal solution depends on what you are ultimately trying to accomplish. XAMPP is geared more towards personal localhost work. You have the ports open and the host set up correctly; it seems your immediate problem is a network issue requiring further diagnosis.

Making a Plone site temporarily static for a high traffic peak

We know a surge of traffic will hit a Plone site on a certain day. Last time this happened, we couldn't crank enough power out of Plone to make it run smoothly.
So I'm asking: what kind of tricks could one play to feed the horde temporarily? For example:
Convert (part of) the Plone site to static HTML files and images on disk, serving them through Apache?
Cache the whole site in Varnish with a very long expiry time?
Use a CDN service which automatically mirrors the site?
We can change the site's DNS if needed, but I hope all this can be achieved with the contact form and other HTTP POST forms still working (if necessary, we can hide them temporarily).
I'd go with Varnish and something like a 60-second TTL. This is enough, because it means Plone will see only a handful of requests per minute per URL.
You need to test carefully, though, that response headers are set correctly so you don't have any "holes" in the cache that hammer Zope. FunkLoad to the rescue.
Martin
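Not part of the original answer, but a minimal sketch of that 60-second idea in Varnish 4 VCL (the backend address is a placeholder, and stripping cookies treats every visitor as anonymous, which is only acceptable while the surge lasts):

```vcl
vcl 4.0;

# Placeholder backend: wherever Plone/Zope is listening.
backend plone {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Never cache form submissions; pass POST/PUT straight through to Plone.
    if (req.method != "GET" && req.method != "HEAD") {
        return (pass);
    }
    # Strip cookies on GETs so anonymous pages share one cache entry.
    unset req.http.Cookie;
}

sub vcl_backend_response {
    # Force a short, uniform TTL so each URL hits Plone at most about once
    # a minute, regardless of the cache headers Plone itself sends.
    set beresp.ttl = 60s;
}
```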