Xdebug boosts site speed - Apache

I have a WAMP stack for development and a lot of sites are running slowly, but I have a really big issue with PrestaShop, where the average loading time is 1 minute.
Although the content loads, the main request responds very slowly, and Chrome's waterfall shows that the delay is attributed to content download, even though all assets are already downloaded (local storage) or cached.
I noticed that when I enable the Xdebug listener (in VS Code) the site responds as it should, i.e. within milliseconds.
Any idea what might be happening?
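For context, a minimal sketch of the php.ini settings that govern when Xdebug tries to contact the IDE, assuming Xdebug 3 (the names differ in Xdebug 2: remote_enable, remote_autostart, remote_host); the values shown are illustrative, not a confirmed fix:

    ; php.ini - illustrative Xdebug 3 settings (not a confirmed fix)
    ; On WAMP the extension line usually needs the full path to php_xdebug.dll.
    zend_extension = xdebug
    xdebug.mode = debug
    ; With "yes", every request tries to reach the IDE and waits for the
    ; connection attempt to fail when no listener is running; "trigger" only
    ; starts a session when explicitly requested (e.g. via XDEBUG_SESSION).
    xdebug.start_with_request = trigger
    xdebug.client_host = 127.0.0.1
    xdebug.client_port = 9003
    ; Upper bound on how long Xdebug waits for the IDE, in milliseconds.
    xdebug.connect_timeout_ms = 200

Whether settings like these explain a full minute of delay depends on the host Xdebug is trying to reach: a client_host that silently drops packets (rather than refusing the connection) makes each attempt wait out the full timeout.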

Related

Is PageSpeed Insights bypassing Google CDN cache?

We're using Google Cloud Platform to host a WordPress site:
Google Load Balancer with CDN -> Instance Group with single VM -> Nginx + WordPress
From step 1 (only VM with WordPress, no cache) to the last step (whole setup with Load Balancer and CDN) I could progressively see the improvement when testing locally from my browser and from GTmetrix. But PageSpeed Insights always showed little improvement.
Now we're proud of an impressive 98/97 score in GTmetrix (woah!), but PSI still shows we're pretty average, especially on mobile (a range of 45-55).
Problem: we're concerned about page ranking in Google, so we'd like to make PSI happy as well. Also... our client won't believe we made an improvement while PSI still shows that score.
I was digging and found a few weird things about PSI:
When we adjusted cache-control in nginx, it was correctly detected by the local browser and GTmetrix, but the Serve static assets with an efficient cache policy section in PSI showed the old values for a few days.
The homepage has a background video hosted in 3 formats (mp4, webm, ogv). Clients are supposed to request only one of them (my browser and GTmetrix do), but PSI actually requests all 3 of them. I can see them in the Avoid enormous network payloads section.
When a client requests our homepage, only the GET / request reaches our backend server (which is the expected behaviour) and the rest of the static assets are served from the CDN. But when testing from PSI, all requests reach our backend server. I can see them in the nginx access log.
So... those 3 points are giving us a worse score in PSI (point 1 suddenly fixed itself yesterday, days after we changed cache-control), but as far as I understand none of them should be happening. Is there something else I am missing?
Thanks in advance to those who can shed some light on this.
"but PSI still shows we're pretty average, especially on mobile (a range of 45-55)"
PSI defaults to showing you a mobile score on a simulated throttled connection. If you look at the desktop tab, the result is comparable to GTmetrix (which uses the same 'Lighthouse' engine under the hood, without throttling, so it will give similar results on desktop).
Sorry to tell you, but the site really is only average on mobile. You can test this by going to the Performance tab in developer tools and enabling 'Network: Fast 3G' and 'CPU: 4x slowdown' in the throttling options.
Plus, the site seems really heavy on JavaScript computation for some reason; PSI simulates a slower CPU, so this is another factor. One script is taking nearly 1 second to evaluate.
"the Serve static assets with an efficient cache policy section in PSI showed the old values for a few days"
This is far more likely to be a config issue than a PSI issue. PSI always runs from an empty cache. Perhaps the rollout across all CDN nodes is slow for some reason and PSI was requesting from a different node than you?
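For reference, a minimal sketch of the kind of nginx cache-header configuration this audit checks for; the file-extension list and max-age are illustrative, not taken from the site in question:

    # nginx server block - illustrative only
    location ~* \.(?:css|js|jpe?g|png|gif|svg|webp|woff2?|mp4|webm|ogv)$ {
        # A long-lived Cache-Control header is what the PSI audit looks for;
        # 30 days shown here, longer is fine for fingerprinted assets.
        add_header Cache-Control "public, max-age=2592000";
    }

If the CDN layer rewrites or strips these headers, PSI will keep reporting the old policy even after the origin is fixed, which could explain the delay described above.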
Videos - "but PSI actually requests all 3 of them. I can see them in the Avoid enormous network payloads section"
Do not confuse what you see here with what Google actually used to run your test. This audit is calculated separately, from all the assets it can download, not from the run data gathered by loading the page in a headless browser.
Also, these assets are the same for desktop and mobile, so it could be that it is using one format for the mobile test and another for the desktop test.
Either way, it does indeed look like a bug, but it will not affect your score, as that is calculated in other ways.
"all requests reach our backend server"
Then this points to a similar problem as point 1 - are you sure your CDN has fully deployed? Either that, or you have some rule set up for a certain user agent, or a robots rule, that bypasses your CDN. Most likely a robots rule needs updating.
What can you do?
Double-check your config, deployment, etc. Ensure it has propagated to all CDN sites and that all of the DNS routing is working as expected.
Check that you don't have rules set for robots; I notice the site is 'noindex', so perhaps you do have something set up while you are testing that is interfering.
Run an 'Audit' from Developer Tools in Google Chrome - this uses exactly the same engine as PSI. It may give you better results, as it uses your actual browser rather than a headless one. Although for me this stops the videos loading at all, so something strange is happening there.

Apache - resources randomly hang (resulting in slow page loads)

HTTP requests for resources randomly - roughly 1-5% of the time (per resource, not per page load) - take extremely long to be delivered to the browser (~20 seconds), and not uncommonly even hang indefinitely. (Server details are listed at the bottom.)
This results in about every 5th request to any page appearing to hang, because a JavaScript resource within the <head> tag hangs.
The resources are CSS, JS and small image files, served directly by Apache (no scripting language involved), although page loads (involving PHP or Rails) also occasionally hang, with the same probability as any other resource (1-5% of the time), so this seems to be an issue with Apache's request handling.
Additional information:
I've checked the idle workers on server-status and, as expected, I still have 98% of my workers idle. This may be relevant, though, since the hangs affect static resources that are not served by FastCGI.
I am not the only one with this problem: someone else is experiencing the same thing, from a different IP address.
This happens in both Google Chrome and Firefox as HTTP clients.
I tried repeatedly force-refreshing the same JS file in a new tab; it eventually led to the same kind of hang.
Chrome's Timing tab reports 34 ms waiting and 19.27 s receiving for one of these hanging requests. Would that mean Apache already had the file contents ready to deliver, but had trouble delivering them in a sensible amount of time?
error.log doesn't show anything related. There are some expected 404 and 500 errors in error.log, but those aren't related to the hangs; they are actual errors for nonexistent pages and PHP fatal errors.
I get some suspicious 206 Partial Content responses, mostly for static content, although the hangs happen more often than those partial responses appear. I mostly get 200 OK responses everywhere, and I can confirm that resources which hang indefinitely were reported as 200 OK in the Apache access.log.
I do have mod_passenger installed for Redmine. I don't know if that is relevant, but suspiciously this server has it installed, unlike all the other servers I have worked with. mod_passenger shouldn't affect static content, though, especially not within a non-Ruby project folder - should it?
The server is running Apache 2.4 with the Event MPM on Ubuntu 13.10, hosted on DigitalOcean.
What may be causing these hangs, and how could I fix this?
I had the same problem, so after reading this thread I tried setting KeepAlive Off in my Apache config, which seems to have helped - all resources have the expected waiting times now.
Not a great "fix", but at least I am one step closer to figuring out the cause, and pages aren't taking 15 s to fully load in the meantime.
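For anyone trying the same workaround, a minimal sketch of the relevant directives (on Ubuntu these usually live in /etc/apache2/apache2.conf; adjust for your layout):

    # Workaround: disable persistent connections entirely.
    KeepAlive Off

    # Alternatively, keep them enabled but bound tightly so a stuck
    # connection cannot tie up a worker for long:
    # KeepAlive On
    # MaxKeepAliveRequests 100
    # KeepAliveTimeout 2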

Visual Studio 2012 & IIS 7.5 running very slow on localhost

I am working on a project where the team (and VS2012) uses IIS 7.5 by default during development.
Very recently (yesterday) the sites I was debugging started loading very slowly.
Using Fiddler I could see that every single call being made takes about 30 seconds.
So "localhost:{port}" takes 30 seconds (2 KB); once that finally comes back and the page starts to load, it requests all the CSS and JS files, each of which takes about 30 seconds to load.
As this is a large application it quickly became unusable.
This happens whether I am debugging or loading a page without debugging.
The same behaviour occurs on all sites on my dev machine - even a plain, newly built site exhibits it - and it happens in all browsers.
I am able to "resolve" the problem in several ways:
1. Switch from IIS 7.5 to the Visual Studio dev server.
(The URL does not change; it stays localhost:{port}.)
2. In WebMatrix, if I change the URL from localhost:{port} to http://{machinename}:{port}, the site then runs at normal speed, but debugging becomes an issue.
Option 1 above does the trick, and I don't seem to be affected too much by the change.
I am very curious as to why this is suddenly happening, though, and would really like to fix the problem.
Any thoughts would be greatly appreciated.
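One commonly reported cause of exactly the symptom in workaround 2 - {machinename} is fast while localhost is slow - is localhost resolving to the IPv6 loopback (::1) first, with each request waiting for that attempt to fail. A hedged sketch of the usual check, in the Windows hosts file; illustrative, not a confirmed fix for this case:

    # C:\Windows\System32\drivers\etc\hosts - illustrative, not a confirmed fix
    # Map localhost explicitly to the IPv4 loopback so the browser/Fiddler
    # does not try ::1 first and wait for it to time out.
    127.0.0.1    localhost
    # If an IPv6 mapping is present, leave it commented out:
    # ::1         localhost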

Rails 3 development site is 10x faster than production site on Apache 2 + Phusion Passenger

I have a live production site (running in production mode) alongside a test site running in development mode. They both run on the same machine, using Rails 3, Apache 2 and Phusion Passenger. If I load a page on the production site, it takes approximately 4-5 seconds. If I request the same page on the test site, it takes only about 0.5 seconds - a major difference of 10x. I always thought production would be faster than development :( If I reload the page on the production site, the load times stay the same. What is going on? How can I debug this problem? As of now the production site is way too slow even without any traffic.
I did some additional testing with other web servers, in particular LiteSpeed and even WEBrick. Both exhibit the same strange behaviour: ten times slower in production mode than in development mode. So it is probably something Rails-related, but I cannot put my finger on it, since the logs tell me the pages are rendered quickly, yet it takes a very long time before the page appears on my screen.
Thanks for the suggestions, guys. I managed to get it fixed. I finally decided to load all my production data into my development server, and it turned out that my sessions table was the culprit: it contained a lot of data and querying it was slow. I added an index and the problem was solved.
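For anyone with the same symptom, a minimal sketch of that kind of fix, assuming the ActiveRecord session store with the conventional sessions table and session_id column (the migration name and exact columns are illustrative, not the poster's actual change):

    # db/migrate/20120101000000_add_indexes_to_sessions.rb (name is illustrative)
    class AddIndexesToSessions < ActiveRecord::Migration
      def self.up
        # The session store looks rows up by session_id on every request,
        # so without an index this is a full table scan on a large table.
        add_index :sessions, :session_id, :unique => true
        # Helpful if stale sessions are purged by age.
        add_index :sessions, :updated_at
      end

      def self.down
        remove_index :sessions, :updated_at
        remove_index :sessions, :session_id
      end
    end

Periodically deleting rows that haven't been touched in a few weeks also keeps the table small.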

Making Plone site temporarily static for high traffic peak

We know there will be a surge of traffic hitting a Plone site on a certain day. Last time this happened we couldn't crank enough power out of Plone to make it run smoothly.
Now I am asking what kind of tricks one could play to feed the horde temporarily? E.g.:
Convert (part of) the Plone site to static HTML files and images on disk, and serve them through Apache
Cache the whole site in Varnish with a very long expiry time
Use some CDN service which automatically mirrors the site
We can change the site's DNS if needed, but I hope all this can be achieved with the contact form and other HTTP POST forms still working (if necessary we can hide them temporarily).
I'd go with Varnish and something like a 60-second TTL. That is enough, because it means the backend will only see a handful of requests per minute.
You need to test carefully, though, that response headers are set correctly so you don't have any "holes" in the cache that hammer Zope. FunkLoad to the rescue.
Martin
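For illustration, a minimal VCL sketch of that approach (VCL 4.0 syntax; the backend address and the blanket cookie stripping are assumptions and would need tuning for a real Plone site so logged-in editors aren't served cached pages):

    # /etc/varnish/default.vcl - illustrative sketch only
    vcl 4.0;

    backend default {
        .host = "127.0.0.1";   # assumed Zope/Plone address
        .port = "8080";
    }

    sub vcl_recv {
        # Let POSTs (contact form and other forms) go straight to the backend.
        if (req.method == "POST") {
            return (pass);
        }
        # Strip cookies on anonymous GETs so pages are actually cacheable.
        unset req.http.Cookie;
    }

    sub vcl_backend_response {
        # Short TTL: the backend sees roughly one request per minute per page.
        set beresp.ttl = 60s;
    }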