What is /cdn-cgi/pe/bag2? - cloudflare

I'm working on optimizing my site and I'm seeing that bag2 appears 5 times in my file waterfall. My site takes 1.6 seconds to load, but one bag2 file takes 800 ms to load even though its size is only 7.0 KB.
What are these bag2 files, and how can I remove them to make my website faster? I like Cloudflare's anti-DDoS protection, but I'm annoyed with the rest of it.

These files are part of Cloudflare's "Rocket Loader" feature. If you disable that feature, Cloudflare should stop serving the bag files from /cdn-cgi/.
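If you would rather script it than click through the dashboard, Rocket Loader is also exposed as a per-zone setting in Cloudflare's v4 API. A minimal sketch, assuming you substitute your own zone ID and API token for the placeholders below:

```python
import requests

# Placeholders - substitute your own zone ID and API token.
ZONE_ID = "your_zone_id"
API_TOKEN = "your_api_token"

# PATCHing the rocket_loader zone setting to "off" disables the feature,
# which should also stop the /cdn-cgi/pe/bag2 requests from being injected.
resp = requests.patch(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/settings/rocket_loader",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"value": "off"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```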

Related

Is PageSpeed Insights bypassing Google CDN cache?

We're using Google Cloud Platform to host a WordPress site:
Google Load Balancer with CDN -> Instance Group with single VM -> Nginx + WordPress
From step 1 (just the VM with WordPress, no cache) to the last step (the whole setup with Load Balancer and CDN), I could see progressive improvement when testing locally from my browser and from GTmetrix. But PageSpeed Insights always showed little improvement.
Now we're proud of an impressive 98/97 score in GTmetrix (woah!), but PSI still shows we're pretty average, especially on mobile (scores range from 45 to 55).
Problem: we're concerned about page ranking in Google, so we'd like to make PSI happy as well. Also... our client won't understand that we made an improvement while PSI still shows that score.
I was digging and found a few weird things about PSI:
When we adjusted cache-control in nginx, it was correctly detected by my local browser and GTmetrix, but the "Serve static assets with an efficient cache policy" section in PSI showed the old values for a few days.
The homepage has a background video hosted in 3 formats (mp4, webm, ogv). Clients are supposed to request only one of them (my browser and GTmetrix do), but PSI actually requests all three. I can see them in the "Avoid enormous network payloads" section.
When a client requests our homepage, only the GET / request reaches our backend server (which is the expected behaviour) and the rest of the static assets are served from the CDN. But when testing from PSI, all requests reach our backend server. I can see them in the nginx access log.
So... those 3 points are making us get a worse score in PSI (point 1 suddenly fixed itself yesterday, days after we changed cache-control), but from what I understand none of them should be happening. Is there something else I am missing?
Thanks in advance to those who can shed some light on this.
but PSI still shows we're pretty average, especially on mobile (scores range from 45 to 55).
PSI defaults to showing you a mobile score on a simulated throttled connection. If you look at the desktop tab, it is comparable to GTmetrix (which uses the same engine, Lighthouse, under the hood but without throttling, so it will give similar results on desktop).
Sorry to tell you, but the site really is only average on mobile speed. Test it by going to the Performance tab in developer tools and enabling 'Network: Fast 3G' and 'CPU: 4x slowdown' in the throttling options.
Plus, the site seems really JavaScript-heavy for some reason; PSI simulates a slower CPU, so this is another factor. One script takes nearly 1 second to evaluate.
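If you want to track the mobile/desktop gap over time instead of re-running the UI, the PageSpeed Insights v5 API returns the same Lighthouse result. A rough sketch, assuming a placeholder page URL (an API key is advisable for anything beyond occasional requests):

```python
import requests

# Hypothetical target URL; replace with your own homepage.
PAGE_URL = "https://www.example.com/"
PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

for strategy in ("mobile", "desktop"):
    # The mobile strategy applies simulated throttling, which is why the
    # two scores can differ so much for the same page.
    resp = requests.get(
        PSI_ENDPOINT,
        params={"url": PAGE_URL, "strategy": strategy},
        timeout=60,
    )
    resp.raise_for_status()
    data = resp.json()
    score = data["lighthouseResult"]["categories"]["performance"]["score"]
    print(f"{strategy}: performance score {score * 100:.0f}")
```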
The "Serve static assets with an efficient cache policy" section in PSI showed the old values for a few days.
This is far more likely to be a config issue than a PSI issue. PSI always runs from an empty cache. Perhaps the rollout across all CDN edge locations is slow for some reason, and PSI was requesting from a different edge than you?
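One way to rule out a propagation problem is to request a few static assets directly and look at the caching headers the CDN returns. A small sketch; the asset URLs are placeholders for your real files:

```python
import requests

# Hypothetical asset URLs; replace with a few of your real static files.
ASSETS = [
    "https://www.example.com/wp-content/themes/mytheme/style.css",
    "https://www.example.com/wp-content/uploads/hero.webm",
]

for url in ASSETS:
    # stream=True avoids downloading large video bodies just to read headers.
    resp = requests.get(url, stream=True, timeout=30)
    # Cache-Control should show the max-age you configured in nginx;
    # Age > 0 usually means the response came from a cache, not the origin.
    print(url)
    print("  Status:", resp.status_code)
    print("  Cache-Control:", resp.headers.get("Cache-Control"))
    print("  Age:", resp.headers.get("Age"))
    resp.close()
```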
Videos - but PSI actually requests all three of them. I can see them in the "Avoid enormous network payloads" section.
Do not confuse what you see here with what Google actually used to run your test. This section is calculated separately, from all the assets PSI can download, not from the run data gathered by loading the page in a headless browser.
Also, these assets are the same for desktop and mobile, so it could be that it is using one format for the mobile test and another for the desktop test.
Either way, it does indeed look like a bug, but it will not affect your score, as that is calculated in other ways.
all requests reach our backend server
Then this points to a similar problem as point 1 - are you sure your CDN has fully deployed? Either that, or you have a rule for a certain user agent, or a robots rule, that bypasses your CDN. Most likely a robots rule needs updating.
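A quick way to check for a user-agent-based bypass is to fetch the same asset with a normal browser user agent and with a Lighthouse-style one and compare the responses. A sketch under a couple of assumptions: the URL is a placeholder, and the "Chrome-Lighthouse" token is what PSI's fetches have historically identified themselves with:

```python
import requests

# Hypothetical asset URL; replace with one of your static files.
URL = "https://www.example.com/wp-content/themes/mytheme/style.css"

USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "lighthouse": "Mozilla/5.0 (Linux; Android) Chrome-Lighthouse",
}

for name, ua in USER_AGENTS.items():
    resp = requests.get(URL, headers={"User-Agent": ua}, timeout=30)
    # If the Lighthouse-style UA gets different caching headers, or shows up
    # in your nginx access log while the browser UA does not, some rule is
    # routing it around the CDN.
    print(f"{name}: status={resp.status_code}, "
          f"cache-control={resp.headers.get('Cache-Control')}, "
          f"age={resp.headers.get('Age')}")
```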
What can you do?
Double-check your config, deployment, etc. Ensure it has propagated to all CDN edge locations and that all of the DNS routing is working as expected.
Check that you don't have rules set for robots; I notice the site is 'noindex', so perhaps you do have something set up while you are testing that is interfering.
Run an 'Audit' from Developer Tools in Google Chrome -> this uses exactly the same engine that PSI uses. It may give you better results, as it uses your actual browser rather than a headless browser. Although for me this stops the videos loading at all, so something strange is happening there.

Drupal with imce with S3 SLOW performance issue

I have configured my Drupal site so that all images/files/media etc. are handled by S3 using the S3 File System module.
Everything works: the image/file field uploader is fine, but there is a huge performance issue when using the IMCE file browser from the WYSIWYG editor. It takes at least a minute for the browser to display its contents, and there are only 290 images (78 MB) in that initial folder, which should not cause such huge delays. This has a huge impact on our editors, with several minutes lost just to upload a couple of images.
I tried various pagination patches and there was no difference at all in performance.
What are my options now?
After digging through many forums and discussions, it turns out that IMCE was not meant for the S3 file system, and I found this patch in PDF form (warning: it downloads rather than opens).
I followed the steps in that patch, which significantly improved my performance.
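For context on why a patch like that can help: IMCE builds its listing as if it were scanning a local directory, which over S3 tends to turn into one metadata request per file, whereas S3 can return the same information in bulk. A rough illustration of the difference, assuming the boto3 client and a hypothetical bucket and prefix:

```python
import boto3

# Hypothetical bucket and folder; substitute your own.
BUCKET = "my-drupal-files"
PREFIX = "images/"

s3 = boto3.client("s3")

# Slow pattern: one HEAD request per object (roughly what a naive
# directory scan translates to) - 290 images means 290 round trips:
#   for key in keys:
#       s3.head_object(Bucket=BUCKET, Key=key)

# Fast pattern: a paginated listing returns key, size and modification
# time for up to 1000 objects per request.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"], obj["LastModified"])
```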

Xdebug boosts site speed

I have a WAMP stack for development and a lot of sites are slow, but I have a really big issue with PrestaShop, where the loading time is 1 minute on average.
Although the content is loaded, the main request responds very slowly, and Chrome's waterfall shows that the delay is caused by Content Download, even though all assets are already downloaded (local storage) or cached.
I noticed that when I enable the Xdebug listener (in VS Code), the site responds as it should, i.e. within milliseconds.
Any idea what might be happening?

Rails 3 development site is 10x faster than production site on Apache 2 + Phusion Passenger

I have a live production site (production mode) running alongside a test site running in development mode. They both run on the same machine, using Rails 3, Apache 2 and Phusion Passenger. If I load a page on the production site, it takes approximately 4-5 seconds. If I request the same page on the test site, it takes only about 0.5 seconds - a major difference of 10x. I always thought production would be faster than development :( If I reload the page on the production site, the load times stay the same. What is going on? How can I debug this problem? Because as of now the production site is way too slow, even without any traffic.
I did some additional testing with other web servers, in particular LiteSpeed and even WEBrick. Both exhibit the same strange behaviour: ten times slower in production mode than in development mode. So it is probably something Rails-related, but I cannot put my finger on it, since the logs tell me the pages are rendered quickly, yet it takes a hell of a lot of time before the page appears on my screen.
Thanks for the suggestions, guys. I managed to get it fixed. I finally decided to load all my production data into my development server. It turned out that my sessions table was the culprit: it contained a lot of data and querying it was slow. I added an index and the problem was solved.
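To illustrate why an index makes such a difference here, a small self-contained sketch (using SQLite rather than the production database, with a hypothetical sessions table) that times a lookup by session ID before and after creating the index:

```python
import sqlite3
import time
import uuid

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER PRIMARY KEY, session_id TEXT, data TEXT)")
rows = [(str(uuid.uuid4()), "x" * 200) for _ in range(200_000)]
conn.executemany("INSERT INTO sessions (session_id, data) VALUES (?, ?)", rows)
target = rows[150_000][0]

def lookup():
    start = time.perf_counter()
    conn.execute("SELECT data FROM sessions WHERE session_id = ?", (target,)).fetchone()
    return time.perf_counter() - start

print(f"without index: {lookup():.4f}s")   # full table scan on every request
conn.execute("CREATE INDEX idx_sessions_session_id ON sessions (session_id)")
print(f"with index:    {lookup():.6f}s")   # direct index lookup
```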

Making Plone site temporarily static for high traffic peak

We know there will be a surge of traffic hitting a Plone site on a certain day. Last time this happened, we couldn't crank enough power out of Plone to make it run smoothly.
Now I am asking what kind of tricks one could play to feed the horde temporarily. E.g.:
Convert (part of) the Plone site to static HTML files and images on disk and serve them through Apache?
Cache the whole site in Varnish with very long expire time
Using some CDN service which automatically mirrors the site
We can change the site's DNS if needed, but I hope all this can be achieved with the contact form and other HTTP POST forms still working (if necessary, we can hide them temporarily).
I'd go with Varnish and something like a 60-second TTL. This is enough, because it means your backend will only see a handful of requests per minute.
You need to test carefully, though, that response headers are set correctly so you don't have any "holes" in the cache that hammer Zope. FunkLoad to the rescue.
Martin
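One simple way to spot such holes before the traffic arrives is to request each key URL twice and check whether the second response looks like a cache hit. A rough sketch: the URL list is a placeholder (feed it your sitemap in practice), and the Age/Cache-Control heuristics assume a fairly default Varnish setup:

```python
import requests

# Hypothetical list of URLs to check; replace with your own pages.
URLS = [
    "https://www.example.org/",
    "https://www.example.org/news",
    "https://www.example.org/events",
]

for url in URLS:
    requests.get(url, timeout=30)          # first request warms the cache
    resp = requests.get(url, timeout=30)   # second request should be a hit
    age = int(resp.headers.get("Age", "0"))
    cache_control = resp.headers.get("Cache-Control", "")
    # Age staying at 0 on a repeated request, or Cache-Control containing
    # no-cache/private, is a sign Varnish may be passing this URL straight
    # through to Zope.
    suspicious = age == 0 or "no-cache" in cache_control or "private" in cache_control
    print(f"{'MISS?' if suspicious else 'HIT  '} {url} "
          f"(Age={age}, Cache-Control={cache_control!r})")
```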