Lighthouse metrics variance - vue.js

Please tell me why there is such a big difference between the "Total Blocking Time" reported for my site on https://developers.google.com/ (PageSpeed Insights) and a local Lighthouse run for a mobile device?
I am not able to give you the address and show you the site; it is just text and one image. The site is built with Nuxt.js and is fully statically generated.
My computer is a MacBook Air M1. Does Google use less powerful device emulation, and is that why it reports lower numbers?

This official page explains it well: https://web.dev/performance-scoring/
Common problems:
A/B tests or changes in ads being served
Internet traffic routing changes
Testing on different devices, such as a high-performance desktop and a low-performance laptop
Browser extensions that inject JavaScript and add/modify network requests
Antivirus software
Otherwise, this article offers another answer: https://www.debugbear.com/blog/why-is-my-lighthouse-score-different-from-pagespeed-insights

Google PageSpeed Insights uses a “combo” of lab and real-world data [historical field data for your website], whereas Lighthouse uses lab data only [tested locally on your machine] to build its report.
Note: you should trust PSI metrics over lab data alone.
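If you want your local run to be comparable, pin the settings yourself instead of relying on defaults. A minimal sketch, assuming the `lighthouse` and `chrome-launcher` npm packages and a placeholder URL; it runs a mobile audit with Lighthouse's simulated throttling (the method PSI uses) and prints TBT:

```js
// compare-tbt.mjs - assumes `npm install lighthouse chrome-launcher`
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({chromeFlags: ['--headless']});
const {lhr} = await lighthouse('https://example.com/', {
  port: chrome.port,
  onlyCategories: ['performance'],
  formFactor: 'mobile',         // mobile device emulation, as PSI defaults to
  throttlingMethod: 'simulate', // PSI estimates metrics from a simulated trace
});

console.log('TBT:', lhr.audits['total-blocking-time'].displayValue);
console.log('Performance score:', Math.round(lhr.categories.performance.score * 100));
await chrome.kill();
```

Even with identical settings, an M1 will tend to produce better numbers than PSI's datacenter hardware, because simulated throttling only models the network and CPU slowdown on top of a trace recorded on your (fast) machine.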

Related

Is PageSpeed Insights bypassing Google CDN cache?

We're using Google Cloud Platform to host a WordPress site:
Google Load Balancer with CDN -> Instance Group with single VM -> Nginx + WordPress
From step 1 (only the VM with WordPress, no cache) to the last step (the whole setup with Load Balancer and CDN), I could progressively see the improvement when testing locally from my browser and from GTmetrix. But PageSpeed Insights always showed little improvement.
Now we're proud of an impressive 98/97 score in GTmetrix (woah!), but PSI still shows we're pretty average, especially on mobile (ranging from 45 to 55).
Problem: we're concerned about page ranking in Google so we'd like to make PSI happy as well. Also... our client won't understand that we did make an improvement while PSI still shows that score.
I was digging and found a few weird things about PSI:
When we adjusted cache-control in nginx, it was correctly detected by the local browser and by GTmetrix, but the "Serve static assets with an efficient cache policy" section in PSI showed the old values for a few days.
The homepage has a background video hosted in 3 formats (mp4, webm, ogv). Clients are supposed to request only one of them (my browser and GTmetrix do), but PSI actually requests all 3 of them. I can see them in the "Avoid enormous network payloads" section.
When a client requests our homepage, only the GET / request reaches our backend server (which is the expected behaviour) and the rest of the static assets are served from the CDN. But when testing from PSI, all requests reach our backend server. I can see them in the nginx access log.
So... those 3 points are making us get a worse score in PSI (point 1 suddenly fixed itself yesterday, days after we changed cache-control), but as far as I understand none of them should be happening. Is there something else I am missing?
Thanks in advance to those who can shed some light on this.
but PSI still shows we're pretty average, especially on mobile (ranging from 45 to 55).
PSI defaults to showing you a mobile score on a simulated throttled connection. If you look at the desktop tab, it is comparable to GTmetrix (which uses the same engine, Lighthouse, under the hood without throttling, so it gives similar results on desktop).
Sorry to tell you, but the site really is only average on mobile. Test it yourself by going to the Performance tab in developer tools and enabling 'Network: Fast 3G' and 'CPU: 4x slowdown' in the throttling options, as sketched below.
Plus, the site seems really heavy on JavaScript computation for some reason, and PSI simulates a slower CPU, so this is another factor. One script alone takes nearly 1 second to evaluate.
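If you would rather script those DevTools conditions than click through them each time, here is a rough sketch assuming the `puppeteer` npm package and a placeholder URL:

```js
// throttled-load.mjs - assumes `npm install puppeteer`
import puppeteer, {PredefinedNetworkConditions} from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();

await page.emulateNetworkConditions(PredefinedNetworkConditions['Fast 3G']);
await page.emulateCPUThrottling(4); // the same 4x slowdown as the DevTools preset

const start = Date.now();
await page.goto('https://example.com/', {waitUntil: 'networkidle0'});
console.log(`Loaded under throttling in ${Date.now() - start} ms`);

await browser.close();
```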
Serve static assets with an efficient cache policy in PSI showed the old values for a few days.
This is far more likely to be a config issue than a PSI issue, as PSI always runs from an empty cache. Perhaps the rollout across all CDN edge nodes is slow for some reason, and PSI was requesting from a different edge node than you?
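One quick way to verify what is actually being served is to request an asset directly and read back the caching headers, e.g. in plain Node 18+ (the asset URL is hypothetical):

```js
// head-check.mjs - no dependencies, Node 18+ has a global fetch
const res = await fetch('https://example.com/wp-content/themes/site/app.js', {
  method: 'HEAD',
});
console.log('cache-control:', res.headers.get('cache-control'));
console.log('age:', res.headers.get('age')); // a non-null Age usually means a CDN cache hit
```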
Videos - but PSI actually requests the 3 of them. I can see them in Avoid enormous network payloads section.
Do not confuse what you see here with what Google actually used to run your test. This audit is calculated separately, from all the assets it can download, not from the run data gathered by loading the page in a headless browser.
Also, these assets are the same for desktop and mobile, so it could be that for some reason it uses one asset for the mobile test and another for the desktop test.
Either way, it does indeed look like a bug, but it will not affect your score, as that is calculated in other ways.
all requests reach our backend server
Then this points to a similar problem as point 1 - are you sure your CDN has fully deployed? Either that, or you have a rule set up for a certain user agent, or a robots rule, that bypasses your CDN. Most likely a robots rule needs updating.
What can you do?
Double-check your config, deployment etc. Ensure it has propagated to all CDN sites and that all of the DNS routing is working as expected.
Check that you don't have rules set up for robots; I notice the site is 'noindex', so perhaps you do have something set up while you are testing that is interfering (see the user-agent check after this list).
Run an 'Audit' from Developer Tools in Google Chrome -> this uses exactly the same engine that PSI uses. It may give you better results, as it uses your actual browser rather than a headless one. Although for me this stops the videos loading at all, so something strange is happening there.
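For the robots/user-agent theory in particular: Lighthouse's user-agent string contains the token Chrome-Lighthouse, so you can compare what your stack returns for a normal UA versus a Lighthouse-like one. A sketch in plain Node 18+, again with a hypothetical asset URL:

```js
// ua-compare.mjs - checks whether the CDN treats a Lighthouse-like UA differently
for (const ua of ['Mozilla/5.0', 'Mozilla/5.0 (compatible) Chrome-Lighthouse']) {
  const res = await fetch('https://example.com/wp-content/themes/site/app.js', {
    headers: {'User-Agent': ua},
  });
  console.log(ua, '->', res.status, '| cache-control:', res.headers.get('cache-control'));
}
```

If the two responses differ (in status, cache headers, or whether they show up in your origin's access log), you have found your bypass rule.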

Which software is the best to test websites across multiple devices?

I have a DNN website and would like to test the site on multiple devices. I currently use Google Chrome, but it is not always accurate. Is it possible to use Xamarin Test Cloud or some other software? My company does not want to spend money on a device board.
Have you tried https://www.browserstack.com?
It allows you to test the site across many devices and browsers, but it does have some limitations: it only gives you a screenshot of the page, and the page must be publicly viewable.

Site functionality diminished over VPN/company network

We are currently experiencing diminished functionality with one of our customers at our main production site. All subpages and resources seem to be affected as well.
The customer reports a completely broken experience for themselves with the site not working correctly at all, mostly due to assets not loading correctly.
We already started investigating and have found that - so far - nothing seems to be wrong with the site itself.
Quick rundown:
The production site has a Cloudflare layer, and almost all of its assets are delivered either via cdnjs or Amazon's CloudFront (behind Cloudflare) - all assets are reachable via HTTP as well
The site uses SSL and enforces it (the dynamic cert from Cloudflare)
We were able to capture a HAR for a request to one of our sites, and the request times are extremely long. If you would like to try it, here is an online HAR viewer; be sure to uncheck validation of the file.
The customer uses Internet Explorer 8 and Chrome (39). While the site is not optimized for IE8, it should run fine in Chrome; in fact, it runs just fine in most browsers above IE9 for all of us.
Notes
We already ruled out:
Virtual delivery problems (there could be physical limitations we are not aware of)
General faultiness of our setup (We tried three different open VPNs to verify this)
Being on the customer's blacklist by accident (although we cannot be entirely sure of this)
SSL Server Name Indication (SNI) problems
(Potentially) a general problem with the customer's network; the customer does not report any problems with "the rest of the internet".
The customer will not give us access to their VPN or disclose security details, so we cannot really test the situation ourselves. We suspect that the customer uses an internal proxy that might cause the problems described, but we are not sure.
Questions
My questions here are:
Is there any known problem caused by internal networking in conjunction with our setup that can cause this behaviour?
Are there potential problems on our end that we could have overlooked or things that we do different from other sites?
It seems the connection is being made (or routed) through a low-bandwidth, high-latency link (or a very congested one). Most of the DNS lookups and connects seem to be taking ~10s.
In the HAR you can see that this affects fonts.googleapis.com and cdnjs.cloudflare.com, and that https://www.google-analytics.com/analytics.js has no data captured. To me, the claim that the customer does not report any problems with "the rest of the internet" seems kind of dubious, seeing that in this HAR the page hasn't been able to load the analytics JS and access to the usual CDNs is very slow.
My guesses (pick one or more):
they are testing on a machine different from the one that has no problems with "the rest of the internet"
this machine is very, very slow
it has some kind of content filtering, antivirus, or other software filtering the web (perhaps with an SSL certificate installed in order to forge and inspect HTTPS traffic)
the access goes through a congested route, or a low-bandwidth, high-latency link
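To put numbers behind those guesses, a HAR is just JSON, so you can scan it for entries with pathological DNS or connect phases. A small Node sketch (the filename is hypothetical; HAR timings are in milliseconds, with -1 meaning the phase did not occur):

```js
// slow-phases.mjs - lists HAR entries whose DNS or TCP connect took > 5s
import {readFile} from 'node:fs/promises';

const har = JSON.parse(await readFile('customer-capture.har', 'utf8'));
for (const entry of har.log.entries) {
  const {dns = -1, connect = -1} = entry.timings;
  if (dns > 5000 || connect > 5000) {
    console.log(`${entry.request.url}: dns=${dns} ms, connect=${connect} ms`);
  }
}
```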
Two hotspots:
CDN edge points can sometimes be inconsistent; I spent a lot of time understanding this issue. How? In a live session with the client, opening each loaded resource one by one, I found differences between CDN access points (mine in Eastern Europe, theirs in Central Europe). The CDN host was one of the biggest US players in the world; anyhow, we fixed this by invalidating (deleting) all files from the CDN so that new/correct ones were loaded.
You need a CDN that supports serving files over HTTPS, and then use that CDN for the SSL requests.

How to differentiate between test and live websites without changing back-end?

We have a website for test and an equivalent website for live. What we are finding is that, due to carelessness, our testers are using the wrong site (e.g. testing on the live site!).
We have total control over both sites, but since the test site is for acceptance testing by the users we don't want to make them different in any way. The sites must look the same and there is also a layer of management that will kick up a storm if the test and live sites are in any way different.
How have other people solved this problem? I am thinking of a browser plugin to make the browser look different somehow (e.g. changing the colour of the location bar when on the test website). Does anyone know of a plugin or a technique that would work? (We primarily use Firefox and Chrome)
Thanks,
Phil
UPDATE
We eventually settled on a program of: different credentials for the test and live site (this was not popular!) and making a series of plugins available for those who wanted them (colourize tabs for Chrome and Firefox users - we never did find a good plugin for IE).
Thanks to those who answered.
In our company we use different site names:
www.dev.site.com - for developers
www.qa.site.com - for QA's
www.site.com - production site
Another good practice is to use different user credentials for the dev/QA and prod sites.
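If management vetoes any visible difference in the sites themselves, the difference can live entirely in the browser. A sketch of the plugin idea as a tiny extension content script; it assumes the hostname scheme above and a manifest whose content_scripts matches only the dev/QA hosts:

```js
// content-script.js - paints an unmissable banner on non-production hosts only
const env = {'www.dev.site.com': 'DEV', 'www.qa.site.com': 'QA'}[location.hostname];
if (env) {
  const banner = document.createElement('div');
  banner.textContent = `${env} ENVIRONMENT - NOT LIVE`;
  banner.style.cssText =
    'position:fixed;top:0;left:0;right:0;z-index:99999;' +
    'background:#c00;color:#fff;text-align:center;font-weight:bold;padding:4px;';
  document.body.prepend(banner);
}
```

The live site is never touched, so the "sites must look the same" requirement still holds for anyone without the extension.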

How to stress test simulating heavy load using Selenium

I have a system to test, which is a video-ads distribution technology. I need to load every video for about 1-2 minutes so that it serves the ads. The videos are played in a Flash client and streamed as FLV streams, like on YouTube.
The reason why I need to test it only via browsers - and every other method won't work - is to stress test both the video streaming servers and the ads servers simultaneously while displaying ads in real time.
I have used Selenium, WatiN, Automation Anywhere and many other automation tools. However, when I try to start something like 10,000 browsers on my machine (32GB RAM, 16-core CPU), none of them is able to do the job.
With Selenium I can start the most Firefox instances so far, but that's still far too few: half of the instances don't even run the test.
Any suggestions to do with Selenium?
You aren't going to run 10,000 browsers on your machine. That would leave 3.2MB of physical memory per browser instance, and I'm pretty sure Firefox just won't like that.
You could create a JMeter script that hits your server with many threads. It won't interact with the UI, but it would simulate the load of many clients hitting whatever URLs you tell it. I believe it also includes the ability to record a session and play it back, for easy setup of your sessions.
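JMeter is the right tool for this, but the underlying idea is small enough to sketch in plain Node 18+ if you first want to see whether the servers, rather than the browsers, are the bottleneck (the endpoint URL and user count are placeholders):

```js
// hammer.mjs - N concurrent "virtual users" hitting one URL, no browser UI
const TARGET = 'https://example.com/stream-endpoint'; // hypothetical
const USERS = 200;

const results = await Promise.allSettled(
  Array.from({length: USERS}, () => fetch(TARGET).then((res) => res.status)),
);
const ok = results.filter((r) => r.status === 'fulfilled' && r.value === 200).length;
console.log(`${ok}/${USERS} requests returned 200`);
```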
Selenium isn't really optimized for load/stress testing, especially if you're running your browsers locally. Running 1000+ browsers is going to choke even the beefiest server. Though RAM is an obvious bottleneck, you also have limited CPU resources and bandwidth, the latter being a primary concern if you are loading videos.
Not to mention you'd be testing from a single IP with 10k browsers, so load balancing may not kick in properly, and neither may the actual distribution of video ads to specific virtual users.
If you want to stick with existing Selenium tests, I've had good experiences with BrowserMob. They basically have a huge grid to do real browser load-testing, distributed across AWS.
Another recommendation would be an actual performance testing tool. I'd recommend Soasta CloudTest. They have a free version that runs 100 users so you can see if it will be a good fit for you. I have found that scripting for CloudTest is relatively simple.
Disclaimer: My experiences with both companies have been as a paying customer and I have never worked for either.
If you are using a Windows machine then, in my experience, there is a limit on the number of browser windows that can be opened; when I last tested, it topped out somewhere between 100 and 150 browser windows.
I would recommend using a headless browser, which doesn't require opening a browser window; recent versions of Selenium support that. But since this looks more like a load test, as you are trying to simulate 10,000+ user instances, I would recommend a load-testing tool like JMeter or LoadRunner.
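For illustration, a minimal headless run with the `selenium-webdriver` npm package; the page URL and the 90-second hold are assumptions based on the question, and note that headless browsers generally will not load a Flash plugin, so this only helps if the player has a non-Flash fallback:

```js
// headless-watch.js - assumes `npm install selenium-webdriver` and a local chromedriver
const {Builder} = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

(async () => {
  const options = new chrome.Options().addArguments('--headless=new');
  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();
  try {
    await driver.get('https://example.com/player'); // hypothetical player page
    await driver.sleep(90 * 1000); // hold the page open ~1.5 minutes to serve the ads
  } finally {
    await driver.quit();
  }
})();
```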
It looks to me like you are trying to verify what the client will see under high traffic, no?
In that case, Joel is quite correct. If you absolutely have to see what the client sees, you could use threaded hits and just dump the results in a database. That will show you anything the client would see anyway, and it's a lot easier to sort through than thousands of browser instances.
Either way, your client will not see errors if there are no errors present on the server side. If you're testing functionality in bandwidth-restricted, CPU-intensive, or memory-intensive environments, those are much easier to achieve than by running thousands of browser instances.
Your post smells of some form of ad-based fraud to me, but either way: have you considered using web browsers other than Firefox? PhantomJS is a headless WebKit-based browser that is compatible with Selenium. It supports all the core browser features, like DOM handling, CSS selectors, JavaScript and Canvas. I do not know if it supports Flash.
This post has a decent list of other headless and automatable web-browsers that you might consider.
Also, if each browser instance is instantiating a Flash plugin, don't neglect the possibility that the issue could be with Flash and not Firefox. Alternatively, why instantiate several different Firefox processes? Can you accomplish what you want through the use of tabs instead?
The in-house way to do this with Selenium is to use BrowserMob Proxy and multiple browser agents to recreate the experience of different users; changing the IP is more difficult because it requires changing your home network.
Here is a good example