Magento 2 Enterprise Multistore - Speed Optimization

I recently completed a migration from M1 to M2. A total of 5 stores now reside under a single Magento 2 installation in the Cloud. Page speed is a pain point at the moment. Lighthouse and web.dev scans suggest several areas for improvement, such as bundling JS, reducing JS execution time, and minimizing main-thread work. I installed the Amasty Page Speed Optimization extension, but it barely made an impact. I also installed the Magepack JS bundling tool, which made a small difference, but the websites still need further optimization to improve performance. Can anyone suggest another extension or recommend effective steps to optimize the sites?

It is difficult to say why your website is slow with limited information, but you may try the following.
Check that the site is running in production mode.
Make sure all caches are enabled.
Disable all custom modules and check whether the site runs fast with default Magento code.
Enable the Magento profiler and investigate which code, event, etc. is taking the most time.
Check the Magento logs and server logs for errors or connection timeouts to third-party services.
Once you have a report that identifies what is causing the slowness, you can decide on the next course of action.
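The checks above map onto standard Magento CLI commands; a quick diagnostic pass might look like this (run from the Magento root; log paths are the usual defaults):

```shell
# 1. Confirm the deployment mode; it should report "production"
php bin/magento deploy:mode:show

# 2. List cache types and enable anything that is disabled
php bin/magento cache:status
php bin/magento cache:enable

# 3. List modules so you can disable custom ones for a comparison run
php bin/magento module:status

# 4. Enable the built-in profiler (available on recent 2.x versions;
#    older versions set the MAGE_PROFILER environment variable instead)
php bin/magento dev:profiler:enable html

# 5. Check the Magento logs alongside your web-server logs
tail -n 50 var/log/system.log var/log/exception.log
```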

Related

Is PageSpeed Insights bypassing Google CDN cache?

We're using Google Cloud Platform to host a WordPress site:
Google Load Balancer with CDN -> Instance Group with single VM -> Nginx + WordPress
From step 1 (only VM with WordPress, no cache) to the last step (whole setup with Load Balancer and CDN) I could progressively see the improvement when testing locally from my browser and from GTmetrix. But PageSpeed Insights always showed little improvement.
Now we're proud of an impressive 98/97 score in GTmetrix (woah!), but PSI still shows we're pretty average, especially on mobile (45-55 range).
Problem: we're concerned about page ranking in Google, so we'd like to make PSI happy as well. Also... our client won't understand that we made an improvement while PSI still shows that score.
I was digging and found a few weird things about PSI:
When we adjusted cache-control in nginx, it was correctly detected by local browser and GTmetrix, but section Serve static assets with an efficient cache policy in PSI showed the old values for a few days.
The homepage has a background video hosted in 3 formats (mp4, webm, ogv). Clients are supposed to request only one of them (my browser and GTmetrix do), but PSI actually requests the 3 of them. I can see them in Avoid enormous network payloads section.
When a client requests our homepage, only the GET / request reaches our backend server (which is the expected behaviour) and the rest of the static assets are served from the CDN. But when testing from PSI, all requests reach our backend server. I can see them in nginx access log.
So... those 3 points are making us get a worse score in PSI (point 1 suddenly fixed itself yesterday after days since we changed cache-control), but for what I understand none of them should be happening. Is there something else I am missing?
Thanks in advance to those who can shed some light on this.
but PSI still shows we're pretty average, especially on mobile (45-55 range).
PSI defaults to showing you a mobile score on a simulated throttled connection. If you look at the desktop tab, the result is comparable to GTmetrix (which uses the same engine, Lighthouse, under the hood without throttling, so it gives similar results on desktop).
Sorry to tell you, but the site really is only average on mobile speed. Test it by going to the Performance tab in developer tools and enabling 'Network: Fast 3G' and 'CPU: 4x slowdown' in the throttling options.
Plus, the site seems really heavy on JavaScript computation for some reason, and PSI simulates a slower CPU, so that is another factor. One script is taking nearly 1 second to evaluate.
Serve static assets with an efficient cache policy in PSI showed the old values for a few days.
This is far more likely to be a config issue than a PSI issue. PSI always runs from an empty cache. Perhaps the roll out across all CDNs is slow for some reason and PSI was requesting from a different CDN to you?
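For reference, a minimal nginx block for an efficient static-asset cache policy (the extension list and lifetime here are illustrative, not your actual config):

```nginx
# Cache static assets for 30 days; the expires directive makes nginx
# emit both an Expires header and Cache-Control: max-age
location ~* \.(css|js|png|jpg|jpeg|gif|svg|woff2?)$ {
    expires 30d;
    add_header Vary Accept-Encoding;
}
```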
Videos - but PSI actually requests the 3 of them. I can see them in Avoid enormous network payloads section.
Do not confuse what you see here with what Google actually used to run your test. This section is calculated separately, from all the assets it can discover and download, not from the run data gathered by loading the page in a headless browser.
Also, these assets are the same for desktop and mobile, so it could be that it uses one asset for the mobile test and another for the desktop test.
Either way, it does indeed look like a bug, but it will not affect your score, as that is calculated in other ways.
all requests reach our backend server
Then this points to a similar problem as point 1: are you sure your CDN has fully deployed? Either that, or you have a rule set up for a certain user agent, or a robots rule, that bypasses your CDN. Most likely a robots rule needs updating.
What can you do?
Double-check your config, deployment, etc. Ensure everything has propagated to all CDN sites and that all of the DNS routing is working as expected.
Check that you don't have rules set up for robots. I notice the site is 'noindex', so perhaps you have something configured while you are testing that is interfering.
Run an 'Audit' from Developer Tools in Google Chrome; this uses exactly the same engine as PSI. It may give you better results, as it uses your actual browser rather than a headless one. Although for me this stops the videos loading at all, so something strange is happening there.
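Lighthouse can also be run from the command line, which makes it easy to compare a PSI-style throttled mobile run against an unthrottled desktop run (the URL is a placeholder):

```shell
npm install -g lighthouse

# Default run: simulated mobile device with network/CPU throttling,
# roughly what PSI's mobile score reflects
lighthouse https://example.com --output html --output-path ./mobile.html

# Desktop preset with no simulated throttling, closer to GTmetrix
lighthouse https://example.com --preset=desktop --throttling-method=provided --output html --output-path ./desktop.html
```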

Ektron really slow to startup on local host, how to improve this?

We're developing a solution which uses Ektron. As part of our solution we all have local IIS instances (localhost) and deploy to this local instance as part of the development life cycle.
The problem is that after a deployment, once the DLLs are replaced, IIS restarts and the app pool is recycled, which means the Ektron DLLs need to reload themselves.
This process takes an extended amount of time.
Is there any way to improve the loading time of Ektron?
To some extent, this is the nature of a large app running as a website rather than a web application. Removing the workarea from your local environment is one way to get this compile time down, though this will naturally not work depending on your workflow, for example if you are not using a separate dev DB or if you are storing the workarea in source control.
I have seen some attempts to pre-compile the workarea and keep the working code in a separate project (http://dev.ektron.com/forum.aspx?g=posts&t=10996), but this approach will only speed up your builds, not the recompilation of individual pages that occurs after a build as a result of running as a web site.
The last (and least best-practice) solution is to simply avoid making code changes that cause a recompile, like modifying app_code. Apps running as websites are perfectly happy to recompile a single page's codebehind without regenerating DLLs, which is advantageous for productivity but ultimately discourages good practices like reusing code in libraries. Keep in mind that this is terrible advice, but if you have a deadline and are staring at an ektron page loading every 30 minutes it can be useful to know.
Same problem here. I found this: http://brianpereras.blogspot.com/2013/06/ektron-85-86-workarea-is-slow-compared.html
That post says the help documentation was moved to an online source (documentation.ektron.com). We're running Ektron 9, and after making this change the site seems much faster on first load (after an iisreset).
The solution is to set documentation.ektron.com to 127.0.0.1 in your hosts file.
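For reference, the hosts-file entry (the file lives at /etc/hosts on Linux/macOS and %SystemRoot%\System32\drivers\etc\hosts on Windows):

```
127.0.0.1    documentation.ektron.com
```

This makes the documentation lookup resolve locally and fail fast instead of blocking the first page load on a remote request.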
There is not; this is just how IIS works. Instead of running a local instance of Ektron, it's a good idea to point your web.config file to your test database and copy the /workarea folder to your local PC. You can't edit Ektron locally, but you can change the data on your test server and it will show up locally.

WebLogic staging mode affects runtime performance?

As a general question, are there any reasons that setting the Staging Mode to "nostage" instead of "stage" could cause performance hits? I was originally using "stage" mode, but after some issues with redeployments, I decided to try "nostage". This caused the application to perform almost two times slower. After switching the staging mode back to "stage" in the console, the slowdown was gone.
I was under the impression that the staging mode only "determines how deployment files are made available to target servers" (from the Oracle documentation page), and would not affect the runtime. Is this normal behaviour? I'm having trouble finding information on any links between staging mode and runtime performance.
Stage or nostage should not impact runtime performance. It is hard to say what might be causing your slowdown without further information, but one thing I can suggest is to switch back to stage mode, since that gives you better performance, until you figure out the root cause. As for the redeployment issue you mentioned, you did not provide any details, but I would guess it is related to WLS still picking up the old files instead of the new ones. You can add some extra steps to your deployment to fix that:
Undeploy your app.
Shut down the whole domain.
Delete the tmp/stage/cache directories under each managed server.
Start the servers in the domain.
Do a new deployment.
Of course, it sounds cumbersome, but you can automate all these into your deployment scripts.
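The cleanup step can be sketched as a small shell function. The servers/&lt;name&gt;/{tmp,stage,cache} layout is the usual domain layout, but verify it against your own DOMAIN_HOME before wiring this into a deployment script:

```shell
#!/bin/sh
# Remove the tmp, stage and cache directories under every managed
# server in a WebLogic domain so the next deployment starts clean.
# Assumed layout: $DOMAIN_HOME/servers/<server-name>/{tmp,stage,cache}
clean_server_dirs() {
    domain_home="$1"
    for server in "$domain_home"/servers/*/; do
        for dir in tmp stage cache; do
            rm -rf "${server}${dir}"
        done
    done
}
```

Call it between shutting the domain down and restarting the servers, e.g. `clean_server_dirs /u01/domains/mydomain` (path hypothetical).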

mod_mono stability issues

I've read about several stability issues with mod_mono under high load. The root of the problem is the GC, and the workaround is restarting mod_mono every n hours, where n should be decreased based on error frequency.
I'm planning to develop a high-load site with Mono (I have .NET experience and a little Java), and based on these issues I'm worried about things like session interruptions and HTTP errors.
At this early stage of the project, should I switch to Java/Tomcat or trust mod_mono?
Regards
Depending on how long you spend developing your site, Mono's compacting GC (http://www.mono-project.com/Compacting_GC) might be ready for production by the time you launch. While googling turned up some complaints about stability, many were from 2006. If push comes to shove and mono/mod_mono fails to live up to its stability promises, you could always deploy from Windows/IIS.
It's a bit of a calculated risk at this point, but if you run into any issues, I'm sure the mod_mono mailing list would help sort them out.
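If you do stay on mod_mono, the periodic-restart workaround mentioned above can be configured declaratively rather than via cron; something along these lines in the Apache vhost (directive names are from the mod_mono documentation; check them against your installed version):

```apache
# Restart the backend mod-mono-server on a schedule;
# the time format is DD:HH:MM:SS
MonoAutoRestartMode Time
MonoAutoRestartTime 00:12:00:00
```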

mono in production websites?

I'm investigating the use of Mono in real-world, high-traffic web applications. There are some references on the Mono site (companies using Mono), but I couldn't find a high-traffic website example other than Deki-powered ones. And I've read some mailing-list threads about mod_mono stability problems due to the lack of a compacting GC.
If anyone is using Mono in production, please reference your app and share some info.
...or do I have to look at Java ?
Regards,
sirmak
Wikipedia is using Mono for search (also listed on the companies using Mono page)
A ton of people use Mono in production and development. I'm sure this page will change dramatically over the next year or so, but look at http://www.mono-project.com/Companies_Using_Mono. It is a good reference, and projects using Mono are popping up every day, so we'll see more soon.
Lunchwalla.com uses Mono for its website. It receives fairly high traffic. There is also a little blog item regarding the set up - http://blog.lunchwalla.com/2010/04/23/the-tech-behind-lunchwalla/
Go for it. Beyond the initial setup work and tuning, you can have a very stable and fast server with all the advantages of the low resources required to do the job, at least with nginx/lighttpd. mod_mono (Apache) will burn through resources much faster, according to a lot of feedback I've read in all the major places this topic is discussed.
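A minimal nginx front end for a Mono FastCGI backend looks roughly like this (server name, paths, and port are placeholders); the backend would be started with something like `fastcgi-mono-server4 /applications=/:/srv/www/app /socket=tcp:127.0.0.1:9000`:

```nginx
server {
    listen 80;
    server_name example.com;
    root /srv/www/app;

    location / {
        # Forward everything to the fastcgi-mono-server instance
        fastcgi_pass 127.0.0.1:9000;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO "";
    }
}
```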
From #mono (IRC)
<ruionwriting> ahall: in apache what is your feel about the performance compared with nginx?
<ahall> the fastcgi implementation is just a bit buggy and buy sending few concurrent requests to it it hogged 99% cpu and didn't get out of it. I will switch to nginx + fastcgi as soon as its suitable for me in production
<ahall> buy = by
<ahall> but yeah i always use nginx instead of apache whenever possible, but with mono i dont recommend it
I don't have to agree with this last part, based on the setup I have.
This question on Stack Overflow should be included here.