When I make a production build, the combined CSS+JS size comes to 3.8 MB.
The biggest contributor I can see is Bootstrap, which accounts for roughly half of that 3.8 MB.
The app has an admin module with CRUD functionality, where I use most of Bootstrap, and another module made up of static pages, where I only use Bootstrap's grid.
How can I optimize this further?
This is expected when using Bootstrap, and there's not much you can do about it. If you had instead used bootstrap-vue, you could import only the specific parts of the modules that you need (JavaScript), and that would significantly reduce the size of your bundle.
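For example, a rough sketch of selective imports with bootstrap-vue (the plugin names below are just illustrative; pick the ones your screens actually use):

// main.js – import only the bootstrap-vue pieces you actually need
import Vue from 'vue';
import { LayoutPlugin, TablePlugin, ModalPlugin } from 'bootstrap-vue';

// grid only, e.g. for the static pages
Vue.use(LayoutPlugin);
// tables and modals, e.g. for the admin CRUD screens
Vue.use(TablePlugin);
Vue.use(ModalPlugin);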
With that said, there's nothing wrong here. The gzipped size of these assets is at most around 252 KB, and that's quite cheap.
If you serve your site over HTTP/2 and the browser supports it, your requests will be multiplexed over a single connection. This brings big improvements over HTTP/1.1 in that:
a single TCP connection to your server is opened and reused for all assets
requests on that connection are interleaved as asynchronous frames (streams), whereas HTTP/1.1 is synchronous per connection and browsers only open a handful of parallel connections per host
the browser does not wait for one asset to finish before requesting the next, which improves page load vastly.
So, to summarize: serve your assets gzipped and make sure your web server uses HTTP/2, and this issue becomes trivial.
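If you want to pre-compress the bundles at build time so the web server can hand out the .gz files directly, a minimal sketch in vue.config.js, assuming compression-webpack-plugin is installed:

// vue.config.js
const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  configureWebpack: {
    plugins: [
      new CompressionPlugin({
        test: /\.(js|css|html|svg)$/, // which emitted assets to compress
        threshold: 10240,             // skip files smaller than ~10 KB
        minRatio: 0.8,                // only keep the .gz if it actually saves space
      }),
    ],
  },
};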
Consider using the PurgeCSS plugin to get rid of all unused Bootstrap classes: https://www.purgecss.com/guides/vue
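A minimal sketch following that guide, assuming @fullhuman/postcss-purgecss is installed (the safelist patterns are only examples; Bootstrap toggles many classes at runtime, so check what your app needs):

// postcss.config.js
module.exports = {
  plugins: [
    require('@fullhuman/postcss-purgecss')({
      content: ['./public/**/*.html', './src/**/*.vue', './src/**/*.js'],
      // keep classes that Bootstrap adds dynamically
      // (the option is called whitelistPatterns in older PurgeCSS versions)
      safelist: [/^show$/, /^fade$/, /^collaps/, /^modal/],
    }),
  ],
};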
I've built multiple sites with Nuxt SSR, but never touched the static part.
As far as I know, at build time Nuxt automatically executes all API calls and caches the results.
If I want to make a blog with a static Nuxt site, how would I update the content? Is it only possible when I rebuild the app?
Seems unnecessary to rebuild everything every time I add a new blog post. With SSR I just reload the page.
Also wanted to note that I have a Strapi.js backend running on a VPS and I usually make changes weekly. Nuxt's docs state that I need to push my changes to the main repo branch, but there are no changes on the frontend.
Does this also mean that the headless CMS should be local only?
The whole point of a static build is to ship pre-generated files, with no additional Node.js server needed. It heavily reduces costs, removes a point of failure, discards any notion of server load (the number of simultaneous users on your app), and probably has some other advantages too.
The downside is that you do indeed need to yarn generate the whole app again when something is added or changed in the codebase. It's usually pretty fast, and if I remember properly there are also incremental builds (you will not regenerate all 99 old blog posts, only the 100th, newly added one).
Headless CMSs like Strapi usually work with a webhook: when you add a new article or the like, Strapi notifies your JAMstack platform to rebuild your app. Even if no front-end code was changed, you can force a build with the new data coming from the headless CMS's API.
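For full static generation, you can also have Nuxt ask Strapi for the routes at generate time. A hypothetical sketch (the STRAPI_URL variable and the /articles endpoint are assumptions about your setup; requires Nuxt 2.14+ for the static target):

// nuxt.config.js
import axios from 'axios';

export default {
  target: 'static',
  generate: {
    async routes() {
      // fetch all published articles from the headless CMS
      const { data } = await axios.get(`${process.env.STRAPI_URL}/articles`);
      return data.map(article => `/blog/${article.slug}`);
    },
  },
};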
We're using Google Cloud Platform to host a WordPress site:
Google Load Balancer with CDN -> Instance Group with single VM -> Nginx + WordPress
From step 1 (only the VM with WordPress, no cache) to the last step (the whole setup with Load Balancer and CDN), I could progressively see the improvement when testing locally from my browser and from GTmetrix. But PageSpeed Insights always showed little improvement.
Now we're proud of an impressive 98/97 score in GTmetrix (woah!), but PSI still shows we're pretty average, especially on mobile (scores range from 45-55).
Problem: we're concerned about page ranking in Google, so we'd like to make PSI happy as well. Also... our client won't understand that we made an improvement while PSI still shows that score.
I was digging and found a few weird things about PSI:
When we adjusted cache-control in nginx, it was correctly detected by the local browser and GTmetrix, but the Serve static assets with an efficient cache policy section in PSI kept showing the old values for a few days.
The homepage has a background video hosted in 3 formats (mp4, webm, ogv). Clients are supposed to request only one of them (my browser and GTmetrix do), but PSI actually requests all 3 of them. I can see them in the Avoid enormous network payloads section.
When a client requests our homepage, only the GET / request reaches our backend server (which is the expected behaviour) and the rest of the static assets are served from the CDN. But when testing from PSI, all requests reach our backend server. I can see them in the nginx access log.
So... those 3 points are giving us a worse score in PSI (point 1 suddenly fixed itself yesterday, days after we changed cache-control), but from what I understand none of them should be happening. Is there something else I am missing?
Thanks in advance to those who can shed some light on this.
but PSI still shows we're pretty average, especially on mobile (scores range from 45-55).
PSI defaults to showing you a mobile score on a simulated throttled connection. If you look at the desktop tab, it is comparable to GTmetrix (which uses the same engine, Lighthouse, under the hood without throttling, so it gives similar results on desktop).
Sorry to tell you, but the site really is only average on mobile speed. Test it yourself by going to the Performance tab in developer tools and enabling 'Network: Fast 3G' and 'CPU: 4x slowdown' in the throttling options.
Plus, the site seems really JavaScript-computation heavy for some reason; PSI simulates a slower CPU, so this is another factor. One script takes nearly 1 second to evaluate.
Serve static assets with an efficient cache policy in PSI showed the old values for a few days.
This is far more likely to be a config issue than a PSI issue. PSI always runs from an empty cache. Perhaps the rollout across all CDN nodes was slow for some reason and PSI was requesting from a different CDN node than you were?
Videos - but PSI actually requests the 3 of them. I can see them in Avoid enormous network payloads section.
Do not confuse what you see here with what Google actually used to run your test. This section is calculated separately, from all assets it can download, not from the run data gathered by loading the page in a headless browser.
Also, these assets are the same for desktop and mobile, so it could be that for some reason it is using one asset for the mobile test and another for the desktop test.
Either way, it does indeed look like a bug, but it will not affect your score, as that is calculated in other ways.
all requests reach our backend server
Then this points to a problem similar to point 1: are you sure your CDN has fully deployed? Either that, or you have some rule set up for a certain user agent, or a robots rule, that bypasses your CDN. Most likely a robots rule needs updating.
What can you do?
Double-check your config, deployment, etc. Ensure it has propagated to all CDN nodes and that all of the DNS routing is working as expected.
Check that you don't have rules set for robots; I notice the site is 'noindex', so perhaps you do have something set up while you are testing that is interfering.
Run an 'Audit' from Developer Tools in Google Chrome; this uses exactly the same engine that PSI uses. It may give you better results as it uses your actual browser rather than a headless browser (although for me this stops the videos loading at all, so something strange is happening there). A sketch for scripting the same engine follows this list.
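If you want to reproduce the same run repeatedly outside of DevTools, a rough sketch using the lighthouse and chrome-launcher npm packages (assuming both are installed; https://example.com stands in for your site):

// run-lighthouse.js
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse('https://example.com', {
    port: chrome.port,
    onlyCategories: ['performance'],
  });
  // 0–1 score, same scale PSI reports as 0–100
  console.log('Performance:', result.lhr.categories.performance.score * 100);
  await chrome.kill();
})();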
Using the most current version of Vue CLI 3 with the PWA feature enabled, I have noticed that our PWA starts/boots much faster when the client is offline than when it is online. I had assumed an internet connection could never slow down the boot time, since the service worker would always serve cached files first and only afterwards check for updates. It is not obvious to me which default Workbox caching strategy Vue CLI 3 implements.
The practical problem is that none of our customers turn on "offline/flight mode" on their phones; instead they just suffer from a poor internet connection and therefore do not benefit from the offline capabilities of our app. It seems that the service worker does not serve cached files unless the client is completely "offline". Boot times are therefore horrible...
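For what it's worth, I assume the strategy could be pinned explicitly through the PWA plugin's pass-through options, roughly like this (a sketch, not our actual config; the handler name casing depends on the Workbox version, e.g. 'staleWhileRevalidate' in v3/v4):

// vue.config.js
module.exports = {
  pwa: {
    workboxPluginMode: 'GenerateSW',
    workboxOptions: {
      runtimeCaching: [
        {
          urlPattern: /\.(?:js|css|png|jpg|svg)$/,
          // serve from cache immediately, refresh in the background
          handler: 'StaleWhileRevalidate',
        },
      ],
    },
  },
};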
Help very much appreciated.
We have an MVC web site deployed in a Cloud Service on Microsoft Azure. To boost performance, some of my colleagues suggested that we avoid the bundling and minification provided by ASP.NET MVC 4 and instead store the .js and .css files in an Azure blob. Please note that the solution does not use a CDN; it merely serves the files from blob storage.
My take is that just serving the files this way will not bring any major performance benefits. Since we are not using a CDN, the files will always be served from the region in which our storage is deployed. Every time a user requests a page, at least for the first time, the data will flow across the data center boundary, which will in turn incur cost. Also, since the files are not bundled but kept as individual files, there will be more requests to the server, so we forfeit the benefits of bundling and minification. The only benefit I see to this approach is that we can change the .js and .css files and upload them without needing to re-deploy.
Can anyone please tell me which of the two options is preferable in terms of performance?
I can't see how this would be better than bundling and minification, unless the intent is to store your minified and bundled files in blob storage. The whole idea is to reduce requests to the server, because JavaScript executes on a single thread and, in addition, every extra file adds download time; I'd do everything I can to reduce that request count.
As a separate note on the image side, I'd also combine images into a single image and use CSS sprites, à la: http://vswebessentials.com/features/bundling
When you add a script or style bundle to an MVC site, the bundling framework appends a version to the output markup.
e.g. <script src="/Scripts/custom/App.js?v=nf9WQHcG-UNbqZZzi4pJC3igQbequHCOPB50bXWkT641"></script>
Notice the query string ?v=xxx-xxx.
If you are hosting your app on multiple servers, then each server could append a different version to the resource URL, which means that in a classic round-robin load-balanced environment you would re-download that resource each time you hit a different server.
To me, this seems to negate the value of bundling in some ways, since the initial load is quicker but performance deteriorates on subsequent user interactions.
In practice, how have others handled this issue? I know that, depending on the size of the download, it could be insignificant because the minified and gzipped resource is tiny, but in many situations this might not be the case. So how can one, with minimal effort, reap the benefits of bundling and minification in a heavily scaled-out environment?
In practice the version number is a hash of the contents of the bundled files. So if you have the same JavaScript files on all nodes of your web farm, they should all get the same version number. If you are getting a different hash, that is an indication that you haven't deployed the same contents of those files to all nodes of your web farm.