Vue server-side rendering performance issue

I've got a weird issue. Here is my code for rendering the Vue pages. On my local machine, the rendering time for this page is around 50~80 ms. However, if I access the page in parallel, it can sometimes be around 120 ms (maybe 5 times out of 200 requests), but most of the time it is still 50~80 ms.
However, when I deploy the code to our production Docker, these peaks get worse: sometimes they reach 1 second, and we get 500 ms a lot of the time, so the performance is bad. It makes no sense; the request load is not heavy, and we have load balancing too. For a similar page that we render with EJS, we don't see this kind of peak very often. The backend logic and services used for EJS and Vue are exactly the same.
Client-side rendering also shows a similar symptom.
Does anybody know what kind of reasons could lead to this issue?

First of all, do two things:
1. Do a quick test using Lighthouse if possible; it'll help pinpoint the problem.
2. Check the console for any errors AND warnings.
Without further information about your code I don't think it's possible to say what exactly is causing the problem.
However, after searching for some time I came across an article whose author had the same performance problems. His Lighthouse check showed that his website had shortcomings in indexing and First Contentful Paint; long story short, he had an infinite loop and v-for loops without keys.
Following are some tips on how you can better optimize your Vue app:
1. Apply the Vue Style Guide and ESLint
There's a style guide in the Vue docs: https://v2.vuejs.org/v2/style-guide/.
You can find four rule categories there. We really care about three of them:
Essential rules that prevent errors,
Recommended and strongly recommended rules for keeping best practices, to improve the quality and readability of the code.
You can use ESLint to take care of those rules for you. You just have to set everything up properly in the ESLint configuration file.
2. Don't use multiple v-if
3. Don't write API call handlers in components
Simply do what is locally necessary in component logic. Every method that could be external, e.g. business logic, should be separated and only called from components.
4. Use slots instead of large amounts of props
5. Lazy load routes
6. Use a watcher with the immediate option instead of the created hook and a watcher together (see the sketch below)
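For instance, a minimal sketch of tip 6 (the userId prop and fetchUser method are hypothetical):

    // Instead of calling the same logic from created() and again in a watcher,
    // let the watcher fire once on startup via `immediate`:
    export default {
      props: ['userId'],
      watch: {
        userId: {
          immediate: true, // runs the handler right away, replacing the created() call
          handler(id) {
            this.fetchUser(id) // hypothetical method doing the actual work
          }
        }
      }
    }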
Here is another article on how to improve your Vue app.
Good Luck!

Related

NuxtJS: Static render just a component instead of 1000's of pages

We have a website with potentially 1000's of pages. We would like to leverage the power of Static Rendering. The CMS, which is hosted on a different server, will trigger a static re-render of the page via WebHooks.
When a new page is created, the main nav may need to change. That means the entire site will need to be re-generated, and with so many pages that could take a very long time.
So what is the workaround for this? Can you statically render just the main nav and include it in all pages, to avoid re-rendering absolutely everything? ...so, partial static rendering?
Depending on where you're hosting your code, you could use ISG: https://youtu.be/4vRn7yg85jw
There are several approaches to solving that yourself too, but it will of course require some work.
Nuxt team is currently working on solving this issue with something baked in: https://github.com/nuxt/framework/discussions/560
You could maybe also optimize some of those pages or look to split them in different projects as told here: https://stackoverflow.com/a/69835750/8816585
Batching the regeneration could be an idea too, or even using the preview feature to avoid some useless builds: https://nuxtjs.org/docs/features/live-preview#preview-mode
Overall, I'm not sure that there is a magic solution with a perfect balance between SSR and SSG as of today without a decent amount of work. Of course, if you're using Go + Vite or the like you will get faster builds overall, but it's quite a broad/complex question.

Vue: wait to render until all components are mounted

I have a Vue/Nuxt web app where pages are dynamically generated from lots of components that have child components.
The trouble is the header and footer are rendered first, then the child components that have the actual content. This looks terrible on first load and Lighthouse doesn't like it; it's an "Avoid large layout shifts" failure. For context, it's only an issue when client-side rendering; SSR would eliminate this issue while introducing others.
What I could do is edit every single component in my project and add an event on mounted. That could then be used to decide when to show the layout. The problem is it would be a major hassle and would cause bugs when new components are added and this bit is forgotten.
I'm not able to find any general solution to this in Vue and/or Nuxt. I'd love to have a new lifecycle hook of allMounted which would only fire when child components are also mounted, but it doesn't exist. Even that would be a bit hacky. An even more general "render when all components are mounted" option would be awesome.
I'm not sure that a dynamic component can help in your case, and I guess that your company's website will not really benefit from this. Indeed, the problem of the content jumping will still be present IMO.
<component :is="currentTabComponent"></component>
I still think that your content is highly static IMO and that you could even switch to full static to get the best performance benefits, rather than having to wait a long time (TTFB) while the SPA loads all the content. It may be a bit more challenging to have everything look nice, of course (before/after the hydration).
Also, you should have an idea of the approximate size of your containers. In that case, you could use some skeletons and maybe even a prototyping font to visually populate the blocks.
In case you do not agree or think that this is not doable, you still have this solution at your disposal:
<child-component @hook:mounted="makeSomeStuff"></child-component>
With this you may be able to display a full-sized loader until your content is done loading. You could add a mixin with the longer mounted syntax in each component to avoid too much boilerplate, but this syntax is deprecated and does have various issues.
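A minimal sketch of such a mixin (the event name here is hypothetical):

    // readyNotifier.mixin.js -- every component using this mixin announces itself
    export default {
      mounted() {
        this.$emit('component-mounted') // a parent can listen for these and count them
      }
    }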
But IMO the issue is more in the way you're fetching the data (the asyncData and fetch hooks are nice) and the way everything is fully dynamic when there is no specific need. If it's more important to keep the dynamic part, I guess you can be strict in code reviews, or plug in some git hooks or the like to scan the code and check that the required mounted emits are in place.
There is no ideal solution in your case, but keep in mind that Lighthouse will always prefer SSR content with the least amount of JS. Here is my personal bible for anything performance related; you could probably grasp some nice tips in this really in-depth article.
Update for Vue3
The syntax has changed for Vue3: https://v3-migration.vuejs.org/breaking-changes/vnode-lifecycle-events.html#_2-x-syntax
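If I read that migration guide correctly, the Vue 3 equivalent of the snippet above would be something like:

    <!-- hypothetical sketch: Vue 3 renames the hook: prefix to vue: -->
    <child-component @vue:mounted="makeSomeStuff"></child-component>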

vue-chartjs - Do Experienced Vue Developers use this wrapper?

I am refactoring my first pass Vue dashboard application, which uses vue-chartjs to access chart.js.
As part of doing this, I am creating a set of wrapper components that encapsulate more functionality than just the chart itself, e.g. titles, dialogs, measures, etc. In doing this, I am finding that vue-chartjs adds complexity to my task for multiple reasons; e.g., the structure of the renderChart props doesn't match the parameters of chart.js itself. Also, vue-chartjs has its own unique capabilities that add a layer of complexity to using chart.js.
I assume there are other complexities that are reduced by using vue-chartjs, but... my question is:
Do experienced Vue developers use vue-chartjs to access chart.js, or do you go directly to chart.js? My first-pass approach was derived from a tutorial, and I didn't question it at the time. Now that I'm doing more complex things, vue-chartjs is getting in my way as I try to simplify and minimize data marshaling.
For now I am working around these issues, but if it is reasonable to create my own wrappers rather than add an unnecessary layer through vue-chartjs, I would like to try that. But I don't want to venture into this without first asking for feedback from other dashboard folks who have done it!
Thanks for any advice on this.
Anecdotally speaking, I've found in some code reviews that less experienced devs tend to rely on vue-**** wrapper libraries even when there is little (or even no) benefit. Adding additional libraries increases exposure to more dependencies, each of which theoretically carries the potential for a security vulnerability. I've also seen the opposite, where functionality is re-invented even though a Vue library is available that would save a significant amount of time and provide a more robust component (like including ARIA fields, or being thoroughly tested across browsers). The tl;dr being: I take it on a case-by-case basis.
I agree with @Daniel. I can give another example: I used the vue-popper wrapper package. The component itself is not bad, it's well done; however, it uses the previous major version of popper.js, which lacks good new features and improvements. For this reason I later created my own implementation of a Vue popper using the latest version.

Angular 6 - Why use @ngrx/store rather than service injection

I am currently learning Angular 6 with @ngrx/store, and one of the tutorials uses @ngrx/store for state management; however, I don't understand the benefit of using @ngrx/store behind the scenes.
For example, for a simple login and signup action, previously by using a service (let's call it AuthService) we might call the backend API, store "userInfo" or "token" in the AuthService, redirect the user to the "HOME" page, and inject AuthService in any component where we need to get the userInfo via DI. Simply put, that one AuthService file handles everything.
Now if we are using @ngrx/store, we need to define the Action/State/Reducer/Effects/Selector, which probably need to be written across 4 or 5 files to handle the action or event above; and sometimes we still need to call the backend API using a service, which seems much more complicated and redundant...
In some other scenarios, I even see some pages use @ngrx/store to store objects or lists of objects, like grid data. Is that some kind of in-memory store usage?
So back to the question: why are we using @ngrx/store over service injection here in an Angular project? I know it's for "STATE MANAGEMENT" usage, but what exactly is "STATE MANAGEMENT"? Is it something like a transaction log, and when do we need it? Why would we manage it on the front end? Please feel free to share your suggestions or experience in the @ngrx/store area!
I think you should read those two posts about Ngrx store:
Angular Service Layers: Redux, RxJs and Ngrx Store - When to Use a Store And Why?
Ngrx Store - An Architecture Guide
While the first one explains the main issues solved by Ngrx Store, it also quotes this statement from the React How-To, which "seems to apply equally to original Flux, Redux, Ngrx Store or any store solution in general":
You’ll know when you need Flux. If you aren’t sure if you need it, you don’t need it.
To me, Ngrx Store solves multiple issues. For example, it helps when you have to deal with observables and when responsibility for some observable data is shared between different components. In this case, store actions and the reducer ensure that data modifications will always be performed "the right way".
It also provides a reliable solution for HTTP request caching. You will be able to store the requests and their responses, so that you can verify that a request you're about to make doesn't already have a stored response.
The second post is about what made such solutions appear in the React world with Facebook's unread message counter issue.
Concerning your solution of storing non-observable data in services: it works fine when you're dealing with constant data. But when several components have to update this data, you will probably encounter change detection and improper update issues, which you could solve with:
the observer pattern, with a private Subject, a public Observable, and a next function (see the sketch below)
Ngrx Store
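A minimal sketch of that first option (the service and field names are hypothetical):

    // user.service.ts -- sharing state across components without Ngrx
    import { Injectable } from '@angular/core';
    import { BehaviorSubject, Observable } from 'rxjs';

    @Injectable({ providedIn: 'root' })
    export class UserService {
      // private Subject: only the service itself can push new values
      private userSubject = new BehaviorSubject<string | null>(null);
      // public Observable: components subscribe, but cannot emit
      readonly user$: Observable<string | null> = this.userSubject.asObservable();

      // the "next function": the single place where the data is modified
      setUser(name: string): void {
        this.userSubject.next(name);
      }
    }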
I almost exclusively read about the benefits of Ngrx and other Redux-like store libraries, while the (in my opinion) costly tradeoffs seem to be brushed off far too easily. This is often the only justification I see given: "The only reason not to use Ngrx is if your app is small and simple." That, I would say, is simply incomplete reasoning and not good enough.
Here are my complaints about Ngrx:
You have logic split out into several different files, which makes the code hard to read and understand. This goes against basic code cohesion and locality principles. Having to jump around to different places to read how an operation is performed is mentally taxing and can lead to cognitive overload and exhaustion.
With Ngrx you have to write a lot more code, which increases the chances of bugs. More code -> more places for bugs to appear.
An Ngrx store can become a dumping ground for all things, with no rhyme or reason. It can become a global hodge podge of stuff that no one can get a coherent overview of. It can grow and grow until no one understands it any more.
I've seen a lot of unnecessary deep object cloning in Ngrx apps, which has caused real performance issues. A particular app I was assigned to work on was taking 40 ms to persist data in the store because of deep cloning of a huge store object. This is over two lost render frames if you are trying to hit a smooth 60 fps. Every interaction felt janky because of it.
Most things that Ngrx does can be done much simpler using a basic service/facade pattern that expose observables from rxjs subjects.
Just put methods on services/facades that return observables; such a method replaces the reducer, store, and selector from Ngrx. Then put other methods on the service/facade that trigger data to be pushed on these observables; those methods replace your actions and effects from Ngrx. So instead of reducers + stores + selectors you have methods that return observables, and instead of actions + effects you have methods that push data to the observables. Where the data comes from is up to you: you can fetch something or compute something, and then just call subject.next() with the data you want to push.
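A minimal sketch of that mapping (the facade, types, and endpoint are all hypothetical):

    // todos.facade.ts -- a plain service standing in for actions/effects/reducer/store/selectors
    import { Injectable } from '@angular/core';
    import { HttpClient } from '@angular/common/http';
    import { BehaviorSubject, Observable } from 'rxjs';

    interface Todo { id: number; title: string; }

    @Injectable({ providedIn: 'root' })
    export class TodosFacade {
      private todosSubject = new BehaviorSubject<Todo[]>([]);

      constructor(private http: HttpClient) {}

      // replaces store + reducer + selector: components just consume this stream
      getTodos(): Observable<Todo[]> {
        return this.todosSubject.asObservable();
      }

      // replaces action + effect: fetch the data and push it onto the stream
      loadTodos(): void {
        this.http.get<Todo[]>('/api/todos') // assumed endpoint
          .subscribe(todos => this.todosSubject.next(todos));
      }
    }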
The RxJS knowledge you need in order to use Ngrx will already make you competent at using bare RxJS yourself anyway.
If you have several components that depend on some common data, then you still don't need ngrx, as the basic service/facade pattern explicitly handles this already.
If several services depend on common data between them, then you just make a common service between these services. You still don't need ngrx. It's services all the way down, just like it is components all the way down.
For me Ngrx doesn't look so good on the bottom line.
It is essentially a bloated and over engineered Enterprise™🏢👨‍💼🤮 grade Rxjs Subject, when you could have just used the good old and trusty Rxjs Subject. Listen to me kids, life is too short for unnecessary complexity. Stick to the bare necessities. The simple bare necessities. Forget about your worries and your strife.
I've been working with NgRx for over three years now. I've used it on small projects, where it was handy but unnecessary, and I've used it in applications where it was a perfect fit. Meanwhile I had a chance to work on a project which did not use it, and I must say it would have profited from it.
On the current project I was responsible for designing the architecture of the new FE app. I was tasked with completely refactoring an existing application which, for the same requirements, used a non-NgRx approach; it was buggy, difficult to understand and maintain, and there was no documentation. I decided to use NgRx, for the following reasons:
The application has more than one actor over the data. The server uses SSE to push state updates which are independent from user actions.
At application start we load most of the available data, which is then partially updated via SSE.
Various UI elements are enabled/disabled depending on multiple conditions which come from the BE and from user decisions.
The UI has multiple variations. Events from the BE can change the currently visible UI elements (texts in dialogs), and even user actions might change how the UI looks and works (a recurring dialog can be replaced by a snack if the user clicked some button).
The state of multiple UI elements must be preserved, so that when the user leaves the page and goes back, the same content (or content updated via SSE) is visible.
As you can see, the requirements do not match those of a standard CRUD web page. Doing it the "Angular" way brought such complexity to the code that it became super hard to maintain, and what's worse, by the time I joined the team the last two original members were leaving, without any documentation of that custom-made, non-NgRx solution.
Now, a year after refactoring the app to use NgRx, I think I can sum up the pros and cons.
Pros:
The app is more organized. The state representation is easy to read, grouped by purpose or data origin, and is simple to extend.
We got rid of many factories, facades and abstract classes which had lost their purpose. The code is lighter, and components are "dumber", with fewer hidden tricks coming from somewhere else.
Complicated state calculations are made simple using effects and selectors, and most components can now be fully functional just by injecting the store and dispatching actions or selecting the needed slice of the state, while handling multiple actions at once.
Because of updated app requirements we were forced to refactor the store already, and it was mostly Ctrl + C, Ctrl + V and some renaming.
Thanks to Redux DevTools it is easier to debug and optimize (yep, really).
This is most important: even though our state itself is unique, the store management we are using is not. It has support, it has documentation, and it's not impossible to find solutions to some difficult problems on the internet.
Small perk: NgRx is another technology you can put on your CV :)
Cons:
My colleagues were new to NgRx and it took some time for them to adapt and fully understand it.
On some occasions we introduced an issue where some actions were dispatched multiple times, and it was difficult to find the cause and fix it.
Our effects are huge, that's true. They can get messy, but that's what we have pull requests for. And if this code wasn't there, it would still end up somewhere else :)
Biggest issue? Actions are differentiated by their string type. Copy an action, forget to rename it, and boom: something different happens than you expect, and you have no clue why (see the sketch below).
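A minimal sketch of that last pitfall (the action names are hypothetical):

    // users.actions.ts
    import { createAction } from '@ngrx/store';

    export const loadUsers  = createAction('[Users] Load');
    // copied from the line above, but the type string was never renamed:
    export const loadOrders = createAction('[Users] Load');
    // dispatching loadOrders now also triggers everything listening for '[Users] Load'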
As a conclusion I would say that in our case NgRx was a great choice. It is demanding at first, but later everything feels natural and logical. Also, when you check the requirements, you'll notice that this is a special case. I understand the voices against NgRx, and in some cases I would support them, but not on this project. Could we have done it the "Angular" way? Of course; it was done that way before, but it was a mess. It was still full of boilerplate code, with things happening in different places without obvious reasons, and more.
Anyone who would have the chance to compare those two versions would say the NgRx version is better.
There is also a third option: keeping the data in a service and using the service directly in the HTML, for instance *ngFor="let item of userService.users". So when you update userService.users in the service after an add or update action, it is automatically rendered in the HTML, with no need for any observables, events or store.
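A minimal sketch of that approach (all names are hypothetical):

    // user.service.ts
    import { Injectable } from '@angular/core';

    @Injectable({ providedIn: 'root' })
    export class UserService {
      users: { name: string }[] = [];

      addUser(name: string): void {
        // the mutation shows up in templates on the next change-detection pass
        this.users.push({ name });
      }
    }

    // user-list.component.ts
    import { Component } from '@angular/core';
    import { UserService } from './user.service';

    @Component({
      selector: 'user-list',
      template: `<div *ngFor="let item of userService.users">{{ item.name }}</div>`
    })
    export class UserListComponent {
      // public so the template can bind to it directly
      constructor(public userService: UserService) {}
    }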
If the data in your app is used in multiple components, then some kind of service to share the data is required. There are many ways to do this.
A moderately complex app will eventually look like a front-end/back-end structure, with the data handling done in services that expose the data via observables to the components.
At some point you will need to write some kind of API for your data services: how to get data in and out, queries, etc. There are lots of rules, like immutability of the data and well-defined single paths to modify the data. Not unlike the server backend, but much quicker and more responsive than the API calls.
Your api will end up looking like one of the many state management libraries that already exist. They exist to solve difficult problems. You may not need them if your app is simple.
NgRx sometimes means a lot of files and a lot of duplicate code. I'm currently working on a fix for this: generic type classes for certain NgRx state management situations that are very common inside an Angular project, like pagers and object loading from backends.

Most Efficient Multipage RequireJS and Almond setup

I have multiple pages on a site using RequireJS, and most pages have unique functionality. All of them share a host of common modules (jQuery, Backbone, and more); all of them have their own unique modules, as well. I'm wondering what is the best way to optimize this code using r.js. I see a number of alternatives suggested by different parts of RequireJS's and Almond's documentation and examples -- so I came up with the following list of possibilities I see, and I'm asking which one is most recommended (or if there's another better way):
Optimize a single JS file for the whole site, using Almond, which would load once and then stay cached. The downside of this simplest approach is that I'd be loading onto each page code that the user doesn't need for that page (i.e. modules specific to other pages). For each page, the JS loaded would be bigger than it needs to be.
Optimize a single JS file for each page, which would include both the common and the page-specific modules. That way I could include Almond in each page's file and would only load one JS file on each page -- which would be significantly smaller than a single JS file for the whole site would be. The downside I see, though, is that the common modules wouldn't be cached in the browser, right? For every page the user goes to she'd have to re-download the bulk of jQuery, Backbone, etc. (the common modules), as those libraries would constitute large parts of each unique single-page JS file. (This seems to be the approach of the RequireJS multipage example, except that the example doesn't use Almond.)
Optimize one JS file for common modules, and then another for each specific page. That way the user would cache the common modules' file and, browsing between pages, would only have to load a small page-specific JS file. Within this option I see two ways to finish it off, to include the RequireJS functionality:
a. Load the file require.js before the common modules on all pages, using the data-main syntax or a normal <script> tag -- not using Almond at all. That means each page would have three JS files: require.js, common modules, and page-specific modules.
b. It seems that this gist is suggesting a method for plugging Almond into each optimized file, so I wouldn't have to load require.js but would instead include Almond in both my common modules AND my page-specific modules. Is that right? Is that more efficient than loading require.js upfront?
Thanks for any advice you can offer as to the best way to carry this out.
I think you've answered your own question pretty clearly.
For production we, as well as most companies I've worked with, use option 3.
Here are advantages of solution 3, and why I think you should use it:
It utilizes caching the most: all common functionality is loaded once, taking the least traffic and generating the fastest loading times when surfing multiple pages. Loading times of multiple pages are important, and while the traffic on your side might not be significant compared to other resources you're loading, the clients will really appreciate the faster load times.
It's the most logical, since most pages on the site share common functionality.
Here is an interesting advantage for solution 2:
You send the least data to each page. If a lot of your visitors are one-time visitors, for example on a landing page, this is your best bet. The importance of loading times in conversion-oriented scenarios cannot be overstated.
Are your visitors repeat visitors? Some studies suggest that 40% of visitors come with an empty cache.
Other considerations:
If most of your visitors visit a single page, consider option 2. Option 3 is great for sites where the average user visits multiple pages, but if the user visits a single page and that's all they see, option 2 is your best bet.
If you have a lot of JavaScript, consider loading some of it to give the user a visual indication, and then loading the rest in a deferred way asynchronously (with script tag injection, or directly with require if you're already using it; see the sketch after this list). The threshold for people noticing something is "clunky" in the UI is normally about 100 ms. An example of this is Gmail's "loading..." screen.
Given that HTTP connections are Keep-Alive by default in HTTP/1.1, or with an additional header in HTTP/1.0, sending multiple files is less of a problem than it was 5-10 years ago. Make sure you're sending the Keep-Alive header from your server for HTTP/1.0 clients.
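A minimal sketch of deferring non-critical JavaScript via script tag injection (the file name is hypothetical):

    // once the critical UI is up, pull in the rest asynchronously
    var script = document.createElement('script');
    script.src = '/js/non-critical.js';
    script.async = true;
    document.head.appendChild(script);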
Some general advice and reading material:
JavaScript minification is a must; r.js, for example, does this nicely, and your thought process in using it was correct. r.js also combines JavaScript files, which is a step in the right direction.
As I suggested, deferring JavaScript is really important too, and can drastically improve loading times. Deferred execution will make your loading time look fast, which is very important; in some scenarios a lot more important than actually loading fast.
Anything you can load from a CDN, like external resources, you should load from a CDN. Some libraries people use today, like jQuery, are pretty big (~80 KB); fetching them from a shared cache could really benefit you. In your example, I would not load Backbone, Underscore and jQuery from your site; rather, I'd load them from a CDN.
I created an example repository to demonstrate these 3 kinds of optimization.
It can help us get a better understanding of how to use r.js.
https://github.com/cloudchen/requirejs-bundle-examples
FYI, I prefer to use option 3, following the example in https://github.com/requirejs/example-multipage-shim
I am not sure whether it is the most efficient.
However, I find it convenient because:
You only need to configure require.config (all the various libraries in one place)
During r.js optimization, you then decide which modules to group as common (see the build-profile sketch below)
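For reference, a minimal sketch of such a build profile, following the shape of the example-multipage-shim project (the module names are hypothetical):

    // build.js -- r.js build profile: one common layer plus one small file per page
    ({
      baseUrl: 'js/lib',
      dir: 'js/dist',
      paths: { app: '../app' },
      modules: [
        // the shared layer: loaded once, cached across pages
        { name: 'app/common', include: ['jquery', 'backbone'] },
        // per-page layers, excluding everything already bundled in common
        { name: 'app/main-page1', exclude: ['app/common'] },
        { name: 'app/main-page2', exclude: ['app/common'] }
      ]
    })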
I also prefer option 3, and I can tell you why: it's the most logical, and it makes the best use of caching.
A couple of further suggestions: you can use a content delivery network (CDN) like MaxCDN to ensure your JS files are served quickly to everyone, and I'd also suggest putting your JS files in the footer of your HTML. Hope that helps.