We have a website with potentially thousands of pages. We would like to leverage the power of static rendering. The CMS, which is hosted on a different server, will trigger a static re-render of a page via webhooks.
When a new page is created, the main nav may need to change. That means the entire site will need to be re-generated, and with so many pages that could take a very long time.
So what is the workaround for this? Can you statically render just the main nav and include it in all pages, to avoid re-rendering absolutely everything? ...so, partial static rendering?
Depending on where you're hosting your code, you could use ISG (Incremental Static Generation): https://youtu.be/4vRn7yg85jw
There are several approaches to solving that yourself too, but it will of course require some work.
The Nuxt team is currently working on solving this issue with something baked in: https://github.com/nuxt/framework/discussions/560
You could maybe also optimize some of those pages, or look into splitting them into different projects as described here: https://stackoverflow.com/a/69835750/8816585
Batching the regeneration could be an idea too, or even using the preview feature to avoid some unnecessary builds: https://nuxtjs.org/docs/features/live-preview#preview-mode
Overall, I'm not sure there is a magic solution with a perfect balance between SSR and SSG as of today without a decent amount of work. Of course, if you're using Go + Vite or the like you will get faster builds, but it's quite a broad/complex question.
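For what it's worth, if you end up on Nuxt 3 / Nitro, route rules can express this kind of per-page regeneration. The following is only a sketch: it assumes a host (e.g. Vercel or Netlify) that supports incremental static regeneration, and the '/pages/**' pattern and one-hour window are made-up values.

// nuxt.config.ts -- a hedged sketch of per-route incremental regeneration
export default defineNuxtConfig({
  routeRules: {
    // pre-rendered once at build time
    '/': { prerender: true },
    // regenerated on demand and cached for an hour, so creating a new
    // page in the CMS doesn't force a full-site rebuild
    '/pages/**': { isr: 3600 }
  }
})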
I have a Vue/Nuxt web app where pages are dynamically generated from lots of components that have child components.
The trouble is that the header and footer are rendered first, then the child components that have the actual content. This looks terrible on first load and Lighthouse doesn't like it: it's an "Avoid large layout shifts" failure. For context, it's only an issue with client-side rendering; SSR would eliminate this issue while introducing others.
What I could do is edit every single component in my project and add an event on mounted. That could then be used to decide when to show the layout. The problem is it would be a major hassle and would cause bugs when new components are added and this bit is forgotten.
I'm not able to find any general solution to this in Vue and/or Nuxt. I'd love to have a new lifecycle hook of allMounted which would only fire when child components are also mounted, but it doesn't exist, and even that would be a bit hacky. An even more general "render when all components are mounted" option would be awesome.
I'm not sure that a dynamic component can help in your case, and I guess that your company's website will not really benefit from it; indeed, the content-jumping problem will still be present IMO.
<component :is="currentTabComponent"></component>
I still think that your content is highly static and that you could even switch to full static generation to get the best performance benefits, rather than waiting a long time (TTFB) while the SPA loads all the content. It may be a bit more challenging to have everything look nice, of course (before/after the hydration).
Also, you should have an idea of the approximate size of your containers. In that case, you could use some skeletons, and maybe even a prototyping font to visually populate the blocks.
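As a minimal sketch of the skeleton idea (the 320px value is an assumption; size it to match your real content):

<template>
  <section class="article">
    <p v-if="content">{{ content }}</p>
    <!-- placeholder that already occupies the content's final footprint -->
    <div v-else class="skeleton" />
  </section>
</template>

<script>
export default {
  // in the real page `content` would come from asyncData/fetch
  props: { content: String }
}
</script>

<style>
/* reserving the approximate final height is what prevents the layout shift */
.article { min-height: 320px; }
.skeleton { height: 320px; background: #eee; border-radius: 4px; }
</style>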
In case you do not agree or think that this is not doable, you still have this solution at your disposal:
<child-component #hook:mounted="makeSomeStuff"></child-component>
With this you may be able to display a full-sized loader until your content is done loading. You could add a mixin with the longer mounted syntax in each component to avoid too much boilerplate, but this approach is deprecated and does have various issues.
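A rough sketch of that loader approach (full-page-loader, child-component and blocks are made-up names; Vue 2 syntax):

<template>
  <div>
    <!-- stays visible until every child has reported its mounted hook -->
    <full-page-loader v-if="mountedCount < blocks.length" />
    <child-component
      v-for="block in blocks"
      :key="block.id"
      :block="block"
      @hook:mounted="mountedCount++"
    />
  </div>
</template>

<script>
export default {
  props: { blocks: { type: Array, default: () => [] } },
  data() {
    return { mountedCount: 0 } // bumped once per child that finishes mounting
  }
}
</script>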
But IMO, the issue is more in the way you fetch your data (the asyncData and fetch hooks are nice) and in the way everything is fully dynamic when there is no specific need. If it's more important to keep the dynamic part, I guess you can be strict in code reviews, or plug in some git hooks or the like to scan the code and check that the required mounted emits are in place.
There is no ideal solution in your case, but keep in mind that Lighthouse will always prefer some SSR content with the least amount of JS. Here is my personal bible for anything performance-related; you can probably grasp some nice tips in this really in-depth article.
Update for Vue 3
The syntax has changed in Vue 3: https://v3-migration.vuejs.org/breaking-changes/vnode-lifecycle-events.html#_2-x-syntax
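In short, the hook: prefix becomes vue:, so the snippet above would be written as:
<child-component @vue:mounted="makeSomeStuff"></child-component>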
I've got a weird issue. Here is my code for rendering the Vue pages. On my local machine, the rendering time for this page is around 50–80 ms; however, if I access the page in parallel, it can sometimes be around 120 ms (maybe 5 times out of 200 requests), but most of the time it is still 50–80 ms.
However, when I deploy the code to our production Docker, these peaks get worse: they can sometimes reach 1 second, and we see 500 ms a lot of the time. The performance is bad. It makes no sense: the request load is not heavy, and we have load balancing too. For a similar page which we render using EJS, we don't see these peaks much, and the backend logic and services used for EJS and Vue are all the same.
Client-side rendering is also the same; it has similar symptoms.
Does anybody know what kind of reasons could lead to this issue?
First of all, do two things:
1. Do a quick test using Lighthouse if possible; it'll help pinpoint the problem.
2. Check the console for any errors AND warnings.
Without further information about your code, I don't think it's possible to say what exactly is causing the problem.
However, after searching for some time, I came across an article whose writer had the same performance problems. His Lighthouse check showed that his website had shortcomings in indexing and contentful paint; long story short, he had an infinite loop and v-for loops without keys.
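As a reminder of the v-for point, giving the loop a key lets Vue track and reuse nodes instead of re-patching the whole list:

<li v-for="item in items" :key="item.id">{{ item.name }}</li>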
Following are some tips on how you can better optimize your Vue app:
1. Apply the Vue Style Guide and ESLint
There’s a style guide in Vue docs: https://v2.vuejs.org/v2/style-guide/.
There you can find four rule categories. We really care about three of them:
Essential rules, which prevent errors,
Recommended and strongly recommended rules, which keep best practices – improving the quality and readability of the code.
You can use ESLint to take care of those rules for you. You just have to set everything properly in the ESLint configuration file.
Don’t use multiple v-if
Don’t write api call handlers in components
Simply do what is locally necessary in components logic. Every method that could be external should be separated and only called in components e.g. business logic.
4. Use slots instead of large amounts of props
5. Lazy load routes
6. Use a watcher with the immediate option instead of the created hook and a watcher together (see the sketch below).
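Here is the sketch for tip 6 (userId and fetchUser are made-up names):

// one watcher with immediate: true replaces the created() + watcher duo
export default {
  props: { userId: String },
  watch: {
    userId: {
      immediate: true, // the handler also fires once when the component is created
      handler(id) {
        this.fetchUser(id)
      }
    }
  }
}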
Here is another article on how to improve your Vue app.
Good Luck!
Recently I've been playing around with different frameworks and libraries, looking for something that really suits my needs.
You see, my job mainly involves creating ASP.NET MVC applications, and for most of them using Razor and a little bit of jQuery is enough. But in certain cases, and only for a few pages (rarely more than one or two per app), I really need something extra that helps me avoid getting entangled in a bunch of jQuery code.
As I mentioned, I tried a couple of alternatives, and of them the one I liked the most is Aurelia, because of its simplicity and the fact that it embraces standards. BUT the more I dive into the framework, the more I think that it might not be what I'm looking for, as it seems more suitable for full SPA applications, and what I need is:
Something that helps me reduce the amount of DOM manipulation
An efficient templating engine
I know that Aurelia provides that and much more, but I don't want/need an SPA; I need those functionalities ONLY on some specific pages, not in the whole application.
Can Aurelia help me achieve this? If so, how?
Sure, Aurelia can help you achieve that. You just won't use certain features, like routing, on the pages you create with Aurelia.
That being said, it isn't a drop-in replacement for jQuery, but none of the "modern" JS frameworks really are. And you're going to end up spending time learning whichever one you end up choosing.
Check out the aurelia.enhance functionality; it might be just what you're looking for!
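For illustration, a sketch of enhancing a single server-rendered fragment (the #comments selector and CommentsViewModel are assumptions; Aurelia 1.x API):

// main.js -- enhance one DOM subtree instead of bootstrapping a whole SPA
import { CommentsViewModel } from './comments';

export function configure(aurelia) {
  aurelia.use.basicConfiguration(); // no router needed for a non-SPA page

  aurelia.start().then(a => {
    // binds Aurelia's templating only to the existing #comments element
    a.enhance(new CommentsViewModel(), document.querySelector('#comments'));
  });
}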
I have used Aurelia in a non-SPA context, and it worked out well. I think this is exactly what you describe. For example:
http://legumeinfo.org/chado_phylotree/phytozome_10_2.59028020
https://github.com/legumeinfo/tripal_phylotree/tree/lis_master/theme/js/aurelia
I'm using Aurelia for dynamic elements on some sites, like comments for example. The page loads fast without comments, then Aurelia kicks in and loads the comments below. Also, with some SignalR magic, the discussion is updated in real time. It is awesome and insanely easy.
I have multiple pages on a site using RequireJS, and most pages have unique functionality. All of them share a host of common modules (jQuery, Backbone, and more); all of them have their own unique modules, as well. I'm wondering what is the best way to optimize this code using r.js. I see a number of alternatives suggested by different parts of RequireJS's and Almond's documentation and examples -- so I came up with the following list of possibilities I see, and I'm asking which one is most recommended (or if there's another better way):
Optimize a single JS file for the whole site, using Almond, which would load once and then stay cached. The downside of this simplest approach is that I'd be loading onto each page code that the user doesn't need for that page (i.e. modules specific to other pages). For each page, the JS loaded would be bigger than it needs to be.
Optimize a single JS file for each page, which would include both the common and the page-specific modules. That way I could include Almond in each page's file and would only load one JS file on each page -- which would be significantly smaller than a single JS file for the whole site would be. The downside I see, though, is that the common modules wouldn't be cached in the browser, right? For every page the user goes to she'd have to re-download the bulk of jQuery, Backbone, etc. (the common modules), as those libraries would constitute large parts of each unique single-page JS file. (This seems to be the approach of the RequireJS multipage example, except that the example doesn't use Almond.)
Optimize one JS file for common modules, and then another for each specific page. That way the user would cache the common modules' file and, browsing between pages, would only have to load a small page-specific JS file. Within this option I see two ways to finish it off, to include the RequireJS functionality:
a. Load the file require.js before the common modules on all pages, using the data-main syntax or a normal <script> tag -- not using Almond at all. That means each page would have three JS files: require.js, common modules, and page-specific modules.
b. It seems that this gist is suggesting a method for plugging Almond into each optimized file, so I wouldn't have to load require.js, but would instead include Almond in both my common modules AND my page-specific modules. Is that right? Is that more efficient than loading require.js upfront?
Thanks for any advice you can offer as to the best way to carry this out.
I think you've answered your own question pretty clearly.
For production we, as well as most companies I've worked with, use option 3.
Here are advantages of solution 3, and why I think you should use it:
It utilizes the most caching: all common functionality is loaded once, generating the least traffic and the fastest loading times when surfing multiple pages. Loading times across multiple pages are important, and while the traffic on your side might not be significant compared to other resources you're loading, the clients will really appreciate the faster load times.
It's the most logical, since most pages on the site share common functionality.
Here is an interesting advantage for solution 2:
You send the least data to each page. If a lot of your visitors are one-time, for example on a landing page, this is your best bet. The importance of loading times cannot be overestimated in conversion-oriented scenarios.
Are your visitors repeat visitors? Some studies suggest that 40% of visitors come with an empty cache.
Other considerations:
If most of your visitors visit a single page, consider option 2. Option 3 is great for sites where the average user visits multiple pages, but if the user visits a single page and that's all they see, option 2 is your best bet.
If you have a lot of JavaScript, consider loading some of it to give the user a visual indication, and then loading the rest in a deferred way, asynchronously (with script tag injection, or directly with require if you're already using it); see the sketch after this list. The threshold for people noticing something is 'clunky' in the UI is normally about 100 ms. An example of this is GMail's 'loading...'.
Given that HTTP connections are keep-alive by default in HTTP/1.1, or with an additional header in HTTP/1.0, sending multiple files is less of a problem than it was 5–10 years ago. Make sure you're sending the Keep-Alive header from your server for HTTP/1.0 clients.
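Here is the deferred-loading sketch referenced above (the charts module is a made-up example):

// show something immediately, then pull heavy modules in asynchronously
require(['jquery'], function ($) {
  $('#app').text('loading...'); // instant visual feedback, GMail-style

  require(['charts'], function (charts) {
    charts.render('#app'); // heavy work deferred until after the first paint
  });
});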
Some general advice and reading material:
JavaScript minification is a must; r.js, for example, does this nicely, and your thought process in using it was correct. r.js also combines JavaScript files, which is a step in the right direction.
As I suggested, deferring JavaScript is really important too, and can drastically improve loading times. Deferring execution will make the page feel like it loads fast, which in some scenarios is a lot more important than actually loading fast.
Anything you can load from a CDN, like external resources, you should load from a CDN. Some libraries people use today, like jQuery, are pretty big (80 kb), and fetching them from a cache could really benefit you. In your example, I would not load Backbone, Underscore and jQuery from your site; rather, I'd load them from a CDN.
I created an example repository to demonstrate these 3 kinds of optimization. It can help in getting a better understanding of how to use r.js.
https://github.com/cloudchen/requirejs-bundle-examples
FYI, I prefer to use option 3, following the example in https://github.com/requirejs/example-multipage-shim
I am not sure whether it is the most efficient, but I find it convenient because:
You only need to configure require.config once (all the various libraries in one place)
During r.js optimization, you can then decide which modules to group as common
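For illustration, a minimal r.js build config for option 3 might look like this (paths and module names are assumptions, loosely following the example-multipage layout):

// build.js -- one cached 'common' bundle plus a small bundle per page
({
  baseUrl: 'js/lib',
  paths: { app: '../app' },
  dir: 'js-built',
  modules: [
    // shared libraries, downloaded once and cached across the whole site
    { name: 'common', include: ['jquery', 'underscore', 'backbone'] },
    // page bundles exclude everything already shipped in common
    { name: 'app/main1', exclude: ['common'] },
    { name: 'app/main2', exclude: ['common'] }
  ]
})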
I prefer to use option 3 as well, and I can tell you why: it's the most logical, and it utilizes caching the most, since all common functionality is loaded once, generating the least traffic and the fastest loading times when browsing multiple pages.
A couple of further suggestions: you can use a content delivery network (CDN) like MaxCDN to ensure your JS files get served to everyone quickly, and I'd also suggest putting your JS files in the footer of your HTML. Hope that helps.
I'm interested in using Firebase as a data store for the creation of largely traditional, occasionally updated websites, and am concerned about the SEO implications of rendering content using client-side JavaScript.
I know Google has made headway into indexing some JavaScript content, but am wondering what my best course of action is. I know I have some options:
Render content using 100% client-side JS, and probably suffer some indexing trouble
Build static HTML files on the server side (using Node, most likely) and serve them instead
First, I'm not sure how bad the problem actually is when doing everything client-side (am I solving something that needs solving?). And second, I just wonder if I'm missing some other obvious way to approach this.
Unfortunately, rendering data on the client side generally makes it difficult to do SEO. Firebase is really intended for use with dynamic data, such as user account info, game data, etc., where SEO is not a goal.
That being said, there are a few things you can do to optimize for SEO. First, you can render as much of your site as possible at compile time using a templating tool like Mustache. This is what we did on the Firebase.com website (the entire site is static except for the tutorial and examples).
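A tiny sketch of that compile-time rendering (file names are assumptions, using the mustache npm package):

// build.js -- render static HTML at build time
var fs = require('fs');
var Mustache = require('mustache');

var template = fs.readFileSync('templates/page.mustache', 'utf8');
var html = Mustache.render(template, { title: 'Home', body: 'Hello!' });

fs.writeFileSync('public/index.html', html); // crawlers see plain HTML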
Second, if your app uses hash fragments in the URL for navigation (anything after the "#!"), you can provide a separate set of static or server-generated pages that correspond to your dynamic pages so that crawlers can read the data. Google has a spec for doing this, which you can see here:
https://developers.google.com/webmasters/ajax-crawling/docs/specification
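Under that spec, the crawler requests the page with ?_escaped_fragment_= substituted for the #!. A sketch of serving snapshots for those requests (assuming Express and a snapshots/ directory of pre-rendered HTML):

// example.com/#!/about is fetched by the crawler as
// example.com/?_escaped_fragment_=/about
var path = require('path');
var express = require('express');
var app = express();

app.use(function (req, res, next) {
  var fragment = req.query._escaped_fragment_;
  if (fragment !== undefined) {
    // crawlers get the pre-rendered snapshot of the dynamic page
    res.sendFile(path.join(__dirname, 'snapshots', fragment + '.html'));
  } else {
    next(); // normal visitors get the client-rendered app
  }
});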