Most Efficient Multipage RequireJS and Almond setup - optimization

I have multiple pages on a site using RequireJS, and most pages have unique functionality. All of them share a host of common modules (jQuery, Backbone, and more); all of them have their own unique modules, as well. I'm wondering what is the best way to optimize this code using r.js. I see a number of alternatives suggested by different parts of RequireJS's and Almond's documentation and examples -- so I came up with the following list of possibilities I see, and I'm asking which one is most recommended (or if there's another better way):
Optimize a single JS file for the whole site, using Almond, which would load once and then stay cached. The downside of this most simple approach is that I'd be loading onto each page code that the user doesn't need for that page (i.e. modules specific to other pages). For each page, the JS loaded would be bigger than it needs to be.
Optimize a single JS file for each page, which would include both the common and the page-specific modules. That way I could include Almond in each page's file and would only load one JS file on each page -- which would be significantly smaller than a single JS file for the whole site would be. The downside I see, though, is that the common modules wouldn't be cached in the browser, right? For every page the user goes to she'd have to re-download the bulk of jQuery, Backbone, etc. (the common modules), as those libraries would constitute large parts of each unique single-page JS file. (This seems to be the approach of the RequireJS multipage example, except that the example doesn't use Almond.)
Optimize one JS file for common modules, and then another for each specific page. That way the user would cache the common modules' file and, browsing between pages, would only have to load a small page-specific JS file. Within this option I see two ways to finish it off, to include the RequireJS functionality:
a. Load the file require.js before the common modules on all pages, using the data-main syntax or a normal <script> tag -- not using Almond at all. That means each page would have three JS files: require.js, common modules, and page-specific modules.
b. It seems that this gist is suggesting a method for plugging Almond into each optimized file, so I wouldn't have to load require.js at all but would instead include Almond in both my common modules AND my page-specific modules. Is that right? Is that more efficient than loading require.js upfront?
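For what it's worth, here is roughly how I imagine a per-page build config for option 3b would look, wrapping Almond into the output (all paths and module names below are made up, not taken from any real project):
({
    baseUrl: "js",
    name: "lib/almond",            // build around almond.js instead of loading require.js
    include: ["pages/home"],       // this page's entry module (placeholder name)
    insertRequire: ["pages/home"], // kick off the page module once the file has loaded
    out: "dist/home.js",
    wrap: true                     // wrap everything in a closure so no globals leak
})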
Thanks for any advice you can offer as to the best way to carry this out.

I think you've answered your own question pretty clearly.
For production, we - as well as most companies I've worked with - use option 3.
Here are advantages of solution 3, and why I think you should use it:
It makes the most of caching: all common functionality is loaded once, which generates the least traffic and the fastest load times when surfing multiple pages. Load times across multiple pages matter, and while the extra traffic might not be significant compared to other resources you're loading, the clients will really appreciate the faster load times.
It's the most logical, since most pages on the site share common functionality.
Here is an interesting advantage for solution 2:
You send the least data to each page. If a lot of your visitors are one-time visitors, for example on a landing page, this is your best bet. The importance of load times in conversion-oriented scenarios cannot be overstated.
Are your visitors repeat visitors? Some studies suggest that 40% of visitors arrive with an empty cache.
Other considerations:
If most of your visitors visit a single page, consider option 2. Option 3 is great for sites where the average user visits multiple pages, but if a user only ever sees one page, option 2 is your best bet.
If you have a lot of JavaScript, consider loading some of it upfront to give the user a visual indication, and then loading the rest in a deferred, asynchronous way (with script tag injection, or directly with require if you're already using it); see the sketch after this list. The threshold at which people notice that something in the UI is 'clunky' is normally about 100ms. An example of this is Gmail's 'loading...' screen.
Given that HTTP connections are keep-alive by default in HTTP/1.1, or with an additional header in HTTP/1.0, sending multiple files is less of a problem than it was 5-10 years ago. Make sure you're sending the Keep-Alive header from your server for HTTP/1.0 clients.
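As a rough illustration of the deferred-loading idea above (the module name and element id are placeholders, not anything from the question):
// show a lightweight shell immediately so the page doesn't feel clunky
document.getElementById("app").innerHTML = "Loading...";

// then pull in the heavy page-specific code asynchronously via require
require(["heavy/dashboard"], function (dashboard) {
    dashboard.render(document.getElementById("app"));
});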
Some general advice and reading material:
JavaScript minification is a must; r.js, for example, does this nicely, so your thought process in using it was correct. r.js also concatenates JavaScript files, which is a step in the right direction.
As I suggested, deferring JavaScript is really important too, and can drastically improve load times. Deferring execution helps the page feel fast, which is very important - in some scenarios a lot more important than actually loading fast.
Anything you can load from a CDN, like external resources, you should load from a CDN. Some libraries people use today, like jQuery, are pretty big (around 80 KB); fetching them from a shared CDN cache could really benefit you. In your example, I would not load Backbone, Underscore, and jQuery from your own site; rather, I'd load them from a CDN (see the config sketch below).
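A minimal sketch of a CDN-with-local-fallback setup using RequireJS's fallback-paths feature (the CDN URL, version, and local path are just examples I picked):
requirejs.config({
    paths: {
        jquery: [
            "https://code.jquery.com/jquery-1.12.4.min",  // try the CDN copy first
            "lib/jquery"                                   // fall back to the local copy if the CDN fails
        ]
    }
});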

I created an example repository to demonstrate these 3 kinds of optimization.
It should help give a better understanding of how to use r.js.
https://github.com/cloudchen/requirejs-bundle-examples

FYI, I prefer to use option 3, following the example in https://github.com/requirejs/example-multipage-shim
I am not sure whether it is the most efficient.
However, I find it convenient because:
You only need to configure require.config (the paths to the various libraries) in one place
During r.js optimization you can then decide which modules to group as common (see the sketch below)
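As a rough sketch of that build setup, in the spirit of the example-multipage-shim build file (all module names and paths below are placeholders I made up):
({
    baseUrl: "js",
    mainConfigFile: "js/common.js",   // the shared require.config lives here
    dir: "build",
    modules: [
        { name: "common" },                             // jQuery, Backbone, and other shared modules
        { name: "pages/page1", exclude: ["common"] },   // each page layer excludes what common already has
        { name: "pages/page2", exclude: ["common"] }
    ]
})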

I prefer option 3, and I can tell you why.
It's the most logical.
It makes the best use of caching: all common functionality is loaded once, which means the least traffic and the fastest load times when browsing multiple pages, and users will really appreciate that.
The advantages are essentially the ones already listed above.

You can use any content delivery network (CDN), like MaxCDN, to ensure your JS files get served to everyone quickly. I'd also suggest putting your JS files at the bottom of your HTML, just before the closing body tag. Hope that helps.

Related

NuxtJS: Static render just a component instead of 1000's of pages

We have a website with potentially 1000's of pages. We would like to leverage the power of Static Rendering. The CMS, which is hosted on a different server, will trigger a static re-render of the page via WebHooks.
When a new page is created, the main nav may need to change. That means the entire site will need to be re-generated, and with so many pages that could take a very long time.
So what is the workaround for this? Can you statically render just the main nav and include it on all pages, to avoid re-rendering absolutely everything? ...so partial static rendering?
Depending on where you're hosting your code, you could use ISG: https://youtu.be/4vRn7yg85jw
There are several approaches to solving that yourself too, but it will of course require some work.
Nuxt team is currently working on solving this issue with something baked in: https://github.com/nuxt/framework/discussions/560
You could maybe also optimize some of those pages, or look into splitting them into different projects as described here: https://stackoverflow.com/a/69835750/8816585
Batching the regeneration could be an idea too, or even using the preview feature to avoid some useless builds: https://nuxtjs.org/docs/features/live-preview#preview-mode
Overall, I'm not sure there is a magic solution with a perfect balance between SSR and SSG as of today without a decent amount of work. Of course, if you're using Go + Vite or the like, builds will be faster overall, but it's quite a broad/complex question.

Vue server side rendering performance issue

I've got a weird issue. Here is my code for rendering the Vue pages. On my local machine, the rendering time for this page is about 50-80ms, but if I access the page in parallel it can sometimes be around 120ms (maybe 5 times out of 200 requests); most of the time it is still 50-80ms.
However, when I deploy the code to our production Docker environment, these peak times get worse: they sometimes reach 1 second, and we see 500ms a lot. The performance is bad. It makes no sense - the request load is not heavy and we have load balancing too. For a similar page that we render with EJS, we don't see this kind of peak very often. The backend logic and services used for EJS and Vue are all the same.
Client-side rendering shows similar symptoms.
Does anybody know what kind of reasons could lead to this issue?
First of all, do two things:
1. Do a quick test using Lighthouse if possible; it'll help pinpoint the problem.
2. Check the console for any errors AND warnings.
Without further information about your code I don't think it's possible to say what exactly is causing the problem.
However, after searching for some time I came across an article whose author had the same performance problems.
His Lighthouse check showed that his website had shortcomings in indexing and First Contentful Paint; long story short, he had an infinite loop and v-for loops without keys.
Following are some tips on how you can better optimize your Vue app:
Apply the Vue Style Guide and ESLint
There’s a style guide in Vue docs: https://v2.vuejs.org/v2/style-guide/.
There are four rule categories; we really care about three of them:
Essential rules, which prevent errors,
Recommended and strongly recommended rules for keeping best practices - to improve the quality and readability of the code.
You can use ESLint to take care of those rules for you. You just have to set everything properly in the ESLint configuration file.
Don’t use multiple v-if
Don’t write api call handlers in components
Simply do what is locally necessary in components logic. Every method that could be external should be separated and only called in components e.g. business logic.
Use slots instead of large amounts of props
Lazy load routes
Use a watcher with the immediate option instead of the created hook and a watcher together (see the sketch after this list).
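A minimal sketch of those last two tips (the route, component, prop, and method names are all made up for illustration):
// 1. Lazy-loaded route: the component's code is split into its own chunk
//    and only downloaded when the route is visited
const routes = [
    { path: "/reports", component: () => import("./views/Reports.vue") }
];

// 2. Watcher with immediate: true, instead of duplicating the logic in created()
export default {
    props: ["userId"],
    watch: {
        userId: {
            immediate: true,        // runs once right away, then on every change
            handler(id) {
                this.fetchUser(id); // fetchUser is a placeholder method
            }
        }
    }
};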
Here's another article on how to improve your Vue app.
Good Luck!

How can I overcome the SEO implications of rendering content using client-side JS with FireBase?

I'm interested in using FireBase as a data-store for the creation of largely traditional, occasionally updated websites and am concerned about the SEO implications of rendering content using client-side JavaScript.
I know Google has made headway into indexing some JavaScript content, but am wondering what my best course of action is. I know I have some options:
Render content using 100% client-side JS, and probably suffer some indexing trouble
Build static HTML files on the server side (using Node, most likely) and serve them instead
First, I'm not sure how bad the problem actually is when doing everything client-side (am I solving something that needs solving?). And second, I just wonder if I'm missing some other obvious way to approach this.
Unfortunately, rendering data on the client-side generally makes it difficult to do SEO. Firebase is really intended for use with dynamic data, such as user account info, game data, etc, where SEO is not a goal.
That being said, there are a few things you can do to optimize for SEO. First, you can render as much of your site as possible at compile time using a templating tool like Mustache (a sketch follows below). This is what we did on the Firebase.com website (the entire site is static except for the tutorial and examples).
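For instance, a build step along these lines can bake the static parts of a page into plain HTML ahead of time (the file names and data are hypothetical, and this assumes the mustache npm package rather than anything Firebase-specific):
// hypothetical Node build script: render static HTML at build time
var fs = require("fs");
var Mustache = require("mustache");

var template = fs.readFileSync("templates/about.mustache", "utf8");
var html = Mustache.render(template, { title: "About us" });
fs.writeFileSync("public/about.html", html);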
Second, if your app uses hash fragments in the URL for navigation (anything after the "#!"), you can provide a separate set of static or server-generated pages that correspond to your dynamic pages so that crawlers can read the data. Google has a spec for doing this, which you can see here:
https://developers.google.com/webmasters/ajax-crawling/docs/specification

How to optimize Dojo loading time?

I'm working on a business application built on PHP and the Dojo Toolkit. The interface is similar to what you see in the Dojo Dijit theme tester.
Over the internet it takes a lot of time to load all those JS files one by one.
I want to know the best technique the theme tester demo uses so that it loads much faster than the one we built.
I'm interested in the best practices for optimizing its loading time.
You have rightly observed that the biggest cause of the runtime performance issue is the many round trips to the server to fetch small JS files.
While the modularized design of Dojo is very beneficial at design time (widget extensions, namespacing, etc.), at runtime it is expected that you optimize the Dojo bits - the way to do that is a custom build.
Doing a custom build will give you a big performance boost: the hundreds of round trips will be reduced to one or two, and the size of the payload will also decrease dramatically. We have seen a 50x performance improvement with custom builds.
Custom build will create an optimized, minified JS file that will contain only the code you use in the app.
You can define multiple layers depending on how you want to segregate your application JS files (for example, one single compressed file versus multiple files included in different UIs)
Depending on the version of Dojo you are using, see the docs below (and the rough profile sketch after them):
http://dojotoolkit.org/reference-guide/1.7/build/index.html#build-index
http://dojotoolkit.org/reference-guide/1.7/build/pre17/build.html#build-pre17-build
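For the 1.7+ builder, a build profile with a single custom layer might look roughly like this; the package and module names are placeholders, not anything from the question:
// hypothetical app.profile.js for the Dojo 1.7+ build system
var profile = {
    basePath: "./",
    releaseDir: "release",
    layers: {
        "app/main": {
            // modules baked into this single optimized layer
            include: [ "app/main", "dijit/form/Button" ]
        }
    }
};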
While it looks daunting at first, stick with it and you will be able to create an optimized version and see the benefits :)

would lazy-loading img src negatively impact SEO

I'm working on a shopping site. We display 40 images in our results. We're looking to reduce the onload time of our page, and since images block the onload event, I'm considering lazy loading them by initially setting img.src="" and then setting it after onload. Note that this is not AJAX loading of HTML fragments: the image HTML along with the alt text is present; it's just that the image src is deferred.
Does anyone have any idea as to whether this may harm SEO or lead to a google penalty box now that they are measuring sitespeed?
Images don't block anything, they are already lazy loaded. The onload event notifies you that all of the content has been downloaded, including images, but that is long after the document is ready.
It might hurt your rank because of the lost keywords and empty src attributes. You'll probably lose more than you gain - you're better off optimizing your page in other ways, including your images. Gzip + fewer requests + proper expires + a fast static server should go a long way. There is also a free CDN that might interest you.
I'm sure Google doesn't mean for the whole web to remove its images from the source code to gain a few points. And keep in mind that they consider anything under 3s to be a good load time, so there's plenty of room to wiggle before resorting to voodoo techniques.
From a pure SEO perspective, you shouldn't be indexing search result pages. You should index your home page and your product detail pages, and have a spiderable method of getting to those pages (category pages, sitemap.xml, etc.)
Here's what Matt Cutts has to say on the topic, in a post from 2007:
In general, we’ve seen that users usually don’t want to see search results (or copies of websites via proxies) in their search results. Proxied copies of websites and search results that don’t add much value already fall under our quality guidelines (e.g. “Don’t create multiple pages, subdomains, or domains with substantially duplicate content.” and “Avoid “doorway” pages created just for search engines, or other “cookie cutter” approaches…”), so Google does take action to reduce the impact of those pages in our index.
http://www.mattcutts.com/blog/search-results-in-search-results/
This isn't to say that you're going to be penalised for indexing the search results, just that Google will place little value on them, so lazy-loading the images (or not) won't have much of an impact.
There are some different ways to approach this question.
Images don't block load. JavaScript does; stylesheets do to an extent (it's complicated); images do not. However, they will consume HTTP connections, of which the browser will only fire off 2 per domain at a time.
So, what you can do that should be worry-free and the "Right Thing" is a poor man's CDN: just serve them from www1, www2, www3, etc. on your own site and servers. There are a number of ways to do that without much difficulty (a rough sketch follows).
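A very rough sketch of that idea, assuming all the subdomains (made up here) serve identical content; hashing on the path keeps each image pinned to one subdomain so browser caching still works:
// hypothetical "poor man's CDN": spread image requests across a few subdomains
function shardedImageUrl(path) {
    var shards = ["www1", "www2", "www3"];   // all must serve the same files
    var hash = 0;
    for (var i = 0; i < path.length; i++) {
        hash = (hash + path.charCodeAt(i)) % shards.length;
    }
    return "http://" + shards[hash] + ".example.com" + path;
}

// usage: img.src = shardedImageUrl("/images/product-42.jpg");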
On the other hand: no, it shouldn't affect your SEO. I don't think Google even bothers to load images, actually.
We display 40 images in our results.
First question: is this page even a landing page? Is it targeted at a specific keyword? Internal search result pages are not automatically landing pages. If they are not landing pages, then do whatever you want with them (and make sure they do not get indexed by Google).
If they are landing pages (pages targeted at a specific keyword), then the performance of the site is indeed important - for the conversion rate of these pages and, indirectly (and to a smaller extent also directly), for Google. So a kind of lazy-load logic for pages with a lot of images is a good idea.
I would go for:
Load the first two (product?) images in an SEO-optimized way (as normal HTML, with targeted alt text and a targeted filename). For the rest of the images, build a lazy-load mechanism - but don't just set src to blank; insert the whole img tag on load (or on scroll, or whatever) into your code, as in the sketch below.
Having a lot of broken img tags in the HTML for non-JavaScript users (i.e. Google, old mobile devices, text browsers) is not a good idea (you will not get a penalty as long as the lazy-loaded images are not misleading), but shoddy markup is never a good idea.
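A minimal sketch of that insert-the-whole-img-tag-on-load idea (the data attribute names and placeholder element are invented for illustration):
// placeholders in the markup, e.g. <span data-img-src="/img/shoe.jpg" data-img-alt="Red shoe"></span>
window.addEventListener("load", function () {
    var placeholders = document.querySelectorAll("span[data-img-src]");
    for (var i = 0; i < placeholders.length; i++) {
        var span = placeholders[i];
        var img = document.createElement("img");
        img.src = span.getAttribute("data-img-src");
        img.alt = span.getAttribute("data-img-alt") || "";
        span.parentNode.replaceChild(img, span);   // swap the placeholder for the real image
    }
});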
For general SEO questions please visit https://webmasters.stackexchange.com/ (Stack Overflow is more for programming-related questions).
I have to disagree with Alex. Google recently updated its algorithm to account for page load time. According to the official Google blog
...today we're including a new signal in our search ranking algorithms: site speed. Site speed reflects how quickly a website responds to web requests.
However, it is important to keep in mind that the most important aspect of SEO is original, quality content.
http://googlewebmastercentral.blogspot.com/2010/04/using-site-speed-in-web-search-ranking.html
I have added lazy loading to my site (http://www.amphorashoes.ro) and I get better rankings from Google (maybe because the content loads faster) :)
First, don't use src=""; it may hurt your page. Use a small loading/placeholder image instead.
Second, I don't think it will affect SEO. We always use alt="imgDesc.." to describe the image, and the spider may read that alt text even though it can't analyze what the image really is.
I found this tweet regarding Google's SEO
There are various ways to lazy-load images, it's certainly worth thinking about how the markup could work for image search indexing (some work fine, others don't). We're looking into making some clearer recommendations too.
12:24 AM - 28 Feb 2018
John Mueller - Senior Webmaster Trends Analyst
From what I understand, it looks like it depends on how you implement your lazy loading. And Google is yet to recommend an approach that would be SEO friendly.
Theoretically, Google should be running the scripts on websites, so it should be OK to lazy load. However, I can't find a source (from Google) that confirms this.
So it looks like crawling lazy-loaded or deferred images may not be foolproof yet. Here's an article I wrote about lazy loading, image deferring, and SEO that talks about it in detail.
Here's a working library that I authored which focuses on lazy loading or deferring images in an SEO-friendly way.
What it basically does is cancel the image loading when the DOM is ready and continue loading the images after the window load event.
...
<div>My last DOM element</div>
<script>
  (function() {
    // remove all the sources: stash each img's real src in a data attribute
    // and drop it so the browser cancels those downloads for now
    var imgs = document.getElementsByTagName("img");
    for (var i = 0; i < imgs.length; i++) {
      imgs[i].setAttribute("data-src", imgs[i].src);
      imgs[i].removeAttribute("src");
    }
  })();
  window.addEventListener("load", function() {
    // return all the sources once everything else has loaded
    var imgs = document.getElementsByTagName("img");
    for (var i = 0; i < imgs.length; i++) {
      imgs[i].src = imgs[i].getAttribute("data-src");
    }
  }, false);
</script>
</body>
You can cancel the loading of an image by removing its src value or replacing it with a placeholder image. You can test this approach with Google Fetch.
You have to make sure that the correct src is present until the DOM is ready, so that Google Fetch will capture your imgs' original src.