How to optimize webpack's build time using the PrefetchPlugin & analyse tool?

Previous research:
As webpack's wiki says, it is possible to use the analyse tool to optimize build performance:
from: https://github.com/webpack/docs/wiki/build-performance#hints-from-build-stats
Hints from build stats
There is an analyse tool which visualises your build and also provides
some hints on how build size and build performance can be optimized.
You can generate the required JSON file by running webpack --profile
--json > stats.json
I generated the stats file (available here), uploaded it to webpack's analyse tool, and under the Hints tab I was told to use the PrefetchPlugin:
from: http://webpack.github.io/analyse/#hints
Long module build chains
Use prefetching to increase build performance.
Prefetch a module from the middle of the chain.
I dug through the web inside out and found that the only documentation available on the PrefetchPlugin is this:
from: https://webpack.js.org/plugins/prefetch-plugin/
PrefetchPlugin
new webpack.PrefetchPlugin([context], request)
A request for a normal module, which is resolved and built even before
a require to it occurs. This can boost performance. Try to profile the
build first to determine clever prefetching points.
My questions:
How do I properly use the PrefetchPlugin?
What is the right workflow for using it with the analyse tool?
How do I know if the PrefetchPlugin works? How can I measure it?
What does it mean to prefetch a module from the middle of the chain?
I'd really appreciate some examples.
Please help me make this question a valuable resource for the next developer who wants to use the PrefetchPlugin and the analyse tool.
Thank you.

Yeah, the PrefetchPlugin documentation is pretty much non-existent. After figuring it out for myself, it's pretty simple to use, and there's not much flexibility to it. Basically, it takes two arguments: the context (optional) and the module path (relative to the context). The context in your case would be /absolute/path/to/your/project/node_modules/react-transform-har/, assuming that the tilde in your screenshot refers to node_modules, as per webpack's node_modules resolution.
Ideally, the module you prefetch should be no more than three module dependencies deep. So in your case, isFunction.js is the module with the long build chain, and ideally it should be prefetched at getNative.js.
However, I suspect there's something funky in your config, because your build-chain warnings refer to module dependencies in node_modules, which should be optimized by webpack automatically. I'm not sure how you got that; in our case, we don't see any warnings about long build chains in node_modules. Most of our long build chains are due to deeply nested React components which require SCSS.
Regardless, you'll want to add a new plugin for each of the warnings, like so:
plugins: [
new webpack.PrefetchPlugin('/web/', 'app/modules/HeaderNav.jsx'),
new webpack.PrefetchPlugin('/web/', 'app/pages/FrontPage.jsx')
];
The second argument must be a string with the location of the module relative to the context. Hope this makes sense.

The middle of your chain there is probably react-transform-hmr/index.js, as it starts about halfway through. You could try new webpack.PrefetchPlugin('react-transform-hmr/index') and rerun your profile to see if it helps speed up your total build time.
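To measure whether the prefetch works, the simplest workflow is: add the plugin, regenerate the stats, and compare. Here is a minimal sketch, assuming the path flagged by the analyse tool (react-transform-hmr/index, as suggested above) and an otherwise standard config:
// webpack.config.js -- minimal sketch; only the plugins section is shown.
// The prefetched path comes from the analyse tool's hint and may need
// adjusting to your actual module layout.
const webpack = require('webpack');

module.exports = {
  // ...your existing entry/output/loaders...
  plugins: [
    // Resolve and build this module before any require() to it occurs.
    new webpack.PrefetchPlugin('react-transform-hmr/index')
  ]
};
Then rerun webpack --profile --json > stats.json, upload the new stats.json to the analyse tool, and compare the total build time and the Hints tab with the previous run: if the prefetch helped, the long-module-build-chain warning for that chain should shrink or disappear.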


How to fix the problem of downloading fasttext-model300?

I'm using Windows 10 and Python 3.3. I tried to download fasttext_model300 to calculate soft cosine similarity between documents, but when I run my Python file, it stops after reaching this statement:
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
There are no errors and no "not responding"; it just stops without any reaction.
Does anybody know why this happens?
Thanks
I'd recommend against using the gensim api.load() functionality. It dynamically runs new, unversioned source code from remote servers – which is opaque in its operations & suboptimal for maintaining a secure local configuration, or debugging any issues which occur.
Instead, find the actual exact data files you trust and download them as plain data. Then use specific library operations, like the KeyedVectors.load_word2vec_format() method, to instantiate exactly the model you need, using precise local-file paths you understand.
Following those steps may make it clearer what, if anything, is going wrong. If it doesn't, try also enabling logging at the INFO level to gather more information about what progress is made before failure (and add any new details as a comment or to your question).
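For example, a minimal sketch of that approach, assuming you have already downloaded the .vec file for this model to a local path (the file path below is illustrative):
import logging
from gensim.models import KeyedVectors

# Enable INFO logging so gensim reports progress while loading.
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

# Load the vectors from a local file you downloaded & verified yourself.
model = KeyedVectors.load_word2vec_format('/path/to/fasttext-wiki-news-subwords-300.vec', binary=False)

print(model.most_similar('computer', topn=3))
If the load genuinely stalls, the INFO logs will at least show how far it got before stopping.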
Try using this:
python3 -m gensim.downloader --download fasttext-wiki-news-subwords-300
Source: https://awesomeopensource.com/project/RaRe-Technologies/gensim-data

I get errors serving lit-element with the nollup development server

When serving lit-element components with nollup, I keep getting the following error in the browser console, which I am not able to track down:
toast-messages.js:56 Uncaught TypeError: modules[number] is not a function
at create_bindings (toast-messages.js:56)
at toast-messages.js:57
at Object.48 (toast-messages.js:365)
at create_bindings (toast-messages.js:56)
at _require (toast-messages.js:141)
at toast-messages.js:249
at toast-messages.js:251
Can anyone point me in the right direction? (I can share my rollup.config.js if required)
This error means that a module has been requested, but there's no module with the specified id (number) included in the bundle. This can happen for a variety of reasons:
* There's a call using require passing an invalid id.
* Passing a string or something other than a number into require.
* Using a library that was compiled to CommonJS but was not transformed to ESM.
It's likely that you're missing the CommonJS plugin, or, if you are using it, that it isn't able to catch the require call and convert it. The latter can happen because of obfuscation in the library code. CommonJS plugins work by static analysis, and it would be very difficult for the plugin to transform the following:
var r = require;
r('my-library-code');
Without executing the code, it is difficult for static analysis to track this. A best-effort attempt can be made, but there will always be situations where it fails.
So here are the steps you should take:
* Confirm that the CommonJS plugin is being used (see the config sketch after this list).
* If it is, check the file in node_modules for unusual patterns.
* If there are any, file an issue with the CommonJS plugin maintainer to see if it's possible to solve; if not, you might need to contact the maintainer of the toast-message library.
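For reference, a minimal rollup.config.js with the CommonJS plugin wired in might look like the sketch below (the entry path is illustrative; the plugin packages are the standard @rollup ones):
// rollup.config.js -- minimal sketch for an ESM bundle served by nollup.
import { nodeResolve } from '@rollup/plugin-node-resolve';
import commonjs from '@rollup/plugin-commonjs';

export default {
  input: 'src/toast-messages.js',           // illustrative entry point
  output: { dir: 'dist', format: 'esm' },
  plugins: [
    nodeResolve(),  // locate bare imports in node_modules
    commonjs()      // convert CommonJS dependencies to ESM so module ids resolve
  ]
};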
I do realise I'm posting this very late, but better to answer late than never! To avoid this vague error in the future, in 0.10.2 of Nollup I've added a user-friendly error that lists some things to check for.
Hope this helps!

Creating Cytoscape.js extensions

Max, I want to update my extension to the new format, but I am running into issues with the placement of custom code. It seems that the extension framework has been updated a lot since I added an extension four years ago. Is there a way to get better documentation on getting started with adding an extension? I am happy to help write up the documentation if you can help answer some questions that I think would help get people started. Let me know.
The only thing that really changed is that the scaffolder creates a webpack project for you. The extension registering procedure is the same: http://js.cytoscape.org/#extensions/api
For example, cytoscape( 'collection', 'fooBar', function(){ return 'baz'; } ) registers eles.fooBar().
I guess the main thing is that there are a lot more files than what the previous scaffolder generated, so it might be harder to find things. The layout output has lots of files because it creates a skeleton implementation for each of the continuous and discrete cases.
The scaffolder isn't strictly necessary. You could use another build system (or none at all) as long as you call cytoscape(). For example, if you only care about publishing to npm for people who use webpack/browserify/rollup, then you could just use a CJS require('cytoscape') to pull in the peer dependency. Exporting a register function is nice if you want to allow the client to decide the order of extension registrations with cytoscape.use(extension) (or extension(cytoscape)). A sketch of that pattern follows.
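Here is a minimal sketch of that pattern, reusing the eles.fooBar() example above (file name and layout are illustrative):
// index.js -- extension entry point exposing a register function.
function register(cytoscape) {
  if (!cytoscape) { return; } // can't register without cytoscape
  // Registers eles.fooBar() on collections, as in the example above.
  cytoscape('collection', 'fooBar', function () {
    return 'baz';
  });
}

// Auto-register for script-tag usage where cytoscape is a global.
if (typeof window !== 'undefined' && window.cytoscape) {
  register(window.cytoscape);
}

module.exports = register; // clients can call cytoscape.use(register)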
You're right that there should be some more docs on the output of the scaffolder. Maybe a summary of the files would suffice. We could add a tutorial in the blog later if need be. Both the docs and the blog just use markdown, so the content could go in either place.

What is the purpose of PipelineDependsOnBuild property in MSBuild?

I am in charge of maintaining some build scripts intended to be used with MSBuild. They set several properties, and among them I found:
/p:PipelineDependsOnBuild=False
Looking for an explanation, I searched the web and found that it is part of a solution to issues with the build process of web projects, but it is used in combination with other properties.
So far I have not found an MSDN page that explains its purpose.
Fact: I tried searching for /p:PipelineDependsOnBuild=True and no results were found.
For future reference, as I too found the use of this particular property curious yet couldn't find any reference myself: in the aforementioned Microsoft.Web.Publishing.targets file, these comments are interesting:
"These two Properties are not compatible: $(UseWPP_CopyWebApplication) and $(PipelineDependsOnBuild) are both True.
Please correct the problem by either setting $(Disable_CopyWebApplication) to true or setting $(PipelineDependsOnBuild) to false to break the circular build dependency.
Detail: $(UseWPP_CopyWebApplication) makes the publish pipeline (WPP) part of the build, and $(PipelineDependsOnBuild) makes the WPP depend on the build, thus causing a circular build dependency."
So setting PipelineDependsOnBuild=False might be necessary in order to use the UseWPP_CopyWebApplication option.
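For example, an invocation combining the two properties from that comment might look like this (the project name is illustrative; this is a sketch inferred from the quoted comment, not a documented recipe):
msbuild MyWebApp.csproj /p:UseWPP_CopyWebApplication=true /p:PipelineDependsOnBuild=false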

How to locally test cross-domain builds?

Using the dojo toolkit, what is the proper way of locally testing code that will be executed as cross-domain, without making the actual build?
As it appears, there are three possible options (each with its own drawbacks):
Using local (non xd) XMLHttpRequest dojo.require
This option does not really test the xd behavior, since it dojo.require[s] the js synchronously via XHR.
djConfig.debugAtAllCosts = true;
Although this option does load the required code asynchronously (via the 'script' tag), it also pulls the code in via XHR, parses the dojo.require[s] inside it, and pulls them in. This (using the loader_debug), again, is not what the loader_xd does. More info on this topic in a different question.
Creating a cross-domain build
This approach requires a build, which is not possible in the environment which I'm running the code in (We're using our own on-the-fly build process, which includes only the js that is necessary for a particular page. This process is not suitable for development).
Thus, my question: is there a way to use the loader_xd, which does not require an xd build (which adds the xd prefix / suffix to every file)?
The second way (using debugAtAllCosts) also makes me question the motivation for pre-parsing the dojo.require[s]. If the loader_xd will not (or rather cannot) pre-parse, why does the method that was created for testing/debugging do so?
peller has described the situation. If you wanted to just generate .xd.js file for your modules, you could look at util/buildscripts/jslib/buildUtilXd.js and its buildUtilXd.xdgen() function.
It would take a bit of work to make your own script, but you could look at util/buildscripts/build.js for pointers.
I am hoping that in the future (maybe in the Dojo 2.x timeframe) we can switch to a loader that just uses script tags, with a module format that puts a function wrapper around the module, written by the developer. This would allow the same module format to work in both the local and xd cases.
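That function-wrapped format might look roughly like the following sketch (hypothetical here; it is essentially the shape that AMD's define() later took):
// my/module.js -- hypothetical function-wrapped module format.
// The wrapper lets the loader fetch the file with a plain script tag
// (locally or cross-domain) and run it once its dependencies are ready.
define(['dojo/string'], function (string) {
  return {
    greet: function (name) {
      return string.substitute('Hello, ${who}!', { who: name });
    }
  };
});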
I don't think there's any way to do XD loading without building and deploying it. Your analysis of the various options seems about right.
debugAtAllCosts is there specifically to solve a debugging problem: most browsers, until recently, could not do anything intelligent with code brought in through eval. Even today, Firefox reports exceptions in the console as appearing at the eval site (bootstrap.js), with a line number offset from the eval rather than from the actual eval buffer, and normally that eval buffer is anonymous. Firebug was the first debugger to jump through some hoops to enhance the debugging experience; it permits special metadata, which Dojo's loader injects between the XHR and the eval, to determine a filepath to the source. WebKit/Safari have recently implemented this also. I believe debugAtAllCosts pre-dates the XD loader.