How to detect features and lazy-load only the needed polyfills with babel / core-js? - lazy-loading

Polyfill services like polyfill.io seem to be delivering only small feature detects to the browser and then lazy-load only the polyfills that are actually needed.
As I understand the babel documentation on polyfilling, babel always includes the full set of potentially needed polyfills: it processes a browserslist and then includes those polyfills from core-js that the weakest targeted browsers need. A bundler like webpack would then merge all of these polyfills into the application, but without runtime feature detects.
My application uses modern ES language features but also targets a wide range of browsers, including IE10 and IE11. That requires a lot of polyfills and will probably bloat the bundle, especially for modern browsers that may not need most of the polyfills.
So I was wondering: can I tell babel and/or webpack to include only the feature detects, split the polyfills off into separate chunks (individually or in small bundles), and then, at runtime, "lazy"-load only what is actually needed?

Services like polyfill.io check your User-Agent against a predefined set and, based on that, serve you a different bundle of polyfills. What you're trying to do is actually quite different.
One solution I could think of is to introduce code splitting into your build (it is on by default in a Webpack 4 production build) and create several files in your project, each of which imports a different set of polyfills. This will require you to import the polyfills manually, but it will give you several polyfill chunks, each with a different subset of missing features. Once you have those chunks, you can run some feature detection (probably Modernizr) at the startup of your application and dynamically load only the chunks the browser actually needs. Keep in mind that this process is rather cumbersome - you will need to take care of including each polyfill manually. Another disadvantage is that it requires several extra requests to the server, which will additionally slow down your app's start time.
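A minimal sketch of that idea, assuming webpack's dynamic import() and core-js 3; the file names ("./polyfills-es2015", "./app") and the chosen feature detects are hypothetical, not a prescribed setup:

// bootstrap.js - lazy-load polyfill chunks based on runtime feature detects.
// Note: webpack's import() itself returns a Promise, so a Promise polyfill
// has to be bundled statically into this bootstrap chunk, not lazy-loaded.
import "core-js/es/promise";

function loadPolyfills() {
  var needed = [];
  // feature detects: only fetch a chunk when the feature is missing
  if (!window.fetch) {
    needed.push(import(/* webpackChunkName: "polyfill-fetch" */ "whatwg-fetch"));
  }
  if (typeof Symbol === "undefined" || typeof Array.from !== "function") {
    // polyfills-es2015.js would simply contain a list of
    // `import "core-js/es/..."` statements for the missing features
    needed.push(import(/* webpackChunkName: "polyfills-es2015" */ "./polyfills-es2015"));
  }
  return Promise.all(needed);
}

// start the real application only after the needed polyfills are in place
loadPolyfills().then(function () {
  import("./app");
});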
As to the other part of your question - webpack/babel will not do the automatic polyfill chunk splitting and runtime feature checking for you; those need to be handled manually.

Related

In npm/yarn workspaces, should packages consume src or dist

I want to use a monorepo for our frontend app. We want to divide some react UI components into a package folder under "/packages/ui-components" and leave the app in an "/apps/app" folder and then have the app consume ui-components by importing it (simplified setup). We don't plan to release these packages to individual npm repos anytime soon, but just have the final app running.
I'm starting to worry a bit about how we can have the best workflow, and for some reason I cannot find this in my research:
Should the app consume the src-files from packages or instead compile each package to the dist folder and import only these?
Workflow-wise we would like working across packages to be seamless, so if someone edits a package we would like those changes to show up in the app immediately.
I see some pros and cons of using source-files compared to using a dist-output.
Pros of using src directly:
Better tree-shaking, as dependencies can be peer-dependencies and libraries that are used by multiple packages can be combined.
Smaller final bundle size due to webpack having better access to original data like full dependency-tree and common functions etc.
Faster development iterations with smaller projects as there's only one build and smart webpack could potentially only recompile a few changed files.
Pros of using dist:
More independent packages as they can contain their own build-pipeline.
Will be easier to import, as fewer peer-dependencies and less special webpack config are needed
Ready to be published as a public npm package
Possibly faster build time, as only changed packages and the main app need to recompile on changes (I assume webpack can cache, so maybe this doesn't matter much)
I'm sure I'm missing a lot of details; setting up a good development flow is quite complicated these days, and I would like to make it as simple as possible for my colleagues.
TL;DR:
Should packages in a mono-repo build to their dist for others to consume, or is it better to import directly from src?
It is a matter of tradeoffs.
If you consume the dist of your packages, this means that in order to "apply" changes inside a package you need to build it and then consume the new build in the app.
Some even suggest publishing the package to a registry (public or private); this gives you looser coupling between the app and the packages.
On the other hand, working on the src gives you the seamless experience you want, but it requires your app's setup to support it, since the app will be the one compiling the package's code.
Personally, I'm using the second method: my apps consume from src. It was not trivial to configure, since most tools ignore code from node_modules by default (like babel-loader, which skips transpiling code inside node_modules); a sketch of such a setup follows below.
Most of my code was based on next-transpile-modules source code.
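For reference, here is a minimal sketch of what that configuration can look like with webpack and babel-loader. The package path mirrors the "/packages/ui-components" layout from the question, and the "@acme/ui-components" alias name is purely hypothetical:

// webpack.config.js (in the app) - opt the workspace package back in to
// transpilation, since babel-loader normally excludes node_modules
const path = require("path");

module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        // include the app's own src plus the raw src of the workspace package
        include: [
          path.resolve(__dirname, "src"),
          path.resolve(__dirname, "../../packages/ui-components/src"),
        ],
        use: "babel-loader",
      },
    ],
  },
  resolve: {
    alias: {
      // point imports of the package name at its src instead of its dist
      "@acme/ui-components": path.resolve(__dirname, "../../packages/ui-components/src"),
    },
  },
};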

What is the significance of browserslist in package.json created by create-react-app

I was asked this in an interview. I could not answer.
"browserslist": [
">0.2%",
"not dead",
"not ie <= 11",
"not op_mini all"
]
I can see that it's an array.
"not ie <= 11" means it will not run on Internet Explorer 11 or lower.
"op_mini" must be related to Opera mini.
But I want to know why it is required.
What is Browserslist?
Browserslist is a tool that lets you specify which browsers your frontend app should support via "queries" in a config file. It's used by frameworks/libraries such as React, Angular and Vue, but it's not limited to them.
Why would we want it?
During development we want to use the latest JavaScript features (e.g. ES6), as they make our jobs easier, lead to cleaner code and possibly better performance.
As JavaScript evolves, browsers won't support new features at the same pace; for instance, not all browsers have built-in support for ES6 (aka ES2015). By using Browserslist, transpilers/bundlers know which browsers you want to support, so they can "group" browsers into different categories and generate separate bundles, for example:
Legacy Bundle: Contains polyfills, larger bundle size, compatible with old browsers without ES6 support.
Modern Bundle: Smaller bundle size, optimized for modern browsers.
So our entry point (e.g. index.html) is generated in a way that it loads the required bundles according to the browser a given user is running.
This process is done by Angular, Vue and React. In the future, bundler tools may generate even more bundles depending on how different browsers are, one bundle per group of browsers. Generating more bundles optimizes your app even more, at the price of making the build slower (and more complex); it's a tradeoff.
Let's see each individual query in your example:
>0.2%: All browsers that have more than 0.2% of global market share
not dead: Exclude browsers without official support in the last 24 months
not ie <= 11: Exclude IE 11 and older versions
not op_mini all: Exclude Opera Mini
You can find more about it (including further query options) in Browserslist's GitHub repository.
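To make this concrete, here is roughly how a tool like Babel consumes those queries; a minimal sketch assuming @babel/preset-env and core-js 3 are installed (create-react-app wires this up for you, so this is only to illustrate the mechanism):

// babel.config.js - @babel/preset-env reads the browserslist queries from
// package.json (or .browserslistrc) by default and only applies the syntax
// transforms the targeted browsers actually require
module.exports = {
  presets: [
    [
      "@babel/preset-env",
      {
        // "usage": add core-js polyfills only for features your code uses,
        // limited to what the browserslist targets are missing
        useBuiltIns: "usage",
        corejs: 3,
      },
    ],
  ],
};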
That's a configuration option used by create-react-app's build process to know which browsers it should target.
As the documentation says:
The browserslist configuration controls the outputted JavaScript so that the emitted code will be compatible with the browsers specified.
If you intend to use an ES feature, make sure all the specified browsers support it; otherwise you have to include polyfills manually (see the sketch below). React will not do that for you automatically.
See more in: https://facebook.github.io/create-react-app/docs/supported-browsers-features and https://create-react-app.dev/docs/supported-browsers-features/
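As an example of "including polyfills manually" in a create-react-app project, the linked docs describe the react-app-polyfill package for this purpose; a sketch assuming it is installed:

// src/index.js - the first lines of the entry point; these imports must come
// before anything else so the polyfills are installed before app code runs
import "react-app-polyfill/ie11";   // Promise, fetch, Object.assign, ... for IE 11
import "react-app-polyfill/stable"; // stable core-js polyfills for remaining gaps

import React from "react";
import ReactDOM from "react-dom";
import App from "./App";

ReactDOM.render(<App />, document.getElementById("root"));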

Handling common components with NPM on development

I have developed several reusable components for my Ionic 2 applications, among them security, fmcManagment and formsManagment, and new apps just reuse these components via NPM.
When I work only on new apps reusing these modules, the model works perfectly and I'm able to use the live-reload functionality, which speeds up development. But when I need to make changes to the common components I completely lose this, because the component code is not directly part of the new project: for every change to a common component, I need to stop the app, rebuild the module, reinstall it, and then restart the app.
What I would like to achieve is to link the component code at dev time, so that when I make a change to a common component's code the changes are picked up live instead of going through the whole manual process, while at deployment time keeping the standard npm libs structure. What would be the best way to achieve this scenario?

Why does Aurelia install so many dependencies?

I am curious to know why when I create a new Aurelia project, each project installs +600 node_modules. Understandably, the modules collectively don't take up a lot of space, but are all of these modules necessary? I was under the impression that Aurelia's aim was to help developers move away from depending on 3rd party libraries so it seems odd that each project comes with a massive dump of 3rd party libraries.
My guess is that you are starting your project from the CLI - which comes preconfigured with an HTTP server, ES6/ES2015, SASS, live-reloading and more.
I created a clean Aurelia project and looked at the package.json - there were 5 dependencies and 34 dev dependencies. Using all of the above-mentioned tools is somewhat standard in today's JS web development, and generating a project from the CLI reduces the time needed for upfront setup. All of these features come with their own dependencies, and that's why the node_modules/ folder grows rapidly.
The bottom line is - you could start a new Aurelia project with far fewer dependencies. On their home page you can find a starter project with just three. But that also means you won't have access to most of the tools used today.
Also, and correct me if I'm wrong, I never got the impression that Aurelia aimed to move devs away from third-party libs and modules, just to be modern, fast and unobtrusive.
All modern web frameworks have a host of tooling. The reasons in no particular order -
1. Transpiling ESNext or TypeScript - if you want to write in Future JavaScript but have it work in all browsers, you need this step. Both Babel and TypeScript tooling comes with extra stuff too. If you want to see coverage (everyone does) there's another tool.
2. Testing - Unit testing and end-to-end testing require testing frameworks, test runners, and if you want to write like above (ESNext or TypeScript) you also need transpiling.
3. Module Loading / Bundling - Require.js, JSPM/System.js, webpack, etc. are used to allow your code to actually run in the browser. Without a module loader you could not break your code out into separate files. Without a bundler you would be loading a lot of extra files in production.
4. Serving your application - If you want to run your app locally you need a way to serve it up and watch for changes.
5. Debugging - You want to debug? Now you need a way to map the file that gets served to the browser back to the original source.
6. Linting - Lint your code base for style consistencies.
Each of these packages usually have their own dependencies, and they get pulled down as well.
This convention of small packages that each have a single focus is arguably better than massive packages that do everything for you. It allows you to remove a package and replace it with one that does the same thing, but in the way you want.

How to use built/compressed Dojo to resolve Dojo modules ref'ed from tests?

Currently, with my Intern setup, I'm using an unbuilt Dojo when running my Intern tests; for example, a test module loads app/ProductModuleA, and ProductModuleA references and loads dojo/request. I need to have the dojo/request.js file in the appropriate directory structure in order for the module to be resolved without errors and therefore for the test to be able to run. Our product code does use a built dojo.js file, and our previous DOH tests were able to use it too without any issues - I don't understand how that worked, because I don't really know anything about building Dojo.
I've seen snippets in various Internet forums (like here) and in the Intern User Guide saying that Intern supports source maps, which I guess suggests it's possible to use a built dojo.js file in conjunction with Intern, but I haven't found anything detailed. Any insights, or pointers to documentation or examples that I haven't been able to find so far?
One of the benefits of AMD is that you don't have to do anything special to your code when switching between a built and unbuilt Dojo. The first time you load a dependency using an unbuilt Dojo, the loader requests it over the network and then caches the result. Subsequent loads use the cached dependency. The loading process works the same with a built Dojo; the main difference is that all the modules built into the built Dojo are pre-cached. The loader doesn't have to request them over the network the first time because they start out in the module cache.
For Intern to use a built Dojo, you just need to make sure you're using the built Dojo as your loader during tests. You can do this by setting the useLoader option in your Intern config.
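A hedged sketch of what that can look like in an older (Intern 2-style) intern.js config; the paths are illustrative, and later Intern versions renamed useLoader to loaders, so check the docs for your version:

// tests/intern.js - point Intern at the built dojo.js instead of the unbuilt loader
define({
  useLoader: {
    // the built layer already has its bundled modules pre-cached
    "host-browser": "node_modules/dojo/dojo.js",
    "host-node": "dojo/dojo"
  },
  loaderOptions: {
    packages: [
      { name: "app", location: "src/app" }
    ]
  },
  suites: ["tests/unit/all"]
});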
I tried what Jason suggested and it still didn't work - I was getting 404s for a Dojo_ROOT.js module, though nothing in the tests or product files explicitly loads that. I'm sure this is due to something unique in my product's build environment. That's okay, I will just use the Dojo source for now and return to this later.