Does react-native's bundler optimize with tree shaking?

I recently saw a suggestion about deep requiring in a module -
Note: If you don't want the ReactART-based components and their dependencies, do a deep require instead: import ProgressBar from 'react-native-progress/Bar';.
As far as I know, unless you add and configure Webpack 2 with tree shaking and enable UglifyJS yourself, the RN bundler does not tree-shake and remove unused modules.
Given that, would deep requiring as suggested really lead to unused dependencies not being included in the final bundle?

The React Native bundler is called Metro, and (as of this writing) there is an open issue for tree shaking, with delivery planned for "H1 2019".
Note that UglifyJS (or anything else that acts solely on a single file) is not capable of doing tree shaking, because tree shaking is (by definition) performed across modules; the equivalent of tree shaking within a single module is simply called "dead code elimination". So you need to do proper tree shaking at the bundler level.
To answer the final question in the OP: yes, if you do a deep require, you will exclude unused dependencies. When you do an import, you are creating a dependency on a specific JavaScript file (and, transitively, the files it depends on). You are only importing a single JavaScript file, even if you import using the shorthand of merely naming the module (e.g. import "react-native-progress"): in that case, the single file you are creating a dependency on is the file named in package.json under the main field (cf. the browser field).
Conventionally, that main file is simply re-exporting (that is, creating a dependency upon and exposing) other modules. That is exactly what index.js does in react-native-progress, which is why you end up importing all the package's modules when you do the generic module import. When you do the so-called "deep require", you're just bypassing the re-exporting that the index.js does, and instead setting up the dependency to the deeper module yourself.
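To make the difference concrete, here is a sketch of the pattern (the file and component names are illustrative, not the package's actual source):

// index.js of a typical package: re-exports every component,
// so depending on it means depending on all of them.
export { default as Bar } from './Bar';
export { default as Circle } from './Circle';
export { default as Pie } from './Pie';

// Consumer, generic import: pulls in index.js and, transitively,
// Bar, Circle and Pie, whether or not you use them.
import { Bar } from 'react-native-progress';

// Consumer, deep require: bypasses index.js and depends only on
// Bar.js (plus whatever Bar.js itself imports).
import Bar from 'react-native-progress/Bar';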

Related

Best practice for NPM package with es6 modules - bundle or not

Writing an NPM package containing es6 modules, is it best practice to keep the source files separate
package.json
esm/
  index.js
  Content1/
    Content1A.js
    Content1A.js.map
    Content1B.js
    Content1B.js.map
  Content2/
    Content2A.js
    Content2A.js.map
    Content2B.js
    Content2B.js.map
with index.js referencing contents in subfolders, or is it better practice to bundle it into one file
package.json
esm/
  contents.js
  contents.js.map
It seems the first method has an advantage with CommonJS modules, since it gives a consumer the possibility to import directly from the source files and thus skip unused imports from index.js (CommonJS modules are not tree-shakeable), but with es6 modules this argument disappears.
Different bundlers might be capable of different things. The rest of this answer refers to Webpack which, being one of the most common bundlers, should influence decisions in this area.
The most important factor governing the decision about whether to bundle your library or not should be related to tree-shaking. No other important aspects come to mind for me.
Parameters affecting tree-shaking in Webpack
sideEffects: false
Setting in package.json that indicates whether modules in the package have side effects that need to be executed when the module is imported but not consumed. Setting it to false indicates that no modules have side effects. It may also be set to a list of modules that do have side effects, or to other, more complex values. The default seems to be true, indicating that all modules have side effects.
This parameter plays a large role when using an entry-point index in your package, from which all package exports are re-exported. Sparse imports from this index could easily cause your entire package to be bundled if this setting is not correct.
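For example, a minimal sketch of the flag in package.json (the package name is a placeholder):

{
  "name": "my-package",
  "version": "1.0.0",
  "sideEffects": false
}

If some files do have import-time side effects (say, a polyfill or CSS imports), list them instead: "sideEffects": ["./esm/polyfill.js", "*.css"].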
optimization.usedExports: true
Setting in webpack.config.js indicating to Webpack that all exports that are not used may be excluded. This activates a heuristic used by Terser to remove unused code inside a module. It is set to true by default.
In toy scenarios, this setting might seem efficient enough and the sideEffects flag might not seem to play a big role. This is not the case in real scenarios with more complex code where it is harder for this heuristic to do a good job.
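For reference, a sketch of where the setting lives on the consumer side:

// webpack.config.js
module.exports = {
  mode: 'production',
  optimization: {
    // mark exports that are never imported so Terser can drop them
    usedExports: true,
  },
};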
/*#__PURE__*/
Annotation placed before an expression (typically a function call) to indicate that it is side-effect free and may be excluded if its result is not used. These annotations also play a part in the heuristic used by Terser to remove unused code inside a module.
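A small, self-contained sketch (createLogger is an invented name):

function createLogger(name) {
  return { log: (msg) => console.log(name + ": " + msg) };
}

// The annotation promises the call has no side effects, so if
// `logger` is never used anywhere, the whole line can be dropped.
export const logger = /*#__PURE__*/ createLogger("app");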
Conclusion
To allow your consumers to benefit the most from tree-shaking, it seems advisable not to bundle your es6 npm package, and instead let the separate input modules remain separate, so that the sideEffects setting in package.json allows the consumer's bundler to prune as many unused modules as possible. Rely on optimization.usedExports inside modules, evaluate bundle content, and add /*#__PURE__*/ annotations where you think they could make a big difference.

If everything is bundled into the same file, the sideEffects flag in package.json can't do the main part of the job, as everything is in the same module; consequently you have to rely on many additional /*#__PURE__*/ annotations and on the heuristics in the consumer's bundler to make tree-shaking as efficient as possible, which demands more from you (in terms of annotations) and does not come with any particular advantage. Remember to build your package in production mode, as optimizations are not always active otherwise.
Source
https://webpack.js.org/guides/tree-shaking/
Own experiments

CMake: How to tell where transitive dependency is coming from?

I'm in the process of rewriting a legacy CMake setup to use modern features like automatic dependency propagation. (i.e. using things like target_include_directories(<target> PUBLIC <dir>) instead of include_directories(<dir>).) Currently, we manually handle all project dependency information by setting a bunch of global directory properties.
In my testing, I've found a few examples where a target in the new build links to a library that the old build did not. I'm not linking to it explicitly, so I know this is coming from the target's dependencies, but to find which one(s), I have to recursively look through all of the project's CMakeLists.txt files, following the dependency hierarchy upward until I find the one that pulls in the library in question. We have dozens of libraries, so this is not a trivial process.
Does CMake provide any way to see, for each target, which of its dependencies were added explicitly, and which ones were propagated through transitive dependencies?
It looks like the --graphviz output does show this distinction, so clearly CMake knows the context internally. However, I'd like to write a tree-like script to show dependency information on the command line, and parsing Graphviz files sounds like both a nightmare and a hack.
As far as I can tell, cmake-file-api does not include this information. I thought the codemodel/target/dependencies field might work, but it lists both local and transitive dependencies mixed together. And the backtrace field of each dependency only ties back to the add_executable/add_library call for the current target.
You can parse the dot file generated by CMake's --graphviz option and extract the details you want. Below is a sample Python script to do that.
import pydot
import sys

# Load the dot file produced by `cmake --graphviz=...`; pydot
# returns a list of graphs.
graphs = pydot.graph_from_dot_file(sys.argv[1])

result = {}
for g in graphs:
    # Register every labeled node (each node is a CMake target).
    for node in g.get_node_list():
        if node.get("label") is not None:
            result[node.get("label")] = []
    # Each edge is a dependency: source target -> destination target.
    for edge in g.get_edges():
        source = g.get_node(edge.get_source())[0].get("label")
        destination = g.get_node(edge.get_destination())[0].get("label")
        result[source].append(destination)

for target, deps in result.items():
    print(target + ":" + ",".join(deps))
You can also run this script from CMake as a custom target, so you can call it from your build system. You can find a sample CMake project here.
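For example, assuming the script above is saved as extract_deps.py (the file and directory names are placeholders):

cmake -B build -S . --graphviz=build/deps.dot
python extract_deps.py build/deps.dot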

IntelliJ multi-project

Moving to IntelliJ, I'm trying to understand properly the logic behind its project structure. I come from Eclipse. After reading for a while I understood the relation between workspace and project, then between project and modules. However, something that is puzzling me is the logic of the default project configuration in IntelliJ. Indeed, when you create a project there is an initial module which, to a certain extent, is equivalent to the project itself. To be more precise, the initial module folder is the project folder. This is kind of confusing to me. Then when you add more modules, they are sub-modules of that module.
My first question is: what is the rationale of making this first module equivalent to the project folder?
Following this, I would further ask: what is the point of having modules as sub-modules of others?
In Eclipse I used to have simply different projects (i.e. modules), independent from each other, adding dependencies as necessary. So how does the IDEA solution make it better, or if it doesn't, what is the rationale here?
I saw that one can start with an empty project and then add modules to it. However, in that case the added modules are subfolders of the project, and therefore there is no initial module equivalent to the project folder. So why this difference, and what is the rationale behind it?
What would be the better approach, the first or the second?
Would it be OK to have this first initial module with no src or test folder, but just with the proper facets, so as to spread them to the sub-modules?
I would appreciate it if someone could explain the rationale behind all of this.
I will move to SBT soon (i.e. the Maven structure, which I suppose inspired all modern IDE project structures); if one wants to explain within that context, fine; nevertheless I want to understand the rationale in IntelliJ first.
Many thanks,
-M-
PS: What I'm looking for is some advice on multi-module project structure in IntelliJ, as I'm moving my Eclipse workspaces to it.
I think that it's not uncommon for projects to be relatively small, so they don't need fancy modules with dependency management etc. In that case, I find the default project created by IntelliJ to fit perfectly my needs: no need to add submodules, everything is directly in the parent project, it reduces the structure to its bare minimum.
On the other hand, big projects with submodules will likely resemble the structure of a Maven multimodule project (perhaps SBT too, but I don't know this tool at all). You have a parent root which acts as a container for submodules. The parent project may also store configuration (a default SDK, a language level etc. that will be inherited by the submodules). The actual code will be contained in the submodules.
Regarding your questions, it all depends on the kind of project you are developing. For a small codebase, you could keep a simple project with no submodule. For bigger codebases, you can either create modules manually, or import an existing Maven/SBT/whatever project, which will automatically create modules reflecting the imported structure.

How does modular code work in Go?

Not having come from a C/compiled languages background, I'm finding it hard to get to grips with using Go's packages mechanism to create modular code.
In Python, to import a module and get access to its functions and whatnot, it's a simple case of
import foo
where foo.py is the name of the module you want to import in the same directory. Otherwise you can add an empty __init__.py into a subfolder and access the modules via
from subfolder import foo
You can then access functions by simply referencing them through the module name, e.g. y = foo.bar(y). This makes it easy to separate logical pieces of code from one another.
In Go however, you specify the package name in the source file itself, e.g.
package foo
at the top of the 'foo' module, which you can then supposedly import through
import (
    "foo"
)
and then refer to it through that, i.e. y := foo.Bar(x). But what I can't wrap my head around is how this works in practice. The relevant docs on golang.org seem terse, and directed at people with more (any) experience using makefiles and compilers.
Can someone please clearly explain how you are meant to modularise your code in Go, the right project structure to do so, and how the compilation process works?
Wiki answer, please feel free to add/edit.
Modularization
Multiple files in the same package
This is just what it sounds like. A bunch of files in the same directory that all start with the same package <name> directive means that they are treated as one big set of code by Go. You can transparently call functions in a.go from b.go. This is mostly for the benefit of code organization.
A fictional example would be a "blog" package might be laid out with blog.go (the main file), entry.go, and server.go. It's up to you. While you could write a blog package in one big file, that tends to affect readability.
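A minimal sketch of that layout (the type and file names are invented for illustration):

// blog.go
package blog

// Post is one blog entry.
type Post struct {
    Title, Body string
}

// entry.go -- a separate file in the same directory, same package
package blog

// Summary can use Post directly: all files sharing the
// "package blog" directive compile as one unit.
func Summary(p Post) string {
    return p.Title
}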
Multiple packages
The standard library is done this way. Basically you create modules and optionally install them into $GOROOT. Any program you write can import "<name>" and then call <name>.someFunction()
In practice, any standalone or shared components should be compiled into packages. Back to the blog package above: if you wanted to add a news feed, you could refactor server.go into a package. Then blog.go and news.go would both import "server".
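A sketch of that refactoring (the paths and names are invented, following the old flat import style used in this answer):

// server/server.go
package server

import "net/http"

// Start runs an HTTP server on the given address.
func Start(addr string, handler http.Handler) error {
    return http.ListenAndServe(addr, handler)
}

// blog.go, now a consumer of the shared package
package blog

import "server"

// Run serves the blog on port 8080.
func Run() error {
    return server.Start(":8080", nil)
}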
Compilation
I currently use gomake with Makefiles. The Go installation comes with some great include files for make that simplify the creation of a package or a command. It's not hard and the best way to get up to speed with these is to just look at sample makefiles from open source projects and read "How to Write Go Code".
In addition to the package organisation: like pip in Python, use dep (https://github.com/golang/dep) for Go package management. If you use it on an existing Go package, it will automatically build the dependency tree with versions for all the packages being used. When shifting to a production server, dep ensure will use Gopkg.toml to install all the required packages.
Just use dep ensure -add. Other commands for dep are:
Commands:
  init     Set up a new Go project, or migrate an existing one
  status   Report the status of the project's dependencies
  ensure   Ensure a dependency is safely vendored in the project
  version  Show the dep version information
  check    Check if imports, Gopkg.toml, and Gopkg.lock are in sync

Examples:
  dep init                               set up a new project
  dep ensure                             install the project's dependencies
  dep ensure -update                     update the locked versions of all dependencies
  dep ensure -add github.com/pkg/errors  add a dependency to the project

In what package should a "Settings" class be placed?

I'm in the middle of building an application but found myself too easily creating new packages without keeping the project's structure in mind.
Now, I'm trying to redo the whole project structure on paper first. I am using a Settings class with public properties, accessed as settings by several other classes around the project.
Since this Settings class applies to the whole project, I am unsure whether it should be packaged at all and, if so, in what kind of package it should live. Or should it be in the root (the default) package with the main application class?
I've been thinking about putting it in my utils package, but then again I don't think it really is a utility. Any strategies on how to decide on a package structure, for example for a Settings class?
Use of the default package is discouraged anyway (in Java it is actually flagged with a warning, as far as I know), even for the class containing main.
Other than that, I prefer having a config package, even if it's the only class in there. I don't think it would fit in the utils package.
IMHO you should put it into a separate, low-level package, since many other classes depend on it, but it (presumably) doesn't depend on anything. So it should definitely not be put in one package with the main application class. It could be in the utils package, though, or in a separate package on the same level (e.g. config).
By "low level" I simply mean "low on the package dependency hierarchy", where a package A which depends on package B is higher than B. So it does not directly relate to the actual package hierarchy. The point is to avoid dependency cycles between your packages, so that you can have such an ordering between your packages.
By the way, you should not use the root (default) package in a real application.