I am curious to know why, when I create a new Aurelia project, each project installs 600+ packages into node_modules. Understandably, the modules collectively don't take up a lot of space, but are all of these modules necessary? I was under the impression that Aurelia's aim was to help developers move away from depending on 3rd party libraries, so it seems odd that each project comes with a massive dump of 3rd party libraries.
My guess is that you are starting your project from the CLI, which comes preset with an HTTP server, ES6/2015 transpilation, SASS, live-reloading, and more.
I created a clean Aurelia project and looked at the package.json - there were 5 dependencies and 34 dev dependencies. Using all of the above-mentioned tools is fairly standard in today's JS web development, and generating a project from the CLI reduces the time needed for upfront setup. All of these features come with their own dependencies, which is why the node_modules/ folder grows rapidly.
The bottom line is - you could start a new Aurelia project with far fewer dependencies. On their home page you can find a starter project with just three. But that also means you won't have access to most of the tools used today.
Also, and correct me if I'm wrong, I never got the impression that Aurelia aimed to move devs away from third-party libs and modules, just to be modern, fast, and unobtrusive.
All modern web frameworks come with a host of tooling. The reasons, in no particular order:
1. Transpiling ESNext or TypeScript - if you want to write in future JavaScript but have it work in all browsers, you need this step. Both Babel and TypeScript bring extra tooling of their own. If you want to see code coverage (everyone does), that's yet another tool.
2. Testing - unit tests and end-to-end tests require testing frameworks and test runners, and if you write your tests like the above (ESNext or TypeScript) they need transpiling too.
3. Module Loading / Bundling - Require.js, JSPM/System.js, Webpack, etc. are used to let your code actually run in the browser. Without a module loader you couldn't break your code out into separate files; without a bundler you would be loading a lot of extra files in production.
4. Serving your application - If you want to run your app locally you need a way to serve it up and watch for changes.
5. Debugging - you want to debug? Now you need a way to map the file that gets served to the browser back to the original source.
6. Linting - lint your code base for style consistency.
Each of these packages usually has its own dependencies, and those get pulled down as well. A sketch of what this adds up to in a typical package.json is below.
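To make that concrete, here is a hypothetical, heavily trimmed devDependencies block (versions omitted, names illustrative): Babel for transpiling (1), nyc for coverage (1), karma and jasmine-core for testing (2), webpack for bundling (3), webpack-dev-server for serving (4), and eslint for linting (6). Each of these pulls in dozens of transitive dependencies of its own:

```json
{
  "devDependencies": {
    "@babel/core": "*",
    "@babel/preset-env": "*",
    "nyc": "*",
    "karma": "*",
    "jasmine-core": "*",
    "webpack": "*",
    "webpack-dev-server": "*",
    "eslint": "*"
  }
}
```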
This convention of small, single-purpose packages is arguably better than massive packages that do everything for you, because you can remove a package and replace it with another that does the same thing, but in the way you want it done.
Related
I have a monorepo (Nx workspace) with many libraries that were generated with @nrwl/js. These libraries depend on each other. They are small utility libraries that I intend to publish to NPM as separate packages.
According to the Nx docs:
@nrwl/js is particularly useful if you want to
Create framework agnostic, just plain TypeScript libraries within an existing Nx workspace (say to use in your React, Node or Angular app)
Publish TypeScript packages to NPM
Yet, I now realize that the build command associated with @nrwl/js does not take a library's dependencies on other libraries into account in order to include them as peerDependencies in its publishable package.json. It also looks like it produces just a plain CommonJS build that would be compatible with Node.js but not with browsers.
I know that @nrwl/angular takes care of all that for you when you build a library with it, provided you marked it as publishable. My question is how to get the same behavior for a library that is not meant for Angular but for general-purpose JavaScript use in any framework or environment.
It's still early in development, so if a solution to my problem involved regenerating the libs using some other Nx generator, it wouldn't be too much of a hassle, and I'd consider it.
Edit
I have since changed the build executor to @nrwl/node, which has the buildable and publishable options and behaves the way I need.
I believe that @nrwl/js will eventually gain that ability, but as of 2022-01-05 it didn't have it. For now, anyone looking to publish libraries to NPM should probably use @nrwl/node rather than @nrwl/js to generate new library projects.
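For anyone starting fresh, generating such a library looks roughly like this; the library name and scope below are made up, and flag names may vary across Nx versions (check nx g @nrwl/node:library --help for yours):

```sh
# --publishable adds a build target plus a package.json suitable for npm,
# published under the given import path (both names are hypothetical).
nx generate @nrwl/node:library my-utils --publishable --importPath=@myorg/my-utils
```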
If you want to publish an NPM package as a "universal package" to be consumed by both Node.js and the browser, you would need to build it as UMD to cover most consumer environments.
Though modern browsers support ES modules (import and export), you might need extra setup steps to consume them.
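For illustration, a minimal Rollup config that emits a UMD bundle; the entry point, output file name, and global name below are assumptions, not anything prescribed:

```js
// rollup.config.js - minimal UMD build sketch
export default {
  input: 'src/index.js',
  output: {
    file: 'dist/my-utils.umd.js',
    format: 'umd',   // usable from CommonJS, AMD, or a plain <script> tag
    name: 'MyUtils'  // the global variable exposed in the browser
  }
};
```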
I want to use a monorepo for our frontend app. We want to split some React UI components into a package under "/packages/ui-components" and keep the app in an "/apps/app" folder, and then have the app consume ui-components by importing it (simplified setup). We don't plan to release these packages to individual npm repos any time soon; we just want the final app running.
I'm starting to worry a bit about how we can have the best workflow, and for some reason I cannot find this in my research:
Should the app consume the src files from the packages, or should each package compile to its dist folder and the app import only that?
Workflow-wise we would like working across packages to be seamless, so if someone edits a package, the changes show up immediately in the app.
I see some pros and cons of using source files compared to using a dist output.
Pros of using src directly:
Better tree-shaking, as dependencies can be peer-dependencies, and libraries used by multiple packages can be deduplicated.
Smaller final bundle size, since webpack has access to the original data, such as the full dependency tree and common functions.
Faster development iteration with smaller projects, as there's only one build, and a smart webpack setup could potentially recompile only the few changed files.
Pros of using dist:
More independent packages, as each can contain its own build pipeline.
Easier to import, as fewer peer-dependencies and less special webpack config are needed
Ready to be published as a public npm package
Possibly faster build times, as only changed packages and the main app need to recompile on changes (I assume webpack can cache, so maybe this doesn't matter much)
I'm sure I'm missing a lot of details; setting up a good development flow is quite complicated these days, and I would like to make it as simple as possible for my colleagues to use.
TL;DR
Should packages in a monorepo build to their dist for others to consume, or is it better to import directly from src?
It is a matter of tradeoffs.
If you consume the dist of your packages, then in order to "apply" a change inside a package you need to build it, then consume it in the app.
Some even suggest publishing the package to a registry (public or private), which gives you looser coupling between the app and the packages.
On the other hand, working against src has the "seamless" advantages, but it requires your app's setup to support it, since the app's build is what compiles the package's code.
Personally, I'm using the second method: my apps consume from src. It was not trivial to configure, since most tools ignore code in node_modules by default (like babel-loader, which skips transpilation of code inside node_modules).
Most of my configuration was based on the next-transpile-modules source code.
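To illustrate the idea (this is a sketch, not their actual implementation), here is a minimal webpack rule that opts a monorepo package's src back into transpilation; the paths and presets are assumptions for a typical React setup like the one described above:

```js
// webpack.config.js (excerpt) - sketch of transpiling package sources
const path = require('path');

module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        // Include the app's own src AND the package's src, which the usual
        // exclude of /node_modules/ (or a narrow include) would skip.
        include: [
          path.resolve(__dirname, 'apps/app/src'),
          path.resolve(__dirname, 'packages/ui-components/src'),
        ],
        use: {
          loader: 'babel-loader',
          options: { presets: ['@babel/preset-env', '@babel/preset-react'] },
        },
      },
    ],
  },
};
```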
I have made a few applications (using webpack, babel, React, d3, npm, etc.) that use very similar charting code. I am in the process of splitting that charting code out into an npm package which multiple apps can then import.
To test this out, I've embedded a demo app inside my chart library's project directory and installed the library via its file path. Presumably I'll be able to install it the same way in depending apps A, B, C and so on, and when I change my chart library, all the apps will reflect the changes.
The first thing I noticed is that I now have to cd into my chart library and run npm run build (which runs webpack) any time I change something, and then cd into the depending app I'm working on and run npm i. This could perhaps be improved with npm link, but there are issues there as well (such as versioning and deploying to my server). So my first question is: what does a decent rapid-development workflow look like now that my charting code lives in a separate npm project?
The other problem I've noticed is that I've lost two valuable features with respect to my chart library code: code completion in VS Code and debugging in Chrome DevTools. I'm not sure why VS Code code completion has stopped working. And for debugging, how would I be able to debug both my depending app and the library it depends on at the same time in Chrome?
I would use npm link. It's immensely helpful when working on a library and its integration side by side.
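For reference, the basic flow looks something like this (the package name my-chart-lib is hypothetical):

```sh
# In the library's folder: register a global symlink
cd my-chart-lib
npm link

# In each depending app: point node_modules at that symlink
cd ../app-a
npm link my-chart-lib

# Back in the library, rebuild on every change so apps pick it up
# (assumes the library's build script supports a watch flag)
npm run build -- --watch
```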
Check the Chrome settings to make sure it isn't instructed to skip libraries via Settings -> Framework Blackboxing; see e.g. http://blog.edenhauser.com/tell-chrome-debugger-to-ignore-libraries/.
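Debugging the library also only works if its build emits source maps; without them Chrome has nothing to map the bundled file back to. If the library is built with webpack, that is a one-line setting (a sketch):

```js
// webpack.config.js (excerpt) - emit full, separate source map files
module.exports = {
  devtool: 'source-map',
};
```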
I've been developing with the Aurelia CLI for about 3 months and like it so far. I think it's a solid framework that is clearly growing in support and usage. That's a good thing!
Before I develop much more of my large app, I'm wondering whether I'm using the best build system. I've only tried the Aurelia CLI and am not really familiar with Webpack or JSPM, so I don't know what I'm missing. Are there any clear advantages or disadvantages to either of the other two build systems, or is the CLI the cleanest and best-supported approach? Since I'm developing independently, I don't have any external constraints.
Thanks for your help.
UPDATE
This answer is almost two years old. Feel free to research updates and provide another more complete answer and I can replace this answer or point to that answer. Thanks!
Aurelia CLI
The Aurelia CLI is great for getting started. It's important to understand that under the covers the CLI uses require.js, and proxies its configuration through aurelia.json in your application. This means that, at the moment, you need to understand how to configure Aurelia to work with require.js. Once you start adapting the configuration to match your workflow or changing up build steps, it gets a bit cumbersome for now. We are working to improve this. There are many features planned for the Aurelia CLI, but given that at the time of writing it is in an alpha / beta state, it should generally be used for proof-of-concept or other smaller apps, not production-ready large-scale apps yet.
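To give a feel for what that proxied configuration looks like: adding a third-party library means editing the vendor bundle in aurelia.json, roughly like this (the jquery entry is just an example; the exact paths depend on the package's layout):

```json
{
  "build": {
    "bundles": [
      {
        "name": "vendor-bundle.js",
        "dependencies": [
          "aurelia-framework",
          {
            "name": "jquery",
            "path": "../node_modules/jquery/dist",
            "main": "jquery.min"
          }
        ]
      }
    ]
  }
}
```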
Webpack
Webpack is arguably the most popular kid on the block at the moment. Webpack is not a module loader, but a bundler. It's important to understand this because, while we strive to make Aurelia work great with all module loaders, Webpack by default is not in charge of loading modules, so a dynamically loaded application requires the developer to expand on this. Webpack is strong at creating optimized bundles and can be easy to use as long as you are comfortable configuring it. Webpack has considerably more GitHub stars due to its popularity with React, but it's hard to say the choice is better when using Aurelia simply because of the number of GitHub stars.
JSPM / System.js
Some of the skeletons use JSPM and System.js. The reason is that these are the closest to 'spec compliant' solutions. JSPM tries to help as much as possible when loading from the JSPM registry, and if a package is not yet available there you can load it from NPM or GitHub directly. From a module-loading perspective you use a config.js file that is (usually) maintained automatically when installing dependencies, which improves the developer workflow.
Side note (admittedly biased)
For most larger apps at the moment I typically prefer JSPM / System.js, simply because I have a great understanding of the tooling and prefer the control it provides. I work on a great number of Aurelia apps that are in production, and I typically reserve the CLI for smaller proof-of-concept apps. Webpack is a great alternative, but I prefer the flexibility and understanding I have with JSPM / System.js at the moment.
The CLI isn't feature-complete yet, but it is a much simpler setup. Webpack can do basically anything you want, but you'll be maintaining your webpack configuration just as much as you maintain your Aurelia code.
OK, maybe not just as much, but you'll have to learn Webpack to use Webpack. The Aurelia CLI is simple to get started with, but it has some definite limitations. For example, CSS files that reference external resources won't work with the Aurelia CLI, but they should work fine with Webpack.
First, I can understand if this post gets shut down due to its subjective nature.
I believe it's time to revisit the answers that treat the Aurelia CLI as a second-class tool. I respect both PW Kad and Ashley Grant immensely, but I am just not convinced that a statement like this is true anymore:
There are many features planned for the Aurelia CLI but given at the time of writing this that it is in an alpha / beta state it should generally be used on proof of concept or other smaller apps, not production-ready large scale apps yet.
Notably, I have a production application that I started way back in the day with the Aurelia CLI and then changed to JSPM, precisely for the reasons noted. But recently I rebuilt the same app from scratch using the CLI, and I realize it is much easier to use, particularly for managing modules and publishing! And this is an app with Google Maps, Google Analytics, Auth0, DevExpress, Bootstrap, etc.
I just think it is time to give the Aurelia CLI a little love. It's ready.
The Aurelia CLI is now the preferred option, per this announcement:
http://aurelia.io/blog/2017/08/18/aurelia-cli-webpack-update/
It now gives you more flexibility in your choice of tooling.
As far as I can see, every time I make a change, for example to the value of a configuration variable, I have to
Make a copy of the change in each project (webapp, Android, iPhone, etc.)
Build each project
Distribute each project (besides the webapp)
I have found PhoneGap Build, which seems to be a great solution for the mobile part. But it's still in beta, and it doesn't solve everything. I still have the webapp's code, which is not exactly the same.
Do you know of any techniques, tools or tricks that help improve this process?
Thanks in advance.
We are currently developing a web/Android app using PhoneGap and Sencha Touch (iOS is coming soon). So far our approach is as follows:
We have one project per platform plus several additional toolkit projects.
One platform is "primary" - web, in our case. This is what developers mainly use to develop and test the app. We're using jsTestDriver for testing.
During the build, the app is packaged for web in the first step. We produce several artifacts here (a .war file, tests in a .jar file).
"Secondary" platform projects do not include the source code. It gets unpackaged and copied to the right places when projects are built. This also includes tests from the primary platform.
Platform projects contain some additional code - normally only testing code; the app code itself is currently cross-platform (not sure if it will stay this way).
So we're doing it mainly through advanced build scripts. We're using Maven for web and Android; iOS is coming soon (into our work, I mean), so we'll be looking for a sensible build tool there too.
We build our projects with Hudson for continuous integration.
I have to admit that this whole environment (multi-project Maven builds, JSTD, multi-node Hudson) is a hell of a setup; it took quite an effort to figure it all out.