Modular cross project design / Create re-usable extensible base - npm

Context:
We're creating a new solution consisting of multiple app portals (for lack of a better term); each portal will need to build on a base project that already includes some of our employer's proprietary code, as well as any new features pertaining to that portal. Our current app leaves much to be desired, and as we're getting a fresh start, we'd like to go about it the right way. (Thus I'd like to rubber-duck my thoughts somewhat.)
I've thought of a few possible ways to solve this, each with its pros and cons.
1. Git fork of a base project:
This seems like the most straightforward way. Have a PortalCore
project, then have each project fork it in a downstream-only fashion.
Con: If the base changes, we'll need to manually update all of the dependent projects.
Pro: Easier to implement initially, and I believe it will reduce some of the other more "laborious" tasks (for example, a single build file that travels with each new portal carrying our build requirements).
The flow would be:
Fork PortalCore > Core is kept up to date by merging from Git master
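For reference, the update step in that flow would look something like this in each dependent portal (the remote name and URL are placeholders):
git remote add upstream git@example.com:org/PortalCore.git   # one-time setup
git fetch upstream                                           # grab the latest core changes
git merge upstream/master                                    # fold them into this portal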
2. Base project as an npm package:
This seems like an ideal route, as with each deployment the latest version of our base package/project will be installed into each portal.
Con: From my research it seems we're not able to have an npm package install files outside of the node_modules folder (this pertains to my question below). We'll need to share the build file via some other means if we want it to sit in the project root (a postinstall script is one workaround; see the sketch after this flow).
Pro: Updates are automatically rolled out with the build process.
The flow would be:
New project > Add PortalCore npm package > Make a custom build task, or grab one from some central repo > Kept up to date via npm install > Gulp build
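As a rough sketch of that postinstall workaround, a consuming portal's package.json could copy the shared build file out of node_modules after every install (portal-core and its build-config folder are made-up names, and cp assumes a Unix-like shell):
{
  "dependencies": {
    "portal-core": "^1.0.0"
  },
  "scripts": {
    "postinstall": "cp -R node_modules/portal-core/build-config/. ."
  }
}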
3. Combination of the above:
Have a git project containing only our base npm modules and build config. The build can then handle things like moving files to the right location (e.g. node_modules -> root; see the gulp sketch below).
The flow would be:
Fork PortalCore > Core is kept up to date via npm install > Gulp Build
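A minimal sketch of such a gulp task, assuming a hypothetical portal-core package that keeps its shared files under a build folder:
// gulpfile.js: copy shared files out of node_modules into the project root
const gulp = require('gulp');

gulp.task('sync-core', () =>
  gulp.src('node_modules/portal-core/build/**/*')
      .pipe(gulp.dest('.'))
);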
Questions:
Is there a way to have an npm package (or another package manager) install files to a specific location? (I have checked the npm forum, and this seems like a dead end, but I thought I'd try my luck here.)
Are we Frankensteining it? We don't want to create a new monster. Does this logic make sense in terms of creating something that should be somewhat modular by design but allows for easier maintenance? How do the big boys do this... if they do this?

The answer to this ended up being much simpler than I expected:
Put all shared services in separate common npm packages (common components, shared/common services, etc.),
and create a few Yeoman generators that assist with initial project initialisation. (The only drawback is that someone needs to maintain them should some new core dependency come along... but such is dev life.)
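A rough sketch of what a portal's package.json then looks like (the scoped package names are invented for illustration):
{
  "name": "portal-billing",
  "dependencies": {
    "@org/common-components": "^1.0.0",
    "@org/common-services": "^1.0.0"
  }
}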

Related

CI build failed in case project reference is from another .net solution

I have two independent .NET projects. One is a project that basically processes invoices, and the other is one I am calling "common", as I keep all sharable/reusable code under it.
Any project can consume this common project by adding it via the Add Existing Project option, so that its source code does not move into the consumer project (invoice management, in my case).
Now if I add the common project as a reference and run my CI pipeline, the build fails because it cannot find the path of the common project, which is obvious, as that path may differ between my local machine and the build server.
The solutions I am aware of are:
1. Make common a NuGet package and consume it in invoice management.
2. Build the common project DLL on some centralized file server and reference it from invoice management via that path instead of an absolute local path.
Neither solution is simple to implement, so I am looking for a better quick solution for a situation where the project setup is like this and the CI build has to run.
The best option would actually be to reference it via a NuGet package. However, there is a third option, which I do not recommend: you can use a multiple-repository pipeline and check out both repositories. In this case you have to mimic the folder structure you will get on Azure DevOps; otherwise the build will fail, as it will not find the references.
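For completeness, a sketch of that multi-repository checkout in azure-pipelines.yml (the project, repository, and solution names are placeholders):
resources:
  repositories:
    - repository: common
      type: git
      name: MyTeamProject/common-project

steps:
  - checkout: self      # lands in $(Build.SourcesDirectory)/<self-repo-name>
  - checkout: common    # lands in $(Build.SourcesDirectory)/common-project
  # the solution's relative project reference must match this folder layout
  - script: dotnet build InvoiceManagement/InvoiceManagement.sln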

When building an application using package.json, how can I select different dependency versions for development?

I have a JavaScript package (A) which uses one of our own packages (B) as a dependency. Neither is on the npm registry. A is an application, and B is installed directly from GitHub using specifiers like github:user/project#version.
When A is built with npm install; npm run build, we normally want it to use the version of B defined in package.json's dependencies. So we set the production version there, say github:user/B#semver:^1.0.0. All good.
What I would like is to be able to build a development version of A with everything the same but with the development branch of B: github:user/B#dev.
devDependencies does not seem to be designed for this, but rather for dependencies that are entirely absent in production. The NODE_ENV environment variable doesn't seem able to affect the behaviour of install.
Currently I edit A's package.json dependencies to use github:user/B#dev, and remember not to commit this to the master branch. And if I do commit it to the dev branch, then I can't simply have dev follow master in a fast-forwarding manner; instead I have to maintain two branches and merge dev across to master.
----+---+---+---> dev
     \   \   \
      v   v   v
------+---+---+--> master
This is inconvenient and potentially error-prone.
I would guess that my situation is not an unusual one, and that there might be a better solution for this out there.
How do other people solve this?
After a few comments I now understand your requirement. The answer is: it depends on how you plan to differentiate the dev and prod versions of your application.
Different Deployments
One option is to have two separate deployments of your application: one for development, and one for production. If the source code itself should be the same, then rather than having two separate repos, you could use a monorepo with two different package.json files inside, one for development purposes and the other for production.
Aliases
The other option would be to make use of npm's alias functionality; npm now supports package aliasing like the following (note the alias syntax is npm:<name>@<version-or-tag>):
"dependencies": {
  "prodPackage": "npm:mypackage@prod",
  "devPackage": "npm:mypackage@dev"
}
If you then need to control which one is used in your code, the NODE_ENV variable could be used:
const Package = process.env.NODE_ENV === 'prod' ? require('prodPackage') : require('devPackage')
Or create a more specific env variable that names your package, then set it per environment to whichever version you require (assuming something like dotenv loads the .env file), like so:
# in .env
MY_PACKAGE=devPackage # or prodPackage
// in your .js files
const Package = require(`${process.env.MY_PACKAGE}`)
Package.json & Source Control
The only other alternative that springs to mind (which is much less preferred) is to remove package.json from source control and use local copies of it depending on dev or prod.
There is no official approach to this, unfortunately, but there are ways to manage it, including the one you mentioned above; you can decide which one best suits you, if any.
Either way, with the alias approach or with different deployments, you won't have to worry about keeping separate branches. I hope some of this helps.

When to use shrinkwrap, npm-lockdown, or npm-seal

I'm coming from a background much more familiar with Composer. I'm getting gulp (etc.) going for the build processes and learning Node and how to use npm as I go.
It's very odd (again, coming from a Composer background) that a composer.lock-like manifest is not included by default. Having said that, I've been reading documentation on shrinkwrap, npm-lockdown, and npm-seal... and the more documentation I read, the more confused I become as to which I should choose (everyone thinks their way is the best way). One thing I notice is that npm-seal hasn't changed in 4 years and npm-lockdown in 8 months, which leads me to wonder whether that's because they're not needed with the newest version of npm...
What are the benefits / drawbacks of each?
In what cases would I use one over another in Project A, but use a different one in Project B?
How will each impact our development workflow?
PS: Brownie points if you include the most basic implementation example for each. ;)
npm shrinkwrap is the most standard way to lock your dependencies. And yes, npm install does not create it by default, which is a pity and something the npm creators should definitely change.
npm-lockdown tries to do the same thing as npm shrinkwrap; there are two minor points on which npm-lockdown is better: it handles optional dependencies better, and it validates checksums of the packages:
https://www.npmjs.com/package/lockdown#related-tools
Both of these features seem not so relevant to me; I'm quite happy with npm shrinkwrap. For example, npmjs guarantees that once you upload a certain package at a certain version, it stays immutable, so checking SHA checksums is not so critical (I've never encountered an error caused by this).
seal is meant to be used together with npm shrinkwrap; it adds the checksum-checking aspect. It looks abandoned and quite raw.
Good question - I'm going to skip everything but shrinkwrap, because it is the de facto way to do this, per npm's docs.
Long story short, the npm-shrinkwrap.json file is akin to the lock files you are used to in every other package manager, though npm allows different versions of the same package to play nicely together by isolation: it literally scopes and copies different entire versions into node_modules at different levels of the tree. If a parent and child project use the exact same version, npm will copy the version only to the parent, and the child will traverse up the tree to find the package.
Best practice is simply to update package.json for your direct dependencies, run npm install, verify that things work while developing, then run npm shrinkwrap just before you commit and push. NOTE: make sure to rm npm-shrinkwrap.json before running npm install during active development; if your direct dependencies have changed, you want package.json to be used, not the lock! Also include node_modules in your .gitignore or the equivalent in your source control system. Then, when you deploy and get ready to run the project, run npm install as normal. If npm finds an npm-shrinkwrap.json file, it will use that to recursively pull all locked modules, and it will ignore package.json in both your project and all dependent projects.
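That cycle, as a sketch of the commands involved:
rm -f npm-shrinkwrap.json      # during active development, let package.json drive resolution
npm install                    # resolve/update direct dependencies
npm test                       # verify things still work
npm shrinkwrap                 # freeze the dependency tree just before committing
git add package.json npm-shrinkwrap.json
git commit -m "update dependencies"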
You might find shrinkpack useful – it checks in the tarballs which npm install downloads and bundles them into your repository, before finally rewriting npm-shrinkwrap.json to point at that local bundle instead.
This way, your project is totally locked down, completely available offline, and much quicker to install.
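Typical shrinkpack usage looks roughly like this (node_shrinkwrap is shrinkpack's default bundle folder):
npm install -g shrinkpack   # one-time install of the CLI
npm shrinkwrap              # create or refresh npm-shrinkwrap.json
shrinkpack                  # copy the tarballs into ./node_shrinkwrap and point the lock at them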

Auto increment appx package version after each build

I am looking for a solution to automatically increment a package version (not to be confused with an assembly version) after each build on a CI server (specifically Atlassian Bamboo). Every appx package has a version defined in its manifest file (appxmanifest), so in order to increase the version, the manifest must be edited before commit. I am considering different approaches to implementing this. The first one makes changes to the manifest and pushes them back to the repo:
1. Start building a plan (in order to lock a build number).
2. Modify the manifest so that the revision is set to the current build number.
3. Push the changes to SCM (specifically Atlassian Stash); this step shouldn't trigger the next build.
4. Continue building the package (invoke MSBuild, unit tests, and other tasks).
Cons:
Leads to an incorrect workflow on Bamboo: checkout -> push -> build
Each build makes a new commit
Another approach is to set up a post-receive Stash hook which would modify the appxmanifest.
Con: it is hard to keep the build number in sync with Bamboo.
Is there any other (cleaner, more proper) way to achieve this?
Ex-Stash developer here (not that it matters).
I would highly recommend not checking in derived/version information or files. It's going to cause you no end of problems (some of which you have pointed out in your question).
My advice: generate what information you need at build time. I don't know anything about appx packaging, but can you use a placeholder/property (like this) which can be resolved during the Bamboo build? For our builds we use the git hash and timestamp as the version, and in the past I've also used the job/build number (the timestamp is better though).
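As a rough illustration of resolving the version on the build rather than committing it, a small Node script run as a build step could stamp the manifest's revision with the build number; the manifest path and the bamboo_buildNumber variable here are assumptions:
// stamp-version.js: rewrite the Version attribute of <Identity> in place
const fs = require('fs');

const manifest = 'Package.appxmanifest';            // assumed path
const revision = process.env.bamboo_buildNumber || '0';
const xml = fs.readFileSync(manifest, 'utf8').replace(
  /(<Identity[^>]*\bVersion=")(\d+)\.(\d+)\.(\d+)\.\d+(")/,
  (m, pre, major, minor, build, post) => `${pre}${major}.${minor}.${build}.${revision}${post}`
);
fs.writeFileSync(manifest, xml);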
As more food for thought: if that appx version is important for developers to see locally, and it becomes hard to match it up with the Git version, then you can also have Bamboo attach a Git tag/note to the commit. The nice thing about that is that anyone fetching from Git can easily see that extra metadata, but it doesn't result in extra commits for every build. If the appx version needs to be based on the previous version, this also makes it possible for the build scripts to inspect the previous commit and bump the version appropriately.
I hope that helps.

Maven - installing artifacts to a local repository in workspace

I'd like to have a way in which mvn install puts files in a repository folder under my source (checkout) root, while still using third-party dependencies from ~/.m2/repository.
So after 'mvn install', the layout is:
/work/project/
    repository/
        com/example/foo-1.0.jar
        com/example/bar-1.0.jar
    foo/
        src/main/java
    bar/
        src/main/java
~/.m2/repository/
    log4j/log4j/1.2/log4j-1.2.jar
(In particular, /work/project/repository does not contain log4j)
In essence, I'm looking for a way of creating a composite repository that references other repositories.
My intention is to be able to have multiple checkouts of the same source and work on each without the checkouts overwriting each other in the local repository on install. Multiple checkouts can arise from working on different branches in CVS/SVN, but in my case they are due to cloning the master branch in git (in git, each clone is like a branch). I don't like the alternatives, which are to use a special version/classifier per checkout or to reinstall (rebuild) everything each time I switch.
Maven can search multiple repositories (local, remote, "fake" remote) to resolve dependencies, but there is only ONE local repository where artifacts get installed during install. It would be a real nightmare to install artifacts into specific locations and to maintain that list without breaking anything; that would just not work. You don't want to do this.
But, to be honest, I don't get the point. Why do you want to do this? There might be alternative and much simpler solutions, like installing your artifacts in the local repository and then copying them under your project root. Why wouldn't this work? I'd really like to know the final intention, though.
UPDATE: Having read the update to the initial question, the only solution I can think of (given that you don't want to use different versions/tags) would be to use two local repositories and to switch between them (very error-prone though).
To do so, either use different user accounts (as the local repository is user-specific by default),
or update your ~/.m2/settings.xml each time you want to switch:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <localRepository>${user.home}/.m2/repository</localRepository>
  <!--localRepository>${user.home}/.m2/repository2</localRepository-->
  ...
</settings>
Or have another settings.xml and point on it using the --settings option:
mvn install --settings /path/to/alternate/settings.xml
Or specify the alternate location on the command line using the -Dmaven.repo.local option:
mvn install -Dmaven.repo.local=/path/to/repo
These solutions are all error-prone, as I said, and none of them is very satisfying. Even if you have very good reasons to work on several branches in parallel, your use case (not rebuilding everything) is not very common. Here, using distinct user accounts might be the least bad solution, IMO.
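For what it's worth, the per-checkout variant of that last option is just the following (the .repository path is arbitrary); note that each clone then downloads its own copies of third-party artifacts too, rather than getting the composite layout asked about:
cd /work/project
mvn clean install -Dmaven.repo.local="$PWD/.repository"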
This is INDEED possible from the command line, and in fact it is quite useful. For example, if you want to create an additional repo under your Eclipse project, you just do:
mvn install:install-file -DlocalRepositoryPath=repo \
-DcreateChecksum=true -Dpackaging=jar \
-Dfile=%2 -DgroupId=%3 -DartifactId=%4 -Dversion=%5
It's the "localRepositoryPath" parameter that will direct your install to any local repo you want.
I have this in a batch file that I run from my project root, and it installs the file into a "repo" directory within my project (hence the % parameters). So why would you want to do this? Well, let's you say you are professional services consultant, and you regularly go into customer locations where you are forced to use their security hardened laptops. You copy your self-contained project to their laptop from a USB stick, and presto, you can do your maven build no problem.
Generally, if you are using YOUR laptop, then it makes sense to have a single local repo that has everything in it. But to you who got cocky and said things like "why would you want to do that", I have some news...the world is a bigger place with more options than you might realize. If you are using laptops that are NOT yours, and you need to build your project on that laptop, get the resulting artifact, and then remove your project directory (and the local repo you just used), this is the way to go.
As to why you would want to have 2 local repos, the default .m2/repository is where the companies standard stuff goes, and the local "in project" repo is where YOUR stuff goes.
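One caveat worth adding: for resolution (as opposed to install-file) to find artifacts in that in-project folder, the POM must also declare it as a repository. A sketch, where the id and folder name are assumptions:
<repositories>
  <repository>
    <id>project-local</id>
    <url>file://${project.basedir}/repo</url>
  </repository>
</repositories>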
This is not possible with the command-line client, but you can create more complex repository layouts with a Maven repository server like Nexus.
The reason it's not possible is that Maven allows you to nest projects, and most of them will reference each other, so installing each artifact in a different repository would lead to lots of searches on your local hard disk (or to failed builds when you start a build in a sub-project).
FYI: symlinks work in Windows 7 and above, so this kind of thing is easy to achieve if all your code goes in the same place in the local repo, i.e. under /com/myco/.
Type mklink (with no arguments) for details.
I can see that you do not want to use special versions or classifiers, but that is one of the best solutions to this problem. I work on different versions of the same project, and each mvn install takes half an hour to build. The best option is to change the POM version by appending the change name, for example 1.0.0-SNAPSHOT-change1 for the change I'm working on, thereby having multiple versions of the same project but with different code bases.
It has made my life much easier in the long run. It lets multiple builds run at the same time without issues. Even during an SCM push, we can skip the POM file from staging, so there can always be two versions for you to work on.
In case you have a huge project with multiple sub-modules and want to change all the versions together, you can use the command below to do just that:
mvn versions:set -DnewVersion=1.0.0-SNAPSHOT-change1 -DprocessAllModules
And once done, you can revert using:
mvn versions:revert
I know this might be not what you are looking for, but it might help someone who wants to do this.