NPM changelog maintenance and automation - npm

Python packages have established best practices for documenting public API changes in a CHANGES.txt file (see an example). There are tools like zest.releaser which automate package publishing and release-notes maintenance.
Do npm packages have best practices for documenting changes, a.k.a. a changelog? (Or are people expected to make sense of the GitHub history, etc.?)
Does npm have automated tools for maintaining a changelog when publishing a package, so that release dates and version numbers are recorded in it?
I found the npm-release script, but its functionality is limited to tagging and pushing out new npm packages.
CHANGES.txt example from Python:
Changelog
=========

1.0.0-dev (Unreleased)
----------------------

- Added feature Z.
  [github_userid1]

- Removed Y.
  [github_userid2]

1.0.0-alpha.1 (2012-12-12)
--------------------------

- Fixed Bug X.
  [github_userid1]

From what I have seen so far, people tend to build custom mini-tools that read the Git (or other VCS) history and output a changelog based on internal conventions.
This is not specific to the Node.js world though.
There are actually a couple of Grunt plugins that might help you with that:
https://github.com/btford/grunt-conventional-changelog
https://github.com/ericmatthys/grunt-changelog
Grunt is one of the finest build tools out there. It's quite popular (until the next one comes along?), and it can help you integrate this phase into your release process: you could easily orchestrate the changelog task together with the grunt-release plugin. A rough sketch of the wiring follows.
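For illustration, here's a minimal Gruntfile wiring for grunt-conventional-changelog; treat it as a sketch, since the task options vary by plugin version (check the plugin's README):

module.exports = function (grunt) {
  grunt.initConfig({
    changelog: {
      options: {
        // plugin-specific options go here (see the plugin README)
      }
    }
  });

  grunt.loadNpmTasks('grunt-conventional-changelog');

  // Hypothetical alias: regenerate the changelog as part of a release task.
  grunt.registerTask('release', ['changelog']);
};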
I don't have in mind any standalone tool or plugin that does all that zest.releaser does out of the box (but that doesn't mean it does not exist).

Related

git submodule vs npm package?

I'm using git submodules to build and share components between projects. The project is not in production yet, so at this point the submodules are serving us well.
But I'm concerned about maintenance and deployment: would it be a good idea to transform it into an npm package?
An npm package will allow fragmentation across different package versions. On the other hand, git submodules have a bit of a learning curve, and the tooling is really not that good. With git submodules, you have all the source in one folder.
If it's at all possible, I'd recommend using a plain monorepo for all projects. You may need to create build-time variables (via babel plugins), and you may need some sort of "live config" served from the backend. I worked with git submodules for a year, and I've recently worked on a project that uses npm to share code.
I would recommend using only one git submodule for all shared code, instead of several submodules. I would strongly consider using lerna, and use your one git submodule to track lerna's packages directory (a minimal config is sketched below). And if the team decides they don't like git submodules, you can easily make this repo a sibling git repo instead of a submodule. However, above all this, I'd recommend using a plain monorepo.
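For reference, a minimal lerna.json for that layout (a sketch; the packages/ directory layout is an assumption):

{
  "version": "independent",
  "packages": ["packages/*"]
}

With this, packages/ itself can be the single git submodule holding all shared code.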
Here's a great talk on monorepos from Netflix: https://www.youtube.com/watch?v=VNqmHJtItCs (strong focus on discouraging npm-style packages)
Here's google's infamous monorepo talk: https://www.youtube.com/watch?v=W71BTkUbdqE
This is a great site to read to help you think about good development flows: https://trunkbaseddevelopment.com/ (it primarily advocates for the monorepo approach)
If you are developing software for different clients (different people/companies paying you for similar projects), and you have some agreement that the projects should be at least ~80% the same, you may really enjoy using build flags to help split functionality (a sketch follows below). But you should very proactively keep the code around the build flags clean, and refactor it into reusable components/packages. Give each client some sort of build-flags.json. Build flags should be named for features only, each of which can in theory be toggled individually. Some code may be totally custom for each project; in that case you may want to consider dynamic imports, but generally this is a pain point I have yet to fully cross, although I have plenty of unrefined ideas around it.
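A minimal sketch of the build-flags idea (file layout and feature names are hypothetical):

// build-flags.json for one client might contain:
// { "reporting": true, "billing": false }
const flags = require('./build-flags.json');

if (flags.reporting) {
  // Wire up the feature only for clients that have it enabled.
  require('./features/reporting').init();
}

A babel plugin could instead inline the flags at build time, so disabled branches are stripped from that client's bundle entirely.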
If a monorepo is just not happening, I would actually recommend using npm packages + separate repos over git submodules, assuming you can do good semantic versioning of the packages. (And yalc seems to be a good tool for linking packages together locally, as opposed to the standard npm/yarn link; a typical flow is sketched below.)
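For illustration, the usual yalc flow (the package name is hypothetical):

# in the package under development
yalc publish        # copy the current build into the local yalc store

# in the consuming project
yalc add my-lib     # install my-lib from the store instead of the registry
yalc update         # pick up a newly published version later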
My findings after trying lerna, npm workspaces and git submodules: I find it is not a case of one versus the other.
The reason why I say this is because one can have submodules that are part of the monorepo. Doing exactly this made my development experience better as I could clone an existing project and actively develop it within the bigger project (monorepo). I could then contribute back to the cloned project once satisfied with the changes. This is something that you cannot do with npm workspaces alone. Hence my argument that it is not a case of one vs the other. They solve different problems and can therefore complement each other.
Before using npm workspaces I would use npm link all the time. npm workspaces makes this use case of developing multiple packages together more convenient. Even when the team you work with does not use a monorepo, you can use one yourself to develop multiple packages and test them in conjunction (see the sketch below); once satisfied, you push the individual repos. This is something you cannot do with git alone.
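A minimal sketch of that combination (names and paths hypothetical): the root package.json declares the workspaces, and one workspace is itself a git submodule.

{
  "name": "monorepo-root",
  "private": true,
  "workspaces": ["packages/*"]
}

git submodule add https://github.com/you/existing-project packages/existing-project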
Maybe you can think of more novel ways of combining the features of npm and git.

Straightforward way to use your own NPM package without the NPM registry

I want to split up the code base of several of my projects into isolated, package-like projects. Those should be easily usable via npm, but they do not seem significant enough to be published to the global npm registry.
So my question is whether there is a middle way between handling them as locally provided packages, installed by their path, and publishing them in the global repository.
Concerns:
cluttering the npm registry with packages which don't seem significant enough to justify taking up the name
the need to document and create tests for each package seems like too much, and I would not sleep well publishing packages which are not well documented and tested
I would take up a name which might be more appropriately used by a more sophisticated package and its maintainers
I still want others to be able to easily try / use this package, to see if it fits their needs
Alternatives:
A) creating a private npm repository (with CouchDB?)
+ is pretty much identical to the npm repository and would be easy to use
+ the versioning is identical, just pure semver lookup
- every user needs to set up this repository if they want to use this package or need it as a child dependency in their (public npm) package (even though this is unlikely)
- Need to invest time into setting it up and maintaining it
B) Using my username npm namespace
+ would solve pretty much every problem
- namespaces seem to be meant for projects and their sub-packages, which wouldn't be the case for my packages, since their only connection is the creator
- it seems arrogant to prefix your packages with your name, like you are tagging them with a big sign THIS WAS DONE BY ME
C) Using GitHub with a special detached branch which contains the (tagged) releases
+ you could use it like the global npm repository, since the npm resolution strategy allows a repository URL with a semver range in place of the version
- special case which is bound to break
- GitHub is not meant to provide npm packages; almost no developer expects a git URL instead of a version range, and tools and firewalls might have problems with this
- the workflow is really not meant to be used this way, neither for git nor for npm
D) using a local package and install package by its path
+ easy to setup and use
- no version management
- build steps must be done manually beforehand
- cannot publish packages depending on those packages
- all dependencies have to be installed locally
E) making those packages more useful, implementing edge cases, writing documentation and testing the whole package
+ would resolve about all problems
- a LOT of extra work, primarily thinking about edge cases and giving the developer a good API
- sometimes you can't really get the name you want for your package (it collides with others), which results in awkward naming
- it is your responsibility: you have to maintain it and be responsible for it (test it well, cover the edge cases)
- cluttering of the npm repository
So those are all the alternatives which came to mind when I tried to find a solution. Please leave a comment / answer if you have another idea, or if you can remove / reduce the weight of those contra points.
Maybe you could include your own experience, so I get a better view of the whole problem.
Currently I would just try to make each package more helpful to the greater majority, but this does not work in all cases.
Thank you all for your time!
Installing from git is a pretty standard feature in package managers. npm doesn't have GitHub-specific support; it's generic support for any git repo. Unless you can find some discussion about deprecating it from npm, I'd not worry about it. It's used internally in many companies for private packages.
Of course, there are still some trade-offs: build artifacts, and maybe a slightly clumsier workflow. Things like npm outdated don't understand git semver. For build artifacts, I have seen many projects commit them to the master branch to support direct git installs. If you look around older open-source projects, for example, that's quite often the case.
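For example, a git dependency in package.json (repo name hypothetical); newer npm versions can even resolve a semver range against the repo's tags via the #semver: suffix:

"dependencies": {
  "my-lib": "git+https://github.com/you/my-lib.git#semver:^1.2.0"
}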
We went for a private repository with verdaccio running in a Docker container, which is very similar to option A. It took some setup, but for our developers all it took was a single npm command to add the private repo "in front of" npm for all packages of the namespace we created (a sketch follows). Granted, our packages are project-specific, but in a private repository that does not really matter either way, does it?
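Roughly, the setup looks like this (the scope name is hypothetical; 4873 is verdaccio's default port):

docker run -d -p 4873:4873 verdaccio/verdaccio

# each developer then points one scope at the private registry:
npm config set @myscope:registry http://localhost:4873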
We considered the local package option at first, but the drawbacks were just too big for us, even if it's very easy to setup.
I'm not sure this helps, but this is at least the setup we decided upon when we had the same issue a few months ago.

How to determine if a package (gulp-copy) goes well with another one (gulp)?

I see this package called gulp-copy and I can't see anywhere whether it has been updated for the latest version of gulp. Is that never an issue? I'm worried that I'll happen to pick the wrong package constellation, or perhaps an altogether obsolete configuration.
Questions are:
In this particular case, does the linked gulp-copy work well with gulp 4?
Is there a general way to determine which packages work well with gulp?
There is no generalized way to determine whether a certain package only works with gulp 3 or gulp 4 (besides reading the documentation for that package). Package creators cannot programmatically specify what version of gulp their package supports and there's no warning when using a package that's designed for a different version of gulp.
That being said, there are some heuristics you can use depending on what kind of packages you are dealing with:
General node packages: those are packages that were not specifically designed for gulp at all. You can use them with gulp, because you can use any node package with gulp, but they make sense outside of gulp as well.
These packages should work with any version of gulp since they don't contain gulp-specific code and are therefore independent of any changes made to gulp. Examples that are often used with gulp are merge-stream and del.
Gulp-specific packages on the other hand can be affected by changes to gulp.
Among those there are gulp plugins: packages that are supposed to be used in gulp streams with .pipe(). Their names almost always start with gulp-, they are tagged with gulpplugin on npm, and they are listed on the GulpJS website.
These should also generally be safe to use with any version of gulp. Gulp streams are just regular nodejs streams, so these plugins should work with either version of gulp (nodejs streams have their own history of compatibility problems, but that's not really relevant anymore). Barring major changes to the vinyl file format, there's not much that can happen that might affect gulp plugins.
The gulp-copy plugin that you mention falls into this category and should be safe to use with both gulp 3 and gulp 4.
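For illustration, a minimal gulpfile using it with gulp 4 (paths and options are hypothetical; check the gulp-copy README for the exact option names):

const { src } = require('gulp');
const copy = require('gulp-copy');

// Copy the matched files into out/, stripping the first path segment.
function copyAssets() {
  return src('assets/img/*.png')
    .pipe(copy('out/', { prefix: 1 }));
}

exports.copy = copyAssets;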
All that being said there are a few gulp plugins that only make sense for a specific version. gulp-plumber for example fixes an annoying issue with error handling in streams that is only necessary for gulp 3, but not gulp 4. gulp-src-ordered-globs circumvents a problem with ignore patterns in gulp 3 that's fixed in gulp 4.
Finally there's what I like to call gulp extensions. They're not supposed to be used with .pipe(). Instead they extend the capabilities of gulp in other ways.
These are the ones you need to watch out for. A lot of them deal with gulp's task running capabilities which have undergone major changes between gulp 3 and gulp 4. There's probably many packages in this category that only work with a certain version of gulp.
I wouldn't worry too much about it though. Most of those packages will prominently display their limitations in their documentation. run-sequence for example has a big fat note at the top informing the user that this is a temporary solution for gulp 3. I published a package named gulp-parameterized the other day that only works with gulp 4 and it screams so in all-caps at the top of the docs.
Basically scan the documentation of any package you want to use for these kinds of notes and you should be relatively safe.

How do you deal with people removing npm package versions?

Today I found that an npm package version that my application relies on, Babel 6.0.15, had been removed from npm.
This caused a compilation failure on a new PC, and I had to manually find the closest available version for it, plus all the cascading version changes this caused in related packages.
What is the best of way of dealing with npm packages, now that I know they can go missing at any time?
Do you check your node_modules folder into source control?
Is there a rule on npm about what versions (major, minor, etc) may be removed by the creator, and which are more 'long term support' and must be retained?
How do you get npm locally to inform you when 'npm update' fails on a new pc, rather than silently failing?
After thinking about this for a while I wrote a blog post summarising what I think is best practice. Reproduced below:
Summary
Specify exact versions of all npm modules, e.g. "alt": "0.17.8"
Commit your node_modules folder into source control
Don't use DefinitelyTyped or any other external library TypeScript definition tools
Why?
Some of these principles might be controversial, so here’s my reasoning:
Specify exact versions of all npm modules
Semver (semantic versioning) says that breaking changes should occur only if the major version changes, so you should be able to say just "alt": "0.17".
But I've found in practice that even patch changes (bugfixes) can break your application, because libraries that rely on these libraries often expect some tiny behaviour of a particular version not to change. So in order for all your particular versions of particular libraries to work together, they need to rely on exact versions of other libraries. An example follows.
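For example (version numbers illustrative):

"dependencies": {
  "alt": "0.17.8",
  "react": "0.14.3"
}

rather than range specifiers like "^0.17.8" or "~0.17.8", which allow a fresh install to silently pick up newer releases.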
Commit your node_modules folder into source control
I first assumed that all versions of famous npm libraries would remain there indefinitely. But I then discovered that creators often remove old versions of their software from npm – which then breaks the cascading chain of exact version number dependencies you’ve configured for your app.
Yes, committing all your npm libraries will take up space in your repository, but they’re text files after all, not .DLLs, so they’ll get compressed really small. And the alternative is one day not being able to compile your app at all on a new computer because a library has been completely removed from npm.
Don't use DefinitelyTyped or any other external library TypeScript definition tools
It’s wonderful to be given compile errors for external tools you use. But I’ve found it’s not worth the effort because:
there’s no way to match the definition file version number and npm library version number, so you get definitions that are out of sync with the library you are using
they often have bugs
the type bugs you catch at compile time are probably going to occur in your own app, not in how you call external libraries
Instead of using .d.ts files for external libraries, just say:

declare module 'lodash' {
  let x: any;
  export = x;
}
Or use the --allowJs flag in TypeScript 1.8 onwards.
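The equivalent tsconfig.json setting, as a minimal sketch:

{
  "compilerOptions": {
    "allowJs": true
  }
}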

Testing a NuGet package

We are big users of NuGet; we've got 25-30 packages which we make available on a network share.
We'd like to be able to test new packages in the consuming applications before they're built and released. Ideally, this could be done with something similar to Maven's snapshots, i.e. having a specific development package (snapshot functionality).
Has anyone else come up with an (ideally reasonably non-hacky) way of doing this?
Our favoured method is to generate the package assemblies and then manually overwrite the assemblies in the packages/ directory, i.e. to replace the actual project references, but that doesn't seem particularly clean.
Update:
We use a CI build server which creates builds on every commit and has a specific manually triggered NuGet build which works off specifically tagged versions of the codebase. We don't want to create a NuGet build off every commit, but we would like to be able to test a likely candidate in the wild before we trigger the manual NuGet package build.
I ended up writing a unit / integration testing framework to solve a similar problem. Basically, I needed to verify the content of the package, the versions and info, what would happen when I installed and uninstalled the package, what versions the assemblies in lib were, what bitness the assemblies were built for (x86 or x64), and so on. And I needed it all to run without Visual Studio installed, on my (headless) build machine, as a quality gate.
Standing on the shoulders of giants like Pester, PETools, and SharpDevelop's package-management module, I put together nuget-test.
Clone the project into your package directory (where your .nuspec file and package files are). If for whatever reason you want to keep the nuget-test project as a git repo, then simply remove "remove-item nuget-test/.git -Recurse -Force" from the command below.
git clone https://github.com/nickfloyd/nuget-test.git; remove-item nuget-test/.git -Recurse -Force
Run Setup.ps1 in the root of the nuget-test directory in an x86 instance of PowerShell.
PS> .\setup.ps1
Write tests and place them in the nuget-test/test directory using the Pester syntax.
Run the tests.
PS> Invoke-Pester
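A minimal test sketch in Pester (v3) syntax, with hypothetical package and assembly names:

Describe "MyPackage" {
    It "contains the expected assembly in lib" {
        Test-Path ".\MyPackage\lib\net45\MyPackage.dll" | Should Be $true
    }
}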
Project page on GitHub: https://github.com/nickfloyd/nuget-test
I hope this helps you get closer to what you're trying to get done.
If you're using NuGet packages to distribute your libraries, you should not limit yourself to testing only the libraries: you should test the packages themselves as well (if your binaries are OK but incorrectly installed, consumers still have issues). The whole point is to improve that experience.
One way could be to have an additional CI or QA repository. The one you currently have is actually your "production" repository containing consumable releases, considered finished high-quality products.
Going further, you could have a logical package promotion flow (based on Continuous Integration or even using a Continuous Delivery approach), where:
- each check-in produces a package on your CI repository
- testers pick up a CI package for QA and, if it is found OK, promote it to either a QA feed or to the production feed (whichever you prefer; it depends on the quality of your testing and how well it is automated)
There are various ways of implementing this scenario: using simple network shares, internal NuGet.Server or Gallery implementations, or simply using http://myget.org to give it a try with minimal cost and zero effort.
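Whichever hosting you choose, consumers only need to register the extra feed as a package source (feed name and path hypothetical):

nuget sources add -Name "QA" -Source \\share\qa-packages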
Hope that helps!
Cheers,
Xavier