Why does `npm install` use the shrinkwrap's 'resolved' property - npm

I am considering setting up a local npm mirror such as "npm_lazy" on my computer.
But it seems like npm install and npm shrinkwrap don't work well with local mirrors.
Let me explain. When there is an npm-shrinkwrap.json file, the npm install command always requests packages from the URL specified in the shrinkwrap file's "resolved" property. So, even if I have a local npm mirror running at http://localhost:12345/, and even if I configure npm to use that as its registry, it will not request any package modules from my local mirror (unless a "resolved" property in the shrinkwrap file happens to point to http://localhost:12345/).
Basically, npm install ignores npm's registry configuration and follows the shrinkwrap "resolved" property.
Is there a reason why npm install uses the "resolved" property instead of constructing it dynamically with the dependency package name and version? Why does npm-shrinkwrap.json have this field at all?
So back to my issue. I want to use npm_lazy as a local npm mirror. I could rewrite all the "resolved" URLs in npm-shrinkwrap.json to point to http://localhost:12345/. But then my shrinkwrap file is less portable: my coworkers would not be able to use it unless their computers have the same npm_lazy server running.
I have considered redirecting all registry.npmjs.org traffic to localhost in order to create a transparent mirror. But it would be too hard -- it needs to support HTTPS, and also, how would npm_lazy access the true domain? I would have to specify it by its IP address, which may change.
Has anyone else attempted to do the same thing -- to set up a local-computer NPM cache?
But, my main question is, why does npm use the "resolved" property? Thanks.

Shrinkwrap locks down the dependencies, attempting to guarantee the same 'build' (or dependencies) for everyone using that shrinkwrap file. It not only locks down the versions, but also the repository URL, for the same reason: changing any of these potentially changes the content of the package, thus losing any guarantee. Of course, if shrinkwrap were aware of the concept of a repository cache or proxy, it should certainly use yours, but apparently it isn't.
Normally you'd replace the repository URL in package.json (since this is a source file) and run npm shrinkwrap again to re-generate the npm-shrinkwrap.json file (since it is a generated file), and keep that on a local dev branch. But it is a hassle to keep configuration files separated.
So you could enter cache.repository.example.com as the repository hostname, and add a CNAME record in DNS pointing to the real npm registry. Anyone with npm_lazy installed locally could then safely override this DNS entry in their hosts file to point to localhost.
However, there is a simpler solution to localize your checkout. Recently I answered a question with a little script to update package.json versions with values from npm-shrinkwrap.json; it can easily be adapted to update all the resolved properties in npm-shrinkwrap.json to use your proxy, which I will leave as an exercise ;-)
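A minimal sketch of such an adaptation (the proxy address http://localhost:12345 and the helper name are assumptions to match the question; the file I/O is left out of the sketch):

```javascript
// Sketch: rewrite every "resolved" URL in a shrinkwrap dependency tree so
// it points at a local proxy such as npm_lazy (hypothetical address below).
const PROXY = 'http://localhost:12345';

function rewriteResolved(deps) {
  for (const name of Object.keys(deps || {})) {
    const dep = deps[name];
    if (typeof dep.resolved === 'string') {
      // Keep the tarball path, swap only the origin.
      dep.resolved = dep.resolved.replace(/^https?:\/\/[^/]+/, PROXY);
    }
    rewriteResolved(dep.dependencies); // recurse into nested dependencies
  }
  return deps;
}

// Example shrinkwrap fragment:
const tree = {
  lodash: {
    version: '3.10.1',
    resolved: 'https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz',
  },
};
rewriteResolved(tree);
console.log(tree.lodash.resolved);
// -> http://localhost:12345/lodash/-/lodash-3.10.1.tgz
```

To apply it to a real file, read npm-shrinkwrap.json with fs.readFileSync, run rewriteResolved over its dependencies, and write the result back.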

I prefer not to save the resolved property into npm-shrinkwrap.json.
The utility shonkwrap does the trick, but it impacts everybody on the team: everyone must type shonkwrap instead of npm shrinkwrap. Alternatively, you can write similar code in your build script (e.g. a gulpfile) to delete the resolved properties from the existing npm-shrinkwrap.json.
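A sketch of such a deletion step (a hypothetical helper, not the shonkwrap source; file I/O omitted):

```javascript
// Sketch: strip every "resolved" property from a shrinkwrap dependency
// tree so npm falls back to the configured registry for each package.
function stripResolved(deps) {
  for (const name of Object.keys(deps || {})) {
    delete deps[name].resolved;
    stripResolved(deps[name].dependencies); // nested dependencies too
  }
  return deps;
}

// Example shrinkwrap fragment with one nested dependency:
const tree = {
  lodash: {
    version: '3.10.1',
    resolved: 'https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz',
    dependencies: {
      foo: {
        version: '1.0.0',
        resolved: 'https://registry.npmjs.org/foo/-/foo-1.0.0.tgz',
      },
    },
  },
};
stripResolved(tree);
console.log('resolved' in tree.lodash); // -> false
```

In a build script you would run this over the parsed npm-shrinkwrap.json and write the file back out.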

Related

Switching from NPM to GitHub Packages

I have a NPM package with a small user base, yesterday I created a new version and wanted to release it. I thought that I might as well make use of the new GitHub Packages and so I setup everything as GitHub suggested and released!
Now the problem is that I still have the old NPM page on version 2.0.2, which everyone currently uses as their dependency, while the new GitHub package is on 2.0.4. Is there a way to 'synchronize' these two? Of course, GitHub Packages uses the <USER>/<PACKAGE> labeling while NPM just uses <NAME>.
Is the only thing I can do to publish on GitHub Packages and on NPM and just try to move users away from the NPM page?
If you're publishing a public package, you're better off just publishing it on NPM, as that is what most developers are used to.
I use GitHub Packages at work and the only advantage is that it is effectively free for hosting internal packages, as we are already paying for GitHub anyway. If it weren't for the zero price we wouldn't be using it.
If you really want to force all your users to migrate to GitHub Packages, and have them set up npm to work with it, you could mark your old version as deprecated on npm and use the deprecation message to point people to the new version.
https://docs.npmjs.com/cli/v6/commands/npm-deprecate
Here is another solution, but there is a catch.
Change your registry.npmjs.org package content to
index.js
export * from '@schotsl/my-package';
Now your registry.npmjs.org package is (almost) pointing to your npm.pkg.github.com package.
Only almost, because any development directory for a project downstream of registry.npmjs.org/my-package must configure the scope-to-server mapping for @schotsl/my-package to npm.pkg.github.com in a package-manager config file.
In the case of package managers 'npm' and 'yarn' (v1) that can be done in
an .npmrc file at the same level as package.json.
The required .npmrc content is
@schotsl:registry=https://npm.pkg.github.com
# GitHub PAT, packages:read authorization is sufficient
//npm.pkg.github.com/:_authToken="ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
The first line is the scope to server mapping.
The second line is a GitHub personal access token (PAT) with at least packages:read permission. It is actually pretty liberal: a PAT with packages:read issued from any GitHub account will allow read access to every GitHub account's packages.
For 'yarn' v2, the .npmrc file does not work; instead a couple of keys need to be set in .yarnrc.yml.
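For reference, the corresponding .yarnrc.yml keys look roughly like this (scope and token values taken from the example above; verify the exact key names against your Yarn version's docs):

```yaml
npmScopes:
  schotsl:
    npmRegistryServer: "https://npm.pkg.github.com"
    npmAuthToken: "ghp_XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
```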
Unfortunately there is no way to set the scope-to-server mapping and the token inside the registry.npmjs.org/my-package package itself.
Putting the .npmrc file in there doesn't work, it is ignored. And that wouldn't be a good solution anyway, because not all package managers read .npmrc files.
That is the 'catch' - using npm.pkg.github.com packages requires package manager specific config settings to be made by every downstream developer.
In addition, what if two different upstream packages have colliding scope names, each mapping to a different server? The current config methodology fails in that case.
Feature proposal (not current behavior)
Ideally, there would be a common interface agreed upon by all package managers inside package.json - and the scope-to-server mapping would be defined in the package that directly references the scope. For example, in the package.json of my-package on registry.npmjs.org
{
  "dependencies": {
    "@schotsl/my-package": "1.0.0"
  },
  "registries": {
    "@schotsl/my-package": "https://npm.pkg.github.com"
  },
  "auths": {
    "https://npm.pkg.github.com": "ghp_XXXXXXXXXXXXXXXX"
  }
}
Then downstream users would not need to config for each scope, and predictable (and risky) problems with scope name or package name collisions would not occur.
But that is not the way it is. Therefore Github Packages (npm.pkg.github.com) doesn't really seem to be a feasible way to provide public packages which may become dependencies of other public packages. No problem for private packages though.

How can I shim access to a private repository with scoped npm packages on a machine that can't access it?

Let's say I have a private, scoped NPM repository that lives behind a corporate firewall. I'd like to set my project up on another computer that will not connect to the VPN, so it will not be able to access that private repo.
How can I set up my project to easily import those dependencies from local folders and/or my local npm cache and skip the private repo?
That is, if my package.json file has...
"dependencies": {
"#privateRepo/some-library-framework": "4.2.1"
}
... and I can't get to the server, but I can get the files that are needed and would've been installed from another node_modules folder that lives on a machine that can access the repo.
I tried taking the files from the packages in @privateRepo and using npm cache add D:\path\to\lib\with\packageDotJsonInside for each of them, but still got...
Not Found - GET https://registry.npmjs.org/@privateRepo%2fsome-library-framework - Not found
... when I tried to npm i the rest.
I think that means that I need to set something up in .npmrc like is described here...
registry=https://registry.npmjs.org/
@test-scope:registry=http://nexus:8081/nexus/content/repositories/npm-test/
//nexus:8081/nexus/content/repositories/npm-test/:username=admin
//nexus:8081/nexus/content/repositories/npm-test/:_password=YWRtaW4xMjM=
email=…
... where you'd normally set up auth, but where you're also setting up the URL for a scoped package. I think I want to set up @privateRepo:registry=http://localhost/something/something here.
But I think that also implies I would at least need to create a local webserver (or npm repo?) to answer requests (and then maybe I'm looking for something like verdaccio?).
So, simplest case, is there a way to force the app to use the cached version or is there more I need to shim? If not, what's the easiest way to create a local repo to serve those packages in the place of the private repo?
Seeing nothing better, the easiest answer does seem to be setting up a local npm repo. You can then set up your .npmrc to point the scoped private registry at localhost instead of the "real" version behind the VPN.
And as it turns out, Verdaccio actually does exactly this -- you could also use it to host a "real" private repo, including behind your firewall, but installing on your dev box will allow you to provide your npm packages to any new codebase locally.
This is described in some detail by this video that's linked on Verdaccio's docs site. Here's the quick version:
Install verdaccio: npm install --global verdaccio
Run verdaccio: verdaccio
You can then check out its interface at http://localhost:4873/ (or elsewhere if you changed defaults)
Create a user: npm adduser --registry http://localhost:4873
Login: npm login --registry http://localhost:4873
You can now log in as that user on the web UI too, if you want.
Navigate to your packages' files, going into each package-specific folder.
That is, if you pull all of your packages from another project's node_modules, you need to go into each folder where the individual package's package.json file lives in order to publish it.
Publish the package: npm publish --registry http://localhost:4873
You can double-check that it "took" by refreshing the web UI.
Repeat for each additional package.
That's it! You now have an npm repo for the packages you can use to remove the VPN requirement for running npm i. Just schlep the new versions of the packages over to your local npm and publish them as appropriate.
You will need to set up a scoped entry for this registry in your .npmrc, but you were already doing that for your repo behind the firewall, so no big deal, right?
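For example, assuming Verdaccio's default port and the @privateRepo scope from the question, that .npmrc entry would look like:

```
@privateRepo:registry=http://localhost:4873/
```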
Ready to move the check for a better answer, but this seems like it oughta work.

NPM lockfiles/shrinkwrap get random "dl" parameter tacked on to the "resolved" URL

Our company uses an Artifactory repository for storing internally-published packages and as a proxy for the NPM registry. Sometimes the resolved field in lockfiles/shrinkwrap files is as expected, containing URLs for our internal repository, but occasionally they show up as something like this (line break added for clarity):
https://our.repository.com/artifactory/api/npm/some-repo/lodash/-/lodash-3.10.1.tgz
?dl=https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz
Then, from pull request to pull request, these dl parameters constantly oscillate between being present and absent depending on which developer runs npm install, leading to a lot of pull-request and commit noise.
I'm guessing it's Artifactory that's adding this dl param, since I fail to see it in a code search in the npm code base.
Why does this happen? Can we disable this behavior? And is it safe to strip this parameter as a postshrinkwrap script workaround?
I think the root of your problem is likely caching.
NPM caches packages that have been downloaded, so they don't have to be downloaded again, and they can even be re-installed offline if necessary. It also caches the resolved value for later use. If a package of the same version has already been resolved and downloaded, it doesn't need to go and fetch it again and get the updated download/resolved URL.
You can manually clear this cache with the following command.
npm cache clean --force
Alternately, it could be that differences in how different versions of NPM calculate the resolved field (following the Location header or not) are to blame. However, I think caching is more likely to blame.
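If you do go the postshrinkwrap route the asker mentions, the stripping step could be sketched like this (the helper name and in-memory example are mine; wire it up to read and rewrite your real lockfile, and register it as a "postshrinkwrap" script in package.json):

```javascript
// Sketch: drop the "?dl=..." query string from every resolved URL in a
// lockfile dependency tree, leaving only the internal repository URL.
function stripDlParam(deps) {
  for (const name of Object.keys(deps || {})) {
    const dep = deps[name];
    if (typeof dep.resolved === 'string') {
      dep.resolved = dep.resolved.split('?')[0]; // remove any query string
    }
    stripDlParam(dep.dependencies); // recurse into nested dependencies
  }
  return deps;
}

// Example lockfile fragment with the oscillating dl parameter:
const tree = {
  lodash: {
    version: '3.10.1',
    resolved:
      'https://our.repository.com/artifactory/api/npm/some-repo/lodash/-/lodash-3.10.1.tgz' +
      '?dl=https://registry.npmjs.org/lodash/-/lodash-3.10.1.tgz',
  },
};
stripDlParam(tree);
console.log(tree.lodash.resolved);
// -> https://our.repository.com/artifactory/api/npm/some-repo/lodash/-/lodash-3.10.1.tgz
```

Since the tarball path itself is untouched, installs still resolve against your internal repository.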

Peer dependency that is also dev dependency of linked npm module is acting as a separate instance

In my app, I have these dependencies:
TypeORM
typeorm-linq-repository AS A LOCAL INSTALL ("typeorm-linq-repository": "file:../../../IRCraziestTaxi/typeorm-linq-repository"), which has both a dev dependency AND a peer dependency on TypeORM
The reason I use a "file:" installation of typeorm-linq-repository is that I am the developer and test changes in this app prior to pushing releases to npm.
I was previously using node ~6.10 (npm ~4), so when I used the "file:" installation, it just copied the published files over, which is what I want.
However, after upgrading to node 8.11.3 (npm 5.6.0), it now links the folder rather than copying the published files.
Note, if it matters, that my environment is Windows.
The problem is this: since both my app and the linked typeorm-linq-repository have TypeORM in their own node_modules folders, TypeORM is being treated as a separate "instance" of the module in each app.
Therefore, after creating a connection in the main app, when the code that accesses the connection in typeorm-linq-repository is reached, it throws an error of Connection "default" was not found..
I have searched tirelessly for a solution to this. I have tried --preserve-symlinks, but that does not work.
The only way for me to make this work right now is to manually create the folder in my app's node_modules and copy applicable files over, which is a huge pain.
How can I either tell npm to NOT symlink the "file:" installation or get it to use the same instance of the TypeORM module?
I made it work pretty easily, although I feel like it's kind of a band-aid. I will post the answer here to help anybody else who may be having this issue, but if anybody has a more proper solution, feel free to answer and I will accept.
The trick was to link my app's installation of TypeORM to the TypeORM folder in my other linked dependency's node_modules folder.
...,
"typeorm": "file:../../../IRCraziestTaxi/typeorm-linq-repository/node_modules/typeorm",
"typeorm-linq-repository": "file:../../../IRCraziestTaxi/typeorm-linq-repository",
...

Overriding default npm registry for single packages served from local folder

I'm adding several dependencies to a project that currently uses the default npm registry. Obviously the dependencies cannot be resolved since the packages are not found there.
I'm wondering if I can provide the packages via a folder or zip file instead and tell npm to bypass the registry for certain dependencies and take the packages directly from the folder. I want to avoid to setup my own registry.
Sinopia seems to be a lightweight solution for the problem. It is a private repository server that allows you to use private packages, cache the npmjs.org registry, and override public packages.
Disclaimer: I haven't tried it because my problem was solved by another private registry I didn't know at the time of writing the question. However, maybe it helps someone else.