Cache npm install Task in VSTS - npm

I have configured a private agent in VSTS and have installed npm globally there. When I run npm install through my build task, it still installs the npm packages for every build, which takes an awful lot of time: approximately 12 minutes.
How can I cache the NPM installations so that the build time is reduced?

We use npm-cache. npm-cache is a Node module that calculates a hash of your package.json file; for every hash it creates a zip archive on your build server with the contents of node_modules. npm install is then reduced to extracting a zip on every build (provided, of course, you didn't actually change package.json).
The idea is: the first time, the tool downloads the npm packages and saves them locally. From then on, if package.json has not changed, it takes the packages from the local disk and copies them to the build agent folder; only if package.json has changed does it download the packages from the internet.
Install the npm-cache on the build machine:
npm install npm-cache -g
In the build definition, add a Command Line task (Tool: C:\Users\<user>\AppData\Roaming\npm\npm-cache, or just npm-cache if you add the tool to the PATH environment variable; Arguments: install npm; Working folder: $(Build.SourcesDirectory), or wherever package.json is located).
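To verify the setup on the agent itself, you can run the tool by hand; a minimal sketch, assuming npm-cache is on the PATH and you are in the folder containing package.json:

# hashes package.json and extracts the cached node_modules archive on a hit;
# on a miss it runs a real npm install and archives the result for next time
npm-cache install npm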

MS has finally implemented this feature (currently in beta): https://learn.microsoft.com/en-us/azure/devops/pipelines/caching/index?view=azure-devops#nodejsnpm
From there:
variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm
steps:
- task: CacheBeta@0
  inputs:
    key: $(Build.SourcesDirectory)/package-lock.json
    path: $(npm_config_cache)
  displayName: Cache npm
- script: npm ci

Unfortunately we cannot cache the NPM installations, as there is no such built-in feature for now.
However, there is already a UserVoice suggestion for the feature: Improve hosted build agent performance with build caches, and it seems the VSTS team is actively working on this now...
For now, you can try to speed up npm install on Visual Studio Team Services.

Use Cache task
Caching is added to a pipeline using the Cache pipeline task. This task works like any other task and is added to the steps section of a job.
With the following configuration:
pool:
  name: Azure Pipelines
steps:
- task: Cache@2
  inputs:
    key: 'YOUR_WEB_DIR/package.json'
    path: 'YOUR_WEB_DIR/node_modules/'
- task: Npm@1
  inputs:
    command: 'install'
    workingDir: 'YOUR_WEB_DIR/frontend'
You can use the key YOUR_WEB_DIR/package-lock.json too, but be aware that that file might be changed by a later step such as npm install, so the hash will change as well.
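If you do key on package-lock.json, the Azure Pipelines caching docs also describe a restoreKeys fallback, so a changed lockfile can still start from the most recent cache instead of an empty one; a sketch along the lines of the documented npm example:

variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm
steps:
- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    restoreKeys: |
      npm | "$(Agent.OS)"
    path: $(npm_config_cache)
  displayName: Cache npm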

Related

Is there a way to only run npm scripts for installed packages (opposite of --ignore-scripts)

I am in a situation where I need to ship node_modules with the rest of my code because the destination machines do not have access to our private network (and our private npm repository).
My problem is that I want to execute everything that happens after npm downloads all the files so that individual packages can build themselves correctly for the target machine. Is there a way to accomplish this? Here are a couple other ways to phrase this question:
How can I run npm install, but skip the download step?
How can I run postinstall for installed node_modules only?
I finally figured it out. There were a couple of important steps to make this happen:
When we get ready to package our code for distribution, we download all of the npm dependencies with the --ignore-scripts and --no-bin-links options. This prevents any packages from building/compiling or linking any bin files; it effectively only downloads the node_modules.
npm install --omit=dev --ignore-scripts --no-bin-links
We then distribute our code to the target machine and run the following command so that any compilations and bin links happen on the target machine:
npm rebuild
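Putting the two halves together, the whole flow looks roughly like this; the archive name and the file transfer are placeholders for whatever packaging and distribution mechanism you actually use:

# on the build machine, which can reach the registry: download only
npm install --omit=dev --ignore-scripts --no-bin-links
tar czf bundle.tar.gz package.json package-lock.json node_modules/

# on the target machine, which cannot reach the registry:
tar xzf bundle.tar.gz
npm rebuild    # compiles native addons, runs install scripts, links bins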

npm ci can only install packages with an existing package-lock.json or npm-shrinkwrap.json with lockfileVersion >= 1

This is the issue I am facing when running the command npm ci to install dependencies in my GitHub Actions workflow file.
I am working on an Expo managed app and using GitHub Actions as a CI to trigger builds whenever I push code to the development branch.
Here's my build script:
name: EAS PIPELINE
on:
  push:
    branches:
      - development
  workflow_dispatch:
jobs:
  build:
    name: Install and build
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          persist-credentials: false
      - name: Setup Node.js
        uses: actions/setup-node@v1
        with:
          node-version: 14.x
      - name: Setup Expo
        uses: expo/expo-github-action@v6
        with:
          expo-version: 4.x
          token: ${{ secrets.EXPO_TOKEN }}
          expo-cache: true
      - name: Install dependencies
        run: npm ci
      - name: Build on EAS
        run: EAS_BUILD_AUTOCOMMIT=${{1}} npx eas-cli build --platform all --non-interactive
Here's the issue that I am facing in the Install dependencies step.
Run npm ci
npm ci
shell: /usr/bin/bash -e {0}
env:
EXPO_TOKEN: ***
npm ERR! cipm can only install packages with an existing package-lock.json or npm-shrinkwrap.json with lockfileVersion >= 1. Run an install with npm#5 or later to generate it, then try again.
npm ERR! A complete log of this run can be found in:
npm ERR! /home/runner/.npm/_logs/2021-10-28T15_16_06_934Z-debug.log
Error: Process completed with exit code 1.
After a lot of research, I was able to figure out that this happens when you are not using npm install to install dependencies. In my case, I was only using yarn, so I had a yarn.lock file but no package-lock.json file.
One way to resolve this is to use npm install to install the dependencies; then you'll have a package-lock.json file and CI won't throw any error.
The other way, if you only want to use yarn, is to update that step in your eas-pipeline.yml file for installing the dependencies:
- name: Install dependencies
  run: |
    if [ -e yarn.lock ]; then
      yarn install --frozen-lockfile
    elif [ -e package-lock.json ]; then
      npm ci
    else
      npm i
    fi
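As an aside, --frozen-lockfile is the Yarn 1 flag; on Yarn 2+ it is deprecated in favour of --immutable, so on a modern Yarn the first branch would become:

yarn install --immutable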
As I wasn't able to find any solution on StackOverflow, and it is our first go-to place to look for any issue, I decided to write this answer here.
Here's the original answer: https://github.com/facebook/docusaurus/issues/2846#issuecomment-691706184
I had a similar error.
`npm ci` can only install packages when your package.json and package-lock.json or npm-shrinkwrap.json are in sync. Please update your lock file with `npm install` before continuing.
with a bunch of missing dependency names following this error.
I would run npm ci locally and it would succeed. However, it would give me the error above when npm ci ran in the CI pipeline; in my case that was because of a difference in the Node.js version installed in the environment the Jenkins pipeline runs in.
My local Node version was 16.x, and in the Jenkins container it was 12.x.
Upgrading it fixed the error.
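One way to prevent that kind of drift is to pin the Node version the pipeline runs against. A minimal sketch of a declarative Jenkinsfile using the Docker Pipeline plugin (the image tag is an assumption; use whatever matches your local version):

pipeline {
  // run the build inside a node:16 container so CI matches local
  agent { docker { image 'node:16' } }
  stages {
    stage('install') {
      steps {
        sh 'npm ci'
      }
    }
  }
}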
Old post, but I found this while searching for this same error. In my case I did have a package-lock.json file in my root directory. However, when I opened it, I realized that a JSON syntax error had slipped in during a previous merge conflict. After running npm i again the file was fixed. The npm ERR! 'npm ci' can only install with an existing package-lock.json message wasn't a super helpful error in that case.
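A quick way to rule out that kind of lockfile corruption is to parse the file directly; this one-liner is plain Node, nothing npm-specific:

node -e "JSON.parse(require('fs').readFileSync('package-lock.json', 'utf8'))" && echo "package-lock.json is valid JSON"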
This same thing happened to me, and I couldn't figure it out for the longest time. It turns out that I had legacy-peer-deps=true set globally, and I had no idea.
This caused my npm install command to alter the package-lock.json in a way that broke the build on our CI server. I reset package-lock.json with the version from master, removed that npm config, and reinstalled. Everything worked fine after that.
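If you suspect the same setting, it is easy to check and remove with standard npm config commands:

# shows the effective value (true if set globally or in a user .npmrc)
npm config get legacy-peer-deps
# remove the override, then regenerate the lockfile with a fresh install
npm config delete legacy-peer-deps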
For people who have this issue on AWS Amplify: you might have to run npm install, commit the package-lock.json file, and then deploy again.
If you're using pnpm, you can install Node dependencies via this step:
- name: Install Node.js dependencies
  run: |
    npm i -g pnpm
    pnpm i
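Note that pnpm keeps its own lockfile (pnpm-lock.yaml). For strict, npm ci-style behaviour, pnpm offers a frozen-lockfile mode (it defaults to on in CI environments in recent versions):

pnpm install --frozen-lockfile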
After battling with this issue for about two days, my project finally deployed successfully to Firebase Functions after I deleted package-lock.json from both the src and src/functions folders.
This occurred because the package-lock file had not been generated with npm, even though npm ci requires npm to install the packages.
And because npm ci requires package-lock.json, we get this error. To fix it for GitHub Actions, this is what I did:
- run: yarn install --frozen-lockfile
- run: yarn lint
- run: yarn test:ci
Deleting deploy.json helped me, since the token is updated when the file is overwritten:
rm ~/.config/configstore/@vkontakte/vk-miniapps-deploy.json
(but I have other services)
This happens sometimes because of a difference between the Node.js version installed in the environment the pipeline runs in and the one used locally.
To fix this, I ran:
$ firebase init hosting:github
then typed Y when asked whether to set up the workflow.
Finally, add npm i as one of the scripts to run before deploying, like this:
npm i && npm ci && npm run build
I was using the npm package manager and migrated to the yarn package manager, removing the package-lock.json file.
I had this configuration in my .circleci/config.yml file:
- node/install-packages
I changed it to:
- node/install-packages:
    pkg-manager: yarn
In my case the problem was some 'extraneous' packages, specifically local path dependencies. After removing them from package.json, the problem was solved.
I got the error message while running npm install instead of npm ci.
In package.json I changed this:
"overrides": {
  "trim-newlines": "^3.0.1"
},
to this:
"overrides": {
  "trim-newlines": "^1.0.0"
}
That worked for me.
I had a similar problem deploying to Heroku. I simply deleted the existing package-lock.json file and then ran
npm install
Merging the new lock file fixed the deploy.
In my case, I had this error with npm ci while using Yarn. I eventually figured out that the version of Node I was using wasn't supported.
I did the following:
node -v to confirm my node version (18.0.1)
nvm use 16.13.0
Delete the node_modules directory
Delete yarn.lock
Run yarn
Run yarn add + the package names
After this, the error no longer occurred and the app deployed correctly.

Gitlab CI: create dist folder inside repository?

I just recently started using gitlab CI to automate some build/deploy steps. It works perfectly to build docker images etc., but I was wondering if it's possible to create a folder in the repository during a build step. For example, I'm now making an npm utility package, and I'm importing it in my other projects via a private gitlab repo (using a deploy token), but the code of the util package is written in ES6 and needs to be transpiled to CommonJS to be used in the other packages. Manually I can run npm run build and it will output a dist folder with the transpiled code.
I was trying (and researching) to find out if it's possible to automate this build process using .gitlab-ci, but so far I couldn't find anything.
Does anyone know how I can achieve this and/or whether this is possible?
Thanks in advance!
Not sure if I got your question correctly, so add more details if not.
When your CI build creates new folders or files, they are written to the task runner's file system (no surprise here, I assume).
If you want to access these files from Gitlab's web UI you can define them as artifacts in your build job (see https://docs.gitlab.com/ee/user/project/pipelines/job_artifacts.html)
Your build job would look something like this (pseudo code written from memory, not tested on Gitlab):
build:
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 week
UPDATE: If you want to upload the build artifact to an NPM registry, you could just build and publish together.
build:
  script:
    - npm run build
    - npm publish <PARAMETERS>
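npm publish also needs registry credentials in CI. A common pattern is to write an .npmrc from a masked CI variable; here NPM_TOKEN is an assumed variable name and the registry is the public default, so adjust both for a private registry:

build:
  script:
    - npm run build
    - echo "//registry.npmjs.org/:_authToken=${NPM_TOKEN}" > .npmrc
    - npm publish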

NPM not installing devDependencies on bitbucket pipeline?

I'm trying to set up my first Bitbucket pipeline, which simply builds my application and deploys it to my FTP server using the following bitbucket-pipelines.yml:
image: node:6.9.4
pipelines:
  default:
    - step:
        caches:
          - node
        script:
          - npm install
          - npm test
    - step:
        script:
          - npm run build
          - node deploy.js
The issue lies in the npm install, because when Bitbucket tries to run the npm run build command it says that rimraf (an npm package) is not found. rimraf, however, is listed in my devDependencies; all regular dependencies in my package.json are downloaded correctly.
There is no global variable set by me, so NODE_ENV can't be the cause, right?
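For context: npm skips devDependencies whenever NODE_ENV=production or npm's production config is set, so it is worth ruling that out from inside the pipeline itself; a quick diagnostic sketch (flags as they behaved on npm versions of that era):

# print what the build environment actually sees
echo "NODE_ENV=$NODE_ENV"
npm config get production
# force devDependencies to install regardless of the environment
npm install --production=false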
I had the same issue with gulp.
Gulp was in devDependencies and was also specified as a script in package.json, but it still said npm ERR! missing script: gulp.
The documentation says to install it globally, so there might be a related issue with your package.
https://confluence.atlassian.com/bitbucket/javascript-node-js-with-bitbucket-pipelines-873891287.html
I had this same issue. For me, the problem was that the version of Node on my local development device was different from the version of Node in the bitbucket-pipelines.yml file.
To fix it, I went into bitbucket-pipelines.yml and changed this line:
image: node:10.15.3
to this:
image: node:14.15.0

How to avoid reinstalling dependencies for each job in Gitlab CI

I'm using Gitlab CI 8.0 with gitlab-ci-multi-runner 0.6.0. I have a .gitlab-ci.yml file similar to the following:
before_script:
  - npm install

server_tests:
  script: mocha

client_tests:
  script: karma start karma.conf.js
This works, but it means the dependencies are installed independently before each test job. For a large project with many dependencies this adds considerable overhead.
In Jenkins I would use one job to install the dependencies, then tar them up into a build artefact which is copied to downstream jobs. Would something similar work with Gitlab CI? Is there a recommended approach?
Update: I now recommend using artifacts with a short expire_in. This is superior to cache because it only has to write the artifact once per pipeline whereas the cache is updated after every job. Also the cache is per runner so if you run your jobs in parallel on multiple runners it's not guaranteed to be populated, unlike artifacts which are stored centrally.
Gitlab CI 8.2 adds runner caching which lets you reuse files between builds. However I've found this to be very slow.
Instead I've implemented my own caching system using a bit of shell scripting:
before_script:
  # unique hash of required dependencies
  - PACKAGE_HASH=($(md5sum package.json))
  # path to cache file
  - DEPS_CACHE=/tmp/dependencies_${PACKAGE_HASH}.tar.gz
  # check if cache file exists and if not, create it
  - if [ -f $DEPS_CACHE ];
    then
      tar zxf $DEPS_CACHE;
    else
      npm install --quiet;
      tar zcf - ./node_modules > $DEPS_CACHE;
    fi
This will run before every job in your .gitlab-ci.yml and only install your dependencies if package.json has changed or the cache file is missing (e.g. first run, or file was manually deleted). Note that if you have several runners on different servers, they will each have their own cache file.
You may want to clear out the cache file on a regular basis in order to get the latest dependencies. We do this with the following cron entry:
@daily find /tmp/dependencies_* -mtime +1 -type f -delete
EDIT: This solution was recommended in 2016. In 2021, you might consider the caching docs instead.
A better approach these days is to make use of artifacts.
In the following example, the node_modules/ directory is immediately available to the lint job once the build stage has completed successfully.
build:
  stage: build
  script:
    - npm install -q
    - npm run build
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 week

lint:
  stage: test
  script:
    - npm run lint
From the docs:
cache: Use for temporary storage for project dependencies. Not useful for keeping intermediate build results, like jar or apk files. Cache was designed to be used to speed up invocations of subsequent runs of a given job, by keeping things like dependencies (e.g., npm packages, Go vendor packages, etc.) so they don’t have to be re-fetched from the public internet. While the cache can be abused to pass intermediate build results between stages, there may be cases where artifacts are a better fit.
artifacts: Use for stage results that will be passed between stages. Artifacts were designed to upload some compiled/generated bits of the build, and they can be fetched by any number of concurrent Runners. They are guaranteed to be available and are there to pass data between jobs. They are also exposed to be downloaded from the UI. Artifacts can only exist in directories relative to the build directory and specifying paths which don’t comply to this rule trigger an unintuitive and illogical error message (an enhancement is discussed at https://gitlab.com/gitlab-org/gitlab-ce/issues/15530 ). Artifacts need to be uploaded to the GitLab instance (not only the GitLab runner) before the next stage job(s) can start, so you need to evaluate carefully whether your bandwidth allows you to profit from parallelization with stages and shared artifacts before investing time in changes to the setup.
So, I use cache. When I don't need to update the cache (e.g. the build folder in a test job), I use policy: pull (see here).
I prefer to use cache because it removes the files when the pipeline finishes.
Example
image: node

stages:
  - install
  - test
  - compile

cache:
  key: modules
  paths:
    - node_modules/

install:modules:
  stage: install
  cache:
    key: modules
    paths:
      - node_modules/
  after_script:
    - node -v && npm -v
  script:
    - npm i

test:
  stage: test
  cache:
    key: modules
    paths:
      - node_modules/
    policy: pull
  before_script:
    - node -v && npm -v
  script:
    - npm run test

compile:
  stage: compile
  cache:
    key: modules
    paths:
      - node_modules/
    policy: pull
  script:
    - npm run build
I think it's not recommended, because all jobs of the same stage can be executed in parallel:
First, all jobs of build are executed in parallel.
If all jobs of build succeed, the test jobs are executed in parallel.
If all jobs of test succeed, the deploy jobs are executed in parallel.
If all jobs of deploy succeed, the commit is marked as success.
If any of the previous jobs fails, the commit is marked as failed and no jobs of further stages are executed.
I have read that here:
http://doc.gitlab.com/ci/yaml/README.html
I solved a problem with a symbolic link to a folder outside the working directory. The solution looks like this:
# .gitlab-ci.yml
before_script:
  - New-Item -ItemType SymbolicLink -Path ".\node_modules" -Target "C:\GitLab-Runner\cache\node_modules"
  - yarn
after_script:
  - (Get-Item ".\node_modules").Delete()
I know this is a rather dirty solution, but it saves a lot of time in the build process and extends the life of the storage.
GitLab introduced caching to avoid redownloading dependencies for each job.
The following Node.js example is inspired by the caching documentation.
image: node:latest

# Cache modules in between jobs
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - .npm/

before_script:
  - npm ci --cache .npm --prefer-offline

server_tests:
  script: mocha

client_tests:
  script: karma start karma.conf.js
Note that the example uses npm ci. This command is like npm install, but it is designed to be used in automated environments. You can read more about npm ci and the command-line arguments you can pass in the documentation.
For further information, check Caching in GitLab CI/CD and the cache keyword reference.