The internet is full of complaints about GitLab not caching, but in my case I think GitLab CI is actually caching correctly. The thing is that npm seems to install everything again anyway.
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - vendor/
    - bootstrap/
    - node_modules/

build-dependencies:
  image: ...
  stage: build
  script:
    - cp .env.gitlab-testing .env
    - composer install --no-progress --no-interaction
    - php artisan key:generate
    - npm install
    - npm run prod
  artifacts:
    paths:
      - vendor/
      - bootstrap/
      - node_modules/
      - .env
      - public/mix-manifest.json
  tags:
    - docker
This is my gitlab-ci.yml file (well, the relevant part). While the cached composer dependencies are used, the node_modules aren't. I even added everything to both cache and artifacts out of desperation.
Updated Answer (Jul 30, 2022; GitLab ^15.12 & >13)
As the comments noted, the use of artifacts in the original answer is not ideal, but at the time it was the cleanest approach that worked reliably. Now that GitLab's documentation around cache has been updated, and cache has been expanded to support multiple cache keys per job (4 maximum, unfortunately), there is a better way to handle node_modules across a pipeline.
The rationale for the implementation is based on understanding the quirks of both GitLab and npm. These are the fundamentals:
1. npm recommends the use of npm ci instead of npm install when running in a CI/CD environment. Note that this requires a package-lock.json, which is used to ensure no packages are automatically version-bumped while running in CI (npm i by default will not produce the same deterministic build every time, such as on a job re-run).
2. npm ci deliberately removes the entirety of node_modules before re-installing all packages listed in package-lock.json. Therefore, it is best to configure GitLab to run npm ci only once and ensure the resulting node_modules is passed to other jobs.
3. npm has its own cache, stored at ~/.npm/, for offline builds and overall speed. You can specify a different cache location with the --cache <dir> option (you will need this). (A variation of @Amityo's answer.)
4. GitLab cannot cache any directory outside of the repository! This means the default cache directory ~/.npm cannot be cached.
5. GitLab's global cache configuration is applied to every job by default. Jobs need to explicitly override the cache config if they don't need the globally cached files. Using YAML anchors, the global cache config can be copied and modified, but this doesn't seem to work when you want to override a setting inside a cache list entry (resolution still under investigation); see the sketch after this list.
6. To run additional npx or npm run <script> commands without re-running an install, you should cache the node_modules/ folder(s) across the pipeline.
7. GitLab's expectation is that you use the cache feature for dependencies and use artifacts only for dynamically generated build results. This answer now supports that intent better than was previously possible. One restriction: artifacts must stay below the maximum artifact size, 1 GB (compressed) on GitLab.com, and artifacts count toward your storage usage quota.
8. The needs or dependencies directives control whether artifacts from a previous job are downloaded (or deleted) automatically in the next job.
9. A GitLab cache can use the hash of a file as its key, so it is possible to update the cache only when package-lock.json changes. You could key on package.json instead, but you would lose your deterministic builds, since it does not change when minor or patch versions are installed.
10. If you have a mono-repo with more than 2 separate packages, you will hit the cache entry limit of 4 in the install job. You will not have the ideal setup, but you can combine some cache definitions. Also note that GitLab's cache:key:files supports a maximum of 2 files for the key hash, so you will likely need another method to derive a useful key. One likely solution is to use a non-file-based key and cache all node_modules/ folders under that key. That way you have only 2 cache entries for the install job and 1 for each subsequent job.
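A minimal sketch of the anchor idea from point 5 (the key name deps is illustrative; this works only when the cache is written as a single map, because the YAML merge key << cannot override a field inside a list entry, which is why the solution below repeats the key definition for the install job instead):

cache: &global_cache
  key: deps
  paths:
    - node_modules/
  policy: pull

install:
  cache:
    <<: *global_cache
    policy: pull-push # only the job that installs writes the cache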
Solution
1. Run a single install job in the .pre stage, using a cache of downloaded packages (the tar.gz archives) shared across the entire repository.
2. Cache all node_modules/ folders for the following jobs in that pipeline execution. Do not allow any job except install to upload the cache (this lowers pipeline run time and prevents unintended consequences).
3. Pass the build/ directory on to other jobs via artifacts only when needed.
# .gitlab-ci.yml
stages:
  - build
  - test
  - deploy

# global cache settings for all jobs
# Ensure compatibility with the install job
# goal: the install job loads the cache and
# all other jobs can only use it
cache:
  # most npm libraries will only have 1 entry for the base project deps
  - key: &global_cache_node_mods
      files:
        - package-lock.json
    paths:
      - node_modules/
    policy: pull # prevent subsequent jobs from modifying cache
  # # ATTN mono-repo users: with only additional node_modules,
  # # add up to 2 additional cache entries.
  # # See limitations in #10.
  # - key:
  #     files:
  #       - core/pkg1/package-lock.json
  #   paths:
  #     - core/pkg1/node_modules/
  #   policy: pull # prevent jobs from modifying cache

install:
  image: ...
  stage: .pre # always first, no matter if it is listed in stages
  cache:
    # store npm cache for all branches (stores downloaded pkg tar.gz's)
    # will not be necessary for any other job
    - key: ${CI_JOB_NAME}
      # must be inside $CI_PROJECT_DIR for gitlab-runner caching (#3)
      paths:
        - .npm/
      when: on_success
      policy: pull-push
    # Mimic &global_cache_node_mods config but override policy
    # to allow this job to update the cache at the end of the job
    # and only update if it was a successful job
    # NOTE: I would use yaml anchors here but overriding the policy
    # in a yaml list is not as easy as a dictionary entry (#5)
    - key:
        files:
          - package-lock.json
      paths:
        - node_modules/
      when: on_success
      policy: pull-push
    # # ATTN mono-repo users: add additional key entries from
    # # the global cache and override the policy as above but
    # # realize the limitations (read #10).
    # - key:
    #     files:
    #       - core/pkg1/package-lock.json
    #   paths:
    #     - core/client/node_modules/
    #   when: on_success
    #   policy: pull-push
  # before_script:
  #   - ...
  script:
    # define cache dir & use it, npm!
    - npm ci --cache .npm --prefer-offline
    # # mono-repo users: run secondary install actions
    # - npx lerna bootstrap -- --cache .npm/ --prefer-offline

build:
  stage: build
  # global cache settings are inherited to grab `node_modules`
  script:
    - npm run build
  artifacts:
    paths:
      - dist/ # wherever your build results are stored

test:
  stage: test
  # global cache settings are inherited to grab `node_modules`
  needs:
    # the install job is not "needed" unless it creates artifacts;
    # it also runs in the previous stage `.pre`, so it is implicitly
    # required, since `when: on_success` is the default for
    # subsequent jobs in subsequent stages
    - job: build
      artifacts: true # grabs built files
  # dependencies: could also be used instead of needs
  script:
    - npm test

deploy:
  stage: deploy
  when: on_success # only if previous stages' jobs all succeeded
  # override inherited cache settings since node_modules is not needed
  cache: {}
  needs:
    - job: build
      artifacts: true # grabs dist/
  script:
    - npm publish
GitLab's recommendation for npm can be found in the GitLab Docs.
[DEPRECATED] Original Answer (Oct 27, 2021; GitLab < 13.12)
All the answers I've seen so far are only half-answers that don't fully accomplish the task of caching, IMO.
In order to fully cache with npm & GitLab, you must be aware of the following:
1. See #1 above.
2. npm ci deliberately removes the entirety of node_modules before re-installing all packages listed in package-lock.json. Therefore, configuring GitLab to cache the node_modules directory between build jobs is useless. The point is to ensure no preparation hooks or anything else modified node_modules from a previous run. IMO this is not really valid for a CI environment, but you can't change it and keep fully deterministic builds.
3. See #3-#4 above.
4. If you have multiple stages, the global cache will be downloaded in every job. This is likely not what you want!
5. To run additional npx commands without re-running an install, you should pass the node_modules/ folder as an artifact to other jobs.
[DEPRECATED] Solution
1. Run a single install job in the .pre stage, using a cache of downloaded packages (the tar.gz archives) shared across the entire repository.
2. Pass node_modules & the build directory on to other jobs only when needed.
stages:
  - build
  - test
  - deploy

install:
  image: ...
  stage: .pre # always first, no matter if it is listed in stages
  cache:
    key: NPM_DOWNLOAD_CACHE # a single key for all branches' install jobs
    paths:
      - .npm/
  before_script:
    - cp .env.gitlab-testing .env
    - composer install --no-progress --no-interaction
    - php artisan key:generate
  script:
    # define cache dir & use it, npm!
    - npm ci --cache .npm --prefer-offline
  artifacts:
    paths:
      - vendor/
      - bootstrap/
      - node_modules/
      - .env
      - public/mix-manifest.json

build:
  stage: build
  needs:
    - job: install
      artifacts: true # true by default, grabs `node_modules`
  script:
    - npm run build
  artifacts:
    paths:
      - dist/ # wherever your build results are stored

test:
  stage: test
  needs:
    - job: install
      artifacts: true # grabs node_modules
    - job: build
      artifacts: true # grabs built files
  script:
    - npm test

deploy:
  stage: deploy
  needs:
    # does not need node_modules, so don't list install as a need
    - job: build
      artifacts: true # grabs dist/
    - job: test # must succeed
      artifacts: false # not needed
  script:
    - npm publish
Actually, it should work: your cache is set globally, and your key refers to the current branch (${CI_COMMIT_REF_SLUG}).
This is my build and it seems to cache the node_modules between the stages.
image: node:latest

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - node_modules/
    - .next/

stages:
  - install
  - test
  - build
  - deploy

install_dependencies:
  stage: install
  script:
    - npm install

test:
  stage: test
  script:
    - npm run test

build:
  stage: build
  script:
    - npm run build
I had the same issue. For me, the problem came down to the cache settings: by default the cache does not keep untracked git files, and since we do not store node_modules in git, the npm files were not cached at all.
So all I had to do was add one line, untracked: true, like below:
cache:
  untracked: true
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - vendor/
    - bootstrap/
    - node_modules/
Now npm is faster, although it still needs to check whether things have changed; for me this still takes a couple of minutes, so I am considering a dedicated job to do the npm install (see the sketch below), but it has sped things up a lot already.
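A minimal sketch of such a dedicated install job (assuming an install stage exists and later jobs reuse the same cache key with policy: pull):

install_dependencies:
  stage: install
  cache:
    untracked: true
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull-push # only this job uploads the cache
  script:
    - npm install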
The default cache path is ~/.npm
To set the npm cache directory:
npm config set cache <path> --global
See here for more information.
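For example, a minimal GitLab CI sketch combining this with the caching approach used elsewhere on this page (the project-local path is my assumption; the per-command flag, as in npm ci --cache .npm, avoids the global config write on runners where that is not permitted):

cache:
  paths:
    - .npm/ # must live inside the project directory for GitLab to cache it

before_script:
  # point npm's download cache at the project-local directory
  - npm config set cache "$CI_PROJECT_DIR/.npm" --global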
Related
I have this .gitlab-ci.yml file:
image: node:latest

stages:
  - build
  - test
  - publish

cache:
  key:
    files:
      - package.json
      - package-lock.json
  paths:
    - node_modules

build:
  stage: build
  script:
    - echo -e "//my.private.repo.com/:_authToken=${NPM_TOKEN}\n$(cat .npmrc)">.npmrc
    - npm install
    - npm run build
  artifacts:
    paths:
      - node_modules
      - .npmrc

test:
  stage: test
  script:
    - npm test

publish:
  stage: publish
  script:
    - npm publish
  only:
    - tags
With this configuration, when I push a simple commit, everything is OK: build + test run as expected.
But when I push a tag (created here with npm version), two pipelines are created: one for the commit and one for the tag. So build and test are executed twice.
What can I do to prevent this behavior and have the tag pipeline "cancel" the commit pipeline?
You could have different jobs for a simple commit push versus a tag push, using the only and except keywords to differentiate the two cases (see the sketch below); otherwise, this is the behavior GitLab considers correct. You can see the discussion around a closed issue here.
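A minimal sketch of that split (the job names and the duplicated build step are illustrative; the same only/except split would apply to the test job):

# branch pipelines: run the build for plain commits only
build_commit:
  stage: build
  except:
    - tags
  script:
    - npm run build

# tag pipelines: run the build where publish will follow
build_tag:
  stage: build
  only:
    - tags
  script:
    - npm run build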
I have a simple .gitlab-ci.yml file that builds my Vue application. I build once and then deploy the dist folder to my various environments:
stages:
  - build
  - deploy_dev
  - deploy_stg
  - deploy_prd

build:
  image: node:latest # Pull Node image
  stage: build
  script:
    - npm install -g @vue/cli@latest
    - npm install
    - npm run build
  artifacts:
    expire_in: 2 weeks
    paths:
      - dist/

deploy_to_dev:
  image: python:latest
  stage: deploy_dev
  dependencies:
    - build
  only:
    - master # Only deploy master branch automatically to Dev
  script:
    - export AWS_ACCESS_KEY_ID=$DEV_AWS_ACCESS_ID
    - export AWS_SECRET_ACCESS_KEY=$DEV_AWS_ACCESS_KEY
    - pip install awscli # Install AWS CLI
    - aws s3 sync ./dist s3://$DEV_BUCKET
This all works great. However, I've now introduced some config and build my app differently per environment: for 3 environments I have 3 different build commands. E.g., I have an .env.production, so for a production build my command becomes:
npm run build -- --mode production
Is there any way to get around having different builds for each environment while still using the .env files based on a GitLab variable?
You should split your build job into one per environment and use the environment concept, with something like this for dev and production envs:
.build_template: &build_template
  image: node:latest # Pull Node image
  script:
    - npm install -g @vue/cli@latest
    - npm install
    - npm run build -- --mode $CI_ENVIRONMENT_NAME

build_dev:
  stage: build_dev
  <<: *build_template
  environment:
    name: dev

build_prod:
  stage: build_prod
  <<: *build_template
  environment:
    name: production
In this snippet, I used anchors to avoid duplicating lines.
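Note that this assumes the custom stages are declared as well, e.g.:

stages:
  - build_dev
  - build_prod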
I'm using Azure Pipelines with hosted builds to build a web project. Our build times were hitting 10-15 minutes, with most of that time (5-10 minutes) spent doing npm install. To speed this up, I'm trying to use the Cache task (https://learn.microsoft.com/en-us/azure/devops/pipelines/caching/?view=azure-devops).
However, when the auto-added task Post-job: Cache runs, it always errors out with:
##[error]The system cannot find the file specified
The host server is Windows Server 2017.
Here is my entire build YAML
# Node.js with Vue
# Build a Node.js project that uses Vue.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript

trigger:
  - develop

pool:
  name: Default

variables:
  FONTAWESOME_NPM_AUTH_TOKEN: $(FONTAWESOME_NPM_AUTH_TOKEN_VARIABLE)
  npm_config_cache: $(Pipeline.Workspace)/.npm

steps:
  - task: DutchWorkzToolsAllVariables@1

  - task: NodeTool@0
    inputs:
      versionSpec: '10.x'
    displayName: 'Install Node.js'

  - task: Cache@2
    inputs:
      key: 'npm | "$(Agent.OS)" | package-lock.json'
      path: $(npm_config_cache)
      cacheHitVar: NPM_CACHE_RESTORED

  - task: Npm@1
    displayName: 'npm install'
    inputs:
      command: 'install'
    condition: ne(variables.NPM_CACHE_RESTORED, 'true')

  - task: Npm@1
    displayName: 'npm run build'
    inputs:
      command: 'custom'
      customCommand: 'run build'

  - task: CopyFiles@2
    inputs:
      SourceFolder: '$(Build.Repository.LocalPath)\dist'
      Contents: '**'
      TargetFolder: '$(Build.StagingDirectory)'
      CleanTargetFolder: true

  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'
      publishLocation: 'Container'
Cache task output:
Starting: Cache
==============================================================================
Task : Cache
Description : Cache files between runs
Version : 2.0.0
Author : Microsoft Corporation
Help : https://aka.ms/pipeline-caching-docs
==============================================================================
Resolving key:
- npm [string]
- "Windows_NT" [string]
- package-lock.json [file] --> F93EFA0B87737CC825F422E1116A9E72DFB5A26F609ADA41CC7F80A039B17299
Resolved to: npm|"Windows_NT"|rbCoKv9PzjbAOWAsH9Pgr3Il2ZhErdZTzV08Qdl3Mz8=
Information, ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session zzzzz
Information, Getting a pipeline cache artifact with one of the following fingerprints:
Information, Fingerprint: `npm|"Windows_NT"|rbCoKv9PzjbAOWAsH9Pgr3Il2ZhErdZTzV08Qdl3Mz8=`
Information, There is a cache miss.
Information, ApplicationInsightsTelemetrySender correlated 1 events with X-TFS-Session zzzzz
Finishing: Cache
Post-job: Cache output:
Starting: Cache
==============================================================================
Task : Cache
Description : Cache files between runs
Version : 2.0.0
Author : Microsoft Corporation
Help : https://aka.ms/pipeline-caching-docs
==============================================================================
Resolving key:
- npm [string]
- "Windows_NT" [string]
- package-lock.json [file] --> 2F208E865E6510DE6EEAA6DB0CB7F87B323386881F42EB63E18ED1C0D88CA84E
Resolved to: npm|"Windows_NT"|OQo0ApWAY09wL/ZLr6fxlRIZ5qcoTrNLUv1k6i6GO9Q=
Information, ApplicationInsightsTelemetrySender will correlate events with X-TFS-Session zzzzz
Information, Getting a pipeline cache artifact with one of the following fingerprints:
Information, Fingerprint: `npm|"Windows_NT"|OQo0ApWAY09wL/ZLr6fxlRIZ5qcoTrNLUv1k6i6GO9Q=`
Information, There is a cache miss.
Information, ApplicationInsightsTelemetrySender correlated 1 events with X-TFS-Session zzzzz
##[error]The system cannot find the file specified
Finishing: Cache
How can I fix my build definition so the caching works?
@Levi Lu-MSFT was right in his comment, but there's a gotcha.
@FLabranche has a working solution in his answer, but I believe the reasoning is not quite right.
The problem
npm install and the Cache task look for the npm cache in different locations. Consider the flow when the pipeline runs for the first time:
1. Cache task: does nothing, since there's no cache yet.
2. npm i (or npm ci) task: installs packages into node_modules/ and updates the npm cache at the default location. The default location is ~/.npm on Linux/Mac and %AppData%/npm-cache on Windows; on a Linux hosted cloud agent the absolute path is /home/vsts/.npm.
3. (... more tasks from your pipeline)
4. Post-job Cache task (added implicitly): reads the npm cache from the user-provided location to store it for future reuse. The user-provided location is set by the npm_config_cache: $(Pipeline.Workspace)/.npm variable; on a Linux hosted cloud agent the absolute path is /home/vsts/work/1/.npm.
As a result, the Cache task fails with tar: /home/vsts/work/1/.npm: Cannot open: No such file or directory.
Solution
Make npm install and the Cache task use the same npm cache location.
One option, suggested by Levi Lu, is to update the npm config with npm config set cache $(npm_config_cache) --global, but it won't work in the pipeline (at least it didn't work for me on an Azure-hosted Linux agent): Error: EACCES: permission denied, open '/usr/local/etc/npmrc'
npm ci --cache $(npm_config_cache) updates the npm cache location for a single call, and it does work in this case. It feels a bit hacky, though, since the --cache option is not even documented on the npm website.
All in all this code worked for me:
variables:
  NPM_CACHE_FOLDER: $(Pipeline.Workspace)/.npm

steps:
  - task: Cache@2
    displayName: Cache npm dependencies
    inputs:
      key: 'npm | "$(Agent.OS)" | package-lock.json'
      restoreKeys: |
        npm | "$(Agent.OS)"
        npm
      path: $(NPM_CACHE_FOLDER)

  - script: npm ci --cache $(NPM_CACHE_FOLDER)
    displayName: 'Install npm dependencies'
...
You can log into your Windows Server 2017 machine and check whether the $(Pipeline.Workspace)/.npm folder was created and the dependencies are stored inside.
I copied and tested your YAML. It worked both on a local agent (win2019) and on cloud agents. You can try running your pipeline on the cloud agents, or on other agents with a newer system, to check whether it is the agent that causes this error.
The keys generated from your package-lock.json differ between the two tasks.
That happens when the file is modified; here, it is modified by your npm install task.
You can use the restoreKeys option when configuring the Cache task to fall back to the latest cache entry.
And I think you don't need the 'npm install' task.
Could you try replacing this:
- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    path: $(npm_config_cache)
    cacheHitVar: NPM_CACHE_RESTORED

- task: Npm@1
  displayName: 'npm install'
  inputs:
    command: 'install'
  condition: ne(variables.NPM_CACHE_RESTORED, 'true')
with this definition:
- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    restoreKeys: |
      npm | "$(Agent.OS)"
      npm
    path: $(npm_config_cache)
  displayName: Cache npm

- script: npm ci --cache $(npm_config_cache)
Yesterday, I was able to get it working with no issue at all on a self-hosted machine agent by using this:
- task: Cache@2
  inputs:
    key: '**/package-lock.json, !**/node_modules/**/package-lock.json, !**/.*/**/package-lock.json'
    path: '$(System.DefaultWorkingDirectory)/node_modules'
  displayName: 'Cache Node Modules'
Today, trying the same on a hosted agent, this doesn't cut it at all. Aggh, back to the grinding board. Anyhow, maybe it could still work for you on your self-hosted pipeline.
This seems to be related to this open issue.
I have resolved the problem by switching the build agent pool to hosted and using the windows-latest image.
pool:
  vmImage: 'windows-latest'
Since there is already a saved cache for node_modules when the pipeline runs, npm install does not actually install anything, so new dependencies are missing. Because of this, the pipeline suddenly breaks partway through because the project cannot find the corresponding package.
pipelines:
  custom:
    deploy-staging:
      - step:
          name: install dependencies
          caches:
            - node
          script:
            - npm install
      - step:
          name: deploy to all STAGING themes
          deployment: staging
          caches:
            - node
            - globalnode
          script:
            - npm run deployall -- -t staging
How could I fix that?
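One possible fix, sketched under the assumption that your Bitbucket Pipelines version supports file-keyed custom caches (definitions.caches with a key based on the lockfile), so a changed package-lock.json produces a fresh cache instead of reusing stale node_modules:

definitions:
  caches:
    node-keyed:
      key:
        files:
          - package-lock.json # new cache whenever the lockfile changes
      path: node_modules

pipelines:
  custom:
    deploy-staging:
      - step:
          name: install dependencies
          caches:
            - node-keyed
          script:
            - npm install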
I'm using Gitlab CI 8.0 with gitlab-ci-multi-runner 0.6.0. I have a .gitlab-ci.yml file similar to the following:
before_script:
  - npm install

server_tests:
  script: mocha

client_tests:
  script: karma start karma.conf.js
This works, but it means the dependencies are installed independently before each test job. For a large project with many dependencies this adds considerable overhead.
In Jenkins I would use one job to install dependencies, then tar them up and create a build artefact which is copied to downstream jobs. Would something similar work with GitLab CI? Is there a recommended approach?
Update: I now recommend using artifacts with a short expire_in. This is superior to cache because an artifact only has to be written once per pipeline, whereas the cache is updated after every job. Also, the cache is per-runner, so if your jobs run in parallel on multiple runners it's not guaranteed to be populated, unlike artifacts, which are stored centrally.
GitLab CI 8.2 adds runner caching, which lets you reuse files between builds. However, I've found this to be very slow.
Instead, I've implemented my own caching system using a bit of shell scripting:
before_script:
  # unique hash of required dependencies
  - PACKAGE_HASH=($(md5sum package.json))
  # path to cache file
  - DEPS_CACHE=/tmp/dependencies_${PACKAGE_HASH}.tar.gz
  # check if cache file exists and if not, create it
  - if [ -f $DEPS_CACHE ];
    then
      tar zxf $DEPS_CACHE;
    else
      npm install --quiet;
      tar zcf - ./node_modules > $DEPS_CACHE;
    fi
This will run before every job in your .gitlab-ci.yml and only install your dependencies if package.json has changed or the cache file is missing (e.g. first run, or file was manually deleted). Note that if you have several runners on different servers, they will each have their own cache file.
You may want to clear out the cache file on a regular basis in order to get the latest dependencies. We do this with the following cron entry:
@daily find /tmp/dependencies_* -mtime +1 -type f -delete
EDIT: This solution was recommended in 2016. In 2021, you might consider the caching docs instead.
A better approach these days is to make use of artifacts.
In the following example, the node_modules/ directory is immediately available to the lint job once the build stage has completed successfully.
build:
  stage: build
  script:
    - npm install -q
    - npm run build
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 week

lint:
  stage: test
  script:
    - npm run lint
From the docs:
cache: Use for temporary storage for project dependencies. Not useful for keeping intermediate build results, like jar or apk files. Cache was designed to be used to speed up invocations of subsequent runs of a given job, by keeping things like dependencies (e.g., npm packages, Go vendor packages, etc.) so they don’t have to be re-fetched from the public internet. While the cache can be abused to pass intermediate build results between stages, there may be cases where artifacts are a better fit.
artifacts: Use for stage results that will be passed between stages. Artifacts were designed to upload some compiled/generated bits of the build, and they can be fetched by any number of concurrent Runners. They are guaranteed to be available and are there to pass data between jobs. They are also exposed to be downloaded from the UI. Artifacts can only exist in directories relative to the build directory and specifying paths which don’t comply to this rule trigger an unintuitive and illogical error message (an enhancement is discussed at https://gitlab.com/gitlab-org/gitlab-ce/issues/15530 ). Artifacts need to be uploaded to the GitLab instance (not only the GitLab runner) before the next stage job(s) can start, so you need to evaluate carefully whether your bandwidth allows you to profit from parallelization with stages and shared artifacts before investing time in changes to the setup.
So, I use cache. When I don't need to update the cache (e.g. the build folder in a test job), I use policy: pull (see here).
I prefer cache because it removes the files when the pipeline finishes.
Example
image: node

stages:
  - install
  - test
  - compile

cache:
  key: modules
  paths:
    - node_modules/

install:modules:
  stage: install
  cache:
    key: modules
    paths:
      - node_modules/
  after_script:
    - node -v && npm -v
  script:
    - npm i

test:
  stage: test
  cache:
    key: modules
    paths:
      - node_modules/
    policy: pull
  before_script:
    - node -v && npm -v
  script:
    - npm run test

compile:
  stage: compile
  cache:
    key: modules
    paths:
      - node_modules/
    policy: pull
  script:
    - npm run build
I think it's not recommended, because all jobs of the same stage can be executed in parallel:
- First, all jobs of build are executed in parallel.
- If all build jobs succeed, the test jobs are executed in parallel.
- If all test jobs succeed, the deploy jobs are executed in parallel.
- If all deploy jobs succeed, the commit is marked as success.
- If any of the previous jobs fails, the commit is marked as failed and no jobs of a further stage are executed.
I have read that here:
http://doc.gitlab.com/ci/yaml/README.html
I solved this with a symbolic link to a folder outside the working directory. The solution looks like this:
# .gitlab-ci.yml
before_script:
  - New-Item -ItemType SymbolicLink -Path ".\node_modules" -Target "C:\GitLab-Runner\cache\node_modules"
  - yarn

after_script:
  - (Get-Item ".\node_modules").Delete()
I know this is a rather dirty solution, but it saves a lot of time in the build process and extends the storage's life.
GitLab introduced caching to avoid redownloading dependencies for each job.
The following Node.js example is inspired by the caching documentation.
image: node:latest

# Cache modules in between jobs
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - .npm/

before_script:
  - npm ci --cache .npm --prefer-offline

server_tests:
  script: mocha

client_tests:
  script: karma start karma.conf.js
Note that the example uses npm ci. This command is like npm install, but designed for automated environments. You can read more about npm ci in the documentation, including the command-line arguments you can pass.
For further information, check Caching in GitLab CI/CD and the cache keyword reference.