ccache is a fantastic way to speed up building C binaries which you have already built previously, by caching the results. (Another great tool is distcc, which will pass code to other machines for parallel compilation!)
Can I get npm to use ccache when it builds C files using gyp?
Here is a way to test:
$ time npm install mmmagic
...
npm install mmmagic 103.83s user 9.06s system 100% cpu 1:51.84 total
$ rm -rf node_modules/mmmagic
$ time npm install mmmagic
...
npm install mmmagic 103.48s user 8.59s system 102% cpu 1:48.87 total
If we can use ccache, it should be significantly faster on the second attempt.
Another way to see if ccache is being called, and if it is helping, is to run this in a separate terminal while a build is underway:
$ watch -d ccache -s
This will display a live update of ccache's statistics.
You should be able to do this by setting your environment variables correctly.
For a C compiler: export CC="ccache gcc" (or export CC="ccache clang") should work fine.
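Putting it together, a sketch of the full test (this assumes ccache is installed and on your PATH; node-gyp generally honours the CC and CXX environment variables, and mmmagic is mostly C++, so CXX matters here too):
$ export CC="ccache gcc"
$ export CXX="ccache g++"
$ rm -rf node_modules/mmmagic
$ time npm install mmmagic    # first build populates the cache
$ rm -rf node_modules/mmmagic
$ time npm install mmmagic    # second build should hit the cache and be much faster
$ ccache -s                   # check the hit/miss counters afterwards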
Related
I have a recipe that installs some NPM packages; it worked on an older version of Yocto.
After upgrading to sumo, the recipe fails with the following error:
installnpmpackages/0.0.1-r0/temp/run.do_compile.7272: npm: not found
| WARNING: exit code 127 from a shell command.
I tried using the developer shell and NPM does work in that case.
The do_compile from the recipe:
do_compile() {
    # Create a working directory
    mkdir -p ${WORKDIR}/scratch
    # changing the home directory to the working directory, the .npmrc will be created in this directory
    export HOME=${WORKDIR}/scratch
    # configure cache to be in working directory
    npm set cache ${WORKDIR}/scratch/npm_cache
    # clear local cache prior to each compile
    npm cache clear
    # compile and install node modules in source directory
    cd ${WORKDIR}/scratch
    npm --arch=${TARGET_ARCH} --verbose install node-gyp
    npm --arch=${TARGET_ARCH} --verbose install connect
    npm --arch=${TARGET_ARCH} --verbose install socket.io
    #npm --arch=${TARGET_ARCH} --verbose install sqlite3
    #npm --arch=${TARGET_ARCH} --verbose install serialport
    npm --arch=${TARGET_ARCH} --verbose install express
    npm --arch=${TARGET_ARCH} --verbose install csv
    npm --arch=${TARGET_ARCH} --verbose install md5
    # clear local cache before we package. No need to copy over all this cache stuff; just need the modules.
    npm cache clear
}
Note sqlite3 and serialport are commented out as they did not work on the previous version.
What needs to be changed with sumo (vs morty) for NPM to function in a recipe?
Thank you in advance!
I found a simple solution.
I created individual recipes using devtool add.
Here is the command used to create a recipe for the serialport npm module:
devtool add "npm://registry.npmjs.org;name=serialport;version=7.1.4"
I'm answering @Hsn's comment here, as my account is new and I don't have 50 reputation.
If you were able to add a recipe with devtool and it worked, you can also use devtool to finish working on the recipe and tell it which meta layer to put the recipe in, like:
devtool finish recipe_name meta-destination
And in order to get it into your final OS rootfs, you need to add it to your image .bb file, for example image-dev.bb:
IMAGE_INSTALL_append += "recipe_name"
Also make sure that the meta layer which holds your recipe is listed in your bblayers.conf.
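For example, with the serialport recipe from the answer above (the layer name and path here are hypothetical):
devtool finish serialport meta-destination
and in conf/bblayers.conf:
BBLAYERS += "/absolute/path/to/meta-destination"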
I am installing some packages with npm, and sometimes I have to write -s and -g. What do they mean?
npm install -s socket.io
npm install -g xxxxxxx
npm install -g <package> will install a package globally on your machine. Without the -g or --global flag, the package is installed locally, in the directory you were in when you ran the command.
npm install -S <package>, with an uppercase -S or --save, will install the package and save it to the dependencies in your package.json, although I believe that is now the default behavior in current npm. I recommend reading the docs if you're unfamiliar with what happens when you pass different options to npm.
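For instance (the package names here are just for illustration):
npm install -g typescript   # global install: the tsc binary ends up on your PATH
npm install -S lodash       # local install: goes into ./node_modules and is saved under "dependencies" in package.json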
@gmmetheny answered the question about the global -g flag, but -s appears to silence the output of the command (at least in npm v7.0.15).
In other words, including -s (or --silent) in your npm install command means that it will have no output (only a newline):
> npm install -s example-package1 example-package2
This may be useful for running the command in a script.
Running the command without the -s flag echoes information about what was installed, e.g.:
> npm install example-package1 example-package2
npm WARN deprecated some-pkg@1.2.3: this library is no longer supported
added 160 packages, and audited 160 packages in 6s
14 packages are looking for funding
run `npm fund` for details
found 0 vulnerabilities
You can diff the directories produced by each variant of the command to verify that the effects are the same.
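A quick sketch of that check, reusing the placeholder package name from above (start from a clean directory each time):
> npm install -s example-package1 && mv node_modules nm-silent && rm -f package-lock.json
> npm install example-package1 && diff -rq nm-silent node_modules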
As @Max mentioned, this option is NOT mentioned in the npm docs (at least not anywhere prominent that a user might find with a reasonable amount of searching).
The npm pack command only allows you to write out a file to the current working directory.
I am looking for a way to do this:
npm pack > $myfile
So I am considering writing my own version of npm pack - but I don't quite know how it's implemented.
Does anyone know whether npm pack, as it's currently implemented, bundles node_modules? Is there some quick and dirty way to replicate its functionality but write the output to stdout?
I assume npm pack is based on the tar command, and tar lets you write to stdout...
You can use yarn instead of rewriting npm:
yarn pack -f some/other/path.tgz
or if you really need to redirect:
yarn pack -f /dev/stdout > some/other/path.tgz
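If you need to stay on npm, one possible workaround (just a sketch; $myfile is the target path from the question, and a POSIX shell is assumed) is to pack into a temporary directory and then stream the tarball wherever you want:
tmpdir=$(mktemp -d)
(cd "$tmpdir" && npm pack "$OLDPWD")   # npm pack <folder> writes <name>-<version>.tgz into the current directory
cat "$tmpdir"/*.tgz > "$myfile"
rm -rf "$tmpdir"
Recent npm versions also have an npm pack --pack-destination <dir> flag that covers the first half of this in one step, if your npm is new enough.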
I am using Node 6.10.1 and npm 3.10.10 on a Dell XPS 15 running Ubuntu 16.04 with Kernel 4.13.0.0-36-generic.
I am behind a corporate proxy which is configured through cntlm.
When I run npm install -d on a project, it works for a short time, and after a while I get Error: socket hang up.
I have found numerous questions about my problem but no solution seemed to work.
Here is an extract of my npm config list:
; cli configs
user-agent = "npm/3.10.10 node/v6.10.1 linux x64"
; userconfig /home/msb/.npmrc
https-proxy = "http://localhost:3128/"
registry = "http://urlTocorporateRegistryWhichWorksOnOtherComputers"
strict-ssl = false
; node bin location = /home/msb/.nvm/versions/node/v6.10.1/bin/node
; cwd = /home/msb
; HOME = /home/msb
; "npm config ls -l" to show all defaults.
I cannot change the registry since we are using some internal modules, and I have to keep the current versions of node/npm.
I have already tried :
Using the proxy directly in npm config rather than through cntlm
Limiting my upload/download capabilities with trickle through the command trickle -s -d 100 -u 100 npm install -d
Another indication: it works on Windows, and I have a colleague running Ubuntu 17.04 on a slower PC for whom it works. We think my machine might be a bit too aggressive when requesting the registry. Does anyone know a way to slow down npm's requests?
It used to work through yarn but some new developments have forced me to go back to npm.
Has anyone encountered and corrected this problem ?
Thanks for your help.
I experienced the same problem, with no apparent cause, on Ubuntu 18.04.
I finally used docker with bind mounts to solve it. The steps are the following:
Create a Dockerfile with the following elements (you can also run the base image directly if you don't need to configure a proxy like I do):
FROM node:6.10.1
ENV HTTPS_PROXY "http://yourproxy:yourport/"
# Different RUN commands to configure npm and git corporate proxy
WORKDIR /home/root/
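The proxy RUN lines are intentionally omitted above; a purely hypothetical version of them (the proxy host and port are placeholders) could look like:
RUN npm config set proxy "http://yourproxy:yourport/" && \
    npm config set https-proxy "http://yourproxy:yourport/"
RUN git config --global http.proxy "http://yourproxy:yourport/"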
Build the image (from the dockerfile's folder): docker image build -f npm-installer/Dockerfile -t custom-npm-installer .
Go inside the project folder where you would normally run npm install
Run the following command to run the container interactively: docker container run -it --network host -v </host/path/to/pj>:/home/root/pj-to-install --name custom-npm-installer custom-npm-installer bash
You can now run the npm install command from the container. Be careful, however: you'll then need to chown (or chmod) the node_modules folder recursively, since the container runs as root by default.
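For example, from the project root on the host (this just restores your own ownership of what the root-owned container wrote):
sudo chown -R "$(id -u)":"$(id -g)" node_modules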
Another thing: if you're using node-sass, it is usually compiled on the fly during npm install and matches your current OS version/architecture. So if your Linux distribution is not exactly the same as the container's, you might need to recompile node-sass on your host after running npm install in the container. No worries though, node-sass will give you the command to run the moment you launch your application.
Issue:
I just updated my WSL installation after installing the Fall Creators Update, and now when I run npm i I get the warnings below. I get probably 2-20 of these warnings from random packages each time I install; it's never consistent. Sometimes it even works with no warnings. I thought this might be okay, but when I run my project with npm run dev I get all sorts of errors, so it seems to me that the packages aren't installing correctly. On the occasions when there are no warnings, the application runs as expected. I tested with some random projects from GitHub and saw the same issue.
Versions:
NPM Version: 5.5.1
NodeJS Version: 8.9.0
Other Factors: ZSH
ERROR:
npm WARN tar EINVAL: invalid argument, open '/mnt/c/Users/Me/Documents/project/node_modules/.staging/parse-json-07a114c7/index.js'
npm WARN tar EINVAL: invalid argument, open '/mnt/c/Users/Me/Documents/Project/node_modules/.staging/esrecurse-fe2bc2eb/package.json'
Notes:
Tried a fresh install of WSL, same issue.
Can install globally without issue; it only seems to fail under the /mnt/** path.
Can confirm it works in the Linux folders: installation succeeds in the home directory, but breaks on /mnt/**.
EDIT: After much troubleshooting, I decided to run without ZSH and to switch back to using bash.exe instead of the suggested wsl.exe. The first install worked. Testing further.
The issue was in fact with the Fall Creators Update. Many optimizations were made, and it seems that something to do with symlinking on mounted drives had issues. See the full technical conversation here.
There are two solutions. The first, and recommended, one: the WSL team has already fixed this, and the fix is in Insiders Build 17035. Getting it requires going to Settings -> Insiders -> selecting "Get Active Builds" and then "Fast Ring". Only do this if you are experienced in dealing with occasional breakage, as it is essentially beta software.
Fix number two, recommended if you can't update or don't feel comfortable with Insider Builds, is to add this to your .bashrc file:
# Remount the C: drive with the workaround mount options if it isn't already mounted that way
if ! mount | grep -q "C: on /mnt/c type drvfs (rw,noatime,fallback=1)"; then
    echo "== Remount of C: drive required =="
    pushd ~ > /dev/null
    sudo umount /mnt/c
    sudo mount -t drvfs -o noatime,fallback=1 C: /mnt/c
    popd > /dev/null
fi
The .bashrc solution does remove many of the performance gains, however, so only use it if really necessary.