How to undo what was executed by an npx command's package?

I installed Bit on my Ubuntu 22.04 machine using npx @teambit/bvm install, which created an executable in my $HOME/bin folder and an entry in my .zshrc.
Now, I would like to know if anything else was installed, and how I can completely remove Bit from my machine.
Ideally, I would like to know which code was run when doing npx @teambit/bvm install.
I use Volta (https://volta.sh/) to install Node.js.

Answering this question requires some context.
First and foremost, @teambit/bvm produces side effects in the ~/.bvm/ directory (see the code at https://github.com/teambit/bvm). To delete Bit and BVM completely, you need to remove that directory manually.
In general, npx has no way to revert side effects produced by the packages/commands you run through it. npx imposes no boundaries on tools, so there's no way to get it to undo what any tool did.
At the end of the day, you need to delete each tool per its own instructions.
The only thing npx itself creates is ~/bin/bvm (in the case of @teambit/bvm; other tools use different names). This is a shortcut that the package manager puts in place, unrelated to bvm or Bit itself. npx may also place things in the global node_modules or do other npm-related things.
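Based on the side effects described above, a cleanup could look like the sketch below. The paths assume the default @teambit/bvm layout on Linux; double-check them on your machine before deleting anything, and remove the line bvm added to ~/.zshrc by hand.

```shell
# Remove BVM's state directory, where bvm keeps downloaded Bit versions
rm -rf "$HOME/.bvm"

# Remove the bvm shim that was placed in ~/bin
rm -f "$HOME/bin/bvm"

# List any line(s) mentioning bvm in your shell config so you can delete them manually
{ [ -f "$HOME/.zshrc" ] && grep -n "bvm" "$HOME/.zshrc"; } || true
```

After this, opening a new shell and running which bvm should report nothing.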

Related

Blackduck Synopsys Yarn Detector cannot find project version name

I'm using Blackduck version 5.6.2 on a Create-React-App application with dependencies installed using yarn v1.22.11.
Blackduck executes as a job in a GitLab CI pipeline. Previously, I used npm to install the packages in the blackduck step before running the scan. Blackduck scanner was able to pick up the project name and version number without any explicit configuration.
I've recently changed from npm to Yarn to take advantage of the selective dependency resolution feature, in an attempt to overcome nested dependencies reported by the Blackduck scanner. Once I converted the Blackduck step to use Yarn, Blackduck picked up the yarn.lock and package.json files in my repository and used the Yarn detector; this was seen from the outputs (and is the expected behaviour according to Synopsys' documentation). However, it is no longer able to pick up the project version name correctly.
Here are the messages that indicate this:
2021-09-27 19:13:47 INFO [main] --- Determining project info.
2021-09-27 19:13:47 INFO [main] --- A project name could not be decided. Using the name of the source path.
2021-09-27 19:13:47 INFO [main] --- A project version name could not be decided. Using the default version text.
Once the test completes and I visit the blackduck web console for test results, I now see "Default Detect Version" as the version name instead of the version number that I had before.
I've tried passing -detect.project.version.notes=./package.json argument into the step that starts the blackduck scanner shell script, but it's still falling back onto "Default Detect Version" as the version name.
The docs for the Yarn detector stop after specifying arguments to include and exclude workspaces, the path to the Yarn binary, and whether production dependencies should be included.
Does anyone have any idea why the Yarn detector is unable to pick up the version number from package.json and how I can address this issue without passing a hardcoded value as the argument to the scanner shell script?
I was able to solve both the project name/version issue and the Blackduck scanner reporting vulnerabilities within nested dependencies by performing the following:
1) Switch from Yarn to npm for package installation. This allows the NPM detector to use package.json and package-lock.json to automatically determine the project name and version.
2) Add the following flags to the Blackduck scanner when starting the shell script:
--detect.npm.include.dev.dependencies=false
--detect.npm.include.peer.dependencies=false
Example:
bash <(curl -s https://detect.synopsys.com/detect.sh) --blackduck.url="${BLACKDUCK_HUB_URL}" --blackduck.username="${BLACKDUCK_HUB_USER}" --blackduck.password="${BLACKDUCK_HUB_PASS}" --blackduck.trust.cert=true --detect.policy.check=true --detect.hub.signature.scanner.paths=./ --detect.npm.include.dev.dependencies=false --detect.npm.include.peer.dependencies=false
This instructs the Blackduck scanner not to look at dev or peer dependencies.
Additional notes
It is not necessary to use npm-force-resolutions or Yarn's selective dependency resolution. For future reference, npm's overrides feature is not mentioned here because it had not been implemented at the time this answer was written.
If you are using create-react-app, make sure react-scripts is moved from dependencies to devDependencies (as per Dan Abramov's recommendation to address this issue). You can do this by running npm uninstall react-scripts, then npm install --save-dev react-scripts or npm install --save-dev react-scripts@<specific version>.
First off, let me point you to the Synopsys Community portal, which is the best and quickest place to ask these questions and get answers: https://community.synopsys.com/s/. Full disclosure: I work at Synopsys and am involved with the Black Duck product.
Can I also ask whether the version "5.6.2" you listed refers to the Synopsys Detect scanning tool or to Black Duck itself? I ask because the latest versions of Black Duck (2021.8.2) and Detect (7.5.0) do support extracting the project version name from yarn.lock and/or package.json, and you should not need to provide any additional parameters for Detect to pick this up automatically.
One other thing to note: generally you should run the yarn command on your project prior to running Detect. This produces more accurate analysis results within Black Duck and resolves the dependencies using your normal package manager tools, which means Detect will extract the project name and version information from the yarn.lock file, not the package.json file. For more details, see https://synopsys.atlassian.net/wiki/spaces/INTDOCS/pages/631275892/Detectors

about mocha in windows

Mocha is installed globally on Windows, but cmd shows "'mocha' is not recognized as an internal or external command, operable program or batch file".
Mocha (a test framework for Node.js) uses make, and on Windows machines such errors occur a lot. I guess that at the time of execution the path isn't being recognized. You can follow either of the two approaches below:
1) Install mocha globally (if not done already) so that it works in the regular Windows command line:
npm install -g mocha
Then run your tests with mocha path\to\test.js
OR
2) The other way to deal with this is to use Cygwin and ensure that the developer packages for Cygwin are installed.
Read this article, it will help you: https://altamodatech.com/blogs/?p=452
On installation, the location of mocha.cmd is not added to PATH. If you install globally, as @hemanshu suggests, that location is %APPDATA%\npm. So you either add that to your PATH, or (as I do) define an alias; my cmd.exe shortcut loads a script that sets the path to things actually useful on the command line, sets environment variables, etc., and in there I have this:
doskey mocha=%APPDATA%\npm\mocha.cmd

Why doesn't Yarn install all executables in .bin folder?

I've just started using the Yarn package manager and I downloaded a starter Ionic 2 project.
In this project, we have a lot of help from scripts that compile, minify, lint and bundle our code. All of this is provided by ionic-app-scripts, which has several dependencies which it uses to run commands.
The problem is when I use Yarn to install, the node_modules/.bin/ folder does not contain all the necessary executables, like tslint, which is a dependency of ionic-app-scripts, so it is not directly in my package.json.
The result is that when I use Yarn, ionic-app-scripts doesn't work because it expects that the .bin folder contains a tslint executable!
What can I do? Are the ionic-app-scripts's definitions a problem?
[note]: npm install works, but Yarn is much faster!
This is a known issue, and there's a pull request with more information.
In short, if you want to fix this now, you'll have to explicitly include the packages from which you need binaries in your dependencies.
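For example, assuming the missing binary is tslint as in the question, you could merge something like the following into your project's package.json (the version range here is illustrative, not taken from the question) so that Yarn links node_modules/.bin/tslint itself:

```json
{
  "devDependencies": {
    "tslint": "^4.0.0"
  }
}
```

After adding the entry, re-run yarn and the executable should appear in node_modules/.bin/.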
I had this issue but found a different solution.
The solution was from this ticket https://github.com/yarnpkg/yarn/issues/992#issuecomment-318996260
... my workaround is to go to the file manager, right-click on the main /node_modules folder, select Properties, and check/uncheck "Read-only". You can also do this with attrib on the command line. Then retry the installation and it works.

When to use shrinkwrap, npm-lockdown, or npm-seal

I'm coming from a background much more familiar with composer. I'm getting gulp (etc) going for the build processes and learning node and how to use npm as I go.
It's very odd (again, coming from a composer background) that a composer.lock-like manifest is not included by default. Having said that, I've been reading documentation on shrinkwrap, npm-lockdown, and npm-seal... and the more documentation I read, the more confused I become as to which I should choose (everyone thinks their way is the best way). One of the issues I notice is that npm-seal hasn't changed in 4 years and npm-lockdown in 8 months -- this all leads me to wonder whether that is because they're not needed with the newest version of npm...
What are the benefits / drawbacks of each?
In what cases would I use one over another in Project A, but use a different one in Project B?
How will each impact our development workflow?
PS: Brownie points if you include the most basic implementation example for each. ;)
npm shrinkwrap is the standard way to lock your dependencies. And yes, npm install does not create the lock file by default, which is a pity and something the npm creators should definitely change.
npm-lockdown tries to do the same thing as npm shrinkwrap; there are two minor points on which npm-lockdown is better: it handles optional dependencies better, and it validates checksums of the packages:
https://www.npmjs.com/package/lockdown#related-tools
Both of these features seem not so relevant to me; I'm quite happy with npm shrinkwrap. For example, npmjs guarantees that once you upload a certain package at a certain version, it stays immutable -- so checking SHA checksums is not so critical (I've never encountered an error caused by this).
seal is meant to be used together with npm shrinkwrap; it adds the checksum-checking aspect. It looks abandoned and quite raw.
Good question - I'm going to skip everything but shrinkwrap because it is the de-facto way to do this, per NPM's docs.
Long story short, the npm-shrinkwrap.json file is akin to the lock files you are used to in every other package manager, though NPM allows different versions of the same package to play nicely together by isolation: it literally scopes and copies different entire versions into node_modules at different levels of the tree. If two projects that are parent-child to each other use the exact same version, NPM will copy the version only to the parent, and the child will traverse up the tree to find the package.
Best practice is simply to update package.json for your direct dependencies, run npm install, verify that things work while developing, then run npm shrinkwrap when you are just about to commit and push. NOTE: make sure to rm npm-shrinkwrap.json before running npm install during active development -- if your direct dependencies have changed, you want package.json to be used, not the lock! Also include node_modules in your .gitignore or the equivalent in your source control system. Then, when you are deploying and ready to run the project, run npm install like normal. If npm finds an npm-shrinkwrap.json file, it will use that to recursively pull all locked modules, and it will ignore package.json in both your project and all dependent projects.
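That workflow can be sketched as the following command sequence (a sketch of the practice described above, not an official npm recipe; the commit message is illustrative):

```shell
# --- during active development ---
rm -f npm-shrinkwrap.json   # let package.json drive the install
# (edit package.json to change direct dependencies)
npm install                 # resolve and install from package.json
npm test                    # verify things still work

# --- just before commit/push ---
npm shrinkwrap              # freeze the resolved tree into npm-shrinkwrap.json
echo "node_modules" >> .gitignore
git add package.json npm-shrinkwrap.json .gitignore
git commit -m "Lock dependencies"

# --- on deploy ---
npm install                 # uses npm-shrinkwrap.json when present
```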
You might find shrinkpack useful – it checks in the tarballs which npm install downloads and bundles them into your repository, before finally rewriting npm-shrinkwrap.json to point at that local bundle instead.
This way, your project is totally locked down, completely available offline, and much quicker to install.

Why do I get 'divide by zero' errors when I try to run my script with Rakudo?

I just built Rakudo and Parrot so that I could play with it and get started on learning Perl 6. I downloaded the Perl 6 book and happily typed in the first demo program (the tennis tournament example).
When I try to run the program, I get an error:
Divide by zero
current instr.: '' pc -1 ((unknown file):-1)
I have my perl6 binary in the build directory. I added a scripts directory under the rakudo build directory:
rakudo
|- perl6
\- scripts
|- perlbook_02.01
\- scores
If I try to run even a simple hello world script from my scripts directory I get the same error:
#!/home/daotoad/rakudo/perl6
use v6;
say "Hello nurse!";
However if I run it from the rakudo directory it works.
It sounds like there are some environment variables I need to set, but I am at a loss as to what they are and what values to give them.
Any thoughts?
Update:
I'd rather not install Rakudo at this point; I'd rather just run things from the build directory. This will allow me to keep my changes to my system minimal as I try out different Perl 6 builds (Rakudo * is out very soon).
The README file encouraged me to think that this was possible:
$ cd rakudo
$ perl Configure.pl --gen-parrot
$ make
This will create a "perl6" or "perl6.exe" executable in the
current (rakudo) directory. Programs can then be run from
the build directory using a command like:
$ ./perl6 hello.pl
Upon rereading, I found a reference to the fact that it is necessary to install rakudo before running scripts outside the build directory:
Once built, Rakudo's make install target will install Rakudo
and its libraries into the Parrot installation that was used to
create it. Until this step is performed, the "perl6" executable
created by make above can only be reliably run from the root of
Rakudo's build directory. After make install is performed,
the installed executable can be run from any directory (as long as
the Parrot installation that was used to create it remains intact).
So it looks like I need to install rakudo to play with Perl 6.
The next question is: where will Rakudo be installed? The README says into the Parrot install used to build it.
I used the --gen-parrot option in my build, which looks like it installs into rakudo/parrot_install. So Rakudo will be installed into my rakudo/parrot_install?
Reading the Makefile supports this conclusion. I ran make install, and it did install into parrot_install.
This part of the build/install process is unclear for a newbie to Perl 6. I'll see if I can come up with a documentation patch to clarify things.
Off the top of my head:
Emphasize running make install before running scripts outside of the build directory. This requirement is currently buried in the middle of a paragraph and can easily be missed by someone skimming the docs (me).
Explicitly state that building with --gen-parrot will install perl6 into the parrot_install directory.
Did you run make install in Rakudo?
It's necessary to do this to be able to use Rakudo outside its build directory (and that's why both the README and http://rakudo.org/how-to-get-rakudo tell you to do it).
Don't worry, the default install location is local (in parrot_install/bin/perl inside your rakudo directory).
In response to your update I've now updated the README:
http://github.com/rakudo/rakudo/commit/261eb2ae08fee75a0a0e3935ef64c516e8bc2b98
I hope you find that clearer than before. If you still see room for improvement, please consider submitting a patch to rakudobug@perl.org.