Lerna concurrency errors in Docker container - npm

I have a TypeScript monorepo managed with Lerna.
The tests are run with Jest.
Dozens of packages define the test script, while the Jest config is stored in a central location and shared by all of them.
An example test script looks like this:
jest --config ../../../tests/jest.config.json --setupFiles ../../../tests/jest-setup.js --rootDir .
Each package has a different number of "../" depending on its location in the source folder tree.
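For context, here is a rough sketch of how one package wires this up in its package.json (the package name is borrowed from the logs below; the path depth is illustrative):
{
  "name": "@cmd/example-package",
  "version": "0.0.0",
  "scripts": {
    "test": "jest --config ../../../tests/jest.config.json --setupFiles ../../../tests/jest-setup.js --rootDir ."
  }
}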
It works 100% of the time on multiple platforms like Windows, Linux and Mac.
For some reason, when we run it inside a Docker container with concurrency not set to 1, the Jest process from one package actually receives the parameters of another one, which causes it to fail:
@cmd/example-package: > @cmd/example-package@0.0.0 test:integration
@cmd/example-package: > jest --config ../../tests/jest.config.json --setupFiles ../../tests/jest-setup.js --setupFilesAfterEnv ../../tests/setEnvVars.js --rootDir . --testPathPattern=./integration-tests --runInBand
@cmd/example-package: jest parameters: /src/cmd/example-package , --config,../../../tests/jest.config.json,--setupFiles,../../../tests/jest-setup.js,--setupFilesAfterEnv,../../../tests/setEnvVars.js,--rootDir,.,--testPathPattern=./integration-tests,--runInBand
The last line is printed by code added at the beginning of the Jest script, which logs the current folder and the parameters passed. You can see that the parameters Lerna reports passing to Jest aren't the ones that were actually used.
We saw such errors in the build process as well.
Any idea on how to solve this will be highly appreciated.
We tried multiple Node.js base images (alpine, node, bullseye), multiple Node versions, and multiple Lerna versions.
We even tried to switch from Lerna to Turborepo, but we still get these concurrency errors.
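For reference, the failures go away when Lerna is forced to run one package at a time, e.g. (the test:integration script name is taken from the log above):
npx lerna run test:integration --concurrency 1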

In our case, the problem was caused by the installation of a different version of npm on top of the alpine Node docker image in one of the container's base images.
Leaving the question here, in case it helps someone.
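In other words, a base-image layer of this shape was the culprit (the image tag and npm version here are made up, not the exact ones we used):
FROM node:18-alpine
# Overriding the npm version bundled with the image is what triggered the issue for us
RUN npm install -g npm@9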

Can't start Framework Qwik project

I have an issue while creating and starting a project. I followed the instructions given here https://qwik.builder.io/docs/getting-started/ and used npm, selected Basic App (QwikCity), but when I start the project I get the following error:
Terminal output:
[vite] Internal server error: Failed to load url /src/root_component_vgnegdacmce.js (resolved id: C:/Users/JESUS LOPEZ/Documents/Universidad/Pasantías/qwik-app/src/root_component_vgnegdacmce.js). Does the file exist?
File: /C:/Users/JESUS%20LOPEZ/Documents/Universidad/Pasant%C3%ADas/qwik-app/node_modules/vite/dist/node/chunks/dep-5e7f419b.js:39304:21
at loadAndTransform (file:///C:/Users/JESUS%20LOPEZ/Documents/Universidad/Pasant%C3%ADas/qwik-app/node_modules/vite/dist/node/chunks/dep-5e7f419b.js:39304:21)
I'm using Windows 10 and Node 18.12.0; I tried with yarn and the same thing happened. I'm just testing this framework because I was required to create a component library, so I wanted to test the waters with a basic app project and then move on to the component library, but even if I select that option, I get a similar error.
This is my repo: https://github.com/luisamlopez/qwik-app but it's literally just a brand new Qwik project (npm create qwik@latest), so I haven't touched anything.
Your code works fine without any problems. Maybe some node modules or other dependencies were not installed properly because of firewall or network issues.
Clean node_modules by deleting the folder manually or with the following commands:
rm -r node_modules/
npm prune
Note: the prune command is optional.
Install the package dependencies with:
npm i
Make sure the installation completes successfully without any issues, or try installing on a different network or turn off the firewall for a while. In the worst case, try a different machine.
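Since the question is on Windows 10, note that rm -r is a Unix shell command; rough equivalents are:
rmdir /s /q node_modules          (Command Prompt)
Remove-Item -Recurse -Force node_modules          (PowerShell)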

Getting 'Could not find test files' Error when attempting to run TestCafe tests

I'm trying to run some TestCafe tests from our build server, but getting the following error...
"Could not find test files at the following location: "C:\Testing\TestCafe".
Check patterns for errors:
tests/my-test.ts
or launch TestCafe from a different directory."
I did have them running or able to be found on this machine previously, but others have taken over the test coding and changed the structure a bit when moving it to a Git repository. Now when I grab the tests from Git and try to run, the problem presents itself. I'm not sure if there is something in a config file that needs adjustment but don't know where to start looking.
The intention is to have it part of our CI process, but the problem is also seen when I attempt to run the tests from the command line. The build process does install TestCafe, but there is something strange around this as well.
When the build fails with the can't-find-tests error, if I try to run the following command in the proper location...
tescafe chrome tests/my-test.ts
... I get, 'testcafe' is not recognized as an internal or external command,
operable program or batch file.
Just can't understand why I can't get these tests running. TestCafe setup was pretty much easy previously.
ADDENDUM: I've added a screenshot of the working directory where I cd to and run the testcafe command as well as the tests subdirectory containing the test I'm trying to run.
Any help is appreciated!!
testcafe chrome tests/my-test.ts is just a template; it isn't a real path to your tests. This error means that the path that you set in CLI is wrong, and there aren't any tests. You need to:
Find out where you start CLI. Please attach a screenshot to your question.
Define an absolute path to tests or a path relative to the place where CLI was started. Please share a screenshot of your project tree where the directory with tests is open.
Also, you missed a t in the tescafe chrome tests/my-test.ts command. It should be tesTcafe chrome tests/my-test.ts. That is why you get the "'tescafe' is not recognized as an internal or external command" error.
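If the working directory keeps tripping you up, TestCafe also reads a .testcaferc.json configuration file from the directory you launch it in; a minimal sketch (the src glob is an assumption based on the tests/ folder above) looks like:
{
  "browsers": ["chrome"],
  "src": ["tests/**/*.ts"]
}
With that in place, and TestCafe installed locally as a dev dependency, running npx testcafe from the project root will pick up both the browser and the test files without any command-line arguments.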
I was able to get things working by starting from scratch. I uninstalled TestCafe and cleaned the working folder. During next build it was fine. I'm sure I've tried this several times, but it just started working.
One positive that came out of it was that I discovered a typo in a test file name, which was also causing issues finding the test I was using to check testing setup.
Thanks for helping!!

ERROR Unable to find the browser. "saucelabs:Chrome@83.0:Windows10" is not a browser alias or path to an executable file

I am trying to run my UI tests using TestCafe and Sauce Labs, and I am facing the above error. Currently I am using testcafe v1.8.3 and testcafe-browser-provider-saucelabs v1.7.0.
I have also tried changing versions of the browser provider, but I am still facing the above error. Please help with a solution, as I have been stuck on this for more than a week.
So, it looks like the runner you are using (testcafe-browser-provider) is a very old one; there is a newer runner you can use for TestCafe tests called saucectl.
TL;DR:
Install saucectl globally: npm install -g saucectl
Set up saucectl within your project folder with saucectl init. This will create a .sauce/config.yml file.
Tweak the settings to run the spec files and OS/browser of your choice.
Use saucectl run.
You can see an example project here: https://github.com/saucelabs/saucectl-testcafe-example
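For orientation only, the file generated by saucectl init for a TestCafe project has roughly this shape (every value below is a placeholder, and the exact keys depend on your saucectl version, so treat this as illustrative):
apiVersion: v1alpha
kind: testcafe
sauce:
  region: us-west-1
testcafe:
  version: 1.18.0
rootDir: ./
suites:
  - name: "chrome on windows"
    browserName: "chrome"
    platformName: "Windows 10"
    src:
      - "tests/**/*.ts"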
It looks like your provider is installed locally, while you are using the global TestCafe installation. You also need to install TestCafe locally or both packages globally. After this, check your browser provider: testcafe -b saucelabs.
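A minimal sketch of that local setup (the Sauce Labs credentials are assumed to be set as environment variables):
npm install --save-dev testcafe testcafe-browser-provider-saucelabs
export SAUCE_USERNAME=your-username
export SAUCE_ACCESS_KEY=your-access-key
npx testcafe -b saucelabs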
I am using testcafe v1.8.3 and testcafe-browser-provider-saucelabs v1.7.0
Please update your testcafe and testcafe-browser-provider-saucelabs versions to the latest ones.

Vue-CLI v3 app: hot module reload not working

I've installed Vue CLI v3, and in my terminal:
created a new app using 'vue create my-project' (accepting default config)
navigated to the 'my-project' app directory and run 'npm run serve', the result of which is:
DONE Compiled successfully in 11889ms
App running at:
- Local: http://localhost:8080/
- Network: http://192.168.0.3:8080/
Note that the development build is not optimized. To create a production build, run npm run build.
... and then, when making any change whatsoever to the Hello World component, e.g., a tweak to the css, something obvious like the link color, nothing happens; no response in the terminal, no browser refresh, and no update to the page when manually refreshing.
I've built a few apps using Vue in the past and hot module reloading was working previously, but now there is zero activity/response in the terminal regardless of what I change in any project file; only if I close the terminal tab, re-open a tab, navigate to the project directory, re-run 'npm run serve', and refresh the browser do I see the changes. Obviously this is unusable. What am I missing?
This issue has been resolved, though I am not 100% sure what caused it.
I noticed that some people with similar failures of hot reload had mentioned bad directory names. My vue project's parent directory name was legit but I had renamed it at one point (though that was multiple restarts and reinstalls ago), and I also noticed that some of the vue-cli-created project folders were not displaying in the Finder until it was quit and restarted. I figured there was something corrupted about that folder. I created a new folder - a sibling of the dubious folder - and had another go with vue-cli, and it worked as expected.
Hope this helps someone. Thanks again to those of you who offered suggestions.
Whiskey T.
For anyone using WSL: I ran into this exact problem and solved it via this method.
I had the same issue. It seems WSL 2 does not watch for file changes inside the Windows filesystem. Everything works fine if the Vue project is inside the Ubuntu filesystem. Check out this link for further info: https://learn.microsoft.com/en-us/windows/wsl/wsl2-ux-changes
Source: https://github.com/vuejs/vue-cli/issues/4421#issuecomment-557194129
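If moving the project into the Linux filesystem is not an option, another commonly used workaround (not from the linked thread) is to make the dev server poll for changes via vue.config.js; the sketch below assumes Vue CLI 3's webpack-dev-server, and the poll interval is arbitrary:
// vue.config.js
module.exports = {
  devServer: {
    watchOptions: {
      // inotify events don't cross the Windows / WSL 2 filesystem boundary, so fall back to polling
      poll: 1000
    }
  }
};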
If you installed Node.js as sudo, then running sudo npm run serve worked for me. Node.js was installed as sudo and the project was also created as sudo, so when I ran npm run serve, vue-hot-reload-api could not access the node server to do hot reloading.
Additionally, if you want hot reload to work in offline mode, switch off your network, run npm run serve, and then reconnect to your network. It will then work over the localhost protocol instead of using your local network IP.
Cheers
Add the following script to your package.json file:
...
"scripts": {
  "dev": "cross-env NODE_ENV=development vue-cli-service serve --open --host localhost",
  ...
},
...
then install cross-env and run the new script:
npm install --save-dev cross-env
npm run dev
source: https://www.davidyardy.com/blog/vue-cli-creating-a-project%E2%80%93issue-with-hot-reload/

How to get around memory error with karma & phantomjs

We're running tests using Karma and PhantomJS. Last week, our tests mysteriously started crashing PhantomJS with an error of -1073741819.
Based on this thread for Chutzpah, it appears that code indicates a native memory failure with PhantomJS.
Upon further investigation, we are consistently seeing phantom crash around 750MB of memory.
Is there a way to configure Karma so that it does not run up against this limit? Or a way to tell it to flush phantom?
We only have around 1200 tests so far. We're about 1/4 of the way through our project, so 5000 UI tests doesn't seem out of the question.
Thanks to the StackOverflow phenomenon of posting a question and quickly discovering an answer, we solved this by adding gulp tasks. Before, we were just running karma start at the command line, which spun up a single instance of PhantomJS that crashed when 750MB was reached.
Now we have a gulp command for each one of our sections of tests, e.g. gulp common-tests and gulp admin-tests and gulp customer-tests
Then a single gulp karma that runs each of those groupings. This allows each gulp command to have its own instance of phantom, and therefore stay underneath that threshold.
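A rough sketch of that setup, assuming gulp 4 and one Karma config file per test group (all file and task names here are hypothetical):
// gulpfile.js
const { series } = require('gulp');
const { Server } = require('karma');
const path = require('path');

// Run a single Karma config to completion so each group gets its own PhantomJS instance
function runKarma(configFile) {
  return function karmaRun(done) {
    new Server(
      { configFile: path.resolve(configFile), singleRun: true },
      exitCode => done(exitCode ? new Error('Karma exited with code ' + exitCode) : undefined)
    ).start();
  };
}

const commonTests = runKarma('karma.common.conf.js');
const adminTests = runKarma('karma.admin.conf.js');
const customerTests = runKarma('karma.customer.conf.js');

exports['common-tests'] = commonTests;
exports['admin-tests'] = adminTests;
exports['customer-tests'] = customerTests;

// `gulp karma` runs the groups one after another, each in a fresh PhantomJS
exports.karma = series(commonTests, adminTests, customerTests);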
We ran into a similar issue. Your approach is interesting and certainly sidesteps the issue; however, be prepared to face it again later.
I've done some investigation and found the cause of memory growth (at least in our case). Turns out when you use:
beforeEach(inject(function (SomeActualService) { .... }));
the memory taken up by SomeActualService does not get released at the end of the describe block, and if you have multiple test files where you inject the same service (or other injectable objects), more memory will be allocated for it again.
I have a couple of ideas on how to avoid this:
1. Create mock objects and never use inject to get real objects unless you are in the test that tests that module. This will require writing tons of extra code; see the sketch after this list.
2. Create your own tracker (for tests only) for injectable objects. That way they can be loaded only once and reused between test files.
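As an illustration of idea 1, a sketch using angular-mocks' $provide to register a hand-written stub so the real service is never instantiated (SomeActualService and its doWork method are made-up names; Jasmine 2 spy syntax as mentioned below):
beforeEach(module(function ($provide) {
  // The stub replaces the real SomeActualService for every test in this file
  $provide.value('SomeActualService', {
    doWork: jasmine.createSpy('doWork').and.returnValue('stubbed result')
  });
}));

it('uses the stub instead of the real service', inject(function (SomeActualService) {
  expect(SomeActualService.doWork()).toBe('stubbed result');
}));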
Forgot to mention: we are using Angular 1.3.2 and Jasmine 2.0, and hit this problem at around 1000 tests.
I was also running into this issue after about 1037 tests on Windows 10 with PhantomJS 1.9.18.
It would appear as "ERROR [launcher]: PhantomJS crashed." after the RAM for the process exceeded about 800-850 MB.
There appears to be a temporary fix here:
https://github.com/gskachkov/karma-phantomjs2-launcher
https://www.npmjs.com/package/karma-phantomjs2-launcher
You install it via npm install karma-phantomjs2-launcher --save-dev
But you then need to use it in karma.conf.js via:
config.set({
  browsers: ['PhantomJS2'],
  ...
});
This seems to run the same set of tests while only using between 250-550 MB RAM and without crashing.
Note that this fix works out of the box on Windows and OS X, but not Linux (PhantomJS2 binaries won't start). This affects pushes to Travis CI.
To work around this issue on Debian/Ubuntu:
sudo apt-get install libicu52 libjpeg8 libfontconfig libwebp5
This is a problem with PhantomJS. According to another source, PhantomJS only runs the garbage collector when the page is closed, and this only happens after your tests run. Other browsers work fine because their garbage collectors work as expected.
After spending a few days on the issue, we concluded that the best solution was to split tests into groups. We had grunt create a profile for each directory dynamically and created a command that runs all those profiles. For all intents and purposes, it works just the same.
We had a similar issue on Linux (Ubuntu) that turned out to be caused by the limit on the number of memory map areas a process may have:
$ cat /proc/sys/vm/max_map_count
65530
Then run this:
$ sudo bash -c 'echo 6553000 > /proc/sys/vm/max_map_count'
Note the number was multiplied by 100.
This will change the session settings. If it solves the problem, you can set it up for all future sessions:
$ sudo bash -c 'echo vm.max_map_count = 6553000 > /etc/sysctl.d/60-max_map_count.conf'
Responding to an old question, but hopefully this helps ...
I have a build process which a CI job runs on a command-line-only Linux box, so it seems that PhantomJS is my only option there. I have experienced this memory issue locally on my Mac, but somehow it doesn't happen on the Linux box. My solution was to add another test command to my package.json to run Karma using Chrome, and run that locally for my tests. When pushed up, Jenkins would kick off the regular test command, running PhantomJS.
Install this plugin: https://github.com/karma-runner/karma-chrome-launcher
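For completeness, the launcher is normally added as a dev dependency first:
npm install --save-dev karma-chrome-launcher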
Add these scripts to package.json:
"scripts": {
  "test": "karma start",
  "test:chrome": "karma start --browsers Chrome"
}