Electron Forge hangs at "Launching dev servers for renderer process code" when running npm start

I have an Electron Forge project I am working on with others. I recently had to replace my computer, which meant reinstalling Node.js and getting the install to work despite the self-signed certificates on my company's network. I finally got that to go through, and "npm install" fetched all of the dependencies. However, the dev server now stops partway through startup without any error message as to why. The same code and setup (as far as I can tell) works fine on my colleagues' machines.
When I run "npm start", it stops on the last step below and won't progress.
npm start
cfg-ui@1.0.0 start
electron-forge start
[STARTED] Checking your system
[STARTED] Checking git exists
[STARTED] Checking node version
[STARTED] Checking packageManager version
[TITLE] Found node@18.13.0
[SUCCESS] Found node@18.13.0
[TITLE] Found git@2.32.0.windows.1
[SUCCESS] Found git@2.32.0.windows.1
[TITLE] Found npm@9.4.0
[SUCCESS] Found npm@9.4.0
[SUCCESS] Checking your system
[STARTED] Locating application
[SUCCESS] Locating application
[STARTED] Loading configuration
[SUCCESS] Loading configuration
[STARTED] Preparing native dependencies
[TITLE] Preparing native dependencies
[SUCCESS] Preparing native dependencies
[STARTED] Running generateAssets hook
[SUCCESS] Running generateAssets hook
[STARTED] [plugin-webpack] Compiling main process code
[SUCCESS] [plugin-webpack] Compiling main process code
[STARTED] [plugin-webpack] Launching dev servers for renderer process code
If I use Ctrl+C to exit, I get the following:
RpcExitError: Process 14152 exited with code 3221225786
Issues checking service aborted - probably out of memory. Check the `memoryLimit` option in the ForkTsCheckerWebpackPlugin configuration.
If increasing the memory doesn't solve the issue, it's most probably a bug in the TypeScript.
RpcExitError: Process 12940 exited with code 3221225786
Issues checking service aborted - probably out of memory. Check the `memoryLimit` option in the ForkTsCheckerWebpackPlugin configuration.
If increasing the memory doesn't solve the issue, it's most probably a bug in the TypeScript.
RpcExitError: Process 13936 exited with code 3221225786
Issues checking service aborted - probably out of memory. Check the `memoryLimit` option in the ForkTsCheckerWebpackPlugin configuration.
If increasing the memory doesn't solve the issue, it's most probably a bug in the TypeScript.
RpcExitError: Process 9688 exited with code 3221225786
Issues checking service aborted - probably out of memory. Check the `memoryLimit` option in the ForkTsCheckerWebpackPlugin configuration.
If increasing the memory doesn't solve the issue, it's most probably a bug in the TypeScript.
I suspect the error about running out of memory might be a red herring: exit code 3221225786 is 0xC000013A (STATUS_CONTROL_C_EXIT on Windows), which just means the checker processes were killed by my Ctrl+C, and the issue I found about this message only describes it as "not closing cleanly".
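For completeness, here is roughly where the memoryLimit option mentioned in the error would be set. Our project uses the standard Forge webpack template and we haven't touched the checker config; the file name and the 4096 value below are just an illustration:
// webpack.plugins.js (illustrative; adjust to wherever your config registers the checker)
const ForkTsCheckerWebpackPlugin = require('fork-ts-checker-webpack-plugin');

module.exports = [
  new ForkTsCheckerWebpackPlugin({
    // Value is in MB. Depending on the plugin version, memoryLimit is either a
    // top-level option or nested under `typescript` as shown here.
    typescript: {
      memoryLimit: 4096,
    },
  }),
];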

Related

Terminal process gets killed with code ELIFECYCLE errno: 137 when VS Code is open. Quitting VS Code resolves the issue?

I've only begun encountering this issue in the last two days.
When I attempt to build my Angular project, it gets to one point and fails with the errors below.
The only way I can get it to build is to quit VS Code and rerun the exact same command, and then it builds without issue.
Any ideas what may be causing this?
137 is 128 + 9. In some situations—and I'm guessing that this is one of them—this indicates that the process died with a signal 9. Signal 9 is, on macOS (and multiple other OSes), SIGKILL. This signal is sent by the "out of memory" killer.
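As a quick illustration of that arithmetic (a throwaway snippet; the signal table comes from Node's os module):
const os = require('os');

const exitCode = 137;
const signalNumber = exitCode - 128; // 137 - 128 = 9
const signalName = Object.keys(os.constants.signals)
  .find((name) => os.constants.signals[name] === signalNumber);

console.log(`${exitCode} => killed by signal ${signalNumber} (${signalName})`); // 137 => killed by signal 9 (SIGKILL)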
This also explains why exiting VSCode fixes things: VSCode is a memory hog. Exiting it returns the memory to the system.
To fix this more permanently, either reduce the memory needs of your build and/or of VSCode, or add more memory to your system.
See also What killed my process and why?

dbt deps command results in "Unable to connect to registry hub"

When running dbt deps, I get back this error message:
Running with dbt=0.17.0
Error sending message, disabling tracking
Encountered an error:
Unable to connect to registry hub
What's happening here, and how can I work around it?
First of all, it's worth understanding what's going on here. It looks like you're trying to install a package from the dbt hub site (hub.getdbt.com) — if you open up your packages.yml file, you'll find something like this:
packages:
  - package: package-owner/package-name
    version: 0.1.0
When you run dbt deps (at a high level):
dbt sends a request to hub.getdbt.com
From hub.getdbt.com, a request is sent to GitHub to download the package.
The package is copied into your project
This error occurs when dbt cannot reach the hub site even after retrying the network request. First off, we recommend simply rerunning the dbt deps command; sometimes it's just a blip in connectivity that goes away on the second try.
If the error persists, there may be a few different reasons for it:
hub.getdbt.com might be unavailable. This happens but is relatively rare. You can navigate to hub.getdbt.com to check if this is the case. Also check the Netlify status page to see if there are any issues.
GitHub might be down — you can check this by going to the GitHub status page.
Finally, it may be that a firewall rule or antivirus software on your computer is rejecting the request. Talk to your IT team to find out if this is the case and whether that restriction can be removed.
We generally recommend using the hub syntax for packages; however, if you need to work around this, you can consider using the git syntax (docs) or installing the package from a local directory (docs).

JavaScript heap out of memory when building Vue.js app

I'm trying to build a vue.js app for production. This error message always appears midway through.
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
I already tried to increase the memory by adding --max_old_space_size=4096 and even tried to change it to 8192, but to no avail. I am using a Mac with 8 GB of RAM so I'm not sure why this is happening.
This is the code I run for npm run build:
vue-cli-service build --max_old_space_size=4096
I ran into this problem too. The memory limitation was with Node so running this command worked:
NODE_OPTIONS=--max_old_space_size=4096 npm run build
On Windows, use:
set NODE_OPTIONS=--max_old_space_size=4096
npm run build
By default Node caps its JavaScript heap well below the machine's total RAM (the exact limit depends on the Node version and architecture); this command temporarily raises the cap to 4 GB for the build.
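If you want to confirm the larger limit is actually being picked up, a tiny check script (hypothetical file name check-heap.js) can print the effective limit:
// check-heap.js — prints the V8 heap size limit the current Node process is running with
const v8 = require('v8');

const limitMb = v8.getHeapStatistics().heap_size_limit / 1024 / 1024;
console.log(`Effective heap limit: ${Math.round(limitMb)} MB`);
Run it once plainly and once as NODE_OPTIONS=--max_old_space_size=4096 node check-heap.js; the reported limit should jump accordingly.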
If you have a large project with .less files spread across every folder (such as components and views), you may run into this problem.
I solved it by moving all of the .less files into a single folder, /assets/less/, and importing all of them from /assets/less/index.less.

How do I fix this websocket connection error with Velocity?

I've tried out cucumber as well as jasmine with brand new projects, but all my tests are getting this failure. When I run:
$ meteor --test
I get:
stream error Network error: ws://localhost:3000/websocket: connect ECONNREFUSED
This failure comes from a fresh application using the xolvio:cucumber package.
When I check out the mirror logs, it ends with:
[chimp] Finished running async processes with errors
stream error Network error: ws://localhost:3000/websocket: connect ECONNREFUSED
stream error Network error: ws://localhost:3000/websocket: connect ECONNREFUSED
Parent process ( 20797 ) is dead! Exiting cucumber
So, is this some kind of system error just for me? I have the latest Meteor: 1.1.0.2
I also realize this error used to be an old bug that's now considered fixed in the meteor-cucumber repo.
Any ideas?
This is not an error; it's a known issue and should not affect your spec runs.
When you run meteor --test it will start a main app and a mirror for cucumber to run on.
The message happens when the main app closes and the mirror is no longer able to access the main app through websockets. It's a harmless message.

How to get around memory error with karma & phantomjs

We're running tests using Karma and PhantomJS. Last week, our tests mysteriously started crashing PhantomJS with an error of -1073741819.
Based on this thread for Chutzpah it appears that code indicates a native memory failure with PhantomJS.
Upon further investigation, we are consistently seeing phantom crash around 750MB of memory.
Is there a way to configure Karma so that it does not run up against this limit? Or a way to tell it to flush phantom?
We only have around 1200 tests so far. We're about 1/4 of the way through our project, so 5000 UI tests doesn't seem out of the question.
Thanks to the Stack Overflow phenomenon of posting a question and quickly discovering an answer, we solved this by adding gulp tasks. Before, we were just running karma start at the command line. That spun up a single instance of PhantomJS, which crashed once it reached about 750MB.
Now we have a gulp command for each of our sections of tests, e.g. gulp common-tests, gulp admin-tests, and gulp customer-tests.
Then a single gulp karma task runs each of those groupings. This gives each gulp command its own instance of PhantomJS, so each one stays underneath that threshold.
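A minimal sketch of that setup (gulp 4 syntax; the task names and per-group Karma config file paths are made up for illustration):
// gulpfile.js — one Karma run (and therefore one PhantomJS process) per test group
const gulp = require('gulp');
const path = require('path');
const { Server } = require('karma');

function runKarma(configFile) {
  return function run(done) {
    new Server({ configFile: path.resolve(configFile), singleRun: true }, done).start();
  };
}

gulp.task('common-tests', runKarma('./karma.common.conf.js'));
gulp.task('admin-tests', runKarma('./karma.admin.conf.js'));
gulp.task('customer-tests', runKarma('./karma.customer.conf.js'));

// Running the groups in series means each one gets a fresh browser process.
gulp.task('karma', gulp.series('common-tests', 'admin-tests', 'customer-tests'));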
We ran into a similar issue. Your approach is interesting and certainly sidesteps the problem. However, be prepared to face it again later.
I've done some investigation and found the cause of memory growth (at least in our case). Turns out when you use:
beforeEach(inject(function (SomeActualService) { .... }));
the memory taken up by SomeActualService does not get released at the end of the describe block, and if you have multiple test files that inject the same service (or other injectable objects), more memory is allocated for it each time.
I have a couple of ideas on how to avoid this:
1. Create mock objects and never use inject to get real objects unless you are in the test that tests that module. This will require writing tons of extra code (see the sketch after this list).
2. Create your own tracker (for tests only) for injectable objects. That way they can be loaded only once and reused between test files.
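A rough sketch of idea 1, using angular-mocks' $provide to register a stub under the real service's name (the module and method names here are made up):
// In the spec file: the injector hands out this lightweight stub instead of
// building (and retaining) the real SomeActualService.
beforeEach(module('myApp', function ($provide) {
  $provide.value('SomeActualService', {
    doWork: jasmine.createSpy('doWork').and.returnValue('stubbed result'),
  });
}));

it('uses the stub instead of the real service', inject(function (SomeActualService) {
  expect(SomeActualService.doWork()).toBe('stubbed result');
}));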
Forgot to mention: We are using angular 1.3.2, Jasmine 2.0 and hit this problem around 1000 tests.
I was also running into this issue after about 1037 tests on Windows 10 with PhantomJS 1.9.18.
It would appear as ERROR [launcher]: PhantomJS crashed. once the RAM for the process exceeded about 800-850 MB.
There appears to be a temporary fix here:
https://github.com/gskachkov/karma-phantomjs2-launcher
https://www.npmjs.com/package/karma-phantomjs2-launcher
You install it via npm install karma-phantomjs2-launcher --save-dev
Then you need to enable it in karma.conf.js via:
config.set({
browsers: ['PhantomJS2'],
...
});
This seems to run the same set of tests while only using between 250-550 MB RAM and without crashing.
Note that this fix works out of the box on Windows and OS X, but not Linux (PhantomJS2 binaries won't start). This affects pushes to Travis CI.
To work around this issue on Debian/Ubuntu:
sudo apt-get install libicu52 libjpeg8 libfontconfig libwebp5
This is a problem with PhantomJS. According to another source, PhantomJS only runs the garbage collector when the page is closed, and this only happens after your tests run. Other browsers work fine because their garbage collectors work as expected.
After spending a few days on the issue, we concluded that the best solution was to split tests into groups. We had grunt create a profile for each directory dynamically and created a command that runs all those profiles. For all intents and purposes, it works just the same.
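A static version of that idea might look something like this (our real setup generated the targets dynamically; the config file names below are placeholders):
// Gruntfile.js — one karma target per test group, each with its own config file,
// plus an alias task that runs them back to back.
module.exports = function (grunt) {
  grunt.loadNpmTasks('grunt-karma');

  grunt.initConfig({
    karma: {
      options: { singleRun: true },
      common: { configFile: 'karma.common.conf.js' },
      admin: { configFile: 'karma.admin.conf.js' },
      customer: { configFile: 'karma.customer.conf.js' },
    },
  });

  grunt.registerTask('test-all', ['karma:common', 'karma:admin', 'karma:customer']);
};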
We had a similar issue on Linux (Ubuntu) that turned out to be caused by the maximum number of memory map areas the process was allowed to use:
$ cat /proc/sys/vm/max_map_count
65530
Then run this:
$ sudo bash -c 'echo 6553000 > /proc/sys/vm/max_map_count'
Note the number was multiplied by 100.
This changes the setting only until the next reboot. If it solves the problem, you can make it persistent for future sessions:
$ sudo bash -c 'echo vm.max_map_count = 6553000 > /etc/sysctl.d/60-max_map_count.conf'
Responding to an old question, but hopefully this helps ...
I have a build process that a CI job runs on a command-line-only Linux box, so it seems that PhantomJS is my only option there. I have experienced this memory issue locally on my Mac, but somehow it doesn't happen on the Linux box. My solution was to add another test command to my package.json that runs Karma using Chrome, and to run that locally. When pushed up, Jenkins kicks off the regular test command, running PhantomJS.
Install this plugin: https://github.com/karma-runner/karma-chrome-launcher
Add this to package.json
"test": "karma start",
"test:chrome": "karma start --browsers Chrome"