I'm just getting back into trying some front-end projects for the first time in a few years. Many npm-based JavaScript projects I try out take a long time to start up in development mode, even for Hello World-ish examples. In particular I'm trying out Nuxt.js.
Dev server startup takes about 100 seconds, and nothing seems to get cached, so restarts (not hot reloads) take the same amount of time. My research into the project and known npm issues hasn't turned up any definitive root cause or way to improve this yet.
I'm using Emacs 26.1 in terminal mode on a 2018 13" MacBook Pro with a Core i5, 8 GB of RAM, and an SSD.
When I run npm run dev to start up the Nuxt dev server, I get repeated error in process filter: Args out of range: "\342", -1 errors, related to some unusual characters they use to make the output pretty. If I try the same thing in a vanilla macOS Terminal, the server startup is about 10x faster. Why do those errors occur, and why is it so much slower in an Emacs terminal?
It turns out the repeated error in process filter issue may be caused by a bug in term mode that was recently fixed but might still be present in my version of Emacs.
As a workaround, the following gets the Nuxt dev server running in ~10s instead of ~100s in an Emacs terminal on my Mac, by filtering out the repeated lines about the modules being built:
$ npm run dev | grep -v modules
Note that I tried using npm's options to adjust the log level, but none of them seem to filter this output. If anyone knows a more "official" way of filtering this, or better yet, how to make it so the modules aren't rebuilt on every dev server startup, I'd be interested to know.
Edit: it might make sense to adjust the dev script in the package.json file to include the grep filter; that way you can still just type npm run dev and get the workaround.
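A minimal sketch of that tweak, assuming the default Nuxt dev script is just nuxt (adjust it to whatever your scripts entry actually runs):

{
  "scripts": {
    "dev": "nuxt | grep -v modules"
  }
}

Since npm run executes scripts through a shell, the pipe works and npm run dev then gives you the filtered output automatically.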
We have JUnit tests that boot up the entire server and then test some functions in the booted server. However, this takes a lot longer than simply booting the server regularly through its main function, outside of a JUnit test. Why could that be?
More concretely, we're using the Dropwizard framework with a Jetty server. The logs I get after starting within a unit test are
INFO [2022-09-19 15:12:45,985] org.eclipse.jetty.server.Server: Started #60649ms
and the logs I get with a regular application start are
INFO [2022-09-19 15:15:06,887] org.eclipse.jetty.server.Server: Started #13093ms
As you can see, it's around 6 times faster outside of JUnit.
Is there any reason for that? The server spawned in JUnit isn't 100% identical to the other one, but it comes pretty close. I don't see any reason why it should be that much slower. Is there any "known" reason why this happens, or is it more likely something about how we spawn the server in our testing environment?
When trying to find the bottleneck, I couldn't identify a single place that runs slower. It seems that everything is just running slower in the tests.
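For context, the test boots the application roughly the way Dropwizard's JUnit 5 testing support does; a minimal sketch of that pattern (MyApplication, MyConfiguration, and test-config.yml are placeholders, not our actual classes):

import static org.junit.jupiter.api.Assertions.assertEquals;

import io.dropwizard.testing.ResourceHelpers;
import io.dropwizard.testing.junit5.DropwizardAppExtension;
import io.dropwizard.testing.junit5.DropwizardExtensionsSupport;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

@ExtendWith(DropwizardExtensionsSupport.class)
class ServerBootTest {

    // Boots the full application (including Jetty) once for this test class.
    static final DropwizardAppExtension<MyConfiguration> APP =
            new DropwizardAppExtension<>(MyApplication.class,
                    ResourceHelpers.resourceFilePath("test-config.yml"));

    @Test
    void healthcheckResponds() {
        // Calls the booted server's healthcheck endpoint on the admin port.
        int status = APP.client()
                .target("http://localhost:" + APP.getAdminPort() + "/healthcheck")
                .request()
                .get()
                .getStatus();
        assertEquals(200, status);
    }
}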
Edit:
One additional piece of information: I'm running on an M1 Mac. My previous results, where the tests were 6 times slower than the server start, were with brew install --cask adoptopenjdk11. However, when I switched to brew install --cask zulu-jdk11, the disparity became much smaller: 18 seconds for test runs versus 12 seconds for server starts. It doesn't make the mystery smaller, but it makes me a bit happier.
Running the Build and Run IoT Edge Solution in Simulator command stopped working suddenly; the log shows the containers being built and then stops at this message: Network azure-iot-edge-dev is external, skipping. Everything was working fine 5 minutes ago. I tried rebooting, restarting Docker, and restarting iotedgehubdev, but in vain. Do you know how I can get more logs and/or resolve this problem?
This is a generic error that is thrown for a variety of reasons.
Possible causes include:
A proxy blocking the pull of the edgeHub and edgeAgent images
A problem in the deployment.template.json (or the debug template), e.g. missing brackets
edgeHub ports that are already in use
Docker configured for Windows containers when targeting Linux, or vice versa
The Build output or IoT Hub output in Visual Studio usually gives a bit more information.
Do you know how I can get more logs and/or resolve this problem?
Are you trying to debug/simulate it locally on a Windows 10 machine?
When you need to gather logs from an IoT Edge device, the most convenient way is to use the support-bundle command. By default, this command collects module, IoT Edge security manager and container engine logs, iotedge check JSON output, and other useful debug information. It compresses them into a single file for easy sharing. The support-bundle command is available in release 1.0.9 and later.
Run the support-bundle command with the --since flag to specify how far back you want to collect logs. For example, 6h will get logs from the last six hours, 6d from the last six days, 6m from the last six minutes, and so on. Include the --help flag to see a complete list of options.
sudo iotedge support-bundle --since 6h
By default, the support-bundle command creates a zip file called support_bundle.zip in the directory where the command is called. Use the flag --output to specify a different path or file name for the output.
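For example, to write the bundle to a different location (the output path here is just an illustration):

sudo iotedge support-bundle --since 6h --output ./edge-support-bundle.zip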
In case you are looking to debug the modules locally with VS 2019 or VS Code, the documentation below will be useful:
Use Visual Studio 2019 to develop and debug modules for Azure IoT Edge and
Use Visual Studio Code to develop and debug modules for Azure IoT Edge
Please share the complete logs if the above doesn't help.
I have this issue when trying to start a project from within Visual Studio when it's not running in administrator mode. I always forget and end up with this error. The same goes for running "iotedgehubdev.exe setup" from a command prompt: it only works when the command prompt is running in administrator mode.
I'm trying to install Rust on a 32-bit CentOS 7 server over SSH. I run this command, suggested here:
curl https://sh.rustup.rs -sSf | sh
This command never finishes, and the server's CPU fan spins up to high speed. Eventually, the server's screen goes black and the keyboard lights blink on and off repeatedly.
Is the problem due to the fact that the server is 32-bit? Is there any other possibility?
Update
The kernel panic occurred multiple times, for example when trying to run cargo run as described here:
kernel bug at kernel/auditsc.c:1532
amo-ej1's comment worked.
Offline installers are available here. I just downloaded the i686-unknown-linux-gnu tar.gz file for nightly and installed it by running the install.sh script inside with the --verbose option.
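The steps were roughly the following (the exact tarball name depends on which nightly you download, so treat the filename as an example):

tar xzf rust-nightly-i686-unknown-linux-gnu.tar.gz
cd rust-nightly-i686-unknown-linux-gnu
sudo ./install.sh --verbose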
An average npm install seems to take around 44 seconds on my machine for a new Angular project created using the Angular CLI.
I looked at my computer's resource usage but didn't see anything being used at 100% (CPU, RAM, disk, Ethernet).
Is the install that 'slow' because of the response times of the requests made during the process or the speed of the server feeding me the node modules, or is there a specific hardware component that is slowing down the process?
Basically, I want to know if upgrading something on my computer could decrease the install time.
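One rough way to tell whether the registry/network dominates is to compare a cold-cache install against a warm-cache one (standard npm flags; exact timings will vary):

# cold run: empty cache, everything comes over the network
npm cache clean --force
rm -rf node_modules
time npm install

# warm run: reuse the local cache where possible
rm -rf node_modules
time npm install --prefer-offline

If the warm run is much faster, most of the time is going to network and registry latency rather than CPU, RAM, or disk.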
I am trying to figure out a problem with WebStorm 8's npm UI tool. I have come to a point where running "npm search" on the command line actually hangs, which is what the UI tool runs initially. I am running Windows 8.1. I have also tried running the same command on Windows 7; there it actually returns an error. I really want to run WebStorm with npm on Windows 8.1.
Same issue on OS X 10.9.4. Via the command line, 'npm search' fails with "FATAL ERROR: JS Allocation failed - process out of memory". Via PhpStorm, it just hangs.
If I add an argument after npm search on the command line, it works fine.
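For example (the search term itself doesn't matter; express is just an illustration):

npm search express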
I don't think it is a Windows-only problem. Instead, it looks exactly like this reported bug: npm.commands.search fails and dumps 100+ Mb to output log
I can reproduce it on Linux with npm 2.0.2 by simply calling
npm search
(It just hangs and allocates huge amounts of memory)
To quote from the bug report:
We're treating the combined metadata for all packages in the registry as an in-memory database backed by a JSON file. It's already a hefty wad of data to download, parse, and keep in memory, and as the number of packages in the registry grows, it's going to cause more and more problems, especially in resource-constrained environments
Let us hope they can fix it soon.
(Another related bug report is "npm search" runs out of memory and dies without good error. It looks like a duplicate to me.)