JHipster application loads very slowly in production (Angular 5)

I use
yarn webpack:prod
to build a package for production. When deployed to the cloud (bandwidth is 1 Mbit/s), it takes about 30 seconds to load all assets. I know the bandwidth may be a bit poor; aside from that, I want to know how to improve the load speed. AOT is one option (another is lazy loading, which will be done at the source-code level). How do I apply AOT in this case?
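For reference, JHipster's production build for Angular typically wires AOT through @ngtools/webpack's AngularCompilerPlugin, and the generated webpack/webpack.prod.js may already enable it. A minimal sketch of that wiring (the tsconfig and entryModule paths below are assumptions; check your generated config before changing anything):

```javascript
// webpack.prod.js fragment (sketch): AOT for Angular 5 via @ngtools/webpack.
// Paths are illustrative and must match your actual project layout.
const { AngularCompilerPlugin } = require('@ngtools/webpack');

module.exports = {
  module: {
    rules: [
      // For AOT builds, @ngtools/webpack replaces ts-loader
      { test: /\.ts$/, loader: '@ngtools/webpack' }
    ]
  },
  plugins: [
    new AngularCompilerPlugin({
      tsConfigPath: './tsconfig-aot.json',                       // hypothetical tsconfig name
      entryModule: './src/main/webapp/app/app.module#AppModule', // assumed JHipster module path
      sourceMap: false
    })
  ]
};
```

Besides faster startup, AOT pre-compiles templates and drops the Angular compiler from the vendor bundle, so the downloaded payload shrinks as well.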

Related

How to increase performance when building Vue webapp?

When I build my Vue project for production, it usually takes a few minutes of processing, even when using a powerful workstation.
I think this may be due to the default limits of 1 worker and a 2048MB memory cap, as shown by the log below:
$ vue-cli-service build
Building for production...Starting type checking service...
Using 1 worker with 2048MB memory limit
\ Building for production...
Would the build process be faster if this limit were increased? If so, how can I change it?
The build process taking minutes seems really high. There could be a number of reasons for that.
Specific to your question about the memory limit: that limit applies only to the type checker. Vue uses fork-ts-checker-webpack-plugin to pull type checking out into a separate process to speed up the build. If that process is the main cause of the slow build, then playing with the memory limit and the number of workers may indeed help.
Somebody already answered how to do that in this SO post.
That post only explains how to change the memory limit, but you can change the number of workers in the same way, e.g.:
forkTsCheckerOptions.workers = 4;
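Put together in vue.config.js, tuning both knobs might look like the following sketch (the 4096/4 values are illustrative; pick numbers that match your machine):

```javascript
// vue.config.js — tune fork-ts-checker-webpack-plugin via webpack-chain.
// 'fork-ts-checker' is the plugin name Vue CLI registers for the type checker.
module.exports = {
  chainWebpack: (config) => {
    config.plugin('fork-ts-checker').tap((args) => {
      args[0].memoryLimit = 4096; // default is 2048 (MB), per the build log above
      args[0].workers = 4;        // default is 1 worker
      return args;
    });
  },
};
```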

High CPU with ImageResizer DiskCache plugin

We are noticing occasional periods of high CPU on a web server that happens to use ImageResizer. Here are the surprising results of a trace performed with NewRelic's thread profiler during such a spike:
It would appear that the cleanup routine associated with ImageResizer's DiskCache plugin is responsible for a significant percentage of the high CPU consumption associated with this application. We have autoClean on, but otherwise we're configured to use the defaults, which I understand are optimal for most typical situations:
<diskCache autoClean="true" />
Armed with this information, is there anything I can do to relieve the CPU spikes? I'm open to disabling autoClean and setting up a simple nightly cleanup routine, but my understanding is that this plugin is built to be smart about how it uses resources. Has anyone experienced this and had any luck simply changing the default configuration?
This is an ASP.NET MVC application running on Windows Server 2008 R2 with ImageResizer.Plugins.DiskCache 3.4.3.
Sampling, or why the profiling is unhelpful
New Relic's thread profiler uses a technique called sampling: it does not instrument calls, and therefore cannot know whether CPU time is actually being consumed.
Looking at the provided screenshot, we can see that the backtrace of the cleanup thread (there is only ever one) is frequently found at the WaitHandle.WaitAny and WaitHandle.WaitOne calls. These methods are low-level synchronization constructs that do not spin or consume CPU resources, but rather efficiently return CPU time back to other threads, and resume on a signal.
Correct profilers should be able to detect idle or waiting threads and eliminate them from their statistical analysis. Because New Relic's profiler failed to do that, there is no useful way to interpret the data it's giving you.
If you have more than 7,000 files in /imagecache, here is one way to improve performance
By default, in V3, DiskCache uses 32 subfolders with 400 items per folder (1000 hard limit). Due to imperfect hash distribution, this means that you may start seeing cleanup occur at as few as 7,000 images, and you will start thrashing the disk at ~12,000 active cache files.
This is explained in the DiskCache documentation - see subfolders section.
I would suggest setting subfolders="8192" if you have a larger volume of images. A higher subfolder count increases overhead slightly, but also increases scalability.
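As a sanity check on those numbers, the capacity math is simple multiplication (the 400-files-per-folder figure is the V3 default described above; the function name is just for illustration):

```javascript
// Rough DiskCache capacity estimate: subfolder count × target files per folder.
// 400 per folder is the V3 default; due to imperfect hash distribution,
// cleanup kicks in well before this theoretical maximum.
function cacheCapacity(subfolders, filesPerFolder = 400) {
  return subfolders * filesPerFolder;
}

console.log(cacheCapacity(32));   // 12800 — the default, thrashing at ~12,000 files
console.log(cacheCapacity(8192)); // 3276800 — with subfolders="8192"
```

So a `<diskCache autoClean="true" subfolders="8192" />` configuration raises the comfortable ceiling from roughly twelve thousand cached files into the millions.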

Speeding up the Dojo Build

We are running a build of our application using Dojo 1.9, and the build itself is taking an inordinate amount of time to complete: somewhere in the range of 10-15 minutes.
Our application is not huge by any means. Maybe 150K LOC. Nothing fancy. Furthermore, when running this build locally using Node, it takes less than a minute.
However, we run the build on a RHEL server with plenty of space and memory, using Rhino. In addition, the tasks are invoked through Ant.
We also use Shrinksafe as the compression mechanism, which could also be the problem. It seems like Shrinksafe is compressing the entire Dojo library (which is enormous) each time the build runs, which seems silly.
Is there anything we can do to speed this up? Or anything we're doing wrong?
Yes, that is inordinate. I have never seen a build take so long, even on an Atom CPU.
In addition to the prior suggestion to use Node.js and not Rhino (by far the biggest killer of build performance), if all of your code has been correctly bundled into layers, you can set optimize to empty string (don’t optimize) and layerOptimize to "closure" (Closure Compiler) in your build profile so only the layers will be run through the optimizer.
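In the build profile, that combination might look like the following sketch (the layer and module names are hypothetical):

```javascript
// Fragment of a Dojo 1.9 build profile: skip per-module optimization and
// run only the concatenated layers through Closure Compiler.
var profile = {
  optimize: "",              // empty string: don't optimize individual modules
  layerOptimize: "closure",  // optimize only the built layers
  layers: {
    "app/layer": {           // hypothetical layer name
      include: ["app/main"]  // hypothetical module id
    }
  }
};
```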
Other than that, you should make sure that there isn’t something wrong with the system you are running the build on. (Build files are on NAS with a slow link? Busted CPU fan forcing CPUs to underclock? Ancient CPU with only a single core? Insufficient/bad RAM? Someone else decided to install a TF2 server on it and didn’t tell you?)

Signing Apps Taking Forever

I am trying to speed up the workflow of my app deployment. From building to signing to distribution, it can take up to 40 minutes. What advice can somebody give me on:
1) Speeding up compile time
2) Speeding up the archive process
3) Speeding up the code signing
thanks
For reference, my early 2009 2.93GHz C2D iMac with 8GB RAM can archive and sign a 2GB application in approximately 15-20 minutes. My late 2011 1.8GHz i7 MacBook Air can do it significantly faster. 40 minutes for a 500MB application seems far too slow unless there is something else bogging down your system. Try checking your disk with Disk Utility and seeing what else is running with Activity Monitor.
Things to consider are the size of resources. Can any resources such as videos or images be compressed and still usable? Are there a large number of files that could be compressed into a zip file and then unzipped on first launch? Also check and make sure you do not have any long running custom scripts in the build process. After you've determined that resources or a build configuration setting is not an issue then I would advise investing in a faster computer (more RAM and processing power) if you are running on older hardware.
Rarely changed code can be moved into libraries (perhaps with the help of additional projects, so as not to produce too many targets). That dramatically increases compilation speed, and the signing and archiving steps are usually faster than the build itself anyway.

Slow shrinksafe operation during dojo build process

I use the Dojo build process on my application during the build stage, but it is very slow: it takes several minutes to optimize one big .js file.
I am calling it from an Ant build script via Groovy's AntBuilder.
Here is the call:
ant.java(classname: "org.mozilla.javascript.tools.shell.Main", fork: "true", failonerror: "true",
         dir: "${properties.'app.dir'}/WebRoot/release-1.5/util/buildscripts", maxmemory: "256m") {
    ant.jvmarg(value: "-Dfile.encoding=UTF8")
    ant.classpath() {
        ant.pathelement(location: "${properties.'app.dir'}/WebRoot/release-1.5/util/shrinksafe/js.jar")
        ant.pathelement(location: "${properties.'app.dir'}/WebRoot/release-1.5/util/shrinksafe/shrinksafe.jar")
    }
    ant.arg(file: "${properties.'app.dir'}/WebRoot/release-1.5/util/buildscripts/build.js")
    ant.arg(line: "profileFile=${properties.'app.dir'}/dev-tools/build-scripts/standard.profile.js releaseDir='../../../' releaseName=dojo15 version=0.1.0 action=clean,release")
}
This takes about 15 minutes to optimize and combine all the Dojo files and our own.
Is there a way to speed it up, maybe by running it in parallel somehow?
The script runs on a big 8-CPU Solaris box, so hardware is not the problem here.
Any suggestions?
We've had similar problems. I'm not sure exactly what it is about running under Ant that makes it so much slower. You might try increasing the memory: we couldn't even get Shrinksafe to process large layers without increasing our heap beyond the 2GB limit (which required a 64-bit JVM). You might also try using Closure Compiler with the Dojo build tool.
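Applied to the AntBuilder call from the question, raising the heap might look like this sketch (the 4096m figure is an assumption; anything beyond 2GB requires a 64-bit JVM):

```groovy
// Same invocation as in the question, with maxmemory raised from 256m.
// 4096m is illustrative; tune it to your layer sizes and available RAM.
ant.java(classname: "org.mozilla.javascript.tools.shell.Main", fork: "true", failonerror: "true",
         dir: "${properties.'app.dir'}/WebRoot/release-1.5/util/buildscripts", maxmemory: "4096m") {
    ant.jvmarg(value: "-Dfile.encoding=UTF8")
    // ... classpath and args unchanged from the question ...
}
```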