Slow ShrinkSafe operation during Dojo build process

I use the Dojo build process on my application during the build stage.
But it is very slow: it takes several minutes to optimize one big .js file.
I am calling it from an Ant build script via Groovy's AntBuilder.
Here is the call:
// Run the Dojo build script under Rhino (org.mozilla.javascript.tools.shell.Main)
ant.java(classname: "org.mozilla.javascript.tools.shell.Main",
         fork: "true", failonerror: "true",
         dir: "${properties.'app.dir'}/WebRoot/release-1.5/util/buildscripts",
         maxmemory: "256m") {
    ant.jvmarg(value: "-Dfile.encoding=UTF8")
    // The Rhino shell plus ShrinkSafe on the classpath
    ant.classpath() {
        ant.pathelement(location: "${properties.'app.dir'}/WebRoot/release-1.5/util/shrinksafe/js.jar")
        ant.pathelement(location: "${properties.'app.dir'}/WebRoot/release-1.5/util/shrinksafe/shrinksafe.jar")
    }
    // build.js drives the build; profile and release options are passed as one arg line
    ant.arg(file: "${properties.'app.dir'}/WebRoot/release-1.5/util/buildscripts/build.js")
    ant.arg(line: "profileFile=${properties.'app.dir'}/dev-tools/build-scripts/standard.profile.js releaseDir='../../../' releaseName=dojo15 version=0.1.0 action=clean,release")
}
This takes about 15 minutes to optimize and combine all of the Dojo files and our own files.
Is there a way to speed it up, perhaps by running parts of it in parallel?
The script runs on a big 8-CPU Solaris box, so hardware is not the problem here.
Any suggestions?

We've had similar problems; I'm not sure exactly what it is about running in Ant that makes it so much slower. You might try increasing the memory (e.g. raising the maxmemory: "256m" in your ant.java call): we couldn't even get ShrinkSafe to process large layers without increasing our heap beyond the 2 GB limit (which required a 64-bit JVM). You might also try using the Closure Compiler with the Dojo build tool instead of ShrinkSafe.

Related

How to increase performance when building Vue webapp?

When I build my Vue project for production, it usually takes a few minutes of processing, even when using a powerful workstation ...
I think this may be due to the default limits of 1 worker and 2048 MB of memory, as shown by the log below:
$ vue-cli-service build
Building for production...Starting type checking service...
Using 1 worker with 2048MB memory limit
\ Building for production...
Would the build process be faster if this limit were increased? If so, how can I change it?
A build that takes several minutes does seem really long; there could be a number of reasons for that.
Specific to your question about the memory limit: that limit applies only to the type checker. Vue uses fork-ts-checker-webpack-plugin to pull type checking out into a separate process in order to speed up the build. If that is the main cause of the slow build, then playing with the memory limit and the number of workers may indeed help.
Somebody already answered how to do that in this SO post, though that only covers how to change the memory limit. You can change the number of workers in the same way, e.g.:
forkTsCheckerOptions.workers = 4;
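In a Vue CLI project, both options can be set from vue.config.js through chainWebpack. Here is a minimal sketch, assuming Vue CLI registers the plugin under the name 'fork-ts-checker' and a fork-ts-checker-webpack-plugin version that still accepts the workers and memoryLimit options:

// vue.config.js -- a sketch, not a drop-in configuration
module.exports = {
    chainWebpack: (config) => {
        // Tap into the plugin instance that @vue/cli-plugin-typescript registered
        config.plugin('fork-ts-checker').tap((args) => {
            args[0].memoryLimit = 4096; // MB available to the type-checker process
            args[0].workers = 4;        // number of parallel type-checking workers
            return args;
        });
    }
};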

How to reduce PhantomJS's CPU and memory usage?

I'm using PhantomJS via Python's webdriver library. It eats lots of RAM and CPU, which is an issue because I'd like to run as many instances as possible.
Some googling didn't turn up anything helpful, so I'll ask directly:
1) Does the window size matter? If I set driver.set_window_size(1280, 1024), will it eat more memory than at 1024x768?
2) Is there any option in the source code that can be turned off without real issues and that would significantly reduce memory usage? I still need images, CSS, and JS to load and apply, but I can get rid of some other features; for example, I can turn off caching (and load all media files every time). I do need to speed it up and make it less greedy, and I'm ready to recompile it. Any ideas here?
Thanks a lot!
I assume you call PhantomJS once for every rendering job, which creates a new phantomjs process each time. You could instead batch as many jobs as you can into one JS script and call PhantomJS once for the whole batch, as in the sketch below.
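A minimal sketch of that batching idea, using PhantomJS's standard webpage module; the URLs and output file names are made up, and the jobs run sequentially inside a single phantomjs process:

// batch.js -- run with: phantomjs batch.js
var webpage = require('webpage');
var urls = ['http://example.com/a', 'http://example.com/b']; // hypothetical URLs

function renderNext(i) {
    if (i >= urls.length) {
        phantom.exit(0);
        return;
    }
    var page = webpage.create();
    page.viewportSize = { width: 1024, height: 768 };
    page.open(urls[i], function (status) {
        if (status === 'success') {
            page.render('shot-' + i + '.png');
        }
        page.close(); // free the page's memory before the next job
        renderNext(i + 1);
    });
}
renderNext(0);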

Speeding up the Dojo Build

We are running a build of our application using Dojo 1.9, and the build itself is taking an inordinate amount of time to complete, somewhere in the range of 10 to 15 minutes.
Our application is not huge by any means. Maybe 150K LOC. Nothing fancy. Furthermore, when running this build locally using Node, it takes less than a minute.
However, we run the build on a RHEL server with plenty of space and memory, using Rhino. In addition, the tasks are invoked through Ant.
We also use ShrinkSafe as the compression mechanism, which could be part of the problem. It seems like ShrinkSafe compresses the entire Dojo library (which is enormous) every time the build runs, which seems silly.
Is there anything we can do to speed this up? Or anything we're doing wrong?
Yes, that is inordinate. I have never seen a build take so long, even on an Atom CPU.
In addition to the prior suggestion to use Node.js rather than Rhino (Rhino is by far the biggest killer of build performance): if all of your code has been correctly bundled into layers, you can set optimize to the empty string (don't optimize individual modules) and layerOptimize to "closure" (the Closure Compiler) in your build profile, so that only the layers are run through the optimizer.
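For reference, here is a sketch of what those two settings look like in a Dojo 1.8/1.9 build profile; the layer name and module ids are hypothetical:

// app.profile.js (fragment)
var profile = {
    optimize: "",              // skip optimization of individual modules
    layerOptimize: "closure",  // run only the built layers through the Closure Compiler
    layers: {
        "app/main": {
            include: [ "app/main" ] // hypothetical layer contents
        }
    }
};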
Other than that, you should make sure that there isn’t something wrong with the system you are running the build on. (Build files are on NAS with a slow link? Busted CPU fan forcing CPUs to underclock? Ancient CPU with only a single core? Insufficient/bad RAM? Someone else decided to install a TF2 server on it and didn’t tell you?)

Signing Apps Taking Forever

I am trying to speed up the workflow of my app deployment. From building to signing to getting it shipped, the process can take up to 40 minutes. What advice can somebody give me on:
1) Speeding up compile time
2) Speeding up the archive process
3) Speeding up the code signing
Thanks
For reference, my early 2009 2.93GHz C2D iMac with 8GB RAM can archive and sign a 2GB application in approximately 15-20 minutes. My late 2011 1.8GHz i7 MacBook Air can do it significantly faster. 40 minutes for a 500MB application seems far too slow unless there is something else bogging down your system. Try checking your disk with Disk Utility and seeing what else is running with Activity Monitor.
Things to consider include the size of your resources. Can any resources, such as videos or images, be compressed and still be usable? Are there a large number of files that could be compressed into a zip file and then unzipped on first launch? Also check that you do not have any long-running custom scripts in the build process. Once you've determined that resources and build configuration settings are not the issue, I would advise investing in a faster computer (more RAM and processing power) if you are running on older hardware.
Rarely changed code can be moved into libraries (perhaps with the help of additional projects, so as not to produce too many targets). That dramatically increases compilation speed, and the signing and archiving steps are usually faster than the build itself.

Measuring build times to identify bottlenecks

I'm working on improving the build for a few projects. I've already improved build times quite significantly, and I'm now at a point where I think the remaining bottlenecks are more subtle.
The build uses GNU-style makefiles. I generate a series of dependency files (.d) and include them in the makefile; otherwise there's nothing fancy going on (e.g., no precompiled headers or other caching mechanisms).
The build takes about 95 seconds on a 32-core UltraSPARC machine, running with 16 threads in parallel. Idle time hovers around 80% while the build runs, with kernel time hovering between 8 and 10%. I put the code in /tmp, but most of the compiler support files are NFS-mounted, and I believe this may be creating a performance bottleneck.
What tools exist for measuring & tracking down these sorts of problems?
In my own experience, compiling C/C++ code requires the preprocessor to read a lot of header files; I've seen situations where more than 50% of g++'s run time went into generating the complete translation unit.
Since you mention the machine is 80% idle while compiling, it must be waiting on I/O. iostat and DTrace would be a good starting point.