We are running a build of our application using Dojo 1.9 and the build itself is taking an inordinate amount of time to complete. Somewhere in the range of 10-15 minutes.
Our application is not huge by any means. Maybe 150K LOC. Nothing fancy. Furthermore, when running this build locally using Node, it takes less than a minute.
However, we run the build on a RHEL server with plenty of space and memory, using Rhino. In addition, the tasks are invoked through Ant.
We also use Shrinksafe as the compression mechanism, which could be part of the problem. It seems like Shrinksafe is compressing the entire Dojo library (which is enormous) every time the build runs, which seems silly.
Is there anything we can do to speed this up? Or anything we're doing wrong?
Yes, that is inordinate. I have never seen a build take so long, even on an Atom CPU.
In addition to the prior suggestion to use Node.js rather than Rhino (by far the biggest killer of build performance): if all of your code has been correctly bundled into layers, you can set optimize to an empty string (don't optimize) and layerOptimize to "closure" (the Closure Compiler) in your build profile, so that only the layers are run through the optimizer.
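For illustration, here is a rough sketch of what driving the Dojo 1.9 builder under Node.js from an Ant-based build could look like. This is only a sketch under assumptions: it uses Groovy's AntBuilder syntax, the dojo/dijit/dojox/util packages are assumed to sit together under src/dojo-1.9, and app.profile.js is a placeholder for your own build profile (which is where optimize: "" and layerOptimize: "closure" would go).
// Rough sketch, not a drop-in: run the Dojo 1.9 builder with Node instead of Rhino.
// Paths and the profile name are placeholders for your own layout.
ant.exec(executable: "node",
         dir: "src/dojo-1.9/util/buildscripts",
         failonerror: "true") {
    // dojo.js bootstraps the builder when started with load=build;
    // the profile file is where optimize/layerOptimize are configured.
    ant.arg(line: "../../dojo/dojo.js load=build --profile ../../../app.profile.js --release")
}
With the per-module pass disabled and only the layers handed to the Closure Compiler, the builder stops minifying every individual module in the Dojo tree on each run.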
Other than that, you should make sure that there isn’t something wrong with the system you are running the build on. (Build files are on NAS with a slow link? Busted CPU fan forcing CPUs to underclock? Ancient CPU with only a single core? Insufficient/bad RAM? Someone else decided to install a TF2 server on it and didn’t tell you?)
On my Minecraft multiplayer server, I have a game called Destruction.
There, the goal is to survive several natural disasters coded into the plugin.
I use the plugin FastAsyncWorldEdit to handle the block management in the different disasters.
Other things I implemented with async tasks where possible.
Now my problem is that even with just 2 players, they are lagging through the world.
(The world is a custom-built map, 150x150 blocks.) TPS is nearly constant at 20 and RAM usage is also well within limits.
Does someone know why the hell it still lags from the players' point of view?
I think the problem is FastAsyncWorldEdit. I used that plugin for a while because it runs quite a bit faster, but it has one fatal issue: it's unpredictably unstable. I know normal WorldEdit is slow and, in some cases, laggy, but it's a lot better than FAWE. Try switching it out.
If that doesn't work, try this: install PlaceHolderAPI, then run the command /papi ecloud download Player to install the extension. Then install some plugin that allows the use of placeholders; I'd recommend MyCommand. Here's a link to a simple command I rigged up. Place it in the commands folder inside the MyCommand folder, then have the glitching players run either /ping or /p to see what their current pings are. (Either reload the plugin or restart the server to register custom commands.) If they're above 100, that could potentially be causing the issues.
I have these settings:
-server
-Xms2048m
-Xmx8096m
-XX:MaxPermSize=2048m
-XX:ReservedCodeCacheSize=2048m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=512
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-Dawt.useSystemAAFontSettings=lcd
Yes, they are maxed out.
I have also lowered idea.max.intellisense.filesize from 2500 to:
idea.max.intellisense.filesize=500
I am working on a Java project where the IDE mostly performs fine, although in some Java classes it is slow at times, even when just editing a String.
However, today I am touching some HTML, CSS and JavaScript files and it is just getting slower and slower.
CPU usage is not increasing considerably, but the IDE is just slow.
I am in debug mode most of the time, but I don't have auto build on save enabled.
What other parameters can I increase/decrease to get it to run more smoothly?
Right now it is not able to provide me with any help.
I have 24 GB of RAM and an i7-4810MQ, so it's a pretty powerful laptop.
According to this JetBrains blog post, you can often double the performance of IntelliJ IDEA by fixing various issues with NTFS-formatted disks:
If you are running a Windows machine with NTFS disks, there is a good chance to double the performance of IntelliJ IDEA by optimizing the MFT tables, disk folder structure and Windows paging file.
We have used the Diskeeper 2007 Pro Trial version tool to carry out the following procedure. You may, of course, repeat this with your favorite defragmenter, provided it supports equivalent functionality.
Free about 25% of the space on the drive you are optimizing.
Turn off any real-time antivirus protection and reboot your system.
Defragment files.
Defragment the MFT (do a Frag Shield if you are using Diskeeper). Note that this is quite a lengthy process which also requires your machine to reboot several times.
Defragment the folder structure (perform the Directory Consolidation).
Defragment the Windows paging file.
The above optimizations have a positive impact not only on IntelliJ IDEA, but on general system performance as well.
You could open VisualVM, YourKit or another profiler and see what exactly is slow.
Or just report a performance problem.
VisualVM alone would tell you whether the CPU time is going to garbage collection or to normal work.
A large heap provides a considerable benefit only when garbage collection causes lag or eats too much CPU. Also, if you enable the memory indicator (Settings | Show Memory Indicator), you will see how much of the heap is occupied and when GC clears it.
BTW you absolutely need an SSD.
I am trying to speed up the workflow of my app deployment. From building, to signing, to getting it onto the App Store, it can take anywhere up to 40 minutes. What advice can somebody give me on:
1) Speeding up compile time
2) Speeding up the archive process
3) Speeding up the code signing
thanks
For reference, my early 2009 2.93GHz C2D iMac with 8GB RAM can archive and sign a 2GB application in approximately 15-20 minutes. My late 2011 1.8GHz i7 MacBook Air can do it significantly faster. 40 minutes for a 500MB application seems far too slow unless there is something else bogging down your system. Try checking your disk with Disk Utility and seeing what else is running with Activity Monitor.
Things to consider are the size of your resources. Can any resources, such as videos or images, be compressed and still be usable? Are there a large number of files that could be compressed into a zip file and then unzipped on first launch? Also check that you do not have any long-running custom scripts in the build process. Once you've determined that resources or a build configuration setting are not the issue, I would advise investing in a faster computer (more RAM and processing power) if you are running on older hardware.
Rarely changed code could be moved into libraries (perhaps with the help of additional projects, so as not to produce too many targets); that dramatically speeds up compilation, while signing and archiving are usually faster than the build itself.
I'm working on improving the build for a few projects. I've improved build times quite significantly, and I'm at a point now where I think the bottlenecks are more subtle.
The build uses GNU style makefiles. I generate a series of dependency files (.d) and include them in the makefile, otherwise there's nothing fancy going on (eg, no pre-compiled headers or other caching mechanisms).
The build takes about 95 seconds on a 32-core SPARC Ultra, running with 16 threads in parallel. Idle time hovers around 80% while the build runs, with kernel time between 8-10%. I put the code in /tmp, but most of the compiler support files are NFS-mounted, and I believe this may be creating a performance bottleneck.
What tools exist for measuring & tracking down these sorts of problems?
In my own experience, compiling C/C++ code requires the preprocessor to read a lot of header files. I've seen situations where more than 50% of the g++ run time went into generating a complete translation unit.
Since you mention the machine sits around 80% idle while compiling, it must be waiting on I/O. iostat and DTrace would be a good starting point.
I use the Dojo build process on my application during the build stage.
But it is very slow; it takes several minutes to optimize one big .js file.
I am calling it from an Ant build script via Groovy's AntBuilder.
Here is the call:
// Launch Rhino's JavaScript shell with Shrinksafe on the classpath and hand it
// the Dojo build script together with our build profile and release options.
ant.java(classname: "org.mozilla.javascript.tools.shell.Main",
         fork: "true", failonerror: "true",
         dir: "${properties.'app.dir'}/WebRoot/release-1.5/util/buildscripts",
         maxmemory: "256m") {
    ant.jvmarg(value: "-Dfile.encoding=UTF8")
    ant.classpath() {
        ant.pathelement(location: "${properties.'app.dir'}/WebRoot/release-1.5/util/shrinksafe/js.jar")
        ant.pathelement(location: "${properties.'app.dir'}/WebRoot/release-1.5/util/shrinksafe/shrinksafe.jar")
    }
    ant.arg(file: "${properties.'app.dir'}/WebRoot/release-1.5/util/buildscripts/build.js")
    ant.arg(line: "profileFile=${properties.'app.dir'}/dev-tools/build-scripts/standard.profile.js releaseDir='../../../' releaseName=dojo15 version=0.1.0 action=clean,release")
}
and this takes about 15 minutes to optimize and combine all the Dojo files and our own.
Is there a way to speed it up, maybe by running parts of it in parallel somehow?
The script runs on a big 8-CPU Solaris box, so hardware is not the problem here.
Any suggestions?
We've had similar problems. I'm not sure exactly what it is about running in Ant that makes it so much slower. You might try increasing the memory: we couldn't even get Shrinksafe to process large layers without increasing our heap beyond the 2g limit (which required a 64-bit JVM). You might also try using Closure with the Dojo build tool.
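To make the memory suggestion concrete, here is a rough sketch against the ant.java call in the question. The heap size is only an assumed example to tune, and anything much beyond 2g needs a 64-bit JVM:
// Hypothetical tweak: give Rhino/Shrinksafe far more heap than the original 256m.
// The 4096m figure is an example only.
ant.java(classname: "org.mozilla.javascript.tools.shell.Main",
         fork: "true", failonerror: "true",
         dir: "${properties.'app.dir'}/WebRoot/release-1.5/util/buildscripts",
         maxmemory: "4096m") {
    ant.jvmarg(value: "-Dfile.encoding=UTF8")
    // classpath and build.js arguments exactly as in the original call
}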