I'm running Selenium scripts from IntelliJ and I noticed that several GBs of my free disk space have disappeared; I'm nearly out of space.
Where can I find the temporary files left behind by my test runs?
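One place to look first (this is my assumption, not something stated in the question): browser drivers launched by Selenium, such as chromedriver and geckodriver, normally create throwaway profile folders under the JVM's temp directory, and they are only cleaned up when driver.quit() runs. A minimal sketch to see what has piled up there:
import java.io.File;

public class TempDirCheck {
    public static void main(String[] args) {
        // Both the JVM and the WebDriver binaries it launches default to this directory
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println("Temp dir: " + tmp.getAbsolutePath());

        File[] entries = tmp.listFiles();
        if (entries == null) {
            return;
        }
        for (File f : entries) {
            // Typical prefixes: "scoped_dir" (chromedriver) and "rust_mozprofile" (geckodriver);
            // exact names vary by driver version, so adjust as needed
            if (f.getName().startsWith("scoped_dir") || f.getName().startsWith("rust_mozprofile")) {
                System.out.println(f.getName());
            }
        }
    }
}
If the scripts are stopped without driver.quit() being called, these folders tend to accumulate, which would explain the disappearing gigabytes; on Windows the same directory is usually reachable via %TEMP%.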
I have these settings:
-server
-Xms2048m
-Xmx8096m
-XX:MaxPermSize=2048m
-XX:ReservedCodeCacheSize=2048m
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=512
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-Dawt.useSystemAAFontSettings=lcd
Yes, they are maxed out.
I have also lowered idea.max.intellisense.filesize from 2500 to:
idea.max.intellisense.filesize=500
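For reference, and assuming a standard IntelliJ installation (the question does not say where the values were set), these two kinds of settings live in separate files:
Help | Edit Custom VM Options -> idea64.exe.vmoptions (or idea64.vmoptions): the -Xmx / -XX flags above
Help | Edit Custom Properties -> idea.properties: idea.max.intellisense.filesize=500
In recent IDEA versions both menu entries create per-user copies, so the files under the install directory stay untouched.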
I am working on a Java project which mostly behaves fine, although editing some Java classes is slow at times, even when just editing a String.
However, today I am touching some HTML, CSS and JavaScript files and the IDE is getting slower and slower.
CPU usage is not increasing considerably; the IDE is just slow.
I am in debug mode most of the time, but I don't have auto-build on save enabled.
What other parameters can I increase or decrease to get it to run more smoothly?
Right now it is too slow to be of any help.
I have 24 GB of RAM and an i7-4810MQ, so it's a pretty powerful laptop.
According to this JetBrains blog post, you can often double the performance of IDEA by fixing various NTFS-related disk issues:
If you are running a Windows machine with NTFS disks, there is a good chance to double the performance of IntelliJ IDEA by optimizing the MFT tables, disk folder structure and Windows paging file.
We have used the Diskeeper 2007 Pro Trial version tool to carry out the following procedure. You may, of course, repeat this with your favorite defragmenter, provided it supports equivalent functionality.
Free about 25% space on the drive you are optimizing.
Turn off any real-time antivirus protection and reboot your system.
Defragment files.
Defragment the MFT (do a Frag Shield, if you are using Diskeeper). Note that this is quite a lengthy process which also requires your machine to reboot several times.
Defragment the folder structure (perform the Directory consolidation).
Defragment the Windows paging file.
The above optimizations have a positive impact not only on IntelliJ IDEA, but on general system performance as well.
You could open VisualVM, YourKit or another profiler and see what exactly is slow.
Or just report a performance problem.
VisualVM alone will tell you whether the CPU time is going to garbage collection or to normal work.
A large heap provides a considerable benefit only when garbage collection causes lags or eats too much CPU. Also, if you enable the memory indicator via Settings | Show Memory Indicator, you will see how much of the heap is occupied and when GC clears it.
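If you want the same numbers from code rather than from the IDE widget, the standard java.lang.management API exposes them; a generic sketch, not specific to IntelliJ:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapStats {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        // used = live objects plus garbage not yet collected; max = the -Xmx ceiling
        System.out.printf("Heap used: %d MB of max %d MB%n",
                heap.getUsed() / (1024 * 1024),
                heap.getMax() / (1024 * 1024));
    }
}
If "used" stays close to "max" and GC only reclaims a little each cycle, a bigger heap helps; otherwise the slowness is likely elsewhere.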
BTW you absolutely need an SSD.
I am using Pentaho to read a very large file: 11 GB.
The process sometimes crashes with an out-of-memory exception, and sometimes it just says the process was killed.
I am running the job on a machine with 12 GB of RAM, and giving the process 8 GB.
Is there a way to configure the Text File Input step to use less memory, perhaps by using the disk more?
Thanks!
Open up spoon.sh/.bat or the pan/kettle .sh or .bat scripts and change the -Xmx figure; search for JAVAMAXMEM. Even though you have spare memory, Java won't use it unless it is allowed to. Although, to be fair, in your example above I can't really see why or how it would be consuming much memory anyway!
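To make that concrete (the exact wording of the script varies between Kettle versions, so treat this as an illustration, not the literal file contents), in older spoon.sh/pan.sh scripts the memory setting is a variable near the top, e.g.
JAVAMAXMEM="8192"
which the script turns into -Xmx8192m on the java command line. Newer Pentaho releases may instead read an environment variable such as PENTAHO_DI_JAVA_OPTIONS or carry a plain -Xmx value directly in the launch line, so search the script for whichever appears. On a 12 GB machine, 8192 (the 8 GB already mentioned in the question) still leaves some headroom for the OS.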
I have a relatively big file to process, around 10 GB. I suspect it won't fit into my laptop's RAM if MRJob decides to sort it in RAM or something similar.
At the same time, I don't want to set up Hadoop or EMR: the job is not urgent and I can simply start the worker before going to sleep and get the results the next morning. In other words, I'm quite happy with local mode. I know the performance won't be perfect, but it's OK for now.
So, can it process such 'big' files on a single weak machine? If yes, what would you recommend doing (besides setting a custom tmp dir that points to the filesystem rather than to the ramdisk, which would be exhausted quickly)? Let's assume we use version 0.4.1.
I think the RAM size won't be an issue with the Python runner of mrjob. The output of each step should be written out to a temporary file on disk, so it should not fill up the RAM, I believe. Dumping output to disk is the way it is supposed to work with Hadoop (and the reason it is slow, due to IO). So I would just run the job and see how it goes.
If the RAM size is an issue, you can create enough swap space on your laptop to make it at least run, though it will be slow if the partition isn't on an SSD.
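A concrete illustration of both suggestions (the file names and paths below are placeholders, and the option name is from memory, so check it against the 0.4.x docs). A local run looks like:
python my_mr_job.py -r local big_input.txt > output.txt
and the temp dir can be forced onto a real disk in ~/.mrjob.conf:
runners:
  local:
    base_tmp_dir: /mnt/bigdisk/mrjob-tmp
(base_tmp_dir was, as far as I recall, renamed local_tmp_dir in later releases.) If RAM still runs out, a swap file on Linux can be added with the usual commands, nothing mrjob-specific:
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile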
I am looking at a macro that imports several CSVs from a file server. Running the macro takes a while (20 seconds or so) to initialize before the first CSV gets imported; the imports themselves happen fairly quickly. If I run the macro a second time, there is no delay.
When I manually open the folder on the file server in Explorer, it also takes quite a while (30 seconds or so) until all the files are shown, so I assume the macro also has to wait until the relevant files are listed. So, my question: is there a way to have Excel automatically index that folder so it can be opened more quickly, or can I run a process in the background when opening the Excel file that would read the folder contents in advance?
Cheers,
CE
Edit: I cannot archive the folder to make it slimmer.
The files might be cached in memory, thereby avoiding lengthy disk I/O. You need to monitor your machine in terms of CPU, I/O and network activity to figure out where the time is spent. Launch perfmon.msc and add the relevant counters to do so.
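For the counters, something along these lines is usually enough (standard Windows counters; pick whichever match your setup): Processor\% Processor Time, PhysicalDisk\Avg. Disk sec/Read, and Network Interface\Bytes Received/sec. If the first run shows the time going into the network counters while the second, cached run does not, the bottleneck is the file-server directory listing rather than Excel itself.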
I am trying to speed up my app deployment workflow. From building, to signing, to getting the app out, it can take anywhere up to 40 minutes. What advice can somebody give me on:
1) Speeding up compile time
2) Speeding up the archive process
3) Speeding up the code signing
thanks
For reference, my early 2009 2.93GHz C2D iMac with 8GB RAM can archive and sign a 2GB application in approximately 15-20 minutes. My late 2011 1.8GHz i7 MacBook Air can do it significantly faster. 40 minutes for a 500MB application seems far too slow unless there is something else bogging down your system. Try checking your disk with Disk Utility and seeing what else is running with Activity Monitor.
Things to consider are the size of your resources. Can any resources such as videos or images be compressed and still be usable? Are there a large number of files that could be compressed into a zip file and then unzipped on first launch? Also check and make sure you do not have any long-running custom scripts in the build process. After you've determined that resources or a build configuration setting is not the issue, I would advise investing in a faster computer (more RAM and processing power) if you are running on older hardware.
Rarely changed code could be moved into libraries (perhaps with the help of additional projects, so as not to produce too many targets); that dramatically increases compilation speed, while signing and archiving are usually faster than the build itself.