I have an RNN implementation for sequence classification. On my local machine (Windows 10, 16 GB RAM), training sometimes reaches 100% memory usage.
When I try to run it on an Azure VM (Linux Ubuntu, 14 GB RAM), the process gets killed as soon as it reaches high RAM usage.
I am currently using a batch size of 5000 on the local machine, so I tried to reduce it for the VM, but even at 2000 the process still got killed.
The VM also gives me warnings:
tensorflow/core/framework/allocator.cc:113] Allocation of 1152000000 exceeds 10% of system memory.
What I don't understand is how the local machine can handle the memory usage while the VM cannot, even though they differ by only 2 GB of RAM.
Does anyone have any clue?
Let me know if you need more information. Thanks in advance!
I am running Apache Guacamole on a Google Cloud Compute Engine f1-micro with CentOS 7 because it is free.
Guacamole runs fine for some time (an hour or so) then unexpectedly crashes. I get the ERR_CONNECTION_REFUSED error in Chrome, and when running htop I can see that all of the Tomcat processes have stopped. To get it running again I just have to restart Tomcat.
I have a message in the Compute Engine console saying "Instance "guac" is overutilized. Consider switching to the machine type: g1-small (1 vCPU, 1.7 GB memory)".
I have tried limiting the memory allocation to tomcat, but that didn't seem to work.
Any suggestions?
I think the reason for the ERR_CONNECTION_REFUSED is likely that the VM instance is falling short on resources, and in order to keep the OS up the process manager shuts down some processes. SSH is one of those processes, and once you reboot the VM, everything resumes operation in full.
As for the over-utilization notification recommending g1-small (1 vCPU, 1.7 GB memory), please note that f1-micro is a shared-core micro machine type with 0.2 vCPU and 0.60 GB of memory, backed by a shared physical core, and is only suitable for running small, non-resource-intensive applications.
Depending on your Tomcat configuration, also note that:
Connecting to a database is an intensive process.
When you create a Tomcat deployment from the Google Marketplace, the default VM setting is "VM instance: 1 vCPU + 3.75 GB memory (n1-standard-1)", so upgrading to the recommended machine type g1-small (1 vCPU, 1.7 GB memory) should be suitable in your case.
As to why the g1-small machine type was recommended: Compute Engine uses the same CPU utilization numbers reported on the Compute Engine dashboard to determine what recommendations to make. These numbers are based on the average utilization of your instances over 60-second intervals, so they do not capture short CPU usage spikes.
So, applications with short usage spikes might need to run on a larger machine type than the one recommended by Google in order to accommodate those spikes.
In summary, my suggestion would be to upgrade as recommended. Also note that rightsizing warns when a VM is underutilized or overutilized; in this case it is recommending that you increase your VM size due to overutilization. Keep in mind that this is only a recommendation based on the available data.
My Spark application is running on a remote machine in our internal lab. To analyse the memory consumption of the remote application, I attached its PID to JProfiler using attach mode (with the help of jpenable) from my local machine.
After attaching the remote application, JProfiler shows only 5% memory consumption on the remote machine, but the 'top' command on the remote CentOS machine shows 72% memory consumption, and I cannot account for the full 72% with JProfiler.
Please help me to get the full memory consumption statistics (i.e., the 72% of memory usage) with JProfiler.
top shows memory reserved by the JVM, not the actually used heap, so you cannot compare the two values.
In addition, the JVM uses native memory that does not show up in the heap. A Java profiler cannot analyze that memory.
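If it helps to see the difference from inside the process, here is a minimal sketch (a standalone class I'm calling HeapReport purely for illustration) that prints the heap the JVM has reserved versus what is actually in use; the reserved/committed figure is closer to what top reports, and even that still leaves out the JVM's native memory:

public class HeapReport {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();                 // upper bound the heap may grow to (-Xmx)
        long committed = rt.totalMemory();         // heap currently reserved from the OS
        long used = committed - rt.freeMemory();   // heap occupied by objects (garbage included until GC runs)
        System.out.printf("max heap:       %d MB%n", max / (1024 * 1024));
        System.out.printf("committed heap: %d MB%n", committed / (1024 * 1024));
        System.out.printf("used heap:      %d MB%n", used / (1024 * 1024));
    }
}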
I am developing a program which uses a huge amount of RAM, and unfortunately there is no way to decrease it. On Linux, when RAM is low, I can create a swap file and enable it on the system, which solves my problem. How can I do that on Windows, or is there any C/C++ API that can use a temp file like RAM?
On a multicore machine, does the JVisualVM CPU usage graph show total machine CPU capacity or something else?
As an example, on a machine with 16 cores, if I see CPU usage in JVisualVM going up to 50 percent, does that mean the equivalent of 8 cores fully in use?
I just tested with VisualVM 1.3.2 and the CPU display is calibrated so that 100% is 100% of all cores.
I tested by creating a simple application that entered a tight while loop upon launch. I verified using Activity Monitor that Java was using 100% of one core. In VisualVM it showed approximately 12% CPU usage.
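For reference, the test program can be as small as the following sketch (my own reconstruction, not the exact code used above; the class name BusyLoop is made up). One thread spinning in a tight loop saturates exactly one core, which a display calibrated to all cores shows as roughly 100/N percent on an N-core machine, so about 6% on the 16-core box from the question.

public class BusyLoop {
    public static void main(String[] args) {
        long counter = 0;
        // Tight loop on a single thread: pins one core at 100%.
        while (true) {
            counter++;
            if (counter < 0) {   // reset on overflow; keeps the loop from being a no-op
                counter = 0;
            }
        }
    }
}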
Windows XP as base OS. The laptop has 4 GB RAM and 2 × 2.2 GHz cores, and is about 3 years old.
I am using Windows 7 in VMware Player. If I allocate more than 1 GB of RAM to the Win7 machine in the VMware Player settings, it runs very slowly and continually swaps to disk.
I've turned off all the processor-intensive stuff in Win7:
http://www.computingunleashed.com/speed-up-windows-7-ultimate-guide-to.html
http://www.computingunleashed.com/list-of-services-in-windows-7-that-can.html
The base OS reports only about 144 MB of RAM in use by the Player. Very weird.
I'm using 2 virtual disks: a 20 GB SCSI disk for C:\ and a 25 GB SCSI disk for data on F:\.
Problem: how to get Win7 in VMware (i.e. VS2010, SQL Server 2008 R2) to run well on an older laptop, or what should I use instead?
The problem is that by default VMware Player backs the VM's memory with a file on disk.
Read this for more info and a fix:
http://communities.vmware.com/thread/46122
If you want to achieve this for all your VMs, you can just add/append the following two lines:
prefvmx.minVmMemPct = 100
mainMem.useNamedFile = "false"
... inside the following VMware-wide configuration file:
C:/ProgramData/VMware/VMware Workstation/config.ini (or sometimes settings.ini)
The first line sets the percentage of configured VM memory that should fit into the host memory and the second (as already shown in the prior answer) disables default file-based memory usage.
If you want to apply this to a specific VM only, in order to not alter general VMware configuration, adding the following line to the VM's *.vmx file may be an alternative:
hard-disk.hostBuffer = "disabled"