JVM allocation for multiple domains in a WebLogic server

If I create 3 domains on a WebLogic server and configure each domain's setDomainEnv to use a minimum heap size of 4096m (-Xms) and a maximum of 8192m (-Xmx), will that throw an error on a 64-bit JVM? I have a machine with 8GB RAM.
I get this error:
Could not Create the Java Virtual Machine.
Minimum heap size invalid.

Your machine has 8GB of RAM, but in practice the system only has around 7.x GB available. With three domains each allowed up to 8GB, the combined maximum is 24GB, three times the physical RAM. Reduce the max heap sizes so that their sum fits within the available RAM and the issue will be resolved.

If you have a machine with 8GB, keep in mind that the total memory used by the 3 domains cannot be more than 8GB:
Total memory = OS memory + domain 1 memory + domain 2 memory + domain 3 memory
Don't forget that the operating system needs memory as well in order to run those JVMs.
I would recommend starting all 3 domains with the same min and max heap of 2GB.
https://docs.oracle.com/cd/E13222_01/wls/docs81/perform/JVMTuning.html
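As a rough sketch (the exact variable placement can vary by WebLogic version, so treat this as an assumption), you could cap each domain's heap by setting USER_MEM_ARGS, which standard domain scripts honor, in each domain's bin/setDomainEnv.sh:

# Give each of the 3 domains a fixed 2GB heap (min = max avoids heap-resize pauses)
USER_MEM_ARGS="-Xms2048m -Xmx2048m"
export USER_MEM_ARGS

Three domains at 2GB each leaves roughly 2GB of the 8GB machine for the operating system and everything else.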

Related

TensorFlow memory consumption is different on a VM

I have an implementation of an RNN for sequence classification. On my local machine (Win 10, 16GB RAM), training sometimes reaches 100% memory usage.
When I try to run it on an Azure VM (Ubuntu Linux, 14GB RAM), the process gets killed as soon as it reaches high RAM usage.
I am currently using a batch size of 5000 on the local machine, so I tried reducing it for the VM, but even at 2000 the process still gets killed.
The VM also gives me warnings:
tensorflow/core/framework/allocator.cc:113] Allocation of 1152000000 exceeds 10% of system memory.
What I don't understand is how the local machine can handle the memory usage while the VM cannot, when they differ by only 2GB of RAM.
Does someone have any clue?
Let me know if you need more information, thanks in advance!

Apache Tomcat Crashes In Google Compute Engine f1-micro

I am running Apache Guacamole on a Google Cloud Compute Engine f1-micro with CentOS 7 because it is free.
Guacamole runs fine for a while (an hour or so), then unexpectedly crashes. I get an ERR_CONNECTION_REFUSED error in Chrome, and when I run htop I can see that all of the Tomcat processes have stopped. To get it running again I just have to restart Tomcat.
In the Compute Engine console I have a message saying "Instance "guac" is overutilized. Consider switching to the machine type: g1-small (1 vCPU, 1.7 GB memory)".
I have tried limiting the memory allocation to tomcat, but that didn't seem to work.
Any suggestions?
I think the ERR_CONNECTION_REFUSED is likely due to the VM instance running short on resources: to keep the OS up, the kernel's out-of-memory killer shuts down some processes, and SSH can be one of them. Once you reboot the VM, everything resumes operating in full.
As for the over-utilization notification recommending g1-small (1 vCPU, 1.7 GB memory): note that f1-micro is a shared-core micro machine type with 0.2 vCPU and 0.60 GB of memory, backed by a shared physical core, and it is only suited to small, non-resource-intensive applications.
Depending on your Tomcat configuration, also note that:
Connecting to a database is a resource-intensive process.
When you create a Tomcat deployment from the Google Marketplace, the default VM is 1 vCPU + 3.75 GB memory (n1-standard-1), so upgrading at least to the recommended g1-small (1 vCPU, 1.7 GB memory) machine type should be a reasonable step in your case.
Why was the g1-small machine type recommended? Compute Engine uses the same CPU utilization numbers reported on the Compute Engine dashboard to decide what to recommend. These numbers are based on the average utilization of your instances over 60-second intervals, so they do not capture short CPU usage spikes.
Applications with short usage spikes might therefore need to run on a larger machine type than the one recommended by Google, to accommodate those spikes.
In summary, my suggestion would be to upgrade as recommended. Note that rightsizing warns when a VM is underutilized or overutilized; in this case it is recommending a larger VM size due to overutilization. Keep in mind that this is only a recommendation based on the available data.
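If you do stay on a small instance, one hedged option (assuming a standard Tomcat layout; setenv.sh does not exist by default and must be created) is to cap Tomcat's heap so the JVM cannot outgrow the instance's memory:

# $CATALINA_BASE/bin/setenv.sh -- picked up by catalina.sh at startup
# Keep the heap well under the f1-micro's ~0.6GB so the OS and guacd retain headroom
export CATALINA_OPTS="-Xms64m -Xmx192m"

If Tomcat still gets killed at that size, the instance is simply too small for the workload, which supports the upgrade recommendation.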

Installation of OpenStack on virtual machines (multi-node architecture)

Can I install openstack on 3 different virtual machines with the configurations as listed:
Controller Node: 1 processor, 4 GB memory, and 10 GB storage
Network Node: 1 processor, 4 GB memory, and 20 GB storage
Compute Node: 1 processor, 4 GB memory, and 30 GB storage
I want to know whether a physical machine with a virtualization-enabled processor is essential for an OpenStack deployment, or whether one can proceed with virtual machines only. I am asking because almost all the documents I have read suggest using physical nodes.
I also want to know what difference it makes if I install on a virtual machine (assuming that is possible), and why I could not install OpenStack on virtual machines (if that is indeed not possible).
Please bear in mind that I don't want to install DevStack.
I guess one can install the controller and neutron nodes on VMs.
However, for the compute node you normally need a physical machine.
A simple configuration could be (as suggested in the OpenStack docs):
Controller Node: 1-2 CPUs, 8GB RAM, 100GB storage, 1 NIC
Neutron Node: 1-2 CPUs, 2GB RAM, 50GB storage, 3 NICs
Compute Node: 2-4 CPUs, 8GB RAM, 100+GB storage, 2 NICs
However, I guess (though I'm unsure) that if the VM has CPU virtualization enabled (i.e. nested virtualization), the compute node could also be a VM; see the check sketched below.
Could someone spell out the implications of running these nodes on VMs compared to physical nodes?
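As a quick check (a sketch assuming a Linux host with KVM), you can verify whether a machine, physical or virtual, exposes the hardware virtualization extensions a compute node needs:

# Count CPU flags for Intel VT-x (vmx) or AMD-V (svm); 0 means no hardware acceleration
egrep -c '(vmx|svm)' /proc/cpuinfo
# On an Intel host, check whether nested virtualization is enabled for guests
cat /sys/module/kvm_intel/parameters/nested

If the flags are missing inside the VM, the compute node can still fall back to plain QEMU software emulation (virt_type = qemu in nova.conf), but guest performance will be far worse than on a physical compute node.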

Minimum RAM that the WebLogic Admin Console can run under

What is the minimum amount of RAM that the WebLogic Admin Console (version 10.3.2) will run under?
256MB is usually a safe bet, but for our larger production domains we use as much as 512MB. In addition to the domain size, your WLDF settings can affect the required heap size.
On a test system, you can probably get away with 128MB.
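As a sketch (assuming your domain's start scripts honor USER_MEM_ARGS, which standard WebLogic domains do), a minimal test-system heap could be passed like this:

# Start the admin server with a deliberately small heap for a test domain
USER_MEM_ARGS="-Xms128m -Xmx256m" ./startWebLogic.sh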
4 GB, per "Setting the memory arguments for the WebLogic application servers for SAS® Financial Management".

VMware Player - swaps to disk more when more memory is allocated

Windows XP is the base OS. The laptop has 4GB RAM and two 2.2GHz cores, and is about 3 years old.
I am using Windows 7 in VMware Player. If I allocate more than 1GB of RAM to the Win7 machine in the VMware Player settings, it becomes very slow and continually swaps to disk.
I've turned off all of Win7's processor-intensive features:
http://www.computingunleashed.com/speed-up-windows-7-ultimate-guide-to.html
http://www.computingunleashed.com/list-of-services-in-windows-7-that-can.html
The base OS reports using only about 144MB of RAM to the player. Very weird.
I'm using 2 virtual disks: a 20GB SCSI disk for C:\ and a 25GB SCSI disk for data on F:\.
Problem: how do I get the Win7 VM (running VS2010 and SQL 2008 R2) to perform well on an older laptop? Or should I use something else?
The problem is that, by default, VMware Player backs guest memory with a file on disk.
Read this for more info and a fix:
http://communities.vmware.com/thread/46122
If you want to achieve this for all your VMs, you can just append the following two lines:
prefvmx.minVmMemPct = 100
mainMem.useNamedFile = "false"
... inside the following VMware-wide configuration file:
C:/ProgramData/VMware/VMware Workstation/config.ini (or sometimes settings.ini)
The first line sets the percentage of configured VM memory that should fit into host memory, and the second (as already shown in the prior answer) disables the default file-based memory backing.
If you want to apply this to a specific VM only, so as not to alter the general VMware configuration, adding the following line to the VM's *.vmx file may be an alternative:
hard-disk.hostBuffer = "disabled"