Why does oVirt 4.1 (Red Hat) have a "maximum memory" value that is reserved on the host, blocking creation of more KVM machines?

Why does a virtual machine in oVirt 4.1 (Red Hat) have both "memory" and "maximum memory" settings? By default, maximum memory is 2x the memory value, and this eats into the memory available on the host, which blocks the creation of additional virtual machines. In oVirt 3.5 this "max memory" setting did not exist.

This corresponds to a new libvirt feature: the maximum memory that can be hot-plugged:
maxMemory
The run-time maximum memory allocation of the guest. The initial memory specified by either the <memory> element or the NUMA cell size configuration can be increased by hot-plugging memory up to the limit specified by this element. The unit attribute behaves the same as for <memory>. The slots attribute specifies the number of slots available for adding memory to the guest. The bounds are hypervisor-specific. Note that due to alignment of the memory chunks added via memory hotplug, the full size allocation specified by this element may be impossible to achieve. Since 1.2.14, supported by the QEMU driver.
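For illustration, a domain XML fragment using this element might look like the following (the sizes and slot count are hypothetical values, not oVirt defaults):

```xml
<domain type='kvm'>
  <!-- initial allocation: 2 GiB -->
  <memory unit='KiB'>2097152</memory>
  <!-- hot-plug ceiling: 4 GiB, spread over 4 DIMM slots -->
  <maxMemory slots='4' unit='KiB'>4194304</maxMemory>
  ...
</domain>
```

The gap between <memory> and <maxMemory> is what the host must be prepared to supply if memory is hot-plugged later, which is why a large default multiplier can look like "reserved" memory.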

Related

qemu: hugepages for guest memory

qemu-kvm is launched with the -mem-prealloc -mem-path /mnt/hugepages/libvirt/qemu parameters. Does this mean that the guest memory will be allocated from the hugepages on the host?
Also, libvirt defines the following in domain xml:
<memoryBacking>
<hugepages/>
<locked/>
</memoryBacking>
This basically tells the hypervisor to use hugepages for its guest memory, and these pages will be locked in the host's memory (not allowed to be swapped out).
Do these options work together (-mem-prealloc and libvirt's XML directive), with one supplementing the other?
The <hugepages/> element will cause guest RAM to be allocated from the default huge pages size pool, and will cause immediate allocation of those huge pages at QEMU startup. Huge pages are always locked into host RAM.
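As a rough sketch (the guest size here is hypothetical), the number of default-sized huge pages the host must reserve for a guest follows from the guest RAM divided by the huge page size:

```shell
# Hypothetical: a 4 GiB guest on a host with 2 MiB huge pages
guest_mib=4096
hugepage_kib=2048
pages=$(( guest_mib * 1024 / hugepage_kib ))
echo "$pages"    # 2048 pages of 2 MiB each

# These would then be reserved on the host, e.g. via:
#   sysctl vm.nr_hugepages=$pages   (requires root)
```

Because the pages are allocated and locked at QEMU startup, under-reserving shows up immediately as a failure to start the guest rather than as a runtime error.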
The <locked/> element is used to control other QEMU memory allocations that are separate from guest RAM. It ensures all non-guest RAM pages are also locked into host RAM.
If you are not trying to overcommit host RAM, then using huge pages on the host side is a very good idea for maximizing performance as it improves page table hit rates. This is true regardless of whether the guest OS in turn uses huge pages. This performance benefit will apply to all workloads and all guest OS.
The KSM feature is for when you are trying to overcommit host RAM (i.e. run many guests whose total RAM exceeds available host RAM). It tries to share identical RAM pages between guests to reduce the risk of swapping during overcommit. KSM has a quite significant CPU consumption penalty though, so it's a tradeoff as to whether it is useful for a particular workload and/or guest OS.

Specify maximum CPU and memory utilization of ABAP Application Server

Is there any means to configure an ABAP application server so that it only consumes X percent of CPU and Y percent of memory on the machine it runs on?
Or is this rather something that is only possible on the operating system level?
Googling revealed how to view the operating system status. As this is view-only, I would be interested in a means to also control these limits from within the ABAP application server.
I'm not aware of a method to bind the memory allocation of an application server to a manually adjusted percentage of the host OS memory. There are several profile parameters that control the different memory types used in an application server. SAP offers a detailed documentation on their memory management.
As far as I know, the maximum memory allocated by an application server is controlled by the size of the roll area for work processes, the extended memory and the total heap size. Profile parameters for those settings are:
ztta/roll_area / ztta/roll_first (per work process, not total)
em/initial_size_MB
abap/heap_area_total
Work processes first receive memory from the roll area; after that they can request more memory from the extended memory up to the size of ztta/roll_extension. If all extended memory is allocated, the work process can allocate heap memory (with a few downsides, which is why that happens only when necessary).
The biggest influence on memory will be em/initial_size_MB and abap/heap_area_total (with em/initial_size_MB being the main control mechanism). I'd focus on those two to adjust the total memory consumption of your application server instance.
Side note: em/initial_size_MB defaults to 70% of the total host memory, so there is already percentage-based memory allocation happening in the kernel as long as that parameter isn't set. But I'm not aware of a way to influence the percentage used by the kernel.
Update, thanks to mkysoft for the information: the two parameters CPU_CORES and PHYS_MEMSIZE are by default set by the operating system and contain the total number of CPUs and the total memory installed in the system. You can manually override them, reducing the resources the SAP kernel uses to calculate default values for several kernel parameters. You could for instance reduce PHYS_MEMSIZE and leave em/initial_size_MB to default. Both parameters also allow you to set a percentage instead of absolute values. You could for instance set both values to 50%, reducing the maximum resources for that application server instance to 50 % of what the hardware has to offer. There's some additional documentation for those two parameters available as well.
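For illustration, the relevant entries in an instance profile might look like this (the values are hypothetical; consult SAP's memory management documentation before changing them in a real system):

```
# Halve the basis the SAP kernel uses when computing parameter defaults
PHYS_MEMSIZE = 50%
CPU_CORES = 50%

# Alternatively, set the memory parameters explicitly (absolute values)
em/initial_size_MB = 8192
abap/heap_area_total = 4000000000
```

Reducing PHYS_MEMSIZE while leaving em/initial_size_MB at its default is the less invasive option, since the 70% default then applies to the reduced figure.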

Java Virtual Machine and Swap Space

I'd appreciate it if any expert here could advise on the JVM and swap space questions below. Thanks in advance.
1) Am I right that the operating system will use swap space when OutOfMemory occurs in the JVM's Java heap, perm generation, or native heap? Or is swap space only used for OutOfMemory in the native heap?
2) Am I right that the native heap size is not configurable in the JVM, because the OS assigns available RAM to the JVM at runtime?
3) How can we enable swap space for the JVM, or is swap space enabled by default for all processes at the Unix and Windows level?
4) I understand that swap space can affect application performance; is it best practice to disable swap space for the JVM? If not, why?
5) How can we disable swap space or change its size for a particular JVM on both Unix and Windows, or is it only configurable at the OS level, applying to all processes?
There are a lot of questions here... Operating systems indeed use swap space to create so-called virtual memory (which is obviously bigger than the RAM you might have). It is usually enabled by default, but you need to check.
You cannot instruct the JVM to use only physical RAM, AFAIK; that would be a limitation of the OS itself and not the JVM (this should answer 5).
You can disable swap (again, for the OS, not the JVM), but that is a bad idea. There are multiple processes running inside the operating system, and they each need space to run in (which at some point might exceed your actual RAM). Swap indeed affects performance, but which is worse: some performance penalties (I assume the OS has many ways to mitigate this for you) or the death of the application? (this should answer 4)
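As a sketch of the point that swap is an OS-level concern, these are the usual Linux knobs; they apply to the whole machine, never to one JVM (the sysctl and swapoff lines are shown as comments since they require root):

```shell
# Read the kernel's swap eagerness (0-100 on most kernels; lower = less swapping)
cat /proc/sys/vm/swappiness

# Other knobs, all OS-wide and root-only, shown for illustration:
#   swapon --show             # list active swap devices
#   sysctl vm.swappiness=10   # make the kernel less eager to swap
#   swapoff -a                # disable swap entirely (generally not recommended)
```

Lowering vm.swappiness is usually a better compromise than disabling swap outright, since it keeps swap available as a safety valve.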
Regarding (2), there are two parameters that control how much heap you will have: -Xmx, the maximum heap the JVM process will use, and -Xms, the initial heap. There was recently a very good talk about this: here.
I think -Xmx and -Xms configure how much heap is available to the Java program that runs inside the virtual machine. The virtual machine itself is a native process that requires additional memory for its own operation. The JVM process can therefore consume more memory than that indicated by the -Xmx option.
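A minimal sketch of the flags in question (the class name App is hypothetical, and the Metaspace flag assumes Java 8 or later; on older JVMs the analogous flag was -XX:MaxPermSize):

```shell
# Heap bounds: start at 512 MiB, cap at 2 GiB; cap class metadata at 256 MiB.
# None of these limit the JVM's own native allocations (thread stacks,
# JIT code cache, direct buffers), so total process RSS can exceed 2 GiB.
java -Xms512m -Xmx2g -XX:MaxMetaspaceSize=256m App
```

This is why sizing a host by -Xmx alone underestimates the real footprint of a Java process.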

Cannot create JVM with -XX:+UseLargePages enabled

I have a Java service that currently runs with a 14GB heap. I am keen to try out the -XX:+UseLargePages option to see how this might affect the performance of the system. I have configured the OS as described by Oracle using appropriate shared memory and page values (these can also be calculated with an online tool).
Once the OS is configured, I can see that it allocates the expected amount of memory as huge-pages. However, starting the VM with the -XX:+UseLargePages option set always results in one of the following errors:
When -Xms / -Xmx is almost equal to the huge page allocation:
Failed to reserve shared memory (errno = 28). // 'No space left on device'
When -Xms / -Xmx is less than the huge page allocation:
Failed to reserve shared memory (errno = 12). // 'Out of memory'
I did try introducing some leeway - so on a 32GB system I allocated 24GB of shared memory and hugepages to use with a JVM configured with a 20GB heap, of which only 14GB is currently utilized. I also verified that the user executing the JVM did have group rights consistent with /proc/sys/vm/hugetlb_shm_group.
Can anyone give me some pointers on where I might be going wrong and what I could try next?
Allocations/utilization:
-Xms / -Xmx - 20GB
Utilized heap - 14GB
/proc/sys/kernel/shmmax - 25769803776 (24GB)
/proc/sys/vm/nr_hugepages - 12288
Environment:
System memory - 32GB
System page size - 2048KB
debian 2.6.26-2-amd64
Sun JVM 1.6.0_20-b02
Solution
Thanks to #jfgagne for providing an answer that led to a solution. In addition to the /proc/sys/kernel/shmall setting (specified as 4KB pages), I had to add entries to /etc/security/limits.conf as described on Thomas' blog. However, as my application is started using jsvc, I also had to duplicate the settings for the root user (note that the limits are specified in KB):
root soft memlock 25165824
root hard memlock 25165824
pellegrino soft memlock 25165824
pellegrino hard memlock 25165824
It's also worth mentioning that settings could be tested quickly by starting the JVM with the -version argument:
java -XX:+UseLargePages -Xmx20g -version
When you use huge pages with Java, it is not only the heap that uses huge pages; the PermGen does too, so do not forget to allocate space for it. It seems this is why you get a different errno message when you set Xmx close to the amount of huge pages.
There is also the shmall kernel parameter that needs to be set which you did not mention, maybe it is what is blocking you. In your case, you should set it to 6291456.
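The shmall figure follows from the shared-memory allocation divided by the kernel's 4 KiB accounting unit; sketching the arithmetic for the values in the question:

```shell
# 24 GiB of shared memory for huge pages, expressed in 4 KiB pages
shm_bytes=$(( 24 * 1024 * 1024 * 1024 ))   # 25769803776, matches shmmax above
page_bytes=$(( 4 * 1024 ))
shmall=$(( shm_bytes / page_bytes ))
echo "$shmall"   # 6291456
```

Note that shmall is counted in 4 KiB pages even though the segments themselves are backed by 2 MiB huge pages.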
The last thing to say: when using huge pages, the Xms parameter is no longer meaningful, as Java reserves the full Xmx in shared memory backed by huge pages at startup.

Where are default JVM heap sizes defined on linux (SL4)

I'm currently using sun's java 1.6 on a SL4 cluster.
For some reason, the 1.6 JVM is starting up with an impossibly large heap, and cannot start:
java -version
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
If I start it with e.g. -Xmx1800M, then it works OK. So, I'm wondering where the default heap size is set, and more importantly how to change it?
The machine has 8GB of physical memory, and I believe that sun's server JVM is supposed to start with a default of half the memory up to 512M, but this is clearly not the case, as it's trying to allocate over 1800M.
EDIT: I realise that it's possible to use _JAVA_OPTIONS, but this feels a bit clunky; I was expecting a properties file somewhere, but so far I've been unable to find it.
There is no properties file for this. According to Garbage Collector Ergonomics:
initial heap size: Larger of 1/64th of the machine's physical memory or some reasonable minimum. Before J2SE 5.0, the default initial heap size was a reasonable minimum, which varies by platform. You can override this default using the -Xms command-line option.
maximum heap size: Smaller of 1/4th of the physical memory or 1GB. Before J2SE 5.0, the default maximum heap size was 64MB. You can override this default using the -Xmx command-line option.
Note: The boundaries and fractions given for the heap size are correct for J2SE 5.0. They are likely to be different in subsequent releases as computers get more powerful.
Given you have 8GB of RAM, default maximum heap size should be 1GB assuming you're using Java 6.
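A sketch of that ergonomics rule applied to an 8 GB machine (using the J2SE 5.0 defaults quoted above):

```shell
phys_mib=8192                      # 8 GB of physical memory
quarter=$(( phys_mib / 4 ))        # 1/4 of physical memory = 2048 MiB
cap=1024                           # hard cap of 1 GiB
max_heap=$(( quarter < cap ? quarter : cap ))
echo "${max_heap} MiB"             # 1024 MiB
```

If the JVM is attempting to reserve over 1800M by default, something other than these stock ergonomics (e.g. an environment variable such as _JAVA_OPTIONS, or a vendor-patched build) is likely influencing it.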
There's no standard properties file. The (Sun) JVM has the default values hardcoded in it.