I only set up part of the JVM configuration at startup: -Xmx1024m -Xms1024m -XX:MaxPermSize=128m -XX:PermSize=128m
From HotSpot sources:
product(uintx, MaxNewSize, max_uintx, \
"Maximum new generation size (in bytes), max_uintx means set " \
"ergonomically") \
Since you haven't set MaxNewSize explicitly, the default value is used, and it is treated specially.
In any case, the MaxNewSize value is only a hint; NewSize holds the real size of the young generation.
The ratio of the young generation to the old generation is controlled by NewRatio. So even though MaxNewSize > MaxHeapSize, with NewRatio=2 the young and old generations are sized 1:2: the old generation occupies 2/3 of the heap, while the young generation occupies 1/3.
In your case, that is 2/3 * 1024 = 682.6M for the old space and 1/3 * 1024 = 341.3M for the new space.
The MaxNewSize limit would only kick in if it were lower than the size implied by NewRatio. Think of these as multiple, independent knobs with which to configure memory: the JVM will choose a setting that conforms to all of them.
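A quick way to see the sizes the JVM actually chose is to dump its final flag values. This is a sketch assuming a HotSpot JDK, run with the same heap options as above:
# Print the ergonomically chosen generation sizes for this configuration
java -Xmx1024m -Xms1024m -XX:+PrintFlagsFinal -version | grep -E 'NewSize|NewRatio|OldSize'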
I have a 64-bit HotSpot JDK 1.7.0 installed on a 64-bit RHEL 6 machine. I use the following JVM options for my Tomcat application.
CATALINA_OPTS="${CATALINA_OPTS} -Dfile.encoding=UTF8 -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false -Duser.timezone=EST5EDT"
# General Heap sizing
CATALINA_OPTS="${CATALINA_OPTS} -Xms4096m -Xmx4096m -XX:NewSize=2048m -XX:MaxNewSize=2048m -XX:PermSize=512m -XX:MaxPermSize=512m -XX:+UseCompressedOops -XX:+DisableExplicitGC"
# Enable the CMS GC policy
CATALINA_OPTS="${CATALINA_OPTS} -XX:+UseConcMarkSweepGC -XX:CMSWaitDuration=15000 -XX:+CMSParallelRemarkEnabled -XX:+CMSCompactWhenClearAllSoftRefs -XX:+CMSConcurrentMTEnabled -XX:+CMSScavengeBeforeRemark -XX:+CMSClassUnloadingEnabled"
# Verbose Garbage Collection Logging
CURRENT_DATE=`date +%Y%m%d%H%M%S`
CATALINA_OPTS="${CATALINA_OPTS} -verbose:gc -XX:+PrintGCDetails -Xloggc:${CATALINA_BASE}/logs/gc-${CURRENT_DATE}.log -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution"
When I analyze the garbage collection logs, they show a maximum available heap of only 3.8GB instead of the 4GB allocated to the JVM. Why is that?
The new generation (2048M) consists of 80% Eden (1638.4M) and two survivor spaces (10%, or 204.8M, each):
Heap
par new generation total 1887488K, used 134226K [0x00000006fae00000, 0x000000077ae00000, 0x000000077ae00000)
eden space 1677824K, 8% used [0x00000006fae00000, 0x00000007031148e0, 0x0000000761480000)
from space 209664K, 0% used [0x0000000761480000, 0x0000000761480000, 0x000000076e140000)
to space 209664K, 0% used [0x000000076e140000, 0x000000076e140000, 0x000000077ae00000)
concurrent mark-sweep generation total 2097152K, used 242K [0x000000077ae00000, 0x00000007fae00000, 0x00000007fae00000)
At any time, one of the survivor spaces is empty (see Generations).
So the usable heap size is 1638.4 + 204.8 + 2048 = 3891.2 MB, roughly the 3.8GB the GC logs report.
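The 80/10/10 split follows from the default SurvivorRatio=8 (Eden is eight times the size of each survivor space, so the young generation divides 8:1:1). A sketch of how to check this on a HotSpot JDK:
# Confirm the survivor ratio used to carve up the young generation
java -XX:+PrintFlagsFinal -version | grep SurvivorRatio
# 2048M young gen => 2048 * 8/10 = 1638.4M Eden, 2048 * 1/10 = 204.8M per survivor space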
I am using a 64 MB QSPI flash formatted into some UBI partitions.
df is an applet of BusyBox 1.27.2.
This is what it reports:
~ # df -h
Filesystem      Size      Used Available Use% Mounted on
/dev/ubi0_0     3.1T      1.9T      1.2T  63% /
/dev/ubi1_0     1.6T     21.8G      1.5T   1% /conf
But obviously the sizes cannot be right on a 64 MB flash! The Use% column does seem correct, though, since the files contained in the partitions weigh only a few MB.
How do you explain that?
I have been able to fix the issue.
BusyBox commit d1535216, first released in 1.28.0, substitutes the use of statfs with statvfs (https://github.com/mirror/busybox/commit/d1535216ca27047e3962d61b975bd2a638aa45a2).
I backported the commit to my project's BusyBox 1.27.2 tree, and now the sizes are correct!
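For anyone needing to do the same, this is roughly how the backport can be applied; the .patch URL is GitHub's standard commit-patch export, and the -p1 strip level assumes you run it from the top of the BusyBox source tree:
# Download the upstream fix as a patch and apply it to a 1.27.2 source tree
wget https://github.com/mirror/busybox/commit/d1535216ca27047e3962d61b975bd2a638aa45a2.patch
patch -p1 < d1535216ca27047e3962d61b975bd2a638aa45a2.patch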
Thanks anyway.
I am reworking the programmer for the Olimex iCE40HX1K board (targeted at an STM32F103 MCU), where I would also like to implement the "SPI Slave" mode to configure an image directly into RAM without using the serial flash.
Looking at the Lattice "Programming and Configuration Guide" (page 11), table 8 notes that an EPROM for an iCE40-LP/HX1K must be at least 34112 bytes (which, I guess, means that configuration files can be up to that size).
However, all images I have created so far with the icestorm tools are 32220 bytes.
I am a bit puzzled here.
Can somebody explain the difference between these two figures?
Does the HX1K need a configuration file of 32220 or 34112 bytes?
I don't know how Lattice arrived at this number. A complete HX1K bin file with BRAM initialization, but without a comment and without a multiboot header, is 32220 bytes in size. The (optional) multiboot header would add another 160 bytes (32220 + 160 = 32380). The Lattice tools usually add about 80 bytes to the comment field (32220 + 80 = 32300). Whatever I do, all the numbers I get are more than 1000 bytes short of 34112.
I don't know if there is a maximum length for the comment. Maybe there is and 34112 is the size of a bit stream with a comment of maximum length?
34112 - 32220 = 1892. Maybe someone decided to add 8 kB (8192 bytes) just in case, but that person accidentally swapped the first two digits (8192 -> 1892)? I don't know.
If you don't care about comments or multiboot headers, then iCE40 1K bit-streams have a fixed size, and that size is 32220 bytes.
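A quick way to confirm the fixed size is to pack a design and measure the result. This is a sketch of the usual icestorm flow; the top.v design file is a hypothetical placeholder:
# Synthesize, place-and-route, and pack a design for the HX1K
yosys -p 'synth_ice40 -blif top.blif' top.v
arachne-pnr -d 1k -o top.asc top.blif
icepack top.asc top.bin
# Check the bitstream size: expect 32220 bytes (no comment, no multiboot header)
stat -c %s top.bin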
So on one system, I have values that are pretty wide open:
$ ulimit -a | grep mem
max locked memory (kbytes, -l) 40000
max memory size (kbytes, -m) unlimited
virtual memory (kbytes, -v) unlimited
Another system has much more restrictive values, but I can't for the life of me find out where the 32MB upper limit (it is 32MB despite the "kbytes" mislabeling) is being set:
# ulimit -a | grep mem
max locked memory (kbytes, -l) 32
max memory size (kbytes, -m) unlimited
virtual memory (kbytes, -v) unlimited
The second system is a RHEL 5.5 box. I am looking to increase this limit for at least one user: I need a bigger APC mmap memory allocation, but I can't go above 30 MB without running into the above limit, and I would rather not hack the provided Apache init script. Where should I be trying to override the system default value so I can map a bigger segment of memory? Setting it in limits.conf for the apache user doesn't accomplish much, probably because the init script doesn't do anything through PAM.
If the per-user setting you tried isn't working, you should make sure that you've correctly matched which user is hitting the limit.
You should also be able to add a line like this to limits.conf:
* hard memlock 40000
That'll change the default setting for all users.
From the limits.conf manpage:
The syntax of the lines is as follows:
<domain> <type> <item> <value>
The fields listed above should be filled as follows:
<domain>
[snip]
· the wildcard *, for default entry.
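Since the init script bypasses PAM (and therefore limits.conf), another option is to raise the limit in the shell that launches Apache. As a sketch: on RHEL the stock init script sources /etc/sysconfig/httpd before starting the daemon, which makes that file a convenient hook (verify your init script actually does this before relying on it):
# /etc/sysconfig/httpd -- sourced by the httpd init script at startup
# Raise the locked-memory limit (in kbytes) for the apache processes
ulimit -l 40000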
Hi, I was working with Apache FOP, and when the number of pages exceeds about 130, it cannot generate the PDF.
Is there any limit on the page count or the length of the XML file?
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
        at java.io.BufferedReader.<init>(BufferedReader.java:80)
        at java.io.BufferedReader.<init>(BufferedReader.java:91)
        at org.apache.xml.dtm.ObjectFactory.findJarServiceProviderName(ObjectFactory.java:579)
        at org.apache.xml.dtm.ObjectFactory.lookUpFactoryClassName(ObjectFactory.java:373)
        at org.apache.xml.dtm.ObjectFactory.lookUpFactoryClass(ObjectFactory.java:206)
        at org.apache.xml.dtm.ObjectFactory.createObject(ObjectFactory.java:131)
        at org.apache.xml.dtm.ObjectFactory.createObject(ObjectFactory.java:101)
        at org.apache.xml.dtm.DTMManager.newInstance(DTMManager.java:135)
        at org.apache.xpath.XPathContext.reset(XPathContext.java:350)
        at org.apache.xalan.transformer.TransformerImpl.reset(TransformerImpl.java:505)
        at org.apache.xalan.transformer.TransformerImpl.transformNode(TransformerImpl.java:1436)
        at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:709)
        at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1284)
        at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1262)
        at org.apache.fop.cli.InputHandler.transformTo(InputHandler.java:214)
        at org.apache.fop.cli.InputHandler.renderTo(InputHandler.java:125)
        at org.apache.fop.cli.Main.startFOP(Main.java:166)
        at org.apache.fop.cli.Main.main(Main.java:197)
I've created reports from XML files that were several hundred thousand lines long. However, I have had some issues creating smaller reports filled with SVGs.
Your issue is probably that Java only allocates 32 MB of memory by default (if I recall correctly), so it's running out of memory.
In the fop.bat file (assuming you're running on Windows), add the following setting:
rem Increase standard Java VM heap size, so that bigger reports get enough memory
set JAVAOPTS=-Xmx512M
and alter the execution line as follows:
"%JAVACMD%" %JAVAOPTS% %LOGCHOICE% %LOGLEVEL% -cp "%LOCALCLASSPATH%" org.apache.fop.cli.Main %FOP_CMD_LINE_ARGS%
This should work with FOP 0.95 at least.
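On Linux/Unix the equivalent, as a sketch, is to set the FOP_OPTS environment variable, which the fop wrapper script passes to the JVM (the report file names here are hypothetical):
# Give the FOP JVM a bigger heap before invoking the wrapper script
export FOP_OPTS="-Xmx512M"
fop -xml report.xml -xsl report.xsl -pdf report.pdf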