What is the maximum number of pages that Apache FOP can generate?

Hi, I was working with Apache FOP and when the number of pages exceeds about 130, it fails to generate the PDF.
Is there any limit on the number of pages or on the length of the XML file?
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.io.BufferedReader.<init>(BufferedReader.java:80)
at java.io.BufferedReader.<init>(BufferedReader.java:91)
at org.apache.xml.dtm.ObjectFactory.findJarServiceProviderName(ObjectFactory.java:579)
at org.apache.xml.dtm.ObjectFactory.lookUpFactoryClassName(ObjectFactory.java:373)
at org.apache.xml.dtm.ObjectFactory.lookUpFactoryClass(ObjectFactory.java:206)
at org.apache.xml.dtm.ObjectFactory.createObject(ObjectFactory.java:131)
at org.apache.xml.dtm.ObjectFactory.createObject(ObjectFactory.java:101)
at org.apache.xml.dtm.DTMManager.newInstance(DTMManager.java:135)
at org.apache.xpath.XPathContext.reset(XPathContext.java:350)
at org.apache.xalan.transformer.TransformerImpl.reset(TransformerImpl.java:505)
at org.apache.xalan.transformer.TransformerImpl.transformNode(TransformerImpl.java:1436)
at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:709)
at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1284)
at org.apache.xalan.transformer.TransformerImpl.transform(TransformerImpl.java:1262)
at org.apache.fop.cli.InputHandler.transformTo(InputHandler.java:214)
at org.apache.fop.cli.InputHandler.renderTo(InputHandler.java:125)
at org.apache.fop.cli.Main.startFOP(Main.java:166)
at org.apache.fop.cli.Main.main(Main.java:197)

I've created reports from XML files that were several hundred thousand lines long. However, I have had some issues creating smaller reports filled with SVGs.
Your issue is probably that Java by default allocates only a small maximum heap (on the order of 32-64 MB on older JVMs), so it is running out of memory.
In the fop.bat file (assuming you're running on Windows), add the following setting:
rem Increase standard Java VM heap size, so that bigger reports get enough memory
set JAVAOPTS=-Xmx512M
and alter the execution line as follows:
"%JAVACMD%" %JAVAOPTS% %LOGCHOICE% %LOGLEVEL% -cp "%LOCALCLASSPATH%" org.apache.fop.cli.Main %FOP_CMD_LINE_ARGS%
This should work with FOP 0.95, at least.
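If you are embedding FOP in your own Java application instead of using fop.bat, the same idea applies: pass -Xmx512m (or more) to whatever java command launches your code. Below is a minimal embedding sketch, assuming the classic FOP 0.9x/1.x FopFactory API; the class name and the report.xsl/data.xml/report.pdf file names are placeholders:
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;
import org.apache.fop.apps.Fop;
import org.apache.fop.apps.FopFactory;
import org.apache.fop.apps.MimeConstants;

public class GeneratePdf {
    public static void main(String[] args) throws Exception {
        // 0.9x/1.x-style factory; newer FOP versions take a base URI instead
        FopFactory fopFactory = FopFactory.newInstance();
        OutputStream out = new BufferedOutputStream(new FileOutputStream("report.pdf"));
        try {
            Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, out);
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(new File("report.xsl")));
            // SAXResult streams FO events straight into FOP instead of building a DOM,
            // which keeps the memory footprint lower for big documents
            transformer.transform(new StreamSource(new File("data.xml")),
                                  new SAXResult(fop.getDefaultHandler()));
        } finally {
            out.close();
        }
    }
}
Run it with something like java -Xmx512m -cp <fop jars and your classes> GeneratePdf, so the bigger heap applies just as it does for the fop.bat change above.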

Related

Why is MaxNewSize larger than MaxHeapSize in the JVM?

I only set some of the JVM configuration options at startup: -Xmx1024m -Xms1024m -XX:MaxPermSize=128m -XX:PermSize=128m
From HotSpot sources:
product(uintx, MaxNewSize, max_uintx, \
"Maximum new generation size (in bytes), max_uintx means set " \
"ergonomically") \
Since you haven't set MaxNewSize explicitly, the default value is taken, and that default is treated specially.
In any case, MaxNewSize is only an upper bound, while NewSize holds the real size of the young generation.
The ratio of the young generation to the old generation is controlled by NewRatio. So even though MaxNewSize > MaxHeapSize, with NewRatio=2 the young and old generations are sized 1:2: the old generation occupies 2/3 of the heap while the young generation occupies 1/3.
In your case, that is 2/3 * 1024 = 682.6M for the old space and 1/3 * 1024 = 341.3M for the young space.
The MaxNewSize limit would only kick in if it were lower than the size derived from NewRatio. Think of these as multiple, independent knobs for configuring memory: the JVM will choose a setting that conforms to all of them.
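If you want to see what the JVM actually settles on, the -XX:+PrintFlagsFinal option prints the final, ergonomically adjusted flag values. A quick check (assuming a HotSpot JDK 7 or earlier, where the permanent generation flags still exist):
$ java -Xms1024m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=128m \
      -XX:+PrintFlagsFinal -version | egrep 'NewSize|NewRatio|MaxHeapSize'
NewSize and MaxNewSize should come out at roughly one third of MaxHeapSize, matching the NewRatio=2 arithmetic above.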

Mono human-readable GC statistics at runtime

Is there a Mono profiler mode similar to Java's -Xloggc?
I would like to see a human-readable GC report while my application is running. Currently Mono can be run with the --profile=log option, but the output is in binary format and every time I need to run mprof-report to read it. The output file also contains a lot of information that is not interesting to me.
I tried to reduce the file size by specifying heapshot=14400000ms to collect statistics only every few hours, but it didn't help much. In a week I had a log of a few gigabytes.
I also tried to use the "sample" profiler, but the overhead was too high.
You can use Mono's trace filters for this. Set MONO_LOG_MASK to gc and MONO_LOG_LEVEL to debug. Then run your app normally and you will get human-readable GC statistics while your app is running:
$ export MONO_LOG_MASK=gc
$ export MONO_LOG_LEVEL=debug
$ mono ... # run your application normally ..
...
# notice the human readable GC output
mono: GC_MAJOR: (LOS overflow) pause 26.00ms, total 26.06ms, bridge 0.00ms major 31472K/0K los 1575K/0K
Mono: GC_MINOR: (Nursery full) pause 2.30ms, total 2.35ms, bridge 0.00ms promoted 31456K major 31456K los 5135K
Mono: GC_MINOR: (Nursery full) pause 2.43ms, total 2.45ms, bridge 0.00ms promoted 31456K major 31456K los 8097K
Mono: GC_MINOR: (Nursery full) pause 1.80ms, total 1.82ms, bridge 0.00ms promoted 31472K major 31472K los 11425K

Remote Proc fails to load FreeRTOS Elf

I am using this port of FreeRTOS and I am loading it onto the Cortex-M3 within an OMAP4430. This works fine using the remote proc framework and I am able to use RPMsg to communicate with it.
Sometimes, however, rproc fails to load the elf and gives the following error:
rproc remoteproc1: bad phdr da 0x0 mem 0x10310
rproc remoteproc1: Failed to load program segments: -22
rproc remoteproc1: rproc_boot() failed -22
This seems to happen when the size of the ELF file gets too large: it happens when the size is 377331 bytes, but not when I simply remove a bunch of print statements and bring the size down to 342563 bytes.
I have tracked the error message down to this piece of code: http://lxr.free-electrons.com/source/drivers/remoteproc/remoteproc_elf_loader.c?v=3.9#L188. It seems that rproc_da_to_va is unable to find a segment in memory large enough to fit the ELF.
How can I make sure that there is enough memory for the size of my ELF? Can I tell the kernel that I specifically want a certain region preallocated for this kind of thing? Is there some way to ensure that this part of my ELF remains small?
Thanks!
Make sure that the FreeRTOS configuration constants configTEXT_SIZE and configDATA_SIZE agree with the amounts demanded by your linker script. For example, if your linker script contains
MEMORY
{
TEXT (rwx) : ORIGIN = 0x00000000, LENGTH = 1M
DATA (rwx) : ORIGIN = 0x80000000, LENGTH = 1M
}
then you should set configTEXT_SIZE and configDATA_SIZE to 0x100000.
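The matching constants then go in the port's FreeRTOSConfig.h; a sketch (exactly which header these constants live in depends on the port you are using):
/* Must match the LENGTH of the TEXT and DATA regions in the linker script above */
#define configTEXT_SIZE    0x100000    /* 1M */
#define configDATA_SIZE    0x100000    /* 1M */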

Pig local mode, group, or join = java.lang.OutOfMemoryError: Java heap space

Using Apache Pig version 0.10.1.21 (reported),
CentOS release 6.3 (Final), jdk1.6.0_31 (The Hortonworks Sandbox v1.2 on Virtualbox, with 3.5 GB RAM)
$ cat data.txt
11,11,22
33,34,35
47,0,21
33,6,51
56,6,11
11,25,67
$ cat GrpTest.pig
A = LOAD 'data.txt' USING PigStorage(',') AS (f1:int,f2:int,f3:int);
B = GROUP A BY f1;
DESCRIBE B;
DUMP B;
$ pig -x local GrpTest.pig
[Thread-12] WARN org.apache.hadoop.mapred.JobClient - No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
[Thread-12] INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths to process : 1
[Thread-13] INFO org.apache.hadoop.mapred.Task - Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@19a9bea3
[Thread-13] INFO org.apache.hadoop.mapred.MapTask - io.sort.mb = 100
[Thread-13] WARN org.apache.hadoop.mapred.LocalJobRunner - job_local_0002
java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.<init>(MapTask.java:949)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:674)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:756)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:212)
[main] ERROR org.apache.pig.tools.pigstats.PigStatsUtil - 1 map reduce job(s) failed!
[main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1066: Unable to open iterator for alias B
The java.lang.OutOfMemoryError: Java heap space error occurs each time I use GROUP or JOIN in a Pig script executed in local mode. There is no error when the script is executed in mapreduce mode on HDFS.
Question 1: How come there is an OutOfMemory error when the data sample is minuscule and local mode is supposed to use fewer resources than mapreduce mode?
Question 2: Is there a solution for successfully running a small Pig script with GROUP or JOIN in local mode?
Solution: force Pig to allocate less memory for the Hadoop property io.sort.mb.
I set it to 10 MB here and the error disappears. I am not sure what the best value would be, but at least this allows practicing Pig syntax in local mode:
$ cat GrpTest.pig
--avoid java.lang.OutOfMemoryError: Java heap space (execmode: -x local)
set io.sort.mb 10;
A = LOAD 'data.txt' USING PigStorage(',') AS (f1:int,f2:int,f3:int);
B = GROUP A BY f1;
DESCRIBE B;
DUMP B;
The reason is that you have less memory allocated to Java locally than you do on your Hadoop cluster machines. This is actually a pretty common error in Hadoop. It mostly occurs when you create a really long relation in Pig at any point, because Pig always wants to load an entire relation into memory rather than lazily streaming it.
When you do something like GROUP BY where the key you're grouping on is non-sparse over many records, you frequently end up creating single long relations, at least temporarily, since you are taking a whole bunch of individual records and cramming them all into one single long relation. Either change your code so you never create very long relations at any point (i.e. group by something more sparse), or increase the memory available to Java.
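One way to increase the memory available to the local JVM, instead of shrinking io.sort.mb: the standard pig launcher script reads a PIG_HEAPSIZE environment variable (value in MB), so the following should also avoid the error, assuming your installation uses that launcher:
$ PIG_HEAPSIZE=1024 pig -x local GrpTest.pig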

Why does Rebol's copy-file fail with really big files whereas Windows Explorer doesn't?

I tried Carl's copy-file function from
http://www.rebol.com/article/0281.html
With a 155 MB file it works.
Then I tested with a 7 GB file and it fails, without reporting any limit.
Why is there a limit? I can't see anything in the code that imposes one.
There is no error message:
>> copy-file to-rebol-file "D:\#mirror_ftp\cpmove.tar" to-rebol-file "D:\#mirror_ftp\testcopy.tar"
0:00
== none
>>
REBOL 2 uses 32-bit signed integers, so it can't handle files bigger than 2,147,483,647 bytes (2^31 - 1), which is roughly 2 GB. REBOL 3 uses 64-bit integers, so it won't have this limitation.
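You can see the 32-bit ceiling directly in a REBOL 2 console; the exact error text may differ slightly from the output below, but overflowing a signed 32-bit integer raises a math error:
>> 2147483647 + 1
** Math Error: Math or number overflow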