I have Pentaho 8.3 and 8.2 Community Edition.
On Windows 10 and Windows Server 2019, when I increase the RAM as described in https://help.pentaho.com/Documentation/5.2/0H0/070/020/010, the change does not get applied and the application still takes a long time to start.
It works fine on Linux.
Did you change the designated memory in spoon.bat?
-Xms: sets the initial Java heap size.
-Xmx: sets the maximum Java heap size.
-XX:MaxPermSize: sets the size of the Permanent Generation.
For example, set them through PENTAHO_DI_JAVA_OPTIONS, as sketched below.
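A minimal sketch of the relevant line in spoon.bat, assuming the standard Windows launcher for PDI 8.x (the sizes are placeholders, not recommendations, and -XX:MaxPermSize is ignored on the Java 8 runtime PDI 8.x requires):

REM spoon.bat - Spoon takes its heap from this variable, not from the amount of RAM in the machine
set PENTAHO_DI_JAVA_OPTIONS="-Xms1024m" "-Xmx4096m" "-XX:MaxPermSize=256m"

Restart Spoon after editing the file so the new options are picked up.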
PS: if you want better startup times, check this:
Improving startup time of Pentaho Data Integration
I need help; maybe someone has had a similar problem with JMeter report generation.
I have a JMeter script with SOAP API requests that place a purchase order. There are no issues while the orders are being created, but when all requests have finished and JMeter tries to generate the report, I get an error:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid8676.hprof ...
Heap dump file created [7011161840 bytes in 93.212 secs]
Uncaught Exception java.lang.OutOfMemoryError: Java heap space in thread Thread[StandardJMeterEngine,5,main]. See log file for details.
I used JConsole to monitor JMeter during execution and noticed that the heap mostly stayed at around 25% during the test run and went up to 100% during report generation.
Thanks in advance
I've hit this issue numerous times when the results file from a test is very large. What you need to do is set the max heap size before generating the dashboard report. The value you can use depends on whether you're running a 32-bit or 64-bit OS and how much RAM is available on the machine/VM. If the machine you're generating the report on doesn't have enough RAM available, you can copy the results file to another machine that has more. For example, when I hit this issue on a test VM, I normally copy the results file to my laptop, which has 16GB of RAM, and run the following command:
JVM_ARGS="-Xms3072m -Xmx12g" && export JVM_ARGS && jmeter -g resultsFile.jtl -o outputReport
The error clearly states that JMeter lacks Java heap space to complete the report generation. It might be that you executed a long-running test and your .jtl results file is so big that it doesn't fit into 7GB of heap. It also suggests that the report generator loads everything into memory in one shot instead of processing it in batches/buffers, so it might be a good idea to raise an issue in JMeter's Bugzilla.
So just increase the JVM heap size allocated to JMeter by raising the -Xmx value (see the sketch below); even if your system starts using the swap file heavily, I believe you should be able to live with it.
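If you want the bigger heap to apply to every run, a minimal sketch, assuming a recent JMeter whose standard jmeter/jmeter.bat startup scripts read the heap from a HEAP variable (the variable name and defaults vary by version):

HEAP="-Xms1g -Xmx12g -XX:MaxMetaspaceSize=256m" jmeter -g resultsFile.jtl -o outputReport

On Windows you can set HEAP the same way before calling jmeter.bat, or edit the set HEAP=... line in the script itself.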
An alternative option is generating tables/charts using the JMeter Plugins Command Line Graph Plotting Tool; however, this way you will get individual charts rather than the fancy HTML dashboard. A hedged example follows.
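A rough sketch of that route, assuming the JMeter Plugins command-line tool (JMeterPluginsCMD) is installed; the plugin type and file names here are only examples:

JMeterPluginsCMD.bat --generate-png response-times.png --input-jtl resultsFile.jtl --plugin-type ResponseTimesOverTime --width 800 --height 600

Run it once per chart you need (or use --generate-csv for tabular output); each run reads the .jtl and renders a single graph.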
We have an intermittent performance issue on one of our servers, reported by a couple of users of one particular XPages application. My first thought was that it is related to Java memory usage and poor code recycling, so I'm starting there. At the moment, according to Mark Leusink's fantastic Debug Toolbar, the usage data for the server (a 64-bit Windows machine with 32GB of physical RAM) looks like this:
I'd like to confirm my understanding of the figures:
Maximum Heap Size - I'm okay with this and know how to change it (the recommended setting is a quarter of the available RAM, but given the low user population on this server, I'm sure 2GB is more than adequate).
Total Allocated - this seems low to me, but am I correct that it is set automatically by the server and that, if more Java memory is needed, the server will allocate more (up to the amount specified by the maximum heap size)? Does this happen only if garbage collection cannot free enough space to load a new Java object?
Used - I believe this shows the memory being used across the server and not just by the application containing the debug toolbar itself. Will this only show the memory used by the Domino HTTP task (so all XPages apps), or can it be affected by Java agents too?
Bonus questions:
How is the "total allocated" figure initially set? On a development server we have (with one current user - me) the figure is currently set to 256M but I can't relate this back to any Notes.ini parameters. (Also, is there a recommended value for this figure?)
If I'm correct about garbage collection running when the "total allocated" figure is reached, then, presumably, a low figure will force it to run more often. Will this have an adverse affect on server performance?
Is the fact that we are also running Traveler on this server (albeit with only about 9 users) something we should take into consideration?
Thanks
The information shown in the toolbar consists of the standard numbers the JVM provides: totalMemory(), maxMemory() and freeMemory(). See this question for a detailed explanation. The three values are for the entire JVM, not a specific application.
In the Domino HTTP JVM you can set the maximum (maxMemory) with the HTTPJVMMaxHeapSize notes.ini parameter; a sketch follows below. You cannot set or change the total allocated (totalMemory) value, but that's not needed either: the JVM will simply allocate more memory when it needs it (up to the value of maxMemory). After garbage collection has run, it will eventually free that memory again.
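For reference, a minimal notes.ini sketch (assuming a 64-bit Domino server; the second line is commonly used so the server does not recalculate the value on restart - treat both as settings to verify against your Domino version):

HTTPJVMMaxHeapSize=2048M
HTTPJVMMaxHeapSizeSet=1

Restart the HTTP task after changing these for the new maximum to take effect.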
Java agents do not affect these numbers. The only exception would be a Java agent that runs in the HTTP process (e.g. called from a browser using the ?OpenAgent command).
On a server you can run into memory issues (OutOfMemory exceptions) if the JVM needs more memory than can be allocated. You can monitor the values by creating a simple XAgent that outputs the current JVM figures:
<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core" rendered="false" viewState="nostate">
    <xp:this.afterRenderResponse><![CDATA[#{javascript:
        var externalContext = facesContext.getExternalContext();
        var writer = facesContext.getResponseWriter();
        var response = externalContext.getResponse();

        // return the figures as JSON and prevent caching
        response.setContentType("application/json");
        response.setHeader("Cache-Control", "no-cache");

        // the same values the Debug Toolbar reports, straight from the JVM
        var max = java.lang.Runtime.getRuntime().maxMemory();
        var free = java.lang.Runtime.getRuntime().freeMemory();
        var total = java.lang.Runtime.getRuntime().totalMemory();

        var memory = {
            max : max,
            total : total,
            free : free,
            available : (max - total) + free
        };

        writer.write( toJson(memory) );
        writer.endDocument();
    }]]></xp:this.afterRenderResponse>
</xp:view>
I have a CSV file of around 188MB. When I try to upload the data using the hot folder technique, it takes too much time (10-12 hours). How can I speed up the data upload?
Thanks
The default value of impex.import.workers is 1. Try changing this value (an example is sketched below). I also recommend running a performance test with a slightly smaller file than 188MB first, just to get quick results.
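A hedged sketch, assuming the property is set in local.properties on the node doing the import (4 is just a placeholder; see the sizing advice below):

# local.properties - number of parallel ImpEx worker threads used by the import
impex.import.workers=4

Depending on your setup, the node may need a restart (or a runtime change through the hAC) for the new value to take effect.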
Adjust the number of ImpEx threads on the backoffice server to speed up ImpEx file processing. It is recommended that you start with a value equal to the number of cores available on a backoffice node. You should not set it higher than 2 x the number of cores, and only then if the ImpEx processes will be the only thing running on the node. The actual value will usually be somewhere in between and can only be determined by testing and by analyzing the other processes, jobs and apps running on your server, to ensure you are not maxing out the CPU.
NOTE: this value can be higher in lower environments, since Hybris will likely be the only process running there.
Taken from Tuning Parameters - Hybris Wiki
I am trying to load a dataset into GraphDB 7.0. I wrote a Python script in Sublime Text 3 to transform and load the data. The program suddenly stopped working and closed, the computer threatened to restart but didn't, and I lost several hours' worth of computing, as GraphDB doesn't let me query the inserts. This is the error I get in GraphDB:
The currently selected repository cannot be used for queries due to an error:
org.openrdf.repository.RepositoryException: java.lang.RuntimeException: There is not enough memory for the entity pool to load: 65728645 bytes are required but there are 0 left. Maybe cache-memory/tuple-index-memory is too big.
I set the JVM options as follows:
-Xms8g
-Xmx9g
I don't remember exactly what values I set for the cache and index memories. For the record, the dataset I need to parse has about 300k records, and the program stopped at about 50k. What do I need to do to resolve this issue?
Open the Workbench and check the amount of memory you have allocated to cache memory.
Xmx should be a value that is big enough for:
cache-memory + memory-for-queries + entity-pool-hash-memory
Sadly, the last of these cannot be calculated easily because it depends on the number of entities in the repository. You will have to either (see the sketch after this list):
Increase the Java heap with a bigger value for -Xmx, or
Decrease the value for cache-memory.
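A rough sketch of both options (the exact files depend on how you start GraphDB, and the values are placeholders): raise the heap where the JVM options are defined, and/or lower the cache figures in the repository configuration.

# JVM options for the GraphDB process
-Xms8g
-Xmx12g

# repository configuration (Turtle), assuming the OWLIM-style parameters GraphDB 7 uses
owlim:cache-memory "2g" ;
owlim:tuple-index-memory "1g" ;

After lowering cache-memory, restart the repository so the entity pool can be loaded within the remaining heap.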
As the title says, I'm trying to figure out how much RAM is needed to generate a large report and export it to Excel using SQL Server Reporting Services on Windows Server 2003.
It is not an option to upgrade to SS2008, and it is also not an option to export to CSV.
Strictly from a hardware point of view, what is a good configuration for a high-load server?
(CPUs, RAM, storage)
You've got problems - the maximum memory size that SSRS2005 can handle is 2GB. (There is a dodge to enable it to handle 3GB, but it's not recommended for production servers.)
SSRS2008 has no such limitation, which is why the normal response in this situation is to recommend an upgrade to 2008.
If your large report won't run on a machine with 2GB available, it doesn't matter how much RAM (or other resources) you put on your server - the report still won't run.
Your only option (given the restrictions stated above) would be to break the report up into smaller pieces and run them one at a time.