JMeter runs out of memory while trying to generate HTML report

I need help; maybe someone has had a similar problem with JMeter report generation.
I have a JMeter script with SOAP API requests that place a purchase order. There are no issues while the orders are being placed, but once all requests have finished and JMeter tries to generate the report, I get this error:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid8676.hprof ...
Heap dump file created [7011161840 bytes in 93.212 secs]
Uncaught Exception java.lang.OutOfMemoryError: Java heap space in thread Thread[StandardJMeterEngine,5,main]. See log file for details.
I used JConsole to monitor JMeter during execution and noticed that the heap stayed at around 25% during the test run but climbed to 100% during report generation.
Thanks in advance

I've hit this issue numerous times when the results file from a test is very large. What you need to do is set the max heap size before generating the dashboard report. How high you can set the max heap depends on whether you're running a 32-bit or 64-bit OS and how much RAM is available on the machine/VM. If the machine you're generating the report on doesn't have enough RAM, you can copy the results file to another machine that has more. For example, when I hit this issue on a test VM, I normally just copy the results file to my laptop, which has 16 GB of RAM, and run the following command:
JVM_ARGS="-Xms3072m -Xmx12g" && export JVM_ARGS && jmeter -g resultsFile.jtl -o outputReport
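On Windows the equivalent is to set the variable before launching JMeter; jmeter.bat honors JVM_ARGS the same way:

rem Windows equivalent of the command above
set JVM_ARGS=-Xms3072m -Xmx12g
jmeter -g resultsFile.jtl -o outputReport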

The error clearly states that JMeter lacks Java heap space to complete the report generation. It might be the case that you executed a long-running test and your .jtl results file is so big that it doesn't fit into 7 GB of heap. It normally indicates that the developer was not too familiar with memory management and couldn't come up with a better solution than loading everything into memory in one shot instead of processing it in batches/buffers, so it might be a good idea to raise an issue in JMeter's Bugzilla.
In the meantime, just increase the JVM heap size allocated to JMeter via the -Xmx parameter; even if your system starts using the swap file intensively, I believe you should be able to live with that for a one-off report generation.
An alternative option is to generate the tables/charts using the JMeter Plugins Command Line Graph Plotting Tool; however, this way you will get individual graphs rather than the fancy HTML dashboard.
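For reference, a minimal sketch of that CLI, assuming CMDRunner.jar from jmeter-plugins is installed in JMeter's lib/ext directory (file names here are illustrative):

java -jar lib/ext/CMDRunner.jar --tool Reporter --generate-png ResponseTimesOverTime.png --input-jtl resultsFile.jtl --plugin-type ResponseTimesOverTime --width 800 --height 600

This produces one PNG per invocation; repeat with a different --plugin-type for each graph you need.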

Related

java.lang.OutOfMemoryError at report parsing in gatling using sbt to test grpc service

I'm trying to use Gatling to test the performance of my gRPC service.
I make just one gRPC call, but with a huge number of entities in the metadata (1000 IDs), and I subscribe to server streaming that sends me some data for every entity every 15 seconds.
The test duration is about 2 minutes, and after the test completes Gatling tries to build a report by parsing the log file; that's when I get this error:
Parsing log file(s)...
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.base/java.lang.Integer.valueOf(Integer.java:1081)
at scala.runtime.java8.JFunction1$mcII$sp.apply(JFunction1$mcII$sp.scala:17)
at scala.collection.immutable.Range.map(Range.scala:59)
at io.gatling.charts.stats.StatsHelper$.buckets(StatsHelper.scala:22)
at io.gatling.charts.stats.LogFileReader.<init>(LogFileReader.scala:151)
at io.gatling.app.RunResultProcessor.initLogFileReader(RunResultProcessor.scala:52)
at io.gatling.app.RunResultProcessor.processRunResult(RunResultProcessor.scala:34)
at io.gatling.app.Gatling$.start(Gatling.scala:93)
at io.gatling.app.Gatling$.fromMap(Gatling.scala:40)
at .load.mygrpc.GatlingRunner$.main(GatlingRunner.scala:16)
at .load.mygrpc.GatlingRunner.main(GatlingRunner.scala)
I tried to increase the heap size by editing the VM options in IDEA (set -Xms2028m -Xmx4096m).
I also tried to increase the heap size for Gatling in the build.sbt file with the javaOptions variable (see the sketch after this question), and directly on the command line via flags.
Also, if I run the test from the terminal with "sbt gatling:test", Gatling always does this: Dumping heap to java_pid16676.hprof ...Heap dump file created [1979821821 bytes in 15.669 secs]
And this dump is always the same size, even if I change the number of entities in my call.
Anyway, changing the Java heap size options didn't help. Are there ways to change the configuration of the simulation log, or any manipulations to reduce the number of objects generated on the heap? Thanks for any advice, even a recommendation of another tool (ghz is not suitable).
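For reference: with the gatling-sbt plugin, the build.sbt heap override mentioned above typically looks like the sketch below. overrideDefaultJavaOptions is that plugin's helper, and the exact setting syntax varies across plugin/sbt versions, so verify it against your version's docs:

# append the heap override to build.sbt, then re-run the test
cat >> build.sbt <<'EOF'
javaOptions in Gatling := overrideDefaultJavaOptions("-Xms2048m", "-Xmx8192m")
EOF
sbt gatling:test

Note that these options only take effect if the plugin forks a separate JVM for the run; if Gatling runs inside sbt's own JVM, raise sbt's heap instead (e.g. sbt -mem 8192).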

DebugDiag PerfAnalysis always fails due to a System.ArgumentException

I have tried collecting several dumps with DebugDiag for CPU usage, but each time I try to analyze one, whether a minidump on the server or a full dump on my workstation, it results in a report like this:
There is nothing further down; DebugDiag always just seems to fail to read the dump file. Is there anything that can be done to rectify the situation, or should I move on and look at other tools?

SSIS out of memory despite tons of available memory

It starts with the proverbial:
[Notes - F1 [107]] Error: An error occurred with the following error message: "System.OutOfMemoryException: Insufficient memory to continue the execution of the program. (SSIS Integration Toolkit for Microsoft Dynamics 365, v10.2.0.6982 - DtsDebugHost, v13.0.1601.5)".
But even in its own diagnostics, it shows that plenty of memory is available (yes, that's the 32 GB I have on my system):
Error: The system reports 47 percent memory load. There are 34270687232 bytes of physical memory with 18094620672 bytes free. There are 4294836224 bytes of virtual memory with 981348352 bytes free. The paging file has 34270687232 bytes with 12832284672 bytes free.
The info messages report memory pressure:
Information: The buffer manager failed a memory allocation call for 506870912 bytes, but was unable to swap out any buffers to relieve memory pressure. 2 buffers were considered and 2 were locked. Either not enough memory is available to the pipeline because not enough are installed, other processes were using it, or too many buffers are locked.
I currently have the max rows set at 500, with the buffer size at 506,870,912 in this example. I've tried the maximum buffer size, but that fails instantly, and the minimum buffer size still throws errors. I've fiddled with various sizes, but it never gets anywhere close to processing the whole data set. The error I get when I set the DefaultBufferSize lower is:
[Notes - F1 [107]] Error: An error occurred with the following error message: "KingswaySoft.IntegrationToolkit.DynamicsCrm.CrmServiceException: CRM service call returned an error: Failed to allocate a managed memory buffer of 536870912 bytes. The amount of available memory may be low. (SSIS Integration Toolkit for Microsoft Dynamics 365, v10.2.0.6982 - DtsDebugHost,
I've looked for resources on how to tune this, but cannot find anything relevant to a 64-bit Windows 10 machine (as opposed to a server) with 32 GB of RAM to play with.
For a bit more context: I'm migrating notes from one CRM D365 environment to another using Kingsway. The notes with attachments are the ones causing the issue.
(Property screenshots omitted: Execution, Source, Destination.)
I have had this problem before, and it was not the physical memory (i.e., RAM) but the physical disk space where the database is stored. Check the available hard drive space on the drive that stores both the database and the transaction log files; chances are that it is full and therefore unable to allocate any additional disk space.
In this context, the error message citing 'memory' is a bit misleading.
UPDATE
I think this is actually caused by having too much data in the pipeline buffer. You will need either to expand the buffer's memory allocation (i.e., DefaultBufferSize) or to take a look at what data is flowing through the pipeline. A typical cause is a lot of columns with large NVARCHAR() character counts; copying the rows with Multicast will only compound the problem. With respect to the third-party component you are using, your guess is as good as mine because I have not used it.
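If it helps, buffer properties can also be overridden per run when the package is executed with dtexec, without editing the package itself. A sketch; the package and task names are hypothetical and the sizes are just a starting point:

dtexec /F MigrateNotes.dtsx ^
  /SET "\Package\Data Flow Task.Properties[DefaultBufferSize];104857600" ^
  /SET "\Package\Data Flow Task.Properties[DefaultBufferMaxRows];500"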
For anyone coming along later:
The error says "CRM service call returned an error: Failed to allocate a managed memory buffer of 536870912 bytes". I understood it to be the CRM Server that had the memory issue.
Regardless, we saw this error when migrating email attachments via the ActivityMimeAttachment entity. The problem appeared to be related to running the insert into the target CRM with too large a batch size and/or with multi-threading enabled.
We set the batch size to 1 and turned off the multi-threading, and the issue went away. (We also set the batch size to 1 on the request from the source; we saw "service unavailable" errors from an on-premise CRM when the batch size was too high and the attachments were too large.)

Memory issues with GraphDB 7.0

I am trying to load a dataset into GraphDB 7.0. I wrote a Python script in Sublime Text 3 to transform and load the data. The program suddenly stopped working and closed, the computer threatened to restart but didn't, and I lost several hours' worth of computation, as GraphDB won't let me query the inserted data. This is the error I get from GraphDB:
The currently selected repository cannot be used for queries due to an error:
org.openrdf.repository.RepositoryException: java.lang.RuntimeException: There is not enough memory for the entity pool to load: 65728645 bytes are required but there are 0 left. Maybe cache-memory/tuple-index-memory is too big.
I set the JVM as follows:
-Xms8g
-Xmx9g
I don't remember exactly what values I set for the cache and index memories. For the record, the dataset I need to parse has about 300k records; the program shut down at about 50k. What do I need to do to resolve this issue?
Open the Workbench and check the amount of memory you have given to cache memory.
-Xmx should be a value that is enough for
cache-memory + memory-for-queries + entity-pool-hash-memory
Sadly, the last term cannot be calculated easily because it depends on the number of entities in the repository. For example, with -Xmx9g, a cache-memory setting close to 9 GB leaves nothing for the entity pool, which is what the "0 left" in your error suggests. You will either have to:
Increase the Java heap with a bigger value for -Xmx, or
Decrease the value for cache-memory.
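A minimal sketch of the first option, assuming the stock bin/graphdb launcher; recent GraphDB distributions read the heap size from the GDB_HEAP_SIZE environment variable, but verify this against the 7.0 docs (you can always pass -Xms/-Xmx to the JVM directly instead):

export GDB_HEAP_SIZE=12g
bin/graphdb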

Automatically taking thread dumps or heap dumps

I'm trying to monitor a Java Application over a long period of time.
I want to automatically take a thread dump or heap dump if the number of threads or the heap usage exceeds some threshold.
Is this functionality available in VisualVM, Mission Control, or another profiling tool?
Start the JMX Console in Java Mission Control.
Go to the Triggers tab and select the trigger rule "Thread Count" or "Live Set". You can select the threshold and the action to take. If running JDK 8, I think you can invoke a diagnostic command, such as Thread.print. It's also possible to dump a Flight Recording, which contains thread dumps among many other things.
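If you'd rather script this outside a GUI, here is a minimal sketch using the JDK's jcmd tool; the process lookup and the threshold of 500 threads are illustrative:

#!/bin/sh
# Count threads via jcmd Thread.print; dump threads and heap past a threshold.
PID=$(pgrep -f MyApp)   # MyApp is a placeholder for your application
THREADS=$(jcmd "$PID" Thread.print | grep -c '^"')
if [ "$THREADS" -gt 500 ]; then
    jcmd "$PID" Thread.print > "threads_$(date +%s).txt"
    jcmd "$PID" GC.heap_dump "/tmp/heap_$(date +%s).hprof"
fi

Run it from cron or a loop to get the long-term monitoring described above.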