java.lang.OutOfMemoryError during report parsing in Gatling, using sbt to test a gRPC service - jvm

I'm trying to use Gatling to test the performance of my gRPC service.
I make just one gRPC call, but with a huge number of entities in the metadata (1000 ids), and subscribe to a server stream that sends me some data for every entity every 15 seconds.
The test duration is about 2 minutes, and after the test completes Gatling tries to build a report by parsing the log file, at which point I get this error:
Parsing log file(s)...
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at java.base/java.lang.Integer.valueOf(Integer.java:1081)
at scala.runtime.java8.JFunction1$mcII$sp.apply(JFunction1$mcII$sp.scala:17)
at scala.collection.immutable.Range.map(Range.scala:59)
at io.gatling.charts.stats.StatsHelper$.buckets(StatsHelper.scala:22)
at io.gatling.charts.stats.LogFileReader.<init>(LogFileReader.scala:151)
at io.gatling.app.RunResultProcessor.initLogFileReader(RunResultProcessor.scala:52)
at io.gatling.app.RunResultProcessor.processRunResult(RunResultProcessor.scala:34)
at io.gatling.app.Gatling$.start(Gatling.scala:93)
at io.gatling.app.Gatling$.fromMap(Gatling.scala:40)
at .load.mygrpc.GatlingRunner$.main(GatlingRunner.scala:16)
at .load.mygrpc.GatlingRunner.main(GatlingRunner.scala)
I tried to increase the heap size by editing the VM options in IDEA (set -Xms2028m -Xmx4096m).
I also tried to increase the heap size for Gatling via the javaOptions setting in the build.sbt file, and directly on the command line with flags.
Also, if I run the test from the terminal with "sbt gatling:test", Gatling always does this: Dumping heap to java_pid16676.hprof ... Heap dump file created [1979821821 bytes in 15.669 secs]
And the dump is always the same size, even if I change the number of entities in my call.
Anyway, changing the Java heap size options didn't help. Is there perhaps some way to change the configuration of the simulation log, or some other manipulation to reduce the number of objects generated on the heap? Thanks for any advice, even a recommendation of another tool (ghz is not suitable).
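For reference, the build.sbt override I tried looked roughly like this (a sketch only; the exact setting name and the overrideDefaultJavaOptions helper may differ between gatling-sbt plugin versions):
// build.sbt, with the Gatling sbt plugin enabled
enablePlugins(GatlingPlugin)
// pass bigger heap settings to the JVM that runs the simulation
javaOptions in Gatling := overrideDefaultJavaOptions("-Xms2048m", "-Xmx4096m")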

Related

Wrangler publish fails due to exceeding the size limit even though gzip < 1 MB

When I run wrangler publish, I get:
Total Upload: 2879.48 KiB / gzip: 474.38 KiB
The documentation mentions a maximum size of < 1 MB. The gzipped size is well below this threshold, yet I get the following error:
Script startup timed out. This could be due to script exceeding size limits or expensive code in
the global scope. [code: 10021]
The odd thing is that it sometimes does upload the Worker, but most of the time it fails with the above error message.
This failure is due to the startup time limit of 200 ms. It sounds like your code spends more than 200 ms parsing everything and executing the global scope; sometimes it stays under the limit only because of random variation in timing.
This also implies that your Worker will always experience cold start times of around 200 ms.
To fix this, try to eliminate unneeded dependencies and unnecessary startup-time computation. Maybe there is work being done at startup, for example parsing or initializing a large dependency in the global scope, that could instead be done lazily when it is first needed by a request?

JMeter runs out of memory while trying to generate HTML report

I need help; maybe someone has had a similar problem with JMeter report generation.
I have a JMeter script with SOAP API requests that place a purchase order. There are no issues while the orders are being created, but when all requests are finished and JMeter tries to generate the report, I get an error:
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid8676.hprof ...
Heap dump file created [7011161840 bytes in 93.212 secs]
Uncaught Exception java.lang.OutOfMemoryError: Java heap space in thread Thread[StandardJMeterEngine,5,main]. See log file for details.
I used JConsole to monitor JMeter during execution and noticed that the heap mostly stayed around 25% during the test run and went up to 100% during report generation.
Thanks in advance
I've hit this issue numerous times when the results file from a test is very large. What you need to do is set the max heap size before generating the dashboard report. What value you can set depends on whether you're running a 32-bit or 64-bit OS and how much RAM is available on the machine/VM. If the machine you're generating the report on doesn't have enough RAM available, you can copy the results file to another machine that has more. For example, when I hit this issue on a test VM, I normally copy the results file locally to my laptop, which has 16 GB, and run the following command:
JVM_ARGS="-Xms3072m -Xmx12g" && export JVM_ARGS && jmeter -g resultsFile.jtl -o outputReport
The error clearly states that JMeter lacks Java heap space to complete the report generation. It may be that you executed a long-running test and your .jtl results file is so big that it doesn't fit into 7 GB of heap. It normally indicates that a developer is not too familiar with memory management and cannot come up with a better solution than loading everything into memory in one shot instead of processing it in batches/buffers, so it might be a good idea to raise an issue in JMeter's Bugzilla.
So just increase the JVM heap size allocated to JMeter by adjusting the -Xmx parameter; even if your system starts using the swap file intensively, I believe you should be able to live with this.
An alternative option is generating tables/charts using the JMeter Plugins Command Line Graph Plotting Tool; however, this way you will get individual charts rather than the fancy HTML dashboard.
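For example (a sketch only, assuming the relevant plugins are installed; the available --plugin-type values and the results file name depend on your setup), a single chart can be produced straight from the .jtl file:
JMeterPluginsCMD.bat --generate-png response-times.png --input-jtl resultsFile.jtl --plugin-type ResponseTimesOverTime --width 800 --height 600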

Slave servers in Kettle (Pentaho)

I am using the Carte web server to execute transformations remotely. Sometimes, when the web service is called multiple times at the same moment, I get a timeout and then an error with the description "GC overhead limit exceeded".
I want to know why I am getting this issue, and whether creating multiple slave servers would be the solution; if so, what is the procedure?
NOTE: https://xxxx/kettle/getSlaves return :
<SlaveServerDetections></SlaveServerDetections>
The error
GC overhead limit exceeded
means your Carte server has run out of memory. A Carte server is just a Jetty server with PDI functionality; by its nature it is a Java process that runs jobs or transformations. Jobs and transformations are by their nature just descriptions of what the Carte server should do: fetch some data, sort strings, whatever has been configured. If you want to run massive tasks on a Carte server, you have to tune the Carte startup script to give the Java process more memory and more heap space, define the best GC strategy, or whatever else has to be tuned based on your knowledge of what exactly needs tuning. Just try to Google 'GC overhead limit exceeded' and experiment with the Java process startup arguments.
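As a rough sketch (the exact variable and script names depend on your PDI version; recent releases read PENTAHO_DI_JAVA_OPTIONS in the startup scripts, and the host/port here are placeholders), you could give Carte a bigger heap before starting it:
export PENTAHO_DI_JAVA_OPTIONS="-Xms2g -Xmx6g"
./carte.sh 0.0.0.0 8081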
When the server returns
<SlaveServerDetections></SlaveServerDetections>
it just means that no slaves were detected (most probably your Carte server is a lone master). It is not related to the GC overhead error.

Memory issues with GraphDB 7.0

I am trying to load a dataset into GraphDB 7.0. I wrote a Python script in Sublime Text 3 to transform and load the data. The program suddenly stopped working and closed, the computer threatened to restart but didn't, and I lost several hours' worth of computing, as GraphDB doesn't let me query the inserts. This is the error I get in GraphDB:
The currently selected repository cannot be used for queries due to an error:
org.openrdf.repository.RepositoryException: java.lang.RuntimeException: There is not enough memory for the entity pool to load: 65728645 bytes are required but there are 0 left. Maybe cache-memory/tuple-index-memory is too big.
I set the JVM as follows:
-Xms8g
-Xmx9g
I don't exactly remember what I set as the values for the cache and index memories. How do I resolve this issue?
For the record, the database I need to parse has about 300k records. The program shut shop at about 50k. What do I need to do to resolve this issue?
Open the Workbench and check the amount of memory you have given to cache memory.
-Xmx should be a value that is enough for
cache-memory + memory-for-queries + entity-pool-hash-memory
Sadly, the last term cannot be calculated easily because it depends on the number of entities in the repository. You will have to either:
increase the Java heap with a bigger value for -Xmx, or
decrease the value for cache-memory.
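As a purely illustrative calculation (the real split depends on your configuration and data): with -Xmx9g and, hypothetically, cache-memory set to 8g, only about 1g is left to be shared by queries and the entity pool. The error above says the entity pool needs roughly another 63 MB (65728645 bytes) and finds 0 available, so either the 9g has to grow or the cache-memory share has to shrink.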

Automatically taking thread dumps or heap dumps

I'm trying to monitor a Java Application over a long period of time.
I want to automatically take a thread dump or heap dump if the number of threads or the heap usage exceeds some threshold.
Is this functionality available via VisualVM or Mission Control or other profiling tool?
Start the JMX Console in Java Mission Control.
Go to the Triggers tab and select the trigger rule "Thread Count" or "Live Set". You can select the threshold and the action to take. If you are running JDK 8, I think you can invoke a diagnostic command, such as Thread.print. It's also possible to dump a Flight Recording, which contains thread dumps among many other things.
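For reference, the same diagnostic commands can also be run by hand with jcmd once you know the process id (12345 below is just a placeholder; the dump path is an example):
jcmd 12345 Thread.print
jcmd 12345 GC.heap_dump /tmp/heap.hprof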