GC overhead limit exceeded in JMeter testing

My test execution throws a "GC overhead limit exceeded" exception on Linux CentOS 7. I changed the heap in jmeter.bat to a max size of 6g and a min size of 512m. I am not using any listeners, preprocessors, or an HTTP Header Manager; I use a Regular Expression Extractor on 2 samplers and a shared Constant Timer. I run my test from the terminal and store the results in a JTL file, with 250 users, a ramp-up period of 1, and the scheduler set to 5400 seconds. But the issue still persists.
System configuration:
RAM: 8 GB
CPU: octa-core, 3.12 GHz
Swap: 16 GB

You say that you changed jmeter.bat, but the problem is on Linux, which doesn't use jmeter.bat. Unless it's a typo, try changing jmeter or jmeter.sh instead (whichever one you use to invoke JMeter).
Generally I would not recommend more than 2GB for moderate use, and 4GB for heavy use. For instance my settings are:
HEAP="-Xms4096m -Xmx4096m"
and I can run up to 300 concurrent users with a lot of samplers/heavy scripting, even in GUI mode. Setting a larger heap can cause longer GC pauses, which can in turn cause the exception you are getting.
After you start JMeter, run the following command to make sure the memory settings are indeed as you expect them to be:
ps -ef | grep JMeter

I had actually changed Xmx in the jmeter.bat file instead of the jmeter.sh file, even though I was running the test on Linux. jmeter.bat is used on Windows and jmeter.sh on Linux, which is why the above error kept occurring. Once I changed it in the jmeter.sh file, it worked perfectly.
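For reference, the change on Linux looks roughly like this (a sketch; the variable name and file layout vary between JMeter versions, so check your own bin directory):

# in apache-jmeter/bin/jmeter (sourced by jmeter.sh in recent versions):
HEAP="-Xms512m -Xmx6g"

# run the test in non-GUI mode, then verify the flags actually took effect:
$ ./bin/jmeter -n -t test.jmx -l result.jtl &
$ ps -ef | grep [j]meter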

Related

Java option Xms doesn't seem to work as expected. What could be the problem? [duplicate]

I have a Java program running in a CentOS box.
My -Xmx and -Xms are set to 4000 MB.
The program works fine.
But when I do free -m, the used memory shows as only 506 MB. As per my understanding, the Xms memory should be reserved for the JVM. Why does the free command not show the memory used by Java?
I have also run jstat -gccapacity $(pidof java), and there NGCMN and NGCMX are updated and have the same value.
Any support would be helpful.
I'm running my program as java -Xms41000m -Xmx42000m -jar
Even when -Xmx and -Xms are set to the same value, the space reserved for the Java heap is not immediately allocated in RAM.
The operating system typically allocates physical memory lazily, only on the first access to a virtual page. So, as long as an unused part of the Java heap is not touched, it won't really consume memory.
You may use the -XX:+AlwaysPreTouch option to forcibly touch all heap pages on JVM start.
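For example, a quick way to see the difference (a sketch; app.jar stands in for your own application):

$ # without pre-touch: resident memory grows only as the heap is actually used
$ java -Xms4000m -Xmx4000m -jar app.jar &
$ # with pre-touch: all heap pages are committed at startup
$ java -Xms4000m -Xmx4000m -XX:+AlwaysPreTouch -jar app.jar &
$ free -m   # 'used' now reflects the full heap for the pre-touched JVM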

How to reduce the size of dll/wasm compiled by aspnet/blazor?

I notice that the file size of *.wasm compiled by Rust is acceptable. However, a minimal HelloWorld compiled by ASP.NET/Blazor takes up almost 2.8 MB.
mono.wasm 1.75MB
mscorlib.dll 1.64MB
*.dll ....
If I understand correctly, mono.wasm is the VM that runs in the browser and executes the DLLs we write. Does that mean that, no matter what we do, we cannot make the size of the files less than 1.75 MB? If not, is there a way to reduce the file size?
Yes, 2.8 MB is quite a large payload for a 'Hello World' application. However, Blazor is still very much an experimental technology, which is not ready for production use yet. There are numerous reasons why the generated output is so large at the moment:
Your current application runs in interpreted mode, where the mono.wasm file ships the CLR to your browser, allowing it to execute your DLL. A faster and more size-efficient approach would be to use Ahead-of-Time (AOT) compilation, as described in this article. This would allow the compiler to strip out any library functions that are not used, giving a highly optimised output.
The features of the WebAssembly runtime itself are quite limited; future versions will add garbage collection and various other capabilities that Blazor will be able to use directly. At the moment mono.wasm includes its own garbage collector.
The Blazor project itself has a number of open issues describing various optimisations which are being actively worked on. It already performs tree-shaking and various other optimisations, but this type of work takes time.
Currently (2021), a hello-world Blazor WASM application (Visual Studio project template) downloads over 17 MB of data. With gzip, this is reduced to 7 MB, which is still huge considering that no application code/logic is included yet!
But I found out that the linker seems not to be active during debugging. If we publish the application in release mode (-c Release switch), only the necessary files are loaded. This reduces the transfer size to 5.6 MB, or even 2.4 MB with gzip enabled. You can also see this in the size of the published folder:
$ dotnet publish --output publish_debug -c Debug
$ dotnet publish --output publish_release -c Release
$ du -hs publish_debug/
30M publish_debug/
$ du -hs publish_release/
11M publish_release/
It's still a noticeable amount of data. However, this information may help others who find this question because of the much larger 17/7 MB shown in debug mode.
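If you want a rough local estimate of the gzip transfer size without putting a server in front of it (real servers compress per file, so treat this as an approximation):

$ dotnet publish --output publish_release -c Release
$ tar cf - publish_release/wwwroot | gzip -9 | wc -c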
Since the question is from 2018, it may also be worth mentioning that framework caching was improved in 3.2.0-preview2. This means the runtime and framework are stored in the browser cache after being fetched from the server the first time. Since this is handled by JavaScript, no further requests are made for these files once they are cached! The server might otherwise respond with 304 Not Modified, but even that is overhead which we no longer have.
This also means the files only appear in the network tab on the first page load! If you want to measure the loading time without the cache, delete the cache for that domain. This has to be done manually! Checking the 'disable cache' checkbox in the browser console is not enough, since Blazor seems to use local storage via JS.

Why does a .class file of Java need to be executed on the JVM?

As per my knowledge, the JVM is a process virtual machine, which means it does not emulate an entire existing computer architecture but emulates/mimics only the CPU of the host computer.
Now, my question is:
Why does a .class Java file need to be executed inside a virtual CPU (i.e. the JVM) instead of being executed directly on the actual CPU of the host computer?
For code to run on the actual CPU, it has to be in the instruction set of that CPU. Each CPU architecture has its own, distinct instruction set, so code written for one CPU won't run on another type of CPU.
The point of defining a Java Virtual Machine is so that the code will run on any type of computer, as long as it has a JVM interpreter.
The JVM instructions are not real CPU instructions but are for an abstract CPU.
Add to that the security checks that are performed on the JVM bytecode before it runs.
The JVM implementation's just-in-time (JIT) compiler translates the abstract instructions into host CPU instructions on demand to achieve better performance.
The JVM actually converts the Java bytecode into the instruction set applicable to that particular CPU. Not every CPU has the same instruction set.
So the .class file is generated so that it can run on any CPU, and the JVM does the task of converting it into the machine code applicable to the CPU it runs on.
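You can inspect these abstract instructions yourself with the javap disassembler that ships with the JDK (Hello.java is a hypothetical source file):

$ javac Hello.java   # compiles to platform-independent bytecode in Hello.class
$ javap -c Hello     # disassembles it: you see abstract opcodes like getstatic,
                     # ldc and invokevirtual instead of x86 or ARM instructions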

What is the relation between 'mapreduce.map.memory.mb' and 'mapred.map.child.java.opts' in Apache Hadoop YARN?

I would like to know the relation between the mapreduce.map.memory.mb and mapred.map.child.java.opts parameters.
Is mapreduce.map.memory.mb > mapred.map.child.java.opts?
mapreduce.map.memory.mb is the upper memory limit that Hadoop allows to be allocated to a mapper, in megabytes. The default is 512.
If this limit is exceeded, Hadoop will kill the mapper with an error like this:
Container[pid=container_1406552545451_0009_01_000002,containerID=container_234132_0001_01_000001]
is running beyond physical memory limits. Current usage: 569.1 MB of
512 MB physical memory used; 970.1 MB of 1.0 GB virtual memory used.
Killing container.
Hadoop mapper is a java process and each Java process has its own heap memory maximum allocation settings configured via mapred.map.child.java.opts (or mapreduce.map.java.opts in Hadoop 2+).
If the mapper process runs out of heap memory, the mapper throws a Java out-of-memory exception:
Error: java.lang.RuntimeException: java.lang.OutOfMemoryError
Thus, the Hadoop and Java settings are related. The Hadoop setting is more of a resource enforcement/controlling one, and the Java setting is more of a resource configuration one.
The Java heap settings should be smaller than the Hadoop container memory limit because we need to reserve memory for the non-heap parts of the Java process. Usually, it is recommended to reserve 20% of the memory for this overhead. If the settings are correct, Java-based Hadoop tasks should never get killed by Hadoop, so you should never see the "Killing container" error above.
If you experience Java out of memory errors, you have to increase both memory settings.
The following properties let you specify options to be passed to the JVMs running your tasks. These can include -Xmx to control the available heap.
Hadoop 0.x, 1.x (deprecated)       Hadoop 2.x
-------------------------------    --------------------------
mapred.child.java.opts
mapred.map.child.java.opts         mapreduce.map.java.opts
mapred.reduce.child.java.opts      mapreduce.reduce.java.opts
Note there is no direct Hadoop 2 equivalent for the first of these; the advice in the source code is to use the other two. mapred.child.java.opts is still supported (but is overridden by the other two more-specific settings if present).
Complementary to these, the following let you limit total memory (possibly virtual) available for your tasks - including heap, stack and class definitions:
Hadoop 0.x, 1.x (deprecated)       Hadoop 2.x
-------------------------------    --------------------------
mapred.job.map.memory.mb           mapreduce.map.memory.mb
mapred.job.reduce.memory.mb        mapreduce.reduce.memory.mb
I suggest setting -Xmx to 75% of the memory.mb values.
In a YARN cluster, jobs must not use more memory than the server-side config yarn.scheduler.maximum-allocation-mb or they will be killed.
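As an illustrative sketch of that 75% rule of thumb (assuming your driver supports generic -D options via ToolRunner; the jar and class names are placeholders):

$ hadoop jar myjob.jar MyDriver \
    -Dmapreduce.map.memory.mb=2048 \
    -Dmapreduce.map.java.opts=-Xmx1536m \
    -Dmapreduce.reduce.memory.mb=4096 \
    -Dmapreduce.reduce.java.opts=-Xmx3072m \
    input/ output/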
To check the defaults and precedence of these, see JobConf and MRJobConfig in the Hadoop source code.
Troubleshooting
Remember that your mapred-site.xml may provide defaults for these settings. This can be confusing: e.g. if your job sets mapred.child.java.opts programmatically, this would have no effect if mapred-site.xml sets mapreduce.map.java.opts or mapreduce.reduce.java.opts. You would need to set those properties in your job instead, to override mapred-site.xml. Check your job's configuration page (search for 'xmx') to see what values have been applied and where they came from.
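To see which of these your cluster-side mapred-site.xml sets (assuming HADOOP_CONF_DIR points at your configuration directory):

$ grep -B1 -A1 'memory.mb\|java.opts' $HADOOP_CONF_DIR/mapred-site.xml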
ApplicationMaster memory
In a YARN cluster, you can use the following two properties to control the amount of memory available to your ApplicationMaster (to hold details of input splits, status of tasks, etc):
Hadoop 0.x, 1.x                    Hadoop 2.x
-------------------------------    --------------------------
                                   yarn.app.mapreduce.am.command-opts
                                   yarn.app.mapreduce.am.resource.mb
Again, you could set -Xmx (in command-opts) to 75% of the resource.mb value.
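For example (values are illustrative only, keeping the same 75% headroom):

$ hadoop jar myjob.jar MyDriver \
    -Dyarn.app.mapreduce.am.resource.mb=2048 \
    -Dyarn.app.mapreduce.am.command-opts=-Xmx1536m \
    input/ output/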
Other configurations
There are many other configurations relating to memory limits, some of them deprecated - see the JobConf class. One useful one:
Hadoop 0.x, 1.x (deprecated)       Hadoop 2.x
-------------------------------    --------------------------
mapred.job.reduce.total.mem.bytes  mapreduce.reduce.memory.totalbytes
Set this to a low value (10) to force shuffle to happen on disk in the event that you hit an OutOfMemoryError at MapOutputCopier.shuffleInMemory.

Strange Apache behaviour when launching an external binary called by a Perl script

I am currently setting up a web service powered by Apache, running on CentOS 6.4.
This service uses Perl scripts (cgi-bin) which launch, in particular, external home-made compiled Fortran binaries.
Here is the issue: when I boot my server, everything goes well except that one of my binaries crashes systematically (with a segfault reported by the kernel) when called by my Perl scripts.
If I restart the httpd service manually (at the command line: service httpd restart), the issue is completely fixed.
I examined the Apache/system logs and nothing suspicious can be found.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup directives. I tried changing the launch order of httpd (S85httpd by default) to every other position, without success.
To summarize, my web service is only fully functional (with no external binary crash) when httpd is launched at the command line after the server has fully booted!
[EDIT] This issue is now resolved:
My Fortran binary handles very large arrays and complex functions requiring an unlimited stack size.
Although the stack size limit was defined system-wide (in /etc/security/limits.conf), the "apache/perl/fortran binary" chain was not aware of it, causing my binary to crash each time it was called. (The likely reason: limits.conf is applied by PAM at login, and services started by init at boot never go through a PAM login session.)
By contrast, when I manually restarted Apache at the shell prompt, the stack size limit was correctly inherited (my .bashrc runs 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my Perl script, e.g. setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
Thus, the new stack size limit is now passed directly from my Perl script to my binary.
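An alternative workaround at the service level, assuming the stock CentOS init script (which sources /etc/sysconfig/httpd before starting the daemon), is to raise the limit there so that every Apache child, and anything it spawns, inherits it:

$ echo 'ulimit -s unlimited' >> /etc/sysconfig/httpd
$ service httpd restart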
I've run into similar problems before. One way to solve this is to put the binary on a 'delayed start', so that it starts after everything else on your system is running. For example, add an at job to your /etc/rc.local script to start the service X minutes after boot.