Ambari-2.1.2, HDP-2.3.2.0-2950
Noticed too much resident memory used by the agents on a cluster that has been running for a few days.
I found a suggested fix:
https://community.hortonworks.com/questions/21253/ambari-agent-memory-leak-or-taking-too-much-memory.html
https://issues.apache.org/jira/browse/AMBARI-17539
I modified the code in main.py, but the agent still has a memory leak. The following is the code I added:
[main.py] https://community.hortonworks.com/storage/attachments/34791-mainpy.txt
If I remember correctly, in older Ambari versions there was a memory leak because a new log formatter was created every time logging was invoked.
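A rough sketch of that pattern (hypothetical code, not the actual Ambari source): building a new handler and formatter on every call makes the logger accumulate objects over time, while the usual fix is to configure logging once at module level and reuse it.

import logging

def log_status_leaky(message):
    # Leaky pattern: every call attaches a fresh handler/formatter,
    # so the logger keeps growing its handler list.
    logger = logging.getLogger("ambari_agent")
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.info(message)

# Fixed pattern: configure the logger once and reuse it from every call site.
logger = logging.getLogger("ambari_agent")
_handler = logging.StreamHandler()
_handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(_handler)

def log_status(message):
    logger.info(message)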
Anyway, the best solution would be to upgrade to a newer Ambari version. Stack 2.3 is widely supported.
Related
I was trying to build with Bubblewrap and I couldn't find an answer anywhere. The build fails with an OutOfMemoryError:
cli ERROR Command failed: gradlew.bat bundleRelease --stacktrace
FAILURE: Build failed with an exception.
* What went wrong:
unable to create native thread: possibly out of memory or process/resource limits reached
* Exception is:
java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
It says it's out of memory and that to allocate more I would need to run the java command myself, which I can't. Is there anything I can do here?
As described in this issue: https://github.com/GoogleChromeLabs/bubblewrap/issues/611
Unfortunately, this is an issue with the JVM on Windows, and there isn't much that can be done in Bubblewrap.
It seems the JVM requires a contiguous block of memory addresses when allocating the heap. Even though the machine may have enough memory, and enough of it free, the JVM can fail to allocate if that memory is fragmented. There are more details in this StackOverflow question: Java maximum memory on Windows XP
The -Xmx1536 parameter is the default used by Android Studio when creating Android projects. Removing -Xmx1536 worked in this case, but is unlikely to work in all cases, for two reasons (see the gradle.properties sketch after this list):
If Gradle actually needs that amount of memory, it will still be unable to allocate it and the build will fail (at a later time).
It may still be impossible to allocate smaller chunks of memory.
Rebooting Windows is also known to help in these cases.
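For context, in a typical Android/Bubblewrap project that heap setting comes from the project's gradle.properties, so removing or lowering the -Xmx value there is the practical workaround. A hypothetical sketch (your project's file and values may differ):

# gradle.properties in the project root, as generated by Android Studio
org.gradle.jvmargs=-Xmx1536m
# Removing the line lets the JVM pick its default heap, or a smaller value
# may still fit into a fragmented address space, e.g.:
# org.gradle.jvmargs=-Xmx768m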
I recently started using the Spring Tool Suite (STS) for development under Spring Boot.
Shortly after starting, STS hangs hard and I cannot continue working.
The OS is Linux Mint.
How can I diagnose the cause of the hang?
If STS hangs, I would strongly recommend that you:
capture a thread dump while STS hangs (using jps and jstack; example commands are shown after this list)
file an issue at https://github.com/spring-projects/spring-ide/issues and attach the full thread dump
continue to capture thread dumps and attach them to the issue; sometimes slightly different thread dumps reveal interesting details
The thread dump usually reveals whether there is a thread deadlock under the hood, a network issue, or some other very long running process doing some work.
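For example, assuming the JDK tools are on the PATH (the PID and output file name below are placeholders):

# find the PID of the STS/Eclipse JVM
jps -lv
# capture a thread dump from that process while the UI is hanging
jstack -l <PID> > sts-threaddump-1.txt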
Faced a similar issue on Windows. It was encountered when the JAR was extracted with jar -xvf spring-tool-***.jar.
It was resolved by executing the JAR instead of extracting it directly:
java -jar spring-tool-***.jar
When using the FIFO scheduler with YARN (FIFO is the default, right?), I found that YARN reserves some memory/CPU to run the application. Our application doesn't need any of this reservation, since we want a fixed number of cores for the tasks depending on the user's account. This reserved memory makes our calculations inaccurate, so I am wondering if there is any way to solve this. If removing it is not possible, we are trying to scale the cluster (we are using Dataproc on GCP), but without graceful decommission, scaling down the cluster shuts down running jobs.
Is there any way to get rid of the reserved memory?
If not, is there any way to implement graceful decommission for YARN 2.8.1? I found cases with 3.0.0 alpha (GCP only has the beta version), but couldn't find any working instructions for 2.8.1.
Thanks in advance!
Regarding 2, Dataproc supports YARN graceful decommissioning because Dataproc 1.2 uses Hadoop 2.8.
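As a sketch of how a graceful scale-down is requested (cluster name, worker count, and timeout are placeholders; check the current gcloud documentation for the exact flag and accepted values):

gcloud dataproc clusters update my-cluster \
    --num-workers 4 \
    --graceful-decommission-timeout 1h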
I was trying to profile my Spark application (which uses the G1 GC) with JProfiler. I came across their website, where they mention that JProfiler remote profiling works reliably only with the standard garbage collector:
http://resources.ej-technologies.com/jprofiler/help/doc/index.html
(Under section Probe Settings/Starting Remote Sessions)
"Please note that the profiling interface JVMTI only runs reliably with the standard garbage collector. If you have VM parameters on your command line that change the garbage collector type such as
-Xincgc
-XX:+UseParallelGC
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
please make sure to remove them. It might be a good idea to remove all -XX options if you have problems with profiling."
Is this true for the latest version of JProfiler (9.0) as well? Does this affect CPU profiling too?
I am able to do memory profiling with VisualVM; I'm just wondering why this limitation (if it is one) exists in JProfiler.
It's not a limitation, it's just advice. Some of the alternative GCs are not well tested with JVMTI (the profiling interface of the JVM). G1 GC will become the standard GC, so the situation is different there.
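If you want to double-check which collector a JVM actually selects once the explicit GC flags are removed, this standard HotSpot flag prints the ergonomically chosen options (output format varies by JVM version):

java -XX:+PrintCommandLineFlags -version
# the output includes the chosen collector, e.g. -XX:+UseParallelGC or -XX:+UseG1GC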
When I run IntelliJ on Linux Mint, I get a warning in the terminal.
/IDEA/idea-IU-141.178.9/bin $ ./idea.sh
[ 47358] WARN - om.intellij.util.ProfilingUtil - Profiling agent is not enabled. Add -agentlib:yjpagent to idea.vmoptions if necessary to profile IDEA.
[ 63287] WARN - .ExternalResourceManagerExImpl - Cannot find standard resource. filename:/META-INF/tapestry_5_3.xsd class=class com.intellij.javaee.ResourceRegistrarImpl, classLoader:null
I'm using 64-bit Java 8. I think this error is leading to a CSS loading problem.
Does anyone know what's going on with this?
It's not an error, it's a warning.
The built-in profiler, which provides diagnostics like CPU and memory usage (useful when IntelliJ becomes unresponsive or sluggish), simply isn't enabled.
Don't worry about it; if you aren't seeing a lot of startup pain, it's nothing critical. If you do need the profiler enabled, you can follow the instructions here to add the appropriate runtime flags to your executable.
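For reference, the warning itself names the flag to add. A hypothetical sketch of the vmoptions file in the bin directory (file name and bundled agent availability may differ by IDEA version):

# bin/idea64.vmoptions (or idea.vmoptions for the 32-bit launcher)
# ...keep the existing -Xms/-Xmx and other options as they are...
-agentlib:yjpagent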
Does Mint use the Oracle JDK or OpenJDK? IntelliJ recommends the Oracle JDK for IDEA. It fixed at least one problem I had with it under Fedora (at the cost of some disk space).