I was trying to build with Bubblewrap and couldn't find an answer anywhere. It fails with an OutOfMemoryError:
cli ERROR Command failed: gradlew.bat bundleRelease --stacktrace
FAILURE: Build failed with an exception.
* What went wrong:
unable to create native thread: possibly out of memory or process/resource limits reached
* Exception is:
java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached
It says it's out of memory, and that to allocate more I'd need to run the java command myself, which I can't. Is there anything I can do here?
As described in this issue: https://github.com/GoogleChromeLabs/bubblewrap/issues/611
Unfortunately, this is an issue with the JVM on Windows, and there isn't much that can be done in Bubblewrap.
It seems the JVM requires contiguous memory addresses when allocating memory. Even though the machine may have enough memory, and enough memory free, the JVM can fail to allocate if the memory is fragmented. There are more details in this StackOverflow question: Java maximum memory on Windows XP
The -Xmx1536 parameter is the default used by Android Studio when creating Android projects. Removing it worked in this case, but it is unlikely to work in all cases, for two reasons:
If Gradle actually needs that amount of memory, it will still be unable to allocate it and the build will fail (at a later time).
It may still be impossible to allocate smaller chunks of memory.
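For context, in projects generated by Android Studio this flag usually lives in gradle.properties; a sketch of what removing or lowering the cap could look like (the file location and the exact values are assumptions about your generated project):

```
# gradle.properties in the generated Android project (location may vary).
# Default line generated by Android Studio:
#   org.gradle.jvmargs=-Xmx1536m
# Either delete that line entirely, or try a smaller cap:
org.gradle.jvmargs=-Xmx1024m
```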
Rebooting Windows is a solution known to help in these cases too.
Besides heap memory, can JProfiler perform native memory profiling?
My Java application is hitting a Linux cgroup limit-exceeded issue in production, and I would like to run profiling during development or a performance-test cycle.
No, JProfiler only includes memory profiling for the Java heap.
Caused by: java.lang.IllegalStateException: Unable to complete the scan for annotations for web application [] due to a StackOverflowError. Possible root causes include a too low setting for -Xss and illegal cyclic inheritance dependencies. The class hierarchy being processed was [org.bouncycastle.asn1.ASN1EncodableVector->org.bouncycastle.asn1.DEREncodableVector->org.bouncycastle.asn1.ASN1EncodableVector]
at org.apache.catalina.startup.ContextConfig.checkHandlesTypes(ContextConfig.java:2104)
at org.apache.catalina.startup.ContextConfig.processAnnotationsStream(ContextConfig.java:2048)
at org.apache.catalina.startup.ContextConfig.processAnnotationsJar(ContextConfig.java:1994)
at org.apache.catalina.startup.ContextConfig.processAnnotationsUrl(ContextConfig.java:1964)
at org.apache.catalina.startup.ContextConfig.processAnnotations(ContextConfig.java:1917)
...
I am new to the Spring framework.
Here is the error message. Sometimes the project runs fine, but after stopping and re-running it, these messages appear (though if I leave the IDE alone for 5-10 minutes, it runs without errors again).
What exactly is happening here? I am using the newest IDEA and Tomcat 8.
Solved. The cause was a Tomcat version mismatch: I was running Tomcat 8 while the code was somehow using Tomcat 7.
I am using macOS Mojave. On Windows the code runs under either version, but on macOS the version appears to matter.
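Independent of the version fix, a common workaround for a StackOverflowError during Tomcat's annotation scan is to exclude the offending jars from the scan. A sketch, assuming the BouncyCastle jars in your deployment match bcprov-*.jar and bcpkix-*.jar:

```
# conf/catalina.properties (Tomcat 8): append the patterns to the
# existing comma-separated jarsToSkip list rather than replacing it
# ("..." stands for the entries already present in the file).
tomcat.util.scan.StandardJarScanFilter.jarsToSkip=...,\
bcprov-*.jar,bcpkix-*.jar
```

Skipped jars are not scanned for annotations at startup, so the cyclic BouncyCastle class hierarchy is never walked.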
Ambari 2.1.2, HDP 2.3.2.0-2950.
I noticed that the agents use too much resident memory on a cluster that has been running for a few days.
I found a possible solution:
https://community.hortonworks.com/questions/21253/ambari-agent-memory-leak-or-taking-too-much-memory.html https://issues.apache.org/jira/browse/AMBARI-17539
I modified the code in main.py accordingly, but the agent still leaks memory. The code I added is here:
main.py: https://community.hortonworks.com/storage/attachments/34791-mainpy.txt
If I remember correctly, older Ambari versions had a memory leak because a new log formatter was created every time logging was invoked.
In any case, the best solution would be to upgrade to a newer Ambari version; stack 2.3 is widely supported.
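To illustrate the kind of leak described (a hedged sketch in plain Python, not the actual Ambari agent code): attaching a fresh handler and formatter on every logging call makes the logger accumulate objects for the life of the process.

```python
import logging

def log_leaky(logger, message):
    # Anti-pattern resembling the described leak: a new handler and
    # formatter are created and attached on every call, so the logger's
    # handler list grows without bound.
    handler = logging.NullHandler()  # NullHandler keeps the demo quiet
    handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
    logger.addHandler(handler)
    logger.info(message)

def log_fixed(logger, message):
    # Fix: configure the handler and formatter once, then reuse them.
    if not logger.handlers:
        handler = logging.NullHandler()
        handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
        logger.addHandler(handler)
    logger.info(message)

leaky = logging.getLogger("leaky-demo")
fixed = logging.getLogger("fixed-demo")
for _ in range(1000):
    log_leaky(leaky, "heartbeat")
    log_fixed(fixed, "heartbeat")

print(len(leaky.handlers))  # 1000 -- one handler per call
print(len(fixed.handlers))  # 1
```

In a long-running daemon like an agent, the leaky variant translates directly into steadily growing resident memory.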
When I run IntelliJ on Linux Mint, I get a warning on the terminal screen.
/IDEA/idea-IU-141.178.9/bin $ ./idea.sh
[ 47358] WARN - om.intellij.util.ProfilingUtil - Profiling agent is not enabled. Add -agentlib:yjpagent to idea.vmoptions if necessary to profile IDEA.
[ 63287] WARN - .ExternalResourceManagerExImpl - Cannot find standard resource. filename:/META-INF/tapestry_5_3.xsd class=class com.intellij.javaee.ResourceRegistrarImpl, classLoader:null
I'm using Java 8 64-bit. I think this error is leading to some CSS loading problem.
Does anyone know what's going on with this?
It's not an error, it's a warning.
The built-in profiler, which provides diagnostics such as CPU and memory usage, is not enabled. Those diagnostics are useful when IntelliJ becomes unresponsive or sluggish.
Don't worry about it; unless you're encountering a lot of startup pain, it's nothing critical. If you do need the profiler enabled, you can follow the instructions here to add the appropriate runtime flags to your executable.
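If you do decide to enable it, the warning itself names the change: add the agent option to the vmoptions file IntelliJ loads at startup. (The exact file, e.g. bin/idea.vmoptions or bin/idea64.vmoptions, and the presence of the YourKit agent library on your system are assumptions here.)

```
-agentlib:yjpagent
```

Each line of a vmoptions file is a single JVM option, so the line goes in as-is.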
Does Mint use the Oracle JDK or OpenJDK? IntelliJ recommends the Oracle JDK for IDEA. Switching to it fixed at least one problem I had under Fedora (at the cost of some disk space).
I updated the JDK on my Solaris SPARC machine to 1.6.0_45 four months ago. Until yesterday it ran without any issues, but then I got the fatal error below and the instance crashed. As a workaround I restarted my server instance, and it is up and running fine now.
I need to know:
What is the exact root cause of this error?
How do I investigate it?
How can I avoid it in the future?
A fatal error has been detected by the Java Runtime Environment:
SIGSEGV (0xb) at pc=0xfebd390c, pid=2626, tid=3
JRE version: 6.0_45-b06
Java VM: Java HotSpot(TM) Server VM (20.45-b01 mixed mode solaris-sparc )
Problematic frame:
V [libjvm.so+0x7d390c] void PSScavenge::copy_and_push_safe_barrier(PSPromotionManager*,__type_0*)+0xcc
If you would like to submit a bug report, please visit:
http://java.sun.com/webapps/bugreport/crash.jsp
Your JVM crashed inside libjvm.so while doing GC. As an interim workaround, you can try changing the GC algorithm (for example, -XX:+UseParallelOldGC). Otherwise, your best bet is to update the JVM.
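A sketch of how that flag would be passed on the command line (the jar name is a placeholder; a server instance is usually started through its own launch script, where the flag would be added to the JVM options instead):

```
# Run with the parallel old-generation collector as an interim workaround
java -XX:+UseParallelOldGC -jar your-app.jar
```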