Apache is leaking semaphores when running mod_mono

I'm running an ASP.NET MVC2 application under mod_mono with mono 2.8.1 and currently have to periodically clear out semaphore arrays that apache seems to be leaking.
I started with the Mono RPMs for 2.6.7 a while back but had some issues, both with leaking semaphore arrays (i.e. more and more accumulating in ipcs) and some incompatibility with ASP.NET MVC2, so I built 2.8 from source. The leak continued, so I just built 2.8.1 from source and the same thing is still happening. This is on an Amazon AMI (I believe it's CentOS under the hood). The symptoms are that semaphore arrays keep building up, and if I don't remove them manually with ipcrm, after a while requests to ASP.NET pages return no content, with no errors in the logs. I've also reproduced the same issue on a CentOS 5.4 AMI.
Is anyone successfully running ASP.NET under apache/mod_mono, or am I just running into some edge case? Since I can't find any mention of this happening to anyone else, I assume it's not a general ASP.NET bug. Any ideas how I can troubleshoot this further?

Finally figured this out, and while the solution exposes my own mistake of not following up on another warning I was receiving, I figure this should be useful to anyone else running into this.
By default, the Apache config has directives in the following order:
Include conf.d/*.conf
User apache
Group apache
I.e. all conf files (usually where vhosts are defined) are loaded before the httpd User and Group are set. This results in the following warning on restart:
[Mon Jan 24 00:12:50 2011] [crit] The unix daemon module not initialized yet.
Please make sure that your mod_mono module is loaded after the User/Group
directives have been parsed. Not initializing the dashboard.
While everything seems to work anyway, this is the cause of the semaphore leak. If you move the Include after the User/Group directives, the warning goes away and mod_mono no longer leaks semaphores.
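For reference, a minimal sketch of the corrected ordering in httpd.conf (paths are the stock CentOS defaults and may differ on your system):

```apache
# Set the server identity first...
User apache
Group apache

# ...and only then load the vhost/module configs, so mod_mono can
# initialize its dashboard under the right identity.
Include conf.d/*.conf
```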

I've seen this with the shared memory used by cross-process handles.
My fix was to set MONO_DISABLE_SHM=1; however, I'm not sure this is your problem, since cross-process handle support is disabled starting with 2.8.
You could probably still try MONO_DISABLE_SHM to see if it makes a difference.
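A quick way to check whether arrays are accumulating, and to try this workaround, might look like the sketch below (the `apache` owner name is an assumption; adjust for your distro, and export the variable in the environment Apache starts from, e.g. /etc/sysconfig/httpd):

```shell
#!/bin/sh
# Count SysV semaphore arrays currently owned by the apache user.
if command -v ipcs >/dev/null 2>&1; then
    count=$(ipcs -s 2>/dev/null | awk '$3 == "apache"' | wc -l)
    echo "semaphore arrays owned by apache: $count"
fi

# Try disabling Mono's shared-memory handle support for the backend.
export MONO_DISABLE_SHM=1
```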

Try the new SGen garbage collector instead of Boehm.

To use the new garbage collector, you just need to invoke Mono with the --gc=sgen command line option, or set the MONO_ENV_OPTIONS environment variable to contain "--gc=sgen". By default, Mono continues to use the Boehm collector.
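Under mod_mono you usually don't control the mono command line, so the environment-variable route is the practical one; a sketch:

```shell
#!/bin/sh
# Select the SGen collector via the environment (Mono 2.8+); the mod_mono
# backend process inherits this from the environment Apache starts from.
export MONO_ENV_OPTIONS="--gc=sgen"
echo "MONO_ENV_OPTIONS=$MONO_ENV_OPTIONS"

# Equivalent when launching by hand (myapp.exe is a placeholder):
#   mono --gc=sgen myapp.exe
```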

Weblogic 10.3.6 generates empty heapdump on OutOfMemoryError

I'm trying to generate a full heapdump from Weblogic 10.3.6 due to an OutOfMemoryError generated by a Web Application deployed on the Server.
I've set the following options in the start script:
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/path/to/heapdump
When the OutOfMemoryError occurs, WebLogic generates an empty hprof file (0 bytes) in the /path/to/heapdump folder, and nothing else happens: the server remains in RUNNING mode, even though it is no longer reachable.
The java process is still alive, but at 0% CPU.
Even the server.out log seems completely frozen, without any trace of the OutOfMemoryError.
What's wrong with the configuration?
You could probably use Java Flight Recorder to record events and check which objects are causing the OutOfMemoryError (any profiler should work as well).
Been there :( . I remember at the time we found it was somewhat logical: since there was not enough memory for normal operation, the JVM could not automagically find enough memory to create a heap dump either. If memory serves me well, we did two things at that time to debug the memory leak. First, we were "lucky" enough that the problem was happening fairly regularly, so close manual monitoring was possible (watching gc.log for repeated Full GCs, and watching the performance tab in the console). Knowing when the onset of the problem was, we did some kill -3 to get the dump manually. We also used jstack {PID} (JDK 1.6 on Linux) with some luck. With those, at the time, the devs were able to identify the memory leak. Hope that helps.
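The manual-dump steps above can be sketched like this (the PID is a placeholder, and the commands are guarded so they only run if the tools and the process actually exist):

```shell
#!/bin/sh
PID=999999999   # placeholder: replace with the WebLogic JVM's process id

# Ask the JVM for a thread dump; it is written to the server's stdout log.
if [ -d "/proc/$PID" ]; then
    kill -3 "$PID"
fi

# Or capture the stacks directly with jstack (JDK 6+):
if command -v jstack >/dev/null 2>&1 && [ -d "/proc/$PID" ]; then
    jstack "$PID" > "/tmp/threads-$PID.txt"
fi
echo "done"
```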
Okay, your configuration looks alright. You might want to check whether the WebLogic process user has permission to write the heap dump file.
You can take a heap dump with the JDK tools:
JAVA_HOME/bin/jmap -dump:format=b,file=path_of_the_file <pid>
OR
%JROCKIT_HOME%\bin\jrcmd <pid> hprofdump filename=path_of_the_file

How to make a core persistent on Solr 5.2.1

This has been bugging me for some time now: I've been googling the web and found literally nothing on something as obvious as keeping a Solr core persistent on 5.2.1.
Each time I restart the service ("sudo service solr restart"), my cores get lost, although the data can still be found in /var/solr/data.
The persistence flag is no longer supported in 5.x, so what is the alternative?
Any help would be much appreciated!
I've run into an issue with a non-persistent core as well. Make sure to check the solr log (/var/solr/logs, in my case) for startup details:
ERROR - 2016-04-05 05:05:49.547; [ ]
org.apache.solr.common.SolrException;
null:org.apache.solr.common.SolrException:
Found multiple cores with the name [somecore],
with instancedirs
[/var/solr/data/somecore/] and [/var/solr/data/gettingstarted/]
The gettingstarted core (a sample created on solr installation) had the same core name set in its core.properties.
Deleting the gettingstarted directory resolved this issue.
Solr uses core discovery now, so the persistent flag isn't needed. Any core available in the data dir should be loaded automagically. You might still need the solr.xml file, although it should be enough to just have an empty <solr> element.
The content within the core directory (conf, data, core.properties) should also be present.
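For core discovery, the on-disk layout is what makes a core "persistent"; a minimal sketch (the core name `somecore` is an example):

```
/var/solr/data/
├── solr.xml            # may be as small as: <solr></solr>
└── somecore/
    ├── core.properties # at minimum: name=somecore
    ├── conf/           # schema and solrconfig.xml
    └── data/           # the index
```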

apache 2.2 couldn't load a module on AIX 6.1

I am testing an auth module with an Apache 2.2 server on AIX 6.1 (POWER, 64-bit). The Apache server doesn't start at all when I give my module's path name in httpd.conf, though the same module works fine on AIX 5.3.
There's no crash, and no error message other than the following in the error_log file:
httpd: Syntax error on line 423 of /home/apache22-aix64/installApache/conf/httpd.conf: Syntax error on line 9 of /
/home/apache22-aix64/installApache/conf/agent.conf: Cannot load /home/agent/apache/lib/auth-module.so into server: Not enough space
I have checked by increasing ThreadStackSize to 6 MB, and increased memory and other parameters, but the issue remains. It occurs in both prefork and worker modes of the Apache server.
That's a new one on me... I'm guessing you are out of something (yeah, brilliant guess, right?). Try checking ulimit -a between the two systems (5.3 and 6.1). I presume you are starting Apache using the same type of id (a non-root id with the same limits, permissions, etc.).
I would also suggest tagging this with Apache to see if some apache guys might be able to help out. We need to determine what is it out of -- memory, stack, disk space, paging space, etc.
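A sketch of that comparison (run it as the id that starts Apache on both boxes and diff the output). The AIX note below is an assumption about 32-bit builds: "Not enough space" from a module load on AIX is typically an out-of-memory condition in the process's data segment, which the LDR_CNTRL environment variable can enlarge.

```shell
#!/bin/sh
# Dump the resource limits for the current id; compare the output between
# the AIX 5.3 box (working) and the 6.1 box (failing).
ulimit -a

# AIX-specific: a 32-bit apache may need a larger data segment to load big
# modules. The MAXDATA value here is an example, not a recommendation:
#   export LDR_CNTRL=MAXDATA=0x40000000
```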
Did you build this apache version yourself?

tomcat7 clickstack not finding Config params

I am testing the tomcat7 clickstack for our application, which has some config parameters set using the built-in Config features of CloudBees. The tomcat7 clickstack does not find them, but the standard tomcat6 container does. I have double-checked them and reset them through the CloudBees SDK, and they are there and correct, but they come back as null for tomcat7.
The switch to clickstacks requires us to refactor how the servlet container gets configured so that the injection points such as cloudbees-web.xml and jvm system properties behave consistently across all the servlet container clickstacks.
Some of that refactoring has been committed but some of the work is still in my backlog... Assuming none of the other bees steal that task from my backlog before I get to it ;-)
If I recall correctly, the parameters should be available as environment variables (sub-optimal, I know, but all containers should give this as a consistent UX for all clickstacks, i.e. both non-Java-based and Java-based) and may already be available as system properties (again sub-optimal, but the Java container refactoring should give this as a consistent UX for all Java-based clickstacks). The consistent Java servlet UX has not been committed yet but should be available soon.

Tomcat showing this error "This is very likely to create a memory leak". How to resolve this issue?

I have created a web application in Apache Cocoon. The website runs properly, but after every 3-4 days it stops responding, and it doesn't run again until we restart the Tomcat service. The catalina.2011-05-09.log file shows the following error:
"May 9, 2011 3:17:34 AM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/webresources] is still processing a request that has yet to finish. This is very likely to create a memory leak. You can control the time allowed for requests to finish by using the unloadDelay attribute of the standard Context implementation."
I haven't been able to understand the cause of this problem. Can someone suggest how to resolve this issue?
You are using a library that is starting one or more threads and is not properly shutting them down or releasing other resources captured by the thread. This often happens with things like Apache HTTP components (I get this error with Http Components) and anything that uses separate threads internally. What libraries are you using in your Cocoon application?
It is telling you the issue:
[...] is still processing a request that has yet to finish
You need to find out what that request is or where it is going. One easy way is to have something like PsiProbe installed.
Also, it's not a bad idea to restart Tomcat every night. It can help alleviate these kinds of issues until you find the root cause.
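The unloadDelay mentioned in the log is an attribute of Tomcat's Context element (milliseconds to wait for in-flight requests when the webapp is stopped; the 20000 below is just an example value):

```xml
<!-- META-INF/context.xml, or conf/Catalina/localhost/webresources.xml -->
<Context unloadDelay="20000" />
```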