"Broken Pipe" between Apache and GlassFish when using mod_jk - apache

I am using Apache as the front-end to GlassFish 3.1, with mod_jk as the connector. The connection between the two is very unstable - it works about 50% of the time - even when I am the only person on the system. When the problem occurs, the browser gets an HTTP timeout and the GlassFish server has two types of exceptions in its log:
java.io.IOException
at org.apache.jk.common.JkInputStream.receive(JkInputStream.java:249)
at org.apache.jk.common.JkInputStream.refillReadBuffer(JkInputStream.java:309)
at org.apache.jk.common.JkInputStream.doRead(JkInputStream.java:227)
at com.sun.grizzly.tcp.Request.doRead(Request.java:501)
at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:336)
at com.sun.grizzly.util.buf.ByteChunk.substract(ByteChunk.java:431)
at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:357)
at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:265)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
at java.io.InputStreamReader.read(InputStreamReader.java:167)
at com.ctc.wstx.io.MergedReader.read(MergedReader.java:101)
at com.ctc.wstx.io.ReaderSource.readInto(ReaderSource.java:84)
at com.ctc.wstx.io.BranchingReaderSource.readInto(BranchingReaderSource.java:57)
at com.ctc.wstx.sr.StreamScanner.loadMore(StreamScanner.java:967)
at com.ctc.wstx.sr.StreamScanner.getNext(StreamScanner.java:738)
at com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:1995)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2647)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1019)
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.apache.jk.common.ChannelSocket.send(ChannelSocket.java:580)
at org.apache.jk.common.JkInputStream.doWrite(JkInputStream.java:206)
at com.sun.grizzly.tcp.Response.doWrite(Response.java:685)
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:420)
On the Apache side, the mod_jk log is completely empty. Once I hit this condition, the only way to recover is to restart the Apache server. The funny thing is that after the restart, the requests that timed out are executed automatically - magically! I have no idea where they are being stored.
Anyway, I am not at all experienced with Apache and mod_jk and was wondering where to start looking for problems. Software versions I am using are as follows:
Apache: version 2.2.17-2, GlassFish: 3.1, mod_jk: 1.2.30-1
Any help would be much appreciated!
Thanks.

Check the mod_jk logs for initialization of mod_jk during Apache startup. If no logs are written at all, then something is wrong with the installation/configuration of the mod_jk module.
Have you created a GlassFish cluster?
If yes, set the -DjvmRoute and -Dcom.sun.web.enterprise.jkenabled JVM options for the cluster, and also check the HTTP network listener that needs to be created on the DAS host to accept requests from mod_jk (it is jk-disabled initially, so enable it).
If not, check the HTTP network listener for mod_jk on each of the server domains on which you are deploying your application.
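In case it helps, here is a minimal sketch of both sides, assuming mod_jk should log to /var/log/httpd/mod_jk.log and GlassFish should accept AJP on port 8009 (paths, worker name and application context are placeholders, not taken from your setup):

# httpd.conf - mod_jk side
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile    /var/log/httpd/mod_jk.log
JkLogLevel   info
JkMount      /myapp/* worker1

# GlassFish 3.1 - create a jk-enabled network listener for mod_jk
asadmin create-network-listener --protocol http-listener-1 \
  --listenerport 8009 --jkenabled true jk-connector

If the JkLogFile stays empty even during Apache startup, the module is most likely not being loaded or the log path is not writable.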


HAproxy with standalone mod_security routed to multiple web servers

I want to have one server with HAProxy and a standalone mod_security installed, which routes every packet to mod_security first and checks it against its rules.
Then, if there is nothing suspicious in the packets (SQL injection, DoS attacks, ...), mod_security passes them back to HAProxy, and HAProxy routes them to multiple servers running different web servers.
That way I don't need to install and configure mod_security on all my web servers.
This is technically possible, possibly by running two instances of HAProxy. However, you will need a web server to run underneath ModSec, typically Apache or nginx, and that somewhat negates the advantage of not having to install ModSec on all your web servers.
The standard setup is: haproxy -> reverse-proxies with ModSec -> application-servers
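A rough sketch of the HAProxy side of that chain, assuming the ModSec reverse proxies sit at 10.0.0.11 and 10.0.0.12 (addresses and names are placeholders):

# haproxy.cfg - forward everything to the ModSec reverse proxies
frontend fe_http
    bind *:80
    mode http
    default_backend be_modsec_proxies

backend be_modsec_proxies
    mode http
    balance roundrobin
    server modsec1 10.0.0.11:80 check
    server modsec2 10.0.0.12:80 check

Each ModSec host then runs Apache or nginx with the ModSecurity module and proxies the inspected traffic on to the application servers.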
Just to answer this old, but still valid, question:
The solution should be to use HAProxy's Stream Processing Offload Engine (SPOE), speaking the Stream Processing Offload Protocol (SPOP), to talk to a Stream Processing Offload Agent (SPOA), which here is a standalone ModSecurity daemon.
Example HAProxy config from their GitHub repo:
frontend my-front
    ...
    filter spoe engine modsecurity config spoe-modsecurity.conf
    # Block potentially malicious requests (the agent sets txn.modsec.code > 0)
    http-request deny if { var(txn.modsec.code) -m int gt 0 }
    ...

backend spoe-modsecurity
    mode tcp
    balance roundrobin
    timeout connect 5s
    timeout server 3m
    server modsec1 127.0.0.1:12345
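For completeness, the spoe-modsecurity.conf referenced above looks roughly like this in the upstream example (agent name, timeouts and message arguments may differ between versions, so treat it as a sketch):

[modsecurity]

spoe-agent modsecurity-agent
    messages check-request
    option var-prefix modsec
    timeout hello      100ms
    timeout idle       30s
    timeout processing 15ms
    use-backend spoe-modsecurity

spoe-message check-request
    args unique-id method path query req.ver req.hdrs_bin req.body_size req.body
    event on-frontend-http-request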
There's also a GitHub project where the daemon has been made available as a Docker container.
Official HAProxy blog post

Clustering doesn't work with mod_cluster on JBoss AS7 - Stateful Application

I'm going to explain my situation.
Background:
I'm running three virtual machines with Debian Jessie on Open Nebula, one as master and the other two as slaves. On them I've installed JBoss AS 7.1 and mod_cluster 1.2.
Goal:
Run a stateful app, so that when I shut down the master server the cluster allows me to continue using the app with a shared session and keeps the variable values.
I followed this guide with the given web application.
Errors:
I can't access the app directly at http://master/cluster-demo/ as in the guide above; I have to specify the port (8330 for server-three).
When I shut down server-three, the slaves notice that the server is down, but the session is not shared and the application is no longer accessible. This is the output on a slave when I shut down server-three on the master.
Configuration Files
I attach my configuration files:
/opt/jboss/domain/configuration/domain.xml
/opt/jboss/httpd/httpd/conf/httpd.conf
/opt/jboss/domain/configuration/host.xml in the master
/opt/jboss/domain/configuration/host.xml in the slaves
Answer
mod_cluster does not have anything in common with the messaging (JMS, HornetQ) subsystems. The mod_cluster setting also has nothing to do with the clustering subsystem, i.e. Infinispan and its workhorse, JGroups.
What the AS7 mod_cluster subsystem does is listen to UDP multicast advertising messages emitted by the Apache HTTP Server mod_cluster modules. When it receives such a message, it registers itself with your Apache HTTP Server load balancer. From that moment, your registered AS7 "worker" node keeps sending specialized HTTP messages (via TCP), informing Apache HTTP Server about:
its name (jvmRoute or generated)
its current load
its deployments, i.e. application contexts
aliases etc.
When there are no worker nodes registered with your Apache HTTP Server balancer, there are no contexts, hence there is nowhere to forward your requests to.
According to the configuration you posted, you rely on UDP multicast messages being sent to/received from 224.0.1.105:23364.
Open Nebula, firewall and UDP multicast
It is possible that Open Nebula doesn't allow UDP multicast between hosts or that your iptables are blocking it. Try this:
use curl on your worker host to access the balancer host -- exactly the VirtualHost where you have the EnableMCPMReceive directive defined (see the sketch after this list)
if it doesn't work, you must fix iptables, selinux, httpd's allow/deny and such
if it works, it's a good sign that worker can talk to the balancer
go to your AS7 xml, modcluster subsystem, and add an attribute to the config: <mod-cluster-config advertise-socket="modcluster" proxy-list="your-httpd-address:port"> -- the address:port you've just tried with curl
now it should work even without UDP multicast
if you would like to debug your UDP multicast settings in Open Nebula, give it a shot with Advertize.java
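A minimal sketch of those two steps, assuming the balancer's EnableMCPMReceive VirtualHost listens on 192.168.1.10:6666 (address and port are placeholders for your environment):

# from the worker host: the MCPM receiver should answer, even if only with an error page
curl -v http://192.168.1.10:6666/

<!-- in the worker's AS7 profile, point mod_cluster straight at the balancer -->
<mod-cluster-config advertise-socket="modcluster" proxy-list="192.168.1.10:6666"/>

With proxy-list set, the worker registers itself over TCP and no longer depends on receiving the UDP multicast advertisements.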
1.2.0 is too old, do not use vulnerable code
Please, do not use mod_cluster 1.2.0 with your Apache HTTP Server. The version is completely obsolete and it contains serious bugs, including a code injection CVE and a severe performance issue. Download mod_cluster 1.3.1.Final for httpd 2.4.x, or build your own from the sources if you need httpd 2.2.x support. If you happen to need any help with that, ask.

Stopping an Apache MINA server

I have written a TCP server with Apache MINA. The server listens on a port and does its processing. I have deployed this in a JBoss server, but now I am not able to stop the server.
I have done the following things in the stop routine:
acceptor.dispose();
acceptor.unbind();
acceptor.setCloseOnDeactivation(true);
But the port is still active and the server still runs.
Can anybody help me understand how to stop the server?
As you can see in the Apache MINA documentation, the acceptor service can be stopped, and you can wait for the shutdown to complete, by using acceptor.dispose(true). You can also use the isDisposing() method to verify that.
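A short sketch of a working shutdown sequence under that assumption, where acceptor is the IoAcceptor you bound at startup (as in the snippet in the question):

// stop accepting new connections and release the bound port
acceptor.unbind();
// dispose the acceptor and block until its I/O threads have terminated
acceptor.dispose(true);
// at this point the acceptor should report that it is fully disposed
System.out.println("acceptor disposed: " + acceptor.isDisposed());

Note the order: in the snippet from the question, dispose() is called before unbind(), and dispose() without the boolean argument does not wait for the worker threads to finish, which is why the port can stay active.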

Wildfly 8.1 - Websockets in a Clustered Config?

I have a 2-node cluster set up using the standalone-full-ha.xml configuration on Wildfly 8.1. I'm trying to open a WebSocket connection through an Apache HTTPD URL, but when I do, I see the following error in my logs, and the WebSocket open fails with an error in JavaScript:
2014-07-28 15:58:52,675 ERROR [io.undertow.request] (default task-4) UT005023: Exception handling request to /WebSocketTest/hello: java.lang.IllegalStateException: UT000077: The underlying transport does not support HTTP upgrade
Is there any way to get such a configuration working in a clustered setup? Or would I need to go to the specific app server port directly and bypass Apache HTTPD?
What version of Apache httpd are you using?
WebSockets seem to be supported from 2.4 onwards with mod_proxy_wstunnel.
Also, Undertow's documentation states that AJP does not support protocol upgrade.
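A minimal sketch of the httpd 2.4 configuration for that, assuming the Wildfly node serves the WebSocket endpoint on port 8080 (host and port are placeholders; the path is taken from the error in the question):

# requires Apache httpd 2.4 with mod_proxy and mod_proxy_wstunnel
LoadModule proxy_module          modules/mod_proxy.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

ProxyPass /WebSocketTest/hello ws://wildfly-node:8080/WebSocketTest/hello

Regular HTTP traffic can keep flowing over the existing mod_jk/AJP or mod_proxy setup; only the paths that upgrade to WebSocket need the ws:// proxy rule.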

SailsJS on production - Error: listen EADDRINUSE

I have a VPS with CentOS and an Apache server, but I want to run my Node.js applications too. I am using Sails.js.
The Sails application is trying to listen on port 80 of the specified host.
Here is the error (after running sails lift):
debug: Starting server in /var/www/user/data/nodeprojects/projectname...
info - socket.io started
debug: Restricting access to host: projectname.com
warn - error raised: Error: listen EADDRINUSE
warn:
warn: Server doesn't seem to be starting.
warn: Perhaps something else is already running on port 80 with hostname projectname.com?
What is the problem? Can I run both the Apache and Node.js servers on one machine, on one port (80)?
No, you cannot.
When a server process opens a TCP port to answer requests, it has exclusive use of that port. So, you cannot run both SailsJS and Apache servers on the same port.
Having said that, you can do lots of interesting things with Apache, such as proxying specific requests to other servers running on different ports.
A typical setup would have Apache on port 80 and SailsJS on port 8000 (or some other available port), where Apache forwards requests for certain URLs to SailsJS and then relays the reply from SailsJS back to the browser.
See either configuring Apache on Mountain Lion proxying to Node.js or http://thatextramile.be/blog/2012/01/hosting-a-node-js-site-through-apache for example implementations of this approach.
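A minimal sketch of that approach, assuming Sails is lifted on port 8000 on the same machine (the port is a placeholder; the ServerName is taken from the question):

# Apache VirtualHost on the CentOS box (requires mod_proxy and mod_proxy_http)
<VirtualHost *:80>
    ServerName projectname.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>

On the Sails side, set port: 8000 in config/local.js (or pass --port 8000 to sails lift) so the app no longer tries to bind to port 80.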
You cannot use the same port for different applications. Node.js can use any open port. What you need to do is set up port forwarding (proxying) for your app. :)