I'm going to explain my situation.
Background:
I'm running three virtual machines with Debian Jessie on OpenNebula: one as master and the other two as slaves. On them I've installed JBoss AS 7.1 and mod_cluster 1.2.
Goal:
Run a stateful app, so that when I shut down the master server the cluster lets me keep using the app with a shared session and the variable values maintained.
I followed this guide with the given web application.
Errors:
I can't access the app directly at http://master/cluster-demo/ as in the guide above; I have to specify the port (8330 for server-three).
When I shut down server-three, the slaves notice that the server is down, but the session is not shared and the application is no longer accessible. This is the output on a slave when I shut down server-three on the master.
Configuration Files
I attach my configuration files:
/opt/jboss/domain/configuration/domain.xml
/opt/jboss/httpd/httpd/conf/httpd.conf
/opt/jboss/domain/configuration/host.xml in the master
/opt/jboss/domain/configuration/host.xml in the slaves
Answer
mod_cluster has nothing to do with the messaging (JMS, HornetQ) subsystems. The mod_cluster setup also has nothing to do with the clustering subsystem, i.e. Infinispan and its workhorse, JGroups.
What the AS7 mod_cluster subsystem does is listen for UDP multicast advertisement messages emitted by the Apache HTTP Server mod_cluster modules. When it receives such a message, it registers itself with your Apache HTTP Server load balancer. From that moment on, your registered AS7 "worker" node keeps sending specialized HTTP messages (over TCP), informing the Apache HTTP Server about:
its name (jvmRoute or generated)
its current load
its deployments, i.e. application contexts
aliases etc.
When there are no worker nodes registered with your Apache HTTP Server balancer, there are no contexts, hence there is nowhere to forward your requests to.
According to the configuration you posted, you rely on UDP multicast messages being sent to/received from 224.0.1.105:23364.
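For orientation, the balancer side of that exchange lives in httpd.conf; a minimal sketch, assuming a hypothetical MCPM VirtualHost at 10.0.0.1:6666 and the advertise address from your configuration:

Listen 10.0.0.1:6666
<VirtualHost 10.0.0.1:6666>
    # Accept MCPM registration messages from AS7 workers
    EnableMCPMReceive
    # Advertise this balancer to workers over UDP multicast
    ServerAdvertise On
    AdvertiseGroup 224.0.1.105:23364
</VirtualHost>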
OpenNebula, firewall and UDP multicast
It is possible that OpenNebula doesn't allow UDP multicast between hosts, or that your iptables rules are blocking it. Try this:
use curl on your worker host to access the balancer host -- exactly the VirtualHost where you have the directive EnableMCPMReceive defined.
if it doesn't work, you must fix iptables, selinux, httpd's allow/deny and such
if it works, it's a good sign that worker can talk to the balancer
go to your AS7 XML, modcluster subsystem, and add a proxy-list attribute to the config: <mod-cluster-config advertise-socket="modcluster" proxy-list="your-httpd-address:port"> -- the address and port you've just tried with curl (see the sketch after this list)
now it should work even without UDP multicast
if you would like to debug your UDP multicast settings in OpenNebula, give it a shot with Advertize.java
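A minimal sketch of that proxy-list step, assuming the balancer's MCPM VirtualHost sits at the hypothetical 10.0.0.1:6666 you just verified with curl from the worker:

<!-- domain.xml, modcluster subsystem: register with the balancer over plain TCP -->
<subsystem xmlns="urn:jboss:domain:modcluster:1.0">
    <mod-cluster-config advertise-socket="modcluster" proxy-list="10.0.0.1:6666" connector="ajp"/>
</subsystem>

With proxy-list set, the worker contacts the balancer directly, so registration no longer depends on UDP multicast.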
1.2.0 is too old, do not use vulnerable code
Please, do not use mod_cluster 1.2.0 with your Apache HTTP Server. That version is completely obsolete and contains serious bugs, including a code injection CVE and a severe performance issue. Download mod_cluster 1.3.1.Final for httpd 2.4.x, or build your own from the sources if you need httpd 2.2.x support. If you happen to need any help with that, ask.
Related
What is the TCP discovery SPI in Apache Ignite, and what does it do?
What does 127.0.0.1:47500..47509 mean in example-cache.xml of Apache Ignite?
Apache Ignite nodes are all the same and join together automatically to form a cluster. The "discovery" mechanism is how they find each other: a node uses the IP address and port of at least one other Ignite node to join the rest of the cluster.
TCP Discovery uses a TCP connection to the addresses and ports specified.
127.0.0.1:47500..47509 is shorthand notation that says: contact IP 127.0.0.1 (your localhost) on ports 47500 through 47509.
This is used in the example code so you can easily run a few instances of Ignite for testing on your computer and they all will connect to each other as a cluster, even though they are all running on the same machine.
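For reference, this is roughly how that range appears in the Spring XML of the shipped examples; a sketch assuming the static-IP finder (some example files use TcpDiscoveryMulticastIpFinder instead):

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                    <property name="addresses">
                        <list>
                            <!-- try localhost, ports 47500 through 47509 -->
                            <value>127.0.0.1:47500..47509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
</bean>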
This is a component responsible for nodes discovery. Please refer to this page for details: https://apacheignite.readme.io/docs/tcpip-discovery
I want to have one server with HAProxy and a standalone ModSecurity installed, which routes every packet to ModSecurity first to be checked against its rules.
Then, if there is nothing suspicious in the packets (SQL injection, DoS attacks, ...), they are passed back from ModSecurity to HAProxy, and HAProxy routes them to multiple servers running different webservers.
That way I don't need to install and configure ModSecurity on all my webservers.
This is technically possible, possibly by running 2 instances of HAProxy. However, you will need a webserver to run underneath ModSec, typically Apache or nginx, and this somewhat negates the advantage of not having to install ModSec on all your webservers.
The standard setup is: haproxy -> reverse-proxies with ModSec -> application-servers
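A minimal sketch of that chain on the HAProxy side, with hypothetical addresses for the ModSec reverse proxies:

frontend fe_http
    mode http
    bind *:80
    default_backend be_modsec_proxies

# Apache/nginx + ModSecurity boxes, which in turn forward to the app servers
backend be_modsec_proxies
    mode http
    balance roundrobin
    server modsec1 10.0.0.11:80 check
    server modsec2 10.0.0.12:80 check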
Just to answer this old, but still valid, question:
The solution is to use HAProxy's Stream Processing Offload Engine (SPOE), which speaks the Stream Processing Offload Protocol (SPOP) to a Stream Processing Offload Agent (SPOA) -- a standalone ModSecurity daemon.
HAProxy example config from their GitHub repo:
frontend my-front
    ...
    filter spoe engine modsecurity config spoe-modsecurity.conf
    # Block potentially malicious requests (the agent sets txn.modsec.code > 0)
    http-request deny if { var(txn.modsec.code) -m int gt 0 }
    ...

backend spoe-modsecurity
    mode tcp
    balance roundrobin
    timeout connect 5s
    timeout server 3m
    server modsec1 127.0.0.1:12345
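The referenced spoe-modsecurity.conf then wires the filter to that backend; a sketch based on the contrib example (message args and timeouts may differ between versions):

[modsecurity]
spoe-agent modsecurity-agent
    messages check-request
    option var-prefix modsec
    timeout hello      100ms
    timeout idle       30s
    timeout processing 15ms
    use-backend spoe-modsecurity

spoe-message check-request
    args unique-id method path query req.ver req.hdrs_bin req.body_size req.body
    event on-frontend-http-request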
There's also a GitHub project where the daemon has been made available as a Docker container.
Official HAProxy blog post
I have attempted everything recommended by the following error message:
(error) DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
My /etc/redis/sentinel.conf:
daemonize yes
sentinel myid XXX
sentinel monitor master XXX 6379 2
sentinel down-after-milliseconds master 60000
sentinel config-epoch master 0
protected-mode no
bind 0.0.0.0
port 26379
EDIT: My /etc/redis/redis.conf:
port 6379
bind 0.0.0.0
protected-mode no
I've also tried adding sentinel auth-pass master XXX.
My entire backend is on private subnets. I'm VPN'd into my datacenter behind the firewall, coming from the same private network, and I still can only connect locally; any remote connection gets that frustrating error message.
Server Environment: Debian 8, Redis 3.2.6
Client Environment: Ubuntu 16.10, redis-cli 3.2.1
Redis instances: 3
Sentinel instances: 3
I've done not just one, but three of the four things suggested (I didn't set the command-line flag). Does anyone have any guidance or ideas? I'm clearly missing something that I've been unable to figure out from the error message, documentation, Stack Overflow, Google, and trial & error. I figured I'd post a question here first, before diving into the source code.
Any help is appreciated. Thanks!
... and, yes, I've restarted the daemons after configuration changes. :)
https://www.reddit.com/r/redis/comments/3zv85m/new_security_feature_redis_protected_mode/
As you know, we got several problems from unprotected Redis instances exposed to the internet. I covered in my blog post why a restrictive binding to 127.0.0.1 by default may be a usability concern and, even worse, may not fix the problem (hey, just comment out the "bind" statement and restart!).
The same blog post introduced an attack that was heavily used by script kiddies to break into Redis instances (serious security researchers were already able to do this, I guess).
So I finally decided to do something before Redis 3.2 official release: Protected mode is the result and will be merged into 3.2 RC2.
The feature is already available in the unstable branch, introduced by this commit. This is how it works.
If and only if:
Protected mode is enabled (this is the default both in the configuration file and in the configless default).
AND IF No AUTH password is configured.
AND IF No "bind" directive is used in order to restrict Redis to certain interfaces.
Then Redis only accepts connections from the loopback IPv4 and IPv6 addresses. External connections are accepted only for the time needed to send the client an error that makes the user aware of what is happening:
> PING
(error) DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients.
In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions:
1) Just disable protected mode by sending the command 'CONFIG SET protected-mode no' from the loopback interface, connecting to Redis from the same host the server is running on. However, MAKE SURE Redis is not publicly accessible from the internet if you do so. Use CONFIG REWRITE to make this change permanent.
2) Alternatively, you can disable protected mode by editing the Redis configuration file, setting the protected-mode option to 'no', and then restarting the server.
3) If you started the server manually just for testing, restart it with the --protected-mode no option.
4) Set up a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
This should protect against errors in a reasonable way while providing users with a clue instead of a connection refused. Please share your feedback so that we can make changes to this feature if needed, before it gets merged into Redis 3.2 RC2. Thanks.
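For the original question, option 1) boils down to something like this, run on the Redis host itself (host and port here match the redis.conf above):

$ redis-cli -h 127.0.0.1 -p 6379
127.0.0.1:6379> CONFIG SET protected-mode no
OK
127.0.0.1:6379> CONFIG REWRITE
OK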
Apache Mesos fails to show slave usage when you select slaves in the Mesos GUI. Also, the web console shows "failing when trying to load resource."
This is a common issue when running on EC2 or other cloud providers where machines have both an external and an internal IP. Mesos reports the internal IP in the UI, so if you're using the web UI from outside of EC2, the URLs won't work.
The current Mesos master branch and the latest 0.15 release candidate fix this issue by adding a --hostname command line option to set the hostname that gets reported in the UI.
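For example, on 0.15+ the master (and, analogously, the slaves) could be started like this; the addresses reuse the EC2 example below and are purely illustrative:

mesos-master --ip=10.98.58.170 --hostname=ec2-54-224-191-136.compute-1.amazonaws.com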
If you're running <0.15, you can fix the issue by adding all the hosts in your Mesos cluster to /etc/hosts like so:
<private ip> <public fqdn> <machine hostname>
for example:
10.98.58.170 ec2-54-224-191-136.compute-1.amazonaws.com ec2-54-224-191-136
I am using Apache as the front-end to GlassFish 3.1, with mod_jk as the connector. The connection between the two is very unstable - it works about 50% of the time - even when I am the only person on the system. When the problem occurs, the browser gives me an HTTP timeout and the GlassFish server logs two types of exceptions:
java.io.IOException
at org.apache.jk.common.JkInputStream.receive(JkInputStream.java:249)
at org.apache.jk.common.JkInputStream.refillReadBuffer(JkInputStream.java:309)
at org.apache.jk.common.JkInputStream.doRead(JkInputStream.java:227)
at com.sun.grizzly.tcp.Request.doRead(Request.java:501)
at org.apache.catalina.connector.InputBuffer.realReadBytes(InputBuffer.java:336)
at com.sun.grizzly.util.buf.ByteChunk.substract(ByteChunk.java:431)
at org.apache.catalina.connector.InputBuffer.read(InputBuffer.java:357)
at org.apache.catalina.connector.CoyoteInputStream.read(CoyoteInputStream.java:265)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
at java.io.InputStreamReader.read(InputStreamReader.java:167)
at com.ctc.wstx.io.MergedReader.read(MergedReader.java:101)
at com.ctc.wstx.io.ReaderSource.readInto(ReaderSource.java:84)
at com.ctc.wstx.io.BranchingReaderSource.readInto(BranchingReaderSource.java:57)
at com.ctc.wstx.sr.StreamScanner.loadMore(StreamScanner.java:967)
at com.ctc.wstx.sr.StreamScanner.getNext(StreamScanner.java:738)
at com.ctc.wstx.sr.BasicStreamReader.nextFromProlog(BasicStreamReader.java:1995)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2647)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1019)
java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at org.apache.jk.common.ChannelSocket.send(ChannelSocket.java:580)
at org.apache.jk.common.JkInputStream.doWrite(JkInputStream.java:206)
at com.sun.grizzly.tcp.Response.doWrite(Response.java:685)
at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:420)
On the Apache side, the mod_jk log is completely empty. Once I hit this condition, the only way to recover is to restart the Apache server. The funny thing is that after the restart, the requests that timed out are automatically executed - magically! I have no idea what stores them.
Anyway, I am not at all experienced with Apache and mod_jk and was wondering where to start looking for problems. Software versions I am using are as follows:
Apache: version 2.2.17-2, GlassFish: 3.1, mod_jk: 1.2.30-1
Any help would be much appreciated!
Thanks.
Check the mod_jk logs for initialization of mod_jk during Apache startup. If no logs are written, then something's wrong with the installation/configuration of the mod_jk module.
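If no log file even exists, first make sure logging is configured at all; a minimal sketch of the httpd side (paths, worker name, and mount point are hypothetical):

# httpd.conf
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel debug
JkMount /myapp/* gfworker

# workers.properties
worker.list=gfworker
worker.gfworker.type=ajp13
worker.gfworker.host=localhost
worker.gfworker.port=8009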
Have you created a GlassFish cluster?
If yes, then set the -DjvmRoute and -Dcom.sun.enterprise.web.connector.enableJK JVM options for the cluster, and also check the HTTP network listener on the DAS host that needs to be created to listen for requests from mod_jk (it is initially jk-disabled, so enable it).
If not, then check the HTTP network listener for mod_jk on each of the server domains on which you are deploying your application.
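A sketch of those steps with asadmin, assuming a hypothetical cluster named cluster1 and instance named instance1 (flag and property names follow the GlassFish 3.1 docs; verify against your version):

# jk-enabled listener for the cluster; the port comes from a per-instance property
asadmin create-network-listener --target cluster1 --protocol http-listener-1 \
  --listenerport \${AJP_PORT} --jkenabled true jk-connector

# unique jvmRoute per instance, needed for sticky sessions in mod_jk
asadmin create-jvm-options --target cluster1 "-DjvmRoute=\${AJP_INSTANCE_NAME}"

# define the per-instance values (repeat for each instance)
asadmin create-system-properties --target instance1 AJP_INSTANCE_NAME=instance1:AJP_PORT=8009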