Orbeon Forms PE (Ehcache) replication without multicasting

We want to set up Orbeon Forms PE replication but we cannot use multicasting as proposed in the docs.
We have two nodes - 172.13.238.241 and 172.13.238.242 - and the problem seems to be with the Ehcache part. I open the form, the load balancer (HAProxy) directs me to a node, I turn off the second node, and then for several minutes all requests in the browser fail. Eventually the requests start to work again, but they are very slow.
This is what I have on node 1 (the other node has the same config with the IPs swapped and a different uniqueId value):
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="asynchronous"
channelStartOptions="3">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
port="4100"
host="172.13.238.242"
uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2}" />
</Interceptor>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="172.13.238.241"
port="4100"
autoBind="0"
maxThreads="6"
selectorTimeout="5000" />
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatchInterceptor"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.ThroughputInterceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=".*\.gif|.*\.js|.*\.jpeg|.*\.jpg|.*\.png|.*\.htm|.*\.html|.*\.css|.*\.txt"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
I wonder if anyone could spot some mistakes in my ehcache configuration.
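The Tomcat <Cluster> block above only covers servlet session replication; the Ehcache side that Orbeon PE uses for XForms state needs its own non-multicast setup. As a rough sketch (not taken from the question, and to be adapted), Ehcache can replicate without multicast by using manual RMI peer discovery in ehcache.xml; the port 40001 and the xforms.state cache name are assumptions - check them against the ehcache.xml shipped with Orbeon:
<!-- node 1: list the other node's RMI URL; mirror this on node 2 (port 40001 is an assumed value) -->
<cacheManagerPeerProviderFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
    properties="peerDiscovery=manual,rmiUrls=//172.13.238.242:40001/xforms.state"/>
<cacheManagerPeerListenerFactory
    class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
    properties="hostName=172.13.238.241,port=40001,socketTimeoutMillis=2000"/>
<!-- and inside the replicated <cache> element itself, a replicator listener -->
<cacheEventListenerFactory
    class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
    properties="replicateAsynchronously=true"/>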

Related

Let webservice use SSL

With WildFly 8.2.1, I am trying to make an existing webservice (JAX-WS) use SSL, but I haven't seen any use of SSL in the quickstarts, and the information I was able to google is limited. So far I've added this to web.xml:
<security-constraint>
<display-name>Foo security</display-name>
<web-resource-collection>
<web-resource-name>FooService</web-resource-name>
<url-pattern>/foo/FooService</url-pattern>
<http-method>POST</http-method>
</web-resource-collection>
<user-data-constraint>
<transport-guarantee>CONFIDENTIAL</transport-guarantee>
</user-data-constraint>
</security-constraint>
and this is in my standalone.xml:
<subsystem xmlns="urn:jboss:domain:webservices:1.2">
<wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host>
<endpoint-config name="Standard-Endpoint-Config"/>
<endpoint-config name="Recording-Endpoint-Config">
<pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM">
<handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/>
</pre-handler-chain>
</endpoint-config>
<client-config name="Standard-Client-Config"/>
</subsystem>
but apparently that's not enough; when I look into standalone/data/wsdl/foo.ear/foo.war/FooService/Bar.wsdl I see:
<service name="FooService">
<port binding="foowsb:FooBinding" name="FooBinding">
<soap:address location="http://localhost:8080/foo/FooService"/>
</port>
</service>
Note that in the EAR/WAR, the soap:address.location is filled just with a placeholder (I suppose that the value is ignored).
I've found some info about setting up a security realm and creating a self-signed certificate using keytool (which I did), but I completely miss how this should all be linked together.
I've also tried to set up wsdl-uri-scheme=https, but this is supported only in later versions of CXF.
It seems that the soap:address location value is not ignored when it's being replaced, since changing it from REPLACE_WITH_ACTUAL_URL to https://REPLACE_WITH_ACTUAL_URL did the trick - now the service is exposed on https://localhost:8443.
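Concretely (just restating the fix above), the placeholder line in the WSDL packaged inside the WAR goes from
<soap:address location="REPLACE_WITH_ACTUAL_URL"/>
to
<soap:address location="https://REPLACE_WITH_ACTUAL_URL"/>
and on deployment WildFly rewrites it to the actual https host and port (https://localhost:8443 above).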
There are a couple more steps I had to do in standalone.xml: in the undertow subsystem, add an https-listener:
<https-listener name="secure" socket-binding="https" security-realm="SslRealm"/>
define the SslRealm:
<security-realm name="SslRealm">
<server-identities>
<ssl>
<keystore path="foo.keystore" relative-to="jboss.server.config.dir" keystore-password="foo1234" alias="foo" key-password="foo1234"/>
</ssl>
</server-identities>
<authentication>
<truststore path="foo.truststore" relative-to="jboss.server.config.dir" keystore-password="foo1234"/>
</authentication>
</security-realm>
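For reference, the self-signed keystore referenced by the SslRealm above can be generated with keytool along these lines (the alias and passwords match the config above; the DN is an assumption) and placed in the server's configuration directory, since the realm uses relative-to="jboss.server.config.dir":
keytool -genkeypair -alias foo -keyalg RSA -keysize 2048 -validity 365 \
  -keystore foo.keystore -storepass foo1234 -keypass foo1234 \
  -dname "CN=localhost"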
Note that I reuse the same keystore for the server and the clients here. And since my clients are at the moment in the same WildFly node during development, I had to set up the client-side part there, too:
<system-properties>
<property name="javax.net.ssl.trustStore" value="${jboss.server.config.dir}/foo.keystore"/>
<property name="javax.net.ssl.trustStorePassword" value="foo1234"/>
<property name="org.jboss.security.ignoreHttpsHost" value="true"/>
</system-properties>
The last property should be replaced in WF 9+ with cxf.tls-client.disableCNCheck.
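In that case the client-side block above would look something like this (same <system-properties> mechanism; shown as a sketch, not tested here):
<property name="cxf.tls-client.disableCNCheck" value="true"/>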

Double Tomcat behind mod_jk load balancer

I am in the process of setting up two Tomcat instances on the same server with an Apache mod_jk load balancer in front of them. I have been using a guide and the Apache Tomcat documentation and stuck to the basic setup suggested. When I try to start up either of the Tomcat instances, I get a BindException when it tries to start up the SimpleTcpCluster. The error message is "Cannot assign requested address".
I googled for solutions to this issue and came across two suggestions, the first being to ensure that Java is configured to prefer IPv4 addresses. I tried it - no change. The second suggested replacing the auto value of the address parameter on the Receiver component inside the cluster config (see config below).
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true"/>
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.4"
port="45564" frequency="500"
dropTime="3000"/>
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="localhost" port="4000" autoBind="100"
selectorTimeout="5000" maxThreads="6"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
I tried changing "auto" to "localhost", which led to a different error message on Tomcat startup, saying "Address already in use :8009".
At this point I really don't know where to look. Is localhost a bad value? Should I be using auto but make a change somewhere else? Is there anyone out there with a little more experience on this who can give me a helping hand?
We got around this issue by changing the address parameter in the Receiver tag inside the Cluster configuration from "auto" to the actual IP address of the server. I was never able to figure out why this was not working and didn't want to spend any more time once we got the calls through.
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="123.123.x.x" port="4000" autoBind="100"
selectorTimeout="5000" maxThreads="6"/>

Tomcat 8 Session replication - problems

I need help figuring out what's missing in the configuration I have set up - clustering itself seems to work fine:
3 node tomcat cluster on 3 machines
machine A - apache webserver A + Tomcat A
machine B - apache webserver B + Tomcat B
machine C - apache webserver C + Tomcat C
All the Tomcat instances are aware of the other Tomcat nodes, and during a restart I see the instances being added to the cluster, with each instance aware of the presence of the others.
But upon accessing a web application (where I have enabled <distributable/>), the session is not being replicated across the Tomcat instances.
Here is my server.xml configuration
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
channelSendOptions="8">
<Channel className="org.apache.catalina.tribes.group.GroupChannel">
<Membership className="org.apache.catalina.tribes.membership.McastService"
address="228.0.0.8"
bind="machineA ip"
port="45564"
frequency="500"
dropTime="3000" />
<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
address="machineA ip"
port="4200"
autoBind="100"
selectorTimeout="5000"
maxThreads="6" />
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;" />
<Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
tempDir="D:/cluster/temp/war-temp/"
deployDir="D:/cluster/temp/war-deploy/"
watchDir="D:/cluster/temp/war-listen/"
watchEnabled="false" />
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
After going through some online blogs and posts, I have moved the Manager configuration into the global context.xml file ($CATALINA_BASE/conf/context.xml):
<Manager className="org.apache.catalina.ha.session.DeltaManager"
expireSessionsOnShutdown="false"
notifyListenersOnReplication="true" />
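In the global context.xml, that Manager element sits inside the root <Context> element, roughly like this (a sketch; any existing WatchedResource entries stay as they are):
<Context>
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true" />
</Context>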
But still the session is not being replicated across the cluster.
Please provide your inputs/suggestions.

Cannot enable gzip in WildFly 8?

I want to enable gzip compression in the WildFly server. I used the following tutorial: Tutorial.
This is the gzip-enabling code I included in standalone.xml:
<subsystem xmlns="urn:jboss:domain:undertow:1.0">
<buffer-caches>
<buffer-cache name="default" buffer-size="1024" buffers-per-region="1024" max-regions="10"/>
</buffer-caches>
<server name="default-server">
<http-listener name="default" socket-binding="http"/>
<host name="default-host" alias="localhost">
<location name="/" handler="welcome-content" />
<filter-ref name="gzipFilter" predicate="path-suffix['.css'] or path-suffix['.js']" />
<filter-ref name="server-header"/>
<filter-ref name="x-powered-by-header"/>
</host>
</server>
<servlet-container name="default" default-buffer-cache="default" stack-trace-on-error="local-only">
<jsp-config/>
</servlet-container>
<handlers>
<file name="welcome-content" path="${jboss.home.dir}/welcome-content" directory-listing="true"/>
</handlers>
<filters>
<response-header name="server-header" header-name="Server" header-value="Wildfly 8"/>
<response-header name="x-powered-by-header" header-name="X-Powered-By" header-value="Undertow 1"/>
<gzip name="gzipFilter"/>
</filters>
</subsystem>
When I use this code on Ubuntu 14.04.1 LTS, it works perfectly. But when I installed the application on CentOS Linux release 7.0.1406, it doesn't work, even though I used the same settings. I could not figure out the problem so far. I would be very grateful if someone could provide a valuable idea.
I recommend you test an upgrade to WildFly 8.2.
It has the new Undertow 1.1.0 integrated, which has solved a couple of issues around filters.
I assume your issue is also this:
UNDERTOW-331

Infinispan Initial State Transfer Hangs and times out

I'm trying to cluster a pair of servers with a shared Infinispan cache (Replicated Asynchronously). One always starts successfully, and registers itself properly with the JDBC database. When the other starts, it registers properly with the database, and I see a bunch of chatter between them, then, while waiting on a response from the second server, I get
`org.infinispan.commons.CacheException: Initial state transfer timed out`
I think it's just an issue of configuration, but I'm not sure how to debug my configuration issues. I've spent several days configuring and re-configuring my Infinispan XML, and my JGroups.xml:
Infinispan:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:infinispan:config:6.0"
xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd
urn:infinispan:config:remote:6.0 http://www.infinispan.org/schemas/infinispan-cachestore-remote-config-6.0.xsd"
xmlns:remote="urn:infinispan:config:remote:6.0"
>
<!-- *************************** -->
<!-- System-wide global settings -->
<!-- *************************** -->
<global>
<shutdown hookBehavior="DEFAULT"/>
<transport clusterName="DSLObjectCache">
<properties>
<property name="configurationFile" value="jgroups.xml"/>
</properties>
</transport>
<globalJmxStatistics enabled="false" cacheManagerName="Complex.com"/>
</global>
<namedCache name="ObjectCache">
<transaction transactionMode="TRANSACTIONAL" />
<locking
useLockStriping="false"
/>
<invocationBatching enabled="true"/>
<clustering mode="replication">
<async asyncMarshalling="true" useReplQueue="true" replQueueInterval="100" replQueueMaxElements="100"/>
<stateTransfer fetchInMemoryState="true" />
</clustering>
<eviction strategy="LIRS" maxEntries="500000"/>
<expiration lifespan="86400000" wakeUpInterval="1000" />
</namedCache>
<default>
<!-- Configure a synchronous replication cache -->
<locking
useLockStriping="false"
/>
<clustering mode="replication">
<async asyncMarshalling="true" useReplQueue="true" replQueueInterval="100" replQueueMaxElements="100"/>
<stateTransfer fetchInMemoryState="true" />
</clustering>
<eviction strategy="LIRS" maxEntries="500000"/>
<expiration lifespan="86400000" wakeUpInterval="1000" />
<persistence>
<cluster remoteCallTimeout="60000" />
</persistence>
</default>
</infinispan>
jgroups.xml:
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">
<!-- Default the external_addr to #DEADBEEF so we can see errors coming through
on the backend -->
<TCP
external_addr="${injected.external.address:222.173.190.239}"
receive_on_all_interfaces="true"
bind_addr="0.0.0.0"
bind_port="${injected.bind.port:12345}"
conn_expire_time="0"
reaper_interval="0"
sock_conn_timeout="20000"
tcp_nodelay="true"
/>
<JDBC_PING
datasource_jndi_name="java:jboss/datasources/dsl/control"
/>
<MERGE2 max_interval="30000" min_interval="10000"/>
<FD_SOCK
external_addr="${injected.external.address:222.173.190.239}"
bind_addr="0.0.0.0"
/>
<FD timeout="10000" max_tries="5"/>
<VERIFY_SUSPECT timeout="1500"
bind_addr="0.0.0.0"
/>
<pbcast.NAKACK use_mcast_xmit="false"
retransmit_timeouts="300,600,1200,2400,4800"
discard_delivered_msgs="true"/>
<UNICAST3 ack_batches_immediately="true"
/>
<RSVP ack_on_delivery="true"
throw_exception_on_timeout="true"
timeout="1000"
/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
max_bytes="400000"/>
<pbcast.GMS print_local_addr="true" join_timeout="5000"
view_bundling="true" view_ack_collection_timeout="5000"/>
<FRAG2 frag_size="60000"/>
<pbcast.STATE_SOCK
bind_port="54321"
external_addr="${injected.external.address:222.173.190.239}"
bind_addr="0.0.0.0"
/>
<pbcast.FLUSH timeout="1000"/>
</config>
I've tried, frankly, every configuration option I can think of, and I'm not sure why the replication keeps timing out. All communication between these servers is wide open. Sorry to just dump so much XML, but I'm not even sure how to collect more information.
Continued exploration indicated that Infinispan was pushing logs to server.log but, due to my configuration, these were not duplicated on the console. Further inspection revealed that I had left a single element in my cache objects unserializable, making it impossible for it to be written to the wire and transferred. The logs are very specific, making this actually a very easy problem to track down once I realized where they were being written.
If you come here from the future, my advice is to just tail every single log you can on the working server, and see what comes up.