Infinispan 5.3.0 - Using UDP protocol for multicasting without cluster configurations and JGroups

We are using Infinispan for caching on Tomcat 7.0. We found that Infinispan was making UDP requests for multicasting even though we have no JGroups or cluster configuration. We noticed that the Infinispan jar itself contains a jgroups-udp.xml, but we do not reference it anywhere. Does Infinispan use the UDP protocol by default for multicasting/unicasting messages?
The cache configuration file we use is shown here:
<infinispan
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="urn:infinispan:config:5.3 http://www.infinispan.org/schemas/infinispan-config-5.3.xsd"
    xmlns="urn:infinispan:config:5.3">
    <global>
        <globalJmxStatistics enabled="true"/>
    </global>
    <namedCache name="FORMS">
        <jmxStatistics enabled="true"/>
        <eviction strategy="LIRS" maxEntries="1000"/>
        <expiration lifespan="86400000"/><!-- Setting expiration to 1 day -->
    </namedCache>
</infinispan>
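For reference, clustering is only enabled when the configuration declares a transport, and the jgroups-udp.xml bundled in the Infinispan jar is simply the stack JGroups falls back to when a transport is declared without an explicit configurationFile. A minimal sketch of making the choice explicit (the cluster name and alternative stack file here are hypothetical):
    <global>
        <globalJmxStatistics enabled="true"/>
        <!-- Without a <transport> element the cache manager stays in local mode.
             With one, but no configurationFile property, JGroups falls back to
             the jgroups-udp.xml bundled in the Infinispan jar. -->
        <transport clusterName="myCluster">
            <properties>
                <property name="configurationFile" value="jgroups-tcp.xml"/>
            </properties>
        </transport>
    </global>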

Related

Configuring Virtual Topics

I am trying to get a simple example of Virtual Topics working, but I am failing miserably.
From what I have read, the documentation on the ActiveMQ site may be incorrect.
My C# setup is as follows:
I have a consumer connected to queue://Consumer.A.VirtualTopic.FOO
I have a producer connected to topic://VirtualTopic.FOO
The producer publishes a message.
My server config is as follows:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<broker xmlns="http://activemq.apache.org/schema/activemq/apollo">
    <notes>
        The default configuration with tls/ssl enabled.
    </notes>
    <log_category console="console" security="security" connection="connection" audit="audit"/>
    <authentication domain="apollo"/>
    <!-- Give admins full access -->
    <access_rule allow="admins" action="*"/>
    <access_rule allow="*" action="connect" kind="connector"/>
    <virtual_host id="mybroker">
        <!--
        You should add all the host names that this virtual host is known as
        to properly support the STOMP 1.1 virtual host feature.
        -->
        <host_name>mybroker</host_name>
        <host_name>localhost</host_name>
        <host_name>127.0.0.1</host_name>
        <!-- Uncomment to disable security for the virtual host -->
        <!-- <authentication enabled="false"/> -->
<access_rule allow="users" action="connect create destroy send receive consume"/>
<!-- You can delete this element if you want to disable persistence for this virtual host -->
<leveldb_store directory="${apollo.base}/data"/>
</virtual_host>
<web_admin bind="http://127.0.0.1:61680"/>
<web_admin bind="https://127.0.0.1:61681"/>
<connector id="tcp" bind="tcp://0.0.0.0:61613" connection_limit="2000"/>
<connector id="tls" bind="tls://0.0.0.0:61614" connection_limit="2000"/>
<connector id="ws" bind="ws://0.0.0.0:61623" connection_limit="2000"/>
<connector id="wss" bind="wss://0.0.0.0:61624" connection_limit="2000"/>
<key_storage file="${apollo.base}/etc/keystore" password="password" key_password="password"/>
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<virtualTopic name="VirtualTopic.>" prefix="Consumer.*."/>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
</broker>
Any help would be greatly appreciated.
I made a mistake: I am using ActiveMQ Apollo, which is different from ActiveMQ. The configurations are different, but the server does not complain if you give it bogus configs. The Apollo configs are documented here: https://activemq.apache.org/apollo/documentation/user-manual.html. I needed to add a queue element with the mirrored option set to true in the virtual_host element.
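A sketch of that fix, based on the mirrored-queues section of the Apollo user manual (the queue id here, and whether a mirrored queue fits your naming scheme, are assumptions):
    <virtual_host id="mybroker">
        <!-- A mirrored queue shares its messages with the topic of the same
             name, giving a virtual-topic-like pattern in Apollo. -->
        <queue id="VirtualTopic.FOO" mirrored="true"/>
    </virtual_host>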

Let webservice use SSL

With WildFly 8.2.1, I am trying to make an existing web service (JAX-WS) use SSL, but I haven't seen any use of SSL in the quickstarts, and the information I was able to google is limited. So far I've added this to web.xml:
<security-constraint>
    <display-name>Foo security</display-name>
    <web-resource-collection>
        <web-resource-name>FooService</web-resource-name>
        <url-pattern>/foo/FooService</url-pattern>
        <http-method>POST</http-method>
    </web-resource-collection>
    <user-data-constraint>
        <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
</security-constraint>
and this is in my standalone.xml:
<subsystem xmlns="urn:jboss:domain:webservices:1.2">
<wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host>
<endpoint-config name="Standard-Endpoint-Config"/>
<endpoint-config name="Recording-Endpoint-Config">
<pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM">
<handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/>
</pre-handler-chain>
</endpoint-config>
<client-config name="Standard-Client-Config"/>
</subsystem>
but apparently that's not enough; when I look into standalone/data/wsdl/foo.ear/foo.war/FooService/Bar.wsdl I see:
<service name="FooService">
<port binding="foowsb:FooBinding" name="FooBinding">
<soap:address location="http://localhost:8080/foo/FooService"/>
</port>
</service>
Note that in the EAR/WAR, the soap:address location is filled with just a placeholder (I suppose that the value is ignored).
I've found some info about setting up a security realm and creating a self-signed certificate using keytool (which I did), but I completely fail to see how these pieces should be linked together.
I've also tried to set up wsdl-uri-scheme=https, but this is supported only in later versions of CXF.
It seems that the soap:address location value is not ignored when it's being replaced, since changing it from REPLACE_WITH_ACTUAL_URL to https://REPLACE_WITH_ACTUAL_URL did the trick - the service is now exposed on https://localhost:8443.
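In other words, in the WSDL packaged inside the archive:
    <!-- WildFly substitutes the placeholder host at deploy time but keeps the
         scheme, so the https:// prefix is what moves the endpoint to port 8443. -->
    <soap:address location="https://REPLACE_WITH_ACTUAL_URL"/>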
There were a couple more steps I had to take in standalone.xml: in the undertow subsystem, add an https-listener:
<https-listener name="secure" socket-binding="https" security-realm="SslRealm"/>
define the SslRealm:
<security-realm name="SslRealm">
<server-identities>
<ssl>
<keystore path="foo.keystore" relative-to="jboss.server.config.dir" keystore-password="foo1234" alias="foo" key-password="foo1234"/>
</ssl>
</server-identities>
<authentication>
<truststore path="foo.truststore" relative-to="jboss.server.config.dir" keystore-password="foo1234"/>
</authentication>
</security-realm>
Note that I reuse the same keystore for server and clients here. And since my clients currently sit in the same WildFly node during development, I had to set up the client-side part there, too:
<system-properties>
    <property name="javax.net.ssl.trustStore" value="${jboss.server.config.dir}/foo.keystore"/>
    <property name="javax.net.ssl.trustStorePassword" value="foo1234"/>
    <property name="org.jboss.security.ignoreHttpsHost" value="true"/>
</system-properties>
The last property should be replaced in WF 9+ with cxf.tls-client.disableCNCheck.
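Presumably along the lines of (a sketch following that note):
    <!-- WildFly 9+ replacement for org.jboss.security.ignoreHttpsHost -->
    <property name="cxf.tls-client.disableCNCheck" value="true"/>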

Infinispan Initial State Transfer Hangs and times out

I'm trying to cluster a pair of servers with a shared Infinispan cache (replicated asynchronously). One always starts successfully and registers itself properly with the JDBC database. When the other starts, it registers properly with the database, and I see a bunch of chatter between them; then, while waiting on a response from the second server, I get
`org.infinispan.commons.CacheException: Initial state transfer timed out`
I think it's just an issue of configuration, but I'm not sure how to debug my configuration issues. I've spent several days configuring and re-configuring my Infinispan XML and my jgroups.xml:
Infinispan:
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:infinispan:config:6.0"
xsi:schemaLocation="urn:infinispan:config:6.0 http://www.infinispan.org/schemas/infinispan-config-6.0.xsd
urn:infinispan:config:remote:6.0 http://www.infinispan.org/schemas/infinispan-cachestore-remote-config-6.0.xsd"
xmlns:remote="urn:infinispan:config:remote:6.0"
>
<!-- *************************** -->
<!-- System-wide global settings -->
<!-- *************************** -->
<global>
<shutdown hookBehavior="DEFAULT"/>
<transport clusterName="DSLObjectCache">
<properties>
<property name="configurationFile" value="jgroups.xml"/>
</properties>
</transport>
<globalJmxStatistics enabled="false" cacheManagerName="Complex.com"/>
</global>
<namedCache name="ObjectCache">
<transaction transactionMode="TRANSACTIONAL" />
<locking
useLockStriping="false"
/>
<invocationBatching enabled="true"/>
<clustering mode="replication">
<async asyncMarshalling="true" useReplQueue="true" replQueueInterval="100" replQueueMaxElements="100"/>
<stateTransfer fetchInMemoryState="true" />
</clustering>
<eviction strategy="LIRS" maxEntries="500000"/>
<expiration lifespan="86400000" wakeUpInterval="1000" />
</namedCache>
<default>
<!-- Configure a synchronous replication cache -->
<locking
useLockStriping="false"
/>
<clustering mode="replication">
<async asyncMarshalling="true" useReplQueue="true" replQueueInterval="100" replQueueMaxElements="100"/>
<stateTransfer fetchInMemoryState="true" />
</clustering>
<eviction strategy="LIRS" maxEntries="500000"/>
<expiration lifespan="86400000" wakeUpInterval="1000" />
<persistence>
<cluster remoteCallTimeout="60000" />
</persistence>
</default>
</infinispan>
jgroups.xml:
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/JGroups-3.0.xsd">
<!-- Default the external_addr to #DEADBEEF so we can see errors coming through
on the backend -->
<TCP
external_addr="${injected.external.address:222.173.190.239}"
receive_on_all_interfaces="true"
bind_addr="0.0.0.0"
bind_port="${injected.bind.port:12345}"
conn_expire_time="0"
reaper_interval="0"
sock_conn_timeout="20000"
tcp_nodelay="true"
/>
<JDBC_PING
datasource_jndi_name="java:jboss/datasources/dsl/control"
/>
<MERGE2 max_interval="30000" min_interval="10000"/>
<FD_SOCK
external_addr="${injected.external.address:222.173.190.239}"
bind_addr="0.0.0.0"
/>
<FD timeout="10000" max_tries="5"/>
<VERIFY_SUSPECT timeout="1500"
bind_addr="0.0.0.0"
/>
<pbcast.NAKACK use_mcast_xmit="false"
retransmit_timeouts="300,600,1200,2400,4800"
discard_delivered_msgs="true"/>
<UNICAST3 ack_batches_immediately="true"
/>
<RSVP ack_on_delivery="true"
throw_exception_on_timeout="true"
timeout="1000"
/>
<pbcast.STABLE stability_delay="1000" desired_avg_gossip="50000"
max_bytes="400000"/>
<pbcast.GMS print_local_addr="true" join_timeout="5000"
view_bundling="true" view_ack_collection_timeout="5000"/>
<FRAG2 frag_size="60000"/>
<pbcast.STATE_SOCK
bind_port="54321"
external_addr="${injected.external.address:222.173.190.239}"
bind_addr="0.0.0.0"
/>
<pbcast.FLUSH timeout="1000"/>
</config>
I've tried, frankly, every configuration option I can think of, and I'm not sure why the replication keeps timing out. All communication between these servers is wide open. Sorry to just dump so much XML, but I'm not even sure how to collect more information.
Continued exploration indicated that Infinispan was pushing logs to server.log, but, due to my configuration, these were not duplicated on the console. Further inspection revealed that I had left a single element of my cache objects unserializable, making it impossible for them to be written to the wire and transferred. The logs are very specific, making this actually a very easy problem to track down once I realized where they were being written.
If you come here from the future, my advice is to just tail every single log you can on the working server, and see what comes up.

Jboss AS 7 Infinispan in clustered environment

I have an application deployed on JBoss AS 7, running on multiple nodes across different servers, and I am trying to use Infinispan 5.2 for caching data. The problem is that cache values are not replicated across the servers; each value is accessible only from the node that wrote it.
The configuration used for caching is given here:
<cache-container name="cluster" aliases="ha-partition" default-cache="cache" jndi-name="java:jboss/infinispan/container/cluster">
<transport lock-timeout="10000"/>
<distributed-cache name="cache" mode="SYNC" start="EAGER" batching="false">
<locking isolation="REPEATABLE_READ"/>
<eviction strategy="LIRS" max-entries="1000"/>
<expiration lifespan="300000" />
</distributed-cache>
</cache-container>
I have also tried using replicated-cache instead of distributed-cache.
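For reference, that replicated variant would look essentially the same (a sketch; the child elements are assumed unchanged):
    <replicated-cache name="cache" mode="SYNC" start="EAGER" batching="false">
        <locking isolation="REPEATABLE_READ"/>
        <eviction strategy="LIRS" max-entries="1000"/>
        <expiration lifespan="300000"/>
    </replicated-cache>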
The <distributable/> tag is defined in my web.xml.
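That is, presumably something like:
    <web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
        <!-- Marks the application's HTTP sessions as clusterable -->
        <distributable/>
    </web-app>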
I had a problem like yours.
This article from Abhishek Mathur helped me solve it.
I don't know what type of data you want to replicate; I wanted to replicate the HTTP session, and the article worked for me.

Infinispan Cluster

I need to form an Infinispan cluster in distributed mode. This cache is used for storing session data. Currently I am using the tomcatInfinispanSessionManager developed by Manik from the JBoss team.
I have created the Infinispan XML in distributed mode and am using two Tomcats for testing, with Apache as a load balancer. Each machine has its own copy of the Infinispan cache entries. When either Tomcat is shut down, the session is retrieved from the other Infinispan cache.
My question is: how do I get these cache entries into an Infinispan server (using either Hot Rod or memcached) that is running on a separate machine?
If you add a remote cache loader to the cache configuration you have, it'll back up the data to a remote Hot Rod server, assuming you configure the IP:port address of the Hot Rod server(s) correctly.
However, if you're trying to cluster your session data, I'd highly recommend you download JBoss EAP 6.1, which comes with Infinispan-based, cluster-ready session data storage out of the box. The session cache can still be configured with a remote cache loader too, but the configuration will be slightly different since it uses the JBoss EAP configuration format.
I am using Infinispan version 5.1 and started the server in Hot Rod mode. My cache config XML is as follows.
<?xml version="1.0" encoding="UTF-8"?>
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:5.3 http://www.infinispan.org/schemas/infinispan-config-5.3.xsd
urn:infinispan:config:remote:5.3 http://www.infinispan.org/schemas/infinispan-cachestore-remote-config-5.3.xsd"
xmlns="urn:infinispan:config:5.3" xmlns:remote="urn:infinispan:config:remote:5.3">
<global>
<transport clusterName="tomcatSession">
<properties>
<property name="configurationFile"
value="E:/Software/apache-tomcat-7.0.34/conf/jgroups.xml">
</property>
</properties>
</transport>
<globalJmxStatistics enabled="true" />
</global>
<namedCache name="tc_session_ispn-sess-mgr">
<clustering mode="distribution">
<l1 enabled="true" lifespan="600000" />
</clustering>
<loaders>
<remoteStore xmlns="urn:infinispan:config:remote:5.3"
fetchPersistentState="false" ignoreModifications="false"
purgeOnStartup="false" remoteCache="myCache" rawValues="true">
<servers>
<server host="10.145.4.172" port="11222" />
</servers>
<connectionPool maxActive="10" exhaustedAction="CREATE_NEW" />
<async enabled="true" />
</remoteStore>
</loaders>
</namedCache>
</infinispan>
While using this cache config XML, I am getting the following exception:
Exception in thread "main" org.infinispan.config.ConfigurationException: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[39,104]
Message: Unexpected element '{urn:infinispan:config:remote:5.3}remoteStore' encountered
at org.infinispan.configuration.parsing.Parser.parse(Parser.java:168)
at org.infinispan.configuration.parsing.Parser.parse(Parser.java:130)
at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:368)
at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:340)
at org.infinispan.manager.DefaultCacheManager.<init>(DefaultCacheManager.java:327)
Kindly correct me if I am wrong, and suggest how to proceed further.