Infinispan invalidation cache invalidates on new key - infinispan

We have a two-node active/active Wildfly 19 cluster configured with an Infinispan (v9.4.18) invalidation cache.
<invalidation-cache name="opencell-tenant-cache">
<transaction locking="OPTIMISTIC" mode="NONE"/>
</invalidation-cache>
According to the Infinispan documentation, when a cached value changes on node 1, an InvalidateCommand is sent from node 1 to node 2, invalidating/removing the key entry from node 2's cache.
What I noticed is that the InvalidateCommand is sent even on a new key put.
In our application, if a key is not found in the cache, the value is loaded from the DB and put into the cache. And as both servers are active, I get the following never-ending scenario:
Request to Node 1 > key not found in Node 1 cache > value loaded from DB and put into Node 1 cache > key invalidated on Node 2
Request to Node 2 > key not found in Node 2 cache > value loaded from DB and put into Node 2 cache > key invalidated on Node 1
Request to Node 1 > key not found in Node 1 cache > value loaded from DB and put into Node 1 cache > key invalidated on Node 2
and so on.
With this scenario, I am invalidating the cache constantly, even though the data never changes.
I would expect that no invalidation command is sent on a new key put.
Otherwise, what is the practical use of an invalidation cache, if putting the same key back after receiving an invalidation command triggers another invalidation?
Thanks

I would expect that no invalidation command is sent on a new key put.
And what is a new put?
Request to Node 2 > key not found in Node 2 cache <- in your example, Node 2 does not have the key locally, so it is a new put.
Node 2 can fetch the key from Node 1 if you configure a ClusterLoader. See the Invalidation Cache Mode documentation.
If invalidation does not fit your data access pattern, take a look at the other cache modes (such as replicated or distributed).
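For illustration, here is a minimal sketch of a ClusterLoader in native (programmatic) Infinispan 9.4 configuration. The cache name comes from the question, the timeout value is an arbitrary example, and the Wildfly subsystem XML equivalent differs from this embedded-API form:

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class TenantCacheConfig {
    public static void main(String[] args) {
        // With a ClusterLoader, a node that misses locally first asks the
        // other cluster members for the key before hitting the DB, which
        // breaks the load-put-invalidate loop described in the question.
        ConfigurationBuilder cfg = new ConfigurationBuilder();
        cfg.clustering().cacheMode(CacheMode.INVALIDATION_SYNC)
           .persistence().addClusterLoader().remoteCallTimeout(500); // arbitrary example timeout (ms)

        DefaultCacheManager manager =
            new DefaultCacheManager(GlobalConfigurationBuilder.defaultClusteredBuilder().build());
        manager.defineConfiguration("opencell-tenant-cache", cfg.build());
        Cache<String, Object> cache = manager.getCache("opencell-tenant-cache");
        // A get() miss on the other node now retrieves this entry remotely
        // instead of re-loading from the DB and re-putting (which would
        // invalidate it here).
        cache.put("tenant-1", "loaded-from-db");
    }
}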

Related

org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable inside kubernetes environment

We have a setup wherein one Ignite server node serves 15 to 20 thick client nodes and 40 to 50 thin client nodes; the thin client connection is a singleton.
In operation, we sometimes get the error below:
org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable [sock=Socket[addr=hostnm19.hostx.com/10.13.10.19,port=30519,localport=57552]]
On the server node, we are inserting data into a third-party store using a CacheStoreAdapter.
We don't know where it goes wrong, since roughly one operation out of 100 fails with the above error.
Also, let me know what we can do to handle this failure.
Apache Ignite version: 2.8
Edit (code snippet):
ClientConfiguration cfg = new ClientConfiguration()
.setAddresses("host:port");
IgniteClient client = Ignition.startClient(cfg); // this client is singleton
client.getOrCreateCache("ABC_CACHE").put(key, val);
Stack trace:
org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable [sock=Socket[addr=hostnm19.hostx.com/10.13.10.19,port=30519,localport=57552]]
at org.apache.ignite.internal.client.thin.TcpClientChannel.handleIOError(TcpClientChannel.java:499)
at org.apache.ignite.internal.client.thin.TcpClientChannel.handleIOError(TcpClientChannel.java:491)
at org.apache.ignite.internal.client.thin.TcpClientChannel.access$100(TcpClientChannel.java:92)
at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.read(TcpClientChannel.java:538)
at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.readInt(TcpClientChannel.java:572)
at org.apache.ignite.internal.client.thin.TcpClientChannel.processNextResponse(TcpClientChannel.java:272)
at org.apache.ignite.internal.client.thin.TcpClientChannel.receive(TcpClientChannel.java:234)
at org.apache.ignite.internal.client.thin.TcpClientChannel.service(TcpClientChannel.java:171)
at org.apache.ignite.internal.client.thin.ReliableChannel.service(ReliableChannel.java:160)
at org.apache.ignite.internal.client.thin.ReliableChannel.request(ReliableChannel.java:187)
at org.apache.ignite.internal.client.thin.TcpIgniteClient.getOrCreateCache(TcpIgniteClient.java:114)
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:210)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at org.apache.ignite.internal.client.thin.TcpClientChannel$ByteCountingDataInput.read(TcpClientChannel.java:535)
... 36 more
You probably have a network device or NAT configured that resets connections when they are idle, or even sporadically.
In this case, you will have to reconnect.
Another option: are you sure you are connecting to the thin client port (10800 by default) and not some other port?
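As a sketch of the reconnect approach (the helper, its retry policy, and the host name are assumptions, not part of the Ignite API): open a fresh thin client per attempt so a socket killed by an idle-timeout is replaced, and retry on ClientConnectionException.

import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientConnectionException;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class RetryingPut {
    // Hypothetical helper: recreate the thin client on each attempt and
    // retry the put when the connection has been reset underneath us.
    static void putWithRetry(ClientConfiguration cfg, String cacheName,
                             Object key, Object val, int maxAttempts) {
        for (int attempt = 1; ; attempt++) {
            try (IgniteClient client = Ignition.startClient(cfg)) {
                client.getOrCreateCache(cacheName).put(key, val);
                return;
            } catch (ClientConnectionException e) {
                if (attempt >= maxAttempts)
                    throw e; // still failing after maxAttempts tries
            }
        }
    }

    public static void main(String[] args) {
        // 10800 is the default thin client port; the host is a placeholder.
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("host:10800");
        putWithRetry(cfg, "ABC_CACHE", "key", "val", 3);
    }
}

Recreating the client per attempt trades the singleton connection for robustness; if you keep a singleton, you would instead replace it only when a ClientConnectionException surfaces.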

Dockerized DCTM 7.3 and Dockerized DCTM REST 7.3 not able to retrieve global registry or its documents

My setup consists of
Documentum Content Server 7.3 (dctm-cs) running in a docker container (from EMC)
Documentum REST Services 7.3 (dctm-rest) running in a docker container (from EMC)
I am definitely able to get information from within dctm by running queries against it with iapi, for example:
API> ?,c,select user_name from dm_user enable (return_top 5)
user_name
------------------------
docu
ubuntudb
dm_superusers
dm_superusers_dynamic
dm_browse_all
(5 rows affected)
I am also able to $ curl http://localhost:8080/dctm-rest/repositories.json from both the dctm-rest container and its host and get the results:
{"id":"http://localhost:8080/dctm-rest/repositories","title":"Repositories","author":[{"name":"EMC Documentum"}],"updated":"2017-08-16T21:42:44.177+00:00","page":1,"items-per-page":1000,"total":1,"links":[{"rel":"self","href":"http://localhost:8080/dctm-rest/repositories.json"}],"entries":[{"id":"http://localhost:8080/dctm-rest/repositories/ubuntudb","title":"ubuntudb","summary":"ubuntudb","updated":"2017-08-16T21:42:44.178+00:00","published":"2017-08-16T21:42:44.178+00:00","links":[{"rel":"edit","href":"http://localhost:8080/dctm-rest/repositories/ubuntudb.json"}],"content":{"type":"application/json","src":"http://localhost:8080/dctm-rest/repositories/ubuntudb.json"}}]}
Attempting to $ curl http://localhost:8080/dctm-rest/repositories/ubuntudb.json, however, hangs indefinitely.
I have attempted to provide the default username and password via basic HTTP authentication, also with the same results.
The contents of the dfc.properties file in dctm-cs:
dfc.data.dir=/opt/dctm
dfc.tokenstorage.dir=/opt/dctm/apptoken
dfc.tokenstorage.enable=false
dfc.docbroker.host[0]=ubuntustateless
dfc.docbroker.port[0]=1489
dfc.crypto.repository=ubuntudb
dfc.session.secure_connect_default=try_secure_first
dfc.globalregistry.repository=ubuntudb
dfc.globalregistry.username=dm_bof_registry
dfc.globalregistry.password=AAAAEL9wp8c6k3K2UTQJwTYO5kMnE3rDrHJVDL+LijAg+zLk
The contents of the dfc.properties file in dctm-rest:
dfc.docbroker.host[0]=172.18.0.1
dfc.docbroker.port[0]=1489
#Add the global registry repository name to the following key.
dfc.globalregistry.repository=ubuntudb
#Add the username of the global registry user to the following key.
dfc.globalregistry.username=dmadmin
#Add an encrypted password value for the following key.
dfc.globalregistry.password=password
dfc.exception.include_id=false
dfc.exception.include_decoration=false
I have attempted to change the value of dfc.globalregistry.username to be the same as in dctm-cs, to no avail: the request hangs just the same.
I have also attempted to use both encrypted and plaintext values for dfc.globalregistry.password, in both dctm-cs and dctm-rest, also with no luck.

GemFire 8.2.0 ClientCacheFactory addPoolLocator, how to set list of locators on client

I have this code
ClientCacheFactory clientCacheFactory = new ClientCacheFactory();
clientCacheFactory.set("cache-xml-file", "client-cache.xml");
clientCacheFactory.set("mcast-port", "0");
List<String> locators = Arrays.asList(peersIp.split(","));
for (String locator : locators) {
clientCacheFactory.addPoolLocator(locator, 11001);
}
I am adding a list of locator IPs to the ClientCacheFactory.
In the client's logs I see:
[info 2016/02/09 13:14:15.440 UTC tid=0x1] Running in local mode since mcast-port was 0 and locators was empty.
[info 2016/02/09 13:14:15.694 UTC tid=0x1] Pool DEFAULT started with multiuser-authentication=false
[info 2016/02/09 13:14:15.725 UTC tid=0x16] Updating membership port. Port changed from 0 to 49,879.
My client-cache.xml:
<!DOCTYPE client-cache PUBLIC
"-//GemStone Systems, Inc.//GemFire Declarative Caching 6.6//EN" "http://www.gemstone.com/dtd/cache6_6.dtd">
<client-cache>
<region name="exampleRegion" refid="PROXY"/>
</client-cache>
I want to use the ClientCacheFactory to define a list of locators for the client to connect to.
The method addPoolLocator looks like just what I want, but the logs say no.
The logs are misleading you in this case. It looks like your pool has been created, so you should be good to go. You can verify by connecting gfsh to the locator and issuing a describe region command.
$ cd $GEMFIRE_HOME
$ ./bin/gfsh
gfsh>connect --locator=localhost[10334]
gfsh>describe region --name=/MyRegion
gfsh>list members
gfsh>list clients
Please refer to gfsh documentation for more commands, or just type help in gfsh.
The logs actually mean you are not connected as a peer to the GemFire distributed system. The locators property refers to the peer-to-peer locators, not the ones within the pool. I can see how this is misleading; I will file an issue against Apache Geode.
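For reference, a minimal client-side sketch of the working pattern (GemFire 8.2 packages; the locator hosts are placeholders): the pool locators added here are what a pure cache client actually uses, so the "locators was empty" log line is expected.

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;

public class GemFireClient {
    public static void main(String[] args) {
        // A pure cache client: mcast-port=0 and no "locators" property,
        // because the client reaches the cluster through the pool locators.
        ClientCache cache = new ClientCacheFactory()
            .set("cache-xml-file", "client-cache.xml")
            .set("mcast-port", "0")
            .addPoolLocator("locator1.example.org", 11001) // placeholder hosts
            .addPoolLocator("locator2.example.org", 11001)
            .create();
        // "exampleRegion" is the PROXY region declared in client-cache.xml.
        Region<String, Object> region = cache.getRegion("exampleRegion");
        region.put("k", "v"); // goes to the servers via the pool
    }
}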

activeMQ master/slave cluster with zookeeper

I have my ActiveMQ connected to ZooKeeper (a cluster of 5 ZooKeepers). In the config file activemq.xml, I have:
<persistenceAdapter>
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:0"
zkAddress="blablabla:2181"
zkPassword="password"
zkPath="/activemq/leveldb-stores"
hostname="blabla"
/>
</persistenceAdapter>
Now I have activeMQ-server1 started, and it successfully becomes the master; activeMQ-server2, with the same activemq.xml config file, successfully becomes the slave; and activeMQ-server3, with the same activemq.xml config file, successfully becomes the slave but kicks out activeMQ-server2 (which starts to give connection errors).
I think I put the wrong number for replicas, so I changed all three config files to replicas="4"; it still does not work.
What would be the correct replicas number with 3 ActiveMQ servers, or am I wrong about some other part? (I only have 1 ZooKeeper listed in the config, since the 5 ZooKeepers can connect to each other; they already form a cluster.)
Thanks :)
You need to list all the ZooKeeper servers in the zkAddress portion: zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181", taken from the ActiveMQ replicated LevelDB documentation.
The replicas value is the number of ActiveMQ nodes, not the number of ZooKeeper nodes. So if you have 3 AMQ nodes, set replicas="3", not more. From http://activemq.apache.org/replicated-leveldb-store.html :
Replicas property :
The number of nodes that will exist in the cluster.
At least (replicas/2)+1 nodes must be online to avoid service outage.
Another thing: all AMQ nodes in the cluster must get the same broker name (MyBroker below):
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="MyBroker" dataDirectory="${activemq.data}">
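Putting both fixes together, a sketch of the persistence adapter for each of the three brokers (hostnames are placeholders; replicas matches the three AMQ nodes, all five ZooKeepers are listed, and hostname stays unique per broker while brokerName is identical everywhere):
<persistenceAdapter>
<replicatedLevelDB
directory="${activemq.data}/leveldb"
replicas="3"
bind="tcp://0.0.0.0:0"
zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181,zoo4.example.org:2181,zoo5.example.org:2181"
zkPassword="password"
zkPath="/activemq/leveldb-stores"
hostname="broker1.example.org"
/>
</persistenceAdapter>
With replicas="3", at least (3/2)+1 = 2 brokers must be online for the master to keep accepting writes.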

Issue with Open Shift Origin Mongo DB service

I have installed OpenShift Origin V3 on AWS EC2 (Fedora 19) using oo-install. The setup is one Broker + one Node.
I was making some modifications to the security groups to make them more restrictive, and it ended up causing some issues in the mongo service.
1. service mongod does not start up, and the status shows failed.
The /var/log/mongodb/mongodb.log says
Thu Mar 6 11:24:08.189 [initandlisten] ERROR: listen(): bind() failed errno:99 Cannot assign requested address for socket: :27017
Thu Mar 6 11:24:08.189 [initandlisten] now exiting
Running oo-accept-broker -v says
FAIL: error logging into mongo db: MOPED: Retrying connection to primary for replica set :27017">]>: MOPED: Retrying connection to primary for replica set :27017">]>/MOPED: --username Retrying, exit code: 1
Any pointers on how to resolve this will be greatly appreciated.
Thanks
Shabna
I would try rolling back your changes to the security groups first, then reapply them one by one and see which one causes the issue. Then post that specific change to Stack Overflow and see if anyone can comment on how it affects MongoDB.