Can I retrieve an expired cache entry from an Infinispan server listener using @ClientCacheEntryExpired?

I am modifying an embedded Infinispan application to use the Infinispan server and the HotRod client. The embedded implementation relied on detecting cache expiration events in a listener. Using the "pre" event, I am able to read the expired entry and update external data.
This functionality spared me from having to write my own reaper, but as far as I can tell the HotRod client does not provide the same capability. I can detect the expiration with a @ClientCacheEntryExpired listener, but the event fires after the entry has been removed from the cache, and the only data available to the listener is the key.
Is there a (simple) way to duplicate the embedded behavior? Or will I have to implement my own expiration reaper?

You can customize the event (see Documentation) to include the removed value, but the event will always be triggered after the removal.
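For illustration, a minimal sketch of the default (non-customized) HotRod client listener, assuming the standard @ClientListener/@ClientCacheEntryExpired annotations; the class name is a placeholder. Without a server-side converter (referenced via converterFactoryName on @ClientListener) the event only carries the key:

```java
import org.infinispan.client.hotrod.annotation.ClientCacheEntryExpired;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryExpiredEvent;

// Without a custom converter the expiration event only carries the key;
// the value has already been removed on the server by the time this fires.
@ClientListener
public class ExpirationListener {

    @ClientCacheEntryExpired
    public void onExpired(ClientCacheEntryExpiredEvent<String> event) {
        String key = event.getKey();
        // Update external data based on the key only, or re-read the value
        // from a persistent store -- the cached value itself is gone.
    }
}

// Registration on a RemoteCache obtained from RemoteCacheManager:
// remoteCache.addClientListener(new ExpirationListener());
```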
Off-topic: the Infinispan server can communicate with a JDBC store (Documentation), and you can configure eviction with write-behind persistence to store your data externally (see the Eviction and Write-Behind documentation).

Related

Is it possible to store cache events in Apache Ignite?

We can persist cache data in Apache Ignite by enabling the persistenceEnabled property. Is there a similar way to store audit events as well, i.e. so that when we restart the Ignite server all cache events are retained? They are currently lost on a server restart.
I am open to any better approach for auditing via Ignite. I basically want to store all audit operations (especially INSERT and UPDATE) so that we can review (fetch) them later.
You would need to implement your own EventStorageSpi.
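As a rough, untested sketch of that approach: a custom SPI can extend IgniteSpiAdapter and forward every recorded event to a durable sink of your choice. AuditEventStorageSpi and writeToExternalStore are hypothetical names here:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.concurrent.ConcurrentLinkedQueue;

import org.apache.ignite.events.Event;
import org.apache.ignite.lang.IgnitePredicate;
import org.apache.ignite.spi.IgniteSpiAdapter;
import org.apache.ignite.spi.IgniteSpiException;
import org.apache.ignite.spi.eventstorage.EventStorageSpi;

public class AuditEventStorageSpi extends IgniteSpiAdapter implements EventStorageSpi {

    // In-memory window used to answer local queries; the durable copy lives in the external store.
    private final Collection<Event> recent = new ConcurrentLinkedQueue<>();

    @Override public void record(Event evt) throws IgniteSpiException {
        recent.add(evt);
        // writeToExternalStore(evt);  // hypothetical: persist to JDBC, Kafka, etc. so events survive restarts
    }

    @SuppressWarnings("unchecked")
    @Override public <T extends Event> Collection<T> localEvents(IgnitePredicate<T> p) {
        List<T> matched = new ArrayList<>();
        for (Event evt : recent) {
            T e = (T) evt;
            if (p.apply(e))
                matched.add(e);
        }
        return matched;
    }

    @Override public void spiStart(String igniteInstanceName) throws IgniteSpiException {
        // open the connection to the external audit store here
    }

    @Override public void spiStop() throws IgniteSpiException {
        // flush and close the external audit store here
    }
}
```

The SPI would then be registered via IgniteConfiguration.setEventStorageSpi(...), and the event types you want recorded must be enabled with setIncludeEventTypes(...).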

Backup Spring Session with GemFire

Spring documentation says that Spring Session can transparently leverage Redis to back a web application’s HttpSession when using REST endpoints.
Does anyone know if Spring supports GemFire in place of Redis to back a web application's HttpSession?
Ref: http://docs.spring.io/spring-session/docs/current/reference/html5/guides/rest.html
Not yet. ;)
However, I did spend a little time researching the effort involved to implement a GemFire adapter for Spring Session to back (store/replicate) an HttpSession. I still need to dig a little deeper and I will be tracking this effort in JIRA here (SGF-373).
Also note that GemFire already has support for HTTP server session replication using GemFire's HTTP Session Management Module.
Will post back when I have more details.
Will these three steps (at a high level) be sufficient to allow Spring Session to write to a GemFire repository instead of Redis?
Step 1: Implement a configuration class that provides the same functions as the annotation-driven configuration:
Allow Spring to load the configuration class.
Register the Spring Session filter in the container.
Establish the repository connection factory.
Configure the repository connection.
We will continue to reuse Spring Session's springSessionRepositoryFilter.
Step 2: Develop an equivalent GemfireOperationsSessionRepository implementing the SessionRepository interface (a rough sketch follows after these steps).
Step 3: SessionMessageListener.java
3.1. Decide on a technique to identify and save delta changes in the session to the underlying repository.
3.2. Determine how session expiration notifications from the underlying repository can be captured to invoke SessionDestroyEvent and cleanup operations.
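For Step 2, a rough, untested sketch of what such a repository might look like, assuming the Spring Session 1.x SessionRepository contract (later versions renamed some of these methods) and Spring Data GemFire's GemfireOperations; GemFireSessionRepository is a hypothetical name:

```java
import org.springframework.data.gemfire.GemfireOperations;
import org.springframework.session.MapSession;
import org.springframework.session.SessionRepository;

public class GemFireSessionRepository implements SessionRepository<MapSession> {

    // Template bound to the GemFire region that stores the sessions.
    private final GemfireOperations gemfireOperations;

    public GemFireSessionRepository(GemfireOperations gemfireOperations) {
        this.gemfireOperations = gemfireOperations;
    }

    @Override public MapSession createSession() {
        return new MapSession(); // generates a new random session id
    }

    @Override public void save(MapSession session) {
        // Step 3.1 would refine this into a delta-aware write instead of a full put.
        gemfireOperations.put(session.getId(), session);
    }

    @Override public MapSession getSession(String id) {
        MapSession session = gemfireOperations.get(id);
        if (session != null && session.isExpired()) {
            delete(id);
            return null;
        }
        return session;
    }

    @Override public void delete(String id) {
        gemfireOperations.remove(id);
    }
}
```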

AppFabric cache listener

I have several WCF services on a web farm. They capture every request object sent to them, cache it, and write the batch to the DB once a specified number of requests is reached. This is done to minimize calls to the DB. I am not using AppFabric caching for it; I am using the in-memory cache, which means the cache is separate for each node. It all works fine.
I want to install AppFabric on the server and write the requests to that cache. Now my question is: can I write some code (a DLL, perhaps) for AppFabric itself that periodically reads from this cache, writes to the DB, and flushes it? That way, all my services would have to do is put the requests in the cache. This would enable my services to perform better.
Is this even possible?
Yes. AppFabric already offers a write-behind technique: you create an assembly that implements the DataCacheStoreProvider abstract base class and register it with AppFabric. You then write items to the cache, and on an interval basis the added and updated items are written to the backend database according to your provider implementation.
To get more details about creating the provider, check the following link

Glassfish - How to broadcast JMS message to all instances in a cluster?

I am using Glassfish 3.1.2, and I set up a cluster with one node and two instances.
I have a message-driven bean in my application that subscribes to a topic; the application is deployed to the cluster.
When I publish a message to the topic I want both instances to receive the message.
However, in practice I am finding that only one instance receives the message.
I believe I am running into a feature called "shared subscriptions":
http://docs.oracle.com/cd/E18930_01/html/821-2438/gjzpg.html#MQAGgjzpg
The feature (which is enabled by default) says that beans in the cluster with the same client id are shared, and are effectively only one subscription.
It says that by default the client id of an MDB is its name, which means that both my instances are using the same client id.
So, other than completely disabling this feature, I would like to know whether it is possible to set up an MDB so that each instance subscribes with a different client ID. This seems a bit tricky since both instances are using the same WAR file. I think you can set the client ID in an annotation, but I'm not sure whether that can be changed at runtime...
I'm not sure why you would completely disable this feature. In the link you provided, it states clearly that you configure this per ActivationSpec/MDB, so as far as I understand it, it would affect only the MDB you have at hand.
For an MDB, set the ActivationSpec property useSharedSubscriptionInClusteredContainer to false. Do this in exactly the same way as with other ActivationSpec properties, using annotations in the MDB itself or in the deployment descriptor ejb-jar.xml or glassfish-ejb-jar.xml.
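For example, a sketch of the annotation-based variant; the topic and bean names are placeholders, and the property name comes from the documentation quoted above:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(
    mappedName = "jms/MyTopic",   // placeholder topic
    activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Topic"),
        // Opt this MDB out of shared subscriptions so every instance in the
        // cluster receives its own copy of each published message.
        @ActivationConfigProperty(propertyName = "useSharedSubscriptionInClusteredContainer",
                                  propertyValue = "false")
    })
public class BroadcastMessageBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // handle the broadcast message on this instance
    }
}
```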
But you can of course set the client ID on a connection dynamically at runtime. Please note that you would probably have to handle the JMS connection yourself a bit more, rather than relying on the features managed by the container.
http://docs.oracle.com/javaee/6/api/javax/jms/Connection.html#setClientID(java.lang.String)
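A minimal sketch of that manual approach, assuming you can derive something unique per instance (the instanceName parameter below is hypothetical, e.g. read from a system property set per instance):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

public class PerInstanceTopicSubscriber {

    public void subscribe(ConnectionFactory factory, Topic topic, String instanceName) throws Exception {
        Connection connection = factory.createConnection();
        // The client ID must be set before the connection is otherwise used.
        connection.setClientID("my-app-" + instanceName);
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(topic);
        connection.start();
        // consume via consumer.receive() or consumer.setMessageListener(...) as usual
    }
}
```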

Detect cluster node failure in JBoss AS 7.1.1-Final

I have configured a 2-node cluster in JBoss AS 7.1.1-Final. I am planning to use sticky sessions. Meanwhile, I am also recording the number of active online users in an Infinispan cache, along with the IP of the node where each user session was created, for reporting purposes.
I have taken care of the login/logout scenarios, where I clear out the cache entries. The problem is that if one of the server nodes goes down, I need a cleanup routine to clear that node's records from the cache too.
One option is to write a client that checks at a specific interval whether the server is alive and otherwise triggers a cleanup routine. That approach would work, but I am looking for a cleaner one: if the other live nodes could be notified when a server node fails, I could trigger the cleanup from there.
From the console I know that it shows when a server goes down or comes up. But what would be the listener for such events? Any thoughts?
If you just need to know when a node leaves, from within some server module (inside the JBoss server) you can use the @ViewChanged listener.
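Roughly, assuming you register the listener on the embedded cache manager inside your server module; NodeFailureListener and the cleanup call are placeholders:

```java
import java.util.ArrayList;
import java.util.List;

import org.infinispan.notifications.Listener;
import org.infinispan.notifications.cachemanagerlistener.annotation.ViewChanged;
import org.infinispan.notifications.cachemanagerlistener.event.ViewChangedEvent;
import org.infinispan.remoting.transport.Address;

@Listener
public class NodeFailureListener {

    @ViewChanged
    public void onViewChanged(ViewChangedEvent event) {
        // Members present in the old view but missing from the new one have left (or crashed).
        List<Address> left = new ArrayList<>(event.getOldMembers());
        left.removeAll(event.getNewMembers());
        for (Address node : left) {
            // cleanUpEntriesFor(node);  // hypothetical: clear this node's records from the cache
        }
    }
}

// Registration: embeddedCacheManager.addListener(new NodeFailureListener());
```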
You cannot get this information on clients connected via the REST or memcached protocols. With the HotRod protocol it is doable but pretty hackish: you'd have to override TransportFactory.updateServers (probably just extend TcpTransportFactory; see the configuration property infinispan.client.hotrod.transport_factory).