I am using JBoss AS 7.1.1.Final.
I have configured a replicated cache with transaction mode 'FULL_XA'.
I am using the cache as an in-memory database. The entries in the cache are manipulated (add/update/delete) by the application.
I am facing a scenario where a JTA transaction rollback does not revert the earlier addition of an entry in the cache.
The Infinispan documentation specifies that a transaction manager should be configured for the cache. I believe that on a JBoss application server, Infinispan should automatically be able to choose the correct transaction manager. Moreover, the Infinispan 1.2 XSD does not provide any details on how to configure a transaction manager for the cache.
Do we really need to configure a transaction manager here?
If not, what could be a probable cause of a cache addition not being rolled back after a transaction rollback?
Does Infinispan provide the ability to remove a previously added entry from the cache once the corresponding transaction is rolled back?
This is essentially the same atomicity guarantee provided by a persistent datastore such as an RDBMS.
I asked the same question on the Infinispan forums and got the answer there: https://community.jboss.org/message/778149#778149
Actually, Infinispan doesn't write anything to the cache until the transaction is committed, so there's nothing to roll back, provided that the cache really is transactional; the default is non-transactional.
You can enable transactions via the transactionMode attribute of the transaction element. There's an attribute for customizing the transaction manager lookup as well (transactionManagerLookupClass), but as you guessed the default should work with AS7.
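If you are using a native (standalone) Infinispan configuration file rather than the AS7 subsystem, the elements mentioned above look roughly like this. This is a sketch for the Infinispan 5.x schema; the cache name is a placeholder:

```xml
<namedCache name="myReplicatedCache">
   <clustering mode="replication"/>
   <!-- transactionManagerLookupClass is optional here: the default
        (GenericTransactionManagerLookup) should locate the AS7
        transaction manager on its own. -->
   <transaction transactionMode="TRANSACTIONAL"
                transactionManagerLookupClass="org.infinispan.transaction.lookup.GenericTransactionManagerLookup"/>
</namedCache>
```

In the AS7 Infinispan subsystem configuration (the 1.2 XSD mentioned in the question), the equivalent knob is the mode attribute of the transaction element, which is exactly the FULL_XA setting the question already uses.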
I'd like to upgrade the Redis Memorystore instance in our gcloud project, because 5.x (at least on GitHub) appears to have reached its end of life. It's being used for simple key-value pairs, so I don't expect anything unexpected during the upgrade to 6.x. However, management is nervous and wants a way to roll back the upgrade if there are issues. Is there a way to do this? The documentation appears to say that rollback is not possible. I plan to do the usual backup and then upgrade. The instance is just the Basic Tier.
To upgrade the Redis Memorystore instance, follow the best practices from the public documentation:
We recommend exporting your instance data before running a version upgrade operation. Note that upgrading an instance is irreversible: you cannot downgrade the Redis version of a Memorystore for Redis instance. For Standard Tier instances, to increase the speed and reliability of your version upgrade operation, upgrade your instance during periods of low instance traffic. To learn how to monitor instance traffic, see Monitoring Redis instances.
The documentation also recommends enabling RDB snapshots:

Memorystore for Redis is primarily used as an in-memory cache. When using Memorystore as a cache, your application can either tolerate loss of cache data or can very easily repopulate the cache from a persistent store.

However, there are some use cases where downtime for a Memorystore instance, or a complete loss of instance data, can cause long application downtimes. We recommend using the Standard Tier as the primary mechanism for high availability. Additionally, enabling RDB snapshots on Standard Tier instances provides extra protection from failures that can cause cache flushes. The Standard Tier provides a highly available instance with multiple replicas, and enables fast recovery using automatic failover if the primary fails.

In some scenarios you may also want to ensure data can be recovered from snapshot backups in the case of catastrophic failure of Standard Tier instances. In these scenarios, automated backups and the ability to restore data from RDB snapshots can provide additional protection from data loss. With RDB snapshots enabled, if needed, a recovery is made from the latest RDB snapshot.
For more information, you can refer to the documentation related to version upgrade behavior.
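The export-then-upgrade flow can be sketched with gcloud as follows. The instance name, region, and bucket are placeholders, and remember that the upgrade itself cannot be rolled back; the export only lets you restore the data into a fresh instance:

```shell
# 1. Export a backup of the current data to a Cloud Storage bucket
gcloud redis instances export gs://my-backup-bucket/my-instance.rdb \
    my-instance --region=us-central1

# 2. Upgrade the instance to Redis 6.x (irreversible)
gcloud redis instances upgrade my-instance \
    --redis-version=redis_6_x --region=us-central1

# To "roll back": create a new 5.x instance and import the backup
gcloud redis instances import gs://my-backup-bucket/my-instance.rdb \
    my-restored-instance --region=us-central1
```

Since this restores into a different instance, clients would need to be repointed at the restored instance's IP, so plan for that in the rollback procedure.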
We can persist cache data in Apache Ignite by enabling the persistenceEnabled property. Is there a similar way to store audit events as well? That is, when we restart the Ignite server, all cache events should also be retained; currently they are lost on a server restart.
I am open to any other, better approach for auditing via Ignite. I basically want to store all audit operations (especially INSERT and UPDATE) so that we can review (fetch) them later.
You would need to implement your own EventStorageSpi that writes events to durable storage.
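A minimal sketch of the idea behind a durable event store: append each audit event to a log file so it survives restarts. This is plain Java, not the Ignite API; a real EventStorageSpi implementation would delegate its record call to something like this (class and method names here are illustrative):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;
import java.util.List;

// Illustrative append-only audit log; survives process restarts because
// every event is flushed to a local file rather than kept in memory.
class FileAuditLog {
    private final Path file;

    FileAuditLog(Path file) {
        this.file = file;
    }

    /** Append one audit record (e.g. "INSERT key=42") with a timestamp. */
    synchronized void record(String operation) {
        try {
            String line = Instant.now() + "\t" + operation + System.lineSeparator();
            Files.writeString(file, line,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    /** Read all records back, e.g. after a server restart. */
    List<String> readAll() {
        try {
            return Files.exists(file) ? Files.readAllLines(file) : List.of();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

For review queries later, the same events could instead be written into a persistence-enabled Ignite cache keyed by timestamp, which would make them fetchable with SQL.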
I am modifying an embedded Infinispan application to use the Infinispan server and HotRod client. The embedded implementation relied on detecting cache expiration events in a listener. Using the "pre" event, I am able to read the expired entry and update external data.
This functionality spared me from having to write my own reaper, but as far as I can tell the HotRod client implementation does not provide the same capability. I can detect the expiration with an @ClientCacheEntryExpired listener method, but apparently the event fires after the entry is removed from the cache, and the only data available to the listener is the key.
Is there a (simple) way to duplicate the embedded behavior? Or will I have to implement my own expiration reaper?
You can customize the event (see the documentation) to include the removed value, but the event will always be triggered after the removal.
Off-topic: the Infinispan server can communicate with a JDBC store (see the documentation), and you can configure eviction with write-behind persistence to store your data externally (see the Eviction and Write-Behind documentation).
Is it possible to persist the Ignite cache on the local file system?
I need to perform cache operations like insert, update, delete on my look up data.
But this has to be persisted on the local file system of the respective nodes, so that the data survives a restart of the Ignite cluster.
Alternatively, I was able to persist the data in a MySQL database.
But I'm looking for a persistence solution that works independent of databases and HDFS.
Ignite, since version 2.1, has its own native persistence. Moreover, it has advantages over integration with 3rd-party databases.
You can read about it here: https://apacheignite.readme.io/docs/distributed-persistent-store
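A minimal Spring XML sketch of enabling native persistence, using the Ignite 2.3+ class names (2.1 and 2.2 used PersistentStoreConfiguration instead); the storagePath value is a placeholder:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Enable disk persistence for the default data region -->
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
            <!-- Optional: pin the persistence files to a specific local
                 directory instead of the default Ignite work directory -->
            <property name="storagePath" value="/var/lib/ignite/persistence"/>
        </bean>
    </property>
</bean>
```

Note that with native persistence enabled the cluster starts inactive; you must activate it once (e.g. ignite.cluster().active(true) or control.sh --activate) before cache operations are allowed.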
My project supports nested transactions, and thus we have the MSDTC service running on the web server as well as on the database server. The project is working fine. However, we have database mirroring established on the database server, and whenever failover happens, site pages where nested transactions are used throw an error:
The operation is not valid for the state of the transaction.
We have the MSDTC service running on the mirror database server too. Please suggest what should be done to overcome this problem.
In the default DTC setup, it is the DTC of the server that initiates the transactions (the web server in your case) that coordinates them. When the first database server goes down, it rolls back its current transaction and notifies the transaction coordinator, and that is why you get the error: the web server cannot commit the transaction because at least one participant has voted for a rollback.
I don't think you can get around that. What your web server should do is retry the complete transaction. Database calls would then be handled by the mirror server and would succeed.
That is at least my opinion. I'm no authority on distributed transactions, nor on database clusters with automatic failover...
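The retry advice can be sketched as follows. This is generic Java, not MSDTC-specific code; in a .NET stack the same pattern would wrap the whole TransactionScope block, and maxAttempts plus the exception handling are assumptions to tune for your system:

```java
import java.util.function.Supplier;

// Sketch of "retry the complete transaction": each attempt starts a
// brand-new transaction from scratch, so after a mirroring failover the
// retried database calls are handled by the new principal server.
class TransactionRetry {
    static <T> T runWithRetry(Supplier<T> txWork, int maxAttempts) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return txWork.get(); // begin + work + commit, all inside
            } catch (RuntimeException e) {
                last = e; // assumed transient (e.g. failover); retry fresh
            }
        }
        throw last; // all attempts failed
    }
}
```

In production you would also add a short delay between attempts (failover takes a few seconds) and rethrow immediately on errors you know are not transient.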