GemFire client region to get updates from server

I have a GemFire server region (distributed) and a local client region (caching proxy) configured this way:
<client-cache>
    <pool name="client" subscription-enabled="true">
        <locator host="localhost" port="13489" />
    </pool>
    <region name="customers" refid="CACHING_PROXY">
        <region-attributes>
            <subscription-attributes interest-policy="all"/>
            <!--<subscription-attributes interest-policy="cache-content"/>-->
        </region-attributes>
    </region>
</client-cache>
When I get a value from the client region and the key is unknown on the client, it is fetched from the server. After that, however, if the server value changes, the new value is not propagated to the client, even though subscription-attributes are set.
What is the misconfiguration here?

To have all changes pushed into your local cache, you need to remove the subscription-attributes tag, leave subscription-enabled=true on the pool, and then programmatically call the Region.registerInterest API (see the GemFire Javadoc) to actually cause the server to start delivering change notifications to your client.
As a good starting point, I would suggest
region.registerInterestRegex(".*", InterestResultPolicy.NONE, false, false)
This will ensure that you only receive "fresh" values, and it will take advantage of the local cache for repeated retrievals without attempting to hold all values in memory. However, there are quite a few options for interest registration, so you will want to consult the Javadoc.
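For illustration, here is a minimal Java sketch of that registration in context; the region name matches the XML above, while the cache-xml file name is an assumption (on older GemFire releases the packages are com.gemstone.gemfire.* rather than org.apache.geode.*):

import org.apache.geode.cache.InterestResultPolicy;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;

public class SubscribeClient {
    public static void main(String[] args) {
        // Load the client-cache XML shown above (file name is an assumption).
        ClientCache cache = new ClientCacheFactory()
                .set("cache-xml-file", "client-cache.xml")
                .create();
        Region<String, Object> customers = cache.getRegion("customers");
        // NONE: no initial bulk fetch; updates are pushed as they happen on the server.
        customers.registerInterestRegex(".*", InterestResultPolicy.NONE, false, false);
    }
}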
As an additional note, CACHING_PROXY is often combined with some eviction mechanism to ensure that the size of the local cache does not grow indefinitely.
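For example, a cache.xml sketch pairing CACHING_PROXY with entry-count eviction (the maximum of 1000 is illustrative):

<region name="customers" refid="CACHING_PROXY">
    <region-attributes>
        <eviction-attributes>
            <!-- Cap the local copy; evicted entries are destroyed locally and
                 re-fetched from the server on the next get. -->
            <lru-entry-count maximum="1000" action="local-destroy"/>
        </eviction-attributes>
    </region-attributes>
</region>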
Also, the subscription-attributes inside the region tag actually apply to server-side configuration, not the client side. Even on the server side it is not usually necessary to configure subscription-attributes, because the server-side region shortcuts (PARTITION, REPLICATE, etc.) generally configure them appropriately.

Related

Geode region[key] get triggers region listener create event

Using Geode 1.2 and the 9.1 Pivotal native client, the following code:
IRegion<string, IPdxInstance> r = cache.GetRegion<string, IPdxInstance>("myRegion");
return r[key];
then triggers an AfterCreate event for myRegion. Why does that happen when no data is created, only read?
Same here, I've never used the Native Client. I agree with what @Urizen suspected: you are calling r[key] from an instance of Geode that doesn't have the entry, so it pulls the data from another instance, which "creates" the entry locally.
You have a few options here:
Performing an interest registration for the instance you are initiating the call from, using registerAllKeys() (see the docs). There is a catch here (it might not be applicable to the native client): in the Java API, you have the option to register interest with an InterestResultPolicy. If you use KEYS_VALUES, you will load all data from remote to local on startup WITHOUT triggering the afterCreate callback. If you choose KEYS only or NONE, you will likely have a similar problem.
You can check the boolean flag remoteOrigin in the EntryEvent. If it is false, the operation is purely local. In a non-WAN setup, this should be enough to distinguish your local operations from remotely initiated ones (be it cache syncing or a genuine creation initiated by another cache). I vaguely remember WAN working a bit differently here.
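A minimal Java sketch of this second option, assuming the Java API (where the flag is exposed as isOriginRemote() on the event; the native client API may differ):

import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

// Sketch only: react to afterCreate, but skip entries that were
// created because the value arrived from another member.
public class LocalOnlyListener extends CacheListenerAdapter<String, Object> {
    @Override
    public void afterCreate(EntryEvent<String, Object> event) {
        if (event.isOriginRemote()) {
            return; // pulled in from a remote member, not a local create
        }
        System.out.println("Locally created: " + event.getKey());
    }
}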
I've never used the Native Client but, at first glance, it should be expected for the afterCreate event to be invoked on the client side, as the entry is actually being created in the local cache. What I mean is that the entry might exist on the server but, internally, the client needs to retrieve it from the server and then create it locally (thus invoking afterCreate for the locally installed CacheListener). Makes sense?

mule cache managed object store with _defaultUserObjectStore not getting refreshed

I am using a Mule cache managed object store with _defaultUserObjectStore, and my data is not getting refreshed on the on-premises production server.
<ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy" >
<managed-store storeName="_defaultUserObjectStore" entryTTL="600000" expirationInterval="600000"/>
</ee:object-store-caching-strategy>
Your entryTTL is the same as your expirationInterval; that may be causing the problem.
Have you actually looked at what those values mean?
Typically the expiration interval will be short: it is how often the store checks for expired entries and, if any are found, deletes them.
https://docs.mulesoft.com/mule-user-guide/v/3.8/cache-scope#configobjstore
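As a sketch of that advice applied to the configuration above (the 10-second interval is illustrative), keep the TTL but purge expired entries frequently:

<ee:object-store-caching-strategy name="Caching_Strategy" doc:name="Caching Strategy">
    <!-- entryTTL: entries live at most 10 minutes.
         expirationInterval: check for (and delete) expired entries every 10 seconds. -->
    <managed-store storeName="_defaultUserObjectStore" entryTTL="600000" expirationInterval="10000"/>
</ee:object-store-caching-strategy>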

Ignite and the CAP theorem

1. Which category of the CAP theorem does Ignite fall under?
2. While doing a loadCache using a client on multiple servers: if the client goes down after loadCache has been called, will the operation complete on the servers? (I'm unable to try it due to some permission restrictions.)
Ignite guarantees data consistency. If the cluster is segmented into two parts, they can't be merged back; one of the parts has to be considered invalid and restarted.
Most likely the data will not be fully loaded in this case; the loading process should be restarted.
For the first question: the CacheAtomicityMode enum has two values:
TRANSACTIONAL: if you configure this in your CacheConfiguration, then your application is CP.
ATOMIC: if you configure this in your CacheConfiguration, then your application is AP; ATOMIC is the default value.
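For illustration, a minimal Java sketch (the cache name "myCache" and the key/value types are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class CpCacheExample {
    public static void main(String[] args) {
        // TRANSACTIONAL gives CP behaviour; ATOMIC (the default) gives AP.
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("myCache");
        cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(cfg);
        }
    }
}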
For the second question: if your client is embedded in your application and loadCache fails for some reason, a CacheLoaderException will be thrown. If you want a custom CacheStore, you can extend CacheStoreAdapter and override its methods.

GemFire region with data expiration

Regarding this document, "entry-time-to-live-expiration" means how long the region's entries can remain in the cache without being accessed or updated; the default is no expiration of this type. However, when I use Spring Cache and a client-region with the following configuration, I find that the setting does not behave that way with respect to being accessed. Furthermore, the XML TTL tab of that document says it "Configures a replica region to invalidate entries that have not been modified for 15 seconds." So I am confused about whether TTL accounts for entries being accessed.
<gfe:client-region id="Customer2" name="Customer2" destroy="false" load-factor="0.5" statistics="true" cache-ref="client-cache">
<gfe:entry-ttl action="DESTROY" timeout="60"/>
<gfe:eviction threshold="5"/>
</gfe:client-region>
So, you might want to refer to the GemFire documentation on expiration and on client regions. Perhaps relevant to your situation is...
"Requests for entries that have expired on the consumers will be forwarded to the producer."
Based on your configuration, given you did not set either a ClientRegionShortcut or DataPolicy, your Client Region, "Customer2", defaults to ClientRegionShortcut.LOCAL, which sets a DataPolicy of "NORMAL". DataPolicy.NORMAL states...
"Allows the contents in this cache to differ from other caches. Data that this region is interested in is stored in local memory."
And for the shortcut of "LOCAL"...
"A LOCAL region only has local state and never sends operations to a server. ..."
However, that does not mean the client Region cannot receive data (of interest) from the Server. It simply implies operations are not distributed to the Server. The client may be expiring the entry and then repopulating it from the Server (producer).
Of course, I am speculating and have not tested these ideas. You might try setting the Expiration Action to "LOCAL_DESTROY" and/or changing your distribution properties through different ClientRegionShortcuts.
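For instance, a sketch of that first suggestion applied to the configuration above (only the expiration action changes):

<gfe:client-region id="Customer2" name="Customer2" destroy="false" load-factor="0.5" statistics="true" cache-ref="client-cache">
    <!-- LOCAL_DESTROY removes the expired entry from the client cache only -->
    <gfe:entry-ttl action="LOCAL_DESTROY" timeout="60"/>
    <gfe:eviction threshold="5"/>
</gfe:client-region>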
Post back if you are still having problems. I too echo what @hubbardr is asking.
Cheers!

What is the difference between the two deliver options in 'Operation Behaviour' in RTC?

At 'Project Area' level on RTC under 'Team Configuration' -> 'Operation Behaviour' there are two deliver options :
What is the difference between the two? Are they both not delivering to the server?
Those are for hooks:
one executed on the client, that is, before the deliver;
one executed on the server, that is, at the reception of the deliver.
It is on the client side, for instance, that I set the hook requiring that a Work Item is associated to a change set before said change set can be delivered (as illustrated in your previous question "Can I associate a change set with a work item after it has been delivered?").
I could check it on the server, but why use network traffic if the deliver is rejected anyway?
More precisely, as mentioned in this thread:
In general, you want all preconditions to run on the server, so the server (including the web server) can ensure those preconditions have been executed.
But there are some preconditions that must be run on the client, namely those that need to look at the local state of the client.
This is illustrated by the list of predefined preconditions.
In particular, most of these preconditions refer to the build/compile state of the workspace (information not available on the server), such as: "prohibit unused imports" and "prohibit workspace errors".
Note that there are three client-side preconditions that do not require client-side information ("require work item approval", "require work item and comments", "descriptive change sets").
These are included for backward compatibility, since they were made available in the first release of RTC, but they have since been made available as server-side preconditions as well, so you should always use the server-side form of them.
I've submitted work item 209427 to get these client-side preconditions marked as "deprecated" with a pointer to the server-side preconditions that replace them.