1. Which category of the CAP theorem does Ignite fall under?
2. When a client calls loadCache against multiple servers, if the client goes down after loadCache has been called, will the operation complete on the servers? (I was unable to try it due to some permission restrictions.)
Ignite guarantees data consistency. If the cluster is segmented into two parts, they can't be merged back; one of the parts has to be considered invalid and restarted.
Most likely the data will not be fully loaded in this case, and the loading process should be restarted.
For the first question: the CacheAtomicityMode enum has two values.
TRANSACTIONAL: if you configure this in your CacheConfiguration, then your application is CP.
ATOMIC: if you configure this in your CacheConfiguration, then your application is AP. ATOMIC is the default value.
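For illustration, a minimal sketch of setting the mode via the Java API (the cache name "myCache" is just a placeholder):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class AtomicityModeExample {
    public static void main(String[] args) {
        CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

        // TRANSACTIONAL favors consistency (CP); omitting this line leaves
        // the default, ATOMIC, which favors availability (AP).
        ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(ccfg);
        }
    }
}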
For the second question: if your client is embedded in your application and loadCache fails for some reason, a CacheLoaderException will be thrown. If you want a custom CacheStore, you can extend CacheStoreAdapter and override its methods.
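A minimal sketch of such a store, assuming hypothetical Person and PersonDao classes standing in for real database access:

import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class PersonCacheStore extends CacheStoreAdapter<Long, Person> {
    // Hypothetical DAO standing in for actual database access.
    private final PersonDao dao = new PersonDao();

    @Override
    public Person load(Long key) {
        // Called on a cache miss (read-through).
        return dao.findById(key);
    }

    @Override
    public void write(Cache.Entry<? extends Long, ? extends Person> entry) {
        // Called on cache updates (write-through).
        dao.save(entry.getKey(), entry.getValue());
    }

    @Override
    public void delete(Object key) {
        dao.delete((Long) key);
    }
}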
If I push a Runnable to a Redisson distributed executor service, what rules am I required to abide by?
Surely I can't have free rein; I don't see how that is possible. Yet it is not mentioned in the docs at all, nor are any rules apparently enforced by the API, such as requiring R extends Serializable or similar.
If I pass this runnable:
Runnable task = () -> {
    // What can I access here, and have it be recreated in whatever server instance picks it up later for execution?
    // newlyCreatedInstanceCreatedJustBeforeThisRunnableWasCreated.isAccessible(); // ?
    // newlyComplexInstanceSuchAsADatabaseDriverThatIsAccessedHere.isAccessible(); // ?
    // transactionalHibernateEntityContainingStaticReferencesToComplexObjects...
    // I think you get the point.
    // Does Redisson serialize everything within this scope?
    // When it is recreated later, surely I can't have access to those exact objects, unless they run on the same server, right?
    // If the server goes down and up, or another server executes this runnable, then what happens?
    // What rules do we have to abide by here?
};
Also, what rules do we have to abide by when pushing something to an RQueue, an RBlockingDeque, or Redisson live objects?
It is not clear from the docs.
Also, it would be great if a link to a single-page documentation site could be provided. The one here requires a lot of clicking and navigation:
https://github.com/redisson/redisson/wiki/Table-of-Content
https://github.com/redisson/redisson/wiki/9.-distributed-services#933-distributed-executor-service-tasks
You have access to the RedisClient and the taskId. The full state of the task object will be serialized.
The TaskRetry setting is applied to each task: if a task isn't executed within 5 minutes of starting, it will be requeued.
I agree that the documentation is lacking some "under the hood" explanations.
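To make the serialization rule concrete, here is a minimal sketch of a task, assuming a placeholder "results" map name and userId field. Rather than capturing live objects, the task carries plain serializable state and re-acquires heavy resources (DB connections, etc.) inside run(); Redisson can inject its own client on the executing node via @RInject:

import java.io.Serializable;
import org.redisson.api.RedissonClient;
import org.redisson.api.annotation.RInject;

public class MyTask implements Runnable, Serializable {

    // Injected by Redisson on whichever node runs the task;
    // transient so it is not part of the serialized state.
    @RInject
    private transient RedissonClient redisson;

    // Plain serializable state travels with the task.
    private final String userId;

    public MyTask(String userId) {
        this.userId = userId;
    }

    @Override
    public void run() {
        // Open DB connections and other heavy resources here, not outside.
        redisson.getMap("results").put(userId, "done");
    }
}

It would then be submitted with something like redisson.getExecutorService("myExecutor").submit(new MyTask("42")).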
I was able to execute DB reads and inserts through the Callable/Runnable that was submitted to the remote ExecutorService.
I configured a single Redis instance on a remote VM, with the database and the app running locally on my laptop.
The tasks were executed without any errors.
I am using Ignite.NET and I have a very simple use case: I want to put something into the cache without any transaction, using CacheAtomicityMode.ATOMIC. To achieve that, I am trying to use the putIfAbsentAsync(key, value) method.
But having looked at the description of the method on the
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#putIfAbsentAsync-K-V- page, I am a bit confused!
Being new to Ignite, can you please help me understand this better? Below are my doubts.
The description of the putIfAbsentAsync method at the above link states the following:
"For CacheAtomicityMode.ATOMIC return value on primary node crash may be incorrect because of the automatic retries. It is recommended to disable retries with withNoRetries() and manually restore primary-backup consistency in case of update failure."
Can you please explain what the automatic retries are? How and when are they used?
What are the pros and cons of disabling retries with withNoRetries()?
I am also using ReplaceAsync(), RemoveAsync(), and PutIfAbsentAsync() with the same cache configuration. Will there be any impact on the functionality of these methods after disabling retries?
What are the possible scenarios in which a primary node may crash?
In what scenarios will putIfAbsentAsync() return false?
In what scenarios will putIfAbsentAsync() throw an exception? And is there a list of all possible exceptions?
I know the above link states a list of exceptions
(TransactionTimeoutException, TransactionRollbackException, TransactionHeuristicException), but all three are related to transactions! I don't really understand why a transaction exception would be thrown in ATOMIC mode, as there aren't any transactions in ATOMIC mode.
I tried another use case with just one server node and one client node. The server node creates and stores the cache, and the client node just puts or gets cache entries. When I manually stopped the server node just before the client node tried to put something into the cache, I got a SocketException, i.e. java.net.SocketException: Socket is closed. If this is a valid use case, it would be better if you listed these exceptions on the page.
I don't understand the line "manually restore primary-backup consistency in case of update failure". Can you please explain what primary-backup consistency is, and how to manually restore it?
withNoRetries() just disallows retries: if an operation fails, you get an exception promptly, as opposed to the default behavior where the operation is retried for as long as possible.
See more about ATOMIC limitations and IEP-12 in the docs. Note that normally this inconsistency is only possible when more than one node leaves the cluster at once.
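As an illustration of the javadoc's recommendation, here is a minimal sketch using the Java API (cache name, key, and value are placeholders; Ignite.NET exposes the same operations on ICache):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteFuture;

public class NoRetriesExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // withNoRetries() returns a cache view that fails fast instead
            // of retrying, so a primary-node crash mid-update surfaces as
            // an exception rather than a possibly wrong boolean.
            IgniteCache<Integer, String> cache =
                ignite.<Integer, String>getOrCreateCache("myAtomicCache").withNoRetries();

            IgniteFuture<Boolean> fut = cache.putIfAbsentAsync(1, "value");
            try {
                // true = entry was created; false = the key already existed.
                System.out.println("Created: " + fut.get());
            } catch (Exception e) {
                // With retries disabled, the caller is responsible for
                // checking and restoring primary/backup consistency here.
                System.err.println("Update failed: " + e);
            }
        }
    }
}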
Using Geode 1.2 and the 9.1 Pivotal native client, the following code:
IRegion<string, IPdxInstance> r = cache.GetRegion<string, IPdxInstance>("myRegion");
return r[key];
then triggers an AfterCreate event for myRegion. Why does that happen when no data is created, only read?
Same here; I've never used the Native Client. I agree with what #Urizen suspected: you are calling r[key] from an instance of Geode that doesn't have the entry, so it pulls the data from another instance, which "creates" the entry locally.
You have a few options here:
Perform an interest registration for the instance you are initiating the call from, using registerAllKeys() (doc here). There is a catch here (it might not be applicable to the native client): in the Java API, you have the option to register interest with an InterestResultPolicy. If you use KEYS_VALUES, you will load all data from remote to local on startup WITHOUT triggering the afterCreate callback. If you choose KEYS only or NONE, you will likely have a similar problem.
You can check the boolean flag remoteOrigin in the EntryEvent (see the sketch below). If it is false, the event is purely local. In a non-WAN setup, this should be enough to distinguish your local operations from remotely initiated ones (be it cache syncing or a genuine creation initiated by another cache). If I remember correctly, WAN works a bit differently here.
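For reference, a minimal sketch using the Geode Java API, where this flag is exposed as EntryEvent.isOriginRemote() (whether the native client exposes the same flag is something to verify):

import org.apache.geode.cache.EntryEvent;
import org.apache.geode.cache.util.CacheListenerAdapter;

public class LocalOnlyListener extends CacheListenerAdapter<String, Object> {
    @Override
    public void afterCreate(EntryEvent<String, Object> event) {
        // Entry arrived from another member (e.g. the server populating
        // this cache on a read); not a genuinely local creation.
        if (event.isOriginRemote()) {
            return;
        }
        System.out.println("Locally created: " + event.getKey());
    }
}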
I've never used the Native Client but, at first glance, it should be expected for the afterCreate event to be invoked on the client side, as the entry is actually being created in the local cache. What I mean is that the entry might exist on the server but, internally, the client needs to retrieve it from the server and then create it locally (thus invoking the afterCreate on the locally installed CacheListener). Makes sense?
Regarding this document, "entry-time-to-live-expiration" means how long the region's entries can remain in the cache without being accessed or updated; the default is no expiration of this type. However, when I use Spring Cache and a client-region with the following configuration, I find that the setting does not work as described with respect to entries being accessed. Going further, the same document's XML TTL tab says: "Configures a replica region to invalidate entries that have not been modified for 15 seconds." So I am confused about whether TTL applies to entries being accessed.
<gfe:client-region id="Customer2" name="Customer2" destroy="false" load-factor="0.5" statistics="true" cache-ref="client-cache">
    <gfe:entry-ttl action="DESTROY" timeout="60"/>
    <gfe:eviction threshold="5"/>
</gfe:client-region>
So, the documentation you might want to refer to is here and here. Perhaps relevant to your situation is...
"Requests for entries that have expired on the consumers will be forwarded to the producer."
Based on your configuration, given that you did not set either a ClientRegionShortcut or a DataPolicy, your client Region, "Customer2", defaults to ClientRegionShortcut.LOCAL, which sets a DataPolicy of "NORMAL". DataPolicy.NORMAL states...
"Allows the contents in this cache to differ from other caches. Data that this region is interested in is stored in local memory."
And for the shortcut of "LOCAL"...
"A LOCAL region only has local state and never sends operations to a server. ..."
However, that does not mean the client Region cannot receive data (of interest) from the Server. It simply implies operations are not distributed to the Server. The client may be expiring the entry and then repopulating it from the Server (producer).
Of course, I am speculating and have not tested these ideas. You might try setting the Expiration Action to "LOCAL_DESTROY" and/or changing your distribution properties through different ClientRegionShortcuts.
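For what it's worth, here is a sketch of an equivalent region setup using the Geode Java client API, mirroring the question's 60-second timeout but with the LOCAL_DESTROY action suggested above (the region name and the LOCAL shortcut are taken from the question's configuration):

import org.apache.geode.cache.ExpirationAction;
import org.apache.geode.cache.ExpirationAttributes;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class TtlExample {
    public static void main(String[] args) {
        ClientCache cache = new ClientCacheFactory().create();

        ClientRegionFactory<String, Object> factory =
            cache.createClientRegionFactory(ClientRegionShortcut.LOCAL);
        // Expiration only runs when statistics are enabled.
        factory.setStatisticsEnabled(true);
        // Destroy the entry locally only; nothing is sent to the server.
        factory.setEntryTimeToLive(new ExpirationAttributes(60, ExpirationAction.LOCAL_DESTROY));

        Region<String, Object> region = factory.create("Customer2");
        System.out.println("Region created: " + region.getName());
    }
}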
Post back if you are still having problems. I too echo what #hubbardr is asking.
Cheers!
I've got a real lemon on my hands. I hope someone who has had the same problem or knows how to fix it can point me in the right direction.
The Setup
I'm trying to create a WCF Data Service that uses an ADO.NET Entity Framework model to retrieve data from the DB. I've added the WCF service reference and all seems fine. I have two sets of data service calls. The first one retrieves a list of all "users" and returns it (this list does not include any dependent data, e.g. address, contact, etc.). The second call occurs when a "user" is selected: the application requests a few more pieces of dependent information, such as address, contact details, messages, etc., given a user id. This also seems to work fine.
The Lemon
After some user selection changes, i.e. calls for more dependent data from the data service, the application stops responding.
Crash error:
The request channel timed out while waiting for a reply after 00:00:59.9989999. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.
I restart the debugging process, but the application will not make any data service calls until, after about a minute or so, VS 08 displays a message box with the error:
Unable to process request from service. 'http://localhost:61768/ConsoleService.svc'. Catastrophic failure.
I've Googled the hell out of this error and related issues but found nothing of use.
Possible Solutions
I've found some leads as to the source of the problem. In the client's app.config:
maxReceivedMessageSize > set to a higher value, e.g. 5242880.
receiveTimeout > set to a higher value, e.g. 00:30:00.
I've tried these but all in vain. I suspect there is an underlying problem that cannot be fixed by simply changing some numbers. Any leads would be much appreciated.
I've solved it =P.
Cause
The WCF service works fine; it was the data service calls that were the culprit. Every time I made a call, I instantiated a new reference to the data service, but never closed/disposed of it. After a couple of calls, the data service reached its maximum number of connections and halted.
Solution
Make sure to close/dispose of any data service reference properly. Best practice would be to enclose it in a using statement.
using (var dataService = new ServiceNS.ServiceClient())
{
// Use service here
}
// The service will be disposed and connection freed.
Glad to see you fixed your problem.
However, you need to be careful about the using statement. Have a look at this article:
http://msdn.microsoft.com/en-us/library/aa355056.aspx
In short, Dispose() calls Close(), which can itself throw if the channel is faulted and mask the original exception; the article recommends calling Close() in a try block and Abort() in the catch instead.