We are trying to use GemFire in our work. We have a region where we store each incoming request, and each request goes through its lifecycle (for example, states A --> B --> C --> D).
But we also have a requirement to update the state to C only if the current state is B (state D is updated asynchronously by another process). We used to achieve this in Cassandra using the IF keyword (conditional updates). We are looking for something similar in GemFire. Obviously we cannot do read, check state, then update, because that is not atomic.
Another option was to take a distributed lock and then perform the check-and-update described above, but that comes with a performance overhead.
We were also thinking of attaching a CacheWriter and checking the state in beforeUpdate(..), but we came to know that what we get as a parameter to beforeUpdate is a copy of the value, not the real value.
Does anyone have an idea of how to achieve this in an atomic fashion that we can try?
What you are looking for is Region.replace(key, oldValue, newValue). This is an atomic operation.
UPDATE: I should also clarify that it is not currently possible to inspect individual properties of the mapped object value (e.g. someObject.state = XYZ) to decide whether to perform the update/replace. For that you will need a properly implemented Object.equals() method on your value class.
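For illustration, a minimal sketch of how that could look. The region name, the Request class, and its withState(..) copy method are made-up here; since replace(..) compares the old value via Object.equals(), the value class must implement equals()/hashCode() over the fields that define "same value":

// Sketch only: "requests", Request, State and withState(..) are hypothetical.
Region<String, Request> requests = cache.getRegion("requests");

Request current = requests.get(requestId);
if (current != null && current.getState() == State.B) {
    Request updated = current.withState(State.C);
    // Atomic: succeeds only if the mapped value still equals 'current'.
    boolean transitioned = requests.replace(requestId, current, updated);
    if (!transitioned) {
        // Another process changed the entry between get() and replace();
        // retry the whole read-compare-replace loop or give up.
    }
}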
Related
I am developing a system that updates the progress of tasks,
always incrementing a progress attribute by 1 in the DynamoDB task table.
I want to do that using the atomic increment of the attribute.
How can I do that using aws-java-sdk 2.0?
I did several rounds of research on this subject, but I didn't find anything.
See https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_UpdateItem.html#API_UpdateItem_RequestSyntax under "UpdateExpression":
SET - Adds one or more attributes and values to an item. If any of these attributes already exist, they are replaced by the new values. You can also use SET to add or subtract from an attribute that is of type Number. For example: SET myNum = myNum + :val.
In your case, :val would simply always equal 1.
I cannot give guarantees (and have not tested it), but this seems to me like it should be atomic.
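A rough sketch of that call with aws-java-sdk 2.x; the table name "tasks" and the attributes "taskId" and "progress" are placeholders for your schema:

import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.UpdateItemRequest;

DynamoDbClient dynamoDb = DynamoDbClient.create();

// Atomically increments "progress" by 1 on the addressed item.
UpdateItemRequest request = UpdateItemRequest.builder()
        .tableName("tasks")
        .key(Map.of("taskId", AttributeValue.builder().s("task-123").build()))
        .updateExpression("SET progress = progress + :val")
        .expressionAttributeValues(
                Map.of(":val", AttributeValue.builder().n("1").build()))
        .build();

dynamoDb.updateItem(request);

One caveat: SET progress = progress + :val fails if the attribute does not exist yet; ADD progress :val (or if_not_exists(..) inside the SET expression) treats a missing attribute as 0.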
PS. One of my first StackOverflow comments, constructive feedback very welcome.
I am currently working on a project with Infinispan 8.1.3. I want to make sure that the node which creates an object remains the owner of that entry at all times in distribution mode. Is there any option to meet my requirement? I heard about the local flag (CACHE_MODE_LOCAL), but it stores the entry locally only. I don't know, if that node goes down, whether the local cache entry will be shared with another node? Thanks.
Don't use flags unless you know exactly what you're doing. Flag.CACHE_MODE_LOCAL means that you won't execute any RPC when doing that operation, but if the key does not route to this node, a write will result in a no-op and a read will return null.
It's not possible to tie the entry to the node exclusively - what would you do if this node crashes?
However, if the cluster is stable enough, there's the Key Affinity Service, which will give you a key that belongs to this node. See the next chapter about grouping, too; it might fit your use case.
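A minimal sketch of the Key Affinity Service, assuming an embedded Cache<Object, Object> named cache; the generator and buffer size below are arbitrary choices:

import java.util.concurrent.Executors;
import org.infinispan.affinity.KeyAffinityService;
import org.infinispan.affinity.KeyAffinityServiceFactory;
import org.infinispan.affinity.RndKeyGenerator;

// A background thread keeps a buffer of generated keys per owner.
KeyAffinityService<Object> affinity = KeyAffinityServiceFactory.newLocalKeyAffinityService(
        cache, new RndKeyGenerator(), Executors.newSingleThreadExecutor(), 100);

// A key that is owned by this node at the time of the call.
Object localKey = affinity.getKeyForAddress(cache.getCacheManager().getAddress());
cache.put(localKey, value);

Keep in mind this only holds while the topology is stable; after a crash or rebalance the key can get new owners.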
EDIT: Instead of moving data to the executing node, you can move the execution towards the data. With the Grouping API you can find the data by its group, using:
// find the primary owner of the given group
Address owningNode = cache.getAdvancedCache().getDistributionManager()
        .getCacheTopology().getDistributionInfo(group).primary();
// run the task only on that node
ClusterExecutor executor = cache.getCacheManager().executor()
        .filterTargets(Collections.singleton(owningNode));
executor.submit(...)
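For completeness, a sketch of a grouped key class (the names are illustrative, and grouping must also be enabled in the cache configuration for the annotation to take effect):

import java.io.Serializable;
import org.infinispan.distribution.group.Group;

public class TaskKey implements Serializable {
    private final String group; // shared by all entries that must co-locate
    private final String id;

    public TaskKey(String group, String id) {
        this.group = group;
        this.id = id;
    }

    @Group
    public String getGroup() {
        // All keys returning the same group hash to the same owners.
        return group;
    }

    // equals() and hashCode() over both fields omitted for brevity.
}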
We are going to use GemFire for our project. We are currently syncing the GemFire cache with our DB2 database, and we are facing an issue while putting DB data into the cache.
To put DB data into the region, I have implemented com.gemstone.gemfire.cache.CacheLoader and overridden its load method. As written in the Javadoc, the load method returns only one object, but for our requirement we have to return multiple VOs from it:
public List<CmDvceInvtrGemfireBean> load(LoaderHelper<CmDvceInvtrGemfireBean, CmDvceInvtrGemfireBean> helper)
        throws CacheLoaderException
When returning multiple VOs in the form of a List<CmDvceInvtrGemfireBean>, the GemFire region considers the list a single value. So, when I invoke
System.out.println("return COUNT" + cmDvceInvtrRecord.query("SELECT COUNT(*) FROM /cmDvceInvtrRecord"));
it returns a count of one, but I can see a total of 7 records inside it.
So, I want to implement a mechanism that will put all 7 values into the region as separate VOs. Is there any way to do this using the GemFire CacheLoader?
A CacheLoader was meant to load a value only for a single entry in the GemFire Region on a cache miss. As the Javadoc states...
...creates the value for the desired key...
While a key can map to a multi-valued value (e.g. an array or Collection), the CacheLoader can only populate a single entry.
You will have to resort to other means of populating the cache with multiple "entries" in a single operation.
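As one sketch of such an alternative (the dao and getId() below are hypothetical stand-ins for your DB2 access code and key choice), you could run the query yourself and bulk-insert the rows with Region.putAll(..), which creates one region entry per map entry:

import java.util.HashMap;
import java.util.Map;

Map<String, CmDvceInvtrGemfireBean> entries = new HashMap<>();
for (CmDvceInvtrGemfireBean vo : dao.loadAllFromDb2()) { // dao is a placeholder
    entries.put(vo.getId(), vo);                         // getId() assumed to be the key
}
cmDvceInvtrRecord.putAll(entries); // 7 rows become 7 entries, so COUNT(*) returns 7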
Out of curiosity, why do you need (requirement?) to load multiple entries (from the DB) at once? Are you trying to minimize the number of round trips to the DB?
Also, what logic are you using to decide what VO from the DB will be loaded based on the information (i.e. key) provided in the CacheLoader?
For instance, are you somehow trying to predictably select values from the DB based on the CacheLoader key that would subsequently minimize cache misses on future Region.get(key) calls?
Sorry, I don't have a better answer for you right now, but answers to some of these questions may help me give you some ideas for alternatives.
Cheers,
John
I'm looking for a faster, more efficient method of assigning data gathered from a DAQ to its proper location in a large cluster containing arrays of subclusters.
My current method relies heavily on the OpenG cluster manipulation tools, but with a large data set the performance is far too slow.
The array and cluster location of each element of data from the DAQ is determined during an initialization phase and doesn't change during acquisition.
Because the data element origin and end points are the same throughout acquisition, I would think an array of memory locations could be created and the data directly assigned to its proper place. I'm just not sure how to implement such a thing.
The following approach does what you want: for each of your cluster elements (AMC, ANLG_PM and PA) you should add a case in the string case structure; for the elements AMC and PA you will need to place a second case structure.
This is really more of a comment, but I do not have the reputation to leave those yet, so here it is:
Regarding adding cases for every possible value of Array name, is there any reason why you cannot use an enum here? Since you are placing it into a cluster anyway, I would suggest making a type-defined enum of your possible array names. That way, when you want to add or remove one, you only have to do it in one place.
You will still need to right-click on your case structures that use this enum and select "Add item for every value" if you are adding a value, or manually delete the obsolete case if you are removing one. I suppose some maintenance is required either way...
Out of the blue, I am getting this error when doing a number of updates using NHibernate:
Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [MyDomainObject]
There is no additional information in the error. Is there some recommended way to help identify the root issue, or can someone give me a better explanation of what this error indicates or what it is a symptom of?
Some additional info:
I looked at the object and all of the data looks fine; it has an ID, etc.
Note this is running in a single call stack from an ASP.NET MVC website, so I wouldn't expect there to be any threading issues to worry about in terms of concurrency.
NHibernate has an object, let's call it theObject. theObject.Id has a value of 42. NHibernate notices that the object is dirty. The object's Id is different than the unsaved-value, which is zero (0) for integer primary keys. So NHibernate issues an update statement, but no rows are updated, which means that there is no row in the database for that type of object with an Id of 42. So the object has been deleted without NHibernate knowing about it. This could happen inside another transaction (e.g. you have threading issues) or if someone (or another application) deleted/altered the row using SQL directly against the database.
The other possibility is that your unsaved-value is wrong, e.g. you are using -1 to indicate an unsaved entity, but your mapping has an unsaved-value of zero. This is unlikely, as your application is generally working, from the sounds of it. If the unsaved-value were wrong, you wouldn't have been able to save any entities to the database, as NHibernate would have been issuing UPDATE statements when it should have been issuing INSERTs.
It means that you have multiple transactions accessing the same data, thus producing concurrency issues. You should improve your data access handling; you are probably updating data from multiple threads. Funnel the changed data into a queue first, and have a single consumer handle all access to the DB.
An old post, but hopefully my info will help someone. I was getting a similar error, but only when persisting associations after I had added a new object. The error was of the form:
NHibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) [My.Entity#0]
Note the zero on the end, which is my identifier property. It should not be trying to save with key zero, as I was using identity specification in SQL Server (generator class="native"). I had not changed my unsaved-value in my XML, so I had no idea what the problem was; for some reason NHibernate was trying to do an update using a key value of 0, instead of a save (which would fetch the next identity key) for my new object.
In the end I found the cause: I was initialising the Version number to 1 for the new object in my constructor! Even though my identifier property was zero, NHibernate was also looking for a version property of zero to identify it as an unsaved transient instance. The book "NHibernate in Action" actually mentions this on page 120, but for some reason my objects were fine when persisting normally with a version number of 1, and only failed when saving a new object through an association.
So make sure you do not set your Version value (leave it as zero or null).
You say that your data is OK, but check whether, for example, you are mapping the ID as self-generated. I had the exact same problem, but I was sending an object with an ID different from 0.
Hope it helps!
My problem was this:
[Bind(Include="Name")] EventType eventType
Should have been:
[Bind(Include="EventTypeId,Name")] EventType eventType
Just as the other answers suggest, NHibernate was using zero as the ID for my entity.
If you have a trigger on the table, it can be the reason. In this case, add this inside it:
SET ROWCOUNT 0;  -- reset any ROWCOUNT limit left over from earlier statements
SET NOCOUNT ON;  -- stop the trigger's own statements from skewing the rows-affected count NHibernate checks
This error happened to me in the following way:
List<Device> allDevices = new List<Device>();

// Add devices to the list
allDevices.Add(aDevice);
// Save allDevices to the database -- works fine

// allDevices.Clear(); // Should be called here

// Later we add more devices
allDevices.Add(anotherDevice);
// Save allDevices to the database -> we get the error,
// because the already-persisted aDevice is saved a second time

// Solution: clear the list before reusing it for a new transaction
allDevices.Clear();