Variables in SAP MII - properties

Basically, I am looking for a variable/property that holds its value even after the transaction execution completes, so it can be used in subsequent executions, similar to a counter in any programming language that keeps counting until it is explicitly reset.
As far as I can see in SAP MII, Local and Transaction properties are reset on completion of transaction execution, leaving all the properties in their initial state, with no opportunity to use the value stored/assigned in the previous transaction. Global properties are not suitable for such a requirement as they are read-only.
Did anyone use such a property or try any workaround?
Thanks

You can use session variables for this.
To store a variable:
/XMII/PropertyAccessServlet?mode=store&PropName=<name>&PropValue=<value>
and to read the variable:
/XMII/PropertyAccessServlet?mode=retrieve&PropName=<name>
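
As a rough sketch of the counter idea, here is how the read-increment-store pattern against those URLs might look (in Java, using plain HttpURLConnection; the host/port are placeholders, authentication is omitted, and the assumption that the retrieve call returns the bare value may not hold on every installation):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class MiiCounter {
    // Placeholder host/port; real installations also need authentication.
    private static final String BASE = "http://mii-host:50000/XMII/PropertyAccessServlet";

    static String call(String query) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(BASE + "?" + query).openConnection();
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) sb.append(line);
            return sb.toString().trim();
        }
    }

    public static void main(String[] args) throws Exception {
        // Read the current counter, increment it, and store it back for the next run.
        String current = call("mode=retrieve&PropName=MyCounter");
        int next = current.isEmpty() ? 1 : Integer.parseInt(current) + 1;
        call("mode=store&PropName=MyCounter&PropValue=" + next);
    }
}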

You can use shared properties. I use them as a way to keep track of the number of times a transaction has been run. A persistent shared property is stored in a database, so it won't disappear after the transaction ends.
https://help.sap.com/saphelp_mii151sp03/helpdata/en/4c/72e4f8e631469ee10000000a15822d/content.htm?no_cache=true

Related

VB.NET Webforms application: Sharing global variables between code and SQL

I'm working on some old WebForms applications. We have some clients, and we currently store some of their info in a global variables class like the following:
Public Class globals
Enum Enterprise
Enterprise1 = 10
Enterprise2 = 11
...
End Enum
Enum Brand
Brand1 = 5
Brand2 = 6
Brand3 = 7
...
End Enum
...
End Class
The fact is that, even though we use parameters in all our SQL stored procedures, we have some common SPs with exceptions for certain clients, so you will find SPs with code like:
if (id_enterprise = 10)
begin
... do whatever
end
As you can see, this "10" is hard-coded, so it's not maintainable, and thus the global variables are not truly global.
I'm not sure of the most efficient way to deal with this and end up with completely global variables, but one possible solution that has come to mind is to have a table with the global variables stored in it, and in some (not yet defined) way use those values as global SQL variables that can also be shared with the code.
Regarding the code, on app startup I would read that table (or SP, or whatever it ends up being) and store the values in application-level global variables (only once, of course).
What do you think about it? Does it make sense? Is there a better implementation for completely global variables that can be shared between code and SQL? FYI, we work with Azure, so maybe there is an Azure alternative?
Interesting issue here. I would recommend just creating a session table in your Azure SQL database as you have suggested, then keeping just a single integer/bigint variable as a session variable (i.e. Session("user_session_id")). This will minimize the footprint of the application and avoid the overflow issue that you are wary of. Load the table entries at the start of the session, and in each context look up only the parameters required to complete the particular task.

Update the value in region ONLY IF value.status is 'XXX'

We are trying to use GemFire in our work. We have a region where we store each incoming request, and each request goes through its lifecycle (for example, the states are A --> B --> C --> D).
But we also have a requirement to update the state to C only if the current state is B (as state D is updated asynchronously by some other process). We used to achieve this in Cassandra using the IF keyword (conditional updates). We are looking for something similar in GemFire. Obviously we cannot do read, check state, and update, because that is not atomic.
Another option was to take a distributed lock and then perform the check-and-update as described above, but that option comes with a performance overhead.
We were also thinking of attaching a CacheWriter and checking the state in beforeUpdate(..), but we learned that what we get as a parameter to beforeUpdate is a copy of the value, not the real value.
Does anyone have an idea of how we can achieve this in an atomic fashion?
What you are looking for is Region.replace(key, oldValue, newValue). This is an atomic operation.
UPDATE: I should also clarify that it is not currently possible to inspect individual properties of the mapped object value (e.g. someObject.state == XYZ) to decide whether to perform the update/replace. For that you will need a properly implemented Object.equals() method on the value class.
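As a rough illustration (the RequestState class, its field names, and the moveBToC helper are made up for this sketch; the import shown is the Apache Geode package name, while older GemFire releases use com.gemstone.gemfire.cache.Region), the B -> C transition could look like this:

import org.apache.geode.cache.Region;
import java.util.Objects;

// Hypothetical value class. equals()/hashCode() must compare the relevant fields,
// because replace(key, oldValue, newValue) only succeeds when the value currently
// stored under the key equals oldValue.
class RequestState {
    final String id;
    final String state; // e.g. "A", "B", "C", "D"

    RequestState(String id, String state) {
        this.id = id;
        this.state = state;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof RequestState)) return false;
        RequestState other = (RequestState) o;
        return Objects.equals(id, other.id) && Objects.equals(state, other.state);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id, state);
    }
}

class StateTransitions {
    // Atomically move an entry from state B to state C; returns false if the stored
    // value is no longer in state B (e.g. the async process already moved it to D).
    static boolean moveBToC(Region<String, RequestState> region, String key) {
        RequestState current = region.get(key);
        if (current == null || !"B".equals(current.state)) {
            return false;
        }
        RequestState updated = new RequestState(current.id, "C");
        // Check-and-set: replaces only if the region still holds exactly 'current'.
        return region.replace(key, current, updated);
    }
}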

Creating a local variable instead of calling a method to get data

My question is about code efficiency. Please let me know which of the approaches below is more efficient.
There's a method call to get an object, e.g.:
relationship.getCommerceItem()
But we need to call this method multiple times, sometimes within a single line. So I'm planning to create a local variable to store the return value and use it in place of the method calls, like this:
commerceItem = relationship.getCommerceItem()
Now, which approach is more efficient, and why?
Consider that this code will be executed in an environment where thousands and thousands of requests are received.
It depends on whether or not the logic executed in the called method needs to run every time. In other words, does the return value change between calls?
If not, saving it in a local variable spares you the cost of the repeated method calls (which is the sanest thing to do, IMO).
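A minimal sketch of the pattern (in Java; the Relationship and Item interfaces and the method names other than getCommerceItem() are hypothetical, just so the example is self-contained):

public final class LocalVariableExample {
    static String describe(Relationship relationship) {
        // Call the getter once and reuse the local variable instead of repeating
        // the call; this also guards against the value changing between calls.
        Item commerceItem = relationship.getCommerceItem();
        return commerceItem.getName() + " x " + commerceItem.getQuantity();
    }

    // Hypothetical supporting types so the sketch compiles on its own.
    interface Relationship { Item getCommerceItem(); }
    interface Item { String getName(); long getQuantity(); }
}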

Redis Booksleeve, temporary set

I need to do an Except operation between an existing set and some values coming in as input from a user. What is the best way to do this?
I was first thinking about using a temporary set to store the values from the user. Will that work in a multithreaded (web) application? If so, how can I be sure the temporary set is not overwritten by other users before I do the Except call? Or do I need a unique temporary set for each user?
Maybe transactions are the way to go?
http://redis.io/topics/transactions
Set except is the same as set difference. In Redis this operation is called set difference, and we can do it using either the SDIFF command or the SDIFFSTORE command, depending on whether we want to just return the result or store it in a new set. Both are built-in commands.
In your case, since one of your sets is user generated, just encapsulate the whole thing in a pipeline. This will run the whole operation as one atomic transaction that will not allow any other operations against Redis until it finishes (due to Redis' single-threaded nature). It would look something like this (using Python and redis-py as an example):
pipe = redis.pipeline()
pipe.sadd('user_set', 'user_val1', 'user_val2', 'user_valn')
pipe.sdiff('my_set', 'user_set')   # queued; the result comes back from execute()
pipe.delete('user_set')            # redis-py uses delete(), since del is a Python keyword
results = pipe.execute()           # runs everything as one MULTI/EXEC transaction
diff_result = results[1]           # result of the SDIFF call
# do whatever with diff_result here.

What does this error mean in NHibernate

Out of the blue, I am getting this error when doing a number of updates using NHibernate:
Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect): [MyDomainObject]
There is no additional information in the error. Is there some recommended way to help identify the root issue, or can someone give me a better explanation of what this error indicates or is a symptom of?
Some additional info
I looked at the object and all of the data looks fine; it has an ID, etc.
Note this is running in a single call stack from an ASP.NET MVC website, so I wouldn't expect there to be any threading issues to worry about in terms of concurrency.
NHibernate has an object, let's call it theObject. theObject.Id has a value of 42. NHibernate notices that the object is dirty. The object's Id is different than the unsaved-value, which is zero (0) for integer primary keys. So NHibernate issues an update statement, but no rows are updated, which means that there is no row in the database for that type of object with an Id of 42. So the object has been deleted without NHibernate knowing about it. This could happen inside another transaction (e.g. you have threading issues) or if someone (or another application) deleted/altered the row using SQL directly against the database.
The other possibility is that your unsaved-value is wrong, e.g. you are using -1 to indicate an unsaved entity, but your mapping has an unsaved-value of zero. This is unlikely, as your application is generally working from the sounds of it. If the unsaved-value were wrong, you wouldn't have been able to save any entities to the database, as NHibernate would have been issuing UPDATE statements when it should have been issuing INSERTs.
It means that you have multiple transactions accessing the same data, producing concurrency issues. You should improve your data access handling; you are probably updating data from multiple threads. Funnel the changed data into a queue first, and have that queue handle all access to the database.
An old post, but hopefully my info will help someone. I was getting a similar error, but only when persisting associations after I had added a new object. The error was of the form:
NHibernate.StaleObjectStateException: Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect) [My.Entity#0]
Note the zero on the end, which is my identifier property. It should not have been trying to save with key zero, as I was using an identity column in SQL Server (generator class=native). I had not changed unsaved-value in my XML, so I had no idea what the problem was; for some reason NHibernate was trying to do an update using a key value of 0 instead of a save (which would get the next identity value) for my new object.
In the end I found the cause: I was initialising the Version number to 1 for the new object in my constructor! Even though my identifier property was zero, NHibernate was also looking for a version property of zero to identify it as an unsaved transient instance. The book "NHibernate in Action" actually mentions this on page 120, but for some reason my objects were fine when persisting with a version number of 1 normally, and only failed when saving a new object through an association.
So make sure you do not set your Version value (leave it as zero or null).
You say that your data is OK, but check whether, for example, you are mapping the ID as self-generated. I had the exact same problem, but I was sending an object with an ID different from 0.
Hope it helps!
My problem was this:
[Bind(Include="Name")] EventType eventType
Should have been:
[Bind(Include="EventTypeId,Name")] EventType eventType
Just as the other answers suggest, NHibernate was using zero as the id for my entity.
If you have a trigger on the table, it can be the reason. In that case, add the following inside it:
SET ROWCOUNT 0;
SET NOCOUNT ON;
This error happened to me in the following way:
List<Device> allDevices = new List<Device>();

// Add devices to the list
allDevices.Add(aDevice);

// Save allDevices to the database -- works fine
// allDevices.Clear(); // Should be called here

// Later we add more devices
allDevices.Add(anotherDevice);

// Save allDevices to the database -> we get the error

// Solution:
allDevices.Clear(); // Clear the list before starting a new transaction with the old data