I need to ensure consistent editing of SAP tables for Fiori backend calls.
I have multiple situations where a single call to the backend changes more than one table on the backend side. The changes are written to a transport request.
I want to implement a stable, error-free solution, so that if the first table was changed fine but the second table failed (duplicate entry, missing authorization), the whole set of changes is rejected.
However, it seems that only "CALL FUNCTION ... IN UPDATE TASK" is available, which requires putting the logic of every backend DB change into a function module.
Am I missing something, or does SAP really have no object-oriented way to perform consistent database updates?
The only workaround I have is to check all these preconditions up front, which is not so nice anymore.
#Florian: A backend call is, for example, the action "Approve" on a document, which changes: 1) the document header table, where the status field changes from "in workflow" to something else; 2) the approval table, where the current approver's entry is changed. Or it is adding a new document, where 1) a document header table entry is added and 2) a document history table entry is added.
I do not want to call function modules; I want to implement the solution using only classes and class methods. I worked with other ERP systems earlier, and there are statements like "Start transaction", "Commit transaction" and "Rollback transaction". "Start transaction" means you start a LUW, which is only committed on "Commit transaction"; if you call "Rollback transaction", all database changes of the current LUW are cancelled. I wonder why modern SAP has none of these except the old update-task function modules (or is it just me not noticing the correct way to do this).
CALL FUNCTION ... IN UPDATE TASK is the only way. Here is how it works in a Fiori transactional app, for example:
Table A: you run some business logic and everything is fine, so you call an update-task function module to create/update/delete (CUD) entries in table A.
Table B: you run some business logic, there is an issue regarding authorization, so you raise an exception (error). The update-task call for table B is never made.
After all the business logic has been processed, if any exception was raised, the SADL/Gateway layer catches it and calls ROLLBACK WORK, which means everything is rolled back. Otherwise, if there were no errors, it calls COMMIT WORK, which performs consistent CUDs on all tables.
By the way, if anything abnormal like a duplicate entry happens within the update function module, then depending on your coding you can either ignore it or raise a MESSAGE of type E to abort the DB operations.
From my point of view, those kinds of issues should be caught before you call the update function module.
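A minimal ABAP sketch of this pattern (the update function modules and the exception class here are hypothetical; both FMs would be created with processing type "Update Module"):

METHOD approve_document.
  " All business logic and precondition checks live in the class.
  IF is_authorized( iv_docnr ) = abap_false.
    RAISE EXCEPTION TYPE zcx_not_authorized.
  ENDIF.

  " Register the DB changes; nothing is written to the database yet.
  CALL FUNCTION 'Z_UPD_DOC_HEADER' IN UPDATE TASK
    EXPORTING
      iv_docnr  = iv_docnr
      iv_status = 'APPROVED'.
  CALL FUNCTION 'Z_UPD_APPROVAL' IN UPDATE TASK
    EXPORTING
      iv_docnr    = iv_docnr
      iv_approver = sy-uname.

  " The later COMMIT WORK (issued by the SADL/Gateway layer) executes both
  " registered updates in one database LUW; ROLLBACK WORK discards both.
ENDMETHOD.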
I'm creating a custom logging model in Odoo. Naturally, I'd like the model to be completely exempt from the typical transaction rollback behavior, because I don't want exceptions later on in my code to cause log entries to never be added.
Here's an example scenario:
class SomeModel(models.Model):
    def some_method(self):
        self.env["my.custom.log"].add_entry("SomeModel.some_method started running")
        ...
        raise Exception("Something went wrong...")
In this case, I want to create an entry in my custom log signaling that the method began to run. However, depending on logic later in the code, an exception may be raised. If an exception is raised, Odoo of course rolls back the database transaction, which is desired to prevent incorrect data from being committed to the database.
However, I don't want my log entries to disappear if an exception occurs. Like any sane log, I'd like my my.custom.log model to not be subject to Odoo's typical rollback behavior and immediately log information no matter what happens later.
I am aware that I can manually roll back or commit a transaction, but this doesn't really help me here. If I just run env.cr.commit() after adding my log entry, it will definitely add the log entry, but it will also commit all the other operations which occurred before it. That would definitely not be good. And running env.cr.rollback() before adding the log entry doesn't make sense either, because all pending operations would be rolled back even if no exception occurs later.
How do I solve this? How can I make my log entries always get added to the database regardless of what else happens in the code?
Note: I am aware that Odoo has a built-in log. I have my reasons for wanting to create my own separate log.
I've figured out a solution, but I don't know if it's necessarily the best way to do it.
The solution I've settled on for now would, in this example, be to define the logging model's add_entry() method something like the following:
# If log entries are created normally and an exception occurs, Odoo will roll back
# the current database transaction, meaning that along with other data, the log
# entries will not be created. With this method, log entries are created with a
# separate database cursor, independent of the current transaction.
def add_entry(self, message):
    self.flush()  # ADDED WITH IMPORTANT EDIT BELOW
    with registry(self.env.cr.dbname).cursor() as cr:
        record = self.with_env(self.env(cr)).create({"message": message})
    return self.browse(record.id)

# Otherwise the method would look simply like this...
def add_entry(self, message):
    return self.create({"message": message})
Explanation
I got the with registry(self.env.cr.dbname).cursor() as cr idea from the Odoo delivery module at delivery/models/delivery_carrier.py:212. In that file it looks like Odoo is trying to do pretty much exactly what I'm looking for in this question.
I got the self.with_env(self.env(cr)) idea from the Odoo iap module at iap/models/iap.py:182. It's cleaner than doing self.env(cr)["my.custom.log"].
I don't return the new record directly from the create() method. I'm pretty sure something wouldn't be right about returning a recordset with an environment whose cursor is about to be closed (because of the with statement). Instead, I just hang on to the record ID and then get the record in the current environment and cursor with self.browse(). Hopefully this is actually the best way to do this and doesn't come with any major performance issues.
Please don't hesitate to weigh in if you have any suggestions for improvement!
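For illustration, here is a minimal usage sketch (the model, field, and failing logic are hypothetical):

from odoo import models
from odoo.exceptions import UserError

class SomeModel(models.Model):
    _inherit = "some.model"  # hypothetical model

    def approve(self):
        # The entry is committed on its own cursor inside add_entry(),
        # so it survives even though everything below is rolled back.
        self.env["my.custom.log"].add_entry("approve() started")
        self.write({"state": "approved"})  # rolled back with the transaction
        raise UserError("Something went wrong...")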
Important Edit (A Day Later)
I added the self.flush() line in the code above along with this edit. It turns out that for some reason, when the with statement __exit__s and the separate cursor is committed, an exception can be raised when it tries to flush data.
The same thing is done at delivery/models/delivery_carrier.py:207. I'm not exactly sure why it is necessary, but it looks like it is. I assume that flushing the data comes with some performance considerations as well, but I'm not sure what they are, and right now it looks like they are unavoidable. Again, please weigh in if you know anything.
So, record-locking in Access is pretty awful. I can't use the built-in record locking because it locks a "page" of records instead of just the individual records (I've tried changing the settings for using record-level locking, but it's still locking a page instead of just one record), but even if I could get that working, it wouldn't solve my issue because the record doesn't lock until the user starts to make changes in the form.
The issue is, when two people open the same record, they can start making changes and both save (thus overwriting the earlier change). To make matters worse, there are listboxes on the form that link to other tables (keyed on an ID) and the changes they make to those tables are then overwritten by any change that comes after if they both opened the same record.
Long story short, I need to make sure it's impossible for two people to even open the same record at the same time (regardless of whether or not they've made any edits to it yet).
To do this, I added a field to the table which indicates if a record has been locked by a user. When they open a form, it sets their name in the field and other users who try to open that record get a notification that it's already locked. The problem is, this "lock" isn't instantaneous. It takes a few seconds for other users to "detect" that the record is locked, so if two people try to open the same record at roughly the same time, it will allow them both to open it. I've applied a transaction to the UPDATE statement that sets the lock, but it still leaves a short window wherein the lock doesn't "take" and two people can open the same record.
So, is there a way to make an UPDATE instantaneous (so all other users immediately see its results), or better yet, a robust and comprehensive way to lock records in an Access multi-user environment?
It's not clear why you are only getting “page” locking.
If you turn on row locking in File->Options, then you ALSO need to set the particular form to lock the current record. Just turning on record locking will not help you: that setting ONLY sets the default for new forms - it is not a system-wide setting.
If you correctly turn on locking for a form, then when two users are viewing the same record and one user starts to edit it, all others CANNOT edit that record. Any other user attempting to edit the record will see a “lock” icon in the record selector bar (assuming the record selector is turned on for the given form). They will also receive a "beep" if they try to type into any editable control on the given form.
A few things:
If two users are able to edit a record, then you have not turned on locking for that given form. This feature MUST be set on a form-by-form basis. Changing the setting in File->Options->Client Settings ONLY SETS THE DEFAULT for NEW forms you create! So the setting only applies as the default for new forms - it does NOT change existing forms.
So record locking is ONLY a form-by-form setting.
You therefore ALWAYS MUST set each form you want locking on to lock the current edited record. You set this in form design, on the Data tab of the property sheet.
Also keep in mind that record-level locking (a different setting and feature) is an Access client setting and does NOT travel with the given application.
So since you state that two users can edit the same record, then CLEARLY you NEVER turned on record locking for that given form. The system-wide “default” record locking ONLY sets the above form default (so existing forms you have are NOT changed).
Next up:
The setting [x] Open database by using record-level locking is an Access client setting and is NOT saved with the application. So this is an Access-wide setting, not an application setting, and not one that travels with the application.
So you have to set this on each client workstation, or you have to set this in your start-up code.
If you can’t go around and change each workstation to change this setting (or you are using the Access runtime), then you can use this VBA in your start-up code to set this feature:
Application.SetOption "Use Row Level Locking", True
Note that the setting does NOT take effect until you exit the application, but that's really a "non-issue": it just means that the first time you run this code, some users might well be in page-locking mode and others in row-locking mode. Most of the time this causes little issue.
The next time any user launches the application, they will be in row-locking mode.
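A small sketch of such start-up code (the helper name is hypothetical; GetOption takes the same option string, and checking it first avoids re-setting the option on every run):

Public Sub EnsureRowLocking()
    ' Switch this client to row-level locking. The change only takes
    ' effect the NEXT time Access is started.
    If Application.GetOption("Use Row Level Locking") = False Then
        Application.SetOption "Use Row Level Locking", True
    End If
End Sub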
I have also written custom locking code in the past, and I can outline how to make that work well, but from what you posted so far, you never turned on locking nor had it working correctly for any of the forms you have now anyway.
OK, I finally figured out all of the issues contributing to this and worked out a solution.
The problem is multi-faceted so I'll cover the issues separately:
First issue: My custom locks weren't instantaneous. Even though I was using a transaction, there were several seconds after a lock was placed where users could still access the same record at the same time. I was using CurrentDb.Execute to UPDATE the record and Workspaces(0).BeginTrans for the transaction. For some reason (despite Microsoft's assurances to the contrary from here: https://msdn.microsoft.com/en-us/library/office/ff197654.aspx) the issue was that the transaction wasn't working when using the Workspaces object. When I switched to DBEngine.BeginTrans, the lock was instantaneous and solved my immediate problem.
The irony is that I almost always use DBEngine for my transactions but went with Workspaces this time for no reason, so that was a bad move obviously.
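For reference, a minimal sketch of the pattern that ended up working (the table, field, and function names are hypothetical); the "AND LockedBy IS NULL" test plus a check of RecordsAffected makes acquiring the lock atomic:

Public Function TryLockRecord(lngID As Long, strUser As String) As Boolean
    ' Claim the record only if nobody else already holds it.
    Dim db As DAO.Database
    Set db = CurrentDb
    DBEngine.BeginTrans
    db.Execute "UPDATE tblDocument SET LockedBy = '" & strUser & "'" & _
               " WHERE ID = " & lngID & " AND LockedBy IS NULL", dbFailOnError
    TryLockRecord = (db.RecordsAffected = 1)
    DBEngine.CommitTrans
End Function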
Second issue: The reason I had to use custom locking in the first place was because record-level locking wasn't working as expected (despite being properly configured). It was still using page-level locking. This was due to a performance trick I was using from here: https://msdn.microsoft.com/en-us/library/dd942824%28v=office.12%29.aspx?f=255&MSPPError=-2147217396
The trick involves opening a connection to the database where your linked tables are contained, which speeds up linked table operations. The problem is that the OpenDatabase method is NOT compatible with record-level locking so it opens the db using page-level locking, and since the first user to open a database determines its lock level (as explained here: https://msdn.microsoft.com/en-us/library/aa189633(v=office.10).aspx), all subsequent connections were forced to page-level.
Third issue: My forms are not just simple bound forms on a single table. They open a single record (not allowing the user to navigate) and provide several functions which allow the user to make modifications affecting other records in other tables that are related to the record they're editing (through comboboxes, pop-up forms and what not). As a result, I can't allow two people to open the same record at the same time, as it leaves way too many opportunities for users to walk over each other's changes. So even if I removed the OpenDatabase performance trick, I'd still have to force the form to be dirty as soon as they open it so the record locks immediately and no one else can open it. I don't know if this would be as instantaneous as my custom locking and haven't yet tested that aspect.
In any event, I need a record to be locked the instant a user opens it and for now I've decided to keep using my custom locking (with the fix for the transaction). If something else comes to light that makes that less than ideal, I can try removing the OpenDatabase trick and switching to Access's built-in locking and force an immediate lock on every record when it is opened.
You could use the method described here:
Handle concurrent update conflicts in Access silently
to handle your lock field.
Since Access doesn't make locking records easy, I'm wondering whether adding a table of locked-record entries would solve the problem, even though it would be the "duct tape, soup can and coat hanger" solution:
You create a "Locked_Record" table with two fields: a) the ID of the record being updated and b) the user name of the person updating that record. That table controls exactly who owns, and therefore who can edit, each record.
Your form would have a search field; when the search term is entered and Enter is pressed, the form looks for the record both in the data and in the Locked_Record table. If it is found in the Locked_Record table, the user gets an error saying "Record in use already" along with who owns the record. If it is not found in the data, the appropriate message is displayed. If it is found in the data and not in the Locked_Record table, a Locked_Record entry is created and the data is displayed in the form. At this point nobody else can edit that record.
When the user is done updating, either they press a "Done updating" button or they close the form; either way, the Locked_Record entry is deleted so others can use that record. If the record owner doesn't close the form or press the button, then that is a training issue.
This method could be used for multiple entities such as Customers, Employees, Departments, etc. You would just have to ensure your application and DB are set up so that any sub-forms which might lock other tables ONLY affect that record's entries in those tables.
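A sketch of the lock-acquisition part (table, field, and function names are hypothetical; with RecordID as the primary key of Locked_Record, the INSERT itself becomes an atomic "test and set", since a second insert for the same ID fails):

Public Function TryClaimRecord(lngID As Long, strUser As String) As Boolean
    On Error Resume Next
    ' Only one user can ever insert the lock row for a given record,
    ' because RecordID is the primary key.
    CurrentDb.Execute "INSERT INTO Locked_Record (RecordID, LockedBy) " & _
                      "VALUES (" & lngID & ", '" & strUser & "')", dbFailOnError
    TryClaimRecord = (Err.Number = 0)
    Err.Clear
End Function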
I know this is a bit old, but the information here inspired me to use the following. Basically, Me.txtApplication is a text box on the bound form. The form is bound to the table and is set to lock the edited record in the property sheet. This code does nothing other than trigger that editing lock and promptly undo the change. If another user tries to load the same record, it attempts the same edit, triggers the error, and moves to the next record or starts a new record, with the user none the wiser.
Private Sub Form_Current()    ' e.g. the form's Current event (assumed)
    ' Lock the current record with an edit-level lock by making and
    ' immediately removing a trivial edit to a field.
    ' If the record is already locked, move to the next record.
    On Error Resume Next
    Me.txtApplication = Me.txtApplication & "-%$^$^$$##$"
    Me.txtApplication = Replace(Me.txtApplication, "-%$^$^$$##$", "")
    If Err.Number = -2147352567 Then    ' the record is locked by another user
        If Me.CurrentRecord < Me.Recordset.RecordCount Then
            DoCmd.GoToRecord , , acNext
        Else
            ' If the condition is not true, we are on the last record,
            ' so don't go to the next one.
            MsgBox "No available records.", vbOKOnly, "No Records"
            DoCmd.GoToRecord , , acNewRec
        End If
    End If
End Sub
I have a list of records in a TIBDataSet (Embarcadero Delphi) and I need to locate and modify one record in this list. There is a chance that the underlying database record has been changed by other queries and operations since the TIBDataSet was opened. Therefore, I would like to call RefreshSQL for this one record (to get the latest data) before making any changes and before posting. Is it possible to do so, and how?
I am not concerned about the state of other records, and I am sure that the record under consideration will always be updated and those updates will be committed before I need to change this record from the TIBDataSet.
As far as I understand, RefreshSQL is used for automatically retrieving changes after the TIBDataSet has posted updates to the database. But I need manual (explicit) retrieval of the latest state before doing updates.
Try adding a TButton to your form and add the following code to its OnClick handler:
procedure TForm1.btnRefreshClick(Sender: TObject);
begin
  IBQuery1.Refresh; // or whatever your IBX dataset is called
end;
and set a breakpoint on it.
Then run your app and another one (e.g. 2nd instance of it) and change a row in the second app, and commit it back to the db.
Navigate to the changed row in your app and click btnRefresh and use the debugger to trace execution.
You'll find that TDataSet.Refresh calls its InternalRefresh, which in turn calls TIBCustomDataSet.InternalRefresh. That calls the inherited InternalRefresh, which does nothing, followed by TIBCustomDataSet.InternalRefreshRow. If you trace into that, you'll find that it constructs a temporary IB query to retrieve the current row from the server, which should give you what you want before making changes yourself.
So that should do what you want. The problem is, it can be thoroughly confusing trying to monitor the data in two applications because they may be in different transaction states. So you are rather dependent on other users' apps "playing the transactional game" with you, so everyone sees a consistent view of the data.
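So, for your case, a refresh-then-edit sketch (the dataset and field names are hypothetical):

procedure TForm1.UpdateCurrentRecord;
begin
  IBDataSet1.Refresh;  // re-fetches the current row from the server via InternalRefreshRow
  IBDataSet1.Edit;
  IBDataSet1.FieldByName('STATUS').AsString := 'DONE';
  IBDataSet1.Post;
end;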
Pretty simple question, but I can't find anything about it..
I have a plugin in Dynamics CRM 2013 that listens to the create and update events of an account. Depending on some business rules, some info about this account is written to an external webservice.
However, sometimes a create or update action can be rolled back from outside the scope of your plugin (for example by a third-party plugin), so the account won't be created or updated. The CRM plugin model handles this nicely by rolling back every SDK call made in the transaction. But as I have written some info to an external service, I need to know when a rollback occurred so that I can roll back the external operation manually.
Is there any way to detect a rollback in the plugin execution pipeline and execute some custom code? Alternative solutions are welcome too.
Thx in advance.
There is no trigger that can be subscribed to when the plugin rolls back, but you can determine when it happens after the fact.
Define a new Entity (Call it "Transaction Tracker" or whatever makes sense). Define these attributes for the entity
OptionSet Attribute (Call it "RollbackAction", or again, whatever makes sense).
A Text Attribute that'll serve as a Data Field.
Define a new workflow that gets kicked off when a "TransactionTracker" gets created:
Have its first step be a Wait Condition defined as a process timeout that waits for 1 minute.
Have its next step be a Custom Workflow Activity that uses the RollbackAction to determine how to parse the Text Attribute and whether the entity has been rolled back (if it's a Create, does the record exist? If it's an Update, is the entity's Modified On date >= the TransactionTracker's date?).
If it has been rolled back, perform whatever action is necessary; if it hasn't been rolled back, exit the workflow (or optionally delete the TransactionTracker entity).
Within your plugin, before making the external call, create an OrganizationServiceProxy (since you are creating it yourself and not using the existing one, it operates outside the transaction, so its writes will not be rolled back).
Create a "TransactionTracker" entity with that out-of-transaction service, populating the attributes as necessary.
You may need to tweak the timeout, but besides that, it should work fine.
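The out-of-transaction call could look something like this sketch (the service URL, credentials, and entity/attribute names are all hypothetical):

using System;
using System.Net;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;

public static void CreateTransactionTracker(Guid targetId)
{
    // A proxy we create ourselves is NOT enlisted in the plugin's
    // transaction, so this record survives a later rollback.
    var credentials = new ClientCredentials();
    credentials.Windows.ClientCredential =
        new NetworkCredential("user", "password", "DOMAIN");
    var uri = new Uri("https://crm.example.com/XRMServices/2011/Organization.svc");

    using (var proxy = new OrganizationServiceProxy(uri, null, credentials, null))
    {
        var tracker = new Entity("new_transactiontracker");
        tracker["new_rollbackaction"] = new OptionSetValue(1);  // e.g. 1 = Create
        tracker["new_data"] = targetId.ToString();              // parsed later by the workflow
        proxy.Create(tracker);
    }
}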
I have just done a quick search and nothing too relevant came up so here goes.
I have released the first version of an app. I have made a few changes to the SQLite DB since then, so in the next release I will need to update the DB structure while retaining the user's data.
What's the best approach for this? I'm currently thinking that on app update I will never replace the user's database file (which is in the documents folder, not in the bundle) but rather alter its structure using SQL queries.
This would involve tracking the changes made to the database since the previous release, scripting them as SQL queries, and running these to bring the DB to the latest revision. I will also need to keep a field in the database to track the schema version (kept in line with the app version for simplicity).
Unless there are specific hooks or delegate methods that are fired at first run after an update, I will put the calls for this logic at the very beginning of the app delegate, before anything else is run.
While doing this I will display "Updating app" or something similar to the user.
Next thing: what happens if there is an error somewhere along the line and the update fails? The DB will be out of date, and the app won't function properly because it expects a newer version.
Should I take it upon myself to just delete the user's DB file and replace it with the new version from the app bundle? Or should I just test, test, test until everything is solid on my side, and if an error occurs on the user's side it's something else, in which case I can't do anything about it and can only discard the data.
Any ideas on this would be greatly appreciated. :)
Thanks!
First of all, the approach you are considering is the correct one. This is known as database migration. Whenever you modify the database on your end, you should collect the appropriate ALTER TABLE... etc. statements into a migration script.
Then the next release of your app should run this code once (as you described) to migrate all the user's data.
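The mechanics are simple enough; here is a platform-neutral sketch of the pattern using Python's sqlite3 (the table names and statements are hypothetical, and SQLite's built-in PRAGMA user_version is used in place of a hand-rolled version field):

import sqlite3

# Each entry upgrades the schema by one version.
MIGRATIONS = {
    1: ["ALTER TABLE notes ADD COLUMN created_at TEXT"],
    2: ["CREATE TABLE tags (id INTEGER PRIMARY KEY, name TEXT)"],
}

def migrate(path):
    conn = sqlite3.connect(path)
    try:
        version = conn.execute("PRAGMA user_version").fetchone()[0]
        for target in sorted(v for v in MIGRATIONS if v > version):
            with conn:  # each step commits atomically, or rolls back on error
                for statement in MIGRATIONS[target]:
                    conn.execute(statement)
                conn.execute("PRAGMA user_version = %d" % target)
    finally:
        conn.close()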
As for handling errors, that's a tough one. I would be very wary of discarding the user's data. Better to display an error message and perhaps let the user contact you with a bug report; then you can release an update to your app which hopefully can do the migration with no problems. But ideally you test the process well enough that there shouldn't be any problems like this. Of course, it all depends on the complexity of the migration process.