How would I exclude an Odoo custom logging model from rollbacks?

I'm creating a custom logging model in Odoo. Naturally, I'd like the model to be completely exempt from the typical transaction rollback behavior. The reason is, of course, that I don't want exceptions later on in my code to prevent log entries from ever being added.
Here's an example scenario:
class SomeModel(models.Model):
    def some_method(self):
        self.env["my.custom.log"].add_entry("SomeModel.some_method started running")
        ...
        raise Exception("Something went wrong...")
In this case, I want to create an entry in my custom log signaling that the method began to run. However, depending on logic later in the code, an exception may be raised. If an exception is raised, Odoo of course rolls back the database transaction, which is desired to prevent incorrect data from being committed to the database.
However, I don't want my log entries to disappear if an exception occurs. Like any sane log, I'd like my my.custom.log model to not be subject to Odoo's typical rollback behavior and immediately log information no matter what happens later.
I am aware that I can manually roll back or commit a transaction, but that doesn't really help me here. If I just run env.cr.commit() after adding my log entry, it will definitely add the log entry, but it will also commit every other operation that occurred before it in the transaction. That would definitely not be good. And running env.cr.rollback() before adding the log entry doesn't make sense either, because all pending operations would be rolled back even if no exception occurs later.
How do I solve this? How can I make my log entries always get added to the database regardless of what else happens in the code?
Note: I am aware that Odoo has a built-in log. I have my reasons for wanting to create my own separate log.

I've figured out a solution, but I don't know if it's necessarily the best way to do it.
The solution I've settled on for now is to define the add_entry() method on the logging model something like the following:
# If log entries are created normally and an exception occurs, Odoo will roll back
# the current database transaction, meaning that, along with other data, the log
# entries will not be created. With this method, log entries are created with a
# separate database cursor, independent of the current transaction.
# (Assumes the usual imports, e.g. from odoo import models, registry.)
def add_entry(self, message):
    self.flush()  # ADDED WITH IMPORTANT EDIT BELOW
    with registry(self.env.cr.dbname).cursor() as cr:
        record = self.with_env(self.env(cr)).create({"message": message})
    return self.browse(record.id)

# Otherwise the method would look simply like this...
def add_entry(self, message):
    return self.create({"message": message})
Explanation
I got the with registry(self.env.cr.dbname).cursor() as cr idea from the Odoo delivery module at delivery/models/delivery_carrier.py:212. In that file it looks like Odoo is trying to do pretty much exactly what I'm looking for in this question.
I got the self.with_env(self.env(cr)) idea from the Odoo iap module at iap/models/iap.py:182. It's cleaner than doing self.env(cr)["my.custom.log"].
I don't return the new record directly from the create() method. I'm pretty sure something wouldn't be right about returning a recordset with an environment whose cursor is about to be closed (because of the with statement). Instead, I just hang on to the record ID and then get the record in the current environment and cursor with self.browse(). Hopefully this is actually the best way to do this and doesn't come with any major performance issues.
Please don't hesitate to weigh in if you have any suggestions for improvement!
Important Edit (A Day Later)
I added the self.flush() line in the code above along with this edit. It turns out that, for some reason, when the with statement __exit__s and the separate cursor is committed, an exception can be raised when it tries to flush data.
The same thing is done at delivery/models/delivery_carrier.py:207. I'm not exactly sure why it is necessary, but it looks like it is. I assume that flushing the data comes with some performance cost as well, but I'm not sure what it is, and right now it looks unavoidable. Again, please weigh in if you know anything.
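For completeness, the full logging model would look roughly like this. This is only a sketch: the message field matches the create() call above, but the Char type and _description are placeholders, and registry here is the helper exposed at the odoo package level:
from odoo import fields, models, registry


class MyCustomLog(models.Model):
    _name = "my.custom.log"
    _description = "Custom Log"

    message = fields.Char(required=True)

    def add_entry(self, message):
        # Flush pending ORM writes of the current transaction first (see the
        # edit above), then create the entry through its own cursor so it is
        # committed independently of whatever later happens to self.env.cr.
        self.flush()
        with registry(self.env.cr.dbname).cursor() as cr:
            record = self.with_env(self.env(cr)).create({"message": message})
        return self.browse(record.id)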

Related

Consistent database update in SAP/ABAP O/O

I need to ensure consistent editing of SAP tables for Fiori Backend calls.
I have multiple situations where a single call to the backend changes more than one table on the backend side. The changes are written to a transport request.
I want to implement a stable, error-free solution, so that if the first table is changed fine but the second table fails (duplicate entry, missing authorization), the whole set of changes is rejected.
However, it seems that only "perform FM in update task" is available, which requires putting all the logic for every backend DB change into a function module.
Am I missing something, or does SAP really have no object-oriented way to perform consistent database updates?
The only workaround I have is to check all these preconditions up front, which is not so nice anymore.
#Florian: The backend call is, for example, the action "Approve" on a document, which changes: 1) the document header table, where the status field changes from "in workflow" to something else; 2) the approval table, where the current approver's entry is changed. Or it is adding a new document, where 1) a document header table entry is added and 2) a document history table entry is added.
I do not want to call function modules; I want to implement a solution using only classes and class methods. I worked with other ERP systems earlier, and they have statements like "Start transaction", "Commit transaction" or "Rollback transaction". "Start transaction" means you start a LUW, which is only committed on "Commit transaction"; if you call "Rollback transaction", all current database changes of that LUW are cancelled. I wonder why modern SAP has none of these except for the old update-task FM (or is it just me not noticing the correct way to do this).
Calling an update function module IN UPDATE TASK is the only way. Here is how it works in a Fiori transactional app, for example:
Database table A: you do some business logic, everything is fine, so you call the update task to CUD (create/update/delete) database table A.
Database table B: you do some business logic, there is an issue regarding authorization, so you raise an exception (error). The update task to CUD database table B is NOT called.
After all the business logic has been processed, if any exception was raised, the SADL/Gateway layer catches it and calls ROLLBACK WORK, which means everything is rolled back. Otherwise, if there were no errors, it calls COMMIT WORK, which results in consistent CUDs to all tables.
By the way, if anything abnormal like a duplicate entry happens within the update function module, then depending on your coding you can either ignore it or raise a MESSAGE of type E to abort the DB operations.
From my point of view, though, those kinds of issues should be caught before you call the update function module.
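For readers coming from other stacks: the all-or-nothing behaviour that IN UPDATE TASK together with COMMIT WORK / ROLLBACK WORK gives you is essentially what an explicit database transaction does elsewhere. A minimal, purely illustrative sketch in Python with sqlite3 (not ABAP; the table and column names are invented) of the "both tables or neither" pattern from the question:
import sqlite3

# Invented schema, just to illustrate the pattern the question describes
# (approve a document: update the header row and the approval row together).
conn = sqlite3.connect("documents.db")
try:
    with conn:  # one transaction: commits on success, rolls back on any exception
        conn.execute(
            "UPDATE doc_header SET status = 'approved' WHERE doc_id = ?", (42,)
        )
        conn.execute(
            "UPDATE approvals SET state = 'done' WHERE doc_id = ? AND approver = ?",
            (42, "FLORIAN"),
        )
except sqlite3.Error:
    # Any failure (duplicate entry, constraint violation, ...) leaves both
    # tables untouched because the transaction was rolled back.
    raise
finally:
    conn.close()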

How to explicitly call TIBDataSet.RefreshSQL

I have a list of records in a TIBDataSet (Embarcadero Delphi) and I need to locate and modify one record in this list. There is a chance that the underlying database record has been changed by other queries and operations since the TIBDataSet was opened. Therefore, I would like to call RefreshSQL for this one record (to get the latest data) before making any changes and before posting. Is it possible to do so, and how?
I am not concerned about the state of the other records, and I am sure that the record under consideration will always be updated and those updates committed before I need to change this record from the TIBDataSet.
As far as I understand, RefreshSQL is used for automatic retrieval of changes after the TIBDataSet has posted updates to the database. But I need manual (explicit) retrieval of the latest state before doing updates.
Try adding a TButton to your form and add the following code to its OnClick handler:
procedure TForm1.btnRefreshClick(Sender: TObject);
begin
  IBQuery1.Refresh; // or whatever your IBX dataset is called
end;
and set a breakpoint on it.
Then run your app and another one (e.g. 2nd instance of it) and change a row in the second app, and commit it back to the db.
Navigate to the changed row in your app and click btnRefresh and use the debugger to trace execution.
You'll find that TDataSet.Refresh calls its InternalRefresh, which in turn calls TIBCustomDataSet.InternalRefresh. That calls the inherited InternalRefresh, which does nothing, followed by TIBCustomDataSet.InternalRefreshRow. If you trace into that, you'll find that it constructs a temporary IB query to retrieve the current row from the server, which should give you what you want before making changes yourself.
So that should do what you want. The problem is, it can be thoroughly confusing trying to monitor the data in two applications because they may be in different transaction states. So you are rather dependent on other users' apps "playing the transactional game" with you, so everyone sees a consistent view of the data.

iOS Rolling out app updates. Keeping user data intact when DB update required

I have just done a quick search and nothing too relevant came up so here goes.
I have released the first version of an app. I have made a few changes to the SQLite db since then; in the next release I will need to update the DB structure but retain the user's data.
What's the best approach for this? I'm currently thinking that on app update I will never replace the user's (documents folder, not in bundle) database file but rather alter its structure using SQL queries.
This would involve tracking the changes made to the database since the previous release, scripting them into SQL queries, and running those to bring the DB up to the latest revision. I will also need to keep a field in the database to track the version number (kept in line with the app version for simplicity).
Unless there are specific hooks or delegate methods that are fired on the first run after an update, I will put the calls for this logic at the very beginning of the app delegate, before anything else runs.
While doing this I will display "Updating app" or something to the user.
Next thing: what happens if there is an error somewhere along the line and the update fails? The DB will be out of date, and the app won't function properly because it expects a newer version.
Should I take it upon myself to just delete the user's DB file and replace it with the new version from the app bundle? Or should I just test, test, test until everything is solid on my side, and if an error occurs on the user's side it's something else entirely, in which case I can't do anything about it other than discard the data?
Any ideas on this would be greatly appreciated. :)
Thanks!
First of all, the approach you are considering is the correct one. This is known as database migration. Whenever you modify the database on your end, you should collect the appropriate ALTER TABLE... etc. statements into a migration script.
Then the next release of your app should run this code once (as you described) to migrate all the user's data.
As for handling errors, that's a tough one. I would be very wary of discarding the user's data. Better would be to display an error message and perhaps let the user contact you with a bug report; then you can release an update to your app which hopefully can do the migration with no problems. But ideally you test the process well enough that there shouldn't be any problems like this. Of course, it all depends on the complexity of the migration process.
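To make the idea concrete, here is a minimal sketch of such a migration runner. It's written in Python with sqlite3 purely for illustration (the real code would of course live in the app itself), the table and column names are invented, and it uses SQLite's built-in user_version pragma instead of a hand-rolled version field; a version table works just as well.
import sqlite3

# Map each schema revision to the statements that produce it. The statements
# below are invented examples; in a real app they mirror the changes made to
# the bundled database between releases.
MIGRATIONS = {
    1: ["ALTER TABLE notes ADD COLUMN created_at TEXT"],
    2: ["CREATE TABLE IF NOT EXISTS tags (id INTEGER PRIMARY KEY, name TEXT)"],
}

def migrate(db_path):
    conn = sqlite3.connect(db_path)
    try:
        current = conn.execute("PRAGMA user_version").fetchone()[0]
        for version in sorted(v for v in MIGRATIONS if v > current):
            # Each revision runs in its own transaction: it either applies
            # completely (and bumps user_version) or rolls back on error,
            # leaving the user's data at the previous known-good revision.
            with conn:
                for statement in MIGRATIONS[version]:
                    conn.execute(statement)
                conn.execute(f"PRAGMA user_version = {version}")
    finally:
        conn.close()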

Can I implement if-new-create JPA strategy?

My current persistence.xml table generation strategy is set to create. This guarantees that each new installation of my application will get the tables, but it also means that every time the application is started, the logs are polluted with exceptions from EclipseLink trying to create tables that already exist.
The strategy I want is for the tables to be created only when they are absent. One way for me to implement this is to check for the database file and, if it doesn't exist, create the tables using:
ServerSession session = em.unwrap(ServerSession.class);
SchemaManager schemaManager = new SchemaManager(session);
schemaManager.createDefaultTables(true);
But is there a cleaner solution? Possibly a try-catch way? It's error-prone for me to guard each database method with a try-catch where the catch executes the above code; I'd expect it to be a property I can configure the EMF with.
The table creation problems should only be logged at the warning level, so you could filter them out by setting the log level higher than warning, or create a separate EM that mirrors the actual application EM, to be used just for table creation but with logging turned off completely.
As for catching exceptions from createDefaultTables: there shouldn't be any. The internals of createDefaultTables wrap the actual createTable portion and ignore the errors it might throw, so the exceptions only show up in the log because the log level includes warning messages. You could wrap it in a try/catch and set the session log level to off, and then reset it in the finally block, though.

Is it possible to force an error in an Integration Services data flow to demonstrate its rollback?

I have been tasked with demoing how Integration Services handles an error during a data flow, to show that no data makes it into the destination. This is an existing package, and I want to limit the code changes to the package as much as possible (since this is most likely a one-time deal).
The scenario that is trying to be understood is a "systemic" failure - the source file disappears midstream, or the file server loses power, etc.
I know I can make this happen by having the Error Output of the source set to Failure and introducing bad data, but I would like to do something lighter than that.
I suppose I could add a Script transform and look for a certain value and throw an error, but I was hoping someone has come up with something easier / more elegant.
Thanks,
Matt
Mess up the file that you are trying to import by pasting in some bad data, or save it in another format like UTF-8 or something like that.
We always have a task at the end that closes the data flow in our metadata tables. To test errors, I simply remove the ? that is the variable for the stored proc it runs. It's easy to do, easy to put back the way it was, and it doesn't mess up anything data-wise, as our error trapping then closes the data flow with an error. You could do something similar by adding a task to call a stored proc with an input variable but assigning no parameters to it, so it will fail. Then once the test is done, simply disable that task.
Data will make it to the destination if the flow is not running as a transaction. If you want to prevent populating partial data, you have to use transactions. There is an option to set the end result of a control flow item to "failed" irrespective of the actual result, but this is not available for data flow items. You will have to either produce an actual error in the data or code in a situation that will create an error. There is no other way...
Could we try the transaction-level property of the package?
On failure of the data flow, it will revert all the data in the target.
Only on a successful data flow will it commit the data to the target; otherwise it will roll back the data from the target.