Can I implement if-new-create JPA strategy? - eclipselink

My current persistence.xml table-generation strategy is set to create. This guarantees that each new installation of my application will get the tables, but it also means that every time the application is started, the logs are polluted with exceptions from EclipseLink trying to create tables that already exist.
The strategy I want is for the tables to be created only when they are absent. One way to implement this is to check for the database file and, if it doesn't exist, create the tables using:
// Unwrap the underlying EclipseLink session from the EntityManager and
// create any default tables that are missing (the boolean flag controls
// foreign-key constraint generation)
ServerSession session = em.unwrap(ServerSession.class);
SchemaManager schemaManager = new SchemaManager(session);
schemaManager.createDefaultTables(true);
But is there a cleaner solution? Possibly a try-catch approach? It's error-prone for me to guard each database method with a try-catch whose catch block executes the code above; I'd expect this to be a property I can configure the EntityManagerFactory with.
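A minimal sketch of the file-check guard described above, under the assumption that the application uses a file-based database; `CreateIfAbsent` and the `Runnable` are hypothetical stand-ins for the application's wiring, and the schema-creation call would be the `SchemaManager` code shown earlier:

```java
import java.nio.file.Files;
import java.nio.file.Path;

public class CreateIfAbsent {

    // Runs the given schema-creation action only when the database file is
    // missing; returns true when creation was triggered. The action would be
    // e.g. () -> schemaManager.createDefaultTables(true).
    public static boolean createTablesIfMissing(Path dbFile, Runnable createTables) {
        if (Files.exists(dbFile)) {
            return false;       // tables assumed to exist already; skip creation
        }
        createTables.run();     // first run: generate the schema
        return true;
    }
}
```

This keeps the guard in one place instead of scattering try-catch blocks across every database method.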

Table-creation problems should only be logged at the warning level, so you could filter them out by setting the log level higher than warning, or create a separate EntityManager that mirrors the actual application EM and is used just for table creation, with logging turned off completely.
As for catching exceptions from createDefaultTables: there shouldn't be any. Internally, createDefaultTables wraps the actual createTable calls and ignores any errors they throw, so the exceptions only show up in the log because the log level includes warning messages. You could, however, wrap the call in a try/finally, set the session log level to off beforehand, and reset it in the finally block.
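The save-and-restore pattern this answer describes can be sketched with plain java.util.logging standing in for the EclipseLink session log; `QuietTableCreation` and `runSilenced` are hypothetical names, and with EclipseLink itself you would save the session's log level, call `session.setLogLevel(SessionLog.OFF)`, and restore it the same way:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class QuietTableCreation {

    // Runs a table-creation action with the given logger silenced, restoring
    // the previous level afterwards even if the action throws. Expected
    // "table already exists" warnings are suppressed; real failures still
    // propagate to the caller.
    public static void runSilenced(Logger log, Runnable createTables) {
        Level previous = log.getLevel();   // may be null (inherited from parent)
        log.setLevel(Level.OFF);           // silence the expected warnings
        try {
            createTables.run();            // e.g. schemaManager.createDefaultTables(true)
        } finally {
            log.setLevel(previous);        // always restore the original level
        }
    }

    public static void main(String[] args) {
        Logger log = Logger.getLogger("schema-demo");
        log.setLevel(Level.INFO);
        try {
            runSilenced(log, () -> { throw new IllegalStateException("table exists"); });
        } catch (IllegalStateException expected) {
            // the exception still reaches us, but the log level was restored
        }
        System.out.println(log.getLevel()); // prints INFO
    }
}
```

The same shape applies to the mirrored-EM idea: build the throwaway EntityManager with its logging disabled (EclipseLink's eclipselink.logging.level property accepts "OFF") and use it only for table creation.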


How would I exclude an Odoo custom logging model from rollbacks?

I'm creating a custom logging model in Odoo. Naturally, I'd like the model to be completely exempt from the typical transaction rollback behavior. The reason, of course, is that I don't want exceptions later on in my code to cause log entries to never be added.
Here's an example scenario:
class SomeModel(models.Model):
    def some_method(self):
        self.env["my.custom.log"].add_entry("SomeModel.some_method started running")
        ...
        raise Exception("Something went wrong...")
In this case, I want to create an entry in my custom log signaling that the method began to run. However, depending on logic later in the code, an exception may be raised. If an exception is raised, Odoo of course rolls back the database transaction, which is desired to prevent incorrect data from being committed to the database.
However, I don't want my log entries to disappear if an exception occurs. Like any sane log, I'd like my my.custom.log model to not be subject to Odoo's typical rollback behavior and immediately log information no matter what happens later.
I am aware that I can manually roll back or commit a transaction, but that doesn't really help me here. If I just run env.cr.commit() after adding my log entry, it will definitely add the log entry, but it will also commit all the other operations that occurred before it, which would definitely not be good. And running env.cr.rollback() before adding the log entry doesn't make sense either, because all pending operations would be rolled back even if no exception occurs later.
How do I solve this? How can I make my log entries always get added to the database regardless of what else happens in the code?
Note: I am aware that Odoo has a built-in log. I have my reasons for wanting to create my own separate log.
I've figured out a solution, but I don't know if it's necessarily the best way to do it.
The solution I've settled on for now would in this example be to define the add_entry() logging model method something like the following:
# If log entries are created normally and an exception occurs, Odoo will roll back
# the current database transaction, meaning that, along with other data, the log
# entries will not be created. With this method, log entries are created with a
# separate database cursor, independent of the current transaction.
def add_entry(self, message):
    self.flush()  # ADDED WITH IMPORTANT EDIT BELOW
    with registry(self.env.cr.dbname).cursor() as cr:
        record = self.with_env(self.env(cr)).create({"message": message})
    return self.browse(record.id)

# Otherwise the method would look simply like this...
def add_entry(self, message):
    return self.create({"message": message})
Explanation
I got the with registry(self.env.cr.dbname).cursor() as cr idea from the Odoo delivery module at delivery/models/delivery_carrier.py:212. In that file it looks like Odoo is trying to do pretty much exactly what I'm looking for in this question.
I got the self.with_env(self.env(cr)) idea from the Odoo iap module at iap/models/iap.py:182. It's cleaner than doing self.env(cr)["my.custom.log"].
I don't return the new record directly from the create() call. I'm pretty sure something wouldn't be right about returning a recordset whose environment's cursor is about to be closed (because of the with statement). Instead, I just hang on to the record ID and then fetch the record in the current environment and cursor with self.browse(). Hopefully this is actually the best way to do it and doesn't come with any major performance issues.
Please don't hesitate to weigh in if you have any suggestions for improvement!
Important Edit (A Day Later)
I added the self.flush() line in the code above along with this edit. It turns out that when the with statement exits and the separate cursor is committed, an exception can be raised while Odoo tries to flush data.
The same thing is done at delivery/models/delivery_carrier.py:207. I'm not exactly sure why it is necessary, but it looks like it is. I assume that flushing the data comes with some performance considerations as well, but I'm not sure what they are, and right now they look unavoidable. Again, please weigh in if you know anything.

Consistent database update in SAP/ABAP O/O

I need to ensure consistent editing of SAP tables for Fiori Backend calls.
I have multiple situations where a single call to the backend changes more than one table on the backend side. The changes are written to a transport request.
I want to implement an error-free, stable solution, so that if the first table was changed fine but the change to the second table failed (duplicate entry, missing authorization), the whole set of changes is rejected.
However, it seems that the only option available is "PERFORM fm IN UPDATE TASK", which requires putting all the logic of every backend DB change into a function module.
Am I missing something, or does SAP really have no object-oriented way to perform consistent database updates?
The only workaround I have is to check all these preconditions up front, which is not so nice anymore.
@Florian: The backend call is, for example, the action "Approve" on a document, which changes: 1) the document header table, where the status field changes from "in workflow" to something else, and 2) the approval table, where the current approver's entry is changed. Or it is adding a new document, where 1) a document header table entry is added and 2) a document history table entry is added.
I do not want to call function modules; I want to implement the solution using only classes and class methods. I worked earlier with other ERP systems, and they have statements like "Start transaction", "Commit transaction", and "Rollback transaction". "Start transaction" means you begin a LUW, which is committed only on "Commit transaction"; if you call "Rollback transaction", all database changes of the current LUW are cancelled. I wonder why modern SAP has none of these except for the old update-task FMs (or is it just me failing to notice the correct way to do this?).
CALL FUNCTION ... IN UPDATE TASK is the only way. Here is how it works in a Fiori transactional app, for example:
Table A: you run some business logic, everything is fine, and you call the update task to CUD database table A.
Table B: you run some business logic, there is an authorization issue, and you raise an exception (error). The update task to CUD database table B is NOT called.
After all the business logic has been processed, if any exception was raised, the SADL/Gateway layer catches it and calls ROLLBACK WORK, which means everything is rolled back. Otherwise, if there are no errors, it calls COMMIT WORK, which means consistent CUDs to all tables.
By the way, if anything abnormal such as a DUPLICATE ENTRY happens inside the update function module, then depending on your coding you can ignore it or raise a MESSAGE of type E to abort the DB operations.
From my point of view, those kinds of issues should be caught before you call the update function module.

Concurrency violation while updating and deleting newly added rows

I've been developing a CRUD application using DataSets in C# and SQL Server 2012. The project is basically an agenda which holds information about Pokémon (name, abilities, types, image, etc.).
For the past few months I've been facing a problem related to concurrency violation. In other words, when I try to delete or update rows that I added during the same execution of the program, a concurrency exception is thrown and it isn't possible to perform any further changes in the database, so I need to restart the program to be able to make changes again. (Important note: this exception only happens for new rows added through C#.)
I've been looking for a solution to this violation (without using Entity Framework or LINQ to SQL), but I couldn't find anything that I could add to the C# source code. Does anyone know how to handle this? What should I implement in my source code? Is there anything I can do in SQL Server that could help?
Here is a link to my project, a backup from the database, images of the main form and the database diagram:
http://www.mediafire.com/download.php?izkat44a0e4q8em (C# source code)
http://www.mediafire.com/download.php?rj2e118pliarae2 (Sql backup)
imageshack.us/a/img708/3823/pokmonform.png (Main Form)
imageshack.us/a/img18/9546/kantopokdexdiagram.png (Database Diagram)
I have looked at your code, and it seems that you use AcceptChanges on the datatable daKanto inconsistently. In fact you use AcceptChangesDuringUpdate, which is also fine; I prefer to call dataTable.AcceptChanges() explicitly after the update, but your way works as well.
Anyway, I noticed that you use AcceptChangesDuringUpdate in the Delete_Click and Update__Click methods but not in Save_Click, and I also think you should use AcceptChangesDuringFill in MainForm_Load, where you fill your datasets.
I cannot guarantee that this will help, but I know that uniformity of data access throughout an application reduces the risk of unexpected data-consistency errors.

programmatically set show_sql without recreating SessionFactory?

In NHibernate, I have show_sql turned on for running unit tests. Each of my unit tests clears and refills the database, and this results in lots of SQL queries that I don't want NHibernate to output.
Is it possible to control show_sql without destroying the SessionFactory? If possible, I'd like to turn it off when running setup for a test, then turn it on again when the body of the test starts to run.
Is this possible?
The only place you can set this is when building the NHibernate.Cfg.Configuration.
Once you've created a SessionFactory from your Configuration, there's no way to access the configuration settings, which I think is one of the reasons for using a factory pattern: to ensure that instances, once successfully built, can't be messed up by runtime re- or mis-configuration.
If you really need that feature, get the NHibernate source code and find the place where the show_sql setting is evaluated.
Another option, although it may or may not be as good, is to use NHProf and just initialise NHProf when testing.
NHProf doesn't log the setup and clearing of the database, only the queries used.

Exception [EclipseLink-2004]

I've been having the following issue for several weeks now. I originally posted a pastebin link with the stack trace here, but the link has since been deleted and no longer matches the question.
Here's the runtime context:
GWT 2.4.0
Oracle 11g
EclipseLink Implementation-Version: 1.1.4.v20100812-r7860 (META-INF)
<persistence-unit name="EXPRESSO_resourceLocalUnit"
transaction-type="RESOURCE_LOCAL">
It happens only and always on the application's very first call, when the application loads data from the database to fill a grid. Whether or not the exception is raised, the data is loaded correctly.
No transaction is used while loading the data (i.e., no tx.begin() is used).
Thanks in advance.
Please turn logging up to FINEST to see which objects are involved, and check whether you have any event methods, such as postLoad, that might result in an exception or perform some operation on the EntityManager.
If the data is populated fine, my guess is that your application is handling the exception from the find call and continuing its processing. The stack trace indicates that the problem occurs in a finally block, so it is difficult to determine whether the exception is the result of another exception occurring in the try block.
EclipseLink 1.1.4 is rather old, so you might also want to try EclipseLink 2.3.3 or later, just to verify that the underlying cause hasn't already been fixed, or to get a better exception.
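For reference, raising the log level as suggested can be done through the eclipselink.logging.level persistence-unit property; the fragment below reuses the unit name from the question and is only a sketch of the relevant property:

```xml
<persistence-unit name="EXPRESSO_resourceLocalUnit" transaction-type="RESOURCE_LOCAL">
    <properties>
        <!-- temporarily raise EclipseLink logging verbosity while diagnosing -->
        <property name="eclipselink.logging.level" value="FINEST"/>
    </properties>
</persistence-unit>
```

Remember to lower the level again once the offending objects have been identified, since FINEST output is very verbose.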