I've been having the following issue for a few weeks now.
(The pastebin link that was originally posted here has since been deleted and is no longer relevant to the question.)
Here's the runtime context:
GWT 2.4.0
Oracle 11g
EclipseLink Implementation-Version: 1.1.4.v20100812-r7860 (META-INF)
<persistence-unit name="EXPRESSO_resourceLocalUnit"
transaction-type="RESOURCE_LOCAL">
It happens only, and always, on the application's very first call, when the application loads data from the database to fill a grid. Whether or not the exception is raised, the data is loaded correctly.
No transaction is used while loading the data (i.e. no tx.begin() is called).
Thanks in advance.
Please turn logging up to FINEST to see which objects are involved, and check whether you have any event methods (such as postLoad) that might throw an exception or perform some operation on the EntityManager.
If the data is populated fine, my guess is that your application is catching the exception from the find call and continuing its processing. The stack trace indicates that the problem occurs in a finally block, so it is difficult to determine whether the exception is the result of another exception occurring in the try block.
EclipseLink 1.1.4 is rather old, so you might also want to try EclipseLink 2.3.3 or later, just to verify that the underlying cause hasn't already been fixed, or to see whether it gives a more informative exception.
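For reference, the log level can be raised from persistence.xml via a persistence-unit property; a sketch for the unit named in the question (eclipselink.logging.level is a standard EclipseLink property):

```xml
<persistence-unit name="EXPRESSO_resourceLocalUnit"
    transaction-type="RESOURCE_LOCAL">
    <properties>
        <!-- Log at the most verbose level to see which objects are involved -->
        <property name="eclipselink.logging.level" value="FINEST"/>
    </properties>
</persistence-unit>
```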
Related
I suppose that this is more of a curiosity as opposed to an actual issue, but I thought I'd ask about it anyway. There are times when an uncaught error occurs in a server-side NetSuite script using SuiteScript 2.0/2.1 (2.x), but instead of seeing a "SYSTEM" scripting log entry, there's nothing. It gives the appearance of a script just stopping for no reason. Now, I know this can easily be avoided by wrapping everything within a try-catch block, but that's not what I'm trying to discuss here.
Does anyone have any insight into why a script would just stop without any SYSTEM error logging? It's just something I find interesting, given that with the 1.0 API uncaught errors would always get logged. And it's not guaranteed that an uncaught error won't be logged as a SYSTEM log entry. It seems most common with map/reduce scripts, but unless my memory is failing me, I believe I have seen it happen with Suitelets and user event scripts, too.
Just thought that I'd pose the question here to see if there was anyone who might know a little something about it.
This is actually covered in the system help for Map/Reduce scripts. They do fail silently. I've not seen this in any other script type.
I'm creating a custom logging model in Odoo. Naturally, I'd like the model to be completely exempt from the typical transaction rollback behavior. The reason for this is, of course, because I don't want exceptions later on in my code to cause log entries to never be added.
Here's an example scenario:
from odoo import models

class SomeModel(models.Model):
    def some_method(self):
        self.env["my.custom.log"].add_entry("SomeModel.some_method started running")
        ...
        raise Exception("Something went wrong...")
In this case, I want to create an entry in my custom log signaling that the method began to run. However, depending on logic later in the code, an exception may be raised. If an exception is raised, Odoo of course rolls back the database transaction, which is desired to prevent incorrect data from being committed to the database.
However, I don't want my log entries to disappear if an exception occurs. Like any sane log, I'd like my my.custom.log model to not be subject to Odoo's typical rollback behavior and immediately log information no matter what happens later.
I am aware that I can manually rollback or commit a transaction, but this doesn't really help me here. If I just run env.cr.commit() after adding my log entry it will definitely add the log entry, but it will also add all the other operations which occurred before it as well. That would definitely not be good. And running env.cr.rollback() before adding the log entry doesn't make sense either, because all pending operations would be rolled back even if no exception occurs later.
How do I solve this? How can I make my log entries always get added to the database regardless of what else happens in the code?
Note: I am aware that Odoo has a built-in log. I have my reasons for wanting to create my own separate log.
I've figured out a solution, but I don't know if it's necessarily the best way to do it.
The solution I've settled on for now would in this example be to define the add_entry() logging model method something like the following:
from odoo import registry

# If log entries are created normally and an exception occurs, Odoo will roll back
# the current database transaction, meaning that, along with other data, the log
# entries will not be created. With this method, log entries are created with a
# separate database cursor, independent of the current transaction.
def add_entry(self, message):
    self.flush()  # ADDED WITH IMPORTANT EDIT BELOW
    with registry(self.env.cr.dbname).cursor() as cr:
        record = self.with_env(self.env(cr)).create({"message": message})
    return self.browse(record.id)

# Otherwise the method would look simply like this...
def add_entry(self, message):
    return self.create({"message": message})
Explanation
I got the with registry(self.env.cr.dbname).cursor() as cr idea from the Odoo delivery module at delivery/models/delivery_carrier.py:212. In that file it looks like Odoo is trying to do pretty much exactly what I'm looking for in this question.
I got the self.with_env(self.env(cr)) idea from the Odoo iap module at iap/models/iap.py:182. It's cleaner than doing self.env(cr)["my.custom.log"].
I don't return the new record directly from the create() method. I'm pretty sure something wouldn't be right about returning a recordset with an environment whose cursor is about to be closed (because of the with statement). Instead, I just hang on to the record ID and then get the record in the current environment and cursor with self.browse(). Hopefully this is actually the best way to do this and doesn't come with any major performance issues.
Please don't hesitate to weigh in if you have any suggestions for improvement!
Important Edit (A Day Later)
I added the self.flush() line to the code above along with this edit. It turns out that, for some reason, when the with statement __exit__s and the separate cursor is committed, an exception can be raised while it tries to flush data.
The same thing is done at delivery/models/delivery_carrier.py:207. I'm not exactly sure why it is necessary, but it looks like it is. I assume that flushing the data comes with some performance considerations as well, but I'm not sure what they are, and right now it looks like they are unavoidable. Again, please weigh in if you know anything.
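The separate-cursor idea can be illustrated outside Odoo with plain sqlite3 (a sketch with hypothetical table names, not Odoo code): the log writer opens and commits its own connection, so a rollback on the main connection does not undo log rows that were already committed.

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "app.db")  # throwaway database file

def add_entry(message):
    # A dedicated connection, committed immediately: independent of any
    # transaction open on the application's main connection.
    log_conn = sqlite3.connect(db)
    try:
        with log_conn:  # commits on success
            log_conn.execute("INSERT INTO log (message) VALUES (?)", (message,))
    finally:
        log_conn.close()

# Demo: the main transaction rolls back, but the log entry survives.
main = sqlite3.connect(db)
main.execute("CREATE TABLE log (message TEXT)")
main.execute("CREATE TABLE data (value TEXT)")
main.commit()
try:
    add_entry("some_method started running")  # written via its own connection
    main.execute("INSERT INTO data (value) VALUES ('pending')")
    raise Exception("Something went wrong...")
except Exception:
    main.rollback()  # the 'pending' row is discarded

print(main.execute("SELECT COUNT(*) FROM data").fetchone()[0])  # 0
print(main.execute("SELECT COUNT(*) FROM log").fetchone()[0])   # 1
```

Note that SQLite allows only one writer at a time, so this sketch logs before the main connection opens its write transaction; PostgreSQL, which Odoo runs on, has no such single-writer restriction.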
To make sure a new procedure is valid, instead of letting DB2 always answer queries from the package cache, I have to rebind the database (the db2rbind command). Then I deploy the application on WebSphere. BUT, when I log in to the application, this error occurs:
The cursor "SQL_CURSN200C4" is not in a prepared state..SQLCODE=-514 SQLSTATE=26501,DRIVER=3.65.97
Furthermore, the weirdest thing is that the error occurred only once. It never occurs again after that, and the application runs very well. I'm curious how it occurs and why it occurs only once.
PS: my DB2 version is 10.1 Enterprise Server Edition.
The SQL that the error stack points to is very simple, something like:
select * from table where 1=1 and field_name='123' with ur
Unless you configure otherwise (statementCacheSize=0) or manually use setPoolable(false) in your application, WebSphere Application Server data sources cache and reuse PreparedStatements. A rebind can cause statements in the cache to become invalid. Fortunately, WebSphere Application Server has built-in knowledge of the -514 error code and will purge the bad statement from the cache in response to an occurrence of this error, so that the invalidated prepared statement does not continue to be reused and cause additional errors to the application. You might be running into this situation, which could explain how the error occurs just once after the rebind.
I am trying to cache a jOOQ record result using Redis, but it is throwing the following error:
org.springframework.data.redis.serializer.SerializationException: Cannot serialize; nested exception is org.springframework.core.serializer.support.SerializationFailedException: Failed to serialize object using DefaultSerializer; nested exception is java.io.NotSerializableException: org.jooq.impl.Utils$Cache$Key
Any suggestions on how to fix this?
This seems to be the same issue as https://github.com/jOOQ/jOOQ/issues/5290, which is a bug in jOOQ 3.8.1
I can see two workarounds for this issue:
You explicitly "detach" your records prior to serialisation to Redis by calling Record.detach()
You turn off automatic attaching of records to the creating Configuration using the Settings.attachRecords property
In both cases, should you wish to retrieve a record from redis and store it again, you will need to explicitly call Record.attach(Configuration)
My current persistence.xml table-generation strategy is set to create. This guarantees that each new installation of my application will get the tables, but it also means that every time the application is started, the logs are polluted with exceptions from EclipseLink trying to create tables that already exist.
What I want is for the tables to be created only if they are absent. One way for me to implement this is to check for the database file and, if it doesn't exist, create the tables using:
ServerSession session = em.unwrap(ServerSession.class);
SchemaManager schemaManager = new SchemaManager(session);
schemaManager.createDefaultTables(true);
But is there a cleaner solution? Possibly a try-catch way? It's error-prone for me to guard each database method with a try-catch whose catch block executes the above code; I'd expect this to be a property I can configure the EMF with.
The table creation problems should only be logged at the WARNING level, so you could filter them out by setting the log level higher than WARNING, or create a separate EM that mirrors the actual application EM, used just for table creation but with logging turned off completely.
As for catching exceptions from createDefaultTables: there shouldn't be any. Internally, createDefaultTables wraps the actual createTable call and ignores any errors it might throw, so the exceptions only show up in the log because the log level includes warning messages. You could wrap it in a try/catch and set the session log level to OFF, then reset it in the finally block.
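If upgrading EclipseLink is an option, versions 2.4 and later also offer a DDL-generation mode that creates only the tables that are missing, which avoids the create-table warnings altogether; a sketch of the persistence.xml properties (both property names are standard EclipseLink configuration):

```xml
<properties>
    <!-- Create only tables that do not exist yet (EclipseLink 2.4+) -->
    <property name="eclipselink.ddl-generation" value="create-or-extend-tables"/>
    <!-- Alternatively, hide the create-table warnings by raising the log threshold -->
    <property name="eclipselink.logging.level" value="SEVERE"/>
</properties>
```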