Zope: standard_error_message: ZSQLMethod not working

In my Zope website I have customized standard_error_message as a Python script. The script differentiates between error types and handles redirects for NotFound errors; for this it accesses some ZSQLMethods without problems.
Now I wanted to log the NotFound errors to a MySQL database (yes, I know they are logged in Z2.log) and added another ZSQLMethod for this purpose.
Strangely, this ZSQLMethod does not work when called from the standard_error_message script. It works (writes to the database) when called from another Python script, even when that script is invoked via an HTTP request by an anonymous user. When called from standard_error_message, it throws an error if called incorrectly, but otherwise it simply does not write to the database, and no error is logged.
I'm really lost with this issue...

You are running into transaction issues.
Zope wraps all requests in a transaction, and all external database interactions are tied into that transaction too.
The transaction is committed when the request completes successfully without raising an exception, and aborted when an exception is raised.
The standard_error_message is displayed precisely because an exception was raised, and because of that exception your SQL INSERT is rolled back when the transaction aborts.
You'll have to explicitly commit just the SQL transaction; add a COMMIT statement to your SQL method.
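For illustration, a minimal sketch of what the logging Z SQL Method body could look like (the table and column names are assumptions, and the method is assumed to take url as an argument); the trailing COMMIT is what lets the INSERT survive the aborted request transaction:

INSERT INTO notfound_log (url, occurred_at)
VALUES (<dtml-sqlvar url type="string">, NOW())
<dtml-var sql_delimiter>
COMMIT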

Related

SQLAlchemy error: invalid transaction. How do I locate the source of the error?

I'm experiencing an error in my code, but it appears that the error originates elsewhere. The error I'm seeing is:
(sqlalchemy.exc.InvalidRequestError) Can't reconnect until invalid transaction is rolled back
The problem is that I have a multi-threaded application accessing the database in a variety of spots. How can I identify which operation is generating this error?
I had been using db.session.rollback() to enable an immediate read from a different user session, thinking it would be fine since no writes were pending. I was incorrect: using db.session.commit() instead, even with no pending writes, enabled the immediate read from the other session.
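More generally, the pattern that avoids a session stuck in an invalid transaction looks roughly like this (a sketch assuming a Flask-SQLAlchemy db handle; record is a placeholder model instance):

try:
    db.session.add(record)
    db.session.commit()      # ends the transaction cleanly on success
except Exception:
    # without this rollback the session keeps its failed transaction,
    # and every later query raises InvalidRequestError
    db.session.rollback()
    raise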

Consistent database update in SAP/ABAP O/O

I need to ensure consistent editing of SAP tables for Fiori Backend calls.
I have multiple situations where a single call to the backend changes more than one table on the backend side. The changes are written to a transport request.
I want to implement a stable, error-free solution, so that if the first table was changed fine but the second table failed (duplicate entry, missing authorization), the whole set of changes is rejected.
However, it seems that only "perform FM in update task" is available, which requires putting all the logic of every backend DB change into a function module.
Am I missing something, or does SAP really have no object-oriented way to perform consistent database updates?
The only workaround I have is to check all these preconditions up front, which is not so nice anymore.
@Florian: A backend call is, for example, the action "Approve" on a document, which changes: 1) the document header table, where the status field changes from "in workflow" to something else, and 2) the approval table, where the current approver entry is changed. Or it is adding a new document, where 1) a document header table entry is added and 2) a document history table entry is added.
I do not want to call function modules; I want to implement the solution using only classes and class methods. I worked earlier with other ERP systems, and there are statements like "Start transaction", "Commit transaction" or "Rollback transaction". "Start transaction" means you start a LUW, which is only committed on "Commit transaction"; if you call "Rollback transaction", all current database changes of that LUW are cancelled. I wonder why modern SAP has none of these except the old update-task FMs (or is it just me not noticing the correct way to do this).
Calling an update function module IN UPDATE TASK is the only way. Here is how it works in a Fiori transactional app, for example:
Table A: you run some business logic and everything is fine, so you call an update-task function module to CUD (create/update/delete) database table A.
Table B: you run some business logic and there is an issue, for example a missing authorization, so you raise an exception (error); the update task to CUD database table B is NOT called.
After all the business logic has been processed, if any exception was raised, the SADL/Gateway layer catches it and calls ROLLBACK WORK, which means everything is rolled back. Otherwise, if there were no errors, it calls COMMIT WORK, which means consistent CUDs to all tables.
By the way, if anything abnormal like a DUPLICATE ENTRY happens within the update function module, then depending on your coding you can either ignore it or raise a MESSAGE of type E to abort the DB operations.
From my point of view, those kinds of issues should be caught before you call the update function module.
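As a sketch, the transaction bracket the asker is looking for is spelled like this in ABAP (the update function modules Z_UPD_DOC_HEADER and Z_UPD_APPROVAL are hypothetical; each must be flagged as an update module and contain the actual table writes):

" register the changes; nothing touches the database yet
CALL FUNCTION 'Z_UPD_DOC_HEADER' IN UPDATE TASK
  EXPORTING
    iv_docnr  = lv_docnr
    iv_status = 'APPROVED'.
CALL FUNCTION 'Z_UPD_APPROVAL' IN UPDATE TASK
  EXPORTING
    iv_docnr    = lv_docnr
    iv_approver = sy-uname.

IF lv_error = abap_true.
  ROLLBACK WORK.   " discards both registrations
ELSE.
  COMMIT WORK.     " runs both updates in one database LUW
ENDIF.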

SQLCODE=-514 SQLSTATE=26501 occurred after I finished the rebind operation

To make sure the new procedure is valid, and so that DB2 does not keep answering queries from its package cache, I have to rebind the database (db2rbind command). Then I deploy the application on WebSphere. But when I log in to the application, this error occurs:
The cursor "SQL_CURSN200C4" is not in a prepared state..SQLCODE=-514 SQLSTATE=26501,DRIVER=3.65.97
Furthermore, the weirdest thing is that the error occurred only once. It never occurs again after that, and the application runs very well. I'm curious how it occurs and why it occurs only once.
PS: my DB2 version is 10.1 Enterprise Server Edition.
The SQL that the error stack points to is very simple, something like:
select * from table where 1=1 and field_name="123" with ur
Unless you configure otherwise (statementCacheSize=0) or manually use setPoolable(false) in your application, WebSphere Application Server data sources cache and reuse PreparedStatements. A rebind can cause statements in the cache to become invalid. Fortunately, WebSphere Application Server has built-in knowledge of the -514 error code and will purge the bad statement from the cache when this error occurs, so that the invalidated prepared statement does not continue to be reused and cause additional errors in the application. You might be running into this situation, which would explain why the error occurred just once after the rebind.
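If you want to rule the statement cache out for a particular query, standard JDBC lets you opt a single statement out of pooling; a rough sketch (the dataSource lookup and the table and column names are placeholders):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

try (Connection con = dataSource.getConnection();
     PreparedStatement ps = con.prepareStatement(
         "select * from mytable where 1=1 and field_name = ? with ur")) {
    ps.setPoolable(false);   // do not return this statement to the cache
    ps.setString(1, "123");
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            // process the row
        }
    }
}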

Detect plugin rollback

Pretty simple question, but I can't find anything about it.
I have a plugin in Dynamics CRM 2013 that listens to the create and update events of an account. Depending on some business rules, some info about this account is written to an external webservice.
However, sometimes a create or update action can be rolled back from outside the scope of your plugin (for example, by a third-party plugin), so the account won't be created or updated. The CRM plugin model handles this nicely by rolling back every SDK call made in the transaction. But since I have written some info to an external service, I need to know when a rollback occurred so that I can roll back the external operation manually.
Is there any way to detect a rollback in the plugin execution pipeline and execute some custom code? Alternative solutions are welcome too.
Thanks in advance.
There is no trigger that can be subscribed to when the plugin rolls back, but you can determine after the fact that it happened.
Define a new entity (call it "TransactionTracker" or whatever makes sense) with these attributes:
An OptionSet attribute (call it "RollbackAction", or again, whatever makes sense).
A text attribute that serves as a data field.
Define a new workflow that gets kicked off when a TransactionTracker is created:
Have its first step be a wait condition, defined as a process timeout that waits for one minute.
Have its next step be a custom workflow activity that uses the RollbackAction to determine how to parse the text attribute and decide whether the entity has been rolled back (if it was a create, does the entity exist? If it was an update, is the entity's Modified On date >= the TransactionTracker's date?).
If it has been rolled back, perform whatever action is necessary; if it hasn't been rolled back, exit the workflow (or optionally delete the TransactionTracker entity).
Within your plugin, before making the external call, create an OrganizationServiceProxy; since you are creating it yourself instead of using the existing one, it operates outside the transaction and its writes will not be rolled back.
Create a TransactionTracker entity with that out-of-transaction service, populating the attributes as necessary (see the sketch below).
You may need to tweak the timeout, but besides that, it should work fine.
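A rough sketch of the out-of-transaction create described in the last two steps (the entity and attribute names are the hypothetical ones from above; the URL, credentials, and accountId are placeholders):

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;

// a second, independent connection: it is not enlisted in the plugin's
// database transaction, so this record survives a rollback
using (var proxy = new OrganizationServiceProxy(
           new Uri("https://org.example.com/XRMServices/2011/Organization.svc"),
           null, credentials, null))
{
    var tracker = new Entity("new_transactiontracker");
    tracker["new_rollbackaction"] = new OptionSetValue(1);  // e.g. Create
    tracker["new_data"] = accountId.ToString();             // the data field
    proxy.Create(tracker);
}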

Can I implement if-new-create JPA strategy?

My current persistence.xml table-generation strategy is set to create. This guarantees that each new installation of my application will get the tables, but it also means that every time the application is started, the logs are polluted with exceptions from EclipseLink trying to create tables that already exist.
The strategy I want is for the tables to be created only if they are absent. One way to implement this is to check for the database file and, if it doesn't exist, create the tables using:
ServerSession session = em.unwrap(ServerSession.class);
SchemaManager schemaManager = new SchemaManager(session);
schemaManager.createDefaultTables(true);
But is there a cleaner solution? Possibly a try-catch way? It's error-prone to guard each database method with a try-catch whose catch block executes the above code; I'd expect this to be a property I can configure the EMF with.
The table-creation problems should only be logged at the warning level, so you could filter them out by setting the log level higher than WARNING, or create a separate EM that mirrors the actual application EM and is used just for table creation, with logging turned off completely.
As for catching exceptions from createDefaultTables: there shouldn't be any. The internals of createDefaultTables wrap the actual createTable portion and ignore the errors it might throw, so the exceptions only show up in the log because the log level includes warning messages. You could, though, wrap the call in a try/catch, set the session log level to off, and then reset it in the finally block.
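For completeness, that try/finally variant might look like this (a sketch against the EclipseLink API already used in the question):

import org.eclipse.persistence.logging.SessionLog;
import org.eclipse.persistence.sessions.server.ServerSession;
import org.eclipse.persistence.tools.schemaframework.SchemaManager;

ServerSession session = em.unwrap(ServerSession.class);
int previousLevel = session.getLogLevel();
session.setLogLevel(SessionLog.OFF);    // silence the CREATE TABLE warnings
try {
    new SchemaManager(session).createDefaultTables(true);
} finally {
    session.setLogLevel(previousLevel); // restore normal logging
}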