Hibernate + DB2 trigger - SQLCODE -746

I have a DB2 database with some legacy triggers and I am moving from plain JDBC to Hibernate. My problem is that I am stuck with an error thrown by DB2 while storing data to the DB through Hibernate.
The error -746 says:
If a table is being modified (by INSERT, DELETE, UPDATE, or MERGE),
the table can not be accessed by the lower level nesting SQL
statement.
If any table is being accessed by a SELECT statement, no table can be
modified (by INSERT, DELETE, UPDATE, or MERGE) in any lower level
nesting SQL statement.
In my case I am trying to save an entity (owned by another entity already saved in this transaction), but there is a before-insert trigger in the DB which checks a constraint (something like "is this instance the only one that has the flag set to true?"). When this trigger executes, the error is thrown.
I already hit a similar error while saving another entity with another trigger, and I worked around it by storing the problematic entity via JDBC, loading it with Hibernate, and then saving the rest of the entities. But this approach seems a bit cumbersome. Is there a way to resolve this problem? And why exactly does this fail with Hibernate when it works with plain JDBC? In both cases everything runs in one transaction. I tried flushing the session before storing the problematic entity, but it did not help.

Related

SQL lazy update operations

We insert SQL entities into a table, one by one. It's easy and fast. After the entity insert, we execute a stored procedure that updates several tables according to the new entity: it updates some calculated fields and some lookup tables that help to find this new entity. This takes a lot of time and sometimes ends up in a deadlock.
Inserting the main entity must be fast and reliable; updating the additional tables does not need to happen immediately. I was wondering (I am not a DB expert) if there is a SQL mechanism similar to thread handling in C#: a dedicated update worker that is woken when a new entity arrives and updates the additional tables after the insertion. Performing these updates in "one thread" would avoid deadlocks.
I can imagine a SQL job which executes every minute, searches for new entities and executes the updates, but that seems too crude to me.
What is the best practice to implement this on MS SQL side?
There are a number of ways you could achieve this. You mention that the two steps can be done separately, since immediate updating is not important. In that case, you could set up a SQL Agent job to run a stored procedure that checks for missing records and performs the updates.
Another approach would be to put the entire original update inside a stored procedure responsible for performing the update and all the housekeeping work, then all you would do is call the stored procedure with the right parameters and it would do all the work behind the curtain.
Another way would be to add triggers on the inserted table to do the update for you. Sounds like the first is what you probably want.
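The second approach can be sketched in T-SQL as follows. This is a minimal illustration, not the poster's actual schema: the procedure, table, and column names are all hypothetical.

```sql
-- Hypothetical wrapper procedure: the insert and all housekeeping live
-- in one place, so callers only ever invoke this procedure.
CREATE PROCEDURE dbo.InsertEntityWithHousekeeping
    @Name  NVARCHAR(100),
    @Value INT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;

    -- Fast, reliable insert of the main entity
    INSERT INTO dbo.Entity (Name, Value) VALUES (@Name, @Value);
    DECLARE @NewId INT = SCOPE_IDENTITY();

    -- Housekeeping: recalculated fields and lookup tables.
    -- Keeping these in one procedure means every caller touches the
    -- tables in the same order, which reduces the chance of deadlocks.
    UPDATE dbo.EntityStats SET EntityCount = EntityCount + 1;
    INSERT INTO dbo.EntityLookup (EntityId, Name) VALUES (@NewId, @Name);

    COMMIT TRANSACTION;
END
```

Callers would then execute only `EXEC dbo.InsertEntityWithHousekeeping @Name = ..., @Value = ...;` instead of issuing the insert and the follow-up updates themselves.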

Reading original (before change) DB values in the current LUW?

Is it possible to retrieve the old or original values for a table when it has been changed, but not yet committed, in the current LUW?
I'm implementing a BAdI that's supposed to be used to raise messages based on changes performed to an object, but SAP doesn't actually provide the original object data in the BAdI. Trying to read the data with SELECT statements doesn't work as the pending changes have already been applied at that point, just not committed.
If I debug the code I can see the old values just fine in SE16 but it seems like the uncommitted changed values are being returned by any SELECTs I perform in this BAdI.
Is there any way to read this original data?
Reading a table which was updated previously during the same database LUW will always return the updated values. So it is at least required to read the table from another database LUW.
The isolation level used by default depends on the type of database you are using. For HANA and Oracle, the "committed read" is the default, but other databases use the "uncommitted read" by default.
If you don't use HANA/Oracle, you may switch temporarily to the "committed read" isolation level by calling the function module DB_SET_ISOLATION_LEVEL.
Then, you can read the table from another Database LUW by using a service connection (prefixed R/3*), for instance: SELECT ... FROM yourtable ... CONNECTION ('R/3*temp') ...
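Put together, the two steps above might look roughly like this in ABAP. The table, field, and variable names are illustrative only, and the exact parameter interface of DB_SET_ISOLATION_LEVEL should be checked in your system.

```abap
" Step 1 (non-HANA/Oracle only): switch to 'committed read' by calling
" function module DB_SET_ISOLATION_LEVEL (parameters omitted here;
" check the function module's interface in SE37 before using it).

" Step 2: read the original, still-committed row over a service
" connection, i.e. from a separate database LUW that does not see
" the pending, uncommitted changes of the current LUW.
DATA ls_before_image TYPE ztable.

SELECT SINGLE * FROM ztable
  CONNECTION ('R/3*temp')
  INTO ls_before_image
  WHERE keyfield = lv_key.
```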

MS ACCESS Lock a table during update, blocking

How can I lock a table preventing other users querying it while I update its contents?
Currently my data is updated by wiping the table and re-populating it (I know it's not the best way to update data, but the source data has no unique key to do a record-by-record update and this is the only way). There exists the unlikely, but possible, scenario where a user may access the table in the middle of the update and catch it while it is empty, thus returning bad info.
Is there at the SQL (or code) level a way to create a blocking statement that will wait for a DB update to complete prior to querying?
Access has very little locking capabilities. Unless you're storing your data in a different backend, you only can set a database-wide lock or no lock at all.
Access does take table locks while the structure of a table is being changed, but as far as I can find, that capability is not exposed to the user (neither through the GUI nor through VBA).
Note that both ADO and DAO support locking (in ADO by setting the IsolationLevel, in DAO by setting dbDenyRead + dbDenyWrite when executing the query), but in my short testing, these options do absolutely nothing in Access.

concurrent SQL statements in different transactions

Reading up the documentation of PL-SQL CREATE TRIGGER statement in ORACLE, I went through the following bit of information:
When a trigger fires, tables that the trigger references might be
undergoing changes made by SQL statements in other users'
transactions. SQL statements running in triggers follow the same rules
that standalone SQL statements do.
It basically says the rules that would apply to two conflicting standalone SQL statements (running at the same time) are unchanged when one of the statements is performed from within a trigger.
So we have the "usual" rules about concurrent transactions, and of these rules, the following two are mentioned specifically:
Queries in the trigger see the current read-consistent materialized
view of referenced tables and any data changed in the same
transaction.
Updates in the trigger wait for existing data locks to be released
before proceeding.
These two rules can look obscure to non-expert users.
What do they mean, more precisely?
Queries in the trigger see the current read-consistent materialized
view of referenced tables and any data changed in the same
transaction.
This means that the data the trigger sees, for example if it does a SELECT on a different table, represents the state of that table when the triggering statement started running. The trigger does not see rows that have been changed by other sessions but not yet committed.
Updates in the trigger wait for existing data locks to be released
before proceeding.
When an Oracle statement modifies a row, the row is locked against other sessions changing it until the modifying session either commits or rolls back its transaction. So if you do an insert on table A and your trigger does an update on table B, but someone else's session has already done an uncommitted update on table B for that same row, your transaction will wait until they commit or roll back.
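Both rules can be illustrated with a small PL/SQL trigger. The table and column names here are made up for the example.

```sql
-- Hypothetical trigger on table_a that reads and then updates table_b
CREATE OR REPLACE TRIGGER trg_a_after_insert
AFTER INSERT ON table_a
FOR EACH ROW
DECLARE
  v_total NUMBER;
BEGIN
  -- Rule 1: this query sees table_b as of the start of the triggering
  -- statement, plus any changes made earlier in the same transaction.
  -- Uncommitted changes from other sessions are invisible to it.
  SELECT SUM(amount)
    INTO v_total
    FROM table_b
   WHERE group_id = :NEW.group_id;

  -- Rule 2: if another session holds a row lock on the matching row of
  -- table_b (e.g. an uncommitted UPDATE), this statement blocks here
  -- until that session commits or rolls back.
  UPDATE table_b
     SET total = v_total
   WHERE group_id = :NEW.group_id;
END;
```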

nhibernate audit trigger error

I'm using triggers on my SQL Server database to capture change information for a table, but they seem to be causing a problem with NHibernate.
The table has a few columns, a primary key, and triggers on it. The triggers look like this:
CREATE TRIGGER [dbo].[tr_Instrument_update] ON [dbo].[Instrument] FOR UPDATE AS
BEGIN
    INSERT [MyAudit].[audit].[Instrument]
    SELECT 'Updated', i.*
    FROM inserted
    INNER JOIN [MyAudit].[dbo].[Instrument] i
        ON inserted.[InstrumentID] = i.[InstrumentID]
END
Basically, on every change we copy the row into the audit table. I have tested this: if I modify the data directly through SQL Server Management Studio, the triggers function correctly and data is written to the audit table. However, if I update through my app I get the following:
NHibernate.StaleObjectStateException was unhandled by user code
Message=Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)
I assume this is because the trigger updates another table in another database. Is there any way to make NHibernate ignore this, since the change will not affect any of its data? In our mappings we have no reference to this audit data.
Figured out that the trigger was causing NHibernate to see two identical update calls for some reason. The solution was to SET NOCOUNT ON inside the trigger. Still not sure why NHibernate sees two updates!
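Applied to the trigger from the question, the fix looks like this. A common explanation for the exception: NHibernate compares the rows-affected count reported by SQL Server against the number of rows it expected to change, and the trigger's audit INSERT contributes an extra count unless NOCOUNT is on.

```sql
CREATE TRIGGER [dbo].[tr_Instrument_update] ON [dbo].[Instrument] FOR UPDATE AS
BEGIN
    -- Suppress the extra "rows affected" message from the audit INSERT,
    -- so NHibernate only sees the count from its own UPDATE statement
    SET NOCOUNT ON;

    INSERT [MyAudit].[audit].[Instrument]
    SELECT 'Updated', i.*
    FROM inserted
    INNER JOIN [MyAudit].[dbo].[Instrument] i
        ON inserted.[InstrumentID] = i.[InstrumentID]
END
```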