In an ABAP program I have noticed unexpected persistence of data when displaying a local table using the class cl_salv_table.
To reproduce I have created a minimal code sample. The program does an insert, displays data in an ALV, then does a ROLLBACK WORK.
I expect the inserted value to be present in the database BEFORE the rollback, and absent AFTER the rollback.
However, if an ALV grid is displayed between the insert and the rollback, the data is persisted beyond the rollback and is immediately visible to other transactions.
Is this expected behaviour, and if so, how can I avoid it? We use this class quite often, and it looks like we may inadvertently be committing values to the database when we don't actually want to.
This is the code:
*&---------------------------------------------------------------------*
*& Report zok_alv_commit
*&
*&---------------------------------------------------------------------*
*&
*&
*&---------------------------------------------------------------------*
REPORT zok_alv_commit.
SELECTION-SCREEN BEGIN OF BLOCK b1.
  PARAMETERS: p_showtb TYPE boolean AS CHECKBOX DEFAULT abap_false.
SELECTION-SCREEN END OF BLOCK b1.

START-OF-SELECTION.
  DATA: lt_table TYPE TABLE OF zok_alv,
        ls_table TYPE zok_alv.

  """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
  " Create new GUID and insert into table
  """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
  TRY.
      ls_table-guid = cl_system_uuid=>create_uuid_c22_static( ).
    CATCH cx_uuid_error.
      " Error creating UUID
      MESSAGE e836(/basf/dps3_apodata).
  ENDTRY.

  WRITE: |Create guid { ls_table-guid } |, /.

  INSERT zok_alv FROM ls_table.
  APPEND ls_table TO lt_table.

  """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
  " The important bit: show something in an ALV
  """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
  IF p_showtb = abap_true.
    cl_salv_table=>factory(
      IMPORTING r_salv_table = DATA(lo_alv)
      CHANGING  t_table      = lt_table
    ).
    lo_alv->display( ).
  ENDIF.

  """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
  " Check existence in table before and after rollback
  " Expectation: If the ALV is shown above, the data is already committed,
  " so the ROLLBACK WORK will not have an effect.
  """"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
  SELECT SINGLE guid FROM zok_alv INTO @DATA(lv_ignored) WHERE guid = @ls_table-guid.
  IF sy-subrc = 0.
    WRITE: 'GUID exists on DB before rollback.', /.
  ELSE.
    WRITE: 'GUID does NOT exist on DB before rollback.', /.
  ENDIF.

  ROLLBACK WORK.

  SELECT SINGLE guid FROM zok_alv INTO @lv_ignored WHERE guid = @ls_table-guid.
  IF sy-subrc = 0.
    WRITE: 'GUID exists on DB after rollback.', /.
  ELSE.
    WRITE: 'GUID does NOT exist on DB after rollback.', /.
  ENDIF.
It requires a table ZOK_ALV whose only fields are MANDT and a 22-character field GUID, which together form the primary key - nothing else.
When executing the code with p_showtb unchecked, the value is not present after the rollback and not present in the table - as expected.
When executing the code with p_showtb checked, the ID is already visible in SE16 in another session while the ALV is displayed. (I leave the ALV screen with Back (F3) at this point.)
The output confirms that the value is still present, even after the rollback.
Even after leaving the program, the values persist in the DB.
To answer the two questions:
1) Yes, this is the "expected behavior" as stated in the database commit documentation:
a database commit is performed implicitly in the following situation: Completion of a dialog step ...
(it means that any display does a database commit)
This is because, while the screen is displayed, SAP does nothing except wait for the user action, so the work process that executed the ABAP code before the display can be reused to execute ABAP code for requests from other users.
So that the work process can be reused, its memory (variables) has to be swapped out and back in - the so-called roll-out/roll-in - which also requires that some internal SAP system database tables be updated, and a database commit is needed for that. This is better explained in the documentation of SAP LUWs (I read that somewhere, but I don't remember exactly where).
2) No, you can't "avoid this behavior", but given your current logic of insert + display + rollback, there are two possible solutions; in your case, I recommend only the first one:
Change your logic to conform to the SAP rule (i.e. any display does a database commit, so bear with it). If your logic really is the one you described, why do you want to insert something into the database and then roll it back? Without further details, my answer is simply to remove the insert and the rollback and keep the display. Anything else would be pure speculation, because you didn't give enough details about how your actual class really works (there must be a reason why it does insert + display + rollback, but what is missing from your explanation?). You'd better post another question with all the details.
Second solution ("non-recommended, counter-performing and dangerous"), if you really, really want to stick to your current logic: move your display to an RFC-enabled function module and do CALL FUNCTION '...' DESTINATION 'NONE' KEEPING LOGICAL UNIT OF WORK (cf. the documentation). It's not recommended because it's for internal use only. It's counter-performing because it occupies two work processes at the same time. And it's dangerous because "the worst case scenario may be a system shutdown".
How to avoid implicit commit
To avoid the implicit commit, you can wrap the INSERT in an update function module.
With CALL FUNCTION ... IN UPDATE TASK, the call is registered as part of the SAP LUW, and the actual insert only happens when you call COMMIT WORK.
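A minimal sketch of that pattern, assuming a hypothetical update function module Z_OK_ALV_INSERT (created in SE37 with processing type "Update module"; its parameter must be passed by value):

FUNCTION z_ok_alv_insert.
*"----------------------------------------------------------------------
*"  IMPORTING
*"     VALUE(IS_TABLE) TYPE  ZOK_ALV
*"----------------------------------------------------------------------
  INSERT zok_alv FROM is_table.
ENDFUNCTION.

In the report, the direct INSERT is then replaced by registering this module; the ALV display and its implicit database commits no longer write to ZOK_ALV:

" Register the insert instead of executing it directly.
CALL FUNCTION 'Z_OK_ALV_INSERT' IN UPDATE TASK
  EXPORTING
    is_table = ls_table.

lo_alv->display( ).   " implicit commits happen, but the insert is still pending

ROLLBACK WORK.        " discards the registered update request
" COMMIT WORK.        " would execute it in the update task instead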
Quote from you:
However, if an ALV grid is displayed between the insert and the rollback, the data is persisted beyond the rollback and is immediately visible to other transactions.
I suppose your database supports Committed Read
Committed read
In committed reads, only the user of a database LUW has access to data modified in the LUW. Other database users cannot access the modified data before a database commit is performed. In the case of reads, a shared lock is set (not possible for the modified data due to the existing exclusive lock). Data can be accessed only when released by a database commit.
But within an SAP LUW, you cannot expect to select an entry that is inserted by an update function module before the COMMIT WORK statement.
I propose you always work on an internal table for the insert and the ALV display, and provide a Save button that triggers the insert from the internal table into the database table.
Related
Using MS SQL Server, a Trigger calls a Stored Procedure which internally makes a select, will the return values be the new or old ones?
I know that inside the trigger I can access the new and old values via FROM INSERTED i INNER JOIN DELETED, but in this case I want to reuse (and cannot change) an existing stored procedure that internally selects from the triggered table and processes some logic with the rows. I just want to know whether I can be sure that the existing logic will work (by accessing the NEW values).
I could simply try to simulate it with one update... But maybe there are other cases (for example, involving transactions or something else) that I may not be aware of and never test, which could lead to a different result.
I decided to ask someone else that might know better. Thank you.
AFTER triggers (the default) fire after the DML action. When the proc is called within the trigger, the tables will reflect changes made by the statement that fired the trigger, as well as changes made within the trigger before calling the proc.
Note that the changes are uncommitted until the trigger completes, or until an explicit transaction is later committed.
Since the procedure is running in the same transaction as the (presumably, "after") trigger, it will see the uncommitted data.
I hope you see the implications of that: the trigger executes as part of the transaction started by the DML statement that caused it to fire, so the stored procedure is part of the same transaction, and a "complicated" stored procedure means that transaction stays open longer, holds locks longer, makes responses back to users slower, and so on.
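To make this concrete, here is a minimal sketch (all object names invented) in which a proc called from an AFTER trigger writes what it sees to a log table:

CREATE TABLE dbo.Demo (Id int PRIMARY KEY, Val int);
CREATE TABLE dbo.DemoLog (RowsSeen int);
GO
CREATE PROCEDURE dbo.LogDemoCount AS
    -- This SELECT sees the rows inserted by the statement that fired the trigger.
    INSERT INTO dbo.DemoLog (RowsSeen) SELECT COUNT(*) FROM dbo.Demo;
GO
CREATE TRIGGER trg_Demo ON dbo.Demo AFTER INSERT AS
    EXEC dbo.LogDemoCount;  -- runs inside the INSERT's transaction
GO
INSERT INTO dbo.Demo (Id, Val) VALUES (1, 10);
SELECT RowsSeen FROM dbo.DemoLog;  -- returns 1: the uncommitted new row was visible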
Also, you said
internally makes a select on the triggered table and processes some logic with them.
if you just mean that the procedure selects the data in order to do some complex processing and then writes it somewhere else inside the database, OK, that's not great (for the reasons given above), but it will "work".
But just in case you mean you are doing some work on the data in the procedure and then returning that back to the client application: don't do that.
The ability to return results from triggers will be removed in a future version of SQL Server. Triggers that return result sets may cause unexpected behavior in applications that aren't designed to work with them. Avoid returning result sets from triggers in new development work, and plan to modify applications that currently do. To prevent triggers from returning result sets, set the disallow results from triggers option to 1.
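For reference, the option the documentation mentions can be enabled like this (it is an advanced option, so it has to be made visible first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'disallow results from triggers', 1;
RECONFIGURE;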
I have a problem to solve which requires undoing each executed SQL file in an Oracle database.
I execute them from an MSBuild XML file, using the Exec command to run sqlplus with a login and @*.sql.
Obviously a plain rollback won't do, because it can't roll back an already committed transaction.
I have been searching for several days and still can't find the answer. What I learned about is Oracle Flashback and Point in Time Recovery. The problem is that I want the changes to be undone only for the current user, i.e. if another user makes some changes at the same time, my solution should perform the undo only for user 'X', not 'Y'.
I found the start_scn and commit_scn in flashback_transaction_query. But do they identify only one user? What if I flashback to a given SCN - will that undo only my changes, or other users' changes as well? I have taken
select start_scn from flashback_transaction_query WHERE logon_user='MY_USER_NAME'
and
WHERE table_name = 'MY_TABLE_NAME'
and performed
FLASHBACK TO SCN <number>
on a chosen operation's SCN. Will that work for me?
I also found out about Point in Time Recovery, but from what I read it makes the whole database unavailable, so other users would be unable to work with it.
So I need something that will undo a whole *.sql file.
This is possible, but maybe not with the tools that you use. sqlplus can roll back your transaction: you just have to make sure auto-commit isn't enabled and that your scripts contain only a single commit, right before you end the sqlplus session (if you don't commit at all, sqlplus will always roll back all changes when it exits). A sketch of such a wrapper follows below.
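A sketch of such a wrapper script (script.sql stands in for one of your real files): the session aborts and rolls back on the first error, and commits only at the very end:

WHENEVER SQLERROR EXIT FAILURE ROLLBACK
SET AUTOCOMMIT OFF

@script.sql

-- Reached only if everything above succeeded.
COMMIT;
EXIT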
The problems start when you have several scripts and you want, for example, to rollback a script that you ran yesterday. This is a whole new can of worms and there is no general solution that will always work (it's part of the "merge problem" group of problems, i.e. how can you merge transactions by different users when everyone can keep transactions open for as long as they like).
It can be done but you need to carefully design your database for it, the business rules must be OK with it, etc.
The general approach would be to have a table which records which rows were modified (i.e. created, updated, deleted) by the script, plus the script name, plus the time when it was executed.
With this information, you can generate SQL which can undo the changes created by a script. To fill such a table, use triggers or generate your scripts in such a way that they write this information as well (note: This is probably beyond a "simple" sqlplus solution; you will have to write your own data loader for this).
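A sketch of what such a bookkeeping table could look like (all names invented):

CREATE TABLE script_change_log (
    script_name  VARCHAR2(200)  NOT NULL,
    executed_at  TIMESTAMP      DEFAULT SYSTIMESTAMP,
    table_name   VARCHAR2(128)  NOT NULL,
    row_pk       VARCHAR2(100)  NOT NULL,   -- primary key of the affected row
    operation    VARCHAR2(10)   NOT NULL,   -- INSERT / UPDATE / DELETE
    undo_sql     CLOB                       -- statement that reverses the change
);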
OK, I solved the problem by creating a DDL and a DML trigger. The first one takes the "extra" column (which is the DDL statement you have just entered) from v$open_cursor and inserts it into my table. The second one gets "undo_sql" from flashback_transaction_query, which is the opposite of your DML action - if it was an INSERT, then undo_sql is a DELETE with all the necessary data.
The triggers fire before DELETE and INSERT (DML) on the specific table, and on ALTER, DROP and CREATE (DDL) on the specific schema or view.
At my client's site they have a database. Once I complete the incremental changes to the database, I prepare the list of SQL object changes in one SQL file.
The script is like this:
If sql object 1 present in database
DROP the SQL object 1
GO
create the SQL Object 1
If sql object 2 present in database
DROP the SQL object 2
create the SQL Object 2
Every time, I drop the existing object and re-create it.
Now this batch may contain some errors.
My requirement is that if there is any error in the file, none of the SQL objects should be re-created; it should roll back to the old SQL objects.
If there is no error, then it should create all the SQL objects.
Due to the GO statements in the middle, I am not able to use a TRANSACTION in SQL.
How can this be solved?
Don't use GO, then. Simply remove it from your script, and add your BEGIN and COMMIT TRANSACTION commands where you need them.
BEGIN TRAN

IF EXISTS Object1
BEGIN
    DROP Object1;
END
CREATE Object1;

IF EXISTS Object2
BEGIN
    DROP Object2;
END
CREATE Object2;

COMMIT TRAN
Modifying database schema via DROP/CREATE has many problems:
it may lose data
it loses permissions and extended properties added to the dropped objects
cross object dependencies (eg. foreign keys) require a certain order of drop/create
Usually is better to try to ALTER the object from schema version to schema version. This requires you to know which schema version is currently deployed, but that problem is easily solvable (use a database extended property, see Version Control and your Database).
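For illustration, a database-level extended property can serve as that version marker (SchemaVersion is an invented name):

EXEC sp_addextendedproperty @name = N'SchemaVersion', @value = N'1.4';

-- Read it back before deploying:
SELECT value
FROM sys.extended_properties
WHERE class = 0 AND name = N'SchemaVersion';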
Back to your question: a naive approach is to wrap your entire script in one big BEGIN TRAN/COMMIT, but that seldom works:
it creates a potentially large transaction that requires much log space.
the result is impossible to validate until after the commit, when it is too late to do anything about it.
the behavior when mingling exceptions and transactions is messy at best. SET XACT_ABORT ON helps somewhat, but only so much.
not all DDL statements can be run from inside a transaction.
For these reasons I would recommend a much simpler and safer approach: take a backup, WITH COPY_ONLY, of the database before modifying the schema. If anything goes wrong, restore the copy. Alternatively, a database snapshot can be used as a backup. See How to: Revert a Database to a Database Snapshot.
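A sketch of that safety net (database name and path are placeholders):

-- Before running the schema change script:
BACKUP DATABASE MyDb
    TO DISK = N'X:\Backups\MyDb_PreDeploy.bak'
    WITH COPY_ONLY, INIT;

-- If the deployment goes wrong, restore the copy:
-- RESTORE DATABASE MyDb
--     FROM DISK = N'X:\Backups\MyDb_PreDeploy.bak'
--     WITH REPLACE;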
Note that BEGIN TRAN/COMMIT can span batches (i.e. can be separated by multiple GO statements), so your concern is not an issue.
Short Version:
Does anyone know of a way --inside a SQL 2000 trigger-- of detecting which process modified the data, and exiting the trigger if a particular process is detected?
Long Version
I have a customized synchronization routine that moves data back and forth between dissimilar database schemas.
When this process grabs a modified record from Database A, it needs to transform it into a record that goes into Database B. The databases are radically different, but share some of the same data, such as user accounts and user activity (although even these tables are structurally different).
When data is modified in one of the pertinent tables, a trigger fires which writes the PK of that record to a "sync" table. This "sync" table is monitored by a process (a stored proc) which will grab the PK's in sequence, and copy over the related data from database A to database B, making transformations as necessary.
Both databases have triggers that fire and copy the PK to the sync table; however, these triggers must ignore the sync process itself so as not to enter an "endless" loop (or less, depending on nesting limits).
In SQL 2005 and up, I use the following code in the Sync process to identify itself:
SET CONTEXT_INFO 0xHexValueOfProcName
Each trigger has the following code at the beginning, to see if the process that modified the data is the sync process itself:
IF (CONTEXT_INFO() = 0xHexValueOfProcName)
BEGIN
-- print '## Process Sync Queue detected. This trigger is exiting! ##'
return
END
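In case it helps, the hex literal can be derived once from the marker string ('SyncProc' is a placeholder for the real process name):

-- One-off: compute the hex literal for a given marker string.
SELECT CONVERT(varbinary(128), 'SyncProc');  -- 0x53796E6350726F63

-- In the sync procedure:
SET CONTEXT_INFO 0x53796E6350726F63;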
This system works great: it keeps chugging along and keeps the data in sync. The problem now, however, is that a SQL 2000 server wants to join the party.
Does anyone know of a way --inside a SQL 2000 trigger-- of detecting which process modified the data, and exiting the trigger if a particular process is detected?
Thanks guys!
(As per Andriy's request, I am answering my own question.)
I put this at the top of my trigger, works like a charm.
-- How to check context info in SQL 2000
IF ((select CONTEXT_INFO from master..sysprocesses where spid = @@SPID) = 0xHexValueOfProcName)
BEGIN
print 'Sync Process Detected -- Exiting!'
return
END
I've a feeling this might not be possible, but here goes...
I've got a table that has an insert trigger on it. When data is inserted into this table the trigger fires and parses a long varbinary column. This trigger performs some operations on the binary data and writes several entries into a second table.
What I have recently discovered is that sometimes the binary data is not "correct" (i.e. it does not conform to the spec it is supposed to follow - I have NO control over this whatsoever), and this can cause casting errors etc.
My initial reaction was to wrap things in TRY/CATCH blocks, but it appears this is not a solution either, as the execution of the CATCH means the transaction is doomed and I get a "Transaction doomed in trigger" error.
What is imperative is that the data still gets written to the initial table. I don't care whether the data gets written to the second table or not.
I'm not sure if I can accomplish this or not, and would gratefully receive any advice.
What you could do is commit a transaction inside the trigger and then perform those casts.
I don't know if that's a possible solution to your problem, though.
Another option would be to create a function IsYourBinaryValueOK which would check the column value. However, the check would have to be done with LIKE so as not to cause a casting error. A sketch of that idea follows below.
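A sketch of what such a validation function could look like (the function name comes from the answer; the DATALENGTH/SUBSTRING checks are used instead of LIKE here, since they also cannot raise a casting error, and the magic header is an invented stand-in for whatever the spec defines):

CREATE FUNCTION dbo.IsYourBinaryValueOK (@data varbinary(max))
RETURNS bit
AS
BEGIN
    -- Cheap structural checks that cannot themselves throw:
    IF @data IS NULL OR DATALENGTH(@data) < 4
        RETURN 0;
    -- e.g. verify a magic header without casting the payload
    IF SUBSTRING(@data, 1, 2) <> 0xCAFE
        RETURN 0;
    RETURN 1;
END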
It doesn't sound like this code should run in an insert trigger, since it is conceptually two different transactions. You would probably be better off with asynchronous processing, such as Service Broker or a background nanny task that looks for 'not done' work. You could also handle it by using a sproc to do the insert in one transaction and then having it call the do-other-work code afterwards, as sketched below.
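A sketch of that sproc variant (all names invented); under autocommit, the first INSERT commits on its own, and a failure in the parse step no longer dooms it:

CREATE PROCEDURE dbo.InsertAndParse @payload varbinary(max)
AS
BEGIN
    -- Step 1: the insert that must always succeed (commits under autocommit).
    INSERT INTO dbo.RawData (Payload) VALUES (@payload);

    -- Step 2: best-effort parsing into the second table.
    BEGIN TRY
        INSERT INTO dbo.ParsedData (IntValue)
        VALUES (CAST(SUBSTRING(@payload, 1, 4) AS int));
    END TRY
    BEGIN CATCH
        -- Parsing failed; the raw row is already safely stored.
        PRINT 'Payload could not be parsed; skipping secondary table.';
    END CATCH
END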
If you absolutely have to do it in the trigger, then you basically need an autonomous transaction. For some ideas, see this link (the techniques apply to SQL 2005 as well).