I implemented a Spring Batch job with a reader, processor, and writer. The batch framework initiates a transaction, and the commit interval is, say, every 50 records.
Now, within my reader or processor, if I don't want some update or insert statement to wait until the commit interval is reached, and instead want to commit right there, is it possible?
It can be reframed like this: how do I commit only specific records before the commit interval is reached in a Spring Batch transaction?
I am using iBatis and Oracle 11g. I tried to commit transactions from my iBatis SQL template and couldn't see the commit happening.
You can achieve this using REQUIRES_NEW transaction propagation. This way you can commit some data changes regardless of whether you commit or roll back the main transaction later.
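Since you are on Oracle and already tried committing from the SQL template, a database-side alternative worth knowing is an autonomous transaction, which commits independently of the calling (chunk) transaction. A minimal PL/SQL sketch; the AUDIT_LOG table and procedure name are illustrative:

CREATE OR REPLACE PROCEDURE log_progress(p_msg IN VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- this procedure runs in its own transaction
BEGIN
  INSERT INTO audit_log (logged_at, message)
  VALUES (SYSTIMESTAMP, p_msg);
  COMMIT;  -- commits only this procedure's work; the batch chunk transaction is untouched
END;
/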
I am trying to insert a record into an IBM DB2 database. Upon insertion, a timestamp field is auto-generated. I have to take this value and send it via Kafka. I can only commit the transaction if the Kafka send is successful; otherwise I roll back.
I am inserting using a native "INSERT" query.
REQUIREMENT: I want to get the auto-generated timestamp value without committing, so that I can roll back easily.
I tried flushing, but that did not work.
What can I do?
session.flush() will work and get you the data if you query the DB from the same session/transaction. But these changes may not be visible in your database visualizer (like Teradata) until you commit the transaction.
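Alternatively, since you are issuing a native INSERT anyway, DB2 lets you read generated values back in the same statement, inside the still-open transaction, so a rollback remains possible. A sketch; the ORDERS table and its columns are illustrative:

-- reads the generated timestamp with no commit required
SELECT created_ts
FROM FINAL TABLE (
  INSERT INTO orders (customer_id, amount)
  VALUES (42, 19.99)
);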
Hope that helped.
I have a somewhat unusual need for my transaction handling.
I am working on a mutation test framework for SQL Server. For this I need to run my tests inside a transaction, so that the database is always back in the state it started in when the tests finish.
However, I have the problem that users can write code inside the test procedures and may call ROLLBACK TRANSACTION, which may or may not be inside a (nested) transaction or savepoint.
At a high level it looks like this:
start transaction
initialize test
run test with user code
may or may not contain:
- start tran
- start tran savename
- commit tran
- commit tran savename
- rollback tran
- rollback tran savename
output testresults
rollback transaction
Is there a way to make sure I can at least always roll back to the initial state? I have to take into account that users can call stored procedures/triggers that may be nested and can all contain transaction statements. With all my solutions, the moment a user uses ROLLBACK TRAN in their test code, they escape the transaction and not everything will be cleaned up.
What I want is that if a user calls ROLLBACK, only their part of the transaction is rolled back, and the transaction I start before the initialization of the test is still intact.
If possible, I want to avoid forcing my users to use a transaction template that uses savepoints when a transaction already exists.
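For reference, such a savepoint-based template would look roughly like this (a minimal T-SQL sketch; the procedure and savepoint names are illustrative):

CREATE PROCEDURE dbo.UserTest
AS
BEGIN
    DECLARE @tc int = @@TRANCOUNT;
    IF @tc = 0
        BEGIN TRANSACTION;            -- no outer transaction: start our own
    ELSE
        SAVE TRANSACTION UserTestSp;  -- outer transaction exists: mark a savepoint
    BEGIN TRY
        -- ... user test code goes here ...
        IF @tc = 0 COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @tc = 0
            ROLLBACK TRANSACTION;             -- undo everything this procedure started
        ELSE IF XACT_STATE() = 1
            ROLLBACK TRANSACTION UserTestSp;  -- undo only back to the savepoint
        THROW;  -- if XACT_STATE() = -1 the whole transaction is doomed anyway
    END CATCH
END;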
I do not think this is possible without any communication/rules for the user code. Whatever you do, if the user's code runs as many COMMITs as there are @@TRANCOUNT at that time, the transaction will be committed and there will be nothing you can do about that.
One way you could do this is to check/enforce that the user code, instead of using a bare COMMIT, uses IF @@TRANCOUNT >= 2 COMMIT. This will make sure the TRUE data commit can only be done by YOUR COMMIT command. Of course, in your case you never really want to commit, so you just roll back and it's over.
You mention:
What I want is that if a user calls rollback only their part of the
transaction is rolled back
Please note that nested transactions are kind of a myth. Refer to this excellent article. Long story short: "nested" BEGIN TRANSACTIONs and COMMITs actually do nothing except change the value of the system variable @@TRANCOUNT, so that some organisation can be made through procedures.
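A quick way to see this for yourself (a minimal T-SQL sketch):

BEGIN TRANSACTION;
PRINT @@TRANCOUNT;   -- 1
BEGIN TRANSACTION;   -- "nested": only increments the counter
PRINT @@TRANCOUNT;   -- 2
COMMIT;              -- inner COMMIT: only decrements the counter
PRINT @@TRANCOUNT;   -- 1, nothing has actually been committed yet
ROLLBACK;            -- rolls back ALL the work, despite the inner COMMIT
PRINT @@TRANCOUNT;   -- 0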
I don't think it is possible to roll back part of a transaction and keep the rest of the transaction intact. The moment we roll back the transaction, the entire transaction gets rolled back.
As pointed out by several others, nested transactions are not supported in SQL Server. I have decided to minimize the number of state-changing statements after the user's code, and to clean those statements up at the start of a test batch. This way it doesn't really matter that I can't roll back my own transaction, since the heavy lifting will be rolled back either by me or by the user.
I also decided to fail the test when the starting @@TRANCOUNT and the ending @@TRANCOUNT don't match up, so that no test can pass when there is something wrong with the user's transactions.
The current system, however, will still struggle with a user doing a COMMIT TRAN. That is a problem for another time.
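A minimal sketch of that @@TRANCOUNT guard (the error number and message are illustrative):

DECLARE @tranCountStart int = @@TRANCOUNT;
-- ... run the user's test code here ...
IF @@TRANCOUNT <> @tranCountStart
    THROW 50000, 'Test failed: unbalanced transaction handling in user code.', 1;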
I believe the actual need is to write test cases for T-SQL. If that is correct, then I feel you don't have to reinvent the wheel, and you can use the open-source test framework tSQLt: https://tsqlt.org/
I have read through the documentation, and it seems that a SQL batch command and a transaction accomplish the same purpose, that is, committing all statements as an all-or-nothing transaction.
Is this correct, or am I missing something?
I am using Orient through the PhpOrient language binding and see that it supports both transactions and batches, but I am using SQL exclusively and would like to perform transactions using SQL only. They seem the same from my testing, but I wanted to confirm.
SQL Batch
a) A SQL batch is just that: a collection of commands that are executed together, with no guarantee that they succeed or fail as a whole.
b) Batch processing means things are put into a queue and processed when a certain amount of items is reached, or when a certain period has passed. You can undo/roll back within this.
In BATCH PROCESSING, the bank would just queue xyz's request to deposit an amount. The bank would put your request in the queue with all the other requests and process them at the end of the day, or when they reach a certain amount.
SQL Transaction
a) A SQL transaction is a collection of commands that are guaranteed to succeed or fail as a whole. Transactions won't complete half the commands and then fail on the rest; if one fails, they all fail.
b) A transaction is like real-time processing that allows you to roll back/undo changes.
In TRANSACTIONS, it's just like the batch, but you have the option to "cancel" it.
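In OrientDB specifically, you can get transactional behaviour from SQL alone by wrapping a batch script in BEGIN/COMMIT. A sketch based on OrientDB's SQL batch syntax; the class and property names are illustrative:

BEGIN;
LET account = CREATE VERTEX Account SET name = 'alice';
LET city = SELECT FROM City WHERE name = 'Rome';
CREATE EDGE LivesIn FROM $account TO $city;
COMMIT RETRY 100;  -- retries the whole unit on concurrent-modification conflicts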
transaction
Transactions are atomic units of work that can be committed or rolled back. When a transaction makes multiple changes to the database, either all the changes succeed when the transaction is committed, or all the changes are undone when the transaction is rolled back.
Database transactions, as implemented by InnoDB, have properties that are collectively known by the acronym ACID, for atomicity, consistency, isolation, and durability.
MySQL Manual
I'm using the Oracle SQL Developer tool, version 3.0.02, and I'm having some trouble understanding the following: if I commit an update and the response time is '0 seconds', was the commit done properly? It has happened a few times that the DB wasn't updated afterwards, and I don't know if that's a coincidence or not. When I commit a second time (just to be sure) after it shows me '0 seconds', it shows '0,016 seconds' and the update appears correctly. But I don't want to commit 4 times in a row just to get it right. What do you think about this? Oh, and it doesn't give me any errors.
Thank you in advance
The time taken by the commit has nothing to do with any malfunction. The work is done by the query; the commit just notes somewhere in the metadata that the transaction is finished. Commit does almost nothing (it just forces some log files to be saved to disk). If something goes wrong (i.e. the commit doesn't work) you'll get an error.
The absence of an error signals that everything is OK: the database has done everything you asked of it.
For example, your update may do nothing:
UPDATE db SET username = 'name' WHERE file_name = 'name_of_file' AND answer = 'okay';
If there is no row with file_name = 'name_of_file' and answer = 'okay', the database will do no work, and there is nothing to commit.
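One way to see whether there was actually anything to commit is SQL%ROWCOUNT in a PL/SQL block (a sketch; the table and column names follow the example above):

-- run with SET SERVEROUTPUT ON to see the message in SQL Developer
BEGIN
  UPDATE db
  SET    username  = 'name'
  WHERE  file_name = 'name_of_file'
    AND  answer    = 'okay';
  DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' row(s) updated');  -- 0 means nothing to commit
  COMMIT;
END;
/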
For the sake of a complete answer, I'll add these points from this blog:
When a transaction is committed, the following occurs:
- The internal transaction table for the associated undo tablespace records that the transaction has committed, and the corresponding unique system change number (SCN) of the transaction is assigned and recorded in the table.
- The log writer process (LGWR) writes the redo log entries in the SGA's redo log buffers to the redo log file. It also writes the transaction's SCN to the redo log file. This atomic event constitutes the commit of the transaction.
- Oracle releases locks held on rows and tables.
- Oracle marks the transaction complete.
You can check the Oracle documentation to learn why commit is such a fast operation (rollback takes much longer, as it has to refer to the undo segments).
'Lost' commits may happen if somebody else commits their data, which to you appears to be the same as the 'old' data.
I just want to ask: will the first query always be executed first when the statements are encapsulated in a transaction? For example, I've got 500k records to be deleted and 500k to be inserted; is there a possibility of locking?
Actually, I have already tested this query and it works fine, but I want to make sure my assumption is correct.
Note: this will delete and insert the same records, with possible updates on other columns.
BEGIN TRAN;
DELETE FROM OutputTable WHERE ID IN (1, 2, 3, 4 /* etc */);
INSERT INTO OutputTable VALUES (1, 2, 3, 4 /* etc */);
COMMIT TRAN;
Within a transaction, all write locks (all locks acquired for modifications) must obey the strict two-phase locking rule. One of the consequences is that a write (X) lock acquired in a transaction cannot be released until the transaction commits. So yes, the DELETE and INSERT will execute sequentially, and all locks acquired during the DELETE will be retained while executing the INSERT.
Keep in mind that deleting 500k rows in a transaction will escalate the locks to one table lock; see Lock Escalation.
Deleting 500k rows and inserting 500k rows in a single transaction, while maybe correct, is a bad idea. You should avoid such large units of work (long transactions) if possible. Long transactions pin the log in place, create blocking and contention, increase recovery and DB startup time, and increase SQL Server resource consumption (locks require memory).
You should consider doing the operation in small batches (perhaps 10,000 rows at a time), using MERGE instead of DELETE/INSERT (if possible) and, last but not least, a partitioned sliding window implementation; see How to Implement an Automatic Sliding Window in a Partitioned Table.
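A minimal sketch of the small-batch approach (table name, filter, and batch size are illustrative):

DECLARE @rows int = 1;
WHILE @rows > 0
BEGIN
    -- each iteration is its own short transaction (autocommit),
    -- keeping locks, blocking, and log usage modest
    DELETE TOP (10000) FROM dbo.OutputTable
    WHERE  ToDelete = 1;

    SET @rows = @@ROWCOUNT;  -- stop once nothing is left to delete
END;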
From the documentation on TRANSACTION (emphasis mine):
BEGIN TRANSACTION represents a point at which the data referenced by a connection is logically and physically consistent. If errors are encountered, all data modifications made after the BEGIN TRANSACTION can be rolled back to return the data to this known state of consistency. Each transaction lasts until either it completes without errors and COMMIT TRANSACTION is issued to make the modifications a permanent part of the database, or errors are encountered and all modifications are erased with a ROLLBACK TRANSACTION statement.

BEGIN TRANSACTION starts a local transaction for the connection issuing the statement. Depending on the current transaction isolation level settings, many resources acquired to support the Transact-SQL statements issued by the connection are locked by the transaction until it is completed with either a COMMIT TRANSACTION or ROLLBACK TRANSACTION statement. Transactions left outstanding for long periods of time can prevent other users from accessing these locked resources, and also can prevent log truncation.

Although BEGIN TRANSACTION starts a local transaction, it is not recorded in the transaction log until the application subsequently performs an action that must be recorded in the log, such as executing an INSERT, UPDATE, or DELETE statement. An application can perform actions such as acquiring locks to protect the transaction isolation level of SELECT statements, but nothing is recorded in the log until the application performs a modification action.