Are there side effects running a sql query within a transaction?

Are there side effects running a sql select query within a transaction?
I am executing a service method which queries and inserts/updates data in one transaction block.
The query is included in the transaction. Should I expect any negative behavior from this?

Negative or positive behavior is not an absolute concept.
You should design isolation and transactions according to your application's needs. Higher isolation levels and longer transactions mean more locks.

I wouldn't say there are any negative behaviors, but I know that in SQL Server, by default, transactions lock rows. So if you have queries hitting the table you are updating/inserting into, and the transaction takes a while, those queries could block and/or time out. You can find an example of this here.
As for the select statement, it is not necessarily bad to include it if you are using it as a condition for a successful insert/update. The point of a transaction is to be able to roll back if something fails. So if the query does not contribute to that goal, I would leave it out of the transaction. Here's a cool article that you might want to leaf through to help you get the concept of how to use transactions effectively.

You will hold locks while the transaction is open. If you're doing this from .NET and aren't careful, you can leave the lock on the table. Also, if you roll back your transaction on a table that has an identity column, and another insert happens while the first transaction is still open, you'll end up with non-contiguous identity values.
However, the benefits of running things in transactions can outweigh these problems.
You should try to keep your transactions as small as possible.

Related

Can I open a stoppable transaction with SQL Server?

I'm looking for something similar to an SQL transaction. I need the usual protections that transactions provide, but I don't want it to slow down anyone else.
Imagine client A connects to the DB and runs these commands:
BEGIN TRAN
SELECT (something)
(Wait a few seconds maybe.)
UPDATE (something)
COMMIT
In between the SELECT and the UPDATE, client B comes along and attempts to run a query that, under normal circumstances, would end up having to wait for A to COMMIT.
What I'd like is for client A to open its transaction in such a way that, should B come along and perform its query, client A will find its transaction immediately rolled back and its subsequent commands failing. Client B would only experience minimal delay.
(Note that the SELECT and UPDATE are simply illustrative commands.)
Update...
I've got a high priority task (client B) that sometimes (once a month-ish) gets an SQL timeout error, and a low priority task (client A) with a transaction which causes that timeout. I'd rather that the low priority task fails and is reattempted in the next cycle.
I ended up fixing this problem by eliminating the transactions entirely and replacing them with an informal set of flags. The queries were refactored to only do something if the right set of flags is raised, and I added something that cleaned up abandoned records that a rollback would have cleared in the past.
I fixed my transaction issues by eliminating transactions.
Using the SNAPSHOT isolation level will prevent B from blocking. B will see the data as it was before A's uncommitted changes. Unless B also modifies data, the two will never block each other.
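For example, in SQL Server snapshot isolation has to be enabled at the database level before a session can use it. A minimal sketch (the database and table names are only placeholders):

-- One-time setup: allow snapshot isolation in the database.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Client B: reads see the data as it was when B's transaction started,
-- so B is not blocked by A's open transaction.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT * FROM Orders WHERE OrderId = 42;
COMMIT;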
While not a transaction at all, Optimistic Concurrency may be useful -- it is used by default in LINQ2SQL, etc.
The general idea is that the data is read -- modifications can be made independently -- and then the data is written back with a "check" (this is loosely comparable to a Compare and Swap). If the check fails it is up to the application to decide what to do (restart the process, proceed anyway, fail).
This naturally doesn't work for all scenarios and may not detect a number of interactions, such as new items added between the "read" and "write". Both the actual read and write can be in separate transactions with the appropriate isolation level; the separate transactions may allow additional transactions to be interleaved.
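A rough sketch of that check in plain SQL (the Accounts table, the RowVer column and the variables are made up for illustration; real frameworks typically use a rowversion/timestamp column or compare all original values):

-- Read the row together with its version marker.
SELECT Amount, RowVer FROM Accounts WHERE AccountId = 1;

-- ... the application works on the data without holding any locks ...

-- Write back only if nobody else changed the row in the meantime.
UPDATE Accounts
SET Amount = @newAmount, RowVer = RowVer + 1
WHERE AccountId = 1 AND RowVer = @rowVerReadEarlier;

-- If no row was updated, the check failed, and it is up to the application
-- to restart the process, proceed anyway, or fail.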
Of course, depending upon the exact problem and interactions... different isolation levels and/or finer grained locking may be sufficient.
Happy coding.
That is back to front.
You can't have later clients aborting earlier transactions: that's chaos.
You can use snapshot isolation so that client B has a consistent view and isn't blocked (mostly) by client A. See also Wikipedia for more general background.
Perhaps describe your problem more fully so we can offer suggestions for that...
One thing that I've seen used (but I'm afraid that I don't have any code handy for it) is having transaction A spawn another process which then monitors the transaction. If it sees any blocks caused by the transaction then it immediately issues a KILL to the spid.
If I can find the code for this then I'll add it here.
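A very rough sketch of the idea, assuming SQL Server (the DMV query and KILL are standard, but the spid, the polling and the decision logic below are purely illustrative):

-- Is anything currently blocked by the monitored spid (say, 53)?
SELECT session_id, blocking_session_id, wait_time
FROM sys.dm_exec_requests
WHERE blocking_session_id = 53;

-- If the query above returns rows, the watchdog kills the low-priority
-- transaction so the blocked work can proceed.
KILL 53;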

Is using a transaction with each NHibernate operation necessary in order to use caching?

Is using a transaction with each NHibernate operation necessary in order to use caching, and why?
If you do not use an explicit transaction, most databases will use implicit transactions. This means that each query you run is wrapped in a transaction that is committed when the query completes. See this article: Use Of Implicit Transactions Is Discouraged
So it is a good idea to wrap your application actions in transactions, even if all they do is fetch data. Especially since your question is dealing with caching, you want to use transactions if you want to make use of the 2nd level cache as per the article referenced.
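In other words, from the database's point of view these two are roughly equivalent (a sketch; the table name is a placeholder):

-- What the database effectively does for a bare statement (implicit transaction):
SELECT * FROM Customers WHERE CustomerId = 7;

-- What the article recommends you do yourself (explicit transaction):
BEGIN TRAN;
SELECT * FROM Customers WHERE CustomerId = 7;
COMMIT;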
In database systems, transactions are intended to wrap several smaller atomic operations into one larger operation.
The canonical example is that of moving money from one checking account to another. The two atomic operations are:
Debit $x.xx from one account
Credit $x.xx to another account
These two atomic operations are wrapped in a transaction so that, if one of the operations fails, you can roll back the entire transaction, and the system won't be left in a state where the bank or the customer is left with too much or too little money, or there is money unaccounted for.
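In SQL the canonical example looks roughly like this (table and column names are placeholders):

BEGIN TRAN;

-- Debit $x.xx from one account.
UPDATE Accounts SET Balance = Balance - 100.00 WHERE AccountId = 1;

-- Credit $x.xx to the other account.
UPDATE Accounts SET Balance = Balance + 100.00 WHERE AccountId = 2;

-- If either update had failed, we would ROLLBACK instead, leaving both balances untouched.
COMMIT;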
So if your operation is a simple atomic one, like changing a single field in a table, then no, I do not believe you need a transaction for that.

How can I get a deadlock in this situation?

In my client application I have a method like this (in practice it's more complex, but I've left the main part):
public void btnUpdate_Click(...)
{
...
dataAdapter.Update(...);
...
dataAdapter.Fill(...); // here I got exception one time
}
The exception I found in the logs says "Deadlock found when trying to get lock; try restarting transaction". I've seen this exception only once, and it hasn't been repeated.
As I understand it, the DataAdapter.Fill() method executes only a select query. I don't use an explicit transaction and I have autocommit enabled.
So how can I get a deadlock on a simple select query which is not part of a bigger transaction?
As I understand it, to get a deadlock, two transactions should wait for each other. How is that possible with a single select not inside a transaction? Maybe it's a bug in MySQL?
Thank you in advance.
You are right that it takes two transactions to make a deadlock. That is to say, no statement or statements within a single transaction can deadlock with other statements within the same transaction.
But it only takes one transaction to notice a report of a deadlock. How do you know that the transaction you are seeing the deadlock reported in is the only transaction being executed in the database? Isn't there other activity going on in this database?
Also, your statements "I don't make an explicit transaction" and "... which is not part of a bigger transaction" imply that you do not understand that every SQL statement executed is always in an implicit transaction, even if you do not explicitly start one.
Most databases have reporting mechanisms specifically designed to track, report and/or log instances of deadlocks for diagnostic purposes. In SQL Server there is a trace flag that causes a log entry with much detail about each deadlock that occurs, including details about each of the two transactions involved, like what SQL statements were being executed, what objects in the database were being locked, and why the lock could not be obtained. I'd guess MySQL has a similar diagnostic tool. Find out what it is and turn it on so that the next time this occurs you can look and find out exactly what happened.
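For example, in SQL Server you can turn on deadlock logging with a trace flag, and in MySQL/InnoDB the most recent deadlock is reported in the engine status output (both commands are real; whether they suit your environment is for you to verify):

-- SQL Server: write a detailed deadlock graph to the error log for every deadlock.
DBCC TRACEON (1222, -1);

-- MySQL / InnoDB: the output includes a "LATEST DETECTED DEADLOCK" section.
SHOW ENGINE INNODB STATUS;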
You can deadlock a simple SELECT against other statements, like an UPDATE. On my blog I have an example explaining a deadlock between two well-tuned statements: Read/Write deadlock. While the example is SQL Server specific, the principle is generic. I don't have enough knowledge of MySQL to claim this is necessarily the case or not, especially in light of the various engines MySQL can deploy, but nonetheless a simple SELECT can be the victim of a deadlock.
I haven't looked into how MySQL transaction works, but this is based on how MSSQL transactions work:
If you are not using a transaction, each query has a transaction by itself. Otherwise you would get a mess every time an update failed in the middle.
The reason for the deadlock might be lock escalation. The database tries to lock as little as possible for each query, so it starts out by locking only the single rows affected. When most of the rows in a page are locked by the query it might decide that escalating the lock into locking the entire page would be better, which may have the side effect of locking some rows not otherwise affected by the query.
If a select query and an update query are trying to escalate locks on the same table, they may cause a deadlock even though only a single table is involved.
I agree that this is unlikely to be the cause in this particular case, but this answer is supplemental to the other answers in terms of limiting their scope, recorded for posterity in case someone finds it useful.
MySQL can in rare cases have single statements periodically deadlock against themselves. This seems to happen particularly on bulk inserts, and the issue is almost certainly a deadlock between different threads relating to the operation. I would expect bulk updates to have the same problem. In the past, when faced with this sort of issue, I have generally just cut down on the number of rows being inserted (or updated) in a single statement. You won't usually get a deadlock when trying to obtain the lock in this case, but other messages instead.
A colleague of mine and I were discussing similar problems in MS SQL Server (so this is not unique to MySQL!) and he pointed out that the solution there is to tell the server not to parallelize the insert or update. The problems here are spinlock-related deadlocks, not logical lock deadlocks in the RDBMS.

Is there a difference between commit and rollback in a transaction that only contains selects?

The in-house application framework we use at my company makes it necessary to put every SQL query into transactions, even when I know that none of the commands will make changes in the database. At the end of the session, before closing the connection, I commit the transaction to close it properly. I wonder if there would be any particular difference if I rolled it back instead, especially in terms of speed.
Please note that I am using Oracle, but I guess other databases have similar behaviour. Also, I can't do anything about the requirement to begin the transaction, that part of the codebase is out of my hands.
Databases often preserve either a before-image journal (what it was before the transaction) or an after-image journal (what it will be when the transaction completes.) If it keeps a before-image, that has to be restored on a rollback. If it keeps an after-image, that has to replace data in the event of a commit.
Oracle has both a journal and rollback space. The transaction journal accumulates blocks which are later written by DB writers. Since these are asynchronous, almost nothing DB-writer-related has any impact on your transaction (if the queue fills up, then you might have to wait).
Even for a query-only transaction, I'd be willing to bet that there's some little bit of transactional record-keeping in Oracle's rollback areas. I suspect that a rollback requires some work on Oracle's part before it determines there's nothing to actually roll back, and I think this work is synchronous with your transaction. You can't really release any locks until the rollback is completed. [Yes, I know you aren't using any locks in your transaction, but the locking issue is why I think the rollback has to complete fully before all the locks can be released and the transaction is finished.]
On the other hand, the commit is more-or-less the expected outcome, and I suspect that discarding the rollback area might be slightly faster. You created no transaction entries, so the db writer will never even wake up to check and discover that there was nothing to do.
I also expect that while commit may be faster, the differences will be minor. So minor, that you might not be able to even measure them in a side-by-side comparison.
I agree with the previous answers that there's no difference between COMMIT and ROLLBACK in this case. There might be a negligible difference in the CPU time needed to determine that there's nothing to COMMIT versus the CPU time needed to determine that there's nothing to ROLLBACK. But, if it's a negligible difference, we can safely forget about it.
However, it's worth pointing out that there's a difference between a session that does a bunch of queries in the context of a single transaction and a session that does the same queries in the context of a series of transactions.
If a client starts a transaction, performs a query, performs a COMMIT or ROLLBACK, then starts a second transaction and performs a second query, there's no guarantee that the second query will observe the same database state as the first query. Sometimes, maintaining a single consistent view of the data is of the essence. Sometimes, getting a more current view of the data is of the essence. It depends on what you are doing.
I know, I know, the OP didn't ask this question. But some readers may be asking it in the back of their minds.
In general a COMMIT is much faster than a ROLLBACK, but in the case where you have done nothing they are effectively the same.
The documentation states that:
Oracle recommends that you explicitly end every transaction in your application programs with a COMMIT or ROLLBACK statement, including the last transaction, before disconnecting from Oracle Database. If you do not explicitly commit the transaction and the program terminates abnormally, then the last uncommitted transaction is automatically rolled back. A normal exit from most Oracle utilities and tools causes the current transaction to be committed. A normal exit from an Oracle precompiler program does not commit the transaction and relies on Oracle Database to roll back the current transaction.
http://download.oracle.com/docs/cd/B28359_01/server.111/b28286/statements_4010.htm#SQLRF01110
If you want to choose one or the other, then you might as well do the one that is the same as doing nothing, and just commit it.
Well, we must take into account what a SELECT returns in Oracle. There are two modes. By default a SELECT returns the data as it looked at the very moment the SELECT statement started executing (this is the default behavior in the READ COMMITTED isolation level, the default transactional mode). So if an UPDATE/INSERT is executed after the SELECT is issued, it won't be visible in the result set.
This can be a problem if you need to compare two result sets (for example the debit and credit sides of a general ledger app). For that we have a second mode. In that mode a SELECT returns the data as it looked at the moment the current transaction began (the default behavior in the READ ONLY and SERIALIZABLE isolation levels).
So, at least sometimes it is necessary to execute SELECTs inside a transaction.
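For example, a sketch of that second mode in Oracle (the ledger tables are placeholders):

-- Every query in this transaction sees the database as of this point in time.
SET TRANSACTION READ ONLY;

SELECT SUM(amount) FROM ledger_debits;
SELECT SUM(amount) FROM ledger_credits;

-- Both sums were computed against the same consistent snapshot.
COMMIT;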
Since you've not done any DML, I suspect there'd be no difference between a COMMIT and ROLLBACK in Oracle. Either way there's nothing to do.
I'd think a COMMIT would be more efficient, since generally you'd expect most DB transactions to be committed, so you would think the DB optimizes for this case (as opposed to trying to be more efficient for a rollback).

What are the problems of using transactions in a database?

From this post. One obvious problem is scalability/performance. What other problems can the use of transactions provoke?
Could you say there are two sets of problems, one for long running transactions and one for short running ones? If yes, how would you define them?
EDIT: Deadlock is another problem, but data inconsistency might be worse, depending on the application domain. Assuming a transaction-worthy domain (banking, to use the canonical example), the possibility of deadlock is more like a cost to pay for ensuring data consistency than a problem with using transactions, or would you disagree? If so, what other deadlock-free solutions would you use to ensure data consistency?
It depends a lot on the transactional implementation inside your database and may also depend on the transaction isolation level you use. I'm assuming "repeatable read" or higher here. Holding transactions open for a long time (even ones which haven't modified anything) forces the database to hold on to deleted or updated rows of frequently-changing tables (just in case you decide to read them) which could otherwise be thrown away.
Also, rolling back transactions can be really expensive. I know that in MySQL's InnoDB engine, rolling back a big transaction can take FAR longer than committing it (we've seen a rollback take 30 minutes).
Another problem is to do with database connection state. In a distributed, fault-tolerant application, you can't ever really know what state a database connection is in. Stateful database connections can't be maintained easily since they could fail at any moment (the application needs to remember what it was in the middle of doing and redo it). Stateless ones can just be reconnected and have the (atomic) command re-issued without (in most cases) breaking state.
You can get deadlocks even without using explicit transactions. For one thing, most relational databases will apply an implicit transaction to each statement you execute.
Deadlocks are fundamentally caused by acquiring multiple locks, and any activity that involves acquiring more than one lock can deadlock with any other activity that involves acquiring at least two of the same locks as the first activity. In a database transaction, some of the acquired locks may be held longer than they would otherwise be held -- to the end of the transaction, in fact. The longer locks are held, the greater the chance for a deadlock. This is why a longer-running transaction has a greater chance of deadlock than a shorter one.
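A minimal illustration of the classic case, two sessions taking the same two locks in opposite order (the table names are placeholders):

-- Session 1:
BEGIN TRAN;
UPDATE TableA SET x = 1 WHERE id = 1;   -- locks a row in TableA

-- Session 2:
BEGIN TRAN;
UPDATE TableB SET y = 1 WHERE id = 1;   -- locks a row in TableB

-- Session 1:
UPDATE TableB SET y = 2 WHERE id = 1;   -- blocks, waiting for Session 2

-- Session 2:
UPDATE TableA SET x = 2 WHERE id = 1;   -- blocks, waiting for Session 1: deadlock;
                                        -- the database picks one session as the victim and rolls it back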
One issue with transactions is that it's possible (unlikely, but possible) to get deadlocks in the DB. You do have to understand how your database works (locking, transactions, etc.) in order to debug these interesting/frustrating problems.
-Adam
I think the major issue is at the design level: at what level or levels within my application do I utilise transactions?
For example I could:
Create transactions within stored procedures,
Use the data access API (ADO.NET) to control transactions,
Use some form of implicit rollback higher in the application, or
Use a distributed transaction (via DTC / COM+).
Using more than one of these levels in the same application often seems to create performance and/or data integrity issues.
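As a sketch of the first option, here is a transaction controlled entirely inside a SQL Server stored procedure (the procedure and table names are made up for illustration):

CREATE PROCEDURE dbo.TransferFunds
    @FromId INT, @ToId INT, @Amount DECIMAL(10,2)
AS
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        UPDATE Accounts SET Balance = Balance - @Amount WHERE AccountId = @FromId;
        UPDATE Accounts SET Balance = Balance + @Amount WHERE AccountId = @ToId;
        COMMIT;
    END TRY
    BEGIN CATCH
        -- Undo both updates and let the caller see the error.
        IF @@TRANCOUNT > 0 ROLLBACK;
        THROW;
    END CATCH
END;

If the data access layer or a distributed transaction coordinator also opens a transaction around the call, the BEGIN TRAN above becomes a nested transaction, which is one example of the kind of mixing of levels the answer warns about.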