What are the problems of using transactions in a database? - sql

From this post. One obvious problem is scalability/performance. What other problems can the use of transactions cause?
Could you say there are two sets of problems, one for long-running transactions and one for short-running ones? If so, how would you define them?
EDIT: Deadlock is another problem, but data inconsistency might be worse, depending on the application domain. Assuming a transaction-worthy domain (banking, to use the canonical example), the possibility of deadlock is more a cost to pay for ensuring data consistency than a problem with the use of transactions. Or would you disagree? If so, what other deadlock-free solutions would you use to ensure data consistency?

It depends a lot on the transactional implementation inside your database and may also depend on the transaction isolation level you use. I'm assuming "repeatable read" or higher here. Holding transactions open for a long time (even ones which haven't modified anything) forces the database to hold on to deleted or updated rows of frequently-changing tables (just in case you decide to read them) which could otherwise be thrown away.
Also, rolling back transactions can be really expensive. I know that in MySQL's InnoDB engine, rolling back a big transaction can take FAR longer than committing it (we've seen a rollback take 30 minutes).
Another problem has to do with database connection state. In a distributed, fault-tolerant application, you can never really know what state a database connection is in. Stateful database connections can't be maintained easily, as they could fail at any moment (the application would need to remember what it was in the middle of doing and redo it). Stateless connections can simply be reconnected and have the (atomic) command re-issued without (in most cases) breaking state.

You can get deadlocks even without using explicit transactions. For one thing, most relational databases will apply an implicit transaction to each statement you execute.
Deadlocks are fundamentally caused by acquiring multiple locks, and any activity that involves acquiring more than one lock can deadlock with any other activity that involves acquiring at least two of the same locks as the first activity. In a database transaction, some of the acquired locks may be held longer than they would otherwise be held -- to the end of the transaction, in fact. The longer locks are held, the greater the chance for a deadlock. This is why a longer-running transaction has a greater chance of deadlock than a shorter one.
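To make that concrete, here is a minimal sketch of the classic two-lock deadlock, using a hypothetical accounts table; the comments show how the two sessions interleave.

-- Session 1
BEGIN TRANSACTION;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- locks row 1

-- Session 2
BEGIN TRANSACTION;
UPDATE accounts SET balance = balance - 10 WHERE id = 2;  -- locks row 2

-- Session 1
UPDATE accounts SET balance = balance + 10 WHERE id = 2;  -- blocks, waiting for session 2

-- Session 2
UPDATE accounts SET balance = balance + 10 WHERE id = 1;  -- blocks, waiting for session 1

-- The sessions now wait on each other. The database detects the cycle, picks one
-- session as the deadlock victim and rolls it back so the other can commit.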

One issue with transactions is that it's possible (unlikely, but possible) to get deadlocks in the DB. You do have to understand how your database works (locks, transactions, etc.) in order to debug these interesting/frustrating problems.
-Adam

I think the major issue is at the design level. At what level or levels within my application do I utilise transactions?
For example I could:
Create transactions within stored procedures,
Use the data access API (ADO.NET) to control transactions
Use some form of implicit rollback higher in the application
Use a distributed transaction (via DTC / COM+).
Using more than one of these levels in the same application often seems to create performance and/or data integrity issues.
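To make the risk concrete, here is a minimal T-SQL sketch (the TransferFunds procedure and accounts table are hypothetical) of what can happen when both a stored procedure and the calling data access layer try to own the transaction:

-- The stored procedure manages its own transaction...
CREATE PROCEDURE TransferFunds @From int, @To int, @Amount money
AS
BEGIN
    BEGIN TRANSACTION;   -- if the caller already has one open, @@TRANCOUNT simply goes to 2
    UPDATE accounts SET balance = balance - @Amount WHERE id = @From;
    UPDATE accounts SET balance = balance + @Amount WHERE id = @To;
    COMMIT;              -- an inner COMMIT only decrements @@TRANCOUNT; nothing is durable yet
END;
GO

-- ...and so does the calling layer (e.g. an ADO.NET SqlTransaction):
BEGIN TRANSACTION;
EXEC TransferFunds 1, 2, 100;
ROLLBACK;                -- silently undoes the "committed" procedure work; a ROLLBACK inside
                         -- the procedure instead would abort the outer transaction as well
                         -- and raise error 266 on exit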

Related

Locking the database

Hi, I'm trying to see what's locking the database and found two types of locking: optimistic and pessimistic locking. I found some articles on Wiki but I would like to know more! Can someone explain these to me? We should only use locking when we need exclusive access to something? Does locking only happen when we use transactions?
Thanks in advance.
Kevin
Optimistic locking is no locking at all.
It works by noting the state the system was in before you started making your changes, and then going ahead and just making those changes, assuming (optimistically) that no one else will want to make conflicting updates. Just as you are about to atomically commit those changes, you check whether in the meantime someone else has also updated the same data; if so, your commit fails.
Subversion, for example, uses optimistic locking. When you try to commit, you have to handle any conflicts, but before that, you can do whatever you want on your working copy.
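If you are rolling your own optimistic locking against a relational database, one common approach is a version column. A minimal sketch, with a hypothetical products table:

-- Read the row, remembering the version you saw.
SELECT id, name, version FROM products WHERE id = 42;   -- suppose this returns version = 7

-- Write back only if nobody else has changed the row in the meantime.
UPDATE products
SET    name = 'new name', version = version + 1
WHERE  id = 42 AND version = 7;

-- 1 row affected  -> your change went through.
-- 0 rows affected -> someone else committed first: reload, then reapply or report a conflict.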
Pessimistic locks work with real locks. Assuming that there will be contention, you lock everything you want to update before touching it. Everyone else will have to wait for you to commit or rollback.
When using a relational database with transaction support, the database usually takes care of locking internally (such as when you issue an UPDATE statement), so for normal online processing you do not need to handle this yourself. Only if you want to do maintenance work or large batches do you sometimes want to lock down tables.
We should only use locking when we need exclusive access to something?
You need it to prevent conflicting operations from other sessions. In general, this means updates. Reading data can normally go on concurrently.
Does locking only happen when we use transactions?
Yes. You will accumulate locks while proceeding with your transaction, releasing all of them at the end of it. Note that a single SQL command in auto-commit mode is still a transaction by itself.
Transaction isolation levels also specify the locking behaviour. BOL (SQL Server Books Online) says that transaction isolation levels control:
Whether locks are taken when data is read, and what type of locks are requested.
How long the read locks are held.
Whether a read operation referencing rows modified by another transaction:
Blocks until the exclusive lock on the row is freed.
Retrieves the committed version of the row that existed at the time the statement or transaction started.
Reads the uncommitted data modification.
The levels, from lowest to highest, are:
Read uncommitted (the lowest level where transactions are isolated only enough to ensure that physically corrupt data is not read)
Read committed (Database Engine default level)
Repeatable read
Serializable (the highest level, where transactions are completely isolated from one another)
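A minimal sketch of choosing one of these levels per session (T-SQL syntax, with a hypothetical accounts table; other engines use very similar statements):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

BEGIN TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;   -- shared locks are now held until the transaction ends
-- ... other work; re-reading the same row is guaranteed to return the same value
COMMIT;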

Serializable isolation level atomicity

I have several threads executing some SQL select queries with serializable isolation level. I am not sure which implementation to choose. This:
_repository.Select(...)
or this
lock (_lockObject)
{
    _repository.Select(...);
}
In other words, is it possible that several transactions will start executing at the same time and partially block records inside the Select operation's range?
P. S. I am using MySQL but I guess it is a more general question.
Under the serializable isolation level, transactions performing SELECT queries place shared locks on the rows they read, permitting other transactions to read those rows but preventing them from making changes to them (including inserting new records into the gaps).
Locking in the application does something else: it will not allow other threads to enter the code block which fetches the data from the repository. This approach can lead to very bad performance for a few reasons:
If any of the rows are locked by another transaction (outside the application) via an exclusive lock, the lock in the application will not help.
Multiple transactions will not be able to perform reads, even on rows that are not locked in exclusive mode (i.e. not being updated).
The lock will not be released until all the data is fetched and returned to the client. This includes the network latency and any other overhead involved in converting the MySQL result set into objects.
Most importantly, enforcing data integrity and atomicity is the database's job. It knows how to handle it very well: how to detect potential deadlocks, when to take record locks, and when to add index gap locks. That is what databases are for, and MySQL is ACID compliant and proven to handle these situations.
I suggest you read through Section 13.2.8, "The InnoDB Transaction Model and Locking", of the MySQL docs; it will give you great insight into how locking in InnoDB is performed.
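For reference, a minimal MySQL sketch of letting the database do the serialization (the orders table is hypothetical):

SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;

START TRANSACTION;
SELECT * FROM orders WHERE customer_id = 42;   -- InnoDB takes shared (next-key) locks on the rows read
-- Other sessions can still read these rows, but they cannot update or delete them,
-- or insert into the locked range, until this transaction commits.
COMMIT;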

Regarding SQL Server Locking Mechanism

I would like to ask a couple of questions regarding the SQL Server locking mechanism.
If I am not using a lock hint with a SQL statement, SQL Server uses the PAGELOCK hint by default. Am I right? If yes, then why? Maybe it's due to the overhead of managing too many locks; that is the only drawback I can think of, but please let me know if there are others. Also, tell me whether we can change this default behaviour, if it's reasonable to do so.
I am writing a server-side application, a sync server (not using Sync Framework), and I have written the database queries in a C# code file, using an ODBC connection to execute them. Now the question is: what is the best way to change the default locking from page to row, keeping the drawbacks in mind (e.g. adding lock hints to the queries, which is what I am planning to do)?
What if a SQL query (SELECT/DML) is executed outside the scope of a transaction and the statement contains a lock hint; what kind of lock will be acquired (e.g. shared, update, exclusive)? And while in a transaction scope, does the isolation level of the transaction have an impact on the lock type if the ROWLOCK hint is being used?
Lastly, if someone could give me a sample so I could test and experience all the above scenarios myself (e.g. .NET code or SQL script), that would be great.
Thanks
Mubashar
No. It locks as it sees fit and escalates locks as needed
Let the DB engine manage it
See point 2
See point 2
I'd only use lock hints if you want specific and certain behaviours, e.g. queues or non-blocking (dirty) reads.
More generally, why do you think the DB engine can't do what you want by default?
The default locking is row locks, not page locks, although the way the locking mechanism works means you will be placing locks on all the objects within the hierarchy: e.g. reading a single row will place an intent shared lock on the table, an intent shared lock on the page, and a shared lock on the row itself.
This enables an action requesting an exclusive lock on the table to know it may not take it yet, since an intent lock is present (otherwise it would have to check every page / row for locks).
If you take too many locks in an individual query, however, SQL Server performs lock escalation, which reduces the granularity of the locks so that it is managing fewer of them.
This can be turned off using a trace flag, but I wouldn't consider it.
Until you know you actually have a locking / lock escalation issue, you risk prematurely optimizing a non-existent problem.
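If you do decide you need explicit hints (for example for the queue-style reads mentioned above), here is a minimal T-SQL sketch using a hypothetical jobs table, plus a way to see which locks are actually held:

BEGIN TRANSACTION;

SELECT TOP (1) *
FROM   jobs WITH (ROWLOCK, UPDLOCK, READPAST)   -- row-level update locks, skipping rows locked by others
WHERE  status = 'queued';

-- While the transaction is open, inspect the locks this session holds:
SELECT resource_type, request_mode, request_status
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;

COMMIT;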

Distributed transaction lock in Oracle database

I have some questions about transaction locks in an Oracle database. What I have found out so far is:
Cause: The time to wait on a lock in a distributed transaction has been exceeded. This time is specified in the initialization parameter DISTRIBUTED_LOCK_TIMEOUT.
Action: This situation is treated as a deadlock and the statement was rolled back. To set the time-out interval to a longer interval, adjust the initialization parameter DISTRIBUTED_LOCK_TIMEOUT, then shut down and restart the instance.
Some other things that I want to know in more detail:
It is mentioned that a lock in a 'distributed transaction' occurred. So what kind of database operation can cause this? Updating a record? Selecting a record?
What does 'distributed' mean anyway? I have seen this term used all over the place, but I can't seem to deduce what it means.
What can we do to reduce instances of such locks?
A distributed transaction means that you had a transaction that had two different participants. If you are using PL/SQL, that generally implies that there are multiple databases involved. But it may simply indicate that an application is using an external transaction coordinator in its interactions with the database. A J2EE application, for example, might want to create a distributed transaction that covers both issuing SQL statements against a database to move $100 from account A to account B as well as the application server action of creating a JMS message for this transaction that would eventually cause an email notification of the transfer to be sent. In this case, the application wants to ensure that the state of the middle tier matches the state of the back end.
Distributed transactions are not free. They involve potentially quite a bit of additional overhead because, at a minimum, you need to use the two-phase commit protocol to verify that all the components that are part of the distributed transaction are ready to commit and to verify that they all did commit. That involves sending a number of network packets which can be a significant fraction of the time an OLTP transaction is waiting. Distributed transactions also cause administrative issues because you end up with cases where one participant's transaction fails after it indicated it was ready to commit or a transaction coordinator failing while various participants have open transactions.
So the first question would be whether your application actually needs distributed transactions. Sometimes, developers find that they are accidentally requesting distributed transactions when they really aren't necessary. If you're not sure what a distributed transaction is, it's entirely possible that you don't really need them.
There is a guide here that will walk you through the steps to simulate an ORA-02049: timeout: distributed transaction waiting for lock if you want a better understanding of one of its causes:
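And if raising the timeout mentioned in the Action text above is all you need for now, a minimal Oracle sketch (the parameter is static, so it only takes effect after an instance restart):

SHOW PARAMETER distributed_lock_timeout;                         -- defaults to 60 seconds

ALTER SYSTEM SET distributed_lock_timeout = 120 SCOPE = SPFILE;
-- SHUTDOWN IMMEDIATE / STARTUP is required for the new value to apply.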

ORM Support for Handling Deadlocks

Do you know of any ORM tool that offers deadlock recovery? I know deadlocks are a bad thing, but sometimes any system will suffer from them given the right amount of load. In SQL Server, the deadlock message says "Rerun the transaction", so I would suspect that rerunning a deadlocked transaction is a desirable feature in ORMs.
I don't know of any special ORM tool support for automatically rerunning transactions that failed because of deadlocks. However, I don't think an ORM makes dealing with locking/deadlocking issues very different. Firstly, you should analyze the root cause of your deadlocks, then redesign your transactions and queries in a way that deadlocks are avoided or at least reduced. There are lots of options for improvement, like choosing the right isolation level for (parts of) your transactions, using lock hints, etc. This depends much more on your database system than on your ORM. Of course it helps if your ORM allows you to use stored procedures for some fine-tuned commands, etc.
If this doesn't help to avoid deadlocks completely, or you don't have the time to implement and test the real fix now, you could of course simply place a try/catch around your save/commit/persist (or whatever) call, check whether the caught exception indicates that the failed transaction was a "deadlock victim", and then simply call save/commit/persist again after sleeping for a few seconds. Waiting a few seconds is a good idea, since deadlocks are often an indication of a temporary peak of transactions competing for the same resources, and rerunning the same transaction quickly again and again would probably make things even worse.
For the same reason you probably want to make sure that you only retry the same transaction once.
In a real-world scenario we once implemented this kind of workaround, and about 80% of the "deadlock victims" succeeded on the second go. But I strongly recommend digging deeper to fix the actual cause of the deadlocking, because these problems usually increase exponentially with the number of users. Hope that helps.
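The same retry-once-after-a-short-sleep idea can also live in T-SQL rather than in the ORM layer. A minimal sketch, assuming a hypothetical accounts table (1205 is SQL Server's "deadlock victim" error number):

DECLARE @retries int = 0;

WHILE 1 = 1
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... the work that occasionally deadlocks, e.g.
        UPDATE accounts SET balance = balance - 100 WHERE id = 1;
        UPDATE accounts SET balance = balance + 100 WHERE id = 2;
        COMMIT;
        BREAK;                            -- success, leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK;
        IF ERROR_NUMBER() = 1205 AND @retries < 1
        BEGIN
            SET @retries = @retries + 1;
            WAITFOR DELAY '00:00:02';     -- back off briefly, then try exactly once more
        END
        ELSE
            THROW;                        -- anything else (or a second deadlock): re-raise
    END CATCH
END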
Deadlocks are to be expected, and SQL Server seems to be worse off on this front than other database servers. First, you should try to minimize your deadlocks. Try using SQL Server Profiler to figure out why they are happening and what you can do about them. Next, configure your ORM not to read after making an update in the same transaction, if possible. Finally, after you've done that, if you happen to use Spring and Hibernate together, you can put in an interceptor to watch for this situation. Extend MethodInterceptor and place it in your Spring bean under interceptorNames. When the interceptor is run, use invocation.proceed() to execute the transaction. Catch any exceptions, and define a number of times you want to retry.
An O/R mapper can't detect this, as the deadlock always occurs inside the DBMS and could be caused by locks set by other threads or even other applications.
To be sure a piece of code doesn't create a deadlock, always use these rules:
- Do fetching outside the transaction: first fetch, then perform the processing, then perform DML statements like insert, delete and update.
- Every action inside a method or series of methods which contain / work with a transaction has to use the same connection to the database. This is required because, for example, write locks are ignored by statements executed over the same connection (as that same connection set the locks ;)).
Often, deadlocks occur because code either fetches data inside a transaction in a way that causes a NEW connection to be opened (which then has to wait for locks), or uses different connections for the statements within a transaction.
I had a quick look (no doubt you have too) and couldn't find anything suggesting that Hibernate, at least, offers this. This is probably because ORMs consider it outside the scope of the problem they are trying to solve.
If you are having issues with deadlocks certainly follow some of the suggestions posted here to try and resolve them. After that you just need to make sure all your database access code gets wrapped with something which can detect a deadlock and retry the transaction.
One system I worked on was based on "commands" that were then committed to the database when the user pressed save; it worked like this:
While (true)
    Start a database transaction
    Foreach command to process
        Read the data the command needs into objects
        Update the objects by calling the command's Run method
    EndForeach
    Save the objects to the database
    If not deadlock
        Commit the database transaction
        We are done; exit the loop
    Else
        Abort the database transaction
        Log the deadlock and try again
    EndIf
EndWhile
You may be able to do something like this with any ORM; we used an in-house data access system, as ORMs were too new at the time.
We ran the commands outside of a transaction while the user was interacting with the system, then reran them as above (when the user did a "save") to cope with changes other people had made. As we already had a good idea of the rows each command would change, we could even use locking hints or "select for update" to take out all the write locks we needed at the start of the transaction. (We sorted the set of rows to be updated to reduce the number of deadlocks even more.)
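Taking the write locks out up front, as described, looks roughly like this (a minimal sketch in MySQL/Oracle syntax with a hypothetical orders table; SQL Server would use a WITH (UPDLOCK) hint instead):

START TRANSACTION;

-- Lock every row the commands will touch before doing any work.
SELECT * FROM orders WHERE id IN (101, 102, 103) FOR UPDATE;

-- ... re-run the commands and apply their changes ...
UPDATE orders SET status = 'processed' WHERE id IN (101, 102, 103);

COMMIT;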