Update table statement - sql

Have a great day, everyone.
I have some confusion about the UPDATE statement in Oracle DB 12cR2.
Let's assume we have 3 users:
U1;
U2;
U3;
U1 has a table called TEST_1, and U2 and U3 both have UPDATE privilege on that table.
My question is: if U2 and U3 try to update the same rows in that table at the same time, what will happen? How will Oracle control such concurrent processes?
Thanks in advance

Although the first answer already explains the locking mechanism very well, let me add a bit more information.
In your case, we are talking about Row Locks (TX). Row-level locks are primarily used to prevent two transactions from modifying the same row. When a transaction needs to modify a row, a row lock is acquired.
There is no limit to the number of row locks held by a statement or transaction, and Oracle does not escalate locks from the row level to a coarser granularity. Row locking provides the finest grain locking possible and so provides the best possible concurrency and throughput.
When two transactions (updates, in your case) go after the same row, the first will acquire the lock, and it won't release it until it either commits or rolls back. A system change number (SCN), a logical, internal timestamp used by Oracle Database, will be assigned to each transaction. System change numbers (SCNs) order events that occur within the database, which is necessary to satisfy the ACID properties of a transaction.
SCNs occur in a monotonically increasing sequence. Oracle Database can use an SCN like a clock because an observed SCN indicates a logical point in time and repeated observations return equal or greater values. If one event has a lower SCN than another event, then it occurred at an earlier time with respect to the database. Several events may share the same SCN, which means that they occurred at the same time with respect to the database.
Every transaction has an SCN. For example, if a transaction updates a row, then the database records the SCN at which this update occurred. Other modifications in this transaction have the same SCN. When a transaction commits, the database records an SCN for this commit.
Oracle Database increments SCNs in the system global area (SGA). When a transaction modifies data, the database writes a new SCN to the undo data segment assigned to the transaction. The log writer process then writes the commit record of the transaction immediately to the online redo log. The commit record has the unique SCN of the transaction. Oracle Database also uses SCNs as part of its instance recovery and media recovery mechanisms.
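For illustration, you can observe the current SCN yourself, assuming your account has access to these views; both calls below are standard Oracle features:
SELECT current_scn FROM v$database;
SELECT dbms_flashback.get_system_change_number FROM dual;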
When two transactions arrive at virtually the same time, for example within the same second, the one whose timestamp is earlier is the one that acquires the lock. Keep in mind that TIMESTAMP stores fractional_seconds_precision, which specifies the number of digits in the fractional part of a SECOND. This fraction can be a number in the range 0 to 9.

In short: one of the updating users will acquire a row lock on the rows being updated (whichever session executes the query first, even by microseconds), and the other has to wait for the lock holder to commit and release the lock. The waiting session then sees the updated data to work on.
Snippet from another article:
COMMIT
When a COMMIT statement is issued to the database, the transaction has ended, and the following results are true:
All work done by the transaction becomes permanent.
Other users can see changes in data made by the transaction.
Any locks acquired by the transaction are released.
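To make this concrete, here is a minimal two-session sketch of the scenario from the question; the column names col and id are hypothetical, only TEST_1 comes from the question:
-- Session 1 (U2):
UPDATE u1.test_1 SET col = 'x' WHERE id = 1;  -- acquires the row lock (TX)
-- Session 2 (U3), before session 1 commits:
UPDATE u1.test_1 SET col = 'y' WHERE id = 1;  -- blocks, waiting on that row lock
-- Session 1:
COMMIT;  -- releases the lock; session 2's update now proceeds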


Some confusion on the description of read consistency in Oracle

Below is a short description of read consistency from the Oracle Concepts Guide.
What is a "SQL statement" in this context: just one SQL statement? Or a PL/SQL block or stored procedure? Can anyone provide a counterexample that illustrates an inconsistent read?
read consistency
A consistent view of data seen by a user. For example, in statement-level read consistency the set of data seen by a SQL statement remains constant throughout statement execution.
A "statement" in this context is one DML statement: a single SELECT, INSERT, UPDATE, DELETE, MERGE.
It is not a PL/SQL block. Similarly, multiple executions of the same DML statement (say, within a PL/SQL loop) are separate "statements". If you need consistency over multiple statements or within a PL/SQL block, you can achieve that using SET TRANSACTION ISOLATION LEVEL SERIALIZABLE or SET TRANSACTION READ ONLY. Both introduce limitations.
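For instance, a minimal sketch of consistency across multiple statements (the table and column names are made up):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT COUNT(*) FROM big_table;     -- both queries see the database as of
SELECT SUM(amount) FROM big_table;  -- the moment the transaction began
COMMIT;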
An example of the opposite, an inconsistent read, would be as follows.
Starting conditions: table BIG_TABLE has 10 million rows.
User A at 10:00:
SELECT COUNT(*) FROM BIG_TABLE;
User B at 10:01:
DELETE FROM BIG_TABLE WHERE ID >= 9000000; -- delete the last million rows
User B at 10:02:
COMMIT;
User A at 10:03: query completes:
COUNT(*)
--------------
9309129
That is wrong. User A should have gotten either 10 million rows or 9 million rows. At no point were there 9,309,129 committed rows in the table. What happened is that user A read 309,129 rows that user B was deleting before Oracle actually processed the deletion (or before the COMMIT). Then, after user B's delete and commit, user A's query stopped seeing the deleted rows and stopped counting them.
This sort of problem is impossible in Oracle, thanks to its implementation of Multiversion Read Consistency.
In Oracle, in the above situation, as it encountered blocks that had rows deleted (and committed) by user B, user A's query would have used the undo data to reconstruct what those blocks looked like at 10:00, the time when user A's query started.
That's basically it: Oracle statements operate on a version of the database as it existed at a single point in time. This point in time is almost always the time when the statement started. There are some exceptional cases involving updates in which that point in time is moved to a point "mid statement". But it is always consistent as of one point in time or another.
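As a side note, Oracle exposes this multiversioning directly through flashback queries. A minimal sketch, assuming sufficient undo retention:
SELECT COUNT(*) FROM big_table AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE);
-- reconstructs the count as of five minutes ago, using undo data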

How is concurrency control implemented in any ORDBMS

I have a weird question about concurrency control in ORDBMS. This is completely theoretical.
I have two transactions T1 and T2 trying to update a particular row on a table.
Now both transactions T1 and T2 hit the database simultaneously.
By simultaneously, I mean both hit at the same time, measured down to the nanosecond.
So if both transactions have a timestamp that is exactly the same, how does a DBMS (be it Oracle, DB2, or SQL Server) identify which transaction to process first and which to process later?
I understand that a row-level lock will be acquired by one transaction and the other will wait until the lock is released. But how will it identify whether T1 or T2 acquires the lock? Is there some other parameter taken into account besides the timestamp?
Thanks
Nirmalya
This question seems to be related more to concurrency control in DBMSs in general, rather than ORDBMSs specifically.
Anyway, as far as I know, even if two requests are issued at exactly the same time, they will be processed sequentially by the scheduler, which is responsible for acquiring locks and assigning timestamps. Obviously, only the scheduling is sequential: after scheduling, the queries can be processed in parallel, if this is allowed by the locks and by timestamp ordering.
Wikipedia has an accurate explanation of timestamp-based concurrency control: https://en.wikipedia.org/wiki/Timestamp-based_concurrency_control. There you can see that there are a few assumptions to be made. Look at the first two:
Every timestamp value is unique and accurately represents an instant in time.
No two timestamps can be the same.
These assumptions can be guaranteed only by using a thread-safe scheduler, which assigns timestamps to transactions sequentially. In lock-based concurrency control the scheduler must be thread-safe as well; otherwise, when it locks a record it cannot be sure that no other transaction has acquired a lock on the same record.
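You can observe this sequential arbitration yourself. In the sketch below (table t and column id are hypothetical), whichever session's request reaches the lock manager first wins, regardless of wall-clock timestamps:
-- Session 1:
SELECT * FROM t WHERE id = 1 FOR UPDATE;         -- acquires the row lock
-- Session 2:
SELECT * FROM t WHERE id = 1 FOR UPDATE NOWAIT;  -- fails immediately instead of waiting (ORA-00054 in Oracle)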

When does sql exclusively lock a row in an update statement?

Can a race condition occur in SQL under these conditions?
If I have this SQL update running in one thread, call it statement 1:
Update Items
Set Flag = 'B'
where Flag = 'A';
And this SQL update running in another, call it statement 2:
Update Items
Set Flag = 'C'
where Flag = 'A';
Is it possible for each thread to read the same record where Flag is equal to A and write the record with its own value, such that statement 1 writes it first and then statement 2 overwrites it, or vice versa?
The answer to this question depends on when the database exclusively locks the update. Does it happen before it finds the records or after it finds the records and evaluates the where clause?
First, there are three lock contexts:
Database level lock
Table level lock
Row level lock
Then you have four lock modes:
IX
IS
X
S
IX and IS locks are "intention" locks. These locks are held before acquiring other types of locks. X locks are exclusive (write) locks and S locks are shared (read) locks.
Locks (IX, IS, X, or S) can be taken at any context level. An X lock at the database level will block all other operations in the database, for example. This is the type of lock that SQLite takes: an S lock is taken for the entire database during reads, and an X lock is taken for the entire database during writes. Writes will wait for any S locks to complete and will block new S and X locks until the write lock is released. This provides a serializable transaction isolation level.
For MySQL, the locking depends on the storage engine. MyISAM will take X and S locks on entire (sets of) tables. X locks will wait on existing S or X locks and block new locks. New X locks will be given higher priority in the queue, moved ahead of new S locks. This behavior can be changed by setting LOW_PRIORITY_UPDATES, which could result in write starvation because writes will be de-prioritized in favor of reads.
It is possible in MySQL to obtain an X lock over the entire database using 'FLUSH TABLES WITH READ LOCK'.
InnoDB locks rows as they are encountered via an index read: it locks the index records it traverses. InnoDB uses special locks called 'gap' locks to ensure the REPEATABLE READ transaction isolation level. Locks are held on index entries, so if a table is not well indexed for an UPDATE query, many rows will be locked. Note that InnoDB does not take S locks for normal SELECT queries; it uses row versioning, not row-level locking, for consistent snapshots.
When acquiring X locks, the database needs to detect deadlocks. Consider the following:
>connection 1
start transaction;
update T set c = c + 1 order by id asc;
>connection 2
start transaction;
update T set c = c - 1 order by id desc;
In a row-locking model, these two statements cannot both complete successfully: the first would wait forever to acquire locks the second holds, and vice versa. The database will pick one of the connections to roll back. InnoDB will pick the connection which has made the fewest changes. MyISAM will lock the whole table for whichever connection acquires the lock first, and the second will run after the first completes.
The simple example given by you will be resolved by X locks at any context (database, table, or row). If two connections begin at exactly the same time, both running updates which try to update the same row, both will attempt to acquire an X lock. Only one connection can acquire the X lock; it is not possible to determine exactly which one will get it. The other connection will have to wait until the lock is released before it can acquire the X lock. Keep in mind that if the row was locked by a DELETE or UPDATE, the waiter might end up not acquiring a lock after waiting, because there is nothing left in the database to lock.
In your example, the first UPDATE will acquire the X lock, and the second UPDATE will then wait on the X lock and will eventually execute but match no rows.
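A sketch of that timeline, using the statements from the question:
-- Session 1:
UPDATE Items SET Flag = 'B' WHERE Flag = 'A';  -- locks and updates every matching row
-- Session 2, blocked until session 1 commits:
UPDATE Items SET Flag = 'C' WHERE Flag = 'A';  -- then re-evaluates the WHERE clause: 0 rows matched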
An exclusive lock, used for data-modification operations such as INSERT, UPDATE, or DELETE, will be used in this scenario.
An exclusive lock ensures that multiple updates cannot be made to the same resource at the same time.
You will not get a race condition in this scenario.
If you have a more complex scenario involving multiple tables then you may get race conditions, or deadlocks. There are many ways to avoid this, simplifying and separating queries, etc.
You can also apply hints to queries that tell SQL what type of lock to use.
http://msdn.microsoft.com/en-us/library/aa213026(v=sql.80).aspx
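For example, a minimal T-SQL sketch that takes row-level update locks up front via hints (reusing the Items table from the question):
BEGIN TRANSACTION;
SELECT * FROM Items WITH (UPDLOCK, ROWLOCK) WHERE Flag = 'A';  -- take update locks early
UPDATE Items SET Flag = 'B' WHERE Flag = 'A';
COMMIT;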
Sounds like you should read about locking. SQL Server has a complex set of logic and will perform either table-level or row-level locks based on the number of rows it estimates will require updates. Unless you specifically tell it which you want, it can even vary from query to query. Usually, if you are modifying a small subset of the table, it will choose a row-level lock.
SQL Server is designed with ACID in mind: it writes changes to its logs before performing any actual updates to the data. This allows any failed updates to be rolled back and allows consistency between queries (like you're asking about). You can perform dirty reads to get around locking issues; however, you cannot prevent SQL Server from locking inserted, updated, and/or deleted records.
SQL Server Locking
EDIT: Here is an article about ACID.
ACID - Wikipedia
All SQL databases pretty much guarantee that such a collision will not occur. "When" locking occurs depends on whether locking is at the table, partition, page, or row level. Or, whether you have turned off such locking in your database.
What can happen, if you have concurrent update statements and multiple rows being updated, is that some rows are updated with the first value and some with the second.
In general, I think of the WHERE clause as being evaluated to select the row set; the rows are then locked one at a time, updated, and unlocked. However, this depends on the type of locking. Under that model, the scenario above could continue with the values flipping back and forth.
If you are concerned about this situation, use table level locking to force serialization when concurrent update requests are being processed.

race condition UPDATE modification of credit column - what happens on rollback?

OK, I tried searching and have not found an answer to this, so I am curious how ROLLBACK handles race conditions. For example:
If I have a table (CompanyAccount) which keeps track of how many credits a company has available for purchase (there is only one row per company in the table), and there are potentially multiple users from the same company who can decrement the credits from the single company account, what happens in case of an error when a ROLLBACK occurs?
Example:
Assumptions: I have written the update properly, computing the new "Credit" balance in the statement instead of guessing what it is (i.e., we don't tell the UPDATE statement what the new Credit balance/value is; we say take whatever is in the Credit column and subtract my decrement value in the UPDATE statement)...
here is an example of how the update statement is written:
UPDATE dbo.CompanyAccount
SET Credit = Credit - #DecrementAmount
WHERE CompanyAccountId = #CompanyAccountId
If the "Credit" column has 10,000 credits. User A causes a decrement of 4,000 credits and User B causes a decrement of 1000 credits. For some reason a rollback is triggered during User A's decrement (there are about a 1/2 dozen more tables with rows getting INSERTED during the TRANSACTION). If User A wins the race condition and the new balance is 6,000 (but not yet COMMIT'ed) what happens if User B's decrement occurs before the rollback is applied? does the balance column go from 6,000 to 5,000 and then gets ROLLBACK to 10,000?
I am not too clear on how the ROLLBACK will handle this. Perhaps I am over-simplifying. Can someone please tell me if I misunderstand how ROLLBACK will work or if there are other risks I need to worry about for this style.
Thanks for your input.
In the example you have given there will be no problem.
The first transaction will have an exclusive lock, meaning the second one cannot modify that row until the first one has committed or rolled back. It will just have to wait (be blocked) until the lock is released.
It gets a bit more complicated if you have multiple statements. You should probably read up on different isolation levels and how they can allow or prevent such phenomena as "lost updates".
Rollback is part of the transaction and locks will be maintained during the rollback. The *A*tomic in ACID.
User B will not start until all locks are released.
What happens:
User A locks the row
User B won't see the row until the locks are released
User A rolls back, releasing the locks; the changes never happened
User B sees the row; its decrement of 1,000 results in 9,000
However, if User B has already read the balance, then it may be inconsistent by the time of the UPDATE. It depends on what you're actually doing and in what order, hence the need to understand isolation levels (and the issues with phantom and non-repeatable reads).
An alternative to SERIALIZABLE or REPEATABLE READ may be to use sp_getapplock in transaction mode to semaphore parts of the transaction.
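A minimal sketch of that sp_getapplock approach; the resource name is made up, and a transaction-owned applock is released automatically at COMMIT or ROLLBACK:
BEGIN TRANSACTION;
EXEC sp_getapplock @Resource = 'CompanyAccount_42', @LockMode = 'Exclusive';
-- read the balance, validate it, insert the half dozen detail rows ...
UPDATE dbo.CompanyAccount
SET Credit = Credit - #DecrementAmount
WHERE CompanyAccountId = #CompanyAccountId;
COMMIT;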

MySQL: Transactions vs Locking Tables

I'm a bit confused with transactions vs locking tables to ensure database integrity and make sure a SELECT and UPDATE remain in sync and no other connection interferes with it. I need to:
SELECT * FROM table WHERE (...) LIMIT 1
if (condition passes) {
// Update row I got from the select
UPDATE table SET column = "value" WHERE (...)
... other logic (including INSERT some data) ...
}
I need to ensure that no other queries will interfere and perform the same SELECT (reading the 'old value' before that connection finishes updating the row).
I know I can default to LOCK TABLES table to just make sure that only 1 connection is doing this at a time, and unlock it when I'm done, but that seems like overkill. Would wrapping that in a transaction do the same thing (ensuring no other connection attempts the same process while another is still processing)? Or would a SELECT ... FOR UPDATE or SELECT ... LOCK IN SHARE MODE be better?
Locking tables prevents other DB users from affecting the rows/tables you've locked. But locks, in and of themselves, will NOT ensure that your logic comes out in a consistent state.
Think of a banking system. When you pay a bill online, there are at least two accounts affected by the transaction: your account, from which the money is taken, and the receiver's account, into which the money is transferred. And the bank's account, into which they'll happily deposit all the service fees charged on the transaction. Given (as everyone knows these days) that banks are extraordinarily stupid, let's say their system works like this:
$balance = "GET BALANCE FROM your ACCOUNT";
if ($balance < $amount_being_paid) {
charge_huge_overdraft_fees();
}
$balance = $balance - $amount_being paid;
UPDATE your ACCOUNT SET BALANCE = $balance;
$balance = "GET BALANCE FROM receiver ACCOUNT"
charge_insane_transaction_fee();
$balance = $balance + $amount_being_paid
UPDATE receiver ACCOUNT SET BALANCE = $balance
Now, with no locks and no transactions, this system is vulnerable to various race conditions, the biggest of which is multiple payments being performed against your account, or the receiver's account, in parallel. While your code has retrieved your balance and is doing the charge_huge_overdraft_fees() and whatnot, it's entirely possible that some other payment will be running the same type of code in parallel. It'll retrieve your balance (say, $100), do its transactions (take out the $20 you're paying, and the $30 they're screwing you over with), and now both code paths have two different balances: $80 and $70. Depending on which one finishes last, you'll end up with either of those two balances in your account, instead of the $50 you should have ended up with ($100 - $20 - $30). In this case, "bank error in your favor".
Now, let's say you use locks. Your bill payment ($20) hits the pipe first, so it wins and locks your account record. Now you've got exclusive use, and can deduct the $20 from the balance, and write the new balance back in peace... and your account ends up with $80 as is expected. But... uhoh... You try to go update the receiver's account, and it's locked, and locked longer than the code allows, timing out your transaction... We're dealing with stupid banks, so instead of having proper error handling, the code just pulls an exit(), and your $20 vanishes into a puff of electrons. Now you're out $20, and you still owe $20 to the receiver, and your telephone gets repossessed.
So... enter transactions. You start a transaction, you debit your account $20, you try to credit the receiver with $20... and something blows up again. But this time, instead of exit(), the code can just do rollback, and poof, your $20 is magically added back to your account.
In the end, it boils down to this:
Locks keep anyone else from interfering with any database records you're dealing with. Transactions keep any "later" errors from interfering with "earlier" things you've done. Neither alone can guarantee that things work out ok in the end. But together, they do.
in tomorrow's lesson: The Joy of Deadlocks.
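A minimal sketch of the two combined, with a made-up accounts table:
START TRANSACTION;
UPDATE accounts SET balance = balance - 20 WHERE id = @payer_id;     -- row X lock on your account
UPDATE accounts SET balance = balance + 20 WHERE id = @receiver_id;  -- row X lock on the receiver's
-- if anything fails before this point, issue ROLLBACK instead
COMMIT;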
I started to research the same topic for the same reasons as you indicated in your question. I was confused by the answers given on SO because they were partial answers that didn't provide the big picture. After I read a couple of documentation pages from different RDBMS providers, these are my takeaways:
TRANSACTIONS
Statements are database commands, mainly to read and modify the data in the database. Transactions are scopes of single or multiple statement executions. They provide two things:
A mechanism which guarantees that all statements in a transaction are executed correctly or, in case of a single error, any data modified by those statements will be reverted to its last correct state (i.e. rolled back). What this mechanism provides is called atomicity.
A mechanism which guarantees that concurrent read statements can view the data without the occurrence of some or all of the phenomena described below.
Dirty read: A transaction reads data written by a concurrent uncommitted transaction.
Nonrepeatable read: A transaction re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read).
Phantom read: A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.
Serialization anomaly: The result of successfully committing a group of transactions is inconsistent with all possible orderings of running those transactions one at a time.
What this mechanism provides is called isolation, and the mechanism which lets statements choose which phenomena should not occur in a transaction is called isolation levels.
As an example, this is the isolation-level / phenomena table for PostgreSQL:
Isolation Level  | Dirty Read             | Nonrepeatable Read | Phantom Read           | Serialization Anomaly
Read uncommitted | Allowed, but not in PG | Possible           | Possible               | Possible
Read committed   | Not possible           | Possible           | Possible               | Possible
Repeatable read  | Not possible           | Not possible       | Allowed, but not in PG | Possible
Serializable     | Not possible           | Not possible       | Not possible           | Not possible
If any of the described promises is broken by the database system, changes are rolled back and the caller is notified about it.
How these mechanisms are implemented to provide these guarantees is described below.
LOCK TYPES
Exclusive locks: when an exclusive lock is acquired over a resource, no other exclusive lock can be acquired over that resource. Exclusive locks are always acquired before a modifying statement (INSERT, UPDATE, or DELETE) and are released after the transaction is finished. To explicitly acquire exclusive locks before a modifying statement you can use hints like FOR UPDATE (PostgreSQL, MySQL) or UPDLOCK (T-SQL).
Shared locks: multiple shared locks can be acquired over a resource. However, shared locks and exclusive locks cannot be acquired at the same time over a resource. Shared locks might or might not be acquired before a read statement (SELECT, JOIN), depending on the database's implementation of isolation levels.
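In MySQL/InnoDB, for example, both lock types can be requested explicitly; the orders table is made up, and PostgreSQL spells the second statement FOR SHARE:
SELECT * FROM orders WHERE id = 1 FOR UPDATE;          -- exclusive row lock
SELECT * FROM orders WHERE id = 1 LOCK IN SHARE MODE;  -- shared row lock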
LOCK RESOURCE RANGES
Row: the single row the statement executes on.
Range: a specific range based on the condition given in the statement (SELECT ... WHERE).
Table: whole table. (Mostly used to prevent deadlocks on big statements like batch update.)
As an example, the default shared lock behavior of the isolation levels in SQL Server:
READ UNCOMMITTED: no shared locks are taken for reads.
READ COMMITTED (locking): shared locks are released as soon as the row has been read.
REPEATABLE READ: shared locks are held until the end of the transaction.
SERIALIZABLE: shared range locks are held until the end of the transaction.
DEADLOCKS
One of the downsides of a locking mechanism is deadlocks. A deadlock occurs when a statement enters a waiting state because a requested resource is held by another waiting statement, which in turn is waiting for a resource held by yet another waiting statement. In such a case, the database system detects the deadlock and terminates one of the transactions. Careless use of locks can increase the chance of deadlocks; however, they can occur even without human error.
SNAPSHOTS (DATA VERSIONING)
This is an isolation mechanism which provides a statement with a copy of the data taken at a specific time.
Statement beginning: provides the statement with a copy of the data as of the beginning of the statement's execution. It also supports the rollback mechanism by keeping this data until the transaction is finished.
Transaction beginning: provides the statement with a copy of the data as of the beginning of the transaction.
All of those mechanisms together provide consistency.
As for optimistic and pessimistic locks, these are just names for classifying approaches to the concurrency problem.
Pessimistic concurrency control:
A system of locks prevents users from modifying data in a way that affects other users. After a user performs an action that causes a lock to be applied, other users cannot perform actions that would conflict with the lock until the owner releases it. This is called pessimistic control because it is mainly used in environments where there is high contention for data, where the cost of protecting data with locks is less than the cost of rolling back transactions if concurrency conflicts occur.
Optimistic concurrency control:
In optimistic concurrency control, users do not lock data when they read it. When a user updates data, the system checks to see if another user changed the data after it was read. If another user updated the data, an error is raised. Typically, the user receiving the error rolls back the transaction and starts over. This is called optimistic because it is mainly used in environments where there is low contention for data, and where the cost of occasionally rolling back a transaction is lower than the cost of locking data when read.
For example, by default PostgreSQL uses snapshots to make sure the read data didn't change and rolls back if it did, which is an optimistic approach. SQL Server, on the other hand, uses read locks by default to provide these promises.
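A common way to implement the optimistic approach yourself is a version column. A minimal sketch, with made-up table, columns, and values:
UPDATE products
SET price = 99, version = version + 1
WHERE id = 42 AND version = 7;  -- 7 is the version read earlier
-- 0 rows affected means another user updated the row first: retry or report a conflict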
The implementation details may change depending on the database system you choose. However, according to database standards, they need to provide the stated transaction guarantees in one way or another using these mechanisms. If you want to know more about the topic or about specific implementation details, below are some useful links.
SQL-Server - Transaction Locking and Row Versioning Guide
PostgreSQL - Transaction Isolation
PostgreSQL - Explicit Locking
MySQL - Consistent Nonlocking Reads
MySQL - Locking
Understanding Isolation Levels (Video)
You want a SELECT ... FOR UPDATE or SELECT ... LOCK IN SHARE MODE inside a transaction, as you said, since normally SELECTs, no matter whether they are in a transaction or not, will not lock a table. Which one you choose would depend on whether you want other transactions to be able to read that row while your transaction is in progress.
http://dev.mysql.com/doc/refman/5.0/en/innodb-locking-reads.html
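Applied to your pseudocode, a minimal sketch that keeps your placeholders:
START TRANSACTION;
SELECT * FROM table WHERE (...) LIMIT 1 FOR UPDATE;  -- the selected row is now locked
-- check the condition in application code
UPDATE table SET column = 'value' WHERE (...);
-- ... INSERT other data ...
COMMIT;  -- releases the lock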
START TRANSACTION WITH CONSISTENT SNAPSHOT will not do the trick for you, as other transactions can still come along and modify that row. This is mentioned right at the top of the link below.
If other sessions simultaneously update the same table [...] you may see the table in a state that never existed in the database.
http://dev.mysql.com/doc/refman/5.0/en/innodb-consistent-read.html
Transaction concepts and locks are different. However, transactions use locks to help them follow the ACID principles.
If you want to prevent others from reading/writing a table at the same point in time while you are reading/writing it, you need a lock.
If you want to ensure data integrity and consistency, you had better use transactions.
I think you have mixed up the concept of isolation levels in transactions with locks.
Please look up the isolation levels of transactions; SERIALIZABLE should be the level you want.
I had a similar problem when attempting an IF NOT EXISTS ... check followed by an INSERT, which caused a race condition when multiple threads were updating the same table.
I found the solution to the problem here: How to write INSERT IF NOT EXISTS queries in standard SQL
I realise this does not directly answer your question, but the same principle of performing a check and insert as a single statement is very useful; you should be able to modify it to perform your update.
I'd use a
START TRANSACTION WITH CONSISTENT SNAPSHOT;
to begin with, and a
COMMIT;
to end with.
Anything you do in between is isolated from the other users of your database, provided your storage engine supports transactions (which InnoDB does).
You are confusing locks and transactions. They are two different things in an RDBMS: locks prevent concurrent operations, while transactions focus on data isolation. Check out this great article for clarification and a graceful solution.