Transaction lock in SQL

Two processes are running simultaneously on a table. One is updating the records; the other is reading the data. While updating, it locks the table, so I am unable to read the data. Please help me handle this.
Thanks,
Bibhu

I am assuming you are using SQL Server. You can use the WITH NOLOCK hint in your select statement, so the read goes through. However, you should be aware that it is a dirty read. Since the update statement is changing the data and the changes may or may not have been committed, the result of the read (select) statement can be problematic.
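For example, a minimal sketch (the table and column names here are placeholders, not from your schema):

-- Placeholder names: dbo.Orders / Status stand in for the actual table and columns.
-- WITH (NOLOCK) lets the SELECT read past the locks held by the running UPDATE,
-- but it may return uncommitted (dirty) data.
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK)
WHERE Status = 'Pending';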
Here is a link to a page with more details and simple examples
https://www.mssqltips.com/sqlservertip/2470/understanding-the-sql-server-nolock-hint/

Related

Method for updating tables that users are looking at?

I'm looking for a method or solution that allows a table to be updated while others are running select queries on it.
We have an MS SQL Database storing tables which are linked through ODBC to an Access Database front-end.
We're trying to have a query run an update on one of these linked tables, but it is often interrupted by users running select statements on the table to look at data through forms inside Access.
Is there a way to maybe create a copy of this database table for the users to look at so that the table can still be updated?
I was thinking maybe a transaction, but can you perform transactions for select statements? Do they work that way?
The error we get from inside access when we try to run the update while a user has the table open is:
Any help is much appreciated,
Cheers
As a general rule, this should not be occurring. Those reports should not lock the table, nor prevent SQL Server from allowing inserts.
For a quick fix, you can (should) link the reports to some SQL Server views as their source. And use this for the view:
SELECT * from tblHotels WITH (NOLOCK)
In fact, in MOST cases this locking occurs due to combo boxes being driven by a larger table from SQL Server. If the query does not complete (and Access has the nasty ability to STOP the flow of data), then you get a SQL Server table lock.
You also can see the above "holding" of a lock when you launch a form with a LARGE dataset. If Access does not finish pulling the table/query from SQL Server, again a holding lock on the table can remain.
However, as a general rule, I have NOT seen this occur for reports.
However, it is not at all clear how the reports are being used and how their data sources are set up.
But, as noted, the quick fix is to create some views for the reports and use the no-lock hint as per above. That will keep the reports from holding locks on the tables.
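As a rough sketch (using the tblHotels name from the example above; the view name itself is just a placeholder):

-- Link the Access reports/forms to this view instead of the base table.
-- Reads through it are dirty reads, so uncommitted changes may show up.
CREATE VIEW dbo.vwHotelsReadOnly
AS
SELECT * FROM dbo.tblHotels WITH (NOLOCK);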
Another HUGE idea? If the reports often use some date range or other criteria, MAKE 100% sure that SQL Server has an index on those filter criteria. If you don't, then SQL Server will scan/lock the whole table. This advice ALSO applies VERY much to, say, a form in which you filter - put indexing (SQL Server side) on those commonly used columns.
And in fact, the notes about the combo box above? We found that JUST adding an index to the sort column used in the combo box made most if not all locking issues go away.
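For example (a sketch only; the column names are placeholders for whatever the reports and combo boxes actually filter or sort on):

-- Index the columns used in report date-range filters and combo box sorts.
CREATE NONCLUSTERED INDEX IX_tblHotels_BookingDate ON dbo.tblHotels (BookingDate);
CREATE NONCLUSTERED INDEX IX_tblHotels_HotelName ON dbo.tblHotels (HotelName);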
Another fix that often works - and requires ZERO changes to the ms-access client side software?
You can change the transaction isolation level on the SQL Server side:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
In most cases the above will also fix the locking issue.

SQL - Why SELECT query has been BLOCKED by INSERT?

I always thought the writers never block the readers (and vice versa).
However, what I am seeing right now is very strange. I'm probably wrong and missing something here, so please help, as this is driving me crazy!
Today I created a very simple table:
USE [testdb]
GO
CREATE TABLE [dbo].[MyTab](
[N] [int] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[MyTab] WITH CHECK ADD CHECK (([n]>(10)))
GO
Then I populated it with a few rows.
Next I decided to set IMPLICIT_TRANSACTIONS to ON, run an insert, and then select from the table in another session, in a completely separate instance of SQL Server Management Studio. The snapshot below demonstrates what happened:
You see the issue? The select query is still executing! This query never returns. It returns only after I commit or rollback the insert statement. I tested the same scenario multiple times and the same thing happened over and over again.
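The scenario boils down to something like the following two sessions (a sketch; the value 11 is just a number that satisfies the check constraint):

-- Session 1: implicit transaction left open, so the INSERT is not yet committed
SET IMPLICIT_TRANSACTIONS ON;
INSERT INTO dbo.MyTab (N) VALUES (11);   -- holds an exclusive lock on the new row

-- Session 2: separate SSMS window, default READ COMMITTED isolation
SELECT * FROM dbo.MyTab;                 -- blocks until session 1 commits or rolls back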
To further confirm my observation, have a look at the below report:
Can you help and let me know what I am doing wrong, or whether my assumption that readers cannot be blocked is entirely (or partially) wrong?
Thanks!
Note: Initially I was connected as the same user as the insert session when I wanted to query the table. Once I saw my select was blocked, I decided to log in and test using another user. Hence the use of the 'sa' account. :)
Readers do not block writers (and vice versa) in SQL Server if you turn on the READ_COMMITTED_SNAPSHOT database option. SQL Server will then use row versioning instead of locking to provide read consistency for READ_COMMITTED transactions, behaving similarly to the Oracle DBMS you are more familiar with.
READ_COMMITTED_SNAPSHOT is on by default in Azure SQL Database but off by default in on-prem SQL Server versions, for backwards compatibility.
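For reference, turning it on for the test database from the question would look something like this (note that the ALTER needs exclusive access to the database to take effect):

-- Switches READ COMMITTED transactions to statement-level row versioning.
ALTER DATABASE [testdb] SET READ_COMMITTED_SNAPSHOT ON;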
After doing some research I just realized what the issue was. In SQL Server readers and writers actually do block each other sometimes. This is unlike Oracle where readers and writers never block each other.
Further Explanation
I am an Oracle DBA and do not know too much about SQL Server. My observation today came as a surprise to me because in Oracle I had never seen a select query get blocked by an insert statement. As per Oracle's documentation:
Readers and writers do not block one another in Oracle Database. Therefore, while queries still see consistent data, both read committed and serializable isolation provide a high level of concurrency for high performance, without the need for reading uncommitted data.
This is entirely different from SQL Server where read queries may be blocked under certain circumstances.

Will 2 select statements cause deadlock to each other?

My understanding is that 2 select statements will not cause a deadlock with each other.
Recently, I have been getting an error in my code, maybe once or twice a week: a deadlock error when my code queries a view. The table is maintained by a third-party application. We are not allowed to modify it. We just query the data via a view created from that table.
My understanding is that even if I have multiple threads querying this view, it won't cause a deadlock. I am sure that the rows I am querying will never be updated. So can the deadlocks here only be caused by multiple select statements?
If so, is there a way to overcome this issue?
I have read the post Transaction deadlock for select query.
I am using SQL Server 2008; there are no Extended Events in 2008, so that post does not give a solution for this issue. Does anyone know how to solve it?
Thank you.

Full SQL Statement History

I am facing a problem on a particular table in my database. Rows are being deleted without any apparent reason (I have some procedures and triggers that modify the information inside the table, but they have already been tested).
So I need to see which DML statements are executed against the table.
I have already tried some methods, like using this query:
select SQL_FULLTEXT, FIRST_LOAD_TIME, ROWS_PROCESSED, PARSING_SCHEMA_NAME from v$sql;
filtering by the name of my table, and I have also tried the SQL log.
Neither method shows me the complete history of SQL executed (for example, I can't see the statements executed by the procedures).
Can anyone give me some advice on where I can see ALL the DML executed in the database?
You're using a few terms that aren't defined within the context of Oracle Database, both 'sentence' and 'register' (presumably you mean SQL statements and rows).
However.
If you want to see WHO is touching your data in a bad place, causing it to be deleted or changed, then you have 2 options.
Immediately, check your REDO logs. We have a package, dbms_logmnr, that will allow you to see what activity has been logged. Assuming that your tables weren't created with the NOLOGGING clause, those UPDATEs and DELETEs should be recorded.
Tim has a nice article on this feature here.
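A minimal LogMiner sketch (the log file path, schema, and table names are placeholders; this assumes the online catalog is used as the dictionary):

-- Register an archived (or online) redo log, then start LogMiner.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/redo_0001.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Find the DELETEs against the table, including who issued them and the undo SQL.
SELECT username, timestamp, operation, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner = 'YOUR_SCHEMA'
AND    table_name = 'YOUR_TABLE'
AND    operation = 'DELETE';

-- Close the LogMiner session.
EXECUTE DBMS_LOGMNR.END_LOGMNR;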
The better solution going forward is AUDITING. You'll want to enable auditing in the database to record WHO is doing WHAT to your tables/data. Standard auditing is included with the database (fine-grained auditing is an Enterprise Edition feature). There is a performance hit: the more you decide to record, the more resources it will require. But it will probably be worth paying that price. And of course you'll have to manage the space required to maintain those audit records.
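A sketch of the traditional audit setup (the schema and table names are placeholders; changing audit_trail requires a restart):

-- Write the audit trail to the database, including the SQL text (restart required).
ALTER SYSTEM SET audit_trail=db,extended SCOPE=SPFILE;

-- One audit record per DELETE or UPDATE statement executed against the table.
AUDIT DELETE, UPDATE ON your_schema.your_table BY ACCESS;

-- Later, see who did what and when.
SELECT username, timestamp, action_name, sql_text
FROM   dba_audit_trail
WHERE  obj_name = 'YOUR_TABLE';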
Now, as to 'SQL Developer' and its HISTORY feature: it ONLY records what you are executing in a SQL Worksheet. It won't see what others are doing. It can't help you here, unless this is a 1-man database and you're only making changes with SQL Developer. Even then, it wouldn't be reliable, as it has a limit and only records changes done via the Worksheet.

Find details of a query or statement that caused an unexpected table update

We have been having problems with ghost updates in our DB (SQL Server 2005). Fields are changing, and we cannot find the routine that is performing the update.
Is there any way (perhaps using an update trigger?) to determine what caused the update: the SQL statement, process, username/login, etc.?
Use SQL Server Profiler
You'll probably want to filter away the things you don't need, so it might take a while to get it set up.
At least it'll get you to the procedure/query that is responsible, as well as the user/computer making the alterations, which leaves finding that in your code.
I found an article that might help you out over here:
http://aspadvice.com/blogs/andrewmooney/archive/2007/08/20/SQL-Server-2005-Audit-Log-Using-Triggers.aspx
All the information that you are asking for is available at the time the update is performed. SQL Profiler will certainly work, but it is a bit of work to craft a filter that does not overwhelm you with data, particularly if you need to run it for days or weeks at a time. An update trigger is easy enough to create, and you can log the information that you need in a new table.
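A minimal sketch of such a trigger (the audit table, trigger, and column names here are all placeholders; adapt them to the real table):

-- Hypothetical audit table: records who changed what, and from where.
CREATE TABLE dbo.MyTable_UpdateLog (
    LogID     int IDENTITY(1,1) PRIMARY KEY,
    LoggedAt  datetime      NOT NULL DEFAULT GETDATE(),
    LoginName nvarchar(128) NOT NULL DEFAULT SUSER_SNAME(),
    HostName  nvarchar(128) NOT NULL DEFAULT HOST_NAME(),
    AppName   nvarchar(128) NOT NULL DEFAULT APP_NAME(),
    KeyValue  int           NOT NULL,
    OldValue  nvarchar(255) NULL,
    NewValue  nvarchar(255) NULL
);
GO
CREATE TRIGGER dbo.trg_MyTable_Update ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserted/deleted hold the new and old row images for the updated rows.
    INSERT INTO dbo.MyTable_UpdateLog (KeyValue, OldValue, NewValue)
    SELECT d.ID, d.SomeColumn, i.SomeColumn
    FROM   deleted d
    JOIN   inserted i ON i.ID = d.ID
    WHERE  ISNULL(d.SomeColumn, '') <> ISNULL(i.SomeColumn, '');
END;

The SUSER_SNAME(), HOST_NAME(), and APP_NAME() defaults capture the login, machine, and application doing the update, which usually narrows down the responsible routine.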
I would probably use AutoAudit to generate triggers on the table first.
It's somewhat limited in terms of knowing exactly what is changing your data, but it's a start.
You could always look at the triggers it generates and modify them to only log certain columns you are interested in, and perhaps capture more information that it doesn't currently log.