SQL - Why has my SELECT query been blocked by an INSERT?

I always thought that writers never block readers (and vice versa).
However, what I am seeing right now is very strange. I'm probably wrong and missing something here, so please help, as this is driving me crazy!
Today I created a very simple table:
USE [testdb]
GO
CREATE TABLE [dbo].[MyTab](
[N] [int] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[MyTab] WITH CHECK ADD CHECK (([n]>(10)))
GO
Then I populated it with a few rows.
Next I decided to set IMPLICIT_TRANSACTIONS to ON, insert a row without committing, and then select from the table in another session in a completely separate instance of SQL Server Management Studio. The snapshot below demonstrates what happened:
You see the issue? The select query is still executing! This query never returns; it returns only after I commit or roll back the insert statement. I tested the same scenario multiple times and the same thing happened over and over again.
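For anyone who wants to reproduce this, here is a minimal sketch of the two sessions, using the table created above (the inserted value is arbitrary, as long as it passes the check constraint):
-- Session 1
USE [testdb];
SET IMPLICIT_TRANSACTIONS ON;
INSERT INTO dbo.MyTab (N) VALUES (20); -- implicitly opens a transaction and takes an exclusive row lock
-- no COMMIT or ROLLBACK yet

-- Session 2 (a separate connection)
USE [testdb];
SELECT * FROM dbo.MyTab; -- under the default READ COMMITTED level this blocks until session 1 ends its transaction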
To further confirm my observation, have a look at the report below:
Can you help and let me know what I am doing wrong, or whether my assumption that readers cannot be blocked is entirely (or partially) wrong?
Thanks!
Note: Initially I was connected as the same user as the insert session when I wanted to query the table. Once I saw my select was blocked, I decided to log in and test using another user. Hence the use of the 'sa' account. :)

Readers do not block writers (and vice versa) in SQL Server if you turn on the READ_COMMITTED_SNAPSHOT database option. SQL Server will then use row versioning instead of locking to provide read consistency for READ_COMMITTED transactions, behaving similarly to the Oracle DBMS you are more familiar with.
READ_COMMITTED_SNAPSHOT is on by default in Azure SQL Database, but not in on-premises SQL Server versions, for backwards compatibility.
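For reference, turning the option on for the test database above is a one-liner (the WITH ROLLBACK IMMEDIATE clause kicks out other active connections, which the ALTER otherwise waits for):
ALTER DATABASE testdb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
-- verify the setting
SELECT name, is_read_committed_snapshot_on FROM sys.databases WHERE name = 'testdb';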

After doing some research I just realized what the issue was. In SQL Server readers and writers actually do block each other sometimes. This is unlike Oracle where readers and writers never block each other.
Further Explanation
I am an Oracle DBA and do not know much about SQL Server databases. My observation today came as a surprise to me because in Oracle I had never seen a select query get blocked by an insert statement. As per Oracle's documentation:
Readers and writers do not block one another in Oracle Database. Therefore, while queries still see consistent data, both read committed and serializable isolation provide a high level of concurrency for high performance, without the need for reading uncommitted data.
This is entirely different from SQL Server where read queries may be blocked under certain circumstances.

Full SQL Statement History

I am facing a problem with a particular table in my database. Rows are being deleted for no apparent reason (I have some procedures and triggers that modify the information inside the table, but they have already been tested).
So I need to see which DML statements are executed against the table.
I have already tried some methods, like using this query:
select SQL_FULLTEXT, FIRST_LOAD_TIME, ROWS_PROCESSED, PARSING_SCHEMA_NAME from v$sql;
filtering by the name of my table, and I have also tried the SQL log.
Neither method shows me the complete history of SQL executed (for example, I can't see the statements executed by the procedures).
Can anyone give me some advice on where I can see ALL the DML executed in the database?
You're using a few terms that aren't defined within the context of Oracle Database, both 'sentence' and 'register.'
However.
If you want to see WHO is touching your data in a bad place, causing it to be deleted or changed, then you have 2 options.
Immediately, check your REDO logs. We have a package, dbms_logmnr, that will allow you to see what activity has been logged. Assuming that your tables weren't created with the NOLOGGING clause, those UPDATEs and DELETEs should be recorded.
Tim has a nice article on this feature here.
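A minimal LogMiner sketch, assuming you can use the online catalog as the dictionary; the log file path and the table name are placeholders:
-- add a redo or archived log file to the LogMiner session
EXEC DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_0001.log', OPTIONS => DBMS_LOGMNR.NEW);
EXEC DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
-- see who deleted from the table and what was run
SELECT scn, timestamp, username, operation, sql_redo
FROM   v$logmnr_contents
WHERE  seg_name = 'MYTABLE' AND operation = 'DELETE';
EXEC DBMS_LOGMNR.END_LOGMNR;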
The better solution going forward is AUDITING. You'll want to enable auditing in the database to record WHO is doing WHAT to your tables/data. This is included as part of the Enterprise Edition of the database. There is a performance hit: the more you decide to record, the more resources it will require. But it will probably be worth paying that price. And of course you'll have to manage the space required to maintain those logs.
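A sketch of classic auditing for a single table, assuming AUDIT_TRAIL is set to DB (the schema and table names are placeholders):
-- as a suitably privileged user
AUDIT INSERT, UPDATE, DELETE ON myschema.mytable BY ACCESS;
-- later, review what was recorded
SELECT username, action_name, timestamp, obj_name
FROM   dba_audit_trail
WHERE  obj_name = 'MYTABLE'
ORDER  BY timestamp;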
Now, as to 'SQL Developer' and its HISTORY feature. It ONLY records what you are executing in a SQL Worksheet. It won't see what others are doing. It can't help you here, unless this is a one-man database and you're only making changes with SQL Developer. Even then it wouldn't be reliable, as it has a limit and only records changes done via the Worksheet.

Really slow schema information queries on SQL Server 2005

I have a database with a rather large number of tables, about 3500, and an application that needs to access a table list.
On a particular server this takes over 2.5 min to return.
EXEC sp_tables @table_type = "'TABLE'"
I know there are faster ways to do that but sadly I'm not in a position to modify the application and need to find a way to push it below 30 seconds so the application doesn't throw timeout errors.
So. What, if anything, can I do to improve the performance of this sp within SQL Server?
I have seen these stored procedures run slowly if you do not have the VIEW DEFINITION permission granted to your user account. From what I have read, this causes a security check to occur, slowing down the query.
Maybe a SQL guru can comment on why, if this does help your problem.
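If you want to test that theory, the grant itself is a one-liner (the user name is a placeholder; the permission can also be granted per schema or per object):
USE [yourdb];
GRANT VIEW DEFINITION TO [app_user];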
Well, sp_tables is system code and can't be changed (you could work around that in SQL Server 2000, but not in SQL Server 2005+).
Your options are
Change the SQL
Change command timeout
Bigger server
You've already said "no" to the obvious solutions...
You need to approach this just like any other performance problem. Why is it slow? Namely, where does it block? Disk IO? CPU? Network? Lock contention? The scientific method is to use a methodology like Waits and Queues, or its newer SQL 2008 equivalent Troubleshooting Performance Problems in SQL Server 2008. The lazy way is to simply check the wait_type, wait_time and wait_resource columns in sys.dm_exec_requests for the session executing the sp_tables call. Once you find out what is blocking the execution, you can proceed accordingly.
If I were to venture a guess, you'll discover contention as the cause: other sessions are locking the table metadata exclusively and thus block the execution of sp_tables, which has to wait until all operations in front of it finish.
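The lazy check mentioned above boils down to something like this, run while the slow sp_tables call is executing (52 is a placeholder session id):
SELECT session_id, status, wait_type, wait_time, wait_resource, blocking_session_id
FROM   sys.dm_exec_requests
WHERE  session_id = 52;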

Is there any single sql command to export entire database from one sql server to another?

In an interview I was asked to name the different ways of exporting a database from one SQL Server to another. I knew only about creating a .bak file and then restoring it on the other SQL Server, which is what I told them. However, they asked me about a single SQL INSERT command which would perform this task.
I have googled it and cannot find it. Please tell me if there is any such command?
I have never heard of such a command, and this is the MS support article that tells you how to move databases between servers. It gives three options, none of which is a single insert statement; the closest is using sp_detach_db and sp_attach_db.
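For completeness, the detach/attach route from that article looks roughly like this (database name and file paths are placeholders; newer versions prefer CREATE DATABASE ... FOR ATTACH over sp_attach_db):
EXEC sp_detach_db @dbname = N'mydb';
-- copy mydb.mdf and mydb_log.ldf to the new server, then on that server:
EXEC sp_attach_db @dbname = N'mydb',
     @filename1 = N'C:\Data\mydb.mdf',
     @filename2 = N'C:\Data\mydb_log.ldf';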
Well, with a SQL statement you can do a backup and a restore. Doing it with one SQL INSERT... I've never heard of anything like this. Maybe for one table, but not for the whole database.
The other way would be to use the "Copy Database Wizard".
I also conduct interviews, and sometimes you just ask about things that do not exist or do not work and see what happens.
If you had a linked server already, I would guess you could use sp_msforeachtable around an INSERT INTO server2.tbl SELECT * FROM tbl.
But that's not going to handle referential integrity order dependencies or scenarios where you might need IDENTITY INSERT, disabling triggers or whatever. Handling trivial cases is usually, by definition, trivial.
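For what it's worth, a sketch of that idea; sp_MSforeachtable is undocumented, the linked server and database names are placeholders, and the ? placeholder expands to the schema-qualified table name:
-- run on the source server; [server2] must already exist as a linked server
EXEC sp_MSforeachtable 'INSERT INTO [server2].[targetdb].? SELECT * FROM ?';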
You need to look into linked servers:
http://www.databasejournal.com/features/mssql/article.php/3085211/Linked-Servers-on-MS-SQL-Part-1.htm
http://www.databasejournal.com/features/mssql/article.php/3691721/Setting-up-a-Linked-Server-for-a-Remote-SQL-Server-Instance.htm

Is it possible to run multiple DDL statements inside a transaction (within SQL Server)?

I'm wondering if it is possible to run multiple DDL statements inside a transaction. I'm especially interested in SQL Server, even though answers for other databases (Oracle and PostgreSQL at least) could also be interesting.
I've been doing a "CREATE TABLE" and a "CREATE VIEW" for the created table inside a transaction, and there seem to be some inconsistencies, so I'm wondering if the DDL shouldn't be done inside the transaction...
I could probably move the DDL outside the transaction, but
I'd like to get some reference for this. What I have found so far:
The MSDN page Isolation Levels in the Database Engine clearly states that there are restrictions on which DDL operations can be performed in an explicit transaction that is running under snapshot isolation, but I'm not using snapshot isolation, and that restriction would show up as an error anyway.
Could this be interpreted to mean that DDL operations can be performed in an explicit transaction under other isolation levels?
The Oracle® Database Gateway for SQL Server User's Guide#DDL Statements states that only one DDL statement can be executed in a given transaction. Is this also valid for SQL Server used directly?
For Oracle:
Within the SO question Unit testing DDL statements that need to be in a transaction it is said that Oracle does an implicit commit for a DDL statement (even though no references are given).
If it matters something, I'm doing this with Java through the JTDS JDBC driver.
b.r. Touko
I know most databases have restrictions, but Postgres doesn't. You can run any number of table creations, column changes and index changes in a transaction, and the changes aren't visible to other users until COMMIT succeeds. That's how databases should be! :-)
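For example, this is a perfectly ordinary Postgres transaction, and neither object is visible to other sessions before the COMMIT:
BEGIN;
CREATE TABLE t1 (id integer);
CREATE VIEW v1 AS SELECT id FROM t1;
COMMIT; -- t1 and v1 become visible to other sessions only now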
As for SQL Server, you can run DDL inside a transaction, but SQL Server does not version metadata, so the changes would be visible to others before the transaction commits. Some DDL statements can be rolled back if you are in a transaction, but to find out which ones work and which ones don't, you'll need to run some tests.
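A quick test of the rollback behaviour for CREATE TABLE, for instance:
BEGIN TRANSACTION;
CREATE TABLE dbo.ddl_test (id int);
ROLLBACK;
SELECT OBJECT_ID('dbo.ddl_test'); -- NULL: the CREATE TABLE was rolled back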
If you are creating tables, views, etc on the fly (other than table variables or temp tables), you may truly need to rethink your design. This is not stuff that should normally happen from the user interface. Even if you must allow some customization, the DDL statements should not be happening at the same time as running transactional inserts/updates/deletes. It is far better to separate these functions.
This is also something that needs a healthy dose of consideration and testing as to what happens when two users try to change the structure of the same table at the same time and then run a transaction to insert data. There's some truly scary stuff that can happen when you allow users to make adjustments to your database structure.
Also some DDL statements must always be the first statement of a batch. Look out for that too when you are running them.
For the general case and IIRC, it's not safe to assume DDL statements are transactional.
That is to say, there is a great deal of leeway in how schema alterations interact within a transaction (assuming they do at all). This can vary by vendor or even by the particular installation (i.e., it's up to the DBA), I believe. So at the very least, don't use one DBMS to assume that others will treat DDL statements the same way.
Edit: MySQL is an example of a DBMS which doesn't support DDL transactions at all. Also, if you have database replication/mirroring you have to be very careful that the replication service (Sybase's replication is the norm, believe it or not) will actually replicate the DDL statement.
Could it be that in MS SQL, implicit transactions are triggered when DDL and DML statements are run? If you toggle this off, does that help? Use
SET IMPLICIT_TRANSACTIONS OFF
EDIT: another possibility:
- You can't combine CREATE VIEW with other statements in the same batch. CREATE TABLE is OK.
You separate batches with GO.
EDIT2: You CAN use multiple DDL statements in a transaction as long as they are separated with GO into different batches, as in the sketch below.
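In SSMS that could look like the following; the transaction stays open across the batches because they run on the same connection:
BEGIN TRANSACTION;
CREATE TABLE dbo.T1 (id int);
GO
CREATE VIEW dbo.V1 AS SELECT id FROM dbo.T1;
GO
COMMIT;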

MS SQL Concurrency, excess Locks

I have a database on MS SQL 2000 that is being hit by hundreds of users at a time. There are intense reports using Reporting Services 2005 hitting the same database.
When there are lots of reports running and people are using the database concurrently, we see blocking processes to the point that the system starts giving timeouts to any transaction made after some time in that situation.
Is there a global way to minimize blocking so the transactions can continue to flow?
Use optimistic locking, if updates are not happening often and the database is mainly used for reporting.
SQL Server has quite a pessimistic locking default.
A look into SQL Server Table Hints might get you started.
The reports can use WITH(NOLOCK).
Other possibilities are having the reports run off a read-only replica of the database or running off a datawarehouse version of the database which is optimized for the reporting needs.
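A report query with the hint would look like this (table and column names are made up), with the usual caveat that NOLOCK means dirty reads, so uncommitted data can show up in the results:
SELECT order_id, order_total
FROM   dbo.Orders WITH (NOLOCK)
WHERE  order_date >= '20090101';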
Since you are already using NOLOCK hints and READ UNCOMMITTED isolation level for your reports, the investigation needs to turn to the transactional queries coming in. This may get deep. Perhaps applications are keeping transactions open too long. It may also be the case that you have a lot of table scans or range scans in some of the other query volume, and those may be holding shared locks for long-running transactions. Those shared locks will block your writers.
You need to start looking at sp_lock to see what kinds of locks are outstanding and what locks the blocked queries are trying to obtain, and then examine the queries that are blocking the requestors.
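On SQL Server 2000 that investigation looks something like this (52 is a placeholder spid taken from the sp_who2 output):
EXEC sp_lock;          -- all outstanding locks, by spid
EXEC sp_who2;          -- the BlkBy column shows who is blocking whom
DBCC INPUTBUFFER(52);  -- the last statement sent by a given spid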
This will help you if you are unfamiliar with SQL Server locking:
Understanding SQL Server 2000 Locking
Also, perhaps you could describe your disk subsystem. It may be undersized.
Thanks everyone for your support. What we did to mitigate the problem was to create a new database with a log shipping procedure every hour to keep it in sync with the real one. The reports that do not need real-time data were pointed to that database, and the ones that need real-time data were restricted so only a few people can access them. The drawbacks of this method are that the data can be up to one hour out of sync and that we needed to set up a new server for that purpose only. Also, when the log shipping procedure runs, every connection is dropped for a very short period of time, which can be a problem for really long procedures or reports. After this I will review the queries from the reports so I can understand what can be optimized. Thanks, and I will recommend the site to the whole IT department.