Simplest SQL query never returns

I have a SQL query that is quite simply select * from tblOrders where customerID = 5000, but it never returns. I waited 10 minutes and gave up.
The weirdest thing is that other queries on the same DB, but on other tables, work fine. Removing the WHERE clause doesn't help either, so it seems like the table is somehow not responding. It's only about 30,000 rows, so it's not a big table either.
I'm using SQL Server Management Studio 2008 Express against a SQL Server 2008 Express instance running on a remote server.

Try this to bypass any locks on the table:
select * from tblOrders with (nolock) where customerID = 5000

It sounds like your table is locked.
Run this query to see what locks are held against it:
USE master;
GO
EXEC sp_lock;
GO
but table locking is a whole minefield of its own.
Here is some info on the sp_lock system stored procedure:
http://msdn.microsoft.com/en-us/library/ms187749.aspx
When you find the blocking session, you can kill it:
KILL { session ID | UOW } [ WITH STATUSONLY ]
http://msdn.microsoft.com/en-us/library/ms173730.aspx
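On SQL Server 2005 and later you can also query the dynamic management views directly. A small sketch (the session ID in the commented-out KILL is just an example, not something to run blindly):
SELECT session_id, blocking_session_id, wait_type, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;   -- rows here are sessions currently being blocked
-- once you have confirmed the blocker is safe to terminate:
-- KILL 53;   -- hypothetical session ID taken from blocking_session_id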

I agree with the others that this is most probably a locking issue. By default, write access to a table still blocks read(-only) access.
Since SQL Server 2005 this can be addressed by enabling row versioning. You need to change the database settings to turn it on.
See the manual for a more detailed explanation:
http://msdn.microsoft.com/en-us/library/ms345124%28SQL.90%29.aspx
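A minimal sketch, assuming a database named YourDb:
-- Readers no longer block behind writers for plain SELECTs.
-- Needs the database free of other connections (or append WITH ROLLBACK IMMEDIATE).
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;
-- Optionally also allow explicit SNAPSHOT isolation transactions:
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;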

Related

Check database / server before executing query

I am frequently testing certain areas on a development server, so I run a pre-defined SQL statement to truncate the tables in question before testing again. It would only take a slip of a key to run it against the live server instead.
I'm looking for an IF statement or similar to prevent that.
It could check the server name, the database name, or even that a certain record exists in a different table before running the query.
Any help appreciated.
For such cases I use stored procedures; I'd call them TestTruncateTables, etc.
Then, instead of calling TRUNCATE TABLE directly, you call TestTruncateTables.
Just make sure the procedures are not created on the live server. If by any chance you happen to run TestTruncateTables on the live server, you only get an error about a non-existent procedure.
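If you prefer an inline guard instead, a sketch along these lines should work, assuming SQL Server (the question doesn't name the DBMS; server, database, and table names below are placeholders):
-- Abort the batch unless we are on the expected development server and database.
IF @@SERVERNAME <> 'DEV-SERVER' OR DB_NAME() <> 'MyDevDb'
BEGIN
    RAISERROR('Not the development environment - aborting truncate.', 16, 1);
    RETURN;
END;
TRUNCATE TABLE dbo.MyTestTable;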

Oracle SQL: how to do asynchronous queries

I have multiple select queries which I want to execute asynchronously.
How can I do this in Oracle SQL?
I basically want to test something and simulate workload, so I don't really care about the results. I know I could do this from multiple threads, but for this particular case I would prefer to do it entirely in SQL; procedures are fine though.
NOTE: there are no update queries, only selects.
I read about NOWAIT but am not sure how to use it in Oracle.
I tried something like:
select * from foo with(nowait) where col1="something";
This is the error I got -
with(nowait)
*
ERROR at line 3:
ORA-00933: SQL command not properly ended
The Oracle info on NOWAIT says:
Specify NOWAIT if you want the database to return control to you immediately
if the specified table, partition, or table subpartition is already locked by
another user. In this case, the database returns a message indicating that the
table, partition, or subpartition is already locked by another user.
This will not do what you want.
Asynchronous queries are an application thing, not a SQL thing. For example, I can open TOAD, open a dozen windows, run long queries in all of them, and still open another window and run another query. I could open a dozen instances of SQL*Plus and do the same thing. Nothing in the query itself lets me do this; it's in the application.
I think you could use DBMS_SCHEDULER to schedule some SQL or procedures that execute SQL (a rough sketch follows this answer).
However, this is probably not the best way to do it.
There are tools for this. The best way may be to write a procedure you can call from the web; then you can use any performance-testing tool that can make a web call... it's worked for me before.
You may also consider:
http://sqlmag.com/database-performance-tuning/testing-heavy-load-simulating-multiple-concurrent-operations
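A rough sketch of the DBMS_SCHEDULER idea above, assuming a table named foo (job names and the query are placeholders); each job runs in its own session, so the loop returns immediately while the SELECTs run in the background:
BEGIN
  FOR i IN 1 .. 10 LOOP
    DBMS_SCHEDULER.CREATE_JOB(
      job_name   => 'LOAD_TEST_JOB_' || i,
      job_type   => 'PLSQL_BLOCK',
      job_action => 'DECLARE n NUMBER; BEGIN SELECT COUNT(*) INTO n FROM foo; END;',
      enabled    => TRUE,     -- start as soon as the job is created
      auto_drop  => TRUE);    -- drop the job definition once it completes
  END LOOP;
END;
/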

SQL Server Update Permissions

I'm currently working with SQL Server 2008 R2, and I have only READ access to a few tables that house production data.
I'm finding that in many cases it would be extremely nice if I could run something like the following and get back the total count of records affected:
USE DB
GO
BEGIN TRANSACTION
UPDATE Person
SET pType = 'retailer'
WHERE pTrackId = 20
AND pWebId LIKE 'rtlr%';
ROLLBACK TRANSACTION
However, seeing as I don't have the UPDATE permission, I cannot run this script without getting:
Msg 229, Level 14, State 5, Line 5
The UPDATE permission was denied on the object 'Person', database 'DB', schema 'dbo'.
My questions:
Is there any way that my account in SQL Server can be configured so that if I run an UPDATE script, it is automatically wrapped in a transaction with a rollback (so no data is actually affected)?
I know I could make a copy of that data and run my script against a local SSMS instance, but I'm wondering if there is a permission-based way of accomplishing this.
I don't think there is a way to bypass SQL Server permissions, and I don't think it's a good idea to develop against a production database anyway. It would be much better to have a development copy of the database to work with.
If the number of affected rows is all you need, then you can run a SELECT instead of the UPDATE.
For example:
select count(*)
from Person
where pTrackId = 20
AND pWebId LIKE 'rtlr%';
If you are only after the number of rows that would be affected by this update, that is the same number of rows that currently match the WHERE clause.
So you can just run a SELECT statement as such:
SELECT COUNT(pType)
FROM Person WHERE pTrackId = 20
AND pWebId LIKE 'rtlr%';
And you'd get the resulting potential rows affected.
1. First, log in to SQL Server as an admin.
2. Go to Logins -> your login -> check the roles.
3. If you have write access, you can accomplish the above task.
4. If not, make sure write access is granted.
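For reference, granting write access on a single table looks roughly like this (the login/role name is a placeholder), although the whole point of the question is to avoid handing out UPDATE rights on production:
GRANT UPDATE ON OBJECT::dbo.Person TO [SomeLoginOrRole];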
If it's strictly necessary to try the update, you could write a stored procedure, accepting dynamic SQL as a string (Your UPDATE query) and wrapping the dynamic SQL in a transaction context which is then rolled back. Your account could then be granted access to that stored procedure.
Personally, I think that's a terrible idea, and incredibly unsafe - some queries break out of such transaction contexts (e.g. ALTER TABLE). You may be able to block those somehow, but it would still be a security/auditing problem.
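For completeness, such a procedure could look roughly like this sketch (the name is made up, it inherits the caveats above, and it assumes the last statement in the dynamic SQL is the UPDATE so that @@ROWCOUNT reflects it):
CREATE PROCEDURE dbo.DryRunUpdate
    @sql nvarchar(max)
AS
BEGIN
    SET XACT_ABORT ON;                  -- don't leave the transaction open on error
    BEGIN TRANSACTION;
    EXEC sp_executesql @sql;            -- run the caller's UPDATE
    SELECT @@ROWCOUNT AS RowsAffected;  -- rows the statement touched
    ROLLBACK TRANSACTION;               -- always undo the change
END;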
I recommend writing a query to count the relevant rows:
SELECT COUNT(*)
FROM --tables
WHERE --your where clause
-- any other clauses here e.g. GROUP BY, HAVING ...

SQL Server 2005: Why would a delete from a temp table hang forever?

DELETE FROM #tItem_ID
WHERE #tItem_ID.Item_ID NOT IN (SELECT DISTINCT Item_ID
FROM Item_Keyword
JOIN Keyword ON Item_Keyword.Keyword_ID = Keyword.Record_ID
WHERE Keyword LIKE #tmpKW)
The keyword is %macaroni%. Both temp tables are 3,300 rows. The inner SELECT executes in under a second. All strings are nvarchar(249). All IDs are int.
Any ideas? I executed it (it's in a stored proc) for over 12 minutes without it finishing.
Anything that has a lock on the table could prevent the query from proceeding. Do you have any other sessions open in SSMS where you have done something to one of the tables?
You can also use the system stored procedure sp_who2 to see if there are any open locks. Look in the BlkBy column to see if anything is waiting on a lock held by another process.
The classic case where SQL Server "hangs" is when you open a transaction but don't commit or rollback. Don't get so wrapped up in your actual delete (unless you are working with a truly huge dataset) that you fail to consider this possibility.
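A quick way to check for a forgotten open transaction, as a sketch:
SELECT @@TRANCOUNT;   -- greater than 0 means this session still has an open transaction
DBCC OPENTRAN;        -- reports the oldest active transaction in the current database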
When I run into issues like this I fire up SQL Heartbeat, and it can show me what is causing the conflict. It also shows deadlocks related to transactions that were not closed correctly, as Mark states above.
Sounds like you might want to read up on deadlocking...
** Ignore my answer - I'm leaving it up so that Dr. Zim's comment is preserved, but it is incorrect **

Why would this SELECT statement lock up on SQL Server?

I have a simple query like this
SELECT * FROM MY_TABLE;
When I run it, SQL Server Management Studio hangs.
Other tables and views are working fine.
What can cause this? I've had locks while running UPDATE statements before, and I know how to approach those. But what could cause a SELECT to lock?
I have run the "All Blocking Transactions" report, and it says there are none.
It is probably not the SELECT that is locking up, but some other process that is editing (update/delete/insert) the table and holding locks on it.
You can see which process is blocking by running exec sp_who2 on your SQL Server.
Alternatively, if you are OK with dirty reads, you can do one of two things
SELECT * FROM Table WITH (NOLOCK)
OR
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM Table
If there's a lot of other activity going on, something else might be causing locks, and your SELECT might be the deadlock victim. If you run the following
SELECT * FROM my_table WITH (NOLOCK)
you're telling the database that you're OK with reading dirty (uncommitted) data, and that locks caused by other activity can be safely ignored.
Also, if a query like that causes Management Studio to hang, your table might need some optimization.
Use this:
SELECT * FROM MY_TABLE with (NOLOCK)
Two possibilities:
It's a really massive table, and you're trying to return 500m rows.
Some other process has a lock on the table, preventing your select from going through until that lock is released.
MY_TABLE could also be locked up by an uncommitted transaction, i.e. a script or stored procedure running (or failed while running) in another SSMS window.