I am using SQL Server 2008 and its Management Studio. I executed a query that yields many rows. I tried to cancel it via the red cancel button, but it has not stopped for the past 10 minutes. It usually stops within 3 minutes.
What could the reason be, and how do I stop it immediately?
sp_who2 'active'
Check the values under CPUTime and DiskIO. Note the SPID of the process with comparatively large values.
kill {SPID value}
What could the reason be
A query cancel is immediate, provided that your attention signal can reach the server and be processed. A query must be in a cancelable state, which is almost always true except if you do certain operations like calling a web service from SQLCLR. If your attention signal cannot reach the server, it's usually due to scheduler overload.
But if your query is part of a transaction that must rollback, then rollback cannot be interrupted. If it takes 10 minutes then it needs 10 minutes and there's nothing you can do about it. Even restarting the server will not help, it will only make startup longer since recovery must finish the rollback.
To answer which specific reason applies to your case, you'll need to investigate yourself.
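One way to investigate is to look at `sys.dm_exec_requests` for the stuck session. This is a sketch; `81` is a placeholder SPID, substitute the one you see in `sp_who2`:

```sql
-- Check the state of a specific session (81 is a placeholder SPID).
-- percent_complete is populated for operations such as ROLLBACK,
-- BACKUP and RESTORE, so it shows rollback progress too.
SELECT session_id, command, status,
       percent_complete, estimated_completion_time
FROM sys.dm_exec_requests
WHERE session_id = 81;
```

If `command` shows `KILLED/ROLLBACK`, you are in the "rollback cannot be interrupted" case described above and can only wait.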
First, execute the command below:
sp_who2
Then execute the command below with the SPID you got from the previous command:
KILL {SPID value}
This is kind of a silly answer, but it works reliably, at least in my case:
In Management Studio, when "Cancel Executing Query" doesn't stop the query, I just click to close the current SQL document. It asks me if I want to cancel the query, I say yes, and lo and behold, in a few seconds it stops executing. After that it asks me if I want to save the document before closing. At this point I can click Cancel to keep the document open and continue working. No idea what's going on behind the scenes, but it seems to work.
If you cancel and it doesn't seem to stop, run
sp_who2 'active'
(FYI, Activity Monitor is not available on the old SQL Server 2000.)
Spot the SPID you wish to kill, e.g. 81:
Kill 81
Run sp_who2 'active' again and you will probably notice it is sleeping, rolling back.
To get the status, run the KILL again:
Kill 81
Then you will get a message like this:
SPID 81: transaction rollback in progress. Estimated rollback completion: 63%. Estimated time remaining: 992 seconds.
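SQL Server can also report this rollback progress without re-issuing the kill, via the STATUSONLY option (81 is the example SPID from above):

```sql
-- Reports rollback progress for a SPID that is already being killed;
-- it does not kill anything itself.
KILL 81 WITH STATUSONLY;
```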
First, display all running queries using the query below:
SELECT text, GETDATE(), *
FROM sys.dm_exec_requests
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
Find the Session-Id and description for all running queries, then copy the Session-Id of the specific query you want to kill immediately.
Kill the specific query using its Session-Id:
Kill Session-id
Example:
kill 125 --125 is my Session-Id
You can use a keyboard shortcut ALT + Break to stop the query execution. However, this may not succeed in all cases.
I have been suffering from the same thing for a long time. It especially happens when you're connected to a remote server (which might be slow), or you have a poor network connection.
I doubt Microsoft knows what the right answer is.
But since I've tried to find a solution, only one layman approach has worked:
Click the close button on the tab of the query you're suffering from.
After a while (if Microsoft is not harsh on you!) you might get a window asking this:
"The query is currently executing. Do you want to cancel the query?"
Click "Yes".
After a while it will ask whether you want to save this query or not.
Click "Cancel".
After that, your studio may be stable enough to execute your query again.
What it does in the background is disconnect your query window from the connection. So to run the query again, it will take time to connect to the remote server again. But trust me, this trade-off is far better than the suffering of watching that timer run for eternity.
PS: This works for me; kudos if it works for you too!
Apparently on SQL Server 2008 R2 64-bit, with a long-running query from IIS, KILL spid doesn't seem to work: the query just gets restarted again and again, and it seems to be reusing the SPIDs. The query is causing SQL Server to take about 35% CPU constantly and hang the website, I'm guessing because it can't respond to other queries for logging in.
A simple answer, if the red "stop" button is not working, is to try pressing Ctrl + Break on the keyboard.
If you are running SQL Server on Linux, there is an app you can add to your systray called "killall".
Just click the "killall" button, then click the program that is caught in a loop, and it will terminate the program.
Hope that helps.
In my case, SQL hung when I tried to close it while a query was endlessly running. So I opened Task Manager and ended the SQL task. This stopped SQL and restarted it.
If using the VSCode mssql extension, press F1, type mssql in the prompt, and choose 'Cancel query', as shown in this thread about the extension.
My studio version: Microsoft SQL Server Management Studio 18
I manually disabled the computer's network card, and the infinite-loop query was successfully terminated!
First I run this query to see the running queries:
select * from pg_stat_activity;
then I run this query to stop them:
SELECT pg_cancel_backend(pid);
But when I run the pg_stat_activity query again, it still shows all the queries!
Why didn't it kill the queries?
A number of possible explanations:
You're not looking at an active query; the query text shown is just the last query that ran on a currently idle backend. In that case pg_cancel_backend will do nothing, since there's nothing to cancel. Check the state field in pg_stat_activity.
The active query is running in extension code that does not CHECK_FOR_INTERRUPTS() during whatever it is doing. This is most typically the case when you're running some extension that does lots of CPU, I/O or network activity using its own libraries, sockets, etc. Particularly things like PL/Perl, PL/Python, etc.
The active query is running in PostgreSQL back-end code that doesn't check for interrupts in a long running loop or similar. This is a bug; if you find such a case, report it.
The backend is stuck waiting on a blocking operating system call, commonly disk I/O or a network socket write. It may be unable to respond to a cancel message until that blocking operation ends, though if it receives a SIGTERM its signal handler can usually (but not always) cause it to bail out.
In general it's safe to use pg_terminate_backend as a "bigger hammer". SIGTERM as sent by pg_terminate_backend() will often, but not always, cause a backend that can't respond to a cancel to exit.
Do not kill -9 (SIGKILL) a PostgreSQL backend (postgres process). It will cause the whole PostgreSQL server to emergency-restart to protect shared memory safety.
I should use pg_terminate_backend(pid) instead of pg_cancel_backend(pid).
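Putting the two together, a minimal sketch of the escalation (this targets every active backend other than your own; add a `pid = …` filter to hit one specific query):

```sql
-- Step 1: polite cancel of the active queries (backends stay alive).
SELECT pid, pg_cancel_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND pid <> pg_backend_pid();

-- Step 2: if a query survives the cancel, terminate its backend
-- entirely (the "bigger hammer" via SIGTERM).
SELECT pid, pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'active'
  AND pid <> pg_backend_pid();
```

Both functions return true if the signal was sent, not that the query is already gone, so re-check pg_stat_activity afterwards.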
Quick question: is exiting out of TOAD (for Oracle) while it is trying to cancel a pending query harmful?
Should I be letting this dialog box run its course?
I did have the screenshot but am unable to post pics until I have 10 reps.
EDIT: It has been going for around 30 minutes now.
EDIT2: I should mention it is not an update query, purely search.
Thanks,
When this happens and I've already waited long enough (and the Cancel button has no effect), I open Task Manager and apply "End Process Tree" command on the Toad.exe process.
If a database connection is lost, all uncommitted changes made are automatically rolled back by the database. So it is not harmful.
Once I investigated this by looking up the sessions list. It looks like this happens when Toad somehow loses the connection to the server in the midst of executing a query.
When you wonder why the query is taking so long (when it shouldn't) and click the Cancel button, Toad enters a state of "limbo" where it's waiting for the result of the cancel operation from the server (not aware of connection loss).
The problem is that there is no way to stop this waiting and go back to normal. This is a bug in Toad. There is no other way around this. I am not sure when they will fix it, if at all.
I just had the same problem; it had been like this for days.
One solution to cancel a running process in TOAD that keeps processing (or canceling) for hours is to simply disable the internet connection, which auto-cancels the TOAD process.
Later, go to Session > Test Connection to reconnect to the server.
These steps helped me, at least:
Session -> Test Connection (Reconnect).
This step will take some time; have patience (you may get "TOAD: Not Responding").
Debug -> Halt Execution.
At least you do not have to forcibly kill the process from Task Manager.
Keep querying :)
If you've got sys/DBA-level access to the target database, then go and kill the TOAD session. I wrote the following query to identify and kill my TOAD session, which had been running for 30 minutes and wasting my precious time.
SQL> select sid, username, serial#, status from v$session where machine like '%ACC%';
SID  USERNAME  SERIAL#  STATUS
989  SYS       1307     INACTIVE
991  PIN       15780    ACTIVE
SQL> alter system kill session '991,15780' immediate;
ORA-00031: session marked for kill
And boom! TOAD returned the session and control :)
Regards,
Sayeed Shaikh
Here's the sequence of events my hypothetical program makes...
Open a connection to server.
Run an UPDATE command.
Go off and do something that might take a significant amount of time.
Run another UPDATE that reverses the change in step 2.
Close connection.
But oh-no! During step 3, the machine running this program literally exploded. Other machines querying the same database will now think that the exploded machine is still working and doing something.
What I'd like to do, is just as the connection is opened, but before any changes have been made, tell the server that should this connection close for whatever reason, to run some SQL. That way, I can be sure that if something goes wrong, the closing update will run.
(To pre-empt the answer, I'm not looking for table/record locks or transactions. I'm not doing resource claims here.)
Many thanks, billpg.
I'm not sure there's anything built in, so I think you'll have to do some bespoke stuff...
This is totally hypothetical and straight off the top of my head, but:
Take the SPID of the connection you opened and store it in some temp table, with the text of the reversal update.
Use a background process (either SSIS or something else) to monitor the temp table and check that the SPID is still present as an open connection.
If the connection dies, the background process can execute the stored revert command.
If the connection completes properly, the SPID can be removed from the temp table so that the background process no longer reverts it when the connection closes.
Comments or improvements welcome!
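The monitoring step might look something like this in T-SQL. Purely a sketch: dbo.PendingReverts and its columns are hypothetical names for the temp table described above.

```sql
-- Hypothetical watchdog: revert work left behind by dead connections.
-- Assumes a table dbo.PendingReverts(spid INT, revert_sql NVARCHAR(MAX)).
DECLARE @spid INT, @sql NVARCHAR(MAX);

-- Find a registered SPID that no longer has an open session.
SELECT TOP (1) @spid = r.spid, @sql = r.revert_sql
FROM dbo.PendingReverts AS r
WHERE NOT EXISTS (SELECT 1
                  FROM sys.dm_exec_sessions AS s
                  WHERE s.session_id = r.spid);

IF @spid IS NOT NULL
BEGIN
    EXEC sp_executesql @sql;                         -- run the stored revert
    DELETE FROM dbo.PendingReverts WHERE spid = @spid;
END
```

You'd run this on a schedule (SQL Agent job, for instance). Note the false-positive risk mentioned below: a SPID disappearing does not always mean the client machine is gone for good.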
I'll expand on my comment. In general, I think you should reconsider your approach. All database access code should open a connection, execute a query then close the connection where you rely on connection pooling to mitigate the expense of opening lots of database connections.
If it is the case that we are talking about a single SQL command whose rows on which it operates should not change, that is a problem that should be handled by the transaction isolation level. For that you might investigate the Snapshot isolation level in SQL Server 2005+.
If we are talking about a series of queries that are part of a long running transaction, that is more complicated and can be handled via storage of a transaction state which other connections read in order to determine whether they can proceed. Going down this road, you need to provide users with tools where they can cancel a long running transaction that might no longer be applicable.
Assuming it's even possible... this will only help you if the client machine explodes during the transaction. Also, there's a risk of false positives - the connection might get dropped for a few seconds due to network noise.
The approach that I'd take is to start a process on another machine that periodically pings the first one to check if it's still on-line, then takes action if it becomes unreachable.
We have a huge Oracle database and I frequently fetch data using SQL Navigator (v5.5). From time to time, I need to stop code execution by clicking on the Stop button because I realize that there are missing parts in my code. The problem is, after clicking on the Stop button, it takes a very long time to complete the stopping process (sometimes it takes hours!). The program says Stopping... at the bottom bar and I lose a lot of time till it finishes.
What is the rationale behind this? How can I speed up the stopping process? Just in case, I'm not an admin; I'm a limited user who uses some views to access the database.
Two things need to happen to stop a query:
The actual Oracle process has to be notified that you want to cancel the query
If the query has made any modification to the DB (DDL, DML), the work needs to be rolled back.
For the first point, the Oracle process that is executing the query should check from time to time whether it should cancel the query or not. Even when it is doing a long task (a big HASH JOIN, for example), I think it checks every 3 seconds or so (I'm looking for the source of this info; I'll update the answer if I find it). Now, is your software able to communicate correctly with Oracle? I'm not familiar with SQL Navigator, but I suppose the cancel mechanism should work as with any other tool, so I'm guessing you're waiting on the second point:
Once the process has been notified to stop working, it has to undo everything it has already accomplished in this query (all statements are atomic in Oracle, they can't be stopped in the middle without rolling back). Most of the time in a DML statement the rollback will take longer than the work already accomplished (I see it like this: Oracle is optimized to work forward, not backward). If you are in this case (big DML), you will have to be patient during rollback, there is not much you can do to speed up the process.
If your query is a simple SELECT and your tool won't let you cancel, you could kill your session (needs admin rights from another session) -- this should be instantaneous.
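If you (or your DBA) do have access to the v$ views, rollback progress can be watched via used_ublk, the number of undo blocks the transaction still holds, which shrinks toward zero as the rollback proceeds. A sketch:

```sql
-- used_ublk decreasing across successive runs means the rollback
-- is making progress. Requires SELECT access to v$ views.
SELECT s.sid, s.serial#, t.used_ublk
FROM   v$transaction t
JOIN   v$session     s ON s.taddr = t.addr;
```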
When you cancel a query, the Oracle client should send OCIBreak(), but this isn't implemented on a Windows server; that could be the cause.
Also, have your DBA check the value of SQLNET.EXPIRE_TIME.
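For reference, SQLNET.EXPIRE_TIME lives in the server-side sqlnet.ora and enables dead-connection detection; 10 below is just an example value in minutes:

```
# sqlnet.ora (server side): probe clients every 10 minutes and clean up
# sessions whose client has died.
SQLNET.EXPIRE_TIME = 10
```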
I restored a 35Gb database on my dev machine yesterday and it was all going fine until this morning when my client app couldn't connect. So I opened SQL Management Studio to find the database 'In Recovery'.
I don't know a huge amount about this, other than that it is usually something to do with uncommitted transactions. Now, since I know there aren't any uncommitted transactions, it must be something else. So first off, I'd like to know under what conditions this can happen. Secondly, while this is going on I can't work, so if there are any ways of stopping the recovery, speeding it up, or at least finding out roughly how long it's going to take, that would help.
Do not shut down SQL while recovery is in progress. Let it finish. Check the error logs. If it doesn't finish, restore from backup.
You can find out how long it's going to take by looking in the event viewer. In the Application section on the Windows Logs you should get information messages from MSSQLSERVER with EventID 3450 telling you what it's up to. Something like:
Recovery of database 'XYZ' is 10% complete (approximately 123456 seconds remain) etc etc
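The same progress messages are also written to the SQL Server error log, so you can search there from a query window instead of the event viewer. xp_readerrorlog is undocumented but widely used; the parameters below are log number (0 = current), log type (1 = SQL Server error log), and a search string:

```sql
-- Pull the recovery-progress messages out of the current error log.
EXEC master..xp_readerrorlog 0, 1, N'Recovery of database';
```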
I'm afraid I don't know how to stop it (yet).