Stopping the Delete Operation in SQL Server - sql-server-2005

I wanted to delete a massive amount of data from a SQL Server 2005 table (about 80 million records). Without issuing BEGIN TRAN, I wrote my delete query like
DELETE FROM myTable
WHERE columnA > X
where columnA is not the table's primary key. The query started 4 hours ago and still hasn't finished. I tested a similar scenario with a similar number of rows and a similar condition, and that operation completed in about 70 minutes; the main server is more powerful, yet this query is still running after 4 hours. The database is configured with the Full recovery model.
I want to know if I can stop this never-ending operation by killing the SPID of the corresponding process, and if I do so, what happens? Will SQL Server start to roll back the operation? Will the database get suspended? What is the solution?

Yes, the server will roll back the delete operation,
BUT
such rollbacks usually take more time to finish than the time already spent on the operation.
The database will not be suspended, but the lock on the table will be held until the rollback finishes.
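If you do decide to kill it, you can watch the rollback's progress from another session. A minimal sketch, where 64 stands in for the SPID of your delete (find yours with sp_who2):
KILL 64;                  -- aborts the statement; SQL Server begins rolling it back
KILL 64 WITH STATUSONLY;  -- run repeatedly: reports estimated rollback % complete and time remaining
KILL ... WITH STATUSONLY does not kill anything a second time; it only reports on a rollback already in progress.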

Related

Understanding locks and query status in Snowflake (multiple updates to a single table)

While using the Python connector for Snowflake with queries of the form
UPDATE X.TABLEY SET STATUS = %(status)s, STATUS_DETAILS = %(status_details)s WHERE ID = %(entry_id)s
I sometimes get the following message:
(snowflake.connector.errors.ProgrammingError) 000625 (57014): Statement 'X' has locked table 'XX' in transaction 1588294931722 and this lock has not yet been released.
and soon after that
Your statement 'X' was aborted because the number of waiters for this lock exceeds the 20 statements limit
This usually happens when multiple queries are trying to update a single table. What I don't understand is that the query history in Snowflake says the query finished successfully (Succeeded status), but in reality the update never happened, because the table did not change.
So according to https://community.snowflake.com/s/article/how-to-resolve-blocked-queries I used
SELECT SYSTEM$ABORT_TRANSACTION(<transaction_id>);
to release the lock, but still nothing happened, and even with the Succeeded status the query seems not to have executed at all. So my question is: how does this really work, and how can a lock be released without losing the execution of the query? (Also, what happens to the other 20+ queries that are queued because of the lock? Sometimes it seems that when the lock is released, the next one takes the lock and has to be aborted as well.)
I would appreciate it if you could help me. Thanks!
Not sure if Sergio got an answer to this. The problem in this case is not with the table itself. Based on my experience with Snowflake, below is my understanding.
In Snowflake, every table operation also involves a change in the metadata table that keeps track of micro-partitions and their min/max values. This metadata table supports only 20 concurrent DML statements by default, so if a table is continuously being updated and hit at the same partition, there is a chance this limit will be exceeded. In that case, you should look at redesigning the update/insert logic. In one of our use cases, we increased the limit to 50 after speaking to the Snowflake support team.
UPDATE, DELETE, and MERGE cannot run concurrently on a single table; they will be serialized, as only one can take a lock on a table at a time. The others will queue up in the "blocked" state until it is their turn to take the lock, and there is a limit on the number of queries that can wait on a single lock.
If you see an update finish successfully but don't see the updated data in the table, then you are most likely not COMMITting your transactions. Make sure you run COMMIT after an update so that the new data is committed to the table and the lock is released.
Alternatively, you can make sure AUTOCOMMIT is enabled so that DML will commit automatically after completion. You can enable it with ALTER SESSION SET AUTOCOMMIT=TRUE; in any sessions that are going to run an UPDATE.
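A minimal sketch of both patterns in Snowflake SQL (the table is the one from the question; the STATUS value and ID below are placeholders):
-- Explicit transaction: the table lock is held until COMMIT releases it
BEGIN TRANSACTION;
UPDATE X.TABLEY SET STATUS = 'DONE' WHERE ID = 42;
COMMIT;
-- Or let each DML statement commit (and release its lock) on its own
ALTER SESSION SET AUTOCOMMIT = TRUE;
UPDATE X.TABLEY SET STATUS = 'DONE' WHERE ID = 42;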

Is it ok to KILL this DELETE query?

I ran a query to delete around 4 million rows from my database. It ran for about 12 hours before my laptop lost the network connection. At that point, I decided to take a look at the status of the query in the database. I found that it was in the suspended state. Specifically:
Start Time               SPID  Database   Executing SQL  Status     command  wait_type       wait_time  wait_resource  last_wait_type
-----------------------  ----  ---------  -------------  ---------  -------  --------------  ---------  -------------  --------------
2018/08/15 11:28:39.490  115   RingClone  *see below     suspended  DELETE   PAGEIOLATCH_EX  41         5:1:1116111    PAGEIOLATCH_EX
*Here is the SQL query in question:
DELETE FROM T_INDEXRAWDATA WHERE INDEXRAWDATAID IN (SELECT INDEXRAWDATAID FROM T_INDEX WHERE OWNERID='1486836020')
After reading this:
https://dba.stackexchange.com/questions/87066/sql-query-in-suspended-state-causing-high-cpu-usage
I realize I probably should have broken this up into smaller pieces (or even deleted the rows one by one). But now I just want to know if it is "safe" for me to KILL this query, as the answer in that post suggests. One thing the accepted answer states is that "you may run into data consistency problems" if you KILL a query while it's executing. If it causes some issues with the data I am trying to delete, I'm not that concerned. However, I'm more concerned about it causing issues with other data, or with the table structure itself.
Is it safe to KILL this query?
If you ran the delete from your laptop over the network and it lost the connection to the server, you can either kill the SPID or wait for it to disappear by itself. Depending on the @@VERSION of your SQL Server instance, in particular how well it's patched, the latter might require an instance restart.
Regarding the consistency issues, you seem to misunderstand them. They are possible only if you had multiple statements running in a single batch without being wrapped in a transaction. As I understand it, you had a single statement; if that's the case, don't worry about consistency. SQL Server wouldn't have become what it is now if it were so easy to corrupt its data.
I would rewrite the query, however: if the T_INDEX.INDEXRAWDATAID column has NULLs, you can run into issues. It's better to rewrite it via a join, also adding batch splitting:
while 1 = 1
begin
    -- Delete in batches of 10,000 rows to keep each transaction (and the log) small
    DELETE TOP (10000) t
    FROM T_INDEXRAWDATA t
    INNER JOIN T_INDEX i ON t.INDEXRAWDATAID = i.INDEXRAWDATAID
    WHERE i.OWNERID = '1486836020';

    -- stop once a batch deletes nothing
    if @@ROWCOUNT = 0
        break;

    -- under the SIMPLE recovery model, lets log space be reused between batches
    checkpoint;
end;
It definitely will not be any slower, and it can boost performance, depending on your schema, your data, and the state of any indexes the tables have.

I couldn't do a simple SELECT TOP 1 * on one table; what could the possible problems be?

I got this error:
timeout value expired. The timeout period elapsed prior to completion
of the operation or the server is not responding
I have a process that does inserts and updates at night, and another process that also runs queries at night (an ETL or DTS job), on SQL Server 2005. Now we need to query this table and it doesn't work: when I run my process again it never finishes, and no one can query this table (other tables are fine). Users told me they could query it yesterday, but today they couldn't. Is it possible that my nightly process never finished and left a BEGIN TRANSACTION open?
How can I be sure of this, and how can I close it from SSMS?
This is not a permissions problem; we could run queries and inserts/updates yesterday.
It happens only with one table.
Try this:
SELECT TOP 1 * FROM Table WITH (NOLOCK)
Does that return results? If so, it sounds like a locking issue.
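To confirm the open-transaction theory from the question, a minimal sketch using standard SQL Server 2005 tools (the database name and the session ID to kill are placeholders):
DBCC OPENTRAN ('MyDatabase');    -- reports the oldest active transaction in the database, if any
SELECT session_id, blocking_session_id, wait_type, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;  -- shows which sessions are blocked and by whom
-- If the oldest transaction belongs to an abandoned connection, close it from SSMS with:
-- KILL 115;                     -- rolls back that session's open transaction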

Master database DB STARTUP problem

I have a SQL Server 2008 database, and I have a problem with this database that I don't understand.
The steps that caused the problem are:
I ran a SQL query to update a table called authors from another table called authorAff.
The authors table has 123,385,300 records and the authorAff table has 139,036,077.
The query ran for about 7 days but didn't finish.
I decided to cancel the query and do it another way.
The connection on which I was running the query suddenly disconnected, so the database went into recovery while the query was being canceled.
The server was also shut down many times afterwards because of some electricity problems.
The recovery took about two days, and then the database came back.
Now when I run this query
SELECT TOP 1000 *
FROM AUTHORS WITH(READUNCOMMITTED)
It executes and returns results, but when I remove the WITH(READUNCOMMITTED) hint, it gets blocked by a process running on the master database that appears only in the Activity Monitor with the command [DB STARTUP], and no results show up.
So what is the DB STARTUP command, and if it's a problem, how can I solve it?
Thank you in advance.
I suspect that your user database is still trying to roll back the transaction that you canceled. A general rule of thumb is that an aborted transaction takes about the same amount of time to roll back as it took to run, or more.
The rollback can't be avoided, even with the SQL Server stops and starts you had.
The reason you can run a query WITH(READUNCOMMITTED) is that it ignores the locks associated with the transaction that is rolling back. Your query results are considered unreliable, but ironically they are probably what you want to see, since the blocking process is a rollback.
The best solution is to wait it out, if you can afford to do so. You may be able to find ways to kill the blocking process, but then you should be concerned about database integrity.
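If you want an estimate of how long the wait will be, a minimal sketch against the request DMV (recovery and rollback operations populate percent_complete on SQL Server 2008):
SELECT session_id, command, percent_complete,
       estimated_completion_time / 60000.0 AS est_minutes_remaining
FROM sys.dm_exec_requests
WHERE command IN ('DB STARTUP', 'ROLLBACK');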

Concurrency during long running update in TSQL

Using SQL Server 2005. I have a long-running update that may take about 60 seconds in our production environment. The update is not part of any explicit transaction, nor does it use any SQL hints. While the update is running, what's to be expected from other requests that touch the rows being updated? There are about 6 million total rows in the table, of which about 500,000 will be updated.
Some concurrency concerns/questions:
1) What if another select query (with nolock hint) is performed on this table among some of the rows that are being updated. Will the query wait until the update is finished?
2) What if the other select query does not have a nolock hint? Will this query have to wait until the update is finished?
3) What if another update query is performing an update on one of these rows? Will this query have to wait until it's finished?
4) What about deletes?
5) What about inserts?
Thanks!
Dave
Every statement in SQL Server runs in a transaction. If you don't explicitly start one, the server starts one for each statement, commits it if the statement succeeds, and rolls it back if it does not.
The exact locking you'll see with your update, unfortunately, depends. It will start off with row locks, but it is likely to be escalated to at least a few page locks, based on the number of rows you're updating. Full escalation to a table lock is unlikely, but this depends in part on your server: SQL Server will escalate if the finer-grained locks are using too much memory.
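If you want to see which granularity the engine actually chose while the update runs, a minimal sketch against the lock DMV (64 is a placeholder for the updating session's SPID):
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = 64
GROUP BY resource_type, request_mode;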
If your select is run with nolock, then you will get dirty reads if you happen to select any rows involved in the update. This means you will read the uncommitted data, and it may not be consistent with other rows (since those may not have been updated yet).
In all other cases, if your statement encounters a row involved in the update, or a row on a locked page (assuming the lock has been escalated), then the statement will have to wait for the update to finish.
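To make cases 1 and 2 concrete, a minimal two-session sketch (dbo.Orders and its columns are hypothetical):
-- Session 1: the long-running update, running in its own implicit transaction
UPDATE dbo.Orders SET Status = 'SHIPPED' WHERE Region = 'WEST';
-- Session 2, while session 1 is still running:
SELECT COUNT(*) FROM dbo.Orders WITH (NOLOCK) WHERE Status = 'SHIPPED'; -- dirty read: returns immediately, may count uncommitted rows
SELECT COUNT(*) FROM dbo.Orders WHERE Status = 'SHIPPED';               -- blocks on the locked rows/pages until the update commits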