We have been using Service Broker for more than two months, and I suddenly found that the overall performance of the system had degraded. After a lot of digging and analysing metrics, I found that my page life expectancy is very low. I then took a look at my buffer pool (with the help of simplesqlserver) and it is filled with the sys.sysdercv object (5 GB). After searching a bit more I found that sys.conversation_endpoints has 19 million rows holding 10 GB of data. It was my mistake not to end the conversations on the target queue, but even after ending the conversations on both sides (the initiator and the target queue) the rows remain in sys.conversation_endpoints in the CD (Closed) state. One way to delete these rows is to call END CONVERSATION ... WITH CLEANUP, but that takes a very long time to complete. I wrote the job below, but I guess it will take more than 7 days to finish. Here is my job:
DECLARE @BatchSize Int
SELECT @BatchSize = 1000

DECLARE @i int = @BatchSize;
DECLARE @sum int = 0;

WHILE @i = @BatchSize
BEGIN
    PRINT CONCAT('while ', @sum);
    SELECT @i = 0;

    DECLARE @handle uniqueidentifier

    DECLARE conv CURSOR FOR
        SELECT TOP (@BatchSize) conversation_handle
        FROM sys.conversation_endpoints

    OPEN conv
    FETCH NEXT FROM conv INTO @handle

    WHILE @@FETCH_STATUS = 0
    BEGIN
        SELECT @i = @i + 1;
        END CONVERSATION @handle WITH CLEANUP
        FETCH NEXT FROM conv INTO @handle
    END

    CLOSE conv
    DEALLOCATE conv

    SELECT @sum = @sum + @i;
END
So here is my question: how can I truncate this table and remove its data all at once, just like the TRUNCATE command would? Running truncate table sys.conversation_endpoints produces this error:
Cannot find the object "conversation_endpoints" because it does not exist or you do not have permissions.
I want to know: is there a service I can restart, or anything else I can do, to safely remove this huge amount of data?
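For context, a simple grouping query along these lines (a rough diagnostic sketch; state_desc is a standard column of sys.conversation_endpoints, the grouping is mine) is enough to see how many endpoints are stuck in each state:

-- Rough diagnostic: how many conversation endpoints sit in each state
-- (the CD rows are the ones left over from closed conversations).
SELECT state_desc, COUNT(*) AS endpoints
FROM sys.conversation_endpoints
GROUP BY state_desc
ORDER BY COUNT(*) DESC;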
I've been teaching myself to use WHILE loops and decided to try making a fun Russian Roulette simulation. That is, a query that will randomly SELECT (or PRINT) up to 6 statements (one for each of the chambers in a revolver), the last of which reads "you die!" and any prior to this reading "you survive."
I did this by first creating a table #Nums which contains the numbers 1-6 in random order. I then have a WHILE loop as follows, with a BREAK if the chamber containing the "bullet" (1) is selected (I know there are simpler ways of selecting a random number, but this is adapted from something else I was playing with before and I had no interest in changing it):
SET NOCOUNT ON

CREATE TABLE #Nums ([Num] INT)

DECLARE @Count INT = 1
DECLARE @Limit INT = 6
DECLARE @Number INT

WHILE @Count <= @Limit
BEGIN
    SET @Number = ROUND(RAND(CONVERT(varbinary, NEWID())) * @Limit, 0, 1) + 1
    IF NOT EXISTS (SELECT [Num] FROM #Nums WHERE [Num] = @Number)
    BEGIN
        INSERT INTO #Nums VALUES(@Number)
        SET @Count += 1
    END
END

DECLARE @Chamber INT

WHILE 1 = 1
BEGIN
    SET @Chamber = (SELECT TOP 1 [Num] FROM #Nums)
    IF @Chamber = 1
    BEGIN
        SELECT 'you die!' [Unlucky...]
        BREAK
    END
    SELECT 'you survive.' [Phew...]
    DELETE FROM #Nums WHERE [Num] = @Chamber
END

DROP TABLE #Nums
This works fine, but the results all appear instantaneously, and I want to add a delay between each one to add a bit of tension.
I tried using WAITFOR DELAY as follows:
WHILE 1 = 1
BEGIN
    WAITFOR DELAY '00:00:03'
    SET @Chamber = (SELECT TOP 1 [Num] FROM #Nums)
    IF @Chamber = 1
    BEGIN
        SELECT 'you die!' [Unlucky...]
        BREAK
    END
    SELECT 'you survive.' [Phew...]
    DELETE FROM #Nums WHERE [Num] = @Chamber
END
I would expect the WAITFOR DELAY to initially cause a 3 second delay, then for the first SELECT statement to be executed and for the text to appear in the results grid, and then, assuming the live chamber was not selected, for there to be another 3 second delay and so on, until the live chamber is selected.
However, before anything appears in my results grid, there is a delay of 3 seconds for each SELECT statement that ends up being executed, after which all of the results appear at the same time.
I tried using PRINT instead of SELECT but encounter the same issue.
Clearly there's something I'm missing here - can anyone shed some light on this?
It's called buffering. The server doesn't want to return a partially full response, because most of the time there are networking overheads to account for: lots of very small packets are more expensive than a few larger packets¹.

If you use RAISERROR (don't worry about the name here; at severity 10 it is informational rather than an error) you can specify WITH NOWAIT to say "send this immediately". There's no equivalent with PRINT or with returning result sets:
SET NOCOUNT ON

CREATE TABLE #Nums ([Num] INT)

DECLARE @Count INT = 1
DECLARE @Limit INT = 6
DECLARE @Number INT

WHILE @Count <= @Limit
BEGIN
    SET @Number = ROUND(RAND(CONVERT(varbinary, NEWID())) * @Limit, 0, 1) + 1
    IF NOT EXISTS (SELECT [Num] FROM #Nums WHERE [Num] = @Number)
    BEGIN
        INSERT INTO #Nums VALUES(@Number)
        SET @Count += 1
    END
END

DECLARE @Chamber INT

WHILE 1 = 1
BEGIN
    WAITFOR DELAY '00:00:03'
    SET @Chamber = (SELECT TOP 1 [Num] FROM #Nums)
    IF @Chamber = 1
    BEGIN
        RAISERROR('you die!, Unlucky', 10, 1) WITH NOWAIT
        BREAK
    END
    RAISERROR('you survive., Phew...', 10, 1) WITH NOWAIT
    DELETE FROM #Nums WHERE [Num] = @Chamber
END

DROP TABLE #Nums
As Larnu already alluded to in the comments, this isn't a good use of T-SQL.
SQL is a set-oriented language. We try not to write procedural code (do this, then do that, then run this block of code multiple times). We try to give the server as much as possible in a single query and let it work out how to process it. Whilst T-SQL does have language support for loops, we try to avoid them if possible.
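As a hedged illustration of that set-based mindset (just a sketch, not the only way to write it), the shuffled chambers in the example above could be produced in a single statement instead of the first WHILE loop; the column aliases here are made up for the example:

-- Build the six chambers and a random firing order in one set-based query.
SELECT v.Num,
       ROW_NUMBER() OVER (ORDER BY NEWID()) AS FireOrder
FROM (VALUES (1), (2), (3), (4), (5), (6)) AS v(Num);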
¹ I'm using "packets" very loosely here. Note that the same buffering applies no matter which networking (or non-networked, local shared-memory) option is actually being used to carry the connection between client and server.
I have a very simple SQL UPDATE statement in Postgres.
UPDATE p2sa.observation SET file_path = replace(file_path, 'path/sps', 'newpath/p2s')
The observation table has 1,513,128 rows. The query has so far been running for around 18 hours with no end in sight.
The file_path column is not indexed, so I guess it is doing a top-to-bottom scan, but the time still seems excessive. Perhaps replace is also a slow operation.
Is there an alternative or better approach for this kind of one-off update that affects all rows? It is essentially rewriting an old file path to a new location. It only needs to run once, or maybe again in the future.
Thanks.
In SQL Server you could use a WHILE loop to update in batches. Try something like this and see how it performs:
DECLARE @RowsCnt int = 1
DECLARE @RowsEffected int = 0
DECLARE @Err int

WHILE (@RowsCnt > 0)
BEGIN
    SET ROWCOUNT 10000   -- cap each UPDATE at 10,000 rows

    UPDATE p2sa.observation
    SET file_path = replace(file_path, 'path/sps', 'newpath/p2s')
    WHERE file_path LIKE '%path/sps%'   -- only rows still holding the old path

    SELECT @RowsCnt = @@ROWCOUNT, @Err = @@ERROR

    IF @Err <> 0
    BEGIN
        PRINT 'Problem updating the records'
        BREAK
    END
    ELSE
        SELECT @RowsEffected = @RowsEffected + @RowsCnt

    PRINT 'The total number of rows affected: ' + convert(varchar, @RowsEffected)

    /* Delay the loop for 10 secs so other work can get through */
    WAITFOR DELAY '00:00:10'
END

SET ROWCOUNT 0   -- remove the row limit again
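Since the question is actually about Postgres, a roughly equivalent batched sketch there might look like the following. It assumes PostgreSQL 11 or later (for COMMIT inside a DO block) and an indexed primary-key column named id; adjust the names to your schema:

-- Batched update sketch for PostgreSQL: each pass rewrites up to 10,000 rows
-- that still contain the old path, committing between batches.
DO $$
DECLARE
    rows_updated bigint;
BEGIN
    LOOP
        UPDATE p2sa.observation
        SET    file_path = replace(file_path, 'path/sps', 'newpath/p2s')
        WHERE  id IN (
                 SELECT id
                 FROM   p2sa.observation
                 WHERE  file_path LIKE '%path/sps%'
                 LIMIT  10000
               );

        GET DIAGNOSTICS rows_updated = ROW_COUNT;
        EXIT WHEN rows_updated = 0;

        COMMIT;  -- keep each batch in its own transaction (PostgreSQL 11+)
    END LOOP;
END $$;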
I'm using SQL Server 2008 R2 and C#.
I'm inserting a batch of rows into SQL Server with a column Status set to the value P.
Afterwards, I check how many rows already have status R, and if there are fewer than 20, I update the row to status R.
While inserting and updating, more rows are being added and updated all the time.
I've tried transactions and locking in multiple ways, but still: at the moment a new batch is activated, there are more than 20 rows with status R for a few milliseconds. After those few milliseconds it stabilizes back to 20.
Does anyone have an idea why, during these bursts, the locking doesn't seem to work?
Sample code, reasons, whatever you can share on this subject can be useful!
Thanks!
Following is my stored proc:
DECLARE @return BIT
SET @return = -1

DECLARE @previousValue INT

--insert the started orchestration
INSERT INTO torchestrationcontroller WITH (ROWLOCK)
    ([flowname], [orchestrationid], [status])
VALUES (@FlowName, @OrchestrationID, 'P')

--check settings
DECLARE @maxRunning INT

SELECT @maxRunning = maxinstances
FROM torchestrationflows WITH (NOLOCK)
WHERE [flowname] = @FlowName

--if maxinstances is 0 there is no limitation, so you can pass
IF( @maxRunning = 0 )
BEGIN
    SET @return = 1
    UPDATE torchestrationcontroller WITH(ROWLOCK)
    SET [status] = 'R'
    WHERE [orchestrationid] = @OrchestrationID
END
ELSE
-- BEGIN
RETRY: -- Label RETRY
BEGIN TRY
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
    BEGIN TRANSACTION T1

    --else: check how many orchestrations are now running
    --start lock table
    DECLARE @currentRunning INT

    SELECT @currentRunning = Count(*)
    FROM torchestrationcontroller WITH (TABLOCKX) -- exclusive lock, held until the end of the transaction, on all data processed by the statement
    WHERE [flowname] = @FlowName
      AND [status] = 'R'

    --CASE
    IF( @currentRunning < @maxRunning )
    BEGIN
        -- fewer orchestrations are running than allowed
        SET @return = 1
        UPDATE torchestrationcontroller WITH(TABLOCKX)
        SET [status] = 'R'
        WHERE [orchestrationid] = @OrchestrationID
    END
    ELSE
        -- as many or more orchestrations are running than allowed
        SET @return = 0

    --end lock table
    SELECT @return

    COMMIT TRANSACTION T1
END TRY
BEGIN CATCH
    --PRINT 'Rollback Transaction'
    ROLLBACK TRANSACTION

    IF ERROR_NUMBER() = 1205 -- Deadlock Error Number
    BEGIN
        WAITFOR DELAY '00:00:00.05' -- Wait 50 ms
        GOTO RETRY -- Go to Label RETRY
    END
END CATCH
I've been able to fix it by setting the serializable isolation level + a transaction on the three stored procedures (two of which I didn't mention because I didn't think they were relevant to this problem). Apparently it was the combination of multiple stored procs that were interfering with each other.
If you know a better way to fix it that could give me a better performance, please let me know!
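For what it's worth, here is one alternative sketch (not tested against your workload): serialize only the check-and-claim step with an application lock instead of table-level locks. The resource name and timeout below are made up; the table and column names are the ones from the procedure above:

-- Serialize the count-then-claim section with an application lock.
BEGIN TRANSACTION;

EXEC sp_getapplock @Resource    = 'torchestration_claim',  -- arbitrary lock name
                   @LockMode    = 'Exclusive',
                   @LockOwner   = 'Transaction',
                   @LockTimeout = 5000;

IF ( SELECT COUNT(*)
     FROM torchestrationcontroller
     WHERE [flowname] = @FlowName
       AND [status] = 'R' ) < @maxRunning
BEGIN
    UPDATE torchestrationcontroller
    SET [status] = 'R'
    WHERE [orchestrationid] = @OrchestrationID;

    SET @return = 1;
END
ELSE
    SET @return = 0;

COMMIT TRANSACTION;  -- committing releases the application lock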
I have a table in SQL Server 2005 which has approx 4 billion rows in it. I need to delete approximately 2 billion of these rows. If I try and do it in a single transaction, the transaction log fills up and it fails. I don't have any extra space to make the transaction log bigger. I assume the best way forward is to batch up the delete statements (in batches of ~ 10,000?).
I can probably do this using a cursor, but is there a standard/easy/clever way of doing this?
P.S. This table does not have an identity column as a PK. The PK is made up of an integer foreign key and a date.
You can 'nibble' the deletes, which also means that you don't put a massive load on the database. If your t-log backups run every 10 minutes, then you should be OK to run this once or twice over the same interval. You can schedule it as a SQL Agent job.
try something like this:
DECLARE @count int
SET @count = 10000

DELETE FROM table1
WHERE table1id IN (
    SELECT TOP (@count) table1id
    FROM table1
    WHERE x = 'y'
)
What distinguishes the rows you want to delete from those you want to keep? Will this work for you:
while exists (select 1 from your_table where <your_condition>)
delete top(10000) from your_table
where <your_condition>
In addition to putting this in a batch with a statement to truncate the log, you also might want to try these tricks:
Add criteria that matches the first column in your clustered index in addition to your other criteria
Drop any indexes from the table and then put them back after the delete is done if that's possible and won't interfere with anything else going on in the DB, but KEEP the clustered index
For the first point above, for example, if your PK is clustered then find a range which approximately matches the number of rows that you want to delete each batch and use that:
DECLARE @max_id INT, @start_id INT, @end_id INT, @interval INT

SELECT @start_id = MIN(id), @max_id = MAX(id) FROM My_Table
SET @interval = 100000 -- You need to determine the right number here
SET @end_id = @start_id + @interval

WHILE (@start_id <= @max_id)
BEGIN
    DELETE FROM My_Table WHERE id BETWEEN @start_id AND @end_id AND <your criteria>
    SET @start_id = @end_id + 1
    SET @end_id = @end_id + @interval
END
Sounds like this is a one-off operation (I hope for you) and you don't need to go back to a state that's halfway through this batched delete - if that's the case, why don't you just switch to the SIMPLE recovery model before running and then back to FULL when you're done?
This way the transaction log won't grow as much. This might not be ideal in most situations, but I don't see anything wrong here (assuming, as above, that you don't need to go back to a state that's in between your deletes). Remember that once you switch back to FULL you need a full (or differential) backup to restart the log backup chain.
You can do this in your script with something like:
ALTER DATABASE myDB SET RECOVERY SIMPLE
-- run the batched delete here, then switch back:
ALTER DATABASE myDB SET RECOVERY FULL
Alternatively you can set up a job to shrink the transaction log at a given interval while your delete is running. This is kinda bad, but I reckon it'd do the trick.
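A rough sketch of what such a job step might run while staying in FULL recovery (the backup path and the logical log file name myDB_log are placeholders - check sys.database_files for the real name):

-- Back up the log so its space can be reused, then shrink the log file.
BACKUP LOG myDB TO DISK = N'D:\Backups\myDB_log.trn';
DBCC SHRINKFILE (myDB_log, 1024);  -- target size in MB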
Well, if you were using SQL Server Partitioning, say based on the date column, you would have possibly switched out the partitions that are no longer required. A consideration for a future implementation perhaps.
I think the best option may be as you say, to delete the data in smaller batches, rather than in one hit, so as to avoid any potential blocking issues.
You could also consider the following method:
Copy the data to keep into a temporary table
Truncate the original table to purge all data
Move everything from the temporary table back into the original table
Your indexes would also be rebuilt as the data was added back to the original table.
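A minimal sketch of that approach, with placeholder names (dbo.My_Table and <condition identifying the rows to keep> are illustrative, and TRUNCATE requires that no foreign keys reference the table):

-- 1. Copy the rows to keep into a temporary table
SELECT *
INTO #rows_to_keep
FROM dbo.My_Table
WHERE <condition identifying the rows to keep>

-- 2. Purge the original table (minimally logged)
TRUNCATE TABLE dbo.My_Table

-- 3. Move everything back
INSERT INTO dbo.My_Table
SELECT * FROM #rows_to_keep

DROP TABLE #rows_to_keep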
I would do something similar to the temp table suggestions, but I'd SELECT the rows you want to keep INTO a new permanent table, drop the original table and then rename the new one. This should have a relatively low tran log impact. Obviously, remember to recreate any indexes that are required on the new table after you've renamed it.
Just my two p'enneth.
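Something along these lines, as a sketch with placeholder names:

SELECT *
INTO dbo.My_Table_keep
FROM dbo.My_Table
WHERE <condition identifying the rows to keep>

DROP TABLE dbo.My_Table

EXEC sp_rename 'dbo.My_Table_keep', 'My_Table'

-- Recreate indexes, constraints and permissions on the renamed table here.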
Here is my example:
-- configure script
-- Script limits - transaction per commit (default 10,000)
-- And time to allow script to run (in seconds, default 2 hours)
--
DECLARE @MAX INT
DECLARE @MAXT INT
--
-- $MAX, $MAXT, $TABLE and $WHERE are substituted by the shell script
-- ($TABLE and $WHERE are pasted directly into the statements below).
--
SET @MAX = $MAX
SET @MAXT = $MAXT
-- step 1 - Main loop
DECLARE @continue INT
-- deleted in one transaction
DECLARE @deleted INT
-- deleted in total by the script
DECLARE @total INT
SET @total = 0

DECLARE @max_id INT, @start_id INT, @end_id INT, @interval INT
SET @interval = @MAX
SELECT @start_id = MIN(id), @max_id = MAX(id) FROM $TABLE
SET @end_id = @start_id + @interval

-- timing
DECLARE @start DATETIME
DECLARE @now DATETIME
DECLARE @timee INT
SET @start = GETDATE()
--
SET @continue = 1

IF OBJECT_ID (N'EntryID', 'U') IS NULL
BEGIN
    CREATE TABLE EntryID (startid INT)
    INSERT INTO EntryID(startid) VALUES(@start_id)
END
ELSE
BEGIN
    SELECT @start_id = startid FROM EntryID
END
WHILE (@continue = 1 AND @start_id <= @max_id)
BEGIN
    PRINT 'Start issued: ' + CONVERT(varchar(19), GETDATE(), 120)

    BEGIN TRANSACTION
        DELETE
        FROM $TABLE
        WHERE id BETWEEN @start_id AND @end_id AND $WHERE
        SET @deleted = @@ROWCOUNT
        UPDATE EntryID SET EntryID.startid = @end_id + 1
    COMMIT

    PRINT 'Deleted issued: ' + STR(@deleted) + ' records. ' + CONVERT(varchar(19), GETDATE(), 120)
    SET @total = @total + @deleted
    SET @start_id = @end_id + 1
    SET @end_id = @end_id + @interval
    IF @end_id > @max_id
        SET @end_id = @max_id

    SET @now = GETDATE()
    SET @timee = DATEDIFF (second, @start, @now)
    IF @timee > @MAXT
    BEGIN
        PRINT 'Time limit exceeded for the script, exiting'
        SET @continue = 0
    END
    -- ELSE
    -- BEGIN
    --     SELECT @total 'Removed now', @timee 'Total time, seconds'
    -- END
END

SELECT @total 'Removed records', @timee 'Total time sec', @start_id 'Next id', @max_id 'Max id', @continue 'COMPLETED?'
SELECT * FROM EntryID next_start_id
GO
The short answer is, you can't delete 2 billion rows without incurring some kind of major database downtime.
Your best option may be to copy the data to a temp table and truncate the original table, but this will fill your tempDB and would use no less logging than deleting the data.
You will need to delete as many rows as you can until the transaction log fills up, then truncate it each time. The answer provided by Stanislav Kniazev could be modified to do this by increasing the batch size and adding a call to truncate the log file.
I agree with the people who want you to loop over a smaller set of records; this will be faster than trying to do the whole operation in one step. You may need to experiment with the number of records you include in the loop. About 2,000 at a time seems to be the sweet spot in most of the tables I do large deletes from, although a few need smaller batches, like 500. It depends on the number of foreign keys, the size of the record, triggers, etc., so it really will take some experimenting to find what you need. It also depends on how heavily the table is used. A heavily accessed table will need each iteration of the loop to run for a shorter amount of time. If you can run during off hours, or better yet in single-user mode, then you can have more records deleted in each loop.
If you don't think you can do this in one night during off hours, it might be best to design the loop with a counter and only do a set number of iterations each night until it is done, as sketched below.
Further, if you use an implicit transaction rather than an explicit one, you can kill the loop query at any time, and the records already deleted will stay deleted, except for those in the current round of the loop. Much faster than trying to roll back half a million records because you've brought the system to a halt.
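A hedged sketch of that counter-bounded nightly batch (the batch size and iteration cap are illustrative, and <your_condition> is the same placeholder used in the answers above):

DECLARE @iterations int = 0

WHILE @iterations < 200           -- cap the work done per night
BEGIN
    DELETE TOP (2000) FROM your_table
    WHERE <your_condition>

    IF @@ROWCOUNT = 0 BREAK       -- nothing left to delete

    SET @iterations += 1
END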
It is usually a good idea to backup a database immediately before undertaking an operation of this nature.