Batch updates to SQL table won't stop - sql

I have a large table (18 million records) which I am updating using the following batch update snippet:
SET NOCOUNT ON;
DECLARE @rows INT, @count INT, @message VARCHAR(100);
SET @rows = 1;
SET @count = 0;
WHILE @rows > 0
BEGIN
    BEGIN TRAN
    UPDATE TOP (100000) tblName
    SET col_name = 'xxxxxx'
    SET @rows = @@ROWCOUNT
    SET @count = @count + @rows
    RAISERROR ('count %d', 0, 1, @count) WITH NOWAIT
    COMMIT TRAN
END
Even though the code has the @count increment logic, it races past the 18 million records I am trying to update. What am I missing here, and what should I add/remove to make the updates stop at the 18,206,650 records that I have in the table?
Thanks,
RV.

Silly me. I was missing the WHERE clause on the UPDATE statement, so each pass updated another arbitrary 100,000 rows and @@ROWCOUNT never dropped to zero. Sorry y'all.
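For anyone landing here later, a minimal sketch of the corrected loop; the exact predicate depends on your data, and this one simply skips rows that already hold the target value so @@ROWCOUNT can reach zero:
SET NOCOUNT ON;
DECLARE @rows INT = 1, @count INT = 0;
WHILE @rows > 0
BEGIN
    BEGIN TRAN
    UPDATE TOP (100000) tblName
    SET col_name = 'xxxxxx'
    WHERE col_name <> 'xxxxxx' OR col_name IS NULL -- the missing filter
    SET @rows = @@ROWCOUNT
    SET @count = @count + @rows
    RAISERROR ('count %d', 0, 1, @count) WITH NOWAIT
    COMMIT TRAN
END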

Related

How can I reproduce this procedure in CQL (Cassandra)?

I have this procedure in SQL and somehow I need to reproduce it in CQL. Can you please help me? Does Cassandra have the same kind of procedures as SQL or not? And if not, how can I set variables for this?
CREATE PROCEDURE [dbo].[sp_DeleteOldTransactionLogs]
    @p_daysback INT
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;
    DECLARE
        @r int
        ,@i int
        ,@maxLogId int
        ,@billion int = 1000000000
        ,@deleted int;
    SET @r = 1;
    SET @i = 1;
    SET @deleted = 0;
    SELECT
        @maxLogId = MAX(ictransactionlogid)
    FROM dbo.ictransactionLog WITH (INDEX(IX_ictransactionLog_time))
    WHERE
        time < DATEADD(day, @p_daysback, GETDATE());
    WHILE @r > 0 AND @i <= 7
    BEGIN
        DELETE TOP (5000) -- this will change
        dbo.ictransactionLog
        WHERE ictransactionlogid <= @maxLogId;
        SET @r = @@ROWCOUNT;
        SET @deleted = @deleted + @r;
        SET @i = @i + 1;
    END
    SELECT @maxLogId = IDENT_CURRENT('ictransactionLog');
    IF @maxLogId > @billion
    BEGIN
        DELETE FROM ictransactionLog
        WHERE ictransactionlogid < 500000000;
        DBCC CHECKIDENT ('ictransactionLog', RESEED, 0);
    END
    SELECT @deleted;
END
No, it's not possible to do that in CQL. There is only limited support for user-defined functions & user-defined aggregates, and even they are quite limited.
You need to implement that as code in some language, such as Python.

I have a trigger on my SQL Server table which takes user updates and logs them, in case we need to revert, but it has a problem

The problem is, sometimes on a day when no one is changing anything, a random user just enters the page and the trigger saves a change. Not only does it log a change that never occurred at that moment (because the user didn't make one), but it also picks up random data from INSERTED/DELETED; for example, we have a log of a change on May 5, 2019 whose recorded change date is in 2014, a long time ago.
My trigger is similar to the one below, just without the personal information. We simulated this problem by making changes on one day, which the trigger logged correctly; then we changed the date on our computer, logged in, and waited a bit, and it logged something random. Sometimes it takes a long time and some entering/exiting of pages, but eventually something completely random appears from a date long ago. Thanks for the help!
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[tablelog]
ON [dbo].[tablechanged]
AFTER UPDATE
AS
BEGIN
    DECLARE @OLD_DATA nvarchar(2000);
    DECLARE @NEW_DATA nvarchar(2000);
    DECLARE @Counter INT;
    DECLARE @Occurrences INT;
    DECLARE @col varchar(1000);
    DECLARE @SELECDELET nvarchar(2000);
    DECLARE @SELECINSER nvarchar(2000);
    DECLARE @user varchar(50);
    DECLARE @cod int;
    DECLARE @emp INT;
    DECLARE @isReg bit;
    SET @Occurrences = (SELECT COUNT(COLUMN_NAME) FROM information_schema.columns WHERE table_name = 'tablechanged');
    SET @Counter = 0;
    SET @user = (SELECT TOP 1 usuarioUltimaAlteracao FROM INSERTED);
    SET @emp = (SELECT TOP 1 empCodigo FROM INSERTED);
    SET @cod = (SELECT TOP 1 cedCodigo FROM INSERTED);
    SET @isReg = (SELECT TOP 1 alteracaoViaCadastro FROM INSERTED);
    SELECT * INTO #Del FROM DELETED;
    SELECT * INTO #Ins FROM INSERTED;
    IF (@isReg = 1)
    BEGIN
        WHILE @Counter < @Occurrences
        BEGIN
            SET @Counter = @Counter + 1;
            SET @col = (SELECT COLUMN_NAME FROM information_schema.columns WHERE table_name = 'tablechanged' AND ordinal_position = @Counter);
            SELECT @SELECDELET = 'SELECT @OLD_DATA=' + @col + ' FROM #Del';
            SELECT @SELECINSER = 'SELECT @NEW_DATA=' + @col + ' FROM #Ins';
            EXEC sp_executesql @SELECDELET, N'@OLD_DATA nvarchar(40) OUTPUT', @OLD_DATA OUTPUT;
            EXEC sp_executesql @SELECINSER, N'@NEW_DATA nvarchar(40) OUTPUT', @NEW_DATA OUTPUT;
            IF (@OLD_DATA <> @NEW_DATA)
            BEGIN
                INSERT INTO TABLELOG (OPE_DATE, OPE_USER, OPE_TABLE, OPE_COD, OPE_EMP, OPE_FIELD, OPE, OLD_DATA, NEW_DATA)
                VALUES (GETDATE(), @user, 'tablechanged', @cod, @emp, @col, 'UPDATE', @OLD_DATA, @NEW_DATA);
            END
        END
    END
END
SQL Server triggers fire once per statement, not once per row, so your trigger is broken for any multi-row update.
In the case of a multi-row update, the value of @NEW_DATA after running
SELECT @NEW_DATA = <column> FROM #Ins
will be taken from whichever row is read last, and without an ORDER BY it is undocumented which row that is.
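A multi-row-safe sketch for a single audited column, assuming cedCodigo is the key of tablechanged (as the original trigger suggests) and colName is a hypothetical column name; repeat (or generate) one SELECT per column you want to audit:
ALTER TRIGGER [dbo].[tablelog]
ON [dbo].[tablechanged]
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- joining DELETED to INSERTED on the key logs every changed row,
    -- not an arbitrary one
    INSERT INTO TABLELOG (OPE_DATE, OPE_USER, OPE_TABLE, OPE_COD, OPE_EMP, OPE_FIELD, OPE, OLD_DATA, NEW_DATA)
    SELECT GETDATE(), i.usuarioUltimaAlteracao, 'tablechanged', i.cedCodigo, i.empCodigo,
           'colName', 'UPDATE', d.colName, i.colName
    FROM inserted AS i
    JOIN deleted AS d ON d.cedCodigo = i.cedCodigo
    WHERE i.alteracaoViaCadastro = 1
      AND EXISTS (SELECT d.colName EXCEPT SELECT i.colName); -- NULL-safe inequality
END;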

SQL Server trigger with loop for multiple row insertion

I've created a trigger for my database which handles some insertions, but when I add multiple values in one SQL query it doesn't work:
ALTER TRIGGER [dbo].[ConferenceDayTrigger]
ON [dbo].[Conferences]
AFTER INSERT
AS
BEGIN
    DECLARE @ID INT
    DECLARE @dayC INT
    DECLARE @counter INT
    SET @counter = 1
    SET @ID = (SELECT IDConference FROM Inserted)
    SET @dayC = (SELECT DATEDIFF(DAY, start, finish) FROM Inserted)
    WHILE @counter <= @dayC + 1
    BEGIN
        EXEC AddConferenceDay @ID, @counter
        SET @counter = @counter + 1
    END
END
For a single insertion it works OK. But what should I change/add to make it execute for each row of the inserted values?
If you cannot change the stored procedure, then this might be one of the (very few) cases when a cursor comes to the rescue. Double loops, in fact:
ALTER TRIGGER [dbo].[ConferenceDayTrigger]
ON [dbo].[Conferences]
AFTER INSERT
AS
BEGIN
    DECLARE @ID INT;
    DECLARE @dayC INT;
    DECLARE @counter INT;
    DECLARE yucky_Cursor CURSOR FOR
        SELECT IDConference, DATEDIFF(DAY, start, finish) FROM Inserted;
    OPEN yucky_Cursor; /* Open cursor for reading */
    FETCH NEXT FROM yucky_Cursor INTO @ID, @dayC;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @counter = 1; /* reset per row, otherwise only the first row gets its days */
        WHILE @counter <= @dayC + 1
        BEGIN
            EXEC AddConferenceDay @ID, @counter;
            SET @counter = @counter + 1;
        END;
        FETCH NEXT FROM yucky_Cursor INTO @ID, @dayC;
    END;
    CLOSE yucky_Cursor;
    DEALLOCATE yucky_Cursor;
END;
I suspect there is a way to refactor and get rid of the cursor and use set-based operations.
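For what it's worth, a set-based sketch, assuming AddConferenceDay does nothing more than insert one (conference, day) row into a hypothetical ConferenceDays table; if the procedure does more than that, this doesn't apply:
ALTER TRIGGER [dbo].[ConferenceDayTrigger]
ON [dbo].[Conferences]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- a ROW_NUMBER tally stands in for the per-row counter loop;
    -- sys.all_objects yields a few thousand numbers, plenty for day counts
    INSERT INTO ConferenceDays (IDConference, DayNumber)
    SELECT i.IDConference, n.n
    FROM Inserted AS i
    JOIN (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
          FROM sys.all_objects) AS n
        ON n.n <= DATEDIFF(DAY, i.start, i.finish) + 1;
END;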
When you insert more than one record, you need a cursor/while loop to call the AddConferenceDay procedure for each record.
But I would suggest altering your procedure to accept a table type as an input parameter, so that more than one (ID, dayC) pair can be passed to AddConferenceDay in a single call. It is more efficient than your current approach.
Something like this:
CREATE TYPE udt_Conferences AS TABLE (ID int, dayC int)
Alter the procedure to use udt_Conferences as the input parameter:
ALTER PROCEDURE AddConferenceDay (@input udt_Conferences READONLY)
AS
BEGIN
    /* use the @input table type instead of the @Id and @counter variables */
END
To call the procedure, update the trigger to use the created UDT:
ALTER TRIGGER [dbo].[ConferenceDayTrigger]
ON [dbo].[Conferences]
AFTER INSERT
AS
BEGIN
    DECLARE @input udt_Conferences
    INSERT INTO @input (ID, dayC)
    SELECT IDConference, DATEDIFF(DAY, start, finish) FROM Inserted
    EXEC AddConferenceDay @input -- pass the whole batch in one call
END
Add these lines to your trigger:
AFTER INSERT
AS
BEGIN
    DECLARE @Count int;
    SET @Count = @@ROWCOUNT; -- rows affected by the triggering statement
    IF @Count = 0
        RETURN;
    SET NOCOUNT ON;
    -- Insert statements for trigger here

Deleting large number of rows in chunks

I have about 8 tables that each have 10 million rows or more, and I want the fastest/most elegant delete on them. I have decided to delete them a chunk at a time. With my changes added it looks very, very ugly, and I want to know how to format it better. Also, is this the best way to be doing this?
DECLARE @ChunkSize int
SET @ChunkSize = 50000
WHILE @ChunkSize <> 0
BEGIN
    DELETE TOP (@ChunkSize) FROM TABLE1
    WHERE CREATED < @DATE
    SET @ChunkSize = @@ROWCOUNT
END
DECLARE @ChunkSize int
SET @ChunkSize = 50000
WHILE @ChunkSize <> 0
BEGIN
    DELETE TOP (@ChunkSize) FROM TABLE2
    WHERE CREATED < @DATE
    SET @ChunkSize = @@ROWCOUNT
END
.......
I would be doing this for all 8 tables, which doesn't seem practical. Any advice on how to clean this up?
Prior to 2016 SP1, when partitioning was only available in Enterprise edition, you can either delete in batches or, if the amount of data to be removed is small compared to the total, copy the good data to another table.
For the batch work, I would make some suggestions to your code so it is a bit simpler.
DECLARE @ChunkSize int
SELECT @ChunkSize = 50000 -- use SELECT instead of SET so @@ROWCOUNT will be <> 0
WHILE @@ROWCOUNT <> 0
BEGIN
    DELETE TOP (@ChunkSize) FROM TABLE1
    WHERE CREATED < @DATE
END
SELECT @ChunkSize = @ChunkSize -- this will ensure that @@ROWCOUNT = 1 again
WHILE @@ROWCOUNT <> 0
BEGIN
    DELETE TOP (@ChunkSize) FROM TABLE2
    WHERE CREATED < @DATE
END
You may have to play with the ChunkSize to work well with your data, but 50k is a reasonable starting point.
If you want to avoid repeating your loop for each table, you could use dynamic SQL:
IF OBJECT_ID('tempdb..#tableNames') IS NOT NULL DROP TABLE #tableNames
SELECT name INTO #tableNames FROM sys.tables WHERE name IN (/* Names of tables you want to delete from */)
DECLARE @table sysname
DECLARE @query nvarchar(max)
DECLARE @DATE datetime /* set this to your cutoff date */
WHILE EXISTS (SELECT '1' FROM #tableNames)
BEGIN
    SET @table = (SELECT TOP 1 name FROM #tableNames)
    DELETE FROM #tableNames WHERE name = @table
    SET @query = 'DECLARE @ChunkSize int
    SET @ChunkSize = 50000
    WHILE @ChunkSize <> 0
    BEGIN
        DELETE TOP (@ChunkSize) FROM ' + QUOTENAME(@table) + '
        WHERE CREATED < @DATE
        SET @ChunkSize = @@ROWCOUNT
    END'
    EXEC sp_executesql @query, N'@DATE datetime', @DATE = @DATE -- pass the cutoff into the dynamic batch
END

Update table in chunks

I am trying to update a large table in chunks and transactions.
This query runs endlessly if column1 is not updated for some reason: I have another nested query there which does not necessarily return a value, so some column1 values remain NULL after the update. This puts my query into an endless loop.
How can I specify a position for "update top" to start with?
Thanks in advance.
DECLARE @counter int
DECLARE @total int
DECLARE @batch int
SET @total = (SELECT COUNT(*) FROM table WITH (NOLOCK))
SET @counter = 0
SET @batch = 1000
WHILE (@counter < (@total / @batch) + 1)
BEGIN
    BEGIN TRANSACTION
    SET @counter = @counter + 1
    UPDATE TOP (@batch) table
    SET column1 = 'something'
    WHERE column1 IS NULL
    COMMIT TRANSACTION
END
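One way out, sketched under the assumption that the table has a reasonably dense integer primary key named id (hypothetical; substitute your own key): walk the key range in fixed windows instead of re-scanning for NULLs, so rows whose column1 stays NULL cannot trap the loop on the same batch:
DECLARE @lastId INT = 0, @batch INT = 1000, @maxId INT
SELECT @maxId = MAX(id) FROM [table]
WHILE @lastId < @maxId
BEGIN
    BEGIN TRANSACTION
    UPDATE [table]
    SET column1 = 'something'
    WHERE id > @lastId AND id <= @lastId + @batch
      AND column1 IS NULL -- still only touch rows that need the update
    SET @lastId = @lastId + @batch -- the window advances even when nothing changes
    COMMIT TRANSACTION
END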