We are using SQL Server 2000. We have a large database with over 100,000 images. Currently I'm deleting records with this query:
DELETE FROM T_JBSHEETDATA
WHERE (F_JBREF NOT IN (SELECT JOB_REF_NUMBER
FROM T_JBDTLS))
but unfortunately it can only delete about 500 records at a time; if I take more records, the server dies (server timeout). How do I create a loop that deletes x rows at a time until it's finished?
Below is a way to do "top N deletes".
A few ideas. You can tune @TopSize until you find the "Goldilocks" value: not too big, not too small.
You could also remove the WHILE loop and instead track the row count (@@ROWCOUNT after the DELETE statement). If you have client code, return the delete count (from a stored procedure, say) and keep calling it over and over until the delete count is zero; a rough sketch of that idea follows.
Now, I would see if an index could improve performance before resorting to the code below. But I'm trying to answer your question as asked.
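A minimal sketch of that "call it until it returns zero" idea (the procedure name and the @BatchSize default are made up for illustration; DELETE TOP (n) needs SQL Server 2005 or later, see the 2000 note in the EDIT below):
CREATE PROCEDURE dbo.usp_DeleteOrphanSheetData   -- hypothetical name
    @BatchSize int = 1000
AS
BEGIN
    SET NOCOUNT ON

    DELETE TOP (@BatchSize)
    FROM T_JBSHEETDATA
    WHERE F_JBREF NOT IN (SELECT JOB_REF_NUMBER FROM T_JBDTLS)

    RETURN @@ROWCOUNT   -- caller keeps executing the proc until this comes back as 0
END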
/* START TSQL */
if exists (SELECT * FROM information_schema.tables WHERE table_schema = 'dbo' and table_name = 'Television')
BEGIN
DROP TABLE [dbo].[Television]
END
GO
CREATE TABLE [dbo].[Television] (
TelevisionUUID [uniqueidentifier] not null default NEWSEQUENTIALID() ,
TelevisionName varchar(64) not null ,
TelevisionKey int not null ,
IsCheckedOut bit default 0
)
GO
ALTER TABLE dbo.Television ADD CONSTRAINT PK_Television_TelevisionUUID
PRIMARY KEY CLUSTERED (TelevisionUUID)
GO
ALTER TABLE dbo.Television ADD CONSTRAINT CK_Television_TelevisionName_UNIQUE
UNIQUE (TelevisionName)
GO
set nocount on
declare @counter int
select @counter = 11000
declare @currentTVName varchar(24)
declare @TopSize int
select @TopSize = 10
while @counter > 10000 /* this loop counter is ONLY here for fake data ... do not use this syntax for production code */
begin
select @currentTVName = 'TV: '+ convert(varchar(24) , @counter)
INSERT into dbo.Television ( TelevisionName , TelevisionKey ) values ( @currentTVName , @counter)
select @counter = @counter - 1
end
/* Everything above is just setup data, the crux of the code is below this line */
select count(*) as TV_Total_COUNT_Pre from dbo.Television
declare @DeleteLoopCounter int
select @DeleteLoopCounter = 0
while exists ( select top 1 * from dbo.Television )
BEGIN
select @DeleteLoopCounter = @DeleteLoopCounter + 1
;
WITH cte1 AS
( SELECT
TOP (@TopSize)
TelevisionUUID , /* <<Note, the columns here must be available to the output */
IsCheckedOut
FROM
dbo.Television tv
WITH ( UPDLOCK, READPAST , ROWLOCK ) /* <<Optional Hints, but helps with concurrency issues */
/* WHERE conditions can be put there as well */
ORDER BY /* order by is optional, and I would probably remove it for a delete operation */
tv.TelevisionKey DESC
)
/* UPDATE cte1 SET IsCheckedOut = 1 */ /* this code has nothing to do with the delete solution, but shows how you could use this same trick for an "update top N" */
Delete deleteAlias
from dbo.Television deleteAlias
where exists ( select null from cte1 innerAlias where innerAlias.TelevisionUUID = deleteAlias.TelevisionUUID )
;
print '/@DeleteLoopCounter/'
print @DeleteLoopCounter
print ''
select count(*) as TV_Total_COUNT_Post from dbo.Television
END
EDIT
SQL Server 2000 specific info:
NOTE: Since you have 2000, you will have to hard-code the @TopSize value and remove the parentheses around it near the SELECT, using a literal like 1000 instead; TOP does not accept a variable there. (CTEs themselves also require SQL Server 2005 or later, so on 2000 a SET ROWCOUNT loop is the usual substitute; a sketch follows.) I will leave the code "as is" for future readers.
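A minimal SQL Server 2000 compatible sketch, using SET ROWCOUNT to cap the batch size (500 is just a starting value to tune) and @@ROWCOUNT to decide when to stop; table and column names are taken from the question:
DECLARE @rc int
SET @rc = 1

SET ROWCOUNT 500   -- delete at most 500 rows per statement

WHILE @rc > 0
BEGIN
    DELETE FROM T_JBSHEETDATA
    WHERE F_JBREF NOT IN (SELECT JOB_REF_NUMBER FROM T_JBDTLS)

    SET @rc = @@ROWCOUNT   -- capture how many rows the last batch removed
END

SET ROWCOUNT 0   -- back to "no limit" for the rest of the session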
You might be able to improve the performance of your query instead of trying to figure out how to delete in a loop.
1. Make sure there is an index on table T_JBDTLS.JOB_REF_NUMBER. If it's missing, that might be the cause of the slowdown (a sketch of the index appears after this list).
2. Change your query to use a join instead of a sub-select. Something like:
DELETE FROM T_JBSHEETDATA
FROM T_JBSHEETDATA D
LEFT JOIN T_JBDTLS J ON D.F_JBREF = J.JOB_REF_NUMBER
WHERE J.JOB_REF_NUMBER IS NULL
3. Are there any triggers on the table that you're deleting from? What about cascade deletes? Triggers on the table that is being cascade-deleted?
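A minimal sketch of the index from item 1, assuming no such index exists yet (the index name is made up):
CREATE INDEX IX_T_JBDTLS_JOB_REF_NUMBER   -- hypothetical name
    ON T_JBDTLS (JOB_REF_NUMBER)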
You may also want to try a "not exists" clause instead of a "not in".
I believe the below is the correct translation.
Delete deleteAlias
/* select deleteAlias.* */
from dbo.T_JBSHEETDATA deleteAlias
where not exists ( select null from dbo.[T_JBDTLS] innerDets where innerDets.JOB_REF_NUMBER = deleteAlias.F_JBREF )
Here is a generic Northwind version that will delete any Order(s) that do(es) not have any (child) Order-Details.
Use Northwind
GO
Delete deleteAlias
/* select deleteAlias.* */
from dbo.Orders deleteAlias
where not exists ( select null from dbo.[Order Details] innerDets where innerDets.OrderId = deleteAlias.OrderId )
Related
I've created a stored procedure that filters and paginates for a DataTable.
Problem: I need to set the OUTPUT variable @TotalRecord to the number of records found before the OFFSET occurs; otherwise it gets set to @RecordPerPage.
I've messed around with CTEs and also simply tried this:
SELECT *, @TotalRecord = COUNT(1)
FROM dbo
But that doesn't work either.
Here is my stored procedure, with most of the stuff pulled out:
ALTER PROCEDURE [dbo].[SearchErrorReports]
@FundNumber varchar(50) = null,
@ProfitSelected bit = 0,
@SortColumnName varchar(30) = null,
@SortDirection varchar(10) = null,
@StartIndex int = 0,
@RecordPerPage int = null,
@TotalRecord INT = 0 OUTPUT --NEED TO SET THIS BEFORE OFFSET!
AS
BEGIN
SET NOCOUNT ON;
SELECT *
FROM
(SELECT *
FROM dbo.View
WHERE (@ProfitSelected = 1 AND Profit = 1)) AS ERP
WHERE
((@FundNumber IS NULL OR @FundNumber = '')
OR (ERP.FundNumber LIKE '%' + @FundNumber + '%'))
ORDER BY
CASE
WHEN @SortColumnName = 'FundNumber' AND @SortDirection = 'asc'
THEN ERP.FundNumber
END ASC,
CASE
WHEN @SortColumnName = 'FundNumber' AND @SortDirection = 'desc'
THEN ERP.FundNumber
END DESC
OFFSET @StartIndex ROWS
FETCH NEXT @RecordPerPage ROWS ONLY
Thank you in advance!
You could try something like this:
create a CTE that gets the data you want to return
include a COUNT(*) OVER() in there to get the total count of rows
return just a subset (based on your OFFSET .. FETCH NEXT) from the CTE
So your code would look something along those lines:
-- CTE definition - call it whatever you like
WITH BaseData AS
(
SELECT
-- select all the relevant columns you need
p.ProductID,
p.ProductName,
-- using COUNT(*) OVER() returns the total count over all rows
TotalCount = COUNT(*) OVER()
FROM
dbo.Products p
)
-- now select from the CTE - using OFFSET/FETCH NEXT, get only those rows you
-- want - but the "TotalCount" column still contains the total count - before
-- the OFFSET/FETCH
SELECT *
FROM BaseData
ORDER BY ProductID
OFFSET 20 ROWS FETCH NEXT 15 ROWS ONLY
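If you specifically need the @TotalRecord OUTPUT parameter populated (rather than having the client read the TotalCount column), one hedged sketch, reusing the names from the question and eliding the CASE-based dynamic ORDER BY, is to take the count over the same filter in a separate statement, since a single SELECT cannot both return rows to the client and assign a variable:
SELECT @TotalRecord = COUNT(*)
FROM dbo.View AS ERP
WHERE (@ProfitSelected = 1 AND ERP.Profit = 1)
  AND ((@FundNumber IS NULL OR @FundNumber = '')
       OR ERP.FundNumber LIKE '%' + @FundNumber + '%');

SELECT ERP.*
FROM dbo.View AS ERP
WHERE (@ProfitSelected = 1 AND ERP.Profit = 1)
  AND ((@FundNumber IS NULL OR @FundNumber = '')
       OR ERP.FundNumber LIKE '%' + @FundNumber + '%')
ORDER BY ERP.FundNumber   -- plug the CASE-based sort from the question back in here
OFFSET @StartIndex ROWS FETCH NEXT @RecordPerPage ROWS ONLY;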
As a habit, I prefer handling non-null entries before possible nulls. I did not address those in my response below, and limited the working example to just the two inputs you are most concerned with.
I believe there could be cleaner ways to apply your local variables to filter the query results without having to perform an offset. You could return to a temp table or a permanent work table that cleans itself up, and use the IDs that aren't returned as a way to set pages. Smoother, with less fuss.
However, I understand that isn't always feasible, and I become frustrated myself with those attempting to solve your use case for you without attempting to answer the question. Quite often there are multiple ways to tackle any issue. Your job is to decide which one is best in your scenario. Our job is to help you figure out the script.
With that said, here's a potential solution using dynamic SQL.
I'm a huge believer in dynamic SQL, and use it extensively for user based table control and ease of ETL mapping control.
use TestCatalog;
set nocount on;
--Builds a temp table, just for test purposes
drop table if exists ##TestOffset;
create table ##TestOffset
(
Id int identity(1,1)
, RandomNumber decimal (10,7)
);
--Inserts 1000 random numbers between 0 and 100
while (select count(*) from ##TestOffset) < 1000
begin
insert into ##TestOffset
(RandomNumber)
values
(RAND()*100)
end;
set nocount off;
go
create procedure dbo.TestOffsetProc
@StartIndex int = null --I'll reference this like a page number below
, @RecordsPerPage int = null
as
begin
declare @MaxRows int = 30; --your front end will probably manage this, but don't trust it. I personally would store this on a table against each display so it can also be returned dynamically with less manual intrusion to this procedure.
declare @FirstRow int;
--Quick entry to ensure your record count returned doesn't exceed the max allowed.
if @RecordsPerPage is null or @RecordsPerPage > @MaxRows
begin
set @RecordsPerPage = @MaxRows
end;
--Same here, making sure not to return NULL to your dynamic statement. If null is returned from any variable, the entire statement will become null.
if @StartIndex is null
begin
set @StartIndex = 0
end;
set @FirstRow = @StartIndex * @RecordsPerPage
declare @Sql nvarchar(2000) = 'select
tos.*
from ##TestOffset as tos
order by tos.RandomNumber desc
offset ' + convert(nvarchar,@FirstRow) + ' rows
fetch next ' + convert(nvarchar,@RecordsPerPage) + ' rows only'
exec (@Sql);
end
go
exec dbo.TestOffsetProc;
drop table ##TestOffset;
drop procedure dbo.TestOffsetProc;
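A side note on the design choice: because @FirstRow and @RecordsPerPage are integers built inside the procedure, concatenating them is safe here, but a parameterized form keeps the habit safe if string inputs ever creep in. A rough sketch of the same statement via sp_executesql (OFFSET/FETCH accept parameters directly):
declare @Sql nvarchar(2000) = N'select tos.*
from ##TestOffset as tos
order by tos.RandomNumber desc
offset @FirstRow rows
fetch next @RecordsPerPage rows only';

exec sp_executesql @Sql,
    N'@FirstRow int, @RecordsPerPage int',
    @FirstRow = @FirstRow,
    @RecordsPerPage = @RecordsPerPage;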
I am trying to insert data in a table named "Candidate_Post_Info_Table_ChangeLogs" whenever a record is updated in another table named "Candidate_Personal_Info_Table". My code works fine whenever a single record is updated, but when I try to update multiple rows it gives this error:
"Subquery returned more than 1 value".
Following is my code :
ALTER TRIGGER [dbo].[Candidate_PostInfo_UPDATE]
ON [dbo].[Candidate_Post_Info_Table]
AFTER UPDATE
AS
BEGIN
IF @@ROWCOUNT = 0
RETURN
DECLARE @Candidate_Post_ID int
DECLARE @Candidate_ID varchar(50)
DECLARE @Action VARCHAR(50)
DECLARE @OldValue VARCHAR(MAX)
DECLARE @NewValue VARCHAR(MAX)
DECLARE @Admin_id int
IF UPDATE(Verified)
BEGIN
SET @Action = 'Changed Verification Status'
SET @Candidate_Post_ID = (Select ID From inserted)
SET @Candidate_ID = (Select Identity_Number from inserted)
SET @NewValue = (Select Verified From inserted)
SET @OldValue = (Select Verified From deleted)
IF(@NewValue != @OldValue)
BEGIN
INSERT INTO Candidate_Post_Info_Table_ChangeLogs(Candidate_Post_ID, Candidate_ID, Change_DateTime, action, NewValue, OldValue, Admin_ID)
VALUES(@Candidate_Post_ID, @Candidate_ID, GETDATE(), @Action, @NewValue, @OldValue, '1')
END
END
END
I have searched Stack Overflow for this issue but couldn't find any answer specific to this scenario.
When you insert/update multiple rows into a table, the Inserted temporary table used by the system holds all of the values from all of the rows that were inserted or updated.
Therefore, if you do an update to 6 rows, the Inserted table will also have 6 rows, and doing something like this:
SET @Candidate_Post_ID = (Select ID From inserted)
Will return an error, just the same as doing this:
SET @Candidate_Post_ID = (SELECT 1 UNION ALL SELECT 2 UNION ALL SELECT 3 UNION ALL SELECT 4 UNION ALL SELECT 5 UNION ALL SELECT 6)
From the looks of things, you tried to do this with an iterative approach. Set-based is better. Maybe consider doing it like this in the body of your TRIGGER (without all of the parameters...):
IF UPDATE(Verified)
BEGIN
INSERT INTO Candidate_Post_Info_Table_ChangeLogs
(
Candidate_Post_ID
,Candidate_ID
,Change_DateTime
,action
,NewValue
,OldValue
,Admin_ID
)
SELECT
I.ID
,I.Identity_Number
,GETDATE()
,'Changed Verification Status'
,I.Verified
,O.Verified
,'1'
FROM Inserted I
INNER JOIN Deleted O
ON I.ID = O.ID -- Check this condition to make sure it's a unique join per row
WHERE I.Verified <> O.Verified
END
A similar case was solved in the following thread using cursors; please check it:
SQL Server A trigger to work on multiple row inserts
The thread below also gives a solution based on a set-based approach:
SQL Server - Rewrite trigger to avoid cursor based approach
Both of the above threads are from Stack Overflow.
Spec for the stored procedure is:
To select and return an Id from my table tb_r12028dxi_SandpitConsoleProofClient (order is not important; the top 1 found will do), and as soon as I've selected that record it needs to be marked 'P' so that it does not get selected again.
Here is the stored procedure:
ALTER PROCEDURE [dbo].[r12028dxi_SandpitConsoleProofSweep]
@myId INT OUTPUT
AS
/*
DECLARE @X INT
EXECUTE [xxx].[dbo].[r12028dxi_SandpitConsoleProofSweep] @X OUTPUT
SELECT @X
*/
DECLARE @NumQueue INT = (
SELECT [cnt] = COUNT(*)
FROM xxx.DBO.tb_r12028dxi_SandpitConsoleProofClient
WHERE [Status] IS NULL
);
IF @NumQueue > 0
BEGIN
BEGIN TRANSACTION;
DECLARE @foundID INT = (SELECT TOP 1 Id FROM xxx.DBO.tb_r12028dxi_SandpitConsoleProofClient WHERE [Status] IS NULL);
UPDATE x
SET x.[Status] = 'P'
FROM xxx.DBO.tb_r12028dxi_SandpitConsoleProofClient x
WHERE x.Id = @foundID
SET @myId = @foundID;
RETURN;
COMMIT TRANSACTION;
END;
GO
It is returning the error message:
Transaction count after EXECUTE indicates a mismatching number of
BEGIN and COMMIT statements. Previous count = 0, current count = 1.
I've just added the UPDATE script and the BEGIN TRANSACTION; and COMMIT TRANSACTION; before that, it worked fine when it looked like the following...
ALTER PROCEDURE [dbo].[r12028dxi_SandpitConsoleProofSweep]
@myId INT OUTPUT
AS
/*
DECLARE @X INT
EXECUTE [xxx].[dbo].[r12028dxi_SandpitConsoleProofSweep] @X OUTPUT
SELECT @X
*/
DECLARE @NumQueue INT = (
SELECT [cnt] = COUNT(*)
FROM xxx.DBO.tb_r12028dxi_SandpitConsoleProofClient
WHERE [Status] IS NULL
);
IF @NumQueue > 0
BEGIN
SELECT TOP 1 @myId = Id FROM xxx.DBO.tb_r12028dxi_SandpitConsoleProofClient;
RETURN;
END;
GO
I added the BEGIN TRANSACTION; / COMMIT TRANSACTION; because I wanted to ensure that the data gets read into the output variable AND that the UPDATE happens. Should I just leave out this section of the procedure?
You have "RETURN;" before "COMMIT TRANSACTION;" which means "COMMIT TRANSACTION;" is never executed.
Given that you want:
and as soon as I've selected that record it needs to be marked 'P' so
that it does not get selected again.
you can achieve that in a single statement (and not in a transaction):
ALTER PROCEDURE [dbo].[r12028dxi_SandpitConsoleProofSweep]
@myId INT OUTPUT
AS
BEGIN
UPDATE x
SET x.[Status] = 'P',
@myId = x.ID
FROM xxx.DBO.tb_r12028dxi_SandpitConsoleProofClient x
/* a sample join to get your single row in an update statement */
WHERE x.ID = (SELECT MIN(ID)
FROM xxx.DBO.tb_r12028dxi_SandpitConsoleProofClient sub
WHERE ISNULL(sub.[Status], '') != 'P')
END
Note: When dealing with concurrent processing (i.e. two threads trying to select from a single queue) it's more about the locking behavior than doing it inside a single transaction; a rough sketch of one locking-hint variation follows.
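A hedged sketch of that locking angle (my variation, not part of either answer): UPDATE TOP (1) combined with UPDLOCK, READPAST and ROWLOCK hints lets concurrent callers skip rows another caller has already locked instead of blocking on them:
ALTER PROCEDURE [dbo].[r12028dxi_SandpitConsoleProofSweep]
    @myId INT OUTPUT
AS
BEGIN
    UPDATE TOP (1) x
    SET x.[Status] = 'P',
        @myId = x.ID
    FROM xxx.DBO.tb_r12028dxi_SandpitConsoleProofClient x WITH (UPDLOCK, READPAST, ROWLOCK)
    WHERE x.[Status] IS NULL
END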
As an alternative to a perfectly reasonable suggestion by @Andrew Bickerton, you could also use a CTE and the ROW_NUMBER() function, like this:
WITH ranked AS (
SELECT
Id,
[Status],
rnk = ROW_NUMBER() OVER (ORDER BY Id)
FROM xxx.DBO.tb_r12028dxi_SandpitConsoleProofClient
WHERE [Status] IS NULL
)
UPDATE ranked
SET
[Status] = 'P',
@myId = Id
WHERE rnk = 1
;
The ROW_NUMBER() function assigns rankings to all rows where [Status] IS NULL, which allows you to update only a specific one.
The use of the CTE as the direct target of the UPDATE statement is absolutely legitimate in this case, as the CTE only pulls rows from one table. (This is similar to the use of views in UPDATE statements.)
I'm using a SQL Server Compact database (.sdf) in MS SQL 2008.
In the table 'Job', each id has multiple jobs.
A system regularly adds jobs into the table.
I would like to keep the 10 latest records for each id, ordered by their 'datecompleted',
and delete the rest of the records.
How can I construct my query? I failed using a #temp table and a cursor.
Well it is fast approaching Christmas, so here is my gift to you, an example script that demonstrates what I believe it is that you are trying to achieve. No I don't have a big white fluffy beard ;-)
CREATE TABLE TestJobSetTable
(
ID INT IDENTITY(1,1) not null PRIMARY KEY,
JobID INT not null,
DateCompleted DATETIME not null
);
--Create some test data
DECLARE @iX INT;
SET @iX = 0
WHILE(@iX < 15)
BEGIN
INSERT INTO TestJobSetTable(JobID,DateCompleted) VALUES(1,getDate())
INSERT INTO TestJobSetTable(JobID,DateCompleted) VALUES(34,getDate())
SET @iX = @iX + 1;
WAITFOR DELAY '00:00:0:01'
END
--Create some more test data, for when there may be job groups with less than 10 records.
SET @iX = 0
WHILE(@iX < 6)
BEGIN
INSERT INTO TestJobSetTable(JobID,DateCompleted) VALUES(23,getDate())
SET @iX = @iX + 1;
WAITFOR DELAY '00:00:0:01'
END
--Review the data set
SELECT * FROM TestJobSetTable;
--Apply the deletion to the remainder of the data set.
WITH TenMostRecentCompletedJobs AS
(
SELECT ID, JobID, DateCompleted
FROM TestJobSetTable A
WHERE ID in
(
SELECT TOP 10 ID
FROM TestJobSetTable
WHERE JobID = A.JobID
ORDER BY DateCompleted DESC
)
)
--SELECT * FROM TenMostRecentCompletedJobs ORDER BY JobID,DateCompleted desc;
DELETE FROM TestJobSetTable
WHERE ID NOT IN(SELECT ID FROM TenMostRecentCompletedJobs)
--Now only data of interest remains
SELECT * FROM TestJobSetTable
DROP TABLE TestJobSetTable;
How about something like:
DELETE FROM
Job
WHERE NOT
id IN (
SELECT TOP 10 id
FROM Job
ORDER BY datecompleted)
This is assuming you're using 3.5 because nested SELECT is only available in this version or higher.
I did not read the question correctly. I suspect something more along the lines of a CTE will solve the problem, using similar logic: you want to build a query that identifies the records you want to keep, as your starting point (a rough sketch appears after the link below).
Using CTE on SQL Server Compact 3.5
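On full SQL Server that keep-query idea could look like the sketch below, using ROW_NUMBER() partitioned per id (table and column names are taken from the question); SQL Server Compact 3.5 may not accept this syntax directly, which is what the linked thread is about:
WITH ranked AS
(
    SELECT id,
           datecompleted,
           rn = ROW_NUMBER() OVER (PARTITION BY id ORDER BY datecompleted DESC)
    FROM Job
)
DELETE FROM ranked   -- deleting through the CTE removes the underlying Job rows
WHERE rn > 10;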
I need to write a T-SQL stored procedure that updates a row in a table. If the row doesn't exist, insert it. All this steps wrapped by a transaction.
This is for a booking system, so it must be atomic and reliable. It must return true if the transaction was committed and the flight booked.
I'm not sure how to use @@ROWCOUNT. This is what I've written until now. Am I on the right road?
-- BEGIN TRANSACTION (HOW TO DO?)
UPDATE Bookings
SET TicketsBooked = TicketsBooked + @TicketsToBook
WHERE FlightId = @Id AND TicketsMax < (TicketsBooked + @TicketsToBook)
-- Here I need to insert only if the row doesn't exist.
-- If the row exists but the condition TicketsMax is violated, I must not insert
-- the row and return FALSE
IF @@ROWCOUNT = 0
BEGIN
INSERT INTO Bookings ... (omitted)
END
-- END TRANSACTION (HOW TO DO?)
-- Return TRUE (How to do?)
I assume a single row for each flight? If so:
IF EXISTS (SELECT * FROM Bookings WHERE FlightId = @Id)
BEGIN
--UPDATE HERE
END
ELSE
BEGIN
-- INSERT HERE
END
I assume what I said because your way of doing things can overbook a flight: it will insert a new row even when there are 10 tickets max and you are booking 20. A filled-in sketch of the skeleton above follows.
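A hedged sketch filling in that skeleton (the INSERT column list and a @TicketsMax parameter are assumptions, since the question omits them; TRUE/FALSE is modelled as a 1/0 return code):
DECLARE @rows int

BEGIN TRANSACTION

IF EXISTS (SELECT * FROM Bookings WHERE FlightId = @Id)
BEGIN
    UPDATE Bookings
    SET TicketsBooked = TicketsBooked + @TicketsToBook
    WHERE FlightId = @Id
      AND TicketsMax >= TicketsBooked + @TicketsToBook   -- refuse to overbook
END
ELSE
BEGIN
    INSERT INTO Bookings (FlightId, TicketsMax, TicketsBooked)   -- assumed column list
    VALUES (@Id, @TicketsMax, @TicketsToBook)
END

SET @rows = @@ROWCOUNT   -- capture before COMMIT resets it

COMMIT TRANSACTION

RETURN CASE WHEN @rows > 0 THEN 1 ELSE 0 END   -- 1 = booked, 0 = not booked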
Take a look at the MERGE command. You can do UPDATE, INSERT & DELETE in one statement.
Here is a working implementation using MERGE.
It checks whether the flight is full before doing an update; otherwise it does an insert.
if exists(select 1 from INFORMATION_SCHEMA.TABLES T
where T.TABLE_NAME = 'Bookings')
begin
drop table Bookings
end
GO
create table Bookings(
FlightID int identity(1, 1) primary key,
TicketsMax int not null,
TicketsBooked int not null
)
GO
insert Bookings(TicketsMax, TicketsBooked) select 1, 0
insert Bookings(TicketsMax, TicketsBooked) select 2, 2
insert Bookings(TicketsMax, TicketsBooked) select 3, 1
GO
select * from Bookings
And then ...
declare @FlightID int = 1
declare @TicketsToBook int = 2
--; This should add a new record
merge Bookings as T
using (select @FlightID as FlightID, @TicketsToBook as TicketsToBook) as S
on T.FlightID = S.FlightID
and T.TicketsMax > (T.TicketsBooked + S.TicketsToBook)
when matched then
update set T.TicketsBooked = T.TicketsBooked + S.TicketsToBook
when not matched then
insert (TicketsMax, TicketsBooked)
values(S.TicketsToBook, S.TicketsToBook);
select * from Bookings
Pass updlock, rowlock, holdlock hints when testing for existence of the row.
begin tran /* default read committed isolation level is fine */
if not exists (select * from Table with (updlock, rowlock, holdlock) where ...)
/* insert */
else
/* update */
commit /* locks are released here */
The updlock hint forces the query to take an update lock on the row if it already exists, preventing other transactions from modifying it until you commit or roll back.
The holdlock hint forces the query to take a range lock, preventing other transactions from adding a row matching your filter criteria until you commit or roll back.
The rowlock hint forces lock granularity to row level instead of the default page level, so your transaction won't block other transactions trying to update unrelated rows in the same page (but be aware of the trade-off between reduced contention and the increase in locking overhead - you should avoid taking large numbers of row-level locks in a single transaction).
See http://msdn.microsoft.com/en-us/library/ms187373.aspx for more information.
Note that locks are taken as the statements which take them are executed - invoking begin tran doesn't give you immunity against another transaction pinching locks on something before you get to it. You should try and factor your SQL to hold locks for the shortest possible time by committing the transaction as soon as possible (acquire late, release early).
Note that row-level locks may be less effective if your PK is a bigint, as the internal hashing on SQL Server is degenerate for 64-bit values (different key values may hash to the same lock id).
I'm writing my solution. My method doesn't need IF or MERGE. My method is easy.
INSERT INTO TableName (col1,col2)
SELECT @par1, @par2
WHERE NOT EXISTS (SELECT col1,col2 FROM TableName
WHERE col1=@par1 AND col2=@par2)
For Example:
INSERT INTO Members (username)
SELECT 'Cem'
WHERE NOT EXISTS (SELECT username FROM Members
WHERE username='Cem')
Explanation:
(1) SELECT col1,col2 FROM TableName WHERE col1=@par1 AND col2=@par2
It selects the searched values from TableName.
(2) SELECT @par1, @par2 WHERE NOT EXISTS
It returns the values only if subquery (1) finds nothing.
(3) It inserts the values from step (2) into TableName.
I finally was able to insert a row, on the condition that it didn't already exist, using the following model:
INSERT INTO table ( column1, column2, column3 )
(
SELECT $column1, $column2, $column3
WHERE NOT EXISTS (
SELECT 1
FROM table
WHERE column1 = $column1
AND column2 = $column2
AND column3 = $column3
)
)
which I found at:
http://www.postgresql.org/message-id/87hdow4ld1.fsf@stark.xeocode.com
This is something I just recently had to do:
set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[cjso_UpdateCustomerLogin]
(
@CustomerID AS INT,
@UserName AS VARCHAR(25),
@Password AS BINARY(16)
)
AS
BEGIN
IF ISNULL((SELECT CustomerID FROM tblOnline_CustomerAccount WHERE CustomerID = @CustomerID), 0) = 0
BEGIN
INSERT INTO [tblOnline_CustomerAccount] (
[CustomerID],
[UserName],
[Password],
[LastLogin]
) VALUES (
/* CustomerID - int */ @CustomerID,
/* UserName - varchar(25) */ @UserName,
/* Password - binary(16) */ @Password,
/* LastLogin - datetime */ NULL )
END
ELSE
BEGIN
UPDATE [tblOnline_CustomerAccount]
SET UserName = @UserName,
Password = @Password
WHERE CustomerID = @CustomerID
END
END
You could use the MERGE functionality to achieve this. Otherwise you can do:
-- run your UPDATE first, then check whether it touched any rows
declare @rowCount int
select @rowCount = @@ROWCOUNT
if @rowCount = 0
begin
    --insert....
    INSERT INTO [DatabaseName1].dbo.[TableName1]
    SELECT * FROM [DatabaseName2].dbo.[TableName2]
    WHERE [YourPK] not in (select [YourPK] from [DatabaseName1].dbo.[TableName1])
end
Full solution is below (including cursor structure). Many thanks to Cassius Porcus for the begin trans ... commit code from posting above.
declare @mystat6 bigint
declare @mystat6p varchar(50)
declare @mystat6b bigint
DECLARE mycur1 CURSOR for
select result1,picture,bittot from all_Tempnogos2results11
OPEN mycur1
FETCH NEXT FROM mycur1 INTO @mystat6, @mystat6p , @mystat6b
WHILE @@Fetch_Status = 0
BEGIN
begin tran /* default read committed isolation level is fine */
if not exists (select * from all_Tempnogos2results11_uniq with (updlock, rowlock, holdlock)
where all_Tempnogos2results11_uniq.result1 = @mystat6
and all_Tempnogos2results11_uniq.bittot = @mystat6b )
insert all_Tempnogos2results11_uniq values (@mystat6 , @mystat6p , @mystat6b)
--else
-- /* update */
commit /* locks are released here */
FETCH NEXT FROM mycur1 INTO @mystat6 , @mystat6p , @mystat6b
END
CLOSE mycur1
DEALLOCATE mycur1
go
A simple way to copy data from T1 to T2 and avoid duplicates in T2:
--Insert a new record
INSERT INTO dbo.Table2(NoEtu, FirstName, LastName)
SELECT t1.NoEtuDos, t1.FName, t1.LName
FROM dbo.Table1 as t1
WHERE NOT EXISTS (SELECT (1) FROM dbo.Table2 AS t2
WHERE t1.FName = t2.FirstName
AND t1.LName = t2.LastName
AND t1.NoEtuDos = t2.NoEtu)
INSERT INTO table ( column1, column2, column3 )
SELECT $column1, $column2, $column3
EXCEPT SELECT column1, column2, column3
FROM table
The best approach to this problem is first making the database column UNIQUE:
ALTER TABLE table_name ADD CONSTRAINT uq_column_name UNIQUE (column_name)
Then any INSERT that would create a duplicate key / a value that already exists in the table is rejected. (The INSERT IGNORE statement, which silently skips such rows, is MySQL syntax; a hedged SQL Server equivalent is sketched below.)
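A rough SQL Server counterpart of MySQL's INSERT IGNORE (table and column names are placeholders): a unique index created with IGNORE_DUP_KEY = ON turns duplicate inserts into warnings rather than errors, so the offending rows are simply skipped.
CREATE UNIQUE INDEX UX_table_name_column1
    ON table_name (column1)
    WITH (IGNORE_DUP_KEY = ON);

-- rows whose column1 value already exists are discarded with a
-- "Duplicate key was ignored." warning instead of failing the INSERT
INSERT INTO table_name (column1) VALUES ('some value');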