Using a trigger to keep data integrity - SQL

I have scoured the internet for a solution (mostly scouring Stack Overflow) and I cannot come up with anything.
Here is my goal: I have a local database and I have set up a linked server to another database. I am creating a trigger on one of my local tables. One of the column values is a Hotel ID. In the linked server there is a table called "Hotel". The point of this trigger is to check and make sure that the HotelID I am trying to insert into my local table is a value that exists in the linked server's Hotel table.
Example: If I want to insert a new row into my "Store Table" from local, I want to make sure that the HotelID I am trying to insert exists in the "Hotel" table in my linked server. If it does not exist, I want to rollback the transaction and display a message.
Below is the code I have been playing with. I feel like I could be close, but I am open to the idea that I am extremely far away.
FYI: The code inside of the IF NOT EXISTS statement is incorrect. I am just confused as to what needs to go in there.
CREATE TRIGGER tr_trigger ON Store
AFTER INSERT
AS
DECLARE @HotelID smallint = (SELECT HotelID FROM inserted)
DECLARE @query NVARCHAR(MAX) = N'SELECT * FROM OPENQUERY (test,''
SELECT HotelID FROM test.dbo.Hotel WHERE HotelID = ''''' +
CONVERT(nvarchar(15), @HotelID) + ''''''')'
DECLARE @StoredResult NVARCHAR(20)
BEGIN
    EXEC sp_executesql @query, N'@StoredResult NVARCHAR(MAX) OUTPUT', @StoredResult = @StoredResult OUTPUT
    SELECT @StoredResult
    IF NOT EXISTS (SELECT * FROM OPENQUERY (test, 'SELECT HotelID FROM test.dbo.Hotel'))
    BEGIN
        PRINT 'That HotelID does not exist. Please try again.'
        ROLLBACK TRANSACTION
    END
END
GO
EDIT: This has been solved thanks to a couple of suggestions from marc_s. Below is my new code that works how I need it to.
CREATE TRIGGER tr_trigger ON Store
AFTER UPDATE, INSERT
AS
BEGIN
    IF NOT EXISTS (SELECT A.* FROM OPENQUERY (test, 'SELECT HotelID FROM test.dbo.hotel') A
                   INNER JOIN inserted i
                           ON A.HotelID = i.HotelID)
    BEGIN
        PRINT 'Please enter a valid HotelID'
        ROLLBACK TRANSACTION
    END
END
GO

How about:
CREATE TRIGGER tr_DataIntegrity ON Store
AFTER UPDATE, INSERT
AS
BEGIN
    IF EXISTS (
        SELECT * FROM inserted i
        WHERE NOT EXISTS (
            SELECT A.*
            FROM OPENQUERY (TITAN_Prescott_Store, 'SELECT HotelID FROM FARMS_Prescott.dbo.hotel') A
            WHERE A.HotelID = i.HotelID))
    BEGIN
        PRINT 'Please do not enter an invalid HotelID'
        ROLLBACK TRANSACTION
    END
END
GO
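A quick way to exercise the trigger (the HotelID value and the extra Store columns here are hypothetical placeholders):
-- Hypothetical test: 9999 is assumed not to exist in the linked server's Hotel table
INSERT INTO dbo.Store (HotelID /* , other columns */) VALUES (9999 /* , ... */)
-- Expected: the PRINT message appears and the INSERT is rolled back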

Related

'INSERT' stored procedure with validation

I'm trying to create a stored procedure where I'm inserting a new office into the OFFICE table I have in my database.
I want to first check whether the office I'm trying to create already exists or not.
Here is the code I have so far, but I'm not able to quite get it right. I would greatly appreciate some input.
CREATE PROCEDURE stored_proc_new_office
AS
BEGIN
    DECLARE @office_id int
    SELECT @office_id = (SELECT office_id FROM inserted)
    IF NOT EXISTS (SELECT 1 FROM OFFICE WHERE office_id = @office_id)
    BEGIN
        ROLLBACK TRANSACTION
        PRINT 'Office already exists.'
    END
END
Here is a bare bones example of how you can use a stored procedure to insert a new record with a check to ensure it doesn't already exist.
create procedure dbo.AddNewOffice
(
    @Name nvarchar(128)
    -- ... add parameters for other office details
    , @NewId int out
)
as
begin
    set nocount on;

    insert into dbo.Office([Name]) -- ... add additional columns
    select @Name -- ... add additional parameters to match the columns above
    where not exists (select 1 from dbo.Office where [Name] = @Name); -- ... add any additional conditions for testing for uniqueness

    -- If nothing inserted return an error code for the calling app to use to display something meaningful to the user
    if @@rowcount = 0 return 99;

    -- return the new id to the calling app.
    set @NewId = scope_identity();

    return 0;
end
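A possible call pattern for the procedure above (the office name is just an illustrative value):
DECLARE @id int, @rc int;
EXEC @rc = dbo.AddNewOffice @Name = N'Main Street Office', @NewId = @id OUTPUT;
IF @rc = 99
    PRINT 'Office already exists.';
ELSE
    PRINT 'New office id: ' + CAST(@id AS varchar(10));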

Insert large data row by row into multiple reference tables

I went through a lot of posts on SO. However, they do not fit my situation.
We have a situation where we want to store a large dataset on SQL Server 2017 into multiple reference tables.
We have tried a cursor and it works, but we are concerned about the performance of loading large data (1+ million rows).
Example:
T_Bulk is an input table, T_Bulk_orignal is the destination table, and T_Bulk_reference is a reference table for T_Bulk_orignal.
create table T_Bulk
(
Id uniqueidentifier,
ElementType nvarchar(max),
[Description] nvarchar(max)
)
create table T_Bulk_orignal
(
Id uniqueidentifier,
ElementType nvarchar(max),
[Description] nvarchar(max)
)
create table T_Bulk_reference
(
Id uniqueidentifier,
Description2 nvarchar(max)
)
create proc UseCursor
(
    @udtT_Bulk as dbo.udt_T_Bulk READONLY
)
as
begin
    DECLARE @Id uniqueidentifier, @ElementType varchar(500), @Description varchar(500), @Description2 varchar(500)

    DECLARE MY_CURSOR CURSOR
    LOCAL STATIC READ_ONLY FORWARD_ONLY
    FOR
    SELECT Id, ElementType, [Description]
    FROM dbo.T_BULK

    OPEN MY_CURSOR
    FETCH NEXT FROM MY_CURSOR INTO @Id, @ElementType, @Description, @Description2
    WHILE @@FETCH_STATUS = 0
    BEGIN
        BEGIN TRANSACTION Trans1
        BEGIN TRY
            IF EXISTS (select Id from T_Bulk_orignal where ElementType = @ElementType and Description = @Description)
                select @Id = Id from T_Bulk_orignal where ElementType = @ElementType and Description = @Description
            ELSE
            BEGIN
                insert T_Bulk_orignal(Id, ElementType, Description) values (@Id, @ElementType, @Description)
            END

            INSERT T_Bulk_reference(Id, Description2)
            SELECT Id, Description2
            FROM (select @Id as Id, @Description2 as Description2) F
            WHERE NOT EXISTS (SELECT * FROM T_Bulk_reference C WHERE C.Id = F.Id and C.Description2 = F.Description2);

            COMMIT TRANSACTION Trans1
            FETCH NEXT FROM MY_CURSOR INTO @Id, @ElementType, @Description, @Description2
        END TRY
        BEGIN CATCH
            ROLLBACK TRANSACTION Trans1
            SELECT @@ERROR
        END CATCH
    END
    CLOSE MY_CURSOR
    DEALLOCATE MY_CURSOR
end
We want this operation to execute in one go, like a bulk insert, but we also need to cross-check the data for discrepancies, and if one row cannot be inserted we need to roll back only that specific record.
The only catch with a plain bulk insert is that reference table data is also involved.
Please suggest the best approach for this.
This sounds like a job for SSIS (SQL Server Integration Services).
https://learn.microsoft.com/en-us/sql/integration-services/ssis-how-to-create-an-etl-package
In SSIS you can create a data migration job that can do reference checks. You can set it up to fail, warn, or ignore errors at each stage. To find resources on this, google for ETL and SSIS.
I have done jobs like yours on 50+ million rows.
Sure it takes a while, and it rolls back everything (if set up like that) on an error, but it is the best tool for this kind of job.
I found a solution to load the large file in one go, like a bulk insert.
SQL Server has a MERGE statement.
The MERGE statement is used to make changes in one table based on
values matched from another. It can be used to combine insert,
update, and delete operations into one statement.
So we can pass the data as a DataTable to a stored procedure; the source will then be your user-defined table type parameter and the target will be your actual SQL table.
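A minimal sketch of what that could look like, reusing the table and type names from the question (the exact matching columns, and the CREATE OR ALTER syntax available on SQL Server 2016 SP1 and later, are assumptions):
CREATE OR ALTER PROCEDURE dbo.UseMerge
(
    @udtT_Bulk dbo.udt_T_Bulk READONLY
)
AS
BEGIN
    SET NOCOUNT ON;

    -- Source is the table-valued parameter, target is the real table.
    MERGE dbo.T_Bulk_orignal AS tgt
    USING @udtT_Bulk AS src
        ON  tgt.ElementType = src.ElementType
        AND tgt.[Description] = src.[Description]
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (Id, ElementType, [Description])
        VALUES (src.Id, src.ElementType, src.[Description]);

    -- The reference table can be handled the same way with a second MERGE
    -- (or an INSERT ... WHERE NOT EXISTS) once the parent rows are in place.
END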

How to use nested If statements in SQL trigger

I'm trying to learn SQL triggers to automatically handle events in my database but I'm having some problems with execution.
If I run the following code:
declare @userid numeric(18,0);
declare @username nvarchar(max);
set @userid = 400
execute GetUserNameFromID @userid, @username output
select @username
which calls the following stored procedure:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE GetUserNameFromID
    -- Add the parameters for the stored procedure here
    @UserID numeric(18,0),
    @UserName nvarchar(MAX) OUT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT @UserName = u.name from Users u where ID = @UserID
END
GO
I get a nice result 'sometestuser'
But when calling it from my trigger it fails to return a value from the stored procedure:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER Trigger [dbo].[CheckIfUserHasNoItemsLeft] on [dbo].[Items] for update
As
Begin
    set nocount on
    declare @inactive_user nvarchar(50);
    declare @userid numeric(18,0);
    declare @username nvarchar(MAX);

    if ( select Count(*) from inserted ) > 1 RaIsError( 'CheckIfUserHasNoItemsLeft: No more than one row may be processed.', 25, 42 ) with log

    if update(InactiveUser)
        set @inactive_user = (select InactiveUser from inserted)

    if @inactive_user is not null
        set @userid = (select CID from inserted)

    execute GetUserNameFromID @userid, @username output

    if @username is not null
        insert into tasks (Task) values ('The last item for ' + @username + ' has been marked inactive, check if this user should now be also marked inactive.')
End
InactiveUser is the name of the app user who has marked this item inactive, it is what I am using as a check to see if the item has been set inactive rather than create an additional boolean column just for this purpose.
I'm sure it's something simple, but information on If...Then statements for SQL seems to be limited. A lot of answers suggest using CASE, but the query editor gives me errors about incorrect syntax no matter which way I try to write it that way.
As I'm learning, I'm more than happy for someone to show me a completely new way of handling this if what I've done is wrong or bad design. I'm hoping to create a set of triggers that will add items to the tasks table for administrators to check when user accounts appear to be stale, among other maintenance checks.
I am using SQL Server 2005.
Thank you.
Edit: Changed 'value <> null' to 'value is not null'
Edit2: Added HABO's suggestion to throw an error if more than one row is detected.
How about we take a whole new approach to this? Processes like this are exactly why inline table-valued functions were created.
Let's start by converting your stored procedure to an inline table valued function.
CREATE FUNCTION GetUserNameFromID
(
    @UserID numeric(18,0)
) RETURNS TABLE
AS RETURN
SELECT u.name
from Users u
where ID = @UserID
GO
That is a LOT simpler and cleaner than that stored procedure with an output variable.
Here is where it really starts to make a difference. Here is what you could do with that trigger using the newly created iTVF.
ALTER Trigger [dbo].[CheckIfUserHasNoItemsLeft] on [dbo].[Items] for update
As Begin
    set nocount on
    if update(InactiveUser)
        insert into tasks (Task)
        select 'The last item for ' + u.name + ' has been marked inactive, check if this user should now be also marked inactive.'
        from inserted i
        cross apply dbo.GetUserNameFromID(i.CID) u
end
This is super simple AND it is fully set based so if you update 1 or 1,000 rows it will work correctly.
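For a quick sanity check, the original test script boils down to a single query against the function (400 is the same sample ID used in the question):
SELECT name FROM dbo.GetUserNameFromID(400);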

Stored procedure running continuously in background

I am using one class file for updating my tables. In it I am either inserting or updating tables, and after each update or insert I am calling one stored procedure to save the last updated ID of the table. But once this stored procedure runs, it never releases the resource; it keeps executing in the background. Why is this happening and how can I stop it?
Here is the stored procedure:
Create procedure [dbo].[Updlastusedkey]
(
    @tablename varchar(50)
)
as
Begin
    DECLARE @sql varchar(300)
    SET @sql = 'UPDATE primarykeyTab SET lastKeyUsed = ISNULL((SELECT Max(ID) from ' + @tablename + '), 1) WHERE Tablename = ''' + @tablename + ''''
    print @sql
    EXEC(@sql)
END
Do you have Auto-Commit turned on? I think implicit_transactions = OFF means Auto Commit = ON in SQL Server. If not, your UPDATE operation may not be executing a COMMIT for the transaction it opened, leaving a write lock on the table. Alternatively, just explicitly COMMIT your update.
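A minimal sketch of the explicit-commit idea, assuming the session calling the procedure has implicit transactions switched on ('SomeTable' is just a placeholder name):
SET IMPLICIT_TRANSACTIONS OFF;    -- restore auto-commit for this session
-- or, if implicit transactions must stay on, commit explicitly after the call:
EXEC dbo.Updlastusedkey @tablename = 'SomeTable';
IF @@TRANCOUNT > 0
    COMMIT TRANSACTION;           -- releases the write lock on primarykeyTab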
Why don't you just create a view?
CREATE VIEW dbo.vPrimaryKeyTab
AS
SELECT tablename = 'table1', lastKeyUsed = MAX(id_column) FROM table1
UNION
SELECT tablename = 'table2', lastKeyUsed = MAX(id_column) FROM table2
/* ... */
;
Now you don't need to update anything or run anything in the background, and the view is always going to be up to date (it won't be the fastest query in the world, but at least you only pay that cost when you need that information, rather than constantly keeping it up to date).
Try this - add NOLOCK hints to the dynamic SQL built in the procedure:
SET @sql = 'UPDATE primarykeyTab SET lastKeyUsed = ISNULL((SELECT Max(ID) from ' + @tablename + ' WITH (NOLOCK)), 1) WHERE Tablename = ''' + @tablename + ''''

Is there a way to persist a variable across a go?

Is there a way to persist a variable across a go?
Declare @bob as varchar(50);
Set @bob = 'SweetDB';
GO
USE @bob --- see note below
GO
INSERT INTO @bob.[dbo].[ProjectVersion] ([DB_Name], [Script]) VALUES (@bob, '1.2')
See this SO question for the 'USE @bob' line.
Use a temporary table:
CREATE TABLE #variables
(
VarName VARCHAR(20) PRIMARY KEY,
Value VARCHAR(255)
)
GO
Insert into #variables Select 'Bob', 'SweetDB'
GO
Select Value From #variables Where VarName = 'Bob'
GO
DROP TABLE #variables
go
The go command is used to split code into separate batches. If that is exactly what you want to do, then you should use it, but it means that the batches are actually separate, and you can't share variables between them.
In your case the solution is simple; you can just remove the go statements, they are not needed in that code.
Side note: You can't use a variable in a use statement, it has to be the name of a database.
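If the database name really does have to come from a variable, one workaround (just a sketch, not part of the original answer) is to build the fully qualified statement dynamically:
DECLARE @bob sysname = 'SweetDB';
DECLARE @sql nvarchar(max) =
    N'INSERT INTO ' + QUOTENAME(@bob) + N'.[dbo].[ProjectVersion] ([DB_Name], [Script]) VALUES (@db, ''1.2'');';
EXEC sp_executesql @sql, N'@db sysname', @db = @bob;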
I prefer this answer from this question:
Global Variables with GO
Which has the added benefit of being able to do what you originally wanted to do as well.
The caveat is that you need to turn on SQLCMD mode (under Query->SQLCMD) or turn it on by default for all query windows (Tools->Options then Query Results->By Default, open new queries in SQLCMD mode)
Then you can use the following type of code (completely ripped off from that same answer by Oscar E. Fraxedas Tormo)
--Declare the variable
:setvar MYDATABASE master
--Use the variable
USE $(MYDATABASE);
SELECT * FROM [dbo].[refresh_indexes]
GO
--Use again after a GO
SELECT * from $(MYDATABASE).[dbo].[refresh_indexes];
GO
If you are using SQL Server you can set up global variables for entire scripts like:
:setvar sourceDB "lalalallalal"
and use later in script as:
$(sourceDB)
Make sure SQLCMD mode is on in SQL Server Management Studio; you can do that via the top menu: click Query and toggle SQLCMD Mode on.
More on topic can be found here:
MS Documentation
Temp tables are retained over GO statements, so...
SELECT 'value1' as variable1, 'mydatabasename' as DbName INTO #TMP
-- get a variable from the temp table
DECLARE @dbName sysname = (select top 1 #TMP.DbName from #TMP)
EXEC ('USE ' + @dbName)
GO
-- get another variable from the temp table
DECLARE @value1 VARCHAR(10) = (select top 1 #TMP.variable1 from #TMP)
DROP TABLE #TMP
It's not pretty, but it works
Create your own stored procedures which save/load to a temporary table.
MyVariableSave -- Saves variable to temporary table.
MyVariableLoad -- Loads variable from temporary table.
Then you can use this:
print('Test stored procedures for load/save of variables across GO statements:')
declare @MyVariable int = 42
exec dbo.MyVariableSave @Name = 'test', @Value = @MyVariable
print(' - Set @MyVariable = ' + CAST(@MyVariable AS VARCHAR(100)))
print(' - GO statement resets all variables')
GO -- This resets all variables including @MyVariable
declare @MyVariable int
exec dbo.MyVariableLoad 'test', @MyVariable output
print(' - Get @MyVariable = ' + CAST(@MyVariable AS VARCHAR(100)))
Output:
Test stored procedures for load/save of variables across GO statements:
 - Set @MyVariable = 42
 - GO statement resets all variables
 - Get @MyVariable = 42
You can also use these:
exec dbo.MyVariableList -- Lists all variables in the temporary table.
exec dbo.MyVariableDeleteAll -- Deletes all variables in the temporary table.
Output of exec dbo.MyVariableList:
Name Value
test 42
It turns out that being able to list all of the variables in a table is actually quite useful. So even if you do not load a variable later, its great for debugging purposes to see everything in one place.
This uses a temporary table with a ## prefix, so it's just enough to survive a GO statement. It is intended to be used within a single script.
And the stored procedures:
-- Stored procedure to save a variable to a temp table.
CREATE OR ALTER PROCEDURE MyVariableSave
    @Name varchar(255),
    @Value varchar(MAX)
WITH EXECUTE AS CALLER
AS
BEGIN
    SET NOCOUNT ON
    IF NOT EXISTS (select TOP 1 * from tempdb.sys.objects where name = '##VariableLoadSave')
    BEGIN
        DROP TABLE IF EXISTS ##VariableLoadSave
        CREATE TABLE ##VariableLoadSave
        (
            Name varchar(255),
            Value varchar(MAX)
        )
    END
    UPDATE ##VariableLoadSave SET Value = @Value WHERE Name = @Name
    IF @@ROWCOUNT = 0
        INSERT INTO ##VariableLoadSave SELECT @Name, @Value
END
GO
-- Stored procedure to load a variable from a temp table.
CREATE OR ALTER PROCEDURE MyVariableLoad
    @Name varchar(255),
    @Value varchar(MAX) OUT
WITH EXECUTE AS CALLER
AS
BEGIN
    IF EXISTS (select TOP 1 * from tempdb.sys.objects where name = '##VariableLoadSave')
    BEGIN
        IF NOT EXISTS (SELECT TOP 1 * FROM ##VariableLoadSave WHERE Name = @Name)
        BEGIN
            declare @ErrorMessage1 as varchar(200) = 'Error: cannot find saved variable to load: ' + @Name
            raiserror(@ErrorMessage1, 20, -1) with log
        END
        SELECT @Value = CAST(Value AS varchar(MAX)) FROM ##VariableLoadSave
        WHERE Name = @Name
    END
    ELSE
    BEGIN
        declare @ErrorMessage2 as varchar(200) = 'Error: cannot find saved variable to load: ' + @Name
        raiserror(@ErrorMessage2, 20, -1) with log
    END
END
GO
-- Stored procedure to list all saved variables.
CREATE OR ALTER PROCEDURE MyVariableList
WITH EXECUTE AS CALLER
AS
BEGIN
    IF EXISTS (select TOP 1 * from tempdb.sys.objects where name = '##VariableLoadSave')
    BEGIN
        SELECT * FROM ##VariableLoadSave
        ORDER BY Name
    END
END
GO
-- Stored procedure to delete all saved variables.
CREATE OR ALTER PROCEDURE MyVariableDeleteAll
WITH EXECUTE AS CALLER
AS
BEGIN
    DROP TABLE IF EXISTS ##VariableLoadSave
    CREATE TABLE ##VariableLoadSave
    (
        Name varchar(255),
        Value varchar(MAX)
    )
END
If you just need a binary yes/no (like if a column exists) then you can use SET NOEXEC ON to disable execution of statements. SET NOEXEC ON works across GO (across batches). But remember to turn EXEC back on with SET NOEXEC OFF at the end of the script.
IF COL_LENGTH('StuffTable', 'EnableGA') IS NOT NULL
SET NOEXEC ON -- script will not do anything when column already exists
ALTER TABLE dbo.StuffTable ADD EnableGA BIT NOT NULL CONSTRAINT DF_StuffTable_EnableGA DEFAULT(0)
ALTER TABLE dbo.StuffTable SET (LOCK_ESCALATION = TABLE)
GO
UPDATE dbo.StuffTable SET EnableGA = 1 WHERE StuffUrl IS NOT NULL
GO
SET NOEXEC OFF
This compiles statements but does not execute them, so you'll still get compile errors if you reference schema that doesn't exist. It works to "turn off" the script on a second run (what I'm doing), but it does not work to turn off parts of the script on the first run, because you'll still get compile errors if you reference columns or tables that don't exist yet.
You can make use of NOEXEC; follow the steps below:
Create a temp table to hold the procedure versions:
create table #temp_procedure_version(procedure_version varchar(5), pointer varchar(20))
Insert the procedure versions and a pointer to each version into the temp table #temp_procedure_version:
--example: procedure_version, pointer
insert into #temp_procedure_version values('1.0', 'first version')
insert into #temp_procedure_version values('2.0', 'final version')
Then retrieve the procedure version; you can use a where condition as in the following statement:
declare @ProcedureVersion varchar(5)
Select @ProcedureVersion = procedure_version from #temp_procedure_version where
pointer = 'first version'
IF (@ProcedureVersion = '1.0')
BEGIN
    SET NOEXEC OFF --code execution on
END
ELSE
BEGIN
    SET NOEXEC ON --code execution off
END
--insert procedure version 1.0 here
Create procedure version 1.0 as.....
SET NOEXEC OFF -- execution is ON
Select @ProcedureVersion = procedure_version from #temp_procedure_version where
pointer = 'final version'
IF (@ProcedureVersion = '2.0')
BEGIN
    SET NOEXEC OFF --code execution on
END
ELSE
BEGIN
    SET NOEXEC ON --code execution off
END
--insert procedure version 2.0 here
Create procedure version 2.0 as.....
SET NOEXEC OFF -- execution is ON
--drop the temp table
Drop table #temp_procedure_version