Validate new Create Trigger for SQL Server 2000

I have been told to create a trigger for inserts on our SQL Server 2000.
I've never written a trigger before, and our old server does not appear to have any triggers defined on it.
Following the Triggers in SQL Server tutorial, I have created this trigger that I have not executed yet:
create trigger trgAfterMachine1Insert on Test_Results
after insert
as
declare @sn varchar(20), @sysID varchar(50),
@opID varchar(50), @testResult varchar(255)
select @sn=Serial_Number from inserted
select @sysID=System_ID from inserted
select @opID=Op_ID from inserted
select @testResult=Test_Result from inserted
exec sp1_AddSnRecord @sn, @sysID, @opID, @testResult
print 'Machine1 After Insert Trigger called AddSnRecord'
go
First, notice that I have written a stored procedure called sp1_AddSnRecord to insert this data into a new table (so I do not mess up the existing table). I certainly hope a stored procedure can be called from a trigger, because it performs data validation and enumeration on the data before inserting anything into the other tables.
I really don't see a way in SQL Server 2000 to test to see if this will work, and I'm a bit nervous about just hitting that Execute button in Management Studio.
So, I've been looking at this for a while and trying to read up on some other SO techniques.
From Aaron Bertrand's example HERE, it looks like I can combine all of my select calls into one line:
create trigger trgAfterMachine1Insert on Test_Results
after insert
as
declare @sn varchar(20), @sysID varchar(50),
@opID varchar(50), @testResult varchar(255)
select @sn=Serial_Number, @sysID=System_ID,
@opID=Op_ID, @testResult=Test_Result
from inserted
exec sp1_AddSnRecord @sn, @sysID, @opID, @testResult
print 'Machine1 After Insert Trigger called AddSnRecord'
go
Otherwise, I haven't found anything more enlightening, or anyone asking about techniques for testing triggers before creating them.
One of my colleagues here at work does more SQL work than I do, but he admits that he has never written triggers. All he was able to tell me was, "Man, if you screw that up, you could cause a lot of problems on the server!" All that did was make me nervous, which is why I am here. (98% of what I do is write C# code for Windows Forms and old Windows Mobile devices.)
So, how would I verify that this trigger is valid and will not cause any issues on the Server before creating? I've got a local SQL Server Express on my machine, but it is much newer than SQL 2000 and does not have the live data running on it from our Production floor.
If the trigger proves to be faulty afterwards, would I be able to remove it with a simple delete trigger trgAfterMachine1Insert? My search for "delete trigger" seems to have returned mostly triggers for AFTER DELETE.
Thanks in advance.
UPDATE: Including the stored procedure at Martin's request:
ALTER PROCEDURE [dbo].[sp1_AddSnRecord](
@serial_Number varchar(20),
@system_ID varchar(50),
@op_ID varchar(50),
@test_Result varchar(255)) as begin
set NOCOUNT ON;
declare @sn as VarChar(20);
set @sn=dbo.fn_ValidSN(@serial_Number);
if (7<Len(@sn)) begin
declare @badge varchar(50), @result varchar(50), @sysID varchar(50);
set @badge=dbo.fn_GetBadge(@op_ID);
set @result=dbo.fn_GetTestResult(@test_Result);
set @sysID=dbo.fn_GetSysType(@system_ID);
if ((0<Len(@badge)) and (0<Len(@result)) and (0<Len(@sysID))) begin
declare @id int;
select @id=ID from Serial_Numbers where Serial_Number=@sn;
if (@id<1) begin -- this serial number has not been entered
insert into Serial_Numbers (Serial_Number) values (@sn);
select @id=@@IDENTITY from Serial_Numbers;
end
if (0<@id) begin -- now insert into SN_Records
insert into SN_Records (SN_ID, SYS_ID, OP_ID, Date_Time, Test_Result)
values (@id, @sysID, @badge, GetDate(), @result);
end
end
end
end

So, let me re-phrase what you are saying:
you have no experience writing triggers
there is no one else in the company with experience to write triggers
you only have a production environment and no other place to test your code
management is telling you to get this done by tonight
This is a sure recipe for disaster.
First you need to stand up against requests where your only option is to fail. Tell management that their data is too important to do something like this without proper testing.
Then get an appropriate testing environment. If your company is an MSDN subscriber, you will have access to a copy of SQL Server 2000 Developer Edition that you can install on your laptop or, better, in a virtual machine.
While you are waiting for that install, read about professional behavior in software development. Start with http://en.wikipedia.org/wiki/Robert_Cecil_Martin and then move on to software craftsmanship.
But, as I know that won't happen tonight, you can do this in the meantime:
1) Create a new database on the production server
2) Copy the table in question: SELECT TOP 10 * INTO NewDb.dbo.Table FROM OldDb.dbo.Table;
You don't need more data as this is an insert trigger
3) Copy the other tables you need in the same way
4) apply your trigger to the table in NewDb
5) test
6) fix and go back to 5
7) if you are satisfied, copy the trigger to OldDb
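Steps 1 through 4 can be sketched roughly like this (database and table names are assumptions based on the question):

```sql
-- Hypothetical sketch: build a scratch copy of the data and
-- attach the trigger there first.
CREATE DATABASE NewDb;
GO
SELECT TOP 10 * INTO NewDb.dbo.Test_Results
FROM OldDb.dbo.Test_Results;
GO
USE NewDb;
GO
-- create the trigger here, then run some test INSERTs against
-- NewDb.dbo.Test_Results and check the results

-- If the trigger later proves faulty, it is removed with DROP TRIGGER
-- (there is no DELETE TRIGGER statement):
-- DROP TRIGGER trgAfterMachine1Insert;
```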
Some things to consider:
Make sure you test inserts of more than one row
Don't call a procedure in the trigger. Not that that is wrong in itself, but you won't be able to get multi-row inserts working with it
do not ever use @@IDENTITY. That's an order. (reasons and solutions are here: http://sqlity.net/en/351/identity-crisis/ )
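To illustrate the multi-row point, here is a rough set-based version of the trigger that works for any number of inserted rows. It reuses the table and column names from the question but deliberately skips the validation functions and @@IDENTITY, so treat it as a sketch rather than a drop-in replacement:

```sql
CREATE TRIGGER trgAfterMachine1Insert ON Test_Results
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Add serial numbers that are not yet known. Reading from the
    -- whole "inserted" pseudo-table handles multi-row inserts.
    INSERT INTO Serial_Numbers (Serial_Number)
    SELECT i.Serial_Number
    FROM inserted AS i
    WHERE NOT EXISTS (SELECT * FROM Serial_Numbers AS s
                      WHERE s.Serial_Number = i.Serial_Number);

    -- Record every inserted row, joining back for the ID instead
    -- of relying on @@IDENTITY.
    INSERT INTO SN_Records (SN_ID, SYS_ID, OP_ID, Date_Time, Test_Result)
    SELECT s.ID, i.System_ID, i.Op_ID, GETDATE(), i.Test_Result
    FROM inserted AS i
    JOIN Serial_Numbers AS s ON s.Serial_Number = i.Serial_Number;
END
GO
```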
After all that start looking into TDD in the database here: tSQLt.org
(Most ideas work in SQL 2000, however the framework does not.)
Hope that helps.

Related

Stored procedure with multiple 'INSERT INTO Table_Variable EXECUTE stored_procedure' statements [duplicate]

I have three stored procedures Sp1, Sp2 and Sp3.
The first one (Sp1) will execute the second one (Sp2) and save returned data into #tempTB1 and the second one will execute the third one (Sp3) and save data into #tempTB2.
If I execute Sp2 it will work and it will return me all my data from Sp3, but the problem is in Sp1: when I execute it, it displays this error:
INSERT EXEC statement cannot be nested
I tried to change the place of the execute of Sp2 and it displays another error:
Cannot use the ROLLBACK statement
within an INSERT-EXEC statement.
This is a common issue when attempting to 'bubble up' data from a chain of stored procedures. A restriction in SQL Server is that you can only have one INSERT-EXEC active at a time. I recommend looking at How to Share Data Between Stored Procedures, which is a very thorough article on patterns to work around this type of problem.
For example a work around could be to turn Sp3 into a Table-valued function.
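As a rough sketch of that workaround (the function name is made up, and the body assumes Sp3 just returns a static two-row result set, as in the example further down this thread):

```sql
-- Hypothetical: Sp3 rewritten as an inline table-valued function.
CREATE FUNCTION dbo.fn_Sp3Data ()
RETURNS TABLE
AS
RETURN
    SELECT 1 AS ID, 'Data1' AS Data
    UNION ALL
    SELECT 2, 'Data2';
GO

-- Sp2 can now capture the rows without INSERT-EXEC:
-- INSERT INTO #tempTB2 SELECT ID, Data FROM dbo.fn_Sp3Data();
```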
This is the only "simple" way to do this in SQL Server without a giant, convoluted created function or an executed SQL-string call, both of which are terrible solutions:
create a temp table
openrowset your stored procedure data into it
EXAMPLE:
INSERT INTO #YOUR_TEMP_TABLE
SELECT * FROM OPENROWSET ('SQLOLEDB','Server=(local);TRUSTED_CONNECTION=YES;','set fmtonly off EXEC [ServerName].dbo.[StoredProcedureName] 1,2,3')
Note: You MUST use 'set fmtonly off', and you CANNOT use dynamic SQL inside the OPENROWSET call, either for the string containing your stored procedure parameters or for the table name. That's why you have to use a temp table rather than a table variable, which would have been better, as table variables outperform temp tables in most cases.
OK, encouraged by jimhark, here is an example of the old single-hash-table approach:
CREATE PROCEDURE SP3 as
BEGIN
SELECT 1, 'Data1'
UNION ALL
SELECT 2, 'Data2'
END
go
CREATE PROCEDURE SP2 as
BEGIN
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
INSERT INTO #tmp1
EXEC SP3
else
EXEC SP3
END
go
CREATE PROCEDURE SP1 as
BEGIN
EXEC SP2
END
GO
/*
--I want some data back from SP3
-- Just run the SP1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--Try run this - get an error - can't nest Execs
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
INSERT INTO #tmp1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--However, if we run this single hash temp table it is in scope anyway so
--no need for the exec insert
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
EXEC SP1
SELECT * FROM #tmp1
*/
My workaround for this problem has always been to use the principle that single-hash temp tables are in scope to any called procs. So I have an option switch in the proc parameters (default set to off). If this is switched on, the called proc will insert its results into the temp table created in the calling proc. I think in the past I have taken it a step further and put some code in the called proc to check if the single-hash table exists in scope; if it does, insert the results, otherwise return the result set. Seems to work well - the best way of passing large data sets between procs.
This trick works for me.
You don't have this problem on a remote server, because on a remote server the last insert command waits for the result of the previous command to execute. That is not the case on the same server.
Take advantage of that situation for a workaround.
If you have the right permission to create a Linked Server, do it.
Create the same server as linked server.
in SSMS, log into your server
go to "Server Objects"
Right Click on "Linked Servers", then "New Linked Server"
in the dialog, give your linked server any name, e.g.: THISSERVER
server type is "Other data source"
Provider : Microsoft OLE DB Provider for SQL server
Data source: your IP, it can be also just a dot (.), because it's localhost
Go to the tab "Security" and choose the 3rd one "Be made using the login's current security context"
You can edit the server options (3rd tab) if you want
Press OK, your linked server is created
now your Sql command in the SP1 is
insert into #myTempTable
exec THISSERVER.MY_DATABASE_NAME.MY_SCHEMA.SP2
Believe me, it works even if you have a dynamic insert in SP2.
I found a workaround is to convert one of the procs into a table-valued function. I realize that is not always possible, and it introduces its own limitations. However, I have always been able to find at least one of the procedures that is a good candidate for this. I like this solution because it doesn't introduce any "hacks" to the solution.
I encountered this issue when trying to import the results of a stored proc into a temp table while that stored proc itself inserted into a temp table as part of its own operation. The issue is that SQL Server does not allow the same process to write to two different temp tables at the same time.
The accepted OPENROWSET answer works fine, but I needed to avoid using any Dynamic SQL or an external OLE provider in my process, so I went a different route.
One easy workaround I found was to change the temporary table in my stored procedure to a table variable. It works exactly the same as it did with a temp table, but no longer conflicts with my other temp table insert.
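For example (a sketch only; the procedure name, columns, and data are stand-ins for whatever the real procedure does):

```sql
-- Hypothetical inner procedure: a table variable replaces the temp
-- table, so a caller's INSERT ... EXEC no longer collides with it.
CREATE PROCEDURE dbo.SP3_TableVariable
AS
BEGIN
    DECLARE @work TABLE (ID INT, Data VARCHAR(20));

    INSERT INTO @work (ID, Data)
    SELECT 1, 'Data1'
    UNION ALL
    SELECT 2, 'Data2';

    -- any intermediate processing happens against @work here

    SELECT ID, Data FROM @work;  -- result set returned to the caller
END
GO
```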
Just to head off the comment I know that a few of you are about to write, warning me off Table Variables as performance killers... All I can say to you is that in 2020 it pays dividends not to be afraid of Table Variables. If this was 2008 and my Database was hosted on a server with 16GB RAM and running off 5400RPM HDDs, I might agree with you. But it's 2020 and I have an SSD array as my primary storage and hundreds of gigs of RAM. I could load my entire company's database to a table variable and still have plenty of RAM to spare.
Table Variables are back on the menu!
I recommend reading this entire article. Below is the most relevant section of that article that addresses your question:
Rollback and Error Handling is Difficult
In my articles on Error and Transaction Handling in SQL Server, I suggest that you should always have an error handler like
BEGIN CATCH
IF @@trancount > 0 ROLLBACK TRANSACTION
EXEC error_handler_sp
RETURN 55555
END CATCH
The idea is that even if you do not start a transaction in the procedure, you should always include a ROLLBACK, because if you were not able to fulfil your contract, the transaction is not valid.
Unfortunately, this does not work well with INSERT-EXEC. If the called procedure executes a ROLLBACK statement, this happens:
Msg 3915, Level 16, State 0, Procedure SalesByStore, Line 9 Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
The execution of the stored procedure is aborted. If there is no CATCH handler anywhere, the entire batch is aborted, and the transaction is rolled back. If the INSERT-EXEC is inside TRY-CATCH, that CATCH handler will fire, but the transaction is doomed, that is, you must roll it back. The net effect is that the rollback is achieved as requested, but the original error message that triggered the rollback is lost. That may seem like a small thing, but it makes troubleshooting much more difficult, because when you see this error, all you know is that something went wrong, but you don't know what.
I had the same issue and concern over duplicate code in two or more sprocs. I ended up adding an additional parameter for "mode". This allowed common code to exist inside one sproc, and the mode directed the flow and result set of the sproc.
What about just storing the output to a static table? Like:
-- SubProcedure: subProcedureName
---------------------------------
-- Save the value
DELETE lastValue_subProcedureName
INSERT INTO lastValue_subProcedureName (Value)
SELECT @Value
-- Return the value
SELECT @Value
-- Procedure
--------------------------------------------
-- get last value of subProcedureName
SELECT Value FROM lastValue_subProcedureName
It's not ideal, but it's simple and you don't need to rewrite everything.
UPDATE:
The previous solution does not work well with parallel queries (async and multi-user access), so now I am using temp tables.
-- A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished.
-- The table can be referenced by any nested stored procedures executed by the stored procedure that created the table.
-- The table cannot be referenced by the process that called the stored procedure that created the table.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NULL
CREATE TABLE #lastValue_spGetData (Value INT)
-- trigger stored procedure with special silent parameter
EXEC dbo.spGetData 1 --silent mode parameter
The nested spGetData stored procedure content:
-- Save the output if temporary table exists.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NOT NULL
BEGIN
DELETE #lastValue_spGetData
INSERT INTO #lastValue_spGetData(Value)
SELECT Col1 FROM dbo.Table1
END
-- stored procedure return
IF @silentMode = 0
SELECT Col1 FROM dbo.Table1
Declare an output cursor variable to the inner sp:
@c CURSOR VARYING OUTPUT
Then declare a cursor c to the select you want to return.
Then open the cursor.
Then set the reference:
DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
SELECT ...
OPEN c
SET @c = c
DO NOT close or reallocate.
Now call the inner sp from the outer one supplying a cursor parameter like:
exec sp_abc a, b, c, @cOUT OUTPUT
Once the inner sp executes, your @cOUT is ready to fetch. Loop and then close and deallocate.
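Put together, the pattern looks roughly like this (procedure names and the query are hypothetical):

```sql
-- Inner procedure: opens a cursor and hands it back through an
-- output parameter instead of returning a result set.
CREATE PROCEDURE dbo.sp_inner
    @c CURSOR VARYING OUTPUT
AS
BEGIN
    DECLARE cur CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
        SELECT name FROM sys.objects;   -- hypothetical query
    OPEN cur;
    SET @c = cur;   -- do NOT close or deallocate here
END
GO

-- Outer procedure: fetches from the cursor the inner proc opened.
CREATE PROCEDURE dbo.sp_outer
AS
BEGIN
    DECLARE @cOUT CURSOR;
    DECLARE @name SYSNAME;

    EXEC dbo.sp_inner @c = @cOUT OUTPUT;

    FETCH NEXT FROM @cOUT INTO @name;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        PRINT @name;                    -- process each row
        FETCH NEXT FROM @cOUT INTO @name;
    END

    CLOSE @cOUT;
    DEALLOCATE @cOUT;
END
GO
```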
If you are able to use other associated technologies such as C#, I suggest using the built-in SqlCommand with its transaction parameter.
var sqlCommand = new SqlCommand(commandText, null, transaction);
I've created a simple Console App that demonstrates this ability which can be found here:
https://github.com/hecked12/SQL-Transaction-Using-C-Sharp
In short, C# lets you overcome this limitation: you can inspect the output of each stored procedure and use that output however you like, for example feeding it to another stored procedure. If the output is OK, you can commit the transaction; otherwise, you can revert the changes with a rollback.
On SQL Server 2008 R2, I had a mismatch in table columns that caused the Rollback error. It went away when I fixed my sqlcmd table variable populated by the insert-exec statement to match that returned by the stored proc. It was missing org_code. In a Windows cmd file, the following loads the result of the stored procedure and selects it.
set SQLTXT= declare @resets as table (org_id nvarchar(9), org_code char(4), ^
tin char(9), old_strt_dt char(10), strt_dt char(10)); ^
insert @resets exec rsp_reset; ^
select * from @resets;
sqlcmd -U user -P pass -d database -S server -Q "%SQLTXT%" -o "OrgReport.txt"

creating a trigger that updates row in a linked mysql server

I created a linked MySQL server on SQL Server 2008 R2. I'm trying to create a trigger on a SQL table that automatically updates a field in the linked server's table. I have a table called "QFORCHOICE" in SQL Server that has fields "Prodcode, prodname and avqty", and a table "que_for_choice" in MySQL that has fields "procode, proname and avqty".
I want the trigger to update the value of "procode" in the linked server if the value of "prodcode" in SQL Server changes. This is what I have so far, but it has errors:
create trigger [QFORCHOICE]
ON dbo.QFORCHOICE
FOR INSERT
AS
DECLARE @prodcode numeric(18,0)
DECLARE @prodname varchar(50)
DECLARE @avqty numeric(18,0)
BEGIN
SELECT
@procode = procode,
@proname = proname,
@avqty = avqty
FROM inserted
update [LINKED_MYSQL].[que_for_choice]
SET prodname=@prodname,avqty=@avqty
WHERE prodcode = @prodcode
end
can anybody please help.
thanks in advance
1- From within a trigger, you shouldn't attempt to access anything external to the current database. It will severely slow down any insert activity, and if there are any networking issues or the remote server is down for any reason, you'll cause the original transaction to roll back. This is rarely the right thing to do.
2- you're making the reliability of your system dependent on the reliability of two servers rather than one (say they both have 99% reliability - your system that ties them together with a trigger now has 98% overall reliability).

how to use multiple database from a stored procedure dynamically

In Sql Server 2005,
I have a stored procedure in which I have written some commands to create a table and add some records to it.
Create Procedure Procedure1 AS
Begin
create table TmpTable(CD Decimal(10,0), Descr Varchar(50));
Insert Into TmpTable Values(0,'Not Applicable');
Insert Into TmpTable Values(1,'ALL');
Insert Into TmpTable Values(2,'Selected');
Insert Into TmpTable Values(3,'Only New');
END
I want to create this table in all available databases, but I don't know how many databases will be available when I call this stored procedure.
You can try looping through the list of databases on your server using a cursor, and then, inside the cursor loop, running the above code against each database. SQL Server's master database, which every server instance has, contains system views and functions you can call to get system-wide information.
I've done something similar in the past.
Try having a look at this link, hopefully it will be useful to you:
http://sqlserverplanet.com/tsql/list-all-tables-in-a-database/
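A rough sketch of the cursor approach (the filter on database_id and the dynamic batch are assumptions; adjust them for your own list of target databases):

```sql
-- Hypothetical: run the CREATE TABLE / INSERT batch in every
-- user database by looping over sys.databases with dynamic SQL.
DECLARE @db SYSNAME, @sql NVARCHAR(MAX);

DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4;   -- skip master, tempdb, model, msdb

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ' + QUOTENAME(@db) + N';
        CREATE TABLE TmpTable (CD DECIMAL(10,0), Descr VARCHAR(50));
        INSERT INTO TmpTable VALUES (0, ''Not Applicable'');
        INSERT INTO TmpTable VALUES (1, ''ALL'');
        INSERT INTO TmpTable VALUES (2, ''Selected'');
        INSERT INTO TmpTable VALUES (3, ''Only New'');';
    EXEC (@sql);   -- the USE only lasts for this dynamic batch
    FETCH NEXT FROM db_cur INTO @db;
END
CLOSE db_cur;
DEALLOCATE db_cur;
```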

How to capture stored procedure text when passing from Dephi 2006

Hi all,
I have a large (100+) parameter list for my SP in my Delphi code. This is for MS SQL Server 2005. For debugging purposes, I want to capture the text of the stored procedure command so I can execute it on the SQL Server and debug the SP. Is there a way I can capture what is exactly passed to the database? I thought about using a trace, and I'll try that tomorrow if this fails, but it's cumbersome to set up and to sift through the output to catch the SP.
Thanks
You should use the SQL Server Profiler for this. Start a new trace with default settings. Let it run while your client executes the SP. Stop the trace. Use Ctrl-F and search for your SP name.
I normally don't care for playing with the programming environment.
Profiling would be a good option if you can identify the ClientProcessID (the PID showing in Task Manager of your client program) - that should narrow it down enough.
Another alternative I like is to simply capture it at the SQL Server end.
Sample proc
create proc takes3params
@a int, @b varchar(100), @c datetime
as
select @a, @b, @c
Becomes
alter proc takes3params
@a int, @b varchar(100), @c datetime
as
insert capture_takes3params(a,b,c) select @a, @b, @c -- << -- added
select @a, @b, @c
The support table is a mirror of the params, with 2 additional control columns
create table capture_takes3params(
id int identity primary key, captured datetime default(getdate()), -- control
a int, b varchar(100), c datetime
)
This doesn't work when the proc has defaults though.
EDIT
We use ADO to connect to MS SQL. Not sure what the alternative to 100+ params is; maybe pass table structures? Advice welcome! We are passing in HL7 messages, which typically have 100 or so fields.
Table valued parameters are only available from SQL Server 2008 onwards, from what I recall. That seems unwieldy as well from Delphi - I would instead look at a single XML parameter dissected in TSQL, which 2005+ has good support for.
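A sketch of what the XML-parameter approach could look like (the procedure name and XML shape are invented; SQL Server 2005's nodes()/value() methods do the shredding):

```sql
-- Hypothetical: one XML parameter instead of 100+ scalar parameters.
CREATE PROCEDURE dbo.sp_TakesHl7Xml
    @msg XML
AS
BEGIN
    -- Shred each <field> element into one row with two columns.
    SELECT
        f.x.value('(code)[1]',  'VARCHAR(50)')  AS code,
        f.x.value('(value)[1]', 'VARCHAR(255)') AS val
    FROM @msg.nodes('/hl7/field') AS f(x);
END
GO

-- Usage (hypothetical message):
-- EXEC dbo.sp_TakesHl7Xml
--     @msg = N'<hl7><field><code>PID</code><value>123</value></field></hl7>';
```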

Updating records from a XML

I need to provide 4 MySQL stored procedures for each table in a database. They are for get, update, insert and delete.
"Get", "delete" and "insert" are straightforward. The problem is "update", because I don't know which parameters will be set and which ones won't. Some parameters could be set to NULL, and others simply won't change, so they won't be provided.
As I'm already working with XML, after several searches in Google I've found that it is possible to use a function called UpdateXML, but the examples are too complex and some articles are from 2007. So I don't know if there is a better technique at this moment or something easier.
Any comment, documentation, link, article or whatever of something that you've used and you're happy with, will be well appreciated :D
Cheers.
Usually when you have data from a row in your database in the front-end, you should have all of the values that you might use to update that row in the database. You should pass all of those values into your update, regardless of whether or not they have actually changed. Otherwise, your database doesn't really know whether it's getting a NULL value for a column because that's what it's supposed to be or because you just didn't pass the real value along.
If you are going to have areas of the application where you don't need certain columns from a table, then it's possible to set up additional stored procedures that do not use those columns. It's often easier though to just retrieve all of the columns from the database when you fill your front-end object. The overhead of the extra columns is usually minimal and worth the saved maintenance of multiple update stored procedures.
Here's an example. It's MS SQL Server syntax, so you may have to alter it slightly, but hopefully it illustrates the idea:
CREATE PROCEDURE Update_My_Table
@my_table_id INT,
@name VARCHAR(40),
@description VARCHAR(500),
@some_other_col INT
AS
BEGIN
UPDATE
My_Table
SET
name = @name,
description = @description,
some_other_col = @some_other_col
WHERE
my_table_id = @my_table_id
END
CREATE PROCEDURE Update_My_Table_Limited
@my_table_id INT,
@name VARCHAR(40),
@description VARCHAR(500)
AS
BEGIN
UPDATE
My_Table
SET
name = @name,
description = @description
WHERE
my_table_id = @my_table_id
END
As you can see, just eliminate those columns that you're not updating from the UPDATE statement. Just don't go overboard and try to have a stored procedure for every possible combination of columns that you might want to update. It's much easier to just get the extra columns from the DB when you select from the table in the first place. You'll end up passing the same value back and your server will wind up updating the column with the same exact value, but that's not a big deal. You can code your front end to make sure that at least one column has changed before it will actually try to update anything in the database.