Trigger to update value with number of records - SQL

I'm trying to set up a trigger but struggling to get it to work the way I want. I want to update a field oppo_pono with the number of opportunity records created for a particular company record.
One company can have multiple opportunities, and I want to record the number of master opportunities created for a company, so the first master opp created for a company would be set to 1, and so on.
I've set the trigger up below, but it's setting oppo_pono to the count from all companies rather than the one I am creating the opportunity for.
My trigger is below:
USE [CRM]
GO
/****** Object: Trigger [dbo].[GeneratePNo] Script Date: 1/7/2021 3:55:27 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[GeneratePNo]
ON [dbo].[Opportunity]
FOR insert
AS
declare @OppPrimary Int
declare @company Int
declare @compid Int
declare @type nvarchar(40)
declare @childopp nchar(1)
declare @pono int
Select @OppPrimary = Oppo_OpportunityId,
@company = Oppo_PrimaryCompanyId,
@compid = comp_companyid,
@type = Oppo_Type,
@childopp = oppo_childoppo,
@pono = oppo_pono
FROM Inserted inner join company on Oppo_PrimaryCompanyId = @compid
Begin
UPDATE [Opportunity] SET oppo_pono = (select count(*) from vSearchListOpportunity where Oppo_Deleted is null and @type = 'Master' and Oppo_PrimaryCompanyId = ) +1
WHERE Oppo_OpportunityId = @OppPrimary
End
End

As mentioned in the comments, you have not taken into account that the inserted pseudo-table can hold multiple rows. You also have a number of outright syntax errors.
EDIT: Following your comments I think I understand what you are trying to do. My original solution will not work here because an indexed view cannot contain ranking functions, but I have modified it to work with what you need.
Ideally, you wouldn't care about the actual IDs lining up and would just use an IDENTITY column, but often an ID series per group is needed.
Generally a view with correct indexing will be the more performant option, but it depends on what you need.
Using a View
I will show you a solution that can be used for many different types of aggregations which normally require triggers. This only works for your problem if you intend the numbering to change when a row is deleted out of the middle of the grouping. If you want the numbering to remain fixed, use a trigger instead.
I am unsure of the exact relation of Opportunity, Company and vSearchListOpportunity (it seems to be a view on Opportunity), but you should be able to modify this to suit.
Create an indexed view on the data, and include a row number for each row:
CREATE VIEW vOpportunityNumbered
AS
SELECT
o.Oppo_OpportunityId,
o.Oppo_PrimaryCompanyId,
o.Oppo_Type,
o.oppo_childoppo,
ROW_NUMBER() OVER
(PARTITION BY o.Oppo_PrimaryCompanyId ORDER BY o.Oppo_OpportunityId)
AS oppo_pono
-- A view must name every column; order by the primary key to get deterministic ordering
FROM Opportunity AS o
WHERE o.Oppo_Deleted IS NULL;
GO
Now, to support this view, we cannot index it directly, as mentioned. We can, however, create an index on the base table that will support it:
CREATE UNIQUE NONCLUSTERED INDEX opp_CompanyOpportunity
ON Opportunity (Oppo_PrimaryCompanyId, Oppo_OpportunityId)
-- note the ordering of the columns; this is nonclustered because a clustered
-- index cannot have INCLUDE columns (and the PK likely already owns the clustered index)
INCLUDE (Oppo_Type, oppo_childoppo)
WITH (OPTIMIZE_FOR_SEQUENTIAL_KEY = ON) -- ONLY FOR SQL2019+
;
GO
This view will now give you a sequential row numbering of Opportunity for each distinct Company.
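With the view in place, reading the numbering for one company is a plain filtered query. A sketch, assuming the ROW_NUMBER() column has been aliased `oppo_pono` (the view as posted needs a column alias to compile) and a hypothetical company id:

```sql
-- Per-company numbering for a single company.
SELECT Oppo_OpportunityId, oppo_pono
FROM vOpportunityNumbered
WHERE Oppo_PrimaryCompanyId = 42   -- hypothetical company id
ORDER BY oppo_pono;
```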
Triggers
If you wish for the IDs to always remain the same no matter what happens to intervening rows, you will need a trigger (i.e. a deleted row will leave a gap in the numbers).
Every trigger has two tables, inserted and deleted, which contain the data that was changed. For update triggers, both tables have data, a row in each for each changed row.
This means that the trigger is executed once per statement, and these tables contain all the relevant rows. You cannot, however, update them directly; you must join the real tables to them.
So let's take a look at how to write a trigger. Again, I'm somewhat guessing as to the relations of the tables:
CREATE OR ALTER TRIGGER [dbo].[GeneratePNo]
ON [dbo].[Opportunity]
AFTER INSERT -- FOR is an alternative syntax, AFTER is more usual
AS
-- No need for BEGIN and END, the whole batch until GO is the trigger
SET NOCOUNT ON; -- Prevent DONE_IN_PROC rowcount messages
IF (NOT EXISTS (SELECT 1 FROM inserted))
RETURN; -- Bail-out early if no rows
-- We do not declare variables because we cannot store multiple rows in variables
UPDATE
o
SET oppo_pono =
ISNULL( -- If there are no other rows we would get a NULL
(SELECT MAX(allO.oppo_pono)
FROM Opportunity allO
-- no need for the following two filters as oppo_pono needs to be unique anyway
-- WHERE allO.Oppo_Deleted IS NULL AND allO.Oppo_Type = 'Master'
WHERE allO.Oppo_PrimaryCompanyId = i.Oppo_PrimaryCompanyId
), 0) + i.rn -- rn numbers multi-row inserts sequentially within each company
FROM (SELECT *,
ROW_NUMBER() OVER (PARTITION BY Oppo_PrimaryCompanyId
ORDER BY Oppo_OpportunityId) AS rn
FROM inserted) i
JOIN Opportunity o ON o.Oppo_OpportunityId = i.Oppo_OpportunityId;
-- We always join the inserted table on the primary key
-- We join inserted table on primary key always
GO
There are more efficient ways to write that update, but it depends on whether you are inserting a lot of rows. An INSTEAD OF trigger would also be more performant here, but I haven't attempted that as I don't have your table definition.
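For completeness, a sketch of what that INSTEAD OF variant could look like: because an INSTEAD OF trigger performs the insert itself, the number can be computed in the same statement. The column list is abbreviated (every required column of Opportunity would need to appear), and the trigger name is illustrative:

```sql
CREATE OR ALTER TRIGGER dbo.GeneratePNo_Instead
ON dbo.Opportunity
INSTEAD OF INSERT
AS
SET NOCOUNT ON;
-- Re-issue the insert ourselves, computing oppo_pono on the way in.
INSERT dbo.Opportunity (Oppo_OpportunityId, Oppo_PrimaryCompanyId, Oppo_Type, oppo_pono)
SELECT
    i.Oppo_OpportunityId,
    i.Oppo_PrimaryCompanyId,
    i.Oppo_Type,
    ISNULL(x.MaxPono, 0)
      + ROW_NUMBER() OVER (PARTITION BY i.Oppo_PrimaryCompanyId
                           ORDER BY i.Oppo_OpportunityId)  -- handles multi-row inserts
FROM inserted i
OUTER APPLY (SELECT MAX(o.oppo_pono) AS MaxPono
             FROM dbo.Opportunity o
             WHERE o.Oppo_PrimaryCompanyId = i.Oppo_PrimaryCompanyId) x;
GO
```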

Related

SQL Server : make update trigger don't activate with no changing value

I want to track the update changes in a table via a trigger:
CREATE TABLE dbo.TrackTable(...columns same as target table)
GO
CREATE TRIGGER dboTrackTable
ON dbo.TargetTable
AFTER UPDATE
AS
INSERT INTO dbo.TrackTable (...columns)
SELECT (...columns)
FROM Inserted
However, in real production some of the update queries select rows with broad conditions and update them all regardless of whether they actually changed, like
UPDATE Targettable
SET customer_type = 'VIP'
WHERE 1 = 1
--or is_obsolete = 0 or register_date < '20160101' something
But due to the table size, and for analysis, I only want to capture the rows whose data actually changed. How can I achieve this?
My track table has many columns (so I'd rather not check the inserted and deleted columns one by one), but it seldom changes structure.
I guess the following code will be useful.
CREATE TABLE dbo.TrackTable(...columns same as target table)
GO
CREATE TRIGGER dboTrackTable
ON dbo.TargetTable
AFTER UPDATE
AS
INSERT INTO dbo.TrackTable (...columns)
SELECT *
FROM Inserted
EXCEPT
SELECT *
FROM Deleted
I realize this post is a couple months old now, but for anyone looking for a well-rounded answer:
To exit the trigger if no rows were affected on SQL Server 2016 and up, Microsoft recommends using the built-in ROWCOUNT_BIG() function in the Optimizing DML Triggers section of the Create Trigger documentation.
Usage:
IF ROWCOUNT_BIG() = 0
RETURN;
To ensure you are excluding rows that were not changed, you'll need to do a compare of the inserted and deleted tables inside the trigger. Taking your example code:
INSERT INTO dbo.TrackTable (...columns)
SELECT (...columns)
FROM Inserted i
INNER JOIN deleted d
ON d.[SomePrimaryKeyCol]=i.[SomePrimaryKeyCol] AND
i.customer_type<>d.customer_type
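One caveat with the `<>` comparison: it evaluates to UNKNOWN when either side is NULL, so a change from NULL to a value (or back) is missed. A NULL-safe variant of the same join, using the EXISTS/EXCEPT idiom (EXCEPT treats NULLs as equal); column names are taken from the example above:

```sql
INSERT INTO dbo.TrackTable (...columns)
SELECT i.* -- ...columns
FROM inserted i
INNER JOIN deleted d
    ON d.SomePrimaryKeyCol = i.SomePrimaryKeyCol
-- Row qualifies only if the value actually differs, NULLs included.
WHERE EXISTS (SELECT i.customer_type EXCEPT SELECT d.customer_type);
```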
Microsoft documentation and w3schools are great resources for learning how to leverage various types of queries and trigger best practices.
Prevent trigger from doing anything if no rows changed.
Writing-triggers-the-right-way
CREATE TRIGGER the_trigger on dbo.Data
after update
as
begin
if @@ROWCOUNT = 0
return
set nocount on
/* Some Code Here */
end
Get a list of rows that changed:
CREATE TRIGGER the_trigger on dbo.data
AFTER UPDATE
AS
SELECT * from inserted
Previous stack overflow on triggers
@anna - as per @Oded's answer, when an update is performed, the rows are in the deleted table with the old information, and in the inserted table with the new information –

SQL trigger leaves last matching row untouched

This is a trigger used to add the number of pages when a document's metadata row is added to a table.
USE [DD1234]
GO
/****** Object: Trigger [dbo].[AfterIns_Pages_ABC_LandCont] Script Date: 10/02/2014 16:30:33 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [dbo].[AfterIns_Pages_ABC_LandCont]
ON [dbo].[PVDM_DOCS_1234_13]
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
UPDATE D
SET D.DOCINDEX13 = O.Tot_Pages
FROM dbo.PVDM_DOCS_1234_13 D,
(SELECT DOCID, Sum(PAGES) AS Tot_Pages FROM dbo.PVDM_OBJS_1234_13
GROUP BY DOCID) O
WHERE D.DOCID = O.DOCID
AND D.DOCINDEX13 IS NULL
END
GO
So basically after a row (or many) are added to the PVDM_DOCS_1234_13 table, use the DOCID from that table to match the same DOCID in the Object (PVDM_OBJS_1234_13) table to retrieve the PAGES value, and then insert that into DOCINDEX13 (the field where we're storing the user visible page count) where DOCINDEX13 is null.
If a batch of 5, or 500 rows are inserted into PVDM_DOCS_1234_13, the last one inserted never gets the page count inserted, it remains NULL. All the rest get the page count inserted. Cannot figure out why the last row always gets left behind.
Note I'm an SQL novice, this was coded by someone no longer available.
Any ideas why this would work for all new rows except the last one inserted?
Thanks!
One possibility is that DOCINDEX13 is not NULL on the row being updated. Without sample data, it is hard to see what is going wrong.
By the way, I'd be inclined to write this query as:
with toupdate as (
select d.*, o.tot_pages
from dbo.PVDM_DOCS_1234_13 d cross apply
(select sum(pages) as tot_pages
from dbo.PVDM_OBJS_1234_13 o
where o.docid = d.docid
) o
)
update toupdate
set docindex13 = tot_pages
where docindex13 is null;
As a general rule in SQL, don't use commas in the FROM clause. Always use explicit JOIN syntax.
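To illustrate the point, here is the original comma-style join from the question rewritten with an explicit join; behavior is unchanged:

```sql
-- Original: FROM dbo.PVDM_DOCS_1234_13 D, (SELECT ...) O WHERE D.DOCID = O.DOCID
-- Explicit-join equivalent:
UPDATE D
SET D.DOCINDEX13 = O.Tot_Pages
FROM dbo.PVDM_DOCS_1234_13 D
INNER JOIN (SELECT DOCID, SUM(PAGES) AS Tot_Pages
            FROM dbo.PVDM_OBJS_1234_13
            GROUP BY DOCID) O
    ON O.DOCID = D.DOCID
WHERE D.DOCINDEX13 IS NULL;
```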

After insert trigger for updating a column

I am writing an after insert trigger trying to find a solution to this problem here:
https://stackoverflow.com/questions/19355644/dead-ends-all-around-trying-to-update-geography-column
What I am unsure of is how to write the trigger to take multiple records into consideration, which is explained there as a scenario you need to code for.
So far I have this, but it applies only to a single record, so if 100 records were inserted in a batch, 99 would not be updated. This is my understanding so far and may not be correct.
create trigger tri_inserts on [dbo].[Address]
after insert
as
set nocount on
update Address
SET AddyGeoCode = GEOGRAPHY::Point(inserted.AddyLat, inserted.Addylong, 4326)
GO
So should I join to the inserted table to discover / update all the new records?
In case it is needed my Address table schema is AddyLat & AddyLong decimal(7,4) and AddyGeoCode Geography.
TIA
Yes, you need to join on inserted table.
UPDATE a
SET a.AddyGeoCode = GEOGRAPHY::Point(a.AddyLat, a.Addylong, 4326) --you can use AddyLat&Long from either a or i
FROM Address a
INNER JOIN inserted i ON a.id = i.id --whatever are your PK columns
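Wrapped into a complete trigger, that update might look like the sketch below; `id` stands in for whatever the actual primary key of Address is:

```sql
CREATE TRIGGER tri_inserts ON dbo.[Address]
AFTER INSERT
AS
SET NOCOUNT ON;
-- Joining to inserted means all rows of a batch insert are geocoded.
UPDATE a
SET a.AddyGeoCode = GEOGRAPHY::Point(i.AddyLat, i.AddyLong, 4326)
FROM dbo.[Address] a
INNER JOIN inserted i ON a.id = i.id;  -- join on your real PK column(s)
GO
```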

How to store historical records in a history table in SQL Server

I have 2 tables, Table-A and Table-A-History.
Table-A contains current data rows.
Table-A-History contains historical data
I would like to have the most current row of my data in Table-A, and Table-A-History containing historical rows.
I can think of 2 ways to accomplish this:
whenever a new data row is available, move the current row from Table-A to Table-A-History and update the Table-A row with the latest data (via insert into select or select into table)
or
whenever a new data row is available, update Table-A's row and insert a new row into Table-A-History.
In regards to performance is method 1 or 2 better? Is there a better different way to accomplish this?
Basically you are looking to track/audit changes to a table while keeping the primary table small in size.
There are several ways to solve this issue. The cons and pros of each way is discussed below.
1 - Auditing of the table with triggers.
If you are looking to audit the table (inserts, updates, deletes), look at my "How to Prevent Unwanted Transactions" SQL Saturday slide deck with code - http://craftydba.com/?page_id=880. The trigger that fills the audit table can hold information from multiple tables, if you choose, since the data is saved as XML. Therefore, you can un-delete an action if necessary by parsing the XML. It tracks who and what made the change.
Optionally, you can have the audit table on its own filegroup.
Description:
Table Triggers For (Insert, Update, Delete)
Active table has current records.
Audit (history) table for non-active records.
Pros:
Active table has smaller # of records.
Index in active table is small.
Change is quickly reported in audit table.
Tells you what change was made (ins, del, upd)
Cons:
Have to join two tables to do historical reporting.
Does not track schema changes.
2 - Effective dating the records
If you are never going to purge the data from the audit table, why not mark the row as deleted but keep it forever? Many systems, like PeopleSoft, use effective dating to show whether a record is still active. In the BI world this is called a type 2 dimension (slowly changing dimensions). See the Data Warehouse Institute article: http://www.bidw.org/datawarehousing/scd-type-2/ Each record has a begin and end date.
All active records have an end date of NULL.
Description:
Table Triggers For (Insert, Update, Delete)
Main table has both active and historical records.
Pros:
Historical reporting is easy.
Change is quickly shown in main table.
Cons:
Main table has a large # of records.
Index of main table is large.
Both active & history records in same filegroup.
Does not tell you what change was made (ins, del, upd)
Does not track schema changes.
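A minimal effective-dated (type 2) layout along those lines; the table and column names here are purely illustrative:

```sql
-- Sketch of an effective-dated table: history and current rows live together.
CREATE TABLE dbo.CustomerHistory
(
    CustomerId   int           NOT NULL,
    CustomerName nvarchar(100) NOT NULL,
    BeginDate    datetime2     NOT NULL,
    EndDate      datetime2     NULL,   -- NULL marks the current row
    CONSTRAINT PK_CustomerHistory PRIMARY KEY (CustomerId, BeginDate)
);

-- An "update" closes the current row and opens a new one.
UPDATE dbo.CustomerHistory
SET EndDate = SYSDATETIME()
WHERE CustomerId = 1 AND EndDate IS NULL;

INSERT dbo.CustomerHistory (CustomerId, CustomerName, BeginDate, EndDate)
VALUES (1, N'New Name', SYSDATETIME(), NULL);
```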
3 - Change Data Capture (Enterprise Feature).
Microsoft SQL Server 2008 introduced the change data capture (CDC) feature. While this tracks data changes using a log reader after the fact,
it lacks things like who and what made the change. MSDN Details - http://technet.microsoft.com/en-us/library/bb522489(v=sql.105).aspx
This solution is dependent upon the CDC jobs running. Any issues with sql agent will cause delays in data showing up.
See change data capture tables.
http://technet.microsoft.com/en-us/library/bb500353(v=sql.105).aspx
Description:
Enable change data capture
Pros:
Do not need to add triggers or tables to capture data.
Tells you what change was made (ins, del, upd) via the __$operation field in
<user_defined_table>_CT
Tracks schema changes.
Cons:
Only available in enterprise version.
Since it reads the log after the fact, time delay in data showing up.
The CDC tables do not track who or what made the change.
Disabling CDC removes the tables (not nice)!
Need to decode and use the __$update_mask to figure out what columns changed.
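Enabling CDC is two system procedure calls, first at database level and then per table; the capture-instance table and query functions are generated for you. The table name below is hypothetical:

```sql
-- Requires an edition with CDC and SQL Agent running for the capture jobs.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'TableA',   -- hypothetical table name
    @role_name     = NULL;        -- NULL = no gating role

-- Changes then accumulate in cdc.dbo_TableA_CT, including __$operation.
```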
4 - Change Tracking Feature (All Versions).
Microsoft SQL Server 2008 also introduced the change tracking feature. Unlike CDC, it comes with all versions; however, it comes with a bunch of TSQL functions that you have to call to figure out what happened.
It was designed for the purpose of synchronizing one data source with SQL Server via an application. There is a whole synchronization framework on TechNet.
http://msdn.microsoft.com/en-us/library/bb933874.aspx
http://msdn.microsoft.com/en-us/library/bb933994.aspx
http://technet.microsoft.com/en-us/library/bb934145(v=sql.105).aspx
Unlike CDC, you specify how long changes last in the database before being purged. Also, inserts and deletes do not record data. Updates only record what field changed.
Since you are synchronizing the SQL server source to another target, this works fine.
It is not good for auditing unless you write a periodic job to figure out changes.
You will still have to store that information somewhere.
Description:
Enable change tracking
Cons:
Not a good auditing solution
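For reference, turning change tracking on and reading changes looks like the following; the database and table names are illustrative:

```sql
-- Enable at database level with a retention window.
ALTER DATABASE CRM                       -- hypothetical database name
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Enable per table.
ALTER TABLE dbo.TableA                   -- hypothetical table
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON);

-- Read everything changed since sync version 0; the function returns the
-- operation type and the table's primary key columns.
SELECT ct.SYS_CHANGE_OPERATION, ct.TableAId
FROM CHANGETABLE(CHANGES dbo.TableA, 0) AS ct;
```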
The first three solutions will work for your auditing. I like the first solution since I use it extensively in my environment.
Sincerely
John
Code Snippet From Presentation (Autos Database)
--
-- 7 - Auditing data changes (table for DML trigger)
--
-- Delete existing table
IF OBJECT_ID('[AUDIT].[LOG_TABLE_CHANGES]') IS NOT NULL
DROP TABLE [AUDIT].[LOG_TABLE_CHANGES]
GO
-- Add the table
CREATE TABLE [AUDIT].[LOG_TABLE_CHANGES]
(
[CHG_ID] [numeric](18, 0) IDENTITY(1,1) NOT NULL,
[CHG_DATE] [datetime] NOT NULL,
[CHG_TYPE] [varchar](20) NOT NULL,
[CHG_BY] [nvarchar](256) NOT NULL,
[APP_NAME] [nvarchar](128) NOT NULL,
[HOST_NAME] [nvarchar](128) NOT NULL,
[SCHEMA_NAME] [sysname] NOT NULL,
[OBJECT_NAME] [sysname] NOT NULL,
[XML_RECSET] [xml] NULL,
CONSTRAINT [PK_LTC_CHG_ID] PRIMARY KEY CLUSTERED ([CHG_ID] ASC)
) ON [PRIMARY]
GO
-- Add defaults for key information
ALTER TABLE [AUDIT].[LOG_TABLE_CHANGES] ADD CONSTRAINT [DF_LTC_CHG_DATE] DEFAULT (getdate()) FOR [CHG_DATE];
ALTER TABLE [AUDIT].[LOG_TABLE_CHANGES] ADD CONSTRAINT [DF_LTC_CHG_TYPE] DEFAULT ('') FOR [CHG_TYPE];
ALTER TABLE [AUDIT].[LOG_TABLE_CHANGES] ADD CONSTRAINT [DF_LTC_CHG_BY] DEFAULT (coalesce(suser_sname(),'?')) FOR [CHG_BY];
ALTER TABLE [AUDIT].[LOG_TABLE_CHANGES] ADD CONSTRAINT [DF_LTC_APP_NAME] DEFAULT (coalesce(app_name(),'?')) FOR [APP_NAME];
ALTER TABLE [AUDIT].[LOG_TABLE_CHANGES] ADD CONSTRAINT [DF_LTC_HOST_NAME] DEFAULT (coalesce(host_name(),'?')) FOR [HOST_NAME];
GO
--
-- 8 - Make DML trigger to capture changes
--
-- Delete existing trigger
IF OBJECT_ID('[ACTIVE].[TRG_FLUID_DATA]') IS NOT NULL
DROP TRIGGER [ACTIVE].[TRG_FLUID_DATA]
GO
-- Add trigger to log all changes
CREATE TRIGGER [ACTIVE].[TRG_FLUID_DATA] ON [ACTIVE].[CARS_BY_COUNTRY]
FOR INSERT, UPDATE, DELETE AS
BEGIN
-- Detect inserts
IF EXISTS (select * from inserted) AND NOT EXISTS (select * from deleted)
BEGIN
INSERT [AUDIT].[LOG_TABLE_CHANGES] ([CHG_TYPE], [SCHEMA_NAME], [OBJECT_NAME], [XML_RECSET])
SELECT 'INSERT', '[ACTIVE]', '[CARS_BY_COUNTRY]', (SELECT * FROM inserted as Record for xml auto, elements , root('RecordSet'), type)
RETURN;
END
-- Detect deletes
IF EXISTS (select * from deleted) AND NOT EXISTS (select * from inserted)
BEGIN
INSERT [AUDIT].[LOG_TABLE_CHANGES] ([CHG_TYPE], [SCHEMA_NAME], [OBJECT_NAME], [XML_RECSET])
SELECT 'DELETE', '[ACTIVE]', '[CARS_BY_COUNTRY]', (SELECT * FROM deleted as Record for xml auto, elements , root('RecordSet'), type)
RETURN;
END
-- Detect updates
IF EXISTS (select * from inserted) AND EXISTS (select * from deleted)
BEGIN
INSERT [AUDIT].[LOG_TABLE_CHANGES] ([CHG_TYPE], [SCHEMA_NAME], [OBJECT_NAME], [XML_RECSET])
SELECT 'UPDATE', '[ACTIVE]', '[CARS_BY_COUNTRY]', (SELECT * FROM deleted as Record for xml auto, elements , root('RecordSet'), type)
RETURN;
END
END;
GO
--
-- 9 - Test DML trigger by updating, deleting and inserting data
--
-- Execute an update
UPDATE [ACTIVE].[CARS_BY_COUNTRY]
SET COUNTRY_NAME = 'Czech Republic'
WHERE COUNTRY_ID = 8
GO
-- Remove all data
DELETE FROM [ACTIVE].[CARS_BY_COUNTRY];
GO
-- Execute the load
EXECUTE [ACTIVE].[USP_LOAD_CARS_BY_COUNTRY];
GO
-- Show the audit trail
SELECT * FROM [AUDIT].[LOG_TABLE_CHANGES]
GO
-- Disable the trigger
ALTER TABLE [ACTIVE].[CARS_BY_COUNTRY] DISABLE TRIGGER [TRG_FLUID_DATA];
(Screenshot showing the look and feel of the audit table omitted.)
The recent versions of SQL server (2016+ and Azure) have temporal tables which provide the exact functionality requested, as a first class feature.
https://learn.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables
Somebody at Microsoft probably read this page. :)
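A system-versioned table wires the history up automatically; a minimal sketch with illustrative names:

```sql
-- SQL Server 2016+: the engine maintains the history table itself.
CREATE TABLE dbo.TableA
(
    Id        int IDENTITY PRIMARY KEY,
    SomeValue nvarchar(100) NOT NULL,
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.TableA_History));

-- Point-in-time query; no triggers involved.
SELECT * FROM dbo.TableA
FOR SYSTEM_TIME AS OF '2020-01-01';
```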
Logging changes is something I've generally done using triggers on a base table to record changes in a log table. The log table has additional columns to record the database user, action and date/time.
create trigger [Table-A_LogDelete] on dbo.[Table-A]
for delete
as
declare @Now as DateTime = GetDate()
set nocount on
insert into [Table-A-History]
select SUser_SName(), 'delete-deleted', @Now, *
from deleted
go
exec sp_settriggerorder @triggername = 'Table-A_LogDelete', @order = 'last', @stmttype = 'delete'
go
create trigger [Table-A_LogInsert] on dbo.[Table-A]
for insert
as
declare @Now as DateTime = GetDate()
set nocount on
insert into [Table-A-History]
select SUser_SName(), 'insert-inserted', @Now, *
from inserted
go
exec sp_settriggerorder @triggername = 'Table-A_LogInsert', @order = 'last', @stmttype = 'insert'
go
create trigger [Table-A_LogUpdate] on dbo.[Table-A]
for update
as
declare @Now as DateTime = GetDate()
set nocount on
insert into [Table-A-History]
select SUser_SName(), 'update-deleted', @Now, *
from deleted
insert into [Table-A-History]
select SUser_SName(), 'update-inserted', @Now, *
from inserted
go
exec sp_settriggerorder @triggername = 'Table-A_LogUpdate', @order = 'last', @stmttype = 'update'
Logging triggers should always be set to fire last. Otherwise, a subsequent trigger may roll back the original transaction, but the log table will have already been updated. This is a confusing state of affairs.
How about method 3: Make Table-A a view against Table-A-History. Insert into Table-A-History and let appropriate filtering logic generate Table-A. That way you're only inserting into one table.
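That method 3 might be sketched like this; the key and date column names are hypothetical, and in practice you would list the columns explicitly rather than exposing the ranking column:

```sql
-- Everything is inserted into Table-A-History; "Table-A" becomes a view
-- exposing only each key's most recent row.
CREATE VIEW dbo.[Table-A]
AS
SELECT h.*   -- in practice, list columns and drop rn
FROM (SELECT *,
             ROW_NUMBER() OVER (PARTITION BY KeyColumn       -- hypothetical key
                                ORDER BY ValidFrom DESC) AS rn -- hypothetical date column
      FROM dbo.[Table-A-History]) h
WHERE h.rn = 1;
```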
Even though it consumes more space, having the history table contain the most recent record as well will save you pain when writing reports and seeing how and when changes occurred. Something worth thinking about, in my opinion.
As far as performance, I would expect them to be identical. But, you certainly wouldn't want to delete the record (option 1's "move") from the non-hist table because you are using referential integrity between the two tables, right?
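One detail worth knowing either way: SQL Server can do method 1's "move" in a single statement via the OUTPUT clause, so the old row image is archived without a separate read. A sketch, assuming the history table's column list matches and using hypothetical column/key names:

```sql
DECLARE @Key int = 1, @NewValue nvarchar(100) = N'new';  -- hypothetical values

-- Update the current row and capture its pre-update image in one statement.
UPDATE dbo.[Table-A]
SET SomeColumn = @NewValue      -- hypothetical column
OUTPUT deleted.*                -- the row as it was before the update
INTO dbo.[Table-A-History]
WHERE KeyColumn = @Key;         -- hypothetical key column
```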
I would prefer method 1
In addition, I will have also maintain the current record in the history table too
It depends on the need.
Option 1 is OK.
But you have method 4 too :)
Insert the new record into your table, and
move old records to the archive table on a regular basis using a scheduler (the MySQL event scheduler; SQL Server Agent fills the same role). You can schedule data archival at times of minimal load, for example at night.
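On SQL Server, the statement such a scheduled job would run can again move rows atomically with OUTPUT; the age column here is hypothetical:

```sql
-- Archive rows older than 90 days in one atomic statement.
DELETE FROM dbo.[Table-A]
OUTPUT deleted.*
INTO dbo.[Table-A-History]
WHERE LastModified < DATEADD(DAY, -90, SYSDATETIME());
```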
You can simply create a procedure or job to overcome this issue, like this:
create procedure [dbo].[sp_LoadNewData]
AS
INSERT INTO [dbo].[Table-A-History]
(
[1.Column Name], [2.Column Name], [3.Column Name], [4.Column Name]
)
SELECT [1.Column Name], [2.Column Name], [3.Column Name], [4.Column Name]
FROM dbo.[Table-A] S
WHERE NOT EXISTS
(
SELECT * FROM [dbo].[Table-A-History] D WHERE D.[1.Column Name] =S.[1.Column Name]
)
Note: [1.Column Name] is common column for the tables.

infinite trigger loop....by design(!). How to work-around?

I know I'm going to get flamed for this, but....
I have tables ProductA, ProductB, and ProductC which have very similar schemas but for 2 or 3 columns in each. Each table has an insert trigger which writes a duplicate row for each insert in A, B, or C to table Products, a consolidation of all products. In addition, update triggers on A, B, or C likewise update their equivalent row in table Products, as do delete triggers. All worked flawlessly until... we update, say, table Products column A, which also exists in tables A, B, and C.
I'm looking to develop a trigger on table Products that will propagate an update of column A to column A in each of tables A, B, and C, BUT without invoking the update triggers on tables A, B, and C. The desired behavior is for updates to work in both directions without incurring an endless loop. (Note: only 2 columns in table Products need to be replicated BACK to tables A, B, and C.)
Options are:
1. Redesign the schema so this situation doesn't exist (not in the cards; this is a quick solution, and the redesign can be done by someone else);
2. Manually disable the triggers when I update table Products (this is all done at the application level; users won't have the ability to log into SSMS and disable triggers when they update table Products);
3. Come to Stack Overflow and hope someone has already encountered this type of problem!
Conceptually, how could this be done?
6/7 Update:
Here is the trigger code on Table A (e.g):
ALTER TRIGGER [dbo].[GRSM_WETLANDS_Point_GIS_tbl_locations_update]
ON [dbo].[GRSM_WETLANDS_POINT]
after update
AS
BEGIN
SET NOCOUNT ON;
update dbo.TBL_LOCATIONS
set
X_Coord = i.X_Coord,
Y_Coord = i.Y_Coord,
PlaceName = i.PlaceName,
FCSubtype = case
when i.FCSubtype = 1 then 'Point: Too Small to Determin Boundary'
when i.FCSubtype = 2 then 'Point: Boundary Determined by Contractor but not Surveyed'
when i.FCSubtype = 3 then 'Point: Wetland Reported but not yet Surveyed'
end ,
Landform = i.Landform
from dbo.TBL_LOCATIONS
Join inserted i
on TBL_LOCATIONS.GIS_Location_ID = i.GIS_Location_ID
end
GO
And
ALTER TRIGGER [dbo].[GRSM_WETLANDS_POINT_GIS_tbl_locations]
ON
[dbo].[GRSM_WETLANDS_POINT]
after INSERT
AS
BEGIN
SET NOCOUNT ON;
INSERT dbo.TBL_LOCATIONS(
X_Coord, Y_Coord,
PlaceName,
FCSubtype, Landform
)
SELECT
a.X_Coord, a.Y_Coord,
a.PlaceName,
a.FCSubtype, a.Landform
From
(
SELECT
X_Coord, Y_Coord,
PlaceName,
FCSubtype = case
when FCSubtype = 1 then 'Point: Too Small to Determin Boundary'
when FCSubtype = 2 then 'Point: Boundary Determined by Contractor but not Surveyed'
when FCSubtype = 3 then 'Point: Wetland Reported but not yet Surveyed'
end ,
Landform
FROM inserted
) AS a
end
GO
And here is the currently disabled update trigger on table products:
ALTER TRIGGER [dbo].[tbl_locations_updateto_geo]
ON [dbo].[TBL_LOCATIONS]
for update
AS
BEGIN
--IF @@NESTLEVEL>1 RETURN
SET NOCOUNT ON;
update dbo.GRSM_Wetlands_Point
set
X_Coord = i.X_Coord,
Y_Coord = i.Y_Coord,
PlaceName = i.PlaceName,
FCSubtype = i.FCSubtype,
Landform = i.Landform
from dbo.TBL_LOCATIONS
Join inserted i
on TBL_LOCATIONS.GIS_Location_ID = i.GIS_Location_ID
where TBL_LOCATIONS.FCSubtype = 'Polygon: Determination Made by GPS Survey'
or TBL_LOCATIONS.FCSubtype = 'Polygon: Determination Derived from NWI'
or TBL_LOCATIONS.FCSubtype = 'Polygon: Determination Made by Other Means'
or TBL_LOCATIONS.FCSubtype = 'Polygon: Legal Jurisdictional Determination';
end
GO
(tbl names changed to keep with the posting text)
There are two types of recursion, direct and indirect: http://msdn.microsoft.com/en-us/library/ms190739.aspx
You can use the RECURSIVE_TRIGGERS database option to stop direct recursion, but your case is indirect recursion, so you'd have to turn off the 'nested triggers' server option. This will fix your problem, but if anything else in the system relies on trigger nesting then it won't be a good option.
USE DatabaseName
GO
EXEC sp_configure 'show advanced options', 1
GO
RECONFIGURE
GO
EXEC sp_configure 'nested triggers', 0
GO
RECONFIGURE
GO
EDIT in response to your updated post:
I almost hate to give you this solution because you're ultimately taking a really crappy design and extending it... making even more of a mess than it is already instead of taking the time to understand what's going on and just fixing it. You should honestly just create another table to hold the values that need to be in sync between the two tables so the data is only in one place, and then relate those tables to that one through a key. But nonetheless...
You need a flag to indicate that you're updating in one trigger, so the other trigger can abort its operation if it sees that the flag is set. Since (as far as I know) you can only have locally scoped variables, that means you'll need a table to store this flag value in and look it up from.
You can implement this solution with varying levels of complexity, but the easiest way is to just have all triggers set the flag to true when starting and false when ending. And before they start, they check the flag and stop executing if it's true.
The problem with this is that there could be another update, unrelated to a trigger, happening at the same time, and it wouldn't get propagated to the next table. If you want to take this route, then I'll leave it up to you to figure out how to solve that problem.
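On SQL Server 2016+, one way to sketch that flag without a shared table is SESSION_CONTEXT, which is scoped to the current session (so concurrent sessions don't see each other's flag); all names below are illustrative:

```sql
-- Inside the Products trigger: flag the session before propagating.
EXEC sys.sp_set_session_context @key = N'sync_in_progress', @value = 1;

UPDATE a
SET a.ColumnA = i.ColumnA              -- hypothetical shared column
FROM dbo.ProductA a
JOIN inserted i ON i.ProductId = a.ProductId;  -- hypothetical key

EXEC sys.sp_set_session_context @key = N'sync_in_progress', @value = 0;
GO

-- At the top of the ProductA/B/C triggers: bail out if we caused the update.
IF CAST(SESSION_CONTEXT(N'sync_in_progress') AS int) = 1
    RETURN;
```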