I created a linked MySQL server on SQL Server 2008 R2. I'm trying to create a trigger on a SQL Server table that automatically updates a field in the linked server table. I have a table called "QFORCHOICE" in SQL Server with fields "prodcode", "prodname" and "avqty", and a table "que_for_choice" in MySQL with fields "procode", "proname" and "avqty".
I want the trigger to update the value of "procode" in the linked server whenever the value of "prodcode" in SQL Server changes. This is what I have so far, but it has errors:
create trigger [QFORCHOICE]
ON dbo.QFORCHOICE
FOR INSERT
AS
DECLARE @prodcode numeric(18,0)
DECLARE @prodname varchar(50)
DECLARE @avqty numeric(18,0)
BEGIN
    SELECT
        @procode = procode,
        @proname = proname,
        @avqty = avqty
    FROM inserted
    update [LINKED_MYSQL].[que_for_choice]
    SET prodname = @prodname, avqty = @avqty
    WHERE prodcode = @prodcode
end
Can anybody please help?
Thanks in advance.
1- From within a trigger, you shouldn't attempt to access anything external to the current database. It will severely slow down any insert activity, and if there are any networking issues, or the remote server is down for any reason, you'll cause the original transaction to roll back. This is rarely the right thing to do.
2- You're making the reliability of your system dependent on the reliability of two servers rather than one (say they both have 99% reliability: your system that ties them together with a trigger now has 98% overall reliability).
I have a table that contains two columns: a resource key, and (very roughly) when it was last accessed.
I have a number of servers that are periodically dumping data about resource accesses to the table. They should either update the access time for a resource key if it already exists, or insert it if it doesn't.
Another server will very rarely generate a report from this table.
I don't require this table to be consistent. I'm okay with the reporting server reading the table in the middle of a dump. If two writing servers try to update the same row, I don't care which gets its data in.
There are two major questions:
Is what I'm looking for even possible with SQL Server?
If it is possible, I'm potentially going to have multiple servers racing on their 'insert or update' and resulting in primary key constraint violations. Is there any way to resolve this problem?
I'm okay with the reporting server reading the table in the middle of a dump.
Look into the READ COMMITTED SNAPSHOT ISOLATION option. This was introduced in SQL Server 2005 and appears to be available across all editions. It is typically better than using the WITH (NOLOCK) table hint. For more info, check out:
Snapshot Isolation in SQL Server
Understanding Row Versioning-Based Isolation Levels
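Enabling the option is a one-time database setting. A minimal sketch, assuming a database named ResourceDb (the name is illustrative):

```sql
-- Enable row versioning so readers see the last committed version of a row
-- instead of blocking on writers' locks (database name is an assumption).
ALTER DATABASE ResourceDb
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE; -- roll back in-flight transactions so the change can apply
```

Once it's on, statements running at the default READ COMMITTED level automatically read row versions; no WITH (NOLOCK) hints are needed.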
If two writing servers try to update the same row, I don't care which gets its data in.
It is not possible for two operations to write the same row at the same time. One will wait.
Regarding two trying to INSERT the same value at the same time, since you don't care which one "wins", just trap and discard the error ;-).
Maybe something along the lines of:
BEGIN TRY
    UPDATE tbl
    SET    tbl.AccessTime = GETDATE()
    FROM   SchemaName.TableName tbl
    WHERE  tbl.ResourceKey = @ResourceKey;

    IF (@@ROWCOUNT = 0)
    BEGIN
        INSERT INTO SchemaName.TableName (ResourceKey, AccessTime)
        VALUES (@ResourceKey, GETDATE());
    END;
END TRY
BEGIN CATCH
    IF (ERROR_NUMBER() <> 2627) -- 2627 = Violation of PRIMARY KEY constraint
    BEGIN
        ;THROW;
    END;
END CATCH;
If you are on SQL Server 2014 (or newer, whenever that happens), then you can look into using:
the WITH DELAYED_DURABILITY = ON option for COMMIT TRAN. Look here for more info: Control Transaction Durability
In-Memory OLTP (64 bit, Enterprise Edition only)
I have been told to create a trigger for inserts on our SQL Server 2000.
I've never written a trigger before, and our old server does not appear to have any triggers defined on it.
Following the Triggers in SQL Server tutorial, I have created this trigger that I have not executed yet:
create trigger trgAfterMachine1Insert on Test_Results
after insert
as
declare @sn varchar(20), @sysID varchar(50),
        @opID varchar(50), @testResult varchar(255)
select @sn=Serial_Number from inserted
select @sysID=System_ID from inserted
select @opID=Op_ID from inserted
select @testResult=Test_Result from inserted
exec sp1_AddSnRecord(@sn, @sysID, @opID, @testResult)
print 'Machine1 After Insert Trigger called AddSnRecord'
go
First, notice that I have written a stored procedure called sp1_AddSnRecord to insert this data into a new table (so I do not mess up the existing table). I certainly hope a stored procedure can be called from a trigger, because it performs data validation and enumeration on the data before inserting anything into the other tables.
I really don't see a way in SQL Server 2000 to test to see if this will work, and I'm a bit nervous about just hitting that Execute button in Management Studio.
So, I've been looking at this for a while and trying to read up on some other SO techniques.
From Aaron Bertrand's example HERE, it looks like I can combine all of my select calls into one line:
create trigger trgAfterMachine1Insert on Test_Results
after insert
as
declare @sn varchar(20), @sysID varchar(50),
        @opID varchar(50), @testResult varchar(255)
select @sn=Serial_Number, @sysID=System_ID,
       @opID=Op_ID, @testResult=Test_Result
from inserted
exec sp1_AddSnRecord(@sn, @sysID, @opID, @testResult)
print 'Machine1 After Insert Trigger called AddSnRecord'
go
Otherwise, I don't see anything more enlightening anywhere or see anyone asking about techniques to test triggers before creating them.
One of my colleagues here at work does more SQL work than I do, but he admits that he has never written triggers. All he was able to tell me was, "Man, if you screw that up, you could cause a lot of problems on the server!" All that did was make me nervous, which is why I am here. (98% of what I do is write C# code for Windows Forms and old Windows Mobile devices.)
So, how would I verify that this trigger is valid and will not cause any issues on the Server before creating? I've got a local SQL Server Express on my machine, but it is much newer than SQL 2000 and does not have the live data running on it from our Production floor.
If the trigger proves to be faulty afterwards, would I be able to remove it with a simple delete trigger trgAfterMachine1Insert? My search for "delete trigger" seems to have returned mostly triggers for AFTER DELETE.
Thanks in advance.
UPDATE: Including the stored procedure at Martin's request:
ALTER PROCEDURE [dbo].[sp1_AddSnRecord](
    @serial_Number varchar(20),
    @system_ID varchar(50),
    @op_ID varchar(50),
    @test_Result varchar(255)) as begin
    set NOCOUNT ON;
    declare @sn as VarChar(20);
    set @sn=dbo.fn_ValidSN(@serial_Number);
    if (7<Len(@sn)) begin
        declare @badge varchar(50), @result varchar(50), @sysID varchar(50);
        set @badge=dbo.fn_GetBadge(@op_ID);
        set @result=dbo.fn_GetTestResult(@test_Result);
        set @sysID=dbo.fn_GetSysType(@system_ID);
        if ((0<Len(@badge)) and (0<Len(@result)) and (0<Len(@sysID))) begin
            declare @id int;
            select @id=ID from Serial_Numbers where Serial_Number=@sn;
            if (@id<1) begin -- this serial number has not been entered
                insert into Serial_Numbers (Serial_Number) values (@sn);
                select @id=@@IDENTITY from Serial_Numbers;
            end
            if (0<@id) begin -- now insert into SN_Records
                insert into SN_Records (SN_ID, SYS_ID, OP_ID, Date_Time, Test_Result)
                values (@id, @sysID, @badge, GetDate(), @result);
            end
        end
    end
end
So, let me rephrase what you are saying:
you have no experience writing triggers
there is no one else in the company with experience to write triggers
you only have a production environment and no other place to test your code
management is telling you to get this done by tonight
This is a sure recipe for disaster.
First you need to stand up against requests where your only option is to fail. Tell management that their data is too important to do something like this without proper testing.
Then get an appropriate testing environment. If your company is an MSDN subscriber, you will have access to a copy of SQL Server 2000 Developer Edition that you can install on your laptop, or better, in a virtual machine.
While you are waiting for that install read about professional behavior in software development. Start with http://en.wikipedia.org/wiki/Robert_Cecil_Martin and then go to software craftsmanship.
But, as I know that won't happen tonight, you can do this in the meantime:
1) Create a new database on the production server
2) Copy the table in question: SELECT TOP(10) * INTO NewDb.dbo.Table FROM OldDb.dbo.Table;
You don't need more data as this is an insert trigger
3) Copy the other tables you need in the same way
4) apply your trigger to the table in NewDb
5) test
6) fix and go back to 5
7) if you are satisfied, copy the trigger to OldDb
Some things to consider:
Make sure you test inserts of more than one row
Don't call a procedure in the trigger. Not that that is wrong in itself, but you won't be able to get multi-row inserts working with it
Do not ever use @@IDENTITY. That's an order. (Reasons and solutions are here: http://sqlity.net/en/351/identity-crisis/ )
After all that start looking into TDD in the database here: tSQLt.org
(Most ideas work in SQL 2000, however the framework does not.)
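To illustrate the multi-row point: rather than copying single values into variables, a set-based trigger inserts straight from the inserted pseudo-table. A rough sketch, reusing the validation functions from the stored procedure above (it skips the Serial_Numbers insert and the length checks, which would need the same set-based treatment):

```sql
create trigger trgAfterMachine1Insert on Test_Results
after insert
as
begin
    set nocount on;
    -- One set-based INSERT handles a 1-row insert and a 10,000-row
    -- bulk insert identically: no variables, no procedure call.
    insert into SN_Records (SN_ID, SYS_ID, OP_ID, Date_Time, Test_Result)
    select sn.ID,
           dbo.fn_GetSysType(i.System_ID),
           dbo.fn_GetBadge(i.Op_ID),
           GetDate(),
           dbo.fn_GetTestResult(i.Test_Result)
    from inserted i
    join Serial_Numbers sn
        on sn.Serial_Number = dbo.fn_ValidSN(i.Serial_Number);
end
go
```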
Hope that helps.
I have a SQL script running on a server (ServerA).
This server has a linked server set up (ServerB); it is located off-site in a datacenter.
This query runs relatively speedily:
SELECT OrderID
FROM [ServerB].[DBName].[dbo].[MyTable]
WHERE Transferred = 0
However, when updating the same table using this query:
UPDATE [ServerB].[DBName].[dbo].[MyTable]
SET Transferred = 1
It takes > 1 minute to complete (even if there's only 1 row where Transferred = 0).
Is there any reason this would be acting so slowly?
Should I have an index on MyTable for the "Transferred" column?
If you (I mean SQL Server) cannot use an index on the remote side to select records, such a remote update in fact reads all records (primary key and other needed fields) from the remote side, updates them locally, and sends the updated records back. If your link is slow (say 10 Mbit/s or less), this scenario takes a lot of time.
I've used a stored procedure on the remote side; this way you only call that procedure remotely (with a set of optional parameters). If your updateable subset is small, proper indexes may help too, but a stored procedure is usually faster.
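A sketch of that stored-procedure approach (the procedure name is an assumption):

```sql
-- On ServerB: filter and update locally, where the index on Transferred
-- (if any) can actually be used.
CREATE PROCEDURE dbo.usp_MarkTransferred
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.MyTable
    SET Transferred = 1
    WHERE Transferred = 0;
END
```

On ServerA the script then makes a single round trip: `EXEC [ServerB].[DBName].[dbo].usp_MarkTransferred;` (the linked server must be configured with RPC Out enabled for this call to work).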
UPDATE [ServerB].[DBName].[dbo].[MyTable]
SET Transferred = 1
WHERE Transferred = 0 -- missing this condition?
How often is this table being used?
If this table is used by many users at the same time, you may have a problem with locking/blocking.
Every time a process updates a table without filtering the records, the entire table is locked by the transaction, and the other processes that need to update the table are left waiting.
In this case, you may be waiting for some other process to unlock the table.
I'm doing some fairly complex queries against a remote linked server, and it would be useful to be able to store some information in temp tables and then perform joins against it - all with the remote data. Creating the temp tables locally and joining against them over the wire is prohibitively slow.
Is it possible to force the temp table to be created on the remote server? Assume I don't have sufficient privileges to create my own real (permanent) tables.
This works from SQL 2005 SP3 linked to SQL 2005 SP3 in my environment. However, if you inspect tempdb you will find that the table is actually on the local instance, not the remote instance. I have seen this offered as a resolution on other forums and wanted to steer you away from it.
create table SecondServer.#doll
(
name varchar(128)
)
GO
insert SecondServer.#Doll
select name from sys.objects where type = 'u'
select * from SecondServer.#Doll
I am 2 years late to the party, but you can accomplish this using sp_executesql, feeding it a dynamic query to create the table remotely.
Exec RemoteServer.RemoteDatabase.RemoteSchema.sp_executesql N'Create Table here'
This will execute the temp table creation at the remote location.
It's not possible to directly create temporary tables on a linked remote server. In fact you can't use any DDL against a linked server.
For more info on the guidelines and limitations of using linked servers see:
Guidelines for Using Distributed Queries (SQL 2008 Books Online)
One workaround (off the top of my head, and this would only work if you had permissions on the remote server): you could:
on the remote server have a stored procedure that would create a persistent table, with a name based on an IN parameter
the remote stored procedure would run a query then insert the results into this table
You then query locally against that table perform any joins to any local tables required
Call another stored procedure on the remote server to drop the remote table when you're done
Not ideal, but a possible workaround.
Yes you can but it only lasts for the duration of the connection.
You need to use the EXECUTE AT syntax;
EXECUTE('SELECT * INTO ##example FROM sys.objects; WAITFOR DELAY ''00:01:00''') AT [SERVER2]
On SERVER2 the following will work (for 1 minute);
SELECT * FROM ##example
but it will not work on the local server.
Incidentally, if you open a transaction on the second server that uses ##example, the object remains until the transaction is closed. It also stops the creating statement on the first server from completing; i.e. run the following on SERVER2 and the statement on SERVER1 will continue indefinitely.
BEGIN TRAN
SELECT * FROM ##example WITH (TABLOCKX)
This is more academic than of practical use!
If memory is not much of an issue, you could also use table variables as an alternative to temporary tables. This worked for me when running a stored procedure that needed temporary data storage against a linked server.
More info: e.g. this comparison of table variables and temporary tables, including the drawbacks of using table variables.
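As a sketch of that alternative, inside the remote stored procedure you might stage intermediate rows like this (table and column names here are illustrative):

```sql
-- A table variable is scoped to the batch/procedure, so no DDL against
-- the linked server is needed: this all runs on the remote side.
DECLARE @BigOrders TABLE (
    OrderID    int PRIMARY KEY,
    CustomerID int,
    Amount     decimal(18, 2)
);

INSERT INTO @BigOrders (OrderID, CustomerID, Amount)
SELECT OrderID, CustomerID, Amount
FROM dbo.Orders
WHERE Amount > 100;

-- Join against the staged rows entirely on the remote side.
SELECT b.OrderID, b.Amount, c.CustomerName
FROM @BigOrders b
JOIN dbo.Customers c ON c.CustomerID = b.CustomerID;
```

Note that table variables have no statistics, so they suit small intermediate sets better than large ones.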
I have this code in a trigger.
if isnull(@d_email,'') <> isnull(@i_email,'')
begin
    update server2.database2.dbo.Table2
    set email = @i_email
    where user_id = (select user_id from server2.database2.dbo.Table1 where login = @login)
end
I would like to update a table on another DB server; both are MSSQL. The query above works for me, but it takes over 10 seconds to complete. Table2 has over 200k records. When I run the execution plan, it says that the remote scan has a 99% cost.
Any help would be appreciated.
First, the obvious. Check the indexes on the linked server. If I saw this problem without the linked server issue, that would be the first thing I would check.
Suggestion:
Instead of embedding the UPDATE in the server 1 trigger, create a stored procedure on the linked server and update the records by calling the stored procedure.
Try removing the sub-query from the UPDATE:
if isnull(@d_email,'') <> isnull(@i_email,'')
begin
    update t2
    set email = @i_email
    from server2.database2.dbo.Table2 t2
    inner join server2.database2.dbo.Table1 t1
        on (t1.user_id = t2.user_id)
    where t1.login = @login
end
Whoa, bad trigger! Never, and I mean never, write a trigger assuming only one record will be inserted/updated or deleted. You SHOULD NOT use variables this way in a trigger. Triggers operate on batches of data; if you assume one record, you will create integrity problems in your database.
What you need to do is join to the inserted table rather than using a variable for the value.
Also, updating a remote server may not be such a dandy idea in a trigger. If the remote server goes down, you can't insert anything into the original table. If the data can be somewhat less than real time, the normal technique is to have the trigger write to a table on the same server and then have a job pick up the new info every 5-10 minutes. That way, if the remote server is down, the records can still be inserted, and they are stored until the job can pick them up and send them to the remote server.
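A minimal sketch of that pattern (the queue table and column names are assumptions):

```sql
-- Local queue table: the trigger only ever touches this, never the linked server.
CREATE TABLE dbo.EmailSync_Queue (
    login     varchar(50)  NOT NULL,
    email     varchar(255) NOT NULL,
    queued_at datetime     NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_QueueEmailChanges ON dbo.Table1
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Set-based: handles multi-row updates, and never blocks on server2.
    INSERT INTO dbo.EmailSync_Queue (login, email)
    SELECT i.login, i.email
    FROM inserted i
    JOIN deleted d ON d.login = i.login
    WHERE ISNULL(d.email,'') <> ISNULL(i.email,'');
END
```

A scheduled job then drains EmailSync_Queue into server2.database2.dbo.Table2 every few minutes, so a remote outage only delays the sync instead of failing the original insert or update.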