Best practice: handling errors in linked servers - SQL

I am using SQL Server 2008 R2 to connect to a number of other servers of the same type from within triggers and stored procedures. These servers are geographically distributed around the world and it is vital that any errors in communication between the servers are logged along with the data that was supposed to be sent so the communication may be re-attempted at a later time. The servers are participating in an Observer pattern with one of the servers acting as the observer and handling routing of messages between the other servers.
I am looking for specific advice on how best to handle errors in this situation, particularly connectivity errors and any potential pitfalls to look out for when performing queries on remote servers.

If you are using a linked server and sending data to the other server over the linked server connection, there is no inherent way to log these requests unless you add application logic to do so.
With a linked server, if one of the servers goes down, an error is thrown in the application logic, i.e. in your case the stored procedure or the trigger will fail, saying the server does not exist or is down.
To avoid this, we use Service Broker, which implements queuing logic: you can always keep the logging and also ensure that messages will be delivered irrespective of server downtime (if the target server is down, the message waits until it is read).
http://technet.microsoft.com/en-us/library/ms166104%28v=sql.105%29.aspx
Hope this helps

Linked servers may not be the best solution for the model you're trying to implement, since the resilience you require is very difficult to achieve in the case of a linked server communication failure.
The fundamental problem is that in the case of a linked server communication failure the database engine raises an error with a severity of 20, which is high enough to abort the currently executing batch - bypassing any error handling code in the batch (for example TRY...CATCH).
SQL 2005 and later include the procedure sp_testlinkedserver, which enables the availability of the linked server to be tested before attempting to execute commands - however, this doesn't get around problems caused by communication errors encountered during a command.
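As a rough sketch (the linked server, database, and table names here are all hypothetical), the probe can be combined with TRY...CATCH so that a failed check logs the payload locally for a later retry attempt, which is what the question asks for:

    -- Minimal sketch, assuming a linked server named RemoteObserver and
    -- local/remote tables invented for illustration. Note this does NOT
    -- protect against a connection dropping mid-command, which still
    -- aborts the batch with a severity 20 error.
    DECLARE @payload NVARCHAR(MAX) = N'<event>user registered</event>';
    BEGIN TRY
        EXEC master.dbo.sp_testlinkedserver N'RemoteObserver'; -- raises if unreachable
        INSERT INTO RemoteObserver.MsgDb.dbo.Inbox (Payload)
        VALUES (@payload);
    END TRY
    BEGIN CATCH
        -- Purely local insert, so it succeeds even when the remote is down.
        INSERT INTO dbo.FailedMessages (Payload, LoggedAt, LastError)
        VALUES (@payload, SYSUTCDATETIME(), ERROR_MESSAGE());
    END CATCH;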
There are a couple of more robust options you could consider. One is the Service Broker, which provides an asynchronous message queuing model. This isn't a perfect fit for the observer pattern but the activation feature provides a means to implement push-notifications from a central point. Since you mention messaging, the conversation model employed by Service Broker might suit your aims.
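To make that concrete, a minimal Service Broker setup might look like the following sketch (all object names are invented for illustration). The useful property is that SEND completes locally, and the message waits in the transmission queue until the target service becomes reachable:

    -- Sketch only; message type, contract, queue, and service names are hypothetical.
    CREATE MESSAGE TYPE [//Example/ObserverEvent] VALIDATION = WELL_FORMED_XML;
    CREATE CONTRACT [//Example/ObserverContract]
        ([//Example/ObserverEvent] SENT BY INITIATOR);
    CREATE QUEUE dbo.ObserverQueue;
    CREATE SERVICE [//Example/ObserverService]
        ON QUEUE dbo.ObserverQueue ([//Example/ObserverContract]);
    GO
    -- Sending survives remote downtime: the message is queued locally
    -- and forwarded when the target comes back.
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Example/ObserverService]
        TO SERVICE '//Example/ObserverService'
        ON CONTRACT [//Example/ObserverContract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Example/ObserverEvent]
        (N'<event>user registered</event>');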
The other option is transactional replication; this might be more suitable if the data flow is purely from the central server to the observers.

Can someone please tell me how to link a SQL 2012 database to ActiveMQ?

Please tell me the best practices for linking a SQL Server 2012 database with ActiveMQ.
What I want to achieve is that whenever something changes in the database, my ActiveMQ should receive a message describing the change that happened in the DB.
1. How should I code this in my DB (with a trigger?)
2. How should the data be sent to ActiveMQ? Do we need an intermediary, such as a Java app?
Don't do it in the database. Do it in your app.
While SQL Server can host CLR procedures that can connect to your ActiveMQ server and submit messages, doing so is a bad idea:
writing SQLCLR code that connects to external resources is tricky (you need to understand SQL scheduling and the impact of preemption; read CLR Hosted Environment)
you add coupling (your trigger will now fail if the ActiveMQ server is not available)
to achieve transactional correctness you must enlist the ActiveMQ operation in your current transaction, so that if the current transaction rolls back after the trigger executed, the ActiveMQ operation is also rolled back. DTC using XA and SQL Server are not for the faint of heart.
A much better solution is to queue locally, into your own DB. Either use a table as a queue or use Service Broker queues (which, by the way, do support activation). Then process this local queue, post-commit, from a separate listener process. More availability, less coupling, higher throughput. If ActiveMQ is really needed, at least a local intermediate queue will decouple the availability. Having each DB operation wait for an external XA-enlisted ActiveMQ operation will tank your throughput (DTC is slow), and you have to deal with the availability problem (no ActiveMQ and XA available, no updates in the DB possible).
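As a sketch of the table-as-queue approach (dbo.Orders and all other names here are invented for illustration), the trigger enlists only a local insert, and a separate listener drains the queue with a destructive read:

    -- Local queue table; the trigger never touches ActiveMQ.
    CREATE TABLE dbo.OutboundMessages (
        MessageId  BIGINT IDENTITY PRIMARY KEY,
        Payload    NVARCHAR(MAX) NOT NULL,
        EnqueuedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME()
    );
    GO
    CREATE TRIGGER trg_Orders_Enqueue ON dbo.Orders
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Only a local insert is enlisted in the current transaction.
        INSERT dbo.OutboundMessages (Payload)
        SELECT CONCAT(N'order-changed:', i.OrderId) FROM inserted i;
    END;
    GO
    -- Run post-commit by the listener process: dequeue a batch atomically.
    ;WITH nextBatch AS (
        SELECT TOP (10) MessageId, Payload
        FROM dbo.OutboundMessages WITH (ROWLOCK, READPAST)
        ORDER BY MessageId
    )
    DELETE FROM nextBatch
    OUTPUT deleted.MessageId, deleted.Payload;

In practice the listener would wrap the DELETE and the ActiveMQ send in its own local transaction, rolling back the dequeue if the send fails, so rows simply wait when ActiveMQ is down.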

In 2012, is MSMQ still valid for queueing calls in WCF?

I have created a WCF web service that will upload data from SQL Server to our iSeries. When an end user is finished with their data entry (a batch), they will "send" the batch number to the web service, which will then upload that data to the iSeries. It cannot be assumed that this will be a quick process, and there may be many end users hitting the web service at once. Likewise, because of the way the database is set up on the iSeries, I can't upload this data simultaneously, because we may run into locks, misfired triggers, etc. So, I want to somehow queue these calls so that they are done in the order received.
I have been searching for methods to do this, and there's a lot of information from 2011 and earlier discussing MSMQ. Is that still the preferred way to do this? Would Reentrant concurrency mode be a more "modern" option?
There are a lot of alternative queuing systems. Since you have a SQL Server in place, I would recommend using MSMQ. In this combination you can use TransactionScope out of the box to handle transactions spanning your DB and the queuing system.
From my own experience, MSMQ is a proven and stable technology stack.

NServiceBus: How do I make a non-transactional call to a database from within the context of a transactional operation

Quick overview of our topology:
Web sites send commands to an NServiceBus server, which accepts the commands and then publishes the corresponding pub/sub events. This service also has message handlers that can do some processing against the DB in response to a command, for instance:
1. A user registers on the web site.
2. The web site sends an NServiceBus command to the NServiceBus service on another server.
3. The NServiceBus server has a handler for that specific type of command, which logs something to the database and sends a welcome email.
Since instituting this architecture we started to get deadlocks on the DB. I traced it down to MSDTC on the database server: if I turn that service off on the database server, NServiceBus starts throwing errors, which tells me that NServiceBus has been enlisting the DB update in the transaction.
I don't want this to happen. I want to handle DB failures myself; I only want the transaction to ensure the message is delivered to my NServiceBus proxy service, not a transaction spanning from the web through two servers to the DB and back.
Any suggestions?
EDIT: this post provides some clues, however I'm not entirely sure it's the proper way to proceed: NServiceBus - Problem with using TransactionScopeOption.Suppress in message handler
EDIT2: The reason we want the DB work outside the scope of the transaction is that the intent is to process these commands 'asynchronously' on another server, so as not to slow down the web site or make users wait for these long-running aggregation commands. If the DB is within the scope of the transaction, is that blocking execution on the web site at the point where the original command is fired to the distributor? Is there a better NServiceBus architecture for this scenario? We want the command to fire quickly and return control to the web site so the user can proceed without waiting for our long-running DB command, which updates aggregate counts, sends emails, etc.
I wouldn't recommend having the DB work outside the context of the NServiceBus transaction. Instead, try reducing the isolation level of the transactions. This can be done by calling:
.IsolationLevel(System.Transactions.IsolationLevel.ReadCommitted)
in the fluent configuration. You'll have to put this after .MsmqTransport() in v2.6. In v3.0 you can put this call almost anywhere.
RESPONSE TO EDIT2:
Just using NServiceBus will achieve your objective of not slowing down the website, regardless of the level of the transactions run on the other server. The use of transactions is to provide a guarantee that messages won't be lost in case of failure and also that you won't have to write your own deduplication logic.

Continuously checking database from a Windows service

I am making a Windows service which needs to continuously check for database entries that can be added at any time to tell it to execute some code. It looks for records whose status is set to pending and whose execute-time entry is greater than the current time. Is the only way to do this to run select statements over and over? It might need to execute the code every minute, which means I need to run the select statement every minute looking for entries in the database. I'm trying to avoid unnecessary CPU time because I'm probably going to end up paying for CPU cycles with the hosting provider.
Be aware that Notification Services is only for SQL 2005, and has been dropped from SQL 2008.
Rather than polling the database for changes, I would recommend writing a CLR stored procedure that is called from a trigger, which is raised when an appropriate change occurs (e.g. insert or update). The CLR sproc alerts your service which then performs its work.
Sending the service alert via a TCP/IP or HTTP channel is a good choice since you can deploy your service anywhere, just by modifying some configuration parameter that is read by the sproc. It also makes it easy to test the service.
I would use an event driven model in your service. The service waits on an auto-reset event, starting a block of work when the event is raised. The sproc communications channel runs on another thread and sets the event on each incoming request.
Assuming the service is doing a block of work and a set of multiple pending requests are outstanding, this design ensures that those requests trigger just 1 more block of work when the current one is finished.
You can also have multiple workers waiting on the same event if overlapping processing is desired.
Note: for external network access the CREATE ASSEMBLY statement will require the PERMISSION_SET option to be set to EXTERNAL_ACCESS.
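For reference, the registration might look like this sketch (assembly, class, and table names are all hypothetical, and in practice the database must be signed or marked TRUSTWORTHY before an EXTERNAL_ACCESS assembly will load):

    -- Register a CLR assembly that may open network connections.
    CREATE ASSEMBLY NotifierAsm
    FROM 'C:\deploy\Notifier.dll'
    WITH PERMISSION_SET = EXTERNAL_ACCESS;
    GO
    -- Expose its alert method as a T-SQL procedure.
    CREATE PROCEDURE dbo.NotifyService
    AS EXTERNAL NAME NotifierAsm.[Notifier.Alerts].Ping;
    GO
    -- Raise the alert whenever a relevant row appears.
    CREATE TRIGGER trg_Jobs_Notify ON dbo.Jobs
    AFTER INSERT
    AS EXEC dbo.NotifyService;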
Given that you mention a hosting provider, I suspect one of the main alternatives will not be open to you: Notification Services. It allows you to register for data-change events and be notified without the need to poll the database. It does, however, require Service Broker to be enabled in order to work, and that could potentially be a problem in a hosted environment - some companies keep it switched off.
The question is not tagged with a specific database, just SQL; Notification Services is a SQL Server facility.
If you're using SQL Server and open to a different approach, check out SQL Server Notification Services.
Oracle also provides notifications; they call it Database Change Notification.

MSMQ v Database Table

An existing process changes the status field of a booking record in a table, in response to user input.
I have another process to write, that will run asynchronously for records with a particular status. It will read the table record, perform some operations (including calls to third party web services), and update the record's status field to indicate that processing is completed (or In Error, with an error count).
This operation sounds very similar to a queue. What are the benefits and tradeoffs of using MSMQ over a SQL Table in this situation, and why should I choose one over the other?
It is our software that is adding and updating records in the table.
It is a new piece of work (a Windows Service) that will be performing the asynchronous processing. This needs to be "always up".
There are several reasons, which were discussed on the Fog Creek forum here: http://discuss.fogcreek.com/joelonsoftware5/default.asp?cmd=show&ixPost=173704&ixReplies=5
The main benefit is that MSMQ can still be used when there is intermittent connectivity between computers (using a store-and-forward mechanism on the local machine). As far as the application is concerned, it delivered the message to MSMQ, even though MSMQ may actually deliver the message later.
You can only insert a record into a table when you can connect to the database.
A table approach is better when a workflow approach is required, and the process will move through various stages, and these stages need persisting in the DB.
If the rate at which booking records are created is low, I would have the second process periodically check the table for new bookings.
Unless you are already using MSMQ, introducing it just gives you an extra platform component to support.
If the database is heavily loaded, or you get a lot of lock contention with two process reading and writing to the same region of the bookings table, then consider introducing MSMQ.
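If you do go with polling the table, a claim step like this sketch (table and column names are invented for illustration) lets the second process mark rows atomically, so two workers never pick up the same booking:

    -- Claim up to 10 pending bookings; READPAST skips rows another
    -- worker has already locked, and OUTPUT returns what was claimed.
    UPDATE TOP (10) dbo.Bookings WITH (ROWLOCK, READPAST)
    SET Status = 'Processing'
    OUTPUT inserted.BookingId
    WHERE Status = 'Pending';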
I also like this answer from le dorfier in the previous discussion:
"I've used tables first, then refactor to a full-fledged msg queue when (and if) there's reason - which is trivial if your design is reasonable."
Thanks, folks, for all the answers. Most helpful.
With MSMQ you can also offload the work to another server very easily, by changing the location of the queue to a machine other than the DB server.
By the way, as of SQL Server 2005 there is a built-in queue in the DB. It's called SQL Server Service Broker.
See: http://msdn.microsoft.com/en-us/library/ms345108.aspx
Also see previous discussion.
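To give a flavour of that built-in queue (the queue name here is hypothetical), a consumer blocks on RECEIVE instead of polling a table:

    -- Destructive read from a Service Broker queue.
    DECLARE @body NVARCHAR(MAX);
    WAITFOR (
        RECEIVE TOP (1) @body = CAST(message_body AS NVARCHAR(MAX))
        FROM dbo.BookingQueue
    ), TIMEOUT 5000; -- wake up after 5 seconds if nothing arrives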
If you have MSMQ expertise, it's a good option. If you know databases but not MSMQ, ask yourself if you want to become expert in another technology; whether your application is a critical one; and which you'd rather debug when there's a problem.
I have recently been investigating this myself, so I wanted to share my findings. The location of the database relative to your application is a big factor in deciding which option is faster.
I measured the time it took to insert 100 database entries versus logging the exact same data into local MSMQ messages, and averaged the results over several runs.
What I found was that when the database is on the local network, inserting a row was up to 4 times faster than logging to an MSMQ.
When the database was being accessed over a decent internet connection, inserting a row into the database was up to 6 times slower than logging to an MSMQ.
So:
Local database: the DB is faster; otherwise MSMQ is.
Instead of making raw MSMQ calls, it might be easier to implement your service as a queued COM+ component and make queued function calls from your client application. In the end, the asynchronous service still uses MSMQ in the background, but your code will be much clearer and easier to use.
I would probably go with MSMQ or ActiveMQ myself. I would suggest (presuming that, since you are considering MSMQ, you are on Windows with MS technology) looking into WCF, or, if you are using MS SQL 2005+, having a trigger that calls into .NET code to run your processing.
Service Broker was introduced in SQL 2005 and is designed to be very quick at handling messages, as the process is relatively simple (I believe its roots were in triggers). If you are concerned about scalability, in SQL 2008 an independent processing executable was released to separate the processing from SQL Server (in standard Service Broker, everything is controlled by the SQL Server instances).
I would definitely consider using Service Broker over MSMQ, but this is dependent on your SQL development/DBA resources and their knowledge.
Besides Mitch's answer, some other scenarios:
1. Each of your messages has its own due date to trigger the action. This can be done through MQ as well, but in this case I prefer to store it in the DB, as it is more controllable (see the sketch after this list).
2. The subscriber needs to filter messages and then process only a portion of them. This can be done with LINQ too; depending on how complex the filter is, the DB approach is better, because I can use LINQ to EF to do complex queries easily.
3. For deployment, I want a fully automated deployment process, so the DB is a better choice for me. I am not a big fan of manual configurations.
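For scenario 1, due-date dispatch from a table can be as simple as this sketch (the table and its columns are hypothetical):

    -- Pick up messages whose due date has passed and that are still pending.
    SELECT MessageId, Payload
    FROM dbo.ScheduledMessages
    WHERE DueAt <= SYSUTCDATETIME()
      AND Status = 'Pending'
    ORDER BY DueAt;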