I am making a Windows service which needs to continuously check for database entries, which can be added at any time, telling it to execute some code. It looks for rows whose status is set to pending and whose execute time is greater than the current time. Is the only way to do this to run select statements over and over? It might need to execute the code every minute, which means I would have to run the select statement every minute looking for entries in the database. I'm trying to avoid unnecessary CPU time, because I will probably end up paying for CPU cycles at the hosting provider.
Be aware that Notification Services is only for SQL 2005, and has been dropped from SQL 2008.
Rather than polling the database for changes, I would recommend writing a CLR stored procedure that is called from a trigger, which is raised when an appropriate change occurs (e.g. insert or update). The CLR sproc alerts your service which then performs its work.
Sending the service alert via a TCP/IP or HTTP channel is a good choice since you can deploy your service anywhere, just by modifying some configuration parameter that is read by the sproc. It also makes it easy to test the service.
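To make this concrete, here is a minimal sketch of such a CLR procedure, assuming the alert is sent over plain TCP. The class and procedure names are illustrative, and the host/port are passed in as parameters here for brevity rather than being read from a configuration source as suggested above.

using System.Net.Sockets;
using System.Text;
using Microsoft.SqlServer.Server;

public class ServiceNotifier
{
    // Illustrative sketch: deployed as a CLR stored procedure and called from
    // the trigger. Needs PERMISSION_SET = EXTERNAL_ACCESS (see the note below)
    // because it opens a network connection.
    [SqlProcedure]
    public static void NotifyWorkerService(string host, int port)
    {
        // The payload does not matter; the connection itself is the signal and
        // the service re-reads its pending rows when it wakes up.
        using (TcpClient client = new TcpClient(host, port))
        using (NetworkStream stream = client.GetStream())
        {
            byte[] ping = Encoding.ASCII.GetBytes("WAKE");
            stream.Write(ping, 0, ping.Length);
        }
    }
}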
I would use an event-driven model in your service. The service waits on an auto-reset event, starting a block of work when the event is raised. The communication channel used by the sproc runs on another thread and sets the event on each incoming request.
Assuming the service is in the middle of a block of work while several pending requests come in, this design ensures that those requests trigger just one more block of work once the current one is finished.
You can also have multiple workers waiting on the same event if overlapping processing is desired.
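On the service side, a minimal sketch of the auto-reset-event loop might look like this (the port number and the one-minute safety-net timeout are assumptions, not part of the original design):

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class WorkerService
{
    // Set by the listener thread whenever the sproc pings us.
    static readonly AutoResetEvent WorkPending = new AutoResetEvent(false);

    static void Main()
    {
        new Thread(ListenForPings) { IsBackground = true }.Start();

        while (true)
        {
            // Wake on a ping; the timeout is an optional safety net.
            WorkPending.WaitOne(TimeSpan.FromMinutes(1));

            // Any pings that arrive while this block runs collapse into a
            // single extra pass, because an auto-reset event only remembers
            // "signalled or not", not how many times it was signalled.
            ProcessPendingRows();
        }
    }

    static void ListenForPings()
    {
        var listener = new TcpListener(IPAddress.Any, 9050); // port is illustrative
        listener.Start();
        while (true)
        {
            using (TcpClient client = listener.AcceptTcpClient())
            {
                // Content is irrelevant; the connection itself is the signal.
            }
            WorkPending.Set();
        }
    }

    static void ProcessPendingRows()
    {
        // SELECT the pending rows and execute the work here.
    }
}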
Note: for external network access the CREATE ASSEMBLY statement will require the PERMISSION_SET option to be set to EXTERNAL_ACCESS.
Given that you mention a hosting provider, I suspect one of the main alternatives will not be open to you, which is Notification Services. It allows you to register for data-change events and be notified without the need to poll the database. It does, however, require Service Broker to be enabled for it to work, and that could potentially be a problem on a hosted server - some companies keep it switched off.
The question is not tagged to a specific database, just SQL; Notification Services is a SQL Server facility.
If you're using SQL Server and open to a different approach, check out SQL Server Notification Services.
Oracle also provides notifications; they call it Database Change Notification.
I am new to the world of Tibco... I have been asked to create a VB.net application to do a couple of things:
Update the value of a column in a database (which then generates a message in TIBCO EMS).
My application then needs to read this message from TIBCO, determine whether the message contains a particular word, and display the result as Pass or Fail.
I have already written the first piece of the task; however, I have no clue how to proceed with the second one. I am hoping to get some help/guidance on how to proceed! Any suggestions?
Thanks,
NewTibcoUser
This can be done easily, depending on which TIBCO tools you own. If you have BW and ADB (Active Database Adapter) then you can use that.
Option 1:
If you don't have ADB, you can mimic it by doing something like the following (ADB isn't magical; it's pretty straightforward):
1) Create a mirror of the table that is being monitored for changes (you could include just the column you want to monitor plus the key):
Key
ColumnYouWantToMonitor
DeliveryStatus (ADB_L_DeliveryStatus)
Transaction type (adb_opCode)
Time it happened (adb_timestamp)
2) Create a trigger on the monitored table that inserts a record into this mirror table.
3) Write a .NET process that polls the mirror table every 5 or 10 seconds, or whatever (make it configurable): select * from tableX where DeliveryStatus = 'N' order by transactionTime. A rough sketch of this polling loop follows after the list.
4) Place the message on the EMS queue, or make a service call to your .NET app.
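A minimal C# sketch of steps 3 and 4, assuming plain SqlClient access to the mirror table; the connection string is a placeholder and the actual EMS publish (or service call) is left as a comment:

using System;
using System.Data.SqlClient;
using System.Threading;

class MirrorTablePoller
{
    // Placeholder connection string; the query is the one from step 3.
    const string ConnStr = "Server=.;Database=MyDb;Integrated Security=true";

    static void Main()
    {
        int pollSeconds = 5; // step 3: make the interval configurable

        while (true)
        {
            using (var conn = new SqlConnection(ConnStr))
            using (var cmd = new SqlCommand(
                "select * from tableX where DeliveryStatus = 'N' order by transactionTime",
                conn))
            {
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Step 4: publish the row to the EMS queue (or call your
                        // .NET app), then update DeliveryStatus so the row is
                        // not picked up again on the next poll.
                    }
                }
            }

            Thread.Sleep(TimeSpan.FromSeconds(pollSeconds));
        }
    }
}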
Option 2
1) Create a trigger on the table and write the event to a SQL Server Service Broker queue.
2) Write a .NET app that reads from that Service Broker queue and converts it into an EMS message.
Some design considerations:
Try not to continually query (a.k.a. poll) for changes on your main table, to prevent blocking.
If your app is not running while DB changes are happening, make sure you set a message expiry time so that when your app starts it doesn't have to process thousands of messages off the queue (depending on whether you need those messages or not).
If you do need the messages, you may want to make the queue persistent to disk so you don't lose messages. Client acknowledgement in your .NET app would also be a good idea, rather than just auto-ack.
As you mention, the first point is already done (Perhaps with ADB or a custom program reacting to the DB insert).
So, your problem is strictly the "React to content of an EMS message from VB.Net" part.
I see two possibilities:
1- If you have EMS, ADB and BW, make a custom adapter subscriber (a BW config) that changes the DB in some way in reaction to messages on the bus. Your VB application can then simply query the DB to get the response status.
2- If you don't have that many products from the TIBCO stack, then you should write a simple C# EMS client program (see the examples provided with the EMS docs). This client can then signal your VB application (some kind of .NET internal signalling, maybe - I am not an expert myself) or write the response status to the DB.
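For orientation, a rough sketch of what such a C# EMS client could look like, based on the JMS-style API in the EMS .NET samples (Tibco.EMS.dll). The server URL, credentials, queue name and the word being checked are all placeholders, and the exact class and constant names should be verified against the docs for your EMS version:

using System;
using TIBCO.EMS;

class EmsWordChecker
{
    static void Main()
    {
        // Placeholder connection details and queue name.
        var factory = new ConnectionFactory("tcp://emshost:7222");
        Connection connection = factory.CreateConnection("user", "password");
        Session session = connection.CreateSession(false, Session.AUTO_ACKNOWLEDGE);
        Destination queue = session.CreateQueue("my.status.queue");
        MessageConsumer consumer = session.CreateConsumer(queue);
        connection.Start();

        // Block until the message generated by the DB update arrives.
        Message message = consumer.Receive();
        var text = message as TextMessage;

        // The rule from the question: does the body contain the particular word?
        bool pass = text != null && text.Text.Contains("TheParticularWord");
        Console.WriteLine(pass ? "Pass" : "Fail");

        connection.Close();
    }
}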
I am using SQL Server 2008 R2 to connect to a number of other servers of the same type from within triggers and stored procedures. These servers are geographically distributed around the world and it is vital that any errors in communication between the servers are logged along with the data that was supposed to be sent so the communication may be re-attempted at a later time. The servers are participating in an Observer pattern with one of the servers acting as the observer and handling routing of messages between the other servers.
I am looking for specific advice on how best to handle errors in this situation, particularly connectivity errors and any potential pitfalls to look out for when performing queries on remote servers.
If you are using a linked server and sending the data to the other server over the linked server connection, there is no inherent way to log these requests unless you add application logic to do so.
With a linked server, if one of the servers goes down, an error will be thrown in the application logic; i.e. in your case the stored procedure or the trigger will fail, saying the server does not exist or is down.
To avoid this, we tend to use Service Broker, which implements queueing logic. With it you can always keep the logging and also ensure that the messages will be delivered irrespective of server downtime (if a server is down, the message waits until it is read).
http://technet.microsoft.com/en-us/library/ms166104%28v=sql.105%29.aspx
Hope this helps
Linked servers may not be the best solution for the model you're trying to implement, since the resilience you require is very difficult to achieve in the case of a linked server communication failure.
The fundamental problem is that in the case of a linked server communication failure the database engine raises an error with a severity of 20, which is high enough to abort the currently executing batch - bypassing any error handling code in the batch (for example TRY...CATCH).
SQL 2005 and later include the procedure sp_testlinkedserver, which enables the availability of the linked server to be tested before attempting to execute commands - however, this doesn't get around problems created by communication errors encountered during a command.
There are a couple of more robust options you could consider. One is the Service Broker, which provides an asynchronous message queuing model. This isn't a perfect fit for the observer pattern but the activation feature provides a means to implement push-notifications from a central point. Since you mention messaging, the conversation model employed by Service Broker might suit your aims.
The other option is transactional replication; this might be more suitable if the data flow is purely from the central server to the observers.
Quick overview of our topology:
Web sites send commands to an nServiceBus server, which accepts the commands and then publishes the correct pub/sub events. This service also has message handlers that can do some processing against the DB in response to the command, for instance:
1. A user registers on the web site.
2. The web site sends an nServiceBus command to the nServiceBus service on another server.
3. The nServiceBus server has a handler for that specific type of command, which logs something to the database and sends a welcome email.
Since instituting this architecture we have started to get deadlocks on the DB. I have traced it down to MSDTC on the database server: if I turn that service off on the database server, nServiceBus starts throwing errors, which tells me that nServiceBus has been enlisting the DB update in the distributed transaction.
I don't want this to happen; I want to handle DB failures myself. I only want the transaction to ensure the message is delivered to my nServiceBus proxy service. I don't want a transaction spanning from the web all the way through two servers to the DB and back.
Any suggestions?
EDIT: this post provides some clues, however I'm not entirely sure it's the proper way to proceed: NServiceBus - Problem with using TransactionScopeOption.Suppress in message handler
EDIT2: The reason we want the DB work outside the scope of the transaction is that the intent is to process these commands 'asynchronously' on another server, so as not to slow down the web site or make users wait for these long-running aggregation commands. If the DB is within the scope of the transaction, does that block execution on the web site at the point where the original command is fired to the distributor? Is there a better nServiceBus architecture for this scenario? We want the command to fire quickly and return control to the web site so the user can proceed, rather than having to wait for our longish-running DB work, which updates aggregate counts, sends emails, etc.
I wouldn't recommend having the DB work outside the context of the NServiceBus transaction. Instead, try reducing the isolation level of the transactions. This can be done by calling:
.IsolationLevel(System.Transactions.IsolationLevel.ReadCommitted)
in the fluent configuration. You'll have to put this after .MsmqTransport() in v2.6. In v3.0 you can put this call almost anywhere.
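For orientation, the call might sit in the fluent configuration roughly like this. Only MsmqTransport() and IsolationLevel() are taken from the answer above; the surrounding calls are a typical NServiceBus 2.x setup from memory and may differ in your project:

using System.Transactions;
using NServiceBus;

public class EndpointConfig
{
    public static IBus ConfigureBus()
    {
        return Configure.With()
            .DefaultBuilder()
            .XmlSerializer()
            .MsmqTransport()
                .IsTransactional(true)
                .IsolationLevel(IsolationLevel.ReadCommitted)
            .UnicastBus()
                .LoadMessageHandlers()
            .CreateBus()
            .Start();
    }
}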
RESPONSE TO EDIT2:
Just using NServiceBus will achieve your objective of not slowing down the website, regardless of the level of the transactions run on the other server. The use of transactions is to provide a guarantee that messages won't be lost in case of failure and also that you won't have to write your own deduplication logic.
I have an ASP.NET web service that leverages a long-lived connection from the client.
The client connects in and waits for 15 minutes for a response.
Just prior to 15 minutes, the ASP .NET Web Service responds with an OK.
The client repeats this connection establishment.
During the 15 minutes, the web service checks for a change in a field value in a record in a SQL table. If that value changes, it immediately sends a response to the client with ReadMessage. This checking/polling of the database is done every 30 seconds. This has several drawbacks:
It does not scale well. It works fine with 1 or 2 clients, but when you end up with 10,000 client connections that is a lot of polling on the database.
It leads to latency in processing, as it may take up to 30 seconds for the client to be notified.
What I would like is a way of notifying the web service, for each active HTTP client, that the record has been updated.
It should also be noted that each client connection to the web service has its own specific record in the table.
I think SqlDependency is what you are looking for. Query Notifications allow applications to receive a notice when the results of a query have changed.
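A minimal sketch of how SqlDependency could be wired up, assuming a hypothetical dbo.ClientRequests table with one row per client; note that query notifications need Service Broker enabled on the database and a query that follows the notification rules (explicit column list, two-part table names, no SELECT *):

using System;
using System.Data.SqlClient;

class ChangeWaiter
{
    // Placeholder connection string and table/column names.
    const string ConnStr = "Server=.;Database=MyDb;Integrated Security=true";

    public static void WatchClientRow(int clientId)
    {
        // Must be called once per AppDomain before any dependency is created.
        SqlDependency.Start(ConnStr);

        using (var conn = new SqlConnection(ConnStr))
        using (var cmd = new SqlCommand(
            "SELECT Status FROM dbo.ClientRequests WHERE ClientId = @clientId", conn))
        {
            cmd.Parameters.AddWithValue("@clientId", clientId);

            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (sender, e) =>
            {
                // Fires once when the watched result set changes; re-subscribe if
                // you need to keep watching. This is where you would complete the
                // waiting HTTP response with ReadMessage.
                Console.WriteLine("Change detected: {0}", e.Info);
            };

            conn.Open();
            // Executing the command registers the notification subscription.
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* read the current value if needed */ }
            }
        }
    }
}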
Have you considered setting up some triggers in the db? If you are using SQL Server you can use SQL Server CLR integration.
http://msdn.microsoft.com/en-us/library/ms254963%28v=VS.80%29.aspx
You could put a trigger on the table. Disclaimer: I try to stay away from triggers because it's very easy to write one poorly, and when it errors it's hard to debug. However, I haven't ever written a CLR trigger, and I imagine there's a little more safety in that, since you have more control over error handling.
But even better would be to have whatever process updates the table notify your web service of the change, if that's an option.
I am currently working on a project with specific requirements. A brief overview of these are as follows:
Data is retrieved from external webservices
Data is stored in SQL 2005
Data is manipulated via a web GUI
The windows service that communicates with the web services has no coupling with our internal web UI, except via the database.
Communication with the web services needs to be both time-based, and triggered via user intervention on the web UI.
The current (pre-pre-production) model for triggering web service communication is via a database table that stores trigger requests generated by the manual intervention. I do not really want to have multiple trigger mechanisms, but I would like to be able to populate the database table with triggers based on the time of the call. As I see it there are two ways to accomplish this.
1) Adapt the trigger table to store two extra parameters: one indicating whether the trigger is time-based or manually added, and a nullable field to store the timing details (exact format to be determined). If it is a manually created trigger, mark it as processed when the trigger has been fired, but not if it is a timed trigger.
or
2) Create a second windows service that creates the triggers on-the-fly at timed intervals.
The second option seems like a fudge to me, but the management of option 1 could easily turn into a programming nightmare (how do you know if the last poll of the table returned the event that needs to fire, and how do you then stop it re-triggering on the next poll?).
I'd appreciate it if anyone could spare a few minutes to help me decide which route (one of these two, or possibly a third, unlisted one) to take.
Why not use a SQL Job instead of the Windows service? You can encapsulate all of your DB "trigger" code in stored procedures. Then your UI and the SQL Job can call the same stored procedures and create the triggers the same way, whether manually or at a time interval.
The way I see it is this.
You have a Windows service which is playing the role of a scheduler, and in it there are some classes which simply call the web services and put the data in your databases.
So you can use these classes directly from the web UI as well, and import the data based on the web UI trigger.
I don't like the idea of storing a user-generated action as a flag (trigger) in the database, where some service will poll it (at an interval which is not under the user's control) to execute that action.
You could even turn the whole thing into an exe which you can then schedule using the Windows Scheduler, and call the same exe whenever the user triggers the action from the web UI.
@Vaibhav
Unfortunately, the physical architecture of the solution will not allow any direct communication between the components, other than web UI to database, and database to service (which can then call out to the web services). I do, however, agree that re-use of the communication classes would be ideal here - I just can't do it within the confines of our business*
*Isn't it always the way that a technically "better" solution is stymied by external factors?