How to efficiently trigger a system command with a SQL query or table change?

I have a data conversion and caching service running as a self-hosted WCF service.
Right now it polls the database at short, constant intervals to update its data.
I think that's unnecessary. The data can change only if one of the tables changes, and when that happens depends on the system users' actions.
Setting a trigger on the specific tables is no problem, but I would need an action outside SQL Server to update my cache. My WCF service could perform the update when it receives a specific URI via HTTP, so all I need is a command in the table trigger that sends such a request. Is that even possible?
I'm thinking of a hack I used back in the day with HTTP requests: I held the HTTP response at the server until a data packet arrived from somewhere else, so there was no delay between polling requests and I got fully asynchronous, "real-time" updates.
Maybe that approach can be applied to SQL? I'm thinking of a query that blocks until it receives a signal. It would eventually time out, but that's good enough to try. So how do I signal and wait in SQL? By locking and unlocking a shared resource, like a cursor or a dummy table?
Any other options?
I need the cache update to run as infrequently as possible (it's pretty expensive, so once per minute is great), but I also need an immediate update when the data changes.

To answer your question, have you looked at xp_cmdshell?
https://msdn.microsoft.com/en-us/library/ms175046.aspx
However, the security/performance implications of such a decision could be non-trivial depending on your use case.
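For illustration only, a trigger could shell out and hit your service's update URI. This is just a sketch: it assumes xp_cmdshell has been enabled via sp_configure, that a command-line HTTP client (curl here) exists on the database server, and the table name and URI are made up. Bear in mind the call runs synchronously inside the user's transaction, so it will slow writes down:

CREATE TRIGGER trg_SourceTable_Changed
ON dbo.SourceTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- hypothetical endpoint on the WCF service that refreshes the cache
    EXEC xp_cmdshell 'curl http://localhost:8080/cache/refresh', no_output;
END;

A gentler variant is to have the trigger only insert a row into a small "cache dirty" table (or send a Service Broker message) and let a process outside the transaction make the HTTP call.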

Related

Best Way to Transmit LARGE data packages via SOAP web service

We are working with a .NET 3.5 app which is fast approaching legacy status. We have an existing SOAP service which reads records from our database and saves them to a third party MS SQL database, sending all the data rows in a single batch.
This has always worked fine, but recently we've taken on a much larger client than any we've had before, and they are transmitting much larger batches, so much so that they have begun to fail. We've upped the timeout and max memory sizes in IIS, and maxed out the maxRequestLength in the web.config, but we are still bumping up against size problems.
So, I understand that long term we should consider moving away from SOAP and into WCF, and plans for that are in the works. But in the meantime, we need a short-term fix for this new client. And of course, to make the business and sales people happy, we need it kinda quickly.
I'm wondering what the best-practice approach might be. Initially I'm thinking something like this, but I could be thinking inside the box too much:
Establish a benchmark number of records over which we don't want to attempt to sync all at once.
Before attempting to save the data, check the number of records against that benchmark.
If it's above it, break the transmission down into segments that are each below the benchmark: SELECT TOP 10000 * FROM table WHERE sent = false, etc., if the benchmark is 10000, then update sent to true for those records once submitted, and repeat (see the sketch below).
Obviously, this will slow the process down, so to handle the user experience, we may want to toss in a status bar so they can see the progress.
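Roughly the per-batch SQL I have in mind (table and column names are just placeholders; the app would call the web service between the SELECT and the UPDATE and repeat until no unsent rows remain):

-- grab the next batch that has not been sent yet
SELECT TOP (10000) *
FROM dbo.SyncRecords
WHERE sent = 0
ORDER BY Id;

-- once the web service call succeeds, mark that same batch as sent
-- (assumes nothing else flips the sent flag in between)
UPDATE dbo.SyncRecords
SET sent = 1
WHERE Id IN (SELECT TOP (10000) Id FROM dbo.SyncRecords WHERE sent = 0 ORDER BY Id);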
Am I on the right track?
In addition to the comments from John, you should consider whether you are solving the problem in the best way.
It looks like you are triggering a one-way sync between 2 databases by calling a web service. This approach leads to the timeout and memory problems that you are experiencing.
If your goal is a one-way sync, you could use a free framework such as Microsoft's Sync Framework: http://msdn.microsoft.com/en-US/sync

Good ways to decouple GUIs from SOAP/WS-API update/write calls?

Let's assume we have some configuration GUI that in its current form uses direct DB transactions to submit new configurations for more than one configurable component in a consistent manner.
Now let's move the data (DB) stuff behind some SOAP/WS API. The GUI has no direct DB access anymore. The transactional behaviour must remain, but the API should NOT be designed to explicitly accommodate the GUI form submissions. In fact, I don't even know how the new GUI will work or how the user input will be structured. Therefore I need to provide something like WS-AtomicTransaction on the API server side. However, there are (at least) two caveats:
The GUI is written in PHP: I don't think there is any WS-Transaction support in PHP available.
I don't want to keep DB transactions open on the server side while waiting for additional client requests.
Solutions I can think of:
using Camel's aggregation. However, that would make things more complicated in at least two ways:
You cannot use DB row ids of newly inserted rows in the subsequent calls inside the same transaction. You need to use some sort of symbolic back-referencing because there would be no communication between client and server while processing the aggregated messages.
call replies would not be immediate (or the immediate and separate reply to each single call would only be some sort of stub, i.e. not containing any useful information beyond "your message has been attached to TX xyz" -- if that's at all possible in the Camel aggregation case).
the two disadvantages of the previous solution make me think of request batches where possibly the WS standards provide means for referencing call results in subsequent calls inside the batch transaction. Is there any such thing already available? Maybe even as a PHP client?
trying to eliminate lock contention in the database by carefully using row-level locks etc. However, when inserting new elements, my guess is that usually pages and index pages need to be locked by the DB.
maybe some server-side persistence layer using optimistic locking? But again, that would not return any DB IDs back to the client before the final commit if DB writes would be postponed until the commit (don't know if that's possible at all).
What do YOU think?
Transactions are a powerful tool, and we easily get into a thinking pattern in which we see every problem as a nail we hit with this big hammer. I can relate to your confusion because I've experienced it myself. Unfortunately I have no better advice for you than to try not to think in terms of transactions but of atomic API calls.
When I think in terms of transactions, my thought pattern usually goes like this:
start transaction
read (repeat as required)
update (repeat as required)
commit/roll back
It takes some time to realize that we overuse this pattern. Actual conflicts are rare, and there are many other ways of dealing with them. Here is a commonly used one in APIs:
read and send data to client (atomic API call)
update data (on the client)
send original + updates back to the server (atomic API call)
start transaction (on server)
read
compare with original from client
if not same, return error (client should retry)
if same, update
commit
The last six points are part of the implementation of the API call.
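A minimal T-SQL sketch of the server-side compare-and-update step (names are invented; it assumes the table has a rowversion column that the client received with its original read and sends back with the update):

-- @configId, @newValue and @originalRowVersion are the values sent back by the client
UPDATE dbo.Configuration
SET    ConfigValue = @newValue
WHERE  ConfigId = @configId
  AND  RowVersion = @originalRowVersion;  -- "compare with original from client"

IF @@ROWCOUNT = 0
    RAISERROR('Row was modified by someone else; re-read and retry.', 16, 1);  -- client should retry

In the simplest case the whole compare-and-update collapses into this single atomic statement, so the server-side transaction stays very short.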
Ferenc Mihaly
http://theamiableapi.com

Best way to use a DB queue to make http calls and update the status in the db

I am using C#.NET; the DB is MS SQL 2008 R2.
I have a question that seems to have been asked a lot in the forums here. I want to use a database table as a queue... but the processing of these messages cannot be done from the database.
I have a table that stores the requests I get from a .NET component. I now have to read the data from these tables and make HTTP calls to 2 web services. Based on the response received from the web services, the data gets archived or deleted.
I had a few specific questions:
1. How do I make sure that if I pick a record for processing and the HTTP call fails, I can go on to the next record and then come back to this record at the end of the run?
2. Is there an alternative to using the database as a queue (like MSMQ, etc.)? Which option is better?
3. I want to maintain an audit trail of the record status. Is creating a trigger that logs the changes before the edit the best way to do it?
Regards
Leo
Use Service Broker!
I have been using it for a while and think it's a great thing, although it takes time to understand how it works. I used a book to learn it.
Service Broker solves:
concurrency
application state
... many, many other things
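To give a feel for it, a very condensed sketch (object names are made up, and the message types, contracts, and sending side are omitted):

-- one-time setup
CREATE QUEUE dbo.RequestQueue;
CREATE SERVICE RequestService ON QUEUE dbo.RequestQueue ([DEFAULT]);

-- worker loop body: blocks up to 5 seconds waiting for the next message
WAITFOR (
    RECEIVE TOP (1)
        conversation_handle,
        message_type_name,
        message_body
    FROM dbo.RequestQueue
), TIMEOUT 5000;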

SQL Server, using a table as a queue

I'm using SQL Server 2008 R2 as a queuing mechanism. I add items to the table, and an external service reads and processes these items. This works great, but is missing one thing - I need a mechanism whereby I can attempt to select a single row from the table and, if there isn't one, block until there is (preferably for a specific period of time).
Can anyone advise on how I might achieve this?
The only way to achieve a non-polling blocking dequeue is WAITFOR (RECEIVE). Which implies Service Broker queues, with all the added overhead.
If you're using ordinary tables as queues you will not be able to achieve non-polling blocking. You must poll the queue by asking for a dequeue operation, and if it returns nothing, sleep and try again later.
I'm afraid I'm going to disagree with Andomar here: while his answer works for the generic question 'are there any rows in the table?', when it comes to queueing, due to the busy nature of overlapping enqueue/dequeue, checking for rows like this is an (almost) guaranteed deadlock under load. When it comes to using tables as queues, one must always stick to the basic enqueue/dequeue operations and not try anything fancy.
"since SQL Server 2005 introduced the OUTPUT clause, using tables as queues is no longer a hard problem". A great post on how to do this.
http://rusanu.com/2010/03/26/using-tables-as-queues/
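The dequeue pattern described there looks roughly like this (table and column names are placeholders): a destructive read that takes the oldest pending row, skips rows locked by other readers, and returns it in one atomic statement:

WITH next_item AS (
    SELECT TOP (1) *
    FROM dbo.QueueTable WITH (ROWLOCK, READPAST)
    ORDER BY Id
)
DELETE FROM next_item
OUTPUT deleted.*;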
I need a mechanism whereby I can attempt to select a single row from the table and, if there isn't one, block until there is (preferably for a specific period of time).
You can loop and check for new rows every second:
while not exists (select * from QueueTable)
begin
waitfor delay '00:00:01'
end
Disclaimer: this is not code I would use for a production system, but it does what you ask.
The previous commenter who suggested using Service Broker likely had the best answer. Service Broker allows you to essentially block while waiting for more input.
If Service Broker is overkill, you should consider a different approach to your problem. Can you provide more details of what you're trying to do?
Let me share my experiences with you in this area, you may find it helpful.
My team first used MSMQ transactional queues that would feed our asynchronous services (be they IIS-hosted or WAS). The biggest problem we encountered was MS DTC issues under heavy load, like 100+ messages/second; all it took was one slow database operation somewhere to start causing timeout exceptions, and MS DTC would bring the house down, so to speak (transactions would actually become lost if things got bad enough). Although we're not 100% certain of the root cause to this day, we do suspect MS DTC in a clustered environment has some serious issues.
Because of this, we started looking into different solutions. Service Bus for Windows Server (the on-premise version of Azure Service Bus) looked promising, but it was non-transactional so didn't suit our requirements.
We finally decided on the roll-your-own approach, an approach suggested to us by the guys who built the Azure Service Bus, because of our transactional requirements. Essentially, we followed the Azure Worker Role model for a worker role that would be fed via some queue; a polling-blocking model.
Honestly, this has been far better for us than anything else we've used. The pseudocode for such a service is:
hasMsg = true
while (true)
    if (!hasMsg)
        sleep
    msg = GetNextMessage
    if (msg == null)
        hasMsg = false
    else
        hasMsg = true
        Process(msg)
We've found that CPU usage is significantly lower this way (lower than traditional WCF services).
The tricky part, of course, is handling transactions. If you'd like to have multiple instances of your service read from the queue, you'll need to employ read-past/updlock hints in your SQL, and also have your .NET service enlist in the transactions in a way that will roll back should the service fail. In this case, you'll want to go with retry/poison queues as tables in addition to your regular queues.
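For what it's worth, the kind of statement meant by "read-past/updlock" looks roughly like this (a sketch only: table and column names are invented, and the surrounding .NET transaction scope and poison-queue handling are omitted):

-- claim the next pending message; competing readers skip rows we have locked
UPDATE q
SET    Status = 'InProgress',
       AttemptCount = AttemptCount + 1
OUTPUT inserted.*
FROM  (SELECT TOP (1) *
       FROM dbo.MessageQueue WITH (ROWLOCK, UPDLOCK, READPAST)
       WHERE Status = 'Pending'
       ORDER BY Id) AS q;

A separate sweep can then move rows whose AttemptCount passes a threshold into the retry/poison tables mentioned above.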

MSMQ v Database Table

An existing process changes the status field of a booking record in a table, in response to user input.
I have another process to write, that will run asynchronously for records with a particular status. It will read the table record, perform some operations (including calls to third party web services), and update the record's status field to indicate that processing is completed (or In Error, with an error count).
This operation sounds very similar to a queue. What are the benefits and tradeoffs of using MSMQ over a SQL Table in this situation, and why should I choose one over the other?
It is our software that is adding and updating records in the table.
It is a new piece of work (a Windows Service) that will be performing the asynchronous processing. This needs to be "always up".
There are several reasons, which were discussed on the Fog Creek forum here: http://discuss.fogcreek.com/joelonsoftware5/default.asp?cmd=show&ixPost=173704&ixReplies=5
The main benefit is that MSMQ can still be used when there is intermittent connectivity between computers (using a store-and-forward mechanism on the local machine). As far as the application is concerned, it delivered the message to MSMQ, even though MSMQ will possibly deliver the message later.
You can only insert a record to a table when you can connect to the database.
A table approach is better when a workflow approach is required, and the process will move through various stages, and these stages need persisting in the DB.
If the rate at which booking records are created is low, I would have the second process periodically check the table for new bookings.
Unless you are already using MSMQ, introducing it just gives you an extra platform component to support.
If the database is heavily loaded, or you get a lot of lock contention with two process reading and writing to the same region of the bookings table, then consider introducing MSMQ.
I also like this answer from le dorfier in the previous discussion:
I've used tables first, then refactor to a full-fledged msg queue when (and if) there's reason - which is trivial if your design is reasonable.
Thanks, folks, for all the answers. Most helpful.
With MSMQ you can also offload the work to another server very easily by changing the location of the queue to another machine rather than the DB server.
By the way, as of SQL Server 2005 there is a built-in queue in the DB. It's called SQL Server Service Broker.
See : http://msdn.microsoft.com/en-us/library/ms345108.aspx
Also see previous discussion.
If you have MSMQ expertise, it's a good option. If you know databases but not MSMQ, ask yourself if you want to become expert in another technology; whether your application is a critical one; and which you'd rather debug when there's a problem.
I have recently been investigating this myself, so I wanted to mention my findings. The location of the database relative to your application is a big factor in deciding which option is faster.
I tested the time it took to insert 100 database entries versus logging the exact same data into a local MSMQ message queue, then took the average of the results of performing this test several times.
What I found was that when the database is on the local network, inserting a row was up to 4 times faster than logging to MSMQ.
When the database was being accessed over a decent internet connection, inserting a row into the database was up to 6 times slower than logging to MSMQ.
So: if the database is local, the DB is faster; otherwise, MSMQ is.
Instead of making raw MSMQ calls, it might be easier to implement your service as a queued COM+ component and make queued function calls from your client application. In the end, the asynchronous service still uses MSMQ in the background, but your code will be much clearer and easier to use.
I would probably go with MSMQ or ActiveMQ myself. Presuming that, since you are considering MSMQ, you are using Windows with MS technology, I would suggest looking into WCF, or, if you are using MS SQL 2005+, having a trigger that calls into .NET code to run your processing.
Service Broker was introduced in SQL 2005 and is designed to be very quick at handling messages, as the process is relatively simple (I believe its roots were in triggers). If you are concerned about scalability, in SQL 2008 they released an independent processing executable to separate the processing from SQL Server (in standard Service Broker, everything is controlled by the SQL Server instances).
I would definitely consider using Service Broker over MSMQ, but this is dependent on your SQL development/DBA resources and their knowledge.
Besides Mitch's answer, some other scenarios:
1. Each of your messages has its own due date to trigger the action. This can be done through MQ as well, but in this case I prefer to store it in the DB, as it is more controllable;
2. The subscriber needs to filter messages and then process only a portion of them. This could be done with LINQ too, but depending on how complex the filter is, the DB approach is better because I can use LINQ to EF to do complex queries easily;
3. For deployment, I want a fully automated deployment process, so the DB is a better choice for me. I am not a big fan of manual configuration.