I have MS SQL Server 2012 Express on one end and a Node.js web application on the other.
I need some way of tracking DML changes (insert, update or delete operations) to given tables.
Update Example:
If a record is updated, I need to send the updated record to my web application (this could be an HTTP POST or data written over TCP).
The MS SQL Server is firewalled to prevent direct access, and I don't want to poll the database.
How would I go about achieving this on the MS SQL Server end?
Consider posting to a message queue. Check out RabbitMQ; Node.js does have a RabbitMQ client.
Post a message to the queue when a row is updated.
Make your node webapp listen to that queue.
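Here is a minimal sketch of the listening side in TypeScript, assuming the amqplib client and a queue named table-changes (the broker URL and queue name are illustrative choices, not requirements):

    import * as amqp from 'amqplib';

    async function listenForChanges(): Promise<void> {
      // Broker URL is an assumption; point it at your RabbitMQ host.
      const conn = await amqp.connect('amqp://localhost');
      const channel = await conn.createChannel();
      const queue = 'table-changes';

      // Durable, so queued messages survive a broker restart.
      await channel.assertQueue(queue, { durable: true });

      await channel.consume(queue, (msg) => {
        if (msg !== null) {
          // Each message is assumed to carry the changed record as JSON.
          const record = JSON.parse(msg.content.toString());
          console.log('Row changed:', record);
          // ...update application state, notify connected browsers, etc.
          channel.ack(msg);
        }
      });
    }

    listenForChanges().catch(console.error);

SQL Server Express can't publish to RabbitMQ on its own, so the producing side is usually a trigger that writes changed rows to a staging table plus a small relay process that drains that table into the queue; the ActiveMQ answer below sketches that relay pattern.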
Related
We are using a SQL Server 2012 database server. Whenever the database is modified, we want it to trigger a message that can be stored in a queue using ActiveMQ.
1. We are not sure how to write a database trigger so that it sends a message.
2. Can the message generated by the database be queued in ActiveMQ directly, without any Java interface in between? I would like to know whether we can achieve this or not.
3. Are there any other ways to set up communication between SQL Server and ActiveMQ, say between database services and ActiveMQ services (does ActiveMQ have that)?
PS: I am a new user of ActiveMQ. Any leads to solve these queries are appreciated.
Please don't ask SQL Server to do this. SQL Server is designed to store data; you are asking too much of it. Depending on how many places you would want to add to this queue from, I would choose one of the following solutions:
If you want to add to this queue from a bunch of different places and don't want to change existing code, create an application to move items from SQL Server to ActiveMQ; the items in SQL Server can be populated by a trigger (see the sketch after these options).
If there are only a few places that add to this queue, add that logic to the application so that every write to SQL Server will also write to ActiveMQ.
If you really still don't want to modify any code, you can configure ActiveMQ to use SQL Server as its persistence database. Then you can modify its data and hope that it plays nice. This is definitely not preferable. I would rather put CLR code into SQL Server to push data to ActiveMQ.
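Here is a minimal sketch of the first option as a Node.js relay, using the mssql and stompit packages; the dbo.OutboxQueue table, its columns, the connection settings, and the /queue/db-changes destination are all hypothetical names chosen for illustration:

    import sql from 'mssql';
    import stompit from 'stompit';

    const POLL_MS = 1000; // how often to drain the outbox table

    stompit.connect(
      // Broker address and STOMP port are placeholders.
      { host: 'localhost', port: 61613, connectHeaders: { host: '/' } },
      async (error, client) => {
        if (error) {
          console.error('STOMP connect failed:', error);
          return;
        }

        // Publish one outbox row to the (illustrative) ActiveMQ destination.
        const publish = (body: string) => {
          const frame = client.send({
            destination: '/queue/db-changes',
            'content-type': 'application/json',
          });
          frame.write(body);
          frame.end();
        };

        // Connection settings are placeholders for this sketch.
        const pool = await sql.connect({
          server: 'localhost',
          database: 'AppDb',
          user: 'relay',
          password: 'secret',
          options: { trustServerCertificate: true },
        });

        setInterval(async () => {
          // Rows are assumed to be written by a DML trigger into
          // dbo.OutboxQueue (Id bigint identity, Payload nvarchar(max) JSON).
          const result = await pool.request()
            .query('SELECT TOP (50) Id, Payload FROM dbo.OutboxQueue ORDER BY Id');

          for (const row of result.recordset) {
            publish(row.Payload);
            // Delete only after handing the row to the broker, so a crash
            // re-sends a message rather than silently dropping it.
            await pool.request()
              .input('id', sql.BigInt, row.Id)
              .query('DELETE FROM dbo.OutboxQueue WHERE Id = @id');
          }
        }, POLL_MS);
      }
    );

The trigger on each watched table would INSERT one row per change into dbo.OutboxQueue; since the relay deletes a row only after it reaches the broker, a crash produces an occasional duplicate rather than a lost message, so consumers should be idempotent.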
I'm quite new to merge replication, but here is a scenario: if I have a server and two clients with pull subscriptions, does it matter if the time on those machines goes out of sync with each other or with the server?
When I modify some data on one of those clients, does it store the time against that change?
I am using MS SQL 2012.
I will be using a setup as described here: http://msdn.microsoft.com/en-us/library/ms151329(v=sql.105).aspx
OK, I've found that merge replication doesn't store the time against a specific change.
If you have client subscriptions then the first person to sync will win in the case of a conflict.
If you have server subscriptions then the priority of those server subscriptions will determine who will win.
Neither of these options rely on time.
I currently have a 3rd-party application in production that uses merge replication. This includes a SQL 2008 R2 database server as publisher with ~100 desktop clients interacting with the database via the application, plus another 65 laptops with SQL Server Express 2008 R2 as subscribers to the publisher. The laptops synchronize via wireless when they get to their home base locations, with any deltas being pushed to the subscribers.
Now to get to your question. Time on the clients, down to the millisecond, likely won't make any difference. What will make a difference is whether or not the publisher and subscriber are out of sync for more than a specific period of time. Using SQL Server Management Studio, look at the server publishing the database. Expand the Replication section (between Server Objects and Management), then, under Local Publications, right-click on the publication and select Properties.
Under the General page, look at the Subscription expiration area. The default from Microsoft is 14 days and can be moved up/down as you need, or you can specify that it doesn't expire at all. Note the phrase "Replication metadata is never kept longer than this amount of time."
In other words, if one of your subscribers is off the reservation for more than 14 days, nothing else matters: you will have to unsubscribe and then re-subscribe that unit. That's because of the gap between the oldest metadata on the publisher and the newest record/metadata on the subscriber. No overlap, no sync-up.
I'm currently trying to build a high-availability, load-balanced web application with SQL Server Replication Services technologies. Automatic fail-over is built into the application logic. Basically, there are two groups of application servers running the same application, each with its own SQL Server instance. They are set to use the other instance in case of failure. Data is continuously replicated between the two SQL Server instances via transactional replication. (A few seconds of lag exists; that's okay.)
I set up both servers so that the Distribution Agents run on the Distributor (= Publisher). My idea is that, as long as Server A (publisher) is working, it 'collects' the transactions and forwards them to Server B (subscriber) as soon as it's available, and the same for Server B. The default option (Distribution Agent on the subscriber) would 'lose' changes while the subscriber is offline. Am I right about that? UPDATE to this first question: transactions waiting to be delivered are stored in the distribution database, so they are "safe" anyway. This leads me to another question: if the Distribution Agent is on the subscriber, how will it know about new transactions to be delivered? Via frequent polling?
The two databases have the same schema (except for identity seed and increment). Each read-write table is cross-replicated. Why don't I see a loop when I insert a row into a table? When I read about bi-directional replication in documents or blogs, is this scenario what they mean, or rather updatable transactional replication?
What do you think about this scenario in general? This is my first time with replication, and I fear the risks. Therefore any comments are very welcome.
I have a WCF service that needs to notify its clients when changes occur to the database (SQL Server 2005). This is relatively easily accomplished, as long as I find a way to notify my service of any changes. I can probably create a database trigger on a table and have that trigger start a small service client that notifies my service, but I'm wondering if there's a better way to do this? Having the service poll the database for changes would be a viable solution, but I'm not sure of the best way to do it (and sending a notification to my service would be preferred).
As the relevant updates apply only to a certain part of the database, I was also wondering whether it's possible to link such a trigger (or other mechanism) to a database diagram.
All help is appreciated!
rinze
If your database is SQL Server 2005 or above you can try this solution: Remove polling for data changes from a WCF front end.
As a side note, never call external processes from a trigger, and don't make web calls from a trigger. It's a guaranteed recipe for disaster.
Update
For those interested in mixing Query Notifications with LINQ to SQL I recommend Using SQLDependency objects with LINQ.
Look at
SQL Server 2005 Query Notifications Tell .NET 2.0 Apps When Critical Data Changes
Change Notification with Sql Server 2008
How can I trigger a stored procedure in SQL Server 2005 based on emails arriving in an Exchange inbox (with POP3/IMAP enabled)? I'd rather not use Windows Services if possible, and use the SQL Server functionality instead.
Exchange has Event Sinks which could write data to the DB.
Sample: http://www.codeproject.com/KB/cs/csmanagedeventsinkshooks.aspx
Doing this using SQL Server somehow or a Windows Service would require polling for changes, which is less efficient; either you consume a lot of resources through intensive polling or you have some delay until you notice a new message. The event sinks are basically invoked right away, and depending on the sink you can even influence the message.