LightSwitch: Distributed transactions on Azure - azure-sql-database

Hi all,
I have scenarios where I have to perform updates on the intrinsic database of LightSwitch and call some SQL stored procedures in the save pipeline, all in ONE TRANSACTION, such that if an error occurs in the LS save pipeline then my stored procedure calls are rolled back.
The recommended way of doing this is to set up an ambient transaction in the SaveChanges_Executing event and to dispose of it in the SaveChanges_Executed and SaveChanges_ExecuteFailed events, as described in this article: http://www.codemag.com/Article/1103071
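The pattern I'm using looks roughly like this (a sketch in the intrinsic data service's partial class; the method signatures are as I remember them from the article, and the isolation level is just my choice):

using System;
using System.Transactions;

public partial class ApplicationDataService
{
    private TransactionScope _scope;

    partial void SaveChanges_Executing()
    {
        // Open an ambient transaction so the LightSwitch save and my
        // stored procedure calls all enlist in the same transaction.
        _scope = new TransactionScope(
            TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted });
    }

    partial void SaveChanges_Executed()
    {
        // The save succeeded: vote to commit, then clean up.
        _scope.Complete();
        _scope.Dispose();
        _scope = null;
    }

    partial void SaveChanges_ExecuteFailed(Exception exception)
    {
        // The save failed: disposing without calling Complete() rolls back.
        if (_scope != null) { _scope.Dispose(); _scope = null; }
    }
}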
But this approach has two fatal problems:
1. It does not work when I publish the app to Azure, since distributed transactions are not supported there.
2. It throws an error when I try to save changes to the ApplicationData source through the ServerApplicationContext: "The underlying provider failed on EnlistTransaction".
Has anyone found a cleaner way to handle transactions in LightSwitch that works both on Azure and through the ServerApplicationContext?
Thanks a lot

Currently, distributed transactions using MSDTC do not work against SQL Azure. They work just fine against SQL Server in a VM running in Azure, however. MSDTC is generally tied to running on a domain controller, and that doesn't make sense in the public cloud context. An alternative DTC is likely needed, but nothing has been publicly announced as of today.
I don't think LightSwitch is the main issue here (though perhaps it has some additional issue beyond what I have described).
I hope that at least explains why it doesn't work today. I wish I had a better answer for you, but right now it is not possible. The workaround being used is to build applications that are resilient to the commit succeeding on one side but not the other, and that can recover from the failed cases.
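To illustrate the shape of that workaround, a compensation-based save looks something like the sketch below. Every name here is a placeholder for your own logic, not a LightSwitch or Azure API:

using System;

public class CompensatingSave
{
    public void Save()
    {
        SaveLocal();              // commit #1: the intrinsic database
        try
        {
            CallRemoteProc();     // commit #2: the stored procedure's work
        }
        catch
        {
            // The remote side failed after the local commit succeeded, so
            // run a compensating action (or record the failure for a retry
            // job) instead of relying on a distributed rollback.
            UndoLocal();
            throw;
        }
    }

    private void SaveLocal() { /* your local save */ }
    private void CallRemoteProc() { /* your stored procedure call */ }
    private void UndoLocal() { /* compensating update or retry-queue entry */ }
}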

Related

Peer-to-peer replication fails due to missing sp_MS* stored procedures

We've had a two-node peer-to-peer (P2P) replication setup running for almost a year, but for some reason it was recently marked as inactive, forcing us to rebuild it. Unfortunately, no matter how I try to rebuild it, I keep getting the same error.
Basically, once P2P is configured it instantly fails because it can't find some auto-created "sp_MS[upd|ins|del]*******" stored procedures. While investigating (and selecting directly from sys.procedures) I found that these stored procedures are only created when the DB becomes a subscriber as a P2P node, similar to transactional replication. Since every DB in P2P is both a publisher and a subscriber, the fact that these were missing is strange, especially considering it clearly succeeds in creating them on one of the DBs but not the other. It would appear that one of the DBs does not get set up as a subscriber, but I receive no error or indication of this (in fact, Management Studio does show two subscriptions). I have tried pretty much everything I could think of: changing which DB is configured first, completely disabling distribution and re-creating it, etc., but still no luck.
Has anyone else run into this, or does anyone have suggestions as to what I should try next?
Thanks in advance!
The issue is that the database the P2P backup was taken from was modified before it was restored on the new node. This happened even when the option indicating that the database had been changed after the backup was used, which is likely a bug. The "solution" is to ensure that the original DB is not modified at all until replication is back up and running.
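For anyone hitting the same thing, a quick way to see which node is missing the auto-generated procedures is to count them on each node, for example with a sketch like this (connection strings are placeholders):

using System;
using System.Data.SqlClient;

class CheckReplProcs
{
    static void Main()
    {
        // Count the auto-created replication procedures on each P2P node
        // to see which subscriber was never actually set up.
        foreach (var cs in new[] {
            "Server=NodeA;Database=MyDb;Integrated Security=true",
            "Server=NodeB;Database=MyDb;Integrated Security=true" })
        {
            using (var conn = new SqlConnection(cs))
            using (var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM sys.procedures WHERE name LIKE 'sp_MSupd%' " +
                "OR name LIKE 'sp_MSins%' OR name LIKE 'sp_MSdel%'", conn))
            {
                conn.Open();
                Console.WriteLine("{0}: {1} replication procedures",
                    conn.DataSource, cmd.ExecuteScalar());
            }
        }
    }
}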

Best practice: handling errors in linked servers

I am using SQL Server 2008 R2 to connect to a number of other servers of the same type from within triggers and stored procedures. These servers are geographically distributed around the world and it is vital that any errors in communication between the servers are logged along with the data that was supposed to be sent so the communication may be re-attempted at a later time. The servers are participating in an Observer pattern with one of the servers acting as the observer and handling routing of messages between the other servers.
I am looking for specific advice on how best to handle errors in this situation, particularly connectivity errors and any potential pitfalls to look out for when performing queries on remote servers.
If you are using a linked server and sending the data to the other server over the linked server connection, there is no inherent way to log these requests unless you add application logic to do so.
With a linked server, if one of the servers goes down, an error is thrown in the calling code; in your case the stored procedure or the trigger will fail, saying the server does not exist or is down.
To avoid this, we use Service Broker, which implements queuing logic. That way you can keep your logging and also ensure that messages will be delivered regardless of server downtime (if the server is down, the message waits in the queue until it is read).
http://technet.microsoft.com/en-us/library/ms166104%28v=sql.105%29.aspx
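As a sketch of what the send side can look like from application code: the Service Broker object names below are hypothetical and would need to be created beforehand, but the point is that SEND succeeds locally and the message waits in the queue if the other server is down (in production you would also END CONVERSATION once the dialog is finished):

using System.Data.SqlClient;

public static class BrokerSender
{
    public static void EnqueueUpdate(string connectionString, string payload)
    {
        // Open a dialog and queue one message; delivery is asynchronous
        // and survives downtime of the remote server.
        const string sql = @"
            DECLARE @h UNIQUEIDENTIFIER;
            BEGIN DIALOG CONVERSATION @h
                FROM SERVICE [//Local/SenderService]
                TO SERVICE   '//Observer/ReceiverService'
                ON CONTRACT  [//Contracts/UpdateContract]
                WITH ENCRYPTION = OFF;
            SEND ON CONVERSATION @h
                MESSAGE TYPE [//Messages/Update]
                (CAST(@payload AS VARBINARY(MAX)));";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@payload", payload);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}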
Hope this helps
Linked servers may not be the best solution for the model you're trying to implement, since the resilience you require is very difficult to achieve in the case of a linked server communication failure.
The fundamental problem is that in the case of a linked server communication failure the database engine raises an error with a severity of 20, which is high enough to abort the currently executing batch - bypassing any error handling code in the batch (for example TRY...CATCH).
SQL 2005 and later include the procedure sp_testlinkedserver, which enables the availability of a linked server to be tested before attempting to execute commands; however, this doesn't get around problems caused by communication errors encountered during a command.
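If you do stay with linked servers, one workable pattern is to test availability up front and put the error handling in the calling application, where the severity-20 error can actually be caught. A sketch (the linked server name is a placeholder):

using System;
using System.Data;
using System.Data.SqlClient;

public static class LinkedServerCheck
{
    public static bool IsAvailable(SqlConnection conn, string linkedServer)
    {
        try
        {
            // sp_testlinkedserver raises an error if the server is unreachable.
            using (var cmd = new SqlCommand("sp_testlinkedserver", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@servername", linkedServer);
                cmd.ExecuteNonQuery();
                return true;
            }
        }
        catch (SqlException ex)
        {
            // Log the failure (and the payload to be sent) here so the
            // communication can be re-attempted later.
            Console.Error.WriteLine("Linked server '{0}' unavailable: {1}",
                linkedServer, ex.Message);
            return false;
        }
    }
}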
There are a couple of more robust options you could consider. One is Service Broker, which provides an asynchronous message-queuing model. This isn't a perfect fit for the observer pattern, but the activation feature provides a means to implement push notifications from a central point. Since you mention messaging, the conversation model employed by Service Broker might suit your aims.
The other option is transactional replication; this might be more suitable if the data flow is purely from the central server to the observers.

Using transactions with EF4.1 and SQL 2012 - why is DTC required?

I've been doing a lot of reading on this one, and some of the documentation doesn't seem to relate to reality. Some of the potential causes I found would be appropriate here; however, they only relate to SQL Server 2008 or earlier.
I define a transaction scope. I use a number of different EF contexts (in different method calls) within the transaction scope; however, all but one of them are used only for data reads. The final use of a context is to create and add some new objects to the context, and then call
context.SaveChanges()
IIS is running on one server. The DB (SQL 2012) is running on another server (Windows Server 2012).
When I execute this code, I receive the error:
"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."
Obviously, if I enable DTC on the IIS machine, this goes away. However why should I need to?
This: http://msdn.microsoft.com/en-us/library/ms229978.aspx
states that a transaction is escalated to MSDTC when:
• At least one durable resource that does not support single-phase notifications is enlisted in the transaction.
• At least two durable resources that support single-phase notifications are enlisted in the transaction.
As I understand it, neither of those is the case here.
OK, I'm not entirely sure this should have been happening (according to the MS documentation), but I have figured out why, and the solution.
I'm using the ASP.NET membership provider and have two connection strings in my web.config. I thought the fact that they were pointing to the same DB was enough for them to be considered the same "durable resource".
However, I found that the membership connection string also had:
Connection Timeout=60;App=EntityFramework
whereas the Entity Framework connection string didn't.
Making the two connection strings identical meant that the transaction is no longer escalated to MSDTC.
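To illustrate the mechanics (this is a sketch, not my exact code; the connection string is a placeholder): sequential connections inside one TransactionScope whose strings are byte-for-byte identical come out of the same pool and reuse a single physical connection, so the transaction stays local. If the strings differ at all - for example, one carries App=EntityFramework and the other doesn't - two durable resources enlist and the transaction escalates to MSDTC.

using System;
using System.Data.SqlClient;
using System.Transactions;

class EscalationDemo
{
    // Both operations must use this exact string, or escalation occurs.
    const string Cs = "Server=dbserver;Database=MyDb;Integrated Security=true;"
                    + "Connection Timeout=60;App=EntityFramework";

    static void Main()
    {
        using (var scope = new TransactionScope())
        {
            Run("SELECT 1");   // e.g. the membership provider's read
            Run("SELECT 2");   // e.g. the EF context's SaveChanges work
            scope.Complete();  // commits without involving MSDTC
        }
    }

    static void Run(string sql)
    {
        using (var conn = new SqlConnection(Cs))   // same string both times
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}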

Why does Quartz Scheduler (JobStoreCMT) require the use of two datasources?

I found this answer:
1. The long answer to why Quartz requires two data sources; if you want an even deeper answer, I believe I'll need to dig into the source code or do more research:
a. JobStoreCMT relies upon transactions being managed by the application which is using Quartz. A JTA transaction must be in progress before an attempt to schedule (or unschedule) jobs/triggers. This allows the "work" of scheduling to be part of the application's "larger" transaction. JobStoreCMT actually requires the use of two datasources: one that has its connections' transactions managed by the application server (via JTA) and one datasource whose connections do not participate in global (JTA) transactions. JobStoreCMT is appropriate when applications are using JTA transactions (such as via EJB Session Beans) to perform their work. (Ref: http://quartz-scheduler.org/documentation/quartz-1.x/configuration/ConfigJobStoreCMT)
However, there is a suspected conflict with a non-transactional driver in our particular application. Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
No, you must have a datasource of each type. Invocations of the API by the client application use the XA-capable connections, so that the work joins the application's transaction. Work done by the scheduler's internal threads uses the non-XA connections.
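For reference, the corresponding configuration looks something like this (property names are per the Quartz 1.x documentation linked above; the datasource details are placeholders):

# JobStoreCMT needs both a container-managed (JTA) datasource and a
# non-managed one for the scheduler's own internal threads.
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_

# JTA-managed datasource: client API calls join the application's transaction.
org.quartz.jobStore.dataSource = managedDS
org.quartz.dataSource.managedDS.jndiURL = jdbc/quartzManaged

# Non-XA datasource: used by the scheduler's internal worker threads.
org.quartz.jobStore.nonManagedTXDataSource = nonManagedDS
org.quartz.dataSource.nonManagedDS.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.nonManagedDS.URL = jdbc:mysql://localhost:3306/quartz
org.quartz.dataSource.nonManagedDS.user = quartz
org.quartz.dataSource.nonManagedDS.password = secret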

Using SQL Server Replication

We are using replication and seem to be having endless problems with it. It shuts down for unknown reasons. It needs to be shut down to remove a column, and it only starts back up half the time. Does anyone have any advice on how to properly use replication, or some alternatives to it?
Edit:
We are using SQL Server 2005. We cannot use database mirroring because we use the other database for reporting; as far as I am aware, you cannot query a mirrored database.
If you need just a couple of tables from your DB for reports, replication is more useful, but you can also set up log shipping with the secondary server in STANDBY mode (especially if you need a significant part of your data for reports) and run your reports on the secondary server. Just remember that log shipping will interfere with transaction log backups, so you have to use the same folder of log backup files for both processes.
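For example, the restore job on the secondary applies each log backup WITH STANDBY so the database stays readable between restores; a sketch (database, file paths, and names are placeholders):

using System.Data.SqlClient;

public static class LogShipRestore
{
    public static void ApplyLogBackup(string connectionString)
    {
        // Restore the next log backup, leaving the database in read-only
        // STANDBY mode so reports can run against it between restores.
        const string sql = @"
            RESTORE LOG MyDb
            FROM DISK = 'E:\LogShip\MyDb_latest.trn'
            WITH STANDBY = 'E:\LogShip\MyDb_undo.dat';";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}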
I would think the combination of database mirroring and database snapshots will solve your issues.
First, database mirroring is very easy to set up, and I have never had any problems with it (I've been using it for the past 4+ years).
Second, creating a database snapshot on your failover server will allow you to run reports. You can set up a SQL Agent job to drop and re-create the snapshot on whatever interval is acceptable to you.
Of course, this all depends on whether you need your reports to run against real-time data or whether they can be somewhat delayed.
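The job step itself is just a drop and re-create of the snapshot, something like the sketch below (database, logical file, and path names are placeholders):

using System.Data.SqlClient;

public static class SnapshotRefresh
{
    public static void RefreshReportingSnapshot(string connectionString)
    {
        // Drop the previous reporting snapshot (if any) and create a fresh
        // one against the mirror so reports see recent data.
        const string sql = @"
            IF DB_ID('MyDb_Reporting') IS NOT NULL
                DROP DATABASE MyDb_Reporting;

            CREATE DATABASE MyDb_Reporting
                ON (NAME = MyDb_Data,   -- logical name of the source data file
                    FILENAME = 'D:\Snapshots\MyDb_Reporting.ss')
                AS SNAPSHOT OF MyDb;";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}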
Here is a list of the problems I have had to resolve to get replication working:
1) Replication sometimes lies to me and reports the following even when it's working fine: "The server 'Bob' is not a Subscriber. (.Net SqlClient Data Provider)". I have tried to re-initialise it thinking it was broken, and it never was...
2) It can take a little while to restart itself, especially if your remote DB is on the other side of the planet, which it is in my case. If you are on a slow network connection, or it is not 100% reliable, you can have problems. Also, the jobs which restart the process can sometimes take a while to run, which delays things further.
3) Some changes require a full re-initialisation, which involves sending out a new snapshot. If you don't have your permissions quite right, you may find you can re-initialise manually but it doesn't happen automatically, which can be another cause of problems.
We have a SQL transactional replication setup which runs perfectly happily. You seem to be saying that it is when you make schema changes to the publisher that you get problems. Each time we do a schema change, we drop the publication, the subscription, and the subscription database; make the change; then re-build it all. We can do this because we can tolerate the time it takes to re-apply the snapshot. There are ways to apply schema changes to the publication and have them propagate to the subscriber; take a look at sp_register_custom_scripting. We have made this work once, so I can give you more information about it if you need.
As @Jason says, you can report from a mirrored database by using a snapshot. Be aware that the snapshot will take up space and cause more work for the mirror server, although how much space depends on how much data is changing and how big your original database is. We use a snapshot on a mirrored database for occasional reports because our entire database is not replicated.
Log shipping: http://msdn.microsoft.com/en-us/library/ms187103.aspx
What version of SQL Server are you using?
We're using replication now for a particular solution, and it seems to just work, day in, day out.
I would examine your event logs and SQL Server logs to see if you can determine why it is shutting down and why it doesn't start up.
Are you possibly patching the servers, or are you having network errors?
The alternatives to replication are log shipping, or database mirroring.
I personally prefer Database Mirroring, but it really depends what you're trying to do, as some of these aren't appropriate for certain situations.
We have also used SQL transactional replication. We had the same pains with updating the schema, which requires dropping the publication on all servers, performing the updates, then reinitializing replication and hoping for the best. Sometimes it would not initialize, or a node would fall behind and we'd get little warning of it. A few times we even lost all the stored procedure execute permissions, causing pretty much total failure on the websites.
We have a rather large database so reinitialization could take quite some time, meaning all updates had to be done at 2am on Sunday - not exactly when we're awake and alert and able to use all our faculties to deal with a problem that might arise.
We are ditching replication in favor of failover clustering on SQL 2008, but it can still be done all the way back to SQL 2000.
http://technet.microsoft.com/en-us/library/cc917693.aspx