I've been doing a lot of reading on this one, and some of the documentation doesn't seem to match reality. Some of the potential causes would be appropriate here; however, they apply only to SQL Server 2008 or earlier.
I define a transaction scope and use a number of different EF contexts (in different method calls) within it; all but one of them are used only for data reads. The final context is used to create and add some new objects, after which I call
context.SaveChanges()
IIS is running on one server. The database (SQL Server 2012) is running on another server (Windows Server 2012).
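The shape of the code is roughly this (the context, entity, and property names here are invented for illustration):

    using System.Linq;
    using System.Transactions;

    using (var scope = new TransactionScope())
    {
        // earlier method calls each use their own context for reads only
        using (var readContext = new MyEntities())
        {
            var existing = readContext.Orders.ToList();
        }

        // the final context creates and adds some new objects
        using (var writeContext = new MyEntities())
        {
            writeContext.Orders.Add(new Order());
            writeContext.SaveChanges();   // <-- the error is thrown here
        }

        scope.Complete();
    }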
When I execute this code, I receive the error:
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
Obviously, if I enable DTC on the IIS machine, this goes away. However, why should I need to?
This MSDN page: http://msdn.microsoft.com/en-us/library/ms229978.aspx states that a transaction is escalated to MSDTC when:
• At least one durable resource that does not support single-phase notifications is enlisted in the transaction.
• At least two durable resources that support single-phase notifications are enlisted in the transaction.
As I understand it, neither of these is the case here.
OK. I'm not entirely sure whether this should have been happening (according to the MS documentation), but I have figured out why it does, and the solution.
I'm using the ASP.NET membership provider and have two connection strings in my web.config. I thought the fact that they were pointing to the same DB was enough for them to be considered the same "durable resource".
However, I found that the membership connection string also had:
Connection Timeout=60;App=EntityFramework
whereas the Entity Framework connection string didn't.
Making the two connection strings identical meant that the transaction was no longer escalated to MSDTC.
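Presumably this is because identical connection strings share a connection pool, so the same physical connection can be reused across the contexts and the transaction stays local; different strings mean different pools, and therefore two separate enlistments. For illustration, after the fix both the membership connection string and the provider portion of the EF connection string carry exactly the same attributes, character for character (server/database names here are placeholders):

    Data Source=DbServer;Initial Catalog=AppDb;Integrated Security=True;Connection Timeout=60;App=EntityFramework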
Dear all,
I have scenarios where I have to perform updates on LightSwitch's intrinsic database and call some SQL stored procedures in the save pipeline, all in ONE TRANSACTION, so that if an error occurs in the LS save pipeline my stored procedure calls are rolled back.
The recommended way of doing this is to set up an ambient transaction in the SaveChanges_Executing event and to dispose of it in the SaveChanges_Executed and SaveChanges_ExecuteFailed events, as described in this article: http://www.codemag.com/Article/1103071
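The pattern from that article looks roughly like this (a sketch; the isolation level shown is illustrative):

    using System.Transactions;

    private TransactionScope _tx;

    partial void SaveChanges_Executing()
    {
        // open an ambient transaction so the intrinsic-database save and
        // the stored procedure calls all enlist in the same transaction
        _tx = new TransactionScope(TransactionScopeOption.Required,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted });
    }

    partial void SaveChanges_Executed()
    {
        _tx.Complete();   // commit: the save pipeline succeeded
        _tx.Dispose();
    }

    partial void SaveChanges_ExecuteFailed()
    {
        _tx.Dispose();    // no Complete() call, so everything rolls back
    }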
But this has two fatal problems:
1. It does not work when I publish the app on Azure, since distributed transactions are not supported there.
2. It throws an error when I try to save changes to ApplicationData sources using the ServerApplicationContext. The error is: The underlying provider failed on EnlistTransaction
Has anyone found a cleaner way to handle transactions in LightSwitch that works both on Azure and through ServerApplicationContext?
Thanks a lot
Currently, distributed transactions using MSDTC do not work against SQL Azure. They will work just fine against SQL Server in a VM running in Azure, however. MSDTC is tied to running on a domain controller, generally, and that doesn't make sense in the public cloud context. An alternative DTC is likely needed, but this is not something that has been publicly announced as of today.
I don't think LightSwitch is the main issue here (though perhaps it has some additional issue beyond what I have described).
I hope that at least explains why it doesn't work today - I wish I had a better answer for you, but right now it is not possible. The "workarounds" being used are to build applications that can be resilient to the commits happening on each side (or not) and recovering from the failed cases.
I am using SQL Server 2008 R2 to connect to a number of other servers of the same type from within triggers and stored procedures. These servers are geographically distributed around the world and it is vital that any errors in communication between the servers are logged along with the data that was supposed to be sent so the communication may be re-attempted at a later time. The servers are participating in an Observer pattern with one of the servers acting as the observer and handling routing of messages between the other servers.
I am looking for specific advice on how best to handle errors in this situation, particularly connectivity errors and any potential pitfalls to look out for when performing queries on remote servers.
If you are using a linked server and sending the data to the other server over the linked server connection, there is no inherent way to log these requests unless you add application logic to do so.
With a linked server, if one of the servers goes down, an error is thrown in the application logic; in your case, the stored procedure or trigger will fail, saying the server does not exist or is down.
To avoid this, we tend to use Service Broker, which implements queuing logic. With it you can keep your logging and also ensure that messages are delivered irrespective of server downtime (if a server is down, the message waits until it is read).
http://technet.microsoft.com/en-us/library/ms166104%28v=sql.105%29.aspx
Hope this helps
Linked servers may not be the best solution for the model you're trying to implement, since the resilience you require is very difficult to achieve in the case of a linked server communication failure.
The fundamental problem is that in the case of a linked server communication failure the database engine raises an error with a severity of 20, which is high enough to abort the currently executing batch - bypassing any error handling code in the batch (for example TRY...CATCH).
SQL 2005 and later include the procedure sp_testlinkedserver, which enables the availability of a linked server to be tested before attempting to execute commands; however, this doesn't get around problems caused by communication errors encountered during a command.
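From T-SQL the check is simply EXEC sp_testlinkedserver N'RemoteServerName'. From application code, a pre-check might look like this (the linked server name and connection string are placeholders):

    using System.Data;
    using System.Data.SqlClient;

    // Returns true if the linked server responds. sp_testlinkedserver raises
    // an error (caught here) when the remote server is unreachable.
    static bool IsLinkedServerAvailable(string connectionString, string linkedServer)
    {
        try
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("sys.sp_testlinkedserver", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@servername", linkedServer);
                conn.Open();
                cmd.ExecuteNonQuery();
                return true;
            }
        }
        catch (SqlException)
        {
            return false;
        }
    }

As noted above, though, a successful pre-check doesn't protect against a failure that occurs partway through a subsequent command.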
There are a couple of more robust options you could consider. One is Service Broker, which provides an asynchronous message queuing model. This isn't a perfect fit for the observer pattern, but the activation feature provides a means to implement push notifications from a central point. Since you mention messaging, the conversation model employed by Service Broker might suit your aims.
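As an illustration of that conversation model, enqueueing a message from application code might look like this (all Service Broker object names are hypothetical, and the message type, contract, queues, and services are assumed to exist already):

    using System.Data.SqlClient;

    // Enqueue a message on a Service Broker conversation. Delivery is
    // asynchronous and survives the target being temporarily offline.
    static void SendObserverMessage(string connectionString, string payload)
    {
        const string sql = @"
            DECLARE @handle UNIQUEIDENTIFIER;
            BEGIN DIALOG CONVERSATION @handle
                FROM SERVICE [//Observer/SourceService]
                TO SERVICE '//Observer/CentralService'
                ON CONTRACT [//Observer/Contract]
                WITH ENCRYPTION = OFF;
            SEND ON CONVERSATION @handle
                MESSAGE TYPE [//Observer/Message] (@payload);";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@payload", payload);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }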
The other option is transactional replication; this might be more suitable if the data flow is purely from the central server to the observers.
Quick overview of our topology:
Web sites send commands to an NServiceBus server, which accepts the commands and then publishes the appropriate pub/sub events. This service also has message handlers that can do some processing against the DB in response to a command, for instance:
1. A user registers on the web site.
2. The web site sends an NServiceBus command to the NServiceBus service on another server.
3. The NServiceBus server has a handler for that specific command type, which logs something to the database and sends a welcome email.
Since instituting this architecture we have started to get deadlocks on the DB. I have traced it down to MSDTC on the database server: if I turn that service OFF on the database server, NServiceBus starts throwing errors, which to me shows that NServiceBus has been enlisting the DB update in the transaction.
I don't want this to happen. I want to handle DB failures myself; I only want the transaction to ensure the message is delivered to my NServiceBus proxy service. I don't want a transaction spanning from the web all the way through two servers to the DB and back.
Any suggestions?
EDIT: This post provides some clues; however, I'm not entirely sure it's the proper way to proceed: NServiceBus - Problem with using TransactionScopeOption.Suppress in message handler
EDIT2: The reason we want the DB work outside the scope of the transaction is that the intent is to process these commands 'asynchronously' on another server, so as not to slow down the web site or make users wait for these long-running aggregation commands. If the DB is within the scope of the transaction, does that block execution on the web site at the point where the original command is fired to the distributor? Is there a better NServiceBus architecture for this scenario? We want the command to fire quickly and return control to the web site, so the user can proceed without waiting for our longish-running DB command, which updates aggregate counts, sends emails, etc.
I wouldn't recommend having the DB work outside the context of the NServiceBus transaction. Instead, try reducing the isolation level of the transactions. This can be done by calling:
.IsolationLevel(System.Transactions.IsolationLevel.ReadCommitted)
in the fluent configuration. You'll have to put this after .MsmqTransport() in v2.6. In v3.0 you can put this call almost anywhere.
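In context, a v2.6-style configuration might look something like this (a sketch; the surrounding calls are just a typical endpoint setup and may differ in your version):

    using NServiceBus;
    using System.Transactions;

    Configure.With()
        .DefaultBuilder()
        .XmlSerializer()
        .MsmqTransport()
            .IsolationLevel(IsolationLevel.ReadCommitted)
        .UnicastBus()
        .CreateBus()
        .Start();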
RESPONSE TO EDIT2:
Just using NServiceBus will achieve your objective of not slowing down the website, regardless of the level of the transactions run on the other server. The use of transactions is to provide a guarantee that messages won't be lost in case of failure and also that you won't have to write your own deduplication logic.
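In other words, the long-running work simply lives in a handler on the NServiceBus server; a sketch (message and handler names are invented for illustration):

    using System;
    using NServiceBus;

    public class RegisterUserCommand : IMessage
    {
        public Guid UserId { get; set; }
        public string Email { get; set; }
    }

    public class RegisterUserHandler : IHandleMessages<RegisterUserCommand>
    {
        public void Handle(RegisterUserCommand message)
        {
            // Runs on the NServiceBus server, inside the transaction that
            // NServiceBus manages there. The web site returned control to
            // the user as soon as Bus.Send(...) put the message on the
            // queue, so none of this work delays the web request.
            // ... update aggregate counts, log the registration, send the
            // welcome email, etc.
        }
    }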
We are using JPA 1.0 for ORM-based operations, and we want to have a JTA datasource for our application. We have only one database to which our application will connect.
We start our transaction boundary in the controller class, and it extends down to the DAO layer: controller --> BOImpl --> DAO.
In the WebSphere Application Server admin console, when defining the datasource, should I use a non-XA datasource or an XA datasource?
My understanding is that for a single datasource I should not use an XA datasource.
Please let me know which I should use.
For a single resource (like a single DB) you indeed do not need an XA datasource.
On the other hand, bear in mind that most JTA/JTS implementations actually recognize when there is only one resource participating in a transaction, so the overhead of XA would then be minimal or none. There can also be additional participants in the transaction that you might not be thinking about now, such as sending JMS messages.
But if you're really sure you only have 1 resource participating, you can safely go for non-XA.
I hope your doubt is clear by now, but here is more information just in case.
The typical XA resources are databases, messaging queuing products such as JMS or WebSphere MQ, mainframe applications, ERP packages, or anything else that can be coordinated with the transaction manager. XA is used to coordinate what is commonly called a two-phase commit (2PC) transaction. The classic example of a 2PC transaction is when two different databases need to be updated atomically. Most people think of something like a bank that has one database for savings accounts and a different one for checking accounts. If a customer wants to transfer money between his checking and savings accounts, both databases have to participate in the transaction or the bank risks losing track of some money.
The problem is that most developers think, "Well, my application uses only one database, so I don't need to use XA on that database." This may not be true. The question that should be asked is, "Does the application require shared access to multiple resources that need to ensure the integrity of the transaction being performed?" For instance, does the application use Java 2 Connector Architecture adapters or the Java Message Service (JMS)? If the application needs to update the database and any of these other resources in the same transaction, then both the database and the other resource need to be treated as XA resources.
I found this answer:
1. Long answer to Quartz requiring two data sources; however, if you want an even deeper answer, I believe I'll need to dig into the source code or do more research:
a. JobStoreCMT relies upon transactions being managed by the application which is using Quartz. A JTA transaction must be in progress before attempting to schedule (or unschedule) jobs/triggers. This allows the "work" of scheduling to be part of the application's "larger" transaction. JobStoreCMT actually requires the use of two datasources: one whose connections' transactions are managed by the application server (via JTA) and one whose connections do not participate in global (JTA) transactions. JobStoreCMT is appropriate when applications are using JTA transactions (such as via EJB Session Beans) to perform their work. (Ref: http://quartz-scheduler.org/documentation/quartz-1.x/configuration/ConfigJobStoreCMT)
However, there is a believed conflict with a non-transactional driver in our particular application. Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
No, you must have a datasource of each type. Invocations on the API by the client application use the connections that are XA-capable, so that the work joins the application's transaction. Work done by the scheduler's internal threads uses the non-XA connections.
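For reference, the corresponding JobStoreCMT configuration looks something like this (the datasource names and JNDI URLs are placeholders):

    org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
    # used for client API calls; connections are container-managed (XA/JTA)
    org.quartz.jobStore.dataSource = managedDS
    # used by the scheduler's internal threads; must NOT join JTA transactions
    org.quartz.jobStore.nonManagedTXDataSource = nonManagedDS

    org.quartz.dataSource.managedDS.jndiURL = jdbc/MyXADataSource
    org.quartz.dataSource.nonManagedDS.jndiURL = jdbc/MyNonXADataSource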