When exactly do we use WCF transactions?

I was trying to get some info on WCF transactions, and I did manage to find out how to use them. What I didn't get much info on is why/when to use them.
What is the difference between database transactions and WCF transactions? Is there any specific case where one of these approaches is preferred over the other?

By WCF transactions, what you are really asking about is Microsoft's implementation of the WS-AtomicTransaction web service extension standard.
Why/When to use them
Similar to using a database transaction to guarantee consistency within the database, WS-AtomicTransaction is used to guarantee consistency within a larger, distributed system, based on communication over SOAP 1.2 services. This distributed system may or may not include database writes, but more often than not it will.
Transactions propagated to the service from clients cause the internal code of the service to execute within the context of the client's transaction.
So, in the same way that a database transaction can wrap multiple database writes into a single unit of work, a WCF transaction can wrap multiple service calls into a single unit of work, so that a failure in one rolls back the others.
This, as you can imagine, is hugely costly from a resource perspective, so these kinds of cross-network transactions should rarely (if ever) be used, and only when absolutely necessary.
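To make that concrete, here is a minimal client-side sketch. The IOrderService/IBillingService contracts and operations are made up for illustration; in addition, the binding (e.g. wsHttpBinding) must have transactionFlow enabled and the service operations must enlist via TransactionScopeRequired for the flow to actually happen.

    using System.ServiceModel;
    using System.Transactions;

    // Hypothetical contracts; the operations opt in to receiving the caller's transaction.
    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract, TransactionFlow(TransactionFlowOption.Allowed)]
        void PlaceOrder(int orderId);
    }

    [ServiceContract]
    public interface IBillingService
    {
        [OperationContract, TransactionFlow(TransactionFlowOption.Allowed)]
        void ChargeOrder(int orderId);
    }

    public class OrderClient
    {
        public void PlaceOrderAtomically(IOrderService orders, IBillingService billing)
        {
            // Both remote calls become part of one distributed unit of work:
            // if ChargeOrder throws, the work done by PlaceOrder is rolled back too.
            using (var scope = new TransactionScope())
            {
                orders.PlaceOrder(42);
                billing.ChargeOrder(42);
                scope.Complete();   // vote to commit; disposing without Complete() aborts
            }
        }
    }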

Related

Are microservices compatible with an existing SQL database?

I'm creating a microservice architecture with Core, RabbitMQ, the strangler pattern ... but I have to use an existing SQL database (transaction requirement).
Doing some research, I haven't found much information about how to implement the SQL database, but I think it's impossible to do a transactional operation across different services at the same time.
1- Must every service have access to the entire database?
2- Is it a good idea to have a service exclusively for transactional operations?
3- Is SQL with microservices maybe too slow?
I don't know if a standard exists for this.
Thanks.
The whole point of microservices is about having small, independent services that are decoupled as much as possible.
Sharing a common database introduces very strong coupling, and is not recommended.
If two services need the same data, you could either (a) have a different database for each, and replicate the data, or (b) introduce a third service that is responsible for access to the database.
If you're looking for a bigger-scale distributed transaction across microservices, then you should look into things like sagas. Typically you'll have a coordinator ("process manager" in some literature) that tracks the various operations, and can compensate or cancel actions that have been performed if the transaction as a whole is bound to fail.
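As a very rough sketch of that idea (the step/compensation shape is made up for illustration; a real saga framework also persists its state and handles retries and timeouts):

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    // Minimal orchestrated-saga sketch: run steps in order; if one fails,
    // run the compensations of the already-completed steps in reverse.
    public sealed class SagaStep
    {
        public SagaStep(Func<Task> execute, Func<Task> compensate)
        {
            Execute = execute;
            Compensate = compensate;
        }

        public Func<Task> Execute { get; }
        public Func<Task> Compensate { get; }
    }

    public static class SagaCoordinator
    {
        public static async Task RunAsync(IEnumerable<SagaStep> steps)
        {
            var completed = new Stack<SagaStep>();
            try
            {
                foreach (var step in steps)
                {
                    await step.Execute();
                    completed.Push(step);
                }
            }
            catch
            {
                // Compensations can fail too; a real coordinator persists progress
                // and retries them rather than giving up here.
                while (completed.Count > 0)
                    await completed.Pop().Compensate();
                throw;
            }
        }
    }

The ordering and the reverse-order compensation are the essential parts; persistence, timeouts and making the compensations themselves idempotent is where real saga implementations spend their complexity.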
3- Is SQL with microservices maybe too slow?
What makes you think so?
There is nothing about SQL that makes it inadequate for microservices. Microservices may vary wildly in terms of what they do and what they require. SQL will be perfectly suitable for some microservices, and possibly not so suitable for others. It depends on the service.
It looks like you need distributed transactions in your system:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms681205(v=vs.85).aspx
There is also a nice book devoted to microservices. It covers distributed transactions and other patterns used in microservice-based apps:
http://shop.oreilly.com/product/0636920033158.do
1- Must every service have access to the entire database?
No. A microservice has its own schema related to the Aggregate Root / Service that it offers. If a service needs data belonging to another entity, it invokes the APIs provided by another microservice.
2- Is it a good idea to have a service exclusively for transactional operations?
No. Each microservice is a transaction boundary in its own right. Distributed transactions, particularly using 2PC, do not perform particularly well.
3- Is SQL with microservices maybe too slow?
I am not totally clear as to why you make such a statement.

NServiceBus DB insert duplicate

We have a data loader service that uses NServiceBus to insert data (if not already present) into a SQL DB. The queue is configured with a concurrency level > 1, as the amount of data to load might get huge. Since the concurrency level is > 1, it results in duplicate inserts. Is there a way to handle this within NServiceBus?
Note: We have already considered and ruled out creating thread-safe locks.
Generally speaking, there's no need to run the endpoint with a concurrency level of one. You also don't need to manage the threading or fiddle with concurrency/locks when it comes to NServiceBus. There are other factors in how the system needs to be designed to make it work:
Different transports have different levels of transaction support. Choose one that supports transactions; that way, if a message is retried, you won't end up with duplicated messages/data.
Try to design your system for idempotency. That means that even without transactions (not supported by the transport, or disabled in code), processing a message twice won't produce duplicate data/side effects, as sketched below. The 'how' requires better knowledge of the data you're dealing with and your domain.
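A minimal sketch of what an idempotent handler could look like, assuming NServiceBus 6+ style handlers; the LoadRecord message, the Records table and the InsertIfAbsentAsync helper are hypothetical:

    using System.Threading.Tasks;
    using NServiceBus;

    // Hypothetical message; the point is that processing it twice has no extra effect.
    public class LoadRecord : ICommand
    {
        public string RecordId { get; set; }
        public string Payload { get; set; }
    }

    public class LoadRecordHandler : IHandleMessages<LoadRecord>
    {
        public async Task Handle(LoadRecord message, IMessageHandlerContext context)
        {
            // "Insert if not already present" pushed down to the database, e.g.:
            //   INSERT INTO Records (Id, Payload)
            //   SELECT @id, @payload
            //   WHERE NOT EXISTS (SELECT 1 FROM Records WHERE Id = @id);
            // A unique key on Records.Id is the real safety net: a concurrent
            // duplicate then fails cleanly instead of being inserted twice.
            await InsertIfAbsentAsync(message.RecordId, message.Payload);
        }

        private Task InsertIfAbsentAsync(string id, string payload)
        {
            // Placeholder for the actual ADO.NET / ORM call described above.
            return Task.CompletedTask;
        }
    }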

Using an XADataSource or non-XA DataSource for JTA-based transactions in JPA

We are using JPA 1.0 for ORM-based operations and we want to have a JTA datasource for our application. We have only one database to which our application will connect.
We start our transaction boundary in the controller class and it goes down to the DAO layer: controller --> BOImpl --> DAO.
In the WebSphere Application Server admin console, when defining the datasource, should I use a non-XA datasource or an XA datasource?
My understanding is that for a single datasource I should not use an XA datasource.
Please let me know what I should use.
For a single resource (like a single DB) you indeed do not need an XA-datasource.
On the other hand, bear in mind that most JTA/JTS implementations actually recognize when there is only one resource participating in a transaction, so the overhead of XA is then minimal or none. There can also be additional participants in the transaction that you might not be thinking of now, like sending JMS messages.
But if you're really sure you only have 1 resource participating, you can safely go for non-XA.
I hope your doubt is clear by now, but here is more information on that just in case.
The typical XA resources are databases, messaging queuing products such as JMS or WebSphere MQ, mainframe applications, ERP packages, or anything else that can be coordinated with the transaction manager. XA is used to coordinate what is commonly called a two-phase commit (2PC) transaction. The classic example of a 2PC transaction is when two different databases need to be updated atomically. Most people think of something like a bank that has one database for savings accounts and a different one for checking accounts. If a customer wants to transfer money between his checking and savings accounts, both databases have to participate in the transaction or the bank risks losing track of some money.
The problem is that most developers think, "Well, my application uses only one database, so I don't need to use XA on that database." This may not be true. The question that should be asked is, "Does the application require shared access to multiple resources that need to ensure the integrity of the transaction being performed?" For instance, does the application use Java 2 Connector Architecture adapters or the Java Message Service (JMS)? If the application needs to update the database and any of these other resources in the same transaction, then both the database and the other resource need to be treated as XA resources.

Why does Quartz Scheduler (JobStoreCMT) require the use of two datasources?

I found this answer:
1. Long answer to Quartz requiring two data sources; however, if you want an even deeper answer, I believe I'll need to dig into the source code or do more research:
a. JobStoreCMT relies upon transactions being managed by the application which is using Quartz. A JTA transaction must be in progress before attempting to schedule (or unschedule) jobs/triggers. This allows the "work" of scheduling to be part of the application's "larger" transaction. JobStoreCMT actually requires the use of two datasources - one whose connections have their transactions managed by the application server (via JTA) and one whose connections do not participate in global (JTA) transactions. JobStoreCMT is appropriate when applications are using JTA transactions (such as via EJB Session Beans) to perform their work. (Ref: http://quartz-scheduler.org/documentation/quartz-1.x/configuration/ConfigJobStoreCMT)
However, there is a suspected conflict with a non-transactional driver in our particular application. Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
Does anyone know if Quartz (JobStoreCMT) can work with just a transactional data source?
No, you must have a datasource of each type. Invocations on the API by the client application use the XA-capable connections, so that the work joins the application's transaction. Work done by the scheduler's internal threads uses the non-XA connections.
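For reference, the two datasources appear in the Quartz configuration roughly like this (a sketch based on the JobStoreCMT configuration reference linked above; the datasource names and JNDI URLs are made up):

    # Sketch of a JobStoreCMT configuration
    org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
    org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate

    # Container-managed (JTA-enlisted) connections, used by the scheduling API calls
    org.quartz.jobStore.dataSource = managedDS
    org.quartz.dataSource.managedDS.jndiURL = jdbc/MyManagedDS

    # Non-managed connections, used by the scheduler's own worker threads
    org.quartz.jobStore.nonManagedTXDataSource = nonManagedDS
    org.quartz.dataSource.nonManagedDS.jndiURL = jdbc/MyNonManagedDS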

ESB + WCF, transactions over multiple services

I have an ESB (Aqualogic) that has a proxy.
This proxy will call 3 different services, and I have to put those 3 calls in a transaction scope...
The ESB doesn't have support for transactions...
Does anyone know a solution for that?
I am not familiar with Aqualogic, but in general I can say that what you want to do is very, very difficult.
If Aqualogic uses MSMQ for transport then you may have some form of support for transactions by using transactional queues. But that's only the start.
If you want to integrate WCF services with a transactional context, you need to set up support for the WS-Atomic protocol (see http://msdn.microsoft.com/en-us/library/ms729784.aspx and http://social.msdn.microsoft.com/Forums/en/wcf/thread/cae32545-6536-4631-b89f-54f55da62199). This is a serious pain in the butt.
Not just to configure it, but also to use it. Using WS-Atomic across servers means you need to activate MSDTC on all machines, and coordination between these MSDTC instances is very slow and prone to long timeouts.
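For a sense of what that involves on the WCF side, here is a sketch of the service-side opt-in (the IPaymentService contract is hypothetical, and the binding must additionally be configured with transactionFlow="true"):

    using System.ServiceModel;

    [ServiceContract]
    public interface IPaymentService
    {
        // Mandatory: the caller must flow its transaction (WS-AtomicTransaction)
        // over a binding that has transaction flow enabled.
        [OperationContract, TransactionFlow(TransactionFlowOption.Mandatory)]
        void Debit(int accountId, decimal amount);
    }

    public class PaymentService : IPaymentService
    {
        // The operation enlists in the flowed transaction and votes to commit
        // automatically when it returns without throwing.
        [OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
        public void Debit(int accountId, decimal amount)
        {
            // Database work done here joins the same distributed (MSDTC) transaction.
        }
    }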
It's a better bet to not expect to run everything in a single transaction, but to use a workflow that compensates for partial success/partial failure of your operation. See also http://msdn.microsoft.com/en-us/library/dd483319.aspx for an example.