I have a WCF service that I'm trying to make transactional. When I call the service using a transaction, the transaction is not being used and if I roll back the transaction, my database updates happen anyway.
This is the service interface (I'm only including the single method that I've been testing with):
[ServiceContract(SessionMode=SessionMode.Required)]
public interface IUserMenuPermissionService
{
[OperationContract]
[WebInvoke(Method = "POST", RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
[TransactionFlow(TransactionFlowOption.Allowed)]
void InsertUserMenuPermission(string userId, string menuId, int permission);
}
The service implementation is:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class UserMenuPermissionService : IUserMenuPermissionService
{
private static IUserMenuPermissionProvider _provider = new CoreDataFactory().GetUserMenuPermissionProvider();
[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = false)]
public void InsertUserMenuPermission(string userId, string menuId, int permission)
{
_provider.InsertUserMenuPermission(userId, menuId, permission);
}
}
The actual underlying provider isn't doing anything with transactions directly, but at this point, that's not my issue, as will become clear in a moment.
The WCF service's app.config has:
<bindings>
<wsHttpBinding>
<binding name="TransactionBinding" transactionFlow="True" >
</binding>
</wsHttpBinding>
</bindings>
...
...
<service name="GEMS.Core.WCFService.UserMenuPermissionService">
<endpoint address="" bindingConfiguration="TransactionBinding" binding="wsHttpBinding" contract="SvcProvider.Core.WCFService.IUserMenuPermissionService">
<identity>
<dns value="localhost" />
</identity>
</endpoint>
<endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
<host>
<baseAddresses>
<add baseAddress="http://localhost/SvcProvider.WebService/SvcProvider.Core.WCFService/UserMenuPermissionService/" />
</baseAddresses>
</host>
</service>
The client is an ASP.NET MVC web app. Its web.config has:
<wsHttpBinding>
<binding name="TransactionBinding" transactionFlow="true" />
</wsHttpBinding>
....
....
<endpoint address="http://localhost/GEMS.WebService/SvcProvider.Core.WCFService.UserMenuPermissionService.svc"
binding="wsHttpBinding" bindingConfiguration="TransactionBinding"
contract="UserMenuPermissionService.IUserMenuPermissionService"
name="WsHttpBinding_IUserMenuPermissionService" />
When I call the web service from the client, I do it inside of a transaction scope. If I look at System.Transactions.Transaction.Current, it is set to a valid System.Transactions.Transaction and it has a DistributedIdentifier set.
When I'm in the WCF service, however, System.Transactions.Transaction.Current is set to null.
So it appears my transaction is not being passed along to the service. I want transactions to be optional, but when there's a transaction, obviously, I want it to be used.
Have I missed a step?
Update
As per the comments from BNL, I have now made TransactionScopeRequired = true (updated code above to reflect that).
If I call methods without a transaction scope in the client, they appear to operate just fine (I'm assuming the service creates a transaction).
If I create a transaction scope on the client and try to call the method, the first time I call it, it hangs for about 55 seconds and then I get a transaction aborted exception. I set the transaction timeout to 3 minutes (since the default is 1 minute), but the abort continued to happen at about 55 seconds. The delay seems to be on the client side: it's 55 seconds after calling the autogenerated client proxy method before the WCF service actually gets called. Subsequent calls do not have the 55 second delay. The code is as follows:
using (TransactionScope ts = new TransactionScope(TransactionScopeOption.RequiresNew, new TimeSpan(0, 3, 0)))
{
_userMenuPermissionManager.SetPermissions(clientCode, menuPermissions);
ts.Complete();
}
The exception happens in the disposal, not in the ts.Complete() call.
at System.Transactions.TransactionStatePromotedAborted.PromotedTransactionOutcome(InternalTransaction tx)
at System.Transactions.TransactionStatePromotedEnded.EndCommit(InternalTransaction tx)
at System.Transactions.CommittableTransaction.Commit()
at System.Transactions.TransactionScope.InternalDispose()
at System.Transactions.TransactionScope.Dispose()
at WebApp.Controllers.AdminController.SetUserMenuPermissions(String clientCode, MenuPermission[] permissions) in Controllers\AdminController.cs:line 160
The service is being called and the transaction is being passed to it. According to the service trace log, there are no errors. Everything seems to go smoothly, but at the end, it receives an abort from the client and I guess rolls back the transaction.
The client service log shows nothing unusual that would explain the 55 second delay either.
So while I'm a bit further along (thanks BNL), I'm still not entirely there.
Update 2
I added:
[ServiceBehavior(TransactionTimeout = "00:03:00", InstanceContextMode = InstanceContextMode.PerSession)]
to the service implementation, and the 55 second delay turned into a 2 minute and 55 second delay, so it appears that the transaction is timing out and THEN the service method is getting called (with the transaction); the service does all its work, and then the client sends an abort at the end. Subsequent calls under separate TransactionScopes abort immediately...
Update 3
It appears the timeout was being caused by an earlier call that I had missed, which wasn't happening within an explicit transaction; because I don't commit transactions automatically, it was hanging on an uncommitted transaction. I now have all my calls wrapped in transactions. The very first call is now aborting immediately, but there are still no signs of any problems in the service or client trace logs. There's no inner exception, nothing. Just an abort...
Update 4
In addition to BNL's answer, apparently using TransactionAutoComplete = false is BAD. Beyond that I can't really say why it was the problem, but setting it to true fixed my issues and still allows the client to properly commit or roll back transactions (I was under the impression this was not the case with TransactionAutoComplete = true).
In your second code block, you have TransactionScopeRequired = false.
If you look at the Remarks section of this document, you can determine that you won't have a transaction.
http://msdn.microsoft.com/en-us/library/system.servicemodel.operationbehaviorattribute.transactionscoperequired.aspx
TransactionScopeRequired         = false
Binding permits transaction flow = true
Caller flows transaction         = true
Result: Method executes without a transaction.
What is oddly not specified is what happens if both of the first two columns are true but the client doesn't flow a transaction. It looks like the service will create a new transaction, but I'm not 100% sure of that.
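For reference, a hedged sketch of the attribute combination that, per Update 4 above, ended up working: TransactionScopeRequired = true so a flowed transaction is joined (or a local one created), and TransactionAutoComplete = true so the scope completes when the method returns without throwing.

[OperationBehavior(TransactionScopeRequired = true, TransactionAutoComplete = true)]
public void InsertUserMenuPermission(string userId, string menuId, int permission)
{
    // Enlists in the caller's flowed transaction when one is present;
    // otherwise WCF creates a transaction just for this call.
    _provider.InsertUserMenuPermission(userId, menuId, permission);
}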
Related
I have a Silverlight application. When I call a WCF service that takes longer than two minutes, I get the error "The remote server returned an error: NotFound", but execution continues on the server and finishes perfectly. I know that because of the results in the database tables and because I get an email at the end of the run.
On the client:
cliente.InnerChannel.OperationTimeout = new TimeSpan(1, 0, 0);
On the server:
<binding name="MyBasicHttpBinding" closeTimeout="02:30:00" openTimeout="02:30:00" receiveTimeout="02:30:00" sendTimeout="02:30:00" maxBufferPoolSize="2147483647"
maxReceivedMessageSize="2147483647" maxBufferSize="2147483647">
If the execution takes less than two minutes, I get a response. I already tested with a loop that waited 2 minutes before returning, and it worked perfectly. By increasing the loop time to three minutes I get the same error: "The remote server returned an error: NotFound".
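One thing worth ruling out (a guess, not a confirmed fix): OperationTimeout on the inner channel is separate from the client binding's own timeouts, which default to one minute. A minimal sketch of raising both in code on the Silverlight side; the MyServiceClient type and the address are placeholders:

// Hypothetical client setup: raise the binding's timeouts as well as
// the channel's OperationTimeout.
BasicHttpBinding binding = new BasicHttpBinding();
binding.OpenTimeout = TimeSpan.FromMinutes(150);
binding.CloseTimeout = TimeSpan.FromMinutes(150);
binding.SendTimeout = TimeSpan.FromMinutes(150);
binding.ReceiveTimeout = TimeSpan.FromMinutes(150);
binding.MaxReceivedMessageSize = int.MaxValue;
MyServiceClient cliente = new MyServiceClient(binding, new EndpointAddress("http://.../MyService.svc"));
cliente.InnerChannel.OperationTimeout = new TimeSpan(1, 0, 0);

Whether this helps may also depend on the browser HTTP stack, which Silverlight relies on and which enforces its own limits.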
I want to create a WCF service which uses an MSMQ binding as I have a high volume of notifications the service is to process. It is important that clients are not held up by the service and that the notifications are processed in the order they are raised, hence the queue implementation.
Another consideration is resilience. I know I could cluster MSMQ itself to make the queue more robust, but I want to be able to run an instance of my service on different servers, so if a server crashes notifications do not build up in the queue but another server carries on processing.
I have experimented with the MSMQ binding and found that you can have multiple instances of a service listening on the same queue, and left to themselves they end up doing a sort of round-robin with the load spread across the available services. This is great, but I end up losing the sequencing of the queue as different instances take a different amount of time to process the request.
I've been using a simple console app to experiment, which is the epic code dump below. When it's run I get an output like this:
host1 open
host2 open
S1: 01
S1: 03
S1: 05
S2: 02
S1: 06
S1: 08
S1: 09
S2: 04
S1: 10
host1 closed
S2: 07
host2 closed
What I want to happen is:
host1 open
host2 open
S1: 01
<pause while S2 completes>
S2: 02
S1: 03
<pause while S2 completes>
S2: 04
S1: 05
S1: 06
etc.
I would have thought that as S2 has not completed, it might still fail and return the message it was processing to the queue. Therefore S1 should not be allowed to pull another message off the queue. My queue is transactional and I have tried setting TransactionScopeRequired = true on the service, but to no avail.
Is this even possible? Am I going about it the wrong way? Is there some other way to build a failover service without some kind of central synchronisation mechanism?
class WcfMsmqProgram
{
private const string QueueName = "testq1";
static void Main()
{
// Create a transactional queue
string qPath = ".\\private$\\" + QueueName;
if (!MessageQueue.Exists(qPath))
MessageQueue.Create(qPath, true);
else
new MessageQueue(qPath).Purge();
// S1 processes as fast as it can
IService s1 = new ServiceImpl("S1");
// S2 is slow
IService s2 = new ServiceImpl("S2", 2000);
// MSMQ binding
NetMsmqBinding binding = new NetMsmqBinding(NetMsmqSecurityMode.None);
// Host S1
ServiceHost host1 = new ServiceHost(s1, new Uri("net.msmq://localhost/private"));
ConfigureService(host1, binding);
host1.Open();
Console.WriteLine("host1 open");
// Host S2
ServiceHost host2 = new ServiceHost(s2, new Uri("net.msmq://localhost/private"));
ConfigureService(host2, binding);
host2.Open();
Console.WriteLine("host2 open");
// Create a client
ChannelFactory<IService> factory = new ChannelFactory<IService>(binding, new EndpointAddress("net.msmq://localhost/private/" + QueueName));
IService client = factory.CreateChannel();
// Periodically call the service with a new number
int counter = 1;
using (Timer t = new Timer(o => client.EchoNumber(counter++), null, 0, 500))
{
// Enter to stop
Console.ReadLine();
}
host1.Close();
Console.WriteLine("host1 closed");
host2.Close();
Console.WriteLine("host2 closed");
// Wait for exit
Console.ReadLine();
}
static void ConfigureService(ServiceHost host, NetMsmqBinding binding)
{
host.AddServiceEndpoint(typeof(IService), binding, QueueName);
}
[ServiceContract]
interface IService
{
[OperationContract(IsOneWay = true)]
void EchoNumber(int number);
}
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
class ServiceImpl : IService
{
public ServiceImpl(string name, int sleep = 0)
{
this.name = name;
this.sleep = sleep;
}
private string name;
private int sleep;
public void EchoNumber(int number)
{
Thread.Sleep(this.sleep);
Console.WriteLine("{0}: {1:00}", this.name, number);
}
}
}
batwad,
You are trying to manually create a service bus. Why don't you try to use an existing one?
NServiceBus, MassTransit, ServiceStack
At least 2 of those work with MSMQ.
Furthermore, if you absolutely need ordering, it may actually be for another reason: you want to send a message and you don't want dependent messages to be processed before the first one. You are looking for the Saga pattern. NServiceBus and MassTransit will both allow you to manage sagas easily; they both let you trigger the initial message and then trigger the remaining messages based on conditions. This makes implementing the plumbing of your distributed application a snap.
You can then even scale up to thousands of clients, queue servers and message processors without writing a single additional line of code and without any issues.
We tried to implement our own service bus over MSMQ here; we gave up because one issue after another kept creeping up. We went with NServiceBus, but MassTransit is also an excellent product (it's 100% open source; NServiceBus isn't). ServiceStack is awesome at building APIs and using message queues - I'm sure you could use it to make services that act as queue front-ends in minutes.
Oh, did I mention that NSB and MT both require under 10 lines of code to fully implement queues, senders and handlers?
----- ADDED -----
Udi Dahan (one of the main contributors to NServiceBus) talks about this in:
"In-Order Messaging a Myth" by Udi Dahan
"Message Ordering: Is it Cost Effective?" with Udi Dahan
Chris Patterson (one of the main contributors to MassTransit):
"Using Sagas to ensure proper sequential message order" question
StackOverflow questions/answers:
"Preserve message order when consuming MSMQ messages in a WCF application"
----- QUESTION -----
I must say that I'm baffled as to why you need to guarantee message order - would you be in the same position if you were using an HTTP/SOAP protocol? My guess is no, then why is it a problem in MSMQ?
Good luck, hope this helps,
Ensuring in-order delivery of messages is one of the perennial sticky issues in high-volume messaging.
In an ideal world, your message destinations should be able to handle out-of-order messaging. This can be achieved by ensuring that your message source includes some kind of sequencing information. Again ideally this takes the form of some kind of x-of-n batch stamp (message 1 of 10, 2 of 10, etc). Your message destination is then required to assemble the data into order once it has been delivered.
However, in the real world there often is no scope for changing downstream systems to handle messages arriving out of order. In this instance you have two choices:
Go entirely single threaded - actually you can usually find some kind of 'grouping id' which means you can go single-threaded in a for-each-group sense, meaning you still have concurrency across different message groups.
Implement a re-sequencer wrapper around each of the consumer systems that needs to receive in-order messages (a minimal sketch follows below).
Neither solution is very nice, but that's the only way I think you can have concurrency and in-order message delivery.
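For what it's worth, here is a minimal sketch of option 2, a re-sequencer. All names are illustrative, and it assumes each message carries the kind of sequence number described above:

using System;
using System.Collections.Generic;

// Buffers out-of-order messages and releases them to the real handler
// strictly in sequence-number order.
class Resequencer
{
    private readonly object _gate = new object();
    private readonly SortedDictionary<int, string> _buffer = new SortedDictionary<int, string>();
    private readonly Action<string> _handler;
    private int _next;

    public Resequencer(Action<string> handler, int firstSequence = 1)
    {
        _handler = handler;
        _next = firstSequence;
    }

    public void Accept(int sequence, string body)
    {
        lock (_gate)
        {
            _buffer[sequence] = body;
            // Flush every contiguous message starting at the expected number;
            // anything after a gap stays buffered until the gap closes.
            string next;
            while (_buffer.TryGetValue(_next, out next))
            {
                _buffer.Remove(_next);
                _handler(next);
                _next++;
            }
        }
    }
}

The consumer calls Accept from its receive path; messages that arrive early simply wait in the buffer.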
I've tried applying several suggestions found on the web, but still experiencing cached results when querying CRM 2011 with linq. My web.config reads as follows, which is supposed to disable result caching:
<configSections>
<section name="microsoft.xrm.client" type="Microsoft.Xrm.Client.Configuration.CrmSection, Microsoft.Xrm.Client"/>
</configSections>
<connectionStrings>
...
</connectionStrings>
<microsoft.xrm.client>
<contexts>
<add name="Xrm" type="Xrm.XrmServiceContext, Xrm" serviceName="Xrm"/>
</contexts>
<services>
<add name="Xrm" type="Microsoft.Xrm.Client.Services.OrganizationService, Microsoft.Xrm.Client"/>
</services>
</microsoft.xrm.client>
In code, I have a little test loop to wait for an external change to some data:
Dim crm As New XrmServiceContext("Xrm")
Dim oOpptyGuid = ' <an existing GUID in the system>
' Get opportunity by guid.
Dim oOppty As Xrm.Opportunity = (From c In crm.OpportunitySet Where c.Id.Equals(oOpptyGuid) Select c).SingleOrDefault
Dim sName As String = oOppty.Name
Dim iTries As Int16 = 0
' Wait till name is changed or tried too many times.
Do
' Sleep between tries.
Threading.Thread.Sleep(10000)
iTries += 1
' Get opportunity by guid.
oOppty = (From c In crm.OpportunitySet Where c.Id.Equals(oOpptyGuid) Select c).SingleOrDefault
Loop Until oOppty.Name <> sName Or iTries > 10
The above loop never detects when the name is changed elsewhere in the CRM. I've tried removing items from the cache manually, before the query in the loop, but with no joy:
oCacheManager = Microsoft.Xrm.Client.Caching.ObjectCacheManager.GetInstance()
For Each x As String In From y In oCacheManager Select y.Key
oCacheManager.Remove(x)
Next
The only thing that works for me is this:
crm.Dispose()
crm = New XrmServiceContext("Xrm")
I can live with that, but it would be nicer, instead of recreating the context, to have a way of ensuring no caching either in code or in web.config. But I can't find a solution anywhere that works for me. Am I missing something?
I don't think your issue is caching of the OrganizationService. I believe your issue is related to the service context tracking the status of the selected entity. If you call IOrganizationServiceContext.Detach on the entity that you are selecting, it will no longer be tracked by the context, and a retrieve should return the most recent data from the service.
Take a look at the IOrganizationServiceCache.Remove method.
http://msdn.microsoft.com/en-us/library/gg678365.aspx
Nick's right about this, my original thought may be a bit overkill. In your case, you are only caching the entity in the context. So instead of:
crm.Dispose()
crm = New XrmServiceContext("Xrm")
try using:
crm.Detach(oOppty)
This is the code:
private static void CreateCounter()
{
if (PerformanceCounterCategory.Exists("DemoCategory"))
PerformanceCounterCategory.Delete("DemoCategory");
CounterCreationDataCollection ccdArray = new CounterCreationDataCollection();
CounterCreationData ccd = new CounterCreationData();
ccd.CounterName = "RequestsPerSecond";
ccd.CounterType = PerformanceCounterType.NumberOfItems32;
ccd.CounterHelp = "Requests per second";
ccdArray.Add(ccd);
PerformanceCounterCategory.Create("DemoCategory", "Demo category",
PerformanceCounterCategoryType.SingleInstance, ccdArray);
Console.WriteLine("Press any key, to start use the counter");
}
Obviously:
PerformanceCounterCategory.Create("DemoCategory", "Demo category",
PerformanceCounterCategoryType.SingleInstance, ccdArray);
is the line where the exception was thrown.
I have read about PerformanceCounterPermission, what should I do exactly?
Your application's process does not have the appropriate privilege level. That's what the security exception is telling you.
The simple fix is to request that permission when your application starts. You can do this by modifying your application's manifest such that the requestedExecutionLevel is set to requireAdministrator.
The complete section added to your application's manifest will look something like this:
<!-- Identify the application security requirements. -->
<trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
<security>
<requestedPrivileges>
<requestedExecutionLevel level="requireAdministrator" uiAccess="false" />
</requestedPrivileges>
</security>
</trustInfo>
There are potentially better alternatives if your application does not otherwise require administrative privileges, because you should always run at the lowest privilege level that is actually necessary. You can investigate these alternatives using Google; they generally involve spinning off a separate, elevated process that creates and runs the performance counter only when the user explicitly requests it.
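If you go the low-privilege route, a small helper like this (just a sketch, not tied to any particular design) lets your process detect whether it is elevated before attempting to create the category, so it can fail gracefully or hand off to an elevated helper process instead:

using System.Security.Principal;

static bool IsRunningElevated()
{
    // True when the current process token is in the local Administrators role.
    using (WindowsIdentity identity = WindowsIdentity.GetCurrent())
    {
        return new WindowsPrincipal(identity).IsInRole(WindowsBuiltInRole.Administrator);
    }
}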
I am benchmarking a self-hosted net.tcp WCF service, making requests from 50 threads to a service located on the same computer. The problem is that the CPU utilization never exceeds 35% on a Xeon E3-1270. When I run the same test on a two-core laptop, it does reach 100%.
The WCF method does nothing, so it should not be limited by IO. I tried increasing the number of threads, but that does not help. Each thread creates a service channel and performs thousands of calls reusing that channel instance.
Here is the service class I am using:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
public class TestService : ITestService
{
public void Void()
{
// DO NOTHING
}
}
Configs:
ServiceThrottlingBehavior:
    MaxConcurrentCalls = 1000
    MaxConcurrentInstances = 1000
    MaxConcurrentSessions = 1000
NetTcpBinding:
    ListenBacklog = 2000
    MaxConnections = 2000
I would try changing your InstanceContextMode to PerCall. I'm pretty sure your current configuration settings are being ignored, because WCF only ever creates a single instance of your class and processes calls through it. With PerCall, a new instance is created for each request until the maximum number of threads or your configured limit is reached. You shouldn't need the NetTcpBinding settings either. Keep your throttling behaviour, but make sure you get the proportions right, otherwise it might have adverse effects:
MaxConcurrentCalls: 16 * processor count
MaxConcurrentSessions: 100 * processor count
MaxConcurrentInstances: the sum of the above (116 * processor count)
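To make that concrete, here is a hedged sketch of both suggestions together: switching the service to PerCall and sizing the throttle by those proportions. ITestService is the contract from the question; 'host' is assumed to be your self-hosting ServiceHost:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

// Suggested instancing: one service instance per call.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class TestService : ITestService
{
    public void Void()
    {
        // DO NOTHING
    }
}

public static class ThrottleSetup
{
    // Applies the proportions quoted above to a self-hosted ServiceHost.
    public static void ApplyThrottleProportions(ServiceHost host)
    {
        int cpus = Environment.ProcessorCount;
        ServiceThrottlingBehavior throttle = host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }
        throttle.MaxConcurrentCalls = 16 * cpus;
        throttle.MaxConcurrentSessions = 100 * cpus;
        throttle.MaxConcurrentInstances = 116 * cpus; // sum of the two above
    }
}

Call ApplyThrottleProportions before host.Open(), since behaviors must be configured before the host is opened.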