I am trying out the WCF transaction implementation and I am wondering whether asynchronous transactions are supported by WCF 4.0.
For example, I have several service operations with client/service transaction flow enabled. On the client side, I open a TransactionScope and, within that transaction, I create Tasks to call those operations asynchronously.
In this situation, I am assuming the transaction will still work correctly. Is that right?
I doubt that very much. It appears that if you start an async operation, you are no longer participating in the original transaction.
I wrote a little LINQPad test
void Main()
{
    using (var scope = new TransactionScope(TransactionScopeOption.Required))
    {
        try
        {
            Transaction.Current.Dump("created");
            Task.Factory.StartNew(Test);
            scope.Complete();
        }
        catch (Exception e)
        {
            Console.WriteLine(e);
        }
        Thread.Sleep(1000);
    }
    Console.WriteLine("closed");
    Thread.Sleep(5000);
}

public void Test()
{
    using (var scope = new TransactionScope(TransactionScopeOption.Required))
    {
        Transaction.Current.Dump("test start"); // null
        Thread.Sleep(5000);
        Console.WriteLine("done");
        Transaction.Current.Dump("test end"); // null
    }
}
You'll need to set both the OperationContext and Transaction.Current in the created Task.
More specifically, in the service you'll need to do something like this:
public Task ServiceMethod()
{
    OperationContext context = OperationContext.Current;
    Transaction transaction = Transaction.Current;

    return Task.Factory.StartNew(() =>
    {
        OperationContext.Current = context;
        Transaction.Current = transaction;
        // your code, doing awesome stuff
    });
}
This gets repetitive as you might suspect, so I'd recommend writing a helper for it.
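For example, a minimal sketch of such a helper might look like this (the TaskHelper class and its RunWithContext method are names of my own choosing, not part of WCF, so treat it as an illustration):

public static class TaskHelper
{
    // Captures the current OperationContext and ambient Transaction and
    // restores them on the thread that runs the task body.
    public static Task RunWithContext(Action action)
    {
        OperationContext context = OperationContext.Current;
        Transaction transaction = Transaction.Current;

        return Task.Factory.StartNew(() =>
        {
            OperationContext.Current = context;
            Transaction.Current = transaction;
            try
            {
                action();
            }
            finally
            {
                // Reset the ambient values so they do not leak onto the
                // thread-pool thread after the task finishes.
                Transaction.Current = null;
                OperationContext.Current = null;
            }
        });
    }
}

The service method above then shrinks to return TaskHelper.RunWithContext(() => { /* your code */ });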
Related
In previous versions of Rebus (<= 0.84.0) it was possible to send a message inside a TransactionScope so that it was only sent if the scope completed:
using (var scope = new TransactionScope())
{
    var ctx = new AmbientTransactionContext();
    sender.Send(recipient.InputQueue, msg, ctx);
    scope.Complete();
}
Is it possible to achieve the same behaviour in Rebus2?
As you have correctly discovered, Rebus version >= 0.90.0 does not automatically enlist in ambient transactions.
(UPDATE: as of 0.99.16, the desired behavior can be had - see the end of this answer for details on how)
However, this does not mean that Rebus cannot enlist in a transaction - it just uses its own ambient transaction mechanism (which does not depend on System.Transactions and will still be available when Rebus is ported to .NET Core).
You can use Rebus' DefaultTransactionContext and "make it ambient" with this AmbientRebusTransactionContext:
/// <summary>
/// Rebus ambient transaction scope helper
/// </summary>
public class AmbientRebusTransactionContext : IDisposable
{
    readonly DefaultTransactionContext _transactionContext = new DefaultTransactionContext();

    public AmbientRebusTransactionContext()
    {
        if (AmbientTransactionContext.Current != null)
        {
            throw new InvalidOperationException("Cannot start a Rebus transaction because one was already active!");
        }

        AmbientTransactionContext.Current = _transactionContext;
    }

    public Task Complete()
    {
        return _transactionContext.Complete();
    }

    public void Dispose()
    {
        AmbientTransactionContext.Current = null;
    }
}
which you can then use like this:
using (var tx = new AmbientRebusTransactionContext())
{
    await bus.Send(new Message());
    await tx.Complete();
}
or, if you're using it in a web application, I suggest you wrap it in an OWIN middleware like this:
app.Use(async (context, next) =>
{
    using (var transactionContext = new AmbientRebusTransactionContext())
    {
        await next();
        await transactionContext.Complete();
    }
});
UPDATE: Since Rebus 0.99.16, the following has been supported (via the Rebus.TransactionScope package):
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    scope.EnlistRebus(); //< enlist Rebus in the ambient .NET transaction
    await _bus.SendLocal("hallå i stuen!1");
    scope.Complete();
}
I'm wondering if anyone has encountered this before:
I handle a command, and in the handler I save an event to the event store (Jonathan Oliver's EventStore).
Right after dispatching, the same command is handled again.
I know it's the same command because the GUID on the command is the same.
After five tries, NServiceBus says the command failed due to the maximum number of retries.
So obviously the command failed, but I don't get any indication of what failed.
I've put the contents of the dispatcher in a try/catch, but no error is caught. After the code exits the dispatcher, the event handler always fires again as if something had errored out.
Tracing through the code, the events are saved to the database (I see the row), the dispatcher runs, the Dispatched column is set to true, and then the handler handles the command again; the process repeats and another row gets inserted into the commits table.
What could be failing? Am I not setting a success flag somewhere in the event store?
If I decouple the event store from NServiceBus, both run as expected with no retries or failures.
The dispatcher:
public void Dispatch(Commit commit)
{
    for (var i = 0; i < commit.Events.Count; i++)
    {
        try
        {
            var eventMessage = commit.Events[i];
            var busMessage = (T)eventMessage.Body;
            //bus.Publish(busMessage);
        }
        catch (Exception ex)
        {
            throw ex;
        }
    }
}
The Wireup.Init()
private static IStoreEvents WireupEventStore()
{
    return Wireup.Init()
        .LogToOutputWindow()
        .UsingSqlPersistence("EventStore")
        .InitializeStorageEngine()
        .UsingBinarySerialization()
        //.UsingJsonSerialization()
        //    .Compress()
        //.UsingAsynchronousDispatchScheduler()
        //    .DispatchTo(new NServiceBusCommitDispatcher<T>())
        .UsingSynchronousDispatchScheduler()
        .DispatchTo(new DelegateMessageDispatcher(DispatchCommit))
        .Build();
}
It turned out I had a TransactionScope open in the Save method that I never completed.
public static void Save(AggregateRoot root)
{
    // We can call CreateStream(StreamId) if we know there isn't going to be any data,
    // or we can call OpenStream(StreamId, 0, int.MaxValue) to read all commits;
    // if no commits exist, it creates a new stream for us.
    using (var scope = new TransactionScope())
    using (var eventStore = WireupEventStore())
    using (var stream = eventStore.OpenStream(root.Id, 0, int.MaxValue))
    {
        var events = root.GetUncommittedChanges();
        foreach (var e in events)
        {
            stream.Add(new EventMessage { Body = e });
        }

        var guid = Guid.NewGuid();
        stream.CommitChanges(guid);
        root.MarkChangesAsCommitted();

        scope.Complete(); // <-- missing this
    }
}
I have the following planned architecture for my WCF client library:
- using ChannelFactory instead of svcutil-generated proxies, because I need more control and I also want to keep the client in a separate assembly and avoid regenerating it when my WCF service changes
- I need to apply a behavior with a message inspector to my WCF endpoint, so each channel is able to send its own authentication token
- my client library will be used from an MVC front-end, so I'll have to think about possible threading issues
- I'm using .NET 4.5 (maybe it has some helpers or new approaches that make implementing WCF clients easier?)
I have read many articles about various separate bits but I'm still confused about how to put it all together the right way. I have the following questions:
- as I understand it, it is recommended to cache the ChannelFactory in a static variable and then get channels out of it, right?
- is endpoint behavior specific to the entire ChannelFactory, or can I apply my authentication behavior to each channel separately? If the behavior is specific to the entire factory, this means I cannot keep any state in my endpoint behavior objects, because the same auth token would be reused for every channel, and obviously I want each channel to have its own auth token for the current user. This means I'll have to calculate the token inside my endpoint behavior (I can keep it in HttpContext, and my message inspector will just add it to the outgoing messages).
- my client class is disposable (it implements IDisposable). How do I dispose of the channel correctly, knowing that it might be in any possible state (not opened, opened, faulted, ...)? Do I just dispose it? Do I abort it and then dispose? Do I close it (but it might not even be opened yet) and then dispose?
- what do I do if I get a fault while working with the channel? Is only the channel broken, or is the entire ChannelFactory broken?
I guess a line of code speaks more than a thousand words, so here is my idea in code form. I have marked all of my questions above with "???" in the code.
public class MyServiceClient : IDisposable
{
    // channel factory cache
    private static ChannelFactory<IMyService> _factory;
    private static object _lock = new object();

    private IMyService _client = null;
    private bool _isDisposed = false;

    /// <summary>
    /// Creates a channel for the service
    /// </summary>
    public MyServiceClient()
    {
        lock (_lock)
        {
            if (_factory == null)
            {
                // ... set up custom bindings here and get some config values
                var endpoint = new EndpointAddress(myServiceUrl);
                _factory = new ChannelFactory<IMyService>(binding, endpoint);

                // ??? do I add my auth behavior to the entire ChannelFactory,
                // or can I apply it to individual channels when I create them?
            }
        }

        _client = _factory.CreateChannel();
    }

    public string MyMethod()
    {
        RequireClientInWorkingState();
        try
        {
            return _client.MyMethod();
        }
        catch
        {
            RecoverFromChannelFailure();
            throw;
        }
    }

    private void RequireClientInWorkingState()
    {
        if (_isDisposed)
            throw new InvalidOperationException("This client was disposed. Create a new one.");

        // ??? is it enough to check for CommunicationState.Created && Opened?
        var state = ((IChannel)_client).State;
        if (state != CommunicationState.Created && state != CommunicationState.Opened)
            throw new InvalidOperationException("The client channel is not ready to work. Create a new one.");
    }

    private void RecoverFromChannelFailure()
    {
        // ??? is this the best way to check if there was a problem with the channel?
        if (((IChannel)_client).State != CommunicationState.Opened)
        {
            // ??? is it safe to call Abort? won't it throw?
            ((IChannel)_client).Abort();
        }

        // ??? and what about the ChannelFactory?
        // Will it still be able to create channels, or might it also be broken and need to be thrown away?
        // In that case, how do I clean up the ChannelFactory correctly before creating a new one?
    }

    #region IDisposable

    public void Dispose()
    {
        // ??? is this how to free the channel correctly?
        // I've heard broken channels might throw when closing.
        // ??? what if it is not opened yet?
        // ??? what if it is in the faulted state?
        try
        {
            ((IChannel)_client).Close();
        }
        catch
        {
            ((IChannel)_client).Abort();
        }

        ((IDisposable)_client).Dispose();
        _client = null;
        _isDisposed = true;
    }

    #endregion
}
I guess better late than never... and it looks like the author has it working, but this might help future WCF users.
1) The ChannelFactory arranges the channel, which includes all behaviors for the channel. Creating the channel via the CreateChannel method "activates" it. Channel factories can be cached.
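As a rough sketch (the class name and the BasicHttpBinding/address below are placeholders of mine, not taken from the question), caching a factory can be as simple as:

public static class MyServiceFactoryCache
{
    // One factory per endpoint, created lazily and reused for all channels.
    private static readonly Lazy<ChannelFactory<IMyService>> _factory =
        new Lazy<ChannelFactory<IMyService>>(() =>
            new ChannelFactory<IMyService>(
                new BasicHttpBinding(),
                new EndpointAddress("http://example.com/MyService")));

    public static IMyService CreateChannel()
    {
        return _factory.Value.CreateChannel();
    }
}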
2) You shape the channel factory with bindings and behaviors. This shape is shared with everyone who creates channels from it. As you noted in your comment, you can attach message inspectors, but a more common approach is to use a message header to send custom state to the service. You can attach headers via OperationContext.Current:
using (var op = new OperationContextScope((IContextChannel)proxy))
{
    var header = new MessageHeader<string>("Some State");
    var hout = header.GetUntypedHeader("message", "urn:someNamespace");
    OperationContext.Current.OutgoingMessageHeaders.Add(hout);
}
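On the service side you can then read that header back, for example (same header name and namespace as above):

// Inside a service operation: read the custom header sent by the client.
string state = OperationContext.Current
    .IncomingMessageHeaders
    .GetHeader<string>("message", "urn:someNamespace");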
3) This is my general way of disposing of the client channel and the factory (this method is part of my ProxyBase class):
public virtual void Dispose()
{
    CloseChannel();
    CloseFactory();
}

protected void CloseChannel()
{
    if (((IChannel)_client).State == CommunicationState.Opened)
    {
        try
        {
            ((IChannel)_client).Close();
        }
        catch (TimeoutException /* timeout */)
        {
            // Handle the timeout exception
            ((IChannel)_client).Abort();
        }
        catch (CommunicationException /* communicationException */)
        {
            // Handle the communication exception
            ((IChannel)_client).Abort();
        }
    }
}
protected void CloseFactory()
{
    if (Factory.State == CommunicationState.Opened)
    {
        try
        {
            Factory.Close();
        }
        catch (TimeoutException /* timeout */)
        {
            // Handle the timeout exception
            Factory.Abort();
        }
        catch (CommunicationException /* communicationException */)
        {
            // Handle the communication exception
            Factory.Abort();
        }
    }
}
4) WCF will fault the channel, not the factory. You can implement re-connect logic, but that would require you to create and derive your clients from some custom ProxyBase, e.g.:
protected I Channel
{
    get
    {
        lock (_channelLock)
        {
            if (!object.Equals(innerChannel, default(I)))
            {
                ICommunicationObject channelObject = innerChannel as ICommunicationObject;
                if ((channelObject.State == CommunicationState.Faulted) || (channelObject.State == CommunicationState.Closed))
                {
                    // Channel is faulted or closing for some reason, attempt to recreate channel
                    innerChannel = default(I);
                }
            }

            if (object.Equals(innerChannel, default(I)))
            {
                Debug.Assert(Factory != null);
                innerChannel = Factory.CreateChannel();
                ((ICommunicationObject)innerChannel).Faulted += new EventHandler(Channel_Faulted);
            }
        }

        return innerChannel;
    }
}
5) Do not re-use channels. Open, do something, close is the normal usage pattern.
6) Create a common proxy base class and derive all your clients from it. This can be helpful for things like re-connecting, pre-invoke/post-invoke logic, and consuming events from the factory (e.g. Faulted, Opening); see the sketch below.
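To illustrate the idea (this is only a sketch with names of my own choosing, not the exact ProxyBase used above), such a base class could look roughly like this:

public abstract class ProxyBase<I> : IDisposable where I : class
{
    protected ChannelFactory<I> Factory { get; private set; }

    protected ProxyBase(Binding binding, EndpointAddress address)
    {
        Factory = new ChannelFactory<I>(binding, address);
    }

    // Pre-invoke/post-invoke logic and re-connect handling can be centralized here.
    protected TResult Invoke<TResult>(Func<I, TResult> operation)
    {
        I channel = Factory.CreateChannel();
        var channelObject = (ICommunicationObject)channel;
        try
        {
            return operation(channel);
        }
        finally
        {
            try
            {
                // Close on success, abort a faulted channel.
                if (channelObject.State == CommunicationState.Faulted) channelObject.Abort();
                else channelObject.Close();
            }
            catch (CommunicationException) { channelObject.Abort(); }
            catch (TimeoutException) { channelObject.Abort(); }
        }
    }

    public virtual void Dispose()
    {
        try
        {
            if (Factory.State == CommunicationState.Faulted) Factory.Abort();
            else Factory.Close();
        }
        catch (CommunicationException) { Factory.Abort(); }
        catch (TimeoutException) { Factory.Abort(); }
    }
}

A concrete client then derives from it, e.g. class MyServiceClient : ProxyBase<IMyService>, and wraps each call as Invoke(c => c.MyMethod()).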
7) Create your own CustomChannelFactory; this gives you further control over how the factory behaves, e.g. setting default timeouts, enforcing various binding settings (MaxMessageSizes), etc.:
public static void SetTimeouts(Binding binding, TimeSpan? timeout = null, TimeSpan? debugTimeout = null)
{
    if (timeout == null)
    {
        timeout = new TimeSpan(0, 0, 1, 0);
    }

    if (debugTimeout == null)
    {
        debugTimeout = new TimeSpan(0, 0, 10, 0);
    }

    if (Debugger.IsAttached)
    {
        binding.ReceiveTimeout = debugTimeout.Value;
        binding.SendTimeout = debugTimeout.Value;
    }
    else
    {
        binding.ReceiveTimeout = timeout.Value;
        binding.SendTimeout = timeout.Value;
    }
}
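For completeness, usage would be along these lines (the NetTcpBinding here is just an example binding, not something prescribed by the answer):

var binding = new NetTcpBinding();
SetTimeouts(binding);                           // defaults: 1 minute, or 10 minutes under a debugger
SetTimeouts(binding, TimeSpan.FromSeconds(30)); // or pass an explicit timeout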
I have a UI view, lossreport.xaml, which contains the code below:
LossReportTowGlassServiceClient wcf = new LossReportTowGlassServiceClient();
wcf.HouseholdSearchCompleted += (o, ev) =>
{
    string a = errorMessg.ToUpper();
    // Code to work with ev
};
wcf.HouseholdSearchAsync(lossDate, txtPolicyNumber.Text, errorMessg);
In the service.svc page:
try
{
    policyinq.retrieveHouseHoldPoliciesCompleted += new retrieveHouseHoldPoliciesCompletedEventHandler(policyinq_retrieveHouseHoldPoliciesCompleted);
    policyinq.retrieveHouseHoldPoliciesAsync(reqh, searchCriteria, lossdate, true, string.Empty, string.Empty);
    break;
}
catch (Exception ex)
{
    Logger.Exceptions("", "HouseholdSearch", ex);
    errorToSend = "Household error";
}

void policyinq_retrieveHouseHoldPoliciesCompleted(object sender, retrieveHouseHoldPoliciesCompletedEventArgs e)
{
    if (e.transactionNotification != null && e.transactionNotification.transactionStatus == TransactionState.S)
    {
    }
    else
    {
        ErrorHandling.ErrorSend(e.transactionNotification, "HouseHold");
    }
}
Now the HouseholdSearchCompleted event fires before retrieveHouseHoldPolicies has completed. How do I make it wait?
You have an architectural issue here. The service should not start asynchronous work unless you have a good reason (e.g. to parallelize some work); just invoke your server-side code synchronously.
A service entry point gets its own handler thread, and that thread should be the one that starts and ends the request/response processing on the service side. What you are doing is calling an async method on the service side, which lets the thread handling the request finish its job before the work is done. So either make that thread wait, or execute the entire logic on it without calling an async method, capisce?
using System.Threading;

ManualResetEvent _wait = new ManualResetEvent(false);

_wait.Set();     // call this in the completed event handler
_wait.WaitOne(); // this blocks until Set() has been called, i.e. until the completed event has fired
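Put together on the service side, it would look roughly like this (a sketch based on the names in the question, not tested code):

var wait = new ManualResetEvent(false);

policyinq.retrieveHouseHoldPoliciesCompleted += (sender, e) =>
{
    // ... work with e.transactionNotification here ...
    wait.Set(); // signal that the callback has run
};

policyinq.retrieveHouseHoldPoliciesAsync(reqh, searchCriteria, lossdate, true,
    string.Empty, string.Empty);

// Block the service thread (with a timeout) until the completed event has fired.
wait.WaitOne(TimeSpan.FromSeconds(30));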
I have a WCF service that processes a call, sends the processed data on to another service, and alerts the caller and any other instances of that application by firing a callback. Originally the callbacks were fired at the end, but I found that if the second service was not running there would be a twenty-second delay while we attempted to discover it, and only then were the callbacks fired. I moved the callback notification before the call to the second service, but the delay was still there. I even tried firing the callbacks in a background process, but that didn't work either. Is there a way to get around this delay, other than changing the discovery timeout? Here is a code snippet.
    // Alert the admins of the change.
    if (alertPuis)
    {
        ReportBoxUpdated(data.SerialNumber);
    }

    // Now send the change to the box if he's online.
    var scope = new Uri(string.Format(@"net.tcp://{0}", data.SerialNumber));
    var boxAddress = DiscoveryHelper.DiscoverAddress<IAtcBoxService>(scope);

    if (boxAddress != null)
    {
        var proxy = GetBoxServiceProxy(boxAddress);
        if (proxy != null)
        {
            proxy.UpdateBox(boxData);
        }
        else
        {
            Log.Write("AtcSystemService failed on call to update toool Box: {0}",
                data.SerialNumber);
        }
    }
    else if (mDal.IsBoxDataInPendingUpdates(data.SerialNumber) == false)
        mDal.AddPendingUpdate(data.SerialNumber, null, true, null);
}
and
private static void ReportBoxUpdated(string serialNumber)
{
    var badCallbacks = new List<string>();

    Action<IAtcSystemServiceCallback> invoke = callback =>
        callback.OnBoxUpdated(serialNumber);

    foreach (var theCallback in AdminCallbacks)
    {
        var callback = theCallback.Value as IAtcSystemServiceCallback;
        try
        {
            invoke(callback);
        }
        catch (Exception ex)
        {
            Log.Write("Failed to execute callback for admin instance {0}: {1}",
                theCallback.Key, ex.Message);
            badCallbacks.Add(theCallback.Key);
        }
    }

    foreach (var bad in badCallbacks) // Clean out any stale callbacks from the list.
    {
        AdminCallbacks.Remove(bad);
    }
}
Have you considered caching the discovery result?
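For example, the discovered address could be kept in a static cache keyed by serial number, so the twenty-second discovery timeout is only paid on a cache miss. A minimal sketch (the cache field and helper method names are mine; DiscoveryHelper and IAtcBoxService are from your code):

// using System.Collections.Concurrent;
private static readonly ConcurrentDictionary<string, EndpointAddress> _boxAddresses =
    new ConcurrentDictionary<string, EndpointAddress>();

private static EndpointAddress GetBoxAddress(string serialNumber)
{
    return _boxAddresses.GetOrAdd(serialNumber, sn =>
    {
        var scope = new Uri(string.Format(@"net.tcp://{0}", sn));
        // Note: a failed discovery (null) is cached as well; remove the entry
        // if you want discovery to be retried on the next call.
        return DiscoveryHelper.DiscoverAddress<IAtcBoxService>(scope);
    });
}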