Does NHibernate create implicit transactions within a TransactionScope?

I've created an integration test to verify that a repository handles concurrency correctly. If I run the test without a TransactionScope, everything works as expected, but if I wrap the test in a TransactionScope, I get an error suggesting that there is a sudden need for distributed transactions (which leads me to believe that a second transaction is being created). Here is the test:
[Test]
public void Commit_ItemToCommitContainsStaleData_ThrowsStaleObjectStateException()
{
    using (new TransactionScope())
    {
        // arrange
        RootUnitOfWorkFactory factory = CreateUnitOfWorkFactory();
        const int Id = 1;

        WorkItemRepository firstRepository = new WorkItemRepository(factory);
        WorkItem itemToChange = WorkItem.Create(Id);
        firstRepository.Commit(itemToChange);

        WorkItemRepository secondRepository = new WorkItemRepository(factory);
        WorkItem copyOfItemToChange = secondRepository.Get(Id);

        // act
        copyOfItemToChange.ChangeDescription("A");
        secondRepository.Commit(copyOfItemToChange);
        itemToChange.ChangeDescription("B");

        // assert
        Assert.Throws<StaleObjectStateException>(() => firstRepository.Commit(itemToChange));
    }
}
This is the bottom of the error stack:
failed: NHibernate.Exceptions.GenericADOException : could not load an entity: [TfsTimeMachine.Domain.WorkItem#1][SQL: SELECT workitem0_.Id as Id1_0_, workitem0_.LastChanged as LastChan2_1_0_, workitem0_.Description as Descript3_1_0_ FROM [WorkItem] workitem0_ WHERE workitem0_.Id=?]
----> System.Data.SqlClient.SqlException : MSDTC on server 'ADM4200\SQLEXPRESS' is unavailable.
at NHibernate.Loader.Loader.LoadEntity(ISessionImplementor session, Object id, IType identifierType, Object optionalObject, String optionalEntityName, Object optionalIdentifier, IEntityPersister persister).
I'm running NUnit 2.1, so can someone tell me whether NHibernate creates implicit transactions when there is no session.BeginTransaction() before querying data, regardless of whether the session is running within a TransactionScope?

I got this to work. The problem was (as stated in my comment) that two concurrent sessions were started within the same TransactionScope, and each opened a new database connection that enlisted in the same transaction, thus forcing DTC to kick in. The solution was to create a custom connection provider which ensures that the same connection is returned while inside a TransactionScope. I then put this into play in my test and, presto, I could test stale object state and roll back the data when the test completes. Here's my implementation:
/// <summary>
/// A connection provider which returns the same db connection while
/// there exists a TransactionScope.
/// </summary>
public sealed class AmbientTransactionAwareDriverConnectionProvider : IConnectionProvider
{
    private readonly bool disposeDecoratedProviderWhenDisposingThis;
    private IConnectionProvider decoratedProvider;
    private IDbConnection maintainedConnectionThroughAmbientSession;

    public AmbientTransactionAwareDriverConnectionProvider()
        : this(new DriverConnectionProvider(), true)
    {}

    public AmbientTransactionAwareDriverConnectionProvider(IConnectionProvider decoratedProvider,
        bool disposeDecoratedProviderWhenDisposingThis)
    {
        Guard.AssertNotNull(decoratedProvider, "decoratedProvider");
        this.decoratedProvider = decoratedProvider;
        this.disposeDecoratedProviderWhenDisposingThis = disposeDecoratedProviderWhenDisposingThis;
    }

    ~AmbientTransactionAwareDriverConnectionProvider()
    {
        Dispose(false);
    }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    public void Configure(IDictionary<string, string> settings)
    {
        this.decoratedProvider.Configure(settings);
    }

    public void CloseConnection(IDbConnection conn)
    {
        // Inside an ambient transaction the connection is kept open;
        // it is released when the provider is disposed.
        if (Transaction.Current == null)
            this.decoratedProvider.CloseConnection(conn);
    }

    public IDbConnection GetConnection()
    {
        if (Transaction.Current == null)
        {
            if (this.maintainedConnectionThroughAmbientSession != null)
            {
                this.maintainedConnectionThroughAmbientSession.Dispose();
                this.maintainedConnectionThroughAmbientSession = null;
            }
            return this.decoratedProvider.GetConnection();
        }
        if (this.maintainedConnectionThroughAmbientSession == null)
            this.maintainedConnectionThroughAmbientSession = this.decoratedProvider.GetConnection();
        return this.maintainedConnectionThroughAmbientSession;
    }

    private void Dispose(bool disposing)
    {
        if (!disposing)
            return; // don't touch managed objects from the finalizer
        if (this.maintainedConnectionThroughAmbientSession != null)
            this.maintainedConnectionThroughAmbientSession.Dispose();
        if (this.disposeDecoratedProviderWhenDisposingThis && this.decoratedProvider != null)
            this.decoratedProvider.Dispose();
        this.decoratedProvider = null;
        this.maintainedConnectionThroughAmbientSession = null;
    }

    public IDriver Driver
    {
        get { return this.decoratedProvider.Driver; }
    }
}

I'm not sure whether NHibernate uses transactions internally, but I also don't think that is your problem here.
It appears that the problem is that you are using two different data sources in the same transaction. In order to coordinate the transaction between both data sources for a two-phase commit, you would need to have DTC enabled. The fact that both data sources are actually the same database is immaterial.

Related

Hangfire - DisableConcurrentExecution - Prevent concurrent execution if same value passed in method parameter

The Hangfire DisableConcurrentExecution attribute is not working as expected.
I have one method that can be called with different Ids, and I want to prevent concurrent execution of the method when the same Id is passed.
string jobName = $"{Id} - Entry Job";
_recurringJobManager.AddOrUpdate<EntryJob>(jobName, j => j.RunAsync(Id, null), "0 2 * * *");
My EntryJob class has a RunAsync method:
public class EntryJob : IJob
{
    [DisableConcurrentExecution(3600)] // <-- Tried here
    public async Task RunAsync(int Id, SomeObj obj)
    {
        // Some code
    }
}
And the interface looks like this:
[DisableConcurrentExecution(3600)] // <-- Tried here
public interface IJob
{
    [DisableConcurrentExecution(3600)] // <-- Tried here
    Task RunAsync(int Id, SomeObj obj);
}
Now I want to prevent RunAsync from being called multiple times when the Id is the same. I have tried putting DisableConcurrentExecution on top of the RunAsync method in both locations: inside the interface declaration and where the interface is implemented.
But it does not seem to work for me. Is there any way to prevent concurrency based on Id?
The existing implementation of DisableConcurrentExecution does not support this; it prevents concurrent executions of the method with any args. It would be fairly simple to add support, though. Note the below is untested pseudo-code:
public class DisableConcurrentExecutionWithArgAttribute : JobFilterAttribute, IServerFilter
{
    private readonly int _timeoutInSeconds;
    private readonly int _argPos;

    // additional parameter selects which method argument to use for
    // deduping jobs
    public DisableConcurrentExecutionWithArgAttribute(int timeoutInSeconds, int argPos)
    {
        if (timeoutInSeconds < 0) throw new ArgumentException("Timeout argument value should be greater than zero.");
        _timeoutInSeconds = timeoutInSeconds;
        _argPos = argPos;
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        var resource = GetResource(filterContext.BackgroundJob.Job);
        var timeout = TimeSpan.FromSeconds(_timeoutInSeconds);
        var distributedLock = filterContext.Connection.AcquireDistributedLock(resource, timeout);
        filterContext.Items["DistributedLock"] = distributedLock;
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        if (!filterContext.Items.ContainsKey("DistributedLock"))
        {
            throw new InvalidOperationException("Can not release a distributed lock: it was not acquired.");
        }
        var distributedLock = (IDisposable)filterContext.Items["DistributedLock"];
        distributedLock.Dispose();
    }

    private string GetResource(Job job)
    {
        // include the argument in the locked resource name to make it
        // unique for a given ID
        return $"{job.Type.ToGenericTypeString()}.{job.Method.Name}.{job.Args[_argPos]}";
    }
}

nHibernate and multiple tasks

I am trying to improve the performance of our NHibernate (3.3.2.4000) application (.NET 4.0). Currently we perform CRUD operations one by one, which ends up taking a lot of time, so my plan was to use a ConcurrentQueue and Tasks.
I refactored my code into this:
public void ImportProductsFromXml(string path)
{
    List<Product> products = Mapper.GetProducts(path);
    var addQueue = new ConcurrentQueue<Product>(products);
    var updateTasks = new List<Task>();
    for (int i = 0; i < 5; i++)
    {
        var taskId = i + 1;
        updateTasks.Add(Task.Factory.StartNew(() => ProcessAddQueue(taskId, products, addQueue)));
    }
}

private void ProcessAddQueue(int taskId, List<Product> products, ConcurrentQueue<Product> queue)
{
    Product result = null;
    while (queue.TryDequeue(out result))
    {
        try
        {
            UpdateProducts(products, result);
        }
        catch (Exception ex)
        {
            Debug.WriteLine(string.Format("ProcessAddQueue: taskId={0}, SKU={1}, ex={2}", taskId, result.ProductId, ex));
        }
    }
}
private void UpdateProducts(List<Product> productsFromFile, Product product)
{
    ...code removed...
    CatalogItem parentItem = _catalogRepository.GetByCatalogItemId(category);
    ...code removed...
    _catalogRepository.Save(parentItem);
    ...code removed...
}

public CatalogItem GetByCatalogItemId(string catalogItemId)
{
    using (ISession session = SessionFactory.OpenSession())
    {
        return session
            .CreateCriteria(typeof (CatalogItem))
            .Add(Restrictions.Eq("CatalogItemId", catalogItemId))
            .List<CatalogItem>().FirstOrDefault();
    }
}
Behind the scenes, the "Save" method of the catalogRepository calls this method:
public int Add(T entity)
{
    using (ISession session = SessionFactory.OpenSession())
    using (ITransaction transaction = session.BeginTransaction())
    {
        var id = (int) session.Save(entity);
        transaction.Commit();
        return id;
    }
}
So my idea was to create a ConcurrentQueue containing all the products and then process them five at a time.
However, I am getting a 'Thread was being aborted' exception:
at System.WeakReference.get_Target()
at System.Transactions.Transaction.JitSafeGetContextTransaction(ContextData contextData)
at System.Transactions.Transaction.FastGetTransaction(TransactionScope currentScope, ContextData contextData, Transaction& contextTransaction)
at System.Transactions.Transaction.get_Current()
at NHibernate.Transaction.AdoNetWithDistributedTransactionFactory.EnlistInDistributedTransactionIfNeeded(ISessionImplementor session)
at NHibernate.Impl.SessionImpl.get_PersistenceContext()
at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.EntityIsTransient(SaveOrUpdateEvent event)
at NHibernate.Event.Default.DefaultSaveOrUpdateEventListener.OnSaveOrUpdate(SaveOrUpdateEvent event)
at NHibernate.Impl.SessionImpl.FireSave(SaveOrUpdateEvent event)
at NHibernate.Impl.SessionImpl.Save(Object obj)
What am I doing wrong?
NHibernate sessions are meant to be used as a unit of work. You open a session, open a transaction on it, load your entity, modify it, call Save, commit or roll back the transaction, and then dispose the session.
You should be using ONE session to load your entity and then save it. Currently you are loading an entity with one session and saving it with some other session. Combined with concurrent access, this can cause problems.
Try loading and saving the entity with the same session.
When used as described, NHibernate is fully thread-safe. Please note that a single NHibernate session is NOT thread-safe.

Storm Kafkaspout KryoSerialization issue for java bean from kafka topic

Hi, I am new to Storm and Kafka.
I am using Storm 1.0.1 and Kafka 0.10.0.
We have a KafkaSpout that receives a Java bean from a Kafka topic.
I have spent several hours digging to find the right approach for this.
I found a few articles which were useful, but none of the approaches has worked for me so far.
Following is my code:
StormTopology:
public class StormTopology {
    public static void main(String[] args) throws Exception {
        // Topo test /zkroot test
        if (args.length == 4) {
            System.out.println("started");
            BrokerHosts hosts = new ZkHosts("localhost:2181");
            SpoutConfig kafkaConf1 = new SpoutConfig(hosts, args[1], args[2], args[3]);
            kafkaConf1.zkRoot = args[2];
            kafkaConf1.useStartOffsetTimeIfOffsetOutOfRange = true;
            kafkaConf1.startOffsetTime = kafka.api.OffsetRequest.LatestTime();
            kafkaConf1.scheme = new SchemeAsMultiScheme(new KryoScheme());
            KafkaSpout kafkaSpout1 = new KafkaSpout(kafkaConf1);
            System.out.println("started");

            ShuffleBolt shuffleBolt = new ShuffleBolt(args[1]);
            AnalysisBolt analysisBolt = new AnalysisBolt(args[1]);
            TopologyBuilder topologyBuilder = new TopologyBuilder();
            topologyBuilder.setSpout("kafkaspout", kafkaSpout1, 1);
            //builder.setBolt("counterbolt2", countbolt2, 3).shuffleGrouping("kafkaspout");
            //This is for field grouping; we need two bolts for field grouping or it won't work
            topologyBuilder.setBolt("shuffleBolt", shuffleBolt, 3).shuffleGrouping("kafkaspout");
            topologyBuilder.setBolt("analysisBolt", analysisBolt, 5).fieldsGrouping("shuffleBolt", new Fields("trip"));

            Config config = new Config();
            config.registerSerialization(VehicleTrip.class, VehicleTripKyroSerializer.class);
            config.setDebug(true);
            config.setNumWorkers(1);
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology(args[0], config, topologyBuilder.createTopology());
            // StormSubmitter.submitTopology(args[0], config,
            //         builder.createTopology());
        } else {
            System.out.println("Insufficient Arguments - topologyName kafkaTopic ZKRoot ID");
        }
    }
}
I am serializing the data to Kafka using Kryo.
KafkaProducer:
public class StreamKafkaProducer {
    private static Producer producer;
    private final Properties props = new Properties();
    private static final StreamKafkaProducer KAFKA_PRODUCER = new StreamKafkaProducer();

    private StreamKafkaProducer(){
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "com.abc.serializer.MySerializer");
        producer = new org.apache.kafka.clients.producer.KafkaProducer(props);
    }

    public static StreamKafkaProducer getStreamKafkaProducer(){
        return KAFKA_PRODUCER;
    }

    public void produce(String topic, VehicleTrip vehicleTrip){
        ProducerRecord<String,VehicleTrip> producerRecord = new ProducerRecord<>(topic,vehicleTrip);
        producer.send(producerRecord);
        //producer.close();
    }

    public static void closeProducer(){
        producer.close();
    }
}
Kryo serializer:
public class DataKyroSerializer extends Serializer<Data> implements Serializable {
    @Override
    public void write(Kryo kryo, Output output, Data data) {
        output.writeLong(data.getStartedOn().getTime());
        output.writeLong(data.getEndedOn().getTime());
    }

    @Override
    public Data read(Kryo kryo, Input input, Class<Data> aClass) {
        Data data = new Data();
        data.setStartedOn(new Date(input.readLong()));
        data.setEndedOn(new Date(input.readLong()));
        return data;
    }
}
I need to get the data back into the Data bean.
According to a few articles, I need to provide a custom scheme and make it part of the topology, but so far I have had no luck.
Code for Bolt and Scheme
Scheme:
public class KryoScheme implements Scheme {

    private ThreadLocal<Kryo> kryos = new ThreadLocal<Kryo>() {
        protected Kryo initialValue() {
            Kryo kryo = new Kryo();
            kryo.addDefaultSerializer(Data.class, new DataKyroSerializer());
            return kryo;
        };
    };

    @Override
    public List<Object> deserialize(ByteBuffer ser) {
        return Utils.tuple(kryos.get().readObject(new ByteBufferInput(ser.array()), Data.class));
    }

    @Override
    public Fields getOutputFields( ) {
        return new Fields( "data" );
    }
}
and the bolt:
public class AnalysisBolt implements IBasicBolt {

    private static final long serialVersionUID = 1L;
    private String topicname = null;

    public AnalysisBolt(String topicname) {
        this.topicname = topicname;
    }

    public void prepare(Map stormConf, TopologyContext topologyContext) {
        System.out.println("prepare");
    }

    public void execute(Tuple input, BasicOutputCollector collector) {
        System.out.println("execute");
        Fields fields = input.getFields();
        try {
            JSONObject eventJson = (JSONObject) JSONSerializer.toJSON((String) input
                    .getValueByField(fields.get(1)));
            String StartTime = (String) eventJson.get("startedOn");
            String EndTime = (String) eventJson.get("endedOn");
            String Oid = (String) eventJson.get("_id");
            int V_id = (Integer) eventJson.get("vehicleId");
            // call method getEventForVehicleWithinTime(Long vehicleId, Date startTime, Date endTime)
            System.out.println("===========" + Oid + "| " + V_id + "| " + StartTime + "| " + EndTime);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
But if I submit the Storm topology, I am getting this error:
java.lang.IllegalStateException: Spout 'kafkaspout' contains a
non-serializable field of type com.abc.topology.KryoScheme$1, which
was instantiated prior to topology creation.
com.minda.iconnect.topology.KryoScheme$1 should be instantiated within
the prepare method of 'kafkaspout at the earliest.
I would appreciate help debugging the issue and a pointer in the right direction.
Thanks
Your ThreadLocal is not Serializable. The preferable solution would be to make your serializer both Serializable and thread-safe. If that is not possible, I see two alternatives, since a scheme has no prepare method like the one you get in a bolt:
Declare it static, which is inherently transient.
Declare it transient and access it via a private getter method that initializes the variable on first access.
Within the Storm lifecycle, the topology is instantiated and then serialized to byte format to be stored in ZooKeeper, prior to the topology being executed. Within this step, if a spout or bolt within the topology has an initialized unserializable property, serialization will fail.
If there is a need for a field that is unserializable, initialize it within the bolt or spout's prepare method, which is run after the topology is delivered to the worker.
Source: Best Practices for implementing Apache Storm

Hibernate3 --> Hibernate 4 and issues (Lazy...)

I'm trying to update the libraries of my project (from Hibernate 3.2.1 GA to Hibernate 4.2.8).
This (complex) application uses LAZY loading and fetches the objects later, only when we need them.
It seems to work differently now, because I get some org.hibernate.LazyInitializationException: could not initialize proxy - no Session
@Entity
@Table(name = "CLIENTS")
public class Clients {

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "INFOIDT", insertable = true, updatable = false)
    private Information info;
    //...
}
and when I need to know more about the product before using it:
cli.getInfo();
Note that in my persistence.xml I also have the property hibernate.cache.provider_class set to org.hibernate.cache.EhCacheProvider for a second-level cache.
QUESTION: what is the simplest way to migrate my existing code to Hibernate 4?
(The class given as an example above is a fake example to illustrate the many places that use LAZY loading.)
Thank you.
As requested, see my DAO below:
public class MyAppJpaDAO extends GenericJpaDAO implements IMyAppDAO {

    protected static Log log = LogFactory.getLog(MyAppJpaDAO.class);

    // Entity Manager of the project
    @PersistenceContext(unitName = "MyApp.hibernate")
    private EntityManager em;

    public News readLastNews() {
        StringBuffer sql = new StringBuffer("");
        sql.append(" select object(n) ");
        sql.append(" from News n ");
        sql.append(" Where n.flagLastStatus = 'V' ");
        sql.append(" order by n.pk.date desc ");
        Query aQuery = em.createQuery(sql.toString());
        List<News> res = (List<News>) aQuery.getResultList();
        if (res != null && res.size() != 0) {
            return res.get(0);
        }
        return null;
    }
    //...
}

/////////////

public class GenericJpaDAO implements IGenericDAO {

    protected static Log log = LogFactory.getLog(GenericJpaDAO.class);

    @PersistenceContext(unitName = "MyApp.hibernate")
    EntityManager em;

    public Object getReference(Class _class, Object _object) {
        return em.getReference(_class, _object);
    }

    public void createObject(Object object) {
        try {
            em.persist(object);
        } catch (LazyInitializationException lie) {
            em.merge(em.merge(object));
        }
    }

    public void deleteObject(Object object) {
        try {
            em.remove(object);
        } catch (Exception e) {
            em.remove(em.merge(object));
        }
    }

    public void updateObject(Object object) {
        em.merge(em.merge(object));
    }
    //...
}
If you want to use lazy loading, you need the session to be open and connected at the time you call .getInfo(). org.hibernate.LazyInitializationException occurs if you try to get an entity but the session is disconnected or closed.
I think you have a problem with session handling; it has nothing to do with the entities.
If the SessionFactory is configured in a Spring context file, we can use the OpenSessionInViewFilter to keep the session open:
<filter>
    <filter-name>Hibernate Session In View Filter</filter-name>
    <filter-class>org.springframework.orm.hibernate3.support.OpenSessionInViewFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>Hibernate Session In View Filter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
Unfortunately, my application is not configured like this...
Interesting... but still not helping:
http://www.javacodegeeks.com/2012/07/four-solutions-to-lazyinitializationexc_05.html
But I found something:
1) Hibernate 3.2.1 GA and Spring 2.0:
I used to put a Person having a LAZY bag into a Group, and when I wanted to get a pencil from the bag of any person in the group, I was able to get it.
2) Hibernate 4.2.8 and Spring 3.2.5:
If I don't explicitly ask for the content of the bag just after getting the Person, and before putting it into the group, I get the lazy exception.
If someone could explain to me why...

Wrong Thread.CurrentPrincipal in async WCF end-method

I have a WCF service which has its Thread.CurrentPrincipal set in the ServiceConfiguration.ClaimsAuthorizationManager.
When I implement the service asynchronously like this:
public IAsyncResult BeginMethod1(AsyncCallback callback, object state)
{
    // Audit log call (uses Thread.CurrentPrincipal)
    var task = Task<int>.Factory.StartNew(this.WorkerFunction, state);
    return task.ContinueWith(res => callback(task));
}

public string EndMethod1(IAsyncResult ar)
{
    // Audit log result (uses Thread.CurrentPrincipal)
    return ar.AsyncState as string;
}

private int WorkerFunction(object state)
{
    // perform work
}
I find that the Thread.CurrentPrincipal is set to the correct ClaimsPrincipal in the Begin-method and also in the WorkerFunction, but in the End-method it's set to a GenericPrincipal.
I know I can enable ASP.NET compatibility for the service and use HttpContext.Current.User which has the correct principal in all methods, but I'd rather not do this.
Is there a way to force the Thread.CurrentPrincipal to the correct ClaimsPrincipal without turning on ASP.NET compatibility?
Starting with a summary of WCF extension points, you'll see the one that is expressly designed to solve your problem. It is called a CallContextInitializer. Take a look at this article, which gives CallContextInitializer sample code.
If you make an ICallContextInitializer extension, you will be given control over both the BeginXXX thread context AND the EndXXX thread context. You are saying that the ClaimsAuthorizationManager has correctly established the user principal in your BeginXXX(...) method. In that case, you then make for yourself a custom ICallContextInitializer which either assigns or records the CurrentPrincipal, depending on whether it is handling your BeginXXX() or your EndXXX(). Something like:
public object BeforeInvoke(System.ServiceModel.InstanceContext instanceContext, System.ServiceModel.IClientChannel channel, System.ServiceModel.Channels.Message request)
{
    object principal = null;
    if (request.Properties.TryGetValue("userPrincipal", out principal))
    {
        // If we got here, it means we're about to call the EndXXX(...) method.
        Thread.CurrentPrincipal = (IPrincipal)principal;
    }
    else
    {
        // If we got here, it means we're about to call the BeginXXX(...) method.
        request.Properties["userPrincipal"] = Thread.CurrentPrincipal;
    }
    ...
}
To clarify further, consider two cases. Suppose you implemented both an ICallContextInitializer and an IParameterInspector. Suppose that these hooks are expected to execute with a synchronous WCF service and with an async WCF service (which is your special case).
Below are the sequence of events and the explanation of what is happening:
Synchronous Case
ICallContextInitializer.BeforeInvoke();
IParameterInspector.BeforeCall();
//...service executes...
IParameterInspector.AfterCall();
ICallContextInitializer.AfterInvoke();
Nothing surprising in the above code. But now look below at what happens with asynchronous service operations...
Asynchronous Case
ICallContextInitializer.BeforeInvoke(); //TryGetValue() fails, so this records the UserPrincipal.
IParameterInspector.BeforeCall();
//...Your BeginXXX() routine now executes...
ICallContextInitializer.AfterInvoke();
//...Now your Task async code executes (or finishes executing)...
ICallContextInitializer.BeforeInvoke(); //TryGetValue succeeds, so this assigns the UserPrincipal.
//...Your EndXXX() routine now executes...
IParameterInspector.AfterCall();
ICallContextInitializer.AfterInvoke();
As you can see, the CallContextInitializer ensures you have opportunity to initialize values such as your CurrentPrincipal just before the EndXXX() routine runs. It therefore doesn't matter that the EndXXX() routine assuredly is executing on a different thread than did the BeginXXX() routine. And yes, the System.ServiceModel.Channels.Message object which is storing your user principal between Begin/End methods, is preserved and properly transmitted by WCF even though the thread changed.
Overall, this approach allows your EndXXX(IAsyncresult) to execute with the correct IPrincipal, without having to explicitly re-establish the CurrentPrincipal in the EndXXX() routine. And as with any WCF behavior, you can decide if this applies to individual operations, all operations on a contract, or all operations on an endpoint.
Not really the answer to my question, but an alternate approach to implementing the WCF service (in .NET 4.5) that does not exhibit the same issue with Thread.CurrentPrincipal.
public async Task<string> Method1()
{
    // Audit log call (uses Thread.CurrentPrincipal)
    try
    {
        return await Task.Factory.StartNew(() => this.WorkerFunction());
    }
    finally
    {
        // Audit log result (uses Thread.CurrentPrincipal)
    }
}

private string WorkerFunction()
{
    // perform work
    return string.Empty;
}
The valid approach to this is to create an extension:
public class SLOperationContext : IExtension<OperationContext>
{
    private readonly IDictionary<string, object> items;
    private static ReaderWriterLockSlim _instanceLock = new ReaderWriterLockSlim();

    private SLOperationContext()
    {
        items = new Dictionary<string, object>();
    }

    public IDictionary<string, object> Items
    {
        get { return items; }
    }

    public static SLOperationContext Current
    {
        get
        {
            SLOperationContext context = OperationContext.Current.Extensions.Find<SLOperationContext>();
            if (context == null)
            {
                _instanceLock.EnterWriteLock();
                try
                {
                    // re-check inside the lock so two threads don't both add an instance
                    context = OperationContext.Current.Extensions.Find<SLOperationContext>();
                    if (context == null)
                    {
                        context = new SLOperationContext();
                        OperationContext.Current.Extensions.Add(context);
                    }
                }
                finally
                {
                    _instanceLock.ExitWriteLock();
                }
            }
            return context;
        }
    }

    public void Attach(OperationContext owner) { }
    public void Detach(OperationContext owner) { }
}
This extension is now used as a container for objects that you want to persist across thread switches, because OperationContext.Current remains the same.
Now you can use this in BeginMethod1 to save the current user:
SLOperationContext.Current.Items["Principal"] = OperationContext.Current.ClaimsPrincipal;
And then in EndMethod1 you can get the user back:
ClaimsPrincipal principal = (ClaimsPrincipal)SLOperationContext.Current.Items["Principal"];
EDIT (Another approach):
public IAsyncResult BeginMethod1(AsyncCallback callback, object state)
{
    var task = Task.Factory.StartNew(this.WorkerFunction, state);
    var ec = ExecutionContext.Capture();
    return task.ContinueWith(res =>
        ExecutionContext.Run(ec, (_) => callback(task), null));
}