NHibernate Session per Call in WCF - How to Rollback

I've implemented some components to use WCF with both an IoC container (StructureMap) and the session-per-call pattern. The NHibernate pieces are mostly taken from here: http://realfiction.net/Content/Entry/133.
It seems to be OK, but I want to open a transaction with each call and commit at the end, rather than just calling Flush(), which is how it's done in the article.
Here's where I am running into some problems and could use some advice: I haven't figured out a good way to roll back. I realize I can check the CommunicationState and, if there's an exception, roll back, like so:
public void Detach(InstanceContext owner)
{
    if (Session != null)
    {
        try
        {
            if (owner.State == CommunicationState.Faulted)
                RollbackTransaction();
            else
                CommitTransaction();
        }
        finally
        {
            Session.Dispose();
        }
    }
}
void CommitTransaction()
{
    if (Session.Transaction != null && Session.Transaction.IsActive)
        Session.Transaction.Commit();
}

void RollbackTransaction()
{
    if (Session.Transaction != null && Session.Transaction.IsActive)
        Session.Transaction.Rollback();
}
However, I almost never return a faulted state from a service call. I would typically handle the exception, return an appropriate indicator on my response object, and roll back the transaction myself.
The only way I can think of handling this would be to inject not only repositories into my WCF services but also an ISession, so I can roll back and handle things the way I want. That doesn't sit well with me and seems kind of leaky.
Anyone else handling the same problem?

After further consideration, it seems like the only way to handle this is to inject the ISession into my service. The session is the same one injected into all my repositories, and since a WCF service is an application service, I've decided it's not really leaky or bad to allow my service to manage the transactions. In fact, that's the whole purpose of an application service: to coordinate between infrastructure and the domain.
I still get many benefits from using the techniques in the article (http://realfiction.net/Content/Entry/133); I'm just not going to implement the automatic transaction start/commit/rollback.
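For illustration, here's a minimal sketch of what that looks like, assuming constructor injection via StructureMap; the service, repository, and response names are hypothetical:

public class OrderService : IOrderService
{
    private readonly ISession _session;
    private readonly IOrderRepository _orders;

    public OrderService(ISession session, IOrderRepository orders)
    {
        _session = session;
        _orders = orders;
    }

    public OrderResponse PlaceOrder(OrderRequest request)
    {
        using (var tx = _session.BeginTransaction())
        {
            try
            {
                _orders.Save(MapToOrder(request)); // hypothetical mapping helper
                tx.Commit();
                return OrderResponse.Succeeded();
            }
            catch (Exception ex)
            {
                tx.Rollback();
                // report failure on the response object instead of faulting the channel
                return OrderResponse.Failed(ex.Message);
            }
        }
    }
}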

Related

_context.SaveChanges() works but await _context.SaveChangesAsync() doesn't

I'm struggling to understand something. I have a .NET Core 2.2 Web API with a MySQL 8 database, and I'm using the Pomelo library to connect to MySQL Server.
I have a PUT action method that looks like this:
// PUT: api/Persons/5
[HttpPut("{id}")]
public async Task<IActionResult> PutPerson([FromRoute] int id, Person person)
{
    if (id != person.Id)
    {
        return BadRequest();
    }

    _context.Entry(person).State = EntityState.Modified;

    try
    {
        _context.SaveChanges(); // Works
        // await _context.SaveChangesAsync(); // Doesn't work
    }
    catch (DbUpdateConcurrencyException)
    {
        if (!PersonExists(id))
        {
            return NotFound();
        }
        else
        {
            throw;
        }
    }

    return NoContent();
}
As per my comments in the code snippet above, when I call _context.SaveChanges(), it works (i.e. it updates the relevant record in the MySQL database, and returns a 1) but when I call await _context.SaveChangesAsync(), it doesn't work (it does not update the record, and it returns a 0). It's not throwing an exception or anything - it just doesn't update the record.
Any ideas?
As I said in my comment above, EF Core has no true sync methods. The sync methods (e.g. SaveChanges) merely block on the async methods (e.g. SaveChangesAsync). As such, it's impossible that SaveChanges would work if SaveChangesAsync doesn't, as the former just proxies to the latter. There's some other issue at play here, which is not evident from the code you've provided.
However, the reason I'm writing this as an answer is that the way you're doing this, in general, is wrong, and I believe by doing it right, the problem may disappear. You should never, and I mean never, directly save an instance created from the request body into your database. This provides an attack vector that would allow a malicious user to alter your database in undesirable ways. You've covered that partially by checking that the id has not been modified, but a user could still alter things they should not be allowed to.
That security vulnerability aside, there's a practical reason not to do it this way. An API serves as an anti-corruption layer, but only if you decouple your entity from the object the client interacts with. When you use your entity directly, you're tightly coupling your database to your API layer, such that any change at the database level necessitates a new version of your API, and worse, provides no opportunity for deprecating the previous version. All clients must immediately update or their implementations will break. By exposing a DTO class instead to your client, the database can evolve independently of the API, as you can add any anti-corruption logic necessary to bridge the gap between the two.
Long and short, this is how your method should be structured:
// PUT: api/Persons/5
[HttpPut("{id}")]
public async Task<IActionResult> PutPerson([FromRoute] int id, PersonModel model)
{
    // not necessary if using `[ApiController]`
    if (!ModelState.IsValid)
        return BadRequest();

    var person = await _context.People.FindAsync(id);
    if (person == null)
        return NotFound();

    // map `model` onto `person`

    try
    {
        await _context.SaveChangesAsync();
    }
    catch (DbUpdateConcurrencyException)
    {
        // use an optimistic concurrency strategy from:
        // https://learn.microsoft.com/en-us/ef/core/saving/concurrency#resolving-concurrency-conflicts
    }

    return NoContent();
}
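To make the DTO point concrete, here's a minimal sketch; PersonModel and its properties are assumptions, since the real shape of the entity isn't shown:

// Hypothetical DTO exposing only the fields the client is allowed to change
public class PersonModel
{
    [Required]
    public string FirstName { get; set; }

    [Required]
    public string LastName { get; set; }
}

The `// map `model` onto `person`` step might then be as simple as:

person.FirstName = model.FirstName;
person.LastName = model.LastName;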
I wanted to keep the code straightforward, but for handling optimistic concurrency, I'd actually recommend the Polly exception-handling library. With it you can set up retry policies that keep attempting the update after error correction; otherwise you'd need try/catch within try/catch within try/catch, and so on. Also, DbUpdateConcurrencyException is something you should always handle in some way, so re-throwing it makes no sense.
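As a hedged sketch of that suggestion (assuming Polly's Policy.Handle/RetryAsync API, and that the conflict can be corrected inside the onRetry callback), the nested try/catch collapses into something like:

var retryPolicy = Policy
    .Handle<DbUpdateConcurrencyException>()
    .RetryAsync(3, onRetry: (exception, attempt) =>
    {
        // reload the current database values and re-apply the client's
        // changes here before the next attempt
    });

await retryPolicy.ExecuteAsync(() => _context.SaveChangesAsync());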
I'm truly sorry to anyone whose time I've wasted with this question. I figured out the problem, and it was a stupid mistake I made in my DbContext. I have an audit trail set up, so I override SaveChangesAsync, OnBeforeSaveChanges and OnAfterSaveChanges, and there was a bug in that code. However, I was not overriding SaveChanges, which is why that still worked. Sorry!
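For anyone hitting the same thing: one way to keep the sync and async paths from diverging is to route both save overloads through the same audit hooks. A sketch (the hook names follow the question; the exact wiring is an assumption):

public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    OnBeforeSaveChanges();
    var result = base.SaveChanges(acceptAllChangesOnSuccess);
    OnAfterSaveChanges();
    return result;
}

public override async Task<int> SaveChangesAsync(bool acceptAllChangesOnSuccess,
    CancellationToken cancellationToken = default)
{
    OnBeforeSaveChanges();
    var result = await base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
    OnAfterSaveChanges();
    return result;
}

In EF Core the parameterless SaveChanges() and SaveChangesAsync() delegate to these bool overloads, so overriding just these two covers both entry points.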

EF4/WCF SaveChanges() Best Practice

This is how we implement a generic Save() service in WCF for our EF entities. A T4 template does the work for us. Even though we don't have any problems with it, I hate to assume this is the best approach (even if it might be). You guys seem pretty darn bright and helpful, so I thought I would pose the question:
Is there a better way?
[OperationContract]
public User SaveUser(User entity)
{
    bool _IsDeleted = false;
    using (DatabaseEntities _Context = new DatabaseEntities())
    {
        switch (entity.ChangeTracker.State)
        {
            case ObjectState.Deleted:
                // delete
                _IsDeleted = true;
                _Context.Users.Attach(entity);
                _Context.DeleteObject(entity);
                break;
            default:
                // everything else
                _Context.Users.ApplyChanges(entity);
                break;
        }

        // now, to the database
        try
        {
            // try to save changes, which may cause a conflict
            _Context.SaveChanges(System.Data.Objects.SaveOptions.None);
        }
        catch (System.Data.OptimisticConcurrencyException)
        {
            // resolve the concurrency conflict by refreshing
            _Context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);
            // save changes again
            _Context.SaveChanges();
        }
    }

    // return
    if (_IsDeleted)
        return null;

    entity.AcceptChanges();
    return entity;
}
Why are you doing this with self-tracking entities? What was wrong with this:
[OperationContract]
public User SaveUser(User entity)
{
    bool isDeleted = false;
    using (DatabaseEntities context = new DatabaseEntities())
    {
        isDeleted = entity.ChangeTracker.State == ObjectState.Deleted;
        context.Users.ApplyChanges(entity); // it deletes entities marked for deletion as well
        try
        {
            // no need to postpone accepting changes; they will not be accepted if an exception happens
            context.SaveChanges();
        }
        catch (System.Data.OptimisticConcurrencyException)
        {
            context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);
            context.SaveChanges();
        }
    }
    return isDeleted ? null : entity;
}
If I'm not mistaken, people typically don't expose their Entity Framework objects directly in a WCF service. Entity Framework is typically thought of as a data-access layer, and WCF is more of a front-end layer, so they are put on different tiers.
A Data-Transfer Object (DTO) is used in the WCF methods. This is typically a POCO which doesn't have any state-tracking on it whatsoever. The DTO is then mapped to an Entity either by hand or via a framework like AutoMapper.
Typically clients should know whether they are "adding" or "updating" an object, and I would personally prefer these to be two separate operations on the service interface. Also, I would definitely require them to use a separate method for deleting an object. However, if you absolutely need a generic "Save", you should be able to tell whether the object you've been given is "new" or not based on the presence (or absence) of a primary key value.
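For example, a sketch of what that contract might look like (UserDto and the operation names are assumptions):

[ServiceContract]
public interface IUserService
{
    [OperationContract]
    UserDto AddUser(UserDto user);

    [OperationContract]
    UserDto UpdateUser(UserDto user);

    [OperationContract]
    void DeleteUser(int userId);
}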
A lot of the code can be put into a generic utility. For example, supposing your T4 template produces attributes on the key values of your entities, you could automatically determine whether the key values are present and perform an insert or update accordingly. Also, the try-SaveChanges-catch-retry block you're using, while probably unnecessary, could easily be put into a simple utility method to be more DRY.
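As a sketch of that last point, the retry block could be pulled into an extension method on ObjectContext (a minimal version, assuming the EF4 APIs used in the question):

public static class ObjectContextExtensions
{
    public static void SaveWithConcurrencyRetry(this System.Data.Objects.ObjectContext context, object entity)
    {
        try
        {
            context.SaveChanges();
        }
        catch (System.Data.OptimisticConcurrencyException)
        {
            // client wins: refresh from the store, then try once more
            context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);
            context.SaveChanges();
        }
    }
}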

Retry mechanism on WCF operation call when channel in faulted state

I'm trying to find an elegant way to retry an operation when a WCF channel is in a faulted state. I've tried using the Policy Injection Application Block to reconnect and retry the operation when a faulted-state exception occurs on the first call, but the PolicyInjection.Wrap method doesn't seem to like wrapping TransparentProxy objects (the proxy returned from ChannelFactory.CreateChannel).
Is there any other mechanism I could try, or how could I get the PIAB solution working correctly? Any links, examples, etc. would be greatly appreciated.
Here is the code I was using that was failing:
var channelFactory = new ChannelFactory<IService>(endpointConfigurationName);
var proxy = channelFactory.CreateChannel();
proxy = PolicyInjection.Wrap<IService>(proxy);
Thank you.
I would rather use callback functions, something like this:
private SomeServiceClient proxy;
//This method invokes a service method and recreates the proxy if it's in a faulted state
private void TryInvoke(Action<SomeServiceClient> action)
{
try
{
action(this.proxy);
}
catch (FaultException fe)
{
if (proxy.State == CommunicationState.Faulted)
{
this.proxy.Abort();
this.proxy = new SomeServiceClient();
//Probably, there is a better way than recursion
TryInvoke(action);
}
}
}
//Any real method
private void Connect(Action<UserModel> callback)
{
TryInvoke(sc => callback(sc.Connect()));
}
And in your code you should call
ServiceProxy.Instance.Connect(user => MessageBox.Show(user.Name));
instead of
var user = ServiceProxy.Instance.Connect();
MessageBox.Show(user.Name);
Although my code uses the proxy-class approach, you can write similar code with Channels.
Thank you so much for your reply. What I ended up doing was creating a decorator class that implements my service's interface and wraps the transparent proxy generated by the ChannelFactory. I was then able to use the Policy Injection Application Block to create a layer on top of this that injects code into each operation call: it tries the operation, and if a CommunicationObjectFaultedException occurs, it aborts the channel, recreates it, and retries the operation. It's working great now. The only downside is that the wrapper class has to implement every service operation, but this was the only way I could get the PIAB to work, and since everything goes through interfaces, it will be easy to change if I find a better approach in the future.
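For anyone curious, a hypothetical sketch of that decorator, with a single-operation contract standing in for the real service; the PIAB policy injected the equivalent of the catch block:

[ServiceContract]
public interface IService
{
    [OperationContract]
    string Ping(string message);
}

public class RetryingServiceDecorator : IService
{
    private readonly Func<IService> _proxyFactory;
    private IService _proxy;

    public RetryingServiceDecorator(Func<IService> proxyFactory)
    {
        _proxyFactory = proxyFactory;
        _proxy = proxyFactory();
    }

    public string Ping(string message)
    {
        try
        {
            return _proxy.Ping(message);
        }
        catch (CommunicationObjectFaultedException)
        {
            ((ICommunicationObject)_proxy).Abort(); // drop the faulted channel
            _proxy = _proxyFactory();               // recreate it
            return _proxy.Ping(message);            // retry the operation once
        }
    }
}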

When to commit NHibernate transactions in ASP.NET MVC 2 application?

First, some background: I'm new to ASP.NET MVC 2 and NHibernate. I'm starting my first application and I want to use NHibernate, because I come from JSP + Struts 1 + Hibernate web applications.
No one seems to be talking about this, so I guess it must be pretty obvious. Still, I scratch my head because I can't find a solution that accomplishes the following things:
1) I want to use the "session per request" strategy. So, every time a user makes a request, he gets an NHibernate session and a transaction is started; when the request is over, the transaction commits and the NHibernate session closes (and returns to the pool if there is one). This guarantees that my transactions are atomic.
2) When a database exception occurs (PK violation, unique violation, whatever), I want to capture that exception, roll back my transaction, and give the user an explicit message: if it was a PK violation, then the corresponding message, and likewise for all integrity errors.
So, what is my problem? I come from the Java world, where I used a Filter to open the session, start the transaction, process the request, then commit the transaction and close the session. This works, except when a DB exception occurs: by the time you are back in the filter, there's no way to change the destination page because the response has already been committed.
So the user sees the success page when in reality the transaction was rolled back. To avoid this I had to write a lot of data-integrity checks in Java in order to prevent all integrity exceptions, because I could not handle them correctly. This is bad because I'm doing the work instead of leaving it to the database (or maybe I'm wrong and I always have to write all this data-integrity code in my app?).
So I've found the IHttpModule interface, which I'm guessing is pretty much the same concept as a javax.servlet.Filter (correct me if I'm wrong), so I'm guessing I could run into the same problem again.
Where should I put my commits in order to make sure that my transactions are atomic, so that when they throw exceptions I can capture them, change my destination page, and give the user a comprehensible message?
So far the only possible solution I've come up with is to keep my IHttpModule to start the transaction and close the session, and put the commit calls in the last line of my controller methods, so I can capture exceptions there and return an appropriate view with the message. But then I would have to copy those commit and exception-handling lines into every controller method that commits. And there is the separation-of-concerns issue: my controllers have to know about the DB, which I don't like at all.
Is there a better way?
If you're using ASP.NET MVC, you could use an ActionFilter to achieve the same effect.
Something like this (hacked together from different pieces of my architecture):
public class TransactionalAttribute : ActionFilterAttribute, IAuthorizationFilter, IExceptionFilter
{
    ITransaction transaction = NullTransaction.Instance;

    public IsolationLevel IsolationLevel { get; set; }

    public TransactionalAttribute()
    {
        IsolationLevel = IsolationLevel.ReadCommitted;
    }

    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        try
        {
            transaction.Commit();
            transaction = NullTransaction.Instance;
        }
        catch (Exception exception)
        {
            Log.For(this).FatalFormat("Problem trying to commit transaction {0}", exception);
        }
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        if (transaction == NullTransaction.Instance)
            transaction = UnitOfWork.Current.BeginTransaction(IsolationLevel);
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        if (filterContext.Result != null) return;
        transaction.Commit();
        transaction = NullTransaction.Instance;
    }

    public void OnAuthorization(AuthorizationContext filterContext)
    {
        transaction = UnitOfWork.Current.BeginTransaction(IsolationLevel);
    }

    public void OnException(ExceptionContext filterContext)
    {
        try
        {
            transaction.Rollback();
            transaction = NullTransaction.Instance;
        }
        catch (Exception exception)
        {
            Log.For(this).FatalFormat("Problem trying to rollback transaction {0}", exception);
        }
    }

    private class NullTransaction : ITransaction
    {
        public static ITransaction Instance { get { return Singleton<NullTransaction>.Instance; } }

        public void Dispose() { }

        public void Commit() { }

        public void Rollback() { }
    }
}
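With an attribute like that in place, a controller action opts into a transaction by decoration; a hypothetical usage (the action, model, and repository names are placeholders):

[Transactional(IsolationLevel = IsolationLevel.Serializable)]
public ActionResult Save(ProductInput input)
{
    // runs inside the transaction begun in OnAuthorization;
    // commit/rollback happen in the filter, not here
    productRepository.Save(Map(input));
    return RedirectToAction("Index");
}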
Well, after thinking about it and discussing it with coworkers, I've come up with a solution that meets almost all my requirements.
I implemented the solution in my Java projects and it worked great. I'll just post the idea so everybody can use it with any framework.
The solution consists of putting the commit call in the last line of the controller method, inside a try-catch block. If a constraint exception occurs, you can get the name of the violated constraint, and with the name you can tell the user exactly what went wrong. I used a properties file to store the messages shown to the user: the keys are the constraint names and the values are the constraint-violation messages.
You can refactor the commit / handle-exception / find-constraint-message sequence into a single method; that's what I did.
For now it solves my problem of writing code to check database integrity, and I believe it's pretty elegant with the constraint-violation messages in a properties file. I still don't like the idea that my controllers need to call commit, but that's way better than writing integrity checks that the database already does. A sketch of the idea follows.
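Sketched in C# against NHibernate (the original was Java/Hibernate), assuming an exception converter that surfaces NHibernate's ConstraintViolationException, and a hypothetical ConstraintMessages lookup backed by the properties file:

public ActionResult Create(ProductInput input)
{
    productRepository.Save(MapToEntity(input)); // hypothetical repository call
    try
    {
        transaction.Commit();
        return RedirectToAction("Index");
    }
    catch (ConstraintViolationException ex)
    {
        transaction.Rollback();
        // look up the user-facing message keyed by the violated constraint's name
        ModelState.AddModelError("", ConstraintMessages.For(ex.ConstraintName));
        return View(input);
    }
}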
I will continue to use a filter, just like David Kemp said, except that the filter will only open the (N)Hibernate session and the transaction, and then, at the end of the request, close the session.
Comments are more than welcome. Thanks.

What is the proper life-cycle of a WCF service client proxy in Silverlight 3?

I'm finding mixed answers to my question out in the web. To elaborate on the question:
Should I instantiate a service client proxy once per asynchronous invocation, or once per Silverlight app?
Should I close the service client proxy explicitly (as I do in my ASP.NET MVC application calling WCF services synchronously)?
I've found plenty of bloggers and forum posters out contradicting each other. Can anyone point to any definitive sources or evidence to answer this once and for all?
I've been using Silverlight with WCF since V2 (working with V4 now), and here's what I've found. In general, it works very well to open one client and just use that one client for all communications. And if you're not using the DuplexHttpBinding, it also works fine to do just the opposite: open a new connection each time and close it when you're done. And because of how Microsoft has architected the WCF client in Silverlight, you're not going to see much performance difference between keeping one client open all the time and creating a new client with each request. (But if you're creating a new client with each request, make darned sure you're closing it as well.)
Now, if you're using the DuplexHttpBinding, i.e., if you want to call methods on the client from the server, it's of course important that you don't close the client with each request. That's just common sense. However, what none of the documentation tells you, but what I've found to be absolutely critical, is that if you're using the DuplexHttpBinding, you should only ever have one instance of the client open at once. Otherwise, you're going to run into all sorts of nasty timeout problems that are going to be really, really hard to troubleshoot. Your life will be dramatically easier if you just have one connection.
The way that I've enforced this in my own code is to run all my connections through a single static DataConnectionManager class that throws an Assert if I try to open a second connection before closing the first. A few snippets from that class:
private static int clientsOpen;

public static int ClientsOpen
{
    get
    {
        return clientsOpen;
    }
    set
    {
        clientsOpen = value;
        Debug.Assert(clientsOpen <= 1, "Bad things seem to happen when there's more than one open client.");
    }
}

public static RoomServiceClient GetRoomServiceClient()
{
    ClientsCreated++;
    ClientsOpen++;
    Logger.LogDebugMessage("Clients created: {0}; Clients open: {1}", ClientsCreated, ClientsOpen);
    return new RoomServiceClient(GetDuplexHttpBinding(), GetDuplexHttpEndpoint());
}
public static void TryClientClose(RoomServiceClient client, bool waitForPendingCalls, Action<Exception> callback)
{
    if (client != null && client.State != CommunicationState.Closed)
    {
        client.CloseCompleted += (sender, e) =>
        {
            ClientsClosed++;
            ClientsOpen--;
            Logger.LogDebugMessage("Clients closed: {0}; Clients open: {1}", ClientsClosed, ClientsOpen);
            if (e.Error != null)
            {
                Logger.LogDebugMessage(e.Error.Message);
                client.Abort();
            }
            closingIntentionally = false;
            if (callback != null)
            {
                callback(e.Error);
            }
        };
        closingIntentionally = true;
        if (waitForPendingCalls)
        {
            WaitForPendingCalls(() => client.CloseAsync());
        }
        else
        {
            client.CloseAsync();
        }
    }
    else
    {
        if (callback != null)
        {
            callback(null);
        }
    }
}
The annoying part, of course, is that if you only have one connection, you need to trap for when that connection closes unintentionally and try to reopen it. And then you need to reinitialize all the callbacks that your different classes were registered to handle. It's not really all that difficult, but it's annoying to make sure it's done right. And of course, automated testing of that part is difficult if not impossible...
You should open your client per call and close it immediately after. If you're in doubt, browse to a .svc file in IE and look at the example shown there.
WCF has configuration settings that tell it how long to wait for a call to return; my thinking is that when the call does not complete in the allowed time, the async close will close it anyway. Therefore, call client.CloseAsync().
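A minimal sketch of that open-per-call pattern, assuming a generated Silverlight proxy named RoomServiceClient with a GetRoom operation (all names are placeholders):

var client = new RoomServiceClient();
client.GetRoomCompleted += (s, e) =>
{
    if (e.Error == null)
    {
        // use e.Result here
    }
    client.CloseAsync(); // close as soon as the call completes
};
client.GetRoomAsync(roomId);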