_context.SaveChanges() works but await _context.SaveChangesAsync() doesn't - asp.net-core

I'm struggling to understand something. I have a .NET Core 2.2 Web API with a MySQL 8 database, and I'm using the Pomelo library to connect to MySQL Server.
I have a PUT action method that looks like this:
// PUT: api/Persons/5
[HttpPut("{id}")]
public async Task<IActionResult> PutPerson([FromRoute] int id, Person person)
{
    if (id != person.Id)
    {
        return BadRequest();
    }

    _context.Entry(person).State = EntityState.Modified;

    try
    {
        _context.SaveChanges(); // Works
        // await _context.SaveChangesAsync(); // Doesn't work
    }
    catch (DbUpdateConcurrencyException)
    {
        if (!PersonExists(id))
        {
            return NotFound();
        }
        else
        {
            throw;
        }
    }

    return NoContent();
}
As per my comments in the code snippet above, when I call _context.SaveChanges(), it works (i.e. it updates the relevant record in the MySQL database, and returns a 1) but when I call await _context.SaveChangesAsync(), it doesn't work (it does not update the record, and it returns a 0). It's not throwing an exception or anything - it just doesn't update the record.
Any ideas?

As I said in my comment above, EF Core has no true sync methods. The sync methods (e.g. SaveChanges) merely block on the async methods (e.g. SaveChangesAsync). As such, it's impossible for SaveChanges to work if SaveChangesAsync doesn't, as the former just proxies to the latter. There's some other issue at play here, which is not evident from the code you've provided.
However, the reason I'm writing this as an answer is that the way you're doing this, in general, is wrong, and I believe that by doing it right, the problem may disappear. You should never, and I mean never, save an instance created directly from the request body into your database. This provides an attack vector that would allow a malicious user to alter your database in undesirable ways. You've covered that partially by checking that the id has not been modified, but a user could still alter things they should not be allowed to.
That security vulnerability aside, there's a practical reason not to do it this way. An API serves as an anti-corruption layer, but only if you decouple your entity from the object the client interacts with. When you use your entity directly, you're tightly coupling your database to your API layer, such that any change at the database level necessitates a new version of your API, and worse, provides no opportunity for deprecating the previous version. All clients must immediately update or their implementations will break. By exposing a DTO class instead to your client, the database can evolve independently of the API, as you can add any anti-corruption logic necessary to bridge the gap between the two.
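For illustration, the DTO can be as small as this (a minimal sketch; the property names and validation attributes are assumptions, the point is that it only exposes what the client is allowed to change):
// Hypothetical DTO exposed to clients instead of the Person entity.
// using System.ComponentModel.DataAnnotations;
public class PersonModel
{
    [Required]
    [StringLength(100)]
    public string FirstName { get; set; }

    [Required]
    [StringLength(100)]
    public string LastName { get; set; }

    [EmailAddress]
    public string Email { get; set; }
}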
Long and short, this is how your method should be structured:
// PUT: api/Persons/5
[HttpPut("{id}")]
public async Task<IActionResult> PutPerson([FromRoute] int id, PersonModel model)
{
    // not necessary if using `[ApiController]`
    if (!ModelState.IsValid)
        return BadRequest();

    var person = await _context.People.FindAsync(id);
    if (person == null)
        return NotFound();

    // map `model` onto `person`

    try
    {
        await _context.SaveChangesAsync();
    }
    catch (DbUpdateConcurrencyException)
    {
        // use an optimistic concurrency strategy from:
        // https://learn.microsoft.com/en-us/ef/core/saving/concurrency#resolving-concurrency-conflicts
    }

    return NoContent();
}
I wanted to keep the code straightforward, but for handling optimistic concurrency I'd actually recommend using the Polly exception handling library. You can set up retry policies with it that keep retrying the update after error correction; otherwise you'd end up with try/catch within try/catch within try/catch, and so on. Also, DbUpdateConcurrencyException is something you should always handle in some way, so re-throwing it makes no sense.
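As a rough sketch of what that could look like (this assumes the Polly package is installed; the conflict-resolution body is only a placeholder for whatever strategy you choose):
// using Polly;  // assumption: the Polly NuGet package is referenced
var retryPolicy = Policy
    .Handle<DbUpdateConcurrencyException>()
    .RetryAsync(3);

await retryPolicy.ExecuteAsync(async () =>
{
    try
    {
        await _context.SaveChangesAsync();
    }
    catch (DbUpdateConcurrencyException ex)
    {
        // Client-wins style resolution: refresh the original values from the
        // database, re-apply the client's changes, then let Polly retry the save.
        foreach (var entry in ex.Entries)
        {
            var databaseValues = await entry.GetDatabaseValuesAsync();
            if (databaseValues != null)
                entry.OriginalValues.SetValues(databaseValues);
            // re-map `model` onto `person` here as needed
        }
        throw; // rethrow so the policy performs the retry
    }
});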

I'm truly sorry to anyone whose time I have wasted with this question. I figured out the problem, and it was a stupid mistake I made in my DbContext. I have an audit trail set up, so I am overriding SaveChangesAsync, OnBeforeSaveChanges and OnAfterSaveChanges, and there was a bug in that code. However, I am not overriding SaveChanges, which is why that still worked. Sorry!
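For anyone who ends up with a similar audit-trail setup: it helps to route both save paths through the same hooks so the sync and async behaviour can't drift apart. A minimal sketch, assuming hypothetical OnBeforeSaveChanges/OnAfterSaveChanges helpers like the ones described above (in EF Core the parameterless SaveChanges/SaveChangesAsync overloads delegate to the bool overloads, so overriding this pair covers both):
public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    var auditEntries = OnBeforeSaveChanges();               // hypothetical audit hook
    var result = base.SaveChanges(acceptAllChangesOnSuccess);
    OnAfterSaveChanges(auditEntries);                       // hypothetical audit hook
    return result;
}

public override async Task<int> SaveChangesAsync(
    bool acceptAllChangesOnSuccess,
    CancellationToken cancellationToken = default)
{
    var auditEntries = OnBeforeSaveChanges();
    var result = await base.SaveChangesAsync(acceptAllChangesOnSuccess, cancellationToken);
    OnAfterSaveChanges(auditEntries);
    return result;
}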

Related

Metro c++ async programming and UI updating. My technique?

The problem: I'm crashing when I want to render my incoming data which was retrieved asynchronously.
The app starts and displays some dialog boxes using XAML. Once the user fills in their data and clicks the login button, the XAML class has an instance of a worker class that does the HTTP stuff for me (asynchronously, using IXMLHTTPRequest2). When the app has successfully logged in to the web server, my .then() block fires and I make a callback to my main XAML class to do some rendering of the assets.
I am always getting crashes in the delegate though (the main XAML class), which leads me to believe that I cannot use this approach (pure virtual class and callbacks) to update my UI. I think I am inadvertently trying to do something illegal from an incorrect thread which is a byproduct of the async calls.
Is there a better or different way that I should be notifying the main XAML class that it is time for it to update its UI? I am coming from an iOS world where I could use NotificationCenter.
Now, I saw that Microsoft has its own Delegate type of thing here: http://msdn.microsoft.com/en-us/library/windows/apps/hh755798.aspx
Do you think that if I used this approach instead of my own callbacks that it would no longer crash?
Let me know if you need more clarification or what not.
Here is the gist of the code:
public interface class ISmileServiceEvents
{
public: // required methods
    virtual void UpdateUI(String^ data) abstract;
};

// In main XAML.cpp, which inherits from ISmileServiceEvents
void buttonClick(...)
{
    _myUser->LoginAndGetAssets(txtEmail->Text, txtPass->Password);
}

void UpdateUI(String^ data) // implements ISmileServiceEvents
{
    // This is where I would render my assets if I could.
    // Cannot legally do much here. Always crashes.
    // Follow the rest of the code to get here.
}

// In MyUser.cpp
void LoginAndGetAssets(String^ email, String^ password)
{
    Uri^ uri = ref new Uri(MY_SERVER + "login.json");
    String^ inJSON = "some json input data here"; // serialized email and password with other data

    // make the HTTP request to login, then notify XAML that it has data to render.
    _myService->HTTPPostAsync(uri, inJSON).then([this](String^ outputJson)
    {
        String^ assets = MyParser::Parse(outputJson);
        // The login has returned and we have our json output data
        if (_delegate)
        {
            _delegate->UpdateUI(assets);
        }
    });
}

// In MyService.cpp
task<String^> MyService::HTTPPostAsync(Uri^ uri, String^ json)
{
    return _httpRequest.PostAsync(uri,
        json->Data(),
        _cancellationTokenSource.get_token()).then([this](task<std::wstring> response)
    {
        try
        {
            if (_httpRequest.GetStatusCode() != 200) SM_LOG_WARNING("Status code=", _httpRequest.GetStatusCode());
            String^ j = ref new String(response.get().c_str());
            return j;
        }
        catch (Exception^ ex) .......;
        return ref new String(L"");
    }, task_continuation_context::use_current());
}
Edit: BTW, the error I get when I go to update the UI is:
"An invalid parameter was passed to a function that considers invalid parameters fatal."
In this case, all I am trying to execute in my callback is:
txtBox->Text = data;
It appears you are updating the UI thread from the wrong context. You can use task_continuation_context::use_arbitrary() to allow you to update the UI. See the "Controlling the Execution Thread" example in this document (the discussion of marshaling is at the bottom).
So, it turns out that when you have a continuation, if you don't specify a context after the lambda function, it defaults to use_arbitrary(). This is in contradiction to what I learned in an MS video.
However, by adding use_current() to all of the .then blocks that have anything to do with the GUI, my error goes away and everything is able to render properly.
My GUI calls a service which generates some tasks and then calls to an HTTP class that does asynchronous stuff too. Way back in the HTTP classes I use use_arbitrary() so that it can run on secondary threads. This works fine. Just be sure to use use_current() on anything that has to do with the GUI.
Now that you have my answer, if you look at the original code you will see that it already contains use_current(). This is true, but I left out a wrapping function for simplicity of the example. That is where I needed to add use_current().

Have multiple calls wait on the same internal async task

(Note: this is an over-simplified scenario to demonstrate my coding issue.)
I have the following class interface:
public class CustomerService
{
    Task<IEnumerable<Customer>> FindCustomersInArea(String areaName);
    Task<Customer> GetCustomerByName(String name);
    :
}
This is the client-side of a RESTful API which loads a list of Customer objects from the server then exposes methods that allows client code to consume and work against that list.
Both of these methods work against the internal list of Customers retrieved from the server as follows:
private Task<IEnumerable<Customer>> LoadCustomersAsync()
{
    var tcs = new TaskCompletionSource<IEnumerable<Customer>>();

    try
    {
        // GetAsync returns Task<HttpResponseMessage>
        Client.GetAsync(uri).ContinueWith(task =>
        {
            if (task.IsCanceled)
            {
                tcs.SetCanceled();
            }
            else if (task.IsFaulted)
            {
                tcs.SetException(task.Exception);
            }
            else
            {
                // Convert HttpResponseMessage to desired return type
                var response = task.Result;
                var list = response.Content.ReadAs<IEnumerable<Customer>>();
                tcs.SetResult(list);
            }
        });
    }
    catch (Exception ex)
    {
        tcs.SetException(ex);
    }

    return tcs.Task;
}
The Client class is a custom version of the HttpClient class from the WCF Web API (now ASP.NET Web API) because I am working in Silverlight and they don't have an SL version of their client assemblies.
After all that background, here's my problem:
All of the methods in the CustomerService class use the list returned by the asynchronous LoadCustomersAsync method; therefore, any calls to these methods should wait (asynchronously) until the LoadCustomersAsync method has returned and the appropriate logic has executed on the returned list.
I also only want one call made from the client (in LoadCustomers) at a time. So, I need all of the calls to the public methods to wait on the same internal task.
To review, here's what I need to figure out how to accomplish:
Any call to FindCustomersInArea and GetCustomerByName should return a Task that waits for the LoadCustomersAsync method to complete. If LoadCustomersAsync has already returned (and the cached list still valid), then the method may continue immediately.
After LoadCustomersAsync returns, each method has additional logic required to convert the list into the desired return value for the method.
There must only ever be one active call to LoadCustomersAsync (or the GetAsync method within).
If the cached list expires, then subsequent calls will trigger a reload (via LoadCustomersAsync).
Let me know if you need further clarification, but I'm hoping this is a common enough use case that someone can help me work out the logic to get the client working as desired.
Disclaimer: I'm going to assume you're using a singleton instance of your HttpClient subclass. If that's not the case we need only modify slightly what I'm about to tell you.
Yes, this is totally doable. The mechanism we're going to rely on for subsequent calls to LoadCustomersAsync is that if you attach a continuation to a Task, even if that Task completed eons ago, your continuation will be signaled "immediately" with the task's final state.
Instead of creating and returning a new TaskCompletionSource<T> (TCS) every time from the LoadCustomersAsync method, you would instead have a field on the class that holds the TCS. This lets your instance remember the TCS from the last call that was a cache miss. That TCS's state will be signaled exactly the same way as in your existing code. You'll also track whether the data has expired in another field which, combined with whether the TCS is currently null or not, will be the trigger for whether or not you actually go out and load the data again.
Ok, enough talk, it'll probably make a lot more sense if you see it.
The Code
public class CustomerService
{
    // Your cache timeout (using 15mins as example, can load from config or wherever)
    private static readonly TimeSpan CustomersCacheTimeout = new TimeSpan(0, 15, 0);

    // A lock object used to provide thread safety
    private object loadCustomersLock = new object();

    private TaskCompletionSource<IEnumerable<Customer>> loadCustomersTaskCompletionSource;
    private DateTime loadCustomersLastCacheTime = DateTime.MinValue;

    private Task<IEnumerable<Customer>> LoadCustomersAsync()
    {
        lock (this.loadCustomersLock)
        {
            bool needToLoadCustomers = this.loadCustomersTaskCompletionSource == null
                ||
                (this.loadCustomersTaskCompletionSource.Task.IsFaulted || this.loadCustomersTaskCompletionSource.Task.IsCanceled)
                ||
                DateTime.Now - this.loadCustomersLastCacheTime > CustomerService.CustomersCacheTimeout;

            if (needToLoadCustomers)
            {
                this.loadCustomersTaskCompletionSource = new TaskCompletionSource<IEnumerable<Customer>>();

                try
                {
                    // GetAsync returns Task<HttpResponseMessage>
                    Client.GetAsync(uri).ContinueWith(antecedent =>
                    {
                        if (antecedent.IsCanceled)
                        {
                            this.loadCustomersTaskCompletionSource.SetCanceled();
                        }
                        else if (antecedent.IsFaulted)
                        {
                            this.loadCustomersTaskCompletionSource.SetException(antecedent.Exception);
                        }
                        else
                        {
                            // Convert HttpResponseMessage to desired return type
                            var response = antecedent.Result;
                            var list = response.Content.ReadAs<IEnumerable<Customer>>();
                            this.loadCustomersTaskCompletionSource.SetResult(list);

                            // Record the last cache time
                            this.loadCustomersLastCacheTime = DateTime.Now;
                        }
                    });
                }
                catch (Exception ex)
                {
                    this.loadCustomersTaskCompletionSource.SetException(ex);
                }
            }

            return this.loadCustomersTaskCompletionSource.Task;
        }
    }
}
Scenarios where the customers aren't loaded:
If it's the first call, the TCS will be null so the TCS will be created and customers fetched.
If the previous call faulted or was canceled, a new TCS will be created and the customers fetched.
If the cache timeout has expired, a new TCS will be created and the customers fetched.
Scenarios where the customers are loading/loaded:
If the customers are in the process of loading, the existing TCS's Task will be returned and any continuations added to the task using ContinueWith will be executed once the TCS has been signaled.
If the customers are already loaded, the existing TCS's Task will be returned and any continuations added to the task using ContinueWith will be executed as soon as the scheduler sees fit.
NOTE: I used a coarse grained locking approach here and you could theoretically improve performance with a reader/writer implementation, but it would probably be a micro-optimization in your case.
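For completeness, the public methods from the question can then be thin continuations over the shared task; a rough sketch (the Area and Name properties on Customer are assumed for illustration):
// Each public method reuses LoadCustomersAsync and adds its own projection.
public Task<IEnumerable<Customer>> FindCustomersInArea(String areaName)
{
    return LoadCustomersAsync().ContinueWith(t =>
        (IEnumerable<Customer>)t.Result.Where(c => c.Area == areaName).ToList());
}

public Task<Customer> GetCustomerByName(String name)
{
    return LoadCustomersAsync().ContinueWith(t =>
        t.Result.FirstOrDefault(c => c.Name == name));
}
Accessing t.Result inside the continuation rethrows if the load faulted, which simply propagates the failure to the caller's task.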
I think you should change the way you call Client.GetAsync(uri). Do it roughly like this:
Lazy<Task<HttpResponseMessage>> getAsyncLazy = new Lazy<Task<HttpResponseMessage>>(() => Client.GetAsync(uri));
And in your LoadCustomersAsync method you write:
getAsyncLazy.Value.ContinueWith(task => ...
This will ensure that GetAsync only gets called once and that everyone interested in its result will receive the same task.
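To make that concrete, a minimal sketch (this ignores cache expiry; to support it you would swap in a new Lazy instance when the data goes stale):
private Lazy<Task<IEnumerable<Customer>>> customersLazy;

public CustomerService()
{
    // Initialized in the constructor so the lambda can capture Client/uri.
    customersLazy = new Lazy<Task<IEnumerable<Customer>>>(() =>
        Client.GetAsync(uri).ContinueWith(t =>
            t.Result.Content.ReadAs<IEnumerable<Customer>>()));
}

public Task<Customer> GetCustomerByName(String name)
{
    return customersLazy.Value.ContinueWith(t =>
        t.Result.FirstOrDefault(c => c.Name == name));
}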

EF4/WCF SaveChanges() Best Practice

This is how we implement a generic Save() service in WCF for our EF entities. A T4 template does the work for us. Even though we don't have any problems with it, I hate to assume this is the best approach (even if it might be). You guys seem pretty darn bright and helpful, so I thought I would pose the question:
Is there a better way?
[OperationContract]
public User SaveUser(User entity)
{
    bool _IsDeleted = false;

    using (DatabaseEntities _Context = new DatabaseEntities())
    {
        switch (entity.ChangeTracker.State)
        {
            case ObjectState.Deleted:
                // delete
                _IsDeleted = true;
                _Context.Users.Attach(entity);
                _Context.DeleteObject(entity);
                break;
            default:
                // everything else
                _Context.Users.ApplyChanges(entity);
                break;
        }

        // now, to the database
        try
        {
            // try to save changes, which may cause a conflict.
            _Context.SaveChanges(System.Data.Objects.SaveOptions.None);
        }
        catch (System.Data.OptimisticConcurrencyException)
        {
            // resolve the concurrency conflict by refreshing
            _Context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);

            // Save changes.
            _Context.SaveChanges();
        }
    }

    // return
    if (_IsDeleted)
        return null;

    entity.AcceptChanges();
    return entity;
}
Why are you doing this with Self tracking entities? What was wrong with this:
[OperationContract]
public User SaveUser(User entity)
{
    bool isDeleted = false;

    using (DatabaseEntities context = new DatabaseEntities())
    {
        isDeleted = entity.ChangeTracker.State == ObjectState.Deleted;
        context.Users.ApplyChanges(entity); // It deletes entities marked for deletion as well

        try
        {
            // no need to postpone accepting changes, they will not be accepted if exception happens
            context.SaveChanges();
        }
        catch (System.Data.OptimisticConcurrencyException)
        {
            context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);
            context.SaveChanges();
        }
    }

    return isDeleted ? null : entity;
}
If I'm not mistaken, people typically don't expose their Entity Framework objects directly in a WCF service. Entity Framework is typically thought of as a data-access layer, and WCF is more of a front-end layer, so they are put on different tiers.
A Data-Transfer Object (DTO) is used in the WCF methods. This is typically a POCO which doesn't have any state-tracking on it whatsoever. The DTO is then mapped to an Entity either by hand or via a framework like AutoMapper.
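As a rough illustration of that layering (UserDto's shape and the AutoMapper configuration are assumptions, and the classic static Mapper API is used to match the EF4-era code in this question):
// using System.Runtime.Serialization;  // DataContract/DataMember
[DataContract]
public class UserDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string Email { get; set; }
}

// Configured once at startup:
// Mapper.CreateMap<UserDto, User>();
// Mapper.CreateMap<User, UserDto>();

[OperationContract]
public UserDto SaveUser(UserDto dto)
{
    using (var context = new DatabaseEntities())
    {
        // Load the tracked entity (or create a new one) and copy the DTO onto it,
        // so the client never touches the EF object graph directly.
        var entity = context.Users.FirstOrDefault(u => u.Id == dto.Id);
        if (entity == null)
        {
            entity = new User();
            context.Users.AddObject(entity);
        }

        Mapper.Map(dto, entity);
        context.SaveChanges();

        return Mapper.Map<UserDto>(entity);
    }
}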
Typically clients should know whether they are "adding" or "updating" an object, and I would personally prefer these to be two separate operations on the service interface. Also, I would definitely require them to use a separate method for deleting an object. However, if you absolutely need a generic "Save", you should be able to tell whether the object you've been given is "new" or not based on the presence (or absence) of a primary key value.
A lot of the code can be put into a generic utility. For example, supposing your T4 template produces attributes on the key values of your entities, you could automatically determine whether the key values are present and perform an Insert/Update accordingly. Also, the try SaveChanges catch retry block you're using--while probably unnecessary--could easily be put into a simple utility method to be more DRY.
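The retry block, for instance, boils down to a small helper built from the same calls used above (a sketch against the EF4 ObjectContext API):
// Saves pending changes, resolving a single optimistic-concurrency conflict
// with client-wins semantics, exactly as in the original method above.
public static void SaveWithClientWins(ObjectContext context, object entity)
{
    try
    {
        context.SaveChanges(System.Data.Objects.SaveOptions.None);
    }
    catch (System.Data.OptimisticConcurrencyException)
    {
        context.Refresh(System.Data.Objects.RefreshMode.ClientWins, entity);
        context.SaveChanges();
    }
}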

Retry mechanism on WCF operation call when channel is in faulted state

I'm trying to find an elegant way to retry an operation when a WCF channel is in a faulted state. I've tried using the Policy Injection Application Block (PIAB) to reconnect and retry the operation when a faulted-state exception occurs on the first call, but the PolicyInjection.Wrap method doesn't seem to like wrapping the TransparentProxy objects (the proxy returned from ChannelFactory.CreateChannel).
Is there any other mechanism I could try or how could I try get the PIAB solution working correctly - any links, examples, etc. would be greatly appreciated.
Here is the code I was using that was failing:
var channelFactory = new ChannelFactory<IService>(endpointConfigurationName);
var proxy = channelFactory.CreateChannel(...);
proxy = PolicyInjection.Wrap<IService>(proxy);
Thank you.
I would rather use callback functions, something like this:
private SomeServiceClient proxy;

// This method invokes a service method and recreates the proxy if it's in a faulted state
private void TryInvoke(Action<SomeServiceClient> action)
{
    try
    {
        action(this.proxy);
    }
    catch (FaultException fe)
    {
        if (proxy.State == CommunicationState.Faulted)
        {
            this.proxy.Abort();
            this.proxy = new SomeServiceClient();

            // Probably, there is a better way than recursion
            TryInvoke(action);
        }
    }
}

// Any real method
private void Connect(Action<UserModel> callback)
{
    TryInvoke(sc => callback(sc.Connect()));
}
And in your code you should call
ServiceProxy.Instance.Connect(user => MessageBox.Show(user.Name));
instead of
var user = ServiceProxy.Instance.Connect();
MessageBox.Show(user.Name);
Although my code uses the proxy-class approach, you can write similar code with Channels.
Thank you so much for your reply. What I ended up doing was creating a decorator class that implemented my service interface and wrapped the transparent proxy generated by the ChannelFactory. I was then able to use the Policy Injection Application Block to create a layer on top of this that injected code into each operation call: it would try the operation and, if a CommunicationObjectFaultedException occurred, abort the channel, recreate it and retry the operation. It's working great now. The only downside is that the wrapper class has to implement every service operation, but this was the only way I could get the PIAB to work, and since everything goes through interfaces it will be easy to change if I find a better approach in future.
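For reference, a stripped-down sketch of that decorator idea without the PIAB (IService and GetData are placeholder names; the real wrapper has to implement every operation on the contract):
// using System.ServiceModel;
public class RetryingServiceDecorator : IService
{
    private readonly ChannelFactory<IService> factory;
    private IService channel;

    public RetryingServiceDecorator(ChannelFactory<IService> factory)
    {
        this.factory = factory;
        this.channel = factory.CreateChannel();
    }

    public string GetData(int value)
    {
        return Invoke(c => c.GetData(value));
    }

    private TResult Invoke<TResult>(Func<IService, TResult> operation)
    {
        try
        {
            return operation(channel);
        }
        catch (CommunicationObjectFaultedException)
        {
            // Abort the faulted channel, recreate it and retry the operation once.
            ((ICommunicationObject)channel).Abort();
            channel = factory.CreateChannel();
            return operation(channel);
        }
    }
}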

Persisted properties - asynchronously

In classic ASP.NET I'd persist data extracted from a web service in a base class property, as follows:
private string m_stringData;

public string _stringData
{
    get
    {
        if (m_stringData == null)
        {
            // fetch data from my web service
            m_stringData = ws.FetchData();
        }
        return m_stringData;
    }
}
This way I could simply make reference to _stringData and know that I’d always get the data I was after (maybe sometimes I’d use Session state as a store instead of a private member variable).
In Silverlight with WCF I might choose to use Isolated Storage as my persistence mechanism, but the service call can't be done like this, because a WCF service has to be called asynchronously.
How can I both invoke the service call and retrieve the response in one method?
Thanks,
Mark
In your method, invoke the service call asynchronously and register a callback that sets a flag. After you have invoked the method, enter a busy/wait loop checking the flag periodically until the flag is set indicating that the data has been returned. The callback should set the backing field for your method and you should be able to return it as soon as you detect the flag has been set indicating success. You'll also need to be concerned about failure. If it's possible to get multiple calls to your method from different threads, you'll also need to use some locking to make your code thread-safe.
EDIT
Actually, the busy/wait loop is probably not the way to go if the web service supports BeginGetData/EndGetData semantics. I had a look at some of my code where I do something similar and I use WaitOne to simply wait on the async result and then retrieve it. If your web service doesn't support this then throw a Thread.Sleep -- say for 50-100ms -- in your wait loop to give time for other processes to execute.
Example from my code:
IAsyncResult asyncResult = null;
try
{
    asyncResult = _webService.BeginGetData( searchCriteria, null, null );
    if (asyncResult.AsyncWaitHandle.WaitOne( _timeOut, false ))
    {
        result = _webService.EndGetData( asyncResult );
    }
}
catch (WebException e)
{
    ...log the error, clean up...
}
Thanks for your help, tvanfosson. I followed your code and also found a somewhat similar solution that meets my needs exactly, using a lambda expression:
private string m_stringData;

public string _stringData
{
    get
    {
        // if we don't have the data yet, fetch it from WCF
        if (m_stringData == null)
        {
            StringServiceClient client = new StringServiceClient();
            client.GetStringCompleted +=
                (sender, e) =>
                {
                    m_stringData = e.Result;
                };
            client.GetStringAsync();
        }
        return m_stringData;
    }
}
EDIT
Oops... actually this doesn't work either :-(
I ended up making the calls asynchronously and altering my programming logic to use the MVVM pattern and more binding.
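For anyone landing here later, the MVVM route ends up looking roughly like this (a sketch using the StringServiceClient members from the snippets above; the view binds to StringData and refreshes whenever the async call completes):
// using System.ComponentModel;
public class StringViewModel : INotifyPropertyChanged
{
    private string stringData;

    public event PropertyChangedEventHandler PropertyChanged;

    public string StringData
    {
        get { return stringData; }
        private set
        {
            stringData = value;
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs("StringData"));
        }
    }

    public void Load()
    {
        var client = new StringServiceClient();
        // In Silverlight the generated *Completed event is raised back on the UI thread,
        // so setting the property here is safe and the binding updates the view.
        client.GetStringCompleted += (s, e) => StringData = e.Result;
        client.GetStringAsync();
    }
}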