An ERP + shopping cart application built with ASP.NET MVC on .NET Framework 4.8 is planned to be migrated to an ASP.NET Core MVC application on .NET 5.
Entity Framework Core with the Npgsql data provider is planned to be used.
The MVC 4.8 application does not use any async methods.
.NET 5 has async methods for data access such as ToListAsync() and ExecuteSqlInterpolatedAsync().
Samples of ASP.NET Core MVC controllers return async tasks, for example:
[HttpPost]
public async Task<IActionResult> LogOn(LogOnModel model, string returnUrl,
    [FromServices] ShoppingCart shoppingcart)
There will be 100 human users.
The application also has a Web API providing JSON data over HTTP. The shopping cart part allows anonymous access and is scanned by search engines.
Npgsql has connection pooling support, so multiple connections are added automatically.
The application is hosted on a Debian Linux VPS with 4 cores, behind Apache.
The VPS has 20 GB of RAM, so a lot of data is cached by Linux. However, most of the time is probably still spent reading data from the Postgres database.
Most controllers read and write data to/from the database.
The answer in
https://forums.asp.net/t/2136711.aspx?Should+I+always+prefer+async+actions+over+sync+actions+
recommends always using async methods for data access.
The answer in
Always using Async in an ASP.NET MVC Controller
recommends not always using async.
The conclusion from https://gokhansengun.com/asp-net-mvc-and-web-api-comparison-of-async-or-sync-actions/ states:
However async actions do not come with zero cost, writing async code
requires more care, proficiency and it has its own challenges.
The application and the database are on the same VPS server.
The answer in
mvc should everything be async
states that async should not be used if the application and the database are on the same server.
The answer in
When should I use Async Controllers in ASP.NET MVC?
states:
I'd say it's good to use it everywhere you're doing I/O.
but afterwards:
If you're talking about ASP.NET MVC with a single database backend,
then you're (almost certainly) not going to get any scalability
benefit from async. This is because IIS can handle far more concurrent
requests than a single instance of SQL server (or other classic RDBMS)
There are two upgrade paths in my case:
1. Continue to use only sync methods. Don't waste resources on async. The existing, tested MVC controller code can be reused. The number of threads in Kestrel is not limited. Assume that in the future the .NET compiler will generate async code by analyzing the application, so manual async will become obsolete.
2. Change MVC controller signatures to
public async Task<IActionResult>
and replace all EF data access calls with async calls. Assume this is a solid .NET feature that will remain. Refactor the code so that no Visual Studio 2019 warnings appear after the change. Once the application is released, this allows the existing code to be optimized without a major rewrite.
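For illustration, a controller action on the second path could look roughly like this (a sketch only; AppDbContext and the Product entity are assumed names, not taken from the actual application):

// Sketch of upgrade path 2: an async action with async EF Core calls.
// Requires Microsoft.EntityFrameworkCore for ToListAsync().
[HttpGet]
public async Task<IActionResult> Products([FromServices] AppDbContext db)
{
    // ToListAsync() frees the request thread while Postgres runs the query.
    var products = await db.Products
        .AsNoTracking()
        .OrderBy(p => p.Name)
        .ToListAsync();

    return View(products);
}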
Which upgrade path should be used in this case?
Will changing everything to async introduce new bugs into the code?
Async is not mandatory for web applications. It is mostly mandatory for GUIs only.
Your application will continue to work. Async programming is great at handling request volume, but you said you have at most 100 users. If there were 100,000, your application would suffer a lot.
And I can say for sure that async programming does come with challenges; for example, there are issues with transactions if you don't handle them properly.
Of course, threads come with a cost too. Async exists to avoid the roughly 500 KB of overhead required for every thread; without it, the machine(s) running the application might need to be scaled vertically. In this sense, async saves RAM.
The choice is yours. Since you are refactoring your app anyway, you could work on improving it to the next step and get it ready for bigger scale.
Otherwise your application will still work fine for 100 users.
[Edit] A pull request is worth 1000 words. In an async context, the transaction should be initialized with TransactionScopeAsyncFlowOption.Enabled to avoid the exception described above and to tell the transaction engine that the thread is participating in an async flow. To keep it simple: async flows may share the same thread, so application code (and transaction management is C# code) must not rely on thread-local information and has to clean up its context every time control switches to another async flow asking for attention.
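A minimal sketch of what that looks like in code (db is a hypothetical EF Core DbContext variable):

// Sketch: wrapping async data access in a TransactionScope.
// Requires the System.Transactions namespace.
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // Without TransactionScopeAsyncFlowOption.Enabled the await below typically
    // fails, because the continuation may resume on a different thread.
    await db.SaveChangesAsync();
    scope.Complete();
}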
Conclusion: your first comment is correct. Async flows dramatically reduce RAM utilization under concurrent requests.
Related
I am constructing a web service that receives data and updates it periodically. When a user pings the service, it will send specific data back to the user. In order to receive this data, I have a persistent connection that is created on startup and regularly receives updates, though not at periodic intervals. I have already implemented it, but I would like to add DI and turn it into a service. Can this type of problem be solved with a BackgroundService, or is this not recommended? Is there anything better I should use? I originally wanted to just register my connection object as a singleton, but since singletons are not initialized on startup, that does not work so well for me.
I thought I would add an answer to expand on my comment. From what you have described, creating a BackgroundService is likely the best solution for what you want to do.
ASP.NET Core provides an IHostedService interface that can be used to implement a background task or service in your web app. It also provides a BackgroundService class that implements IHostedService and serves as a base class for long-running background services. These background services are registered with the dependency injection container, typically in ConfigureServices in Startup.cs (or on the host builder in Program.cs).
You can consume services from the dependency injection container, but you will have to properly manage their scopes when using them. You can decide how to manage your BackgroundService classes in order to fit your needs. It does take an understanding of how to work with Task objects: executing, queueing, monitoring them and so on. So I'd recommend giving the docs a thorough read, so you don't end up impacting performance or resource usage.
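As a rough sketch (PersistentConnection, IUpdateHandler and their methods are made-up names, not a real API), a hosted service that owns the connection and resolves scoped services per unit of work might look like this:

// Sketch: a long-running hosted service that owns the persistent connection.
// Requires Microsoft.Extensions.Hosting and Microsoft.Extensions.DependencyInjection.
public class ConnectionBackgroundService : BackgroundService
{
    private readonly IServiceScopeFactory _scopeFactory;
    private readonly PersistentConnection _connection;   // hypothetical singleton

    public ConnectionBackgroundService(IServiceScopeFactory scopeFactory,
                                       PersistentConnection connection)
    {
        _scopeFactory = scopeFactory;
        _connection = connection;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        await _connection.OpenAsync(stoppingToken);        // hypothetical method

        while (!stoppingToken.IsCancellationRequested)
        {
            var update = await _connection.ReceiveAsync(stoppingToken);   // hypothetical

            // Scoped dependencies (e.g. a DbContext) must be resolved per unit of work,
            // not injected into the constructor of a singleton hosted service.
            using (var scope = _scopeFactory.CreateScope())
            {
                var handler = scope.ServiceProvider.GetRequiredService<IUpdateHandler>();
                await handler.HandleAsync(update, stoppingToken);
            }
        }
    }
}

// Registration, e.g. in Startup.ConfigureServices:
// services.AddSingleton<PersistentConnection>();
// services.AddHostedService<ConnectionBackgroundService>();

Registering the connection as a singleton and the service with AddHostedService means both are created when the host starts, which also addresses the "singletons are not initialized on startup" issue you mentioned.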
I also tend to use Autofac as my DI container rather than the built in Microsoft container, since Autofac provides more features for resolving services and managing scopes. So it's worth considering if you find yourself hitting a wall because of the built in container.
Here's the link to the docs section covering this in much more depth. I believe you can also create standalone service workers now, so that might be worth a look depending on use case.
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.1&tabs=visual-studio
Edit: Here's another link to a guide and an example implementation of a background service for a microservice. It goes a little more in depth on some of the specifics.
https://learn.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice#implementing-ihostedservice-with-a-custom-hosted-service-class-deriving-from-the-backgroundservice-base-class
I currently have 2 static dictionaries in a WCF RESTful service. They both hold lookup data that's not worth putting in a database. Will these stay in memory until the application restarts, or should I put them in HttpContext.Current.Application?
The static data will remain until the process recycles or stops, the same as HttpContext.Current.Application.
If you are looking for a more sophisticated caching option, check out the System.Runtime.Caching namespace introduced in .NET 4.0. It is easy to use, works in any .NET application, and offers features like setting expiration times and registering callbacks to run when entries expire.
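For example, a quick sketch (LoadLookupData() is a placeholder for however you build the dictionaries):

using System.Runtime.Caching;

// Sketch: cache the lookup dictionary for 30 minutes with a removal callback.
var cache = MemoryCache.Default;
var policy = new CacheItemPolicy
{
    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(30),
    RemovedCallback = args =>
    {
        // Runs when the entry expires or is evicted - log it, or repopulate the cache.
        Console.WriteLine("Cache entry '" + args.CacheItem.Key + "' removed: " + args.RemovedReason);
    }
};
cache.Set("lookupData", LoadLookupData(), policy);

// Later reads:
var lookup = (Dictionary<string, string>)cache.Get("lookupData");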
How do we integrate custom async code in the WCF pipeline, either with await/async or IAsyncResult?
Basically, I'm considering doing potentially blocking operations during message processing. Two areas for now:
Logging, where we may want to write to a file / database that exposes async versions (granted, this could be done with a queue and a writer thread)
Authorization, where we may need to query a database and it also provides async methods.
Now, I was looking at the WCF extensibility points and I can't find any hooks with async versions. I'm looking at IParameterInspector, IDispatchMessageInspector and the like.
Even the new ClaimsAuthorizationManager doesn't seem to provide an async counterpart either.
I feel I'm missing some big part of the puzzle here, because I have this project where all the code uses the new async features and now I can't hook it up here without doing a .Wait() call on the Tasks.
Could someone shed some light here or tell me what's wrong with this approach?
I believe WCF (like MVC) only supports async at the operation level (for now); the pipeline is not fully async. On the other hand, WebAPI was designed with async in mind and supports it at all stages in its pipeline.
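For the operation level, the Task-based pattern is what WCF supports. A minimal sketch (OrderDto and IOrderRepository are invented names, not part of any real API):

// Sketch: async at the WCF operation level via Task-returning operations.
// Requires System.ServiceModel and System.Threading.Tasks.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    Task<OrderDto> GetOrderAsync(int orderId);
}

public class OrderService : IOrderService
{
    private readonly IOrderRepository _repository;   // hypothetical async repository

    public OrderService(IOrderRepository repository)
    {
        _repository = repository;
    }

    public async Task<OrderDto> GetOrderAsync(int orderId)
    {
        // The awaited database call is asynchronous inside the operation itself,
        // but pipeline hooks such as IDispatchMessageInspector stay synchronous.
        return await _repository.GetOrderAsync(orderId);
    }
}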
I was working on a presentation and thought the following should fail since the ActionResult isn't being returned on the right context. I've load tested it with VS and got no errors. I've debugged it and know that it is switching threads. So it seems like it is legit code.
Does ASP.NET not care what context or thread it is on like a client app? If so, what purpose does the AspNetSynchronizationContext provide? I don't feel right putting a ConfigureAwait in the action itself. Something seems wrong about it. Can anyone explain?
public async Task<ActionResult> AsyncWithBackendTest()
{
    var result = await BackendCall().ConfigureAwait(false);
    var server = HttpContext.Server;
    HttpContext.Cache["hello"] = "world";
    return Content(result);
}
ASP.NET doesn't have the 'UI thread' need that many client apps do (due to the UI framework beneath them). That context isn't about thread affinity, but about tracking the page's progress (and other things, like carrying the security context for the request).
Stephen Toub mentions this in an MSDN article:
Windows Forms isn't the only environment that provides a
SynchronizationContext-derived class. ASP.NET also provides one,
AspNetSynchronizationContext, though it's not public and is not meant
for external consumption. Rather, it is used under the covers by
ASP.NET to facilitate the asynchronous pages functionality in ASP.NET
2.0 (for more information, see msdn.microsoft.com/msdnmag/issues/05/10/WickedCode). This
implementation allows ASP.NET to prevent page processing completion
until all outstanding asynchronous invocations have been completed.
A little more detail about the synchronization context is given in Stephen Cleary's article from last year.
Figure 4 in particular shows that it doesn't have the 'specific thread' behavior of WinForms/WPF, but the whole thing is a great read.
If multiple operations complete at once for the same application,
AspNetSynchronizationContext will ensure that they execute one at a
time. They may execute on any thread, but that thread will have the
identity and culture of the original page.
In your code, HttpContext is a member of your AsyncController base class. It is not the current context for the executing thread.
Also, in your case, HttpContext is still valid, since the request has not yet completed.
I'm unable to test this at the moment, but I would expect it to fail if you used System.Web.HttpContext.Current instead of HttpContext.
P.S. Security is always propagated, regardless of ConfigureAwait - this makes sense if you think about it. I'm not sure about culture, but I wouldn't be surprised if it was always propagated too.
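A sketch of that check, based on the action above (the expectation about the null value is mine and not verified here):

public async Task<ActionResult> AsyncWithBackendTest()
{
    var result = await BackendCall().ConfigureAwait(false);

    // The controller property still works: it was captured for this request.
    var server = HttpContext.Server;

    // The ambient static accessor is expected to be null at this point, because the
    // continuation runs outside the AspNetSynchronizationContext after ConfigureAwait(false).
    var ambient = System.Web.HttpContext.Current;

    return Content(result);
}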
It appears to work because the Controller captures the context, whereas System.Web.HttpContext is live access to whatever is part of the current synchronization context.
If we look at the ASP.NET MVC5 sources we can see that the ControllerBase class that all controllers inherit from has its own ControllerContext which is built from the RequestContext.
I would assume this means that while the synchronization context is lost after ConfigureAwait(false), the Controller in which the continuation is running still has access to its state from before the continuation via the closure.
Outside of the Controller we don't have access to this ControllerContext, so we have to use the live System.Web.HttpContext, which has all the caveats that come with ConfigureAwait(false).
I'm developing an application that runs as a Windows service. There are other components, including a few WCF services, a client GUI and so on, but it is the Windows service that accesses the database.
So the application is a long-running server, and since I'd like to improve its performance and scalability, I was looking at improving data access among other things. I posted in another thread about second-level caching.
This post is about session management for the long-running thread that accesses the database.
Should I be using a thread-static context?
If so, is there any example of how that would be implemented.
Everyone around the net who is using NHibernate seems to be heavily focused on web-application-style architectures. There seems to be a great lack of documentation / discussion for non-web app designs.
At the moment, my long running thread does this:
Call 3 or 4 DAO methods
Verify the state of the detached objects returned.
Update the state if needed.
Call a couple of DAO methods to persist the updated instances (pass in the id of the object and the instance itself; the DAO will retrieve the object from the DB again, set the updated values and call session.SaveOrUpdate() before committing the transaction).
Sleep for 'n' seconds
Repeat all over again!
So, the following is a common pattern we use for each of the DAO methods:
Open session using sessionFactory.OpenSession()
Begin transaction
Do DB work: retrieve / update, etc.
Commit trans
(Rollback in case of exceptions)
Finally, always dispose the transaction and call session.Close()
This happens for every method call to a DAO class.
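In code, each DAO method looks roughly like this (a sketch; the Order entity is illustrative and _sessionFactory is the DAO's ISessionFactory):

// Sketch of the session-per-DAO-call pattern described above.
public Order GetOrder(int id)
{
    using (var session = _sessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        try
        {
            var order = session.Get<Order>(id);
            tx.Commit();
            return order;
        }
        catch
        {
            tx.Rollback();
            throw;
        }
    }
}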
I suspect that the way we are doing it is some sort of anti-pattern.
However, I'm not able to find enough direction anywhere as to how we could improve it.
Please note: while this thread is running in the background doing its work, there are requests coming in from the WCF clients, each of which could make 2-3 DAO calls itself, sometimes querying/updating the same objects the long-running thread deals with.
Any ideas / suggestions / pointers to improve our design will be greatly appreciated.
If we can get some good discussion going, we could make this a community wiki, and possibly link to it from http://nhibernate.info
Krishna
There seems to be a great lack of documentation / discussion for non-web app designs.
This has also been my experience. However, the model you are following seems correct to me. You should always open a session, commit changes, then close it again.
This question is a little old now, but another technique would be to use Contextual Sessions rather than creating a new session in each DAO.
In our case, we're thinking of creating the session once per thread (for our multi-threaded Win32 service), and making it available to the DAOs using either a property that returns SessionFactory.GetCurrentSession() (using the ThreadContext current session provider, so it's session-per-thread) or via DI (dependency injection, once again using ThreadContext).
More info on GetCurrentSession and Contextual Sessions here.
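A rough sketch of that session-per-thread approach (assuming current_session_context_class is set to "thread_static" in the NHibernate configuration; the helper class and its use are made up for illustration):

using NHibernate;
using NHibernate.Context;

// Sketch: bind one session per worker thread and let DAOs fetch the current one.
public static class ThreadSession
{
    public static void Bind(ISessionFactory factory)
    {
        // Called at the start of the thread's work loop.
        CurrentSessionContext.Bind(factory.OpenSession());
    }

    public static ISession Current(ISessionFactory factory)
    {
        // DAOs call this instead of OpenSession().
        return factory.GetCurrentSession();
    }

    public static void Unbind(ISessionFactory factory)
    {
        // Called at the end of the unit of work: unbind and dispose the session.
        var session = CurrentSessionContext.Unbind(factory);
        if (session != null)
            session.Dispose();
    }
}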
You can also flush the session without actually closing it and it achieves the same thing. I do.
We've recently started using an IoC container to manage session lifecycle, as a replacement for the contextual sessions mentioned above. (More details here).
I agree, there aren't many examples for stateful apps.
I'm thinking of doing the following:
Like you, I have a Windows service hosting a number of WCF services, so the WCF services are the entry points.
Ultimately all my WCF services inherit from AbstractService - which handles a lot of logging and basic DB inserts/updates.
In one of the best NHibernate posts I've seen, an HttpModule does the following:
see http://www.codeproject.com/KB/architecture/NHibernateBestPractices.aspx
private void BeginTransaction(object sender, EventArgs e) {
    NHibernateSessionManager.Instance.BeginTransaction();
}

private void CommitAndCloseSession(object sender, EventArgs e) {
    try {
        NHibernateSessionManager.Instance.CommitTransaction();
    }
    finally {
        NHibernateSessionManager.Instance.CloseSession();
    }
}
So perhaps I should do something similar in AbstractService; effectively, I'll end up with a session per service invocation. If you examine the NHibernate best-practices article linked above, you'll see that the NHibernateSessionManager should deal with everything else, as long as I open and close the session (in the AbstractService constructor and destructor).
Just a thought. But I'm experiencing errors because my session seems to be hanging around for too long, and I'm getting the infamous error - NHibernate.AssertionFailure: null id in entry (don't flush the Session after an exception occurs).