I am quite new to ASP.NET Core, so please help. I would like to avoid a database round trip in my ASP.NET Core application. I have functionality that dynamically adds columns to a data grid, and the column settings (visibility, enabled state, width, caption) are stored in the DB.
So I would like to store a List<PersonColumns> on the server only for the current session, but I am not able to do this. I already use the JsonConvert methods to serialize and deserialize objects to/from session. This works for List<Int32> or objects with simple properties, but not for a complex object with nested properties.
My object I want to store to session looks like this:
[Serializable]
public class PersonColumns
{
    public Int64 PersonId { get; set; }

    List<ViewPersonColumns> PersonCols { get; set; }

    public PersonColumns(Int64 personId)
    {
        this.PersonId = personId;
    }

    public void LoadPersonColumns(dbContext dbContext)
    {
        LoadPersonColumns(dbContext, null);
    }

    public void LoadPersonColumns(dbContext dbContext, string code)
    {
        PersonCols = ViewPersonColumns.GetPersonColumns(dbContext, code, PersonId);
    }

    public static List<ViewPersonColumns> GetFormViewColumns(SatisDbContext dbContext, string code, Int64 formId, string viewName, Int64 personId)
    {
        var columns = ViewPersonColumns.GetPersonColumns(dbContext, code, personId);
        return columns.Where(p => p.FormId == formId && p.ObjectName == viewName).ToList();
    }
}
I would also like to ask whether it is a bad approach to save a list of 600 records in session. Is it better to hit the DB and load the columns each time the user wants to display the grid?
Any advice appreciated
Thanks
EDIT: I have tested storing a List<ViewPersonColumns> in session and it is saved correctly. When I save an object where the List<ViewPersonColumns> is a property, only the built-in types are saved and the List property is null.
The object I want to save in session
[Serializable]
public class UserManagement
{
    public String PersonUserName { get; set; }
    public Int64 PersonId { get; set; }
    public List<ViewPersonColumns> PersonColumns { get; set; } // not saved to session??

    public UserManagement() { }

    public UserManagement(DbContext dbContext, string userName)
    {
        var person = dbContext.Person.Single(p => p.UserName == userName);
        PersonUserName = person.UserName;
        PersonId = person.Id;
    }

    /*public void PrepareUserData(DbContext dbContext)
    {
        LoadPersonColumns(dbContext);
    }*/

    public void LoadPersonColumns(DbContext dbContext)
    {
        LoadPersonColumns(dbContext, null);
    }

    public void LoadPersonColumns(DbContext dbContext, string code)
    {
        PersonColumns = ViewPersonColumns.GetPersonColumns(dbContext, code, PersonId);
    }

    public List<ViewPersonColumns> GetFormViewColumns(Int64 formId, string viewName)
    {
        if (PersonColumns == null)
            return null;

        return PersonColumns.Where(p => p.FormId == formId && p.ObjectName == viewName).ToList();
    }
}
Save columns to the session
UserManagement userManagement = new UserManagement(_context, user.UserName);
userManagement.LoadPersonColumns(_context);
HttpContext.Session.SetObject("ActualPersonContext", userManagement);
HttpContext.Session.SetObject("ActualPersonColumns", userManagement.PersonColumns);
Load columns from the session
// userManagement built-in types are set. PersonColumns is null - not correct
UserManagement userManagement = session.GetObject<UserManagement>("ActualPersonContext");
//The cols is filled from session with 600 records - correct
List<ViewPersonColumns> cols = session.GetObject<List<ViewPersonColumns>>("ActualPersonColumns");
Using a list kept in session for the columns is better than going to the database each time. You can't create and store sessions in .NET Core the way you did in .NET Framework 4.0. Try it like this:
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    //services.AddDbContext<GeneralDBContext>(options => options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
    services.AddMvc().AddSessionStateTempDataProvider();
    services.AddSession();
}
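Session state also has to be added to the request pipeline; without app.UseSession() the extension methods below will fail at runtime. A minimal Configure method for an ASP.NET Core 2.x-style startup (the routing part is just the default template) might look like this:
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // UseSession must be registered before the middleware that reads or writes session state (MVC here)
    app.UseSession();
    app.UseMvc(routes =>
    {
        routes.MapRoute(
            name: "default",
            template: "{controller=Home}/{action=Index}/{id?}");
    });
}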
Common/SessionExtensions.cs
using Microsoft.AspNetCore.Http;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
namespace IMAPApplication.Common
{
    public static class SessionExtensions
    {
        public static T GetComplexData<T>(this ISession session, string key)
        {
            var data = session.GetString(key);
            if (data == null)
            {
                return default(T);
            }
            return JsonConvert.DeserializeObject<T>(data);
        }

        public static void SetComplexData(this ISession session, string key, object value)
        {
            session.SetString(key, JsonConvert.SerializeObject(value));
        }
    }
}
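If the nested ViewPersonColumns objects are EF entities with navigation properties, JsonConvert can hit reference loops or drag in far more data than intended, which is a common reason complex objects come back incomplete. A hedged variant of the setter (still Newtonsoft.Json; the settings shown are one reasonable choice, not the only one):
public static void SetComplexData(this ISession session, string key, object value)
{
    var settings = new JsonSerializerSettings
    {
        // Skip circular references instead of throwing (common with EF navigation properties)
        ReferenceLoopHandling = ReferenceLoopHandling.Ignore,
        NullValueHandling = NullValueHandling.Ignore
    };
    session.SetString(key, JsonConvert.SerializeObject(value, settings));
}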
Usage
==> Create session
public IActionResult Login([FromBody] LoginViewModel model)
{
    LoggedUserVM user = GetUserDataById(model.userId);

    //Create session with a complex object
    HttpContext.Session.SetComplexData("loggerUser", user);

    return Json(new { status = result.Status, message = result.Message });
}
==> Get session data
public IActionResult Index()
{
    //Get session data
    LoggedUserVM loggedUser = HttpContext.Session.GetComplexData<LoggedUserVM>("loggerUser");
}
Hope this is helpful. Good luck.
This is an evergreen post. Even though Microsoft recommends serialisation to store objects in session, it is not a correct solution unless your object is read-only. I have a blog explaining all the scenarios, and I have also pointed out the issues in the ASP.NET Core GitHub repository in issue 18159.
A synopsis of the problems:
A. A serialised copy is not the same thing as the object itself. True, it helps in distributed-server scenarios, but it comes with a caveat that Microsoft has failed to highlight: it works without unpredictable failures only when the object is meant to be read, not written back.
B. If you need a read-write object in the session, then every time you change the object read from the session after deserialisation, it has to be written back by serialising it again. That alone creates complexity: you must either track the changes or write back to the session after every property change. Within a single request, the object can end up being written back multiple times before the response is sent.
C. For a read-write object, this fails even on a single server. User actions can trigger multiple rapid requests, and sooner or later one thread will be serialising or deserialising the object while another edits it and writes it back. You end up with threads overwriting each other's state, and even locks won't help much, because the object is not one shared instance but a temporary object created by deserialisation.
D. There are also issues with serialising complex objects. It is not just a performance hit; it can outright fail in certain scenarios, especially with deeply nested objects that refer back to themselves.
A synopsis of the solution follows; the full implementation with code is in the blog post:
First, implement this as a cache object: create one entry in IMemoryCache for each unique session.
Keep the cache entry in sliding-expiration mode, so that every read renews the expiry time, which keeps the object cached for as long as the session is active.
The second point alone is not enough; you also need a heartbeat technique, triggering a call to the session from JavaScript every "timeout minus one" minute or so. (We used to do this anyway to keep the session alive while the user is working in the browser, so it is nothing new.)
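A minimal sketch of that cache idea, assuming IMemoryCache is registered via services.AddMemoryCache() and the session id is used as the cache key (class and member names are illustrative, not from the blog):
using System;
using Microsoft.Extensions.Caching.Memory;

public class SessionObjectCache
{
    private readonly IMemoryCache _cache;

    public SessionObjectCache(IMemoryCache cache)
    {
        _cache = cache;
    }

    public void Store(string sessionId, UserManagement value)
    {
        // Sliding expiration: every read pushes the expiry forward while the session is active
        var options = new MemoryCacheEntryOptions()
            .SetSlidingExpiration(TimeSpan.FromMinutes(20));
        _cache.Set(sessionId, value, options);
    }

    public UserManagement Get(string sessionId)
    {
        _cache.TryGetValue(sessionId, out UserManagement value);
        return value;
    }
}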
Additional Recommendations
A. Create an object called SessionManager, so that all your session read/write code sits in one place.
B. Do not use a very high session timeout. If you implement the heartbeat technique, even a 3-minute timeout is enough.
Related
In the app I'm working on, I'm using Mediatr and its pipelines to handle database interaction, some minor business logic, validation, etc.
There are a few checks, for things like access control, that I can handle in the pipeline, since I'm using a context object as described here https://jimmybogard.com/sharing-context-in-mediatr-pipelines/ to go from ASP.NET identity to a custom context object with user information and claims.
One problem I'm having is that since this application is multi-tenant, I need to ensure that even if an object exists, it belongs to that tenant, and the only way to be sure of that is to grab the object from the database and check it. It seems to me the validation shouldn't have side effects, so I don't want to rely on that to populate the context object. But then that pushes a bunch of validation down into the Mediatr handlers as they check for object existence, and so on, leading to a lot of repeated code. I don't really want to query the database multiple times since some queries can be expensive.
Another issue with doing the more complicated validation in the actual request handlers is getting what are essentially validation errors back out. Currently, if one of these checks fail I throw a ValidationException, which is then caught by middleware and turned into a ProblemDetails that's returned to the API caller. This is basically exceptions as flow control, and a validation failure really isn't "exceptional" anyhow.
The thoughts I'm having on how to solve this are:
1. Somewhere in the pipeline, when I'm building the context, include attempting to fetch the objects needed from the database. Validation then fails if any of these are null. This seems like it would make testing harder, as well as needing to decorate the requests somehow (or use reflection) so the pipeline can know to attempt to load these objects.
2. Have the queries in the validator, but use some sort of cache-aware repository so when the same object is queried later, it's served from the cache, and not the database. The handlers would also use this cache-aware repository (currently the handlers interact directly with the EF Core DbContext to query). This then adds the issue of cache invalidation, which I'm going to have to handle at some point anyhow (quite a few items are seldom modified). For testing, a dummy cache object can be injected that doesn't actually cache anything.
3. Make all the responses from requests implement an interface (or extend an abstract class) that has validation info, general success flags, etc. This can either be returned through the API directly, or have some pipeline that transforms failures into ProblemDetails. This would add some boilerplate to every response and handler, but avoids exceptions as flow control, and the caching/reflection issues in the other options.
Assume for 1 and 2 that any sort of race conditions are not an issue. Objects don't change owners, and things are seldom actually deleted from the database for auditing/accounting purposes.
I know there's no true one size fits all for problems like this, but I would like to know if there's additional options I'm missing, or any long term maintainability issues anyone with a similar pipeline has encountered if they went with one of these listed options.
We use MediatR IRequestPreProcessor for fetching data that we need both in RequestHandler and in FluentValidation validators.
RequestPreProcessor:
public interface IProductByIdBinder
{
    int ProductId { get; }
    ProductEntity Product { set; }
}

public class ProductByIdBinder<T> : IRequestPreProcessor<T> where T : IProductByIdBinder
{
    private readonly IRepositoryReadAsync<ProductEntity> productRepository;

    public ProductByIdBinder(IRepositoryReadAsync<ProductEntity> productRepository)
    {
        this.productRepository = productRepository;
    }

    public async Task Process(T request, CancellationToken cancellationToken)
    {
        request.Product = await productRepository.GetAsync(request.ProductId);
    }
}
RequestHandler:
public class ProductDeleteCommand : IRequest, IProductByIdBinder
{
    public ProductDeleteCommand(int id)
    {
        ProductId = id;
    }

    public int ProductId { get; }
    public ProductEntity Product { get; set; }

    private class ProductDeleteCommandHandler : IRequestHandler<ProductDeleteCommand>
    {
        private readonly IRepositoryAsync<ProductEntity> productRepository;

        public ProductDeleteCommandHandler(
            IRepositoryAsync<ProductEntity> productRepository)
        {
            this.productRepository = productRepository;
        }

        public Task<Unit> Handle(ProductDeleteCommand request, CancellationToken cancellationToken)
        {
            productRepository.Delete(request.Product);
            return Unit.Task;
        }
    }
}
FluentValidation validator:
public class ProductDeleteCommandValidator : AbstractValidator<ProductDeleteCommand>
{
    public ProductDeleteCommandValidator()
    {
        RuleFor(cmd => cmd)
            .Must(cmd => cmd.Product != null)
            .WithMessage(cmd => $"The product with id {cmd.ProductId} doesn't exist.");
    }
}
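For completeness, a sketch of how these pieces might be registered with the built-in container (the exact calls depend on your MediatR and FluentValidation versions; this assumes the older assembly-scanning overloads):
public void ConfigureServices(IServiceCollection services)
{
    services.AddMediatR(typeof(ProductDeleteCommand).Assembly);

    // Open-generic registration so every request implementing IProductByIdBinder gets its Product loaded
    services.AddTransient(typeof(IRequestPreProcessor<>), typeof(ProductByIdBinder<>));

    // Pick up validators such as ProductDeleteCommandValidator from the same assembly
    services.AddValidatorsFromAssemblyContaining<ProductDeleteCommandValidator>();
}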
I see nothing wrong with handling business logic validation in the handler layer.
Moreover, I do not think it is right to throw exceptions for these; as you said, that is exceptions as flow control.
Introducing a cache seems like overkill for the use case too. The most reasonable option is the third, IMHO.
Instead of implementing an interface, you can use the nifty OneOf library and have something like:
using HandlerResponse = OneOf<Success, NotFound, ValidationResponse>;

public class MediatorHandler : IRequestHandler<Command, HandlerResponse>
{
    public async Task<HandlerResponse> Handle(
        Command command,
        CancellationToken cancellationToken)
    {
        Resource resource = await _userRepository
            .GetResource(command.Id);

        if (resource is null)
            return new NotFound();

        if (!resource.IsValid)
            return new ValidationResponse(new ProblemDetails());

        return new Success();
    }
}
And then map it in your API Layer like
public async Task<IActionResult> PostAsync([FromBody] DummyRequest request)
{
    HandlerResponse response = await _mediator.Send(
        new Command(request.Id));

    return response.Match<IActionResult>(
        success => Created(),
        notFound => NotFound(),
        failed => new UnprocessableEntityResult(failed.ProblemDetails));
}
I have a repository, for example UserRepository. It returns a user for a given userId. I'm working on a web application, so objects are loaded into memory, used, and disposed when the request ends.
So far, when writing a repository, I have simply retrieved the data from the database. I don't store the retrieved User objects in memory (I mean in a collection inside the repository). When the repository's GetById() method is called, I don't check whether the object is already in the collection; I simply query the database.
My questions are
Should I store retrieved objects in memory, and when a repository's Get method is called, check whether the object already exists in memory before making any database call?
Or is the in-memory collection unnecessary, since a web request is a short-lived session and all objects are disposed afterward?
1) Should I store retrieved objects in the memory, and when a repository's Get method is called, should I check if the object exists in the memory first before I make any Database call?
Since your repository should be abstracted enough to simulate the purpose of an in-memory collection, I think it is really up to you and your use case.
If you store your objects after retrieving them from the database, you will probably end up with an implementation of the so-called Identity Map. If you do this, it can get very complicated (well, it depends on your domain).
Depending on the infrastructure layer you rely on, you may use the identity map provided by your ORM, if any.
But the real question is, is it worth implementing an IdentityMap?
I mean, we agree that repeating a query may be wrong for two reasons, performance and integrity; here is a quote from Martin Fowler:
An old proverb says that a man with two watches never knows what time it is. If two watches are confusing, you can get in an even bigger mess with loading objects from a database.
But sometimes you need to be pragmatic and just load them every time you need it.
2) Or is the memory collection unnecessary, as web request is a short-lived session and all objects are disposed afterward
It depends™. In some cases you may have to work with your object in different places, and then it may be worth it. But say you need to refresh your user session identity by loading the user from the database - there are cases where you only do that once within the whole request.
As usual, I don't think there is going to be a one-size-fits-all answer.
There may be situations where one may implement a form of caching on a repository, when the data is retrieved often, does not go stale too quickly, or simply for efficiency.
However, you could very well implement a generic cache decorator that wraps a repository when you do need this.
So take each use case on its merits.
When you're using an ORM like Entity Framework or NHibernate, this is already taken care of: all read entities are tracked via the identity map mechanism, and searching by key (DbSet.Find in EF) won't even hit the database if the entity is already loaded.
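For example, with EF Core's change tracker the second lookup below is served from the identity map rather than the database (ShopContext and Articles are placeholder names for this illustration):
using (var context = new ShopContext())
{
    var first = context.Articles.Find(1);  // hits the database
    var second = context.Articles.Find(1); // served from the change tracker, no SQL issued

    // Both variables reference the same tracked instance
    Console.WriteLine(ReferenceEquals(first, second)); // True
}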
If you're using direct database access or a micro-ORM as the base for your repository, you should be careful: without an identity map you're essentially working with value objects:
using System;
using System.Collections.Generic;
using System.Linq;

namespace test
{
    internal class Program
    {
        static void Main()
        {
            Console.WriteLine("Identity map");
            var artrepo1 = new ArticleIMRepository();
            var o1 = new Order();
            o1.OrderLines.Add(new OrderLine { Article = artrepo1.GetById(1, "a1", 100), Quantity = 50 });
            o1.OrderLines.Add(new OrderLine { Article = artrepo1.GetById(1, "a1", 100), Quantity = 30 });
            o1.OrderLines.Add(new OrderLine { Article = artrepo1.GetById(2, "a2", 100), Quantity = 20 });
            o1.ConfirmOrder();
            o1.PrintChangedStock();
            /*
            Art. 1/a1, Stock: 20
            Art. 2/a2, Stock: 80
            */

            Console.WriteLine("Value objects");
            var artrepo2 = new ArticleVORepository();
            var o2 = new Order();
            o2.OrderLines.Add(new OrderLine { Article = artrepo2.GetById(1, "a1", 100), Quantity = 50 });
            o2.OrderLines.Add(new OrderLine { Article = artrepo2.GetById(1, "a1", 100), Quantity = 30 });
            o2.OrderLines.Add(new OrderLine { Article = artrepo2.GetById(2, "a2", 100), Quantity = 20 });
            o2.ConfirmOrder();
            o2.PrintChangedStock();
            /*
            Art. 1/a1, Stock: 50
            Art. 1/a1, Stock: 70
            Art. 2/a2, Stock: 80
            */

            Console.ReadLine();
        }

        #region "Domain Model"

        public class Order
        {
            public List<OrderLine> OrderLines = new List<OrderLine>();

            public void ConfirmOrder()
            {
                foreach (OrderLine line in OrderLines)
                {
                    line.Article.Stock -= line.Quantity;
                }
            }

            public void PrintChangedStock()
            {
                foreach (var a in OrderLines.Select(x => x.Article).Distinct())
                {
                    Console.WriteLine("Art. {0}/{1}, Stock: {2}", a.Id, a.Name, a.Stock);
                }
            }
        }

        public class OrderLine
        {
            public Article Article;
            public int Quantity;
        }

        public class Article
        {
            public int Id;
            public string Name;
            public int Stock;
        }

        #endregion

        #region Repositories

        public class ArticleIMRepository
        {
            private static readonly Dictionary<int, Article> Articles = new Dictionary<int, Article>();

            public Article GetById(int id, string name, int stock)
            {
                if (!Articles.ContainsKey(id))
                    Articles.Add(id, new Article { Id = id, Name = name, Stock = stock });
                return Articles[id];
            }
        }

        public class ArticleVORepository
        {
            public Article GetById(int id, string name, int stock)
            {
                return new Article { Id = id, Name = name, Stock = stock };
            }
        }

        #endregion
    }
}
I need to group some data from a SQL Server database, and since LightSwitch doesn't support that out of the box I use a Domain Service according to Eric Erhardt's guide.
However, my table contains several foreign keys, and of course I want the correct related data to be shown in the table (just following the guide will only show the key values). I solved this by adding a relationship to my newly created entity like this:
And my Domain Service class looks like this:
public class AzureDbTestReportData : DomainService
{
    private CountryLawDataDataObjectContext context;

    public CountryLawDataDataObjectContext Context
    {
        get
        {
            if (this.context == null)
            {
                EntityConnectionStringBuilder builder = new EntityConnectionStringBuilder();
                builder.Metadata =
                    "res://*/CountryLawDataData.csdl|res://*/CountryLawDataData.ssdl|res://*/CountryLawDataData.msl";
                builder.Provider = "System.Data.SqlClient";
                builder.ProviderConnectionString =
                    WebConfigurationManager.ConnectionStrings["CountryLawDataData"].ConnectionString;
                this.context = new CountryLawDataDataObjectContext(builder.ConnectionString);
            }
            return this.context;
        }
    }

    /// <summary>
    /// Override the Count method in order for paging to work correctly
    /// </summary>
    protected override int Count<T>(IQueryable<T> query)
    {
        return query.Count();
    }

    [Query(IsDefault = true)]
    public IQueryable<RuleEntryTest> GetRuleEntryTest()
    {
        return this.Context.RuleEntries
            .Select(g =>
                new RuleEntryTest()
                {
                    Id = g.Id,
                    Country = g.Country,
                    BaseField = g.BaseField
                });
    }
}

public class RuleEntryTest
{
    [Key]
    public int Id { get; set; }
    public string Country { get; set; }
    public int BaseField { get; set; }
}
It works and all that; both the Country name and the BaseField load with autocomplete boxes as they should, but it takes a VERY long time. With two columns it takes 5-10 seconds to load one page, and I have 10 more columns I haven't implemented yet.
The reason it takes so long is that each piece of related data (each Country and BaseField) requires its own request. Loading a page looks like this in Fiddler:
This isn't acceptable at all; there should be a way to combine all those calls into one, just as happens when loading the same table without going through the Domain Service.
So, that was a lot of explaining. My question is: is there any way I can make all the related data load at once, or otherwise improve the performance? It should not take 10+ seconds to load a screen.
Thanks for any help or input!
My RIA Service queries are extremely fast compared to not using them, even when I'm doing aggregation. It might be the fact that you're using "virtual relationships" (which you can tell by the dotted lines between the tables) that you've created using your RuleEntryTest entity.
Why is your original RuleEntry entity not related to both Country and BaseUnit in LightSwitch BEFORE you start creating your RIA entity?
I haven't used Fiddler to see what's happening, but I'd try creating "real" relationships instead of "virtual" ones, and see if that helps your RIA entity's performance.
I'm using a WCF custom validator with HTTPS (.NET 4.5). On success, Validate returns a Customer object which I would like to use later. Currently I can do this with static variables, which I would like to avoid if possible. I tried to use HttpContext, but it is null in the main thread; my understanding is that Validate runs on a different thread. Is there any way I can share the session info without involving the DB or a file share? See the related threads here and here.
In Authentication.cs
public class CustomValidator : UserNamePasswordValidator
{
    public override void Validate(string userName, string password)
    {
        // If the user is valid, then set the Customer object
    }
}
In Service.cs
public class Service
{
    public string SaveData(string XML)
    {
        // Need the Customer object here. Without it, cannot save the XML.
        // HttpContext is null here.
    }
}
I can suggest an alternative approach, assuming the WCF service is running in ASP.NET compatibility mode and you save the customer object to session storage: create a class such as AppContext.
The code would look something like this (the customer-loading helper is illustrative):
public class AppContext
{
    private const string CUSTOMERSESSIONKEY = "CurrentCustomer"; // key name assumed
    private static readonly object lockObject = new object();

    public Customer CurrentCustomer
    {
        get
        {
            Customer cachedCustomerDetails = HttpContext.Current.Session[CUSTOMERSESSIONKEY] as Customer;
            if (cachedCustomerDetails != null)
            {
                return cachedCustomerDetails;
            }
            else
            {
                lock (lockObject)
                {
                    if (HttpContext.Current.Session[CUSTOMERSESSIONKEY] != null) // Thread double-entry safeguard
                    {
                        return HttpContext.Current.Session[CUSTOMERSESSIONKEY] as Customer;
                    }

                    // Hypothetical helper: load customer details based on the logged-in user (HttpContext.Current.User.Identity.Name)
                    Customer customerDetails = LoadCustomerDetails(HttpContext.Current.User.Identity.Name);
                    if (customerDetails != null)
                    {
                        HttpContext.Current.Session[CUSTOMERSESSIONKEY] = customerDetails;
                    }
                    return customerDetails;
                }
            }
        }
    }
}
The basic idea here is to load the data lazily, once both the WCF and ASP.NET pipelines have executed and HttpContext is available.
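On the service side, the customer can then be pulled lazily wherever it is needed; a hypothetical usage (assuming AspNetCompatibility is enabled so HttpContext.Current is available inside the service) could look like:
public class Service
{
    public string SaveData(string xml)
    {
        // Lazily resolves the customer from session storage the first time it is requested
        Customer customer = new AppContext().CurrentCustomer;
        if (customer == null)
        {
            return "No customer in session";
        }

        // ... validate and persist the XML for this customer ...
        return "OK";
    }
}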
Hope it helps.
Alright, this should have been easier. Because of the way UserNamePasswordValidator works, I needed to use custom authorization to pass the username/password to the main thread and fetch the customer info again from the database. This is an additional DB call, but an acceptable workaround for now. Please download the code from Rory Primrose's genius blog entry.
I'm running two instances of my application. In one instance, I save one of my entities. When I check the RavenDB (http://localhost:8080/raven), I can see the change. Then, in my other client, I do this (below), but I don't see the changes from the other application. What do I need to do in order to get the most recent data in the DB?
public IEnumerable<CustomVariableGroup> GetAll()
{
    return Session
        .Query<CustomVariableGroup>()
        .Customize(x => x.WaitForNonStaleResults());
}
Edit: The code above works if I try to make a change and get a concurrency exception. After that, when I call refresh (which invokes the above code), it works.
Here is the code that does the save:
public void Save<T>(T objectToSave)
{
    Guid eTag = (Guid)Session.Advanced.GetEtagFor(objectToSave);
    Session.Store(objectToSave, eTag);
    Session.SaveChanges();
}
And here is the class that contains the Database and Session:
public abstract class DataAccessLayerBase
{
    /// <summary>
    /// Gets the database.
    /// </summary>
    protected static DocumentStore Database { get; private set; }

    /// <summary>
    /// Gets the session.
    /// </summary>
    protected static IDocumentSession Session { get; private set; }

    static DataAccessLayerBase()
    {
        if (Database != null) { return; }

        Database = GetDatabase();
        Session = GetSession();
    }

    private static DocumentStore GetDatabase()
    {
        string databaseUrl = ConfigurationManager.AppSettings["databaseUrl"];
        DocumentStore documentStore = new DocumentStore();
        try
        {
            //documentStore.ConnectionStringName = "RavenDb"; // See app.config for why this is commented.
            documentStore.Url = databaseUrl;
            documentStore.Initialize();
        }
        catch
        {
            documentStore.Dispose();
            throw;
        }
        return documentStore;
    }

    private static IDocumentSession GetSession()
    {
        IDocumentSession session = Database.OpenSession();
        session.Advanced.UseOptimisticConcurrency = true;
        return session;
    }
}
Lacking more detailed information and some code, I can only guess...
Please make sure that you call .SaveChanges() on your session. Without explicitly specifying an ITransaction, your IDocumentSession is isolated and transactional between its opening and the call to .SaveChanges: either all operations succeed or none. If you don't call it, all your previous .Store calls will be lost.
If I was wrong, please post more details about your code.
EDIT: Second answer (after additional information):
Your problem has to do with the way RavenDB caches on the client side. By default, RavenDB caches every GET request throughout a DocumentSession. Plain queries are just GET requests (and no, it has nothing to do with whether your index is dynamic or manually defined upfront), and therefore they will be cached. The solution in your application is to dispose of the session and open a new one.
I suggest you rethink your session lifecycle. It seems that your sessions live too long; otherwise this concurrency issue wouldn't arise. If you're building a web application, I recommend opening and closing the session at the beginning and end of each request. Have a look at RaccoonBlog to see it implemented elegantly.
Bob,
It looks like you have but a single session in the application, which isn't right. The following article talks about NHibernate, but the session-management parts apply to RavenDB as well:
http://archive.msdn.microsoft.com/mag200912NHibernate
This code is meaningless:
Guid eTag = (Guid)Session.Advanced.GetEtagFor(objectToSave);
Session.Store(objectToSave, eTag);
It is basically a no-op, but one that looks important. You seem to be trying to work with a model where you have to manually manage all the saves; don't do that. You only need to manage things yourself when you create a new item, that is all.
As for the reason you get this problem, here is a sample:
var session = documentStore.OpenSession();
var post1 = session.Load<Post>(1);

// change the post by another client

var post2 = session.Load<Post>(1); // will NOT go to the server, will give the same instance as post1
Assert.ReferenceEquals(post1, post2);
Sessions are short lived, and typically used in the scope of a single form / request.
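A minimal sketch of that lifecycle, assuming the DocumentStore is created once at application startup and a fresh session is opened per request or operation:
// Created once for the whole application (expensive, thread-safe)
var documentStore = new DocumentStore { Url = "http://localhost:8080" };
documentStore.Initialize();

// Opened and disposed per request (cheap, not thread-safe)
using (var session = documentStore.OpenSession())
{
    var groups = session.Query<CustomVariableGroup>().ToList();
    // ... work with the documents ...
    session.SaveChanges();
}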