Problem with Include() EntityFramework Core with blazor server side [duplicate]

I have seen some books (e.g. Programming Entity Framework: Code First by Julia Lerman) define their domain classes (POCOs) without initializing the navigation properties, like:
public class User
{
public int Id { get; set; }
public string UserName { get; set; }
public virtual ICollection<Address> Address { get; set; }
public virtual License License { get; set; }
}
Other books or tools (e.g. Entity Framework Power Tools) initialize the navigation properties of the class when they generate POCOs, like:
public class User
{
public User()
{
this.Addresses = new List<Address>();
this.License = new License();
}
public int Id { get; set; }
public string UserName { get; set; }
public virtual ICollection<Address> Addresses { get; set; }
public virtual License License { get; set; }
}
Q1: Which one is better? why? Pros and Cons?
Edit:
public class License
{
public License()
{
this.User = new User();
}
public int Id { get; set; }
public string Key { get; set; }
public DateTime Expiration { get; set; }
public virtual User User { get; set; }
}
Q2: In the second approach there would be a stack overflow if the `License` class has a reference to the `User` class too. Does that mean we should have a one-way reference? How should we decide which of the navigation properties to remove?

Collections: It doesn't matter.
There is a distinct difference between collections and references as navigation properties. A reference is an entity. A collection contains entities. This means that initializing a collection is meaningless in terms of business logic: it does not define an association between entities. Setting a reference does.
So it's purely a matter of preference whether or not, or how, you initialize embedded lists.
As for the "how", some people prefer lazy initialization:
private ICollection<Address> _addresses;
public virtual ICollection<Address> Addresses
{
get { return this._addresses ?? (this._addresses = new HashSet<Address>()); }
}
It prevents null reference exceptions, so it facilitates unit testing and manipulating the collection, but it also prevents unnecessary initialization. The latter may make a difference when a class has relatively many collections. The downside is that it takes a fair amount of plumbing, especially compared to auto properties without initialization. Also, the advent of the null-conditional (null-propagation) operator in C# has made it less urgent to initialize collection properties.
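For illustration (my own snippet, not part of the original answer), the null-conditional operator lets consumer code cope with an uninitialized collection; it assumes a using System.Linq directive for Enumerable:
int addressCount = user.Addresses?.Count ?? 0;
foreach (var address in user.Addresses ?? Enumerable.Empty<Address>())
{
// work with address without ever touching a null collection
}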
...unless explicit loading is applied
The only thing is that initializing collections makes it hard to check whether or not a collection was loaded by Entity Framework. If a collection is initialized, a statement like...
var users = context.Users.ToList();
...will create User objects having empty, not-null Addresses collections (lazy loading aside). Checking whether the collection is loaded requires code like...
var user = users.First();
var isLoaded = context.Entry(user).Collection(c => c.Addresses).IsLoaded;
If the collection is not initialized a simple null check will do. So when selective explicit loading is an important part of your coding practice, i.e. ...
if (/*check collection isn't loaded*/)
context.Entry(user).Collection(c => c.Addresses).Load();
...it may be more convenient not to initialize collection properties.
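By contrast, if the collection property is left uninitialized, the check becomes a plain null test (a sketch reusing the user and context from above):
if (user.Addresses == null)
{
context.Entry(user).Collection(u => u.Addresses).Load();
}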
Reference properties: Don't
Reference properties are entities, so assigning an empty object to them is meaningful.
Worse, if you initialize them in the constructor, EF won't overwrite them when materializing your object or by lazy loading. They will always have their initial values until you actively replace them. Worse still, you may even end up saving empty entities in the database!
And there's another effect: relationship fixup won't occur. Relationship fixup is the process by which EF connects all entities in the context by their navigation properties. When a User and a License are loaded separately, User.License will still be populated and vice versa. Unless, of course, License was initialized in the constructor. This is also true for 1:n associations. If Address initialized a User in its constructor, User.Addresses would not be populated!
Entity Framework Core
Relationship fixup in Entity Framework Core (2.1 at the time of writing) isn't affected by initialized reference navigation properties in constructors. That is, when users and addresses are pulled from the database separately, the navigation properties are populated.
However, lazy loading does not overwrite initialized reference navigation properties.
In EF-core 3, initializing a reference navigation property prevents Include from working properly.
So, in conclusion, also in EF-core, initializing reference navigation properties in constructors may cause trouble. Don't do it. It doesn't make sense anyway.
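To recap the advice in this answer as code (my own summary, not from the original): initialize collections if you like, but leave reference navigation properties alone:
public class User
{
public int Id { get; set; }
public string UserName { get; set; }
// Collection navigation property: initializing is harmless (and optional).
public virtual ICollection<Address> Addresses { get; set; } = new HashSet<Address>();
// Reference navigation property: leave it unset so materialization,
// relationship fixup and lazy loading can populate it.
public virtual License License { get; set; }
}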

In all my projects I follow the rule - "Collections should not be null. They are either empty or have values."
The first example works when the creation of these entities is the responsibility of third-party code (e.g. an ORM) and you are working on a short-term project.
The second example is better, since
you are sure that entity has all properties set
you avoid silly NullReferenceException
you make consumers of your code happier
People who practice Domain-Driven Design expose collections as read-only and avoid setters on them (see What is the best practice for readonly lists in NHibernate).
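A rough sketch of that read-only style (illustrative only; mapping a field-backed collection requires the ORM to be configured for field access):
public class User
{
private readonly List<Address> _addresses = new List<Address>();
public int Id { get; set; }
public string UserName { get; set; }
public IReadOnlyCollection<Address> Addresses { get { return _addresses; } }
public void AddAddress(Address address)
{
if (address == null) throw new ArgumentNullException("address");
_addresses.Add(address);
}
}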
Q1: Which one is better? why? Pros and Cons?
It is better to expose non-null collections since you avoid additional checks in your code (e.g. Addresses). It is a good contract to have in your codebase. But it is OK for me to expose a nullable reference to a single entity (e.g. License).
Q2: In second approach there would be stack overflow if the License class has a reference to User class too. It means we should have one-way reference.(?) How we should decide which one of the navigation properties should be removed?
When I implemented the data mapper pattern myself, I tried to avoid bidirectional references and only rarely had a reference from child to parent.
When I use ORMs it is easy to have bidirectional references.
When I need to build a test entity with a bidirectional reference set for my unit tests, I follow these steps:
I build the parent entity with an empty children collection.
Then I add every child, with a reference to the parent entity, into the children collection.
Instead of having a parameterless constructor in the License type, I would make the User property required:
public class License
{
public License(User user)
{
this.User = user;
}
public int Id { get; set; }
public string Key { get; set; }
public DateTime Expiration { get; set; }
public virtual User User { get; set; }
}

It's redundant to new up the list, since your POCO depends on lazy loading.
Lazy loading is the process whereby an entity or collection of entities is automatically loaded from the database the first time that a property referring to the entity/entities is accessed. When using POCO entity types, lazy loading is achieved by creating instances of derived proxy types and then overriding virtual properties to add the loading hook.
If you removed the virtual modifier, you would turn off lazy loading, and in that case your code would no longer work (because nothing would initialize the list).
Note that lazy loading is a feature supported by Entity Framework; if you create the class outside the context of a DbContext, the depending code would obviously suffer from a NullReferenceException.
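As an aside, these are the EF6 switches that govern this behavior; both default to true (a sketch, MyDbContext and its Users set are placeholder names):
using (var context = new MyDbContext())
{
context.Configuration.ProxyCreationEnabled = false; // no proxy subclass, so no loading hook
context.Configuration.LazyLoadingEnabled = false;   // navigation properties are not loaded on access
var user = context.Users.Find(1);
// user.Addresses is now null unless it was initialized or explicitly loaded
}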
HTH

The other answers fully answer the question, but I'd like to add something since this question is still relevant and comes up in google searches.
When you use the "code first model from database" wizard in Visual Studio all collections are initialized like so:
public partial class SomeEntity
{
[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2214:DoNotCallOverridableMethodsInConstructors")]
public SomeEntity()
{
OtherEntities = new HashSet<OtherEntity>();
}
public int Id { get; set; }
[System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2227:CollectionPropertiesShouldBeReadOnly")]
public virtual ICollection<OtherEntity> OtherEntities { get; set; }
}
I tend to take wizard output as being, in effect, an official recommendation from Microsoft, which is why I'm adding to this five-year-old question. Therefore, I'd initialize all collections as HashSets.
And personally, I think it'd be pretty slick to tweak the above to take advantage of C# 6.0's auto-property initializers:
public virtual ICollection<OtherEntity> OtherEntities { get; set; } = new HashSet<OtherEntity>();

Q1: Which one is better? why? Pros and Cons?
The second variant, where virtual properties are set inside an entity constructor, has a definite problem known as "virtual member call in a constructor".
As for the first variant with no initialization of navigation properties, there are 2 situations depending on who / what creates an object:
Entity framework creates an object
Code consumer creates an object
The first variant is perfectly valid when Entity Framework creates an object,
but can fail when a code consumer creates an object.
The solution to ensure a code consumer always creates a valid object is to use a static factory method:
Make the default constructor protected. Entity Framework is fine with protected constructors.
Add a static factory method that creates an empty object (e.g. a User object), sets all properties (e.g. Addresses and License) after creation, and returns a fully constructed User object.
This way Entity Framework uses the protected default constructor to create a valid object from data obtained from some data source, while a code consumer uses the static factory method to create a valid object, as sketched below.
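A minimal sketch of that arrangement (the factory signature and property shapes are my own illustration):
public class User
{
protected User() { } // Entity Framework can call this
public int Id { get; private set; }
public string UserName { get; private set; }
public virtual ICollection<Address> Addresses { get; private set; }
public virtual License License { get; private set; }
public static User Create(string userName, License license)
{
if (userName == null) throw new ArgumentNullException("userName");
if (license == null) throw new ArgumentNullException("license");
return new User
{
UserName = userName,
License = license,
Addresses = new HashSet<Address>()
};
}
}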

I use the answer from this question: Why is my Entity Framework Code First proxy collection null and why can't I set it?
I had problems with constructor initialization. The only reason I do this is to make test code easier; making sure the collection is never null saves me from constantly initializing it in tests, etc.

Related

Problems with EF-Agnostic design consumed by WCF service.

I am trying to set up EF to work with WCF while keeping the domain class models EF-agnostic.
The code is organized into 3 projects. (I am taking a stab at DDD - I am very new to it but am looking forward to learning more.)
Project: QA - Domain Layer. Contains the DataContract models/entities.
References
QA.Data
Project: QA.Data - Data Layer. Contains the context and EDMX (code generation strategy = "none")
References
Entity Framework/System.Data.Entity
Project: QA.Repository - Data Access/Repository. Contains the repository classes
References
QA [Domain Layer]
QA.Data [Data Layer]
Entity Framework/System.Data.Entity
My understanding is that the domain layer can reference the data layer but the data layer should never reference the domain. The problem that this presents is that my Domain Models/Classes are defined in the Domain layer but the Context which creates and returns them is in the Data layer. In order for my context to know to return a "Widget" object it would need a reference to the Domain layer which defines the "Widget".
My (failed) solution: My solution was to create interfaces for each Domain Model and place them in the data layer. The context would return ... IDbSet ... These interfaces would, in turn, be implemented by the Domain Models, therefore keeping my data layer from directly needing to reference my domain (which causes illegal circular references anyway). The domain models were originally constructed using "ADO.NET DbContext Generator w/WCF Support" T4 templates. This process resulted in the inclusion of the [KnownType(typeof(IWidgetPiece))] at the beginning of the widget class definition. (A Widget has a navigation property ... ICollection ...)
The problem appears when I attempt to access the service; I get the following error:
'QA.Data.IWidgetPiece' cannot be added to list of known types since
another type 'System.Object' with the same data contract name
'http://www.w3.org/2001/XMLSchema:anyType' is already present. If
there are different collections of a particular type - for example,
List<Test> and Test[], they cannot both be added as known types.
Consider specifying only one of these types for addition to the known
types list.
I can change these to the concrete implementations ... [KnownType(typeof(WidgetPiece))] ... but I continue to get this error because the navigation property they are referring to is still returning an IWidgetPiece interface type, which it MUST do in order to satisfy the interface implementation.
I am trying to figure out how to keep things appropriately divided and still have the context return what it should. The context returning interfaces still doesn't "sit" right with me, for this and other reasons, but I cannot think of another way to do this, and even this approach is presenting the aforementioned issue. HELP!
Some code to hopefully clarify my previous ramblings ...
namespace QA.Data
{
public interface IWidgetPiece
{
String ID { get; set; }
}
public interface IWidget
{
String ID { get; set; }
ICollection<IWidgetPiece> Pieces { get; set; }
}
public partial class WidgetEntities : DbContext
{
public IDbSet<IWidget> Widgets { get; set; }
public IDbSet<IWidgetPiece> WidgetPieces { get; set; }
}
}
namespace QA
{
[KnownType(typeof(IWidgetPiece))]
// [KnownType(typeof(WidgetPiece))]
[DataContract(IsReference = true)]
public partial class Widget : QA.Data.IWidget
{
[DataMember]
public String ID { get; set; }
[DataMember]
public virtual ICollection<IWidgetPiece> Pieces { get; set; }
}
[DataContract(IsReference = true)]
public partial class WidgetPiece : QA.Data.IWidgetPiece
{
[DataMember]
public string ID { get; set; }
}
}
namespace QA.Repository
{
public class WidgetRepository
{
public List<Widget> GetWidgetbyID(String sId)
{
WidgetEntities context = new WidgetEntities();
List<IWidget> objs = context.Widgets.Where(b => b.ID == sId).ToList();
List<Widget> widgetList = new List<Widget>();
foreach (var iwidget in objs)
widgetList.Add((Widget)iwidget);
return widgetList;
}
}
}
Do you really want / need two separate models, i.e. your data access layer model (edmx) and your "real" domain model? The whole point of an ORM framework like EF is that you can map your domain model to your database tables, using mappings between the physical (database) model and the conceptual model.
Since EF 4.1, you can construct your domain model and then, in your data access layer, map it to your database directly using a fluent API. You can also elect to reverse-engineer your POCO domain model from a database if you want to quickly get up and running.
It just seems a bit of unnecessary complexity to create an entire EF class model, only to then have to map it again into another class model (which will most likely be fairly close to the EF-generated one).
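For illustration, mapping the POCO domain classes directly with the EF 4.1+ fluent API could look roughly like this (the context name is mine, and it assumes the domain classes expose concrete types such as ICollection<WidgetPiece> rather than interfaces):
public class WidgetContext : DbContext
{
public DbSet<Widget> Widgets { get; set; }
public DbSet<WidgetPiece> WidgetPieces { get; set; }
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
// Conventions handle most of the mapping; the fluent API covers the rest.
modelBuilder.Entity<Widget>().HasKey(w => w.ID);
modelBuilder.Entity<WidgetPiece>().HasKey(p => p.ID);
modelBuilder.Entity<WidgetPiece>().Property(p => p.ID).HasMaxLength(50);
}
}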

Optimum Way To Restore Domain Object

This is such a simple and common scenario that I wonder how I managed until now and why I am having problems now.
I have this object (part of the Infrastructure assembly)
public class Queue {}
public class QueueItem
{
public QueueItem(int blogId,string name,Type command,object data)
{
if (name == null) throw new ArgumentNullException("name");
if (command == null) throw new ArgumentNullException("command");
BlogId = blogId;
CommandType = command;
ParamValue = data;
CommandName = name;
AddedOn = DateTime.UtcNow;
}
public Guid Id { get; internal set; }
public int BlogId { get; private set; }
public string CommandName { get; set; }
public Type CommandType { get; private set; }
public object ParamValue { get; private set; }
public DateTime AddedOn { get; private set; }
public DateTime? ExecutedOn { get; private set; }
public void ExecuteIn(ILifetimeScope ioc)
{
throw new NotImplementedException();
}
}
This will be created in another assembly like this
var qi = new QueueItem(1,"myname",typeof(MyCommand),null);
Nothing unusual here. However, this object will be sent to a repository where it will be persisted. The Queue object will ask the repository for items. The repository should re-create QueueItem objects.
However, as you see, the QueueItem properties are immutable; the AddedOn property should be set only once, when the item is created. The Id property will be set by the Queue object (this is not important).
The question is how should I recreate the QueueItem in the repository? I can have another constructor which will require every value for ALL the properties, but I don't want that constructor available for the assembly that will create the queue item initially. The repository is part of a different assembly so internal won't work.
I thought about providing a factory method
class QueueItem
{
/* ..rest of definitions.. */
public static QueueItem Restore(/* list of params*/){}
}
which at least makes the intent clear, but I don't know why I don't like this approach. I could also enforce that items are created only by the Queue, but that means passing the Queue as a dependency to the repo, which again isn't something I'd like. Having a specific factory object for this also seems like overkill.
Basically my question is: what is the optimum way to recreate an object in the repository, without exposing that specific creational functionality to another consumer object.
Update
It's important to note that by repository I mean the pattern itself as an abstraction, not a wrapper over an ORM. It doesn't matter how or where the domain objects are persisted. It matters how they can be re-created by the repository. Another important thing is that my domain model is different from the persistence model. I do use an RDBMS but I think this is just an implementation detail which should not bear any importance, since I'm looking for a way that doesn't depend on a specific storage access.
While this is a specific scenario, it can be applied to basically every object that will be restored by the repo.
Update2
OK, I don't know how I could forget about AutoMapper. I was under the wrong impression that it can't map private fields/setters, but it can, and I think this is the best solution.
In fact I can say the optimum solutions (IMO) are in order:
Directly deserializing, if available.
Automap.
Factory method on the domain object itself.
The first two don't require the object to do anything in particular, while the third requires the object to provide functionality for that case (a way to restore valid state from data). It has clear intent but it pretty much does a mapper's job.
Answer Updated
To answer myself, in this case the optimum way is to use a factory method. Initially I opted for AutoMapper, but I found myself using the factory method more often. AutoMapper can be useful sometimes, but in quite a lot of cases it's not enough.
An ORM framework would take care of that for you. You just have to tell it to rehydrate an object and a regular instance of the domain class will be served to you (sometimes you only have to declare properties as virtual or protected, in NHibernate for instance). The reason is because under the hood, they usually operate on proxy objects derived from your base classes, allowing you to keep these base classes intact.
If you want to implement your own persistence layer though, it's a whole other story. Rehydrating an object from the database without breaking the scope constraints originally defined in the object is likely to involve reflection. You also have to think about a lot of side concerns: if your object has a reference to another object, you must rehydrate that one first, etc.
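As a rough illustration of what such hand-rolled rehydration can look like (purely a sketch; the member names come from the QueueItem above, and the approach deliberately bypasses the public constructor):
using System;
using System.Reflection;
using System.Runtime.Serialization;
public static class QueueItemHydrator
{
public static QueueItem Rehydrate(Guid id, int blogId, string name, Type commandType, object data, DateTime addedOn)
{
// Create the instance without running any constructor.
var item = (QueueItem)FormatterServices.GetUninitializedObject(typeof(QueueItem));
Set(item, "Id", id);
Set(item, "BlogId", blogId);
Set(item, "CommandName", name);
Set(item, "CommandType", commandType);
Set(item, "ParamValue", data);
Set(item, "AddedOn", addedOn);
return item;
}
private static void Set(object target, string propertyName, object value)
{
// Invoke the non-public setter via reflection.
var property = target.GetType().GetProperty(propertyName, BindingFlags.Public | BindingFlags.Instance);
property.GetSetMethod(nonPublic: true).Invoke(target, new[] { value });
}
}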
You can have a look at that tutorial : Build Your Own dataAccess Layer although I wouldn't recommend reinventing the wheel in most cases.
You talked about a factory method on the object itself. But DDD states that entities should be created by a factory. So you should have a QueueItemFactory that can create new QueueItems and restore existing QueueItems.
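A sketch of what that factory might look like (hypothetical; it assumes QueueItem also gets an internal constructor accepting the full persisted state, which works because the factory lives in the same assembly as QueueItem while repositories in other assemblies only see the factory):
public class QueueItemFactory
{
// New items go through the public constructor, which stamps AddedOn itself.
public QueueItem CreateNew(int blogId, string name, Type command, object data)
{
return new QueueItem(blogId, name, command, data);
}
// Restored items go through the assumed internal constructor so that
// Id, AddedOn and ExecutedOn keep their persisted values.
public QueueItem Restore(Guid id, int blogId, string name, Type command, object data, DateTime addedOn, DateTime? executedOn)
{
return new QueueItem(id, blogId, name, command, data, addedOn, executedOn);
}
}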
Ok I don't know how I could forget about AutoMapper.
I wish I could forget about AutoMapper. Just looking at the hideous API gives me shivers down my spine.

Populating association properties in entities from service call

Say I have a common pattern with a Customer object and a SalesOrder object. I have corresponding SalesOrderContract and CustomerContract objects that are similar, flatter objects used to serialize through a web service.
public class Customer
{
public int CustomerId { get; set; }
public string Name { get; set; }
public Address ShippingAddress { get; set; }
//more fields...
}
public class Order
{
public int OrderId { get; set; }
public Customer Customer { get; set; }
// etc
}
And my sales order contract looks like this
public class OrderContract
{
public int OrderId { get; set; }
public int CustomerId { get; set; }
}
public class OrderTranslator
{
public static Order ToOrder(OrderContract contract)
{
return new Order { OrderId = contract.OrderId };
// just translate customer id or populate entire Customer object
}
}
I have a layer in between the service layer and business object layer that translates between the two. My question is this: do I populate the Order.Customer object on the other end, since the Order table just needs the customer id? I don't carry the entire customer object in the OrderContract because it's not necessary and too heavy. But, as part of saving it, I have to validate that it's indeed a valid customer. I can do a few things:
Populate the Order.Customer object completely based on the CustomerId when I translate between contract and entity. This would require calling the CustomerRepository in a helper class that translates between entities and contracts. Doesn't feel right to me. The translator should really just be data mapping.
Create a domain service for each group of operations that performs the validation needed without populating the Order.Customer. This service would pull the Customer object based on Order.CustomerId and check to see if it's valid. Not sure on this because a sales order should be able to validate itself, but it's also not explicitly dealing with Orders as it also deals with Customers so maybe a domain service?
Create a separate property Order.CustomerId and lazy load the customer object based on this.
Populate Order.Customer from a factory class. Right now my factory classes are just for loading from the database. I'm not really loading from data contracts, but maybe it makes sense?
So the question is two-part: if you have association properties in your entities that are required to tell whether something is completely valid before saving, do you just populate them? If you do, where do you actually do that, because the contract/entity translator feels wrong?
The bottom line is that I need to be able to do something like
if (order.Customer == null || !order.Customer.IsActive)
{
//do something
}
The question is where does it make sense to do this? In reality my Order object has a lot of child entities required for validation and I don't want things to become bloated. This is why I'm considering making domain services to encapsulate validation, since it's such a huge operation in my particular case (several hundred weird rules). But I also don't want to remove all the logic and reduce my objects to mere property bags. Finding the balance is tough.
Hope that makes sense. If more background is required, let me know.
You have a couple of things going on here. I think part of the issue is mainly how you appear to have arranged your Translator class. Remember, for an entity, the whole concept is based on instance identity. So a Translator for an entity should not return a new object, it should return the correct instance of the object. That typically means you have to supply it with that instance in the first place.
It is perhaps useful to think in terms of updates vs creating a new object.
For an update the way I would structure this operation is as follows: I would have the web service that the application calls to get and return the contract objects. This web service calls both repositories and Translators to do its work. The validation stays on the domain object.
In code an update would look something like the following.
Web Service:
[WebService]
public class OrderService
{
[WebMethod]
public void UpdateOrder(OrderContract orderContract)
{
OrderRepository orderRepository = new OrderRepository(_session);
// The key point here is we get the actual order itself
// and so Customer and all other objects are already either populated
// or available for lazy loading.
Order order = orderRepository.GetOrderByOrderContract(orderContract);
// The translator uses the OrderContract to update attribute fields on
// the actual Order instance we need.
OrderTranslator.OrderContractToOrder(ref order, orderContract);
// We now have the specific order instance with any properties updated
// so we can validate and then persist.
if (order.Validate())
{
orderRepository.Update(order);
}
else
{
// Whatever
}
}
}
Translator:
public static class OrderTranslator
{
public static void OrderContractToOrder(ref Order order, OrderContract orderContract)
{
// Here we update properties on the actual order instance passed in
// instead of creating a new Order instance.
order.SetSomeProperty(orderContract.SomeProperty);
// ... etc.
}
}
The key concept here is that because we have an entity, we are getting the actual Order, the instance of the entity, and then using the translator to update attributes instead of creating a new Order instance. Because we are getting the original Order, not creating a new instance, presumably we can have all the associations either already populated or available via lazy loading. We do not have to recreate any associations from an OrderContract, so the issue goes away.
I think the other part of the issue may be your understanding of how a factory is designed. It is true that for entities a Factory may not set all the possible attributes - the method could become hopelessly complex if it did.
But what a factory is supposed to do is create all the associations for a new object so that the new object returned is in a valid state in terms of being a full and valid aggregate. Then the caller can set all the other various and sundry "simple" attributes.
Anytime you have a Factory you have to make decisions about what parameters to pass in. Maybe in this case the web service gets the actual Customer and passes it to the factory as a parameter. Or maybe the web service passes in an Id and the factory is responsible for getting the actual Customer instance. It will vary by specific situation, but in any case, however it gets the other objects required, a factory should return at minimum a fully populated object in terms of its graph, i.e. all relationships should be present and traversable.
In code a possible example of new Order creation might be:
[WebService]
public class OrderService
{
[WebMethod]
public void SaveNewOrder(OrderContract orderContract)
{
// Lets assume in this case our Factory has a list of all Customers
// so given an Id it can create the association.
Order order = OrderFactory.CreateNewOrder(orderContract.CustomerId);
// Once again we get the actual order itself, albeit it is new,
// and so Customer and all other objects are already either populated
// by the factory create method and/or are available for lazy loading.
// We can now use the same translator to update all simple attribute fields on
// the new Order instance.
OrderTranslator.OrderContractToOrder(ref order, orderContract);
// We now have the new order instance with all properties populated
// so we can validate and then persist.
if (order.Validate())
{
//Maybe you use a Repository - I use a unit of work but the concept is the same.
orderRepository.Save(order);
}
else
{
//Whatever
}
}
}
So, hope that helps?

Do I have to implement Add/Delete methods in my NHibernate entities?

This is a sample from the Fluent NHibernate website:
Compared to Entity Framework, I have Add methods in my POCO in this code sample using NHibernate. With EF I did context.Add or context.AddObject etc.; the context had the methods to put one entity into another entity's collection!
Do I really have to implement Add/Delete/Update methods (I do not mean the real database CRUD operations!) in an NHibernate entity?
public class Store
{
public virtual int Id { get; private set; }
public virtual string Name { get; set; }
public virtual IList<Product> Products { get; set; }
public virtual IList<Employee> Staff { get; set; }
public Store()
{
Products = new List<Product>();
Staff = new List<Employee>();
}
public virtual void AddProduct(Product product)
{
product.StoresStockedIn.Add(this);
Products.Add(product);
}
public virtual void AddEmployee(Employee employee)
{
employee.Store = this;
Staff.Add(employee);
}
}
You don't have to do this for NHibernate; you do it to keep in-memory consistency and to not repeat yourself.
Consistency in memory
If you have a two-way relationship, let's say Order has Lines and Line has a relationship to Order, you don't want to have a reference from one side and not from the other.
If you just do:
order.Lines.Add(line);
You have made a reference from Order to Line, but the Line.Order property remains null. So your in-memory instances are not consistent.
Don't Repeat Yourself
You can use the following code :
order.Lines.Add(line);
line.Order = order;
but you would be repeating yourself, so it is better to put this code in only one place, and the best place is an order.AddLine(...) method.
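A minimal sketch of such a method (the Line.Order property is assumed, mirroring the snippets above):
public class Order
{
private readonly IList<Line> _lines = new List<Line>();
public virtual IEnumerable<Line> Lines { get { return _lines; } }
public virtual void AddLine(Line line)
{
if (line == null) throw new ArgumentNullException("line");
line.Order = this; // set both ends of the association in one place
_lines.Add(line);
}
}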
You don't have to. You could just call SomeStore.Products.Add(someProduct) directly from outside of your entity. But it's often good practice to make the collections 'read-only' from a public perspective, and using an add method in the entity for adding items.
One benefit of this is that you can put additional logic in there. For instance, in your store example, you could set a 'storesStockedIn' collection (if there was such a thing) in the same method, and so keep all the logic about creating that relationship in one place.
This isn't really an NHibernate thing, but rather an OOP thing. (Although I'm not familiar with EF - maybe it automates some of this for you.) The design decisions are exactly the same as if it were just an unpersisted POCO (without NHibernate).

NHibernate exception: method Add should be 'public/protected virtual' or 'protected internal virtual'

Take this class as an example:
public class Category : PersistentObject<int>
{
public virtual string Title { get; set; }
public virtual string Alias { get; set; }
public virtual Category ParentCategory { get; set; }
public virtual ISet<Category> ChildCategories { get; set; }
public /*virtual*/ void Add(Category child)
{
if (child != null)
{
child.ParentCategory = this;
ChildCategories.Add(child);
}
}
}
When running the application without the virtual keyword on the Add method, I get this error:
method Add should be 'public/protected virtual' or 'protected internal virtual'
I understand why properties need to be declared as virtual, because they need to be overridden by the lazy loading feature.
But I don't understand why methods need to be declared as virtual... what do they need to be overridden for?
This is very confusing!
Methods need to be virtual as well because they could access fields. Consider this situation:
class Entity
{
//...
private int a;
public virtual int A
{
get { return a; }
}
public virtual void Foo()
{
// lazy loading is implemented here by the proxy
// to make the value of a available
if (a > 7)
// ...
}
}
I believe this is required for the lazy-loading feature in NHibernate, where NHibernate creates proxies of your entity and controls all access to it. This is why every single method and property must be virtual. Basically, if there is a member doing anything with the entity, NH needs to know about it and tap into it.
As mentioned earlier, in order for NHibernate to do the 'magic' it creates proxy classes which inherit from your entities (Category in your case). However, if you make your entities implement an interface, it will use that interface to create a proxy instead of the concrete types. This way, you wouldn't have to mark everything virtual.
EDIT: Some corrections... According to this, I am compelled to say that it almost looks like NH doesn't really do anything with virtual methods after all. And I even read someone saying that they removed this run-time check from the NH core assembly just to get around it. My assumption would be that it is an older requirement which hasn't been removed. The cool thing is that it looks like there is an initiative to use PostSharp for static proxies, so your classes won't have to declare anything virtual for NH to generate proxies. The bad thing is that it looks like it's been stuck in a branch for almost two years.