What is the performance difference between Repository and Entity Manager in TypeORM? - orm

I am new to TypeORM and confused about when I should use the EntityManager and when I should use a repository.
As far as I know, the difference is that I have to specify the entity when using the EntityManager, but not when using a repository.
I'm not sure whether the performance is the same in both cases.
I have used both, but didn't find any difference in functionality.

The performance is the same, but prefer to use repositories.
Each entity has its own repository, which handles all operations on that entity, and repositories are more convenient than the EntityManager when you work with entities a lot.
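The equivalence makes sense because a repository is essentially a thin wrapper that delegates to the entity manager with the entity pre-bound. A minimal sketch of that delegation (these are illustrative stand-in classes, not TypeORM's actual implementation):

```typescript
// Why Repository and EntityManager perform the same: the repository
// simply delegates to the manager with the entity class already known.
// Hypothetical stand-ins, not TypeORM's real source.

type EntityClass<T> = new () => T;

class EntityManager {
  // Pretend this builds and runs a query; both call paths end up here.
  find<T>(entity: EntityClass<T>, where: Partial<T>): string {
    return `SELECT * FROM ${entity.name} WHERE ${JSON.stringify(where)}`;
  }
}

class Repository<T> {
  constructor(
    private manager: EntityManager,
    private entity: EntityClass<T>,
  ) {}

  // Same work as manager.find, minus having to name the entity each time.
  find(where: Partial<T>): string {
    return this.manager.find(this.entity, where);
  }
}

class User {
  id!: number;
  name!: string;
}

const manager = new EntityManager();
const userRepository = new Repository(manager, User);

// Both paths produce the identical query, so performance is identical.
const viaManager = manager.find(User, { id: 1 });
const viaRepository = userRepository.find({ id: 1 });
console.log(viaManager === viaRepository); // true
```

The only difference is ergonomics: the repository remembers the entity for you, which matters when a service touches the same entity in many places.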

Related

What are the advantages for using Command Query Responsibility Segregation over Repository pattern in ASP.NET MVC?

Can someone give me a better way to understand the advantages of using command-query separation over global repository pattern?
CQS and repositories are quite different concepts. You might be thinking of CQRS with a specific query-handler implementation.
Anyway, all of these are compatible. CQRS implies a 'command' model, i.e. a model which is very easy to update. The repository pattern is used to abstract persistence. CQS means you don't do a command and a query in the same function (note that the query isn't an SQL query). Basically, a command changes something, while a query reads and returns a result.
With a read model, you can have specific query services (aka handlers) used to handle querying use cases. In this case, a 'real' repository doesn't help very much, because the query handler itself abstracts the persistence. But in spirit, a query handler is basically a repository method.
Personally, I use repositories only with the command (write/business) model. They do very little: Add, Get, Save, Delete. For querying needs, I have a read model with query handlers.
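A compact sketch of that split, with hypothetical names: a deliberately minimal write-side repository, and a separate query handler serving one read use case.

```typescript
// Sketch of the write/read split described above (hypothetical names).
// The repository serves the command (write) model; the query handler
// serves a read model and abstracts persistence on its own.

interface Order {
  id: string;
  total: number;
}

// Write side: a deliberately minimal repository - Add, Get, Save, Delete.
class OrderRepository {
  private store = new Map<string, Order>();

  add(order: Order): void { this.store.set(order.id, order); }
  get(id: string): Order | undefined { return this.store.get(id); }
  save(order: Order): void { this.store.set(order.id, order); }
  delete(id: string): void { this.store.delete(id); }

  // A real read model would hit the database directly; here we expose
  // the raw rows so the query handler below has something to read.
  all(): Order[] { return [...this.store.values()]; }
}

// Read side: a query handler for one specific use case.
// In spirit it is just a repository method that returns a view.
class TotalAboveQueryHandler {
  constructor(private source: OrderRepository) {}

  handle(minTotal: number): Order[] {
    return this.source.all().filter(o => o.total > minTotal);
  }
}

const repo = new OrderRepository();
repo.add({ id: "a", total: 10 });
repo.add({ id: "b", total: 99 });

const bigOrders = new TotalAboveQueryHandler(repo).handle(50);
console.log(bigOrders.map(o => o.id)); // [ 'b' ]
```

Note the asymmetry: commands go through the four small repository methods, while each querying use case gets its own handler that can shape the result however the caller needs.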

Practical usage of the Unit Of Work & Repository patterns

I'm building an ORM and am trying to work out the exact responsibilities of each pattern. Let's say I want to transfer money between two accounts, using the Unit of Work to manage the updates in a single database transaction.
Is the following approach correct?
Get them from the Repository
Attach them to my Unit Of Work
Do the business transaction & commit?
Example:
from = accountRepository.find(fromAccountId);
to = accountRepository.find(toAccountId);
unitOfWork.attach(from);
unitOfWork.attach(to);
unitOfWork.begin();
from.withdraw(amount);
to.deposit(amount);
unitOfWork.commit();
Should, as in this example, the Unit of Work and the Repository be used independently, or:
Should the Unit of Work internally use a Repository and have the ability to load objects?
... or should the Repository internally use a Unit of Work and automatically attach any loaded entity?
All comments are welcome!
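To make the first option concrete, the flow from the question can be sketched as runnable code. Everything here is hypothetical and in-memory; the Unit of Work only tracks attached entities and flushes them on commit.

```typescript
// Runnable sketch of the question's flow: Unit of Work and Repository
// used independently. All names are hypothetical; storage is in-memory.

interface Account { id: string; balance: number; }

class AccountRepository {
  private db = new Map<string, Account>();

  seed(account: Account): void { this.db.set(account.id, account); }

  find(id: string): Account {
    const row = this.db.get(id);
    if (!row) throw new Error(`account ${id} not found`);
    return { ...row }; // detached copy, as an ORM would materialize one
  }

  write(account: Account): void { this.db.set(account.id, { ...account }); }
}

class UnitOfWork {
  private tracked: Account[] = [];
  private active = false;

  // In this independent-use sketch the UoW still needs *something* to
  // flush through; here it is handed the repository at construction.
  constructor(private repository: AccountRepository) {}

  attach(entity: Account): void { this.tracked.push(entity); }
  begin(): void { this.active = true; }

  // Commit flushes every tracked entity in one logical transaction.
  commit(): void {
    if (!this.active) throw new Error("no transaction in progress");
    for (const entity of this.tracked) this.repository.write(entity);
    this.tracked = [];
    this.active = false;
  }
}

const accountRepository = new AccountRepository();
accountRepository.seed({ id: "from", balance: 100 });
accountRepository.seed({ id: "to", balance: 0 });

const unitOfWork = new UnitOfWork(accountRepository);
const from = accountRepository.find("from");
const to = accountRepository.find("to");
unitOfWork.attach(from);
unitOfWork.attach(to);
unitOfWork.begin();
from.balance -= 25; // from.withdraw(amount)
to.balance += 25;   // to.deposit(amount)
unitOfWork.commit();
```

Even in this "independent" arrangement the UoW ends up holding a reference to persistence machinery in order to flush, which hints at why the question of who wraps whom keeps coming up.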
The short answer is that the Repository would use the UoW in some way, but I think the relationship between these patterns is less concrete than it initially seems. The goal of the Unit of Work is to lump a group of database-related operations together so they can be executed as an atomic unit. There is often a relationship between the boundaries created when using the UoW and the boundaries created by transactions, but this relationship is largely coincidental.
The Repository pattern, on the other hand, is a way to create a collection-like abstraction over an Aggregate Root. More often than not, the sorts of things you see in a repository are related to querying or finding instances of the Aggregate Root. A more interesting question (and one which doesn't have a single answer) is whether it makes sense to add methods that deal with something other than querying for Aggregates. On the one hand, there could be valid cases where you have operations that apply to multiple Aggregates. On the other, it could be argued that if you're performing operations on more than one Aggregate, you are actually performing a single action on another Aggregate. If you are only querying data, I don't know that you really need the boundaries implied by the UoW. It all comes down to the domain and how it is modeled.
The two patterns operate at very different levels of abstraction, and the involvement of the Unit of Work will also depend on how the Aggregates are modeled. The Aggregates may want to delegate persistence-related work to the Entities they manage, or there could be another layer of abstraction between the Aggregates and the actual ORM. If your Aggregates/Entities deal with persistence themselves, then it may be appropriate for the Repositories to manage that persistence as well. If not, then it doesn't make sense to include the UoW in your Repository.
If you want to create something for general public consumption outside of your organization, then I would suggest designing your Repository interfaces/base implementations so that they can interact directly with your ORM or not, depending on the needs of the user of your ORM. If this is internal, and you are doing the persistence work in your Aggregates/Entities, then it makes sense for your Repository to make use of your UoW. For a generic Repository, it would make sense to provide access to the UoW object from within Repository implementations that can make sure it is initialized and disposed of appropriately. On that note, there will also be times when you want to use multiple Repositories within a single UoW boundary, so you would want to be able to pass an already-primed UoW into the Repository in that case.
I recommend the approach where the repository uses the UoW internally. This approach has some advantages, especially for web applications.
In a web application, the recommended pattern is a Unit of Work (session) per HTTP request. If your repositories share a UoW, you can take advantage of the first-level cache (via the identity map) for objects that were already requested by other repositories (like data dictionaries that are referenced by multiple aggregates). You also have to commit only one transaction instead of several, which performs much better.
You could take a look at the source code of Hibernate/NHibernate, which are mature ORMs in the Java/.NET world.
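A sketch of that per-request arrangement, with hypothetical in-memory types: two repositories share one Unit of Work, the identity map serves the second lookup from cache, and commit flushes everything once.

```typescript
// Sketch of repositories sharing a per-request UoW whose identity map
// acts as a first-level cache: the second repository lookup for the
// same row is served from memory, and commit flushes once.
// Hypothetical names; the "database" is an in-memory Map.

interface Row { id: string; [key: string]: unknown; }

class UnitOfWork {
  private identityMap = new Map<string, Row>();
  private dirty = new Set<string>();
  loads = 0; // counts real "database" reads, for illustration

  constructor(private database: Map<string, Row>) {}

  // Return the cached instance if this row was already loaded.
  load(id: string): Row {
    const cached = this.identityMap.get(id);
    if (cached) return cached;
    this.loads++;
    const row = this.database.get(id);
    if (!row) throw new Error(`row ${id} not found`);
    const copy = { ...row };
    this.identityMap.set(id, copy);
    return copy;
  }

  markDirty(id: string): void { this.dirty.add(id); }

  // One commit for everything touched during the request.
  commit(): void {
    for (const id of this.dirty) {
      this.database.set(id, { ...this.identityMap.get(id)! });
    }
    this.dirty.clear();
  }
}

// Two repositories share the same request-scoped UoW.
class Repository {
  constructor(private uow: UnitOfWork) {}
  find(id: string): Row { return this.uow.load(id); }
  save(row: Row): void { this.uow.markDirty(row.id); }
}

const database = new Map<string, Row>([
  ["dict:1", { id: "dict:1", label: "EUR" }],
]);
const uow = new UnitOfWork(database);
const ordersRepo = new Repository(uow);
const invoicesRepo = new Repository(uow);

const a = ordersRepo.find("dict:1");
const b = invoicesRepo.find("dict:1"); // served from the identity map
console.log(a === b, uow.loads); // true 1
```

Because both repositories hand back the same instance, changes made through one are visible through the other before commit, which is exactly the session semantics Hibernate/NHibernate provide.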
Good Question!
Depends on what your unit-of-work boundaries are going to be. If they span multiple repositories, then you might have to create another abstraction to ensure that multiple repositories are covered. It would be like the small "service" layer that is defined in Domain-Driven Design.
If your unit of work is going to be pretty much per repository, then I would go with the second option.
My question to you, however, would be: why worry about repositories when writing an ORM? They are going to be defined and used by the consumers of your Unit of Work, right? If so, you have no option but to provide a Unit of Work; your consumers will have to enlist their repositories with it and will also be responsible for controlling its boundaries, won't they?

Steps to build an NHibernate data layer

What are the proper steps to design and implement an NHibernate data layer?
Should I include a step to let NHibernate generate the schema definition rather than coding the schema myself?
It all depends on whether you are starting from scratch or not. For new projects I use NHibernate to create the schema for me. For existing projects that I want to switch to NH, I usually make the db changes manually. You need to be a little careful, though, with regard to your mappings and the db you are using. If your mappings don't match the db correctly, you might have performance issues, and objects might update themselves without you knowing, so that when you flush the session your db gets updated.
As for the actual data layer, I usually use the Automatic Transaction Management and NHibernate facilities from the Castle project. You can also write your own configuration builder for the NHibernate Facility so that it works with Fluent NHibernate as well.
That's a very open question.
Regarding the schema generation, yes, it's usually better to let NHibernate generate it.
For architectures based on NHibernate, you can check Sharp Architecture, Effectus and uNhAddIns

Using Schema Export Efficiently

We are using NHibernate as our ORM framework.
We need to persist classes that we load at run time. We do that according to the metadata they come with, which holds the names and types of the data they carry.
In order to build the tables for them at run time, we use the SchemaExport class from the NHibernate ToolSet API.
We wanted to ask two questions:
Is there a way to make NHibernate perform all the table creations in a single roundtrip to the DB instead of one roundtrip per table?
In order to use the SchemaExport tool, we are building a dynamic string that represents a mapping file from a template we keep. Is there a better way to do this? Maybe even without a mapping string?
Regarding question 2:
If I understand you correctly, you don't want to use hbm mappings, right? Have you considered using Fluent NHibernate? (http://fluentnhibernate.org/)

nHibernate - Generate classes from a database?

I know that you can generate a database from classes and their mappings using NHibernate, but I can't find any good links for doing it the other way around. We have a database that has already been designed, and we are looking at using NHibernate. It would be nice to use a tool to generate the mappings and classes from the database; then we could tweak them to suit our tastes.
NHibernate Mapping Generator
You can use NHibernate with an existing database. It is a matter of writing the mapping files.
I also recommend using Fluent NHibernate. I started using it after this community recommended it.
Look at SubSonic as well if you do not like maintaining mappings.