A silly question, perhaps, but at this time of night, Stack Overflow is my only friend.
I'm playing with NHibernate and wanted to verify these two statements regarding sessions in web applications.
1) You should only ever have 1 ISessionFactory per database for the lifecycle of an application.
2) You should only have 1 ISession per HttpRequest or batch of HttpRequests (i.e. a conversation).
[I don't want tool or framework recommendations, just want to confirm the above]
You should only have one ISessionFactory for the lifecycle of the application.
Session-per-request is how I work with NHibernate. There may be other patterns, but this is the one that I stick to in my web projects.
You are correct in your assumptions.
Session-per-request is the most common pattern for web applications, and plays nice with MVC, web farms, etc.
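To make the two rules concrete, here is a minimal sketch of the pattern in a classic Global.asax; the "nh.session" key and the configuration source are illustrative assumptions, not a prescribed API:

```csharp
// One ISessionFactory for the application's lifetime; one ISession per request.
// A minimal sketch -- adapt the configuration and storage key to your app.
using System;
using System.Web;
using NHibernate;
using NHibernate.Cfg;

public class Global : HttpApplication
{
    // Expensive to build and thread-safe: build it exactly once.
    private static readonly ISessionFactory SessionFactory =
        new Configuration().Configure().BuildSessionFactory();

    public static ISession CurrentSession
    {
        get { return (ISession)HttpContext.Current.Items["nh.session"]; }
    }

    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        // Cheap to open and not thread-safe: scope it to the request.
        HttpContext.Current.Items["nh.session"] = SessionFactory.OpenSession();
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var session = HttpContext.Current.Items["nh.session"] as ISession;
        if (session != null)
            session.Dispose();
    }
}
```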
There are some ready-to-use modules to handle these patterns at http://unhaddins.googlecode.com/svn/trunk/uNhAddIns/uNhAddIns.Web/ (you can check the rest of uNhAddIns too)
We currently use session-per-request; however, I have come across issues with this approach in some cases.
I don't think the answer is that clear-cut, and you should consider using one session per unit of work as well.
The automagic flushing of entities, in particular, is where session-per-request might bite you in the ass.
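As an illustration, one hedged way to tame auto-flushing is to move the session off its default FlushMode so writes happen only at commit (the sessionFactory parameter here is assumed to exist already):

```csharp
using NHibernate;

public static class FlushModeExample
{
    public static void DoWork(ISessionFactory sessionFactory)
    {
        using (ISession session = sessionFactory.OpenSession())
        {
            // The default is FlushMode.Auto, which may flush pending changes
            // before queries run. Commit scopes all writes to the commit point.
            session.FlushMode = FlushMode.Commit;

            using (ITransaction tx = session.BeginTransaction())
            {
                // ... load and modify entities here ...
                tx.Commit(); // changes are flushed here, and only here
            }
        }
    }
}
```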
Related
I'm not sure if my question is clear enough, but I will try to explain it as well as I can. I am currently exploring and playing with a microservices architecture to see how it works and to learn more. For the most part I understand how things work, what the role of the API Gateway is in this architecture, etc...
So I have one more theoretical question. For example, imagine there are 2 services: an event service (which manages possible events) and a ticket service which manages tickets related to a specific event (there could be many tickets). These 2 services really depend on each other, but they have separate databases and are completely isolated and loosely coupled, just as they should be in an "ideal" microservices environment.
Now imagine I want to fetch an event and all tickets related to that event and display them in a mobile application, a web SPA application, or whatever. Is calling multiple services/URLs to fetch the data and output it to the UI completely okay in this scenario? Or is there a better way to fetch and aggregate this data?
From reading different sources, calling one service from another service adds latency, makes the services depend on each other, means future changes in one service will break the other, etc., so it's not a great idea at all.
I'm sorry if I am repeating a question that was already asked somewhere (although I could not find it), but I need an opinion from someone who has met this question before and can explain the flow here in a better way.
Is calling multiple services / URLs to fetch data and output to UI completely okay in this scenario? Or is there a better way to fetch and aggregate this data.
Yes, it is OK to call multiple services from your UI and aggregate the data in your frontend code for your needs. In this case you would call 2 REST APIs to get the data from the ticket microservice and the event microservice.
Another option is to have a views/read-optimized microservice which aggregates data from both microservices and serves as a read-only microservice (a sketch of this option follows below). This of course involves some latency considerations and other trade-offs. This approach can be used, for example, if you have a requirement like a view that consists of multiple models (something like a denormalized view) which will be accessed a lot and/or needs some complex filter options as well. In this approach you would have a third microservice whose data is aggregated from your 2 microservices (tickets and events). This microservice would be optimized for reading and could also, if needed, use a different storage type (a document DB or similar). If you decided to do this, you would need only one API call to this microservice, and it would provide all your data.
Calling one microservice from another. In some cases you cannot really avoid this. Even though some sources online will tell you not to do it, sometimes it is inevitable. For your example I would not recommend this approach, as it would produce coupling and unnecessary latency which can be avoided with the other approaches.
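Here is a rough sketch of what option 2's read side could look like; the service URLs, routes, and DTO shapes are all assumptions for illustration, not part of the original services:

```csharp
// A read-only aggregation service that fans out to the two microservices
// in parallel and merges the results. All names and URLs are hypothetical.
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record EventDto(string Id, string Name);
public record TicketDto(string Id, string EventId, decimal Price);
public record EventWithTickets(EventDto Event, List<TicketDto> Tickets);

public class EventViewService
{
    private readonly HttpClient _http;
    public EventViewService(HttpClient http) { _http = http; }

    public async Task<EventWithTickets> GetEventWithTicketsAsync(string eventId)
    {
        // Fan out to both services concurrently, then merge into one payload
        // so the UI needs only a single API call.
        var eventTask  = _http.GetFromJsonAsync<EventDto>(
            "http://event-service/events/" + eventId);
        var ticketTask = _http.GetFromJsonAsync<List<TicketDto>>(
            "http://ticket-service/tickets?eventId=" + eventId);
        await Task.WhenAll(eventTask, ticketTask);
        return new EventWithTickets(eventTask.Result!, ticketTask.Result!);
    }
}
```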
Background info:
You can read this answer, which discusses whether it is OK to call one microservice from another microservice. For your particular case it is not a good option, but for some cases it might be, so read it for some clarification.
Summary:
I have worked with systems where we did all 3 of those things. It really depends on your business scenario and the needs of your application. Which approach to pick will depend on a couple of criteria, such as: usability from the UI, scaling (if you have high demand on the microservices, you could consider adding a third microservice to aggregate data from the tickets and events microservices), and domain coupling.
For your case I would suggest option 1, or option 2 if you have a highly demanding UI, from the user's perspective. For some cases option 1 is enough and a third microservice would be overkill, but sometimes it is an option as well.
In my experience with cloud-based services, primarily Microsoft Azure, the latency of one service calling another does indeed exist, but it can be relied upon to be minimal. This is especially true when compared to the unknown latency involved in the user's device making the call over whichever internet plan they happen to have.
There will always be a consuming client that is dependent on a service and its defined interface, whether it is the SPA app or another service. So in the scenario you described, something has to aggregate the payloads from both services.
Based on this, I have seen improved performance by using a service which handles client requests, aggregates results from n services, and responds accordingly. This does indeed result in dependencies, but as your services evolve, it is possible to have multiple versions of your services active simultaneously, allowing you to deprecate older versions at a time that is appropriate.
I hope that helps.
Optional Advice
You can maintain the read table (denormalized) inside whichever of the services suits best. Why? Because CQRS should be applied only where needed; it is best suited to big and complex applications. It introduces complexity into your system, and otherwise you gain less profit and more headache.
I have been looking for a good example of the WCF and NHibernate facilities working together (one session per web request, etc.), but all the tutorials I find are dated 2009 and older. I am afraid I may waste my time trying to implement all this when there are probably better ways to achieve it.
The other thing I noticed is that Rhino.Commons.NHRepository was popular three years ago, but I can't find anything related to this assembly more recent than that. Is there any reason for this?
Can anyone point me to good examples of how to implement WCF and NHibernate using facilities?
NHRepository? RIP?
Thanks
It seems to be dead. I would suggest using one of the popular IoC containers to manage the lifetimes of the session and session factory. Almost all modern containers have WCF integration, including a per-request lifetime.
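For example, here is a hedged sketch using Castle Windsor and its WCF integration facility (one container among several that can do this); double-check the facility and lifestyle names against the version you use:

```csharp
// Singleton session factory + per-WCF-operation session, registered in
// Castle Windsor. A sketch assuming the Castle.Facilities.WcfIntegration
// package is referenced.
using Castle.Facilities.WcfIntegration;
using Castle.MicroKernel.Registration;
using Castle.Windsor;
using NHibernate;
using NHibernate.Cfg;

public static class ContainerBootstrapper
{
    public static IWindsorContainer Bootstrap()
    {
        var container = new WindsorContainer();
        container.AddFacility<WcfFacility>();

        container.Register(
            // One factory for the whole application.
            Component.For<ISessionFactory>()
                .UsingFactoryMethod(() =>
                    new Configuration().Configure().BuildSessionFactory())
                .LifestyleSingleton(),
            // One session per WCF operation (the "per request" lifetime).
            Component.For<ISession>()
                .UsingFactoryMethod(k => k.Resolve<ISessionFactory>().OpenSession())
                .LifestylePerWcfOperation());

        return container;
    }
}
```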
Does anyone have any good sources of information on using NHibernate with SQL Azure and the implications of sharding (because of the 10 GB cap)? I know there are posts on the internet that reference a sharding project for NH, but they are from the 3rd quarter of '09, and I haven't found anything much more recent on Google.
Relatedly, does anyone have information about manually implementing sharding if the sharding project isn't viable to use yet? Would it just be as simple as creating a session factory for each shard and keeping a collection of factories? It seems like it would be problematic reproducing the ISession calls through each factory; I suppose it could be achieved by passing operations as Funcs that get invoked on the ISession from each factory, but that seems more like the wrong path to be going down.
I wrote a proof of concept about a month ago using NHibernate on SQL Azure with sharding. As you've pointed out, there are aspects that just do not feel right about it. Until the NH support has evolved, you may have to try a few things to find out what works best for you. I can tell you the general flow of how it worked for us.
We implemented a simple sharding strategy factory that provides strategies that decide which shard to place you in based on our needs. Your needs may vary here. The key is creating strategies that process, merge and order your query results. From there, session creation and usage is all the same as any other session usage, which is highly desirable.
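For illustration, here is a rough sketch of the shape this can take; the strategy, keys, and names are made up for the example and are not the sharding project's API:

```csharp
// One ISessionFactory per shard, with a strategy picking the factory
// before a session is opened. All names here are illustrative.
using System.Collections.Generic;
using NHibernate;

public class ShardedSessionSource
{
    private readonly IDictionary<string, ISessionFactory> _shards;

    public ShardedSessionSource(IDictionary<string, ISessionFactory> shards)
    {
        _shards = shards; // e.g. "shard-0" -> factory for that database
    }

    // A trivial strategy: route by customer id. Real strategies also have
    // to process, merge, and order results when a query spans shards.
    public ISession OpenSessionFor(int customerId)
    {
        string shardKey = "shard-" + (customerId % _shards.Count);
        return _shards[shardKey].OpenSession();
    }
}
```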
EDIT: I know this post by Ayende is a few months old, but it's exactly how we implemented it, and it works. The rumor is that better support in NHibernate is coming.
As a newbie to NHibernate, I am finding the creation of sessions to be a tad confusing.
Why doesn't NHibernate ship with something like what (I believe) Unit of Work does?
Or is there something built in, and UoW is more of an add-on/preference? (Context: ASP.NET MVC web application with SQL Server.)
Note: I am new to NHibernate and spoiled by how MS does things. I would love it if things that are generally required shipped with the product download.
The session is a kind of unit of work and also an identity map. You can create your own unit-of-work abstraction over it. Different applications need different kinds of unit of work, and different programmers like different implementations too.
You won't need more than 50 lines of code to create your own unit-of-work implementation to abstract session and transaction management; a sketch follows below.
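For instance, a minimal sketch of the kind of wrapper meant here (one possible shape, not a canonical NHibernate API):

```csharp
// A tiny unit-of-work wrapper over NHibernate's session and transaction.
using System;
using NHibernate;

public class UnitOfWork : IDisposable
{
    private readonly ISession _session;
    private readonly ITransaction _transaction;

    public UnitOfWork(ISessionFactory factory)
    {
        _session = factory.OpenSession();
        _transaction = _session.BeginTransaction();
    }

    public ISession Session
    {
        get { return _session; }
    }

    public void Commit()
    {
        _transaction.Commit();
    }

    public void Dispose()
    {
        // If Commit was never called, undo everything on the way out.
        if (_transaction.IsActive)
            _transaction.Rollback();
        _transaction.Dispose();
        _session.Dispose();
    }
}
```

Typical usage would be `using (var uow = new UnitOfWork(factory)) { /* work */ uow.Commit(); }`, with the Dispose path rolling back anything left uncommitted.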
In case you haven't found it yet... there is a Unit of Work component for web applications that is part of the NHibernate Contrib project, called NHibernate Burrow. It is reasonably flexible and supports short (session-per-request) and long (sessions spanning more than one request) conversations.
I have used Burrow with ASP.NET MVC in the past and it works fine, though I haven't tried the long conversation in MVC. In its current state, in order to use it, you will need to update the library references and compile the Burrow project. It's a little extra work, but it's worth it.
I agree with Paco, although in a web application this can be a little more work.
I think the real issue is that most of us want to isolate our upstream code from a direct dependency on NHibernate and therefore we don't want to work directly against the session.
Once you wrap your head around NHibernate, this abstraction becomes pretty simple, but it's a little harder to get your head around it at the same time you are trying to learn NHibernate for the first time.
I work for a large state government agency that is a tad behind the times. Our skill sets are outdated and budgetary freezes prevent any training or hiring of new employees/consultants (firing people is also impossible). Designing business objects, implementing design patterns, establishing code libraries and services, unit testing, source control, etc. are all things that you will not find being done here. We are as much of a 0 on the Joel Test as you can possibly get. The good news is that we can only go up from here!
We develop desktop CRUD applications (in C++, C#, or Java) that hit the Oracle database directly through an ODBC connection. We basically have GUIs littered with SQL statements and patchwork code. We have been told to move towards a service-oriented n-tier architecture to prevent direct access to the database and to remove the need for the Oracle Client on user machines.
Is WCF the path we should be headed down? We've done a few of the n-tier application walkthroughs (like this one) and they seem easy to implement, but we just don't know enough to understand if we are even considering the right technologies. Utilizing the .NET generated typed DataSets seems like a nice stopgap to save us month/years of work (as opposed to creating new business objects from the ground up for numerous projects). Is this canned approach viable for a first step?
I recently started using WCF services for my Data Layer in some web applications and I must say, it's frustrating at the beginning (the first week or so), but it is totally worth it once the code is deployed.
You should first try it out with a small existing app, or maybe a proof of concept to make sure it will fit your needs.
From the description of the environment you are in, I'm sure you'll realize the benefit almost immediately.
The last company I worked for chose WCF for almost the exact reason you describe above. There is lots of good documentation and there are many books for WCF, it's relatively easy to get working, and WCF supports a lot of configuration options.
There can be some headaches when you start trying to bend WCF to work in a way not specifically designed out of the box. These are generally configuration issues. But sites like this or IDesign can help you through those.
First of all, I would definitely not (sorry for the emphasis) worry about the time you'll save using typed DataSets versus creating your own business objects. That is usually not where you will spend most of your development time. I prefer using business objects myself.
In your situation I would want to implement a proof of concept first, one that addresses all the issues you may encounter. This proof of concept should implement an entire use case, starting on the client, retrieving data from the database, and returning it to the client. You should feel confident about your implementation before continuing.
Then there is the choice of technology. WCF is definitely a good choice for communication between your client applications and the service layer. I suppose that both your clients and your service layer will become C# applications? That makes things a lot easier, since interoperability between different platforms (Java/C#, for example) is still not trivial, although it should work in most cases.
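To make the service-layer idea concrete, here is a minimal sketch of what a WCF contract for one of these CRUD screens might look like; the contract, DTO, and method names are illustrative, not prescribed:

```csharp
// The service layer owns the Oracle access; clients see only this contract.
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CustomerDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    IList<CustomerDto> GetCustomers();

    [OperationContract]
    void SaveCustomer(CustomerDto customer);
}
```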
Take a look at the Entity Framework (there are a couple of Oracle providers available for it already) in conjunction with .NET 3.5 SP1, which enables built-in WCF serialization of your EF-generated classes.
Here is a good blog to get started: http://blogs.msdn.com/dsimmons
CSLA might be a good fit for your N-Tier desktop apps. It supports WCF, has a large dev community, and is well documented. It is very object oriented.