How long can/should an NHibernate session be kept open?

I've created a Windows service that listens to an MSMQ queue. For each message I receive, some DB transactions need to be made. Later there may be as many as one message every second.
Currently the NHibernate session is kept open until the service is stopped manually.
Is it a good practice or should I close the session after each message?
Thanks in advance

An NHibernate session is meant to be relatively short-lived, so it's generally not a good idea to keep it active for a long period. The session caches entities, and as more entities are fetched, more and more data is cached if you don't manage the cache in some way. This leads to performance degradation.
The NHibernate docs describe ISession like this:
A single-threaded, short-lived object representing a conversation between the application and the persistent store. Wraps an ADO.NET connection. Factory for ITransaction. Holds a mandatory (first-level) cache of persistent objects, used when navigating the object graph or looking up objects by identifier.
I would suggest using session-per-conversation, i.e. if you have a few DB operations that "belong together", use the same session for all of them, and close the session once those operations are done.
So, using a new session for each message you process sounds like a good idea.

In contrast to the session factory (ISessionFactory), which is thread-safe, you should open and close the session (ISession) with every database transaction.
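To illustrate, here is a minimal sketch of that session-per-message pattern (MessageHandler, MyMessage, and the mapping step are placeholders, not from the question):

```csharp
using NHibernate;

// Placeholder message type, just to make the sketch compile.
public class MyMessage
{
    public int EntityId { get; set; }
    public string Payload { get; set; }
}

public class MessageHandler
{
    // ISessionFactory is thread-safe and expensive to build: create it once
    // for the lifetime of the service.
    private readonly ISessionFactory sessionFactory;

    public MessageHandler(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public void HandleMessage(MyMessage message)
    {
        // One new session and transaction per message; both are disposed
        // as soon as the message has been processed.
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            // ... map the message onto entities and save them, e.g.:
            // session.SaveOrUpdate(MapToEntity(message));
            tx.Commit();
        }
    }
}
```

The session factory is created once for the lifetime of the service; sessions themselves are cheap, so opening one per message costs very little.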

Related

ASP.NET Core 3 Session (state) concurrency and integrity

I have a page that issues multiple requests concurrently, so those requests are all within the very same session. For accessing the session I use IHttpContextAccessor everywhere.
My problem is that, regardless of the timing, some requests do not see session state that other requests have already set; instead they see some previous state (again: in terms of timing, the set operation has already happened).
As far as I know, each request gets its own copy of the state, which is written back (but "when", exactly?) to the common "one" state. If that write-back is delayed until the request has been served completely, then the scenario I am experiencing can easily happen: the second concurrent request within the session got its copy after the first request modified the state, but before the first request finished completely.
If all of the above is true, it means there is no way to maintain session integrity when requests within a session are served concurrently: the second request, not seeing the changes already made by the first, will write back something that is inconsistent with the first request's changes.
Am I missing something?
Is there any workaround? (with some cost of course)
First, you may know this already, but it bears pointing out, just in case: session state is specific to one client. What you're talking about here, then, is the same client throwing multiple concurrent requests at the server at the same time, each of which touches the same piece of session state. That, in general, seems like a bad design. If there's some actual application reason to have multiple concurrent requests from the same client, then what those requests do should be idempotent, or at least not step on each other's toes. If it's a situation where the client is just spamming the server, either out of impatience or malice, it's really not your concern whether their session state becomes corrupted as a result.
Second, because of the reasons outlined above, concurrency is not really a concern for sessions. There's no use case I can imagine where the client would need to send multiple simultaneous requests that each modify the same session key. If there is one, please elucidate by editing your question accordingly. Even then, I'd imagine it's something you likely shouldn't be persisting in the session in the first place.
That said, the session is thread-safe in the sense that multiple simultaneous writes/reads will not cause an exception, but no guarantee is or can be made about integrity. That's universal across all concurrency scenarios: it's on you, as the developer, to ensure data integrity, if that's a concern. You do so by designing a concurrency strategy. That could be anything from locks/semaphores that gate access, to compensating for things happening out of band. For example, with EF, you can employ concurrency tokens in your database tables to prevent one request overwriting another. The value of the token is modified with each successful update, and the application-known value is checked against the current database value before an update is made, to ensure the row has not been modified since the application read it. If it has, an exception is thrown, giving the application a chance to catch it and recover: by cancelling the update, by fetching the fresh data and modifying that, or by just pushing through an overwrite. You would need to come up with a similar strategy if the integrity of your session data is important.
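As a rough sketch of the concurrency-token idea mentioned above (the Order entity is invented; the [Timestamp] attribute works this way in both EF6 and EF Core):

```csharp
using System.ComponentModel.DataAnnotations;

// Invented example entity: the [Timestamp] column maps to a SQL Server
// rowversion, which the database changes automatically on every update.
public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }

    [Timestamp]
    public byte[] RowVersion { get; set; }
}

// On SaveChanges, EF adds the originally-read RowVersion to the UPDATE's
// WHERE clause. If another request changed the row in the meantime, zero
// rows match and EF throws DbUpdateConcurrencyException, which the caller
// can catch to reload fresh values, reapply the change, or force overwrite.
```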

In what scenarios is a reliable session recommended?

In a few words, if I am not wrong: a session is used when I want to ensure that messages are sent in order, and a reliable connection is needed in order to use sessions.
But my doubt is: what kind of applications need that? Mine is a simple application in which a client requests data from a service, and the service gets the data from the database and sends the results back to the client. The client can also request to add, modify, or delete data in the database. In this case, do I need a reliable connection and sessions or not?
Thanks.
A session presumes that you want to retain some data for a period of time. As far as a session is concerned, that period is the client's lifecycle: when the client opens its proxy, both the service instance and the session are created, and when the client closes the proxy, both terminate. There is an exception where closing the proxy does not take effect right away, which occurs when you invoke a one-way operation: the service keeps working for as long as the operation runs, despite having already been told to get rid of the instance.
Based on the information provided, I assume the best choice would be InstanceContextMode.PerCall. You do not store any data between calls, and every single call can be treated separately. Additionally, set ConcurrencyMode to Multiple so that calls can be dispatched concurrently.
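For illustration, a minimal sketch of that configuration (the service and contract names are invented):

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IDataService
{
    [OperationContract]
    string GetData(int id);
}

// PerCall: a fresh service instance per invocation, so no data survives
// between calls. Note that with PerCall each instance only ever serves a
// single call, so ConcurrencyMode.Multiple mainly signals that no
// instance-level locking is needed.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class DataService : IDataService
{
    public string GetData(int id)
    {
        // ... fetch the data from the database and return it
        return "data for " + id;
    }
}
```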
Personally, I find sessions useful with MSMQ, whenever I want a specific number of messages wrapped into a single queue message. If an error occurs, regardless of which message caused it, the whole queue message is rolled back.

Caching MessageSender and MessageReceiver

According to MSDN here we should cache the objects used to communicate with Service Bus. It doesn't, however, explain this in any more detail.
To be more specific: I create the MessagingFactory for a given connection string and cache it as long as possible. I use the factory to create the MessageReceiver and MessageSender instances for different queues and topics. Now my question is: should I also cache them?
I do not call Close on them.
Just to be clear, when we say cache here, what we mean is keep a reference to the object, not store it in a cache [like Redis]. The guidance from Microsoft is just pointing out that establishing a connection to Service Bus is an expensive operation compared to just sending/receiving messages, and there's no benefit to tearing down the connection and reestablishing it on every send/receive.
When I write code using these objects, I usually create a static property on a class and keep it in there, so the objects last for the lifetime of the app domain. In an ASP.NET application, if you don't like the static class approach, you could keep the Service Bus objects in the HttpContext.Application collection, for example, Application["ServiceBusReceiver"] = myServiceBusReceiver; and then you just keep pulling it out when you need it.
(And, yes, there are other ways to do "global" objects in ASP.NET... not looking to wade into that topic here. :-) )
This is (sort-of) the same idea as SQL connection pooling... once the connections are established, they're kept around and reused. Ultimately, it's not a functional difference, it's just a performance optimization that reduces the number of calls over the network.
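As a sketch of the keep-a-reference approach, assuming the older Microsoft.ServiceBus.Messaging SDK that the MSDN guidance refers to (the connection string and queue name are placeholders):

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

public static class ServiceBusClients
{
    // The factory (i.e. the connection) is created once per app domain.
    private static readonly MessagingFactory Factory =
        MessagingFactory.CreateFromConnectionString("<connection-string>");

    // Lazy<T> gives thread-safe, create-once semantics for each client.
    private static readonly Lazy<MessageSender> Sender =
        new Lazy<MessageSender>(() => Factory.CreateMessageSender("myqueue"));

    private static readonly Lazy<MessageReceiver> Receiver =
        new Lazy<MessageReceiver>(() => Factory.CreateMessageReceiver("myqueue"));

    public static MessageSender GetSender() { return Sender.Value; }
    public static MessageReceiver GetReceiver() { return Receiver.Value; }
}
```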
Hope that helps,
Scott

WCF Performance: Can I create a pool of my objects like ConnectionPooling does

I have a service that uses an object which is fairly expensive to create. I would like to improve the performance from call to call.
When I run a load test measuring how many invocations I can do per second, there is a massive performance difference between two situations.
Situation 1. I remove the expensive object: Invocations per sec ~= 130.
Situation 2. I use it as normal, with the object: rate is ~= 2 per sec.
I have a .NET WCF service hosted in IIS on a Windows Server 2008 machine.
I was wondering if there was a way I could create an object cache/pool and hand those objects to each invocation of the service.
Any thoughts/comments that may help with this situation?
You could run the WCF service in per-session mode and create the object using the singleton pattern; that way you only create the object once per session, as opposed to once per call.
You may also be able to cache the objects using Enterprise Library Caching.
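A minimal sketch of that idea, using Lazy&lt;T&gt; as the singleton (ExpensiveObject and the contract are placeholders):

```csharp
using System;
using System.ServiceModel;

// Placeholder for the expensive-to-create object from the question.
public class ExpensiveObject
{
    public ExpensiveObject() { /* ... costly construction ... */ }
    public string Compute() { return "result"; }
}

[ServiceContract]
public interface IMyService
{
    [OperationContract]
    string DoWork();
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class MyService : IMyService
{
    // Lazy<T> is thread-safe by default: the object is built exactly once
    // per app domain, on first use, and then shared by every call.
    private static readonly Lazy<ExpensiveObject> Expensive =
        new Lazy<ExpensiveObject>(() => new ExpensiveObject());

    public string DoWork()
    {
        return Expensive.Value.Compute();
    }
}
```

Note that with the static Lazy&lt;T&gt; the object actually outlives any one session and is shared across sessions; drop the static if you really want one instance per session.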
If the expensive part is building the state of the object, and you only want to limit the number of times you create that object, I suggest using a Durable Service.
A durable WCF component persists its state between calls and between clients. Each time you call a method, it writes its state to a persistence store (the default is a SQL Server database). The catch is that you have to pass a context token around among whoever is going to call your durable component. This token could be persisted in a file, a database, or wherever.
This would allow you to make a call against the component; it creates its state once, and then you can keep calling it from other clients, as long as they have access to its context token.
Nothing hangs around in memory since the object goes away each time your client closes, but the state persists.
Here's a tutorial.
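As a very rough sketch of what a durable service looks like (this uses the .NET 3.5-era System.WorkflowServices API; the SQL persistence provider and the required context-enabled binding are configured elsewhere and omitted here, and all names are invented):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description; // DurableService/DurableOperation

// The class must be serializable so its fields can be written to the store.
[Serializable]
[ServiceContract]
[DurableService]
public class ExpensiveStateService
{
    private string expensiveState; // persisted between calls, even across clients

    [OperationContract]
    [DurableOperation(CanCreateInstance = true)]
    public string GetState()
    {
        if (expensiveState == null)
            expensiveState = BuildExpensiveState(); // runs once per durable instance
        return expensiveState;
    }

    [OperationContract]
    [DurableOperation(CompletesInstance = true)]
    public void Finish()
    {
        // Completing the instance removes the persisted state from the store.
    }

    private static string BuildExpensiveState()
    {
        return "expensive-to-build state"; // stand-in for the real work
    }
}
```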

Best approach for WCF client

I have a client application that uses a WCF service to insert some data into a backend database. The client application calls the service on a per-event basis (which could be every hour or every second).
I'm wondering what's the best way of calling that service.
Should I create a communication channel and keep it open all the time, or should I close the channel after each call and create it again?
The first question is whether your server needs to maintain any state about the client directly (i.e. are you doing session-like transactions?) If you are, you will need to be able to manage how the server holds the information between communications.
My initial feeling is that if there is no need to leave a connection open, then close it after each call and create a new connection on demand. This will avoid issues where a connection is placed into a faulted state between calls. The overhead of creating and destroying connections is minimal, and it will (probably) save you a lot of time in debugging when something goes wrong.
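A minimal sketch of the close-per-call approach (contract and endpoint names are invented). Note that the ChannelFactory itself is expensive and worth keeping for the life of the app; the individual channels are cheap:

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IInsertService
{
    [OperationContract]
    void Insert(string payload);
}

public static class InsertServiceCaller
{
    // Created once; reads the endpoint configuration from app.config.
    private static readonly ChannelFactory<IInsertService> Factory =
        new ChannelFactory<IInsertService>("InsertServiceEndpoint");

    public static void Insert(string payload)
    {
        IInsertService channel = Factory.CreateChannel();
        try
        {
            channel.Insert(payload);
            ((IClientChannel)channel).Close();
        }
        catch
        {
            // Close() throws on a faulted channel; Abort() tears it down safely.
            ((IClientChannel)channel).Abort();
            throw;
        }
    }
}
```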
I would think you probably want to implement a keep-alive pattern, with a configurable duration that tells your underlying mechanism to close the connection once there has been zero communication activity for longer than the keep-alive duration.