Is there a way to see all open NHibernate sessions in the application?
Why?
Because I am getting this error in my MVC application:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
I have followed this configuration guide:
http://www.kevinwilliampang.com/2010/04/06/setting-up-asp-net-mvc-with-fluent-nhibernate-and-structuremap/
Please help.
I would think the easiest way would be to hook up NHProf to your application. It will report all open sessions via its UI.
See the screenshots page for how the sessions will show up in this tool.
You can see open sessions in the Sessions section of NHProf. Closed sessions have a duration displayed.
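If attaching a profiler isn't an option, you can also keep a rough count yourself. A minimal sketch, assuming every session is opened and closed through one helper (SessionCounter and its methods are hypothetical names, not an NHibernate API):

using System.Threading;
using NHibernate;

// Hypothetical helper: keeps a live count of open sessions so you can log or
// inspect it when the connection pool starts running dry.
public static class SessionCounter
{
    private static int _open;

    public static int OpenCount { get { return _open; } }

    public static ISession OpenTracked(ISessionFactory factory)
    {
        ISession session = factory.OpenSession();
        Interlocked.Increment(ref _open);
        return session;
    }

    public static void CloseTracked(ISession session)
    {
        session.Dispose();
        Interlocked.Decrement(ref _open);
    }
}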
An ASP.NET Core application with React+Redux on the client side, using SignalR.
Getting the following error on the client side:
Unhandled Rejection (Error): WebSocket closed with status code: 1000 ().
Seems like this is a "normal closure", but there's no code to close the connection.
The application sends small images at 60 FPS per viewport, in several viewports. This utilizes the JS thread almost completely, to the extent that I'd assume it may prevent SignalR from maintaining its keep-alive.
I tried setting the server-side SignalR timeouts to their maximum values; that did not prevent the issue from recurring.
What could cause the SignalR socket to close without the close being invoked and without an error message?
I'm guessing the browser or the server could close the connection out of self-preservation or because a configured limit was reached.
Most likely: the default maximum size of a hub message (MaximumReceiveMessageSize) is 32 KB, and an image could easily surpass this. You could turn on EnableDetailedErrors to see if there's more info.
If the browser is unable to send quickly enough, it will need to buffer and this buffer can't grow infinitely. You could also run into some sort of anti-malware protection based either on hogging the JS thread (maybe use workers?) or on using too much network I/O. The server can also close for similar reasons.
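Both settings live on the hub options. A minimal sketch of turning them on in Startup.ConfigureServices, assuming an ASP.NET Core SignalR version where HubOptions exposes MaximumReceiveMessageSize (3.0+); the 256 KB value is just an illustrative limit:

// In Startup.ConfigureServices
services.AddSignalR(options =>
{
    // surface server-side errors to the client instead of a bare close
    options.EnableDetailedErrors = true;

    // raise the 32 KB default if individual messages can exceed it
    options.MaximumReceiveMessageSize = 256 * 1024;
});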
As for why the error message is vague: The browser literally can't give you too much feedback about this - see the warning text before 9.3.4. Edit: this is wrong and only applies to close code 1006.
To solve the issue, I turned on the logs as Jesper suggested.
The issue was that I was cancelling a CancellationToken passed to the SendAsync method. For some odd reason cancelling the send closes the socket (I'd expect it to only cancel the specific message, not close the connection).
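For reference, a hypothetical reconstruction of the pattern (assuming the server-side IClientProxy.SendAsync overload that takes a CancellationToken; hubContext, connectionId and frameBytes are illustrative names):

var cts = new CancellationTokenSource();

// push a frame to the client, passing the token along
var sendTask = hubContext.Clients.Client(connectionId)
    .SendAsync("frame", frameBytes, cts.Token);

// cancelling while the send is still in flight closed the whole WebSocket
// (close code 1000) rather than just abandoning that one message
cts.Cancel();
// (awaiting sendTask afterwards may surface the cancellation as an exception)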
I'm working on an ASP.NET MVC application which uses EF 6.x to work with my Azure SQL Database. Recently, with increased load, the app started getting into a state where it's unable to communicate with the SQL server anymore. I can see that there are 100 active connections to my database using exec sp_who, and no new connection can be created; each attempt fails with the following error:
System.Data.Entity.Core.EntityException: The underlying provider failed on Open. ---> System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Most of the time the app works with an average active connection count of 10 to 20, and load doesn't change this number: even when load is high it stays at 10-20. But in certain situations it can jump to 100 in less than a minute, without any ramp-up, and that puts the app in a state where all requests fail. All 100 connections are in a sleeping state, awaiting a command.
The good part is that I found a workaround which helps me mitigate the issue: clearing the connection pool from the client side. I'm using SqlConnection.ClearAllPools(); it instantly closes all the connections, and sp_who shows my regular 10-20 connections after that.
The bad part is that I still don't know the root cause.
Just to clarify: the app load is about 200-300 concurrent users, which generate around 1000 requests per minute.
Following @DavidBrowne's great suggestion to track leaked connections with a simple pattern, I was able to find connections leaking while configuring the Owin engine:
private void ConfigureOAuthTokenGeneration(IAppBuilder app)
{
    // here in the Create method I'm also creating a connection leak tracker
    app.CreatePerOwinContext(() => MyCoolDb.Create());
    ...
}
Basically, with every request Owin creates a connection and never lets it go, so when the Web API load increases I run into trouble.
Could this be the real cause, and is Owin smart enough to lazily create a connection only when needed (using the factory provided) and release it after it has been used?
It's very unlikely that this is caused by anything other than your application code leaking connections.
Here's a helper library you can use to track when a connection is leaked, and report the call site that initially opened the connection.
http://ssbwcf.codeplex.com/SourceControl/latest#SsbTransportChannel/SqlConnectionLifetimeTracker.cs
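The core idea is small enough to sketch inline. This is not the linked library's code, just an illustrative version; TrackedMyCoolDb is a hypothetical subclass of the question's MyCoolDb context (assumed to be an EF6 DbContext):

using System;
using System.Diagnostics;

// Hypothetical leak tracker: record where each context was created, and
// complain from the finalizer if Dispose was never called.
public class TrackedMyCoolDb : MyCoolDb
{
    private readonly string _createdAt = Environment.StackTrace;
    private bool _disposed;

    protected override void Dispose(bool disposing)
    {
        _disposed = true;
        base.Dispose(disposing);
    }

    // If the finalizer runs without Dispose having been called, the context
    // (and the pooled connection it holds) leaked; _createdAt points at the
    // call site responsible.
    ~TrackedMyCoolDb()
    {
        if (!_disposed)
            Trace.TraceWarning("Leaked DbContext created at: " + _createdAt);
    }
}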
Whenever I try to deploy my application I keep getting this exception in the logs:
MQJMSRA_LB4001: start:Aborted:Unable to ping Broker within 60000 millis
I couldn't understand why this was happening so I checked domains/domain1/imq/logs/log.txt and this is what I found:
No threads are available to process a new connection on service admin. 10 threads out of a maximum of 10 threads are already in use by other connections. A minimum of 2 threads must be available to process the connection. Please either limit the # of connections or increase the imq.<service>.max_threads property. Closing the new connection. ". Count: service=5 broker=5
Can someone help me understand how to increase this count?
I would really appreciate your help on this.
You should change the broker's connection properties (max_threads), as the error message suggests. The broker configuration file is domains\<domain>\imq\instances\imqbroker\props\config.properties.
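For example (a sketch: the property names come straight from the error message, the values are only illustrative, and <domain> stands for your domain directory):

# domains/<domain>/imq/instances/imqbroker/props/config.properties
# raise the thread limit for the service named in the log (here: admin)
imq.admin.max_threads=20
# the jms service has the same property if it is the one running dry
imq.jms.max_threads=1000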
This depends on whether you are using OpenMQ in embedded mode or not. If you are using embedded MQ, look in the Thread Pools section of your configuration in the admin console; one of the pools will have max threads set to 10, and that is the one to increase.
It's hard to be sure since you haven't given any other details from the logs, but that is very likely what you need to change.
Hello. A Windows Phone application needs to connect to a server and get messages from it. This is done using WCF and long polling on the server; the timeout defined on the server is 3 minutes. The call from Windows Phone is made using HttpWebRequest.
The problem is that Windows Phone devices have a timeout of 60 seconds for GET requests (the emulator has a different value, greater than 3 minutes).
Currently I can't decrease the server timeout, and issuing a new GET request after the 60 seconds doesn't get any more messages.
Does anyone have an idea?
Thanks
I don't think leaving a connection open is a good idea on mobile devices. I'm assuming that's what you're doing. In my app, I would just poll whenever needed by creating a new HttpWebRequest. But it made sense to do this in my app, because I would be updating train arrival status every 40 seconds.
If you're trying to pull data on a given schedule, put a timer in and just call the webserver every 3 minutes or whatever the requirement is.
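A minimal sketch of that approach (the endpoint URL is a placeholder and the interval is whatever your requirement is):

// Poll the server on a timer instead of holding a long-running GET open
// past the device's 60-second limit.
var timer = new DispatcherTimer { Interval = TimeSpan.FromMinutes(3) };
timer.Tick += (s, e) =>
{
    var request = (HttpWebRequest)WebRequest.Create(
        new Uri("http://example.com/messages")); // placeholder endpoint
    request.BeginGetResponse(ar =>
    {
        using (var response = (HttpWebResponse)request.EndGetResponse(ar))
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string body = reader.ReadToEnd();
            // hand the new messages off to the UI thread here
        }
    }, null);
};
timer.Start();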
If you want to be able to check things (when the app is closed) or if there's rarely fresh data on the server, then you'd need to implement a Push mechanism.
Update: Here's a good article on dealing with the timeout issue - http://blog.xyzzer.me/2011/03/10/real-time-client-server-communication-on-windows-phone-with-long-polling/
Update 2: What if you arranged it so that you have cascading connections? What I mean is: since you can't go beyond 60 seconds per connection, you could write a class that houses two connections and, several seconds before one of them is about to time out, start opening the other. You can pick the timing so that there's at most 5 seconds of overlap between them. This way you could have your always-open connection.
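A rough sketch of that idea (again with a placeholder endpoint; the 55-second trigger leaves roughly 5 seconds of overlap before the 60-second device timeout, and error handling is kept minimal):

// Keep a long poll open at all times: shortly before the current request
// would hit the device's 60-second limit, open its replacement.
void StartLongPoll()
{
    var request = (HttpWebRequest)WebRequest.Create(
        new Uri("http://example.com/poll")); // placeholder endpoint
    request.BeginGetResponse(ar =>
    {
        try
        {
            using (var response = request.EndGetResponse(ar))
            {
                // handle whatever this poll delivered
            }
        }
        catch (WebException)
        {
            // the 60-second device timeout, or the server ended the poll
        }
    }, null);

    // schedule the overlapping replacement connection
    var overlap = new DispatcherTimer { Interval = TimeSpan.FromSeconds(55) };
    overlap.Tick += (s, e) =>
    {
        overlap.Stop();
        StartLongPoll();
    };
    overlap.Start();
}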
Also see what these guys have done with the GChat app, they have their source code available at this link. This may provide a more proper design.
I'd like to host a WCF web service in IIS. The service should keep a certain set of data all the time, it must never be lost.
My colleague told me this is impossible because IIS closes down the service after a certain time (I assume without any activity). Is that true? How do I prevent that behavior?
In case it matters, both IIS 6 and 7 are available.
By default, IIS recycles the worker process after a certain period of inactivity (20 minutes, if I recall correctly). This causes your data to be lost.
You can turn off this behavior in the Properties page of the ApplicationPool under which your app is running.
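On IIS 7 the same thing can be done from the command line with appcmd; a sketch (the pool name is a placeholder, and setting the timeout to zero disables the idle shutdown):

rem disable the idle timeout for the pool
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
rem the regular recycle interval can be disabled the same way
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /recycling.periodicRestart.time:00:00:00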
EDIT: having said that, if it is really important that this data is never lost, I would consider storing it in a database or some other form of storage.
My colleague told me this is impossible because IIS closes down the service after a certain time (I assume without any activity). Is that true? How do I prevent that behavior?
This is true, but you can get around it by using an out of process state server.
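For ASP.NET session state specifically that's a web.config switch; a minimal sketch (the state server address and timeout are placeholders):

<system.web>
  <!-- keep session data out of the worker process so an app pool recycle doesn't wipe it -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=localhost:42424"
                timeout="20" />
</system.web>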
Here are three links describing session state and how to set it up in IIS:
http://www.microsoft.com/technet/prodtechnol/WindowsServer2003/Library/IIS/0d9dc063-dc1a-46be-8e84-f05dbb402221.mspx?mfr=true
http://www.eggheadcafe.com/articles/20021016.asp
http://msdn.microsoft.com/en-us/library/ms178586.aspx