Allow 1000+ concurrent users on IIS 8 for ASP.NET MVC web application - asp.net-mvc-4

We built an ASP.NET MVC4 application and deployed it on IIS 8.5.
We updated the application pool setting QueueLength = 5000 and made the same change in the framework's aspnet.config file (C:\Windows\Microsoft.NET\Framework64\v4.0.30319).
Still, at most 100 users are served at a time and the rest are queued.
The server configuration is a 4-core processor, 8 GB RAM, and a 64-bit OS.
Any help fixing this would be appreciated; many thanks in advance.
Images of all the configuration settings are attached.
We want to allow the maximum number of users to log in concurrently and be supported.

I suggest running the command below to modify appConcurrentRequestLimit in the serverRuntime section of the applicationhost.config file:
c:\windows\system32\inetsrv\appcmd.exe set config /section:serverRuntime /appConcurrentRequestLimit:100000
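If the command applies cleanly, the resulting section in applicationhost.config should look roughly like this (100000 is just the value from the example command above; this is a sketch of the expected output, not a guaranteed rendering):

```xml
<system.webServer>
  <!-- raises the IIS concurrent-request limit from its default -->
  <serverRuntime appConcurrentRequestLimit="100000" />
</system.webServer>
```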
However, I would like a recommendation regarding two options we have: (1) upgrade the existing server from 4 cores / 8 GB RAM to 6 cores / 16 GB RAM, or (2) use separate servers, one for IIS and one for SQL Server, each with the same configuration (4 cores / 8 GB RAM). Which option would be preferable?
In my opinion, this depends on your application's performance. You should analyze SQL Server's performance: if handling the SQL queries takes a lot of the server's resources, or something else causes IIS to take a long time to respond, I suggest keeping separate servers.
If SQL Server takes only a little of the server's resources, I suggest using just one server.

Related

Maximum concurrent requests in SignalR self-hosted in Kestrel

I've encountered a strange problem with an application I've developed. The application is a Windows service hosting ASP.NET Core 2.0 running on Kestrel. It receives requests through an IIS site acting as a proxy.
In this application I also use SignalR 2.2.2, integrated using Microsoft.AspNetCore.Owin. All worked well until I noticed that the application was not responding to requests.
Other applications on the same machine and using the same IIS server as proxy were working fine. Restarting the application pool serving the site solved the problem temporarily.
The problem resurfaced, and digging through monitoring information the application seems to hang when there are 400 SignalR SSE connections on the same machine. This seems plausible, as I've found that by default OWIN limits the number of concurrent requests to 100 * number of CPUs. (Note that a site on the same machine serves 5000 requests per minute without breaking a sweat, but those are not long-lived requests like the SignalR ones.)
The problem is that I can't find the same option when hosting OWIN inside ASP.NET Core. Does anyone know whether this could be the cause, and what the correct setting is?
EDIT: I'm fairly certain the issue is caused by the number of SignalR connections opened concurrently, because disabling SignalR in the JavaScript made the problem vanish.
2nd EDIT: SignalR does not seem to be the culprit; load testing the site with crank, both in test and in production, worked up to 5000 concurrent connections, which is the default IIS limit and is fine by me.
After some trial and error I was able to identify and correct the problem, but it was no easy task, so I'm leaving this answer behind in case someone else stumbles upon the same problem.
Disabling SignalR did not solve the problem, but it made it appear less often.
Thanks to the monitoring in place on the server and IIS, I observed that the problem appeared when the number of connections to the site started growing rapidly. This system primarily makes requests to other services, so it has neither a database nor expensive computations.
Examining the code I've found that there were three problems:
a new HttpClient was created for every request, which can exhaust sockets, since they are not reused between requests
by default there is a maximum number of concurrent connections from HttpClient to a single domain, and this limit defaults to 2 (!!!)
the code waited synchronously on every web request to another system (the program was ported from an MVC4 site, which never showed this problem). This worked fine in MVC, but ASP.NET Core is very sensitive to it: synchronous waits rapidly exhaust all available threads, and because the thread pool starts with only as many threads as there are cores, they run out quickly and all requests end up waiting. The limit can be raised as a temporary stopgap with ThreadPool.SetMaxThreads(Int32, Int32), but the only real solution is to make all the calls asynchronous.
Once all calls were made asynchronous, the problem never returned. Basically, the problem was thread-pool starvation and ASP.NET Core's sensitivity to it compared with MVC. Here you can find a nice explanation and a detection method using PerfView.
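The failure mode described above can be sketched outside .NET as well. Below is a minimal Python illustration (not the poster's code): a tiny thread pool stands in for the starved CLR pool, `time.sleep` for a synchronous web request, and `asyncio.sleep` for an awaited one.

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def blocking_io():
    # Stand-in for a synchronous call to another system: the thread is
    # held for the whole duration of the "request".
    time.sleep(0.2)

async def async_io():
    # Stand-in for an awaited call: no thread is held while waiting.
    await asyncio.sleep(0.2)

def handle_sync(n, workers):
    # n "requests" that each block a pool thread, with only `workers`
    # threads available: requests queue up behind one another, just like
    # a starved thread pool.
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(blocking_io) for _ in range(n)]
        for f in futures:
            f.result()
    return time.monotonic() - start

async def handle_async(n):
    # The same n "requests" awaited concurrently: total time is roughly
    # the duration of a single request.
    start = time.monotonic()
    await asyncio.gather(*(async_io() for _ in range(n)))
    return time.monotonic() - start

sync_time = handle_sync(8, workers=2)      # 8 requests, 2 threads
async_time = asyncio.run(handle_async(8))  # 8 requests, no blocked threads
print(f"sync: {sync_time:.2f}s, async: {async_time:.2f}s")
```

With 8 requests and only 2 threads, the blocking version takes about four times as long as a single request, while the async version finishes in roughly the time of one.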
This could be the issue, but it's unlikely. When hosting on .NET Core you're probably using Kestrel as the web server; to change limits such as the number of concurrent connections, you can use the KestrelServerLimits class, as described in this Microsoft article.
KestrelServerLimits should not be causing you any problems since the default value for ConcurrentConnections is unlimited.
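For reference, if you ever do need to cap it: in recent ASP.NET Core versions Kestrel's limits can be bound from the "Kestrel" configuration section in appsettings.json (in 2.0, the version in the question, you would instead set options.Limits in code when calling UseKestrel). A sketch, with an illustrative value:

```json
{
  "Kestrel": {
    "Limits": {
      "MaxConcurrentConnections": 5000
    }
  }
}
```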

WebLogic Performance Tuning

We have a problem with WebLogic 10.3.2. We installed a standard domain with default parameters. The domain has only one managed server, which runs a single web application.
After installation we faced performance problems: sometimes a user waits 1-2 minutes for the application to respond (for example, the user clicks a button and it takes 1-2 minutes to refresh the GUI, even though it is not a complicated task).
To overcome these performance problems we defined parameters like:
configuraion->server start->arguments
-Xms4g -Xmx6g -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=500
And also we change the datasource connection pool parameters of the application in the weblogic side as below.
Initial Capacity:50
Maximum Capacity:250
Capacity Increment: 10
Statement Cache Type: LRU
Statement Cache Size: 50
We run WebLogic on servers with 32 GB RAM and 16 CPUs; 25% of the machine's resources are dedicated to WebLogic. But we still have performance problems.
Our target is to serve 300-400 concurrent users while avoiding the 1-2 minute wait for each application request.
Could defining a work manager solve the performance issue?
Is my datasource or managed-bean definition incorrect?
Can anyone help me?
Thanks for your replies.

Why doesn't my azure hosted WCF service scale when I add more machines?

We have a WCF service which we are hosting on Azure. It takes some XML and processes it in memory (no external calls, no DB, etc.; it takes about 150 ms) and returns some XML.
We have been load testing it, and when we run it on 1-, 2-, and 4-core machines we can max out the processors and reach a maximum of around 40 calls per second of throughput (on the 4-core machine). However, when we switch to an 8-core machine, or to two 4-core machines, we still only get around 40 calls per second.
Why might I not be able to get more throughput when I scale up the number of machines doing the processing? I would expect adding more machines would increase my throughput fairly linearly, but it doesn't. Why not?
Not sure if Azure has specific throttling, but the .NET Framework has a limit on the number of outgoing connections to the same address that can be active at a time. The MSDN article Improving Web Services Performance mentions that the default value for this is 2.
Configure The maxconnection Attribute
The maxconnection attribute in Machine.config limits the number of concurrent outbound calls.
Note This setting does not apply to local requests (requests that originate from ASP.NET applications on the same server as the Web service). The setting applies to outbound connections from the current computer, for example, to ASP.NET applications and Web services calling other remote Web services.
The default setting for maxconnection is two per connection group. For desktop applications that call Web services, two connections may be sufficient. For ASP.NET applications that call Web services, two is generally not enough. Change the maxconnection attribute from the default of 2 to (12 times the number of CPUs) as a starting point.
<connectionManagement>
<add address="*" maxconnection="12"/>
</connectionManagement>
Note that 12 connections per CPU is an arbitrary number, but empirical evidence has shown that it is optimal for a variety of scenarios when you also limit ASP.NET to 12 concurrent requests (see the "Threading" section later in this chapter). However, you should validate the appropriate number of connections for your situation.
These limits are in place to prevent a single user from monopolizing all the resources on a remote server (a DoS attack). Since this is a service running in Azure, I would guess they throttle on their end as well, to prevent a user from consuming all of their incoming connections from a single IP.
My next step would be to check whether there is a concurrent-connection limit for Azure web roles (this thread suggests there is, and that it's configurable) and, if so, increase it. Otherwise, I would try performing the load test from multiple sources and see if you still hit the same limit.

Load Balanced Deployments

I have an application that is load-balanced across two web servers (soon to be three), and deployments are a real pain. First I have to do the database side, but that breaks the production code that is running; if I do the code first, the database side isn't ready, and so on.
What I'm curious about is how everyone here deploys to a load-balanced cluster of X servers. Since publishing the code from test to prod takes roughly 10 minutes per server (multiple services and multiple sites), I'm hoping someone has some insight into the best practice.
If this was the wrong site to ask (meta definitely didn't apply; I wasn't sure if Server Fault did, since I'm a dev doing the deployment), I'm willing to re-ask elsewhere.
I use NAnt scripts and PsExec to execute them.
Basically, the farm has a master server that copies the app and DB scripts locally and then executes a deployment script on each server in the farm. That script copies the code locally, modifies it if needed, takes the app offline, deploys the code, and brings the app back online.
Usually the app is off for about 20 seconds (5 nodes).
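The per-node step described above can be sketched roughly as follows. This is a hypothetical illustration, not the poster's NAnt script; the paths are made up, and it relies on the ASP.NET convention that the app is taken offline while an app_offline.htm file exists in the site root.

```python
import shutil
from pathlib import Path

def deploy_node(build_dir: Path, site_dir: Path) -> None:
    """Take the app offline, copy the new build over it, bring it back online."""
    offline = site_dir / "app_offline.htm"
    # ASP.NET serves this file (and unloads the app) while it exists.
    offline.write_text("Deployment in progress...")
    try:
        # Merge the new build into the site directory in place.
        shutil.copytree(build_dir, site_dir, dirs_exist_ok=True)
    finally:
        # Bring the app back online even if the copy failed.
        offline.unlink(missing_ok=True)
```

Running this one node at a time (while the load balancer still has the other nodes in rotation) is what keeps the total offline window per node short.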
Also, I haven't tried it but I hear a lot about MSDeploy.
Hope this helps
Yeah, if you want to do this with no downtime, you should look into HA (high availability) techniques. Check out the book by Paul Bertucci; I think it's called SQL Server High Availability or some such.
Otherwise, put up your "maintenance" page, take all your app servers down, do the DB and one app server first, then go live and do the other two offline.

Synchronizing Lucene indexes across 2 application servers

I have an ASP.NET web application hosted on a web server (IIS 7). It uses Lucene for its search functionality.
Lucene search requests are served by .NET WCF services sitting on two application servers (IIS 7). The two application servers are load-balanced using NetScaler.
Both servers host a .NET Windows service which updates the search indexes on its respective server nightly.
I need to synchronize the search indexes on these two servers so that at any point in time both servers have up-to-date indexes.
I was wondering what the best architecture/design strategy would be, given that either of the two application servers could be serving a search request depending on its availability.
Any inputs please?
Thanks for reading!
Basically you need two identical copies of the same Lucene index - one for each IIS server.
I believe the simplest approach is to build the updated index on one machine, optimize it, and then copy it to the other machine. On Linux I would use rsync, but I don't know the Windows equivalents; see Jeff Atwood's ideas for Windows rsync alternatives.
Alternatively, you could issue the same index-update commands to both Lucene indexes and verify they were processed properly. This is technically harder and only useful when you have more frequent updates. See Scaling Lucene and Solr for a broader discussion of distributed Lucene indexes.
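The copy approach from the first suggestion can be sketched as a copy-then-swap, so the live index stays readable by searchers during the transfer. This is a hypothetical sketch, not a tested deployment tool; the directory names are made up, and on Windows the destination would typically be a UNC share on the peer server.

```python
import shutil
from pathlib import Path

def publish_index(local_index: Path, remote_root: Path) -> None:
    """Copy a freshly built index next to the live one, then swap directories.

    Copying into a staging directory first keeps the live index readable
    for the whole transfer; only the final rename is disruptive.
    """
    staging = remote_root / "index_staging"
    live = remote_root / "index"
    backup = remote_root / "index_old"

    if staging.exists():
        shutil.rmtree(staging)      # clear any half-finished transfer
    shutil.copytree(local_index, staging)

    if backup.exists():
        shutil.rmtree(backup)
    if live.exists():
        live.rename(backup)         # keep one generation as a rollback
    staging.rename(live)
```

After the swap, the WCF services on the receiving server would still need to reopen their IndexSearcher/IndexReader to pick up the new index.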