How to test the maximum number of simultaneous connections of browsers

I am trying to change the number of simultaneous connections of Edge behind a proxy server. The default number of connections is 32, and I want to change it to 12 (same as the default number in Internet Explorer).
I have found that I can use the MaxConnectionsPerProxy browser policy to restrict the number of connections, but I could not find a way to confirm the actual effect of the policy.
Is there any way to test the effect of the MaxConnectionsPerProxy policy, ideally without having to set up a web server myself? For example, is there a service that tests how many simultaneous connections a web browser opens?
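If no ready-made service turns up, one low-effort fallback is a tiny local page whose requests deliberately hang, so the browser's DevTools network panel shows how many of them actually run in parallel. Below is a minimal Python sketch; the port, request count and sleep time are arbitrary choices. Note that to exercise MaxConnectionsPerProxy you have to reach this server through the proxy, so make sure its address is not on the proxy bypass list.

import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Page that fires 40 requests at once; each /slow response hangs for 10 seconds,
# so the number of requests "in flight" in DevTools reflects the effective connection limit.
PAGE = b"<script>for (let i = 0; i < 40; i++) fetch('/slow?i=' + i);</script>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/slow"):
            time.sleep(10)          # hold the connection open
            body = b"ok"
        else:
            body = PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 8080), Handler).serve_forever()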

Related

IIS 10 ARR LoadBalancer Working more like Redundent Web Servers

We have configured a new web farm using IIS 10, with 3 hosts serving the web traffic and a load-balancing IIS ARR 3.0 server sitting in front to balance incoming requests between all the nodes. During initial testing (basic HTML pages) the round-robin setup (33.33% distribution between each node) was working well, but we had to enable server/client affinity so that our applications kept a consistent connection between the client session and the application.
Since then, we are finding that all traffic to these applications, originating from different machines on different networks, is being forwarded to the same application server. If you take that server offline, the application seamlessly starts running on the next server in the list (the client obviously must sign in again). Whilst one server is fine for now to run the two applications we have, once we ramp up our migration and have all 140 of our applications running, I don't think one server will be too happy with the load.
ADDITIONAL INFORMATION
Load balancers / ARR servers: LB-01 (LB-02 is a duplicated server for redundancy). Default ARR URL Rewrite rule with a "Route to Server Farm" action [image of the LB/ARR URL Rewrite rule]. Server affinity enabled, client affinity enabled with "use hostname" selected; no advanced settings, no routing rules. ARR default proxy settings [image of the proxy settings].
Web/application servers: WEB-01, WEB-02, WEB-03. File system shared using DFS; all running on shared configs.
The applications would be as follows:
https://www.domainname.com/application-name1
https://www.domainname.com/application-name2
...
Where the application launch page changes but the domain name stays the same.
[Image of the IIS Monitoring and Management window showing the distribution.]
If there is a setting you wish to verify, please ask. I know people aren't psychic, but huge paragraphs of information never really help.
My hunch is that it has something to do with the URL rewrite; I have tried the settings in the post below to no avail.
IIS ARR & load balancing
Uncheck 'Host Name Affinity' to dispatch to all your hosts

If I change web hosting and re-point my domain to it, can it still read secure cookies from the previous server? [duplicate]

I have two HTTP services running on one machine. I just want to know if they share their cookies or whether the browser distinguishes between the two server sockets.
The current cookie specification is RFC 6265, which replaces RFC 2109 and RFC 2965 (both RFCs are now marked as "Historic") and formalizes the syntax for real-world usages of cookies. It clearly states:
Introduction
...
For historical reasons, cookies contain a number of security and privacy infelicities. For example, a server can indicate that a given cookie is intended for "secure" connections, but the Secure attribute does not provide integrity in the presence of an active network attacker. Similarly, cookies for a given host are shared across all the ports on that host, even though the usual "same-origin policy" used by web browsers isolates content retrieved via different ports.
And also:
8.5. Weak Confidentiality
Cookies do not provide isolation by port. If a cookie is readable by a service running on one port, the cookie is also readable by a service running on another port of the same server. If a cookie is writable by a service on one port, the cookie is also writable by a service running on another port of the same server. For this reason, servers SHOULD NOT both run mutually distrusting services on different ports of the same host and use cookies to store security sensitive information.
According to RFC 2965 3.3.1 (which might or might not be followed by browsers), unless the port is explicitly specified via the port parameter of the Set-Cookie header, cookies may be sent to any port.
Google's Browser Security Handbook says: "by default, cookie scope is limited to all URLs on the current host name - and not bound to port or protocol information", and some lines later: "There is no way to limit cookies to a single DNS name only [...] likewise, there is no way to limit them to a specific port." (Also, keep in mind that IE does not factor port numbers into its same-origin policy at all.)
So it does not seem to be safe to rely on any well-defined behavior here.
This is a really old question but I thought I would add a workaround I used.
I have two services running on my laptop (one on port 3000 and the other on 4000).
When I jumped between http://localhost:3000 and http://localhost:4000, Chrome would send the same cookie; each service would not understand the cookie and would generate a new one.
I found that if I accessed http://localhost:3000 and http://127.0.0.1:4000, the problem went away since Chrome kept a cookie for localhost and one for 127.0.0.1.
Again, no one may care at this point, but it was easy and helpful in my situation.
This is a big gray area in cookie SOP (Same Origin Policy).
Theoretically, you can specify the port number in the domain and the cookie will not be shared. In practice, this doesn't work in several browsers and you will run into other issues, so it is only feasible if your sites are not for the general public and you can control which browsers are used.
The better approach is to get two domain names for the same IP and not rely on port numbers for cookies.
An alternative way to work around the problem is to make the name of the session cookie port-specific. For example:
mysession8080 for the server running on port 8080
mysession8000 for the server running on port 8000
Your code could access the webserver configuration to find out which port your server uses, and name the cookie accordingly.
Keep in mind that your application will receive both cookies, and you need to request the one that corresponds to your port.
There is no need to have the exact port number in the cookie name, but this is more convenient.
In general, the cookie name could encode any other parameter specific to the server instance you use, so it can be decoded by the right context.
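For example, in a Django project this can be a couple of lines in settings.py. This is a hedged sketch: SESSION_COOKIE_NAME is a standard Django setting, but APP_PORT is a hypothetical environment variable you would set yourself for each instance, since Django does not know at settings time which port it will be served on.

import os

# Name the session cookie after the port this instance is served on, so
# instances on different ports of the same host don't clobber each other.
APP_PORT = os.environ.get("APP_PORT", "8000")   # hypothetical variable, set per instance
SESSION_COOKIE_NAME = f"mysession{APP_PORT}"    # e.g. mysession8000, mysession8080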
In IE 8, cookies (verified only against localhost) are shared between ports. In FF 10, they are not.
I've posted this answer so that readers will have at least one concrete option for testing each scenario.
I was experiencing a similar problem running (and trying to debug) two different Django applications on the same machine.
I was running them with these commands:
./manage.py runserver 8000
./manage.py runserver 8001
When I logged in to the first one and then to the second, I was always logged out of the first one, and vice versa.
I added this to my /etc/hosts:
127.0.0.1 app1
127.0.0.1 app2
Then I started the two apps with these commands:
./manage.py runserver app1:8000
./manage.py runserver app2:8001
Problem solved :)
It's optional.
The port may be specified so that cookies can be port-specific. It is not required; the web server / application must take care of this.
Source: German Wikipedia article, RFC 2109, Chapter 4.3.1

How many maximum number of simultaneous Chrome connections/threads I can start through Selenium WebDriver?

Assuming I do not have a Grid setup, what is the maximum number of simultaneous Chrome threads I can start from Selenium WebDriver?
Is it 5? And does that hold for headless Chrome as well?
Browser connection limitations
Browsers limit the number of HTTP connections to the same domain name. This restriction is defined in the HTTP specification (RFC 2616). Most modern browsers allow six connections per domain, whereas most older browsers allow only two connections per domain.
The HTTP 1.1 protocol states that single-user clients should not maintain more than two connections with any server or proxy. This is the reason for browser limits. You can find a detailed discussion in RFC 2616 – Hypertext Transfer Protocol, section 8 – Connections.
Modern browsers are less restrictive than this, allowing a larger number of connections. The RFC does not specify how to prevent the limit being exceeded. Either connections can be blocked from opening or existing connections can be closed.
[Table of maximum supported connections per browser (shown as an image in the original answer).]
http.maxConnections
As per Networking Properties:
http.maxConnections (default: 5)
If HTTP keepalive is enabled (see above) this value determines the maximum number of idle connections that will be simultaneously kept alive, per destination.
Connection per-host
As per Network.http.max-connections-per-server, Firefox 3 boosted the number of connections per host to 15.
As per Match Firefox's per-host connection limit of 15, the Chrome team tried to match this and went through the discussion Configurable connections-per-host, but it ended without any conclusion.
Conclusion
The same standards apply whichever WebDriver and web browser combination you use. The behavior with a Selenium Grid setup, headless Chrome, and headless Firefox will also be identical.
References
Increasing Google Chrome's max-connections-per-server limit to more than 6
How to solve Chrome's 6 connection limit when using xhr polling
Max parallel http connections in a browser?
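For what it's worth, WebDriver itself does not impose a fixed cap of five sessions; the practical ceiling for local Chrome (headed or headless) instances is your machine's CPU and RAM, plus the per-host connection limits discussed above for the requests each page makes. A minimal Python sketch of starting several Chrome sessions in parallel (the worker count and URL are arbitrary; a working chromedriver setup is assumed):

from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def visit(url):
    opts = Options()
    opts.add_argument("--headless=new")   # remove this line for a visible browser window
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        return driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    urls = ["https://example.com"] * 8                  # 8 parallel sessions, an arbitrary choice
    with ThreadPoolExecutor(max_workers=8) as pool:
        print(list(pool.map(visit, urls)))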

Difference between maxstartups and maxsessions in sshd_config

I want to limit the total number of ssh connections. I have gone through many sshd manuals. They just say that these two fields can be used
MaxStartups: the max number of concurrent unauthenticated connections to the SSH daemon
MaxSessions: the max number of (multiplexed) open sessions permitted per TCP connection.
What is the contribution of both in calculating the total number of ssh connections?
The question is quite old and might be better suited to Server Fault, but it never got an answer beyond citing the man page. My answer complements the man page by adding some context.
First of all, it should be noted that the two settings are independent of each other; they address different stages of the SSH connection.
MaxSessions
SSH allows session multiplexing, i.e. opening many sessions (e.g. a shell, an SFTP transfer and a raw command) at the same time over just one TCP connection. This saves the overhead of multiple TCP handshakes and multiple SSH authentications. The MaxSessions parameter allows you to restrict this multiplexing to a certain number of sessions.
If you set MaxSessions 1 and have a shell open, you can still run an SFTP transfer or open a second shell, but in the background SSH will open another TCP connection and authenticate again (use password authentication to make this visible).
If you set MaxSessions 0 you can make sure no one can open a session (a shell, SFTP or similar) but you can still connect to open a tunnel or ssh into the next host.
Check out the ControlMaster section of ssh_config(5).
MaxSessions
    Specifies the maximum number of open shell, login or subsystem (e.g. sftp) sessions permitted per network connection. Multiple sessions may be established by clients that support connection multiplexing. Setting MaxSessions to 1 will effectively disable session multiplexing, whereas setting it to 0 will prevent all shell, login and subsystem sessions while still permitting forwarding. The default is 10.
MaxStartups
When you connect to the remote SSH server, there is a time window between establishing the connection and successful authentication. This window can be very small, e.g. when you configure your SSH client to use a certain private key for the connection, or it can be long, when the client first tries three different SSH keys, asks you to enter a password and then waits for you to enter a second-factor auth code you get via text message. The connections that are in this window at the same time are the "concurrent unauthenticated connections" mentioned in the man page cited. If there are too many connections in this state, sshd stops accepting new ones. You can tweak MaxStartups to change when this happens.
A real-world use case for changing the default is, for example, a jump host used by provisioning software like Ansible. When asked to provision a lot of hosts behind the jump host, Ansible opens many connections at the same time, so it might run into this limit if connections are opened faster than the SSH host is able to authenticate them.
MaxStartups
    Specifies the maximum number of **concurrent unauthenticated connections to the SSH daemon**. Additional connections will be dropped until authentication succeeds or the LoginGraceTime expires for a connection. The default is 10:30:100.
    Alternatively, random early drop can be enabled by specifying the three colon separated values "start:rate:full" (e.g. "10:30:60"). sshd(8) will refuse connection attempts with a probability of "rate/100" (30%) if there are currently "start" (10) unauthenticated connections. The probability increases linearly and all connection attempts are refused if the number of unauthenticated connections reaches "full" (60).
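To see the two limits in practice, here is a small, hedged sketch (illustrative values and a hypothetical host name, not recommendations): raising MaxStartups on a busy jump host, and enabling client-side multiplexing so several sessions share one TCP connection and are counted against MaxSessions.

# /etc/ssh/sshd_config on the jump host -- illustrative values, not the defaults
MaxStartups 30:50:200   # start refusing 50% of new unauthenticated connections at 30, refuse all at 200
MaxSessions 10          # multiplexed sessions allowed per TCP connection (the OpenSSH default)

# ~/.ssh/config on the client -- reuse one TCP connection for several sessions
Host jump
    HostName jump.example.com   # hypothetical host
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m

# Open two shells; only the first authenticates, the second is multiplexed:
#   ssh jump
#   ssh jump
#   ssh -O check jump   # confirms the shared master connection is alive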

speeding up website load using multiple servers/domains

When Yahoo! developer guide says "Deploying your content across multiple, geographically dispersed servers will make your pages load faster from the user's perspective".
And as an explanation I read somewhere that browsers will load up to 5 things simultaneously from the same domain.
Would a subdomain, for example cdn.example.com be considered a new domain, in the previous statement?
Yahoo: The HTTP/1.1 specification suggests that browsers download no more than two components in parallel per hostname. If you serve your images from multiple hostnames, you can get more than two downloads to occur in parallel.
Google also says you only need different host names.
This may depend on the browser, but I believe they may need to have different IP addresses. All the HTTP spec really says is: "Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server."
So the safest choice is to have different host name AND address.
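In practice this boils down to "domain sharding": serving static assets from two or more hostnames so that the per-host connection limit applies to each shard separately. Below is a minimal Python sketch of choosing the shard deterministically; the hostnames are hypothetical and, per the answer above, would ideally resolve to different addresses.

import zlib

# Hypothetical shard hostnames; both serve the same static content.
SHARDS = ["https://static1.example.com", "https://static2.example.com"]

def shard_url(path: str) -> str:
    # Hash the path so a given asset always maps to the same hostname,
    # which keeps browser and CDN caches effective.
    return SHARDS[zlib.crc32(path.encode()) % len(SHARDS)] + path

print(shard_url("/img/logo.png"))   # always the same shard for this asset
print(shard_url("/css/site.css"))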