Is HTTP keep-alive effective with different domains on the same web server? - apache

On the same nginx/apache server:
Scenario 1: can a.test.com and b.test.com share a keep-alive connection?
Scenario 2: can localhost and 127.0.0.1 share a keep-alive connection?
Or must the Host header stay consistent for keep-alive to be effective?

For Apache it looks like it's based on IP address rather than the Host header, though I suspect it also depends very much on the client implementation.
https://httpd.apache.org/docs/2.4/vhosts/details.html#hostmatching
Persistent connections
The IP lookup described above is only done once for a particular
TCP/IP session while the name lookup is done on every request during a
KeepAlive/persistent connection. In other words, a client may request
pages from different name-based vhosts during a single persistent
connection.
Unless you are using hundreds of domains, I'd say you'd struggle to notice either way, though it should be easy enough to test using developer tools or webpagetest.org to see whether time is spent negotiating a new connection.
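As a rough sketch (hostnames and paths assumed for illustration), two name-based vhosts on the same address can both be served over one persistent connection, because keep-alive is negotiated per TCP connection while the Host header is evaluated per request:
KeepAlive On
KeepAliveTimeout 5
<VirtualHost *:80>
    ServerName a.test.com
    DocumentRoot /var/www/a
</VirtualHost>
<VirtualHost *:80>
    ServerName b.test.com
    DocumentRoot /var/www/b
</VirtualHost>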

If I change web hosting and re-point my domain to it, can it still read secure cookies from the previous server? [duplicate]

I have two HTTP services running on one machine. I just want to know if they share their cookies or whether the browser distinguishes between the two server sockets.
The current cookie specification is RFC 6265, which replaces RFC 2109 and RFC 2965 (both now marked as "Historic") and formalizes the syntax for real-world usage of cookies. It clearly states:
Introduction
...
For historical reasons, cookies contain a number of security and privacy infelicities. For example, a server can indicate that a given cookie is intended for "secure" connections, but the Secure attribute does not provide integrity in the presence of an active network attacker. Similarly, cookies for a given host are shared across all the ports on that host, even though the usual "same-origin policy" used by web browsers isolates content retrieved via different ports.
And also:
8.5. Weak Confidentiality
Cookies do not provide isolation by port. If a cookie is readable by a service running on one port, the cookie is also readable by a service running on another port of the same server. If a cookie is writable by a service on one port, the cookie is also writable by a service running on another port of the same server. For this reason, servers SHOULD NOT both run mutually distrusting services on different ports of the same host and use cookies to store security sensitive information.
According to RFC 2965 §3.3.1 (which browsers may or may not follow), unless the port is explicitly specified via the Port attribute of the Set-Cookie2 header, cookies may be sent to any port.
Google's Browser Security Handbook says: "by default, cookie scope is limited to all URLs on the current host name - and not bound to port or protocol information." And some lines later: "There is no way to limit cookies to a single DNS name only [...] likewise, there is no way to limit them to a specific port." (Also keep in mind that IE does not factor port numbers into its same-origin policy at all.)
So it does not seem to be safe to rely on any well-defined behavior here.
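One way to observe this outside a browser (a sketch assuming two local test servers on ports 3000 and 4000; the /login path is made up) is to capture a cookie set on one port with curl and watch it be sent to the other, since the cookie jar is keyed by host name, not port:
curl -v -c jar.txt http://localhost:3000/login
curl -v -b jar.txt http://localhost:4000/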
This is a really old question but I thought I would add a workaround I used.
I have two services running on my laptop (one on port 3000 and the other on 4000).
When I jumped between http://localhost:3000 and http://localhost:4000, Chrome would send the same cookie; each service would fail to understand it and generate a new one.
I found that if I accessed http://localhost:3000 and http://127.0.0.1:4000, the problem went away since Chrome kept a cookie for localhost and one for 127.0.0.1.
Again, no one may care at this point, but it was an easy fix that helped in my situation.
This is a big gray area in cookie SOP (Same Origin Policy).
Theoretically, you can specify the port number in the cookie's domain and the cookie will not be shared. In practice, this doesn't work in several browsers and you will run into other issues. So it is only feasible if your sites are not for the general public and you can control which browsers are used.
The better approach is to get two domain names for the same IP and not rely on port numbers for cookies.
An alternative way to go around the problem, is to make the name of the session cookie be port related. For example:
mysession8080 for the server running on port 8080
mysession8000 for the server running on port 8000
Your code could access the webserver configuration to find out which port your server uses, and name the cookie accordingly.
Keep in mind that your application will receive both cookies, and you need to request the one that corresponds to your port.
There is no need to have the exact port number in the cookie name, but this is more convenient.
In general, the cookie name could encode any other parameter specific to the server instance you use, so it can be decoded by the right context.
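For example, if the two services happened to be PHP applications running under mod_php (purely an assumption), each port's vhost could pick its own session cookie name along these lines:
<VirtualHost *:8080>
    php_value session.name mysession8080
</VirtualHost>
<VirtualHost *:8000>
    php_value session.name mysession8000
</VirtualHost>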
In IE 8, cookies (verified only against localhost) are shared between ports. In FF 10, they are not.
I've posted this answer so that readers will have at least one concrete option for testing each scenario.
I was experiencing a similar problem running (and trying to debug) two different Django applications on the same machine.
I was running them with these commands:
./manage.py runserver 8000
./manage.py runserver 8001
When I logged in to the first one and then to the second, I always got logged out of the first one, and vice versa.
I added this to my /etc/hosts:
127.0.0.1 app1
127.0.0.1 app2
Then I started the two apps with these commands:
./manage.py runserver app1:8000
./manage.py runserver app2:8001
Problem solved :)
It's optional.
The port may be specified so that cookies can be port-specific. It's not necessary; the web server / application must take care of this.
Source: German Wikipedia article, RFC 2109, Chapter 4.3.1

Apache force DNS lookups

I've got an Apache that's proxying requests to an external entity:
ProxyPass /something https://external.example.com/somethingelse
This external site likes to switch the IP addresses behind that domain based on where they want their traffic. Apache seemingly doesn't pick up the new value until it's restarted. Is there a way to force Apache to do new lookups after a certain amount of time? After some research, and even looking at the code, I don't see an obvious answer. If that isn't an option, any other suggestions?
According to Apache documentation:
DNS resolution for origin domains
DNS resolution happens when the socket to the origin domain is created for the first time. When connection reuse is enabled, each backend domain is resolved only once per child process, and cached for all further connections until the child is recycled.
There is a ProxyPass key=value parameter to control this:
disablereuse (default: Off)
This parameter should be used when you want to force mod_proxy to immediately close a connection to the backend after being used, and thus, disable its persistent connection and pool for that backend. This helps in various situations where a firewall between Apache httpd and the backend server (regardless of protocol) tends to silently drop connections or when backends themselves may be under round-robin DNS. When connection reuse is enabled each backend domain is resolved (with a DNS query) only once per child process and cached for all further connections until the child is recycled. To disable connection reuse, set this property value to On.
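Applied to the ProxyPass line from the question, that would look something like this (note that disablereuse=On trades backend keep-alive for fresh DNS lookups on every new connection):
SSLProxyEngine On
ProxyPass /something https://external.example.com/somethingelse disablereuse=On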

Multiple SSL/TLS handshakes despite Keep-Alive and Session Identifier/Ticket enabled

I have some trouble finding out why I'm experiencing several SSL/TLS handshakes on the same page (for several resources on the same page, i.e. multiple HTTP requests), when both keep-alive and session identifiers/tickets are active on the website/server.
I recently activated TLS (HTTPS) on my website and therefore wanted to check what impact this had on the speed/load performance of the site. When going through the waterfall diagram from both various speed tests on the internet (e.g. tools.pingdom.com and webpagetest.org) and Chrome Developer Tools, I see multiple SSL handshakes/negotiations on the same page, on different content (waterfall screenshot not reproduced here).
As can be seen, there are multiple SSL negotiations on different HTTP requests within the same domain. I'm wondering about this, as both keep-alive and session identifiers & tickets are active (checked via multiple tests such as the ones from webpagetest.org and ssllabs.com/ssltest/). Please also note that I don't have access to the server (Apache) configuration, as I'm on a shared host.
Is what I'm experiencing possibly:
due to the server configuration limiting some amount of connections?
a misconfiguration of some sort?
something entirely else?
Or have I misunderstood something?
Please note that I'm a complete rookie in this field; I've tried to find as much information on this topic as I could, but sadly no answer.
In case you would like to test something for yourself, the website is https://www.aktie-skat.dk
It is normal for a browser to establish multiple parallel connections to the same site, since each connection can only request and load a single resource at a time. Even with HTTP keep-alive, these resources do not get loaded in parallel over a single HTTP/1.x connection, but only one after the other. This is only different with HTTP/2. Apart from that, some requests might result in a Connection: close from the server, which requires the client to use a different connection for the next requests.
In detail: The first two handshakes start at 0.362s and 0.333s and take each about 100ms. These are full handshakes. All the other TLS handshakes are way shorter (about 50ms) and are thus abbreviated handshakes using session resume. The second TCP/TLS connection could not use session resume yet since the TLS handshake for the first connection was not done yet and thus no session was available for resume.
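If you want to check session resumption yourself, one rough way (assuming the openssl command-line tool is available) is to let s_client reconnect with the same session and look for "Reused" lines in the output:
openssl s_client -connect www.aktie-skat.dk:443 -reconnect </dev/null 2>/dev/null | grep -E '^(New|Reused)'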

How do HTTP/2 and CNAME work together?

I don't know exactly how to ask it, so I will try to explain with an example.
I have these resources on example.com, an HTTP/2 enabled server:
//example.com/css/file.css
//example.com/js/file.js
//example.com/images/file.png
What I want is to load one of these files through an alias domain cdn.example2.com that points to the domain example.com. So, the actual resources inside the HTML should look like:
//example.com/css/file.css
//cdn.example2.com/js/file.js -> points to //example.com/js/file.js
//example.com/images/file.png
My question here is: will all the resources in the second example be loaded by the browser over a single connection, as they would be when there is no alias domain?
Thanks for help.
If the aliases resolve to different IPs, there is no way the resources can be loaded over the same connection (called "connection re-use" by HTTP/2, if I'm not mistaken). That's a problem with CDNs from here on.
But for your peace of mind, and to the utter delight of CDNs, connection re-use is a tricky thing and you may not get it even if all your domains resolve to the same IP, as is the case in your question.
To be future proof, you may want to ensure that your sites have the certificate extensions configured correctly to enable connection re-use.
In the current versions of Firefox and Chrome, I haven't observed connection re-use, even after crafting the certificates with all due care, and of course being sure that the two domains point to the same IP.
And just some food for thought: HTTP/2 over TLS requires SNI, which happens only when opening a connection. So when you connect for the first time to one domain, say example.com, the server obtains SNI data. But the server won't obtain such data if the same connection is re-used to send a request to cdn.example2.com. Some servers or usage scenarios may be sensitive to this asymmetry, and that may have something to do with the way in which browsers implement (or not) connection re-use. But these are only speculations of yours truly...
The specification doesn't require reuse, but it does explicitly describe when reuse is acceptable, such as when two hosts resolve to the same IP address.
https://www.rfc-editor.org/rfc/rfc7540#section-9.1.1
Connections that are made to an origin server, either directly or
through a tunnel created using the CONNECT method (Section 8.3), MAY
be reused for requests with multiple different URI authority
components. A connection can be reused as long as the origin server
is authoritative (Section 10.1). For TCP connections without TLS,
this depends on the host having resolved to the same IP address.
For "https" resources, connection reuse additionally depends on
having a certificate that is valid for the host in the URI. The
certificate presented by the server MUST satisfy any checks that the
client would perform when forming a new TLS connection for the host
in the URI.
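A quick way to check the certificate prerequisite (a sketch using the openssl CLI and the hostnames from the question) is to dump the Subject Alternative Names presented by example.com and confirm that cdn.example2.com is listed:
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'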

Can I use Apache mod_proxy as a connection pool, under the Prefork MPM?

Summary/Question:
I have Apache running with the prefork MPM, serving PHP. I'm trying to use Apache mod_proxy to create a reverse proxy that I can re-route my requests through, so that I can use Apache to do connection pooling. Example implementation:
in httpd.conf:
SSLProxyEngine On
ProxyPass /test_proxy/ https://destination.server.com/ min=1 keepalive=On ttl=120
but when I run my test, which is the following command in a loop:
curl -G 'http://localhost:80/test_proxy/testpage'
it doesn't seem to re-use the connections.
After some further reading, it sounds like I'm not getting connection pooling functionality because I'm using the prefork MPM rather than the worker MPM. So each time I make a request to the proxy, it spins up a new process with its own connection pool (of size one), instead of using a single worker process that maintains its own pool. Is that interpretation right?
Background info:
There's an external server that I make requests to, over https, for every page hit on a site that I run.
Negotiating the SSL handshake is getting costly, because I use php and it doesn't seem to support connection pooling - if I get 300 page requests to my site, they have to do 300 SSL handshakes to the external server, because the connections get closed after each script finishes running.
So I'm attempting to use a reverse proxy under Apache to function as a connection pool, to persist the connections across php processes so I don't have to do the SSL handshake as often.
Sources that gave me this idea:
http://httpd.apache.org/docs/current/mod/mod_proxy.html
http://geeksnotes.livejournal.com/21264.html
First of all, your test method cannot demonstrate connection pooling, since for every call a curl client is born and then dies. Just as dead people don't talk much, a dead process cannot keep a connection alive.
You have clients that bother your proxy server.
Client ====== (A) =====> ProxyServer
Let's call this connection A. Your proxy server does nothing; it is just a show-off. The handsome and hardworking server is so humble that it hides behind.
Client ====== (A) =====> ProxyServer ====== (B) =====> WebServer
Here, if I am not wrong, the secured connection is A, not B, right?
Repeating my first point: in your test you are creating a separate client for each request. Every client needs a separate connection. A connection is something that happens between at least two parties; when one side leaves, the connection is lost.
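As a side note, a single curl invocation given several URLs will reuse one connection where it can, so a fairer keep-alive test against the proxy might look like this (same endpoint repeated, purely illustrative; -v will print "Re-using existing connection" when it happens):
curl -v 'http://localhost:80/test_proxy/testpage' 'http://localhost:80/test_proxy/testpage'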
Okay, let's forget curl now and look together at what we really want to do.
We want to have SSL on A and we want A side of traffic to be as fast as possible. For this aim, we have already separated side B so it will not make A even slower, right?
Connection pooling? There is no such thing as connection pooling at A. Every client comes and goes, making a lot of noise. The only thing that can help you reduce this noise is "Keep-Alive", which means keeping the connection from a client alive for some short period of time, so this very same client can ask for other files that will be required by this request. When we are done, we are done.
For connections on B, connections will be pooled; but this will not bring you any performance gain, since in a one-server setup you did not have this part of the noise in the first place.
How do we help this system run faster?
If these two servers are on the same machine, we should get rid of the show-off server and continue with our hardworking web server. It adds a lot of unnecessary work to the system.
If these are separate machines, then you are being nice to the web server by taking at least the encryption (SSL) load off this poor guy. However, you can be even nicer.
If you want to continue with Apache, switch to mpm_worker from mpm_prefork (see the sketch below). In the case of 300+ concurrent requests, this will work much better. I really have no idea about the capacity of your hardware, but if handling 300 requests is difficult, I believe this little change will help your system a lot.
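On a Debian/Ubuntu-style layout the switch is roughly the following (commands assumed; note that mod_php generally requires prefork, so the worker/event MPMs are usually paired with PHP-FPM instead):
# assumes mod_php has already been replaced by PHP-FPM (e.g. via proxy_fcgi)
a2dismod mpm_prefork
a2enmod mpm_worker
systemctl restart apache2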
If you want to have an even more lightweight system, consider nginx as an alternative to Apache. It is very easy to set up to work with PHP and it will have better performance.
Other than the front-end side of things, also consider checking your database server. Connection pooling will make a real difference here. Be sure your PHP installation is configured to reuse connections to the database.
In addition, if you are hosting static files on the same system, move them out to another web server, or do even better by moving static files to a cloud system with a CDN, like AWS's S3+CloudFront or Rackspace's CloudFiles. Even without CloudFront, S3 will make you happy. Rackspace's solution comes with Akamai!
Taking out static files will make your web server go "oh, what happened, what is this silence? ohhh, heaven!", since you mentioned this is a website, and web pages have many static files for each dynamically generated HTML page most of the time.
I hope you can save the poor guy from the killer work.
Prefork can still pool 1 connection per backend server per process.
Prefork doesn't necessarily create a new process for each frontend request; the server processes are "pooled" themselves, and the behavior depends on e.g. MinSpareServers/MaxSpareServers and friends.
To maximise how often a prefork process will have a backend connection for you, avoid very high or low MaxSpareServers or very high MinSpareServers, as these will result in "fresh" processes accepting new connections.
You can log %P in your LogFormat directive to help get an idea of how often processes are being reused.
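For instance (format string assumed, adjust to taste), appending the child PID to the standard combined format makes process reuse visible in the access log:
LogFormat "%h %l %u %t \"%r\" %>s %b pid:%P" combined_pid
CustomLog logs/access_log combined_pid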
The problem in my case was that connection pooling between the reverse proxy and the backend server was not taking place because the backend Apache server was closing the SSL connection at the end of each HTTPS request.
The backend Apache server was doing this because of the following directive being present in httpd.conf:
SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
This directive does not make sense when the backend server is reached via a reverse proxy, and it can be removed from the backend server's config.
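In other words, on the backend server the fix amounts to dropping that line and leaving keep-alive on; a sketch of the relevant httpd.conf bits:
# removed: SetEnvIf User-Agent ".*MSIE.*" nokeepalive ssl-unclean-shutdown
KeepAlive On
KeepAliveTimeout 5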