If I change web hosting and re-point my domain to it, can it still read secure cookies from the previous server? [duplicate]

I have two HTTP services running on one machine. I just want to know if they share their cookies or whether the browser distinguishes between the two server sockets.

The current cookie specification is RFC 6265, which replaces RFC 2109 and RFC 2965 (both RFCs are now marked as "Historic") and formalizes the syntax for real-world usages of cookies. It clearly states:
Introduction
...
For historical reasons, cookies contain a number of security and privacy infelicities. For example, a server can indicate that a given cookie is intended for "secure" connections, but the Secure attribute does not provide integrity in the presence of an active network attacker. Similarly, cookies for a given host are shared across all the ports on that host, even though the usual "same-origin policy" used by web browsers isolates content retrieved via different ports.
And also:
8.5. Weak Confidentiality
Cookies do not provide isolation by port. If a cookie is readable by a service running on one port, the cookie is also readable by a service running on another port of the same server. If a cookie is writable by a service on one port, the cookie is also writable by a service running on another port of the same server. For this reason, servers SHOULD NOT both run mutually distrusting services on different ports of the same host and use cookies to store security sensitive information.

According to RFC 2965, section 3.3.1 (which might or might not be followed by browsers), unless the port is explicitly specified via the Port attribute of the Set-Cookie2 header, a cookie may be returned to any port of the host.
Google's Browser Security Handbook says: "by default, cookie scope is limited to all URLs on the current host name - and not bound to port or protocol information," and some lines later: "There is no way to limit cookies to a single DNS name only [...] likewise, there is no way to limit them to a specific port." (Also keep in mind that IE does not factor port numbers into its same-origin policy at all.)
So it does not seem to be safe to rely on any well-defined behavior here.

This is a really old question but I thought I would add a workaround I used.
I have two services running on my laptop (one on port 3000 and the other on 4000).
When I would jump between http://localhost:3000 and http://localhost:4000, Chrome would send the same cookie; each service would not understand it and would generate a new one.
I found that if I accessed http://localhost:3000 and http://127.0.0.1:4000, the problem went away since Chrome kept a cookie for localhost and one for 127.0.0.1.
Again, no one may care at this point, but it was easy and helpful in my situation.

This is a big gray area in cookie SOP (Same Origin Policy).
Theoretically, you can specify the port number in the cookie's domain and the cookie will not be shared. In practice, this doesn't work in several browsers and you will run into other issues, so it is only feasible if your sites are not for the general public and you can control which browsers are used.
The better approach is to get two domain names for the same IP and not rely on port numbers for cookies.

An alternative way to go around the problem, is to make the name of the session cookie be port related. For example:
mysession8080 for the server running on port 8080
mysession8000 for the server running on port 8000
Your code could access the webserver configuration to find out which port your server uses, and name the cookie accordingly.
Keep in mind that your application will receive both cookies, and you need to request the one that corresponds to your port.
The cookie name does not need to contain the exact port number, but doing so is more convenient.
In general, the cookie name could encode any other parameter specific to the server instance you use, so it can be decoded by the right context.
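A minimal sketch of this scheme in Python (the port value and cookie prefix are just the examples above; in a real app you would read the port from your server's configuration):

from http.cookies import SimpleCookie

SERVER_PORT = 8080  # assumption: obtain this from your actual server configuration
COOKIE_NAME = "mysession%d" % SERVER_PORT

def session_from(cookie_header):
    # The browser may send both mysession8080 and mysession8000;
    # keep only the one that matches this instance's port.
    jar = SimpleCookie(cookie_header)
    morsel = jar.get(COOKIE_NAME)
    return morsel.value if morsel else None

print(session_from("mysession8080=abc123; mysession8000=zzz999"))  # -> abc123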

In IE 8, cookies (verified only against localhost) are shared between ports. In FF 10, they are not.
I've posted this answer so that readers will have at least one concrete option for testing each scenario.

I was experiencing a similar problem running (and trying to debug) two different Django applications on the same machine.
I was running them with these commands:
./manage.py runserver 8000
./manage.py runserver 8001
When I logged in to the first one and then to the second, I always got logged out of the first, and vice versa.
I added this to my /etc/hosts:
127.0.0.1 app1
127.0.0.1 app2
Then I started the two apps with these commands:
./manage.py runserver app1:8000
./manage.py runserver app2:8001
Problem solved :)
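For what it's worth, since both apps are Django there is also a hosts-free variant (a sketch; SESSION_COOKIE_NAME is Django's real setting, the name below is just an example): give each project its own session cookie name in its settings.py so the two dev servers stop clobbering each other's cookies:

# app2/settings.py
SESSION_COOKIE_NAME = "app2_sessionid"  # Django's default in both apps is "sessionid"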

It's optional.
The port may be specified so that cookies can be port-specific. It's not necessary; the web server / application must take care of this.
Source: German Wikipedia article; RFC 2109, section 4.3.1

Related

How do HTTP/2 and CNAME work together?

I don't know exactly how to ask it, so I will try to explain with an example.
I have these resources on example.com, an HTTP/2 enabled server:
//example.com/css/file.css
//example.com/js/file.js
//example.com/images/file.png
What I want is to load one of these files through an alias domain cdn.example2.com that points to the domain example.com. So, the actual resources inside the HTML should look like:
//example.com/css/file.css
//cdn.example2.com/js/file.js -> points to //example.com/js/file.js
//example.com/images/file.png
My question here is: will all the resources in the second example be loaded by the browser over a single connection, as they would be when there is no alias domain?
Thanks for the help.
If the aliases resolve to different IPs, there is no way the resources can be loaded over the same connection (called "connection re-use" in HTTP/2, if I'm not mistaken). That's a problem with CDNs from here on.
But for your peace of mind, and to the utter rejoicing of CDNs, connection re-use is a tricky thing and you may not get it even if all your domains resolve to the same IP, as is the case in your question.
To be future proof, you may want to ensure that your sites have the certificate extensions configured correctly to enable connection re-use.
In the current versions of Firefox and Chrome, I haven't observed connection re-use, even after crafting the certificates with all due care, and of course being sure that the two domains point to the same IP.
And just some food for thought: HTTP/2 over TLS requires SNI, which happens only when opening a connection. So when you connect for the first time to one domain, say example.com, the server obtains SNI data. But the server won't obtain such data if the same connection is re-used to send a request to cdn.example.com. Some servers or usage scenarios may be sensitive to this asymmetry, and that may have something to do with the way in which browsers implement (or not) connection re-use. But these are only speculations of yours truly...
The specification doesn't require its reuse, but it does explicitly include information on when reuse is acceptable -- such as two hosts that resolve to the same IP address.
https://www.rfc-editor.org/rfc/rfc7540#section-9.1.1
Connections that are made to an origin server, either directly or through a tunnel created using the CONNECT method (Section 8.3), MAY be reused for requests with multiple different URI authority components. A connection can be reused as long as the origin server is authoritative (Section 10.1). For TCP connections without TLS, this depends on the host having resolved to the same IP address.
For "https" resources, connection reuse additionally depends on having a certificate that is valid for the host in the URI. The certificate presented by the server MUST satisfy any checks that the client would perform when forming a new TLS connection for the host in the URI.
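If you want to test the two prerequisites that are under your control, here is a minimal Python sketch (the hostnames are the ones from the question; run it from any machine with normal DNS):

import socket
import ssl

def addresses(host):
    # Every IP the name resolves to, for the HTTPS port.
    return {info[4][0] for info in socket.getaddrinfo(host, 443)}

shared = addresses("example.com") & addresses("cdn.example2.com")
print("shared IPs:", shared)  # empty set -> connection re-use is impossible

# Fetch the certificate served for example.com and list the names it covers;
# cdn.example2.com must appear among them for "https" re-use to be allowed.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print("subjectAltName:", tls.getpeercert().get("subjectAltName"))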

What is a proxy server and how does it help in server architecture?

I am very confused by proxy servers and this word "proxy". I see people everywhere using proxy programs and proxy servers, some of them using proxy websites to unblock other websites, and there are related terms like reverse proxy as well.
When I read an article about nginx I ran into a picture that mentions a proxy cache. So what's a proxy cache?
And how can I write a proxy program? What does that mean? Why do we need to use a proxy program?
Can anybody answer my questions as simply as possible? I am not much into this area.
A proxy server is used to facilitate security, administrative control or caching, among other possibilities. In a personal computing context, proxy servers are used to enable user privacy and anonymous surfing. Proxy servers are used for both legal and illegal purposes.
On corporate networks, a proxy server is associated with -- or is part of -- a gateway server that separates the network from external networks (typically the Internet) and a firewall that protects the network from outside intrusion. A proxy server may live on the same machine as a firewall server, or it may be on a separate server and forward requests through the firewall.
When a proxy server receives a request for an Internet service (such as a Web page request), it looks in its local cache of previously downloaded Web pages. If it finds the page, it returns it to the user without needing to forward the request to the Internet. If the page is not in the cache, the proxy server, acting as a client on behalf of the user, uses one of its own IP addresses to request the page from the server out on the Internet. When the page is returned, the proxy server relates it to the original request and forwards it on to the user.
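To make that flow concrete, here is a toy caching forward proxy in Python (a sketch only: plain HTTP GET, no CONNECT/HTTPS support, no cache expiry, single-threaded):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

cache = {}  # URL -> response body

class CachingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        url = self.path  # a forward proxy receives the absolute URL here
        if url not in cache:
            # Cache miss: act as a client on behalf of the user.
            with urlopen(url) as upstream:
                cache[url] = upstream.read()
        body = cache[url]  # cache hit, or the page we just stored
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("127.0.0.1", 8888), CachingProxy).serve_forever()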
To the user, the proxy server is invisible; all Internet requests and returned responses appear to come directly from the addressed Internet server. (The proxy is not quite invisible; its IP address has to be specified as a configuration option to the browser or other protocol program.)
An advantage of a proxy server is that its cache can serve all users. If one or more Internet sites are frequently requested, these are likely to be in the proxy's cache, which will improve user response time. A proxy can also log its interactions, which can be helpful for troubleshooting.

NTLM authentication and smartcards

I'm running a program (Mathematica) in a VMWare VPC behind a corporate internet proxy. Various programs installed in that VPC like IE, Chrome, Excel, Word, Acrobat Reader, and even MS Paint get data from the Internet without problems, but Mathematica doesn't seem to handle the proxy correctly.
My guess is it's not able to handle the proxy's NTLM authentication.
In an earlier situation, behind a different firewall, I had some success with CNTLM as an intermediate between Mathematica and the proxy. CNTLM talks to the proxy and takes care of the NTLM authentication, and Mathematica is given the port CNTLM listens to and ip address (localhost), to talk to. However, in that earlier case I knew the credentials to be used for the proxy (i.e., my own).
In the current situation, my logon takes place using a smartcard and a PIN. The VPC gets credentials passed transparently (I don't have to enter them) and apparently all the programs I mentioned above automagically know about them. This makes me think Mathematica or CNTLM should be able to do this as well. However, my PIN used as password doesn't work (in fact, I get locked out if I try too often). I assume that the credentials used are in fact not my own but are either the windows password (that I don't have as smartcard user) or are derived from my PIN and smartcard.
My question is: how can I make this setup work? This may involve CNTLM, but other solutions are welcome as well.
You may have a chance using a debugging proxy such as Fiddler.
Like CNTLM, Fiddler acts as a local proxy and allows applications that support a plain proxy but not NTLM to use the corporate proxy indirectly, through the local proxy.
Unlike CNTLM, Fiddler doesn't require you to configure credentials; it uses the current user's credentials to authenticate the web requests.
I can't be sure that this is the solution for you, since I don't have an environment like yours, but this workaround has worked in other cases, as reported in this answer about a Ruby gem and this blog post about Mercurial, so I hope it can work with Mathematica too.
Note: once you run Fiddler, it automatically configures the browser proxy to point to itself (http://localhost:8888), so you can leave your application's proxy settings at "Use Proxy Settings from My System or Browser". By the way, Fiddler is not just a local proxy; it can also be used for troubleshooting and debugging. The feature list is available here.
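A quick way to verify such a local proxy chain before pointing Mathematica at it is a few lines of Python (port 8888 is Fiddler's default listener; CNTLM commonly listens on 3128, so adjust accordingly):

import urllib.request

# Route a single request through the local Fiddler/CNTLM listener.
handler = urllib.request.ProxyHandler({
    "http": "http://127.0.0.1:8888",
    "https": "http://127.0.0.1:8888",
})
opener = urllib.request.build_opener(handler)
response = opener.open("http://example.com/", timeout=10)
print(response.status)  # 200 -> the local proxy is authenticating for you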

Can access a host on port 9988 locally but not remotely

I've set up a page and host it using bindings on ports 80 and 9988 for all incoming IPs. When testing locally on localhost:port it works for both values, but when accessing it remotely, only port 80 gives the desired result. The other one results in a connection timeout.
First I thought it had to do with the HTTP request not being recognized, so I added http:// before the IP number, but that didn't make any difference.
I'm guessing that I need to alter web.config, but I'm unclear why (and how). The only change from the vanilla state I've made was to allow multiple site bindings (multipleSiteBindingsEnabled), believing that would be enough. Apparently it's not. :)
Eventually, I'll be hosting the site on several different ports (none of which is the default 80, though).
What do I need to do?
Firewall...
And since SO requires a minimum number of characters: check the settings of your firewall.
My experience is that when a connection should work but doesn't, the web-style equivalent of "have you tried turning it off and on again" is "have you checked your firewall settings".
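As a quick sanity check, run something like this Python sketch from a remote machine (SERVER_IP is a placeholder for your server's address): if port 80 connects and 9988 times out, the firewall is the culprit rather than web.config:

import socket

for port in (80, 9988):
    s = socket.socket()
    s.settimeout(3)  # fail fast instead of hanging on a filtered port
    try:
        s.connect(("SERVER_IP", port))  # placeholder: use your server's IP
        print(port, "reachable")
    except OSError as exc:
        print(port, "blocked:", exc)
    finally:
        s.close()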

Error with DOJO when using IP

Strange error with a project using Dojo:
If I call http://localhost/project everything works as expected.
If I call http://127.0.0.1/project everything works as expected.
If I call http://192.168.2.1/project I get the following error (ONLY in IE6!):
"Bundle not found, locale.."
Any ideas?
I am running Zend Server CE with PHP 5.2.
If I add 192.168.2.1 to "hosts" it works (Windows).
Sounds like Zend Server is performing some kind of virtual site support, using the site name as a partial domain.
I can't say 100% if/how it does this because I don't use Zend, but I can explain the principle using Apache as an example.
There are 3 ways in which a web site can be virtually hosted under a single web server application, this applies to most servers on the market today, Apache, IIS, nginx and many others.
It all boils down to one thing, giving one running server application instance the ability to host multiple individual websites.
The 3 methods of separating sites are as follows:
By IP address : If you have multiple IP addresses (usually, but not always, because you have multiple network interface cards), then you can tell your server application to listen on one IP for one site, another IP for another site, and so on. If you browse to one IP you'll get one site, and likewise the other site on the other IP.
By Port Number : If you're using only one IP address, you can bind to multiple port numbers. Port 80 is generally the default for web servers, but by browsing to an address and pinning the port number on the end (http://mysite.com:99) you'll force the browser to use that port. You can then have multiple websites listening on different ports and select them manually at browse time as required.
By Host Name Header: This is by far the most common way of supporting multiple sites. All web servers that understand the HTTP/1.1 protocol must obey a header field in the request that contains the host name; when a request comes in for e.g. http://mysite.com/, there will be an entry in the request header that looks like 'Host: mysite.com'. The web server can then use that to select and serve the correct website (see the sketch after this list).
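To illustrate the third method, here is a minimal Python sketch of name-based virtual hosting: one listener, with the Host request header selecting the site (the hostnames are this answer's examples):

from http.server import BaseHTTPRequestHandler, HTTPServer

SITES = {
    "mysite.com": b"<h1>mysite.com</h1>",
    "myserver": b"<h1>myserver (hosts-file alias)</h1>",
}

class VHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "").split(":")[0]  # strip any :port
        # Requests made by bare IP address fall through to the default site,
        # which mirrors the Apache behaviour described below.
        body = SITES.get(host, b"<h1>default site</h1>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), VHostHandler).serve_forever()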
The problems start to arise, however, when you use IP addresses that cannot be resolved or have no DNS name, because the web server then doesn't know which hostname to tag the request to.
As an example in Apache, if you set up a virtual host, then try to browse that server using just the IP address, you'll get the default server, which in many cases won't even be configured to respond correctly or display anything.
To compound this, going up to web application layer, many frameworks also do their own checks on hostnames and other variables passed to them by the web server, and many make decisions on how to operate based on this information.
If you've gotten to the default web application by IP address, then there's a high chance that the framework may get confused at being presented with an IP address as a host name.
As the OP noted, in many cases you can add a name to your hosts file and use it as a poor man's DNS substitute. The file to modify can be found in the following locations:
c:\windows\system32\drivers\etc\ on Windows
/etc/ on Linux/Unix
The file is generally just called 'hosts' and is a plain text file. Adding a line like:
123.456.789.123 myserver
will tie http://myserver/ to http://123.456.789.123/
If you can, and you're doing a lot of web applications, it may be worth setting up your own DNS server; most Linux distros will let you install BIND, and I believe there is a version available for Windows too.
I'm not going to go into the pros and cons of private DNS servers here (it's a whole other subject in itself), but if you're likely to be making a lot of additions to your hosts file, then in the long run you'll find it a better option.