NTLM authentication and smartcards

I'm running a program (Mathematica) in a VMware VPC behind a corporate internet proxy. Various programs installed in that VPC, like IE, Chrome, Excel, Word, Acrobat Reader, and even MS Paint, get data from the Internet without problems, but Mathematica doesn't seem to handle the proxy correctly.
My guess is it's not able to handle the proxy's NTLM authentication.
In an earlier situation, behind a different firewall, I had some success with CNTLM as an intermediate between Mathematica and the proxy. CNTLM talks to the proxy and takes care of the NTLM authentication, and Mathematica is pointed at the IP address (localhost) and port that CNTLM listens on. However, in that earlier case I knew the credentials to be used for the proxy (i.e., my own).
In the current situation, my logon takes place using a smartcard and a PIN. The VPC gets credentials passed transparently (I don't have to enter them) and apparently all the programs I mentioned above automagically know about them. This makes me think Mathematica or CNTLM should be able to do this as well. However, my PIN used as password doesn't work (in fact, I get locked out if I try too often). I assume the credentials used are in fact not my own, but are either the Windows password (which I, as a smartcard user, don't have) or are derived from my PIN and smartcard.
My question is: how can I make this setup work? This may involve CNTLM, but other solutions are welcome as well.
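For reference, one way to confirm that the proxy really demands NTLM is to send it a plain unauthenticated request and inspect the 407 response. A minimal Python sketch (the proxy host and port below are placeholders for your own):

import http.client

# Hypothetical proxy address; an NTLM-protected proxy should answer an
# unauthenticated request with 407 and advertise NTLM and/or Negotiate.
conn = http.client.HTTPConnection("proxy.example.corp", 8080, timeout=10)
conn.request("GET", "http://example.com/")
resp = conn.getresponse()
print(resp.status, resp.reason)               # expect: 407 Proxy Authentication Required
print(resp.getheader("Proxy-Authenticate"))   # e.g. "NTLM, Negotiate"
conn.close()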

You may have a chance using a local debugging proxy such as Fiddler.
Like CNTLM, Fiddler acts as a local proxy: it allows applications that support a "plain" proxy, but not NTLM, to use the corporate proxy indirectly, through the local one.
Unlike CNTLM, Fiddler doesn't require you to configure credentials; it uses the current user's credentials to authenticate the web requests.
I can't be sure this is the solution for you, since I don't have an environment like yours, but this workaround has worked in other cases, as reported in this
answer about Ruby gems
and this blog post about Mercurial, so I hope it can work with Mathematica too.
Note: once you run Fiddler, it automatically configures the system/browser proxy to point to itself (http://localhost:8888), so you can leave your application's proxy setting at "Use Proxy Settings from My System or Browser". By the way, Fiddler is not only a local proxy; it can also be used for troubleshooting and debugging. The feature list is available here.
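To verify that requests from a script really flow through Fiddler, you can point a throwaway client at its default listener. A minimal Python sketch (port 8888 as noted above):

import urllib.request

# Route one test request through Fiddler's local listener (default: 8888);
# the request should then appear in Fiddler's session list.
proxy = urllib.request.ProxyHandler({
    "http": "http://127.0.0.1:8888",
    "https": "http://127.0.0.1:8888",
})
opener = urllib.request.build_opener(proxy)
print(opener.open("http://example.com/", timeout=10).status)  # expect 200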

Are HTTP cookies port specific?

I have two HTTP services running on one machine. I just want to know if they share their cookies or whether the browser distinguishes between the two server sockets.
The current cookie specification is RFC 6265, which replaces RFC 2109 and RFC 2965 (both RFCs are now marked as "Historic") and formalizes the syntax for real-world usages of cookies. It clearly states:
Introduction
...
For historical reasons, cookies contain a number of security and privacy infelicities. For example, a server can indicate that a given cookie is intended for "secure" connections, but the Secure attribute does not provide integrity in the presence of an active network attacker. Similarly, cookies for a given host are shared across all the ports on that host, even though the usual "same-origin policy" used by web browsers isolates content retrieved via different ports.
And also:
8.5. Weak Confidentiality
Cookies do not provide isolation by port. If a cookie is readable by a service running on one port, the cookie is also readable by a service running on another port of the same server. If a cookie is writable by a service on one port, the cookie is also writable by a service running on another port of the same server. For this reason, servers SHOULD NOT both run mutually distrusting services on different ports of the same host and use cookies to store security sensitive information.
According to RFC 2965 3.3.1 (which browsers might or might not follow), unless the port is explicitly specified via the Port parameter of the Set-Cookie2 header, cookies may be sent to any port.
Google's Browser Security Handbook says: "by default, cookie scope is limited to all URLs on the current host name - and not bound to port or protocol information", and some lines later, "There is no way to limit cookies to a single DNS name only [...] likewise, there is no way to limit them to a specific port." (Also, keep in mind that IE does not factor port numbers into its same-origin policy at all.)
So it does not seem to be safe to rely on any well-defined behavior here.
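To see this behavior concretely, here is a small self-contained Python sketch: one cookie jar, two toy servers on different ports of the same host, and the cookie set by the first server is replayed to the second (the ports 3000/4000 are arbitrary):

import http.cookiejar
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        if self.server.server_port == 3000:
            self.send_header("Set-Cookie", "session=abc123")  # only port 3000 sets it
        self.end_headers()
        msg = "cookies received: %s" % self.headers.get("Cookie")
        self.wfile.write(msg.encode())

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

for port in (3000, 4000):
    server = HTTPServer(("127.0.0.1", port), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
opener.open("http://127.0.0.1:3000/")                  # server sets session=abc123
body = opener.open("http://127.0.0.1:4000/").read()    # jar sends it to port 4000 too
print(body)  # b'cookies received: session=abc123'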
This is a really old question but I thought I would add a workaround I used.
I have two services running on my laptop (one on port 3000 and the other on 4000).
When I would jump between http://localhost:3000 and http://localhost:4000, Chrome would send the same cookie; each service would not understand it and would generate a new one.
I found that if I accessed http://localhost:3000 and http://127.0.0.1:4000, the problem went away since Chrome kept a cookie for localhost and one for 127.0.0.1.
Again, no one may care at this point, but it was easy and helpful in my situation.
This is a big gray area in cookie SOP (Same Origin Policy).
Theoretically, you can specify a port number in the domain and the cookie will not be shared. In practice, this doesn't work with several browsers and you will run into other issues. So this is only feasible if your sites are not for the general public and you can control which browsers are used.
The better approach is to get two domain names for the same IP and not rely on port numbers for cookies.
An alternative way to go around the problem, is to make the name of the session cookie be port related. For example:
mysession8080 for the server running on port 8080
mysession8000 for the server running on port 8000
Your code could access the webserver configuration to find out which port your server uses, and name the cookie accordingly.
Keep in mind that your application will receive both cookies, and you need to request the one that corresponds to your port.
There is no need to have the exact port number in the cookie name, but this is more convenient.
In general, the cookie name could encode any other parameter specific to the server instance you use, so it can be decoded by the right context.
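As a concrete illustration, in a Django settings file this could look roughly like the following (SESSION_COOKIE_NAME is Django's real setting; the APP_PORT environment variable is a hypothetical per-instance convention):

import os

# settings.py (sketch): give each server instance its own session cookie name,
# derived from the port it serves on, so instances on one host don't collide.
APP_PORT = os.environ.get("APP_PORT", "8000")   # hypothetical, set per instance
SESSION_COOKIE_NAME = f"mysession{APP_PORT}"    # e.g. mysession8000 / mysession8080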
In IE 8, cookies (verified only against localhost) are shared between ports. In FF 10, they are not.
I've posted this answer so that readers will have at least one concrete option for testing each scenario.
I was experiencing a similar problem running (and trying to debug) two different Django applications on the same machine.
I was running them with these commands:
./manage.py runserver 8000
./manage.py runserver 8001
When I logged in to the first one and then the second, I always got logged out of the first, and vice versa.
I added this to my /etc/hosts:
127.0.0.1 app1
127.0.0.1 app2
Then I started the two apps with these commands:
./manage.py runserver app1:8000
./manage.py runserver app2:8001
Problem solved :)
It's optional.
The port may be specified so that cookies can be port-specific. It's not necessary; the web server / application must take care of this.
Source: German Wikipedia article; RFC 2109, Chapter 4.3.1

Connecting Devices to Fiddler Without Proxy Changing?

I'm interested in using Fiddler to inspect HTTP(S) API traffic on my home network. I want two networks/routers: "Normal" and "Fiddler". I want devices to easily connect to either network. I do not want to manually configure/unconfigure proxy settings when moving devices between the normal and Fiddler proxy networks. I just want to select a new access point and have the device work. How can this be done? Will some kind of port-forwarding on the "Fiddler" router suffice?
After several weeks of experimentation and discussion, my conclusion is that neither Fiddler nor Charles Proxy supports transparent proxying, which is key to making a simple router setup work. OTOH, mitmproxy does work well. mitmproxy runs on OS X and Linux. For Windows there are two options: mitmdump is a UI-less version of mitmproxy, and mitmweb (available but not presently released) has a very promising UI.
Indeed, you can easily configure a router to use a mitmproxy system as its gateway. From there mitmproxy will show HTTP requests and responses. If you want to see HTTPS, you'll simply need to have the device accept a mitmproxy certificate; do so by visiting the special domain http://mitm.it and following the instructions.
For a more detailed discussion see Best Way to Inspect HTTP(S) APIs of Many Devices
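As a taste of what this looks like, mitmproxy/mitmdump scripts are plain Python. A hedged sketch of a minimal addon that logs every intercepted URL; in recent mitmproxy versions the transparent-mode invocation is roughly `mitmdump --mode transparent -s log_urls.py`:

# log_urls.py: mitmproxy calls the module-level request() hook for
# each intercepted request.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # pretty_url includes scheme, host, and path of the intercepted request
    print(flow.request.pretty_url)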
I wrote up the procedure for configuring a router to proxy client traffic to a transparent proxy. It works great with MitmProxy. The beauty of this approach is that you can simply connect a client device, wired or wireless, to the router, and HTTP(S) traffic will be displayed by MitmProxy. No need to fiddle with each device's proxy settings. You simply choose the router's network; when done, you flip back to the usual router.
Best Way to Inspect HTTP(S) API Traffic in a Multi-Platform Multi-Device Environment
http://fiddlerbook.com/fiddler/help/hookup.asp --- have you checked this? I think this helps.

Local HTTPS proxy possible?

TL;DR
I want to set up a local HTTPS proxy that can (LOCALLY) modify the content of HTML pages on my machine. Is this possible?
Motivation
I have used an HTTP Proxy called GlimmerBlocker for years. It started in 2008 as a proxy-based approach to blocking ads (as opposed to browser extensions or other OS X-specific hacks like InputManagers). But besides blocking ads, it also allows the user to inject their own CSS or JavaScript into the page. Development has seriously slowed, but it remains incredibly useful.
The only problem is that it doesn’t do HTTPS (from its FAQ):
Ads on https pages are not blocked
When Safari fetches an https page using a proxy, it doesn't really use the http protocol, but makes a tunneled tcp connection so Safari receives the encrypted bytes. The advantage is that any intermediate proxies can't modify or read the contents of the page, nor the URL. The disadvantage is, that GlimmerBlocker can't modify the content. Even if GlimmerBlocker tried to work as a middleman and decoded/encoded the content, it would have no means of telling Safari to trust it, nor to tell Safari if the websites certificate is valid, so Safari would think you have visited a dubious website.
Fortunately, most ad-providers are not going to switch to https as serving pages using https are much slower and would have a huge processing overhead on the ad-providers servers.
Back in 2008, maybe that last part was true…but not any more.
To be clear, I think the increasing use of SSL is a good thing. I just want to get back the control I had over the content after it arrives on my end.
Points of Confusion
While searching for a solution, I’ve become confused by some apparently contradictory points.
(Also, although I’m quite experienced with the languages of web pages, I’ve always had a difficult time grokking networks and protocols. On that note, sorry if I’m missing something that is way obvious!)
I found this StackOverflow question asking whether HTTPS proxies were possible. The best answer says that “TLS/SSL (The S in HTTPS) guarantees that there are no eavesdroppers between you and the server you are contacting, i.e. no proxies.” (The same answer then described a hack to pull it off, but I don’t understand the instructions. It was very theoretical, anyway.)
In OS X under Network Preferences ▶︎ Advanced… ▶︎ Proxies, there is clearly a setting for an HTTPS proxy. This seems to contradict the previous statement that TLS/SSL’s guarantee against eavesdropping implies the impossibility of proxies.
Other things of note
I can’t remember where, but I read that it is possible to set up an HTTPS proxy, but that it makes HTTPS pointless (by breaking the secure communication in the process). I don’t want this! Encryption is good. I don’t want to filter anyone else’s traffic; I just want something to customize the content after I’ve already received it.
GlimmerBlocker has a nice GUI interface, but I’m fine with non-GUI solutions, too. I may have a poor understanding of networking and protocols, but I’m perfectly comfortable on the command line, tweaking settings in text editors, and so on.
Is what I’m asking possible? Or is my question a case of “either you get security, or you can break it with hacks and get to customize your content—but not both”?
The common idea of an HTTP proxy is a server which accepts a CONNECT request, which includes the target hostname and port, and then just builds a tunnel to the target server. All the HTTPS is done inside the tunnel, so there is no way for the proxy to modify anything (end-to-end security from browser to web server).
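What that CONNECT exchange looks like on the wire can be seen with a few lines of Python (the proxy host and port are placeholders):

import socket

# Open a tunnel through an HTTP proxy by hand. After the
# "200 Connection established" reply, the socket carries opaque TLS bytes
# end to end; the proxy cannot read or modify them.
sock = socket.create_connection(("proxy.example.corp", 8080), timeout=10)
sock.sendall(b"CONNECT example.com:443 HTTP/1.1\r\n"
             b"Host: example.com:443\r\n\r\n")
print(sock.recv(4096).decode())   # e.g. "HTTP/1.1 200 Connection established"
sock.close()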
To modify the data you need to have a proxy which plays man-in-the-middle. In this case you have one HTTPS connection between the proxy and the web server and another HTTPS connection between the browser and the proxy. Between proxy and web server the original server certificate is used, while between browser and proxy a newly created certificate is used, signed by a CA specific to the proxy. Of course this CA must be imported as trusted into the browser, otherwise it would complain all the time about possible attacks.
Of course, all the verification of the original server certificate has to be done by the proxy now, and not all solutions do this the correct way. See also http://www.secureworks.com/cyber-threat-intelligence/threats/transitive-trust/
There are several proxy solutions which can do this SSL interception, like squid, mitmproxy (Python) or App::HTTP_Proxy_IMP (Perl). The last two are specifically designed to let you modify the content with your own code, so these might be good places to start.
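For example, with mitmproxy the GlimmerBlocker-style rewriting described above is only a few lines of Python. A sketch (the injected tag and stylesheet URL are placeholders):

# inject.py -- run with: mitmdump -s inject.py
# Rewrites HTML responses after interception, e.g. to inject your own CSS.
from mitmproxy import http

def response(flow: http.HTTPFlow) -> None:
    ctype = flow.response.headers.get("content-type", "")
    if "text/html" in ctype:
        # hypothetical injection: add a stylesheet of your own before </head>
        flow.response.text = flow.response.text.replace(
            "</head>",
            '<link rel="stylesheet" href="https://localhost/my.css"></head>',
        )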

https client certificate logout/relogin

I have a web site using SSL certificate authentication.
How can I force, from the server side, the web browser to ask again which certificate to use?
This would be usable for logout, but the use case here is switching user identity.
I remember something about directing the user to a page whose SSL settings are incompatible with the current authentication certificate, but I could not find the right settings.
My setup uses Apache mod_ssl, but an IIS solution would also be welcome.
Update:
I am specifically asking about the server side: how to set up a URL on the same hostname that requires client certificates but rejects all of them.
For Firefox, javascript:window.crypto.logout(); does work with minor user inconvenience (which I believe could be scripted around).
This is rather difficult in general (and certainly one of the reasons why client-certificate usage can be tedious for most users).
From the client side, there are some JavaScript techniques, but they are not supported across the board (see this question).
Using Apache Tomcat 7, you can invalidate the SSL/TLS session using a request attribute, as described in this question.
I'm not aware of any hook that would let you do this with Apache Httpd (and mod_ssl). The mechanisms usable behind Apache Httpd (e.g. mod_php, CGI, FCGI, ...) generally aren't able to alter any settings or environment variables set by mod_ssl, which would be necessary to invalidate the session.
On IIS, this question is still unanswered.
The general way, from the browser's point of view, is to go into its settings and clear the SSL state (this varies depending on the browser, and usually requires a couple of dialog boxes rather than a quick button, at least without a plugin).
From a client-side web browser you can do this for MSIE (Internet Explorer):
Clear the SSL state by going to Tools > Internet Options > Content (tab) > Clear SSL State.
In Firefox (prior to version 20) you can do:
Tools | Start Private Browsing, then visit the page in question; or do Tools | Stop Private Browsing and then revisit it.
Then, when you reload the page, it will prompt you to present a new client certificate (if you have more than one from the CA that your server trusts). Otherwise, if you have just one certificate, it will use the one and only PKI client cert that is in your store.
For logout read this post: https://security.stackexchange.com/questions/36853/is-it-possible-to-force-a-new-ssl-session#
On the client side, SSL sessions are normally kept in RAM. Internet Explorer, for instance, internally consists of several processes that talk to each other, and you have to kill them all to make it forget an SSL session (in practice, this happens only when you have closed all the IE windows).
An alternative can be to close the browser with JavaScript.

Capturing HTTPS traffic in the clear?

I've got a local application (which I didn't write, and can't change) that talks to a remote web service. It uses HTTPS, and I'd like to see what's in the traffic.
Is there any way I can do this? I'd prefer a Windows system, but I'm happy to set up a proxy on Linux if this makes things easier.
What I'm considering:
Redirecting the web site by hacking my hosts file (or setting up alternate DNS).
Installing an HTTPS server on that site, with a self-signed (but trusted) certificate.
Apparently, Wireshark can see what's in HTTPS if you feed it the private key. I've never tried this.
Somehow, proxy this traffic to the real server (i.e. it's a full-blown man-in-the-middle "attack").
Does this sound sensible? Can Wireshark really see what's in HTTPS traffic? Can anyone point me at a suitable proxy (and configuration for same)?
Does Fiddler do what you want?
What is Fiddler?
Fiddler is a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect all HTTP(S) traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.
Fiddler is freeware and can debug traffic from virtually any application, including Internet Explorer, Mozilla Firefox, Opera, and thousands more.
Wireshark can definitely display TLS/SSL encrypted streams as plaintext. However, you will definitely need the private key of the server to do so. The private key must be added to Wireshark as an SSL option under preferences. Note that this only works if you can follow the SSL stream from the start. It will not work if an SSL connection is reused.
For Internet Explorer this (SSL session reuse) can be avoided by clearing the SSL state using the Internet Options dialog. Other environments may require restarting a browser or even rebooting a system (to avoid SSL session reuse).
The other key constraint is that an RSA key exchange must be used. Wireshark cannot decode TLS/SSL streams that use DH (Diffie-Hellman) key exchange.
Assuming you can satisfy the constraints above, the "Follow SSL Stream" right-click command works rather well.
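If you control the test client, you can pin an RSA-key-exchange suite so the capture stays decryptable with the server key. A Python sketch ("AES256-SHA" is the OpenSSL name for TLS_RSA_WITH_AES_256_CBC_SHA; note that many modern servers refuse such suites):

import socket
import ssl

# Force a non-(EC)DHE suite so a capture can be decrypted with the server's
# RSA private key, per the constraints described above.
ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2   # TLS 1.3 removed RSA key exchange
ctx.set_ciphers("AES256-SHA")                  # TLS_RSA_WITH_AES_256_CBC_SHA

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.cipher())                    # confirm the negotiated suite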
You need to set up a proxy for your local application, and if it doesn't honour proxy settings, put in a transparent proxy and route all HTTPS traffic into it before it goes outside. Something like this can be the "man" in the middle: http://crypto.stanford.edu/ssl-mitm
Also, here are brief instructions on how to achieve this with Wireshark: http://predev.wikidot.com/decrypt-ssl-traffic
You should also consider Charles. From the product description at the time of this answer:
Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).
Whether you can use an HTTPS proxy to monitor depends on the type of handshake. If your local application does not check the server's certificate against a CA signature (which you cannot fake), and the server does not check your local application's certificate (or you have one that you can set up on the HTTPS proxy), then you can set up an HTTPS proxy to monitor the HTTPS traffic. Otherwise, I think it is impossible to monitor traffic with an HTTPS proxy.
Another way you can try is to add an instrumentation probe at the routines of your client program where it sends and receives messages through its HTTPS library. It needs some reverse engineering work, but it should work for you in all situations.
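If the client happens to be written in Python, the probe idea can be as simple as monkey-patching the ssl layer to log plaintext before encryption and after decryption. This is a sketch of the technique, not a general tool; compiled programs need a real hooking framework instead:

import ssl

# Instrumentation-probe sketch: wrap ssl.SSLSocket's send/recv so plaintext is
# logged on its way into/out of the TLS library (works only inside a Python client).
_orig_send = ssl.SSLSocket.send
_orig_recv = ssl.SSLSocket.recv

def _logged_send(self, data, *args, **kwargs):
    print(">>>", bytes(data[:120]))            # outgoing plaintext (truncated)
    return _orig_send(self, data, *args, **kwargs)

def _logged_recv(self, buflen=1024, *args, **kwargs):
    data = _orig_recv(self, buflen, *args, **kwargs)
    print("<<<", data[:120])                   # incoming plaintext (truncated)
    return data

ssl.SSLSocket.send = _logged_send
ssl.SSLSocket.recv = _logged_recv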
I would recommend Wireshark; it is the best tool for following different pieces of traffic. Although I am not sure what you can see with SSL turned on. Maybe if you supply it with a certificate?