Why can I see SSL communication as plain text in a sniffer? - wcf

I've created a WCF service and expose it via SSL. I have little knowledge of security, but I'm curious why I can see the whole communication as plain text in HTTP Analyzer, even though POSTs are sent via HTTPS.
When my client application invokes the WCF service, I can see everything in the sniffer - passwords and so on.
Does that mean SSL works only on a lower layer - while transporting the data? So any malicious application on the client's side can sniff the communication, and the encryption only protects us against a man-in-the-middle?

SSL does indeed work on a "lower layer" than HTTP. In terms of the OSI model, SSL sits at the session layer, while HTTP is at the application layer.
Most of these client-side HTTP analyzers work from within the browser, inspecting the HTTP traffic at the application layer, before it is handed to the SSL logic. So it is completely normal to see the plain HTTP request.
Concerning security: a malicious application installed within the browser can indeed read the traffic there. But once the traffic has been processed by the SSL layer, reading it becomes much harder for a malicious application.
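To illustrate the layering, here is a minimal Python sketch (example.com stands in for any HTTPS endpoint): the HTTP request exists as plain text at the application layer, and only the TLS-wrapped socket encrypts it on the wire. Anything that hooks the process before the wrap - as an in-browser analyzer does - sees it readable:

    import socket, ssl

    host = "example.com"  # placeholder endpoint
    request = ("GET / HTTP/1.1\r\n"
               "Host: example.com\r\n"
               "Connection: close\r\n\r\n")
    print(request)  # application layer: still perfectly readable

    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(request.encode())  # encrypted from this point on
            print(tls.recv(200))           # decrypted again inside the process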
SSL works by first authenticating the server to you, the client. (Am I talking to the one I really want to talk to?) As you can't know all servers and their certificates beforehand, you rely on a set of well-known root certificates that come pre-installed with your OS. These are used to check whether a server is vouched for by an already trusted authority. (I don't know you, but a party I trust tells me that you indeed are who you say you are.)
This authentication step works independently of the encryption of the traffic. No program can decrypt an arbitrary SSL stream just by "installing a root certificate". (As said, these root certificates are already on your machine from the first moment you install the OS.)
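As a concrete illustration of that step, the handshake simply aborts when no pre-installed root vouches for the server's certificate. A minimal Python sketch, using self-signed.badssl.com, a public test host that deliberately presents an untrusted certificate:

    import socket, ssl

    ctx = ssl.create_default_context()  # trusts the pre-installed roots
    try:
        with socket.create_connection(("self-signed.badssl.com", 443)) as raw:
            with ctx.wrap_socket(raw, server_hostname="self-signed.badssl.com"):
                pass
    except ssl.SSLCertVerificationError as err:
        print("rejected:", err.verify_message)  # no trusted root vouches for it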
But if a malicious program is able to make you believe that you are talking to a legitimate server - using a forged root certificate, for example - while you are actually talking to the malware, then it can see the contents of the SSL traffic. But then again, you are talking to the malicious program itself, not the server you intended to talk to. This is, however, not what HTTP Analyzer does.
That, in short, is how SSL works, and hopefully it answers your question.

Most likely, HTTP Analyzer installs its own root certificate and intercepts the SSL traffic, working as a man-in-the-middle.

Related

Which proxy mode to use if host company terminates TLS on reverse proxy

Friendly Disclaimer: I am new to working with Keycloak and IdP in general. So it's likely that I use incorrect terminology and/or am more confused than I think I am. Corrections are gratefully accepted.
My question is conceptual.
I have a TLS certificate that is terminated on my host machine by my hosting company. My reverse proxy (Traefik) is picking up that certificate.
Which of the following proxy modes should I use now to be able to deploy Keycloak to production: edge, reencrypt or passthrough? (see here for relevant documentation)
I can pretty much rule out passthrough because, as I wrote, TLS is terminated on the server. But I am unsure whether I have to bring my own certificate and use reencrypt, or whether it is considered safe to go with edge.
I have done my best to keep this question short and general. However, I am happy to share configurations or further details if needed.
As far as I know, most organizations consider a request safe once the proxy has validated and terminated the TLS. It also removes the re-encryption performance overhead (depending on your load). Unless your organization is going for Zero Trust on its internal network, using edge should be perfectly acceptable.
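For illustration, a minimal sketch of what edge mode can look like for a Quarkus-based Keycloak container behind Traefik. The variable names below come from recent Keycloak releases and the hostname is a placeholder, so verify them against the documentation linked above:

    # Keycloak container environment (illustrative)
    KC_PROXY=edge                  # TLS is terminated at the reverse proxy
    KC_HTTP_ENABLED=true           # Keycloak listens on plain HTTP internally
    KC_HOSTNAME=auth.example.com   # hypothetical public hostname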

Automated ACME subdomain SSL certificate generation for resources on different IP addresses

I've been investigating the possibility of migrating to using Let's Encrypt to maintain the SSL certificates we have in place for the various resources we use for our operations. We have the following resources using SSL certificates:
Main website (www.example.com / example.com) - Hosted and maintained by a 3rd party who also maintains the SSL certificate
Client portal website (client.example.com) - IIS site hosted and maintained by us on a server located in a remote data center
FTP server (ftp.example.com) - WS_FTP Server hosted and maintained by us on a server located in a remote data center
Hardware firewall (firewall.example.com) - Local security appliance for our internal network
Remote Desktop Gateway (rd.example.com) - RDP server hosted and maintained by us on a server located locally
As indicated above, the SSL certificate for the main website (www) is maintained by the 3rd-party host, so I don't generally mess with that one. However, as you can tell, the DNS records for each of these endpoints point to a variety of different IP addresses. This is where my inexperience with the overall process of issuing and deploying SSL certificates has me a bit confused.
First of all, since I don't manage or maintain the main website, I'm currently generating the CSRs manually for each of the endpoints from the server/service that provides the endpoint - one from the IIS server, a different one from the RDP server, another from the WS_FTP server, and one from the hardware firewall. The manual process, while not excessively time-consuming, still requires me to go through several steps with different server systems requiring different processes.
I've considered using one of Let's Encrypt's free wildcard SSL certificates to cover all four of these endpoints (*.example.com), but I don't want to "interfere" with what our main website host is doing on that end. I realize the actual certificate itself is presented by the server to which the client is connecting, so it shouldn't matter (right?), but I'd probably still be more comfortable with individual SSL certificates for each of the subdomain endpoints.
So, I've been working on building an application using the Certes ACME client library in an attempt to automatically handle the entire SSL process from CSR to deployment. However, I've run into a few snags:
The firewall is secured against connections on port 80, so I wouldn't be able to serve up the HTTP-01 validation file for that subdomain (firewall.example.com) on the device itself. The same is true for the FTP server's subdomain (ftp.example.com).
My DNS is hosted with a provider that does not currently offer an API (they say they're working on one), so I can't automate the process of the DNS-01 validation by writing the TXT record to the zone file.
I found the TLS-ALPN-01 validation method, but I'm not sure whether or not this is appropriate for the use case I'm trying to implement. According to the description of this method from Let's Encrypt (emphasis mine):
This challenge is not suitable for most people. It is best suited to authors of TLS-terminating reverse proxies that want to perform host-based validation like HTTP-01, but want to do it entirely at the TLS layer in order to separate concerns. Right now that mainly means large hosting providers, but mainstream web servers like Apache and Nginx could someday implement this (and Caddy already does).
Pros:
It works if port 80 is unavailable to you.
It can be performed purely at the TLS layer.
Cons:
It’s not supported by Apache, Nginx, or Certbot, and probably won’t be soon.
Like HTTP-01, if you have multiple servers they need to all answer with the same content.
This method cannot be used to validate wildcard domains.
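For context, the externally visible part of a TLS-ALPN-01 validation is just a TLS handshake on port 443 that offers the special acme-tls/1 ALPN protocol. Here is a rough Python sketch of such a probe, under the assumption that client.example.com is the host being validated; certificate verification is disabled only because the challenge certificate is intentionally self-signed:

    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False               # the challenge cert is self-signed
    ctx.verify_mode = ssl.CERT_NONE
    ctx.set_alpn_protocols(["acme-tls/1"])   # what the CA's validator offers

    with socket.create_connection(("client.example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="client.example.com") as tls:
            print(tls.selected_alpn_protocol())  # should be "acme-tls/1"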
So, based on my research so far and my environment, my three biggest questions are these:
Would the TLS-ALPN-01 validation method be an effective - or even available - option for generating the individual SSL certificates for each subdomain? Since the firewall and FTP server cannot currently serve up the appropriate files on port 80, I don't see any way to use the HTTP-01 validation for these subdomains. Not being able to use an API to automate a DNS-01 validation would make that method generally more trouble than it's worth. While I could probably do the HTTP-01 validation for the client portal - and maybe the RDP server (I haven't gotten that far in my research yet) - I'd still be left with handling the other two subdomains manually.
Would I be better off trying to do a wildcard certificate for the subdomains? Other than "simplifying" the process by reducing the number of SSL certificates that need to be issued, is there any inherent benefit to going this route versus using individual certificates for each subdomain? Since the main site is hosted/managed by a 3rd-party and (again) I can't currently use an API to automate a DNS-01 validation, I suppose I would need to use an HTTP-01 validation. Based on my understanding, that means that I would need to get access/permission to create the response file, along with the appropriate directories on that server.
Just to be certain, is there any chance of causing some sort of "conflict" if I were to generate/deploy a wildcard certificate to the subdomains while the main website still used its own SSL certificate for the www? Again, I wouldn't think that to be the case, but I want to do my best to avoid introducing more complexity and/or problems into the situation.
I've responded to your related question at https://community.certifytheweb.com/t/tls-alpn-01-validation/1444/2, but the answer is to use DNS validation, and my suggestion is to use Certify DNS (https://docs.certifytheweb.com/docs/dns/providers/certifydns), which is a managed cloud implementation of acme-dns (CNAME delegation of DNS challenge responses).
Certify DNS is compatible with most existing acme-dns clients, so it can be used with those as well as with Certify The Web (https://certifytheweb.com).
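For reference, CNAME delegation means adding one static record per hostname, pointing the ACME challenge name at a zone that can be updated through an API. The target names below are purely hypothetical:

    ; illustrative zone entries - the right-hand names are placeholders
    _acme-challenge.client.example.com.  IN CNAME  abc123.your-acme-dns-zone.example.
    _acme-challenge.ftp.example.com.     IN CNAME  def456.your-acme-dns-zone.example.

The ACME client then writes the DNS-01 TXT response into the delegated zone, so the provider hosting example.com never needs an API.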

What are the alternatives to a firewall for securing a web server

I'm doing a network security course and trying to wrap my head around all the concepts. One of which is:
What technology other than a firewall can be used to allow only specific customers while blocking others? Why is a firewall not suitable?
During the course, I've been learning about all the security tools, such as firewalls (static, dynamic, DPI), proxies, VPNs, tunnels, all sorts of IDS (signature, anomaly, darknet/greynet, and honeypots), and mod_security to secure Apache, but I'm still puzzled by this question.
Any insights here will be greatly appreciated.
A firewall implies that you block based on the customer's IP address. This may work if the customer has their own range of addresses and all requests from that range are legitimate.
It gets complicated when the customer is with a large cloud provider, which provides a wide range of possible IPs, including IPs used by other people.
For an application, one good solution would be to use client-side certificates. In that case, during the TLS handshake (the process of setting up a TLS (formerly SSL) tunnel), the server will request that the client provide a certificate it (the server) trusts. Failure to provide one breaks the connection.
This way, you can distribute certificates to the clients that you want to be able to reach your service, and everyone else will be rejected. This solution is better because it uses technology that was developed exactly to solve this problem. The drawback is that you have to maintain and distribute the certificates (and usually run a PKI).
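As a minimal sketch of the idea - the file names are illustrative - a TLS server that requires a client certificate signed by your own CA could look like this in Python; the handshake simply fails for clients that cannot present one:

    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain("server.crt", "server.key")  # the server's own identity
    ctx.load_verify_locations("clients-ca.crt")      # CA that signed client certs
    ctx.verify_mode = ssl.CERT_REQUIRED              # no client cert, no connection

    listener = socket.socket()
    listener.bind(("0.0.0.0", 8443))
    listener.listen(1)
    with ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()  # handshake happens here
        print("authenticated client:", conn.getpeercert()["subject"])
        conn.close()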

Local HTTPS proxy possible?

TL;DR
I want to set up a local HTTPS proxy that can (LOCALLY) modify the content of HTML pages on my machine. Is this possible?
Motivation
I have used an HTTP Proxy called GlimmerBlocker for years. It started in 2008 as a proxy-based approach to blocking ads (as opposed to browser extensions or other OS X-specific hacks like InputManagers). But besides blocking ads, it also allows the user to inject their own CSS or JavaScript into the page. Development has seriously slowed, but it remains incredibly useful.
The only problem is that it doesn’t do HTTPS (from its FAQ):
Ads on https pages are not blocked
When Safari fetches an https page using a proxy, it doesn't really use the http protocol, but makes a tunneled tcp connection so Safari receives the encrypted bytes. The advantage is that any intermediate proxies can't modify or read the contents of the page, nor the URL. The disadvantage is, that GlimmerBlocker can't modify the content. Even if GlimmerBlocker tried to work as a middleman and decoded/encoded the content, it would have no means of telling Safari to trust it, nor to tell Safari if the websites certificate is valid, so Safari would think you have visited a dubious website.
Fortunately, most ad-providers are not going to switch to https as serving pages using https are much slower and would have a huge processing overhead on the ad-providers servers.
Back in 2008, maybe that last part was true…but not any more.
To be clear, I think the increasing use of SSL is a good thing. I just want to get back the control I had over the content after it arrives on my end.
Points of Confusion
While searching for a solution, I’ve become confused by some apparently contradictory points.
(Also, although I’m quite experienced with the languages of web pages, I’ve always had a difficult time grokking networks and protocols. On that note, sorry if I’m missing something that is way obvious!)
I found this StackOverflow question asking whether HTTPS proxies were possible. The best answer says that “TLS/SSL (The S in HTTPS) guarantees that there are no eavesdroppers between you and the server you are contacting, i.e. no proxies.” (The same answer then described a hack to pull it off, but I don’t understand the instructions. It was very theoretical, anyway.)
In OS X under Network Preferences ▶︎ Advanced… ▶︎ Proxies, there is clearly a setting for an HTTPS proxy. This seems to contradict the previous statement that TLS/SSL’s guarantee against eavesdropping implies the impossibility of proxies.
Other things of note
I can’t remember where, but I read that it is possible to set up an HTTPS proxy, but that it makes HTTPS pointless (by breaking the secure communication in the process). I don’t want this! Encryption is good. I don’t want to filter anyone else’s traffic; I just want something to customize the content after I’ve already received it.
GlimmerBlocker has a nice GUI interface, but I’m fine with non-GUI solutions, too. I may have a poor understanding of networking and protocols, but I’m perfectly comfortable on the command line, tweaking settings in text editors, and so on.
Is what I’m asking possible? Or is my question a case of “either you get security, or you can break it with hacks and get to customize your content—but not both”?
The common idea of an HTTP proxy is a server which accepts a CONNECT request, containing the target hostname and port, and then just builds a tunnel to the target server. All the HTTPS is done inside the tunnel, so there is no way for the proxy to modify it (end-to-end security from browser to web server).
To modify the data you need a proxy which plays man-in-the-middle. In this case you have one HTTPS connection between the proxy and the web server and another between the browser and the proxy. Between proxy and web server the original server certificate is used, while between browser and proxy a newly created certificate is used, signed by a CA specific to the proxy. Of course, this CA must be imported as trusted into the browser; otherwise it would complain all the time about a possible attack.
Of course, all the verification of the original server certificate has to be done by the proxy now, and not all solutions do this correctly. See also http://www.secureworks.com/cyber-threat-intelligence/threats/transitive-trust/
There are several proxy solutions which can do this SSL interception, like Squid, mitmproxy (Python) or App::HTTP_Proxy_IMP (Perl). The last two are specifically designed to let you modify the content with your own code, so they might be good places to start.
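To make this concrete, here is a minimal mitmproxy addon sketch that rewrites HTML after decryption - the CSS rule is a placeholder for whatever you want to inject:

    # inject.py - run with:  mitmproxy -s inject.py
    # (mitmproxy's generated CA must be trusted by the browser, as described above)
    from mitmproxy import http

    CUSTOM_CSS = "<style>.ad { display: none !important; }</style>"  # placeholder

    def response(flow: http.HTTPFlow) -> None:
        # Only touch HTML responses; leave images, scripts, etc. alone.
        if "text/html" in flow.response.headers.get("content-type", ""):
            flow.response.text = flow.response.text.replace(
                "</head>", CUSTOM_CSS + "</head>", 1)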

Capturing HTTPS traffic in the clear?

I've got a local application (which I didn't write, and can't change) that talks to a remote web service. It uses HTTPS, and I'd like to see what's in the traffic.
Is there any way I can do this? I'd prefer a Windows system, but I'm happy to set up a proxy on Linux if this makes things easier.
What I'm considering:
Redirecting the web site by hacking my hosts file (or setting up alternate DNS).
Installing an HTTPS server on that site, with a self-signed (but trusted) certificate.
Apparently, Wireshark can see what's in HTTPS if you feed it the private key. I've never tried this.
Somehow, proxy this traffic to the real server (i.e. it's a full-blown man-in-the-middle "attack").
Does this sound sensible? Can Wireshark really see what's in HTTPS traffic? Can anyone point me at a suitable proxy (and configuration for same)?
Does Fiddler do what you want?
What is Fiddler?
Fiddler is a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect all HTTP(S) traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.
Fiddler is freeware and can debug traffic from virtually any application, including Internet Explorer, Mozilla Firefox, Opera, and thousands more.
Wireshark can definitely display TLS/SSL encrypted streams as plaintext. However, you will need the server's private key to do so. The private key must be added to Wireshark as an SSL option under preferences. Note that this only works if you can follow the SSL stream from the start; it will not work if an SSL session is reused.
For Internet Explorer, this (SSL session reuse) can be avoided by clearing the SSL state using the Internet Options dialog. Other environments may require restarting the browser or even rebooting the system (to avoid SSL session reuse).
The other key constraint is that an RSA key exchange must be used. Wireshark cannot decode TLS/SSL streams that use Diffie-Hellman key exchange, because the server's private key alone is not enough to recover the session keys.
Assuming you can satisfy the constraints above, the "Follow SSL Stream" right-click command works rather well.
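For reference, the key is registered under Preferences > Protocols > SSL (TLS in newer versions) as an "RSA keys list" entry. The exact dialog and format vary by Wireshark version, so treat this line as an illustration only:

    # <server ip>,<port>,<protocol>,<path to private key>
    192.168.1.10,443,http,/path/to/server.key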
You need to set up a proxy for your local application, and if it doesn't honour proxy settings, put a transparent proxy in place and route all HTTPS traffic through it before it goes outside. Something like this can be the "man" in the middle: http://crypto.stanford.edu/ssl-mitm
Also, here are brief instructions on how to achieve this with Wireshark: http://predev.wikidot.com/decrypt-ssl-traffic
You should also consider Charles. From the product description at the time of this answer:
Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).
Whether an HTTPS proxy can be used to monitor the traffic depends on the type of handshake. If your local application does not check the server's certificate against a CA signature (which you cannot fake), and the server does not check your local application's certificate (or you have one you can set up on the HTTPS proxy), then you can use an HTTPS proxy to monitor the traffic. Otherwise, I think it is impossible.
Another approach is to add instrumentation probes at the routines where your client program sends and receives messages through its HTTPS library. That requires some reverse-engineering work, but it should work in all situations.
I would recommend Wireshark; it is the best tool for following different pieces of traffic. Although I am not sure what you can see with SSL turned on - maybe if you supply it with the server's private key?