2 different SSL Certificates (and servers) on 1 webpage

I need to know whether it is possible for a webpage secured with SSL to fetch an image from a different server that is also secured with SSL, potentially using a different certificate. Difficult or trivial? I'm battling to find a real-world way of testing this setup, so I'm asking here.
Further explanation...
A page under SSL (https://their-server.com/theirPage.html) needs to call an image script (a C# server page delivering an image), e.g. <img src="https://my-server.com/image.ashx?id=1" />, but my server CAN'T use the other server's SSL certificate...
Is the mix of SSL certificates going to cause any problems or popups that will intimidate the user or make the page look insecure in any fashion?

So long as the images are loaded over a secure (https) connection, it doesn't matter at all what certificate is used.

As you said, the server you are fetching the images from is already secured with SSL, right? Then it doesn't matter which SSL certificate it uses.
I don't think you will have any problem fetching those images!

Related

Automated ACME subdomain SSL certificate generation for resources on different IP addresses

I've been investigating the possibility of migrating to using Let's Encrypt to maintain the SSL certificates we have in place for the various resources we use for our operations. We have the following resources using SSL certificates:
Main website (www.example.com / example.com) - Hosted and maintained by a 3rd party who also maintains the SSL certificate
Client portal website (client.example.com) - IIS site hosted and maintained by us on a server located in a remote data center
FTP server (ftp.example.com) - WS_FTP Server hosted and maintained by us on a server located in a remote data center
Hardware firewall (firewall.example.com) - Local security appliance for our internal network
Remote Desktop Gateway (rd.example.com) - RDP server hosted and maintained by us on a server located locally
As indicated above, the SSL certificate for the main website (www) is maintained by the 3rd-party host, so I don't generally mess with that one. However, as you can tell, the DNS records for each of these endpoints point to a variety of different IP addresses. This is where my inexperience with the overall process of issuing and deploying SSL certificates has me a bit confused.
First of all, since I don't manage or maintain the main website, I'm currently manually generating the CSRs for each of the endpoints from the server/service that provides the endpoint - one from the IIS server, a different one from the RDP server, another from the WS_FTP server, and one from the hardware firewall. The manual process, while not excessively time-consuming, still requires me to go through several steps with different server systems requiring different processes.
I've considered using one of Let's Encrypt's free wildcard SSL certificates to cover all four of these endpoints (*.example.com), but I don't want to "interfere" with what our main website host is doing on that end. I realize the actual certificate itself is presented by the server to which the client is connecting, so it shouldn't matter (right?), but I'd probably still be more comfortable with individual SSL certificates for each of the subdomain endpoints.
So, I've been working on building an application using the Certes ACME client library in an attempt to automatically handle the entire SSL process from CSR to deployment. However, I've run into a few snags:
The firewall is secured against connections on port 80, so I wouldn't be able to serve up the HTTP-01 validation file for that subdomain (firewall.example.com) on the device itself. The same is true for the FTP server's subdomain (ftp.example.com).
My DNS is hosted with a provider that does not currently offer an API (they say they're working on one), so I can't automate the process of the DNS-01 validation by writing the TXT record to the zone file.
I found the TLS-ALPN-01 validation method, but I'm not sure whether or not this is appropriate for the use case I'm trying to implement. According to the description of this method from Let's Encrypt (emphasis mine):
This challenge is not suitable for most people. It is best suited to authors of TLS-terminating reverse proxies that want to perform host-based validation like HTTP-01, but want to do it entirely at the TLS layer in order to separate concerns. Right now that mainly means large hosting providers, but mainstream web servers like Apache and Nginx could someday implement this (and Caddy already does).
Pros:
It works if port 80 is unavailable to you.
It can be performed purely at the TLS layer.
Cons:
It’s not supported by Apache, Nginx, or Certbot, and probably won’t be soon.
Like HTTP-01, if you have multiple servers they need to all answer with the same content.
This method cannot be used to validate wildcard domains.
So, based on my research so far and my environment, my three biggest questions are these:
Would the TLS-ALPN-01 validation method be an effective - or even available - option for generating the individual SSL certificates for each subdomain? Since the firewall and FTP server cannot currently serve up the appropriate files on port 80, I don't see any way to use the HTTP-01 validation for these subdomains. Not being able to use an API to automate a DNS-01 validation would make that method generally more trouble than it's worth. While I could probably do the HTTP-01 validation for the client portal - and maybe the RDP server (I haven't gotten that far in my research yet) - I'd still be left with handling the other two subdomains manually.
Would I be better off trying to do a wildcard certificate for the subdomains? Other than "simplifying" the process by reducing the number of SSL certificates that need to be issued, is there any inherent benefit to going this route versus using individual certificates for each subdomain? Since the main site is hosted/managed by a 3rd-party and (again) I can't currently use an API to automate a DNS-01 validation, I suppose I would need to use an HTTP-01 validation. Based on my understanding, that means that I would need to get access/permission to create the response file, along with the appropriate directories on that server.
Just to be certain, is there any chance of causing some sort of "conflict" if I were to generate/deploy a wildcard certificate to the subdomains while the main website still used its own SSL certificate for the www? Again, I wouldn't think that to be the case, but I want to do my best to avoid introducing more complexity and/or problems into the situation.
I've responded to your related question on https://community.certifytheweb.com/t/tls-alpn-01-validation/1444/2, but the answer is to use DNS validation, and my suggestion is to use Certify DNS (https://docs.certifytheweb.com/docs/dns/providers/certifydns), a managed cloud implementation of acme-dns (CNAME delegation of DNS challenge responses).
Certify DNS is compatible with most existing acme-dns clients, so it can be used with those as well as with Certify The Web (https://certifytheweb.com).
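For context on why the CNAME delegation helps: with an acme-dns style setup you create a one-time CNAME from _acme-challenge.<subdomain>.example.com to a record in a zone that does have an API, so at renewal time only that delegated zone needs updating. The sketch below (plain Python, no ACME library, names purely illustrative) shows what the DNS-01 response actually is per RFC 8555 - a single TXT value derived from the challenge token and your ACME account key - which is why the only thing a DNS provider needs to support is setting one TXT record (or you delegate that record away, as suggested above).

    import base64
    import hashlib
    import json

    def b64url(data: bytes) -> str:
        # Base64url without padding, as ACME (RFC 8555) requires.
        return base64.urlsafe_b64encode(data).decode().rstrip("=")

    def dns01_txt_value(token: str, account_jwk: dict) -> str:
        """TXT value to publish at _acme-challenge.<name> for a DNS-01 challenge.

        token       -- the challenge token issued by the CA (e.g. Let's Encrypt)
        account_jwk -- the ACME account public key as a JWK dict containing ONLY
                       the required members (e.g. "e", "kty", "n" for RSA), since
                       the RFC 7638 thumbprint is computed over exactly those.
        """
        canonical = json.dumps(account_jwk, sort_keys=True, separators=(",", ":"))
        thumbprint = b64url(hashlib.sha256(canonical.encode()).digest())
        key_authorization = f"{token}.{thumbprint}"
        return b64url(hashlib.sha256(key_authorization.encode()).digest())

In practice your ACME client (Certes, Certify The Web, an acme-dns client, etc.) computes and publishes this for you; the point is only that DNS-01 never needs port 80 or 443 on the target device, which sidesteps the firewall and FTP server restrictions entirely.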

SNI unrecognized_name warning when terminating TLS at HAProxy

We have a client whose code was written in Java 1.7. Java 1.7 by default refuses to connect via HTTPS to servers who return an SNI unrecognized_name warning. It's possible to turn off this behavior, but (of course) our client can't do that. Most other clients just ignore the warning.
We have a valid wildcard certificate for our domain, let's call it *.widgets.com. Anything in the domain widgets.com resolves to our HAProxy load balancer. We've installed that cert onto the load balancer, and we specify it in the front-end that listens on port 443. The cert is current and checks out fine when we test it from Qualys... except for that SNI warning.
Our client makes a call to a specific subdomain, say foo.widgets.com. The service is working fine, serving up content to anyone who calls it. Except for our client, of course, who won't connect to us after we return the SNI warning.
I've found lots of articles about how to solve this problem on Apache, but those don't help me with HAProxy. On HAProxy, I see that I can specify more than one cert, and I am told that HAProxy will "choose the right one". Do I need to get a separate, non-wildcard cert for foo.widgets.com? I don't want to buy another cert only to find out that that was not the solution.
Turns out the problem had little to do with HAProxy. Apparently we had an intrusion detection system in place that would terminate TLS prior to relaying down to HAProxy.
There is probably a way to make the IDS behave properly, presenting the correct certificate to the client. But we don't really need IDS on our non-prod environments, anyway. So we left it switched off, and the problem went away.
So if you're having a similar issue, after making sure that your certificate is good for the request you're testing, my advice would be to check whether you have any security software that could intercept traffic before it reaches your LB.
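If anyone hits something similar, a quick way to gather evidence is to check which certificate is actually presented for the SNI name in question from different network vantage points (e.g. inside the data center vs. through the path that includes the IDS). A rough sketch using Python's standard ssl module - the host names are the made-up ones from the question:

    import socket
    import ssl

    def report_peer_cert(host: str, sni: str, port: int = 443) -> None:
        """Connect sending the given SNI name and report the certificate presented."""
        ctx = ssl.create_default_context()
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=sni) as tls:
                    cert = tls.getpeercert()
                    print("subject:", cert.get("subject"))
                    print("issuer :", cert.get("issuer"))
                    print("SANs   :", cert.get("subjectAltName"))
        except ssl.SSLError as exc:
            # A verification failure or TLS alert here is itself a useful data point.
            print("TLS handshake failed:", exc)

    # report_peer_cert("foo.widgets.com", sni="foo.widgets.com")

If the subject or issuer changes depending on where you connect from, something in the middle (like the IDS in this case) is terminating TLS before the load balancer ever sees the connection.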

Local HTTPS proxy possible?

TL;DR
I want to set up a local HTTPS proxy that can (LOCALLY) modify the content of HTML pages on my machine. Is this possible?
Motivation
I have used an HTTP Proxy called GlimmerBlocker for years. It started in 2008 as a proxy-based approach to blocking ads (as opposed to browser extensions or other OS X-specific hacks like InputManagers). But besides blocking ads, it also allows the user to inject their own CSS or JavaScript into the page. Development has seriously slowed, but it remains incredibly useful.
The only problem is that it doesn’t do HTTPS (from its FAQ):
Ads on https pages are not blocked
When Safari fetches an https page using a proxy, it doesn't really use the http protocol, but makes a tunneled tcp connection so Safari receives the encrypted bytes. The advantage is that any intermediate proxies can't modify or read the contents of the page, nor the URL. The disadvantage is, that GlimmerBlocker can't modify the content. Even if GlimmerBlocker tried to work as a middleman and decoded/encoded the content, it would have no means of telling Safari to trust it, nor to tell Safari if the websites certificate is valid, so Safari would think you have visited a dubious website.
Fortunately, most ad-providers are not going to switch to https as serving pages using https are much slower and would have a huge processing overhead on the ad-providers servers.
Back in 2008, maybe that last part was true…but not any more.
To be clear, I think the increasing use of SSL is a good thing. I just want to get back the control I had over the content after it arrives on my end.
Points of Confusion
While searching for a solution, I’ve become confused by some apparently contradictory points.
(Also, although I’m quite experienced with the languages of web pages, I’ve always had a difficult time grokking networks and protocols. On that note, sorry if I’m missing something that is way obvious!)
I found this StackOverflow question asking whether HTTPS proxies were possible. The best answer says that “TLS/SSL (The S in HTTPS) guarantees that there are no eavesdroppers between you and the server you are contacting, i.e. no proxies.” (The same answer then described a hack to pull it off, but I don’t understand the instructions. It was very theoretical, anyway.)
In OS X under Network Preferences ▶︎ Advanced… ▶︎ Proxies, there is clearly a setting for an HTTPS proxy. This seems to contradict the previous statement that TLS/SSL’s guarantee against eavesdropping implies the impossibility of proxies.
Other things of note
I can’t remember where, but I read that it is possible to set up an HTTPS proxy, but that it makes HTTPS pointless (by breaking the secure communication in the process). I don’t want this! Encryption is good. I don’t want to filter anyone else’s traffic; I just want something to customize the content after I’ve already received it.
GlimmerBlocker has a nice GUI interface, but I’m fine with non-GUI solutions, too. I may have a poor understanding of networking and protocols, but I’m perfectly comfortable on the command line, tweaking settings in text editors, and so on.
Is what I’m asking possible? Or is my question a case of “either you get security, or you can break it with hacks and get to customize your content—but not both”?
The common idea of an HTTP proxy is a server which accepts a CONNECT request that includes the target hostname and port and then just builds a tunnel to the target server. All the https is done inside the tunnel, so there is no way for the proxy to modify it (end-to-end security from browser to web server).
To modify the data you need a proxy which plays man-in-the-middle. In this case you have one https connection between the proxy and the web server and another https connection between the browser and the proxy. Between proxy and web server the original server certificate is used, while between browser and proxy a newly created certificate is used, signed by a CA specific to the proxy. Of course this CA must be imported as trusted into the browser, otherwise it would complain all the time about possible attacks.
Of course - all the verification of the original server certificate has to be done in the proxy now, and not all solutions do this the correct way. See also http://www.secureworks.com/cyber-threat-intelligence/threats/transitive-trust/
There are several proxy solutions which can do this SSL interception, like Squid, mitmproxy (Python) or App::HTTP_Proxy_IMP (Perl). The last two are specifically designed to let you modify the content with your own code, so these might be good places to start.
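To make that last suggestion a bit more concrete, here is a minimal sketch of a mitmproxy addon that injects your own CSS into HTML pages, roughly in the spirit of what GlimmerBlocker does; the CSS rule and file name are made up. You still have to install mitmproxy's generated CA certificate into the system or browser trust store, and point the OS X HTTPS proxy setting at it (mitmproxy listens on port 8080 by default) - exactly the man-in-the-middle trade-off described above.

    # inject.py -- run with:  mitmdump -s inject.py   (or mitmproxy -s inject.py)
    from mitmproxy import http

    # Whatever you want added to every page; this rule is just an example.
    CUSTOM_CSS = "<style>.ad, .banner { display: none !important; }</style>"

    def response(flow: http.HTTPFlow) -> None:
        # Only touch HTML responses; leave images, scripts, APIs, etc. alone.
        content_type = flow.response.headers.get("content-type", "")
        if "text/html" in content_type and flow.response.text:
            flow.response.text = flow.response.text.replace(
                "</head>", CUSTOM_CSS + "</head>", 1
            )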

SSL certificates with unknown domain name

We're having an issue with securing an intranet / internet website with SSL where we can't know the fully qualified domain name in advance.
Basically, I'm trying to make a program that will be installed on a webserver outside my direct control, to be accessible over intranet or internet. In either case I want it to be secure via SSL (https). To do this, I would like to include and install an SSL certificate on the target machine. My installer is fully prepackaged and should not require any particular during- or post-install intervention from my end. The problem is, I can't know ahead of time the target machine's name or domain name, so as far as I can tell the SSL connection will be returning warnings (or worse?) when accessed, since the certificate I include will (must) have a different name on it.
I really want to avoid those warnings, but I still want to keep it secure. Is there any way to set up an SSL connection without certificate warnings when the domain name isn't known ahead of time?
Thanks for any help you folks can give.
What you want to do is not possible. Here's why.
A certificate will include a set of names (Common Name, possibly along with Subject Alternative Names, possibly including wildcard names).
The client's web browser will do the following:
The user wanted to visit "https://myapp.mydomain.com/blog/posts/1".
The request is via SSL and the domain name in the request is "myapp.mydomain.com".
Get the certificate from the Web server.
Ensure that at least one of the names in the certificate is exactly equal to, or wildcard-matches, the domain name in the request.
Display the page.
Therefore, you need a certificate with the exact domain name (or a wildcard matching the exact domain name) by which the application will be used. And the certificate needs to be available at the same time as, or later than, the time when the exact domain name of the website becomes known, and cannot be made available any earlier.
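As an illustration of that matching step (simplified - the real rules are in RFC 6125), this is roughly what "exactly equal to, or wildcard-matches" means; note that a wildcard covers exactly one DNS label:

    def name_matches(cert_name: str, requested_host: str) -> bool:
        """Simplified certificate name matching: exact match, or a wildcard
        label that covers exactly one DNS label."""
        cert_labels = cert_name.lower().split(".")
        host_labels = requested_host.lower().split(".")
        if len(cert_labels) != len(host_labels):
            return False
        return all(c == "*" or c == h for c, h in zip(cert_labels, host_labels))

    # name_matches("myapp.mydomain.com", "myapp.mydomain.com")  -> True
    # name_matches("*.mydomain.com", "myapp.mydomain.com")      -> True
    # name_matches("*.mydomain.com", "a.b.mydomain.com")        -> False (label count differs)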
You seem to be under the misapprehension that somehow a certificate can "create" or "install" an SSL connection. That is false. The Web server - Apache, IIS, Nginx, lighttpd, or whichever one you happen to use - is the program that knows how to handle every aspect of SSL connectivity. The certificate is just a file that the Web server sends to the client; it does not do anything by itself.
Additionally, the author of a webapp to be distributed is not responsible for creating or distributing certificates, and should not be under the misapprehension that he is responsible. Only the website maintainer should be responsible for obtaining a certificate for his website. As another person remarked, in your installation process or perhaps in a post-installation process, you may ask the person installing the webapp for a certificate. But that is the best you can do.
The best you can do is to buy a wildcard SSL certificate - but wait, it's not what you think. You still need to know the second-level domain (the TLD being ".com") ahead of time. You can effectively ask for a cert that covers *.foo.com - then any site a.foo.com, b.foo.com will be covered. Of course, these certs are more expensive than single-name (FQDN) certs because you are doing the buggers out of some extra coin.
-Oisin
Each of those sites should have their own SSL certificate. Why not prompt the user to provide the cert file during installation?
In most (if not all) cases, the SSL certificate is associated with the webserver (apache, IIS, etc.) and is not part of your application. It's up to the admin of the web server to install the certificate and not you as the author of the program.
If your installation program does have the ability to modify the web server configuration, and you are willing to have it use a self-signed certificate, you can script the creation of the certificate to allow the domain name to be input. However, I sense this is not really available to you. Also, a self-signed certificate will generally cause certificate warnings.
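If you do go the scripted self-signed route, here is a rough sketch using the third-party Python cryptography package, with the hostname supplied at install time. To be clear, this only removes the name-mismatch problem; the "untrusted issuer" warning remains because the certificate is self-signed.

    from datetime import datetime, timedelta

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    def make_self_signed(hostname: str):
        """Generate an RSA key and a one-year self-signed cert for `hostname`."""
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, hostname)])
        cert = (
            x509.CertificateBuilder()
            .subject_name(name)
            .issuer_name(name)                      # self-signed: issuer == subject
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(datetime.utcnow())
            .not_valid_after(datetime.utcnow() + timedelta(days=365))
            .add_extension(
                x509.SubjectAlternativeName([x509.DNSName(hostname)]),
                critical=False,
            )
            .sign(key, hashes.SHA256())
        )
        key_pem = key.private_bytes(
            serialization.Encoding.PEM,
            serialization.PrivateFormat.TraditionalOpenSSL,
            serialization.NoEncryption(),
        )
        cert_pem = cert.public_bytes(serialization.Encoding.PEM)
        return key_pem, cert_pem

The resulting PEM files would then be installed into whatever web server configuration your installer is able to modify.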
If I understand you correctly there might be a solution to your problem now. This solution won't help you, however, if you have no control over specifying what SSL certificates are served from the web server where your program is installed (as mentioned by someone else). If your program itself contains a web server you won't have this issue.
If you start with a trusted https website, you can make cross-domain TLS (SSL) XmlHttpRequests to the web servers that are running your application. This is made possible using the opensource Forge project. The project uses a TLS implementation written in JavaScript and a small Flash swf to handle the cross-domain requests. Your program will need to serve an XML Flash policy file that grants the trusted website access to the web server running the application.
Your program will also need to generate a self-signed SSL certificate and upload it to the trusted website. From there, each program's certificate can be included as trusted via the JavaScript TLS implementation. Alternatively, you can have your program upload its certificate to be signed by a CA you create, using a common or subject alternative name that is appropriate for your use (it doesn't have to be the domain name). Then you can use JavaScript to trust the CA certificate and look for the correct name on each certificate.
For more details check out the Forge project at github:
http://github.com/digitalbazaar/forge/blob/master/README
The links to the blog posts at the end provide more in-depth information about how it works.

Problem viewing sites with SSL certs

I am managing a number of websites that use SSL certificates and have had a few complaints from individuals that are not able to view some of these sites in secure mode. The problem persists regardless of browser or version that is used, does not affect viewing in non-secure mode, and only occurs with a few of the secure sites, not all. Each site has a separate SSL certificate.
I don't have any idea what may be causing this problem or how to address it and would appreciate any helpful questions or ideas that would contribute to fixing it.
Just a hunch, but one possibility could be that weak ciphers are disabled on the server and strong ciphers are not enabled on the client. Another possibility might be that the root certificates list on the client is not up-to-date and the SSL certificate is signed by an authority that is not in the trusted list.
The key here is to find out what they all have in common. Maybe they all use AOL? Are they behind a caching proxy? Can they view other secure sites? Maybe they have a virus or trojan causing the issue?
Maybe they are behind some proxy that is performing interesting operations on SSL connections?
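A practical next step along those lines is to run a small probe from one of the affected machines (or at least get the exact browser error from the user) and compare the negotiated protocol, cipher, and certificate issuer with what a working machine sees. A rough sketch with Python's standard ssl module:

    import socket
    import ssl

    def tls_summary(host: str, port: int = 443) -> None:
        """Print the negotiated protocol, cipher suite and certificate issuer."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                print("protocol:", tls.version())
                print("cipher  :", tls.cipher())
                print("issuer  :", tls.getpeercert().get("issuer"))

    # tls_summary("www.example.com")

Broadly, a handshake failure on the affected machine suggests a protocol/cipher mismatch, while a certificate verification error suggests a missing or out-of-date root certificate on the client.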