I'm setting up HTTPS Load Balancing (LB) on Google Compute Engine (GCE). Key components are outlined in the Overview Diagram.
After successfully creating an HTTP Backend Service where 1 of 1 (GCE) instance is healthy, I decided to do the same for HTTPS. I'm using the Developer Console UI to do this.
The Healthcheck "wizard" provides a drop-down menu for protocol, with the options HTTP and HTTPS:
The successful HTTP Healthcheck used the path :8080/admin/healthcheck.
Presumably the HTTPS Healthcheck will use the path :443/admin/healthcheck. The problem is my HTTPS Healthchecks are failing. This was expected: visiting https://[INSTANCE_IP]:443/admin/healthcheck in a browser could not connect, so I didn't expect the Healthcheck to mark the instance as healthy.
How can I connect to https://[INSTANCE_IP]:443/admin/healthcheck over TLS? Do I merely need to upload a certificate and create a Certificate Resource in the Developer Console (I doubt it)?
I think it's a conceptual problem too.
The URL https://[INSTANCE_IP]:443/admin/healthcheck does exist; I think the Healthcheck fails because the instance doesn't implement TLS.
What is the relationship between uploading a certificate (i.e. creating a Certificate Resource) and a specific GCE instance accepting HTTPS requests, such that HTTPS Healthchecks pass?
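To make the problem concrete, here is a quick probe (a Python sketch for illustration only; the IP is a placeholder) that reproduces what the browser showed: the path answers over plain HTTP on port 8080, but nothing terminates TLS on port 443.

    # Probe the health check path over HTTP (8080) and HTTPS (443).
    import requests

    INSTANCE_IP = "203.0.113.10"  # placeholder for the instance's external IP

    # Plain HTTP health check: succeeds.
    r = requests.get(f"http://{INSTANCE_IP}:8080/admin/healthcheck", timeout=5)
    print("HTTP:", r.status_code)

    # HTTPS health check: fails because nothing on the instance listens for TLS on 443.
    try:
        requests.get(f"https://{INSTANCE_IP}:443/admin/healthcheck", timeout=5, verify=False)
    except requests.exceptions.ConnectionError as exc:
        print("HTTPS:", exc)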
After re-reading the documentation, it is stated:
The client SSL session terminates at the load balancer. Sessions between the load balancer and the instance can either be HTTPS (recommended) or HTTP. If HTTPS, each instance must have a certificate.
It is the last sentence that I was trying to achieve, because HTTPS Healthchecks use an HTTPS URL to check the 'health' of an individual instance:
https://[INSTANCE_IP]:443/admin/healthcheck
Since this was failing, I incorrectly assumed I needed to implement TLS on each instance for the Healthcheck to succeed. However, I do not require each instance to implement TLS (HTTPS), only the Load Balancer.
The final configuration I used involved creating a new HTTPS Target Proxy, which pointed to the same Backend Service used for the HTTP Target Proxy. In other words: two Target Proxies (HTTP and HTTPS), but only one Backend Service.
Since Healthchecks are employed by Backend Services, the only Healthcheck required was the (original) insecure Healthcheck, i.e.
http://[INSTANCE_IP]:8080/admin/healthcheck
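For reference, a rough sketch of that final shape using the Compute Engine API from Python (I actually used the Developer Console; resource names like web-map, https-proxy and lb-cert are made up, and authentication is assumed to come from default credentials):

    # Create an HTTPS Target Proxy pointing at the same URL map (and therefore the
    # same Backend Service and HTTP Healthcheck) as the existing HTTP Target Proxy.
    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")
    project = "my-project"  # placeholder project ID

    compute.targetHttpsProxies().insert(
        project=project,
        body={
            "name": "https-proxy",  # hypothetical name
            "urlMap": f"projects/{project}/global/urlMaps/web-map",
            # The single certificate belongs to the load balancer, not to the instances.
            "sslCertificates": [f"projects/{project}/global/sslCertificates/lb-cert"],
        },
    ).execute()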
The next sentence is important too:
The Beta release of HTTPS load balancing only supports a single SSL certificate with a single load balancing service.
If the beta release only supports a single SSL certificate, I assume this certificate belongs to the LB, and therefore, on the beta at least, it's not actually possible to secure individual instances.
Related
I have two NiFi nodes I want to run behind an AWS Application Load Balancer. This type of load balancer decrypts the incoming request to parse it, then re-encrypts it with its own cert.
I'm having issues getting NiFi to recognize the user making a request, since the requests always come in with the LB cert instead of the original user's cert. I'm wondering if NiFi already has a means of handling this; for instance, is it possible to have my LB set a header specifying the DN of the user's cert, and have NiFi first authenticate the LB's DN and, if that passes, trust the header?
I am aware that the other two types of load balancers provided by AWS would in theory work, so long as I updated the SAN of the NiFi certs to include the LB DN. However, I have reasons to prefer sticking with an ALB. Is there any viable way to properly authenticate users behind an ALB?
The Proxy Configuration section of the Admin Guide should cover this:
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#proxy_configuration
Specifically the part about X-ProxiedEntitiesChain.
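For illustration, a sketch of how a proxy could build that header before forwarding to NiFi (the DNs and the helper name are made up; check the guide above for the exact semantics, and note that NiFi must also be configured to trust the proxy's own identity):

    # Build an X-ProxiedEntitiesChain value: each DN is wrapped in angle brackets,
    # end user first, followed by any intermediate proxies. DNs here are made up.
    def proxied_entities_chain(*dns: str) -> str:
        return "".join(f"<{dn}>" for dn in dns)

    headers = {
        "X-ProxiedEntitiesChain": proxied_entities_chain(
            "CN=alice, OU=people, O=example",             # DN from the end user's client cert
            "CN=alb.example.com, OU=proxies, O=example",  # hypothetical proxy identity
        )
    }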
Ant Media Server is running on an IP address without any domain. We just set up this server for streaming, to be used from different domains pointing to different servers.
Since all of our domains use SSL, we face the typical connection problem:
Mixed Content: The page at 'https://SOMEDOMAIN.com/QUERY' was loaded over HTTPS, but attempted to connect to the insecure WebSocket endpoint 'ws://1.2.3.4:56'. This request has been blocked; this endpoint must be available over WSS.
Ant Media already offers tutorials on how to install a Let's Encrypt SSL certificate, but sadly that is not available for bare IP addresses.
Apart from the Ant Media service, the server doesn't have any NGINX, Node.js, Apache or other HTTP servers installed; the plan was just to use it for streaming by calling the IP address.
Do you have any ideas on how to solve that problem?
Unfortunately, this is not possible.
The goal of SSL is to ensure you are requesting the right domain name, besides encrypting the content between your users and your server.
Here are some alternatives:
Create an endpoint in your own app that proxies data to your server (a sketch follows after these alternatives).
Instead of playing the IP address, you can play:
/your-proxy-url?stream=http://yourIp.com:port/....
Note that using a proxy will make all the traffic pass through your web app.
As a reference, if you are using PHP on your website, you can have some ideas from here: https://gist.github.com/iovar/9091078
Create a reverse-proxy in front of your web app that redirects the traffic to your IP address.
Neither solution changes your Ant Media Server; each just adds a new component between your users and your streaming server, with the SSL handled on it.
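As a minimal sketch of the first alternative (assuming your web app is in Python with Flask and requests; the route and parameter names simply mirror the example URL above):

    # Minimal proxy endpoint: the page requests /your-proxy-url?stream=<upstream URL>
    # over HTTPS, and this handler relays the plain-HTTP stream back to the browser.
    from flask import Flask, Response, request
    import requests

    app = Flask(__name__)

    @app.route("/your-proxy-url")
    def proxy_stream():
        upstream = request.args["stream"]  # e.g. http://yourIp.com:port/...
        r = requests.get(upstream, stream=True, timeout=10)
        return Response(
            r.iter_content(chunk_size=8192),
            content_type=r.headers.get("Content-Type"),
        )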
I cannot find authoritative information about how WSS interacts with HTTPS proxies and load balancers.
I have a load balancer that handles the SSL (SSL off-loading) and two web servers that contain my web applications and handle the requests in plain HTTP. Therefore, the customers issue HTTPS requests, but my web servers get HTTP requests, since the load balancer takes care of the SSL certificate handling.
I am now developing an application that will expose WebSockets, and SSL is required. But I have no clear idea what will happen when the load balancer gets a secure HTTPS handshake for WSS.
Will it just relay the request as normal handshake to the web server?
WebSockets use an "Upgrade: WebSocket" HTTP header that is only valid for the first hop (there is also "Connection: Upgrade"). Will this be a problem?
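For reference, here is the handshake in question, sketched as a raw request over TLS (Python; host and path are placeholders). These are the hop-by-hop Upgrade/Connection headers the load balancer would need to pass through:

    # Bare-bones WebSocket opening handshake over TLS (wss://).
    import base64, os, socket, ssl

    HOST = "example.com"  # placeholder WSS endpoint
    key = base64.b64encode(os.urandom(16)).decode()

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as tcp:
        with ctx.wrap_socket(tcp, server_hostname=HOST) as tls:
            tls.sendall((
                "GET /socket HTTP/1.1\r\n"
                f"Host: {HOST}\r\n"
                "Upgrade: websocket\r\n"
                "Connection: Upgrade\r\n"
                f"Sec-WebSocket-Key: {key}\r\n"
                "Sec-WebSocket-Version: 13\r\n\r\n"
            ).encode())
            # Expect "HTTP/1.1 101 Switching Protocols" if the upgrade survived the hops.
            print(tls.recv(4096).decode(errors="replace"))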
Cheers.
Load balancers can normally deal with WebSockets, and SSL offloading shouldn't be an issue either. BUT you have to configure the LB to handle HTTP, not just to balance the traffic based on Layer 3 information; in particular, you have to ensure that the LB takes care of the session state.
I don't know what LB you are using, but e.g. with F5 LBs you just have to assign an HTTP profile to load-balance WebSocket-based apps.
If you want to do SSL offloading additionally, just assign an SSL client profile to your virtual server.
http://support.f5.com/kb/en-us/solutions/public/14000/700/sol14754.html
I would have thought SSL-terminating LBs handle WebSockets as well, but once I tried, I had to realize I was mistaken. So the answer for F5 LBs, as of January 2013, is: it won't work. The gist of the answer I was given over at Server Fault:
As of December of 2012, BIG-IP doesn't support SSL offload of WebSocket traffic.
I wish to set up an HTTPS proxy and have HTTP clients send requests securely to the proxy. For example, a browser can initiate an HTTP GET request which is sent encrypted to the proxy; the proxy then removes the encryption and passes the request on to the destination site. Squid proxy can be set up to work like this (info here).
I have set up such an HTTPS-enabled proxy. But I am unable to write my own HTTP clients to work with it. The same link above mentions that Chrome is the only browser that supports such a proxy. I tested Chrome and it was able to work with such an HTTPS proxy.
I wish to gain an understanding of how such a proxy works so that I can write my own HTTP clients.
As I understand it, it's a connection to a regular HTTP proxy, BUT this connection is made over TLS. The client indeed needs to support this scheme explicitly; existing clients can't be used as-is (without extra coding).
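To sketch what that extra coding looks like (Python; the proxy host and target URL are placeholders): the client opens a TLS connection to the proxy and then speaks ordinary proxy-style HTTP inside it, e.g. a GET with an absolute URI.

    # Connect to the proxy over TLS, then send a normal absolute-URI proxy request.
    import socket, ssl

    PROXY_HOST, PROXY_PORT = "proxy.example.com", 443  # placeholders

    ctx = ssl.create_default_context()
    with socket.create_connection((PROXY_HOST, PROXY_PORT)) as tcp:
        with ctx.wrap_socket(tcp, server_hostname=PROXY_HOST) as tls:
            tls.sendall(
                b"GET http://example.com/ HTTP/1.1\r\n"
                b"Host: example.com\r\n"
                b"Connection: close\r\n\r\n"
            )
            print(tls.recv(4096).decode(errors="replace"))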
We are currently using Apache to handle incoming SSL requests. These are two-way SSL connections. Apache accepts the HTTPS connection and passes the request on as an HTTP connection to the application server. This works well for us.
We would like to use the same kind of centralized mechanism for outgoing two-way SSL connections. Is there a way to do this with Apache or another product? To complicate things, the client certificate needed to identify our client can vary depending on the destination.
In short:
- Internal clients connect through HTTP to Apache or another product.
- Apache or another product knows based on a rule (?) that a two-way SSL connection is required and sets this up with the destination.
- Depending on the destination, the correct certificate is sent to identify our client.
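To make the rule idea concrete, here is a rough sketch (Python with requests, purely for illustration and not an Apache feature; hostnames and paths are made up) of a forwarding service that picks the client certificate by destination:

    # Internal clients talk plain HTTP to this service; it makes the outbound
    # two-way-SSL call using the client certificate registered for the destination.
    import requests

    CLIENT_CERTS = {
        # destination host -> (client cert, private key) used to identify us
        "partner-a.example.com": ("/etc/ssl/partner-a.crt", "/etc/ssl/partner-a.key"),
        "partner-b.example.com": ("/etc/ssl/partner-b.crt", "/etc/ssl/partner-b.key"),
    }

    def forward(host: str, path: str, body: bytes) -> requests.Response:
        return requests.post(f"https://{host}{path}", data=body,
                             cert=CLIENT_CERTS[host], timeout=30)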
Regards,
Nidkil
What you're talking about is, of course, an HTTP proxy server. In the first scenario you are using it as a transparent proxy to provide SSL support for connections to a set of web pages. In the second scenario you want to use it to provide connections to secure-only pages on behalf of clients speaking HTTP.
You can do this with the Squid proxy, which is free and open-source, provided that your machine sits between the clients and the Internet. Look for "SSLBump". You do need a certificate which the clients would consider valid for all web pages to be accessed (otherwise they will notice what you are doing, which is basically a man-in-the-middle attack).
However, I would strongly recommend against this - if a site requires SSL, it is likely to do so for a reason. It is almost certainly not OK to have internal clients connecting to an online banking site and have you bumping down their encryption so that you can monitor their traffic or whatever...