What are alternatives to a firewall for securing a web server (Apache)?

I'm doing a network security course and trying to wrap my head around all the concepts. One of which is:
What technology other than a firewall can be used to allow only specific customers while blocking others? Why is a firewall not suitable?
During the course, I've been learning about security tools such as firewalls (static, dynamic, DPI), proxies, VPNs, tunnels, all sorts of IDS (signature, anomaly, darknet/greynet and honeypot), and mod_security to secure Apache, but I'm still puzzled by this question.
Any insights here will be greatly appreciated.

A firewall implies that you block based on the customer's IP address. This may work if the customer has his own range of addresses and all requests from that range are legitimate.
It gets complicated when he is with a large cloud provider whose wide range of possible IPs may also be used by other people.
For an application, one good solution would be to use client-side certificates. In that case, during the TLS handshake (the process of putting in place a TLS (formerly SSL) tunnel), the server requests that the client present a certificate it (the server) trusts. Failing to provide one breaks the connection.
This way, you distribute certificates to the clients that should be able to reach your service, and everyone else is rejected. This solution is better because it uses technology that was developed exactly to solve this problem. The drawback is that you have to maintain and distribute the certificates (and usually run a PKI).
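
To make this concrete, here is a minimal sketch of a TLS server that enforces client certificates, using Python's standard ssl module (the file names and port are placeholders; in practice you would configure the same thing in your web server, e.g. via Apache's SSLVerifyClient directive, rather than hand-rolling sockets):

    import socket
    import ssl

    # Present our server certificate and REQUIRE a client certificate
    # signed by the CA we distribute to customers (mutual TLS).
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")
    ctx.load_verify_locations(cafile="customer-ca.pem")
    ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a valid client cert

    with socket.create_server(("0.0.0.0", 8443)) as srv:
        with ctx.wrap_socket(srv, server_side=True) as tls_srv:
            conn, addr = tls_srv.accept()  # raises ssl.SSLError if no acceptable cert
            print("authenticated client:", conn.getpeercert()["subject"])
            conn.close()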

Related

Which proxy mode to use if host company terminates TLS on reverse proxy

Friendly Disclaimer: I am new to working with Keycloak and IdP in general. So it's likely that I use incorrect terminology and/or am more confused than I think I am. Corrections are gratefully accepted.
My question is conceptual.
I have a TLS connection that is terminated on my host machine by my hosting company, using their certificate. My reverse proxy (Traefik) is picking up that certificate.
Which of the following proxy modes should I use now to be able to deploy Keycloak to production: edge, reencrypt or passthrough? (see here for relevant documentation)
I can pretty much rule out passthrough because, as I wrote, TLS is terminated at the proxy. But I am unsure whether I have to bring my own certificate and use reencrypt, or whether it is considered safe to go with edge.
I have done my best to keep this question short and general. However, I am happy to share configurations or further details if needed.
As far as I know, most organizations consider a request safe once the proxy has validated and terminated TLS. Terminating at the edge also avoids the overhead of re-encrypting (how much that matters depends on your load). Unless your organization is going for Zero Trust on its internal network, using edge should be totally acceptable.

Automated ACME subdomain SSL certificate generation for resources on different IP addresses

I've been investigating the possibility of migrating to using Let's Encrypt to maintain the SSL certificates we have in place for the various resources we use for our operations. We have the following resources using SSL certificates:
Main website (www.example.com / example.com) - Hosted and maintained by a 3rd party who also maintains the SSL certificate
Client portal website (client.example.com) - IIS site hosted and maintained by us on a server located in a remote data center
FTP server (ftp.example.com) - WS_FTP Server hosted and maintained by us on a server located in a remote data center
Hardware firewall (firewall.example.com) - Local security appliance for our internal network
Remote Desktop Gateway (rd.example.com) - RDP server hosted and maintained by us on a server located locally
As indicated above, the SSL certificate for the main website (www) is maintained by the 3rd-party host, so I don't generally mess with that one. However, as you can tell, the DNS records for each of these endpoints point to a variety of different IP addresses. This is where my inexperience with the overall process of issuing and deploying SSL certificates has me a bit confused.
First of all, since I don't manage or maintain the main website, I'm currently manually generating the CSRs for each of the endpoints from the server/service that provides the endpoint - one from the IIS server, a different one from the RDP server, another from the WS_FTP server, and one from the hardware firewall. The manual process, while not excessively time-consuming, still requires me to go through several steps with different server systems requiring different processes.
I've considered using one of Let's Encrypt's free wildcard SSL certificates to cover all four of these endpoints (*.example.com), but I don't want to "interfere" with what our main website host is doing on that end. I realize the actual certificate itself is presented by the server to which the client is connecting, so it shouldn't matter (right?), but I'd probably still be more comfortable with individual SSL certificates for each of the subdomain endpoints.
So, I've been working on building an application using the Certes ACME client library in an attempt to automatically handle the entire SSL process from CSR to deployment. However, I've run into a few snags:
The firewall blocks inbound connections on port 80, so I wouldn't be able to serve up the HTTP-01 validation file for that subdomain (firewall.example.com) on the device itself. The same is true for the FTP server's subdomain (ftp.example.com).
My DNS is hosted with a provider that does not currently offer an API (they say they're working on one), so I can't automate the process of the DNS-01 validation by writing the TXT record to the zone file.
I found the TLS-ALPN-01 validation method, but I'm not sure whether or not it is appropriate for the use case I'm trying to implement. According to the description of this method from Let's Encrypt (emphasis mine):
This challenge is not suitable for most people. It is best suited to authors of TLS-terminating reverse proxies that want to perform host-based validation like HTTP-01, but want to do it entirely at the TLS layer in order to separate concerns. Right now that mainly means large hosting providers, but mainstream web servers like Apache and Nginx could someday implement this (and Caddy already does).
Pros:
It works if port 80 is unavailable to you.
It can be performed purely at the TLS layer.
Cons:
It’s not supported by Apache, Nginx, or Certbot, and probably won’t be soon.
Like HTTP-01, if you have multiple servers they need to all answer with the same content.
This method cannot be used to validate wildcard domains.
So, based on my research so far and my environment, my three biggest questions are these:
Would the TLS-ALPN-01 validation method be an effective - or even available - option for generating the individual SSL certificates for each subdomain? Since the firewall and FTP server cannot currently serve up the appropriate files on port 80, I don't see any way to use the HTTP-01 validation for these subdomains. Not being able to use an API to automate a DNS-01 validation would make that method generally more trouble than it's worth. While I could probably do the HTTP-01 validation for the client portal - and maybe the RDP server (I haven't gotten that far in my research yet) - I'd still be left with handling the other two subdomains manually.
Would I be better off trying to do a wildcard certificate for the subdomains? Other than "simplifying" the process by reducing the number of SSL certificates that need to be issued, is there any inherent benefit to going this route versus using individual certificates for each subdomain? Since the main site is hosted/managed by a 3rd-party and (again) I can't currently use an API to automate a DNS-01 validation, I suppose I would need to use an HTTP-01 validation. Based on my understanding, that means that I would need to get access/permission to create the response file, along with the appropriate directories on that server.
Just to be certain, is there any chance of causing some sort of "conflict" if I were to generate/deploy a wildcard certificate to the subdomains while the main website still used its own SSL certificate for the www? Again, I wouldn't think that to be the case, but I want to do my best to avoid introducing more complexity and/or problems into the situation.
I've responded to your related question at https://community.certifytheweb.com/t/tls-alpn-01-validation/1444/2, but the short answer is to use DNS validation. My suggestion is to use Certify DNS (https://docs.certifytheweb.com/docs/dns/providers/certifydns), which is a managed cloud implementation of acme-dns (CNAME delegation of DNS challenge responses).
Certify DNS speaks the acme-dns protocol, so it can be used with existing acme-dns-compatible clients as well as with Certify The Web (https://certifytheweb.com).
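
For reference, acme-dns-style services expose a tiny HTTP API for publishing the DNS-01 challenge response: you create a one-time CNAME from _acme-challenge.client.example.com to the service, and every renewal afterwards is just an authenticated update call. A rough Python sketch, assuming an acme-dns-compatible endpoint (the URL, credentials and subdomain ID below are placeholders standing in for the values a real registration would give you):

    import json
    import urllib.request

    API_URL = "https://auth.example.net/update"  # acme-dns-compatible update endpoint
    HEADERS = {
        "X-Api-User": "00000000-0000-0000-0000-000000000000",  # from registration
        "X-Api-Key": "placeholder-api-key",
        "Content-Type": "application/json",
    }

    def publish_challenge(subdomain_id: str, txt_value: str) -> None:
        # Publish the DNS-01 challenge digest as the TXT record the CA will query.
        body = json.dumps({"subdomain": subdomain_id, "txt": txt_value}).encode()
        req = urllib.request.Request(API_URL, data=body, headers=HEADERS, method="POST")
        with urllib.request.urlopen(req) as resp:
            print(resp.status, resp.read().decode())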

SSL connection using a hostname that is not in the SAN list of the host's certificate

I am quite new to SSL, but I am afraid I can already guess the final answer to the following problem/question:
We are building hardware (let's call them servers) that WILL have IP address changes over their lifetime. Each server must be reachable in a secure manner. We are planning to use a TLS 1.3 secured connection to perform some actions on the servers (update firmware, change configuration, and so on). As a consequence, we need to provide each server with a certificate so that it can state its identity. PKI issues are out of the scope of this question (we suppose), and we can take for granted that the clients and the servers will share a common trusted CA to ensure the SSL handshake goes OK. The servers will serve HTTP connections on their configured (changeable) IP addresses only. There is no DNS involved in the loop.
We are wondering how to set the servers' certificates appropriately.
As the IP will change, it cannot be used as the common name in the server's certificate.
Therefore, we are considering using something more persistent such as a serial number or a MAC address.
The problem is that, as there is no DNS in the loop, the client cannot issue an HTTP request to www.serialNumberOfServer.com and must connect to https://x.y.z.t (which will change frequently; at least frequently enough that we don't want to issue a new server certificate each time).
If we get it right, certificate verification requires the host name (the one in the URL we are connecting to) to match either the commonName of the server's certificate or one of its Subject Alternative Names (SANs). Right? Here, it would be x.y.z.t.
So we think we are stuck in a situation where the server cannot use its IP to prove its identity, while the client wants to use exactly that IP to connect to the server.
Is there any work around?
Are we missing something?
Any help would be very (VERY) appreciated. Do not hesitate to ask in case you need a more detailed explanation!
For what it's worth, the development environment will be Qt, using QNetworkAccessManager and the QSsl* classes.
If you're not having the client use DNS at all, then you do have a problem. The right solution is to use DNS or static host name lists (e.g., /etc/hosts on Unix-like systems, or %SystemRoot%\System32\drivers\etc\hosts on Windows). That will let you set names appropriately.
If you can only use IP addresses, another option is to put all of the IP addresses the server might use into its certificate. This is only doable if there is a reasonably small number of addresses that it might get assigned.
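
As a minimal sketch of that approach with Python's cryptography package, self-signed for brevity (a real deployment would have your CA sign it; the serial-number name and addresses below are made up):

    import datetime
    import ipaddress

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.x509.oid import NameOID

    candidate_ips = ["192.0.2.10", "192.0.2.11", "198.51.100.7"]  # possible assignments

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "server-sn-12345")])
    san = x509.SubjectAlternativeName(
        [x509.DNSName("server-sn-12345")]  # stable identity that never changes
        + [x509.IPAddress(ipaddress.ip_address(ip)) for ip in candidate_ips]
    )
    now = datetime.datetime.now(datetime.timezone.utc)

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed here; normally the CA's name
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(san, critical=False)
        .sign(key, hashes.SHA256())
    )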
Or you could keep a cache of certificates on the server with one address each, and have part of the web server's startup process select the right certificate. This requires a slightly more complex startup.
Edit: Finally, some SSL stacks (e.g. OpenSSL) let you decide whether each particular verification error should be treated as an error or ignored. This would let you override the errors on the client side. However, this is hard to implement properly and very prone to security issues: if you don't pin the remote certificate properly, you're subjecting yourself to man-in-the-middle and other attacks by blindly accepting any old certificate. I don't remember if Qt's SSL library gives you this level of flexibility or not (I don't believe so, but I didn't go pull up the documentation).
Came back to this subject 9 months later!
It turns out there is an easy solution (at least with the Qt framework):
Qt's QNetworkRequest::setPeerVerifyName does the job for us. It allows connecting to a host by its IP while verifying a given name during the SSL handshake.
See Qt's documentation extract below:
void QNetworkRequest::setPeerVerifyName(const QString &peerName)
Sets peerName as host name for the certificate validation, instead of the one used for the TCP connection.
This function was introduced in Qt 5.13.
See also peerVerifyName.
Just tested it successfully right now.
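
For what it's worth, other TLS stacks offer the same decoupling of the address you connect to from the name you verify. A minimal sketch of the equivalent with Python's ssl module, assuming a certificate that carries the stable name as a SAN (the address, name and CA file are made up):

    import socket
    import ssl

    ctx = ssl.create_default_context(cafile="our-private-ca.pem")  # trust only our CA

    # Connect to the current (changeable) IP, but verify the stable name
    # baked into the server's certificate, like Qt's setPeerVerifyName.
    with socket.create_connection(("192.0.2.10", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="server-sn-12345") as tls:
            print(tls.getpeercert()["subject"])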

HTTP/2 for applications deployed on the intranet / Lack of SSL possibilities

So HTTP/2 adds performance I'd love to harness. I don't like concatenating my JavaScript for various reasons, and HTTP/2 would make that unnecessary anyway.
BUT. I'm developing a webapp which is going to be deployed inside customers' local networks. Thus I cannot have SSL (neither domains nor IP addresses are fixed/known). Now Mozilla and Chrome have said they will only support HTTP/2 over TLS. To have that without browser warnings I need proper certificates, which I can't have. So does this mean HTTP/2 is dead for intranet applications?
There are a few scenarios where using HTTP/2 without SSL makes a lot of sense. Relatively secure intranets are one of them, and website development is another. That said, you can still use HTTP/2 in your intranet by deploying SSL, and it is actually easier and cheaper in an intranet.
Usually there is much more control in an intranet, without any implied monetary cost. For example, you can set up a simple local DNS server (like dnsmasq or the one built into Windows) to point domain names at IP addresses, and configure those addresses to be static through DHCP reservations.
The certificate issue is certainly trickier. You can use your client's internal Certificate Authority, if they already have one, or set one up for them.
And finally, if your client is so small that all the computers that will use your application are in the same building as the server, don't bother with either concatenating the files or using HTTP/2.

SSL certificate for intranet web servers

I'm interested in purchasing a wildcard SSL certificate for my public domain (say example.com), so that we can run intranet web servers using a universally recognized CA (e.g., GoDaddy). I do plan to publish the DNS names publicly (e.g., internal.example.com), but their IP addresses are actually LAN addresses (e.g., 192.168.*.*). We want to use public DNS because these web servers may actually be development laptops which travel around, and thus we will use dynamic DNS to update. It's our intention that these web servers will only be available on the LAN each one is currently running on.
Will that work universally with all clients, e.g., with TLS v1.2?
Thanks.
As long as the clients can route their traffic to these IP addresses, it will work (otherwise you won't get the connection, of course).
Certificate verification relies on two points:
Verifying that the certificate is genuine, trusted and valid in time.
Verifying that the identity of the certificate matches what you were looking for (host name verification).
This does not depend on the DNS resolution mechanism. These mechanisms are also orthogonal to the SSL/TLS specifications (although those do recommend verifying the remote party's identity).
I've seen this sort of setup used on various clients and platforms (IE, Chrome, FF, Java clients on Windows/Linux/Mac) and it worked fine.
Of course, whether all implementations do this well is hard to guarantee. There might be some implementation that thinks it's a good idea to perform a reverse DNS lookup, for example.
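
As a quick sanity check of that model: standard client-side validation compares the requested name against the certificate and never asks whether the resolved address is public or private. A short Python sketch, assuming internal.example.com currently resolves (via your dynamic DNS) to a LAN address:

    import socket
    import ssl

    ctx = ssl.create_default_context()  # system trust store, includes public CAs

    # The name may resolve to 192.168.x.x; verification only matches the
    # name against the certificate's Subject Alternative Names.
    with socket.create_connection(("internal.example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="internal.example.com") as tls:
            print(tls.version(), tls.getpeercert()["subject"])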