Means to properly authenticate NiFi users behind an LB that changes certs? - load-balancing

I have two NiFi nodes I want to run behind an AWS Application Load Balancer. This type of load balancer decrypts the incoming request to parse it, then re-encrypts it with its own cert.
I'm having issues getting NiFi to recognize the user making a request, since requests always arrive with the LB's cert instead of the original user's cert. I'm wondering if NiFi already has a means of handling this. For instance, is it possible to have my LB set a header specifying the DN of the user's cert, and have NiFi first authenticate the LB's DN and, if that passes, trust the header?
I am aware that the other two types of load balancers provided by AWS would in theory work, so long as I updated the SAN of the NiFi certs to include the LB's DN. However, I have reasons to prefer sticking with an ALB. Is there any viable way to properly authenticate users behind an ALB?

The Proxy Configuration section of the Admin Guide should cover this:
https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#proxy_configuration
Specifically the part about X-ProxiedEntitiesChain.
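For illustration, the flow works roughly like this: the proxy authenticates to NiFi with its own certificate, that proxy identity is granted NiFi's "proxy user requests" policy, and the proxy passes the end user's identity along in the header. A sketch of the header (the DN here is made up):

    X-ProxiedEntitiesChain: <CN=alice, OU=Users, O=Example>

Each identity in the chain is wrapped in angle brackets, and each proxy in a chain appends its own DN, so NiFi can authorize every hop. Note that this still requires NiFi to be able to authenticate the proxy itself, which is the part the LB's own cert would have to satisfy.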

Related

Apache: How to block "curl --insecure" in an SSL virtual host

I created an SSL virtual host in Apache with a self-signed certificate.
As far as I can tell the configuration is correct; however, it is still possible to access this URL using "curl --insecure".
After searching Google, reading several tutorials, and trying several configurations (the directives SSLVerifyClient, SSLVerifyDepth, AuthType, AuthBasicProvider, AuthUserFile, and Require valid-user), I have had no success blocking this URL from "curl --insecure".
I have been thinking about testing mod_security, but I don't know if that is the right approach.
Could you give me some advice?
Thanks
Hudson
I suspect you may need to refine your understanding of SSL. You can't force clients to verify your SSL certificate. Besides, if you're using a self-signed cert, it would never verify for anyone who hadn't added the cert to their CA library.
You could block curl by rejecting requests based on their User-Agent string. But that's just a header, and the client can set it to anything (such as a "valid" browser's user-agent). If you really want to control clients, one way would be to use client certificates, which are the analog on the client side of the server certificate you set up. In that case, in addition to the client (ostensibly) verifying the server's cert, the server verifies the client's cert, providing a very strong and reliable mechanism for controlling client access. Unfortunately, due to the difficulty of generating keys, creating cert signing requests, and signing certs for clients, client certificates are not common in HTTP. But they're very secure, and a good choice if you control both sides.
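A minimal mod_ssl sketch of that client-certificate approach (paths are placeholders; the CA file is whatever signed your clients' certs):

    SSLEngine on
    SSLCertificateFile    /etc/ssl/server.crt
    SSLCertificateKeyFile /etc/ssl/server.key
    # clients must present a cert signed by this CA or the TLS handshake fails
    SSLCACertificateFile  /etc/ssl/client-ca.crt
    SSLVerifyClient require
    SSLVerifyDepth  1

With this, "curl --insecure" still skips verifying your server cert, but the handshake itself now fails unless curl is also given a valid client cert and key (curl's --cert/--key options).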
A middle ground would be to add an authentication layer to your app to control who can access it (you'd then refuse unauthenticated requests altogether).
In short, though, none of these things block curl; they block clients who cannot authenticate. I would recommend you not focus on the remote browser/client in use (that's at the discretion of your HTTP client); instead, focus on providing the authentication you require. IMHO, trying to block client user-agents is a fool's errand: it's security by obscurity, and anyone can set any user-agent.

About proxy man-in-the-middle attacks

I have a website that runs behind an H2O proxy; let's call it server A. The backend is a WordPress site running with the EasyEngine script; let's call it server B.
It currently runs like this:
User --(Let's Encrypt SSL)--> A (H2O proxy) --(self-signed SSL)--> B (nginx backend).
I wonder: if an attacker knows my backend's IP address, can he decrypt the traffic, do harmful things, or see what users send to the proxy? And how do I set up a better strategy?
I have thought about setting up a Let's Encrypt certificate between server A and server B too, but I think there will be a problem: Let's Encrypt can only renew the certificate on server A, because the domain points to A's IP address, so the backend (server B) can't renew it.
I found this answer, but I don't really know how to apply it: https://serverfault.com/a/735977.
It sounds like what you're trying to do is put Let's Encrypt into as many places as possible, and you're running into the issue that the backend doesn't hold the desired fully-qualified domain name, which Let's Encrypt needs for issuance and, especially, for automated renewal.
But the whole and only purpose of Let's Encrypt is that it gives you certificates that all the major browsers will recognise, so that users don't have to manually verify and install your certificate into their respective cacert.pem.
If you just need a secure connection between your own front-end and backend servers, you're not facing that issue, so Let's Encrypt provides little, if any, extra protection there. What you should do instead is use something like proxy_ssl_trusted_certificate, together with proxy_ssl_verify, both on the front-end, to pin the backend's certificate and/or certificate authority. Because of the pinning, this is an order of magnitude more secure than using Let's Encrypt on the backend.
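The directives named above are nginx's; a minimal sketch of that pinning for an nginx front-end (H2O has analogous proxy settings; paths and hostnames are placeholders):

    location / {
        proxy_pass https://backend.internal;                      # server B
        proxy_ssl_verify on;                                      # verify B's certificate
        proxy_ssl_trusted_certificate /etc/nginx/backend-ca.pem;  # pinned cert or CA
        proxy_ssl_name backend.internal;                          # name to check it against
    }

The file given to proxy_ssl_trusted_certificate can simply be the backend's self-signed certificate itself, which is exactly the pinning described above.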

Relationship between HTTPS Healthchecks and an HTTPS connection to a GCE Instance

I'm setting up HTTPS Load Balancing (LB) on Google Compute Engine (GCE). Key components are outlined in the Overview Diagram.
After successfully creating an HTTP Backend Service where 1 of 1 (GCE) instance is healthy, I decided to do the same for HTTPS. I'm using the Developer Console UI to do this.
The Healthcheck "wizard" provides a drop-down menu for the protocol, with the options HTTP and HTTPS.
The successful HTTP Healthcheck used the path :8080/admin/healthcheck.
Presumably the HTTPS Healthcheck will use the path :443/admin/healthcheck. The problem is that my HTTPS Healthchecks are failing. This was expected: visiting https://[INSTANCE_IP]:443/admin/healthcheck in a browser could not connect either, so I didn't expect the Healthcheck to mark the instance as healthy.
How can I connect to https://[INSTANCE_IP]:443/admin/healthcheck over TLS? Do I merely need to upload a certificate and create a Certificate Resource in the Developer Console (I doubt it)?
I think it's a conceptual problem too.
The URL https://[INSTANCE_IP]:443/admin/healthcheck does exist; I think the Healthcheck fails because the instance doesn't implement TLS.
What is the relationship between uploading a certificate (i.e. creating a Certificate Resource) and a specific GCE instance accepting HTTPS requests, such that HTTPS Healthchecks pass?
After re-reading the documentation, it is stated:
    The client SSL session terminates at the load balancer. Sessions between the load balancer and the instance can either be HTTPS (recommended) or HTTP. If HTTPS, each instance must have a certificate.
It is the last sentence that describes what I was trying to achieve, because HTTPS Healthchecks use an HTTPS URL to check the 'health' of an individual instance:
https://[INSTANCE_IP]:443/admin/healthcheck
Since this was failing, I incorrectly assumed I needed to implement TLS on each instance for the Healthcheck to succeed. However, I do not require each instance to implement TLS (HTTPS), only the Load Balancer.
The final configuration I used involved creating a new HTTPS Target Proxy pointing to the same Backend Service used by the HTTP Target Proxy. In other words: two Target Proxies (HTTP and HTTPS), but only one Backend Service.
Since Healthchecks are employed by Backend Services, the only Healthcheck required was the (original) plain-HTTP one, i.e.
http://[INSTANCE_IP]:8080/admin/healthcheck
The next sentence is important too:
    The Beta release of HTTPS load balancing only supports a single SSL certificate with a single load balancing service.
If the beta release only supports a single SSL certificate, I assume this certificate belongs to the LB, and therefore, in the beta at least, it's not actually possible to secure individual instances.
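For reference, a sketch of that final shape using today's gcloud CLI rather than the Developer Console UI (resource names are placeholders, and the flags may have differed in the beta release discussed above):

    # one URL map / backend service shared by both proxies; the backend
    # service keeps its plain-HTTP healthcheck on :8080
    gcloud compute target-http-proxies create http-proxy --url-map=web-map
    gcloud compute target-https-proxies create https-proxy --url-map=web-map \
        --ssl-certificates=lb-cert
    gcloud compute forwarding-rules create http-rule --global \
        --target-http-proxy=http-proxy --ports=80
    gcloud compute forwarding-rules create https-rule --global \
        --target-https-proxy=https-proxy --ports=443

TLS terminates at https-proxy with lb-cert; nothing on the instances themselves changes.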

Configure Tomcat for client authentication only for specific URL patterns

I have an application with a few WAR files all deployed on the same Tomcat server. I need to force client authentication only for one WAR's context, and only for a specific URL.
I've read a lot on the web and similar questions here, but the conclusions I've reached don't match the solution I need:
1. Define two connectors on different ports (one with clientAuth enabled and one without) and access the specific URL via the relevant port ==> this is no good, since an attacker who tries to access the URL through the other port will succeed.
2. Define transport-guarantee in web.xml (for example, Enabling mutual SSL per service in Tomcat) ==> this is also no good, since I don't want to define users in some realm; I just want the server to ask for the client certificate and verify that it is trusted and valid.
Is there a way to use option 2 without defining users? Or maybe a third option?
Thanks in advance!
You can't do this in pure Tomcat. The best solution is to put an Apache HTTP server in front of it that terminates the SSL connection; there you can configure SSL to your heart's content, right down to the level of an individual directory.
If you want to accept any certificate from trusted CAs, just set clientAuth="want" on the Connector and write a filter that checks whether a certificate was sent. Assign that filter to the desired web app only. In the filter, get the certificate using:
request.getAttribute("javax.servlet.request.X509Certificate");
and check its CA.
But remember that any certificate from that CA will allow access. If it is a public CA, anyone can buy a certificate and access your app. You should always check the DN; in Tomcat you do this by defining a user, or manually in the filter.
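A minimal sketch of such a filter (the class name and the expected DN are hypothetical; it assumes a Connector with clientAuth="want" as described above):

    import java.io.IOException;
    import java.security.cert.X509Certificate;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletResponse;

    public class ClientCertFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            // populated by Tomcat when the Connector has clientAuth="want"
            // and the client actually presented a certificate
            X509Certificate[] certs =
                    (X509Certificate[]) req.getAttribute("javax.servlet.request.X509Certificate");
            if (certs == null || certs.length == 0) {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN,
                        "Client certificate required");
                return;
            }
            // certs[0] is the client's own certificate; check the subject DN,
            // not just the issuer, so any cert from a public CA isn't enough
            String dn = certs[0].getSubjectX500Principal().getName();
            if (!"CN=trusted-client,O=Example".equals(dn)) {  // hypothetical expected DN
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            chain.doFilter(req, res);
        }
    }

Map the filter to the specific URL pattern in that one WAR's web.xml, so the other applications on the server are unaffected.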

Centralizing outgoing two-way SSL connections

We are currently using Apache to handle incoming SSL requests. These are two-way SSL connections: Apache accepts the HTTPS connection and passes the request on as an HTTP connection to the application server. This works well for us.
We would like to use the same kind of centralized mechanism for outgoing two-way SSL connections. Is there a way to do this with Apache or another product? To complicate things, the client certificate needed to identify our client can vary depending on the destination.
In short:
- Internal clients connect via HTTP to Apache or another product.
- Apache or another product knows, based on a rule (?), that a two-way SSL connection is required and sets this up with the destination.
- Depending on the destination, the correct certificate is sent to identify our client.
Regards,
Nidkil
What you're talking about is, of course, an HTTP proxy server. In the first scenario you are using it as a transparent proxy to provide SSL support for connections to a set of web pages. In the second scenario you want to use it to provide connections to secure-only pages on behalf of clients speaking HTTP.
You can do this with the Squid proxy, which is free and open-source, provided that your machine sits between the clients and the Internet. Look for "SSLBump". You do need a certificate which the clients would consider valid for all web pages to be accessed (otherwise they will notice what you are doing, which is basically a man-in-the-middle attack).
However, I would strongly recommend against this - if a site requires SSL, it is likely to do so for a reason. It is almost certainly not OK to have internal clients connecting to an online banking site and have you bumping down their encryption so that you can monitor their traffic or whatever...
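On the Apache side of the original question, mod_ssl does have directives for the outbound two-way leg; a minimal sketch (paths and hostnames are placeholders), with one such block per destination to vary the client certificate:

    <VirtualHost *:8080>                  # plain-HTTP entry point for internal clients
        SSLProxyEngine on                 # encrypt the outbound leg
        # PEM file holding our client certificate and key for this destination
        SSLProxyMachineCertificateFile /etc/apache2/ssl/dest1-client.pem
        SSLProxyVerify require            # also verify the destination's server cert
        SSLProxyCACertificateFile /etc/apache2/ssl/dest1-ca.pem
        ProxyPass        / https://destination1.example/
        ProxyPassReverse / https://destination1.example/
    </VirtualHost>

Internal clients then speak plain HTTP to a different local port per destination, and Apache both presents the matching client certificate and verifies the remote server's certificate on their behalf.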