Are all backends affected by the performance penalty when ingress-nginx has SSL passthrough enabled? Should I use separate ingresses?

As the ingress-nginx docs state, enabling SSL passthrough (--enable-ssl-passthrough) "bypasses NGINX completely and introduces a non-negligible performance penalty."
Does this mean that all backends are affected by this performance penalty, or only those whose ingress has the annotation nginx.ingress.kubernetes.io/ssl-passthrough?
In my case, I'd like to proxy a Kafka cluster behind an nginx ingress, and Kafka requires SSL passthrough. So would it be advisable to install two ingress controllers: one without SSL passthrough (and thus without the performance penalty) for the usual HTTP traffic to the web application, and a second one with SSL passthrough solely for Kafka?

Does this mean that all backends are affected by this performance penalty, or only those whose ingress has the annotation "nginx.ingress.kubernetes.io/ssl-passthrough"?
To answer this question, I will quote the entire warning:
This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.
It follows that all traffic directed to your HTTPS port (i.e., every backend receiving HTTPS traffic) incurs the performance penalty, because the local TCP proxy bypasses NGINX itself. It shouldn't affect your plain-HTTP traffic. So you shouldn't need to run a second ingress controller, but you can always do so if you prefer to separate the rules into two ingresses.
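For the Kafka use case, a minimal sketch of such an Ingress follows (the hostname, service name, and port are illustrative). Note that the annotation only takes effect if the controller itself was started with --enable-ssl-passthrough, and that passthrough routing is based on the SNI hostname:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kafka-passthrough
  annotations:
    # Hand the TLS stream to the backend untouched (requires --enable-ssl-passthrough on the controller)
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: kafka.example.com          # illustrative hostname (routing is by SNI)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kafka              # illustrative service name
            port:
              number: 9093           # illustrative TLS listener port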

Related

Traefik max-conn

We are using Traefik for ingress and were wondering if there is any configuration to limit the maximum number of concurrent connections. When using Apache, nginx, or even HAProxy, I always configure the maximum number of concurrent connections; for nginx, I use worker_connections. Is there any concept like this in Traefik? I could find some middleware for limiting in-flight requests (https://doc.traefik.io/traefik/middlewares/http/inflightreq/), but I am looking for some config to protect the backend from being bombarded with too many requests.
Thanks,
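For reference, the inFlightReq middleware linked above caps the number of simultaneous requests; a minimal sketch in Traefik's dynamic file configuration (the middleware name and the limit are illustrative):

http:
  middlewares:
    limit-concurrency:
      inFlightReq:
        amount: 100   # requests beyond 100 in flight are rejected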

AWS ELB + apache httpd + tomcat

We are currently using the "standard" architecture created by AWS OpsWorks.
We have set up an AWS ELB in front of multiple machines, which distributes requests using a round-robin algorithm (we have a stateless application without any cookies). Apache httpd and Apache Tomcat are installed on every machine (everything set up and configured by AWS OpsWorks), so Apache httpd handles the connection and then forwards it to Tomcat via an AJP connection.
I would like to get rid of Apache httpd.
A few reasons for that:
Easier architecture, easier configuration
Maybe a slight gain in performance
Less monitoring (need to monitor only Tomcat, not Apache httpd)
I have checked the following thread:
Why use Apache Web Server in front of Glassfish or Tomcat?
and haven't found any reasons why I shouldn't remove Apache httpd from my architecture.
However, I know that some applications have nginx in front of the Tomcat for the following reasons:
Slow-client handling (i.e., Tomcat's worker thread is freed quickly, while an asynchronous nginx worker trickles the response out to the slow client)
SYN-flood DDoS protection (using SYN cookies)
Questions to consider:
Does Apache httpd protect against these DDoS techniques?
Does AWS ELB protect against these DDoS techniques?
Should I remove Apache httpd (given that I don't need anything from the list)? Should I replace it with nginx (taking into account that we already have DDoS protection with Incapsula)?
Any other advice/comment would be highly appreciated!
Thank you in advance!
Does Apache httpd protect against these DDoS techniques?
No. Apache httpd does not protect against DDoS attacks out of the box; you have to enable and configure security modules, for example mod_evasive or mod_security.
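As an illustration, a minimal mod_evasive sketch (all thresholds are illustrative and need tuning for your traffic):

<IfModule mod_evasive20.c>
    DOSHashTableSize  3097
    DOSPageCount      5     # max requests for the same page per page interval
    DOSPageInterval   1     # page interval, in seconds
    DOSSiteCount      50    # max requests for the whole site per site interval
    DOSSiteInterval   1     # site interval, in seconds
    DOSBlockingPeriod 10    # seconds an offending IP stays blocked
</IfModule>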
Does AWS ELB protect against these DDoS techniques?
AWS ELB's features are high availability, health checks, security via the associated security groups, SSL offloading for encryption, and so on. No, ELB by itself does not protect against these DDoS techniques.
Should I remove Apache httpd?
By using Apache httpd as a front end, you can let it act as a front door to your content across multiple Apache Tomcat instances: if one of your Tomcats fails, Apache httpd ignores it and your sysadmin can sleep through the night. This point can be ignored if you use a hardware load balancer together with Apache Tomcat's clustering capabilities, and it only applies when you are not using AWS ELB (which already plays that front-door role).
Should I replace it with nginx?
If you already have Incapsula for DDoS protection, there is no need to complicate the setup by adding nginx.
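If you do drop httpd and point the ELB straight at Tomcat, the relevant piece is Tomcat's HTTP connector in server.xml; a minimal sketch (port and thread count are illustrative):

<!-- Plain HTTP connector receiving traffic directly from the ELB -->
<Connector port="8080"
           protocol="org.apache.coyote.http11.Http11NioProtocol"
           maxThreads="200"
           connectionTimeout="20000" />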

Jetty server running SPDY behind an Apache firewall

I have an application at /mine running on a Jetty server that supports SPDY. It is sitting behind an Apache firewall that is being used as a proxy.
The application at /mine gets routed by the following config rules on Apache:
RewriteRule ^/mine$ /mine/ [R,L]
ProxyPass /mine/ https://jettyserver:9443/mine/ nocanon
ProxyPassReverse /mine/ https://jettyserver:9443/mine/ nocanon
As a result, when I hit apache/mine/, my browser is not negotiating SPDY with my application.
Adding mod_spdy to the proxy would be the correct approach but I cannot currently do that with the Apache we are running.
Is there a way I can get this to work?
For that particular configuration you want to run, I am afraid there is no way to get it working with SPDY or HTTP/2.
Apache configured as a reverse proxy talks HTTP/1.1 to Jetty, so there is no way to get SPDY or HTTP/2 into the picture at all (considering you cannot make Apache talk SPDY).
However, there are a number of alternative solutions.
Let's focus on HTTP/2 only, because SPDY is now being phased out in favour of HTTP/2.
The first and simplest solution is just to remove Apache completely.
You just expose Jetty as your server, and it will be able to speak HTTP/2 and HTTP/1.1 to browsers without problems.
Jetty will handle TLS and then HTTP/2 or HTTP/1.1.
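If memory serves, enabling that on a standalone Jetty 9.3+ distribution is a matter of activating the ssl and http2 modules (a sketch; paths are illustrative):

cd $JETTY_BASE
java -jar $JETTY_HOME/start.jar --add-to-start=ssl,http2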
The second solution is to put HAProxy in the front, and have it forward to Jetty.
HAProxy will handle TLS and forward clear-text HTTP/2 or HTTP/1.1 to Jetty.
The advantage of these two solutions is that you will benefit from Jetty's HTTP/2 support, along with its HTTP/2 Push capabilities.
Not only that: Jetty also gives you a wide range of Apache-like features such as rewriting, proxying, PHP/FastCGI support, etc.
For most configurations, you don't need Apache because Jetty can do it.
The first solution has the advantage that you have to configure one server only (Jetty), but you will probably pay a little for TLS because the JDK implementation used by Jetty is not the most efficient around.
The second solution has the advantage that TLS will be done more efficiently by HAProxy, and you can run it more easily on port 80. However, you have to configure two servers (HAProxy and Jetty) instead of just one.
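To illustrate the second solution, here is a minimal HAProxy sketch that terminates TLS and routes to Jetty's clear-text connectors based on the ALPN-negotiated protocol (certificate path and ports are illustrative):

frontend fe_tls
    mode tcp
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    use_backend be_jetty_h2c if { ssl_fc_alpn -i h2 }
    default_backend be_jetty_http1

backend be_jetty_h2c
    mode tcp
    server jetty 127.0.0.1:8282    # Jetty clear-text HTTP/2 (h2c) connector

backend be_jetty_http1
    mode tcp
    server jetty 127.0.0.1:8181    # Jetty HTTP/1.1 connector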
Have a look at the Jetty HTTP/2 documentation and at the Webtide blogs where we routinely add entries about HTTP/2, configurations and examples.

Proxy Protocol on Elastic Load Balancing non-terminated SSL connection

For reasons we're not going to change, our application needs to handle the SSL connection, and not the ELB. The goal of using the Proxy Protocol is to get the client's IP address over an SSL connection.
http://aws.typepad.com/aws/2013/07/elastic-load-balancing-adds-support-for-proxy-protocol.html?ref_=9 indicates "Alternatively, you can use it if you are sending HTTPS requests and do not want to terminate the SSL connection on the load balancer. For more information, please visit the Elastic Load Balancing Guide."
Unfortunately, it appears the guide that's linked to doesn't actually elaborate on this, and the basic documentation for the Proxy Protocol ( http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html ) fails in our environment when configured as described.
Does anyone have steps or a link for this?
The proxy protocol (version 1) injects a single line into the data stream at the beginning of the connection, before SSL is negotiated by your server. You don't get this information "over" an SSL connection; you get the information prior to SSL handshaking. Your server has to implement this capability and specifically be configured so that it can accept and understand it. For an IPv4 connection, it looks like this:
PROXY TCP4 source-ip dest-ip source-port dest-port\r\n
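For example, a client at 203.0.113.7:54321 that connected to the load balancer's front end at 198.51.100.10:443 would result in a stream beginning with (addresses are illustrative, taken from the documentation ranges):

PROXY TCP4 203.0.113.7 198.51.100.10 54321 443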
The standard for the protocol is here:
http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt
Additional info in the ELB docs here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html#proxy-protocol
Regarding Apache support, at least at the time AWS announced support for the proxy protocol...
“neither Apache nor Nginx currently support the Proxy Protocol header inserted by the ELB”
— http://aws.typepad.com/aws/2013/07/elastic-load-balancing-adds-support-for-proxy-protocol.html?ref_=9
That is subject to change, of course, but I didn't successfully google for any Apache support of the proxy protocol. Of course, since Apache is open source, you could presumably hack it in there, though I am unfamiliar with the Apache source code.
Realizing that you don't want to change what you're doing now, I would still suggest that, depending on your motivation for not wanting to change, there may be a relatively simple solution. It's a change, but one that doesn't involve SSL on the ELB: running HAProxy behind the ELB to terminate the SSL in front of Apache. HAProxy 1.5 can terminate SSL, it appears to be able to translate the proxy protocol string from the ELB into an X-Forwarded-For header, and it can generate X-SSL headers to give your application information about the client's SSL certificate (perhaps that's your motivation for terminating SSL at the app server instead of on the ELB?), so this might be an alternative.
Otherwise, I don't have suggestions unless Apache implements support in the future, or we can find some documentation to indicate that they already have.
For the newer Network Load Balancers, which allow your application servers to terminate the TLS connections, you can still get the real IP addresses of your clients and avoid all the work of configuring the proxy protocol on the ELBs and in the web server config: simply configure the target groups to use the servers' instance IDs rather than their IP addresses. Regardless of which web server you use, the real IPs of the clients will then show up in the logs with no translation needed.
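A minimal sketch of that with the AWS CLI (names, VPC ID, and instance ID are illustrative); the key part is --target-type instance:

aws elbv2 create-target-group --name my-tls-targets --protocol TCP --port 443 --vpc-id vpc-0abc1234 --target-type instance
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=i-0123456789abcdef0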
Just to follow up on Michael - sqlbot's answer discussing AWS support for the proxy protocol on EC2 instances behind classic TCP Elastic Load Balancers: the Apache module that implements the proxy protocol is mod_remoteip. Enabling it and updating the configuration properly will correct the problem of logging the Elastic Load Balancer's IPs rather than the users' IP addresses.
To enable the proxy protocol on the Elastic Load Balancer, you can use these AWS CLI commands, described in the AWS documentation:
aws elb create-load-balancer-policy --load-balancer-name my-elb-name --policy-name my-elb-name-ProxyProtocol-policy --policy-type-name ProxyProtocolPolicyType --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name my-elb-name --instance-port 443 --policy-names my-elb-name-ProxyProtocol-policy
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name my-elb-name --instance-port 80 --policy-names my-elb-name-ProxyProtocol-policy
To enable use of the proxy protocol in Apache, in a server-wide or VirtualHost context, follow the mod_remoteip documentation, for example:
<IfModule mod_remoteip.c>
RemoteIPProxyProtocol On
RemoteIPHeader X-Forwarded-For
# The IPs or IP range of your ELB:
RemoteIPInternalProxy 192.168.1.0/24
# The IPs of hosts that may need to connect directly to the web server, bypassing the ELB (if applicable):
RemoteIPProxyProtocolExceptions 127.0.0.1
</IfModule>
You'll need to update the LogFormat wherever you have those defined (e.g. httpd.conf) to use %a rather than %h, or else the load balancer's IP addresses will still appear in your logs.
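For example, a combined-style LogFormat with %a (the client address, as rewritten by mod_remoteip) in place of %h:

LogFormat "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined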

Apache HTTPD/mod_proxy/Tomcat and SSL with client auth

I'm sure this is an FAQ but I couldn't find anything I recognized as being the same question.
I have several web-apps running in Tomcat, with some pages (e.g. the login page) protected by SSL, as defined by confidentiality elements in their web.xml files. One of the apps also accepts client authentication via certificate. I also have a rather extensive JAAS-based authorization and authentication scheme, and there are all kinds of shared code and different JAAS configurations etc. between the various webapps.
I really don't want to disturb any of that while accomplishing the below.
I am now in the process of inserting Apache HTTPD with mod-proxy and mod-proxy-balancer in front of Tomcat as a load balancer, prior to adding more Tomcat instances.
What I want to accomplish for HTTPS requests is that they are redirected 'blind' to Tomcat without HTTPD being the SSL endpoint, i.e. HTTPD just passes ciphertext directly to Tomcat so that TC can keep doing what it is already doing with logins, SSL, web.xml confidentialty guarantees, and most importantly client authentication.
Is this possible with the configuration I've described?
I am very familiar with the webapps and SSL and HTTPS and Tomcat, but my knowledge of the outer reaches of Apache HTTPD is limited.
Happy to have this moved if necessary but it is kind of programming with config files ;)
This sounds similar to this question, where I've answered that it's not possible:
You can't just relay the SSL/TLS traffic to Tomcat from Apache. Either
your SSL connection ends at Apache, and then you should reverse proxy
the traffic to Tomcat (SSL [between Httpd and Tomcat] is rarely useful in this case), or you make
the clients connect to Tomcat directly and let it handle the SSL
connection.
I admit it's a bit short of links to back this claim. I guess I might be wrong (I've just never seen this done, but that doesn't strictly mean it doesn't exist...).
As you know, you need a direct connection, or a connection entirely relayed, between the user-agent and the SSL endpoint (in this case, you want it to be Tomcat). This means that Apache Httpd won't be able to look into the URL: it will know the host name at best (when using Server Name Indication).
The only option that doesn't seem to depend on a URL in the mod_proxy documentation is AllowCONNECT, which is what's used for forward proxy servers for HTTPS.
Even the options in mod_proxy_balancer expect a path at some point of the configuration. Its documentation doesn't mention SSL/HTTPS ("It provides load balancing support for HTTP, FTP and AJP13 protocols"), whereas mod_proxy talks at least about SSL when mentioning CONNECT.
I would suggest a couple of options:
Using an iptables-based load-balancer, without going through Httpd, ending the connections in Tomcat directly.
Ending the SSL/TLS connection at Httpd and using a plain HTTP reverse proxy to Tomcat.
This second option requires a bit more configuration to deal with the client certificates and Tomcat's security constraints.
If you have configured your webapp with <transport-guarantee>CONFIDENTIAL</transport-guarantee>, you will need to make Tomcat flag the connections as secure, despite the fact that it sees them coming from its plain HTTP port. For Tomcat 5, here is an article (originally in French, but the automatic translation isn't too bad) describing how to implement a valve to set isSecure(). (If you're not familiar with valves, they are similar to filters, but operate within Tomcat itself, before the request is propagated to the webapp; they can be configured within Catalina.) I think that from Tomcat 5.5 onwards, the HTTP connector's secure option does exactly that, without requiring your own valve. The AJP connector also has a similar option (if using mod_proxy_ajp or mod_jk).
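For illustration, the connector-based approach is a couple of attributes in server.xml (the port is illustrative): with secure and scheme set, requests arriving on the plain port report isSecure() == true and the https scheme, satisfying the CONFIDENTIAL constraint:

<Connector port="8080" protocol="HTTP/1.1" secure="true" scheme="https" />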
If using the AJP connector, mod_proxy_ajp will forward the first certificate in the chain and make it available within Tomcat (via the normal request attribute). You'll probably need SSLOptions +ExportCertData +StdEnvVars. mod_jk (although deprecated as far as I know) can also forward the entire chain sent by the client (using JkOptions +ForwardSSLCertChain). This can be necessary when using proxy certificates (which are meaningless without the chain up to their end-entity certificate).
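A minimal httpd-side sketch of the AJP variant (host, port, and path are illustrative):

SSLOptions +ExportCertData +StdEnvVars
ProxyPass        /app ajp://tomcat-host:8009/app
ProxyPassReverse /app ajp://tomcat-host:8009/app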
If you want to use mod_proxy_http, a trick is to pass the certificate via an HTTP header (mod_header), using something like RequestHeader set X-ClientCert %{SSL_CLIENT_CERT}s. I can't remember the exact details, but it's important to make sure that this header is cleared so that it never comes from the client's browser (who could forge it otherwise). If you need the full chain, you can try out this Httpd patch attempt. This approach would probably need an extra valve/filter to turn the header into the javax.servlet.request.X509Certificate (by parsing the PEM blocks).
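A sketch of that header trick (the X-ClientCert name is illustrative); RequestHeader set replaces any value supplied by the client, and the explicit unset documents the intent:

# Requires SSLOptions +ExportCertData so SSL_CLIENT_CERT is populated
RequestHeader unset X-ClientCert
RequestHeader set   X-ClientCert "%{SSL_CLIENT_CERT}s"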
A couple of other points that may be of interest:
If I remember correctly, you need to download the CRL files explicitly for Httpd and configure it to use them. Depending on the version of Httpd you're using, you may have to restart it to reload the CRLs.
If you're using re-negotiation to get the client certificate, a CLIENT-CERT directive will not make Httpd request a client certificate, as far as I know (this is otherwise done via a valve that can access the SSLSession, when using the JSSE connector directly). You may have to configure the matching path in Httpd to request the client certificate.