Traefik max-conn

We are using Traefik for ingress and were wondering if there is any configuration option to limit the maximum number of concurrent connections. When using Apache, nginx, or even HAProxy, I always configure the maximum number of concurrent connections. For nginx, I use worker_connections. Is there a similar concept in Traefik? I found a middleware for limiting in-flight requests, https://doc.traefik.io/traefik/middlewares/http/inflightreq/, but I am looking for a configuration that prevents the backend from being bombarded with too many requests.
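For reference, the middleware I found is declared in the dynamic configuration roughly like this (a minimal sketch based on the linked docs; the middleware name and the limit are illustrative):

http:
  middlewares:
    limit-inflight:
      inFlightReq:
        # maximum number of simultaneous in-flight requests allowed
        amount: 100

As far as I can tell, this caps in-flight requests per middleware instance once it is attached to a router, rather than providing a global worker_connections-style connection limit, which is what I'm really after.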
Thanks,

Related

Are all backends affected by a performance penalty when ingress-nginx has SSL passthrough enabled? Should I use separate ingresses?

As the ingress-nginx docs state, enabling SSL passthrough (--enable-ssl-passthrough) "bypasses NGINX completely and introduces a non-negligible performance penalty."
Does this mean that all backends are affected by this performance penalty, or only those whose ingress has the annotation nginx.ingress.kubernetes.io/ssl-passthrough?
In my case, I'd like to proxy a Kafka cluster behind an nginx ingress, and Kafka requires SSL passthrough to be enabled. So would it be advisable to install two ingresses: one without SSL passthrough (and its performance penalty) for the usual HTTP traffic to the web application, and a second one with SSL passthrough solely for Kafka?
Does this mean that all backends are affected by this performance penalty, or only those whose ingress has the annotation "nginx.ingress.kubernetes.io/ssl-passthrough"?
To answer this question, I will quote the entire warning:
This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.
It follows that all traffic directed to your HTTPS port (i.e. all pods serving HTTPS traffic) will incur a slight performance penalty, as this bypasses NGINX itself. It shouldn't affect your HTTP traffic. So you shouldn't need to run a second ingress, but you can always do so to separate the rules into two separate ingresses.
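For reference, passthrough is opted into per Ingress via the annotation, roughly like this (a minimal sketch; the name, host, and service details are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kafka
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
    - host: kafka.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kafka-bootstrap
                port:
                  number: 9093

Only connections whose SNI matches such a host are passed through untouched, but, as the quoted warning says, the interception itself happens for everything arriving on the HTTPS port, which is where the penalty comes from.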

End-to-end HTTP/2 with HAProxy, Apache and Varnish - possible? needed?

I have an application that's setup like this:
HAPROXY -> VARNISH -> APACHE (MOD_EVENT) -> PHP_FPM (REDIS + MYSQL)
HAProxy for TLS termination, Varnish for caching.
I've enabled HTTP/2 in HAProxy, Varnish and Apache:
HAProxy: added alpn h2,http/1.1 to the frontend, added proto h2 to the backend
Varnish: added the flag -p feature=+http2
Apache: installed mod_http2 and added Protocols h2 h2c http/1.1.
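Concretely, the HAProxy side of that list looks roughly like this (a sketch mirroring the bullets above; the certificate path, names, and address are illustrative):

frontend https_in
    # negotiate HTTP/2 or HTTP/1.1 with clients via ALPN
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    default_backend varnish

backend varnish
    # talk clear-text HTTP/2 (h2c) towards Varnish
    server cache1 127.0.0.1:6081 proto h2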
What I understand from the documentation is that HAProxy supports end-to-end HTTP/2, while Varnish only supports HTTP/2 on the frontend.
So after Varnish, the HTTP/2 request becomes HTTP/1.1, and Apache receives HTTP/1.1 requests, as I've confirmed through the logs.
My question is:
Should I strive for end-to-end HTTP/2? Is that a desirable thing to do for performance? Or should I just not enable HTTP/2 on the HAProxy backend connection, since Varnish won't pass it through anyway?
I've been thinking about it for some time.
In theory, once the HTTP/2 connection reaches HAProxy, I think we probably won't benefit from HTTP/2 multiplexing anymore, as the rest of the request travels within the datacenter... and network latencies are so much smaller within the datacenter, right? Just curious if anyone else has run into the same question.
The main reason why we use HTTP/2 is to prevent head-of-line blocking. The multiplexing aspect of H2 helps reduce the blocking.
However, when Varnish communicates with the origin server, the goal is to cache the response and avoid sending more requests to the origin.
The fact that HTTP/1 is used between Varnish and the origin shouldn't be a big problem, because Varnish is the only client being served there, so head-of-line blocking will hardly ever occur.

AWS ELB + apache httpd + tomcat

We are currently using the "standard" architecture created by AWS OpsWorks.
We have set up an AWS ELB in front of multiple machines, which distributes requests among them using a round-robin algorithm (we have a stateless application without any cookies). Apache httpd + Apache Tomcat are installed on every machine (everything set up and configured by AWS OpsWorks). So Apache httpd handles the connection and then forwards it to Tomcat over an AJP connection.
I would like to get rid of Apache httpd.
A few reasons for that:
Easier architecture, easier configuration
Maybe a slight gain in performance
Less monitoring (need to monitor only Tomcat, not Apache httpd)
I have checked the following thread:
Why use Apache Web Server in front of Glassfish or Tomcat?
and haven't found any reasons why I shouldn't remove Apache httpd from my architecture.
However, I know that some applications put nginx in front of Tomcat for the following reasons:
Slow-client handling (i.e. the Tomcat worker thread is freed early, while nginx asynchronously feeds the buffered response to slow clients)
SYN-flood DDoS protection (using SYN cookies)
Questions to consider:
Does Apache httpd protect from these DDoS techniques?
Does AWS ELB protect from these DDoS techniques?
Should I remove Apache httpd (given that I don't need anything from the list)? Should I replace it with nginx (taking into account that we already have DDoS protection with Incapsula)?
Any other advice/comment would be highly appreciated!
Thank you in advance!
Does Apache httpd protect from these DDoS techniques?
No, Apache httpd does not automatically protect against DDoS attacks; you have to enable and configure its security modules.
Does AWS ELB protect from these DDoS techniques?
AWS ELB's features are high availability, health checks, security via the associated security groups, SSL offloading for encryption, etc. No, AWS ELB does not protect from these DDoS techniques.
Should I remove Apache httpd?
By using Apache httpd as a front end, you let it act as a front door to multiple Apache Tomcat instances: if one of your Tomcats fails, Apache httpd routes around it and your sysadmin can sleep through the night. This point can be ignored if you use a hardware load balancer together with Apache Tomcat's clustering capabilities; in other words, it only applies when you are not using AWS ELB.
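If you do drop httpd and point the ELB directly at Tomcat, the change on the Tomcat side is essentially swapping the AJP connector for a plain HTTP connector in server.xml, roughly like this (a sketch; the port and sizing are illustrative):

<!-- server.xml: let the ELB reach Tomcat over plain HTTP instead of AJP -->
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           maxThreads="200" />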
Should I replace it with nginx?
If you have Incapsula for DDoS protection, there is no need to complicate the setup by adding nginx.

Proxy Protocol on Elastic Load Balancing non-terminated SSL connection

For reasons we're not going to change, our application needs to handle the SSL connection, and not the ELB. The goal of using the Proxy Protocol is to get the client's IP address over an SSL connection.
http://aws.typepad.com/aws/2013/07/elastic-load-balancing-adds-support-for-proxy-protocol.html?ref_=9 indicates "Alternatively, you can use it if you are sending HTTPS requests and do not want to terminate the SSL connection on the load balancer. For more information, please visit the Elastic Load Balancing Guide."
Unfortunately, it appears the guide that's linked to doesn't actually elaborate on this, and the basic documentation for the Proxy Protocol ( http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html ) fails in our environment when configured as described.
Does anyone have steps or a link for this?
The proxy protocol (version 1) injects a single line into the data stream at the beginning of the connection, before SSL is negotiated by your server. You don't get this information "over" an SSL connection; you get the information prior to SSL handshaking. Your server has to implement this capability and specifically be configured so that it can accept and understand it. For an IPv4 connection, it looks like this:
PROXY TCP4 source-ip dest-ip source-port dest-port\r\n
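For example, a connection from a hypothetical client at 203.0.113.45 to a backend at 10.0.0.12 would begin with the line:

PROXY TCP4 203.0.113.45 10.0.0.12 49152 443

followed immediately by the client's raw bytes, i.e. the TLS handshake in your case.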
The standard for the protocol is here:
http://haproxy.1wt.eu/download/1.5/doc/proxy-protocol.txt
Additional info in the ELB docs here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/TerminologyandKeyConcepts.html#proxy-protocol
Regarding Apache support, at least at the time AWS announced support for the proxy protocol...
“neither Apache nor Nginx currently support the Proxy Protocol header inserted by the ELB”
— http://aws.typepad.com/aws/2013/07/elastic-load-balancing-adds-support-for-proxy-protocol.html?ref_=9
That is subject to change, of course, but I wasn't able to find any Apache support for the proxy protocol. Since Apache is open source, you could presumably hack it in, though I am unfamiliar with the Apache source code.
Realizing that you don't want to change what you're doing now, I would still suggest that, depending on your motivation for not wanting to change, there may be a relatively simple solution. It's a change, but not one involving SSL on the ELB: run HAProxy behind the ELB to terminate the SSL in front of Apache. HAProxy 1.5 can terminate SSL, and it appears able to translate the proxy protocol string from the ELB into an X-Forwarded-For header, as well as generate X-SSL headers to give your application information about the client's SSL cert (perhaps that's your motivation for terminating SSL at the app server instead of on the ELB?), so this might be an alternative.
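A rough sketch of that idea (the certificate path, names, and ports are illustrative):

frontend https_in
    mode http
    # accept-proxy: expect the PROXY protocol header the ELB prepends
    bind :443 ssl crt /etc/haproxy/site.pem accept-proxy
    # add the real client IP (recovered from the PROXY header) for Apache
    option forwardfor
    default_backend apache

backend apache
    mode http
    server app1 127.0.0.1:8080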
Otherwise, I don't have suggestions unless Apache implements support in the future, or we can find some documentation to indicate that they already have.
For the newer Network Load Balancers, which allow your application servers to terminate the TLS connections, you can still get the real IP addresses of your clients, and you can avoid all the work of configuring proxy protocol on the ELBs and in the web server config: simply configure the target groups to use the servers' instance IDs rather than their IP addresses. Regardless of which web server you use, the real client IPs will then show up in the logs with no translation needed.
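For example, creating such a target group with the AWS CLI (the name and VPC ID are placeholders):

aws elbv2 create-target-group --name my-tls-targets --protocol TCP --port 443 --vpc-id vpc-1234567890abcdef0 --target-type instance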
Just to follow up on Michael - sqlbot's answer discussing the AWS support for proxy protocol on EC2 instances behind classic TCP elastic load balancers: the Apache module that implements the proxy protocol is mod_remoteip. Enabling it and updating the configuration properly will fix the problem of logging the elastic load balancer's IPs rather than the users' real IP addresses.
To enable the proxy protocol on the elastic load balancer, you can use these AWS CLI commands, described in the AWS documentation:
aws elb create-load-balancer-policy --load-balancer-name my-elb-name --policy-name my-elb-name-ProxyProtocol-policy --policy-type-name ProxyProtocolPolicyType --policy-attributes AttributeName=ProxyProtocol,AttributeValue=true
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name my-elb-name --instance-port 443 --policy-names my-elb-name-ProxyProtocol-policy
aws elb set-load-balancer-policies-for-backend-server --load-balancer-name my-elb-name --instance-port 80 --policy-names my-elb-name-ProxyProtocol-policy
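To confirm the policy was created and attached (same placeholder load balancer name as above):

aws elb describe-load-balancer-policies --load-balancer-name my-elb-name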
To enable use of the proxy protocol in Apache, in a server-wide or VirtualHost context, follow the mod_remoteip documentation, for example:
<IfModule mod_remoteip.c>
RemoteIPProxyProtocol On
RemoteIPHeader X-Forwarded-For
# The IPs or IP range of your ELB:
RemoteIPInternalProxy 192.168.1.0/24
# The IPs of hosts that may need to connect directly to the web server, bypassing the ELB (if applicable):
RemoteIPProxyProtocolExceptions 127.0.0.1
</IfModule>
You'll need to update the LogFormat wherever you have those defined (e.g. httpd.conf) to use %a rather than %h, or else the load balancer IP addresses will still appear in the logs.
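For example, the stock combined format with %a substituted for %h (a sketch; adjust whatever formats you already use):

LogFormat "%a %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined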

Apache mod_jk: send a request to all cluster nodes

I have a distributed cluster system. I have set up an Apache server with mod_jk load balancing, and sticky sessions are enabled.
Is it possible to send certain special requests (after inspecting the request headers) to all Tomcat cluster nodes? Is there any rule or method for this?
There is no need to send the responses back to clients; it is enough that all nodes are informed via a special URL. I have configured uriworkermap.properties, and there are three states (active, disabled, stopped) for load balancer nodes. Is there any solution via uriworkermap.properties or workers.properties?
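For reference, my load balancer members are defined in workers.properties roughly like this (a sketch; the worker names and hosts are illustrative):

worker.list=lb
worker.node1.type=ajp13
worker.node1.host=tomcat1.example.com
worker.node1.port=8009
worker.node2.type=ajp13
worker.node2.host=tomcat2.example.com
worker.node2.port=8009
worker.lb.type=lb
worker.lb.balance_workers=node1,node2
worker.lb.sticky_session=true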
If mod_jk cannot do this, can you suggest alternatives?