I have an application that's set up like this:
HAPROXY -> VARNISH -> APACHE (MOD_EVENT) -> PHP_FPM (REDIS + MYSQL)
HAProxy for TLS termination, Varnish for caching.
I've enabled HTTP/2 in HAProxy, Varnish and Apache:
HAProxy: added alpn h2,http/1.1 to the frontend, and proto h2 to the backend
Varnish: added the -p feature=+http2 flag
Apache: installed mod_http2 and added Protocols h2 h2c http/1.1.
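Concretely, the HAProxy side of that looks roughly like this (a sketch, not my exact config; the certificate path, backend name and Varnish port are placeholders):

```
frontend fe_https
    bind :443 ssl crt /etc/ssl/site.pem alpn h2,http/1.1
    default_backend be_varnish

backend be_varnish
    # proto h2 forces clear-text HTTP/2 (h2c) towards Varnish
    server varnish1 127.0.0.1:6081 proto h2
```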
My understanding from the documentation is that HAProxy supports end-to-end HTTP/2, while Varnish only supports HTTP/2 on the frontend.
So after Varnish, the HTTP/2 request becomes HTTP/1.1, and Apache receives HTTP/1.1 requests, as I've confirmed through the logs.
My question is:
Should I strive for end-to-end HTTP/2? Is that desirable for performance, or should I just not enable HTTP/2 on the HAProxy backend connection, since Varnish won't pass it through anyway?
I've been thinking about it for some time.
In theory, once the HTTP/2 connection reaches HAProxy, I think we probably won't benefit from HTTP/2 multiplexing anymore, as the rest of the request travels within the datacenter... and network latencies are so much smaller within a datacenter, right? Just curious if anyone else has run into the same question.
The main reason why we use HTTP/2 is to prevent head-of-line blocking. The multiplexing aspect of H2 helps reduce the blocking.
However, when Varnish communicates with the origin server, the goal is to cache the response and avoid sending more requests to the origin.
The fact that HTTP/1.1 is used between Varnish and the origin shouldn't be a big problem, because Varnish is the only client served there; head-of-line blocking will hardly ever occur.
Related
I have configured my Apache to act as a reverse proxy.
Now I want to enable HTTP/2 to speed it up.
Apache has the module enabled, and so does Nginx.
When I enter
Protocols h2 h2c http/1.1
ProtocolsHonorOrder Off
ProxyPass / h2://192.168.2.100/
ProxyPassReverse / https://192.168.2.100/
into the Apache site configuration, Nginx throws a 400 Bad Request error.
Using this code instead works:
Protocols h2 h2c http/1.1
ProtocolsHonorOrder Off
ProxyPass / https://192.168.2.100/
ProxyPassReverse / https://192.168.2.100/
Nginx Config:
listen 443 ssl http2;
How do I need to configure this section to work properly?
I realize that this post is over a year old, but I ran into a similar issue after upgrading nginx to 1.20.1. It turns out that nginx was receiving multiple Host headers from Apache when HTTP/2 was used. Adding RequestHeader unset Host resolved the issue.
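For anyone else landing here, the working Apache vhost fragment looked roughly like this (a sketch based on the configuration above; mod_headers must be enabled for the RequestHeader directive):

```
Protocols h2 h2c http/1.1
ProtocolsHonorOrder Off
# Drop the duplicate Host header before proxying over h2
RequestHeader unset Host
ProxyPass / h2://192.168.2.100/
ProxyPassReverse / https://192.168.2.100/
```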
This is more a comment than an answer but it looks like you're new so maybe I can help (at the risk of being totally wrong).
I'm working on configuring Nginx as a reverse proxy for Apache to run a Django application (By the way, I have never heard of anyone using Apache as a reverse proxy for Nginx).
But there's some discussion in the Nginx documentation that makes me think it might not even be possible:
Reading through the Nginx docs on proxy_pass, they mention that proxying WebSockets requires a special configuration.
That document explains that WebSockets require HTTP/1.1 (plus Upgrade and Connection headers), so you can either set those headers explicitly with proxy_set_header between your proxy and server, or pass them along if the client request includes them.
Presumably in this case, if the client didn't send the Upgrade header, you'd proxy_pass the connection to a server using TCP, rather than websockets.
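The special configuration from the nginx proxying docs is roughly this (a sketch; the location path and upstream name are placeholders):

```
location /ws/ {
    proxy_pass http://backend;
    # WebSockets need HTTP/1.1 plus the Upgrade/Connection headers
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```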
Well, HTTP/2 is (I assume) another Upgrade, and one likely to have even less user-agent support. So at the least, you'd risk compatibility problems.
Again, I have never configured Apache as a (reverse) proxy but my guess from your configuration is that Apache would handle the encryption to the client, then probably have to re-encrypt for the connection to Nginx.
That's a lot of overhead just for encryption, plus probable browser compatibility issues... and probably not a great setup.
Again, I'm not sure if it's possible, I came here looking for a similar answer but you might want to reconsider.
I am not quite clear on whether the Kestrel server needs encryption when it runs as a localhost server.
I use Apache with HTTPS as the proxy server for the Kestrel server. Do I need to run HTTPS in Kestrel as well? In theory, whatever passes through the Apache proxy server (with HTTPS enabled) should be encrypted, right?
Please shed some light if you have any ideas.
No, you don't have to encrypt the traffic between Apache and Kestrel. Apache (or nginx, or IIS) will be the SSL termination point.
However, what you need to make sure is:
that Apache correctly sets the forwarded headers (the X-Forwarded-* headers), and
that Kestrel is correctly configured to use these headers (UseIISIntegration already does that), or that you register the app.UseForwardedHeaders(); middleware, which also handles them.
Without either one, your requests will fail if the controllers/actions are marked with the [RequireHttps] attribute.
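On the Apache side, that can look roughly like this (a sketch; the Kestrel port is an assumption). mod_proxy adds X-Forwarded-For on its own, but X-Forwarded-Proto has to be set explicitly so Kestrel knows the original request was HTTPS:

```
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:5000/
ProxyPassReverse / http://127.0.0.1:5000/
# Tell Kestrel the client connection was TLS-terminated here
RequestHeader set X-Forwarded-Proto "https"
```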
I currently have a server running Apache httpd 2.2 serving ~1000 WebSocket connections. I'm trying to scale this up to around ~10K WebSockets on the same hardware. I thought I'd be able to place an nginx reverse proxy on the front end, and that nginx would only connect to the backend when there was incoming traffic, while maintaining the connection to the outside world. However, right now the connection seems to be continuous (i.e., once the WebSocket upgrade is complete, an httpd process is tied up until the connection is broken). Am I misunderstanding how nginx should do WebSocket proxying, or do I have something misconfigured?
NGINX supports WebSockets by creating a tunnel between the client and the backend server, and so nginx will not terminate the connection to the backend/frontend until the client/server terminates it.
See: https://www.nginx.com/blog/websocket-nginx/ for more info.
I'm creating a middleware/web app for a REST API in Erlang with the Cowboy framework, and Apache HTTP Server with mod_proxy to redirect requests from port 80 to port 80xx, since I don't want to use custom ports for listening and I don't want to run the code as root just to be able to listen on port 80.
Now I want to encrypt the connections with SSL, using HTTPS, and my question is: where is the best place to configure SSL (certificates, keys, etc.): in Apache HTTP Server (before redirecting with mod_proxy), or in the Cowboy framework in the Erlang app, since both support SSL configuration?
Thanks in advance!
I'd put it in Apache:
If you want to add more services later, they'd automatically benefit from SSL protection.
If you need to debug something, you can tcpdump the data between Apache and your Erlang VM, which will be decrypted at that point.
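A minimal Apache vhost for that setup could look like this (a sketch; the hostname, certificate paths and Cowboy port are placeholders):

```
<VirtualHost *:443>
    ServerName example.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key
    # Cowboy listens in clear text on a high port, so no root needed
    ProxyPass        / http://127.0.0.1:8080/
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```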
I have an application at /mine running on a Jetty server that supports SPDY. It is sitting behind an Apache firewall that is being used as a proxy.
The application at /mine gets routed by the following config rules on Apache:
RewriteRule ^/mine$ /mine/ [R,L]
ProxyPass /mine/ https://jettyserver:9443/mine/ nocanon
ProxyPassReverse /mine/ https://jettyserver:9443/mine/ nocanon
As a result, when I hit apache/mine/, my browser is not negotiating SPDY with my application.
Adding mod_spdy to the proxy would be the correct approach but I cannot currently do that with the Apache we are running.
Is there a way I can get this to work?
For that particular configuration you want to run, I am afraid there is no way to get it working with SPDY or HTTP/2.
Apache configured as a reverse proxy talks HTTP/1.1 to Jetty, so there is no way to get SPDY or HTTP/2 into the picture at all (considering you cannot make Apache talk SPDY).
However, there are a number of alternative solutions.
Let's focus on HTTP/2 only because SPDY is now being phased out in favour of HTTP/2.
The first and simplest solution is just to remove Apache completely.
You just expose Jetty as your server, and it will be able to speak HTTP/2 and HTTP/1.1 to browsers without problems.
Jetty will handle TLS and then HTTP/2 or HTTP/1.1.
The second solution is to put HAProxy in the front, and have it forward to Jetty.
HAProxy will handle TLS and forward clear-text HTTP/2 or HTTP/1.1 to Jetty.
The advantage of these two solutions is that you will benefit from Jetty's HTTP/2 support, along with its HTTP/2 Push capabilities.
Not only that: Jetty also gives you a wide range of Apache-like features such as rewriting, proxying, PHP/FastCGI support, etc.
For most configurations, you don't need Apache because Jetty can do it.
The first solution has the advantage that you have to configure one server only (Jetty), but you will probably pay a little for TLS because the JDK implementation used by Jetty is not the most efficient around.
The second solution has the advantage that TLS will be done more efficiently by HAProxy, and you can run it more easily on port 80. However, you have to configure two servers (HAProxy and Jetty) instead of just one.
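For the second solution, the HAProxy configuration could be sketched like this (names, paths and ports are assumptions; proto h2 on the server line requires a reasonably recent HAProxy):

```
frontend fe_https
    bind :443 ssl crt /etc/ssl/site.pem alpn h2,http/1.1
    default_backend be_jetty

backend be_jetty
    # clear-text HTTP/2 (h2c) from HAProxy to Jetty
    server jetty1 127.0.0.1:8080 proto h2
```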
Have a look at the Jetty HTTP/2 documentation and at the Webtide blogs where we routinely add entries about HTTP/2, configurations and examples.