I have an application at /mine running on a Jetty server that supports SPDY. It is sitting behind an Apache firewall that is being used as a proxy.
The application at /mine gets routed by the following config rules on Apache:
RewriteRule ^/mine$ /mine/ [R,L]
ProxyPass /mine/ https://jettyserver:9443/mine/ nocanon
ProxyPassReverse /mine/ https://jettyserver:9443/mine/ nocanon
As a result, when I hit apache/mine/, my browser is not negotiating SPDY with my application.
Adding mod_spdy to the proxy would be the correct approach but I cannot currently do that with the Apache we are running.
Is there a way I can get this to work?
For that particular configuration, I am afraid there is no way to get it working with SPDY or HTTP/2.
Apache configured as a reverse proxy talks HTTP/1.1 to Jetty, so there is no way to get SPDY or HTTP/2 into the picture at all (considering you cannot make Apache talk SPDY).
However, there are a number of alternative solutions.
Let's focus on HTTP/2 only, because SPDY is now being phased out in favour of HTTP/2.
The first and simplest solution is just to remove Apache completely.
You just expose Jetty as your server, and it will be able to speak HTTP/2 and HTTP/1.1 to browsers without problems.
Jetty will handle TLS and then HTTP/2 or HTTP/1.1.
The second solution is to put HAProxy in the front, and have it forward to Jetty.
HAProxy will handle TLS and forward clear-text HTTP/2 or HTTP/1.1 to Jetty.
The advantage of these two solutions is that you benefit from Jetty's HTTP/2 support, along with its HTTP/2 Push capabilities.
Not only that: Jetty also gives you a wide range of features you may otherwise rely on Apache for, such as rewriting, proxying, PHP/FastCGI support, etc.
For most configurations, you don't need Apache because Jetty can do it.
The first solution has the advantage that you have to configure one server only (Jetty), but you will probably pay a little for TLS because the JDK implementation used by Jetty is not the most efficient around.
The second solution has the advantage that TLS will be done more efficiently by HAProxy, and you can run it more easily on port 80. However, you have to configure two servers (HAProxy and Jetty) instead of just one.
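As a sketch of the second solution, an HAProxy front-end could terminate TLS and speak clear-text HTTP/2 (h2c) to Jetty. The names, ports and certificate path here are assumptions, and the proto h2 server option requires HAProxy 2.0 or later:

```haproxy
# Terminate TLS here and negotiate HTTP/2 or HTTP/1.1 via ALPN
frontend fe_https
    bind :443 ssl crt /etc/haproxy/site.pem alpn h2,http/1.1
    default_backend be_jetty

# Forward clear-text HTTP/2 to Jetty's h2c-enabled connector
backend be_jetty
    server jetty1 jettyserver:8080 proto h2
```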
Have a look at the Jetty HTTP/2 documentation and at the Webtide blogs where we routinely add entries about HTTP/2, configurations and examples.
Related
I have configured my Apache to act as a reverse proxy.
Now I want to enable HTTP/2 to speed it up.
The HTTP/2 module is enabled in both Apache and Nginx.
When I enter
Protocols h2 h2c http/1.1
ProtocolsHonorOrder Off
ProxyPass / h2://192.168.2.100/
ProxyPassReverse / https://192.168.2.100/
into the Apache site configuration, Nginx throws a 400 Bad Request error.
Using this code instead works:
Protocols h2 h2c http/1.1
ProtocolsHonorOrder Off
ProxyPass / https://192.168.2.100/
ProxyPassReverse / https://192.168.2.100/
Nginx Config:
listen 443 ssl http2;
How do I need to configure this section to work properly?
I realize this post is over a year old, but I ran into a similar issue after upgrading nginx to 1.20.1. It turned out that nginx was receiving multiple Host headers from Apache when HTTP/2 was used. Adding RequestHeader unset Host resolved the issue.
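Putting that together, the proxy section from the question could look like this. Everything beyond the question's own lines is an assumption to verify against your setup; mod_http2, mod_proxy_http2 and mod_headers must be loaded for Protocols h2, the h2:// scheme and RequestHeader to work:

```apache
Protocols h2 h2c http/1.1
ProtocolsHonorOrder Off

ProxyPass / h2://192.168.2.100/
ProxyPassReverse / https://192.168.2.100/

# Avoid sending duplicate Host headers to the nginx backend
RequestHeader unset Host
```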
This is more a comment than an answer but it looks like you're new so maybe I can help (at the risk of being totally wrong).
I'm working on configuring Nginx as a reverse proxy for Apache to run a Django application (By the way, I have never heard of anyone using Apache as a reverse proxy for Nginx).
But there's some discussion in the Nginx documentation that makes me think it might not even be possible:
Reading through the Nginx docs on proxy_pass, they mention that proxying WebSockets requires a special configuration.
That document explains that WebSockets require HTTP/1.1 (plus Upgrade and Connection HTTP headers), so you can either fabricate these with proxy_set_header between your proxy and server, or pass them along if the client request includes them.
Presumably in this case, if the client didn't send the Upgrade header, you'd proxy_pass the connection to the server as plain TCP rather than WebSockets.
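The special configuration the nginx docs describe boils down to forwarding the two hop-by-hop headers yourself; the backend name and path here are illustrative:

```nginx
location /ws/ {
    proxy_pass http://backend;
    # WebSockets need HTTP/1.1 plus the Upgrade/Connection headers
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```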
Well, HTTP/2 is (I assume) another Upgrade, and one likely to have even less user-agent support. So at the least, you'd risk compatibility problems.
Again, I have never configured Apache as a (reverse) proxy but my guess from your configuration is that Apache would handle the encryption to the client, then probably have to re-encrypt for the connection to Nginx.
That's a lot of overhead just for encryption, plus probable browser compatibility issues... probably not a great setup.
Again, I'm not sure if it's possible, I came here looking for a similar answer but you might want to reconsider.
Is there any way I can implement HTTP/3 in Apache?
Edit:
The QUIC protocol has now been made an RFC, see RFC 9000. Waiting for HTTP/3...
OpenSSL said somewhere that they will begin working on QUIC after they release OpenSSL 3.0. Not sure when OpenSSL 3.0 is going to be released.
Until then, maybe we can integrate BoringSSL into Apache and start testing stuff with QUIC.
No, there is no way at present. Apache has not committed to doing the required work at this time.
LiteSpeed is an Apache alternative supporting many of the same features, but with strong QUIC and HTTP/3 support.
Nginx has likewise made only vague comments about QUIC and HTTP/3, but Cloudflare has made an Nginx patch available that adds QUIC and HTTP/3 support. (Edit: Nginx has since previewed HTTP/3 support built independently of Cloudflare's implementation.)
Caddy is another alternative server with QUIC and HTTP/3 support.
However, if I were looking to enable, or even just experiment with, QUIC and HTTP/3, I would look to a CDN, as that is the simplest way to enable it and ensure optimal settings. Cloudflare has a free plan that (I think) also includes HTTP/3 and QUIC support, so it is easy to set up in front of a site you own.
Note: this solution compiles NGINX with quiche. Only use it when you want to test HTTP/3, as it is not very reliable.
One solution I found: you can run NGINX on port 443 speaking only HTTP/3 over QUIC, so it uses UDP.
Apache can also listen on 443, since it uses TCP.
Then you make Apache send the Alt-Svc header, and let it handle HTTP/0.9, HTTP/1.0, HTTP/1.1 and HTTP/2.
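Advertising the HTTP/3 endpoint from Apache is a one-liner with mod_headers; the port and max-age here are illustrative:

```apache
# Tell browsers an HTTP/3 endpoint is available on UDP 443
Header always set Alt-Svc "h3=\":443\"; ma=86400"
```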
And you can make NGINX act as a wrapper around Apache using something like:
server {
    listen 1.2.3.4:443 quic reuseport;
    # the usual ssl_certificate / ssl_certificate_key directives go here
    location / {
        proxy_pass https://your-apache-server.tld:443;
    }
}
This setup lets you serve:
HTTP/0.9
HTTP/1.0
HTTP/1.1
HTTP/2 with TLS
HTTP/2 without TLS, using the Upgrade: h2c header to upgrade to it
HTTP/2 without TLS, using H2Direct in Apache to enable direct (prior-knowledge) HTTP/2
HTTP/3.0
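The clear-text HTTP/2 variants in the list correspond roughly to this Apache configuration (mod_http2 required; treat it as a sketch):

```apache
# h2 = HTTP/2 over TLS, h2c = clear-text HTTP/2 via Upgrade
Protocols h2 h2c http/1.1
# Also accept direct (prior-knowledge) HTTP/2 without an Upgrade handshake
H2Direct on
```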
FAQ
Well, why would you want to do this? Instead, just use NGINX!
This makes sense if you need features that Apache offers, like clear-text HTTP/2 (via the Upgrade header or directly), or if you simply want Apache to handle all the main serving. If you don't need those features, you can just stick with NGINX.
Issues
I've noticed that nginx has some issues serving POST requests when this configuration is deployed.
Setting up a public Wildfly (9.0.2.Final) server, I'm figuring out the alternatives for doing this with or without Apache as a front towards the Internet. I'd prefer to use Apache, as this solves other problems for me.
I should say: I need to use SSL for securing the data traffic. I've set up SSL for both Wildfly and Apache.
Looking through blogs and tutorials, I haven't found an alternative that performs SSL between Apache and Wildfly. That would seem to be the preferred choice for me, were there one.
I've tried and configured:
1. Apache using mod_proxy_ajp. This prohibits me from using SSL between Apache and Wildfly but allows me to close the firewall for 8080 and 8443.
2. Apache using mod_proxy_http. This gets me the exception of no secure port to forward to on the Wildfly side, which seems to have no solution.
3. Opening the Wildfly ports 8080 and 8443 and letting requests go directly to a publicly exposed Wildfly, which I hear is not recommended.
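For reference, the mod_proxy_http variant in the second alternative is usually written along these lines; the host, port and path are illustrative, not from a tested setup:

```apache
# Re-encrypt between Apache and Wildfly
SSLProxyEngine On
ProxyPass        /app https://wildfly-host:8443/app
ProxyPassReverse /app https://wildfly-host:8443/app
```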
Since all three alternatives have their drawbacks, I got to ask: How are people usually doing this?
We are running Tomcat 7 behind a load balancer that works also as SSL terminator, and an Apache HTTP Server 2.4. The Apache connects to the Tomcat via mod_proxy_ajp.
For the application it is important that Tomcat is aware that the request came in via HTTPS and is thus secure. As e.g. this article recommends, it is common to configure this on the Tomcat Connector using the attributes secure="true" and possibly scheme="https" proxyPort="443". While this works, it is inconvenient, since we use plain HTTP for some purposes as well and would therefore need to set up two Tomcat connectors. It also has a smell: we are effectively telling Tomcat to override the wrong information it receives from the Apache HTTP Server (that the request is HTTP), instead of telling Apache to send the correct protocol and secure status in the first place.
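For context, the Connector setup that article recommends looks roughly like this; the port and protocol are the common AJP defaults rather than values from the question:

```xml
<!-- Tell Tomcat up front that requests arriving on this connector
     were originally HTTPS on port 443 -->
<Connector port="8009" protocol="AJP/1.3"
           secure="true" scheme="https" proxyPort="443" />
```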
So my question: is it somehow possible to configure the Apache HTTP Server itself to pass the correct information via the AJP protocol: that the request was received via HTTPS and is secure? The problem is that it doesn't know it's HTTPS, since there is an SSL terminator before it and the request arrives via HTTP, as far as it is concerned. Can I tell Apache somehow that it's actually HTTPS?
A partial solution seems to be to set the protocol on a ServerName directive in the virtual host in the Apache HTTP server:
ServerName https://whatever
This way, any Location: headers in redirects seem to be rewritten to https in Apache, but Tomcat is still passed the wrong information via AJP.
I always thought that AJP transfers this information automagically - but I'm not using mod_proxy_ajp, rather mod_jk. It's one of the reasons why I much prefer AJP over HTTP (and proxying).
Might be worth changing the module/connection.
I'm sure this is an FAQ but I couldn't find anything I recognized as being the same question.
I have several web-apps running in Tomcat, with some pages e.g. the login page protected by SSL as defined by confidentiality elements in their web.xmls. One of the apps also accepts client-authentication via certificate. I also have a rather extensive JAAS-based authorization & authentication scheme, and there is all kinds of shared code and different JAAS configurations etc between the various webapps.
I really don't want to disturb any of that while accomplishing the below.
I am now in the process of inserting Apache HTTPD with mod-proxy and mod-proxy-balancer in front of Tomcat as a load balancer, prior to adding more Tomcat instances.
What I want to accomplish for HTTPS requests is that they are redirected 'blind' to Tomcat without HTTPD being the SSL endpoint, i.e. HTTPD just passes ciphertext directly to Tomcat so that TC can keep doing what it is already doing with logins, SSL, web.xml confidentialty guarantees, and most importantly client authentication.
Is this possible with the configuration I've described?
I am very familiar with the webapps and SSL and HTTPS and Tomcat, but my knowledge of the outer reaches of Apache HTTPD is limited.
Happy to have this moved if necessary but it is kind of programming with config files ;)
This sounds similar to this question, where I've answered that it's not possible:
You can't just relay the SSL/TLS traffic to Tomcat from Apache. Either
your SSL connection ends at Apache, and then you should reverse proxy
the traffic to Tomcat (SSL [between Httpd and Tomcat] is rarely useful in this case), or you make
the clients connect to Tomcat directly and let it handle the SSL
connection.
I admit it's a bit short of links to back this claim. I guess I might be wrong (I've just never seen this done, but that doesn't strictly mean it doesn't exist...).
As you know, you need a direct connection, or a connection entirely relayed, between the user-agent and the SSL endpoint (in this case, you want it to be Tomcat). This means that Apache Httpd won't be able to look into the URL: it will know the host name at best (when using Server Name Indication).
The only option that doesn't seem to depend on a URL in the mod_proxy documentation is AllowCONNECT, which is what's used for forward proxy servers for HTTPS.
Even the options in mod_proxy_balancer expect a path at some point of the configuration. Its documentation doesn't mention SSL/HTTPS ("It provides load balancing support for HTTP, FTP and AJP13 protocols"), whereas mod_proxy talks at least about SSL when mentioning CONNECT.
I would suggest a couple of options:
Using an iptables-based load-balancer, without going through Httpd, ending the connections in Tomcat directly.
Ending the SSL/TLS connection at Httpd and using a plain HTTP reverse proxy to Tomcat.
This second option requires a bit more configuration to deal with the client certificates and Tomcat's security constraints.
If you have configured your webapp with <transport-guarantee>CONFIDENTIAL</transport-guarantee>, you will need to make Tomcat flag the connections as secure, despite the fact it sees them coming from its plain HTTP port. For Tomcat 5, here is an article (originally in French, but the automatic translation isn't too bad) describing how to implement a valve to set isSecure(). (If you're not familiar with valves, they are similar to filters, but operate within Tomcat itself, before the request is propagated to the webapp. They can be configured within Catalina.) I think from Tomcat 5.5 onwards, the HTTP connector's secure option does exactly that, without requiring your own valve. The AJP connector also has a similar option (if using mod_proxy_ajp or mod_jk).
If using the AJP connector, mod_proxy_ajp will forward the first certificate in the chain and make it available within Tomcat (via the normal request attribute). You'll probably need SSLOptions +ExportCertData +StdEnvVars. mod_jk (although deprecated as far as I know) can also forward the entire chain sent by the client (using JkOptions +ForwardSSLCertChain). This can be necessary when using proxy certificates (which are meaningless without the chain up to their end-entity certificate).
If you want to use mod_proxy_http, a trick is to pass the certificate via an HTTP header (mod_header), using something like RequestHeader set X-ClientCert %{SSL_CLIENT_CERT}s. I can't remember the exact details, but it's important to make sure that this header is cleared so that it never comes from the client's browser (who could forge it otherwise). If you need the full chain, you can try out this Httpd patch attempt. This approach would probably need an extra valve/filter to turn the header into the javax.servlet.request.X509Certificate (by parsing the PEM blocks).
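On the Httpd side, that header trick could be sketched like this; the location and header name are illustrative:

```apache
<Location "/secure">
    SSLVerifyClient require
    SSLOptions +ExportCertData +StdEnvVars
    # Overwrite any client-supplied value so the header cannot be forged
    RequestHeader set X-ClientCert "%{SSL_CLIENT_CERT}s"
</Location>
```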
A couple of other points that may be of interest:
If I remember correctly, you need to download the CRL files explicitly for Httpd and configure it to use them. Depending on the version of Httpd you're using, you may have to restart it to reload the CRLs.
If you're using re-negotiation to get your client-certificate, a CLIENT-CERT directive will not make Httpd request a client certificate as far as I know (this is otherwise done via a valve that can access the SSLSession when using the JSSE connector directly). You may have to configure the matching path in Httpd to request the client-certificate.