My situation:
The website is hosted by a default Apache2 installation on an Ubuntu server. It is served on port 443 over HTTPS with a self-signed certificate (for development).
Now I have a simple service written in Go that listens on port 8080 and acts as a reverse proxy: it takes HTTPS requests, forwards them to the local Apache instance, and returns the response to the client. This web service doesn't cache any files; it only forwards requests.
Code: https://play.golang.org/p/tnfKVWyLuZQ
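For reference, a minimal sketch of such a forwarding-only proxy (the cert.pem/key.pem pair and the localhost:443 target are stand-ins; the linked playground code may differ in detail):

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "time"
    )

    func main() {
        // Forward everything to the local Apache instance on :443.
        target, err := url.Parse("https://localhost:443")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(target)

        // Accept Apache's self-signed certificate (development only!)
        // and keep idle connections open so repeated requests from the
        // same client can reuse them.
        proxy.Transport = &http.Transport{
            TLSClientConfig:     &tls.Config{InsecureSkipVerify: true},
            MaxIdleConnsPerHost: 16,
            IdleConnTimeout:     90 * time.Second,
        }

        // Serve HTTPS on :8080 with the same self-signed key pair.
        log.Fatal(http.ListenAndServeTLS(":8080", "cert.pem", "key.pem", proxy))
    }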
My "problem":
Calling Apache directly, i.e. https://foo.com/bar/, is noticeably slower (by 200-400 ms) than calling the website through my reverse proxy, i.e. https://foo.com:8080/bar/
Why is it slower to call Apache2 directly? I expected the reverse proxy to add overhead, not to be a speedup. Comparison for an example page: https://i.imgur.com/TqznM2v.png
UPDATE: Sketch to show the current setup:
[diagram: current setup]
Regarding the encoding: it is consistent in both situations. The Content-Encoding header and Content-Length are the same in both cases (situation 1 vs. 2), and the client receives the same file size. I'm not sure why the HAR viewer only displays the uncompressed size in the second case; when checking in Chrome, I can see the compressed size in both cases.
Update #2: I came to the conclusion that the Go implementation handles multiple requests from the same client in a short time more efficiently than Apache2 in its default configuration. Since I only tested with a few clients, I can't say how well it scales; I imagine the web service will fall behind under load.
I see this as closed; thanks to everyone for the help.
As far as I can see, there are two possible reasons:
1. The reverse proxy may be serving some cached static files, like images, CSS, or JavaScript.
2. When you browse an HTTPS URL, the server has to perform TLS termination (sometimes called "offloading" the SSL certificate), and that can cause a heavy server load. So if the web application and the TLS termination are deployed on the same server, the load may cause high latency. Generally, a dedicated device called a load balancer is used to offload TLS, acting much like a reverse proxy.
Related
What makes nginx/Apache a web server and HAProxy not?
What functionality does HAProxy lack to be a web server?
HAProxy can listen on port 80 and can speak HTTP but that's not what people mean when they say "web server."
HAProxy is not a web server, because "web server" implies an HTTP endpoint that can serve static content from files and/or dynamic content generated from code. That's not what HAProxy is for.
Technically, there are certain capabilities in HAProxy that can be misused to emulate some capabilities of a web server -- you can serve very small static files from memory buffers and you can generate small dynamic responses using the optional embedded Lua interpreter -- but it is not intended or designed to be used as a web server. It's a proxy server -- emulating a web server toward the client, and emulating a client toward the real back-end web server(s) behind it -- because bidirectional emulation is commonly what proxies do.
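For illustration only, a config in that spirit might look like this (names and addresses are made up; http-request return requires HAProxy 2.2+): the frontend answers one tiny path from memory itself and proxies everything else to a back-end pool.

    frontend web
        mode http
        bind :80
        # Serve a tiny static response from memory (no back-end involved).
        http-request return status 200 content-type text/plain string "OK" if { path /healthz }
        default_backend app

    backend app
        mode http
        balance roundrobin
        # The real web servers behind the proxy (addresses are examples).
        server app1 192.0.2.10:8080 check
        server app2 192.0.2.11:8080 check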
With Nginx and Apache, you can specify a root directory from which files are served, and you can specify paths that are to be serviced by code running in languages like Perl, PHP, Python, etc. Not with HAProxy, because, again, that isn't what it's designed to do.
Both Nginx and Apache can also be used as proxy servers, as HAProxy can, but HAProxy is specifically designed and optimized for that primary purpose -- proxying and load balancing across multiple back-ends, selecting the back-end using various rules and algorithms... in essence, HAProxy is an "intermediate router" for HTTP requests, delivering them rather than responding to them. It can also proxy and load balance non-HTTP protocols that rely on TCP.
I have two projects running on Wildfly 8, two SSL certificates (one for each of them), and one IP.
I figured out that I should have one IP per SSL certificate.
But I needed to use these two SSL certificates with one IP. I couldn't find a way to do it with Wildfly, but there was a way to do it with the Apache HTTP Server. So, I installed Apache in front of Wildfly.
I listen on the HTTPS port (443) in Apache and redirect traffic to Wildfly's HTTP port (I used 8080). It works without any problems.
What I wonder is:
1. Does Apache decrypt the request and redirect it to Wildfly?
2. Is this the correct way to do it, or did it just happen to work?
3. Does this method create a security hole?
I googled some, but I could not find satisfying answers.
Thanks for any replies.
For this answer, I'm supposing that by "redirecting" you mean "proxying": Apache receives the request, proxies it to Wildfly, receives an answer from Wildfly, sends the answer to the client.
If you mean something else, then the simple answer is: it is wrong[1].
Does Apache decrypt the request and redirect it to Wildfly?
Yes. Apache will receive and send secure data to/from the client. Its communication with Wildfly will be plaintext.
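A sketch of what that Apache side could look like (hostnames, paths, and ports are examples, not the asker's actual config; requires mod_ssl, mod_proxy, and mod_proxy_http):

    <VirtualHost *:443>
        ServerName app.example.com
        # TLS terminates here ("edge" / offloading).
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/app.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/app.example.com.key

        # Plaintext HTTP from Apache to Wildfly.
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>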
Is this the correct way to do it, or did it just happen to work?
That's how it's usually done, yes. In other words: a load balancer and/or a proxy in front of Wildfly (Apache in your case). Wildfly itself is not reached directly by the public internet.
Does this method create a security hole?
It does, in the sense that everything else is also a security "compromise". In this case, you are trusting your internal network in the name of a more practical/manageable architecture. If you do not trust your internal network, you should look for another solution. In the general case, the price to pay seems fair to me, as you'll "only" be open to a man-in-the-middle attack between your Apache and your Wildfly. So, if you trust your internal network, you can trust that there won't be any MITM there.
Edit
[1] - As with everything else in life, there's no absolute truth. Basically, there are three techniques that can be used in a scenario like this: pass-through, edge, and re-encryption.
Pass-through is a "dumb" pipe, where the proxy knows nothing about TLS; Wildfly would then handle the secure communication with the client itself. I'm not sure Apache can do this, but it can be done with HAProxy in TCP mode (see the sketch after this list);
Edge (or offloading) is the situation I described above: the client talks TLS with Apache, and Apache talks plaintext with Wildfly;
Re-encryption is like edge, but the communication between Apache and Wildfly is also TLS, using a different certificate.
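A hypothetical pass-through sketch with HAProxy in TCP mode (names and ports are illustrative; Wildfly would have to terminate TLS itself on 8443):

    frontend tls_in
        mode tcp
        bind :443
        default_backend wildfly

    backend wildfly
        mode tcp
        # TLS bytes are relayed untouched to Wildfly.
        server wf1 127.0.0.1:8443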
I changed my iPhone app from using HTTP to using HTTPS and it just worked.
I doubt that it is actually working, though. How can I check, from my Tomcat server log files (or similar), that the requests actually used HTTPS?
Log the return value of request.isSecure() from a Servlet or JSP and see.
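For example, a throwaway JSP (the file name is up to you) that prints the result:

    <%-- Minimal JSP that reports whether the request arrived over TLS. --%>
    Secure: <%= request.isSecure() %>, scheme: <%= request.getScheme() %>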
BUT: be aware that Tomcat, if not configured properly, won't KNOW whether it's serving securely or not. This is the case whether your SSL cert terminates at your load balancer (or at a web server sitting in front of Tomcat), or Tomcat is handling the SSL traffic directly. Your <Connector> (in server.xml) for HTTPS requests must have the secure="true" attribute; the default is "false". If secure is set to false (or not set), then Tomcat may be successfully handling SSL connections, but when you call isSecure() from within a Servlet (or JSP), it'll return false.
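A sketch of such a connector (the port and keystore details are illustrative, not a drop-in config):

    <!-- HTTPS <Connector> in server.xml with the secure flag set. -->
    <Connector port="8443" protocol="HTTP/1.1"
               SSLEnabled="true" scheme="https" secure="true"
               keystoreFile="conf/keystore.jks" keystorePass="changeit" />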
I don't know if the secure attribute affects how Tomcat logs traffic or not, but it may.
It seems that nginx buffers requests before passing them to the upstream server. While that is OK in most cases, for me it is very bad. :)
My case is like this:
I have nginx as a frontend server proxying to 3 different servers:
1. Apache with a typical PHP app
2. Shaveet (an open-source comet server) that I built with Python and gevent
3. a file-upload server, also built with gevent, that proxies uploads to Rackspace Cloud Files while accepting the upload from the client
#3 is the problem. Right now, nginx buffers the whole request and then sends it to the file-upload server, which in turn sends it to Cloud Files, instead of forwarding each chunk as it arrives (that would make the upload faster, as I can push 6-7 MB/s to Cloud Files).
The reason I use nginx is to serve 3 different domains from one IP; if I can't do that, I will have to move the file-upload server to another machine.
As soon as this feature [1] is implemented, nginx will be able to act as a reverse proxy without buffering uploads (large client request bodies).
It should land in 1.7, which is the current mainline.
[1] http://trac.nginx.org/nginx/ticket/251
Update
This feature has been available since 1.7.11 via the directive
proxy_request_buffering on | off;
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
The Gunicorn docs suggest that you use nginx to buffer clients and prevent slowloris attacks, so this buffering is likely a good thing. However, I do see an option further down in the link I provided that talks about removing the proxy buffer; it's not clear whether this is within nginx or not, but it looks as though it is. Of course, this is under the assumption that you have Gunicorn running, which you do not. Perhaps it's still useful to you.
EDIT: I did some research, and that buffer-disabling option in nginx is for outbound, long-polling data. Nginx states on its wiki that inbound requests have to be buffered before being sent upstream.
"Note that when using the HTTP Proxy Module (or even when using FastCGI), the entire client request will be buffered in nginx before being passed on to the backend proxied servers. As a result, upload progress meters will not function correctly if they work by measuring the data received by the backend servers."
This is now available in nginx, since version 1.7.11.
See the documentation:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
To disable buffering of the upload, specify
proxy_request_buffering off;
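For illustration, a location block in that spirit (the path and upstream address are made up):

    location /upload/ {
        # Stream the request body to the backend as it arrives.
        proxy_request_buffering off;
        # Without HTTP/1.1, chunked request bodies are buffered anyway.
        proxy_http_version 1.1;
        proxy_pass http://127.0.0.1:8081;
    }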
I'd look into HAProxy to fulfill this need.
What are the advantages and disadvantages of using mod_jk versus mod_proxy for fronting a Tomcat instance with Apache?
I've been using mod_jk in production for years, but I've heard that it's "the old way" of fronting Tomcat. Should I consider changing? Would there be any benefits?
A pros/cons comparison for those modules exists on http://blog.jboss.org/
mod_proxy
* Pros:
  o No need to compile and maintain a separate module: mod_proxy, mod_proxy_http, mod_proxy_ajp, and mod_proxy_balancer come as part of the standard Apache 2.2+ distribution.
  o Ability to use the HTTP, HTTPS, or AJP protocol, even within the same balancer.
* Cons:
  o mod_proxy_ajp does not support packet sizes larger than 8K.
  o Basic load balancer.
  o Does not support the domain model of clustering.
mod_jk
* Pros:
  o Advanced load balancer.
  o Advanced node-failure detection.
  o Support for large AJP packet sizes.
* Cons:
  o Need to build and maintain a separate module.
If you wish to stay in Apache land, you can also try the newer mod_proxy_ajp, which uses the AJP protocol to communicate with Tomcat instead of plain old HTTP, but which leverages mod_proxy to do the work.
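A hypothetical mod_proxy_ajp snippet (the paths are examples; 8009 is Tomcat's default AJP port):

    LoadModule proxy_module     modules/mod_proxy.so
    LoadModule proxy_ajp_module modules/mod_proxy_ajp.so

    # Forward /app to Tomcat's AJP connector.
    ProxyPass        /app ajp://127.0.0.1:8009/app
    ProxyPassReverse /app ajp://127.0.0.1:8009/app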
AJP vs HTTP
When using mod_jk, you are using AJP. When using mod_proxy, you will use HTTP or HTTPS. This is essentially what makes all the difference.
The Apache JServ Protocol (AJP)
The Apache JServ Protocol (AJP) is a binary protocol that can proxy inbound requests from a web server through to an application server that sits behind the web server. AJP is a highly trusted protocol and should never be exposed to untrusted clients, which could use it to gain access to sensitive information or execute code on the application server.
Pros
Easy to set up, as the correct forwarding of HTTP headers is not required.
It is less resource-intensive, because the TCP packets are forwarded in a binary format instead of doing a costly HTTP exchange.
Cons
Transferred data is not encrypted. It should only be used within trusted networks.
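To make the contrast concrete, a hypothetical mod_jk setup (the worker name, host, and port are illustrative):

    # workers.properties
    worker.list=tomcat1
    worker.tomcat1.type=ajp13
    worker.tomcat1.host=127.0.0.1
    worker.tomcat1.port=8009

    # In the Apache config, requests are then mapped to that worker:
    JkMount /app/* tomcat1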
Hypertext Transfer Protocol (HTTP)
HTTP functions as a request–response protocol in the client–server computing model. A web browser, for example, may be the client and an application running on a computer hosting a website may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content, or performs other functions on behalf of the client, returns a response message to the client. The response contains completion status information about the request and may also contain requested content in its message body.
Pros
Can be encrypted with SSL/TLS, making it suitable for traffic across untrusted networks.
It is flexible, as it allows you to modify the request before forwarding it; for example, you can set custom headers.
Cons
More overhead, as the correct forwarding of the HTTP headers has to be ensured.
More resource-intensive, as the request is fully parsed before forwarding.
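For comparison with the AJP snippets above, a hypothetical mod_proxy setup over plain HTTP that also sets a custom header before forwarding (requires mod_proxy_http and mod_headers; the header and paths are examples):

    # Tell Tomcat the original request came in over HTTPS.
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPass        /app http://127.0.0.1:8080/app
    ProxyPassReverse /app http://127.0.0.1:8080/app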