How to put up an off-the-shelf https to http gateway? - apache

I have an HTTP server which is on our internal network and accessible only from inside it. I would like to put up another server that would listen on an HTTPS port accessible from outside, and forward the requests to that HTTP server (and send back the responses via HTTPS). I know that there are several ways to do this with some programming involved (and I myself made a temporary solution with Tomcat and a very simple servlet I wrote), but is there a way to do the same just by plugging together ready-made parts (like Apache + modules)?

This is the sort of use-case that stunnel is designed for. There is a specific example of using stunnel to wrap an HTTP server.
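For example, a minimal stunnel configuration for such a gateway might look like the following sketch (the certificate path and the internal host name are assumptions to be replaced with your own values):

    ; stunnel.conf sketch: terminate HTTPS on 443, forward plain HTTP inside
    ; (certificate path and internal host are assumptions)
    cert = /etc/stunnel/gateway.pem

    [https-gateway]
    accept = 443
    connect = internal-server.example.local:80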
You should consider whether this is really a good idea, though. Web applications designed for use inside a corporate firewall are often fairly lax about security. Merely encrypting the connections prevents casual eavesdropping, but does not secure the site. If an attacker finds your outward facing server and starts connecting to it, they can still try to find exploitable flaws in the web service (SQL injection, cross-site scripting, etc).

With Apache, look into mod_proxy.
Apache 2.2 mod_proxy docs
Apache 2.0 mod_proxy docs
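A rough sketch of what the mod_proxy/mod_ssl setup could look like (hostnames and certificate paths are placeholders, not details from the question):

    # vhost sketch: HTTPS toward the outside, plain HTTP to the internal server
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so

    <VirtualHost *:443>
        ServerName gateway.example.com
        SSLEngine on
        SSLCertificateFile conf/ssl/gateway.crt
        SSLCertificateKeyFile conf/ssl/gateway.key

        # reverse proxy only, never a forward proxy
        ProxyRequests Off
        ProxyPass / http://internal-server.example.local/
        ProxyPassReverse / http://internal-server.example.local/
    </VirtualHost>

Note that mod_ssl and mod_proxy_http both need to be enabled for this to work.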

Is nginx needed if Express is used?

I have a Node.js web application with Express running on a Digital Ocean droplet. The Node.js application provides back-end APIs. I have two React front-ends that utilise the APIs under different domains. The front-ends can be hosted on the same server, but my developer tells me I should use another server to host the front-ends, such as Cloudflare.
I have read that nginx can enable hosting multiple sites on the same server (i.e. host my front-ends on the same server), but I am unsure if this is good practice as I then may not be able to use Cloudflare.
In terms of security, could someone tell me if I need nginx, and what my options are, please?
Thanks
This is a way too open-ended question but I will try to answer it:
"In terms of security, could someone tell me if I need nginx, and what my options are, please?"
You will need Nginx (or Apache) in any scenario, with one server or multiple, using Express or not. Express is only an application framework for building routes, but you still need a service that responds to network requests; that is what Nginx and Apache do. You could avoid using Nginx, but then your users would have to make requests directly to the port where Express is listening, for example: http://my-site.com:3000/welcome. In terms of security you are better off hiding the port number and using Nginx as a reverse proxy, so that your users only need to go to http://my-site.com/welcome.
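A minimal sketch of such an Nginx reverse proxy, using the domain and port from the example above (everything else is a typical default, not something from your setup):

    # users hit port 80, nginx forwards to Express listening on 3000
    server {
        listen 80;
        server_name my-site.com;

        location / {
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }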
"my developer tells me I should use another server to host the front-ends, such as Cloudflare"
Cloudflare does not offer hosting services as far as I know. It does offer a CDN to host a few files, but not a full site. You would need another Digital Ocean instance to do so. In a Cloudflare forum post I found: "Cloudflare is not a host. Cloudflare’s basic service is a DNS provider, where you simply point to your existing host."
"I have read that nginx can enable hosting multiple sites on the same server"
Yes, Nginx (and Apache too) can host multiple sites on the same server, under different names or the same one, either as separate domains (www.my-backend.com, www.my-frontend.com) or as subdomains (www.backend.my-site.com, www.my-site.com).
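For instance, a sketch with two server blocks on the same machine, one serving a front-end and one proxying to a back-end (the paths and the port are assumptions):

    # front-end: static files
    server {
        listen 80;
        server_name www.my-site.com;
        root /var/www/frontend;
        index index.html;
    }

    # back-end API: proxied to the Express app
    server {
        listen 80;
        server_name www.backend.my-site.com;
        location / {
            proxy_pass http://127.0.0.1:3000;
        }
    }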
"... but I am unsure if this is good practice"
Whether it is good or bad practice, I think it is very common. A few valid reasons to keep them on separate servers would be:
Because you want the back-end API to keep working if the front-end fails.
Because you want to balance network traffic.
Because you want to keep them separated.
It is definitely not a bad practice if both applications are highly related.

What makes nginx/apache a web server, HAProxy not?

What makes nginx/apache a web server, HAProxy not?
What functionalities HAProxy lacks to be a web server?
HAProxy can listen on port 80 and can speak HTTP but that's not what people mean when they say "web server."
HAProxy is not a web server, because "web server" implies an HTTP endpoint that can serve static content from files and/or dynamic content generated from code. That's not what HAProxy is for.
Technically, there are certain capabilities in HAProxy that can be misused to emulate some capabilities of a web server -- you can serve very small static files from memory buffers and you can generate small dynamic responses using the optional embedded Lua interpreter -- but it is not intended or designed to be used as a web server. It's a proxy server -- emulating a web server toward the client, and emulating a client toward the real back-end web server(s) behind it -- because bidirectional emulation is commonly what proxies do.
With Nginx and Apache, you can specify a root directory from which files are served, and you can specify paths that are to be serviced by code running in languages like Perl, PHP, Python, etc. Not with HAProxy, because, again, that isn't what it's designed to do.
Both Nginx and Apache can also be used as proxy servers, as HAProxy can, but HAProxy is specifically designed and optimized for that primary purpose -- proxying and load balancing across multiple back-ends, selecting a back-end using various rules and algorithms... in essence, HAProxy is an "intermediate router" for HTTP requests, delivering them rather than responding to them. It can also proxy and load balance non-HTTP protocols that rely on TCP.
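A bare-bones haproxy.cfg sketch illustrates the difference: there is only a listener and a pool of back-end servers, no document root or script handler anywhere (the addresses are made up):

    # haproxy.cfg sketch: a front-end listener and a pool of back-end servers
    defaults
        mode http
        timeout connect 5s
        timeout client 30s
        timeout server 30s

    frontend http_in
        bind *:80
        default_backend web_pool

    backend web_pool
        balance roundrobin
        server web1 10.0.0.11:8080 check
        server web2 10.0.0.12:8080 check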

Nginx serving application and ExpressJS just as backend

I think it's pretty common to use nginx to proxy connections to ExpressJS, so all is done through ExpressJS.
I was thinking, why not use nginx to serve the application, since it's simpler to set up things like rewrites, and leave ExpressJS as the backend only, with the application communicating with ExpressJS directly on port 3000.
Is it a bad idea? If not, how often do people do this?
It's very common. But having your front-end code talk directly to the node server adds complexity.
You have to handle CORS issues on the node server, including preventing cross-site form submissions. See Properly Understanding CORS with Same Host / Different Port & Security.
SSL is also going to be a bit more complicated. You'll need a wildcard certificate.
However, there are some big advantages to using something like nginx to host your assets. In addition to the ones you enumerated, it sets you up to go serverless. You can host your app out of an S3 bucket or through another content delivery network.
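A sketch of that kind of setup, assuming the built front-end lives in /var/www/app/build and the API is exposed under /api (both of which are assumptions, not details from the question):

    server {
        listen 80;
        server_name example.com;

        # static front-end assets built by React
        root /var/www/app/build;
        index index.html;

        # API calls go to Express
        location /api/ {
            proxy_pass http://127.0.0.1:3000;
        }

        # everything else falls back to index.html for client-side routing
        location / {
            try_files $uri /index.html;
        }
    }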

Apache Reverse Proxy Using a Network Proxy Credential?

I'm trying to set up a reverse proxy on Apache 2.2 (Windows). I am able to do it on a non-corporate network without any problems. I am attempting to reverse proxy content from a vendor domain, but keep it under my own domain for SEO reasons.
dev.example.com/stuff ===> devstuff.vendor.com
However, when I try to incorporate this on my internal network, the Internet Gateway proxy is blocking the request, presumably as I'm not properly authenticating the call to the external domain.
dev.example.com ===> Internet Proxy =X=> devstuff.vendor.com
I've been googling every term I can think of and reading the Apache docs and can't find anything which seems to work. I have tried running Apache as a service with a network account which would have access, but naturally, it's probably not trying to use the proxy at all.
Is there any way to tell Apache to send external ProxyPass requests to use a specific proxy server, and perhaps a specific username/password as well? I'd love to avoid modifying the proxy or firewall too heavily to accomplish this.
Thanks!
Never quite did figure out the "with passing credentials" part, but using the ProxyRemote directive, we could pass everything for our devstuff.vendor.com domain through our network proxy. From there, we had a proxy exception put in to allow requests from our web server IPs without authentication, since this was an approved arrangement anyhow.
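For reference, the relevant directives looked roughly like the following; the corporate proxy address shown here is a placeholder:

    # send outbound requests for the vendor host through the corporate proxy
    ProxyRemote http://devstuff.vendor.com/ http://internal-proxy.example.com:8080

    # keep the vendor content under our own path
    ProxyPass /stuff http://devstuff.vendor.com/
    ProxyPassReverse /stuff http://devstuff.vendor.com/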
Though, in hindsight, even after solving this, we ended up backing up one step further and just going straight out through the firewall, both for performance reasons (too many hops for the end user) and because of the negative impact on our proxy server.

Socket RPC with Tornado in an established Apache/PHP environment

We've got an established Apache/PHP site, with a Flash front-end. We're going to need to implement some sort of socket communication, or 'long-polling', to push updates to the Flash app. Since this obviously isn't going to be a good situation for Apache or PHP, I'd like to use Tornado for this aspect of the functionality, but I also don't want to run Tornado on another port: since the Flash app will be running on the client machine, we don't want to have to deal with restrictive firewalls blocking the socket connections.
Ideally I'd like to run a proxy which can forward most requests to Apache, and other requests to Tornado. I saw some suggestions for using Apache as the first-contact proxy, forward requests to Tornado when necessary, but I've also seen this discounts a lot of the async capabilities of Tornado.
I thought, why not use Tornado as the first contact for port 80 and have it proxy back to Apache? I couldn't find anything on this at all and am wondering if it is even possible.
Another option would be to use something like lighttpd as the proxy and have it decide whether to pass things along to Apache or to Tornado, but does this kind of setup make sense? Or what about Nginx?
Any suggestions, advice or corrections on my understanding of things would be greatly appreciated!
This is called a reverse proxy and it's very easy to configure nginx to perform this. (lighttpd should also be able to do this job well, but I have no experience using it).
The tornado documentation has an example nginx configuration.
One thing to note when using a reverse proxy is that the connection to your upstream server will now be originating from the proxy, not the client. The de facto standard is to put information about the original request in certain http headers. In the example from the tornado docs, the X-Real-IP header is set to the IP of the original client and X-Scheme is set to the scheme of the original request (http/https for example).
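The relevant part of such an nginx configuration looks something like this; the Tornado port is an assumption, and the header names follow the example in the Tornado docs:

    location / {
        proxy_pass http://127.0.0.1:8000;
        # pass the original client address and scheme to Tornado
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Scheme $scheme;
    }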
This may require some modifications to your upstream server. With Tornado this is done by constructing the HTTPServer with the xheaders argument set to True, which instructs the server to try to pull the IP address and scheme from the X-headers. Note that if you use this with a server that isn't behind a reverse proxy that sets the appropriate headers, then you are open to IP address spoofing.
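On the Tornado side, a minimal sketch with xheaders enabled might look like this (the handler and port are only for illustration):

    import tornado.httpserver
    import tornado.ioloop
    import tornado.web

    class MainHandler(tornado.web.RequestHandler):
        def get(self):
            # remote_ip comes from X-Real-IP because xheaders=True below
            self.write("client ip: %s" % self.request.remote_ip)

    application = tornado.web.Application([(r"/", MainHandler)])

    # xheaders=True tells Tornado to trust X-Real-IP / X-Scheme from the proxy
    server = tornado.httpserver.HTTPServer(application, xheaders=True)
    server.listen(8000)
    tornado.ioloop.IOLoop.instance().start()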