I think it's pretty common to use nginx to proxy connections to ExpressJS, so everything goes through ExpressJS.
I was thinking: why not use nginx to serve the application, since it's simpler to set up things like rewrites, leave ExpressJS as the backend only, and have the application communicate with ExpressJS directly on port 3000?
Is it a bad idea? If not, how often do people do this?
It's very common. But having your front end code directly talk to the node server adds complexity.
You have to handle CORS issues on the node server, including preventing cross-site form submissions. See Properly Understanding CORS with Same Host / Different Port & Security.
SSL is also going to be a bit more complicated. You'll need a wild card certificate.
However, there are some big advantages to using something like nginx to host your assets. In addition to the ones you enumerated, it sets you up to go serverless: you can host your app out of an S3 bucket or through another content delivery network.
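For illustration, here is a minimal nginx sketch of that kind of split (the server name, file paths and the /api prefix are placeholders; it assumes Express mounts its routes under /api and listens on port 3000):

server {
    listen 80;
    server_name example.com;

    # Serve the built front-end directly from disk
    root /var/www/my-app/build;
    index index.html;

    location / {
        try_files $uri /index.html;
    }

    # Forward API calls to the Express process on port 3000
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Because the browser only ever talks to port 80 on the same origin, the CORS and wildcard-certificate complications mentioned above largely go away.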
I have a Node.js web application with Express running on a Digital Ocean droplet. The Node.js application provides back-end APIs. I have two React front-ends that utilise the APIs, with different domains. The front-ends can be hosted on the same server, but my developer tells me I should use another server to host the front-ends, such as Cloudflare.
I have read that nginx can enable hosting multiple sites on the same server (i.e. host my front-ends on the same server), but I am unsure if this is good practice, as I then may not be able to use Cloudflare.
In terms of security, could someone tell me if I need nginx, and my options please?
Thanks
This is a very open-ended question, but I will try to answer it:
In terms of security, could someone tell me if I need nginx, and my options please?
You will need Nginx (or Apache) in any scenario, with one server or multiple, and whether you use Express or not. Express is only an application framework for building routes; you still need a service that responds to network requests, which is what Nginx and Apache do. You could avoid Nginx, but then your users would have to make requests directly to the port where you started Express, for example: http://my-site.com:3000/welcome. In terms of security it is better to hide the port number and use Nginx as a reverse proxy, so that your users only need to go to http://my-site.com/welcome.
my developer tells me I should use another server to host the front-ends, such as Cloudflare
Cloudflare does not offer hosting services as far as I know. It does offer a CDN to host a few files, but not a full site. You would need another Digital Ocean instance to do so. In a Cloudflare forum post I found: "Cloudflare is not a host. Cloudflare’s basic service is a DNS provider, where you simply point to your existing host.".
I have read that nginx can enable hosting multiple sites on the same server
Yes, Nginx (and Apache too) can host multiple sites on the same server, under different names or the same one, either as separate domains (www.my-backend.com, www.my-frontend.com) or as subdomains (www.backend.my-site.com, www.my-site.com).
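As an illustration, a sketch of an nginx configuration hosting both sites on the same machine (domain names, paths and the back-end port are placeholders) could look like this:

# Front-end: static files
server {
    listen 80;
    server_name www.my-frontend.com;
    root /var/www/frontend/build;
    index index.html;
}

# Back-end: reverse proxy to the Express API on port 3000
server {
    listen 80;
    server_name www.my-backend.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}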
... but unsure if this is good practice
Whether or not it is good practice, I think it is very common. A few valid reasons to keep them on separate servers would be:
Because you want that if the front-end fails the back-end API continues to work.
Because you want to balance network traffic.
Because you want to keep them separated.
It is definitely not a bad practice if both applications are highly related.
I'm trying to set up a reverse proxy on Apache 2.2 (Windows). I am able to do it on a non-corporate network without any problems. I am attempting to reverse proxy content from a vendor domain, but keep it under my own domain for SEO reasons.
dev.example.com/stuff ===> devstuff.vendor.com
However, when I try to incorporate this on my internal network, the Internet Gateway proxy is blocking the request, presumably as I'm not properly authenticating the call to the external domain.
dev.example.com ===> Internet Proxy =X=> devstuff.vendor.com
I've been googling every term I can think of and reading the Apache docs and can't find anything which seems to work. I have tried running Apache as a service with a network account which would have access, but naturally, it's probably not trying to use the proxy at all.
Is there any way to tell Apache to send external ProxyPass requests to use a specific proxy server, and perhaps a specific username/password as well? I'd love to avoid modifying the proxy or firewall too heavily to accomplish this.
Thanks!
Never quite did figure out the "with passing credentials" part, but using the ProxyRemote directive we could pass everything for our devstuff.vendor.com domain through our network proxy. From there, we had a proxy exception put in to allow requests from our web server IPs without authentication, since this was an approved arrangement anyhow.
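For reference, a sketch of the relevant Apache 2.2 directives (mod_proxy and mod_proxy_http must be loaded; the internal gateway host and port are placeholders):

# Send outbound requests for the vendor domain through the corporate gateway
ProxyRemote http://devstuff.vendor.com http://internal-gateway.example.com:8080

# Keep the vendor content under our own domain
ProxyPass        /stuff/ http://devstuff.vendor.com/
ProxyPassReverse /stuff/ http://devstuff.vendor.com/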
Though, in hindsight, even after solving this, we ended up backing up one step further and just going straight out the firewall for performance reasons, both because the extra hops slowed things down for the end user and because of the negative impact on our proxy server.
I am currently using lighty as a load-balancing reverse-proxy for two different webapps running on a small farm of HTTP servers:
roundrobbin(URL_1) => Server_Group_1
roundrobbin(URL_2) => Server_Group_2
I want to convert the HTTP servers to HTTPS servers. URL_1 has CERT_1 and URL_2 has CERT_2.
Unlike many people, I do not want to serve certificates from the front-end proxy. I want the front-end proxy to pass the HTTPS requests to secondary proxies: Proxy_1 (serves CERT_1) and Proxy_2 (serves CERT_2).
This should be possible with SNI (Server Name Indication). But everything I have read about SNI gives the example of the front-end proxy serving both certs. I do not want to put both of my certs on the front-end proxy. Call me crazy, but I actually want to hold the certs closer to the apps.
This might seem like a lot of trouble for two URLs. It is. My real case involves dozens of URLs. So it might seem silly not to store all the certs in one place. But there are 'organizational considerations' which make it advantageous to administer them separately.
So basically, I want to use SNI for pure forwarding and defer SSL termination to downstream.
Thanks for reading. I expect to learn a lot from this!
What you're trying to do doesn't rely on an HTTP reverse proxy but on a reverse proxy at the TCP connection level, with the additional capability of being able to recognise an SSL/TLS Client Hello, look for the Server Name extension and dispatch accordingly.
I realise this isn't quite the answer you're looking for, but I wouldn't look at HTTP servers for this.
It looks like this project might be able to do this (I haven't tried).
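As one concrete illustration of that kind of dispatch (not the project linked above, just an assumption that it fits your setup): nginx's stream module with ssl_preread can read the SNI name from the Client Hello and forward the raw TCP connection without terminating TLS. A sketch, assuming nginx 1.11.5+ built with the stream and ssl_preread modules, with hostnames and downstream addresses as placeholders:

stream {
    # Pick a downstream proxy based on the SNI name in the Client Hello
    map $ssl_preread_server_name $downstream {
        url-1.example.com proxy_1;
        url-2.example.com proxy_2;
    }

    upstream proxy_1 { server 10.0.0.11:443; }
    upstream proxy_2 { server 10.0.0.12:443; }

    server {
        listen 443;
        ssl_preread on;        # inspect the Client Hello, do not terminate TLS
        proxy_pass $downstream;
    }
}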
I hope this doesn't come across as a terribly silly question, but I'm learning how to implement a socket.io server for my website to produce real-time applications. My problem is that I can't figure out how to implement said applications in an Apache-served environment. Currently, when I run node server.js to start my socket.io server, I have to access it by visiting http://localhost:XXXX, where XXXX is whatever port I attach it to, naturally. I don't want my website to be forced to be viewed on an alternate port like this, but I obviously can't attach the server to port 80 since Apache is listening on that.
Obviously a natural solution would be to stop the Apache service and then run the Node server on port 80 to avoid a collision, but I don't want to sacrifice all of the functionality that Apache offers. Basically, I want to continue to serve my website via Apache on port 80 and integrate certain aspects of real-time applications via socket.io on port 3000, let's say.
Is there a way to do this that avoids the things I don't want? Those things being 1) having users access my site with :3000 in the URL, 2) disabling Apache, 3) using iframes.
Thanks in advance.
Generally, you should be able to hide Node.js with mod_proxy. A bit of searching turned up this: https://github.com/sindresorhus/guides/blob/master/run-node-server-alongside-apache.md (old link died, this is a new one)
However, Socket.io can be a bit finicky (https://github.com/LearnBoost/socket.io/issues/25), so you may have problems with it specifically.
As that ticket is a bit old, it's worth a shot; just don't be surprised if you have problems. Your next bet after that is to bind Node.js to port 80 and have it act as a reverse proxy for Apache with https://github.com/nodejitsu/node-http-proxy (still under a fair bit of development).
The optimal solution would be to run it on its own server and just have your socket traffic go to socket.example.com or something like that.
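If you do go the mod_proxy route, a sketch of the Apache side might look like this (it assumes mod_proxy, mod_proxy_http and mod_rewrite, plus mod_proxy_wstunnel on Apache 2.4.5+ for the websocket transport; the hostname, paths and port are placeholders):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/example

    # Tunnel websocket upgrades straight through to the Node process
    RewriteEngine On
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule ^/socket.io/(.*)$ ws://localhost:3000/socket.io/$1 [P,L]

    # Everything else socket.io needs (polling transports) goes over plain HTTP
    ProxyPass        /socket.io/ http://localhost:3000/socket.io/
    ProxyPassReverse /socket.io/ http://localhost:3000/socket.io/
</VirtualHost>

With that in place, pages served by Apache on port 80 can load /socket.io/socket.io.js and connect without ever seeing port 3000.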
Socket.io has multiple transport mechanisms. Some of them don't work if you run Apache as reverse proxy, but some do. Transports that don't work are websocket and flash, but xhr-polling and jsonp-polling should work.
Here's an example on setting the transports configuration option for socket.io:
// "server" is your existing Node HTTP server instance (socket.io pre-1.0 API)
var io = require("socket.io").listen(server);
// limit socket.io to the long-polling transports that work behind the proxy
io.set("transports", ["xhr-polling", "jsonp-polling"]);
On my Apache I'm using the normal name-based virtual host and reverse proxy setup, and with these transports socket.io seems to be working.
I have an HTTP server which is in our internal network and accessible only from inside it. I would like to put another server that would listen to an HTTPS port accessible from outside, and forward the requests to that HTTP server (and send back the responses via HTTPS). I know that there are several ways to do this with some programming involved (and I myself made a temporary solution with Tomcat and a very simple servlet I wrote), but is there a way to do the same just plugging parts already made (like Apache + modules)?
This is the sort of use-case that stunnel is designed for. There is a specific example of using stunnel to wrap an HTTP server.
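A minimal stunnel.conf sketch of that wrapping (the certificate path and internal hostname are placeholders):

; Terminate HTTPS on the outward-facing box, forward plain HTTP to the internal server
cert = /etc/stunnel/example.pem

[https-frontend]
accept  = 443
connect = internal-web.example.local:80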
You should consider whether this is really a good idea, though. Web applications designed for use inside a corporate firewall are often fairly lax about security. Merely encrypting the connections prevents casual eavesdropping, but does not secure the site. If an attacker finds your outward facing server and starts connecting to it, they can still try to find exploitable flaws in the web service (SQL injection, cross-site scripting, etc).
With Apache look into mod_proxy.
Apache 2.2 mod_proxy docs
Apache 2.0 mod_proxy docs
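A sketch of the mod_proxy approach (requires mod_ssl, mod_proxy and mod_proxy_http; the hostnames and certificate paths are placeholders):

<VirtualHost *:443>
    ServerName www.example.com

    # Terminate SSL here...
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.crt
    SSLCertificateKeyFile /etc/ssl/private/example.key

    # ...and forward plain HTTP to the internal server
    ProxyPass        / http://internal-web.example.local/
    ProxyPassReverse / http://internal-web.example.local/
</VirtualHost>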