Enable Geode REST to use HTTP and HTTPS at the same time - gemfire

If we set the Geode properties to use SSL for the web components, then we have to use HTTPS for all web traffic. Is there a way to configure Geode, for development purposes, to use both HTTP on one port (8080) and HTTPS on another (8443)?
It looks like Jetty can be configured to allow both by using multiple connectors, even on the same port...

Unfortunately that isn't possible at the moment. I'd suggest trying to start different instances of the various components (locator and server) with different SSL settings (off or on) for testing purposes.
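As a rough illustration of that workaround, the SSL-for-web settings and the HTTP port live in each member's properties file, so each test server could get its own file. The property names below are the standard Geode/GemFire ones as I recall them, but the ports, paths and passwords are placeholders -- check them against the docs for your version:

    # http-dev.properties - REST API over plain HTTP
    start-dev-rest-api=true
    http-service-port=8080

    # https-dev.properties - REST API with SSL enabled for the web components
    start-dev-rest-api=true
    http-service-port=8443
    ssl-enabled-components=web
    ssl-keystore=/path/to/keystore.jks
    ssl-keystore-password=changeit
    ssl-truststore=/path/to/truststore.jks
    ssl-truststore-password=changeit

Each server would then be started pointing at the appropriate file, e.g. with gfsh: start server --name=rest-http --properties-file=http-dev.properties.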

Related

Is Nginx needed if Express is used?

I have a Node.js web application with Express running on a Digital Ocean droplet. The Node.js application provides back-end APIs. I have two React front-ends, on different domains, that use the APIs. The front-ends can be hosted on the same server, but my developer tells me I should use another server to host the front-ends, such as Cloudflare.
I have read that Nginx can enable hosting multiple sites on the same server (i.e. host my front-ends on the same server), but I'm unsure if this is good practice, as I then may not be able to use Cloudflare.
In terms of security, could someone tell me if I need Nginx, and what my options are, please?
Thanks
This is a way too open-ended question but I will try to answer it:
In terms of security, could someone tell me if I need Nginx, and what
my options are, please?
You will need Nginx (or Apache) in any scenario, with one server or multiple, and whether you use Express or not. Express is only an application framework for building routes; you still need a service that responds to network requests, which is what Nginx and Apache do. You could avoid using Nginx, but then your users would have to make requests directly to the port where you started Express, for example: http://my-site.com:3000/welcome. In terms of security, you would be better off hiding the port number and using an Nginx reverse proxy, so that your users only need to go to http://my-site.com/welcome.
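A minimal sketch of that reverse proxy, assuming the Express app listens on port 3000 and the domain is my-site.com (both placeholders), might look like:

    # /etc/nginx/conf.d/my-site.conf
    server {
        listen 80;
        server_name my-site.com;

        location / {
            # forward requests to the Express app on port 3000
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

With this in place, users hit http://my-site.com/welcome and never see port 3000.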
my developer tells me I should use another server to host the
front-ends, such as Cloudflare
Cloudflare does not offer hosting services as far as I know. It does offer a CDN that can host a few files, but not a full site. You would need another Digital Ocean instance to do that. In a Cloudflare forum post I found: "Cloudflare is not a host. Cloudflare’s basic service is a DNS provider, where you simply point to your existing host.".
I have read that Nginx can enable hosting multiple sites on the same
server
Yes, Nginx (and Apache too) can host multiple sites on the same server, under different names or the same one, either as separate domains (www.my-backend.com, www.my-frontend.com) or as subdomains (www.backend.my-site.com, www.my-site.com). For example:
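Something along these lines (the domains, paths and port are placeholders; the front-end block serves a static React build while the back-end block proxies to Express):

    # one Nginx instance, two sites distinguished by server_name
    server {
        listen 80;
        server_name www.my-frontend.com;
        root /var/www/frontend;              # static React build
    }

    server {
        listen 80;
        server_name www.my-backend.com;
        location / {
            proxy_pass http://127.0.0.1:3000;   # Express API
        }
    }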
... but I'm unsure if this is good practice
Regardless of whether it is good or bad practice, I think it is very common. A few valid reasons to keep them on separate servers would be:
Because you want the back-end API to keep working if the front-end fails.
Because you want to balance network traffic.
Because you want to keep them separated.
Hosting them together is definitely not a bad practice if both applications are highly related.

What makes nginx/apache a web server, HAProxy not?

What makes Nginx/Apache a web server, but not HAProxy?
What functionality does HAProxy lack to be a web server?
HAProxy can listen on port 80 and can speak HTTP but that's not what people mean when they say "web server."
HAProxy is not a web server, because "web server" implies an HTTP endpoint that can serve static content from files and/or dynamic content generated from code. That's not what HAProxy is for.
Technically, there are certain capabilities in HAProxy that can be misused to emulate some capabilities of a web server -- you can serve very small static files from memory buffers and you can generate small dynamic responses using the optional embedded Lua interpreter -- but it is not intended or designed to be used as a web server. It's a proxy server -- emulating a web server toward the client, and emulating a client toward the real back-end web server(s) behind it -- because bidirectional emulation is commonly what proxies do.
With Nginx and Apache, you can specify a root directory from which files are served, and you can specify paths that are to be serviced by code running in languages like Perl, PHP, Python, etc. Not with HAProxy, because, again, that isn't what it's designed to do.
Both Nginx and Apache can also be used as proxy servers, as HAProxy can, but HAProxy is specifically designed and optimized for that primary purpose -- proxying and load balancing across multiple back-ends, selecting a back-end using various rules and algorithms... in essence, HAProxy is an "intermediate router" for HTTP requests, delivering them rather than responding to them. It can also proxy and load balance non-HTTP protocols that rely on TCP.
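To make the contrast concrete, a minimal haproxy.cfg sketch (names and addresses are placeholders) only describes where to send traffic, not what content to serve:

    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend http_in
        bind *:80
        default_backend app_servers

    backend app_servers
        balance roundrobin
        server app1 10.0.0.11:8080 check
        server app2 10.0.0.12:8080 check

There is no document root or script handler anywhere in it; the actual responses come from the back-end web servers.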

Using mod_security, either with Apache 2.4 or with mod_proxy as a reverse proxy

I would like to set up mod_security as a standalone instance protecting Tomcat instances against web application attacks. Would anyone know the pros and cons of installing mod_security as an Apache module versus installing mod_security on a reverse proxy? Has anyone implemented mod_security in either of these ways, and if so, is one preferred over the other?
There's really no difference between your two options. What non-reverse-proxy would you install the module on to protect Tomcat?
The question doesn't really make sense, because in your case the two options amount to the same thing.
If you already have an Apache server, then you install ModSecurity in one of two ways:
In embedded mode, by installing ModSecurity as a module in the existing Apache instance you already have. The advantages are that you won't have to set up a separate Apache instance, and that ModSecurity will have access to the environment Apache runs under (so it can see environment variables, for example, or log to the same log files).
In reverse proxy mode. This involves setting up a separate Apache instance with only ModSecurity on it, and funnelling all requests through it before passing them on to your normal Apache. The advantage here is a dedicated web server just for ModSecurity, so it will not share resources with your existing Apache if that is already resource-hungry. The disadvantage is that it doubles your infrastructure, with all the complications that brings.
Personally I prefer option 1.
However, as you want to set up a dedicated web server in front of Tomcat, the two options are identical for you. The new instance of Apache (or Nginx) that you set up will run ModSecurity in embedded mode and will act as a reverse proxy to your Tomcat server.
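A sketch of what that combined instance's configuration might contain (module paths, rule-set locations and the Tomcat port are placeholders, and the OWASP Core Rule Set is just one common choice):

    # ModSecurity running embedded in the front Apache instance
    LoadModule security2_module  modules/mod_security2.so
    LoadModule proxy_module      modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so

    <IfModule security2_module>
        SecRuleEngine On
        Include /etc/modsecurity/crs-setup.conf      # e.g. OWASP CRS
        Include /etc/modsecurity/rules/*.conf
    </IfModule>

    # ...and the same instance reverse-proxies everything to Tomcat
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/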
Personally, I always think it's best to run a dedicated web server like Apache in front of any app server like Tomcat, especially on a public-facing website. Granted, Tomcat does include a pretty good web server (called Coyote), which may serve most of your web server needs, but a dedicated web server like Apache is more geared towards serving static content and contains other features for performance and security which make it a better endpoint server (including the ability to run ModSecurity, for example!).
And just in case there is any confusion: Apache is actually short for Apache HTTP Server, and is sometimes called Apache httpd after the process that it runs. It is the Apache Software Foundation's most popular piece of software, hence why the name gets shortened, but the foundation actually has lots of other software (including Apache Tomcat, usually shortened to just Tomcat).

How to put up an off-the-shelf https to http gateway?

I have an HTTP server which is on our internal network and accessible only from inside it. I would like to put up another server that listens on an HTTPS port accessible from outside and forwards the requests to that HTTP server (and sends back the responses via HTTPS). I know there are several ways to do this with some programming involved (and I myself made a temporary solution with Tomcat and a very simple servlet I wrote), but is there a way to do the same just by plugging together ready-made parts (like Apache + modules)?
This is the sort of use-case that stunnel is designed for. There is a specific example of using stunnel to wrap an HTTP server.
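A minimal stunnel.conf for that pattern, assuming the internal HTTP server is reachable as internal-server:80 and you already have a certificate at the path shown (both placeholders), might look like:

    ; terminate HTTPS on 443 and forward plain HTTP to the internal server
    cert = /etc/stunnel/stunnel.pem

    [https]
    accept  = 443
    connect = internal-server:80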
You should consider whether this is really a good idea, though. Web applications designed for use inside a corporate firewall are often fairly lax about security. Merely encrypting the connections prevents casual eavesdropping, but does not secure the site. If an attacker finds your outward facing server and starts connecting to it, they can still try to find exploitable flaws in the web service (SQL injection, cross-site scripting, etc).
With Apache, look into mod_proxy.
Apache 2.2 mod_proxy docs
Apache 2.0 mod_proxy docs
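With mod_proxy (plus mod_ssl and mod_proxy_http), a hypothetical virtual host doing the same HTTPS-to-HTTP forwarding could look roughly like this (hostnames, ports and certificate paths are placeholders):

    <VirtualHost *:443>
        ServerName gateway.example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/gateway.crt
        SSLCertificateKeyFile /etc/ssl/private/gateway.key

        # forward everything to the internal HTTP-only server
        ProxyPass        / http://internal-server:80/
        ProxyPassReverse / http://internal-server:80/
    </VirtualHost>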

Glassfish with Apache. Why SSL?

I have been looking around to figure out how to configure Glassfish fronted by Apache, and most of the tutorials using the load balancing plug-in make me enable SSL on Apache. I am trying to understand the connection. I should be able to use non-SSL communication when I don't have a need for SSL.
There are several blog posts showing how you can use Apache in front of Glassfish. There are several options and depending on your needs, different strategies might be the most appropriate.
I've used Apache with mod_jk, which forwards requests to Glassfish - both HTTPS and regular HTTP. There are lots of good references here.
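Roughly, the mod_jk setup looks like the sketch below; the paths, mount point and AJP port are placeholders, and the port must match the jk-enabled listener you configure on the Glassfish side:

    # httpd.conf
    LoadModule jk_module modules/mod_jk.so
    JkWorkersFile /etc/httpd/conf/workers.properties
    JkMount /myapp/* glassfish

    # workers.properties
    worker.list=glassfish
    worker.glassfish.type=ajp13
    worker.glassfish.host=localhost
    worker.glassfish.port=8009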
It's also possible to use other modules in Apache, like mod_proxy, but again your requirements will determine which is most appropriate.
Glassfish also has a pretty good HTTP engine inside it, where you can configure virtual hosts as in Apache. If the load on the Glassfish server isn't too big, you might consider just using Glassfish without anything in front of it.
You can also use the Sun Java System Web Server (SJSWS) instead of Apache. Despite its atrocious name, it is just Sun's web server (free to use). It can be used as a reverse proxy (PDF). The SJSWS/Glassfish combination is presumably tested really well by Sun.