Domain name and SSL for Tomcat web app - Apache

The question I am raising here has been asked a couple of times, and I went through most of those threads, including Stack Overflow posts and other blog posts. The trouble is that I couldn't find anything that fits my requirement, and I'm not going to experiment in our production environment based on loose suggestions.
So the situation is: we have multiple web applications hosted on our Tomcat server, deployed on Amazon AWS. Currently we access them like http://<ip-address>:8080/webapp1. Now,
We have subdomains to point at our web apps, so something like portal.example.com should point to the URL above.
We have a wildcard SSL certificate to set up for the domain and its subdomains.
First I have to sort out the domain pointing, for which I found two separate approaches:
Install Apache and set up a virtual host proxy (https://www.digitalocean.com/community/questions/how-to-tie-domain-name-with-application-running-on-tomcat)
Edit server.xml (How to map a Tomcat 7 webapp to my domain)
Now, my questions:
Can someone please advise which method is best? (If neither is good, I'm open to other options as well.)
Which method makes the SSL implementation easier?
If I choose the Apache virtual host proxy, where should I install the SSL certificate? In Apache or in Tomcat?
If I choose the server.xml approach, where should I install the SSL certificate? In Apache or in Tomcat?
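To make it concrete, this is roughly what I picture for the Apache virtual host proxy approach, assuming the SSL is terminated at Apache. It is only a sketch; the certificate paths and the Tomcat port are placeholders, and mod_ssl, mod_proxy and mod_proxy_http would have to be enabled:

    <VirtualHost *:443>
        ServerName portal.example.com

        # Terminate SSL at Apache with the wildcard certificate (placeholder paths)
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/wildcard.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/wildcard.example.com.key

        # Forward everything for this host to the Tomcat webapp
        ProxyPreserveHost On
        ProxyPass        / http://localhost:8080/webapp1/
        ProxyPassReverse / http://localhost:8080/webapp1/
    </VirtualHost>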

Related

Is Nginx needed if Express is used

I have a Node.js web application with Express running on a DigitalOcean droplet. The Node.js application provides back-end APIs. I have two React front-ends, on different domains, that use those APIs. The front-ends could be hosted on the same server, but my developer tells me I should use another server to host the front-ends, such as Cloudflare.
I have read that Nginx can enable hosting multiple sites on the same server (i.e. host my front-ends on the same server), but I am unsure whether this is good practice, as I may then not be able to use Cloudflare.
In terms of security, could someone tell me if I need Nginx, and what my options are, please?
Thanks
This is a very open-ended question, but I will try to answer it:
In terms of security could someone tell me If I need nginx, and my options please?
You will want Nginx (or Apache) in any scenario: with one server or several, and whether you use Express or not. Express is only an application framework for building routes; you still want a web server in front of it to receive public network requests, which is what Nginx and Apache do. You could avoid Nginx, but then your users would have to make requests directly to the port where Express is listening, for example http://my-site.com:3000/welcome. In terms of security, it is better to hide the port number and use Nginx as a reverse proxy, so that your users only need to go to http://my-site.com/welcome.
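A minimal sketch of that reverse-proxy idea, shown here as an Apache virtual host since the same point applies to Apache; the hostname and port 3000 come from the example above, and it assumes the Express app listens on localhost:

    <VirtualHost *:80>
        ServerName my-site.com

        # Users only ever hit port 80; Apache forwards to the Express port internally
        ProxyPass        / http://127.0.0.1:3000/
        ProxyPassReverse / http://127.0.0.1:3000/
    </VirtualHost>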
my developer tells me I should use another server to host the front-ends, such as cloudflare
Cloudflare does not offer hosting services as far as I know. It does offer a CDN that can serve a few files, but not host a full site; you would need another DigitalOcean instance for that. In a Cloudflare forum post I found: "Cloudflare is not a host. Cloudflare's basic service is a DNS provider, where you simply point to your existing host."
I have read that nginX can enable hosting multiple sites on the same server
Yes, Nginx (and Apache too) can host multiple sites on the same server, with different names or the same one, either as separate domains (www.my-backend.com, www.my-frontend.com) or as subdomains (www.backend.my-site.com, www.my-site.com).
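For illustration only (the hostnames are the example ones above and the paths are placeholders), each site simply gets its own virtual host on the same server, and the back-end one can reuse the reverse-proxy block shown earlier:

    <VirtualHost *:80>
        ServerName www.my-frontend.com
        # Front-end: static build served straight from disk
        DocumentRoot /var/www/my-frontend
    </VirtualHost>

    <VirtualHost *:80>
        ServerName www.my-backend.com
        # Back-end: proxied to the Express app
        ProxyPass        / http://127.0.0.1:3000/
        ProxyPassReverse / http://127.0.0.1:3000/
    </VirtualHost>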
... but unsure if this is good practice
Whether or not it is good practice, it is certainly very common. A few valid reasons to keep them on separate servers would be:
Because you want the back-end API to keep working if the front-end fails.
Because you want to balance network traffic.
Because you want to keep them separated.
Hosting them together is definitely not bad practice if the two applications are closely related.

Same domain name for a website and a web application hosted at two separate geographical locations

I have a website, "http://www.mywebsite.com", which is hosted with a hosting vendor (shared hosting). I now want to add a few applications that are accessible on the same domain, i.e. "http://www.mywebsite.com/mywebapp". I have a mandate not to use a subdomain (so "mywebapp.mywebsite.com" is not an accepted solution).
I am aware that my website is hosted on an Apache server (shared hosting space).
My application is hosted on a completely different machine running Apache Tomcat. I am aware of the mod_jk Apache module for linking Apache with Tomcat, but apart from mod_jk I think I would also need some redirection. I looked at mod_rewrite, but I could not find a way for my website and web applications to co-exist on the same domain name. Can someone help me with the right approach to this redirection?
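For what it's worth, the kind of rule I imagine on the website's Apache, if the shared host even allows it (which I am not sure of), would be a plain reverse proxy with mod_proxy rather than mod_jk; the Tomcat host name and port below are placeholders:

    # On www.mywebsite.com's Apache (needs mod_proxy and mod_proxy_http)
    ProxyPass        /mywebapp http://tomcat-machine.example.org:8080/mywebapp
    ProxyPassReverse /mywebapp http://tomcat-machine.example.org:8080/mywebapp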

forwarding HTTPS from Plesk to AWS EC2

I am quite new to setting up and managing websites, domains and related things.
I purchased a domain (let's say example.de) and registered it on my vServer running Parallels Plesk. As I need secure access, I requested and created an SSL certificate at startssl.com. The application I developed (Spring Boot) runs on an EC2 instance at AWS. The product website runs on an Apache web server, also on an EC2 instance. I need to secure both the app (app.example.de) and the website (example.de) with SSL.
What I want to achieve is a redirect from the domain https://example.de to the EC2 instance. I have already tried several things; here is what I remember from the trial-and-error marathon:
Configuring Plesk frame-forwarding of the traffic on https://example.de to the EC2 IP.
Obviously the browser warns me that the certificate is issued for example.de and not for the forwarded address, and classifies the traffic as insecure. The same happens when accessing it like https://...
I also uploaded the certificate in Plesk, also without success.
Is there a solution for my setup? Or do I need (or is it recommended) to use Amazon Route 53 for this task? It would be nice if someone could guide me and provide some tips, as I am pretty new to these topics.
Thanks
It seems there is no way around AWS Route 53.
I found that there is an extension for Plesk designed to route traffic using Route 53, and even a nice manual article on the Plesk homepage about using an external DNS provider as well as the Route 53 extension. As the extension requires a newer version of Plesk than the one I am using, I wasn't able to install it; I am pretty much bound to this version, so an update was out of the question. I can't say for sure whether the extension would solve my initial problem, but it seems to be a potential solution.
The simplest solution (at least for me):
I ended up moving my domain to AWS: I created a hosted zone, added a record set with the IP of the EC2 instance, and switched the domain to the name servers provided by the hosted zone. Everything is now working like a charm.
Some more background: the product website and the app front-end run inside Apache, where I installed mod_ssl and configured SSL access. The application back-end runs as a Spring Boot app in Tomcat, where I also configured SSL using a TomcatConnectorCustomizer.
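As a rough idea of the Apache side (file names and paths are placeholders, not my exact configuration), the mod_ssl virtual host for the website looks something like the following, while app.example.de keeps its own SSL configuration inside the Spring Boot/Tomcat back end:

    <VirtualHost *:443>
        ServerName example.de
        DocumentRoot /var/www/product-website

        # SSL via mod_ssl with the certificate from StartSSL (placeholder paths)
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/example.de.crt
        SSLCertificateKeyFile /etc/ssl/private/example.de.key
    </VirtualHost>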
This setup works for my scenario

Redirecting web traffic on a server hosting both an Apache and a GlassFish site

I am hosting two websites on my server: one running on Apache and the other on GlassFish. So far I have worked around the problem by making the GlassFish server listen on a port other than 80. The problem is that I think some of my users are behind a firewall blocking those ports, so they can't access the GlassFish site. What approach would you recommend for URL-based redirection? I want to make the GlassFish site a subdomain of the Apache one, with both running on the same IP.
If I have not been sufficiently clear with my question, please let me know
Thank you for your time.
Have you tried modifying the domain or using subdomains? If one application is at, e.g., http://subdomain1.yourdomain.net and the other at http://subdomain2.yourdomain.net, that should do the trick without any problems. Alternatively, use http://yourdomain.net for your main application and http://yourdomain.net/somecontext for the secondary one. That also looks meaningful to the end user.
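If you go the subdomain route with Apache in front, a sketch of the Apache side could look like this (the subdomain name and the GlassFish port are assumptions); GlassFish keeps listening on its own port and only Apache is exposed on port 80:

    <VirtualHost *:80>
        ServerName subdomain2.yourdomain.net
        # Forward this subdomain to GlassFish, which stays on its internal port
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>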
Tick the answer if you got help :)

Why do some setups front-end Glassfish with Apache?

I've been trying to mug up on Glassfish and one thing that keeps coming up is the "how-to" on fronting Glassfish with Apache. Unfortunately, I have yet to find a description of why you would want to do this!
From my experimentation, Glassfish seems like a pretty fully featured web server-type service; but I might be missing a lot. So, is the notion of front-ending Glassfish more of a solution to integrate it with an existing architecture, or does front-ending (in a pure Java environment) provide extra benefits?
There's another valid use case for fronting GlassFish with Apache: Apache functions as a reverse proxy to increase the security of your GlassFish instance. The reverse proxy (RP) is configured to allow only certain URLs to be passed through to the application server. For example, you may have the app contexts /myApp and /myPrivApp deployed in GlassFish. On the RP server, you configure only /myApp to be passed through to GlassFish; anybody requesting /myPrivApp sees a 404 because the request stops at the RP level.
In one of my deployments, I have a bunch of WARs deployed, some for users coming from the internet, some for the intranet only. I have two RPs running, one for internet users and the other for the intranet. The internet RP is configured to pass through only the URLs of approved internet-facing applications, while intranet users get to see everything.
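As a sketch of that whitelisting on the RP (Apache shown here, with the context names from the example above and a placeholder GlassFish address): only /myApp is proxied, so /myPrivApp never reaches the application server:

    # Only the approved context is passed through to the application server
    ProxyPass        /myApp http://glassfish-host:8080/myApp
    ProxyPassReverse /myApp http://glassfish-host:8080/myApp
    # No rule for /myPrivApp, so those requests stop at the proxy with a 404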
Hope that helps.
It is usually done to speed things up. Since Apache is a very fast web server, it is used to deliver static content such as images, CSS files and so on, while GlassFish serves the dynamic content (servlets, JSPs) in this scenario.
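Sketched with Apache's mod_proxy (the paths and port are placeholders): static assets are excluded from the proxy and served from disk, and everything else goes to GlassFish:

    # Serve static assets directly from Apache
    Alias /static/ /var/www/static/
    ProxyPass /static/ !

    # Everything else is handled by GlassFish
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/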
Another reason for using Apache as a front end to GlassFish is the possibility of load balancing across a GlassFish cluster. See http://tiainen.sertik.net/2011/03/load-balancing-with-glassfish-31-and.html for details.
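A minimal sketch of such a load-balancing setup with mod_proxy_balancer (node addresses, ports and the context name are placeholders):

    # Requires mod_proxy, mod_proxy_http and mod_proxy_balancer
    <Proxy balancer://glassfish-cluster>
        BalancerMember http://node1.example.org:28080
        BalancerMember http://node2.example.org:28080
    </Proxy>

    ProxyPass        /myApp balancer://glassfish-cluster/myApp
    ProxyPassReverse /myApp balancer://glassfish-cluster/myApp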
Another reason is that GlassFish cannot (easily) run on port 80 without giving it root privileges.
So, for most users it's easier to run a proxy of some sort (Apache, Nginx, Varnish) in front of GlassFish and have both servers run under a normal user.
You then also get the advantage of your front end's extra configuration options; as others mentioned, caching for example.