When HTTPS is used on OpenShift, is my Tomcat behind an Apache server?
I mean, do my clients connect to my Tomcat directly, or do they connect to an Apache server which then connects to my Tomcat through an AJP connector?
If Apache is the man in the middle, then I do not get my clients' IP addresses directly, only via the X-Forwarded-For HTTP header, and I lose control over certificate verification and trust management. At present I am using MochaHost's servers: HTTPS does not reach my Tomcat but is intercepted by an Apache instance. I hate this.
Previously I used another hosting service where two Apache servers sat in front of my Tomcat server. Even more ridiculous, the two Apache servers and my Tomcat server were on the same machine. A configuration like that only shows that the architects were not able to manage the whole stack properly.
By the way, I am talking about the Bronze/Silver plan. I guess OpenShift is no different, though I have not tried it yet. Does anyone have a clear answer to my question?
HTTPS is supposed to defeat the man in the middle, but if you run a Tomcat server there is no hosting service where you can avoid one. It is not that the technology does not allow it; the people in charge do not really understand the issue, so they are neither able nor willing to provide the right service.
I want to ask: if you use a Tomcat server, is there any hosting provider who does not act as a man in the middle? No. There is none in the world at present (May 2014)!
jack
There is an Apache reverse proxy in front of your Tomcat instance that does SSL termination. The Apache instance runs at the node level, while Tomcat runs on your gear.
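For reference, the node-level configuration is roughly of the shape below. This is only a generic, hand-written sketch of SSL termination in front of Tomcat, not OpenShift's actual vhost; the hostname, certificate paths, and backend port are placeholders. mod_proxy adds the client address to X-Forwarded-For on its own, which is how you can still recover the client IP behind the proxy.

    <VirtualHost *:443>
        ServerName myapp-mydomain.example.com

        # TLS is terminated here, at the node-level Apache
        SSLEngine on
        SSLCertificateFile    /etc/pki/tls/certs/myapp.crt
        SSLCertificateKeyFile /etc/pki/tls/private/myapp.key

        # Everything is forwarded to the Tomcat on the gear over plain HTTP
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/

        # Tell the backend the original request was HTTPS;
        # mod_proxy appends the client IP to X-Forwarded-For automatically
        RequestHeader set X-Forwarded-Proto "https"
    </VirtualHost>

On the Tomcat side, RemoteIpValve (or RemoteIpFilter) can map X-Forwarded-For back into request.getRemoteAddr(), so application code still sees the real client address.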
Related
We have an Oracle application (Agile PLM) deployed in a clustered environment. We have one admin node and two managed nodes supporting our application, where the admin node and one managed node are on the same server. We also have a load balancer which distributes traffic across the cluster.
We want to configure SSL so that the application URL is accessible over HTTPS only. We have already configured SSL at the load-balancer level (by installing security certificates on the WebLogic admin server), but we want to know whether we also have to configure SSL on the managed servers, or whether putting the load balancer on HTTPS is sufficient.
All users access the application through the load-balancer URL only, but since I am on the development team I am aware that we can also connect to the application via the managed-server URLs, which still run on HTTP. Is it a must to bring the managed servers onto HTTPS as well, or is that just good practice but not strictly necessary?
It's not necessary, though probably a good practice.
I believe I have read in Oracle's installation guide that the recommended way is HTTP on the managed servers and terminating SSL on the load balancer. This may have changed.
For what it's worth, I leave HTTP on the managed servers.
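Your front end is a WebLogic/hardware load balancer rather than Apache, but the pattern for making the site reachable over HTTPS only is the same at whatever layer terminates SSL: serve HTTPS there and redirect plain HTTP to it, while the managed servers stay on HTTP and remain reachable only from the internal network. As a generic, Apache-style illustration (plm.example.com is a placeholder, not your actual URL):

    # Redirect all plain-HTTP traffic to HTTPS at the terminating layer
    <VirtualHost *:80>
        ServerName plm.example.com
        Redirect permanent / https://plm.example.com/
    </VirtualHost>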
I have two projects running on WildFly 8, two SSL certificates (one for each project), and a single IP.
I gathered that I am supposed to have one IP per SSL certificate.
But I needed to use these two SSL certificates on one IP. I couldn't find a way to do it with WildFly, but there was a way with the Apache HTTP Server, so I installed Apache in front of WildFly.
Apache listens on the HTTPS port (443) and forwards requests to WildFly's HTTP port (I used 8080). It works without any problem.
What I wonder is:
1. Does Apache decrypt the request and forward it to WildFly?
2. Is this the correct way to do it, or did I just stumble on something that happens to work?
3. Does this method create a security hole?
I googled around, but I could not find satisfying answers.
Thanks for any replies.
For this answer, I'm supposing that by "redirecting" you mean "proxying": Apache receives the request, proxies it to WildFly, receives an answer from WildFly, and sends the answer to the client.
If you mean something else, then the simple answer is: it is wrong [1].
Does Apache decrypt the request and forward it to WildFly?
Yes. Apache receives and sends encrypted data to/from the client. Its communication with WildFly is plaintext.
Is this the correct way to do it, or did I just stumble on something that happens to work?
That's how it's usually done, yes. In other words: a load balancer and/or a proxy in front of WildFly (Apache in your case), with WildFly itself not reachable directly from the public internet.
Does this method create a security hole?
It does, in the sense that everything in security is a "compromise". In this case you are trusting your internal network in exchange for a more practical, manageable architecture. If you do not trust your internal network, you should look for another solution. In the general case the price seems fair to me, as you'll "only" be open to a man-in-the-middle between your Apache and your WildFly; if you trust your internal network, you can trust that there won't be a MITM there.
Edit
[1] - As with everything else in life, there's no absolute truth. Basically, there are three techniques that can be used in a scenario like this: pass-through, edge, and re-encryption.
Pass-through is a "dumb" pipe, where the proxy knows nothing about TLS and WildFly handles the secure communication with the client itself. I'm not sure Apache can do this, but it can be done with HAProxy in TCP mode;
Edge (or offloading) is the situation I described above: the client talks TLS with Apache, and Apache talks plaintext with WildFly (see the sketch below);
Re-encryption is like edge, except that the communication between Apache and WildFly is also TLS, using a different certificate.
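To tie this back to your original two-certificates-on-one-IP problem: with the edge approach, Apache can serve both certificates on a single IP through SNI by defining two name-based HTTPS virtual hosts, each proxying to WildFly over plain HTTP. A minimal sketch; the hostnames, certificate paths, and the assumption that both apps live on WildFly port 8080 under /app1 and /app2 contexts are placeholders to adjust:

    # First site, first certificate
    <VirtualHost *:443>
        ServerName app1.example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/app1.crt
        SSLCertificateKeyFile /etc/ssl/private/app1.key
        ProxyPass        / http://127.0.0.1:8080/app1/
        ProxyPassReverse / http://127.0.0.1:8080/app1/
    </VirtualHost>

    # Second site, second certificate, same IP (selected via SNI)
    <VirtualHost *:443>
        ServerName app2.example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/app2.crt
        SSLCertificateKeyFile /etc/ssl/private/app2.key
        ProxyPass        / http://127.0.0.1:8080/app2/
        ProxyPassReverse / http://127.0.0.1:8080/app2/
    </VirtualHost>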
In my LAN I have two web servers, an Apache server on Linux (Debian) and a Microsoft web server (IIS 7).
Apache hosts three sites, apache01.com, apache02.com, and apache03.com, and IIS 7 hosts microsoft01.com and microsoft02.com.
My question is:
To reach any of these domains from outside the LAN, which is better: NATing port 80 to IIS 7 or to Apache?
And then, how do I relay the domains that Apache/IIS 7 doesn't manage over to IIS 7/Apache?
I've never done this before, so I'm asking because there could be a lot of things I'm not seeing.
I don't need a full procedure, just some guidelines so I can keep researching.
Thanks
The solution, in the end: the key is the IIS URL Rewrite Module and a reverse proxy.
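For the other direction the question mentions, NATing port 80 to the Apache box instead and having it hand off the IIS domains, the Apache-side equivalent would be name-based virtual hosts that reverse-proxy the Microsoft sites to the IIS machine. A rough sketch, with 192.168.1.20 standing in for the internal IIS address and the DocumentRoot path made up:

    # Site served by Apache itself
    <VirtualHost *:80>
        ServerName apache01.com
        DocumentRoot /var/www/apache01
    </VirtualHost>

    # Site that belongs to IIS: reverse-proxy it to the IIS box
    <VirtualHost *:80>
        ServerName microsoft01.com
        ProxyPreserveHost On
        ProxyPass        / http://192.168.1.20/
        ProxyPassReverse / http://192.168.1.20/
    </VirtualHost>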
Can you have two separate Apache servers running in parallel on the same system, as long as they use different ports?
I have a system I need to install JIRA on, but the system is already in use and running an Apache server for a separate project. The JIRA installer comes with a pre-configured Apache Tomcat server. If I just installed JIRA, would I run into a problem with the pre-existing Apache server?
If you're asking about running two Apache Tomcat instances, then this is not a problem. Moreover, you can share the same CATALINA_HOME between many separate Tomcat instances, each with its own CATALINA_BASE. I often run a separate Tomcat instance per application on production servers. See this init script for a hint about the parameters.
But if you're asking about running the Apache HTTP Server and Apache Tomcat on the same server, then it's a little trickier. A commonly used approach is to put a web server (Apache HTTP Server, nginx, …) in front of Tomcat as a reverse proxy; then many applications can run behind the same port and IP address. In the case of the Apache HTTP Server, see mod_proxy_ajp and the sketch below.
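As a rough example of the mod_proxy_ajp approach, assuming Tomcat's AJP connector is enabled on its default port 8009 and with a placeholder hostname:

    # Forward all requests for this vhost to Tomcat's AJP connector
    <VirtualHost *:80>
        ServerName jira.example.com
        ProxyPass        / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/
    </VirtualHost>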
Apache httpd and Tomcat are two different servers. Also, JIRA doesn't run on port 80 by default, so in this case there will be no conflict over port numbers. If you want to expose JIRA on port 80, you can use mod_proxy in Apache httpd to relay requests to the port JIRA actually runs on, so that it is transparent to the user.
So basically: yes, you can run both Apache httpd and Apache Tomcat on the same machine as long as they don't use the same port.
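The plain-HTTP variant of the AJP sketch above would look roughly like this, assuming JIRA's Tomcat listens on its default port 8080 and is deployed at the root context (adjust the port and context path to your installation):

    # Expose JIRA, running on Tomcat port 8080, on port 80 via mod_proxy
    <VirtualHost *:80>
        ServerName jira.example.com
        ProxyPreserveHost On
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>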
This is my problem:
I have a JBoss server (running an existing app) and an Apache Tomcat server (running an app I created) on the same machine, on different ports.
I have two subdomain names which I have pointed at the server's IP.
What I need to do is bind each subdomain name to the IP, but on a different port.
I saw an easy way to do this with XAMPP and Apache by editing httpd.conf, but I can't find any similar way to do it with Apache Tomcat or JBoss.
Does anyone have any ideas about this?
I would rather have a solution to the question above, but the question below can be accepted as a backup:
Since I could not figure that out, I had to at least have a solution for one of the applications (the one running on JBoss).
So I configured JBoss to use port 80 instead of 8080.
Now when I go to the subdomains, I get the JBoss welcome page.
How can I change the default JBoss "app" to my app?
Thanks in advance
There's no way to get this:
sub1.domain.com (192.168.0.1) on port 80 --> JBoss app
sub2.domain.com (192.168.0.1) on port 80 --> Tomcat app
without either adding to or subtracting from your software stack.
Your options are:
use jboss to run your tomcat app
add a reverse-proxy
use an HTTP-aware layer 7 firewall
The first is probably easiest; JBoss deploys web applications using Tomcat (or, in more recent versions, a fork of Tomcat called JBoss Web), so you can probably just drop your .war into the deploy directory.
If that's not possible for some reason, I'd use a reverse proxy. Apache httpd with mod_proxy or mod_jk is fairly common (a mod_proxy sketch follows below). If you go the mod_jk route and you have non-trivial load, I'd review this.
The last I'm not familiar with. I imagine that the spendy Cisco firewalls can do this, and I'm sure it's possible to hack iptables to do it too, but my google-fu failed to find specifics.
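For the reverse-proxy option, the idea is two name-based virtual hosts listening on port 80, each proxying to the backend's real port. A rough mod_proxy sketch, assuming JBoss stays on 8080 and Tomcat listens on 8081 (swap in your actual ports and domains):

    # sub1 --> the JBoss app
    <VirtualHost *:80>
        ServerName sub1.domain.com
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>

    # sub2 --> the Tomcat app
    <VirtualHost *:80>
        ServerName sub2.domain.com
        ProxyPreserveHost On
        ProxyPass        / http://127.0.0.1:8081/
        ProxyPassReverse / http://127.0.0.1:8081/
    </VirtualHost>

With this in place, both subdomains stay on port 80 from the client's point of view, and neither JBoss nor Tomcat has to bind to port 80 itself.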