I'm looking to add a load balancer in front of a service that might be listening on any port. I've looked at a few options (Apache, HAProxy), but they all seem to work with specific ports -- e.g.
example.com:80
server1:80
What I need:
example.com:N
server1:N
server2:N
Where N can be any port. In other words, basically round-robin DNS, but with failover support.
Any ideas on how this can be done with mod_proxy_balancer, HAProxy, or any other freely available load balancer? Thanks.
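Edit: to illustrate, something along the lines of this HAProxy TCP-mode sketch is the behavior I'm after -- one listen section per port N, round robin with health checks for failover (hostnames and port 8000 are just placeholders):

    # One listen section per port N that needs balancing (8000 is a placeholder)
    listen service_8000
        bind *:8000
        mode tcp
        balance roundrobin
        server server1 server1:8000 check
        server server2 server2:8000 check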
The Load Balancer is configured to redirect TCP requests on the frontend port 80 to backend port 8080.
That worked fine until I removed the "Allow-Port-8080" rule from the Network Security Groups attached to the pool VMs.
My understanding is that the Load Balancer is always allowed by default due to the AllowAzureLoadBalancerInBound security rule, which I did not touch. Isn't that the case?
Moreover, port 8080 on the pool VMs is reachable from hosts in the same virtual network, so there is no issue with a local firewall (which is not running on CentOS Azure hosts by default, by the way).
So, to sum up: why should I have to add an inbound security rule to let the Load Balancer redirect requests to a particular port?
After considering the issue a bit more, I've realized that the AllowAzureLoadBalancerInBound security rule only applies to traffic originated by the Load Balancer itself - health probes and the like.
All load-balancer-redirected traffic is evaluated against the general security rules, hence we must set up the security rules accordingly.
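For reference, re-creating the dropped rule with the Azure CLI looks something like this (resource group, NSG name, and priority are placeholders from my setup):

    # Re-create the inbound rule so load-balanced traffic can reach port 8080
    az network nsg rule create \
      --resource-group my-rg \
      --nsg-name pool-vm-nsg \
      --name Allow-Port-8080 \
      --priority 310 \
      --direction Inbound \
      --access Allow \
      --protocol Tcp \
      --destination-port-ranges 8080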
I just got a Raspberry Pi for Christmas, and I have installed Apache, PHP, and all the required stuff to host my website. I want to use my Raspberry Pi as a web server for my website, so I obviously need to port forward! Apache is running on port 80; how safe is it to forward port 80? I want to know: if I port forward, is my whole Wi-Fi network now under threat from hackers? If I am hacked, what can they compromise? And finally, I heard about changing the Apache port to deter malicious port-scanning bots. Can I just change my port from 80 to anything, or are there only certain ports?
Thanks, Jamie
The thing is: if you want people's web browsers to access your web app, it needs to be on the standard ports (80 for HTTP, 443 for HTTPS). You'll need to redirect connections on ports 80 and/or 443 to your Raspberry Pi's local IP in the configuration of your router.
If you want to isolate your Raspberry Pi from the rest of your local network, and your router allows it, consider putting it in a DMZ.
Even if you were later to redirect to a custom port for Apache to listen on, it wouldn't change much in such a case. If you want to secure your server, there are several other options to consider (fail2ban, firewall rules, etc.).
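For example, a minimal ufw setup on the Pi might look like this (a sketch only -- remember to allow SSH first if you use it):

    # Default-deny inbound, then open just the web ports
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable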
Last: from personal experience, Raspberry Pis make good web servers to experiment with. Have a lot of nerdy fun!
I have already installed an Apache HTTP server on my Red Hat system, and now I need to install a Bitnami application package which contains another Apache. So I am wondering: how do I keep them from interfering with each other?
I guess I need to configure different ports for the two HTTP servers. But if one uses 8080 and the other 9090, will we visit http://[ServerName]:8080/something.html and http://[ServerName]:9090/something.html? That seems quite inconvenient. Am I wrong, or is there a better way?
My advice would be to do something like this.
Have one Apache instance listen on port 80 and the other on port 8080, for example. The instance listening on port 80 can act as a proxy to the other Apache (port 8080) using the ProxyPass and ProxyPassReverse directives.
https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
You would need to define prefixes or virtual hosts and add ProxyPass directives inside them.
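A minimal sketch of what that could look like (the /app prefix and backend port are assumptions, and mod_proxy plus mod_proxy_http must be enabled):

    <VirtualHost *:80>
        ServerName example.com
        # Forward a hypothetical /app prefix to the second Apache on port 8080
        ProxyPass        /app http://localhost:8080/app
        ProxyPassReverse /app http://localhost:8080/app
    </VirtualHost>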
I don't know what kind of users those applications are aimed at, but the usual end user is not used to entering ports when browsing the web.
If you'd like to use the ports, go for it, but I would recommend using name-based virtual hosts so you can use a different domain or subdomain for each application.
In addition to the example provided by the docs (where they just point to different folders), this DigitalOcean page documents how to redirect to different URLs.
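As a sketch, assuming two subdomains that both resolve to the server, the port-80 Apache could use name-based virtual hosts to proxy each domain to its application (hostnames and ports are placeholders):

    <VirtualHost *:80>
        ServerName app1.example.com
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName app2.example.com
        ProxyPass        / http://localhost:9090/
        ProxyPassReverse / http://localhost:9090/
    </VirtualHost>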
I completely agree with EndermanAPM that the usual end user is not used to entering ports when browsing the web. Therefore, I would only allow port 80 to be accessed by end users.
In addition to the current solutions, I see another one:
avoid messing with the settings of the Apache servers, so as not to end up with malfunctioning websites
leave the Apache servers listening on their designated ports (8080 and 9090, respectively)
install a dedicated proxy in front of the Apache servers. The proxy would listen on port 80 and define routing rules that parse each request and redirect it to the proper Apache server (see the sketch below).
I recommend HAProxy. It is a very fast and reliable HTTP and TCP proxy, and I've been using it in production for years in front of application servers, web servers, and even database servers. Once you get used to its syntax, it is pretty easy to use.
I am aware that introducing a new component into the equation may add another source of potential issues, but I think the architecture is cleaner. Besides, the two Apache servers will not disturb each other, as you requested: you can shut down either one and the other will keep working properly.
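A minimal HAProxy sketch of this architecture (hostnames are placeholders; routing is by the Host header):

    frontend http-in
        bind *:80
        mode http
        acl is_app1 hdr(host) -i app1.example.com
        acl is_app2 hdr(host) -i app2.example.com
        use_backend apache_8080 if is_app1
        use_backend apache_9090 if is_app2

    backend apache_8080
        mode http
        server apache1 127.0.0.1:8080 check

    backend apache_9090
        mode http
        server apache2 127.0.0.1:9090 check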
I am trying to use a load balancer to direct traffic to a container backend. The service in the containers serves web traffic on port 80. My health checks are all passing. If I SSH into the Kubernetes host for the containers, I can curl each container and get correct responses over port 80. When I try to access them through the load balancer's external IP, however, I receive a 502 response. I have firewall rules allowing traffic from 130.211.0.0/22 on tcp:1-5000 and on the NodePort port. I've also tried adding firewall rules allowing 0.0.0.0/0 on ports 80 and 443 to those nodes.
When in the Kubernetes host, capturing with tcpdump, I see the health check requests to my containers, but no traffic is coming through when I make an external request.
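For reference, the capture I ran on the node looked like this (the interface name is from my setup):

    # Watch for inbound HTTP traffic on the node
    sudo tcpdump -i eth0 -nn 'tcp port 80'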
I have an identical configuration that points to a single Compute Engine VM that works perfectly. This leads me to believe that the issue might be in the container setup rather than the load balancer.
Does anyone have any advice on resolving this issue?
I was able to resolve the problem by changing the Named Port that the Load Balancer was connecting to. By default, the Load Balancer connected to the Named Port "http", which pointed to port 80. It was my assumption (always a bad thing) that this matched, since my application serves on port 80. Not so: since I'd exposed the containers through a NodePort, the service was assigned another port, and that is the port my health check pointed to. By going into "Compute Engine -> Instance groups", selecting the group, and then "Edit Group", I was able to change the Named Port "http" to match my NodePort number. Once I did that, traffic started flowing.
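The same fix can be applied from the command line; something along these lines worked for me (the instance group, zone, and NodePort value are examples from my setup):

    # Point the "http" named port at the service's NodePort (30080 is an example)
    gcloud compute instance-groups set-named-ports my-instance-group \
        --named-ports=http:30080 \
        --zone=us-central1-a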
I have Tomcat installed and running on an Ubuntu 12.04 LTS system, using port 443 for HTTPS requests (GeoTrust certificate installed).
On the same machine, apache2 responds to requests on port 80.
Now I have been given the task of securing the webapps (PHP) running on apache2 with SSL as well, but with a different server certificate.
Is this possible at all? My assumption would be "no", because I cannot have two servers listening on the same port, but I'm not too sure and haven't found any helpful information about this so far.
Any help would be highly appreciated.
These days, you'll still have difficulty serving more than one certificate on a single interface/port combination (e.g. 0.0.0.0:443). If you want to use two separate ports for HTTPS, it's no problem. If you want to bind to different interfaces (e.g. 1.2.3.4:443 and 4.3.2.1:443), it's no problem either. If you want both on the same interface/port, you'll have to rely on Server Name Indication (SNI), which may or may not be supported by your web server version and/or clients.
If you want different certificates, you probably want different hostnames, too, so maybe you can get a second interface configured on the machine. Note that you don't need to have multiple NICs on the machine just to enable a different interface: your OS should be able to create another interface with a different IP address and still share the NIC. Then you just set DNS to point each hostname to a different IP address and make sure you bind each SSL VirtualHost to the proper IP address (instead of using 0.0.0.0 or * for the hostname).
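For example, adding a second address to the existing NIC is a one-liner on Linux (the address and interface name are placeholders); each SSL VirtualHost is then bound to its own IP:

    # Create an additional IP on the same NIC
    sudo ip addr add 192.0.2.11/24 dev eth0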
Honestly, SNI is the easiest thing to do: just use VirtualHosts with SSL enabled (with different certs) in each one the way you'd "expect" it to work and see if the server starts up without complaint. If so, you'll need to test your clients to see if it's going to work for your audience. For the SNI scenario, I am assuming that Apache httpd would handle all of the SSL traffic and that you'd use something like mod_proxy_* or mod_jk to proxy to Tomcat.
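A minimal sketch of the SNI variant (hostnames, certificate paths, and the Tomcat port are placeholders; mod_ssl and mod_proxy_http assumed):

    <VirtualHost *:443>
        ServerName site1.example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/site1.crt
        SSLCertificateKeyFile /etc/ssl/private/site1.key
    </VirtualHost>

    <VirtualHost *:443>
        ServerName site2.example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/site2.crt
        SSLCertificateKeyFile /etc/ssl/private/site2.key
        # Hand dynamic content off to Tomcat
        ProxyPass        / http://localhost:8080/
        ProxyPassReverse / http://localhost:8080/
    </VirtualHost>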
For the split-IP scenarios, you can do whatever you want: terminate SSL within Tomcat or use httpd for everything and proxy for dynamic content to Tomcat.