Is it possible to have multiple Apache server instances running when using "host" networking, just as it is possible with "bridged" networking and port mapping?
Or do the other instances running alongside the "host"-networking instance have to be "bridged" in order to map a port other than 80, which might already be in use?
Anything that runs using host networking, well, uses the host networking. There is no isolation between your container, other host-networked containers, and processes running directly on the host. If you are running Apache on your host, and two --net host Apache containers, and they all try to bind to 0.0.0.0 port 80, they will conflict. You need to resolve this using application-specific configuration; there is no concept of port mapping in host networking mode.
Particularly for straightforward HTTP/TCP services, host networking is almost never necessary. If you use standard bridged networking then applications in containers won’t conflict with each other or host processes. You can remap the ports to whatever is convenient for you, without worrying about reconfiguring the application.
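As a minimal sketch of the bridged approach (assuming Docker Compose and the stock httpd image; names and host ports are arbitrary):

```yaml
# docker-compose.yml — two independent Apache containers.
# Each binds port 80 inside its own network namespace,
# published on different host ports so they never conflict.
services:
  web1:
    image: httpd
    ports:
      - "8080:80"   # host port 8080 -> container port 80
  web2:
    image: httpd
    ports:
      - "8081:80"   # host port 8081 -> container port 80
```

Both containers believe they own port 80; only the published host ports differ, and no Apache configuration changes are needed.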
Related
I'm looking to set up a few virtual hosts for different domains for a few friends, and want to know whether one virtual host can access files from another host (via PHP or any other means), or whether each is totally isolated, so any scripts they run would only affect their own area.
An Apache "virtual host" is just a mapping of a hostname (or ip address or port) to a particular set of configuration directives. There is no "containment" or isolation implied by this; everything is still running on the same host.
If you want to actually isolate applications, consider investigating container technology like Docker (or a virtual machine solution), with a front-end proxy directing traffic as necessary to the appropriate backend.
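A sketch of the front-end proxy part, with each friend's site running in its own container (hostnames and backend ports here are placeholders; requires mod_proxy and mod_proxy_http):

```apache
# One virtual host per friend, each proxying to a separate
# containerized backend. Scripts in one container cannot
# read files belonging to the other.
<VirtualHost *:80>
    ServerName alice.example.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8001/
    ProxyPassReverse / http://127.0.0.1:8001/
</VirtualHost>

<VirtualHost *:80>
    ServerName bob.example.com
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:8002/
    ProxyPassReverse / http://127.0.0.1:8002/
</VirtualHost>
```

Here the isolation comes from the containers, not from Apache; the proxy only routes requests by hostname.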
I'm new to Docker, so apologies if this has already been answered; I looked but didn't really know how to search for it, so I thought I'd ask. If it's already answered, at least someone who knows how this works in Docker terms can point me to it.
So here is what I want to do.
Subdomain x.x.com (IP A)
Container A
Container B
Container C -webserver
Subdomain y.x.com (IP B, or it could even be A; I don't know what's best)
Container D (same as container A but different user)
Container E (same as container B but different user)
Container F -webserver (same as container C but different user)
And here are my questions
For subdomain y.x.com should I use the same IP or a different one?
How can I point these subdomains to the specific containers so that, if you have a container at y.x.com:8000, you can't reach the container at x.x.com:8001 simply by requesting y.x.com:8001?
How can I make sure that both webservers are accessible through the different subdomains (assuming they both run on port 80)?
I'm not 100% sure I've understood the way networks work when using Docker, so any pointers would be really helpful. Should I use --link? Should I use --net=bridge? Is there any simpler way to do any of that? What's the best way?
Thank you in advance
First, it is important to clarify what you are trying to configure. Are you configuring an Apache server as the frontend to the two sub-domains? Are you running Apache in a container? What do you have in containers A, B, D, and E? Are they providing support services to the web servers (e.g., a database)?
Independently of these clarifications, the most important thing you need to understand about Docker networking is that containers, by default, receive an IP belonging to a "virtual network" that exists only on the host on which they run. Because of that, they cannot be accessed from the outside world (even though they can access the outside world by using the host as a gateway).
In this case, the most straightforward way to access containers from the "outside world" is to use port mapping, in which you map a port from your physical host to a container port.
For example, let's say your host has IP 10.0.0.1, and your container runs a web server on port 80. In order to access this container, the first thing you need to do is to start the container and map its port 80 to some port in the physical host. This will look like:
docker run -d -p 8000:80 <image> <command>
where -p is the relevant option that you use to map ports (in this case, you are mapping port 8000 in the physical host to port 80 in the container). Therefore, to access the container web server, you will need to use the host IP with the mapped port (10.0.0.1:8000) - and the request will be redirected to port 80 of the container.
So, assuming you are running all containers on the same host, you could map each subdomain to the same IP but different ports, and map each of these ports to the port 80 of containers C and F.
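A sketch of that layout (image names are placeholders; DNS for both subdomains points at the host's IP):

```yaml
# Containers C and F each listen on port 80 internally,
# published on different host ports:
#   x.x.com:8000 -> container C port 80
#   y.x.com:8001 -> container F port 80
services:
  webserver-c:
    image: my-webserver    # hypothetical image for container C
    ports:
      - "8000:80"
  webserver-f:
    image: my-webserver    # hypothetical image for container F
    ports:
      - "8001:80"
```

If you want both subdomains served on port 80 itself, you would additionally run a reverse proxy on the host's port 80 that routes requests by hostname to 8000 or 8001.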
Having said all of this, recent Docker versions have been adding many new ways of configuring the network, but I feel it is really important to understand the basic behaviour before moving to more complicated scenarios.
Have a look at the basic configuration instructions here:
https://docs.docker.com/engine/userguide/containers/networkingcontainers/
I have a Google Cloud Container Engine cluster with 2 Pods, master and slave. Each of them runs RabbitMQ instance, that supposed to be joined into one cluster.
Ports exposed from the containers aren't reachable from other machines; they can be accessed only through a Service. That's not a problem: I could establish a service for each instance (one-to-one, service-to-pod) and point each Pod at the opposite service's IP.
The problem is that RabbitMQ uses more than one port for communication. That means the service IP should expose all of these ports from the underlying Pod. But I cannot specify a list of shared ports for a Service, and if I create a new service for each port, each of them will have its own IP.
Is there any way to expose a list of ports from the same container/Pod on the same internal IP address using a Container Engine cluster? Maybe some special routing configuration?
Your question is similar to this question, and unfortunately has the same answer: Kubernetes / Google Container Engine does not currently have a way to expose a range of ports for a service. There is an open issue on GitHub to address this use case.
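That said, while a *range* is not supported, a single Service can list several discrete named ports, which may be enough when the RabbitMQ ports are known in advance. A sketch (the selector label is an assumption; the ports are RabbitMQ's usual defaults):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-master
spec:
  selector:
    app: rabbitmq-master    # assumed pod label
  ports:
    - name: amqp            # client connections
      port: 5672
    - name: epmd            # Erlang port mapper
      port: 4369
    - name: clustering      # inter-node communication
      port: 25672
```

All three ports are then reachable on the one Service IP, which avoids the one-service-per-port problem for fixed port sets.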
I'm trying to workaround a DHCP issue by configuring my guest VM to use DHCP (to avoid having to configure it manually with a static IP) but defining a static IP in the XML.
This would enable setting an IP upon creation without requiring configuration of the virtual machine's operating system with a static IP (making it sort of "independent").
I should point out:
Guests are Windows/Linux mixed
Must use a bridge setup (not NAT)
Is this a reasonable solution? Any recommendations for the actual XML markup of the guest?
Static IP configuration (as opposed to DHCP) is not a libvirt feature but part of the guest OS configuration; refer to this mailing list for an example.
What you can do instead is run a custom DHCP server that listens on your bridge network (rather than the default NAT network) and assigns specific IPs only to specific MAC addresses. This is very easy to set up with dnsmasq.
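For example, a minimal dnsmasq configuration along these lines (assuming your bridge is br0; the MAC addresses and IPs are placeholders):

```
# /etc/dnsmasq.d/static-guests.conf
interface=br0                              # listen only on the bridge
dhcp-range=192.168.1.100,192.168.1.200,12h # pool for everything else
# Pin each guest's MAC address to a fixed IP:
dhcp-host=52:54:00:aa:bb:01,192.168.1.101
dhcp-host=52:54:00:aa:bb:02,192.168.1.102
```

The MAC to match is the one you set in the guest's XML (the `<mac address='...'/>` element inside `<interface>`), so the IP is effectively fixed at creation time without touching the guest OS.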
If you want to avoid any DHCP broadcast on your bridge network entirely, think about bootstrap processes inside your guest OS instead. A config drive is a good choice: you create a disk file and attach it to the VM, and the cloud-init daemon in the guest OS picks it up and applies the network configuration. But that is overkill if all you want is static IPs.
I created a WebSockets app to provide communication between connected clients, but I'm concerned that corporate firewalls and ISP rules might block port 8080, which it uses. The usual HTTP port 80 (which practically no one blocks) is already used by Apache on that server to serve the rest of the app (a classic web app running on PHP).
What are my options there? Are my concerns misplaced?
One option is to set up an Apache reverse proxy to make your app available via port 80. See (for example) Running a Reverse Proxy in Apache.
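For example, with mod_proxy and mod_proxy_wstunnel enabled, something like the following (the /ws path is an assumption) lets clients connect on port 80 while Apache forwards WebSocket traffic to your app on 8080:

```apache
# Requires: a2enmod proxy proxy_http proxy_wstunnel
<VirtualHost *:80>
    ServerName example.com
    # Forward WebSocket connections on /ws to the app on 8080
    ProxyPass        /ws ws://127.0.0.1:8080/
    ProxyPassReverse /ws ws://127.0.0.1:8080/
    # Everything else is still served by Apache/PHP as before
</VirtualHost>
```

Clients would then connect to ws://example.com/ws, so only port 80 needs to be reachable through the firewall.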