Docker: Direct subdomains to specific containers - apache

I'm new to Docker, so apologies if this has already been answered; I looked but didn't really know how to search for it, so I thought I'd ask. If it has already been answered, hopefully someone who knows how this works in Docker terms can point me to it.
So here is what I want to do.
Subdomain x.x.com (IP A)
Container A
Container B
Container C -webserver
Subdomain y.x.com (IP B, or it could even be A, I don't know what's best)
Container D (same as container A but different user)
Container E (same as container B but different user)
Container F -webserver (same as container C but different user)
And here are my questions
For subdomain y.x.com should I use the same IP or a different one?
How can I point these subdomains to the specific containers, so that if a container is reachable at y.x.com:8000, you can't reach the container at x.x.com:8001 by simply going to y.x.com:8001?
How can I make sure that both webservers are accessible through the different subdomains (assuming they both run on port 80)?
I'm not 100% sure I've understood the way networks work when using Docker, so any pointers would be really helpful. Should I use --link? Should I use --net=bridge? Is there a simpler way to do any of this? What's the best approach?
Thank you in advance

First, it is important to clarify what you are trying to configure. Are you configuring an Apache server as the frontend to the two sub-domains? Are you running Apache in a container? What do you have in containers A, B, D, and E? Are they providing support services to the web servers (e.g., a database)?
Independently of these clarifications, the most important thing you need to understand about Docker networking is that containers, by default, receive an IP belonging to a 'virtual network' that exists only in the host in which they run. Because of that, they cannot be accessed from the "outside world" (even though they can access the outside world by using the host as a gateway).
In this case, the most straightforward way to access containers from the "outside world" is to use port mapping, in which you map a port from your physical host to a container port.
For example, let's say your host has IP 10.0.0.1, and your container runs a web server on port 80. In order to access this container, the first thing you need to do is to start the container and map its port 80 to some port in the physical host. This will look like:
docker run -d -p 8000:80 <image> <command>
where -p is the relevant option that you use to map ports (in this case, you are mapping port 8000 in the physical host to port 80 in the container). Therefore, to access the container web server, you will need to use the host IP with the mapped port (10.0.0.1:8000) - and the request will be redirected to port 80 of the container.
So, assuming you are running all containers on the same host, you could point both subdomains to the same IP but use different ports, mapping each of these host ports to port 80 of containers C and F.
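For instance, a minimal sketch of that idea (container names and host ports below are hypothetical, adjust to your setup):

docker run -d --name web-x -p 8000:80 <image-for-container-C> <command>
docker run -d --name web-y -p 8001:80 <image-for-container-F> <command>

x.x.com would then be reached at 10.0.0.1:8000 and y.x.com at 10.0.0.1:8001. Each host port forwards only to its own container's port 80; note that with plain port mapping the hostname itself is not checked, so whichever port the visitor uses determines which container answers.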
Having said all of this, recent Docker versions have been adding many new ways of configuring the network, but I feel it is really important to understand the basic behaviour before moving to more complicated scenarios.
Have a look at the basic configuration instructions here:
https://docs.docker.com/engine/userguide/containers/networkingcontainers/

Related

Docker swarm - selenium VNC port - how to make it distinct?

I'm coming from a VM background, and with each VM having a different IP there's no issue connecting to a specific node in a group on a VNC port.
With containers, looking at https://github.com/SeleniumHQ/docker-selenium/blob/master/README.md , "Version 3 with Swarm support",
I can see that I can publish a port for a service corresponding to a specific container image, but I think that would be a single value for a number of replicas.
So, if I use, say, 20 containers and each container with the "debug" suffix exposes VNC on port 5900, how can I access the specific container I want, which I assume is identified in the output of a Jenkins job that sends a Selenium test script to one of the nodes on the grid?
I.e., if there's an issue with the test script and I see a container identifier, how can I access that specific container over VNC to see what's going on? Since there's a single host IP for multiple containers, they need to have different ports published externally (instead of 5900) to be distinguishable, but I don't see how this can be done in docker-compose/swarm. Is this doable?
As an alternative, would this be any easier with Kubernetes rather than Docker swarm? (I have not done much research on it yet.)

Localhost works, but IP gives timeout

I am trying to set up a LAMP environment on my laptop with Ubuntu 18.04.
I have no real previous experience with this, and all the tutorials I find are just step-by-step guides on how to set it up; none explain what you are actually doing.
So I don't know why I am having this problem.
After installing all parts of LAMP I can access localhost, and I see the Apache default page.
But if I try to go to my IP address (the address I found with curl -4 icanhazip.com), the page loads for a while and then tells me this:
Firefox can’t establish a connection to the server at 213.127.26.xxx
So my question is: am I using the right IP address, and how can I make Apache work from my IP address? Because phpMyAdmin will not work on localhost.
The issue is likely that your local ports (I imagine your web server is running on port 80 or 8080) are not being forwarded through your router. Your router likely uses something called NAT (network address translation) to expose all of the internal IP addresses on your network through a single "public" IP address, in your case 213.x.x.x (you should never post this here unless you're 100% positive your network is secure!). Your router needs to be configured to forward port 80 on 213.x.x.x to your machine's "internal" IP address, likely something like 192.168.x.x or 10.0.x.x. A search for "port forwarding" together with your router model should help you out.
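To find that internal address on Ubuntu, something like the following should work (interface names and the exact addresses will differ on your machine):

ip -4 addr show
or simply:
hostname -I

Look for the 192.168.x.x or 10.0.x.x entry on your wifi/ethernet interface; that is the address the router should forward port 80 to.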
Alternatively, ngrok is a nice free tool which you can use to expose your port on a public address. By running ngrok http 80, it will provide you with a temporary URL where you can reach your site (on a free plan it only provides that URL temporarily, so you will need to re-run it).
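Assuming ngrok is installed and Apache listens on port 80 locally, the basic usage is just (the forwarding URL below is made up):

ngrok http 80

ngrok then prints a forwarding line similar to http://abc123.ngrok.io -> http://localhost:80, and anyone can reach your local Apache through that public URL for as long as the tunnel is running.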
First, you have to find out on which ports your server is running.
After that, you have to go into your router's settings and add port forwarding entries for these ports, to make sure that your router forwards the requests to the right device.
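On Ubuntu, one quick way to check which ports Apache is actually listening on is something like this (a sketch; the process name may be apache2 or httpd depending on how it was installed):

sudo ss -ltnp | grep -E 'apache2|httpd'

A line with a local address of *:80 or 0.0.0.0:80 and a process entry like users:(("apache2",...)) means the server listens on port 80, which is then the port to forward on the router.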

Can a Virtual Host on Apache access the files of another Virtual Host?

I'm looking to set up a few virtual hosts for different domains for a few friends, and I want to know whether one virtual host can access the files of another host, whether via PHP or any other option, or whether it's totally isolated, so that any scripts they run would only affect their own area.
An Apache "virtual host" is just a mapping of a hostname (or ip address or port) to a particular set of configuration directives. There is no "containment" or isolation implied by this; everything is still running on the same host.
If you want to actually isolate applications, consider investigating container technology like Docker (or a virtual machine solution), with a front-end proxy directing traffic as necessary to the appropriate backend.
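As a rough sketch of that front-end proxy idea (the hostnames, ports, and backends below are hypothetical, and mod_proxy/mod_proxy_http must be enabled):

<VirtualHost *:80>
    ServerName alice.example.com
    # requests for this hostname go to the backend serving only Alice's files
    ProxyPass        / http://127.0.0.1:8001/
    ProxyPassReverse / http://127.0.0.1:8001/
</VirtualHost>

<VirtualHost *:80>
    ServerName bob.example.com
    # Bob's site lives in a separate backend, so his scripts cannot touch Alice's files
    ProxyPass        / http://127.0.0.1:8002/
    ProxyPassReverse / http://127.0.0.1:8002/
</VirtualHost>

Each backend could be, for example, a Docker container publishing its port 80 on 8001 and 8002 respectively, holding only that one friend's files.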

Hosting site using xampp server from local network without port-forwarding

I want to make my site available worldwide. I'm using a XAMPP server for hosting. I have no access to any kind of servers or modems. The situation is shown below:
My site's server has a local IP assigned by the wifi router and it runs Windows 8.
Remember, I have no access to any kind of servers or modems, so port forwarding is impossible (out of my scope).
It's actually difficult, but not impossible.
One way I would approach this is:
I would host a page on the internet.
Then take the request and store it in a database.
One of my programs will always be running on my computer.
It then checks for requests and curls the request to localhost. For this you may use Node.js (taking data from the database using the GET method and curling it to localhost).
This is the best I could think of, and I am working on it; when the code is ready I'll make it open source and notify you :)
But still, it's difficult, as you need to put the user's request to sleep for a couple of seconds and then transfer it.
It's slow, but it may work out for you.
Disadvantages:
The program will be very slow and memory usage will be higher.
Breakage may happen many times.
High bandwidth wastage.
If not encrypted, MITM (man-in-the-middle) attacks may be possible.
Advantages:
Indirect method of hosting
No need to worry about your code being lost.
I am looking forward to a better alternative, and I would like to put this question up for bounty once again.
If you cannot open the necessary ports within your LAN you will require access to an external server. However, the external server does not need to host any code, e.g.
Create a Linux-based EC2 instance using Amazon's free tier.
Install a package to redirect remote to local ports:
a. using socat:
Install socat using your distribution's package manager
Connect via SSH and start socat on the remote end: ssh -R 42500:127.0.0.1:80 -o ServerAliveInterval=60 ubuntu@xxx.xxx.xxx.xxx "socat TCP-LISTEN:8080,fork TCP:127.0.0.1:42500"
b. using a webserver and reverse proxy:
Install Apache or nginx and any required reverse proxy modules, and configure your VirtualHost to proxy requests to a local port, e.g. :8080 -> 127.0.0.1:42500 (a minimal sketch of such a VirtualHost follows below)
Connect via SSH: ssh -N -R 42500:127.0.0.1:80 -o ServerAliveInterval=60 ubuntu@xxx.xxx.xxx.xxx
Your machine is now reachable via the ec2 instance http://xxx.xxx.xxx.xxx:8080/.
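For option b, a minimal Apache VirtualHost sketch on the EC2 instance could look like the following (it assumes mod_proxy and mod_proxy_http are enabled; the ports match the example above):

Listen 8080
<VirtualHost *:8080>
    # public requests on :8080 are handed to the SSH-forwarded port from your machine
    ProxyPass        / http://127.0.0.1:42500/
    ProxyPassReverse / http://127.0.0.1:42500/
</VirtualHost>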
I occasionally use this technique when debugging web service callbacks.
Update 17-02-2014
If you are a Windows user you will need to install a third-party tool to support ssh. Options include:
cygwin
git bash
PuTTY
PuTTY is the easiest choice if you are not familiar with *nix tools. To configure remote port forwarding in PuTTY, expand the following setting: Connection -> SSH -> Tunnels. Given the previously described scenario, populate Source port as 42500, Destination as 127.0.0.1:80, and tick the Remote option. (You may also need to add the path to a PuTTY-compatible private key in the Connection -> SSH -> Auth tab, depending on your server configuration.)
To test you have successfully forwarded a port, execute the command netstat -lnt on your server. You will see output similar to:
tcp 0 0 127.0.0.1:42500 0.0.0.0:* LISTEN
Finally you can test with curl http://127.0.0.1:42500. You will see the output of your own machine's web root running on port 80.
If you don't have a public IP address and cannot use port forwarding, it is impossible to host the site.
As people have said, you need a public IP address. However, even if you had one, you should not use XAMPP as a public server, as it is designed for development and therefore has some security settings disabled.
I would recommend buying some shared web hosting and uploading your site to that. (You can get cheap hosting if you google 'shared web hosting'; free .tk domains are also available: http://www.dot.tk/)
Does your company have any VPN?
If it does and you have access to the VPN, you can add your server to the VPN network; your guests will then only need to log in to your company's VPN and can access your site as if on a local network, without using port forwarding. And since your data is very confidential, I assume that using a VPN will also help increase the security of your data.
Please correct me if I'm wrong.
Thank you.
What you are asking is not possible without port forwarding.
Let's break it into steps.
To host your site locally you will need an IP that is static, so that users can reach it reliably.
You will need a domain so that the IP can be converted into a user-friendly name.
A 24x7 internet connection is a must! You included a wifi router in your diagram, and most of today's routers are capable of port forwarding.
What I would do in your scenario is:
Instead of using XAMPP, I would install WAMP because I am more familiar with it and it is easy to configure (totally personal preference).
Then I would set my server "ONLINE". (Google how to set a WAMP server online.)
Forward port 80 from the router settings to my local computer's IP address. (It is mostly labelled "Virtual Server", "Firewall", "Port Forwarding", etc.; it varies from router to router.)
Suppose you have a local IP 192.168.1.3 and a global/router IP 254.232.123.232; then you would redirect all HTTP requests made to the router to your local IP.
[254.232.123.232]:80  --->  192.168.1.3
That is good for now, but then you will need to tackle the router's dynamic IP problem. But don't worry, thanks to some free sites that will be easy!
Go to no-ip.org -> set up an account -> create an entry, just a subdomain for now to test whether everything is working fine (a subdomain like mysite.no-ip.org; later purchase a real domain).
Enter your IP address there (the router IP) and download their application, which will automatically update their servers if your IP changes.
Wait a few minutes and voila! Your site is live.

Error with DOJO when using IP

A strange error with a project using Dojo:
If I call http://localhost/project, everything works as expected.
If I call http://127.0.0.1/project, everything works as expected.
If I call http://192.168.2.1/project, I get the following error (ONLY in IE6!):
"Bundle not found, locale.."
Any ideas?
I am running Zend Server CE with PHP 5.2.
If I add 192.168.2.1 to the "hosts" file, it works (Windows).
Sounds like Zend Server is performing some kind of virtual site support, using the site name as a partial domain.
I can't say 100% if or how it does this because I don't use Zend, but I can explain the principle using Apache as an example.
There are 3 ways in which a website can be virtually hosted under a single web server application; this applies to most servers on the market today: Apache, IIS, nginx and many others.
It all boils down to one thing, giving one running server application instance the ability to host multiple individual websites.
The 3 methods of separating sites are as follows:
By IP address: If you have multiple IP addresses (usually, but not always, because you have multiple network interface cards) then you can tell your server application to listen on one IP for one site, another IP for another site, and so on. If you browse to one IP you'll get one site, and likewise the other site on the other IP.
By port number: If you're using only one IP address, then you can bind to multiple port numbers. Port 80 is generally the default for web servers, but by browsing to an address and pinning the port number on the end (http://mysite.com:99) you'll force the browser to use that port. You can then have multiple websites listening on different ports and select them manually at browse time as required.
By Host name header: This is by far the most common way of supporting multiple sites. All web servers that understand the HTTP/1.1 protocol have to obey a header field in the request that contains the host name; when a request comes in for e.g. http://mysite.com/ there will be an entry in the request header that looks like 'Host: mysite.com'. The web server can then use that to say "ah yes, I know which one that is" and select and serve the correct website (the small curl sketch below shows this mechanism in action).
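To illustrate, you can hand-craft the Host header with curl and watch the same IP serve different content (the hostname and IP here are just placeholders borrowed from the question):

curl -H "Host: mysite.com" http://192.168.2.1/project
curl http://192.168.2.1/project

The first request should be routed to the mysite.com virtual host; the second carries only the bare IP as the host name, so it falls through to whatever default site the server has, which is the kind of mismatch described below.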
The problems start to arise, however, when you use IP addresses that cannot be resolved or have no DNS name, because the web server then doesn't know which hostname to tag the request to.
As an example, in Apache, if you set up a virtual host and then try to browse that server using just the IP address, you'll get the default server, which in many cases won't even be configured to respond correctly or display anything.
To compound this, going up to the web application layer, many frameworks also do their own checks on hostnames and other variables passed to them by the web server, and many make decisions on how to operate based on this information.
If you've reached the default web application by IP address, there's a high chance that the framework gets confused at being presented with an IP address as a host name.
As the OP noted, in many cases you can add a name to your hosts file and use it as a poor man's DNS substitute. The file to modify can be found in the following locations:
c:\windows\system32\drivers\etc\ - on Windows
and
/etc/
on Linux/Unix
The file is generally just called 'hosts' and is a plain text file. Adding a line like:
192.168.0.123 myserver
Will tie http://myserver/ to http://192.168.0.123/
If you can, and you're doing a lot of web applications, it may be worth setting up your own DNS server; most Linux distros will allow you to install BIND, and I believe there is a version available for Windows too.
I'm not going to go into the pros and cons of private DNS servers here, as it's a whole other subject in itself, but if you're likely to be doing a lot of additions to your hosts file, then in the long run you'll find it a better option.