New Docker container for every user - Apache

I am building a "hackme" challenge. Users can hack their way into the website and gain root access.
I built this with Docker containers to prevent users from harming the host or each other's gameplay (containers).
I can't just redirect to a different port for every user, because users could simply port-scan the server and find the other containers.
What I want is http://example.com/challenge1/A1B2C3
where "A1B2C3" is the unique identifier for their own container.
Could someone tell me how to do this?

With nginx you can rewrite the requests for given paths, as shown here: https://gist.github.com/soheilhy/8b94347ff8336d971ad0#step-7----rewriting-requests
The location names below correspond to the running containers:
server {
    listen ...;
    ...
    # one location block per running container, proxied by container name
    location /a1b2c3 {
        proxy_pass http://a1b2c3:8080;
    }
    location /a2b1c4 {
        proxy_pass http://a2b1c4:8080;
    }
    ...
}
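Maintaining one hand-written location block per container gets tedious as containers come and go. Below is a purely hypothetical helper, not part of the answer above: it assumes containers are named with a challenge- prefix and are reachable by name from nginx (a shared Docker network), and it regenerates the config and reloads nginx:

#!/bin/sh
# Hypothetical generator: one location block per running "challenge-*" container.
{
    echo "server {"
    echo "    listen 80;"
    for name in $(docker ps --format '{{.Names}}' --filter name=challenge-); do
        id=${name#challenge-}
        # trailing slashes strip the /ID prefix before proxying
        echo "    location /$id/ { proxy_pass http://$name:8080/; }"
    done
    echo "}"
} > /etc/nginx/conf.d/challenges.conf
nginx -s reload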

I would go with jwilder/nginx-proxy for this task. Just start the different containers with "random" subdomains.
Start nginx-proxy:
docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Start your containers:
for i in {1..5}
do
    # -d runs each container detached so the loop does not block
    docker run -d -e VIRTUAL_HOST=foo-$i.bar.com mycontainer
done
jwilder/nginx-proxy picks up every container start event and registers the new virtual host automatically.
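To tie this back to the question's unique-identifier URLs, here is a minimal sketch, assuming a wildcard DNS record (*.challenge.example.com) pointing at the proxy host; the ID generation, names and domain are illustrative, not part of the answer above:

ID=$(head -c 16 /dev/urandom | md5sum | cut -c1-6)            # e.g. a1b2c3
docker run -d --name challenge-$ID \
    -e VIRTUAL_HOST=$ID.challenge.example.com mycontainer     # registered by nginx-proxy
echo "Challenge URL: http://$ID.challenge.example.com/"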

How to properly configure certbot with autorenewal?

I'm learning Docker, so I'm sorry if this question sounds silly. Anyway, my goal is to create a LAMP container which handles all the databases in one place, and also to set up multiple virtual hosts for several sites. For each of these sites I want to use certbot to request an SSL certificate.
To do so, I wrote the following docker-compose.yaml:
version: "3"
services:
  web:
    image: webdevops/php-apache:alpine-php7
    ports:
      - "80:80"
    volumes:
      - ./www:/app
      - ./php.ini:/opt/docker/etc/php/php.ini
      - ./sites-available:/opt/docker/etc/httpd/vhost.common.d
  db:
    image: mysql
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "3306:3306"
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      MYSQL_ROOT_PASSWORD: root
    ports:
      - "8088:80"
  certbot:
    image: webdevops/certbot
    volumes:
      - ./etc/letsencrypt:/etc/letsencrypt
In the first service I declare Apache as web, using the Alpine image created by webdevops (see the documentation). I bind port 80 so I can access Apache externally without specifying a custom port.
In the volumes section I added the www folder, which contains the PHP scripts.
I also specified a custom php.ini to override the default PHP settings. Then, as the last part of volumes, I mounted all the virtual hosts I created inside the sites-available folder into the vhost.common.d directory.
Then I have the certbot container as the last part of my docker-compose file, and I would like to do the following:
How can I request a certificate for my subdomain, whose vhost is stored inside the sites-available folder mounted as a volume of the web container?
How can I set up a cron job, or something like a scheduled task, that auto-renews all the certificates?
How can I store the obtained certificates in a volume?
I will admit, Docker at times is a struggle to piece together all the appropriate parts. With that said, my answer will not be complete, but hopefully it will get you a step closer.
The following will create a certificate (note the --dry-run; it is highly recommended you use this for your testing, or else you'll get throttled):
docker run -it --rm \
    -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
    -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
    -v /vol/to/the/web/root:/data/letsencrypt \
    certbot/certbot certonly \
    --noninteractive \
    --webroot --webroot-path=/data/letsencrypt \
    -d sub.domain.com \
    --dry-run
-v /docker-volumes/etc/letsencrypt:/etc/letsencrypt
needed to store the certificate itself
-v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt
not required, but useful in case you want to review log messages
-v /vol/to/the/web/root:/data/letsencrypt
gives certbot access to your web root so it can create the .well-known directory and perform its checks; this one is tricky, as it must be the same location your web container uses as its web-root volume
--noninteractive
certbot will not stop to ask you questions
--webroot --webroot-path=/data/letsencrypt
tells certbot where to find the webroot (i.e. within its own container)
Although not in the command above, you can add the following to avoid being prompted for an email address; I am not sure whether it is a requirement or not:
--email [email_address] --agree-tos --no-eff-email
Things to keep in mind:
run certbot in --dry-run mode while testing, or else you will be throttled
certbot needs HTTP access to the host; your vhost declaration should not redirect or deny HTTP requests, at least not to the .well-known directory
you will need to add the appropriate SSL options to your vhost; I think certbot can do this automatically, but I have not used that myself
you will then need to reload Apache, like so: /etc/init.d/apache2 reload
remove -it when/if you are running it from cron
explore wrapping the cert creation and renewal in a shell script (see the sketch after this answer)
While I know this is not "the answer", hopefully some of this helps.
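Building on that last point, here is a minimal renewal sketch, assuming the same volume paths as the certonly command above; the script name, the web container name and the reload command are assumptions for illustration:

#!/bin/sh
# renew-certs.sh - hypothetical wrapper; certbot renew re-checks all
# certificates under /etc/letsencrypt and renews those close to expiry.
docker run --rm \
    -v /docker-volumes/etc/letsencrypt:/etc/letsencrypt \
    -v /docker-volumes/var/lib/letsencrypt:/var/lib/letsencrypt \
    -v /vol/to/the/web/root:/data/letsencrypt \
    certbot/certbot renew --webroot --webroot-path=/data/letsencrypt
# reload Apache in the web container (assumes the container is named "web")
docker exec web /etc/init.d/apache2 reload

A crontab entry to run it twice a day could look like:

0 3,15 * * * /usr/local/bin/renew-certs.sh >> /var/log/renew-certs.log 2>&1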

FaxServer installed in Docker on 0.0.0.0:8080 and how to access it on Ubuntu 14.x from the internet

I have installed FaxServer on my Ubuntu server. It uses Docker.
It is up and running as follows:
CONTAINER ID   IMAGE                        COMMAND                  CREATED          STATUS         PORTS                    NAMES
597d61ce2785   bludesign/faxserver:latest   "/bin/sh -c 'bash -c…"   19 minutes ago   Up 5 minutes   0.0.0.0:8080->8080/tcp   faxserver_vapor_1
6595fe5908c5   mongo:latest                 "docker-entrypoint.s…"   19 minutes ago   Up 6 minutes   27017/tcp                faxserver_mongo_1
I am not able to assign any public IP addresses to the Docker containers. The main server on which Docker is running has internet access and hence a public IP.
How can I run Apache or something similar so the FaxServer listening on 0.0.0.0:8080 is reachable from the internet? The mongo container is part of the FaxServer.
Any guidance much appreciated.
There are two options:
You can use NGINX as a reverse proxy server (https://github.com/jwilder/nginx-proxy) and add an env var called VIRTUAL_HOST to the container, as follows:
docker run -d -p 8080:8080 -e VIRTUAL_HOST=awesomefaxservice.com --name awesomefaxservice bludesign/faxserver
Then configure DNS to point to the machine's IP. Once you have done that, any request matching the virtual host will be proxied to the container on the exposed port.
If you don't want to install a proxy and set up DNS, check option 2.
Since the container already binds to 0.0.0.0:8080 on the host, you can open that port in the host's firewall rules to accept incoming traffic from the internet and simply access your_static_ip:container_port.
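For option 2 on Ubuntu, a minimal sketch using ufw (the port number matches the mapping above; adapt it if yours differs):

sudo ufw allow 8080/tcp    # open the published container port
# then browse to http://your_static_ip:8080/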

Put different containers, each containing a server, on the same server

I have a Debian server with apache2 on it, which I can access by IP address.
What I want is to be able to access the containers inside it (each containing an apache2 server) from the outside via a URL like "myIpAddress/container1". Currently I can access those containers only from the Debian server itself.
I thought about using a reverse proxy, but I cannot make it work.
Thank you for your help! :-)
Map the Docker container's port to a host port and access the container at <host-ip>:port.
docker run -p host-port:container-port image
For example, the following run maps host port 80 to the app's port 5000, making the container available at 127.0.0.1:80.
docker run -p 80:5000 training/webapp
Update:
Setting up a reverse proxy using NGINX
This example uses a plain NGINX container as site A and a plain Apache server as site B.
Run the reverse proxy.
docker run -d \
    --name nginx-proxy \
    -p 80:80 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy
Start the container for site A, specifying the domain name in the VIRTUAL_HOST variable.
docker run -d --name site-a -e VIRTUAL_HOST=a.example.com nginx
Check out your website at http://a.example.com.
With site A still running, start the container for site B.
docker run -d --name site-b -e VIRTUAL_HOST=b.example.com httpd
Check out site B at http://b.example.com.
Note: Make sure you have set up DNS to forward the subdomains to the host running nginx-proxy. If you're using AWS, the easiest way is to use Route 53.
For local testing, map the subdomains to localhost by adding entries to the /etc/hosts file:
127.0.0.1 a.example.com
127.0.0.1 b.example.com
References
jwilder NGINX Proxy GitHub
NGINX reverse proxy using Docker

Many Docker containers on one host

I didn't find anything about running many different webapp containers on one host. For example, I have two containers: on the first I run Apache with ownCloud, and on the second I run a WordPress blog. Both of them have to run on port 80. How can I handle this?
Thanks
You can use the -p flag to map ports:
docker run -p 8080:80 owncloud
docker run -p 8081:80 wordpress
And then access ownCloud at http://yourdomain.com:8080/ and WordPress at http://yourdomain.com:8081/
It is common to combine Docker with a reverse proxy like HAProxy.
With a reverse proxy you can route requests for owncloud.yourdomain.com to your ownCloud container and requests for wordpress.yourdomain.com to the WordPress container (or yourdomain.com/owncloud and yourdomain.com/wordpress).
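A minimal sketch of that HAProxy idea, assuming the containers are published on host ports 8080 and 8081 as in the -p example above (hostnames and backend addresses are illustrative):

frontend http-in
    mode http
    bind *:80
    # route by Host header to the matching container backend
    acl host_owncloud  hdr(host) -i owncloud.yourdomain.com
    acl host_wordpress hdr(host) -i wordpress.yourdomain.com
    use_backend owncloud_be  if host_owncloud
    use_backend wordpress_be if host_wordpress

backend owncloud_be
    mode http
    server owncloud 127.0.0.1:8080

backend wordpress_be
    mode http
    server wordpress 127.0.0.1:8081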
You will have to use different ports on the host (otherwise you will get an error starting the second container).
To avoid that, expose the internal port 80 of one container on another host port.
For instance, when running docker run:
docker run -p 8081:80 name_of_your_image
This will expose port 80 of your server on port 8081 of the host.
If you want, you can use docker-gen, a simple tool that generates proxy configuration from the containers' environment variables.
Here is the documentation:
https://github.com/jwilder/docker-gen
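As a rough idea of how docker-gen is typically wired up (the template path and reload command below are assumptions; check its README for the exact flags):

docker-gen -watch -notify "nginx -s reload" \
    templates/nginx.tmpl /etc/nginx/conf.d/default.conf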

How to set up a small website using Docker

I have a question regarding Docker. The container concept is totally new to me and I am sure I haven't yet grasped how things work (containers, Dockerfiles, ...) or how they could work.
Let's say I would like to host small websites on the same VM, each consisting of Apache, PHP-FPM, MySQL and possibly Memcache.
This is what I had in mind:
1) One image that contains Apache, PHP, MySQL and Memcache
2) One or more images that contain my websites' files
I must find a way to tell the Apache in my first image where the hosted websites' folders are stored. Yet, I don't know whether one container can read files inside another container.
Has anyone here done something similar?
Thank you
Your container setup should be:
MySQL container
Memcached container
Apache, PHP etc. container
Data container (optional)
Run MySQL and expose its port using the -p flag:
docker run -d --name mysql -p 3306:3306 dockerfile/mysql
Run Memcached:
docker run -d --name memcached -p 11211:11211 borja/docker-memcached
Run your web container and mount the web files from the host filesystem into the container; they will be available at /container_fs/web_files/ inside the container. Link to the other containers to be able to communicate with them over TCP:
docker run -d --name web -p 80:80 \
    -v /host_fs/web_files:/container_fs/web_files/ \
    --link mysql:mysql \
    --link memcached:memcached \
    your/docker-web-container
Inside your web container, look for the environment variables MYSQL_PORT_3306_TCP_ADDR and MYSQL_PORT_3306_TCP_PORT to tell you where to connect to the MySQL instance, and similarly MEMCACHED_PORT_11211_TCP_ADDR and MEMCACHED_PORT_11211_TCP_PORT to tell you where to connect to Memcached.
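For instance, a quick check from a shell inside the web container (the values shown are only examples of what --link injects):

echo $MYSQL_PORT_3306_TCP_ADDR        # e.g. 172.17.0.2
echo $MYSQL_PORT_3306_TCP_PORT        # 3306
echo $MEMCACHED_PORT_11211_TCP_ADDR   # e.g. 172.17.0.3
echo $MEMCACHED_PORT_11211_TCP_PORT   # 11211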
The idiomatic way of using Docker is to keep to one process per container. So Apache, MySQL etc. should be in separate containers.
You can then create a data container to hold your website files and simply mount the volume into the webserver container using --volumes-from. For more information see https://docs.docker.com/userguide/dockervolumes/, specifically "Creating and mounting a Data Volume Container".
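A minimal sketch of that data-container pattern (image names and the path are illustrative):

# data-only container holding the web files
docker create -v /var/www --name web-data debian /bin/true
# the web server container mounts the same /var/www via --volumes-from
docker run -d --name web -p 80:80 --volumes-from web-data httpd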