HTTPS is not working on httpd Docker container (Apache)

I am new to Apache and Docker. I am running the httpd:2.4 image from Docker Hub and the container is running fine. When I hit localhost from the browser, it responds with "It works!", but when I try to hit localhost over HTTPS, I get a "site can't be reached" error.
Command used to run httpd:
docker run -d -p 443:443 --name httpd httpd:2.4

You must configure an SSL certificate for this. Refer to the SSL/HTTPS section of the official httpd image documentation on Docker Hub.
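For reference, a minimal sketch based on that section: extend the image, uncomment the SSL-related lines in httpd.conf, and supply a certificate pair. The server.crt/server.key names match what the bundled conf/extra/httpd-ssl.conf expects by default; a self-signed pair is assumed here.
# Dockerfile: enable mod_ssl and the bundled SSL vhost config
FROM httpd:2.4
RUN sed -i \
    -e 's/^#\(Include .*httpd-ssl.conf\)/\1/' \
    -e 's/^#\(LoadModule .*mod_ssl.so\)/\1/' \
    -e 's/^#\(LoadModule .*mod_socache_shmcb.so\)/\1/' \
    conf/httpd.conf
# file names and paths expected by the default httpd-ssl.conf
COPY server.crt /usr/local/apache2/conf/server.crt
COPY server.key /usr/local/apache2/conf/server.key
Then build and run it, publishing port 443:
docker build -t httpd-ssl .
docker run -d -p 443:443 --name httpd httpd-ssl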

Related

Configuring Container Registry in gitlab over http

I'm trying to configure the Container Registry in GitLab, installed on my Ubuntu machine.
I have Docker configured over HTTP and it works (I added it as an insecure registry).
Gitlab is installed on the host http://5.121.32.5
external_url 'http://5.121.32.5'
In the gitlab.rb file, I have enabled the following settings:
registry_external_url 'http://5.121.32.5'
gitlab_rails['registry_enabled'] = true
gitlab_rails['registry_host'] = "5.121.32.5"
gitlab_rails['registry_port'] = "5005"
gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
To make the daemon listen on the port, I created a systemd drop-in file:
sudo mkdir -p /etc/systemd/system/docker.service.d/
Here are its contents:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
But when this command runs from the .gitlab-ci.yml file
docker push ${MY_REGISTRY_PROJECT}:latest
then I get an error
Error response from daemon: Get "https://5.121.32.5:5005/v2/": dial tcp 5.121.32.5:5005: connect: connection refused
What is the problem? What did I miss?
And why is https specified here if I have http configured?
When you use docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}, the docker command defaults to HTTPS, which causes the problem.
You need to tell the Docker daemon used by your GitLab Runner to treat the registry as insecure:
On the server where the GitLab Runner is running, add the following option to your Docker daemon launch arguments (for me, that meant adding it to DOCKER_OPTS in /etc/default/docker and restarting the Docker engine): --insecure-registry 172.30.100.15:5050, replacing the IP and port with those of your own insecure registry.
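On installations that configure the daemon through /etc/docker/daemon.json rather than DOCKER_OPTS, the equivalent setting (using the registry address from the question) is:
{
  "insecure-registries": ["5.121.32.5:5005"]
}
Restart the Docker daemon afterwards (e.g. sudo systemctl restart docker) and retry the docker login and docker push.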

Run several containers, each containing a web server, on the same host

I have a Debian server with apache2 on it, which I can access by IP address.
What I want is to be able to reach the containers inside it (each of which runs an apache2 server) from the outside via a URL like "myIpAddress/container1". Currently I can only reach those containers from the Debian server itself.
I thought about using a reverse proxy, but I cannot make it work.
Thank you for your help! :-)
Map the docker container's port to a host port and access the docker container from <host-ip>:port.
docker run -p host-port:container-port image
For example, running the following command makes the container's port 5000 reachable on the host at 127.0.0.1:80:
docker run -p 80:5000 training/webapp
Update:
Setting up reverse proxy using NGINX
This example uses a plain NGINX container as site A and plain Apache server as site B.
Run the reverse proxy.
docker run -d \
--name nginx-proxy \
-p 80:80 \
-v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Start the container for site A, specifying the domain name in the VIRTUAL_HOST variable.
docker run -d --name site-a -e VIRTUAL_HOST=a.example.com nginx
Check out your website at http://a.example.com.
With site A still running, start the container for site B.
docker run -d --name site-b -e VIRTUAL_HOST=b.example.com httpd
Check out site B at http://b.example.com.
Note: Make sure you have set up DNS to forward the subdomains to the host running nginx-proxy. If you're using AWS, the easiest way is to use Route53.
For testing locally, map sub-domains to resolve to localhost by adding entries in /etc/hosts file.
127.0.0.1 a.example.com
127.0.0.1 b.example.com
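The question asked for URLs like myIpAddress/container1; for path-based routing instead of subdomains, a plain NGINX server block can proxy by location. A minimal, hypothetical sketch (the host ports 8081/8082 are assumptions; each container would be published there with -p):
# /etc/nginx/conf.d/containers.conf (hypothetical): path-based routing
server {
    listen 80;

    # forward /container1/ to the container published on host port 8081
    location /container1/ {
        proxy_pass http://127.0.0.1:8081/;
        proxy_set_header Host $host;
    }

    # forward /container2/ to the container published on host port 8082
    location /container2/ {
        proxy_pass http://127.0.0.1:8082/;
        proxy_set_header Host $host;
    }
}
The trailing slash in proxy_pass strips the /container1/ prefix before forwarding; the apps behind the proxy must cope with being served under a path prefix (relative links, or a configured base URL).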
References
jwilder nginx-proxy on GitHub
NGINX reverse proxy using Docker

How to access my docker container (Notebook) over the Internet. My host is running on Google Cloud

I am not able to access my container, which runs a "dockerized" IPython/Jupyter Notebook application. The host is CentOS 7 running on Google Cloud.
Here are the details of the environment:
Host: CentOS 7 with an Apache web server, running for example on IP address 123.4.567.890 (port 80 is listening)
Docker container: a Jupyter Notebook application; the container is called, for example, APP-PN and is reachable on port 8888 inside Docker.
If I run the application on my local server, I can access the notebook application via the browser:
http://localhost:8888/files/dir1/app.html
However, when I run the application on the Google Cloud if I put:
http://123.4.567.890:8888/files/dir1/app.html
I cannot access it.
I tried every combination of opening port 8888 over TCP on the host as well as exposing the port via the docker run command, none of which worked:
firewall-cmd --zone=public --add-port=8888/tcp --permanent
docker run -it -p 80:8888 APP-PN
docker run --expose 8888 -it -p 80:8888 APP-PN
I also tried to change Apache to listen on both port 80 and port 8888, but I got some errors.
However, if I STOP the Apache web server and then run the command
docker run -it -p 80:8888 APP-PN
I can access the application simply in my browser via:
http://123.4.567.890/files/dir1/app.html
HERE is my question: I do not want to stop my Apache web server, and at the same time I want to access my Docker container via the external port 8888.
Thanks in advance for all the help.
I didn't see in your examples a
docker run -it -p 8888:8888 APP-PN
The -p argument describes first the host port to listen on and then the container port to route to. If you want the host to listen on the same port as the container, -p 8888:8888 will get it done.
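Putting that together with the firewall rule from the question:
firewall-cmd --zone=public --add-port=8888/tcp --permanent
firewall-cmd --reload
docker run -it -p 8888:8888 APP-PN
On Google Cloud you will likely also need a firewall rule in the Cloud Console allowing ingress on TCP 8888 to the instance; after that, http://123.4.567.890:8888/files/dir1/app.html should work while Apache keeps port 80.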

Many Docker containers on one host

I didn't find anything about running many different webapp containers on one host. For example, I have two containers: the first runs Apache with ownCloud, and the second runs a WordPress blog. Both of them have to run on port 80. How can I handle this?
Thanks
You can use the -p flag to map ports:
docker run -p 8080:80 owncloud
docker run -p 8081:80 wordpress
And then access ownCloud at http://yourdomain.com:8080/ and WordPress at http://yourdomain.com:8081/.
It is common to combine Docker with a reverse proxy like HAProxy.
With a reverse proxy you can route requests for owncloud.yourdomain.com to your ownCloud container and requests for wordpress.yourdomain.com to the WordPress container (or yourdomain.com/owncloud and yourdomain.com/wordpress); a sketch follows below.
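A minimal HAProxy sketch of the subdomain variant (a fragment of haproxy.cfg; it assumes mode http in the defaults section, and the host-port mappings 8080/8081 come from the -p commands above):
frontend http-in
    bind *:80
    # route by Host header
    acl is_owncloud hdr(host) -i owncloud.yourdomain.com
    acl is_wordpress hdr(host) -i wordpress.yourdomain.com
    use_backend owncloud_backend if is_owncloud
    use_backend wordpress_backend if is_wordpress

backend owncloud_backend
    server owncloud1 127.0.0.1:8080

backend wordpress_backend
    server wordpress1 127.0.0.1:8081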
You will have to use different ports on the host (otherwise you will get an error starting the second container).
To avoid that, map the internal port 80 of one of the containers to another port on the host.
For instance, when running docker run:
docker run -p 8081:80 name_of_your_image
This maps port 80 inside the container to port 8081 on the host.
If you want, you can use docker-gen, a small tool that generates configuration files (e.g. for a reverse proxy) from the metadata of running containers, driven by environment variables set on each container.
This is the documentation:
https://github.com/jwilder/docker-gen
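For reference, a typical invocation along the lines of the docker-gen README (the template and output paths come from its NGINX sample and are assumptions here):
docker-gen -only-exposed -watch -notify "nginx -s reload" templates/nginx.tmpl /etc/nginx/sites-enabled/default
This watches the Docker daemon for container start/stop events, regenerates the NGINX config from the template, and reloads NGINX.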

docker: Says connection refused when attempting to connect to a published port

I'm a newbie at Docker. I'm creating a "Hello, World" example. All I'm trying to do is bring up Apache in a container and then view the default website from the host machine.
Dockerfile
FROM centos:latest
RUN yum install epel-release -y
RUN yum install wget -y
RUN yum install httpd -y
EXPOSE 80
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]
And then I build it:
> docker build .
And then I tag it:
docker tag 17283f566320 my:apache
And then I run it:
> docker run -p 80:9191 my:apache
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.2. Set the 'ServerName' directive globally to suppress this message
It then runs....
In another terminal window, I attempt to issue the curl command to view the default web site.
> curl -XGET http://0.0.0.0:9191
curl: (7) Failed to connect to 0.0.0.0 port 9191: Connection refused
> curl -XGET http://localhost:9191
curl: (7) Failed to connect to localhost port 9191: Connection refused
> curl -XGET http://127.0.0.1:9191
curl: (7) Failed to connect to 127.0.0.1 port 9191: Connection refused
Just to make sure that I got the port correct, I run this:
> docker ps -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aed4063b1f6 my:apache "/usr/sbin/httpd -D F" 43 seconds ago Up 42 seconds 80/tcp, 0.0.0.0:80->9191/tcp angry_hodgkin
Thanks to all. My ports were reversed; -p takes host-port:container-port, so the host port must come first:
> docker run -p 9191:80 my:apache
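With the host port first, the curl checks from above now succeed:
> curl -XGET http://localhost:9191
which returns the CentOS/Apache default test page.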
Even though you created the containers on your local machine, they are actually running on a different machine (a virtual machine).
First, check the IP of your Docker machine (the virtual machine):
$docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
default * virtualbox Running tcp://192.168.99.100
Then run the curl command against the default web site served by Apache inside the container:
curl http://192.168.99.100:9191
If you are running Docker natively on an Ubuntu machine, you should be able to access your container via localhost.
If you are using Mac or Windows, your Docker container does not run on localhost but on its own IP. You can get the container IP with docker inspect <container id> | grep IPAddress, or, if you are using docker-machine, with docker-machine ip <docker_machine_name>.
Related info:
http://networkstatic.net/10-examples-of-how-to-get-docker-container-ip-address/
https://docs.docker.com/machine/reference/ip/
How to get a Docker container's IP address from the host?
So your curl call should look something like: curl <container_ip>:<container_exposed_port>
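For example, with the container ID from the docker ps output above (the docker inspect Go template is a standard way to pull out the IP; port 80 is what the Dockerfile EXPOSEs):
CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 5aed4063b1f6)
curl http://$CONTAINER_IP:80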
You can also tag your image at build time with the -t flag:
docker build -t my:image .
Another tip: you can optimize your Dockerfile by combining the yum install commands:
RUN yum install -y \
epel-release \
wget \
httpd
http://blog.tutum.co/2014/10/22/how-to-optimize-your-dockerfile/