I have a Dockerfile that builds an Apache web server image with some custom configurations.
Building the Dockerfile produces an image that I can reference in a Kubernetes deployment YAML file.
Everything works properly, but after deployment the Apache service is down in every container of every pod.
Obviously I could exec into every container and run /etc/init.d/apache2 start, but that solution is not very smart.
So my question is: how can I make my custom Apache run automatically when the deployment YAML file is applied?
PS: I tried this workaround: from the Dockerfile I created a container, accessed it, and started Apache. Then I created a new image from that container (docker commit + gcloud image push), but when I deploy the application Apache is still down.
Well, first things first - I would very much recommend just using the official httpd (Apache) image and making your custom configurations from there. Their documentation states this in the following paragraph:
Configuration
To customize the configuration of the httpd server, just COPY your custom configuration in as /usr/local/apache2/conf/httpd.conf.
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
However, if you're dead set on building everything yourself, you'll notice that inside the Dockerfile for the official image they copy in a Bash script and set it as the CMD. This works because a Docker container should run a single foreground process; this is why, as you stated, starting Apache via its init service is a bad idea.
You can find the script they're running here; it's very short at 7 lines, so you shouldn't have too much trouble figuring out where to go from here.
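For reference, the script is essentially the following (a close paraphrase; verify against the linked source):

#!/bin/sh
set -e

# Apache gets grumpy about PID files pre-existing
rm -f /usr/local/apache2/logs/httpd.pid

exec httpd -DFOREGROUND

The key line is exec httpd -DFOREGROUND: Apache replaces the shell as the container's main process and stays in the foreground, so the container lives exactly as long as Apache does.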
Best of luck!
Related
I have a docker-compose setup in which I am uploading source code for a server, say a Flask API. Now when I change my Python code, I have to follow steps like these:
stop the running containers (docker-compose stop)
build and load updated code in container (docker-compose up --build)
This takes a fairly long time. Is there any better way? For example, updating the code in the running container and then restarting the Apache server without stopping the whole container?
There are a few dirty ways you can modify the file system of a running container.
First you need to find the path of the directory used as the runtime root for the container. Run docker container inspect id/name and look for the key UpperDir in the JSON output. You can edit/copy/delete files in that directory.
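If you are on the overlay2 storage driver you can pull that path out directly; a sketch (the container name web is hypothetical):

$ docker container inspect --format '{{ .GraphDriver.Data.UpperDir }}' web

This prints a path like /var/lib/docker/overlay2/<id>/diff; files you edit there are visible inside the running container.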
Another way is to get the process ID of the process running within the container and go to the /proc/process_id/root directory. This is the root directory for the process running inside Docker; you can edit files there on the fly and the changes will appear in the container.
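A sketch of this second approach (container name again hypothetical; root on the host is usually required):

$ PID=$(docker inspect --format '{{ .State.Pid }}' web)
$ sudo ls /proc/$PID/root/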
You can run the docker build while the old container is still running, and your downtime is limited to the changeover period.
It can be helpful for a couple of reasons to put a load balancer in front of your container. Depending on your application this could be a "dumb" load balancer like HAProxy, a full web server like nginx, or something your cloud provider makes available. It allows you to have multiple copies of the application running at once, possibly on different hosts (which helps with scaling and reliability). In this case the sequence becomes the following (a shell sketch follows the list):
docker build the new image
docker run it
Attach it to the load balancer (now traffic goes to both old and new containers)
Test that the new container works correctly
Detach the old container from the load balancer
docker stop && docker rm the old container
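Sketched as shell commands (image and container names are hypothetical, and the attach/detach steps depend entirely on your load balancer):

$ docker build -t myapp:v2 .
$ docker run -d --name myapp-v2 -p 8081:80 myapp:v2
# add myapp-v2 to the load balancer backends and test it, then
# drain myapp-v1 from the load balancer and retire it:
$ docker stop myapp-v1 && docker rm myapp-v1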
If you don't mind heavier-weight infrastructure, this sequence is basically exactly what happens when you change the image tag in a Kubernetes Deployment object, but adopting Kubernetes is something of a substantial commitment.
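For reference, the Kubernetes equivalent is a single command once the Deployment exists (names hypothetical):

$ kubectl set image deployment/myapp myapp=myrepo/myapp:v2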
Traefik v1.3.1
Docker CE for Windows: 17.06.0-ce-win18 (12627)
I have the /acme folder routed to a host volume which contains the file acme.json. With the Traefik 1.3.1 update, I noticed that Traefik gets stuck in an infinite loop complaining that "permissions 755 for /etc/traefik/acme/acme.json are too open, please use 600". The only solution I've found is to remove acme.json and let Traefik re-negotiate the certs. Unfortunately, if I ever need to restart the container, I have to remove acme.json first or I'm stuck with the same issue!
My guess is that the issue lies with the Windows volume mapped to Docker but I was wondering what the recommended workaround would even be for this?
Can I change permissions on shared volumes for container-specific deployment requirements?
No, at this point, Docker for Windows does not enable you to control (chmod) the Unix-style permissions on shared volumes for deployed containers, but rather sets permissions to a default value of 0755 (read, write, and execute for the owner; read and execute for group and others), which is not configurable.
Traefik is not compatible with regular Windows due to the POSIX permissions check. It may work in the Windows Subsystem for Linux since that has a Unix-style permission system.
Stumbled across this issue when trying to get Traefik running on Docker for Windows... I ended up getting it working by adding a few lines to a Dockerfile that create acme.json and set its permissions. I then built the image and, despite it throwing the "Docker image from Windows against a non-Windows Docker host" security warning, when I checked permissions on the acme.json file it worked!
I set up a repo and have it auto-building on Docker Hub here for further testing:
https://hub.docker.com/r/guerillamos/traefik/
https://github.com/guerillamos/traefikwin/blob/master/Dockerfile
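The heart of such a Dockerfile is just baking an empty acme.json with tight permissions into the image. A minimal sketch (assuming a shell-equipped base tag, since the scratch-based Traefik image cannot run RUN commands; the linked Dockerfile may differ in detail):

FROM traefik:alpine
RUN mkdir -p /etc/traefik/acme \
 && touch /etc/traefik/acme/acme.json \
 && chmod 600 /etc/traefik/acme/acme.json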
Once I got that built, I switched the image out in my docker-compose file, and according to the logs my DNS challenge to Cloudflare worked like a charm.
I hope this helps someone!
Setup:
OpenERP/Odoo installed in a Docker environment as a single container; in other words, OpenERP/Odoo and a PostgreSQL database are installed by running a single "run" command.
NGINX used as a reverse proxy
Restore database over 1Mb in size.
Reference:
Error message in restoring database via both zip file and dump file for Odoo 8
Symptoms:
OpenERP/Odoo starts to upload the database but then states that the database cannot be restored, while at the same time advising that the database has been restored.
Database is not available at the central OpenERP/Odoo log-in screen.
For newbies like myself, the experience of this problem was particularly frustrating. The problem stems from a default setting within NGINX that limits NGINX's interaction with a client (the computer used to restore the database to OpenERP/Odoo) to a 1Mb upload. As a result, the database restoration feature of OpenERP/Odoo appears broken. Thankfully, the reference in the question above hinted at the problem and the solution. Included below is a more richly documented instruction set for correcting the NGINX configuration that prevents OpenERP/Odoo database restoration.
Attach to Docker container
$ docker exec -it [containerIdOrName] bash
If this is the first time modifying NGINX in this container, install vi
$ apt-get update
$ apt-get install vim
Set client_max_body_size to 0 to disable body size checking
See Module ngx_http_core_module for more information on settings
$ vi /etc/nginx/nginx.conf
http {
    ...
    client_max_body_size 0;
}
Exit out of NGINX container
$ exit
Restart NGINX container
$ docker restart [containerIdOrName]
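Alternatively, NGINX can reload its configuration without restarting the container at all; assuming the nginx binary is on the PATH inside the container:

$ docker exec [containerIdOrName] nginx -s reload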
Give database restoration a try.
Please post corrections or additions to this method to sweeten the process for others who are struggling to bushwhack their way through virtualization.
When I serve a Golang application from within the official Docker Hub repository, I wonder: what is the default working directory the application starts in?
Background: I will have to map local Certificate Authority and server keys into the container to serve TLS/https, and I wonder where to map them so the application will be able to grab them from its current working directory within the container.
If you are using the golang:1.X-onbuild image from Docker Hub (https://hub.docker.com/_/golang/), your application will be copied into
/go/src/app
this means all files and directories from the directory where you run the
docker build
command will be copied into the container.
And the base golang images set their WORKDIR to
/go
while the onbuild variant sets it to /go/src/app.
Go will return the current working directory using
currdir, _ := os.Getwd()
(Note that the commonly seen filepath.Abs(filepath.Dir(os.Args[0])) returns the directory of the executable, which is not necessarily the working directory.)
Executed within a golang container and right after startup, the pwd is set to
/go/src/app
The current working directory of a golang application starting up within a Docker container is thus /go/src/app. In order to map a file/directory into a container you will have to use the -v switch, as described in the documentation for docker run:
-v /local/file.pem:/go/src/app/file.pem
Will map a local file into the pwd of the dockerized golang app.
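Putting it together, a full invocation might look like this (the image name and the second key file are hypothetical; adjust the port to whatever your app listens on):

$ docker run -d \
    -v /local/file.pem:/go/src/app/file.pem \
    -v /local/key.pem:/go/src/app/key.pem \
    -p 443:8443 \
    my-golang-app

The app can then open file.pem and key.pem by relative path, since its working directory is /go/src/app.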
I have a Vagrantfile that does 2 important things: first it pulls and runs dockerfile/rabbitmq, then it builds from a custom Dockerfile that runs an application which assumes a vhost on the RabbitMQ server, let's say "/foo".
The problem is the vhost is not there.
The container with rabbitmq is running successfully, and the app is linked to it using --link when the built image is run. Using the environment variables Docker sets, I can hit the server. But somewhere in the middle of these operations I need to create the vhost, since my connection is refused; I assume because "/foo" is not there.
How can I get the vhost onto the rabbit server?
Thanks
note - using the webadmin is not an option, this has to be done programmatically.
You can put default_vhost in /etc/rabbitmq/rabbitmq.config: http://www.rabbitmq.com/configure.html
It will then be created on the first run. (Stop the server and delete the mnesia directory if it has already been started.)
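In the classic Erlang-term format of /etc/rabbitmq/rabbitmq.config, that would look something like this (using the "/foo" vhost from the question):

[
  {rabbit, [
    {default_vhost, <<"/foo">>}
  ]}
].

Note the trailing dot - the file is a list of Erlang terms.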
There are a few ways to get the desired configuration:
Export/import the whole configuration with rabbitmqadmin, the Management Plugin CLI tool.
or
Use the HTTP API from the management plugin.
or
Use the rabbitmqctl CLI tool to manage access control.
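For example, with rabbitmqctl, the vhost from the question can be created and granted to a user like this (the user name app is hypothetical):

$ rabbitmqctl add_vhost /foo
$ rabbitmqctl set_permissions -p /foo app ".*" ".*" ".*"

In a Docker setup you would run these via docker exec against the RabbitMQ container.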
BTW, according to the docs here: https://www.rabbitmq.com/vhosts.html
you can do this via curl:
curl -u username:pa$sw0rD -X PUT http://rabbitmq.local:15672/api/vhosts/vh1
So it probably doesn't matter whether you are doing this remotely or not.