Data directory (/var/www/moodledata) cannot be created by the installer - apache

I'm trying to deploy Moodle in Docker.
Here are the steps I followed:
First, create a new network for the application and the database:
$ docker network create moodle
Then, start a new database process in an isolated container:
$ docker run --name mysql --network moodle -e MYSQL_ROOT_PASSWORD=password -d mysql
Finally, you can run this moodle image and link it to your mysql container:
$ docker run --name my-moodle --network moodle --link mysql:database -p 8080:80 -d aesr/moodle
Access it via http://localhost:8080 or http://host-ip:8080 in a browser.
But while installing Moodle I'm getting this error:
Data directory (/var/www/moodledata) cannot be created by the installer.
Maybe it's because Apache doesn't have the proper permissions. I'm running Docker on Windows.
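One workaround I was thinking about (just a sketch; the /var/www/moodledata mount point and the www-data user are assumptions about the aesr/moodle image that I haven't verified):
# keep the data directory on a named volume instead of letting the installer create it
$ docker volume create moodledata
$ docker run --name my-moodle --network moodle --link mysql:database -v moodledata:/var/www/moodledata -p 8080:80 -d aesr/moodle
# if the directory ends up owned by root, hand it to the web server user from inside the container
$ docker exec -u 0 my-moodle chown -R www-data:www-data /var/www/moodledata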

My solution worked on CentOS 7.
Just move moodledata somewhere else, for example:
mkdir /moodledata
chown -R apache:apache /moodledata
The installer refuses to start because a data directory under /var/www can be exposed to the internet, so it does not accept that location for the installation.
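Note also that on CentOS 7 with SELinux enforcing, moving the directory may not be enough by itself; you may additionally need to give it a context Apache can write to, for example:
chcon -R -t httpd_sys_rw_content_t /moodledata
# or, to make the context persistent across relabels:
semanage fcontext -a -t httpd_sys_rw_content_t "/moodledata(/.*)?"
restorecon -R /moodledata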

Docker entrypoint initdb PERMISSION DENIED

I am getting the following error when I run docker-compose up:
Thanks a lot for your help
I resolved this problem by adding this to the Dockerfile, right after the line that copies the scripts to docker-entrypoint-initdb.d:
RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/
Example Dockerfile:
FROM mysql:latest
ENV MYSQL_DATABASE NAME_DATABASE
ENV MYSQL_ROOT_PASSWORD ***********
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/
RUN chown -R mysql:mysql /docker-entrypoint-initdb.d/
EXPOSE 3306
CMD ["mysqld", "--character-set-server=utf8mb4", "--collation-server=utf8mb4_unicode_ci"]
The next step is to build the image:
docker build -t image-db:latest .
Then create the container:
docker run -d -p 3306:3306 --name container-db image-db:latest
You should not override the postgres image entrypoint. It is designed to look for .sql files in the /docker-entrypoint-initdb.d/ directory (see the corresponding line in the entrypoint script).
You should just mount your .sql files into /docker-entrypoint-initdb.d/ and they will be processed on startup (only if the database does not already exist).
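For example, a minimal sketch with the official postgres image (the container name and the sql-scripts path are just placeholders):
docker run -d --name pg-db -p 5432:5432 -e POSTGRES_PASSWORD=password -v "$PWD/sql-scripts":/docker-entrypoint-initdb.d postgres:latest
# the scripts are executed only on first start, while the data directory is still empty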
I had the same issue; however, my problem occurred because of the Linux user. I am using root as a runner, so the problem happened because the volume mounted from the local machine did not have the right permissions. In this regard, I used chmod -R 777 scripts and it worked fine. Technically, you need to set permissions for both the local machine and your container.
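Concretely, something like this (a rough sketch; the scripts directory and the image name are placeholders from my setup):
# on the local machine (the runner), before starting the container
chmod -R 777 scripts
docker run -d --name container-db -v "$PWD/scripts":/docker-entrypoint-initdb.d image-db:latest
# the container side is covered by the chown in the Dockerfile above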

I need to run standalone-chrome-debug in offline mode

I have a Linux server with no connectivity to GitHub (it's blocked in our office), and I need to run the standalone-chrome-debug Docker image.
So on my side, I cloned the repo and transferred it to the Linux machine, but when I run the docker command:
docker run -d -p 4444:4444 -p 0:5900 -v /dev/shm:/dev/shm -e VNC_NO_PASSWORD=1 selenium/standalone-chrome-debug
I got a lot of errors, such as entry_point.sh not found, and similar issues of missing files. So my question is:
how can I make this docker run succeed if I have the repository locally and have no access to GitHub? Can you assist me with this issue?
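One idea I'm considering, in case it helps: instead of moving the git repository, save the image itself on a machine that does have access and load it on the offline server, something like:
# on a machine with internet access
docker pull selenium/standalone-chrome-debug
docker save -o standalone-chrome-debug.tar selenium/standalone-chrome-debug
# copy the tar file to the offline server, then
docker load --input standalone-chrome-debug.tar
docker run -d -p 4444:4444 -p 0:5900 -v /dev/shm:/dev/shm -e VNC_NO_PASSWORD=1 selenium/standalone-chrome-debug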

Why can I access my Apache default page ONLY when I go in my container's bash?

First of all, I would like to say that I'm new to Docker and everything around it.
I have been wanting to make a container with Apache, PHP and Firebird installed. So far, so good; everything seems to work and I can get the default page when I type my IP address and :8080 into the browser's address bar. I do so by first starting my container like this:
docker run -p 8080:80 -d apps
Where "apps" is the name of my container.
I have achieved this with my Dockerfile, which looks like this (it might be a bit messy, I'm still learning the good practices!):
# Download of base image - ubuntu 20.04
FROM ubuntu:20.04
# Updating/upgrading
RUN apt-get update -y && apt-get upgrade -y
# Installing apache2, php and firebird with modules
RUN DEBIAN_FRONTEND="noninteractive" apt-get install apache2 php libapache2-mod-php -y && \
apt-get install php-curl php-gd php-intl php-json php-mbstring php-xml php-zip -y && \
DEBIAN_FRONTEND="noninteractive" apt-get install firebird3.0-server -y && apt-get install firebird->
# Start up apache in foreground by default
CMD /usr/sbin/apache2 -D FOREGROUND
ENTRYPOINT service apache2 restart && /bin/bash
# Expose apache
EXPOSE 80
Now, my idea was to export this container to another computer and try the same thing. I followed a few tutorials and managed to import my container on the new machine. My problem is that somehow, the command I previously used doesn't work; it shows me this error:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
Which is odd, because it works just fine on the other machine. I also tried this command, WHICH WORKS:
docker run -i -t -p 8080:80 apps /bin/bash
This one works all right, but I don't want to have to open a bash shell every time I want my Apache page to load. I want my container to run without having to get into it, if that makes sense.
In my opinion, it probably comes from the fact that I only loaded the container, and not the image used to build it (maybe a bad practice? I couldn't find anything about it on Google).
Here is my setup, just in case:
On the first machine (which is the one where I created the image and the container):
Ubuntu 20.04 LTS
Apache/2.4.41
Docker 19.03.8
On the other machine, on which I'm trying to make my container work:
Ubuntu 18.04 LTS
Apache/2.4.29
Docker 19.03.6
Thank you for your patience and time!
apps is your Docker image; if you want to give your container a name, you can specify --name in the run command, i.e.:
docker run --name container_name -p 8080:80 -d apps
You can use sudo docker save -o apps.tar apps to create a tar file of the image
then change the permissions of the root-owned tar file: sudo chmod 777 apps.tar
Copy this tar file to the other system you want to try, then
sudo docker load --input apps.tar
This will load the image; then you can use the previous command to start the container:
docker run -p 8080:80 -d apps
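Note: this works because docker save / docker load moves the image together with its metadata. If the container was instead moved with docker export / docker import, the resulting image has no CMD/ENTRYPOINT, which is exactly what produces the "No command specified" error:
# exporting a container: filesystem only, the imported image has no default command
docker export <container-id> > apps-fs.tar
docker import apps-fs.tar apps
# saving the image: keeps the metadata from the Dockerfile, so 'docker run -p 8080:80 -d apps' works as-is
docker save -o apps.tar apps
docker load --input apps.tar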
Where "apps" is the name of my container. <- This statement is incorrect, and perhaps the misunderstanding that leads you to the problem.
apps is the name of the image, not the name of the container. On the host on which you can run the container, you must have built that image from the Dockerfile that you shared using the command:
docker build -t apps .
Copy the Dockerfile to the host where you cannot run the container, build the image there as well, and try running the container again.
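That is, on the second machine (assuming the Dockerfile and any files it needs were copied into the current directory):
docker build -t apps .
docker run -p 8080:80 -d apps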

How to add a volume to an Odoo instance that does not use volumes

I already installed Odoo by a method without using volumes, with the following commands:
docker run -d -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo --name db-old postgres:9.4
docker run -p 8066:8069 --name odoo-old --link db-old:db -t odoo:11
Now I'm using another Odoo instance that uses volumes in its installation. Here is how I installed this new version of Odoo:
sudo mkdir -p /volumes/docker/test_12/pg
sudo docker run -p 5001:5432 -itd -v /volumes/docker/test_12/pg:/var/lib/postgresql/data -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres --name db-new postgres:10.5
sudo mkdir -p /volumes/docker/test_12/addons
docker run -p 9012:8069 -itd -v /volumes/docker/test_12/addons:/mnt/extra-addons --name odoo-new --link db-new:db -t odoo:11
The problem is that I have a lot of data in the old Odoo instance. I want to use the same odoo-old data in the new odoo-new instance, then remove odoo-old and keep odoo-new.
You can use Odoo's backup feature to migrate to the new Odoo server and database server in the new containers. Access the backup feature of the old Odoo in a browser at the URL /web/database/manager and download the backup. Then access your new Odoo at the same URL and restore the file. It will restore both the database and the filestore to the new server. This works with or without volumes in the containers, as long as the Odoo version is the same in the old and the new instance.
If you need to copy other file content from the Odoo server, you can use "docker cp" to copy files. In your situation I would first try the backup.
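For example, a rough sketch of copying the filestore with docker cp (/var/lib/odoo is, as far as I know, the data directory of the official odoo image, but verify the path for your containers):
docker cp odoo-old:/var/lib/odoo ./odoo-old-filestore
docker cp ./odoo-old-filestore/. odoo-new:/var/lib/odoo
# you may need to fix ownership inside odoo-new afterwards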
Your Odoo version seems to be 11 for both the old and the new container images, but your directory names hint at Odoo 12. Make sure you are really using the same Odoo version. If you try to upgrade from Odoo 11 to Odoo 12, you need to upgrade your database content with e.g. OCA OpenUpgrade (https://github.com/OCA/OpenUpgrade).

Reflecting code changes in docker containers

I have a basic hello-world Node application written with Express. I have just dockerised this application by creating a basic Dockerfile in the application's root directory. I built a Docker image and then ran that image in a container:
# Dockerfile
FROM node:0.10-onbuild
RUN npm install
EXPOSE 3000
CMD ["node", "./bin/www"]
sudo docker build -t docker-express .
sudo docker run --name test-container -d -p 80:3000 docker-express
I can access the web application. My question is: when I make code changes to my application, e.g. change 'hello world' to 'hello bob', my changes are not reflected within the running container.
What is a good development workflow for getting changes into the container? Surely I shouldn't have to delete and rebuild the image after each change?
Thank you :)
Check out the Docker documentation's section on sharing volumes. You should be able to share a host directory with the Docker container, and then any time you need a change you can just restart the server (or have something restart it for you!).
Your command would look something like: sudo docker run -v /src/webapp:/webapp --name test-container -d -p 80:3000 docker-express
Which mounts /src/webapp (on the host) to /webapp (in the container).
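One caveat (an assumption about the node:0.10-onbuild base image used above): it copies the app into /usr/src/app and runs it from there, so for the mounted code to be the code that is actually served you would mount over that path, and the host directory then also needs its own node_modules. After that, picking up an edit is just a restart:
# on the host, install dependencies once so the mounted tree is complete
cd /src/webapp && npm install
sudo docker run -v /src/webapp:/usr/src/app --name test-container -d -p 80:3000 docker-express
# after each code change
sudo docker restart test-container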