I am currently struggling with Shopware 6 - I wrote a plugin that adds a menu item to the administration and makes it possible to create job postings.
Unfortunately, there is a JavaScript error hidden somewhere in the administration. In theory that's no problem, because with the command...
bin/watch-administration.sh
...Shopware should generate source maps, which would make debugging much easier for me. Here's the problem: the source maps are missing.
So, one step back: how exactly do you debug the administration, and where do you see the source maps?
I hope someone has a solution; the lack of documentation from Shopware is really getting on my nerves.
It would be great to know what setup you are using to achieve admin watching. Both Shopware and I recommend using the dockware.io Docker images for development; you can read their documentation for more details. They also have a Makefile in the /var/www/ folder that can be run to watch the admin. Here is what it does:
watch-admin: ## starts watcher for Shopware 6.4.13.0 Admin at http://localhost:8888
	cd /var/www/html && ./bin/build-administration.sh
	cd /var/www/html && php bin/console bundle:dump
	cd /var/www/html && php bin/console feature:dump
	cd /var/www/html && APP_URL=http://0.0.0.0 PROJECT_ROOT=/var/www/html APP_ENV=dev PORT=8888 HOST=0.0.0.0 ENV_FILE=/var/www/html/.env ./bin/watch-administration.sh
As you can see, it starts the Admin on a different port than normal.
Here is an example docker-compose file for such a setup:
version: '3'

services:
  shop:
    container_name: shop
    image: dockware/dev:latest
    ports:
      - "22:22"     # ssh
      - "80:80"     # apache2
      - "443:443"   # apache2 https
      - "8888:8888" # watch admin
      - "9998:9998" # watch storefront proxy
      - "9999:9999" # watch storefront
      - "3306:3306" # mysql port
    #volumes:
    #  - "./src:/var/www/html"
    #  - "./src:/var/www/html/custom/plugins"
    networks:
      - web
    environment:
      - XDEBUG_ENABLED=0

## ***********************************************************************
## NETWORKS
## ***********************************************************************
networks:
  web:
    external: false
Just SSH into the Docker container after it starts and run cd .. && make watch-admin. Then go to http://localhost:8888; this should load the admin page with source maps visible in the inspector tool.
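As a concrete sketch, assuming the default dockware SSH credentials (user dockware, password dockware; adjust if you changed them) and that the shell drops you into /var/www/html:

ssh dockware@localhost
cd .. && make watch-admin
# then open http://localhost:8888 and check the browser's Sources panel for the .js.map files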
I'm new to using Docker and docker-compose, so apologies if I have some of the terminology wrong.
I've been provided with a Dockerfile and docker-compose.yml, and I have successfully built the images and got the container up and running (by running docker-compose up -d). But I would like to change things to make my process a bit easier, because occasionally I need to restart Apache on the container (WordPress) by accessing it using:
docker exec -it 89a145b5ea3e /bin/bash
Then typing:
service apache2 restart
My first problem is that there are two other services that I need to run for my project to work correctly and these don't automatically restart when I run the above service apache2 restart command.
The two commands I need to run are:
service memcached start
service cron start
I would like to know how to always run these commands whenever apache2 is restarted.
Secondly, I would like to configure my Dockerfile or docker-compose.yml (not sure where I'm supposed to be adding this) so that this behaviour is baked into the container/image when it is built.
I've managed to install the services by adding them to my Dockerfile, but I can't figure out how to get these services to run when the container is restarted.
Below are the contents for relevant files:
Dockerfile:
FROM wordpress:5.1-php7.3-apache
RUN apt-get update \
 && apt-get install -y \
      vim \
      net-tools \
      memcached \
      cron
docker-compose.yml:
version: "3.3"
services:
db:
image: mysql:5.7
volumes:
- ./db_data:/var/lib/mysql:consistent
ports:
- "3303:3306"
restart: always
environment:
MYSQL_ROOT_PASSWORD: vagrant
MYSQL_DATABASE: wp_database
MYSQL_USER: root
MYSQL_PASSWORD: vagrant
wordpress:
container_name: my-site
build: .
depends_on:
- db
volumes:
- ./my-site-wp:/var/www/html/:consistent
ports:
- "8001:80"
restart: always
environment:
WORDPRESS_DB_HOST: db:3306
WORDPRESS_DB_USER: root
WORDPRESS_DB_PASSWORD: vagrant
WORDPRESS_DB_NAME: wp_database
volumes:
db_data:
my-site-wp:
...occasionally I need to restart Apache on the container (WordPress)...
Don't do that. It's a really, really bad habit. You're treating the container like a server where you go in and fix things that break. Think of it like it's a single application -- if it breaks, restart the whole dang thing.
docker-compose restart wordpress
Or restart the whole stack, even.
docker-compose restart
Treat your containers like cattle not pets:
Simply put, the “cattle not pets” mantra suggests that work shouldn’t grind to a halt when a piece of infrastructure breaks, nor should it take a full team of people (or one specialized owner) to nurse it back to health. Unlike a pet that requires love, attention and more money than you ever wanted to spend, your infrastructure should be made up of components you can treat like cattle – self-sufficient, easily replaced and manageable by the hundreds or thousands. Unlike VMs or physical servers that require special attention, containers can be spun up, replicated, destroyed and managed with much greater flexibility.
For each container in the compose file, you can add a command key in the YAML, which overrides the image's default command and runs when the container starts - that is, on every start-up. Commands in the Dockerfile, on the other hand, only run when the image is being built. Ex:
db:
  image: mysql:5.7
  volumes:
    - ./db_data:/var/lib/mysql:consistent
  command: # bash command goes here
  ports:
    - "3303:3306"
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: vagrant
    MYSQL_DATABASE: wp_database
    MYSQL_USER: root
    MYSQL_PASSWORD: vagrant
However, this is not what you are after. Why would you mess with a container from another container? The depends_on flag controls the start-up order of the downstream services. It seems your memcached instance isn't containerized on its own, and hence you are trying to fit it into the application-level logic, which is the antithesis of Docker. This logic belongs at the infrastructure level, on the machine or in the orchestrator (e.g. Kubernetes).
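As a minimal sketch of that idea (the service layout and alpine tag are my choices, not from the question), memcached becomes its own service next to wordpress, and WordPress reaches it over the compose network under the hostname memcached on the default port 11211:

services:
  memcached:
    image: memcached:alpine   # standalone memcached container
    restart: always
  wordpress:
    build: .
    depends_on:
      - memcached             # start memcached before wordpress
    restart: always

Restarting Apache is then just docker-compose restart wordpress, and memcached keeps running untouched.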
When I'm running sudo docker-compose up inside my directory, I get the error below. I'm trying to make a container that hosts a PHP website, where you can run whoami on it.
Thanks
(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
| no listening sockets available, shutting down
| AH00015: Unable to open logs
Dockerfile:
FROM ubuntu:16.04
RUN apt update
RUN apt install -y apache2 php libapache2-mod-php
RUN useradd -d /home/cp/ -m -s /bin/nologin cp
WORKDIR /home/cp
COPY source .
USER cp
ENTRYPOINT service apache2 start && /bin/bash
docker-compose.yml
version: '2'

services:
  filebrowser:
    build: .
    ports:
      - '8000:80'
    stdin_open: true
    tty: true
    volumes:
      - ./source:/var/www/html
      - ./logs:/var/log/apache2
There's a long-standing general rule in Unix-like operating systems that only the root user can open "low" ports 0-1023. Since you're trying to run Apache on the default HTTP port 80, but you're running it as a non-root user, you're getting the "permission denied" error you see.
The absolute easiest answer here is to use a prebuilt image that has PHP and Apache preinstalled. The Docker Hub php image includes an Apache variant, so you can use a much simpler Dockerfile:
FROM php:7.4-apache
# Has Apache, mod-php preinstalled and a correct CMD already,
# so the only thing you need to do is
COPY source /var/www/html
# If you want to run as a non-root user, you can specify
RUN useradd -r -U cp
ENV APACHE_RUN_USER cp
ENV APACHE_RUN_GROUP cp
With the matching docker-compose.yml
version: '3' # version 2 vs 3 doesn't really matter

services:
  filebrowser:
    build: .
    ports:
      - '8000:80'
    volumes:
      - ./logs:/var/log/apache2
If you want to build things up from scratch, the next easiest option would be the Apache User directive: have your container start as root (so it can bind to port 80) but then instruct Apache to switch to the unprivileged user once it's started up. The standard php:...-apache image has an option to do this on its own which I've shown above.
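If you go that route with the Ubuntu image from the question, here is a rough sketch (the /etc/apache2/envvars edit is my assumption about how Debian/Ubuntu Apache picks its run user; verify it against your image):

FROM ubuntu:16.04
RUN apt-get update && apt-get install -y apache2 php libapache2-mod-php
# create the unprivileged user and point Apache's envvars at it
RUN useradd -d /home/cp -m -s /usr/sbin/nologin cp \
 && sed -ri 's/^export APACHE_RUN_USER=.*/export APACHE_RUN_USER=cp/; s/^export APACHE_RUN_GROUP=.*/export APACHE_RUN_GROUP=cp/' /etc/apache2/envvars
COPY source /var/www/html
# deliberately no USER directive: the parent apache2 process starts as root,
# binds port 80, then its workers run as "cp"
CMD ["apache2ctl", "-D", "FOREGROUND"]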
Same kind of issue as: what causes a docker volume to be populated?
I'm trying to share Apache's configuration files in /etc/apache2 with my host, but the files aren't generated automatically within the shared folder.
As a minimal example:
Dockerfile
FROM debian:9
RUN apt update
#Install apache
RUN apt install -y apache2 apache2-dev
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
docker-compose.yml
version: '2.2'

services:
  apache:
    container_name: apache-server
    volumes:
      - ./log/:/var/log/apache2
      - ./config:/etc/apache2/ # removing this line lets the log files generate
    image: httpd-perso2
    build: .
    ports:
      - "80:80"
With this configuration, neither ./config nor ./log gets filled with the files/folders generated by the container, even though the logs should at least contain some errors (I'm getting apache-server | The Apache error log may have more information.)
If I remove the ./config volume, the Apache log files are generated properly. Any clue for what reason this can happen? How can I share the Apache config files?
I'm having the same issue with a Django settings file, so it seems to be related to config files generated by an application.
What I tried:
- using VOLUME in the Dockerfile
- running docker-compose as root, or chmod 777 on the folders
- creating files within the container in those directories to see if they are created on the host (and they are)
- on the host, creating the shared folders owned by the user (they are owned by root if generated automatically)
- trying with docker run, with exactly the same issue
For specs:
- Docker version 19.03.5
- using a VPS with Debian Buster
- docker-compose version 1.25.3
Thanks for helping.
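The linked question points at the likely mechanism: named volumes are seeded from the image's content on first use, but host bind mounts like ./config never are. An empty host directory simply masks /etc/apache2 inside the container, which is also why Apache can't find its config and no logs appear. A hedged workaround sketch, using the image name httpd-perso2 from the compose file: seed the host folder once from a throwaway container, then start the stack with the bind mount in place.

docker create --name seed httpd-perso2
docker cp seed:/etc/apache2/. ./config
docker rm seed
docker-compose up -d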
I am trying to write a YAML pipeline script to deploy files that have been altered from my bitbucket repository to my remote server using ssh keys. The document that I have in place at the moment was copied from bitbucket itself and has errors:
pipelines:
  default:
    - step:
        name: Deploy to test
        deployment: test
        script:
          - pipe: atlassian/sftp-deploy:0.3.1
          - variables:
              USER: $USER
              SERVER: $SERVER
              REMOTE_PATH: $REMOTE_PATH
              LOCAL_PATH: $LOCAL_PATH
I am getting the following error
Configuration error
There is an error in your bitbucket-pipelines.yml at [pipelines > default > 0 > step > script > 1]. To be precise: Missing or empty command string. Each item in this list should either be a single command string or a map defining a pipe invocation.
My ssh public and private keys are set up in bitbucket along with the fingerprint and host. The variables have also been set up.
How do I go about setting up my YAML deploy script to connect to my remote server via ssh and transfer the files?
Try updating the variables section to:
- variables:
    - USER: $USER
    - SERVER: $SERVER
    - REMOTE_PATH: $REMOTE_PATH
    - LOCAL_PATH: $LOCAL_PATH
Here is an example of how to set variables: https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html#Configurebitbucket-pipelines.yml-ci_variablesvariables
Your - step directive has to be indented.
I have a bitbucket-pipelines.yml like this (using rsync instead of ssh):
# This is a sample build configuration for PHP.
# Check our guides at https://confluence.atlassian.com/x/e8YWN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: php:7.2.1-fpm

pipelines:
  default:
    - step:
        script:
          - apt-get update
          - apt-get install zip -y
          - apt-get install unzip -y
          - apt-get install libgmp3-dev -y
          - curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
          - composer install
          - cp .env.example .env
          #- vendor/bin/phpunit
          - pipe: atlassian/rsync-deploy:0.2.0
            variables:
              USER: $DEPLOY_USER
              SERVER: $DEPLOY_SERVER
              REMOTE_PATH: $DEPLOY_PATH
              LOCAL_PATH: '.'
I suggest using their online editor in the repository for editing bitbucket-pipelines.yml; it validates the whole YAML structure and won't let you commit an invalid file.
Even if a file looks fine in some other YAML editor, it isn't necessarily valid according to the Bitbucket specification. Their online editor does a fine job.
Also, I suggest visiting the Atlassian community, as it's very active and their staff members sometimes provide answers.
However, I struggle with all the dependencies needed to run the tests properly (the actual bitbucket-pipelines.yml is getting bigger and bigger).
Maybe there is some nicely prepared Docker image for this job.
Gitlab provides a .gitlab-ci.yml template for building and publishing images to its own registry (click "new file" in one of your projects, then select .gitlab-ci.yml and docker). The file looks like this and works out of the box :)
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
But by default, this will publish to gitlab's registry. How can we publish to docker hub instead?
No need to change that .gitlab-ci.yml at all; we only need to add/replace the environment variables in the project's pipeline settings.
1. Find the desired registry url
Using hub.docker.com won't work, you'll get the following error:
Error response from daemon: login attempt to https://hub.docker.com/v2/ failed with status: 404 Not Found
The default Docker Hub registry url can be found like this:
docker info | grep Registry
Registry: https://index.docker.io/v1/
index.docker.io is what I was looking for.
2. Set the environment variables in gitlab settings
I wanted to publish gableroux/unity3d images using gitlab-ci, here's what I used in Gitlab's project > Settings > CI/CD > Variables
CI_REGISTRY_USER=gableroux
CI_REGISTRY_PASSWORD=********
CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/gableroux/unity3d
CI_REGISTRY_IMAGE is important to set.
It defaults to registry.gitlab.com/<username>/<project>
The registry url needs to be updated, so use index.docker.io/<username>/<project>
Since Docker Hub is the default registry when using docker, you can also use <username>/<project> instead. I personally prefer it when it's verbose, so I kept the full registry url.
This answer should also cover other registries, just update environment variables accordingly. 🙌
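If a push still fails, a quick local sanity check is to run the same login and push by hand (a sketch; substitute your own image name and credentials):

docker login -u gableroux docker.io
docker tag my-local-image index.docker.io/gableroux/unity3d:latest
docker push index.docker.io/gableroux/unity3d:latest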
To expand on GabLeRoux's answer,
I had issues on the pushing stage of the GitLab CI build:
denied: requested access to the resource is denied
By changing my CI_REGISTRY to docker.io (removing the index.) I was able to push successfully.
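For reference, the combination that ended up working there, in placeholder form:

CI_REGISTRY=docker.io
CI_REGISTRY_IMAGE=index.docker.io/<username>/<project>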