Terraform: How to automate pulling and running docker images from Azure Container Registry

I want to automate the process of pulling Docker images from Azure Container Registry to an Azure VM. I have already done the following:
Created an Azure Container Registry.
Set up a username and password in the Azure Container Registry.
Pushed the image from my local machine to the Container Registry.
I have set up Terraform code to automate the build-out of the Azure VM. I also want to include the docker pull and docker run commands so that those tasks are automated. Below are the commands I would like to automate with Terraform:
sudo docker login --username xxx --password xxx xxx.azurecr.io
sudo docker pull xxx.azurecr.io/xx/xxx
sudo docker run --network=host xxx.azurecr.io/xxx/xxx
Any help would be much appreciated. Thank you folks!

As far as I know, if you want to execute Docker CLI commands in the VM, you need to install the Docker engine first.
In addition, if you want the Docker CLI commands to run in the VM automatically after Terraform creates it, you can use a VM extension in Terraform. Write a shell script with the commands and then run it through the VM extension; a sketch is below, and see the example Using Terraform with Azure VM Extensions.
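For illustration, a minimal sketch of such a VM extension, assuming the VM is defined elsewhere as azurerm_linux_virtual_machine.example and that Docker is already installed on it; the resource and script names are placeholders, and the xxx values are the same placeholders as in the question:

resource "azurerm_virtual_machine_extension" "docker_run" {
  name                 = "docker-run"
  virtual_machine_id   = azurerm_linux_virtual_machine.example.id
  publisher            = "Microsoft.Azure.Extensions"
  type                 = "CustomScript"
  type_handler_version = "2.0"

  # run the login/pull/run commands once the VM is provisioned
  # (-d is added here so the extension does not block on a foreground container)
  settings = <<SETTINGS
    {
      "commandToExecute": "sudo docker login --username xxx --password xxx xxx.azurecr.io && sudo docker pull xxx.azurecr.io/xx/xxx && sudo docker run -d --network=host xxx.azurecr.io/xxx/xxx"
    }
SETTINGS
}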

Related

How to pass password to scp? (without root)

I need to download a file over SSH (scp) from Ubuntu machine A to Ubuntu machine B.
I don't have root access on the machine where I am downloading the file (machine B), so I cannot install anything like sshpass; it is just a plain Ubuntu install.
I need to use password authentication because the command will be called from a TeamCity plugin which does not support downloading over SSH (only uploading) or plain bash, and I don't have the privilege to read SSH private keys from the command line.
Finally, I found that Docker is a solution for this. I was lucky that Docker is installed on this machine, and I can install anything (including scp) inside a Docker container even though I am not root on the host machine.
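As a rough sketch of that approach (the image tag, user, host, password and paths are placeholders, and it assumes password authentication is allowed on machine A):

docker run --rm -v "$PWD":/download ubuntu:16.04 bash -c \
  "apt-get update && apt-get install -y sshpass openssh-client && \
   sshpass -p 'PASSWORD' scp -o StrictHostKeyChecking=no user@machineA:/path/to/file /download/"

The container runs on machine B, where packages can be installed freely inside it, and the bind mount writes the downloaded file back onto the host.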

creating docker file to run selenium Javascript based tests

I am trying to create a Dockerfile to run Selenium tests for a JavaScript-based project. Below is my Dockerfile so far:
#base image
FROM selenium/standalone-chrome
#access to the project within docker container - Bundle app source
COPY ./seleniumTest/project /app
# Install Node.js
RUN sudo apt-get update
RUN sudo apt-get install --yes curl
RUN curl --silent --location https://deb.nodesource.com/setup_8.x | sudo bash -
#binding
EXPOSE 8080
#Define runtime
ENTRYPOINT /app/login.test.js
Building the image and running it as $ docker run -p 4000:8080 dockertest2 returns /bin/sh: 1: /app/login.test.js: Permission denied
Why is permission denied for it? P.S.: I have changed to the directory which contains both the Dockerfile and the automation test JS files (cd dir).
Create a Docker container with all the dependencies needed for your app to run, which can be specified in the Dockerfile.
Attach a script at the entrypoint to start Selenium Server Standalone.
Build and run your container, and remember to bind and expose the port your Selenium server is running on; a sketch is below.
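A minimal sketch of a Dockerfile along those lines, based on the one in the question; it assumes the tests should be started through node (which also avoids the Permission denied error from executing the .js file directly) and that login.test.js is the intended entry script:

FROM selenium/standalone-chrome
# install curl and Node.js (the selenium image allows passwordless sudo for its default user)
RUN sudo apt-get update && sudo apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash - \
 && sudo apt-get install -y nodejs
# bundle the test project
COPY ./seleniumTest/project /app
WORKDIR /app
# port the tests/app listen on
EXPOSE 8080
# run the tests through node instead of executing the .js file directly
ENTRYPOINT ["node", "/app/login.test.js"]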

Create Docker image from existing Ubuntu + App

I installed Moodle (an eLearning PHP-based app, but it could be any app) locally on Ubuntu and would like to package it as a Docker image/container. There was a whole bunch of installation and configuration done. I'd like to package all that so that I can deploy it to some Docker-enabled hosting service, such as DigitalOcean or AWS.
How do I create the Docker image?
Do I need to handle networking, ports and Apache configuration for production deployment?
There are a lot of Moodle images on Docker Hub; just use one of them.
The process to create docker images is well documented on Docker's documentation site. See: Build your own images
The idea is simple: you inherit/extend an existing image and make additions to it. This is done in a provisioning file called a Dockerfile.
Dockerfile Example:
FROM debian:8.4
MAINTAINER John Doe (j.doe@example.com)
# update aptitude
RUN apt-get clean && apt-get update
# utilities
RUN apt-get -y install vim git php5.6 apache2
In the example above I extend a Debian image, update aptitude and install a series of packages.
A full list of commands available in Dockerfiles is available at https://docs.docker.com/engine/reference/builder/
Once your Dockerfile is ready you can build the image by pointing docker build at the directory containing it (the build context):
docker build -t debian/enhanced:8.4 /path/to/build-context
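Regarding the networking/ports part of the question: once the image is built, you would typically publish the port Apache listens on when running the container. A minimal sketch, where the image name is a placeholder for whatever you built:

docker run -d -p 80:80 --name moodle your-moodle-image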

start redis-server on debian/ubuntu boot

I am trying to create a Docker container where redis-server starts at boot.
There will be other foreground services running in that container which will connect to the Redis DB.
For some reason the service does not start when I run the container.
Here is my simplified Dockerfile:
FROM debian
# this solves an issue described here:
# http://askubuntu.com/questions/365911/why-the-services-do-not-start-at-installation
RUN sed -i -e s/101/0/g /usr/sbin/policy-rc.d
# install redis-server
RUN apt-get update && apt-get install -y redis-server
# updates init script (redundant)
RUN update-rc.d redis-server defaults
# ping google to keep the container running in foreground
CMD ["ping", "google.com"]
Can anybody explain why this is not working and how it should be done right?
So a docker container is like a full OS but has some key differences. It's not going to run a full init system. It's designed and intended to run a single process tree. While you can run a supervisor such as runit et al within a container, you are really working against the grain of docker and all the tooling and documentation is going to lead you away from using containers like VMs and toward the harmony of 1 process/service per container.
So redis isn't starting because the ping command is literally the only process running in your container.
there will be other foreground services running on that other container which will connect to the redis db.
Don't do it this way. Really. Everything will be easier when you put 1 process in each container and connect them via network links.
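For instance, a rough sketch of that approach, keeping Redis in its own container and connecting the app to it over a user-defined Docker network (the modern replacement for links; the app image and container names here are placeholders):

# create a shared network
docker network create appnet
# run Redis as its own container
docker run -d --name redis --network appnet redis
# run your app in a separate container; it can reach Redis at redis:6379
docker run -d --name app --network appnet your-app-image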
Digging up an old question here, but I landed on it whilst trying to package a really simple Redis job queue into an existing docker image setup. I needed it to start up on image boot so the app could have access to it. Memory and performance are not a concern in this scenario or an external Redis server would absolutely be the right choice.
Here's what I did in my Dockerfile for a simple Node.js app to make it work without editing any system files post-install:
RUN apt-get update && apt-get install -y redis-server
CMD service redis-server start & node dist/src/main
Sort of crude using parallel command processes, but as the accepted answer points out this is not a real operating system so we really only care about Redis being online when the app is.

Is migration from docker to vm possible?

How do I migrate from a Docker container to a virtual machine? Can somebody give links, if any?
vagrant up
sudo apt-get install lxc-docker
docker import ...
I'm being serious here. This is the whole fun of Docker!
If you mean migrating the services running in a Docker container to a VM, you could use the Dockerfile as the base for an installation script.
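As a rough illustration of that idea, the RUN/COPY steps from a Dockerfile can be replayed as a provisioning script on the VM; the package names and paths below are placeholders:

#!/bin/bash
set -e
# equivalents of the Dockerfile's RUN instructions
apt-get update
apt-get install -y your-packages
# equivalent of a COPY instruction
cp -r ./app /opt/app
# equivalent of the CMD/ENTRYPOINT (start it however the VM runs services)
/opt/app/start.sh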