Automatically create a Docker container and launch a Python script - automation

I am working on creating an automated unit testing system which will utilise docker to test individual student assignments, written in Python, against a single unit test file.
I have created a website where students can upload their assignments, but I'm a little bit unsure as to how to get the automation with Docker working.
The workflow looks something like this:
A student uploads an assignment for marking
This is copied to a linux host which contains docker
The file sits here while it waits to be tested
So, say I had twenty students uploading their .py files, named after their unique student numbers, could I:
Create a Docker container which runs Ubuntu and Python
Copy the student file and unit test into this container
Run the unit test
Output the results as a text file
Copy this text file back to my webserver to display the results
Could somebody point me in the right direction to get started with this automation? I'm really just after some help on the Docker side of things, not on copying the files from my webserver to the Docker host.
Thanks.

Yes, it is possible to use Docker for that.
The Dockerfile would look like this:
FROM ubuntu
MAINTAINER xxx <user@example.org>
# update ubuntu repository
RUN DEBIAN_FRONTEND=noninteractive apt-get -y update
# install ubuntu packages
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python python-pip
# install python requirements
RUN pip install ...
# define a mount point
VOLUME /student.py
# define command for this image
CMD ["python","/student.py"]
Now you have to build this image with: docker build -t student_test .
To start the script and grab the output you can use:
docker run --volume /path/to/s12345.py:/student.py student_test > student_results_12345.txt
The --volume parameter is needed to mount a student script onto the defined mount point. You could also start multiple containers at once, one per student file.
Note that the host path given to --volume must be absolute; a bare relative name would be interpreted as a named volume instead.
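To mark a whole batch, you could loop over the uploaded files on the Docker host. A minimal sketch, assuming the uploads live in /path/to/uploads and results are named after the student number:
#!/bin/bash
# run each uploaded student script in its own container and collect the output
for f in /path/to/uploads/*.py; do
    id=$(basename "$f" .py)    # e.g. s12345
    docker run --rm --volume "$f":/student.py student_test > "student_results_${id}.txt"
done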

Check out the following project:
https://github.com/CenturyLinkLabs/buildpack-runner
It uses Heroku buildpacks to create a Docker image. A crazy but neat idea if you get it working.

Related

Problems getting Singularity Compose to work

I wrote a small test project for Singularity Compose, consisting of a small server application, with the following YAML file:
version: "1.0"
instances:
  server:
    build:
      context: ./server
      recipe: server.recipe
    ports:
      - 9999:9999
When I call singularity-compose build, it successfully builds server.sif. Calling singularity-compose up also seemingly works without error, and calling singularity-compose ps results in something that looks just fine:
+ singularity-compose ps
INSTANCES NAME PID IMAGE
1 server 4176911 server.sif
However, the server application does not work; calling my test client results in it saying that there is no answer from the server.
But if I run server.sif directly without compose, everything works just fine.
Also, I triple-checked: my test application listens on port 9999 and thus should be reachable from the outside.
What did I do wrong?
Edit:
I also checked whether there actually is any process listening on port 9999 by calling sudo lsof -i -P -n | grep LISTEN; this is not the case. Only when I manually start server.sif without compose does it show the process listening.
Edit:
I went into the Singularity Compose shell and tried to start the Server application directly in there, just as a test, and it resulted in Permission denied. Not sure if that means anything.
Edit:
I now gave the application execution rights within the shell and called it there; this works. I am now trying to add execution rights in the recipe. If that works, it would be kind of strange, as the executable was built right there and thus should already have execution rights.
Edit:
I added chmod +x in my recipe, both after building Server and before executing it. That doesn't work either.
I also checked whether any bridges exist using brctl show; this is not the case.
Edit: My recipe, adjusted by the input of tsnowlan in his answer below:
Bootstrap: docker
From: ubuntu:20.04
%files
connection.cpp
connection.h
main.cpp
server.cpp
server.h
server.pro
%post
# get some basics
apt update
apt-get install -y wget
apt-get install -y software-properties-common
# get C++ compiler
apt-get install -y g++
apt-get install -y build-essential
apt-get install -y build-essential cmake
# get Qt
apt-get install -y qt5-default
# compile
qmake
make
ls
%runscript
/Server
%startscript
/Server
Again, note that the application works just fine both when compiled and started normally and when started within a Singularity image (but without Singularity Compose).
The ls at the end of the %post block is used to verify that the Server application was built successfully.
Please share the server.recipe, as it is difficult to identify what should be/is happening without it.
Without having that, my guess is that you have a %runscript in your definition file, but no %startscript. When the image is executed directly or via singularity run image.sif, the contents of %runscript determine what happens. To emulate the docker-compose style, the singularity images are started as persistent instances. In this case, the %startscript block determines what runs. If it is empty, it will just start up and sit there doing nothing. This would explain why when run by hand it works but not when using compose.
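A quick way to see the difference for yourself (a sketch; the image and instance names are taken from the question):
singularity run server.sif                      # executes the %runscript
singularity instance start server.sif server    # executes the %startscript, effectively what singularity-compose up does
singularity instance list                       # the persistent instance should show up here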

Dockerfile to run automation tests in JS files

I am trying to create a Dockerfile to run Selenium tests for a JavaScript-based project. Below is my Dockerfile so far:
#base image
FROM selenium/standalone-chrome
#access to the project within docker container - Bundle app source
COPY ./seleniumTest/project /app
# Install Node.js
RUN sudo apt-get update
RUN sudo apt-get install --yes curl
RUN curl --silent --location https://deb.nodesource.com/setup_8.x | sudo bash -
#binding
EXPOSE 8080
#Define runtime
ENTRYPOINT /app/login.test.js
Running it with $ docker run -p 4000:80 lamgadekamal/dockertest
returns: Unable to find image 'lamkam/dockertest:latest' locally docker: Error response from daemon: manifest for lamkam/dockertest:latest not found. I could not figure out why I am getting this.
I suspect that you need to build your image first, since the image cannot be found.
Run this command from the same directory where your Dockerfile is located. This will build the image.
docker build -t lamgadekamal/dockertest .
You can then verify that the image exists by running docker images
EDIT: After looking at this again, it appears that the tags don't match: the run command in your question uses lamgadekamal/dockertest, but the error message shows Docker looking for lamkam/dockertest, so whatever you actually ran does not match what you built. It looks like a typo. I would suggest running docker images to see exactly which tag exists locally, and then using that same tag for both docker build -t and docker run, e.g.:
docker run -p 4000:80 lamkam/dockertest

Docker replicate UID/GID in container from host

When creating Docker containers I keep running into the issue of the UID/GID not being reflected in the container (I realize this is by design). What I am looking for is a way to keep host permissions reasonable and / or to replicate the UID/GID from the host user / group accounts in my Docker container. For instance:
host -
woot4moo:x:504:504:woot4moo:/home/woot4moo:/bin/bash
I would like this same behavior in the Docker container. That being said, is this even the right way to do this type of thing? My belief is I could simply run:
useradd -u 504 -g 504 woot4moo
as part of my Dockerfile, but I am not sure if that is valid.
You wouldn't want to run that as part of the image build process (in your Dockerfile), because the host on which someone is running a container is often not the host on which you are building the image.
One way of solving this is passing in UID/GID information via environment variables:
docker run -e APP_UID=100 -e APP_GID=100 ...
And then have an ENTRYPOINT script that includes something like the following before running the CMD:
useradd -c 'container user' -u $APP_UID -g $APP_GID appuser
chown -R $APP_UID:$APP_GID /app/data
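A minimal entrypoint sketch along those lines, assuming gosu (or su-exec) is installed in the image and that appuser, appgroup and /app/data are placeholder names:
#!/bin/bash
# minimal sketch: create a user matching the IDs passed via -e APP_UID / -e APP_GID
# (collisions with pre-existing UIDs/GIDs are not handled here)
set -e
groupadd -g "$APP_GID" appgroup 2>/dev/null || true   # reuse the GID if it already exists
useradd -c 'container user' -u "$APP_UID" -g "$APP_GID" appuser
chown -R "$APP_UID:$APP_GID" /app/data
# drop privileges and hand control to the CMD (or whatever was passed to docker run)
exec gosu appuser "$@"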
I had similar issues and typically included entrypoint scripts in every image, as has already been mentioned (using https://github.com/ncopa/su-exec for interactive terminal programs). However, I kept repeating the same steps in multiple Dockerfiles. After I used "docker.inside" from Jenkins Pipeline, which handles the user id mapping auto-magically, I decided to build a Python 3 package based on docker-py that does this in a (hopefully) similar way (with some extended features I found helpful):
https://github.com/boon-code/docker-inside
I realize that the post is rather old; maybe it's still helpful to someone with the same problem...

Subversion export/checkout in Dockerfile without printing the password on screen

I want to write a Dockerfile which exports a directory from a remote Subversion repository into the build context so I can work with these files in subsequent commands. The repository is secured with user/password authentication.
That Dockerfile could look like this:
# base image
FROM ubuntu
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# export my repository
RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# further commands, e.g. on container start run a file just downloaded from the repository
CMD ["/bin/bash", "path/to/file.sh"]
However, this has the drawback of printing my username and password to the screen or to any logfile where stdout is directed, as in Step 2 : RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory. In my case this is a Jenkins build log, which is also accessible to other people who are not supposed to see the credentials.
What would be the easiest way to hide the echo of username and password in the output?
So far, I have not found any way to execute RUN commands in a Dockerfile silently when building the image. Could the password maybe be imported from somewhere else and attached to the command beforehand, so it does not have to be printed anymore? Or are there any methods for password-less authentication in Subversion that would work in the Dockerfile context (in terms of setting them up without interaction)?
The Subversion Server is running remotely in my company and not on my local machine or the Docker host. To my knowledge, I have no access to it except for accessing my repository via username/password authentication, so copying any key files as root to some server folders might be difficult.
A Dockerfile RUN command is executed and cached when the docker image is built, so any variables svn needs for authentication would have to be provided at build time and would show up in the build output. You can instead move the svn export to the moment docker run is executed in order to avoid this kind of problem: create a bash script, declare it as the Docker entrypoint, and pass the username and password as environment variables. Example:
# base image
FROM ubuntu
ENV REPOSITORY_URL http://subversion.myserver.com/path/to/directory
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# make it executable before you add it here, otherwise docker will complain
ADD docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT /entrypoint.sh
docker-entrypoint.sh
#!/bin/bash
# maybe add some validation here that the variables $REPO_USER and $REPO_PASSWORD exist
svn export --username="$REPO_USER" --password="$REPO_PASSWORD" "$REPOSITORY_URL"
# continue execution
path/to/file.sh
Run your image:
docker run -e REPO_USER=jane -e REPO_PASSWORD=secret your/image
Or you can put the variables in a file:
.svn-credentials
REPO_USER=jane
REPO_PASSWORD=secret
Then run:
docker run --env-file .svn-credentials your/image
Remove the .svn-credentials file when you're done.
Maybe using SVN with SSH is a solution for you? You could generate a public/private key pair. The private key could be added to the image whereas the public key gets added to the server.
For more details you could have a look at this stackoverflow question.
One solution is to ADD the entire SVN directory that you previously checked out on your builder's file system (or added as an svn:externals if your Dockerfile itself lives in an SVN repository, like this: svn propset svn:externals 'external_svn_directory http://subversion.myserver.com/path/to/directory' ., followed by svn up).
Then in your Dockerfile you can simply have this:
ADD external_svn_directory /tmp/external_svn_directory
RUN svn export /tmp/external_svn_directory /path/where/to/export/to
RUN rm -rf /tmp/external_svn_directory
Subversion stores authentication details on the client side (unless this is disabled in the configuration) and reuses the stored username/password for subsequent operations against the same URL.
Thus, you only have to run one (successful) svn export in the Dockerfile with username/password; later svn operations can drop the authentication options from the command line and rely on the cached credentials.

Proper way to automatically start and expose ssh when running my app container

I have containers with Python apps and I need them to automatically start and expose ssh when running them. I know it's against Docker's best practices, but right now I don't have any other solution. I'd also be interested to know the best way to automatically run an additional service in a docker container anyway.
Since Docker will only start one process, installing sshd isn't enough. There are apparently multiple options to deal with it:
use a process manager like Monit or Supervisor
use the ENTRYPOINT option
append a command (service sshd start, for instance) at the end of /etc/bash.bashrc (see this answer)
Option 1 seems like overkill to me. Also, I suppose I'd have to run the container with a cmd calling the process manager instead of bash or my python app, which is not exactly what I want.
I don't know how to use option 2 for such a case. Should I write a custom script that starts sshd and then runs the provided command, if any? What should this script look like?
Option 3 is very straightforward but quite dirty. Also, it won't work if I run the container with a command other than /bin/bash.
What's the best solution and how to set it up ?
You mention that option 1 seems like overkill. Why is it overkill? Supervisor is very simple to configure and will basically do what you want.
First, write a supervisor config file that starts your python app and sshd:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:pythonapp]
command=/path/to/python myapp.py -x args etc etc
Call that file supervisord.conf and commit it somewhere in your repo. In your Dockerfile, copy that file to the container as one of the container build steps, expose the ports for SSH and your app (if needed) and set the CMD to start supervisord:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
This is clean and easy to understand. It's how I run multiple processes in a container when needed. It is even suggested in the Docker docs as a nice solution.
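Usage would then look something like this (the image name and host ports are arbitrary placeholders):
docker build -t myapp-sshd .                    # hypothetical image name
docker run -d -p 2222:22 -p 8080:80 myapp-sshd  # map sshd and the app to host ports
docker logs -f <container id>                   # supervisord logs the startup of both programs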
If you don't want to use a process manager, you can wrap your actual container command in a shell script that runs sudo service ssh start and then executes your actual command:
sudo service ssh start
python myapp.py -x args blah blah
This will start up ssh as a daemon, and then your python app will start up after.
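For option 2 from the question, the wrapper can be as small as this sketch (the script name and the app command are assumptions):
#!/bin/bash
# start-ssh-then-app.sh: start sshd as a service, then hand over to the real command
service ssh start
exec "$@"
In the Dockerfile you would then set ENTRYPOINT ["/start-ssh-then-app.sh"] and CMD ["python", "myapp.py"], so docker run can still override the application command while ssh always starts.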
Yes, we can configure supervisord for multiple processes in a container. If you want to use openssh-server, we can configure it in the supervisord.conf file like this:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
We can then add the supervisord.conf file to the docker image by updating the Dockerfile with the following lines:
RUN apt update && apt install -y supervisor openssh-server
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
Reference: Gotechnies