Use wget instead of curl for healthchecks in ASP.NET Core docker images - asp.net-core

I want to use an ASP.NET Core 6 healthcheck as a docker healthcheck.
The docs state:
Containers that use images based on Alpine Linux can use the included wget in place of curl
But there is no guidance for that, and as usual getting the docker config "just right" is more of an art than a science.
How do I do this?

It's possible to specify a healthcheck via the docker run CLI, or in a docker-compose.yml. I prefer to do it in the Dockerfile.
Configure
First note that the ASP.NET Core docker images by default expose port 80, not 5000 (so the docs linked in the question are incorrect).
This is the typical way using curl, for a non-Alpine image:
HEALTHCHECK --start-period=30s --interval=5m \
CMD curl --fail http://localhost:80/healthz || exit 1
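For comparison, the docker run CLI equivalent of that instruction uses the --health-* flags (the image name below is a placeholder):
docker run -d \
  --health-cmd "curl --fail http://localhost:80/healthz || exit 1" \
  --health-start-period 30s \
  --health-interval 5m \
  my-aspnet-app:latest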
But curl is unavailable in an Alpine image. Instead of installing it, use wget:
HEALTHCHECK --start-period=30s --interval=5m \
CMD wget --spider --tries=1 --no-verbose http://localhost:80/healthz || exit 1
The HEALTHCHECK options are documented in the Dockerfile reference, and the wget switches in the wget manual. --spider prevents the download of the page (similar to an HTTP HEAD), --tries=1 lets docker control the retry logic, and --no-verbose (instead of --quiet) ensures errors are logged by docker so you'll know what went wrong.
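Putting it together, a minimal Dockerfile sketch for an Alpine-based image might look like the following (the base image tag, publish path, DLL name, and /healthz endpoint are illustrative assumptions; adjust them to your app):
FROM mcr.microsoft.com/dotnet/aspnet:6.0-alpine
WORKDIR /app
COPY ./publish .
HEALTHCHECK --start-period=30s --interval=5m \
CMD wget --spider --tries=1 --no-verbose http://localhost:80/healthz || exit 1
ENTRYPOINT ["dotnet", "MyApp.dll"]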
Test
For full status:
$ docker inspect --format '{{json .State.Health }}' MY_CONTAINER_NAME | jq
Or:
$ docker inspect --format '{{json .State.Health }}' MY_CONTAINER_NAME | jq '.Status'
# "healthy"
$ docker inspect --format '{{json .State.Health }}' MY_CONTAINER_NAME | jq '.Log[].Output'
# "Connecting to localhost:80 (127.0.0.1:80)\nremote file exists\n"

Related

Locally test AWS Lambda container with .NET 5 web api and Lambda RIE

I'm following the instructions to locally test a Lambda container (https://docs.aws.amazon.com/lambda/latest/dg/images-test.html), but I am unable to do so.
I've created a sample project to reproduce it: https://gitlab.com/sunnyatticsoftware/sandbox/lambda-dotnet5-webapi (see the README for the step-by-step generation).
Basically I am using an Amazon dotnet template that generates an AWS Lambda function as a .NET 5 web API using containers.
The project itself is fine. The Dockerfile is:
FROM public.ecr.aws/lambda/dotnet:5.0
WORKDIR /var/task
COPY "bin/Release/net5.0/publish" .
Now I want to test it locally using the Amazon Lambda Runtime Interface Emulator (RIE) and these are the steps I follow:
Build project with dotnet build -c Release
Publish artifacts with dotnet publish -c Release
Build docker image with docker build -t lambda-dotnet .
Download the RIE with
mkdir -p ~/.aws-lambda-rie && curl -Lo ~/.aws-lambda-rie/aws-lambda-rie https://github.com/aws/aws-lambda-runtime-interface-emulator/releases/latest/download/aws-lambda-rie && chmod +x ~/.aws-lambda-rie/aws-lambda-rie
I can see the emulator downloaded properly
ls -la ~/.aws-lambda-rie/aws-lambda-rie
-rw-r--r-- 1 diego.martin 1049089 8155136 Feb 22 14:32 /c/Users/diego.martin/.aws-lambda-rie/aws-lambda-rie
Run the emulator passing the lambda image
docker run -d -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 --entrypoint /aws-lambda/aws-lambda-rie lambda-dotnet:latest
This is where I get the error:
12997dddc6e50aca3020527be30a1479eee9ceef412ab5009b99e9eb8cf1fa67
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "C:/Users/diego.martin/AppData/Local/Programs/Git/aws-lambda/aws-lambda-rie": stat C:/Users/diego.martin/AppData/Local/Programs/Git/aws-lambda/aws-lambda-rie: no such file or directory: unknown.
What am I missing? I am not specifying any entrypoint because I don't have any.
PS: The last step would be to send some lambda event to my container's function with
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
The Lambda Docker images for dotnet already include the RIE, so the following is enough (see the repo for further details):
To build image
docker build -t lambda-dotnet:latest .
To run it
docker run -p 9000:8080 lambda-dotnet "LambdaDotNet5::LambdaDotNet5.LambdaEntryPoint::FunctionHandlerAsync"
And then to test it, I can use curl from a different terminal:
curl -vX POST http://localhost:9000/2015-03-31/functions/function/invocations -d @test_request.json --header "Content-Type: application/json"
The test_request.json file contains the JSON for the event I want to send to the Lambda.
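For reference, test_request.json has to contain whatever event shape your entry point expects; for an ASP.NET Core LambdaEntryPoint that is typically an API Gateway proxy request. A minimal hand-written sketch might look like the following (the route and header values are placeholders, not taken from the question's project):
{
  "httpMethod": "GET",
  "path": "/api/values",
  "headers": { "Host": "localhost" },
  "isBase64Encoded": false,
  "requestContext": {
    "httpMethod": "GET",
    "path": "/api/values"
  }
}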

How to make Zalenium work with AWS Fargate?

The issue
I would like to use Zalenium in a container using AWS Fargate. However, to do so, we have to pull two images: Zalenium and Selenium. Indeed, during its operation Zalenium creates containers from the Selenium image, so it needs to find that image somewhere.
A possible solution
I was thinking of creating an ubuntu container with Docker installed which would run the following commands:
It would first pull the images
docker pull elgalu/selenium
docker pull dosel/zalenium
and then create a Zalenium container with the Docker socket mounted to create another container:
docker run --rm -ti --name zalenium -p 4444:4444 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/videos:/home/seluser/videos \
--privileged dosel/zalenium start
That would mean running a container inside another container, which itself runs inside yet another container, which doesn't sound straightforward.
So before doing that, I wanted to check if someone wouldn't have a better solution. As being new to AWS, I might have missed something.
You can pull the image implicitly through the Zalenium container; see https://opensource.zalando.com/zalenium/#tryit, section "Or without pulling elgalu/selenium explicitly:".
Example:
# Pull Zalenium
docker pull dosel/zalenium
# Run it!
docker run --rm -ti --name zalenium -p 4444:4444 \
-e PULL_SELENIUM_IMAGE=true \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /tmp/videos:/home/seluser/videos \
--privileged dosel/zalenium start
# Point your tests to http://localhost:4444/wd/hub and run them
# Stop
docker stop zalenium
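As a quick sanity check before pointing tests at it, you can hit the grid status endpoint once the container is up (assuming the default port mapping above):
# Should return the grid status JSON once Zalenium is ready
curl http://localhost:4444/wd/hub/status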

Tf Serving - Docker from source or build from git?

I'm struggling to understand the workflow here for TF Serving.
Official docs say to “docker pull tensorflow/serving”. But they also say to “git clone https://github.com/tensorflow/serving.git”
Which one should I use? I assume the git version is so I can build my own custom serving image?
When I pull the official image from docker and run the container, why can’t I access the root? Is it because I haven’t “built it” properly yet?
If you have added some custom code, then clone the repo first and then build the image.
If you want to deploy the image directly, just pull it and run it.
BTW, what do you mean by "access the root"? AFAIK, root is the default user in a container.
I think that is a good observation.
The only place where I feel cloning the GitHub repository (https://github.com/tensorflow/serving.git) is required is if you want to run the examples like 'half_plus_two' and 'half_plus_three', or the examples mentioned at
https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example.
Apart from that, as far as I know, pulling the Docker image should do everything needed.
Even building a custom Docker image with your own model doesn't require cloning the GitHub repo.
The commands for building a custom Docker image are shown below:
# Start a serving container and copy the SavedModel into it
sudo docker run -d --name sb tensorflow/serving
sudo docker cp /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export sb:/models/Premade_Estimator_Export
# Commit the container as a new image with the model baked in, then remove the temporary container
sudo docker commit --change "ENV MODEL_NAME Premade_Estimator_Export" sb iris_container
sudo docker kill sb
# Alternatively, serve the model by bind-mounting it into the stock image
sudo docker pull tensorflow/serving
sudo docker run -p 8501:8501 --mount type=bind,source=/usr/local/google/home/abc/Jupyter_Notebooks/TF_Serving/Premade_Estimator_Export,target=/models/Premade_Estimator_Export -e MODEL_NAME=Premade_Estimator_Export -t tensorflow/serving &
# Inspect the SavedModel signature and check the model status over REST
saved_model_cli show --dir /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export/1556272508 --all
curl http://localhost:8501/v1/models/Premade_Estimator_Export  # To get the status of the model
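Once the model is up, you can also send a prediction request over the same REST API (a sketch; the feature names and values below are placeholders and must match whatever signature your Premade_Estimator_Export actually exposes):
curl -X POST http://localhost:8501/v1/models/Premade_Estimator_Export:predict \
  -d '{"instances": [{"SepalLength": 5.1, "SepalWidth": 3.3, "PetalLength": 1.7, "PetalWidth": 0.5}]}'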
Regarding access to root: if I understand correctly, you don't want to prefix every docker command with sudo. Follow the steps below to be able to run docker without sudo.
i. Add the docker group if it does not already exist.
ii. Add the connected user $USER to the docker group. The commands to run in the terminal are:
sudo groupadd docker
sudo usermod -aG docker $USER
iii. Reboot your PC and you should be able to execute Docker commands without sudo.
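After the reboot you can verify it worked by running any trivial image without sudo, e.g.:
docker run --rm hello-world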

How to securely pass SSH keys to Docker build?

I want to create a Docker image for devs that reproduces our production servers. Those servers are configured by Ansible.
My idea is to run an ansible-pull to apply all the configuration inside the container. The problem is that I need the SSH key to pull the playbook, but I don't want to share the SSH key on the Docker image.
So, is there a way to have the SSH keys at build time without having them at run time?
Nice question. The simple way to do it is by removing the SSH keys after the Ansible stuff in the build - but because Docker stores images as layers, someone could still find the old layer with the keys in it.
If you build this Dockerfile:
FROM ubuntu
COPY ansible-ssh-key.rsa /key.rsa
RUN [ansible stuff]
RUN rm /key.rsa
The final image will have all your Ansible state and the SSH key will be gone, but someone could easily run docker history to list the image layers, start a container from an intermediate layer from before the key was deleted, and grab the key.
The trick would be to do something like this and then use Jason Wilder's docker-squash tool to squash the final image. In the squashed image the intermediate layer is gone and there's no way to get at the deleted key.
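The squash step itself is roughly the following (a sketch only; the image name is a placeholder and the exact docker-squash invocation depends on the version you install, so check its README):
# Export the built image, squash its layers, and load the result back
docker save my-ansible-image | docker-squash -t my-ansible-image:squashed | docker load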
I'd set up some local file-serving facility available only in your build environment.
E.g. start lighttpd on your build host to serve your pem files only to local clients.
And in your Dockerfile, do the add/pull/cleanup in a single RUN:
RUN curl -sO http://build-host:8888/key.pem && ansible-pull -U myrepo && rm -rf key.pem
In this case everything is done in a single layer, so there should be no trace of key.pem left after the layer is committed.
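For a quick local test, any throwaway static file server on the build host works in place of lighttpd; for example (serving the directory that holds key.pem on port 8888, reachable only from your build network):
cd /path/to/keys
python3 -m http.server 8888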
This is another solution, using the dockito/vault repo, a secret store to be used when building Docker images.
I create a dockito/vault service and an Ubuntu image where I attach my private key to the volume and run it as a process using:
docker run -it -v ~/.ssh:/vault/.ssh ubuntu /bin/bash -c "echo mysupersecret > /vault/.ssh/key"
docker run -d -p 14242:3000 -v ~/.ssh:/vault/.ssh dockito/vault
And here is my Dockerfile:
FROM ubuntu:14.04
RUN apt-get update -y && \
apt-get install -y curl && \
curl -L $(ip route|awk '/default/{print $3}'):14242/ONVAULT > /usr/local/bin/ONVAULT && \
chmod +x /usr/local/bin/ONVAULT
ENV REV_BREAK_CACHE=1
RUN ONVAULT echo ENV: && env && echo TOKEN ENV && echo $TOKEN
RUN ONVAULT ls -lha ~/.ssh/
RUN ONVAULT cat ~/.ssh/key
You can use Alpine Linux to reduce the final build size. Build the image as:
docker build -f Dockerfile -t mohan08p/VaultTest .
And you are done. You can inspect the image: the secrets are not stored inside it, as it's empty.
docker run -it mohan08p/VaultTest ls /root/.ssh
This is a good technique for passing the .ssh keys at build time. The only disadvantage is that an additional vault service has to be kept running.
You could mount the SSH keys into the container at runtime.
docker run -v /path/to/ssh/key:/path/to/key/in/container image command
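For the ansible-pull use case in the question, that could look like the following (a sketch; the paths and repository URL are placeholders):
docker run --rm \
  -v $HOME/.ssh/id_rsa:/root/.ssh/id_rsa:ro \
  my-image ansible-pull -U git@github.com:myorg/playbooks.git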

phantomJS on docker containers

I am having a few issues with adding PhantomJS to our website docker containers.
I have 2 containers, test and production, but have no idea how to add it to each of them.
The containers are made with Dokku and are already running. This makes things a bit different, as we are not able to bring up fresh containers from images or edit their Dockerfiles.
Additionally, we have managed to run commands like wget in them using dokku run, but this is not an interactive shell. Also, the files downloaded with wget don't appear to be in the container when checking with ls, even though the download finishes.
I would add to the Dockerfile something like:
# PhantomJS
ENV PHANTOMJS_VERSION 1.9.7
RUN wget --no-check-certificate -q -O - https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-$PHANTOMJS_VERSION-linux-x86_64.tar.bz2 | tar xjC /opt
RUN ln -s /opt/phantomjs-$PHANTOMJS_VERSION-linux-x86_64/bin/phantomjs /usr/bin/phantomjs
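Optionally, you can verify the install as part of the build so a broken download fails early (this just prints the version; nothing else is assumed):
RUN phantomjs --version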