TensorFlow Serving with S3 and Docker

I’m trying to find a way to use TensorFlow Serving with the ability to add new models and new versions of models. Can I point TensorFlow Serving at an S3 bucket?
I also need it to run as a container. Is this possible, or do I need to implement another program that pulls the model down to a shared volume and asks TensorFlow Serving to refresh its view of the file system?
Or do I need to build my own Docker image to be able to pull the content from S3?

I found that TF Serving picks up the standard TensorFlow S3 connection settings via environment variables (even though this isn't documented for the TF Serving Docker container). Example docker run command:
docker run -p 8501:8501 \
  -e AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID \
  -e AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  -e MODEL_BASE_PATH=s3://path/bucket/models \
  -e MODEL_NAME=model_name \
  -e S3_ENDPOINT=s3.us-west-1.amazonaws.com \
  -e AWS_REGION=us-west-1 \
  -e TF_CPP_MIN_LOG_LEVEL=3 \
  -t tensorflow/serving
Note: the log level was set because of this bug.
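On versioning: as far as I know, TF Serving watches the model's base path and automatically loads new numbered version subdirectories, so publishing a new version is just an upload. A sketch of the expected bucket layout (paths are illustrative):

s3://path/bucket/models/model_name/1/saved_model.pb
s3://path/bucket/models/model_name/1/variables/
s3://path/bucket/models/model_name/2/saved_model.pb   # uploading version 2 makes Serving load it

For serving several models at once, you can mount a model config file and start the container with --model_config_file (and, in newer versions of TF Serving, --model_config_file_poll_wait_seconds so config changes are picked up without a restart).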

I've written a very detailed answer (using DigitalOcean Spaces instead of S3) here:
How to deploy TensorFlow Serving using Docker and DigitalOcean Spaces
Since the implementation piggybacks on an S3-compatible interface, I thought I'd add the link here in case someone needs a more comprehensive example.

Related

Setting up GPU support in Airflow containers with docker-compose (GPU support with TensorFlow)

I am having difficulty starting Airflow via docker-compose with the appropriate GPU libraries available to run my machine learning tasks.
The airflow-scheduler throws this error:
airflow-scheduler_1 | 2022-03-21 12:33:36.919960: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
Basically, there are no CUDA libraries installed under /usr/local within the Airflow container, hence the error. I have installed the nvidia-container-runtime and set the daemon default runtime in the daemon.json file:
curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
And I have managed to use runtime: nvidia in the docker-compose.yaml file (a minimal sketch of what I mean follows). This way, within the Airflow container, I can run nvidia-smi. However, the CUDA libraries are still missing.
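The relevant docker-compose.yaml fragment looks roughly like this (the service name and image tag are placeholders for my actual setup):

version: "2.3"  # a compose file format that supports the runtime key
services:
  airflow-scheduler:
    image: apache/airflow:2.2.4  # placeholder for my actual Airflow image
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all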
Is there a way to install these libraries automatically (ideally starting FROM tensorflow/tensorflow:latest-gpu, since that image sets up the CUDA libraries within the container)?
On the other hand, if I am not using docker-compose I can start a container with docker:
docker run -it --gpus all tensorflow/tensorflow:latest-gpu
This container has all the libraries I need. However, I would like to use docker-compose, since life will be much easier when running multiple containers and setting up the networking between them, so I would like to avoid this approach.
I can also mount the Docker socket into the Airflow container and use it to launch a new container from within Airflow. That way I can have all the CUDA libraries available too, but it feels very counter-intuitive, and I am having difficulty understanding why I can't set all of this up within the Airflow container in the first place.
import docker

client = docker.from_env()

# Run the container
response = client.containers.run(
    # The image you wish to call
    'tensorflow/tensorflow:latest-gpu',
    # The command to run inside the container
    'find / -name "libcudart.so.11.0"',
    # Passing through GPU access
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[['gpu']])
    ]
)
I would appreciate it if you could point me in the right direction.

TF Serving - pull the Docker image or build from git?

Struggling to understand the workflow here for TF Serving.
The official docs say to “docker pull tensorflow/serving”. But they also say to “git clone https://github.com/tensorflow/serving.git”.
Which one should I use? I assume the git version is there so I can build my own custom serving image?
When I pull the official image from Docker and run the container, why can’t I access the root? Is it because I haven’t “built it” properly yet?
If you have added custom code, clone first and then build the image.
If you want to deploy the image directly, pull the image and run it.
BTW, what do you mean by "access the root"? AFAIK, root is the default user in a container.
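For the clone-and-build path, the flow is roughly as follows (the image tag is a placeholder; as far as I know, Dockerfile.devel is the development image shipped in the repo):

git clone https://github.com/tensorflow/serving.git
cd serving
docker build --pull -t my-serving-devel -f tensorflow_serving/tools/docker/Dockerfile.devel .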
I think that is a good observation.
The only place where I feel cloning the GitHub repository (https://github.com/tensorflow/serving.git) is required is if you want to run the bundled examples like 'half_plus_two' or 'half_plus_three', i.e. the examples described at
https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example.
Beyond that, as far as I know, pulling the Docker image does everything needed.
Even building a custom Docker image around your own model doesn't require cloning the GitHub repo.
Code for building a custom Docker image, and for serving the model via a bind mount instead, is shown below:
# Build a custom image containing the model:
sudo docker run -d --name sb tensorflow/serving
sudo docker cp /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export sb:/models/Premade_Estimator_Export
sudo docker commit --change "ENV MODEL_NAME Premade_Estimator_Export" sb iris_container
sudo docker kill sb

# Alternatively, serve the model straight from the host via a bind mount:
sudo docker pull tensorflow/serving
sudo docker run -p 8501:8501 --mount type=bind,source=/usr/local/google/home/abc/Jupyter_Notebooks/TF_Serving/Premade_Estimator_Export,target=/models/Premade_Estimator_Export -e MODEL_NAME=Premade_Estimator_Export -t tensorflow/serving &

# Inspect the SavedModel's signatures:
saved_model_cli show --dir /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export/1556272508 --all

# Get the status of the model:
curl http://localhost:8501/v1/models/Premade_Estimator_Export
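To actually query the served model over REST, POST to the predict endpoint. A sketch (the feature names in the instances payload are placeholders; they must match your model's serving signature):

curl -d '{"instances": [{"SepalLength": 5.1, "SepalWidth": 3.3, "PetalLength": 1.7, "PetalWidth": 0.5}]}' \
  -X POST http://localhost:8501/v1/models/Premade_Estimator_Export:predict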
Regarding access to root: if I understand correctly, you don't want to have to run every Docker command with sudo. To get that, follow the steps below.
i. Add the docker group if it does not already exist:
sudo groupadd docker
ii. Add the connected user $USER to the docker group:
sudo usermod -aG docker $USER
iii. Reboot your PC and you should be able to execute Docker commands without sudo.

Docker replicate UID/GID in container from host

When creating Docker containers I keep running into the issue of the UID/GID not being reflected in the container (I realize this is by design). What I am looking for is a way to keep host permissions reasonable and/or to replicate the UID/GID from the host user/group accounts in my Docker container. For instance:
host -
woot4moo:x:504:504:woot4moo:/home/woot4moo:/bin/bash
I would like this same behavior in the Docker container. That being said, is this even the right way to do this type of thing? My belief is I could simply run:
useradd -u 504 -g 504 woot4moo
as part of my Dockerfile, but I am not sure if that is valid.
You wouldn't want to run that as part of the image build process (in your Dockerfile), because the host on which someone is running a container is often not the host on which you are building the image.
One way of solving this is passing in UID/GID information via environment variables:
docker run -e APP_UID=100 -e APP_GID=100 ...
And then have an ENTRYPOINT script that includes something like the following before running the CMD:
useradd -c 'container user' -u $APP_UID -g $APP_GID appuser
chown -R $APP_UID:$APP_GID /app/data
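A fuller sketch of such an entrypoint script (the /app/data path and the use of gosu to drop privileges are my assumptions; su-exec works the same way):

#!/bin/sh
set -e
# Create a group and user matching the IDs passed in from the host
groupadd -g "$APP_GID" appgroup
useradd -c 'container user' -u "$APP_UID" -g "$APP_GID" appuser
# Make the app's data directory writable by that user
chown -R "$APP_UID:$APP_GID" /app/data
# Drop privileges and hand off to the image's CMD
exec gosu appuser "$@"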
I had similar issues and typically included entrypoint scripts in every image, as has already been mentioned (using https://github.com/ncopa/su-exec for interactive terminal programs). However, I kept repeating the same steps in multiple Dockerfiles. Then, after using "docker.inside" from Jenkins Pipeline, which handles the user id mapping auto-magically, I decided to build a Python 3 package based on docker-py that does this in a (hopefully) similar way, with some extended features I found helpful:
https://github.com/boon-code/docker-inside
I realize that the post is rather old; maybe it's still helpful to someone with the same problem...

How to use setfacl within a Docker container?

It seems like within the container the filesystem is mounted without 'acl', therefore 'setfacl' won't work. And it won't let me remount it either, and I can't even run 'df -h'.
I need setfacl because I make root own all the files from my websites, and I give the webserver user write permissions to only a few directories like cache, logs, etc.
What can I do?
The good news is that Docker supports ACLs.
In early releases Docker used a filesystem named AUFS which didn't support them.
You could tell Docker to use Device Mapper (LVM) for its storage, by starting your Docker daemon with the appropriate option:
docker -d --storage-driver=devicemapper --daemon=true
Source: https://groups.google.com/forum/#!topic/docker-user/165AARba2Bk
and then you were able to use setfacl in your containers.
Any reasonably recent release of Docker now uses the overlay2 storage driver, which supports ACLs out of the box.
To check what is your storage driver:
docker info | grep Storage
df -h doesn't work for a different and unrelated reason: it relies on /etc/mtab, which is not present in your case. In your container, create a link to it from procfs; that will solve the problem:
ln -s /proc/mounts /etc/mtab
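Once you are on a storage driver that supports ACLs, the use case from the question works as expected. For example (the www-data user and the path are assumptions for a typical web setup):

setfacl -R -m u:www-data:rwX /var/www/site/cache
getfacl /var/www/site/cache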

Amazon EC2: How to install GlassFish on EC2?

I'm trying to deploy my JSF site on EC2 instances; I'm new to cloud computing.
How do I install the open-source GlassFish 3 on my EC2 instance?
Update:
To download, use the curl command:
curl http://www.java.net/download/jdk6/6u27/promoted/b03/binaries/jdk-6u27-ea-bin-b03-linux-i586-27_may_2011-rpm.bin > java-rpm.bin
or using wget:
wget http://www.java.net/download/jdk6/6u27/promoted/b03/binaries/jdk-6u27-ea-bin-b03-linux-i586-27_may_2011-rpm.bin
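Once downloaded, the self-extracting installer is typically run like this, with JAVA_HOME then pointed at the result (the /usr/java/default path is the usual default for the JDK RPM, not guaranteed):

chmod +x java-rpm.bin
sudo ./java-rpm.bin
export JAVA_HOME=/usr/java/default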
Here is what you need to do:
Launch an AMI instance and follow this tutorial to install GlassFish. (Unfortunately, the GlassFish installation tutorials on the official website are YouTube videos!) The simplest approach is to start with an existing EBS-backed instance; this is how I started.
Now, if you terminate the instance, that is the same as throwing the machine out of the window. If you want to reuse it later, or want a blueprint for many instances you will be launching in the future, you need to bundle it up and register it as an image.
If you have an EBS-backed instance, creating an image out of it is easier than sending an email. All you need to do is log in to your AWS web console, select the instance you want to create an AMI of, and choose Instance Actions > Create Image from the menu. Done!
If you have an instance-store-backed instance, you need to bundle it up, store it in your S3 bucket, and register the AMI using ec2-api-tools and ec2-ami-tools. So have them installed on your instance and create the image as explained very neatly here (a rough sketch of that flow follows at the end of this answer).
As far as cost is concerned, refer to this. As far as I understand (my clients pay, so I don't really know how much), your running instance costs you money even if there is no activity. However, if you make an AMI and store it in S3 or on an EBS volume, you will only be paying the storage cost.
Hope this explains what you wanted.
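For completeness, the instance-store bundling flow looks roughly like this (the account ID, bucket name, and key file names are placeholders, and exact flags vary by tool version, so treat this as a sketch):

ec2-bundle-vol -k pk.pem -c cert.pem -u 123456789012 -d /mnt
ec2-upload-bundle -b my-ami-bucket -m /mnt/image.manifest.xml -a $AWS_ACCESS_KEY_ID -s $AWS_SECRET_ACCESS_KEY
ec2-register my-ami-bucket/image.manifest.xml -n my-glassfish-ami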
First, install the JDK and set the JAVA_HOME environment variable.
Then follow the commands below (applicable on an Amazon Linux EC2 instance).
The directory used here is /usr/server:
# Download and unpack GlassFish, then move it under /usr/server:
wget http://download.oracle.com/glassfish/4.1.2/release/glassfish-4.1.2.zip
unzip glassfish-4.1.2.zip
mv glassfish4 /usr/server/

# Create a dedicated group and user, and give them ownership:
groupadd glassfish-group
useradd -s /bin/bash -g glassfish-group glassfish-user
cd /usr/server
chown -R glassfish-user:glassfish-group glassfish4
ls -l | grep glassfish

# Install an init script so GlassFish can run as a service:
cd /etc/init.d/
wget https://geekstarts.info/scripts/glassfish.sh
mv glassfish.sh glassfish
chmod 755 glassfish
ls -l | grep glassfish

# Switch to the glassfish user and administer the server:
su glassfish-user
cd /usr/server/glassfish4/bin
./asadmin
# From the asadmin prompt:
change-master-password --savemasterpassword   # default master password is "changeit"
change-admin-password                         # default admin password is blank
start-domain
enable-secure-admin
restart-domain
stop-domain   # when you want to shut the server down
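To sanity-check the install while the domain is running (8080 is GlassFish's default HTTP listener; remember to open the relevant ports in your EC2 security group):

./asadmin list-domains   # should report domain1 running
curl http://localhost:8080/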