Connecting multiple controllers in Mininet - SDN

I want to connect 3 POX controllers to Mininet.
Two of the controllers work fine via forwarding.l2_pairs, but that only connects two of them. I have tried forwarding.l2_learning as well, but nothing seems to solve my problem.
Any help will be appreciated.

First, run:
$ cd ~/pox
$ ./pox.py forwarding.l2_learning
Then you should set a separate OpenFlow port for each of them with a command like this:
sudo ~/pox/pox.py openflow.of_01 --port=6636 forwarding.l2_pairs info.packet_dump samples.pretty_log log.level --DEBUG
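If you want three controllers, one way (a sketch based on the answer above; the extra port numbers are arbitrary, 6633 is just the OpenFlow default) is to start one POX instance per port, each in its own terminal:
cd ~/pox
./pox.py openflow.of_01 --port=6633 forwarding.l2_learning
./pox.py openflow.of_01 --port=6634 forwarding.l2_learning
./pox.py openflow.of_01 --port=6636 forwarding.l2_learning
On the Mininet side, each switch then has to be pointed at the matching port, for example by creating one RemoteController per port in a custom topology script and starting each switch with the controller it should use.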

Are the --network options available in rootless Podman?

I am running a virtual environment on CentOS with Podman.
When I use the --net option of the podman run command, I get an error.
[user@server ~]$ podman run --net slirp4netns:port_handler=slirp4netns -p 1080:80 -d --name web nginx
Error: cannot join CNI networks if running rootless: invalid argument
Is this option unavailable, or is there a problem with the way I am specifying it?
Please tell me the solution.
I used this site as a reference for the command.
This is the configuration of the server.
[user@server ~]$ cat /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
[user@server ~]$ podman -v
podman version 2.0.6
The port_handler option requires Podman >= 2.1.0, which has not been released at the time of writing: https://github.com/containers/podman/commit/d86bae2a01cb855d5964a2a3fbdd41afe68d62c8
You can use that option if you build Podman from its master branch.
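In the meantime, on 2.0.6 you should still be able to publish the port rootless by simply dropping the port_handler option (slirp4netns is already the default network mode for rootless containers). A sketch, not tested on your exact setup:
podman run --net slirp4netns -p 1080:80 -d --name web nginx
# or just rely on the rootless default:
podman run -p 1080:80 -d --name web nginx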
I find these links quite helpful for understanding rootless networking:
https://www.redhat.com/sysadmin/container-networking-podman
https://podman.io/getting-started/network
I am not sure whether you have already seen them, or whether they help in this particular case, but for the benefit of others, the blog post makes the following helpful points:
Note: All podman network commands are for rootfull containers only.
Technically, the container itself does not have an IP address, because without root privileges, network device association cannot be achieved
When using Podman as a rootless user, the network is setup automatically. The container itself does not have an IP Address, because without root privileges, network association is not allowed. You will also see some other limitations.

TF Serving - pull the Docker image or build from the Git repo?

I'm struggling to understand the workflow here for TF Serving.
Official docs say to “docker pull tensorflow/serving”. But they also say to “git clone https://github.com/tensorflow/serving.git”
Which one should I use? I assume the git version is so I can build my own custom serving image?
When I pull the official image from docker and run the container, why can’t I access the root? Is it because I haven’t “built it” properly yet?
If you have added some custom code, clone the repo first and then build your own image.
If you just want to deploy the stock image, pull it and run it.
BTW, what do you mean by "access the root"? AFAIK, root is the default user in a container.
I think that is a good observation.
The only place where I feel cloning the GitHub repository (https://github.com/tensorflow/serving.git) is required is if you want to run the bundled examples like 'half_plus_two' and 'half_plus_three', or the examples listed at
https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example.
Apart from that, as far as I know, pulling the Docker image does everything needed.
Even building a custom Docker image with your own model doesn't require cloning the repo.
The commands for building a custom Docker image are shown below:
sudo docker run -d --name sb tensorflow/serving
sudo docker cp /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export sb:/models/Premade_Estimator_Export
sudo docker commit --change "ENV MODEL_NAME Premade_Estimator_Export" sb iris_container
sudo docker kill sb
sudo docker pull tensorflow/serving
sudo docker run -p 8501:8501 --mount type=bind,source=/usr/local/google/home/abc/Jupyter_Notebooks/TF_Serving/Premade_Estimator_Export,target=/models/Premade_Estimator_Export -e MODEL_NAME=Premade_Estimator_Export -t tensorflow/serving &
saved_model_cli show --dir /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export/1556272508 --all
curl http://localhost:8501/v1/models/Premade_Estimator_Export #To get the status of the model
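Once the container is up, predictions go to the :predict endpoint of the same REST API. A sketch only: the feature names below are taken from the premade iris Estimator tutorial and must be replaced with whatever input signature saved_model_cli printed for your model:
curl -X POST http://localhost:8501/v1/models/Premade_Estimator_Export:predict -d '{"instances": [{"SepalLength": 5.1, "SepalWidth": 3.3, "PetalLength": 1.7, "PetalWidth": 0.5}]}'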
Regarding access to root: if I understand correctly, you don't want to prefix every docker command with sudo. Follow the steps below to run Docker commands without sudo:
i. Add the docker group if it does not already exist.
ii. Add the current user ($USER) to the docker group. The commands to run in the terminal are:
sudo groupadd docker
sudo usermod -aG docker $USER
iii. Reboot your PC and you should be able to execute Docker commands without sudo.
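If you would rather not reboot, re-reading your group membership in the current shell is usually enough (my addition, not part of the original steps):
newgrp docker
docker run hello-world  # should now work without sudo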

Docker replicate UID/GID in container from host

When creating Docker containers I keep running into the issue of the UID/GID not being reflected in the container (I realize this is by design). What I am looking for is a way to keep host permissions reasonable and / or to replicate the UID/GID from the host user / group accounts in my Docker container. For instance:
host -
woot4moo:x:504:504:woot4moo:/home/woot4moo:/bin/bash
I would like this same behavior in the Docker container. That being said, is this even the right way to do this type of thing? My belief is I could simply run:
useradd -u 504 -g 504 woot4moo
as part of my Dockerfile, but I am not sure if that is valid.
You wouldn't want to run that as part of the image build process (in your Dockerfile), because the host on which someone is running a container is often not the host on which you are building the image.
One way of solving this is passing in UID/GID information via environment variables:
docker run -e APP_UID=100 -e APP_GID=100 ...
And then have an ENTRYPOINT script that includes something like the following before running the CMD:
useradd -c 'container user' -u $APP_UID -g $APP_GID appuser
chown -R $APP_UID:$APP_GID /app/data
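A complete ENTRYPOINT script might look roughly like this (a sketch only: the appuser name, the /app/data path and the use of gosu to drop privileges are my own assumptions, not something prescribed by Docker):
#!/bin/bash
# entrypoint.sh - create a user matching the host UID/GID, fix ownership,
# then drop root and run whatever command the container was started with
set -e
groupadd -g "$APP_GID" appuser 2>/dev/null || true   # tolerate a pre-existing GID
useradd -c 'container user' -u "$APP_UID" -g "$APP_GID" appuser
chown -R "$APP_UID:$APP_GID" /app/data
exec gosu appuser "$@"
With ENTRYPOINT ["/entrypoint.sh"] and your normal CMD in the Dockerfile, docker run -e APP_UID=$(id -u) -e APP_GID=$(id -g) ... then executes the CMD under the host user's UID/GID.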
I had similar issues and typically included entrypoint scripts in every image, as has already been mentioned (using https://github.com/ncopa/su-exec for interactive terminal programs). However, I kept repeating the same steps in multiple Dockerfiles. After using "docker.inside" from Jenkins Pipeline, which handles the user id mapping auto-magically, I decided to build a Python 3 package based on docker-py that does this in a (hopefully) similar way (with some extended features I found helpful):
https://github.com/boon-code/docker-inside
I realize that the post is rather old; maybe it's still helpful to someone with the same problem...

Reflecting code changes in docker containers

I have a basic hello world Node application written with Express. I have just dockerised it by creating a basic Dockerfile in the application's root directory, building an image from it, and running that image as a container:
# Dockerfile
FROM node:0.10-onbuild
RUN npm install
EXPOSE 3000
CMD ["node", "./bin/www"]
sudo docker build -t docker-express .
sudo docker run --name test-container -d -p 80:3000 docker-express
I can access the web application fine. My question is: when I make code changes to my application, e.g. changing 'hello world' to 'hello bob', the changes are not reflected in the running container.
What is a good development workflow for getting code changes into the container? Surely I shouldn't have to delete and rebuild the image after each change?
Thank you :)
Check out the section on Sharing Volumes. You should be able to share a host directory with the Docker container, and then any time you make a change you can just restart the server (or have something restart it for you!).
Your command would look something like: sudo docker run -v /src/webapp:/webapp --name test-container -d -p 80:3000 docker-express
Which mounts /src/webapp (on the host) to /webapp (in the container).
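As a concrete sketch (my own, not from the answer above): mount the host source over the directory the image actually runs the app from (/usr/src/app for the node onbuild images, if I remember correctly) and restart the container after each edit, or let a watcher such as nodemon do the restart for you:
sudo docker run -v /src/webapp:/usr/src/app --name test-container -d -p 80:3000 docker-express
# after editing code under /src/webapp on the host:
sudo docker restart test-container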

Proper way to automatically start and expose ssh when running my app container

I have containers with Python apps, and I need them to automatically start and expose SSH when they run. I know it's against Docker's best practices, but right now I don't have any other solution. I'd also be interested to know the best way to automatically run an additional service in a Docker container anyway.
Since Docker will only start one process, installing sshd isn't enough. There are apparently multiple options to deal with it:
use a process manager like Monit or Supervisor
use the ENTRYPOINT option
append a command (service sshd start, for instance) at the end of /etc/bash.bashrc (see this answer)
Option 1 seems overkill to me. Also, I suppose I'll have to run the container with a cmd calling the process manager instead of bash or my Python app: not exactly what I want.
I don't know how to use option 2 for such a case. Should I write a custom script that starts sshd and then runs the provided command, if any? What should this script look like?
Option 3 is very straightforward but quite dirty. Also, it won't work if I run the container with a command other than /bin/bash.
What's the best solution, and how do I set it up?
You mention that option 1 seems like overkill. Why is it overkill? Supervisor is very simple to configure and will basically do what you want.
First, write a supervisord config file that starts your Python app and sshd:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:pythonapp]
command=/path/to/python myapp.py -x args etc etc
Call that file supervisord.conf and commit it somewhere in your repo. In your Dockerfile, copy that file to the container as one of the container build steps, expose the ports for SSH and your app (if needed) and set the CMD to start supervisord:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
This is clean and easy to understand. It's how I run multiple processes in a container when needed. It is even suggested in the Docker docs as a nice solution.
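To try it out, building and running the image looks roughly like this (the image name and host port mappings are my own choices):
docker build -t myapp-ssh .
docker run -d --name myapp -p 2222:22 -p 8080:80 myapp-ssh
ssh -p 2222 root@localhost  # assuming you baked a key or password for root into the image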
If you don't want to use a process manager, you can wrap your actual container command in a shell script that runs sudo service ssh start and then executes your actual command:
sudo service ssh start
python myapp.py -x args blah blah
This will start up ssh as a daemon, and then your python app will start up after.
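For the ENTRYPOINT variant asked about in option 2, a minimal wrapper could look like this (the file name and the example CMD are illustrative; exec "$@" hands control to whatever command the container was given):
#!/bin/bash
# start.sh - start sshd, then run the container's actual command
service ssh start
exec "$@"
and in the Dockerfile:
COPY start.sh /start.sh
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]
CMD ["python", "myapp.py"]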
Yes, we can configure supervisord to run multiple processes in a container. If you want to use openssh-server, we can configure supervisord like below:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
in the supervisord.conf file.
We can add the supervisord.conf file to the Docker image by updating the Dockerfile:
RUN apt update && apt install -y supervisor openssh-server
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
Reference link: Gotechnies