In-band Controller with Mininet - OpenFlow

I want to test an in-band controller with Mininet. I found the code here ( http://windysdn.blogspot.fr/2013/10/in-band-controller.html ), but I don't know how to integrate (or write) it in Mininet.
Can anyone help me, please?
Thanks

1. Install Mininet (mininet.org).
2. Copy/paste the code you found (which is a script that creates a Mininet topology and starts Mininet with it) into a Python script (.py), along the lines of the sketch below.
3. Run the Python script and watch Mininet start up.
4. Take the Mininet tutorial on mininet.org and learn the hard, but good and efficient, way, like everyone else.
PS: You should probably do #4 first.
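For reference, a bare-bones Mininet topology script looks roughly like this (a generic sketch with a remote-controller placeholder, not the in-band code from the blog post; the host/switch names and the controller address are just examples):

    #!/usr/bin/env python
    # Generic sketch of a Mininet topology script (not the in-band setup
    # from the blog post; names and the controller address are examples).
    from mininet.net import Mininet
    from mininet.node import RemoteController
    from mininet.cli import CLI
    from mininet.log import setLogLevel

    def run():
        # Build an empty network and add nodes to it explicitly.
        net = Mininet(controller=RemoteController)
        net.addController('c0', ip='127.0.0.1', port=6633)

        h1 = net.addHost('h1')
        h2 = net.addHost('h2')
        s1 = net.addSwitch('s1')
        net.addLink(h1, s1)
        net.addLink(h2, s1)

        net.start()
        CLI(net)   # drop into the Mininet CLI; exit to tear down
        net.stop()

    if __name__ == '__main__':
        setLogLevel('info')
        run()

Save it as, say, inband.py and run it with sudo python inband.py; the script from the blog post follows the same pattern, just with its own topology and controller placement.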

Related

TensorFlow without Jupyter Notebook

Do I absolutely need to use Jupyter Notebook to run TensorFlow on Windows?
I tried the object detection example with the Jupyter notebook, and it works, but I'm not really comfortable with it. I'm used to Notepad++ and running Python directly on Windows without a virtual environment.
I tried to copy-paste all the code, but I ran into many bugs.
No, it is not compulsory to use Jupyter Notebook to run TensorFlow on Windows. I personally use PyCharm as my IDE and Anaconda for dependency management (this is completely optional).
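For example, a plain .py file like the following runs fine from cmd or PowerShell with python check_tf.py (the file name is just an example):

    # check_tf.py -- minimal check that TensorFlow works outside Jupyter.
    import tensorflow as tf

    print(tf.__version__)

    # A trivial computation to confirm the install actually works
    # (with eager execution this prints the value; older graph-mode
    # versions print the tensor object instead).
    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    print(tf.reduce_sum(x))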
I would recommend using a proper IDE instead of Notepad++ because debugging is much easier in an IDE. You'll also be cloning a lot from Git when you start developing your own models, and the open-source models out there usually have a lot of classes and methods in them (take Google's Inception net, for example).
Alternatively, you could start posting about the bugs you are facing, and then we can all start helping you.

How can I install the TensorFlow framework on the NAO robot?

I want to install the TensorFlow framework on the NAO robot.
How do I do it?
Great question! The NAO is a Linux machine, so technically it might be possible. Unfortunately, the NAO also has a limited amount of computational power... Depending on your plans, it might be a better idea to set up an external computer that does the heavy computations for you. It all depends on the application you want to build.
If you decide to install TensorFlow on the NAO: simply use SSH (or PuTTY) to get a console you can use to install TensorFlow.
If you decide to use an external server: maybe this program I wrote a while ago will help you: https://github.com/rmeertens/nao-wit . It is an example of how to send speech to an external server.
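If it helps, the client side generally looks something like this (a generic sketch, not the nao-wit code; the server address, port, and wire format are placeholders you'd replace with your own):

    # Generic sketch of sending data from the NAO to an external server
    # over a plain TCP socket. Address, port, and protocol are placeholders.
    import socket

    SERVER_IP = '192.168.1.10'   # your external machine
    SERVER_PORT = 5000           # whatever port your server listens on

    def send_to_server(payload):
        """Send raw bytes (e.g. recorded audio) and return the server's reply."""
        sock = socket.create_connection((SERVER_IP, SERVER_PORT), timeout=10)
        try:
            sock.sendall(payload)
            sock.shutdown(socket.SHUT_WR)   # signal that we're done sending
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:
                    break
                chunks.append(data)
            return b''.join(chunks)
        finally:
            sock.close()

The server side would then run TensorFlow on whatever it receives and send the result back.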
Good luck!

Lumify: Not Launching Local Instance on Vagrant

I followed the instructions to run a local instance of Lumify using Vagrant.
vagrant up demo fails because https://bits.lumify.io/yum/repodata/repomd.xml is down.
The try site, https://try.lumify.io/, is down as well.
I need pointers on whether any other yum repo can be used for this.
I see that there are a few dependencies related to OpenCV etc., and I could not find them all in one place.
Any input on this would be greatly appreciated.
I'm pretty sure active development of Lumify's open source version ended in 2015. Have you tried the open source version of Visallo? There's also an enterprise edition if you need additional capabilities or greater scalability.

OpenStack: recover orphaned instances

I'm using OpenStack Havana with one KVM-based compute node and a controller node running in a VM.
After a bad hardware failure, I got into a situation where the controller is aware of only a subset of the instances (those preceding a certain date) and has completely lost the newer ones. I suppose we had to restart from an older backup of the controller.
All the information about the instances is still available on the compute node (disk, XML), and they even still appear in virsh list --all.
Is there a way to just re-import them into the controller? Maybe via SQL or some nova command line?
Thanks.
OK, we solved the issue the rough way. We converted the disk files produced for the OpenStack instances to VDI (thanks, qemu-img), then ran the appropriate glance command to import each VDI as an image into OpenStack. From the dashboard we then created an instance from that image and reassigned our floating IP.
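Roughly, the first two steps looked like this (wrapped in Python for convenience; the disk path and image name are placeholders, not our real values, and the glance syntax is the Havana-era CLI):

    # Sketch of the recovery steps; paths and names are placeholders.
    import subprocess

    DISK = '/var/lib/nova/instances/<uuid>/disk'   # leftover qcow2 disk on the compute node
    OUT = '/tmp/recovered.vdi'

    # 1. Convert the leftover disk to VDI with qemu-img.
    subprocess.check_call(['qemu-img', 'convert', '-O', 'vdi', DISK, OUT])

    # 2. Import the VDI into Glance as an image.
    subprocess.check_call([
        'glance', 'image-create',
        '--name', 'recovered-instance',
        '--disk-format', 'vdi',
        '--container-format', 'bare',
        '--file', OUT,
    ])

    # 3. From the dashboard (or nova boot), launch a new instance from
    #    that image and reattach the floating IP.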
Does anyone have counter-indications to this?
Thanks.

Multiple Redis Instances on FreeBSD

I'm trying to learn FreeBSD, and I'd like to install multiple instances of Redis on it. I know this is easy on Linux by just running the ./install_server script.
I've tried running the script from utils, but as expected it won't work on BSD, as it installs into /etc/init.d.
Could anyone direct me to where I can learn how to run multiple instances of Redis under FreeBSD, or teach me how to do it?
I'm new to FreeBSD and want to learn it. I came from Linux and OS X.
Thank you in advance!
The default install on FreeBSD only runs one instance, as is usual for daemons.
But you can run multiple instances by hand. You'd just have to write a different configuration file for each instance, using a separate port and maybe a different directory in which to dump the databases, as in the sketch below.
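Something along these lines, for example (the paths and the port are just examples; on FreeBSD, configs usually live under /usr/local/etc):

    # Sketch: run a second Redis instance by hand with its own config file.
    # Paths and the port are examples; adjust for your setup.
    import subprocess

    # Write a minimal config for the second instance.
    second_conf = '/usr/local/etc/redis2.conf'
    with open(second_conf, 'w') as f:
        f.write('port 6380\n')            # separate port
        f.write('dir /var/db/redis2\n')   # separate dump directory
        f.write('pidfile /var/run/redis/redis2.pid\n')
        f.write('daemonize yes\n')

    # Start the second instance against that config.
    subprocess.check_call(['redis-server', second_conf])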
Why not use multiple databases in one redis instance?