I use Open vSwitch and OpenDaylight, and I want to forward packets to the controller. My goal is to build a firewall: OVS first sends all packets to the controller, and the controller decides whether each packet should be blocked or not.
I added the following code in ovs_dp_process_packet() in datapath/datapath.c:
struct dp_upcall_info upcall;
int error;

memset(&upcall, 0, sizeof(upcall));
upcall.cmd = OVS_PACKET_CMD_MISS;                     /* report the packet as a table miss */
upcall.portid = ovs_vport_find_upcall_portid(p, skb); /* Netlink port of the userspace handler */
upcall.mru = OVS_CB(skb)->mru;
error = ovs_dp_upcall(dp, skb, key, &upcall, 0);      /* queue the packet to userspace */
What I want is to upcall packets to the controller even if they match the flow table, but after I compile the code, it doesn't work. So how do I upcall packets to the controller?
OVS:
Adding a new action to OVS is a long story. Here is a list of the most important files you need to change:
lib/ofp-actions.c: defining the new action, plus encoding, decoding, and formatting
include/openvswitch/ofp-actions.h: propagating the action
datapath/linux/compat/include/linux/openvswitch.h: defining it at the kernel level
lib/odp-util.c: defining the action's byte length
ofproto/ofproto-dpif-xlate.c: this file handles communication between kernel and userspace, specifically when there is no match for a new flow
datapath/flow_netlink.c: defining the action's bytes in the kernel
datapath/actions.c: executing the action
For the complete steps, I highly recommend following Custom Open vSwitch Actions.
After changing the source files, use these commands in the root directory of OVS to stop, rebuild, and restart it. Be careful that your gcc version matches the version your Linux header files were compiled with.
ovs-ctl stop
ovs-dpctl del-dp ovs-system
rmmod openvswitch
make clean
make modules_install clean
./boot.sh
./configure --with-linux=/lib/modules/`uname -r`/build --enable-Werror
make
make install
make modules_install
config_file="/etc/depmod.d/openvswitch.conf"
for module in datapath/linux/*.ko; do
modname="$(basename ${module})"
echo "override ${modname%.ko} * extra" >> "$config_file"
echo "override ${modname%.ko} * weak-updates" >> "$config_file"
done
depmod -a
modprobe openvswitch
lsmod | grep openvswitch
mkdir -p /usr/local/etc/openvswitch
ovsdb-tool create /usr/local/etc/openvswitch/conf.db vswitchd/vswitch.ovsschema
mkdir -p /usr/local/var/run/openvswitch
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach --log-file
ovs-vsctl --no-wait init
ovs-vswitchd --pidfile --detach --log-file
export PATH=$PATH:/usr/local/share/openvswitch/scripts
ovs-ctl start
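Once everything is back up, a few quick checks can confirm that the rebuilt module is the one actually loaded (a sketch; adjust to your install):
modinfo openvswitch | grep -E 'filename|vermagic'   # confirm which .ko is resolved
ovs-vsctl show                                      # ovsdb-server/ovs-vswitchd respond
dmesg | tail                                        # look for module load or version-mismatch errors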
Controller:
In the controller, you should be able to create the action and push it to the switch. I have no information about how to define a new action in OpenDaylight, but I know that in Floodlight it is achieved using Loxigen.
If you have any problems, feel free to contact me.
I have been using the docker build --ssh flag to give builds access to my keys from ssh-agent.
When I try the same thing with podman, it does not work. I am working on macOS Monterey 12.0.1 (Intel chip). I have also reproduced this on Ubuntu and WSL2.
❯ podman --version
podman version 3.4.4
This is an example Dockerfile:
FROM python:3.10
RUN mkdir -p -m 0600 ~/.ssh \
&& ssh-keyscan github.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@github.com:ruarfff/a-private-repo-of-mine.git
When I run DOCKER_BUILDKIT=1 docker build --ssh default . it works, i.e. the build succeeds, the repo is cloned, and the SSH key is not baked into the image.
When I run podman build --ssh default . the build fails with:
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Error: error building at STEP "RUN --mount=type=ssh git clone git@github.com:ruarfff/a-private-repo-of-mine.git": error while running runtime: exit status 128
I have just begun playing around with podman. Looking at the docs, that flag does appear to be supported. I have tried playing around with the format a little, for example specifying the id directly, but no variation of specifying the flag or the mount has worked so far. Is there something about how podman works that I may be missing that explains this?
Adding this line as suggested in the comments:
RUN --mount=type=ssh ssh-add -l
Results in this error:
STEP 4/5: RUN --mount=type=ssh ssh-add -l
Could not open a connection to your authentication agent.
Error: error building at STEP "RUN --mount=type=ssh ssh-add -l": error while running runtime: exit status 2
Edit:
I believe this may have something to do with this issue in buildah. A fix has been merged but has not been released yet, as far as I can see.
The error while running runtime: exit status 2 does not appear to me to be necessarily related to SSH or --ssh for podman build. It's hard to say, really; I've successfully used --ssh the way you are trying to, with some minor differences that I can't relate to the error.
I am also not sure that running ssh-add as part of building the container is what you really meant to do. If you want it to talk to an agent, two environment variables must be exported in the environment where you run ssh-add; they define where to find the agent to talk to and are as follows:
SSH_AUTH_SOCK, specifying the path to a socket file that a program uses to communicate with the agent
SSH_AGENT_PID, specifying the PID of the agent
Again, without these two variables present in the set of exported environment variables, the agent is not discoverable and might as well not exist at all, so ssh-add will fail.
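For illustration, this is roughly what starting an agent and exporting those variables looks like on the host (a sketch; the socket path and PID will differ on your machine):
eval "$(ssh-agent -s)"    # starts the agent and exports SSH_AUTH_SOCK and SSH_AGENT_PID
echo "$SSH_AUTH_SOCK"     # e.g. /tmp/ssh-XXXXXX/agent.1234
echo "$SSH_AGENT_PID"     # e.g. 1235
ssh-add -l                # now talks to the agent we just started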
Since your agent is probably running as part of the set of processes to which your podman build also belongs, at minimum the PID denoted by SSH_AGENT_PID should be valid in that namespace (meaning it's normally invalid in the set of processes that container building is isolated to, so defining the variable as part of building the container would be a mistake). It's a similar story with SSH_AUTH_SOCK: the path to the socket file created by starting the agent would not normally refer to a file that exists in the mount namespace of the container being built.
Now, you can run both the agent and ssh-add as part of building a container, but ssh-add reads keys from ~/.ssh, and if you had key files there as part of the container image being built, you wouldn't need --ssh in the first place, would you?
The value of --ssh lies in allowing you to transfer your authority to talk to remote services, defined through your keys on the host, to the otherwise very isolated container-building procedure, using nothing but an SSH agent designed for this very purpose. That removes the need to do things like copying key files into the container. The keys should also normally not be part of the built container, especially if they were only to be used during building. The agent, on the other hand, runs on the host, securely encapsulates the keys you add to it, and since the host is where you'd have your keys, that's where you're supposed to run ssh-add to add them to the agent.
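Putting it together, the host-side flow looks roughly like this (a sketch; the key path ~/.ssh/id_ed25519 is an assumption, adjust it to your key):
eval "$(ssh-agent -s)"        # start the agent on the host
ssh-add ~/.ssh/id_ed25519     # add your key to the agent
podman build --ssh default .  # forward the agent socket into the build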
Background:
I'm trying to figure out how to use ConfigFS to set up an HID device on the BeagleBone Black.
I found the following example (www.isticktoit.net/?p=1383) on the web and tried it. The sample runs on a Raspberry Pi Zero; however, it does not work on my BBB. The following is the script that I wrote, executed as root. The script attempts to define a keyboard HID device.
#!/bin/bash
# libcomposite must be loaded before /sys/kernel/config/usb_gadget/ exists
modprobe libcomposite
modprobe usb_f_hid
cd /sys/kernel/config/usb_gadget/
mkdir -p isticktoit
cd isticktoit
echo 0x1d6b > idVendor # Linux Foundation
echo 0x0104 > idProduct # Multifunction Composite Gadget
echo 0x0100 > bcdDevice # v1.0.0
echo 0x0200 > bcdUSB # USB2
mkdir -p strings/0x409
echo "fedcba9876543210" > strings/0x409/serialnumber
echo "Tobias Girstmair" > strings/0x409/manufacturer
echo "iSticktoit.net USB Device" > strings/0x409/product
mkdir -p configs/c.1/strings/0x409
echo "Config 1: ECM network" > configs/c.1/strings/0x409/configuration
echo 250 > configs/c.1/MaxPower
# Add functions here
pwd
mkdir -p functions/hid.xyz
echo 1 > functions/hid.xyz/protocol
echo 1 > functions/hid.xyz/subclass
echo 8 > functions/hid.xyz/report_length
echo -ne \\x05\\x01\\x09\\x06\\xa1\\x01\\x05\\x07\\x19\\xe0\\x29\\xe7\\x15\\x00\\x25\\x01\\x75\\x01\\x95\\x08\\x81\\x02\\x95\\x01\\x75\\x08\\x81\\x03\\x95\\x05\\x75\\x01\\x05\\x08\\x19\\x01\\x29\\x05\\x91\\x02\\x95\\x01\\x75\\x03\\x91\\x03\\x95\\x06\\x75\\x08\\x15\\x00\\x25\\x65\\x05\\x07\\x19\\x00\\x29\\x65\\x81\\x00\\xc0 > functions/hid.xyz/report_desc
ln -s functions/hid.xyz configs/c.1/
# End functions
ls /sys/class/udc > UDC
The error that I get is "ls: write error: Device or resource busy".
I am running Debian Jessie, Linux version 4.4.9-ti-r25.
lsmod shows that libcomposite and usb_f_hid are loaded.
The USB device controller, musb-hdrc-0.auto, is loaded.
Questions:
How can I tell which device is busy?
Where can I find the USB ConfigFS defect/bug list for the BBB?
Is there a log file and enabling parameter that would give me a clue as to what is happening?
Thanks for any help,
David Glaser
The problem you are likely having with the BeagleBone Black is the cdc_acm driver. It is difficult to remove if you don't know how (well, not really, now that I KNOW how to do it), because the steps aren't laid out anywhere yet. I found this: https://media.defcon.org/DEF%20CON%2023/DEF%20CON%2023%20presentations/DEFCON-23-Phil-Polstra-One-device-to-Pwn-them-all.pdf
which led me to the following solution:
#!/usr/bin/env bash

# Return 0 if the given module is loaded, 1 otherwise.
function checkModule(){
    MODULE="$1"
    if lsmod | grep "$MODULE" &> /dev/null ; then
        echo "$MODULE found."
        return 0
    else
        echo "$MODULE not found."
        return 1
    fi
}

# Stop the getty on the gadget serial port so g_serial can be removed.
if which systemctl | grep "systemctl" &> /dev/null ; then
    systemctl stop serial-getty@ttyGS0.service >/dev/null
fi

if checkModule "g_serial"; then
    modprobe -r g_serial
fi
if checkModule "usb_f_acm"; then
    modprobe -r usb_f_acm
fi
# Removing those also drops libcomposite, which we want to keep, so load it back.
if ! checkModule "libcomposite"; then
    modprobe libcomposite
fi
Basically, it stops the serial-getty service, which allows you to remove the g_serial device, and this then allows you to remove usb_f_acm. That in turn removes libcomposite, which you actually want to keep, so the script loads it back. Once this is done, you can likely do all the things you needed to do. I got a nice HID keyboard working this way (well, okay, I guess it's a KeygleBone Black now). It is pretty simple once you understand ALL of the pieces, but I'm having a little trouble tearing my device back down. I might not need to eventually, but I'd like to be able to do that, and it seems that certain directories cannot be removed (namely the "strings" directories that I've created). This means I can't really fully tear down the device, but maybe I only need to:
echo "" > /sys/kernel/config/usb_gadget/my_gadget/UDC
to actually tear it down. I haven't worked that part out yet. There are also some C libraries, but I've got a bunch of Python scripts that I want to use, and I don't yet have Python wrappers for those. That probably isn't too much work, though.
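For what it's worth, the usual ConfigFS teardown is roughly the reverse of creation: unbind the UDC, unlink the function from the config, then remove directories leaf-first. A sketch, assuming the layout from the question's script (gadget isticktoit, function hid.xyz, config c.1):
cd /sys/kernel/config/usb_gadget/isticktoit
echo "" > UDC                    # unbind from the UDC first
rm configs/c.1/hid.xyz           # remove the function symlink
rmdir configs/c.1/strings/0x409
rmdir configs/c.1
rmdir functions/hid.xyz
rmdir strings/0x409
cd .. && rmdir isticktoit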
I didn't want to forget to mention that I tried to throw the above script into rc.local, so the BeagleBone Black I am using would be "HID ready" on boot. There are probably better locations and methods to do this, but I just wanted to use rc.local: the above is a script, rc.local is a script, it should run on boot. But it doesn't. You have to make rc.local runnable (chmod 755 /etc/rc.local) and change the shell it runs under. It always runs bash, but its method for running bash is the "POSIX" method, and that doesn't seem to work, so you have to force it to run bash in non-POSIX mode with:
#!/usr/bin/bash
Again, there are probably other better methods (I was lazy here and, well, I'm just old school), especially if your device is going to be an IoT device or anything linked to the net, so you might want to consider something else if you need this script to run on boot.
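For illustration, the resulting rc.local might look something like this (a sketch; /usr/local/bin/hid-gadget.sh is a hypothetical name for the setup script above):
#!/usr/bin/bash
# /etc/rc.local -- run the HID gadget setup at boot
/usr/local/bin/hid-gadget.sh   # hypothetical path to the gadget script
exit 0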
I did stupidly leave out one thing: I made sure the BeagleBone Black doesn't present its usual "disk" portion as well. I would put the details here, but frankly I'd have to track those back down. I basically googled around for how to disable the BeagleBone Black disk. It isn't hard, and it amounted to me renaming a file so the "USB disk" configuration isn't found on boot. You can also change a line in the u-boot config somewhere, I believe, but I didn't really want to do that.
Found the file: /var/local/bb_usb_mass_storage.img
Well, it might be bbg_usb_mass_storage.img if it is a BeagleBone Green, but I just moved this file so it wouldn't present the mass storage device. That should allow you to do what you want.
I have containers with Python apps, and I need them to automatically start and expose SSH when I run them. I know it's against Docker's best practices, but right now I don't have any other solution. I'd be interested to know the best way to automatically run an additional service in a Docker container anyway.
Since Docker will only start one process, installing sshd isn't enough. There are apparently multiple options to deal with it:
use a process manager like Monit or Supervisor
use the ENTRYPOINT option
append a command (service sshd start, for instance) at the end of /etc/bash.bashrc (see this answer)
Option 1 seems like overkill to me. Also, I suppose I'd have to run the container with a command calling the process manager instead of bash or my Python app: not exactly what I want.
I don't know how to use option 2 for such a case. Should I write a custom script that starts sshd and then runs the provided command, if any? What should this script look like?
Option 3 is very straightforward but quite dirty. Also, it won't work if I run the container with a command other than /bin/bash.
What's the best solution, and how do I set it up?
You mention that option 1 seems like overkill. Why is it overkill? Supervisor is very simple to configure and will basically do what you want.
First, write a supervisor config file that starts your Python app and sshd:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
[program:pythonapp]
command=/path/to/python myapp.py -x args etc etc
Call that file supervisord.conf and commit it somewhere in your repo. In your Dockerfile, copy that file into the container as one of the build steps, expose the ports for SSH and your app (if needed), and set CMD to start supervisord:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22 80
CMD ["/usr/bin/supervisord"]
This is clean and easy to understand. It's how I run multiple processes in a container when needed. It is even suggested in the Docker docs as a nice solution.
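To try it out, something like this should work (a sketch; the image name myapp is arbitrary):
docker build -t myapp .
docker run -d -p 2222:22 -p 8080:80 myapp   # SSH on host port 2222, app on 8080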
If you don't want to use a process manager, you can wrap your actual container command inside a shell script that starts ssh and then executes your actual command (sudo is usually unnecessary inside the container, where you are typically root):
#!/bin/bash
service ssh start                        # start sshd as a daemon
exec python myapp.py -x args blah blah   # then hand off to the app
This starts SSH as a daemon, and then your Python app starts after it.
Yes, we can configure supervisord for multiple processes in a container. If you want to use openssh-server, we can configure supervisord like below:
[supervisord]
nodaemon=true
[program:sshd]
command=/usr/sbin/sshd -D
in the supervisord.conf file.
We can add the supervisord.conf file to the Docker image by updating a few lines in the Dockerfile:
RUN apt update && apt install -y supervisor openssh-server
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 22
CMD ["/usr/bin/supervisord"]
Reference link: Gotechnies
I have a question about running an OpenFlow switch:
If we want to run OpenFlow on a PC or a router so that it works as an OpenFlow switch, what should we do? And is the CPU type or platform of the device important? Does it make any difference?
Thanks in advance.
Before you try anything on a PC, install Open vSwitch on a VM (probably Ubuntu) and try its OpenFlow functionality.
To install Open vSwitch (OVS) on Ubuntu:
sudo apt-get install openvswitch-switch
You could get Floodlight or the Ryu SDN framework to act as the controller for your switch.
Here is an OpenFlow tutorial; it is outdated but still informative: http://archive.openflow.org/wk/index.php/OpenFlow_Tutorial
Or you could also try mininet, as @EricSorensen suggested in the previous answer. Mininet allows you to simulate a network with hosts and switches.
While using mininet, you could use its built-in 'nox' controller. I'd prefer Floodlight, though.
Hope it helps!
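For a quick start, a single-switch mininet topology pointed at a remote controller looks like this (a sketch; adjust the controller IP and port to your setup):
sudo mn --topo single,3 --controller remote,ip=127.0.0.1,port=6653 --switch ovsk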
This is all you need: http://mininet.org/
Check the downloads and tutorial. Use either a virtual machine or a native install from source (on Linux).
You can install Open vSwitch, or use ofsoftswitch13 from https://github.com/CPqD/ofsoftswitch13, on a Linux-based PC with at least two physical interfaces, so that the packet routing can be observed.
You can also run OpenFlow using mininet, https://github.com/mininet/mininet (one interface is enough for this), which supports the creation of various network topologies.
As suggested, for testing you can simply do an apt-get install or similar on most recent Linux distros. There are two components to Open vSwitch: a kernel module and the user-space openvswitch. The user-space process does not require any specific number of cores or processor type, and for the most part you can do with less than a core.
Apart from ovs you also need ovsdb, which apt-get install or similar will automatically set up for you (hence the easiest way to go about it). Again, it's pretty lightweight; it's just a JSON key-value DB.
For the controller there are a lot of options, but for playing around you can use the built-in command-line tools as well.
ovs-appctl: to set up the switch http://openvswitch.org/support/dist-docs/ovs-appctl.8.txt
ovs-ofctl: to add/modify/dump flows http://openvswitch.org/support/dist-docs/ovs-ofctl.8.txt
ovs-dpctl: to see the kernel flows
I would highly recommend getting a feel for the command-line tools, OpenFlow commands, and how flows work before picking POX, OpenDaylight, etc.; see the example commands below.
Apart from mininet there is also oftest (https://github.com/floodlight/oftest), which is primarily used to write tests for OpenFlow but is pretty useful for learning and debugging issues in a constrained environment.
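For example, a few representative commands (a sketch; assumes a bridge named br0 already exists):
ovs-ofctl show br0                  # ports and OpenFlow features
ovs-ofctl add-flow br0 "priority=100,ip,nw_dst=10.0.0.2,actions=output:2"
ovs-ofctl dump-flows br0            # verify the flow was installed
ovs-dpctl dump-flows                # flows cached in the kernel datapath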
Running Open vSwitch
If you want to run Open vSwitch on bare metal, I suggest you use the official tarball to install the Open vSwitch daemon.
Please follow these steps:
#!/bin/bash
# Run with root permissions
aptitude install dh-autoreconf libssl-dev openssl
wget http://openvswitch.org/releases/openvswitch-2.4.0.tar.gz
tar zxvf openvswitch-2.4.0.tar.gz && cd openvswitch-2.4.0
./boot.sh
./configure # If you want to build the kernel module, append --with-linux=/lib/modules/`uname -r`/build
make && make install
make modules_install
modprobe gre
modprobe openvswitch
modprobe libcrc32c
ovsdb-tool create /usr/local/etc/openvswitch/conf.db /usr/local/share/openvswitch/vswitch.ovsschema
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
--remote=db:Open_vSwitch,Open_vSwitch,manager_options \
--pidfile --detach --log-file
ovs-vsctl --no-wait init
ovs-vswitchd --pidfile --detach --log-file
You can use ovs-vsctl show or ovs-ofctl show to check Open vSwitch status. As the next step, we need to add physical or logical (e.g. a veth pair) interfaces to the Open vSwitch bridge:
# Create an Open vSwitch bridge named 'ovs-br'
ovs-vsctl add-br ovs-br
# Add an interface to the Open vSwitch bridge
# If you want to check, use `ovs-vsctl show` again.
ovs-vsctl add-port ovs-br eth0
# Set the OpenFlow controller
# You should have a controller ready
# If not, here are two installation solutions:
# Ryu installation: https://github.com/sdnds-tw/ryu-installer
# ONOS installation: https://github.com/pichuang/onos-ansible
ovs-vsctl set-controller ovs-br tcp:x.x.x.x:6653
# Use ovs-vsctl to check that the controller attribute is up
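If you have no spare physical NIC, a veth pair works as the logical interface mentioned above (a sketch; the names veth0/veth1 are arbitrary):
ip link add veth0 type veth peer name veth1
ip link set veth0 up && ip link set veth1 up
ovs-vsctl add-port ovs-br veth0   # attach one end to the bridge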
Once you let another server (here we call it the controller) determine the packet-forwarding behavior of your local machine (here, your PC), it becomes the so-called SDN mechanism. So the kind of CPU or hardware you choose is not really relevant; basically, you can consider SDN a software solution.
For the process of installing the SDN-enabled software, you can choose Open vSwitch, which has been covered in the answers above.
I am setting up a Debian Etch server to host ruby and php applications with nginx. I have successfully configured inittab to start the php-cgi process on boot with the respawn action. After serving 1000 requests, the php-cgi worker processes die and are respawned by init. The inittab record looks like this:
50:23:respawn:/usr/local/bin/spawn-fcgi -n -a 127.0.0.1 -p 8000 -C 3 -u someuser -- /usr/bin/php-cgi
I initially wrote the process entry (everything after the 3rd colon) in a separate script (simply because it was long) and put that script name in the inittab record, but because the script would run its single line and die, the syslog was filled with errors like this:
May 7 20:20:50 sb init: Id "50" respawning too fast: disabled for 5 minutes
Thus, I got rid of the script file and just put the whole line in the inittab. Henceforth, no errors show up in the syslog.
Now I'm attempting the same with thin to serve a rails application. I can successfully start the thin server by running this command:
sudo thin -a 127.0.0.1 -e production -l /var/log/thin/thin.log -P /var/run/thin/thin.pid -c /path/to/rails/app -p 8010 -u someuser -g somegroup -s 2 -d start
It works apparently exactly the same whether I use the -d (daemonize) flag or not. Command line control comes immediately back (the processes have been daemonized) either way. If I put that whole command (minus the sudo and with absolute paths) into inittab, init complains (in syslog) that the process entry is too long, so I put the options into an exported environment variable in /etc/profile. Now I can successfully start the server with:
sudo thin $THIN_OPTIONS start
But when I put this in an inittab record with the respawn action
51:23:respawn:/usr/local/bin/thin $THIN_OPTIONS start
the logs clearly indicate that the environment variable is not visible to init; it's as though the command were simply "thin start."
How can I shorten the inittab process entry? Is there another file than /etc/profile where I could set the THIN_OPTIONS environment variable? My earlier experience with php-cgi tells me I can't just put the whole command in a separate script.
Why don't you call a wrapper script that starts thin with your options?
start_thin.sh:
#!/bin/bash
# Run thin in the foreground (no -d) and exec it, so init supervises thin directly;
# a daemonizing command would exit immediately and trigger "respawning too fast".
exec /usr/local/bin/thin -a 127.0.0.1 -e production -l /var/log/thin/thin.log -P /var/run/thin/thin.pid -c /path/to/rails/app -p 8010 -u someuser -g somegroup -s 2 start
and then:
51:23:respawn:/usr/local/bin/start_thin.sh
init.d script
Use a script in /etc/rc.d/init.d and set the runlevel.
Here are some examples with thin, Ruby, and Apache:
http://articles.slicehost.com/2009/4/17/centos-apache-rails-and-thin
http://blog.fiveruns.com/2008/9/24/rails-automation-at-slicehost
http://elwoodicious.com/2008/07/15/nginx-haproxy-thin-fastcgi-php5-load-balanced-rails-with-php-support/
These provide example init scripts to use.
edit:
The asker pointed out that this will not allow respawning. I suggested forking in the init script and disowning the process so init doesn't hang (it might fork() the script itself; I will check), then creating an infinite loop that waits on the server process to die and restarts it.
edit2:
It seems init will fork the script, so just a loop should do it.
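A sketch of that loop, reusing the options from the wrapper above (again without -d, so the loop can wait on the thin process):
#!/bin/bash
while true; do
    /usr/local/bin/thin -a 127.0.0.1 -e production -l /var/log/thin/thin.log -P /var/run/thin/thin.pid -c /path/to/rails/app -p 8010 -u someuser -g somegroup -s 2 start
    sleep 2   # back off briefly so a crashing server cannot spin the loop
done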