DPDK Open vSwitch can't access the network

I'm playing with the ovs-dpdk package https://github.com/01org/dpdk-ovs and one thing I don't clearly understand is how an OVS bridge, and the VMs connected to it, can get access to the outside, i.e. to the network. On a regular Open vSwitch, the bridge device created by the vswitch is visible from Linux and can be configured with the usual tools (ifconfig, ethtool, etc.), so I could create a TAP interface, add it to the vswitch bridge, and assign the bridge interface an IP address. With ovs-dpdk this is not the case: a bridge created with ovs-vsctl is not available in Linux userspace, or at least I don't see it with ifconfig or "ip link show".
Does OVS-DPDK have another method for this? Hopefully someone can shed some light on this problem. Thanks.

When ovs-dpdk uses DPDK to access a NIC, it takes over the NIC and does not allow the regular kernel drivers to do their thing.
This means you will no longer see the interface from the Linux host once you bind the hardware to the DPDK I/O driver. You can still bridge/tap/mirror these raw DPDK interfaces inside ovs-dpdk to your VMs or to another interface that is visible to the kernel's regular drivers; you just can't do it on the DPDK-owned interfaces from the host side.
The whole point of integrating DPDK into OVS is to bypass the kernel drivers and get packets to/from the vswitch as fast as possible, so it can switch them natively through to your VMs and other local interfaces as set in your bridge configuration.
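For illustration, this is roughly what the port setup looks like on a DPDK-enabled Open vSwitch (syntax from upstream OVS with DPDK support, not the old 01org fork; the port names and PCI address are placeholders, and the exact options vary between OVS releases):
# DPDK ports only work on a userspace (netdev) datapath, not the kernel one.
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
# Attach the DPDK-bound NIC to the bridge (PCI address is a placeholder).
ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:01:00.0
# Add a vhost-user port that a VM can connect to through QEMU.
ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser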

Related

Multiple VMs accessing a single device over PCIe

I am using the libvirt/QEMU/KVM stack to run some VMs on an Ubuntu 20.04 host, with the virsh CLI tool for VM management. I'd like to allow multiple VMs to access the same device (an FPGA) over PCIe. It seems that libvirt doesn't allow this: when I attach the PCIe device to multiple VMs and try to power on more than one, I get the following error.
error: Failed to start domain ubuntu-guest-2
error: Requested operation is not valid: PCI device 0000:05:00.0 is in use by driver QEMU, domain ubuntu-guest-1
This kinda makes sense to me, as there shouldn't be conflicting data sent over the PCIe bus. But nonetheless, does anyone know a workaround to make this happen?
There are a number of techniques to share a device across VMs. All of them require either device-specific software support in the VMM, hardware in the device to support sharing (SR-IOV), or both (Scalable IOV).
For a custom FPGA design, you would need to provide this.
SR-IOV is part of the PCIe specification, so there may be libraries available that you could incorporate into your FPGA design.
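As a rough sketch of the SR-IOV route (assuming the FPGA design actually implements SR-IOV; the PCI address is taken from the error message above and the VF count is arbitrary):
# Enable two virtual functions (VFs) on the physical function via sysfs.
echo 2 | sudo tee /sys/bus/pci/devices/0000:05:00.0/sriov_numvfs
# Each VF appears as its own PCI function, typically labeled "Virtual Function".
lspci -nn | grep -i "virtual function"
Each VM then gets its own VF attached (for example with virsh attach-device and a hostdev XML snippet), so no two domains ever claim the same PCI function.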

Does a ConnectX-5 (CX5) NIC need dpdk-devbind.py to bind its PCI bus address?

With DPDK 18.11 and an mlx5 NIC, after running dpdk-devbind.py -b igb_uio 0000:xx:00.0 I get: mlx5_pci_probe(): no Verbs device matches PCI device 0000:xx:00.0, are kernel drivers loaded?
Should I use dpdk-devbind.py -b igb_uio 0000:xx:00.0 or not?
The MLX5 PMD does not require dpdk-devbind.py -b igb_uio; it works independently of igb_uio and uio_pci_generic, on both Linux and Windows.
Please re-test without binding:
lspci -ks 0000:41:00.1
41:00.1 Ethernet controller: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
Subsystem: Mellanox Technologies MT2892 Family [ConnectX-6 Dx]
Kernel driver in use: mlx5_core
From the DPDK documentation (the MLX5 poll mode driver guide; the same note appears as far back as DPDK 16.04):
This capability allows the PMD to coexist with kernel network interfaces, which remain functional, although they stop receiving unicast packets as long as they share the same MAC address. This means legacy Linux control tools (for example: ethtool, ifconfig and more) can operate on the same network interfaces that are owned by the DPDK application.
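So the fix is to undo the igb_uio binding and leave the port on the kernel driver, which the mlx5 PMD uses through the Verbs stack (the PCI address here mirrors the lspci output above):
# Unbind from igb_uio and hand the port back to mlx5_core.
sudo dpdk-devbind.py -u 0000:41:00.1
sudo dpdk-devbind.py -b mlx5_core 0000:41:00.1
# The port should now be listed among devices using a kernel driver.
dpdk-devbind.py --status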

Cannot do vagrant ssh after vagrant up on a Windows machine

I am building a sample Vagrant box to install Jenkins and push it to the Atlas cloud. Please find below the steps that I followed.
vagrant init ubuntu/trusty64
and then the normal command to bring up the machine:
vagrant up
After this, if I run the command to SSH into the machine
vagrant ssh
it fails with an error telling me to increase the timeout, and so on.
The main question is: how can I SSH into the newly created Vagrant machine?
To understand this, I had to go through the VirtualBox networking basics. Please find below my findings.
Not attached
In this mode, VirtualBox reports to the guest that a network card is present, but that there is no connection -- as if no
Ethernet cable was plugged into the card. This way it is possible to "pull" the virtual Ethernet cable and disrupt the connection, which can be useful to inform a guest operating system that no network connection is available and enforce a reconfiguration.
Network Address Translation (NAT)
If all you want is to browse the Web, download files and view e-mail inside the guest, then this
default mode should be sufficient for you, and you can safely skip the rest of this section. Please note that there are certain limitations when using Windows file sharing (see Section 6.3.3, “NAT limitations” for details).
NAT Network
The NAT network is a new NAT flavour introduced in recent VirtualBox versions.
Bridged networking
This is for more advanced networking needs such as network simulations and running servers
in a guest. When enabled, VirtualBox connects to one of your installed network cards and exchanges network packets directly, circumventing your host operating system's network stack.
Internal networking
This can be used to create a different kind of software-based network which is visible to selected virtual machines, but not to applications running on the host or to the outside world.
Host-only networking
This can be used to create a network containing the host and a set of virtual machines, without the need for the host's physical network interface. Instead, a virtual network interface (similar to a loopback interface) is created on the host, providing connectivity among virtual machines and the host.
Generic networking
These rarely used modes share the same generic network interface and allow the user to select a driver, which can be included with VirtualBox or distributed in an extension pack.
At the moment there are potentially two available sub-modes:
UDP Tunnel
This can be used to interconnect virtual machines running on different hosts directly, easily and transparently, over existing network infrastructure.
VDE (Virtual Distributed Ethernet) networking
This option can be used to connect to a Virtual Distributed Ethernet switch on a Linux or a FreeBSD host. At the moment this requires compiling VirtualBox from source, as the Oracle packages do not include it.
Out of these, only NAT and host-only networking are important here. So, to solve this issue, I modified the predefined Vagrantfile with the following code.
jenkins.vm.provider "virtualbox" do |vb|
  jenkins.vm.network "private_network", ip: '192.168.56.5', adapter: 2
  jenkins.vm.hostname = 'jenkins.ci'
  vb.name = "Jenkins"
end
Here, I have created a private network with a static IP and specified that it should use adapter 2. The private adapter is a host-only adapter; adapter 1, the default, is NAT.
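To apply the change and check the result (standard Vagrant CLI commands):
# Restart the machine so the new network configuration takes effect.
vagrant reload
# If ssh still times out, inspect the host/port/key Vagrant is using.
vagrant ssh-config
vagrant ssh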

Bridge two VM interfaces in VMware

I would like to bridge two virtual interfaces on two different VMs in VMware. I searched online and only found out how to bridge a virtual interface to a physical one. Is there a solution for this problem?
Just create a vSwitch on your host, add a network adapter on both VMs to the vSwitch, and you should be all good. Of course, there will be no DHCP or anything like that on that network.
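A minimal sketch for an ESXi host managed with esxcli (the names are placeholders; on VMware Workstation you would instead create a custom VMnet in the Virtual Network Editor):
# Create a standard vSwitch with no physical uplink, i.e. an isolated internal switch.
esxcli network vswitch standard add --vswitch-name=vm-bridge
# Add a port group and point each VM's extra network adapter at it.
esxcli network vswitch standard portgroup add --portgroup-name=vm-bridge-pg --vswitch-name=vm-bridge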

Can ns-3 EMU be applied to different machines across a WAN?

We are currently evaluating whether ns-3 satisfies our requirements. We're looking for a convenient tool to run on distributed devices in a real network (over every kind of possible connection) and capture network performance data (like a sniffer). I realize that the primary purpose of ns-3 is to simulate a network topology on a single machine, but its emu module sounds promising, and the flow monitor could save us effort on data capture.
In the following link
http://www.nsnam.org/wiki/HOWTO_make_ns-3_interact_with_the_real_world
it is stated that ns-3 EMU can be used to inject simulated nodes into a real live network, and three kinds of testbed are given. However, the first solution, the VMware virtual machine testbed, still works within a LAN: in promiscuous mode the virtual machines' network cards listen to all LAN broadcasts so that the emu-udp-echo server and client can find each other.
My question is: is it possible to run the emu-udp-echo server/client on different physical systems at different positions in a wide-area network?
E.g. in different cities or on different network providers, given the IP address of the hardware where the other ns-3 node is running? If it is possible, how can I specify the "real" IP address and port for the node, instead of assigning a virtual IPv4 address?
Thanks a lot.
Yes, while the documentation describes how to perform this using virtual machines, this can be done in general on real hardware. Since that HOWTO was written, there has been additional work on providing helpers for running this type of experiment, including running on PlanetLab testbed machines. This documentation describes the generalized file descriptor NetDevice, added to the ns-3.17 release: http://www.nsnam.org/docs/release/3.19/models/html/fd-net-device.html. A similar example to the one described in that HOWTO is found in the file fd-emu-udp-echo.cc.
When using emulation mode on real networks, the MAC and IP addresses must be configured carefully. First, the device must support being put into promiscuous mode. Second, the MAC address used by ns-3 needs to differ from the hardware address of the NIC. If you intend to ride on top of an active NIC with an existing IP address (in use for other Internet traffic), you'll need another IP address for ns-3 within the same link subnet. If instead you want to dedicate the NIC to ns-3 use, do not assign the IP address to the host NIC; assign it only in the ns-3 configuration.
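As a rough illustration of the host-side preparation (the interface name is a placeholder, and the example's exact command-line attributes should be checked against your ns-3 release):
# Put the NIC that ns-3 will use into promiscuous mode.
sudo ip link set eth0 promisc on
# Emulation opens raw sockets, so the example needs root; fd-emu-udp-echo
# ships with the fd-net-device examples in ns-3.17 and later.
sudo ./waf --run fd-emu-udp-echo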
The PlanetLab example also shows another configuration that uses Tap devices to send data to/from PlanetLab testbed nodes. Some of this configuration is specific to how PlanetLab works, but the use of Tap device bridged to an ns-3 device may also facilitate emulation.