I am working on a project where we create multiple VMs on a host machine. There are multiple hosts of this kind, connected to a single LAN, and the VMs are created using KVM with QEMU and libvirt. There is a requirement to create multiple VLANs such that some VMs on a host are part of one VLAN and the rest are part of another. These VLANs also span multiple hosts.
I was trying to achieve this using Open vSwitch but did not succeed. I have followed many solutions available online, but I am left in a confused state. Please help me.
I am not clear whether Open vSwitch creates the VLANs, or whether we need to create the VLANs on our end and Open vSwitch just helps attach the VMs to them.
If Open vSwitch creates the VLANs, then what is the way to go? All the configurations I have tried either give me an error or do not behave as expected. Please point me in the right direction.
Read the section Setting VLAN tag from libvirt.org.
With the Open vSwitch network type, you can specify VLAN tags directly via the <vlan> element on a port of an Open vSwitch bridge, and libvirt will manage the VLAN tags for your VM network.
Steps:
configure an Open vSwitch bridge for your virtual machine by following the guide How to Use Open vSwitch with Libvirt
add a <vlan> element to your virtual machine's network interface, specifying the VLAN ID (see the sketch below)
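To make it concrete, here is a minimal sketch; the bridge name ovsbr0, the uplink eth0, and VLAN ID 42 are placeholders. First create the bridge on each host and attach the physical uplink:

    # on each host: create the OVS bridge and add the physical NIC as uplink
    ovs-vsctl add-br ovsbr0
    ovs-vsctl add-port ovsbr0 eth0

Then tag the VM's interface in its domain XML (virsh edit <vm>):

    <interface type='bridge'>
      <source bridge='ovsbr0'/>
      <virtualport type='openvswitch'/>
      <vlan>
        <tag id='42'/>
      </vlan>
      <model type='virtio'/>
    </interface>

So, to answer the original question: you do not create the VLANs separately. Open vSwitch tags the traffic per port, and libvirt passes the tag to OVS when the VM starts. VMs whose interfaces carry the same tag can reach each other, across hosts too, as long as the physical switch ports between the hosts trunk that VLAN.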
If we have two custom modules that need to communicate directly via sockets, is there a way to know what IP address is assigned to each module?
After reading this article I was under the impression that the azure-iot-edge network bridge might support referencing a running module by its module name as the hostname. This doesn't seem to work.
I guess we are trying to avoid having to scan the network or use some local storage option, and we don't want to join the host network. Any ideas how one running module can find the IP of another module that is expected to be running?
Here is a picture showing the two containers I am testing with. One container is just an Alpine instance that I can attach a console to and use to try to ping/access the other containers. I can ping by IP address but want to ping by container name instead.
After further study of this issue, it turns out the arm32v7 image I was using had some issues when deployed. One of the oddities was that the date on the container was "Sun Jan 0 00:100:4174038 1900", and some other commands that should have worked were failing.
I ended up switching over to an Ubuntu image with iputils-ping installed and confirmed that the azure-iot-edge bridge allows accessing other containers by their module name, which serves as the hostname. So all good here, works as expected, user error!
Okay, so I'm trying to connect two OVS bridges on separate hosts using GRE tunneling. A VM is connected to each of the two OVS bridges.
The problem is that I don't want to add eth0 to the bridges and don't want to give an IP address to the bridges. The VMs have been given static IPs.
I've tried multiple online tutorials, but they all add eth0 to the bridges (or similar), which is of no use to me.
You do not have to add eth0 to the bridge.
You also do not have to give your bridge an IP, just set it to inet manual (or an equivalent in your config).
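For example, a minimal sketch, assuming host A's eth0 is 192.168.0.1 and host B's is 192.168.0.2 (addresses and names are placeholders); eth0 keeps its IP and is never added to the bridge:

    # on host A (eth0 = 192.168.0.1)
    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.0.2

    # on host B (eth0 = 192.168.0.2)
    ovs-vsctl add-br br0
    ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.0.1

The GRE packets are routed out via eth0's existing address, so neither the bridge nor the GRE port needs an IP of its own; the VMs plugged into br0 on the two hosts can then talk over the tunnel using their static IPs.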
Could you please provide your current config? It would make it much easier to give advice.
I've been trying to get Avi Vantage running on a single Linux box using the Docker-based "single host deployment" method described here:
http://kb.avinetworks.com/installing-avi-vantage-for-a-linux-server-cloud/
I've got the Controller and the Service Engine running and the Controller can see the Service Engine. Now, I'm trying to create a Virtual Service, but I'm not sure how to configure the Virtual IP (VIP).
What network interface should I bind the VIP to?
The VIP is bound to an interface based on reachability. This is done automatically by the placement manager.
case-1:
Say eth0 is 1.1.1.2/24 and the VIP is in the same subnet (say 1.1.1.100). The VS placement logic will choose eth0.
case-2:
Say eth0 is 1.1.1.2/24 and the VIP is in a different subnet (say 100.100.100.100). You can set the placement subnet in the VS (advanced settings); that will force the VS to be placed on eth0.
case-3:
The above can be done without a placement subnet by using BGP. Configure BGP peers and enable_rhi in the VS (advanced settings). The VS will be placed on eth0 and, additionally, the VIP will be advertised through BGP. This avoids manual configuration of routes to reach the VIP on the first-hop router.
I'm trying to work around a DHCP issue by configuring my guest VM to use DHCP (to avoid having to configure it manually with a static IP) while defining a static IP in the XML.
This would enable setting an IP upon creation without requiring the virtual machine's operating system to be configured with a static IP (making it sort of "independent").
I should point out:
Guests are Windows/Linux mixed
Must use a bridge setup (not NAT)
Is this a reasonable solution? Any recommendations for the actual XML markup of the guest?
Static IP configuration (as opposed to DHCP) is not a libvirt thing but a configuration of the guest OS; refer to this mailing list thread for an example.
So you can do it with a custom DHCP server that listens on your bridge network instead of the default NAT network and only assigns specific IPs to specific MAC addresses. This is very easy to do with dnsmasq (see the sketch below).
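A minimal sketch, running dnsmasq as a DHCP-only server on the bridge; the bridge name br0, the MAC address, the file path, and the IP ranges are placeholders:

    # /etc/dnsmasq.d/vm-net.conf (hypothetical path)
    interface=br0
    bind-interfaces
    port=0                          # disable the DNS function; act as DHCP server only
    dhcp-range=192.168.100.50,192.168.100.150,12h
    # always hand this guest's MAC the same IP
    dhcp-host=52:54:00:aa:bb:cc,192.168.100.10

Pin the matching MAC in the guest's domain XML (<mac address='52:54:00:aa:bb:cc'/> inside <interface>) and leave the guest OS on plain DHCP; it will always receive the same address.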
If you want to avoid any DHCP broadcast on your bridge network at all, think about the bootstrap process inside your guest OS. A config drive is a good choice: you create a disk image and attach it to the VM, and the cloud-init daemon in the guest OS picks it up and applies the network configuration. But that is overkill if you just want static IPs.
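If you do go the config-drive route, here is a hedged sketch using cloud-init's NoCloud datasource (Linux guests only; the file names follow the NoCloud convention and the addresses are placeholders):

    # network-config (cloud-init network config, version 2)
    version: 2
    ethernets:
      eth0:
        addresses: [192.168.100.10/24]
        gateway4: 192.168.100.1

Then build the seed ISO (the volume label cidata is what NoCloud looks for; user-data and meta-data are the other NoCloud files and can be minimal) and attach it to the VM as a CD-ROM:

    genisoimage -output seed.iso -volid cidata -joliet -rock user-data meta-data network-config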
I am trying to set up a development environment based on Vagrant, provisioned with Chef. I created an environment with Apache (using Chef) and can access the web server from my host machine via port forwarding.
I'd like my Vagrant box to contain several virtual hosts; using shared folders, I will define different projects that point at the same box and the relevant virtual host.
What I need to learn is whether there is a Chef way to create virtual hosts for Apache (or other web servers, e.g. nginx) in a Vagrant box, or whether, after the Vagrant+Chef setup, I should configure the virtual hosts manually by connecting to the box via SSH. If both options are available, which one is preferable?
The answer is YES, you can do this using Chef. One choice is to use the standard community apache2 cookbook by Opscode, or parts of it. You might also want to check the discussions here and here.
Good practice would of course be to use a recipe (or write your own) to create the virtual hosts and enable them; one of the things you want to achieve with Chef is to automate this so that you won't have to do it manually. The complexity of your scenario might demand doing it differently from what has been tried in the links above. You will also need DNS configuration in place if you plan to deploy anywhere other than your local machine.
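As a minimal sketch of the recipe approach (the vhost name, template, and paths are illustrative, and the string-based notification syntax assumes a reasonably recent Chef):

    # recipes/vhost.rb -- define and enable one Apache virtual host
    service 'apache2' do
      action :nothing
    end

    template '/etc/apache2/sites-available/myproject.conf' do
      source 'vhost.conf.erb'        # your ERB template with the <VirtualHost> stanza
      variables(server_name: 'myproject.local',
                docroot: '/opt/projects/myproject')
      notifies :reload, 'service[apache2]'
    end

    execute 'a2ensite myproject.conf' do
      not_if { ::File.exist?('/etc/apache2/sites-enabled/myproject.conf') }
      notifies :reload, 'service[apache2]'
    end

Add the recipe to your Vagrant Chef run list and the virtual hosts come up on every vagrant up, with no manual SSH step needed.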