How to access the instance in OpenStack, configure external network - ssh

I have installed OpenStack on a CentOS 7 VM using Packstack. I have created the network configuration: an internal and an external network, interconnected by a router. But I have the problem that I cannot access the instance and it does not answer pings, even though I have already configured the key pair and the security group. When creating the external network I did it this way:
Name: External_Network
Project: LAB
Provider Network Type: Flat
Physical Network: ens163
This physical network must be the problem, because ens163 is the name of the network interface of the CentOS VM where the OpenStack installation lives ... I think the problem is in the part that links the physical network. How can I map the physical interface to a logical segment?

There is a simple dictionary in the Neutron configuration, called bridge_mappings, that maps between the bridge and the physical network that represents it.
To find the interface connected to the bridge, it depends on the network plugin:
For OVS, just type ovs-ofctl show {bridge name}
For Linux bridge, run brctl show
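For example, on a Packstack deployment with the OVS agent, the wiring could look roughly like this. The label extnet and the bridge br-ex are just conventional example names (Packstack often uses them), and the config file path varies by release:

# /etc/neutron/plugins/ml2/openvswitch_agent.ini (path may vary by release)
# [ovs]
# bridge_mappings = extnet:br-ex

# Attach the physical NIC to the mapped bridge, then restart the OVS agent:
ovs-vsctl add-port br-ex ens163

# Create the external network against the label, not the NIC name:
openstack network create External_Network --external \
  --provider-network-type flat --provider-physical-network extnet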


Custom Ansible Module - Info or Facts?

I have a VMware vCenter with several VMs. I want to use Ansible to configure the network interfaces of the guest based on the network to which the VMware network interface is connected. Since the network interfaces in the guest may not be named in a stable way, I want to match them based on the MAC address, which is known both inside and outside the VM.
For that I want to gather information from VMware vCenter for each VM, which leads to the first question: since the needed information is tightly coupled to the VM, should I write a *_facts (and not a *_info) module for that?
Since this module code directly contacts the vCenter API, there is no need to execute the code on the target host, which leads to the second question: is there a way to always execute a module locally (without using delegate_to in the playbook)?
And the last question: is it possible to execute *_facts modules automatically, without explicitly referencing them in a playbook?

Why can't I ping a virtual guest from the host by default?

I ran into some trouble when I tried to ping my Ubuntu virtual guest from my Windows 10 host, but this solution did the trick.
I'm wondering what exactly a "host-only adapter" is, and why can't I ping the virtual machine by default? How exactly does the virtual machine access the internet when I can't ping it?
As the name suggests, host-only is intended to create a new interface that is virtual and visible to the host, and not in any way connected to the physical interface that actually connects to the internet.
It can be thought of as a hybrid between the bridged and internal networking modes: as with bridged networking, the virtual machines can talk to each other and the host as if they were connected through a physical Ethernet switch. However, as with internal networking, a physical networking interface need not be present, and the virtual machines cannot talk to the world outside the host since they are not connected to a physical networking interface.
You might be wondering what the use case for this would be. Think, for example: one virtual machine may contain a web server and a second one a database, and since they are intended to talk to each other, the appliance can instruct VirtualBox to set up a host-only network for the two. A second (bridged) network would then connect the web server to the outside world to serve data to, but the outside world cannot connect to the database.
How it works
When host-only networking is used, VirtualBox creates a new software interface on the host which then appears next to your existing network interfaces. In other words, whereas with bridged networking an existing physical interface is used to attach virtual machines to, with host-only networking a new "loopback" interface is created on the host. And whereas with internal networking the traffic between the virtual machines cannot be seen, the traffic on the "loopback" interface on the host can be intercepted.
The great thing about host-only networks is that the host itself sits on this network, so with proper configuration, as in the link above, you can reach all the VMs.
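To set one up by hand, a minimal sketch with VBoxManage looks like this (the VM name "UbuntuGuest" is just an example, and vboxnet0 is the name VirtualBox typically reports for the first host-only interface it creates):

VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1
VBoxManage modifyvm "UbuntuGuest" --nic2 hostonly --hostonlyadapter2 vboxnet0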
Hope my explanation helps!

How to access servers on a KVM guest like in VMware?

I'm trying to establish a connection to my Guest without making it available for anyone else than my Host.
When using VMware, guests are automatically available under their NAT IPs (at least when using Windows as a host), which makes accessing servers running on the guest easy.
How would I achieve something like this when using KVM?
I already tried using bridges, but that led me nowhere.
I'm using Manjaro (mostly Arch under the hood)
You don't mention what you're using to launch KVM, but since you've tagged this with 'libvirt' I'll assume you're running with libvirt.
Libvirt provides a 'virtual network' capability which can provide simple NAT-based IP connectivity to guests. This uses a bridge device underneath, but that bridge device is not connected to any physical NIC, and iptables rules are used to set up NAT routing that allows outbound-only connectivity. On most distros this is visible via 'virsh net-list' as a network named 'default'. If you connect your guest to that you'll get NAT, e.g. in the guest XML use:
<interface type="network">
<source network="default"/>
</interface>
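To verify the network and find the address the guest got, something like the following should work (the domain name "myguest" is just an example; virsh domifaddr needs a reasonably recent libvirt):

virsh net-list --all     # the 'default' NAT network should be listed as active
virsh net-start default  # start it if it is inactive
virsh domifaddr myguest  # the guest's IP, by default in 192.168.122.0/24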

Cannot do vagrant ssh after vagrant up on a Windows machine

I am building a sample Vagrant box to install Jenkins and push it to Atlas cloud. Please find below the steps that I followed:
vagrant init ubuntu/trusty64
and then the normal command to boot the Vagrant machine:
vagrant up
After this, if I type the command to SSH into the machine,
vagrant ssh
it gives me an error saying to increase the timeout, and so on.
The main question is: how can I SSH into the newly created Vagrant machine?
To understand this, I had to go through all the basics. Please find my findings below.
Not attached
In this mode, VirtualBox reports to the guest that a network card is present, but that there is no connection -- as if no Ethernet cable was plugged into the card. This way it is possible to "pull" the virtual Ethernet cable and disrupt the connection, which can be useful to inform a guest operating system that no network connection is available and enforce a reconfiguration.
Network Address Translation (NAT)
If all you want is to browse the Web, download files and view e-mail inside the guest, then this default mode should be sufficient for you, and you can safely skip the rest of this section. Please note that there are certain limitations when using Windows file sharing (see Section 6.3.3, "NAT limitations" for details).
NAT Network
The NAT network is a newer NAT flavour introduced in recent VirtualBox versions.
Bridged networking
This is for more advanced networking needs such as network simulations and running servers in a guest. When enabled, VirtualBox connects to one of your installed network cards and exchanges network packets directly, circumventing your host operating system's network stack.
Internal networking
This can be used to create a different kind of software-based network which is visible to selected virtual machines, but not to applications running on the host or to the outside world.
Host-only networking
This can be used to create a network containing the host and a set of virtual machines, without the need for the host's physical network interface. Instead, a virtual network interface (similar to a loopback interface) is created on the host, providing connectivity among virtual machines and the host.
Generic networking
These rarely used modes share the same generic network interface and allow the user to select a driver which can be included with VirtualBox or be distributed in an extension pack.
At the moment there are potentially two available sub-modes:
UDP Tunnel
This can be used to interconnect virtual machines running on different hosts directly, easily and transparently, over existing network infrastructure.
VDE (Virtual Distributed Ethernet) networking
This option can be used to connect to a Virtual Distributed Ethernet switch on a Linux or FreeBSD host. At the moment this requires compiling VirtualBox from source, as the Oracle packages do not include it.
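As a rough illustration, these modes map onto VBoxManage like this (the VM name "MyVM", the host NIC eth0 and the network names are examples; adjust to your setup):

VBoxManage modifyvm "MyVM" --nic1 nat                                   # default NAT
VBoxManage modifyvm "MyVM" --nic2 bridged --bridgeadapter2 eth0         # bridged
VBoxManage modifyvm "MyVM" --nic3 intnet --intnet3 innernet             # internal
VBoxManage modifyvm "MyVM" --nic4 hostonly --hostonlyadapter4 vboxnet0  # host-only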
Out of these, only the NAT and host-only networks are important here. So, to solve this issue, I modified the predefined Vagrantfile with the following code:
Vagrant.configure("2") do |config|
  config.vm.define "jenkins" do |jenkins|
    jenkins.vm.hostname = "jenkins.ci"
    jenkins.vm.network "private_network", ip: "192.168.56.5", adapter: 2
    jenkins.vm.provider "virtualbox" do |vb|
      vb.name = "Jenkins"
    end
  end
end
Here, I have created a private network with a static IP and told it to use adapter 2. The private-network adapter is a host-only adapter, while the first adapter, the default one, is NAT.
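With that in place, SSH should work either through Vagrant's NAT port forward or directly over the host-only address (a sketch; the vagrant user is the default on standard boxes):

vagrant reload               # recreate the VM with the new network config
vagrant ssh                  # via the NAT adapter's forwarded port
ssh vagrant@192.168.56.5     # or directly over the host-only network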

DPDK Open vSwitch can't access the network

I'm playing with the ovs-dpdk package https://github.com/01org/dpdk-ovs and one thing I don't clearly understand is how an OVS bridge and the VMs connected to it can get access to the outside, i.e. to the network. On a regular Open vSwitch, the bridge device created by the vswitch is 'visible' from Linux and can be configured by regular tools (ifconfig, ethtool, etc.), so I could create a TAP interface, add it to the vswitch bridge, and assign the bridge interface an IP address. However, with ovs-dpdk this is not the case: any bridge created with ovs-vsctl is not available in userspace Linux; at least I don't see it with ifconfig or "ip link show".
Is there another method OVS-DPDK uses for this? Hopefully someone can shed some light on this problem. Thanks.
When ovs-dpdk uses DPDK to access the NIC, it takes over the NIC and does not allow the regular kernel drivers to do their thing.
This means that you will no longer see the interface from the Linux host if you bind the hardware to the DPDK I/O driver. But inside OVS you can bridge/tap/mirror these raw DPDK interfaces to your VMs or to another interface which is visible to the kernel's regular drivers; you just can't do it on the DPDK-owned interfaces.
The whole point of integrating DPDK into OVS is to bypass all the kernel drivers and get packets to/from the vswitch as fast as possible, so it can route them natively through to your VMs and other local interfaces as you set in your bridge configuration.
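As a rough sketch of what that takeover looks like with a current upstream OVS built with DPDK (the syntax differs from the older 01org/dpdk-ovs fork, and the PCI address 0000:03:00.0 is only an example; check dpdk-devbind.py --status for yours):

dpdk-devbind.py --bind=vfio-pci 0000:03:00.0   # NIC disappears from 'ip link'
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 p0 -- set Interface p0 type=dpdk options:dpdk-devargs=0000:03:00.0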