Interface google block storage persistent disks - google-persistent-disk

I went through the Google documentation to find the interface for persistent disks but could not find it. For local disks it is SCSI/NVMe.
What is the interface for persistent disks - FC, iSCSI, or NVMe?

In the case of persistent disks (which are used by default by VM instances), it's a SCSI interface.
For confirmation, I ran the hwinfo command:
wb@instance-1:~$ sudo hwinfo --disk
13: SCSI 01.0: 10600 Disk
...
Driver: "virtio_scsi", "sd"
Driver Modules: "virtio_scsi", "sd_mod"
Device File: /dev/sda (/dev/sg0)
Device Files: /dev/sda, /dev/disk/by-id/google-instance-1, /dev/disk/by-path/pci-0000:00:03.0-scsi-0:0:1:0, /dev/disk/by-id/scsi-0Google_PersistentDisk_instance-1
...
You can see the virtual SCSI interface in Driver Modules: "virtio_scsi", "sd_mod", which clearly indicates it's a SCSI interface.
Another hint:
wb@instance-1:~$ sudo lshw | grep scsi
logical name: scsi0
configuration: driver=virtio_scsi
bus info: scsi#0:0.1.0
bus info: scsi#0:0.1.0,1
bus info: scsi#0:0.1.0,14
bus info: scsi#0:0.1.0,15
You can find more confirmation in the documentation regarding requirements for building your own images.
However, when you create an instance with a Local SSD drive, you have the option to select the interface type - SCSI or NVMe.
Or when using gcloud:
gcloud compute instances create example-instance \
--machine-type n2-standard-8 \
--local-ssd interface=[INTERFACE_TYPE] \
--local-ssd interface=[INTERFACE_TYPE] \
--image-project [IMAGE_PROJECT] \
--image-family [IMAGE_FAMILY]
More documentation on selecting the local SSD's interface here.
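For example, a concrete sketch requesting NVMe local SSDs (the machine type and image names are illustrative, not from the original question):
gcloud compute instances create example-instance \
--machine-type n2-standard-8 \
--local-ssd interface=nvme \
--image-project debian-cloud \
--image-family debian-11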
When you create a VM with a local SSD and run lsblk, you get:
wb@instance-3:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
├─sda1 8:1 0 9.9G 0 part /
├─sda14 8:14 0 3M 0 part
└─sda15 8:15 0 124M 0 part /boot/efi
nvme0n1 259:0 0 375G 0 disk
It's a clear indication of what kind of interface is used.
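As another quick check from inside any VM, the /dev/disk/by-id symlinks already encode the disk type (the names below match the hwinfo output above):
# Persistent disks show up with scsi-0Google_PersistentDisk_* ids,
# while local SSDs in NVMe mode appear as /dev/nvme* block devices.
ls -l /dev/disk/by-id/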

Related

How to enable IOMMU on Guest

I am planning to perform nested virtualization with a GPU device. I have a guest Ubuntu OS running and I have mapped the GPU to it by enabling intel_iommu on the host and configuring the NVIDIA PCI device as a vfio-pci device. I am also able to install the NVIDIA driver on the guest and use it for deep learning.
However, now I want to run another VM inside the guest. Let's call the guest that runs on the host L1, and the guest that runs inside it L2. I want the GPU to be accessible to the L2 guest. I came across vIOMMU, supported on the Q35 QEMU chipset - how do I enable IOMMU on the L1 guest, so that I can pass the GPU directly to the L2 guest?
Hardware :
Intel i7 8th Gen
NVIDIA GeForce 1070
Linux - Ubuntu 18.04
Hypervisor - KVM
There are a couple of things to be done on KVM/QEMU to allow a nested IOMMU:
Use the OVMF.fd BIOS, because the default BIOS might not support it.
Enable the q35 chipset with accel=kvm,kernel_irqchip=split.
Then check with dmesg | grep -e DMAR -e IOMMU and find /sys/kernel/iommu_groups/ -type l for the IOMMU groups inside the VM.
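A minimal sketch of the corresponding QEMU invocation for the L1 guest (the OVMF path, disk image, and GPU PCI address are assumptions - adjust them for your system):
# q35 chipset with split irqchip plus the emulated intel-iommu device;
# caching-mode=on is needed so vfio inside the L1 guest can track mappings.
qemu-system-x86_64 \
-machine q35,accel=kvm,kernel_irqchip=split \
-bios /usr/share/ovmf/OVMF.fd \
-device intel-iommu,intremap=on,caching-mode=on \
-device vfio-pci,host=01:00.0 \
-m 8G -smp 4 \
-drive file=l1-guest.qcow2,if=virtio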

Is it possible to run VM with ppc64le architecture on a host machine with x86_64 architecture?

I want to test some use cases which need to run on the 'ppc64le' architecture, but I don't have a host machine with ppc64le architecture.
My host system is of x86_64 architecture. Is it possible to run a VM with 'ppc64le' architecture on my x86_64 host machine?
Absolutely! The only caveat is that since you're not running natively, the virtual machine needs to emulate the target (ppc64le) instruction set. This can be much slower than running native instructions.
The way to do this will depend on which tools you're using to manage your virtual machine instances. For example, virt-manager allows you to select the architecture type when you're creating a new virtual machine. If you set this to ppc64el, you'll get a ppc64el machine. Other options (like disk and network devices) can be set just like native VMs.
If you're not using any specific VM management tools, the following invocation of qemu will get a ppc64el machine going easily:
# Use the pseries machine model with 4G of RAM, and attach the Ubuntu
# installer image as a virtual disk:
qemu-system-ppc64le \
-M pseries \
-m 4G \
-hda ubuntu-18.04-server-ppc64el.iso
Depending on your usage, you may want to use the following options too:
-nographic -serial pty to use a text console instead of an emulated graphics device. qemu will print the console pty on startup - something like /dev/pts/X. Run screen /dev/pts/X to access it.
-M powernv -bios skiboot.lid to use the non-virtualised ppc64el machine model, which is closer to current OpenPOWER hardware. The skiboot.lid firmware may be included in your distro's install of qemu.
-drive, -device and -netdev to configure virtual disks and networking. These work in the same manner as x86 VMs on qemu.
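For instance, a sketch combining these options (the disk image name and the user-mode network setup are placeholders, not from the original answer):
qemu-system-ppc64le \
-M pseries \
-m 4G \
-nographic -serial pty \
-drive file=ppc64el-disk.qcow2,if=virtio \
-netdev user,id=net0 \
-device virtio-net-pci,netdev=net0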
I hosted centos7-ppc64le on my x86_64 machine (OS RHEL-7). I used qemu + virt-install for that. First, build and install qemu:
wget https://download.qemu.org/qemu-3.1.0-rc1.tar.xz
tar xvJf qemu-3.1.0-rc1.tar.xz
cd qemu-3.1.0-rc1
./configure
make
make install
After installation, check that qemu-system-ppc64le is available from the command line. Then install virt-manager, virt-install, virt-viewer and libvirt for managing the VMs. Then I started the VM as:
virt-install --name centos7-ppc64le \
--disk centos7-ppc64le.qcow2 \
--machine pseries \
--arch ppc64 \
--vcpus 2 \
--cdrom CentOS-7-ppc64le-Minimal-1804.iso \
--memory 2048 \
--network=bridge:virbr0 \
--graphics vnc
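Once the install completes, a minimal sketch for managing the VM with libvirt (the domain name matches the --name above):
virsh list --all                  # check the VM's state
virsh console centos7-ppc64le     # attach to the guest console
virsh shutdown centos7-ppc64le    # clean shutdown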

How to get graphical GUI output and user touch / keyboard / mouse input in a full system gem5 simulation?

Hopefully with fs.py, but not necessarily.
For example, I have an x86 BIOS example that draws a line on the screen in QEMU, and I'd like to see that work on gem5 too.
Interested in all archs.
https://www.mail-archive.com/gem5-users@gem5.org/msg15455.html
arm
I have managed to get an image on the screen for ARM.
Here is a highly automated setup which does the following steps:
grab the ARM gem5 Linux kernel v4.15 fork from: https://gem5.googlesource.com/arm/linux/ and use the config file arch/arm/configs/gem5_defconfig from there.
The fork is required, I believe, for the commit "drm: Add component-aware simple encoder" (https://gem5.googlesource.com/arm/linux/), which adds the required option CONFIG_DRM_VIRT_ENCODER=y.
The other required option is CONFIG_DRM_HDLCD=y, which enables the HDLCD ARM IP that manages the display: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dui0541c/CHDBAIDI.html
run gem5 at 49f96e7b77925837aa5bc84d4c3453ab5f07408e
with a command of type:
M5_PATH='/data/git/linux-kernel-module-cheat/out/common/gem5/system' \
'/data/git/linux-kernel-module-cheat/out/common/gem5/build/ARM/gem5.opt' \
--debug-file=trace.txt \
-d '/data/git/linux-kernel-module-cheat/out/arm/gem5/m5out' \
'/data/git/linux-kernel-module-cheat/gem5/gem5/configs/example/fs.py' \
--disk-image='/data/git/linux-kernel-module-cheat/out/arm/buildroot/images/rootfs.ext2' \
--kernel='/data/git/linux-kernel-module-cheat/out/arm/buildroot/build/linux-custom/vmlinux' \
--mem-size='256MB' \
--num-cpus='1' \
--script='/data/git/linux-kernel-module-cheat/data/readfile' \
--command-line='earlyprintk=pl011,0x1c090000 console=ttyAMA0 lpj=19988480 rw loglevel=8 mem=256MB root=/dev/sda console_msg_format=syslog nokaslr norandmaps printk.devkmsg=on printk.time=y' \
--dtb-file='/data/git/linux-kernel-module-cheat/out/common/gem5/system/arm/dt/armv7_gem5_v1_1cpu.dtb' \
--machine-type=VExpress_GEM5_V1
connect to the VNC server gem5 provides with your favorite client.
On Ubuntu 18.04, I like:
sudo apt-get install vinagre
vinagre localhost:5900
The port shows up on a gem5 message of type:
system.vncserver: Listening for connections on port 5900
and gem5 uses the first free port starting from 5900.
Only raw connections are supported currently.
Outcome:
after a few seconds, the VNC client shows a little penguin on the screen! This is because our kernel was compiled with CONFIG_LOGO=y.
the latest frame gets dumped to system.framebuffer.png, and it also contains the little penguin.
the Linux kernel dmesg, on the telnet 3456 terminal, shows messages like:
[ 0.152755] [drm] found ARM HDLCD version r0p0
[ 0.152790] hdlcd 2b000000.hdlcd: bound virt-encoder (ops 0x80935f94)
[ 0.152795] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 0.152799] [drm] No driver support for vblank timestamp query.
[ 0.215179] Console: switching to colour frame buffer device 240x67
[ 0.230389] hdlcd 2b000000.hdlcd: fb0: frame buffer device
[ 0.230509] [drm] Initialized hdlcd 1.0.0 20151021 for 2b000000.hdlcd on minor 0
which shows that the HDLCD was enabled.
when we connect, gem5 shows on stdout:
info: VNC client attached
TODO: also get a shell working. Currently I only have the little penguin, and my keystrokes do nothing. I likely have to tweak the console= kernel parameter or set up a tty console there in init? CONFIG_FRAMEBUFFER_CONSOLE=y is set. Maybe the answer is contained in: https://www.kernel.org/doc/Documentation/fb/fbcon.txt
aarch64
The aarch64 gem5 defconfig does not come with all required options, e.g. CONFIG_DRM_HDLCD=y.
Adding the following options, either by hacking the config or with a config fragment, made it work:
CONFIG_DRM=y
CONFIG_DRM_HDLCD=y
CONFIG_DRM_VIRT_ENCODER=y
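A minimal sketch of applying them as a config fragment using the kernel's own merge script (the fragment filename is arbitrary):
cd linux
cat > gem5_fb.cfg <<'EOF'
CONFIG_DRM=y
CONFIG_DRM_HDLCD=y
CONFIG_DRM_VIRT_ENCODER=y
EOF
# Merge the fragment on top of the current .config, then fill in defaults
# for any newly exposed options.
./scripts/kconfig/merge_config.sh -m .config gem5_fb.cfg
make ARCH=arm64 olddefconfig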

How can a specific application be monitored by perf inside the kvm?

I have an application which I want to monitor via perf stat when it is running inside a KVM VM.
After Googling I found that perf kvm stat can do this. However, running the command produces an error:
sudo perf kvm stat record -p appPID
which just results in the usage help being printed:
usage: perf kvm stat record [<options>]
-p, --pid <pid> record events on existing process id
-t, --tid <tid> record events on existing thread id
-r, --realtime <n> collect data with this RT SCHED_FIFO priority
--no-buffering collect data without buffering
-a, --all-cpus system-wide collection from all CPUs
-C, --cpu <cpu> list of cpus to monitor
-c, --count <n> event period to sample
-o, --output <file> output file name
-i, --no-inherit child tasks do not inherit counters
-m, --mmap-pages <pages[,pages]>
number of mmap data pages and AUX area tracing mmap pages
-v, --verbose be more verbose (show counter open errors, etc)
-q, --quiet don't print any message
Does anyone know what the problem is?
Use KVM with vPMU (virtualization of PMU counters) - see https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/sect-Virtualization_Tuning_Optimization_Guide-Monitoring_Tools-vPMU.html ("2.2. Virtual Performance Monitoring Unit (vPMU)"). Then run perf record -p $pid and perf stat -p $pid inside the guest.
The host system has no knowledge (tables) of guest processes (they are managed by the guest kernel, which can be non-Linux, or a different version of Linux with an incompatible table format), so the host kernel can't profile one specific guest process. It can only profile the whole guest (and there is the perf kvm command for that - https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/chap-Virtualization_Tuning_Optimization_Guide-Monitoring_Tools.html#sect-Virtualization_Tuning_Optimization_Guide-Monitoring_Tools-perf_kvm)
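A minimal sketch of that workflow, assuming a libvirt-managed guest (the domain name mydomain and $appPID are placeholders):
# On the host: expose the host PMU to the guest by switching the CPU model
# to host-passthrough (inside `virsh edit`, set <cpu mode='host-passthrough'/>),
# then restart the guest.
virsh edit mydomain

# Inside the guest: the vPMU is now visible, so plain perf works per-process.
perf stat -p $appPID sleep 10

# On the host: you can still profile the guest as a whole.
perf kvm stat record -a sleep 10
perf kvm stat report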

mount lacie 2T on debian server

I have a server (OS debian), and to increase the disk capacity I wanted to plug in a LaCie disk (2T). Obviously debian didn't mount it automatically (like ubuntu does), which is not funny :). When I run df -h I get:
Filesystem Size Used Avail Use% Mounted on
rootfs 455G 7,6G 424G 2% /
udev 10M 0 10M 0% /dev
tmpfs 200M 776K 200M 1% /run
/dev/disk/by-uuid/2ae485ac-17db-4e86-8a47-2aca5aa6de28 455G 7,6G 424G 2% /
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 1,2G 72K 1,2G 1% /run/shm
As you can see, there isn't any 2T or 1.xT entry, so the disk isn't mounted.
I looked at similar problems on Google to see what others did to fix this, and figured out that I had to run cat /etc/fstab:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=2ae485ac-17db-4e86-8a47-2aca5aa6de28 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=c1759574-4b7c-428b-8427-22d3a420c9e4 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
And my LaCie does not show up in this file either.
How can I mount my USB disk in this case?
df -h only shows mounted devices.
Which filesystem type are you using? If NTFS, you first need to install ntfs-3g (which ubuntu provides in a basic install, but debian does not).
You can try to mount your device with the command mount -t <fs-type> /dev/<sdxx> </mount/point>
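A minimal sketch of the full sequence, assuming the disk shows up as /dev/sdb1 with an NTFS filesystem (check lsblk/blkid for the real device name and type):
# Identify the device and its filesystem type.
lsblk
sudo blkid /dev/sdb1

# NTFS needs the ntfs-3g driver on debian.
sudo apt-get install ntfs-3g

# Create a mount point and mount the partition.
sudo mkdir -p /mnt/lacie
sudo mount -t ntfs-3g /dev/sdb1 /mnt/lacie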