KVM virtual machine cannot delete a snapshot

I want to expand the disk capacity of the win7 virtual machine, but qemu-img tells me the image has snapshots and cannot be resized, and the snapshots cannot be deleted.
[root@jwf home]# qemu-img resize win7.qcow2 +30G
qemu-img: Can't resize an image which has snapshots
qemu-img: This image does not support resize
[root@jwf home]#
Use the qemu-img command to view the snapshot information.
[root@jwf home]# qemu-img info win7.qcow2
image: win7.qcow2
file format: qcow2
virtual size: 40G (42949672960 bytes)
disk size: 64G
cluster_size: 65536
Snapshot list:
ID   TAG     VM SIZE   DATE                  VM CLOCK
1    clean         0   2019-10-18 14:41:10   00:00:00.000
Format specific information:
compat: 1.1
lazy refcounts: true
But I can't see the snapshot information using the virsh command.
[root@jwf home]# virsh snapshot-list win7
Name Creation Time State
------------------------------------------------------------
[root@jwf home]#
Unable to delete the snapshot:
[root@jwf home]# virsh snapshot-delete --domain win7 --snapshotname clean
error: Domain snapshot not found: no domain snapshot with matching name 'clean'
error: Domain snapshot not found: no domain snapshot with matching name 'clean'
[root@jwf home]# virsh snapshot-delete --domain win7 --snapshotname 1
error: Domain snapshot not found: no domain snapshot with matching name '1'
error: Domain snapshot not found: no domain snapshot with matching name '1'
No snapshot information is displayed in virt-manager either.
I imported this virtual machine myself; there is no snapshot configuration file:
[root@jwf home]#
[root@jwf home]# ls /var/lib/libvirt/qemu/snapshot/win7/
[root@jwf home]#

Sometimes virsh and qemu see different things. This is occasionally because one of them is blind, and less frequently because one of them is hallucinating. In this case I believe virsh is blind. Try this:
qemu-img info disk_image
qemu-img snapshot -d snapshot_id disk_image
qemu-img info disk_image
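Applied to the image from the question (snapshot ID 1, tag "clean"), and with the VM shut down first, that would look roughly like this:
[root@jwf home]# qemu-img snapshot -d clean win7.qcow2
[root@jwf home]# qemu-img info win7.qcow2
[root@jwf home]# qemu-img resize win7.qcow2 +30G
qemu-img snapshot -d accepts either the snapshot ID or its tag, so -d 1 should work just as well; once the snapshot list is empty, the resize should no longer be refused.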

Related

Ubuntu Server Backup and Restore via tar

I'm trying to learn how to back up and restore my Ubuntu Server via tar so I know that I have a safe system. After I untar and reboot, I have several issues, but they seem to be caused by a read-only file system. The source and destination server are both Ubuntu Server on the same version, 18.04.5 LTS. The source server is a VPS that has 6 GB RAM and 4 vCPUs. The destination server is a VM on my FreeNAS machine with 6 GB RAM and 2 vCPUs.
The primary applications that need to work are my Graylog server and Nagios server. I've mostly followed the instructions at Ubuntu.
First, my tar command is:
sudo tar -c --use-compress-program=pigz -f backup.tar.gz --exclude=/backup.tar.gz --exclude=/dev --exclude=/usr --exclude=/sbin --exclude=/proc --exclude=/sys --exclude=/tmp --exclude=/run --exclude=/mnt --exclude=/media --exclude=/lost+found --exclude=/home/*/.cache --exclude=/home/*/.gvfs --exclude=/home/*/.local/share/Trash --exclude=/var/log --exclude=/var/cache/apt/archives --exclude=/usr/src/linux-headers* --one-file-system /
I use pigz to utilize the VPS's 4 vCPUs so the backup takes less time. I transfer this to my VM, which has a fresh copy of Ubuntu Server 18.04.5, and untar with:
sudo tar -xvpzf backup.tar.gz -C / --numeric-owner
After I reboot, I get the following as soon as I boot:
Unable to setup logging. [Errno 30] Read-only file system: '/var/log/landscape/sysinfo.log'
run-parts: /etc/update-motd.d/50-landscape-sysinfo exited with return code 1
mktemp: failed to create file via template '/var/lib/update-notifier/tmp.XXXXXXXXXX': Read-only file system
run-parts: /etc/update-motd.d/95-hwe-eol exited with return code 1
/usr/lib/update-notifier/update-motd-fsck-at-reboot: 33: /usr/lib/update-motd-fsck-at-reboot: cannot create /var/lib/update-notifier/fsck-at-reboot: Read-only file system
I do see that some parts of the system work like the original source: my SSH port changed, the hostname changed, etc. But I get the errors above, and my Graylog and Nagios servers do not work.
So I'm wondering where I went wrong in my process and any help would be appreciated. The source is a live server with backups so I'm safe there. I'm just making sure I have my ducks in a row for the future.
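One rough way to narrow this down is to check whether the root filesystem really ended up mounted read-only and why; the commands below are a generic sketch, not specific to this setup:
$ findmnt -no OPTIONS /
$ dmesg | grep -iE 'remount|read-only'
If ro shows up in the mount options, the kernel most likely remounted / read-only after hitting filesystem errors during or after the restore.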

podman CentOS 8 not starting container as non-root user

I am trying to start a busybox container as a non-root user on a CentOS 8 server, but it gives me the message below.
What is the correct way to start the container as non-root user?
podman run -it --name busy docker.io/library/busybox sh
Trying to pull docker.io/library/busybox...
Getting image source signatures
Copying blob bdbbaa22dec6 done
Copying config 6d5fcfe5ff done
Writing manifest to image destination
Storing signatures
ERRO[0003] Error pulling image ref //busybox:latest: Error committing the finished image: error adding layer with blob "sha256:bdbbaa22dec6b7fe23106d2c1b1f43d9598cd8fc33706cc27c1d938ecd5bffc7": Error processing tar file(exit status 1): there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Failed
Error: unable to pull docker.io/library/busybox: unable to pull image: Error committing the finished image: error adding layer with blob "sha256:bdbbaa22dec6b7fe23106d2c1b1f43d9598cd8fc33706cc27c1d938ecd5bffc7": Error processing tar file(exit status 1): there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Yes, the command you ran is correct. On my Fedora 31 system it works just fine.
[testuser@fedora31 ~]$ podman run -it --name busy docker.io/library/busybox sh
Trying to pull docker.io/library/busybox...
Getting image source signatures
Copying blob bdbbaa22dec6 done
Copying config 6d5fcfe5ff done
Writing manifest to image destination
Storing signatures
/ # exit
[testuser@fedora31 ~]$ podman --version
podman version 1.8.0
[testuser@fedora31 ~]$
The flag --rm is also often useful.
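For example, to get a throwaway container that is removed as soon as the shell exits:
[testuser@fedora31 ~]$ podman run --rm -it docker.io/library/busybox sh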
It seems the error you get is related to the UID mapping.
Here is some information regarding running "rootless" podman:
https://github.com/containers/libpod/blob/master/docs/tutorials/rootless_tutorial.md
What also might be interesting is this quote from https://github.com/containers/libpod/blob/master/rootless.md:
"Does not work on NFS or parallel filesystem homedirs"
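A common cause of the "not enough IDs available in the namespace" message is a missing or too-small subordinate UID/GID range for your user. As a rough sketch (the range below is just an example; make sure it does not overlap another user's), you could check and, if necessary, add a 65536-wide range and let podman pick it up:
$ grep $USER /etc/subuid /etc/subgid
$ echo "$USER:100000:65536" | sudo tee -a /etc/subuid
$ echo "$USER:100000:65536" | sudo tee -a /etc/subgid
$ podman system migrate
(podman system migrate needs a reasonably recent podman; otherwise log out and back in.) The range needs to cover at least 65536 IDs because the busybox layer tries to chown a path to 65534:65534 inside the user namespace, as the error message shows.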

Redis doesn't start

I have Redis server 3.0.6 on Ubuntu 16.04.
My config file:
tcp-keepalive 60
#bind 127.0.0.1
requirepass qwerty
maxmemory-policy noeviction
appendonly yes
appendfilename redis-test.aof
and the Redis server doesn't start:
Can't open the append-only file: Read-only file system
The error message is pretty clear: The file system on which redis-test.aof resides is mounted as read-only. The whole purpose of this file is to write changes to disk. So the disk must be writable.
Check if you used the ro option while mounting the drive. Run
$ mount
to list all the mountpoints. Check the one on which you want your aof file to reside.
To remount the disk as read-write, use the following command:
$ sudo mount -o remount,rw /partition/identifier /mount/point
If that doesn't help, check the system logs for file system errors. To correct these, you will need to run fsck.
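For example, reusing the placeholders from above:
$ dmesg | grep -i 'read-only'
$ sudo umount /partition/identifier
$ sudo fsck -f /partition/identifier
$ sudo mount /partition/identifier /mount/point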

OpenStack's virtual nodes permanently in paused state

Recently I deployed Red Hat OpenStack 10 with Jenkins. I've found that my running nodes become paused after a while.
virsh list stdout:
 Id    Name            State
------------------------------------
 1     undercloud-0    paused
 2     compute-0       paused
 3     controller-0    paused
I tried to start or reboot the VMs, but it didn't help; the machines are still in the paused state. Are there any obvious things I might have missed?
I found that free space runs out after OpenStack has been running for some time.
The RHEL machines had a small / partition and a quite big /home partition. I found the VM images stored in /var and moved them into /home.
The steps are:
Stop all running VMs (virsh destroy forces a power-off; it does not delete the VM or its disk)
# for i in $(virsh list --name); do virsh destroy $i; done
Create new directory and move images there
# mkdir /home/_images
# mv /var/lib/libvirt/images/* /home/_images
Remove the old directory with images and create a symlink to the new directory.
# rmdir /var/lib/libvirt/images
# ln -s /home/_images /var/lib/libvirt/images
Start the VMs again (or reboot the machine); an ideal order is 1. undercloud-0, 2. controller-0, 3. compute-x nodes
# for i in $(virsh list --all --name); do virsh start $i; done
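If you prefer to honour that order explicitly instead of looping, it would be roughly:
# virsh start undercloud-0
# virsh start controller-0
# virsh start compute-0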

Changing password of a Virtual Machine

I have some virtual machines. I want to write a script which automates the following process:
It mounts the virtual machine's disk (with Linux as the OS) to a location, say /mnt/image
It modifies /etc/passwd (or the equivalent file) to change the password of the user
It unmounts the virtual machine
Since I am using libvirt, I have qcow2 images of the virtual machines. To mount an image on my Ubuntu host, I am using the nbd module. Here are the commands that I am trying:
modprobe nbd max_part=63
qemu-nbd -c /dev/nbd0 image.qcow2
mount /dev/nbd0p1 /mnt/image
It gives me the error:
mount: special device /dev/nbd0p1 does not exist
When I replace nbd0p1 with nbd0 I get the following error (though I am not sure what I am trying to do by this):
mount: you must specify the filesystem type
Any suggestions as to what the problem could be?
Check that /sys/module/nbd/parameters/max_part has the expected value. If it's 0 or too low, the partitions /dev/nbd0p1, etc. will not be made available by the kernel. This can happen if the nbd kernel module was already loaded (with a different max_part parameter) when you ran modprobe.
You can fix that by unloading the module and modprobing it again.
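A minimal sketch of that check and reload, assuming nothing else is still using the nbd devices:
cat /sys/module/nbd/parameters/max_part
qemu-nbd -d /dev/nbd0
rmmod nbd
modprobe nbd max_part=63
qemu-nbd -c /dev/nbd0 image.qcow2
mount /dev/nbd0p1 /mnt/image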
[Not a direct answer to the question, but an alternative]
You can try converting the qcow2 image to raw and then mounting the raw image.
convert:
qemu-img convert -f qcow2 image.qcow2 -O raw image_raw.raw
mount:
sudo losetup /dev/loop0 image_raw.raw
sudo kpartx -a /dev/loop0
sudo mount /dev/mapper/loop0p3 /mnt/image
sudo mount /dev/mapper/loop0p2 /mnt/image/boot
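To undo this later, the reverse would be roughly:
sudo umount /mnt/image/boot
sudo umount /mnt/image
sudo kpartx -d /dev/loop0
sudo losetup -d /dev/loop0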
Could it be that the partition isn't in the first slot in the MBR, or an extended partition is in use? Check to see if any other nbdXpY device nodes are being created, or run fdisk on it and print the partition table.
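For example:
fdisk -l /dev/nbd0
ls /dev/nbd0*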
I stumbled on the same issue and the same error, but with a VDI image:
qemu-nbd -c /dev/nbd0 image.vdi
For me the solution was simple: I just changed nbd0 to nbd1:
qemu-nbd -c /dev/nbd1 image.vdi
and then:
sudo mount /dev/nbd1p1 /media/eddie/virtual
worked.
Please leave a comment if this worked for you as well, and on what type of image.