Proxmox virtual disk size change - resize

I have a problem with a disk resize.
I changed the disk size in Proxmox, but the new size does not show up in the virtual machine.
When I run qemu-img info vm-100-disk-1.qcow2, I get this result:
image: vm-100-disk-1.qcow2
file format: qcow2
virtual-size: 491G (527207235584 bytes)
disk size: 161G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
What is the virtual size? It seems that as I tried to adjust the size, I increased the size of the virtual disk to more than it really has.
I need this:
virtual size: 250G
disk size: 250G
Can you help me with this problem?
Thank you in advance.
PS: Debian wheezy (updated)

Qcow2 disk images are not preallocated, so the image file is only as large as the data it contains.
You did resize the disk successfully to 491GB (virtual-size), but the disk contains only 161GB of data.
After you resize a virtual disk, you also need to extend the partitions and/or filesystems inside the virtual disk to make use of the newly extended disk space.
You wrote that you only need to extend your disk to 250GB, but instead you did (multiple) resizes up to the current 491GB. If you have not yet resized the partitions and filesystems, you can still shrink the image. The command below basically just cuts the file off, so be sure you didn't make any use of the newly extended disk size.
In the Monitor tab of the VM in the Proxmox web interface, you can execute this command:
block_resize drive-ide0 250G
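
If you decide to keep the enlarged disk instead, the remaining work happens inside the guest. Here is a minimal sketch of the guest-side steps, assuming the disk shows up as /dev/sda (it is on ide0) with the data on an ext4 filesystem on /dev/sda1, and that growpart (from the cloud-utils package) is available; device names are assumptions and will differ per setup:

growpart /dev/sda 1    # grow partition 1 to fill the enlarged disk (a reboot may be needed on older kernels before the change is visible)
resize2fs /dev/sda1    # grow the ext4 filesystem online to fill the partition
df -h                  # verify the filesystem now reports the new size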

Updating disk size in Compute Engine does not update size in VM instance

I had a 50 GB disk for my VM instance, so I went to the Disks page in Compute Engine and changed the size to 100 GB.
I have restarted my server twice now and it is still showing the disk as only 50 GB.
Is there some form of delay associated with changing the disk size?
(Screenshots of the Google Cloud Console and of the output on the server were attached here.)
Changing the size of the persistent disks attached to your Compute Engine VM instance doesn't make the extra space usable inside the instance until you perform some additional steps. These steps change the partitioning of the disks.
Recipes for both Linux and Windows can be found in the documentation:
Resizing the file system and partitions on a zonal persistent disk
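For a typical Linux instance, the recipe boils down to growing the partition and then the filesystem. A rough sketch, assuming the boot disk is /dev/sda with an ext4 root filesystem on the first partition (device names and filesystem type are assumptions; the linked documentation covers the variants):

sudo growpart /dev/sda 1   # extend partition 1 to the new 100 GB disk size
sudo resize2fs /dev/sda1   # extend the ext4 filesystem into the grown partition
df -h /                    # the root filesystem should now report the new size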

Create a Hyper-V disk from an existing folder

I want to create a virtual disk to attach to a Hyper-V VM. The disk will be used to store a lot of files (around eight GB worth).
I don't want to waste time creating the disk, copying all eight gigabytes of files, and then attaching the disk to the VM.
Is there a way to create a disk image and have its contents be a folder I specify?
You can create a VHD from an existing partition, but not from a folder, using Sysinternals Disk2vhd:
https://technet.microsoft.com/en-us/sysinternals/ee656415.aspx
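If the files already sit on their own partition, Disk2vhd can also be driven from the command line. A sketch, where the drive letter and output path are placeholders for your own values (check the usage text shipped with the tool):

disk2vhd D: C:\vhds\files.vhd

The resulting VHD can then be attached to the VM through Hyper-V Manager.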

VB.NET: To Monitor Disk Usage

I was wondering whether disk usage monitoring is programmatically possible in VB.NET.
I will be using this to monitor our system clones: at some point the clones in the queue fail because of a lack of disk space, so instead of always scrambling to see what we can delete, I want to create an app that monitors disk space usage against a threshold; once the threshold is reached, it will send an automatic email.
Thanks!
Available disk space can be found with the DriveInfo class:
Console.WriteLine(New DriveInfo("C:").AvailableFreeSpace)
However, DriveInfo only accepts drive letters, so for UNC paths it may be necessary to P/Invoke GetDiskFreeSpaceEx (kernel32).
See also Get free disk space

TempDB Initial Size resetting even after change

One of my tempdb databases has a data file size of 60 GB. I shrank the file down to 2 GB, then set the initial size to 2 GB. The data file shrink is successful. When I go back into the database properties for tempdb, it shows an initial size of 60000 MB again. I've tried setting it to 4 GB too, and that still resets to 60000 MB. This is very frustrating, since every time the service restarts, that tempdb data file is set to 60 GB, using up a lot of space.
Any ideas?
How did you "shrink" the file size? If there are 60 GB worth of entries, the file should "auto-size" to allow room for all entries.

EBS Volume from Ubuntu to RedHat

I would like to take an EBS volume with data on it that I've been working with in an Ubuntu AMI and use it in a RedHat 6 AMI. The issue I'm having is that RedHat says the volume does not have a valid partition table. This is the fdisk output for the unmounted volume:
Disk /dev/xvdk: 901.9 GB, 901875499008 bytes
255 heads, 63 sectors/track, 109646 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/xvdk doesn't contain a valid partition table
Interestingly, the volume isn't actually 901.9 GB but 300 GB; I don't know if that means anything. I am very concerned about possibly erasing the data on the volume by accident. Can anyone give me some pointers for formatting the volume for RedHat without deleting its contents?
I also just checked that the volume works in my Ubuntu instance and it definitely does.
I'm not able to advise on the partition issue as such, other than stating that you definitely neither need nor want to format the volume, because formatting is indeed a (potentially) destructive operation. My best guess is that RedHat isn't able to identify the file system currently in use on the EBS volume; quite possibly the filesystem was created directly on the whole device, without a partition table, which would explain the fdisk message while the volume still works fine in Ubuntu.
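A non-destructive way to check this from the RedHat instance, assuming the volume is still attached as /dev/xvdk as in the fdisk output above:

file -s /dev/xvdk             # reports e.g. "ext4 filesystem data" if mkfs was run on the whole device
blkid /dev/xvdk               # prints the filesystem type and UUID, if any
mount -o ro /dev/xvdk /mnt    # if a filesystem is found, mount it read-only for a safe first look

If these report a filesystem on the bare device, the volume simply has no partition table, and it can be mounted directly without any formatting.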
However, to ease experimenting and to gain some peace of mind, you should get acquainted with one of the major Amazon EBS features, namely the ability to create point-in-time snapshots of volumes, which are persisted to Amazon S3:
These snapshots can be used as the starting point for new Amazon EBS
volumes, and protect data for long-term durability. The same snapshot
can be used to instantiate as many volumes as you wish.
This is detailed further down in section Amazon EBS Snapshots:
Snapshots can also be used to instantiate multiple new volumes, expand
the size of a volume or move volumes across Availability Zones. When a
new volume is created, there is the option to create it based on an
existing Amazon S3 snapshot. In that scenario, the new volume begins
as an exact replica of the original volume. [...] [emphasis mine]
Therefore you can (and actually should) always start experiments or configuration changes like the one you are about to perform by at least snapshotting the volume, which will allow you to create a new one from that point in time in case things go bad. Better still, create a new volume from that snapshot immediately and perform the task at hand on the copy.
You can create snapshots and new volumes from snapshots via the AWS Management Console, as usual there are respective APIs available as well for automation purposes (see API and Command Overview) - see Creating an Amazon EBS Snapshot for details.
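For automation, both steps are also available from the command line. A sketch using the AWS CLI, where the volume ID, snapshot ID and Availability Zone are placeholders:

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "backup before RedHat migration"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a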
Good luck!