Explanation:
I created a virtual machine on my computer with VMware. Now I need to move it to a VPS in Frankfurt.
Using the function provided by VMware, I created a snapshot (a VMDK file).
On the hosting provider's website I must upload a RAW-format file.
To do the conversion I used qemu-img. As suggested on the website, I converted the VMDK to RAW using this command:
qemu-img convert -f vmdk -O raw image.vmdk image.img
It works, but I obtained a much larger file:
the VMDK image is 3.22GB, while the RAW image after the conversion is 56.3GB.
I created the virtual machine with a 60GB disk, so if I understand correctly:
The VMDK file contains only the data that I actually wrote on the virtual machine.
The RAW file is this big because it takes the full disk size that I specified when I created the VM.
If these two observations are right, the third question makes sense.
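(One way to sanity-check these two observations, assuming the file names used above: compare the apparent size of the raw file with the space it actually occupies on disk, since qemu-img normally writes the raw output as a sparse file on filesystems that support sparse files.)
ls -lh image.img          # apparent (virtual) size
du -h image.img           # space actually allocated on the host filesystem
qemu-img info image.img   # reports both the virtual size and the disk size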
Questions:
Is it possible to reduce the RAW image size (compress it)?
I still have to buy the VPS; if I can't compress the RAW image, do I have to buy one with 64GB of disk space?
Could it be an idea to resize the disk allocated to the virtual machine (e.g. to 10GB), export it again as VMDK, and then convert that to RAW?
EDIT 01/10/18:
After a lot of tests I figured out that it is not possible to shrink the virtual disk size; the best solution is to create another VM with a smaller disk and copy all the data onto it (using rsync). If it is important for you not to lose users, SSH/FTP configuration, etc., I don't know how to do that; I recreated all of the configuration by hand.
I have tried converting to different formats (VDI, HDD, RAW), and reducing the virtual disk size is not possible with any of them.
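For reference, the conversions were attempted with commands along these lines (the output format and file names vary per attempt):
qemu-img convert -f vmdk -O vdi image.vmdk image.vdi   # e.g. VirtualBox VDI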
It might be possible to convert the VMDK to VHD if the VMDK is just a simple/single partition/volume. If the VMDK contains multiple partitions, many tools will fail during the conversion.
If it's convertible to VHD, then Windows has Disk Management, which can shrink disks/partitions.
You could use this to shrink the VHD and then perhaps convert it back to VMDK.
I checked: there is an option available called "Shrink Volume".
This might just be the thing you are looking for.
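If you want to try that route, a possible sketch (qemu-img can read and write VHD under the format name "vpc"; image names are just examples):
qemu-img convert -f vmdk -O vpc image.vmdk image.vhd          # VMDK -> VHD
# shrink the volume inside image.vhd with Windows Disk Management
qemu-img convert -f vpc -O vmdk image.vhd image-small.vmdk    # VHD -> VMDK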
I have attached a disk to a VM using the Acropolis command
acli vm.disk_create Vm clone_from_nfs_file=filepath.raw bus=scsi
and wrote data to the disk using dd, then detached it.
If I attach the disk again, I am unable to see the data that was written to it.
Please help me solve this problem.
When you create a vmdisk in this way, you're creating a copy-on-write clone of the original file. All writes go to the clone, not the original file. If you want to access the cloned file, it is located on NFS here:
/$container_name/.acropolis/vmdisk/$vmdisk_uuid
You can determine the container ID and vmdisk UUID by looking at the vm descriptor using the vm.get command.
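For example, something along these lines (the VM name is just a placeholder, and the exact field names in the output may differ between AOS versions):
acli vm.get Vm
# look for the disk entry: it shows the container and the vmdisk UUID, which map to
# /$container_name/.acropolis/vmdisk/$vmdisk_uuid on NFS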
My site's database grew so large that it can't even pack itself anymore. When I try to pack it, it says there is insufficient space.
I've tried deleting some files on the server, but to no avail; the database itself takes up the whole disk.
How do I continue from this point? The website is basically stuck, since it can't add more data while the disk is full.
Additional details
The server is on Ubuntu 11.10
Copy the Data.fs file to another machine with more disk space, and pack it there. Then copy the smaller file back to the server, bring the site down, and move the packed version into place.
Depending on how much downtime you are willing to tolerate, you could remove the large unpacked Data.fs file first, then copy the replacement over.
If you are using a blobstorage with your site, you'll have to include that when copying across your ZODB.
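A rough sketch of that workflow (host names and paths are placeholders; the pack step assumes a plain FileStorage and uses the stock ZODB API):
# on the machine with spare disk space, fetch a copy of the database
scp user@server:/path/to/var/filestorage/Data.fs .
# pack the copy (requires ZODB to be installed locally)
python -c "from ZODB.FileStorage import FileStorage; from ZODB.DB import DB; db = DB(FileStorage('Data.fs')); db.pack(); db.close()"
# stop the site, move the old Data.fs aside (or delete it to free space), then copy the packed file back
scp Data.fs user@server:/path/to/var/filestorage/Data.fs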
After a few weeks, I returned to this problem and finally fixed it.
It is similar to @MartijnPieters' idea, but I approached the problem differently.
My Zope instance was on /dev/sda6, and that filesystem was full. I simply increased its size from 27G to 60G and THEN packed my Data.fs file.
I used GParted on my machine, but only because /dev/sda6 is a native Linux filesystem. If you're running LVM, you might need to grow the logical volume and then use resize2fs instead.
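If the filesystem did sit on LVM, a minimal sketch of growing it would look something like this (the volume group and logical volume names are hypothetical, and it assumes an ext3/ext4 filesystem):
lvextend -L +33G /dev/vg0/zope   # grow the logical volume by 33G
resize2fs /dev/vg0/zope          # grow the ext filesystem to fill it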
I'm trying to set up a simple backup solution for my wife's computer. I have a volume on my server upstairs mounted locally using OS X automount, so it should just be a simple
rsync -a sourceDir targetDir
When I look at the files it syncs over, though, all metadata is lost on the JPG files. The created date is preserved on each file and the modified date ends up being the timestamp of when the rsync runs, but I can't imagine why EXIF data (device, exposure, etc.) would disappear when it should just be a straight file copy. I'm hoping someone has run into this before and can shed some light on it.
This can't be an rsync problem; there must be something else going on. rsync does a straight binary copy from source to destination, so the most probable explanation is a simple user error (e.g. you copied from the wrong source directory, the source files were already missing EXIF data, and so on).
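One quick way to test that is to compare a source file and its copy byte for byte (the file name here is just an example); if they match, the EXIF data is still in the copy and something else is hiding or stripping it:
md5 sourceDir/IMG_0001.jpg targetDir/IMG_0001.jpg
cmp sourceDir/IMG_0001.jpg targetDir/IMG_0001.jpg   # no output means the files are identical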
For normal copies on reliable hardware, rsync is without doubt the best tool for the job, especially considering the huge variety of filesystems it has to cope with.
There are some corner cases where rsync may not behave as it should, at least with default parameters. For example, right now I'm investigating an issue where, while copying to a not-so-reliable USB drive, rsync happily continued copying even after the drive disconnected from USB and the device disappeared.
I created an encrypted disk image in Mac OS X Mountain Lion 10.8 (using Disk Utility or the hdiutil command). I want to read a file on that disk image, but I can't mount it, because as soon as I mount it another app can read it before I unmount it. Please help me. (The hdiutil command is documented here: http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/hdiutil.1.htm)
To do this you would have to read and decrypt the dmg file yourself and then interpret the HFS file system inside the disk image to get at your file. It's not easy but certainly possible. Take a look at the HFSExplorer source code.
But I wouldn't put too much energy into this. Either use a different file format that is easier to read for storing your encrypted data, or go with pajp's solution. And remember, no matter what you do, once you decrypt your file the user will be able to get at the decrypted data. You can make this harder, but you can't prevent it.
I think the only reasonable way would be to mount the disk image. To do it securely, you can use the -mountrandom and -nobrowse options to hdiutil attach. This will mount the disk image in a randomized path name, and prevent it from being visible in the UI.
hdiutil attach -mountrandom /tmp -nobrowse /tmp/secret_image.dmg
Assuming the disk image has exactly one HFS partition, you can parse out the randomized mount path like this:
hdiutil attach -mountrandom /tmp -nobrowse /tmp/secret.dmg | awk '$2 ~ /Apple_HFS/ { print $3 }'
Or you can use the -plist option to get the output in plist XML format, which can be parsed with XML tools or converted to JSON using plutil -convert json.
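For example, something along these lines should work (treat it as a sketch; the trailing "-" tells plutil to read the plist from stdin and "-o -" writes the JSON to stdout):
hdiutil attach -mountrandom /tmp -nobrowse -plist /tmp/secret.dmg | plutil -convert json -o - -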
Of course, an attacker that has root access can still monitor for new mounts and intercept your disk image before you have a chance to unmount it, but if your attacker has root then pretty much all bets are off.
Does anyone know what APIs Apple is using for its Get Info panel to determine free space in Lion? All of the code I have tried to get the same Available Space that Apple reports is failing; even Quick Look isn't displaying the same space that Get Info shows. This seems to happen after I delete a bunch of files and then try to read the available space.
When I use NSFileManager -> NSFileSystemFreeSize I get 42918273024 bytes
When I use NSURL -> NSURLVolumeAvailableCapacityKey I get 42918273024 bytes
When I use statfs -> buffer.f_bsize * buffer.f_bfree I get 43180417024 bytes
statfs gets similar results to Quick Look, but how do I match Get Info?
You are probably seeing a result of local Time Machine snapshot backups. The following quotes are from the Apple Support article OS X Lion: About Time Machine's "local snapshots" on portable Macs:
Time Machine in OS X Lion includes a new feature called "local snapshots" that keeps copies of files you create, modify or delete on your internal disk. Local snapshots complement regular Time Machine backups (that are stored on your external disk or Time Capsule), giving you a "safety net" for times when you might be away from your external backup disk or Time Capsule and accidentally delete a file.
The article finishes by saying:
Note: You may notice a difference in available space statistics between Disk Utility, Finder, and Get Info inspectors. This is expected and can be safely ignored. The Finder displays the available space on the disk without accounting for the local snapshots, because local snapshots will surrender their disk space if needed.
It looks like all the programmatic methods of measuring available disk space that you have tried give the true free space value on the disk, not the space that can be made available by removing local Time Machine backups. I doubt command line tools like df have been made aware of local Time Machine backups either.
This is a bit of a workaround, not a real API, but the good old Unix command df -H will get you the same information as the Get Info panel; you just need to select the line for your disk and parse the output.
The df program has many other options that you might want to explore. In this particular case the -H switch tells the program to spit out the numbers in human readable format and to use base 10 sizes.
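For instance, a minimal sketch that pulls just the "available" column for the root volume (column positions can shift if the device name contains spaces, so treat this as a starting point):
df -H / | awk 'NR==2 { print $4 }'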
Take a look here on how to run command lines from within an app and get the output inside your program: Execute a terminal command from a Cocoa app
I believe the underpinnings of both df and the Get Info panel are very likely the same thing.