Create a Hyper-V disk from an existing folder - hyper-v

I want to create a virtual disk to attach to a Hyper-V VM. This disk will be used to store a lot of files (around eight GB's worth).
I don't want to waste time creating the disk, copying all eight gigabytes of files, and then attaching the disk to the VM.
Is there a way to create a disk image and have its contents be a folder I specify?

You can create a VHD only from a partition, not from a folder.
https://technet.microsoft.com/en-us/sysinternals/ee656415.aspx
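If the goal is simply to end up with a VHD that contains the folder's contents, a common workaround is to create an empty virtual disk, mount it, copy the folder in, and then attach it to the VM. Below is a minimal sketch of that workflow; it assumes the Hyper-V PowerShell cmdlets (New-VHD, Mount-VHD, Dismount-VHD) are available and that the script runs elevated, and the paths, size and drive letter are placeholders.

    # Sketch: create an empty VHDX, mount it, copy a folder into it, then detach it.
    import subprocess

    def ps(command: str) -> None:
        """Run a PowerShell command and raise if it fails."""
        subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

    source_folder = r"D:\data"        # folder whose contents should end up on the disk
    vhd_path = r"D:\data-disk.vhdx"   # new virtual disk file (placeholder path)
    drive_letter = "X"                # drive letter to use while copying

    # Create a dynamically expanding 20 GB VHDX (leaving headroom above the ~8 GB of data),
    # mount it, initialise and format it, and give it a drive letter.
    ps(f"New-VHD -Path '{vhd_path}' -SizeBytes 20GB -Dynamic")
    ps(
        f"Mount-VHD -Path '{vhd_path}' -Passthru | "
        "Initialize-Disk -PartitionStyle GPT -Passthru | "
        f"New-Partition -DriveLetter {drive_letter} -UseMaximumSize | "
        "Format-Volume -FileSystem NTFS -Confirm:$false"
    )

    # Copy the folder contents onto the mounted disk. robocopy uses non-zero exit
    # codes even on success, so its return code is not checked here.
    subprocess.run(["robocopy", source_folder, f"{drive_letter}:\\", "/E"])

    # Detach the VHDX so it can be attached to the VM afterwards.
    ps(f"Dismount-VHD -Path '{vhd_path}'")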

System drive incremental clone

I have a question that will probably get a "-1" rating.
Once a day I back up my data drive to a clone drive with SyncBack, which compares both disks and mirrors the copy against the original, just updating it by adding/deleting files. Easy and fast, and the backup disk is a bit-to-bit clone of the original disk.
Although it's not exactly that, one can call this an "incremental" backup.
I would like to know if it's possible to do the same with my system disk, i.e. to update a copy of the system drive once a day in order to maintain a bit-to-bit clone that would be immediately bootable. Not by re-copying the whole system drive each time, but, as with my data drive, just by adding/deleting once a day the small amount of data that has changed.
Apart from building a RAID 1 that includes my system disk, which means keeping the RAID running permanently, is there another way?
I didn't find any application that can bit-to-bit clone a system disk in such an "incremental" way.
I'm finally answering my own question, in case someone else wonders the same thing.
There is no app that can incrementally update a bit-to-bit clone drive. Such apps make disk images, which are not immediately bootable and first need to be restored (though that is still quicker than reinstalling Windows).
The only way to update a ready-to-use backup system drive is to totally re-clone it regularly, for instance with a standalone cloning dock station.
I generously give myself a "+1" rating.

Updating disk size in Compute Engine does not update size in VM instance

I had a 50 GB disk for my VM instance, so I went to the Disks page in Compute Engine and changed the size to 100 GB.
I have restarted my server twice now and it is still showing the disk as only 50 GB.
Is there some form of delay associated with changing the disk size?
Here is an image of what it looks like on the Google Cloud Console
Here is an image of what it says on the server
Changing the size of the persistent disk attached to your Compute Engine VM instance doesn't make the extra space available to the operating system until you perform some additional steps. These steps change the partitioning of the disk.
Recipes for both Linux and Windows can be found in the documentation:
Resizing the file system and partitions on a zonal persistent disk
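For a typical Linux guest, the recipe boils down to growing the partition and then the file system it contains. The following is only a sketch, assuming an ext4 root file system on /dev/sda1 and the cloud-utils growpart tool; device names and tooling vary between images.

    # Sketch: make the extra space of a resized persistent disk visible to the OS.
    # Assumes an ext4 root file system on /dev/sda1 (adjust for your VM) and growpart.
    import subprocess

    def run(*cmd: str) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("sudo", "growpart", "/dev/sda", "1")   # grow partition 1 to fill the larger disk
    run("sudo", "resize2fs", "/dev/sda1")      # grow the ext4 file system into the new space
    run("df", "-h", "/")                       # confirm the new size is now reported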

On Google Compute Engine (GCE), where are snapshots stored?

I've made two snapshots using the GCE console. I can see them there on the console but cannot find them on my disks. Where are they stored? If something should corrupt one of my persistent disks, will the snapshots still be available? If they're not stored on the persistent disk, will I be charged extra for snapshot storage?
GCE adds a level of abstraction: disks are separate from the VM instance. This allows you to attach a disk to several instances or restore snapshots to other VMs.
If your VM or disk becomes corrupt, the snapshots are safely stored elsewhere. As for additional costs, keep in mind that snapshots store only the data that has changed since the last snapshot, so the space needed for 7 snapshots is often no more than 30% greater than for a single snapshot. You will be charged for the space they use, but the costs are quite low from what I have observed (I was charged $0.09 for a 3.5 GB snapshot over one month).
The snapshots are stored separately on Google's servers, but are not attached to or part of your VM. You can create a new disk from an existing snapshot, but Google manages the internal storage and format of the snapshots.
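As an illustration of that workflow, the sketch below takes a snapshot of a disk and later restores it into a brand-new disk using the gcloud CLI; the zone, disk and snapshot names are placeholders.

    # Sketch: snapshot a persistent disk, then restore it to a new disk with gcloud.
    # All resource names and the zone below are placeholders.
    import subprocess

    ZONE = "us-central1-a"

    def gcloud(*args: str) -> None:
        subprocess.run(["gcloud", "compute", *args], check=True)

    # Take a snapshot of an existing disk; Google stores it independently of the
    # disk and of the VM the disk is attached to.
    gcloud("disks", "snapshot", "my-disk", "--snapshot-names=my-snapshot", f"--zone={ZONE}")

    # Later (for example after the original disk is lost or corrupted), create a
    # completely new disk from that snapshot and attach it to any instance.
    gcloud("disks", "create", "my-restored-disk", "--source-snapshot=my-snapshot", f"--zone={ZONE}")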

What are the disadvantages of storing images on a file system?

I have a few questions about storing files on the operating system's file system. These may or may not be valid worries, but I don't want to go on without knowing.
What will happen when the folder they are stored in holds a very large amount of data (1 million images of up to 2 MB each)? Will this affect RAM and make the OS go slow?
What security risks does this open up as far as viruses are concerned?
Would scaling just be a matter of transferring the files from that machine to a new machine?
The only problem will be if you try to store all of those images in a single directory.
When serving static files, you are likely to hit the limits of the network before you hit the machine's limits.
In terms of security, you want to make sure that only images are uploaded, and not arbitrary files - check more than the file extension or mime-type!
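For that last point, one lightweight check is to actually try to decode the upload as an image before accepting it. A rough sketch, assuming the Pillow library is available (the allowed formats and file name are only examples):

    # Sketch: verify that an uploaded file really decodes as an image, rather than
    # trusting its extension or declared MIME type. Assumes the Pillow library.
    from PIL import Image, UnidentifiedImageError

    ALLOWED_FORMATS = {"JPEG", "PNG", "GIF"}   # example whitelist

    def looks_like_image(path: str) -> bool:
        try:
            with Image.open(path) as img:
                img.verify()                   # raises if the file is not a parseable image
                return img.format in ALLOWED_FORMATS
        except (UnidentifiedImageError, OSError):
            return False

    print(looks_like_image("upload.tmp"))      # hypothetical uploaded file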

Moving 1 million image files to Amazon S3

I run an image sharing website that has over 1 million images (~150GB). I'm currently storing these on a hard drive in my dedicated server, but I'm quickly running out of space, so I'd like to move them to Amazon S3.
I've tried doing an RSYNC and it took RSYNC over a day just to scan and create the list of image files. After another day of transferring, it was only 7% complete and had slowed my server down to a crawl, so I had to cancel.
Is there a better way to do this, such as gzipping them to another local hard drive and then transferring / unzipping that single file?
I'm also wondering whether it makes sense to store these files in multiple subdirectories or is it fine to have all million+ files in the same directory?
One option might be to perform the migration in a lazy fashion.
All new images go to Amazon S3.
Any requests for images not yet on Amazon trigger a migration of that one image to Amazon S3. (queue it up)
This should fairly quickly get all recent or commonly fetched images moved over to Amazon and will thus reduce the load on your server. You can then add another task that migrates the others over slowly whenever the server is least busy.
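A rough sketch of that request path, assuming boto3 and a placeholder bucket name; the in-process list stands in for whatever job queue you already use.

    # Sketch: lazy migration - serve from S3 when the image is already there,
    # otherwise serve the local copy and queue an upload for later.
    import boto3
    from botocore.exceptions import ClientError

    BUCKET = "my-image-bucket"   # placeholder bucket name
    s3 = boto3.client("s3")
    upload_queue = []            # stand-in for a real background job queue

    def image_url(key: str, local_path: str) -> str:
        try:
            s3.head_object(Bucket=BUCKET, Key=key)      # already migrated?
            return f"https://{BUCKET}.s3.amazonaws.com/{key}"
        except ClientError:
            upload_queue.append((local_path, key))      # migrate this one in the background
            return f"/images/{key}"                     # keep serving locally for now

    def drain_queue() -> None:
        """Run during quiet periods to push queued images up to S3."""
        while upload_queue:
            local_path, key = upload_queue.pop()
            s3.upload_file(local_path, BUCKET, key)
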
Given that the files do not exist (yet) on S3, sending them as an archive file should be quicker than using a synchronization protocol.
However, compressing the archive won't help much (if at all) for image files, assuming that the image files are already stored in a compressed format such as JPEG.
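As a sketch of that idea (paths are placeholders), a plain uncompressed tar bundles the whole tree into one large file for transfer without spending CPU time on compression that JPEGs would barely benefit from.

    # Sketch: bundle the image tree into a single uncompressed tar archive before
    # transfer; gzip would cost CPU time but save almost nothing on JPEGs.
    import tarfile

    with tarfile.open("/mnt/scratch/images.tar", "w") as tar:   # "w" = no compression
        tar.add("/var/www/images", arcname="images")
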
Transmitting ~150 Gbytes of data is going to consume a lot of network bandwidth for a long time. This will be the same if you try to use HTTP or FTP instead of RSYNC to do the transfer. An offline transfer would be better if possible; e.g. sending a hard disc, or a set of tapes or DVDs.
Putting a million files into one flat directory is a bad idea from a performance perspective. While some file systems cope with this fairly well, giving O(log N) filename lookups, others degrade to O(N) lookups; multiply that by N to access every file in the directory. An additional problem is that utilities that need to access files in order of file name may slow down significantly if they have to sort a million file names. (This may partly explain why rsync took a day to do the indexing.)
Putting all of your image files in one directory is a bad idea from a management perspective; e.g. for doing backups, archiving stuff, moving stuff around, expanding to multiple discs or file systems, etc.
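A common way to avoid one flat directory, both on the local disk and in the S3 key layout, is to fan files out by a hash prefix. A sketch of one such scheme (the two-level, 256-way fan-out is just one reasonable choice):

    # Sketch: derive a fan-out path from a hash of the file name so that no single
    # directory (or S3 key prefix) ever holds millions of entries.
    import hashlib
    import os

    def fanout_path(root: str, filename: str) -> str:
        digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
        # e.g. "<root>/<xx>/<yy>/cat.jpg", where xx and yy are hex pairs from the hash
        return os.path.join(root, digest[:2], digest[2:4], filename)

    print(fanout_path("/var/www/images", "cat.jpg"))
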
Instead of transferring the files over the network, one option is to put them on a hard drive and ship it to Amazon's Import/Export service. You don't have to worry about saturating your server's network connection, etc.