How do I compress or compact an encrypted VirtualBox disk image?

I have created a Debian VM inside VirtualBox with an encrypted partition, so the OS runs on an encrypted partition. When I created the disk image (VHD) I chose dynamic allocation, but after installing the OS the image appears to consume the entire allocated space; it is now 20 GB. Is it possible to compress or compact it to a smaller size? I have seen the documentation on compacting a disk image in VirtualBox, but I need to know whether the same can be done for an encrypted disk image.
Your help is greatly appreciated. Thanks.

It depends on the type of encryption being used. Since you're using Debian I assume you're using LUKS, which is inflexible. The space has to be pre-allocated and therefore the image will utilise the full space allocated to it.

Yes, there is a way to do it, but it is quite complex.
Each time you need or want to compact the disk, you have to go through several steps carefully:
(This first step may not really be needed; try without it first.) Zero-fill all the free space inside the mounted, decrypted partitions. The free space is then zeroed from the 'clear' point of view, but it will not be zeroed from the 'encrypted' point of view, because the encryption layer encrypts those zeros as well.
Shut down the machine and boot a live CD ISO that lets you mount both the virtual HDD you are using and a new, dynamic, empty one.
Set up the partition scheme and the encryption identically on the new disk, but make sure the encryption setup does not do the 'fill' pass that writes every sector. This is the single most important part: it keeps the new virtual disk small while still being encrypted with LUKS, etc.
At this point the new 'small' disk contains only the partition scheme and the encryption header. Now open both encrypted disks, the old and the new, so both can be seen in 'plain' (decrypted) form at the same time.
Again, this is very important: clone from the old 'plain' view to the new 'plain' view only the sectors that contain data (most partition-cloning tools do exactly that).
As I said, the most important things (to get a smaller virtual HDD) are:
Create a new, empty, dynamically allocated virtual disk.
Partition it and encrypt it without writing every sector: skip the dd of random data that is normally done before setting up the encryption (otherwise the dynamic file grows to its maximum size), and also skip the 'fill empty space' pass, which would again grow the virtual disk to its maximum.
Clone the partitions from the plain view (mounted and decrypted on the fly), so the cloning tool writes only the data areas of files and not the free space.
There is one part that cannot be reduced: files inside the encrypted partitions that have whole clusters filled with zeros (hopefully you do not have any of those). Without encryption, the normal compact operation sees that such a cluster is all zeros and does not need to store it; with encryption, that cluster is not all zeros inside the actual virtual disk file, so the compact operation cannot reclaim it.
The idea behind all of this is:
When encryption is in use, to get the smallest possible virtual disk, start from a dynamic, empty disk and write as few clusters as possible to it when cloning the previous one.
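A rough sketch of that workflow, written as Python calls to the usual command-line tools, could look like the following. Everything here is an assumption made for illustration: the device names, partition layout, mount points and sizes are placeholders, the first step runs on the host while the rest would run inside the live environment, and every command must be adapted to your own setup before use.

import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# --- On the host: create a new, empty, dynamically allocated disk (stays tiny).
run(["VBoxManage", "createmedium", "disk", "--filename", "new.vdi",
     "--size", "20480", "--variant", "Standard"])

# Attach new.vdi to the VM and boot a live ISO; the steps below run inside it.
# (Partition /dev/sdb first, e.g. with fdisk/parted, matching the old layout.)
# --- Encrypt the new disk WITHOUT any wipe/fill pass (no dd of random data,
#     no "fill free space"), so the backing file stays small.
run(["cryptsetup", "luksFormat", "/dev/sdb1"])        # new disk's partition
run(["cryptsetup", "open", "/dev/sda5", "oldroot"])   # existing encrypted root
run(["cryptsetup", "open", "/dev/sdb1", "newroot"])   # new encrypted root

# --- Make a filesystem and copy only the data that is actually in use; a
#     file-level copy never touches free space, so the dynamic VDI stays small.
run(["mkfs.ext4", "/dev/mapper/newroot"])
run(["mkdir", "-p", "/mnt/old", "/mnt/new"])
run(["mount", "/dev/mapper/oldroot", "/mnt/old"])
run(["mount", "/dev/mapper/newroot", "/mnt/new"])
run(["rsync", "-aHAXx", "/mnt/old/", "/mnt/new/"])

# Afterwards: reinstall the bootloader on the new disk and update
# /etc/crypttab and /etc/fstab (the UUIDs change), then detach the old disk.

A partition-level cloning tool that copies only used blocks would work just as well as the file-level rsync; the key point is simply that the free space is never written to the new disk.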
As I said, it is a lot of work, and as further writes occur the image will start growing and growing again, so the process has to be repeated from time to time.
My best personal recommendation is to get a 'big' and 'fast' disk and use a fixed-size virtual disk. If I read correctly, your disk is only 20 GiB; you gain a lot of speed by having it fixed rather than dynamic, and you no longer have to worry about fragmentation and so on.
Remember, if you use a USB drive for this, get one that can write at 30 MiB/s (if you only have USB 2.0 ports). If you are lucky (like me) and have at least one USB 3 port (better still a USB 3.1 Gen 2 Type-C port), look for a 2.5-inch 500 GiB SATA III HDD (with a write speed greater than 100 MiB/s; they are really cheap, under 25 euros) and a SATA III to USB 3.1 Gen 2 Type-C enclosure (also cheap, some under 15 euros), and avoid having to 'reduce', 'clone' and so on at all.
I have 10 virtual machines on a 500 GiB disk (with more than 50% free space). Each is 20 GiB in size (with a Windows system inside taking nearly 16 GiB) and uses VeraCrypt encryption, so my situation is quite close to yours. I opted for a USB 3.1 Gen 2 Type-C enclosure to hold all the fixed-size VDI files, and my experience is that encrypted fixed-size disks fly compared with unencrypted dynamic ones.
Of course, do the necessary tests when encryption is involved: measure the virtual HDD speed without encryption, then benchmark the encryption algorithms in RAM, and choose a method that is at least about 1.6 times faster than the disk, so that encryption does not become the bottleneck; otherwise encryption can hurt performance badly.
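As a rough, hedged illustration of that check (cryptsetup is assumed to be installed, and the path and sizes are placeholders), you could compare cipher throughput in RAM against the raw write throughput of the disk that holds the virtual disk files:

import os, subprocess, time

# Cipher throughput in RAM (prints a table of algorithms and MiB/s figures).
subprocess.run(["cryptsetup", "benchmark"], check=True)

# Raw write throughput of the disk holding the VDI files (placeholder path).
testfile = "/mnt/vmstore/throughput.tmp"
chunk = os.urandom(4 * 1024 * 1024)            # 4 MiB of incompressible data
total = 512 * 1024 * 1024                      # 512 MiB test write

start = time.monotonic()
with open(testfile, "wb") as f:
    for _ in range(total // len(chunk)):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())                       # make sure it really hit the disk
elapsed = time.monotonic() - start
os.remove(testfile)

print(f"disk write: {total / elapsed / 2**20:.0f} MiB/s")
# Per the advice above, pick a cipher whose RAM figure is comfortably higher
# (roughly 1.6x) than this disk figure, so encryption is not the bottleneck.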
Also think about how many cores you expose to the guest; that makes a big difference to encryption speed. And consider the worst case: how much CPU will the non-encryption threads in that guest be using?
Just as an example, if inside the guest you are doing LZMA2 compression (or H.264 video transcoding, say), the CPU left over for encryption is very low, so encryption will slow things down a lot; such workloads also do a lot of disk I/O, so a great deal of encryption and decryption per second is needed.
Maybe a better approach would be to encrypt the 'container' rather than the 'system': in other words, encrypt the place where the VDI files are stored, not the whole guest system, and create one container per VDI if you want different passphrases, etc. That way the VDI can also stay dynamic and be compacted as usual.
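With that container approach the VDI itself is unencrypted, so the normal compact procedure applies. A minimal sketch, assuming a Linux guest and an illustrative VDI path; note that the two halves run in different places (the zero-fill inside the guest, the compact on the host with the VM powered off):

import subprocess

# Inside the guest: overwrite free space with zeros so VirtualBox can detect
# it as unused, then remove the filler file (dd exits non-zero when the disk
# fills up, which is expected here).
subprocess.run("dd if=/dev/zero of=/zerofill bs=1M; sync; rm -f /zerofill",
               shell=True)

# On the host, with the VM powered off: reclaim the zeroed blocks.
subprocess.run(["VBoxManage", "modifymedium", "disk",
                "/path/to/guest.vdi", "--compact"], check=True)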
Of course, I could be of more help if you said which encryption scheme you are using (no details needed).
That makes a big difference to the possible answers:
Are you encrypting the system partition with a tool that runs inside the guest? Then use the 'clone only the used clusters' trick.
Are you encrypting by turning on the VDI encryption property? Then the VirtualBox console may be able to compact it.
Are you encrypting the container where the VDI is stored? I am fairly sure this is not your case; in that case compacting works as normal, since neither the VDI nor anything inside it is encrypted at all.
I talk about VDI, but the same applies to the other formats: VHD, VHDX, etc.
Remember: if the encryption is done inside the guest and you still want to reduce (compact) the virtual HDD file, start with a new dynamic disk, set up the partition scheme and the encryption without filling the whole disk (at this point the virtual disk file should be only a few megabytes), then clone all the used clusters from the old disk to the new one, but not the unused ones.
Advice: be prepared to repeat this 'compact by cloning' every 100 hours or so of intense guest use. If the gain is less than 50%, it is not worth the effort, and the best you can do is use a fixed size.
Special note: with a fixed size, access speed is much better than with a dynamic size; having a dynamic disk at 100% of its size, as if it were fixed, is a big loss of speed. How much? You have to test on your own machine; it depends a lot on the CPU, the I/O rate (input/output operations per second) of your storage, the transfer speed (MiB/s) and other factors, so it is best to run some tests.
Since you are talking about 20 GiB, do try the fixed-size test; I am quite sure you will like it a lot.
Things would be different if we were talking about a 500 GiB system partition that is only 10% full: the space gain could be 450 GiB, and then the clone method of compacting is well worth it. That is why I explain how to do it, for people in that situation, for you, and for anyone else.
P.S.: If someone does not know how to do something, that does not mean it is not possible; and anyone who says something is impossible should either demonstrate it or be prepared to be proven wrong. Technology improves a lot over time, and knowledge even more.

Related

Write time to hard drive

I realize this number will change based on many factors, but in general, when I write data to a hard-drive (e.g. copy a file), how long does it take for that data to actually be written to the platter after Windows says the copy is done?
Could anyone point me in the right direction to discover more on this topic?
If you are looking for a hard number, that is pretty much unknowable. Generally it is on the order of tens to a few hundred milliseconds for the data to start reaching the disk platters, but it can be as high as several seconds in a large server disk array with RAID and de-duplication.
The flow of events goes something like this:
1. The application calls a function like fwrite().
2. The call is handled by the filesystem layer in your operating system, which has to figure out which specific disk sectors are to be manipulated.
3. The SATA/IDE driver in your OS talks to the hard drive controller hardware. On a modern PC, it typically uses DMA to feed the data to the disk.
4. The data sits in a write cache inside the hard disk (RAM).
5. When the physical platters and heads have moved into position, the disk begins to transfer the contents of the cache onto the platters.
6. There is usually filesystem metadata that must be updated as well (e.g. free space counters), which triggers more writes to the disk.
Steps 3-6 may repeat several times depending on how much data is to be written and where on the disk it is to be written.
The time it takes for steps 1-3 can be unpredictable in a general-purpose OS like Windows due to task scheduling and background threads, and your disk write is probably queued up alongside I/O from a few dozen other processes. I'd say it is usually on the order of 10-100 ms on a typical PC. If you go to the Windows Resource Monitor and click the Disk tab, you can get an idea of the average disk queue length. You can use the Performance Monitor to produce more finely controlled graphs.
Steps 3-4 are largely controlled by the disk controller and the disk interface (SATA, SAS, etc.). In the server world, you can be talking about a SAN with FC or iSCSI network switches, which impose their own latencies.
Step 5 is controlled by the physical performance of the disk. Many consumer-grade HDD manufacturers no longer publish average seek times, but 10-20 ms is common.
Interesting detail about step 5: some HDDs lie about flushing their write cache to get better benchmark scores.
Step 6 will depend on your filesystem and how much data you are writing.
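To get a feel from userspace for the gap between the OS accepting a write and the data actually being forced towards the device, here is a small, hedged timing experiment (Python, numbers entirely machine-dependent): write() returns once the data is in the OS cache, while fsync() waits until the kernel reports it has been pushed to the device.

import os, time

data = os.urandom(16 * 1024 * 1024)                 # 16 MiB of random data
with open("write_timing.tmp", "wb") as f:
    t0 = time.monotonic()
    f.write(data)                                   # lands in the page cache
    t1 = time.monotonic()
    f.flush()
    os.fsync(f.fileno())                            # pushed out to the device
    t2 = time.monotonic()

print(f"write() returned after {1000 * (t1 - t0):.1f} ms")
print(f"fsync() took another  {1000 * (t2 - t1):.1f} ms")
os.remove("write_timing.tmp")
# Note: as discussed elsewhere on this page, even fsync() only means the
# device accepted the data; some drives still hold it in their own cache.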
You are right that there can be a delay between Windows indicating that the write is finished and the last of the data actually being written. Things to consider are:
Device Manager, Disk Drive, Properties, Policies - Options for disabling Write Caching.
You might be better off using Direct I/O so that Windows does not save it temporarily in File Cache.
If your program writes the data, you can log what has been copied.
If you are sending the data over a network, you are likely to have no control of when the remote system has finished.
To see what is happening, you can set up Perfmon logging. One of my examples of monitoring:
http://www.roylongbottom.org.uk/monitor1.htm#anchor2

Physical Memory and Virtual Memory data allocation behavior

I'm interested in understanding how a computer allocates variables in physical memory versus files in virtual memory (such as on a hard drive), in terms of how the computer decides where to put data. It almost seems random in both cases, but it isn't, because it simply can't put data at a memory address or disk sector (any location) that is already occupied or allocated by another process. When I was studying how Norton Speed Disk (a program that defragments files on hard drives) worked on my old Windows 95 system, I noticed from the program's representation of the hard drive's data (a colour-coded visual map of different data types, e.g. swap files were always first at the top) that many files were spread out all over the hard drive with empty, unused areas in between. In addition, some of these areas showed what looked like a spotty mix of data and empty space. I want to think it is random for that to happen. Likewise, when I was studying the memory addresses of a simple program I wrote in C, I noticed that each version of my program, after recompiling it with changes, showed different addresses for segments and offsets. I was expecting the computer to use the same addresses when I recompiled it. Sometimes the same address would be used, other times it was different. Again, I want to think memory locations are also chosen randomly by programs. I thought that memory allocation and file writing were based on the first empty space available, written in a contiguous manner.
So my question is: what is it in the logic of a common computer that decides where it writes its data in such a seemingly arbitrary manner, for either type of location (physical RAM or disk)? What area of computer science (if not assembly language) would I need to study to explain this almost random behaviour?
Thanks in advance.
Something broader, and directly from computer science, would be the linked list: http://en.wikipedia.org/wiki/Linked_list
Imagine you have a linked list and simply add items to the end; those items might initially live linearly in memory, on disk, or wherever. But then you remove some items from the middle of the list, for example by having item number 7 point at item number 9, eliminating item number 8. As with memory allocation, virtual memory, hard drive sector allocation and so on, how fast you fragment your storage depends on the algorithm you use for allocating the next item.
File systems can and do use a linked-list-type scheme to keep track of which sectors belong to a single file. It is fast and easy to use a linked list, but you have to deal with fragmentation. A much slower approach would be to have no fragmentation but to constantly copy and move files around to keep them on linear sectors.
malloc() allocation schemes and MMU allocation schemes also fall into this category. Basically, any time you take something, slice it up into fractions and put a virtual interface in front of those fractions, you give the programmer/user the appearance that they are linear. malloc() (not counting virtual memory via the MMU) works the other way around: it allocates a number of linear chunks of those fractions to meet the allocation request, with an alloc/free scheme that tries to keep as many large chunks available as possible, just in case. A bad malloc implementation is one where half of your memory is free but the largest allocation that succeeds without an out-of-memory error is only a small fraction of it: say you have a gigabyte free but can only allocate 4096 bytes.
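To make the fragmentation point concrete, here is a toy first-fit allocator over a fixed pool of slots (the slots could stand for heap bytes, pages or disk sectors; this is purely illustrative and not any real allocator's code):

SLOTS = 16
used = [None] * SLOTS                 # None = free, otherwise an owner tag

def alloc(tag, size):
    for start in range(SLOTS - size + 1):                    # first fit
        if all(used[i] is None for i in range(start, start + size)):
            for i in range(start, start + size):
                used[i] = tag
            return start
    raise MemoryError(f"fragmented: no contiguous run of {size} slots")

def free(tag):
    for i, owner in enumerate(used):
        if owner == tag:
            used[i] = None

def show():
    print("".join("-" if owner is None else "*" for owner in used))

alloc("a", 4); alloc("b", 4); alloc("c", 4); show()   # ************----
free("b"); show()                                     # ****----****----
alloc("d", 3); show()       # *******-****----  (a one-slot hole is left behind)
try:
    alloc("e", 5)           # 5 slots are free in total, but not contiguously
except MemoryError as e:
    print("alloc failed:", e)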
You should look at virtual memory and TLB (translation lookaside buffer) or paging.
It is not trivial to implement virtual memory and paging. The performance of your whole system depends on it. If it's not done properly your system will thrash.
It is early morning here so Wikipedia will have to do for now: https://en.m.wikipedia.org/wiki/Translation_lookaside_buffer
EDIT:
Those coloured spots you saw in your defrag were chunks on your HDD. Each chunk is of some specified size. Depending on how fragmented your HDD is, you might have portions of your HDD that look like this:
*-*-***-***-*
where * means full, and - means empty
This (above) could be part of one application/file or multiple files; I will assume one file is split across those to simplify my example. At the end of each * there is a pointer to the next location where the next * chunk is (this is called a linked list). The more fragmented your HDD is (or memory) the more of these pointers to next chunk you will have. This in turn uses more space for next pointers instead of using space for data and the result is more overhead when reading that data. If this is a file on disk, you will have multiple seeks (which are bad because they're slow) if your data is not grouped together (locality principle). When you use defrag, it moves and groups all chunks together (as best as it can).
*-*-***-***-*
becomes
*********----
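A toy version of what the defragmenter is doing to that map (purely illustrative):

def defrag(disk_map: str) -> str:
    """Gather the used chunks ('*') at the front and the free ones ('-') at the end."""
    used = disk_map.count("*")
    return "*" * used + "-" * (len(disk_map) - used)

print(defrag("*-*-***-***-*"))   # -> *********----

A real defragmenter of course also has to update the filesystem's bookkeeping (the next-chunk pointers described above) for every chunk it moves, which is part of why defragmenting is slow.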
The OS decides paging and virtual memory addressing (and so on). The TLB is a hardware cache that aids this process: it maps virtual memory addresses to physical memory addresses for fast lookup. The CPU communicates with the TLB via the MMU.
To answer your questions
You should study operating systems.
Yes, the locations where your files are placed on the HDD are decided by the OS. If you delete a file and download it again, there is no guarantee it will be placed in the same location; most likely it will not be.
A nice summary of how all the components and principles I mentioned here work: Click Here. It's a PPT with slides from a Real-Time Operating Systems book (if I'm not mistaken, the exact same one I used).

Accessing files which are currently being written

If a file is in the process of being written, for example a log file that is written to every 10 milliseconds, and I try to access it at the same time, will I damage or disturb the writing process?
Specifically, I'm asking about video files: if I start a recording process (using Windows Media Encoder), I would like to monitor whether the file is blank (black pixels everywhere) or real content is being recorded.
Sorry if my question is a newbie one, but I really, really need to be sure about this.
Thanks in advance.
In general you can certainly read files as they are being written, without corrupting their content. However:
It is possible to face an issue if your recording medium cannot deal with the combined data rate of both reading and writing. This can be a problem especially with slow-ish USB flash drives.
It is possible to face an issue on hard drives too, if the combination of reading and writing exceeds the rate of random seeks that the hard drive can handle. This can happen more easily on older drives (e.g. IDE) when dealing with HD video.
The end result is that if you have a real-time writer process, such as a TV recorder, it may be forced to drop some of the data - in the case of video a few frames.
Modern systems have quite fast disk subsystems, reasonably good I/O schedulers and large enough RAM capacities to allow for extensive data caching, which makes it quite unlikely that a single writer/reader combination would saturate the disk subsystem, unless you are doing something unusual like recording several video streams at once.
Keep in mind however, that:
The disk subsystem can also be saturated by unrelated processes reading/writing other files from the same drive.
If you are encoding video, you might also lose frames if something draws enough CPU resources that the encoding process is no longer able to keep up with the real-time requirements. Depending on the video file, test-playing it might be just enough to do that - at least HD reproduction can be quite demanding. So, watch your CPU load and experiment before relying on it to record your favourite show :-)
EDIT:
If you are among the lucky ones that have SSD drives, seeks and data rate should normally be a non-issue. That leaves the CPU - you'd be surprised how easy it is to push it to the limit.
Above all, you should experiment to find out the limits of your system for each particular application. That way you won't have any nasty surprises...
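A quick way to convince yourself of the basic point (reading does not disturb the writer) is a small experiment along these lines, with one thread standing in for the recorder; the file name and timings are arbitrary:

import threading, time

PATH = "concurrent_demo.log"

def writer():
    with open(PATH, "w") as f:
        for i in range(50):
            f.write(f"line {i}\n")
            f.flush()                 # make it visible to readers promptly
            time.sleep(0.01)          # ~10 ms, like the log in the question

t = threading.Thread(target=writer)
t.start()
for _ in range(5):
    time.sleep(0.1)
    with open(PATH) as f:
        print("reader currently sees", len(f.readlines()), "lines")
t.join()

Whether a half-written video file is already playable is a separate question that depends on the container format, but simply reading it will not corrupt the recording.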

Are disk sector writes atomic?

Clarified Question:
When the OS sends the command to write a sector to disk is it atomic? i.e. Write of new data succeeds fully or old data is left intact should the power fail immediately following the write command. I don't care about what happens in multiple sector writes - torn pages are acceptable.
Old Question:
Say you have old data X on disk, you write new data Y over it, and a tree falls on the power line during that write. With no fancy UPS or battery backed disk controller, you can end up with a torn page, where the data on disk is part X and part Y. Can you ever end up with a situation where the data on disk is part X, part Y, and part garbage?
I've been trying to understand the design of ACID systems like databases, and to my naive thinking, it seems firebird, which does not use a write-ahead log, is relying that a given write will not destroy old data (X) - only fail to fully write new data (Y). That means that if part of X is being overwritten, only the part of X that is being overwritten can be changed, not the part of X we intend to keep.
To clarify, this means if you have a page sized buffer, say 4096 bytes, filled with half Y, half X that we want to keep - and we tell the OS to write that buffer over X, there is no situation short of serious disk failure where the half X that we want to keep is corrupted during the write.
The traditional (SCSI, ATA) disk protocol specifications don't guarantee that any/every sector write is atomic in the event of sudden power loss (but see below for discussion of the NVMe spec). However, it seems tacitly agreed that non-ancient "real" disks quietly try their best to offer this behaviour (e.g. Linux kernel developer Christoph Hellwig mentions this off-hand in the 2017 presentation "Failure-Atomic file updates for Linux").
When it comes to synthetic disks (e.g. network attached block devices, certain types of RAID etc.) things are less clear and they may or may not offer sector atomicity guarantees while legally behaving per their given spec. Imagine a RAID 1 array (without a journal) comprised of a disk that offers 512 byte sized sectors but where the other disk offers 4 KiB sized sectors, thus forcing the RAID to expose a sector size of 4 KiB. As a thought experiment, you can construct a scenario where each individual disk offers sector atomicity (relative to its own sector size) but where the RAID device does not in the face of power loss. This is because it would depend on whether the 512 byte sector disk was the one being read by the RAID and how many of the 8 512-byte sectors comprising the 4 KiB RAID sector it had written before the power failed.
Sometimes specifications offer atomicity guarantees but only on certain write commands. The SCSI disk spec is an example of this and the optional WRITE ATOMIC(16) command can even give a guarantee beyond a sector but being optional it's rarely implemented (and thus rarely used). The more commonly implemented COMPARE AND WRITE is also atomic (potentially across multiple sectors too) but again it's optional for a SCSI device and comes with different semantics to a plain write...
Curiously, the NVMe spec was written in such a way to guarantee sector atomicity thanks to Linux kernel developer Matthew Wilcox. Devices that are compliant with that spec have to offer a guarantee of sector write atomicity and may choose to offer contiguous multi-sector atomicity up to a specified limit (see the AWUPF field). However, it's unclear how you can discover and use any multi-sector guarantee if you aren't currently in a position to send raw NVMe commands...
Andy Rudoff is an engineer who talks about investigations he has done on the topic of write atomicity. His presentation "Protecting SW From Itself: Powerfail Atomicity for Block Writes" (slides) has a section of video where he talks about how power failure impacts in-flight writes on traditional storage. He describes how he contacted hard drive manufacturers about the statement "a disk's rotational energy is used to ensure that writes are completed in the face of power loss" but the replies were non-committal as to whether that manufacturer actually performed such an action. Further, no manufacturer would say that torn writes never happen and while he was at Sun, ZFS added checksums to blocks which led to them uncovering cases of torn writes during testing. It's not all bleak though - Andy talks about how sector tearing is rare and if a write is interrupted then you usually get only the old sector, or only the new sector, or an error (so at least corruption is not silent). Andy also has an older slide deck Write Atomicity and NVM Drive Design which collects popular claims and cautions that a lot of software (including various popular filesystems on multiple OSes) are actually unknowingly dependent on sector writes being atomic...
(The following takes a Linux-centric view but many of the concepts apply to general-purpose OSes that are not being deployed in tightly controlled hardware environments)
Going back to 2013, BtrFS lead developer Chris Mason talked about how (the now defunct) Fusion-io had created a storage product that implemented atomic operation (Chris was working for Fusion-io at the time). Fusion-io also created a proprietary filesystem "DirectFS" (written by Chris) to expose this feature. The MariaDB developers implemented a mode that could take advantage of this behaviour by no longer doing double buffering resulting in "43% more transactions per second and half the wear on the storage device". Chris proposed a patch so generic filesystems (such as BtrFS) could advertise that they provided atomicity guarantees via a new flag O_ATOMIC but block layer changes would also be needed. Said block layer changes were also proposed by Chris in a later patch series that added a function blk_queue_set_atomic_write(). However, neither of the patch series ever entered the mainline Linux kernel and there is no O_ATOMIC flag in the (current 2020) mainline 5.7 Linux kernel.
Before we go further, it's worth noting that even if a lower level doesn't offer an atomicity guarantee, a higher level can still provide atomicity (albeit with performance overhead) to its users so long as it knows when a write has reached stable storage. If fsync() can tell you when writes are on stable storage (technically not guaranteed by POSIX but the case on modern Linux) then because POSIX rename is atomic you can use the create new file/fsync/rename dance to do atomic file updates thus allowing applications to do double buffering/Write Ahead Logging themselves. Another example lower down in the stack are Copy On Write filesystems like BtrFS and ZFS. These filesystems give userspace programs a guarantee of "all the old data" or "all the new data" after a crash at sizes greater than a sector because of their semantics even though a disk may not offer atomic writes. You can push this idea all the way down into the disk itself where NAND based SSDs don't overwrite the area currently used by an existing LBA and instead write the data to a new region and keep a mapping of where the LBA's data is now.
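A minimal sketch of that create/fsync/rename dance (assuming POSIX rename semantics, e.g. on Linux; the file name is just an example):

import os

def atomic_write(path: str, data: bytes) -> None:
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())      # new contents reach stable storage first
    os.replace(tmp, path)         # atomic rename over the old file (POSIX)
    dirfd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
    try:
        os.fsync(dirfd)           # persist the directory entry as well
    finally:
        os.close(dirfd)

atomic_write("settings.json", b'{"version": 2}\n')
# Readers opening "settings.json" see either the complete old contents or the
# complete new contents, never a partially written file.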
Resuming our abridged timeline, in 2015 HP researchers wrote a paper Failure-Atomic Updates of Application Data in a Linux File System (PDF) (media) about introducing a new feature into the Linux port of AdvFS (AdvFS was originally part of DEC's Tru64):
If a file is opened with a new O_ATOMIC flag, the state of its application data will always reflect the most recent successful msync, fsync, or fdatasync. AdvFS furthermore includes a new syncv operation that combines updates to multiple files into a failure-atomic bundle [...]
In 2017, Christoph Hellwig wrote experimental patches to XFS to provide O_ATOMIC. In the "Failure-Atomic file updates for Linux" talk (slides) he explains how he drew inspiration from the 2015 paper (but without the multi-file support) and the patchset extends the XFS reflink work that already existed. However, despite an initial mailing list post, at the time of writing (mid 2020) this patchset is not in the mainline kernel.
During the database track of the 2019 Linux Plumbers Conference, MySQL developer Dimitri Kravtchuk asked if there were plans to support O_ATOMIC (link goes to start of filmed discussion). Those assembled mention the XFS work above, that Intel claim they can do atomicity on Optane but Linux doesn't provide an interface to expose it, that Google claims to provide 16KiB atomicity on GCE storage1. Another key point is that many database developers need something larger than 4KiB atomicity to avoid having to do double writes - PostgreSQL needs 8KiB, MySQL needs 16KiB and apparently the Oracle database needs 64KiB. Further, Dr Richard Hipp (author of the SQLite database) asked if there's a standard interface to request atomicity because today SQLite makes use of the F2FS filesystem's ability to do atomic updates via custom ioctl()s but the ioctl was tied to one filesystem. Chris replied that for the time being there's nothing standard and nothing provides the O_ATOMIC interface.
At the 2021 Linux Plumbers Conference Darrick Wong re-raised the topic of atomic writes (link goes to start of filmed discussion). He pointed out there are two different things that people mean when they say they want atomic writes:
Hardware provides some atomicity API and this capability is somehow exposed through the software stack
Make the filesystem do all the work to expose some sort of atomic write API irrespective of hardware
Darrick mentioned that Christoph had ideas for 1. in the past but Christoph has not come back to the topic and further there are unanswered questions (how you make userspace aware of limits, if the feature was exposed it would be restricted to direct I/O which may be problematic for many programs). Instead Darrick suggested that the way to tackle 2. was to propose his FIEXCHANGE_RANGE ioctl which swaps the contents of two files (the swap is restartable if it fails part way through). This approach doesn't have the limits (e.g. smallish contiguous size, maximum number of scatter gather vectors, direct I/O only) that a hardware based solution would have and could theoretically be implementable in the VFS thus being filesystem agnostic...
TLDR; if you are in tight control of your whole stack from the application all the way down to the physical disks (so you can control and qualify the whole lot) you can arrange to have what you need to make use of disk atomicity. If you're not in that situation, or you're talking about the general case, you should not depend on sector writes being atomic.
When the OS sends the command to write a sector to disk is it atomic?
At the time of writing (mid-2020):
When using a mainline 4.14+ Linux kernel
If you are dealing with a real disk
a sector write sent by the kernel is likely atomic (assuming a sector is no bigger than 4 KiB). In controlled cases (battery backed controller, NVMe disk which claims to support atomic writes, SCSI disk where the vendor has given you assurances etc.) a userspace program may be able to use O_DIRECT so long as O_DIRECT wasn't reverting to being buffered, the I/O didn't get split apart/merged at the block layer, or you are sending device-specific commands and bypassing the block layer. However, in the general case neither the kernel nor a userspace program can safely assume sector write atomicity.
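As a hedged illustration of those O_DIRECT caveats (Linux only; the file name is a placeholder on a real, non-tmpfs filesystem): the user buffer, file offset and length all need suitable alignment, which is why an anonymous mmap is used as a page-aligned buffer here. This only keeps the write out of the page cache; whether that single write is power-fail atomic still depends on everything below it, as described above.

import mmap, os

fd = os.open("odirect_test.bin", os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
try:
    buf = mmap.mmap(-1, 4096)     # anonymous mapping: a page-aligned 4 KiB buffer
    buf.write(b"A" * 4096)
    os.write(fd, buf)             # one aligned, sector-multiple direct write
    os.fsync(fd)                  # still needed to flush device-level caches
finally:
    os.close(fd)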
Can you ever end up with a situation where the data on disk is part X, part Y, and part garbage?
From a specification perspective if you are talking about a SCSI disk doing a regular SCSI WRITE(16) and a power failure happening in the middle of that write then the answer is yes: a sector could contain part X, part Y AND part garbage. A crash during an inflight write means the data read from the area that was being written to is indeterminate and the disk is free to choose what it returns as data from that region. This means all old data, all new data, some old and new, all zeros, all ones, random data etc. are all "legal" values to return for said sector. From an old draft of the SBC-3 spec:
4.9 Write failures
If one or more commands performing write operations are in the task set and are being processed when power is lost (e.g., resulting in a vendor-specific command timeout by the application client) or a medium error or hardware error occurs (e.g., because a removable medium was incorrectly unmounted), the data in the logical blocks being written by those commands is indeterminate. When accessed by a command performing a read or verify operation (e.g., after power on or after the removable medium is mounted), the device server may return old data, new data, or vendor-specific data in those logical blocks.
Before reading logical blocks which encountered such a failure, an application client should reissue any commands performing write operations that were outstanding.
1 In 2018 Google announced it had tweaked its cloud SQL stack and that this allowed them to use 16k atomic writes in MySQL with innodb_doublewrite=0 via O_DIRECT... The underlying customisations Google performed were described as being in the virtualized storage, kernel, virtio and the ext4 filesystem layers. Further, a no longer available beta document titled Best practices for 16 KB persistent disk and MySQL (archived copy) described what end users had to do to safely make use of the feature. Changes included: using an appropriate Google provided VM, using specialized storage, changing block device parameters and carefully creating an ext4 filesystem with a specific layout. However, at some point in 2020 this document vanished from GCE's online guides suggesting such end user tuning is not supported.
I think torn pages are not the problem. As far as I know, all drives have enough power stored to finish writing the current sector when the power fails.
The problem is that everybody lies.
At least when it comes to the database knowing when a transaction has been committed to disk, everybody lies. The database issues an fsync, and the operating system only returns when all outstanding writes have been committed to disk, right? Maybe not. It's common, especially with RAID cards and/or SATA drives, for your program to be told everything has committed (that is, fsync returns) and yet there is data not yet on the drive.
You can try using Brad's diskchecker to find out if the platform you are going to use for your database can survive pulling the plug without losing data. The bottom line: if diskchecker fails, the platform is not safe for running a database. Databases with ACID rely upon knowing when a transaction has been committed to backing store and when it has not. This is true whether or not the database uses write-ahead logging (and if the database returns to the user without having done an fsync, then transactions can be lost in the event of a failure, so it should not claim that it provides ACID semantics).
There's a long thread on the Postgresql mailing list discussing durability. It starts out talking about SSDs, but then it gets into SATA drives, SCSI drives, and file systems. You may be surprised to learn how exposed your data can be to loss. It's a good thread for anyone with a database that needs durability, not just those running Postgresql.
Nobody seems to agree on this question. So I spent a lot of time trying different Google queries until I finally found an answer.
This is from Dr. Stephen Tweedie, Red Hat employee and Linux kernel filesystem and virtual memory developer, in a talk on ext3 (which he developed); transcript here. If anyone knows, it'd be him.
"It's not sufficient just to write the thing to the journal, because there's got to be some mark in the journal which says: well, (has this journal record actually) does this journal record actually represent a complete consistency to the disk? And the way you do that is by having some atomic operation which marks that transaction as being complete on disk" [23m, 14s]
"Now, disks these days actually make these guarantees. If you start a write operation to a disk, then even if the power fails in the middle of that sector write, the disk has enough power available, and it can actually steal power from the rotational energy of the spindle; it has enough power to complete the write of the sector that's being written right now. In all cases, the disks make that guarantee." [23m, 41s]
No, they are not. Worse yet, disks may lie and say the data is written when it is in fact in the disk cache, under default settings. For performance reasons, this may be desirable (actual durability is up to an order of magnitude slower) but it means if you lose power and the disk cache is not physically written, your data is gone.
Real durability is both hard and slow unfortunately, since you need to make at least one full rotation per write, or 2+ with journalling/undo. This limits you to a couple hundred DB transactions per second, and requires disabling write caching at a fairly low level.
For practical purposes though, the difference is not that big of a deal in most cases.
See:
How (not) to achieve durability.
FSync() may not flush to disk
People don't seem to agree on what happens during a sector write if the power fails. Maybe because it depends on the hardware being used, and even the filesystem.
From wikipedia (http://en.wikipedia.org/wiki/Journaling_file_system):
Some disk drives guarantee write atomicity during a power failure. Others, however, may stop writing midway through a sector after power is lost, leaving it mismatched against its error-correcting code. The sector is thus corrupt and its contents lost. A physical journal guards against such corruption because it holds a complete copy of the sector, which it can replay over the corruption upon next mount.
Seems to suggest that some hard drives will not finish writing the sector, but that a journaling filesystem can protect you from data loss the same way the xlog protects a database.
From the linux kernel mailing list in a discussion on ext3 journaling filesystem:
In any case bad sector checksum is hardware bug. Sector write is supposed to be atomic, it either happens or not.
I'd tend to believe that over the wiki comment. Actually, the very existence of a database (firebird) with no xlog implies that sector write is atomic, that it cannot clobber data you did not mean to change.
There's quite a bit of discussion Here about atomicity of sector writes, and again no agreement. But the people who are disagreeing seem to be talking about multiple-sector writes (which are not atomic on many modern hard-drives.) Those who are saying sector writes are atomic do seem to know more about what they're talking about.
The answer to your first question depends on the hardware involved. At least with some older hardware, the answer was yes -- a power failure could result in garbage being written to the disk. Most current disks, however, have a bit of a "UPS" built into the disk itself -- a capacitor that's large enough to power the disk long enough to write the data in the on-disk cache out to the disk platter. They also have circuitry to detect whether the power supply is still good, so when the power gets flaky, they write the data in the cache to the platter, and ignore garbage they might receive.
As far as a "torn page" goes, a typical disk only accepts commands to write an entire sector at a time, so what you'll get will normally be an integral number of sectors written correctly, and others remaining unchanged. If, however, you're using a logical page size that's larger than a single sector, you can certainly end up with a page that's partially written.
That, however, mostly applies to a direct connection to a normal moving-platter type hard drive. With almost anything else, the rules can and often will be different. Just for an obvious example, if you're writing over the network, you're mostly at the mercy of the network protocol in use. If you transmit data over TCP, data that doesn't match up with the CRC will be rejected, but the same data transmitted over UDP, with the same corruption, might be accepted.
I suspect this assumption is wrong.
Modern HDDs encode the data in sectors and additionally protect it with ECC. Therefore you can end up with the entire sector content being garbage - it will simply not make sense with the encoding used.
As for increasingly popular SSDs, the situation is even more gruesome: a block is cleared prior to being overwritten, so, depending on the firmware being used and the amount of free space, entirely unrelated sectors can be damaged.
By the way, an OS crash will not lead to data being damaged within a single sector.
I would expect one torn page to consist of part X, part Y, and part unreadable sector. If a head is in the middle of writing a sector when the power fails, the drive should park the heads immediately, so that the rest of the drive (aside from that one sector) will remain undamaged.
In some cases I would expect several torn pages consisting of part X and part Y, but only one torn page would include an unreadable sector. The reason for several torn pages is that the drive can buffer lots of writes internally, and the order of writing might interleave various sectors from various pages.
I've read conflicting stories about whether a new write to the unreadable sector will make it readable again. Even if the answer is yes, that will be new data Z, neither X nor Y.
When updating the disk, the only guarantee drive manufacturers make is that a single 512-byte write is atomic (i.e., it will either complete in its entirety or it won't complete at all); thus, if an untimely power loss occurs, only a portion of a larger write may complete (sometimes called a torn write).

Best Dual HD Set up for Development

I've got a machine I'm going to be using for development, and it has two 7200 RPM 160 GB SATA HDs in it.
The information I've found on the net so far seems to be a bit conflicted about which things (OS, Swap files, Programs, Solution/Source code/Other data) I should be installing on how many partitions on which drives to get the most benefit from this situation.
Some people suggest having a separate partition for the OS and/or Swap, some don't bother. Some people say the programs should be on the same physical drive as the OS with the data on the other, some the other way around. Same with the Swap and the OS.
I'm going to be installing Vista 64 bit as my OS and regularly using Visual Studio 2008, VMWare Workstation, SQL Server management studio, etc (pretty standard dev tools).
So I'm asking you--how would you do it?
If the drives support RAID configurations in your BIOS, you should do one of the following:
RAID 1 (Mirror) - Since this is a dev machine this will give you the fault tolerance and peace of mind that your code is safe (and the environment since they are such a pain to put together). You get better performance on reads because it can read from both/either drive. You don't get any performance boost on writes though.
RAID 0 - No fault tolerance here, but this is the fastest configuration because you read and write off both drives. Great if you just want as fast as possible performance and you know your code is safe elsewhere (source control) anyway.
Don't worry about multiple partitions or OS/data configurations, because on a dev machine you sort of need it all anyway, and you shouldn't be running heavy multi-user databases or anything like that (as a server would).
If your BIOS doesn't support RAID configurations, however, then you might consider doing the OS/Data split over the two drives just to balance out their use (but as you mentioned, keep the programs on the system drive because it will help with caching). Up to you where to put the swap file (OS will give you dump files, but the data drive is probably less utilized).
If they're both going through the same disk controller, there's not going to be much difference performance-wise no matter which way you do it; if you're going to be running lots of VMs, I would split one drive for OS and swap / programs and data, then keep all the VMs on the other drive.
Having all the VMs on an independent drive lets you move that drive to another machine seamlessly if the host fails, or if you upgrade.
Mark one drive as being your warehouse, put all of your source code, data, assets, etc. on there and back it up regularly. You'll want this to be stable and easy to recover. You can even switch My Documents to live here if wanted.
The other drive should contain the OS, drivers, and all applications. This makes it easy and secure to wipe the drive and reinstall the OS every 18-24 months as you tend to have to do with Windows.
If you want to improve performance, some say put the swap on the warehouse drive. This will increase OS performance, but will decrease the life of the drive.
In reality it all depends on your goals. If you need more performance then you even out the activity level. If you need more security then you use RAID and mirror it. My mix provides for easy maintenance with a reasonable level of data security and minimal bit rot problems.
Your most active files will be the registry, page file, and running applications. If you're doing lots of data crunching then those files will be very active as well.
I would suggest that if 160 GB of total capacity will cover your needs (plenty of space for the OS, applications and source code; it just depends on what else you plan to put on it), then you should mirror the drives in a RAID 1, unless you have a server that data is backed up to, an external hard drive, an online backup solution, or some other means of keeping a copy of your data on more than one physical drive.
If you need to use all of the drive capacity, I would suggest using the first drive for the OS and applications and the second drive for data, purely because, if you change computers at some point, the OS on the first drive doesn't do you much good and most applications would have to be reinstalled, but you could take the entire data drive with you.
As for dividing off the OS, a big downfall of this is not giving the partition enough space and eventually you may need to use partitioning software to steal some space from the other partition on the drive. It never seems to fail that you allocate a certain amount of space for the OS partition, right after install you have several gigs free space so you think you are fine, but as time goes by, things build up on that partition and you run out of space.
With that in mind, I still typically do use an OS partition, as it is useful when reloading a system: you can format that partition, blowing away the OS but keeping the rest of your data. Ways to keep that space from building up too fast are to change the location of your My Documents folder and to change environment variables such as TEMP and TMP. However, some things just refuse to put their data anywhere besides the system partition. I used to use 10 GB; these days I go for 20 GB.
Dividing your swap space can be useful for keeping drive fragmentation down when letting your swap file grow and shrink as needed. Again this is an issue though of guessing how much swap you need. This will depend a lot on the amount of memory you have and how much stuff you will be running at one time.
For the posters suggesting RAID - it's probably OK at 160GB, but I'd hesitate for anything larger. Soft errors in the drives reduce the overall reliability of the RAID. See these articles for the details:
http://alumnit.ca/~apenwarr/log/?m=200809#08
http://permabit.wordpress.com/2008/08/20/are-fibre-channel-and-scsi-drives-more-reliable/
You can't believe everything you read on the internet, but the reasoning makes sense to me.
Sorry I wasn't actually able to answer your question.
I usually run a box with two drives. One for the OS, swap, typical programs and applications, and one for VMs, "big" apps (e.g., Adobe CS suite, anything that hits the disk a lot on startup, basically).
But I also run a cheap fileserver (just an old machine with a coupla hundred gigs of disk space in RAID1), that I use to store anything related to my various projects. I find this is a much nicer solution than storing everything on my main dev box, doesn't cost much, gives me somewhere to run a webserver, my personal version control, etc.
Although I admit, it really isn't doing much I couldn't do on my machine. I find it's a nice solution as it helps prevent me from spreading stuff around my workstation's filesystem at random by forcing me to keep all my work in one place where it can be easily backed up, copied elsewhere, etc. I can leave it on all night without huge power bills (it uses <50W under load) so it can back itself up to a remote site with a little script, I can connect to it from outside via SSH (so I can always SCP anything I need).
But really the most important benefit is that I store nothing of any value on my workstation box (at least nothing that isn't also on the server). That means if it breaks, or if I want to use my laptop, etc. everything is always accessible.
I would put the OS and all the applications on the first disk (1 partition). Then, put the data from the SQL server (and any other overflow data) on the second disk (1 partition). This is how I'd set up a machine without any other details about what you're building. Also make sure you have a backup so you don't lose work. It might even be worth it to mirror the two drives (if you have RAID capability) so you don't lose any progress if/when one of them fails. Also, backup to an external disk daily. The RAID won't save you when you accidentally delete the wrong thing.
In general I'd try to split up things that are going to be doing a lot of I/O (for example, if you have autosave in VS going off fairly frequently). Think of it as a sort of I/O multithreading.
I've observed significant speedups by putting my virtual machines on a separate disk. Whenever Windows is doing something stupid in the VM (e.g., indexing yet again), it doesn't thrash my Mac's disk quite so badly.
Another issue is that many tools (Visual Studio comes to mind) break in frustrating ways when bits of them are on the non-primary disk.
Use your second disk for big random things.