What is the use of FAT32 reserved sectors?

I know sector 0 is mostly used for loading the operating system. Some Windows versions have bootloaders bigger than one sector and use sectors 1 and 2 as well. Sectors 6 through 8 often hold a backup of sectors 0-2. But what is the rest for? Why do many formatting tools default to 32 reserved sectors?

Actually, the FAT32 boot sector has a backup copy starting at sector 6, so that is 12 sectors in total, and the FAT32 boot sector continues into the 13th sector. So the minimum is 13 reserved sectors.
On top of that, the formatting tool wants to align the first cluster to a 1 MB boundary, so the number of reserved sectors can vary a lot because the FATs themselves vary in size.
For an HDD a 4K boundary is enough, but some SSDs work faster with 1 MB alignment. Windows 7, at least, formats with 1 MB alignment.
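Here is a sketch of how a formatting tool might pick the reserved sector count so that the data area lands on a 1 MiB boundary (the layout rule, data area = reserved sectors + the FATs, follows the FAT32 on-disk format; the partition offset, FAT size and the helper name reserved_for_alignment are illustrative assumptions):

```python
SECTOR = 512
ALIGN = 1024 * 1024 // SECTOR          # 1 MiB expressed in sectors

def reserved_for_alignment(partition_start_lba: int, fat_sectors: int,
                           num_fats: int = 2, minimum: int = 13) -> int:
    """Smallest reserved-sector count >= minimum that puts the data area on a 1 MiB boundary."""
    reserved = minimum
    while (partition_start_lba + reserved + num_fats * fat_sectors) % ALIGN != 0:
        reserved += 1
    return reserved

# Example: partition starts at LBA 2048 (itself 1 MiB aligned), each FAT is 7681 sectors
print(reserved_for_alignment(2048, 7681))   # 1022 reserved sectors in this example
```

Real formatters may instead tweak the FAT size or cluster count, but the idea is the same: the reserved area is the padding knob.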

Some file systems may want a specific alignment of the data area and will therefore increase the number of reserved sectors to meet that alignment.
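For reference, the reserved sector count is simply a field in the BPB at the start of the volume. A minimal sketch that reads it (offsets follow the FAT32 boot sector layout; "fat32.img" is a hypothetical image file):

```python
import struct

# Read the first sector of a FAT32 volume image and pull a few BPB fields.
with open("fat32.img", "rb") as f:
    boot = f.read(512)

bytes_per_sector = struct.unpack_from("<H", boot, 11)[0]  # BPB_BytsPerSec
reserved_sectors = struct.unpack_from("<H", boot, 14)[0]  # BPB_RsvdSecCnt (often 32)
num_fats         = boot[16]                               # BPB_NumFATs
fat_size_sectors = struct.unpack_from("<I", boot, 36)[0]  # BPB_FATSz32

print(f"{reserved_sectors} reserved sectors; data area begins at sector "
      f"{reserved_sectors + num_fats * fat_size_sectors} of the volume")
```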

Related

Stored procedure returns 17 million rows and throws "out of memory" while accessing the dataset in Delphi

I'm using Delphi 6 to develop a Windows application and have a stored procedure which returns around 17 million rows. It takes 3 to 4 minutes to return the data in SQL Server Management Studio.
I'm getting an "out of memory" exception when I try to access the result dataset. I'm thinking that sp.execute might not have executed fully. Do I need to follow any steps to fix this, or should I use sleep() to fix this issue?
Delphi 6 can only compile 32 bit executables.
32 bit executables running on a 32 bit Windows have a memory limit of 2 GiB. This can be extended to 3 GiB with the /3GB boot switch.
32 bit executables running on a 64 bit Windows have the same memory limit of 2 GiB. Using the "large address aware" flag they can address at most 4 GiB of memory.
32 bit Windows executables emulated via WINE under Linux or Unix cannot overcome this either, because 32 bits can at most represent the number 4,294,967,295 = 2³² - 1, so the logical limit is 4 GiB in any case.
Wanting 17 million rows with roughly 1.9 GiB of memory available means that 1.9 * 1024 * 1024 * 1024 = 2,040,109,465 bytes divided by 17,000,000 gives a mean of just 120 bytes per row. I can hardly imagine that is enough. And that would only be the gross payload; memory for variables is still needed on top. Even if you managed to put everything into large arrays, you'd still need plenty of overhead memory.
Your software design is wrong. As James Z and Ken White already pointed out: there can't be a scenario where you need all those rows at once, much less one where the user views them all at once. I feel sorry for the poor souls who have to use that software - who knows what else is misconceived in there. The memory consumption should remain at sane levels.
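A quick back-of-the-envelope check of that arithmetic as a Python sketch (the 1.9 GiB figure is the one assumed above):

```python
# How many bytes per row are left if 17 million rows must fit into ~1.9 GiB?
free_bytes = int(1.9 * 1024 * 1024 * 1024)   # ~2,040,109,465 bytes
rows = 17_000_000

print(f"{free_bytes / rows:.0f} bytes per row")   # ~120 bytes - not realistic for a full record
```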

How to get the LBA (logical block address) of a file from the MFT on an NTFS file system?

I accessed the $MFT file and extracted file attributes.
Given the file attributes from the MFT, how can I get the LBA of a file from its MFT record on an NTFS file system?
I know the cluster number of the file.
Is it possible to calculate the LBA from the cluster number?
I'm not entirely sure of your question-- But if you're simply trying to find the logical location on disk of a file, there are various IOCTLs that will achieve this.
For instance, MFT File records: FSCTL_GET_NTFS_FILE_RECORD
http://msdn.microsoft.com/en-us/library/windows/desktop/aa364568(v=vs.85).aspx
Location on disk of a specific file via HANDLE: FSCTL_GET_RETRIEVAL_POINTERS
http://msdn.microsoft.com/en-us/library/windows/desktop/aa364572(v=vs.85).aspx
If you're trying to parse NTFS on your own, you'll need to follow the $DATA attribute, which will always be non-resident data runs (unless it's a small file whose data is resident within the MFT). Microsoft's data runs are fairly simple structures: the two nibbles of each run's header byte give the sizes of the length and offset fields that describe the next run of data.
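As an illustration, here is a minimal sketch of parsing such a run list in Python (low nibble = size of the length field, high nibble = size of the offset field; offsets are signed deltas from the previous run's starting LCN):

```python
def parse_runlist(data: bytes):
    """Parse an NTFS $DATA run list into (start_lcn, length_in_clusters) pairs."""
    runs = []
    pos = 0
    current_lcn = 0
    while pos < len(data) and data[pos] != 0x00:      # a 0x00 header terminates the list
        header = data[pos]
        len_size = header & 0x0F                      # low nibble: bytes used for run length
        off_size = (header >> 4) & 0x0F               # high nibble: bytes used for run offset
        pos += 1

        run_length = int.from_bytes(data[pos:pos + len_size], "little")
        pos += len_size

        # Offset is a signed delta from the previous run's start
        # (off_size == 0 means a sparse run with no clusters on disk).
        run_offset = int.from_bytes(data[pos:pos + off_size], "little", signed=True)
        pos += off_size

        current_lcn += run_offset
        runs.append((current_lcn, run_length))
    return runs

# Example run list: header 0x21 -> 1-byte length (0x18), 2-byte offset (0x5634)
print(parse_runlist(bytes([0x21, 0x18, 0x34, 0x56, 0x00])))   # [(22068, 24)]
```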
IMHO you should write the code by doing some basic arithmetic rather than using IOCTLs and FSCTLs for everything. You should know the size of your disk and the offset from which a volume starts (or every extent by using IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS) and store those values somewhere. Then just add the LCN times the size of a cluster to the offset of the extent on the disk.
Most of the time you just have to deal with one extent. When you have multiple extents, you can figure out which extent a cluster lies on by multiplying the LCN by the cluster size and then subtracting the size of each extent returned by the IOCTL, in the order they are returned; as soon as the next size to subtract is greater than your remaining number, that LCN is on that extent.
A file is a single virtually contiguous unit consisting of virtual clusters. These virtual clusters map onto extents (fragments) of logical clusters, where LCN 0 is the first cluster of the volume and contains its boot sector. Logical clusters may be remapped to other logical clusters if there are bad clusters. The logical cluster is then translated to a physical cluster number (PCN), or to an LBA (the first sector of that physical cluster), using the number of hidden sectors (the sector number of the volume's boot sector relative to the start of the disk):
PCN = hidden sectors / (sectors per cluster in the volume) + LCN
LBA = hidden sectors + LCN * (sectors per cluster in the volume)
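A minimal sketch of that last formula (hidden_sectors and sectors_per_cluster come from the volume's boot sector; the LCN would come from the file's run list):

```python
def lcn_to_lba(lcn: int, hidden_sectors: int, sectors_per_cluster: int) -> int:
    """First absolute sector (LBA) on the disk occupied by a given logical cluster."""
    return hidden_sectors + lcn * sectors_per_cluster

def lcn_to_pcn(lcn: int, hidden_sectors: int, sectors_per_cluster: int) -> int:
    """Physical cluster number, per the formula above."""
    return hidden_sectors // sectors_per_cluster + lcn

# Example: volume starts at sector 2048, 8 sectors per cluster, file data begins at LCN 22068
print(lcn_to_lba(22068, 2048, 8))   # 178592
```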

How to calculate the capacity of a memory given a range of addresses?

I have another exercise that I couldn't solve.
A central memory is composed of two memory modules (RAM). The total address range attributed to the central memory is:
FROM 0000 0000H TO 3FFF FFFFH
1/ Give the total capacity of the central memory (in megabytes and gigabytes).
2/ Give the capacity of each memory module (RAM).
3/ Give the first and last address of each memory module (RAM).
Sorry for the bad translation; the exercise is in French.
Well, 1 is easy. The range from 0000 0000H to 3FFF FFFFH contains 4000 0000H addresses. (Just like 0 to 3 is four addresses, 0, 1, 2, and 3.) 4000 0000H is 1,073,741,824 decimal, or 1GB. 1,024 MB.
2 is no problem. If two memory modules give 1GB, then each module must be 512MB.
3 is impossible. We don't know if the memory modules are consecutive or interleaved. But if we assume they're consecutive, which I imagine is what the exercise wants us to do, then the first one must be 0000 0000H to 1FFF FFFFH and the second one must be 2000 0000H to 3FFF FFFFH.
Note that mapping memory modules consecutively is generally considered dumb. It means that in the typical case where the memory module bandwidth is the limiting factor, if an application is only using the first half of memory, it's only using one of the two modules, wasting half the available memory bandwidth. (Though, in the less common case where the memory is as fast or faster than the CPU or its memory bus, it doesn't matter.)
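For completeness, the same arithmetic as a small Python sketch (assuming byte-addressable memory and consecutively mapped modules, as above):

```python
first, last = 0x00000000, 0x3FFFFFFF
total_bytes = last - first + 1                      # 0x40000000 = 1,073,741,824

print(total_bytes // (1024 * 1024), "MB")           # 1024 MB
print(total_bytes // (1024 * 1024 * 1024), "GB")    # 1 GB

module_bytes = total_bytes // 2                     # two equal modules -> 512 MB each
module0 = (first, first + module_bytes - 1)         # 00000000H - 1FFFFFFFH
module1 = (first + module_bytes, last)              # 20000000H - 3FFFFFFFH
print([f"{lo:08X}-{hi:08X}" for lo, hi in (module0, module1)])
```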

Difference between blocks and sectors

With reference to this article, there is a line that reads:
Because there are limits to the number of blocks, or drive addresses, that an operating system can address. By defining a block as several sectors, an OS can work with bigger hard drives without increasing the number of block addresses.
What does it mean? What is meant by "operating system can address"? And the subsequent maths isn't clear either. How can 64*512 be less than 64*4?
Look at it this way. Every block that's used in your operating system's file system to store data requires a certain amount of metadata to be stored along with the actual file data you're writing. e.g: timestamps (created, modified), filename, ownership/permission bits. For files that span multiple blocks, you also have to store the IDs of each of those blocks and the order they're chained together, etc.
Determining block size in an OS is a case of tradeoffs. Every file must occupy at least one block, even if the file is 0 bytes long, so there's something for the file's metadata to be attached to. Unless you can guarantee that your files will ALWAYS be some multiple of the block size in size (e.g. in a 4k block OS, all files are 4k), there will be a certain amount of wastage for the files that don't exactly fit within that block.
Small block sizes are good when you need to store many small files. On the other hand, more blocks = more metadata, so you end up wasting a chunk of your storage system on overhead, tracking the location of all the files.
On the flip side, large blocks mean less metadata, but also mean greater wastage when you're storing small files. e.g. a 1 byte file stored in a 4k block wastes 3.99k of that block.
Each of those blocks must be given an ID number by the OS, so it can be uniquely identified. An OS which uses an 8 bit ID field can track only 256 blocks, and therefore, by extension, only 256 files. But if each of those blocks is actually 1 megabyte in size, then you can store up to 256 megabytes of data.
The article you link to has a typo/logical flaw: they meant 512 BYTES, not 512k, so 64*512 bytes is smaller than 64*4k, aka 64*4096 bytes. Most hard drives shipped with 512 byte sector/block sizes.
However, as discussed earlier, small blocks mean more metadata. With drive sizes now in the 3+ terabyte range, 512 byte blocks mean tracking metadata for 3 TiB / 512 bytes ≈ 6.44 billion blocks. That's a major waste of space. So now they ship drives with 4K blocks, 8 times larger, so you only need metadata for about 805 million blocks. The total number of possible files has been cut by a factor of 8, but the reduced amount of metadata means you can actually store a larger amount of usable data.
Incidentally, 6.44 billion blocks is more than a 32 bit system can address directly. 2^32 tops out at about 4.29 billion, so older 32 bit machines could not address the entirety of a 3 TB drive; hence the switch to larger block sizes. 32 bit boxes can easily handle 805 million blocks.
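The block-count arithmetic from those last two paragraphs, as a quick sketch (a 3 TiB drive is assumed, matching the figures above):

```python
TIB = 1024 ** 4
drive = 3 * TIB

blocks_512 = drive // 512       # blocks at a 512-byte block size
blocks_4k  = drive // 4096      # blocks at a 4 KiB block size

print(f"{blocks_512:,} blocks at 512 B")              # 6,442,450,944
print(f"{blocks_4k:,} blocks at 4 KiB")               # 805,306,368
print(f"32-bit limit: {2**32:,} blocks")              # 4,294,967,296 < 6.44 billion
```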

Advertised disk space vs actual disk space [closed]

Why is it that advertised disk space is almost always higher than the disk space reported by the UI? For example, I have an "80 gb" hard drive, but the iTunes UI indicates only 74. I usually see this as well with hard disks and the amount reported with the drive letter.
There are 3 reasons why the amount of space you can actually use is different from that listed for the drive, all of which work against you:
Hard drive manufacturers treat 1GB as one billion bytes, while the operating system calls it 1,073,741,824 bytes (1000 * 1000 * 1000 vs 1024 * 1024 * 1024).
You lose some space for file tables when formatting.
Disk space is divided into chunks larger than 1 byte (typically 4K). Using typical Windows defaults, a 1 byte file takes up 4K of space on disk.
Of these, the first two can influence the amount of space reported by the drive (though IIRC the 2nd one was more of an issue with FAT32 than NTFS). The last one only influences the amount of free space remaining, but will still prevent you from using the full capacity of your drive.
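The third point, allocation in whole clusters, can be illustrated with a small sketch (4 KiB clusters assumed, as in the typical Windows default; size_on_disk is just an illustrative helper):

```python
def size_on_disk(file_size: int, cluster_size: int = 4096) -> int:
    """Space actually allocated: whole clusters, rounded up."""
    if file_size == 0:
        return 0                                 # (some filesystems store tiny files inline)
    clusters = -(-file_size // cluster_size)     # ceiling division
    return clusters * cluster_size

print(size_on_disk(1))       # 4096 - a 1-byte file still occupies a full 4 KiB cluster
print(size_on_disk(4097))    # 8192
```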
It's the way the OS calculates free space vs the hard drive manufacturers.
OS: 1 MB = 1024 KB
Vendor: 1 MB = 1000 KB
The vendor will always use the *1000 definition because it makes their numbers look bigger.
The main culprit is using base 10 vs. base 2 to list the storage size. It effectively becomes a rounding error.
There is a movement to try and list storage size with base 2 values instead of base 10 to reflect the true size.
It's the difference between the standard (SI) prefixes (giga, mega, kilo, etc.) which are multiples of 1000 and the binary prefixes which are multiples of 1024.
Marketing considers 80 gigabytes to be 80,000,000,000 bytes. The OS considers 80 gigabytes to be 85,899,345,920 bytes.
http://www.google.com/search?q=80000000000+bytes+in+GB
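The conversion behind the 80 GB vs. roughly 74 GB discrepancy, as a sketch:

```python
advertised = 80 * 1000**3            # 80,000,000,000 bytes as sold
gib = advertised / 1024**3
print(f"{gib:.1f} GiB")              # ~74.5, roughly what the UI reports as "GB"
```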
Usually due to some partitioned space that the OS or some software takes and hides for backup or system purposes.
Some manufacturers consider a MB to be 1024 KB; others use 1000 KB. Similarly for GB: some say 1024 MB, others 1000 MB.
Also, the advertised figure refers to the unformatted size; formatting takes up some space.
Additionally, the advertised gigabytes are often slightly rounded numbers, which also creates differences. You can see this in the disclaimer text on the outside of most hard drive boxes!