Why is it that advertised disk space is almost always higher than the disk space reported by the UI? For example, I have an "80 GB" hard drive, but the iTunes UI indicates only 74 GB. I see the same thing with hard disks, where the capacity reported for the drive letter is smaller than advertised.
There are 3 reasons why the amount of space you can actually use is different from that listed for the drive, all of which work against you:
Hard drive manufacturers treat 1GB as one billion bytes, while the operating system calls it 1,073,741,824 bytes (1000 * 1000 * 1000 vs 1024 * 1024 * 1024).
You lose some space for file tables when formatting.
Disk space is divided into chunks larger than 1 byte (typically 4K). Using typical Windows defaults, a 1 byte file takes up 4K of space on disk.
Of these, the first two can influence the amount of space reported by the drive (though IIRC the 2nd one was more of an issue with FAT32 than NTFS). The last one only influences the amount of free space remaining, but will still prevent you from using the full capacity of your drive.
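To see the first point in numbers, here's a quick back-of-the-envelope sketch in Java (using the 80 GB drive from the question as the example figure):

    public class DiskSize {
        public static void main(String[] args) {
            // "80 GB" as the manufacturer counts it: 80 billion bytes
            long advertisedBytes = 80L * 1000 * 1000 * 1000;
            // The same byte count divided the way the OS divides it (1024^3 per "GB")
            double reportedGb = advertisedBytes / (1024.0 * 1024 * 1024);
            System.out.printf("Advertised: 80 GB (%d bytes)%n", advertisedBytes);
            System.out.printf("Reported by the OS: %.1f GB%n", reportedGb); // ~74.5
        }
    }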
It's the difference between the way the OS calculates space and the way hard drive manufacturers do.
OS: 1 MB = 1024 KB
Vendor: 1 MB = 1000 KB
The vendor will always use the ×1000 definition because it makes the numbers look bigger.
The main culprit is using base 10 vs. base 2 to list the storage size. The difference compounds with each prefix step: about 2.4% at the KB level, 4.9% at MB, and 7.4% at GB.
There is a movement to label base-2 sizes with the dedicated binary prefixes (KiB, MiB, GiB) so the two conventions can't be confused.
It's the difference between the standard (SI) prefixes (giga, mega, kilo, etc.) which are multiples of 1000 and the binary prefixes which are multiples of 1024.
Marketing considers 80 gigabytes to be 80,000,000,000 bytes, while the OS considers 80 gigabytes to be 85,899,345,920 bytes. So the drive's 80,000,000,000 bytes divide out to 80,000,000,000 / 1,073,741,824 ≈ 74.5 GB as the OS reports it, which matches the 74 in the question.
http://www.google.com/search?q=80000000000+bytes+in+GB
Sometimes it is also due to a hidden partition that the OS or some preinstalled software reserves for backup or system purposes.
One party may consider a MB to be 1024 KB while another treats it as 1000 KB, and similarly for GB: some say 1024 MB, others 1000 MB.
Also, the advertised figure refers to the unformatted size; formatting takes up some space.
Additionally, the advertised capacity is often a rounded number, which adds to the difference. You can see this spelled out in the disclaimer text on the outside of most hard drive boxes!
Related
Currently I am implementing a standard merge sort that requires O(n) space.
My RAM is 8 GB. A text file of 1 million numbers (7.8 MB) can be sorted by my merge sort, but for a text file of 2 million numbers (15.6 MB) the program crashes with a segmentation fault.
My question is: is there a way to calculate the maximum number of integers I can sort, and is my RAM in any way related to that maximum?
I'm currently working on a school project to design a network, and we're asked to assess traffic on the network. In our solution (dealing with taxi drivers), each driver will have a smartphone whose position can be tracked so that the best possible ride can be assigned to them (through Google Maps, for instance).
What would be the size of data sent and received by a single app during one day? (I need a rough estimate, no real need for a precise answer to the closest bit)
Thanks
GPS positions stored compactly, but not compressed, need this many bytes:
time: 8 (4 bytes is possible too)
latitude: 4 (if stored as an integer or float) or 8
longitude: 4 or 8
speed: 2-4 (short: 2, integer: 4)
course: 2-4
So stored in binary in main memory, one location including the most important attributes will need 20-24 bytes (see the sketch below).
If you store them in main memory as individual location objects, an additional ~16 bytes per object are needed in a simple (Java) solution.
The maximum recording frequency is usually once per second (1/s). Per hour this needs 3600 s * 40 bytes = 144 KB, so a smartphone can easily keep that even in main memory.
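As a rough illustration of that layout, here's a minimal Java sketch that packs one fix into 20 bytes using a ByteBuffer; the exact field widths and units are just the assumptions listed above, not a fixed format:

    import java.nio.ByteBuffer;

    // One GPS fix packed into 20 bytes: time(8) + lat(4) + lon(4) + speed(2) + course(2)
    public class GpsFix {
        static byte[] encode(long timeMillis, float lat, float lon, short speedKmh, short courseDeg) {
            ByteBuffer buf = ByteBuffer.allocate(20);
            buf.putLong(timeMillis);   // 8 bytes
            buf.putFloat(lat);         // 4 bytes
            buf.putFloat(lon);         // 4 bytes
            buf.putShort(speedKmh);    // 2 bytes
            buf.putShort(courseDeg);   // 2 bytes
            return buf.array();
        }

        public static void main(String[] args) {
            byte[] fix = encode(System.currentTimeMillis(), 48.8566f, 2.3522f, (short) 42, (short) 180);
            System.out.println("Encoded fix size: " + fix.length + " bytes"); // prints 20
        }
    }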
Not sure if you want to transmit the data:
When transmitting this to a server, the data volume will usually grow, depending on the transmission protocol used.
But it mainly depends on how you transmit the data and how often.
If you transmit a position every 5 minutes, you don't have to worry, even
if you use a simple solution that transmits 100 times more bytes than necessary.
For your school project, try to transmit no more often than every 5, or better, 10 minutes.
Encryption adds a huge overhead.
To save bytes:
- Collect for as long as feasible, then transmit in one batch.
- Favor binary protocols over text-based ones (BSON is more compact than JSON; this might be out of scope for your school project).
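To get the rough daily figure the question asks for, here's a small Java sketch; the per-message size, shift length, and JSON blow-up factor are assumptions for illustration only:

    public class TrafficEstimate {
        public static void main(String[] args) {
            int bytesPerFix = 40;          // one binary fix plus some framing (assumption)
            int jsonOverheadFactor = 10;   // a naive text/JSON message might be ~10x larger (assumption)
            int intervalMinutes = 5;       // one transmission every 5 minutes
            int hoursPerDay = 12;          // length of a driver's shift (assumption)

            int messagesPerDay = hoursPerDay * 60 / intervalMinutes;
            long binaryBytes = (long) messagesPerDay * bytesPerFix;
            long jsonBytes = binaryBytes * jsonOverheadFactor;

            System.out.println("Messages per day: " + messagesPerDay);         // 144
            System.out.println("Binary payload:   " + binaryBytes + " bytes"); // ~5.8 KB
            System.out.println("Naive JSON:       " + jsonBytes + " bytes");   // ~58 KB
        }
    }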
With reference to this article, there is a line that reads:
Because there are limits to the number of blocks, or drive addresses, that an operating system can address. By defining a block as several sectors, an OS can work with bigger hard drives without increasing the number of block addresses.
What does it mean? What is meant by "operating system can address"? And the subsequent maths isn't clear either. How can 64*512 be less than 64*4?
Look at it this way. Every block that's used in your operating system's file system to store data requires a certain amount of metadata to be stored along with the actual file data you're writing, e.g. timestamps (created, modified), filename, ownership/permission bits. For files that span multiple blocks, you also have to store the IDs of each of those blocks and the order they're chained together, etc.
Determining block size in an OS is a case of tradeoffs. Every file must occupy at least one block, even if the file is 0 bytes long, so there's something for the file's metadata to be attached to. Unless you can guarantee that your files will ALWAYS be some multiple of the block size in size (e.g. in a 4k block OS, all files are 4k), there will be a certain amount of wastage for the files that don't exactly fit within that block.
Small block sizes are good when you need to store many small files. On the other hand, more blocks = more metadata, so you end up wasting a chunk of your storage system on overhead, tracking the location of all the files.
On the flip side, large blocks mean less metadata, but also mean greater wastage when you're storing small files. e.g. a 1 byte file stored in a 4k block wastes 3.99k of that block.
Each of those blocks must be given an ID number by the OS, so it can be uniquely identified. An OS which uses an 8 bit ID field can track only 256 blocks, and therefore, by extension, only 256 files. But if each of those blocks is actually 1 megabyte in size, then you can store up to 256 megabytes of data.
The article you link to has a typo/logical flaw: they meant 512 BYTES, not 512k, so 64*512 bytes is smaller than 64*4k, aka 64*4096 bytes. Most hard drives shipped with 512 byte sector/block sizes.
However, as discussed earlier, small blocks mean more metadata. With drive sizes now in the 3+ terabyte range, with 512 byte blocks, you had to have metadata storage for 3TB/512 bytes = 6.44 billion blocks. That's one major waste of space. So now they ship drives with 4k blocks, 8 times larger, so you only need metadata storage for 805 million blocks. The total number of possible files has been cut by a factor of 8, but the reduced amount of metadata means you can actually store a larger amount of useable data.
Incidentally, 6.4 billion blocks is more than a 32-bit system can address directly: 2^32 tops out at ~4.3 billion, so older 32-bit machines could not use the entirety of a 3TB drive with 512-byte blocks. Hence the switch to larger block sizes. 32-bit boxes can easily handle 805 million blocks.
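Here's the same block-count arithmetic as a small Java sketch (taking 3 TB as 3 × 2^40 bytes, as in the figures above):

    public class BlockCount {
        public static void main(String[] args) {
            long driveBytes = 3L * 1024 * 1024 * 1024 * 1024;  // 3 TB (binary)
            long blocks512  = driveBytes / 512;                // ~6.44 billion blocks
            long blocks4k   = driveBytes / 4096;               // ~805 million blocks
            long limit32bit = 1L << 32;                        // ~4.29 billion addressable IDs

            System.out.println("512-byte blocks: " + blocks512
                    + " (fits in 32 bits? " + (blocks512 < limit32bit) + ")");
            System.out.println("4K blocks:       " + blocks4k
                    + " (fits in 32 bits? " + (blocks4k < limit32bit) + ")");
        }
    }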
I know sector 0 is mostly used for loading the operating system. Some Windows versions have boot loaders bigger than 1 sector and use sectors 1 and 2 as well. Sectors 6 through 8 often hold a backup of sectors 0-2. But what is the rest for? Why is the default in many formatting tools 32 reserved sectors?
Actually, the FAT32 boot sector has a backup copy starting at sector 6, so that's 12 sectors in total. And the FAT32 boot data continues into a 13th sector, so the minimum is 13 sectors.
Then the formatting tool wants to align the first cluster to a 1 MB boundary, so the number of reserved sectors can vary quite a bit, because the number of FAT sectors differs from volume to volume.
For an HDD a 4K boundary is enough, but some SSDs work faster with a 1 MB boundary. At least Windows 7 formats with 1 MB alignment.
Some file systems may want a specific alignment of data and will therefore increase the number of reserved sectors to meet that alignment.
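As a rough illustration, here's a Java sketch of that alignment calculation; it assumes the usual FAT32 layout (reserved region, then the FATs, then the data area), 512-byte sectors, and an example FAT size, so the exact numbers are illustrative only:

    public class ReservedSectors {
        public static void main(String[] args) {
            int sectorSize = 512;
            int alignSectors = (1024 * 1024) / sectorSize;  // 1 MB boundary = 2048 sectors
            int numFats = 2;
            int sectorsPerFat = 7681;                       // example value; depends on volume size

            // The data area starts after the reserved region and the FATs;
            // pick the reserved count so that this start lands on the alignment boundary.
            int minReserved = 32;                           // common default minimum
            int start = minReserved + numFats * sectorsPerFat;
            int padding = (alignSectors - (start % alignSectors)) % alignSectors;
            int reserved = minReserved + padding;

            System.out.println("Reserved sectors chosen: " + reserved);
            System.out.println("Data area starts at sector " + (reserved + numFats * sectorsPerFat));
        }
    }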
Disclaimer: I'm very new to SQL and databases in general.
I need to create a field that will store a maximum of 32 characters of text data. Does "VARCHAR(32)" mean that I have exactly 32 characters for my data? Do I need to reserve an extra character for null-termination?
I conducted a simple test and it seems that this is a WYSIWYG buffer. However, I wanted to get a concrete answer from people who actually know what they're doing.
I have a C[++] background, so this question is raising alarm bells in my head.
Yes, you have 32 characters at your disposal. SQL does not concern itself with null-terminated strings the way some programming languages do.
The size in your VARCHAR declaration is the maximum size of your data, so in this case 32 characters. However, a VARCHAR is a variable-length field, so the actual physical storage used is only the size of your data, plus one or two bytes.
If you put a 10-character string into a VARCHAR(32), the physical storage will be 11 or 12 bytes (the manual will tell you the exact formula).
However, when MySQL is dealing with result sets (i.e. after a SELECT), 32 bytes will be allocated in memory for that field for every record.
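As a rough illustration of the storage formula above (not the exact MySQL internals), here's a small Java sketch; it assumes a single-byte character set and the one-or-two-byte length prefix described in the MySQL manual's storage-requirements section:

    public class VarcharSize {
        // Approximate on-disk bytes for a VARCHAR(maxLen) holding 'value'
        // (single-byte character set assumed for simplicity)
        static int storageBytes(int maxLen, String value) {
            int lengthPrefix = (maxLen <= 255) ? 1 : 2;  // short columns use a 1-byte prefix, longer ones 2
            return value.length() + lengthPrefix;
        }

        public static void main(String[] args) {
            // A 10-character string in a VARCHAR(32) column -> 11 bytes
            System.out.println(storageBytes(32, "hello good"));
        }
    }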