How to evaluate a file's byte size versus the disk's real usage? - file-io

First, let me state the facts: when you write 1 KB of data to disk, the real disk usage is not
precisely 1 KB; it may be more than that.
I'm downloading files from a third-party file system to a
local disk. For each download I can get the file size from the HTTP response (for
example, 7 KB), and I add it to a disk-usage counter variable. When the usage reaches 90% of the disk's
capacity, the program automatically switches to another available disk. But in
practice, the disk's real usage grows faster than the bytes actually received: before
I have downloaded 2 TB (at around 1 TB, for example), the disk's usage has already reached 2 TB (according to
df -h). How can I precisely estimate the disk usage while downloading files? Is there a
ratio between a file's byte size and its real disk usage?
I considered using df -h to get the real-time disk usage, but creating
an OS process for every file download is not realistic.
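One part of the gap is block rounding: each file occupies whole filesystem blocks, so a 7 KB file on a filesystem with 4 KB blocks actually takes 8 KB, plus a little metadata. Summing the HTTP sizes therefore undercounts, especially across many small files. Below is a minimal sketch, assuming Java NIO, a hypothetical download directory, and a 4 KB block size, that rounds each file up to whole blocks and also queries the filesystem directly instead of spawning df:

import java.nio.file.FileStore;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class DiskUsageEstimate {
    // Hypothetical block size; ext4/NTFS commonly use 4096 bytes, but check
    // your filesystem (e.g. with `stat -f` or `tune2fs -l`).
    private static final long BLOCK_SIZE = 4096;

    // Round a file's byte size up to whole filesystem blocks.
    static long estimateOnDiskSize(long fileBytes) {
        return ((fileBytes + BLOCK_SIZE - 1) / BLOCK_SIZE) * BLOCK_SIZE;
    }

    public static void main(String[] args) throws Exception {
        // A 7 KB download occupies two 4 KB blocks, i.e. 8192 bytes on disk.
        System.out.println(estimateOnDiskSize(7 * 1024));

        // Ask the filesystem for the real numbers instead of spawning `df -h`.
        Path downloadDir = Paths.get("/data/downloads");   // hypothetical path
        FileStore store = Files.getFileStore(downloadDir);
        double usedRatio = 1.0 - (double) store.getUsableSpace() / store.getTotalSpace();
        System.out.printf("disk usage: %.1f%%%n", usedRatio * 100);
    }
}

Files.getFileStore is an in-process library call, so checking it every few downloads is far cheaper than forking a df process.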

Related

Is redis using cache?

I restarted my Redis server after 120 days.
Before the restart, memory usage was 29.5 GB.
After the restart, memory usage was 27.5 GB.
So where did the 2 GB reduction come from?
Was memory freed back to RAM, as described in this article: https://redis.io/topics/memory-optimization
Redis will not always free up (return) memory to the OS when keys are
removed. This is not something special about Redis, but it is how most
malloc() implementations work. For example if you fill an instance
with 5GB worth of data, and then remove the equivalent of 2GB of data,
the Resident Set Size (also known as the RSS, which is the number of
memory pages consumed by the process) will probably still be around
5GB, even if Redis will claim that the user memory is around 3GB. This
happens because the underlying allocator can't easily release the
memory. For example often most of the removed keys were allocated in
the same pages as the other keys that still exist. The previous point
means that you need to provision memory based on your peak memory
usage. If your workload from time to time requires 10GB, even if most
of the times 5GB could do, you need to provision for 10GB.
However allocators are smart and are able to reuse free chunks of
memory, so after you freed 2GB of your 5GB data set, when you start
adding more keys again, you'll see the RSS (Resident Set Size) to stay
steady and don't grow more, as you add up to 2GB of additional keys.
The allocator is basically trying to reuse the 2GB of memory
previously (logically) freed.
Because of all this, the fragmentation ratio is not reliable when you
had a memory usage that at peak is much larger than the currently used
memory. The fragmentation is calculated as the physical memory actually
used (the RSS value) divided by the amount of memory currently in use
(the sum of all the allocations performed by Redis).
Because the RSS reflects the peak memory, when the (virtually) used
memory is low since a lot of keys / values were freed, but the RSS is
high, the ratio RSS / mem_used will be very high.
Or did the reduction come from freeing OS cache memory that my Redis server had been using?
Does Redis use the OS cache, a cache of a cache?
Thanks!
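If it helps, the used_memory, used_memory_rss and mem_fragmentation_ratio fields of INFO memory expose exactly the numbers discussed above: used_memory is what Redis has allocated for data, used_memory_rss is what the OS actually holds for the process. A minimal sketch of reading them, assuming the Jedis client and a Redis instance on localhost:

import redis.clients.jedis.Jedis;

public class RedisMemoryCheck {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // INFO memory returns lines such as "used_memory:12345" and
            // "used_memory_rss:23456"; mem_fragmentation_ratio = rss / used.
            for (String line : jedis.info("memory").split("\r\n")) {
                if (line.startsWith("used_memory:")
                        || line.startsWith("used_memory_rss:")
                        || line.startsWith("mem_fragmentation_ratio:")) {
                    System.out.println(line);
                }
            }
        }
    }
}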

FileChannel.transferTo (supposedly zero-copy) not giving any performance gain

I am working on a REST API that has an endpoint to download a file that could be > 2 GB in size. I have read that Java's FileChannel.transferTo(...) will use zero-copy if the OS supports it. My server is running on localhost during development on my MacBook Pro OS 10.11.6.
I compared the following two methods of writing file to response stream:
Copying a fixed number of bytes from FileChannel to WritableByteChannel using transferTo
Reading a fixed number of bytes from FileInputStream into a byte array (size 4096) and writing to OutputStream in a loop.
The time taken for a 5.2 GB file is between 20 and 23 seconds with both methods. I tried transferTo with the number of bytes per transfer set to the following values: 4 KB (i.e. 4 * 1024), 1 MB, and 50 MB. The time taken to write is in the same range in all three cases.
Time taken is measured from before entering the while-loop to after exiting the while-loop, in which bytes are read from the file. This is all on the server side. The network hop time does not figure into this.
Any ideas on what the reason could be? I am quite sure MacOS 10.11.6 should support zero-copy (i.e. sendfile system call).
EDIT (6/18/2018):
I found the following blog post from 2015, saying that sendfile on MacOS X is broken. Could it be that this problem still exists?
https://blog.phusion.nl/2015/06/04/the-brokenness-of-the-sendfile-system-call/
The (high) transfer rate that you are quoting is likely close to or at the limit of what a SATA device can do anyway. If my guess is right, you will not see a performance gain reflected in the time it takes to run your test - however there will likely be a change in the CPU load during the test. Given that you have a relatively powerful machine, your CPU and memory are fast enough. Any method (zero-copy or not) will work at the same speed - which is the speed of your disk. However, zero-copy will cause a lot less CPU load and will not grab unnecessary bandwidth from your memory, either. Therefore, you should test different methods and see which one ends up using the least amount of CPU and choose that method for your application.
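For reference, here is a minimal sketch of the two methods being compared, with hypothetical file paths and a 50 MB chunk size for transferTo; the loop is needed because a single transferTo call may move fewer bytes than requested:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

public class CopyComparison {

    // Method 1: FileChannel.transferTo, letting the kernel copy (sendfile) where supported.
    static void copyWithTransferTo(String src, OutputStream response) throws IOException {
        try (FileChannel in = new FileInputStream(src).getChannel()) {
            WritableByteChannel out = Channels.newChannel(response);
            long position = 0, size = in.size();
            while (position < size) {
                // transferTo may move fewer bytes than requested, hence the loop.
                position += in.transferTo(position, 50L * 1024 * 1024, out);
            }
        }
    }

    // Method 2: classic read/write loop through a user-space byte[] buffer.
    static void copyWithBuffer(String src, OutputStream response) throws IOException {
        try (FileInputStream in = new FileInputStream(src)) {
            byte[] buffer = new byte[4096];
            int n;
            while ((n = in.read(buffer)) != -1) {
                response.write(buffer, 0, n);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical paths for a standalone run; in the REST endpoint the
        // OutputStream would be the HTTP response stream instead of a file.
        try (FileOutputStream out = new FileOutputStream("/tmp/copy1.bin")) {
            copyWithTransferTo("/tmp/bigfile.bin", out);
        }
        try (FileOutputStream out = new FileOutputStream("/tmp/copy2.bin")) {
            copyWithBuffer("/tmp/bigfile.bin", out);
        }
    }
}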

Configuring Lucene Index writer, controlling the segment formation (setRAMBufferSizeMB)

How should the parameter setRAMBufferSizeMB be set? Does it depend on the RAM size of the machine? On the size of the data that needs to be indexed? Or on some other parameter? Could someone please suggest an approach for deciding the value of setRAMBufferSizeMB?
So, what we have about this parameter in Lucene javadoc:
Determines the amount of RAM that may be used for buffering added
documents and deletions before they are flushed to the Directory.
Generally for faster indexing performance it's best to flush by RAM
usage instead of document count and use as large a RAM buffer as you
can. When this is set, the writer will flush whenever buffered
documents and deletions use this much RAM.
The maximum RAM limit is inherently determined by the JVMs available
memory. Yet, an IndexWriter session can consume a significantly larger
amount of memory than the given RAM limit since this limit is just an
indicator when to flush memory resident documents to the Directory.
Flushes are likely to happen concurrently while other threads are adding
documents to the writer. For application stability the available
memory in the JVM should be significantly larger than the RAM buffer
used for indexing.
By default, Lucene uses 16 MB for this parameter (which suggests to me that you don't need a very large value to get good indexing speed). I would recommend tuning it by setting it to, say, 500 MB and checking how your system behaves. If you run into crashes, try a smaller value such as 200 MB, and so on, until your system is stable.
Yes, as stated in the javadoc, this parameter is bounded by the JVM heap, but if you are calling Lucene from Python, I think it can allocate memory without any explicit limit.
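For orientation, a minimal sketch of where the parameter is set, assuming a recent (5.x+) Lucene API and a hypothetical index directory:

import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;

public class IndexerSetup {
    public static void main(String[] args) throws Exception {
        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
        // Flush to the Directory whenever buffered documents and deletions
        // use roughly this much RAM; keep it well below the JVM heap (-Xmx).
        config.setRAMBufferSizeMB(256.0);

        try (FSDirectory dir = FSDirectory.open(Paths.get("/tmp/lucene-index"));
             IndexWriter writer = new IndexWriter(dir, config)) {
            // addDocument(...) calls go here; flushes happen by RAM usage.
        }
    }
}

Whatever value you pick, keep it comfortably below the JVM heap, as the javadoc quoted above warns.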

What happens at a low level when I call fseek()?

When fseek() is called in C - or, seek() is called on a file object in any modern language like Python or Go - what happens at a very low level?
What does the operating system or hard drive actually do?
What gets read?
What overhead is incurred?
How does block size affect this overhead?
Edit to add:
Given NTFS with a block size of 4KB, does seeking 4096 bytes incur less IO overhead than reading 4096 bytes?
Second Edit:
When in doubt, go empirical.
Using some naive Python code with a 1.5GB file:
Reading 4096 sequentially: 21.2
Seek 4096 (relative): 1.35
Seek 4096 (absolute): 0.75 (interesting)
Seek and read every third 4096 (relative): 21.3
Seek and read every third 4096 (absolute): 21.5
The times are averages, in seconds. The hardware is a nondescript PC with a SATA drive running Windows XP.
This was hugely disappointing. I have several GB of files that I have to read on a near continual basis. About 66% of the 4KB blocks in the files are uninteresting and I know their offset in advance.
Initially, I thought it might be a Big Win to rewrite the legacy code involved as it now does a sequential read 4096 bytes at a time through the files. Assuming Win32 Python is not broken in some fundamental way, incorporating seek has no advantage for non-random reads.
This depends heavily on current conditions. Generally, fseek() only changes the state of the stream: it either sets the current position or returns an error if the parameters are wrong. But fseek() also flushes the buffer, which may trigger a pending write operation. If the file is a UTF-8 file and translation is enabled, the ftell() called from fseek() needs to read that part of the file to calculate the offset correctly. If CRLF translation is enabled, that also incurs read operations. But in the case of a plain binary file with no pending write operation, fseek() just sets the position within the stream and doesn't need to go down to a lower level. For more details, see the source code of the CRT.
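For what it's worth, here is a rough Java analogue of the "seek and read every third 4 KB block" pattern from the question, with a hypothetical file path; repositioning a FileChannel only updates the file offset, so any time difference comes from the reads themselves:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class SeekVsRead {
    private static final int BLOCK = 4096;

    public static void main(String[] args) throws IOException {
        // Reads every third 4 KB block and skips the rest by adjusting the
        // channel position (effectively an lseek on the descriptor; no
        // stdio-style buffer flushing or translation is involved).
        try (FileChannel ch = FileChannel.open(Paths.get("/tmp/big.dat"),
                                               StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(BLOCK);
            long size = ch.size();
            for (long pos = 0; pos < size; pos += 3L * BLOCK) {
                ch.position(pos);          // cheap: just moves the file offset
                buf.clear();
                while (buf.hasRemaining() && ch.read(buf) != -1) {
                    // the read is what actually touches the disk / page cache
                }
                // process buf here
            }
        }
    }
}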

Keeping a file in the OS block buffer

I need to keep as much as I can of a large file in the operating system block cache, even though it's bigger than fits in RAM, while I'm continuously reading another very, very large file. At the moment, streaming reads from the second file evict large chunks of the important file from the system cache.
In a POSIX system like Linux or Solaris, try using posix_fadvise.
On the streaming file, do something like this:
posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
off_t current_pos = 0;
ssize_t bytes;
do {
    bytes = pread(fd, buffer, 64 * 1024, current_pos);  /* 64 KB per read */
    if (bytes > 0) {
        current_pos += bytes;
        /* drop pages we have already streamed past from the cache */
        posix_fadvise(fd, 0, current_pos, POSIX_FADV_DONTNEED);
    }
} while (bytes > 0);
And you can apply POSIX_FADV_WILLNEED to your other file, which should raise its memory priority.
Now, I know that Windows Vista and Server 2008 can also do nifty tricks with memory priorities. Probably older versions like XP can do more basic tricks as well. But I don't know the functions off the top of my head and don't have time to look them up.
Within Linux, you can mount a filesystem of type tmpfs, which uses available swap as backing storage if needed. You should be able to create a filesystem greater than your memory size and it will prioritize the contents of that filesystem in the system cache.
mount -t tmpfs none /mnt/point
See: http://lxr.linux.no/linux/Documentation/filesystems/tmpfs.txt
You may also benefit from the swappiness and drop_caches settings within /proc/sys/vm.
If you're using Windows, consider opening the file you're scanning through with the flag
FILE_FLAG_SEQUENTIAL_SCAN
You could also use
FILE_FLAG_NO_BUFFERING
for that file, but it imposes some restrictions on your read size and buffer alignment.
Some operating systems have ramdisks that you can use to set aside a segment of RAM for storage and then mount it as a file system.
What I don't understand, though, is why you want to keep the operating system from caching the file. Your full question doesn't really make sense to me.
Buy more RAM (it's relatively cheap!) or let the OS do its thing. I think you'll find that circumventing the OS is going to be more trouble than it's worth. The OS will cache as much of the file as needed, until your application or any other application needs memory.
I guess you could minimize the number of processes, but it's probably quicker to buy more memory.
mlock() and mlockall() respectively lock part or all of the calling process’s virtual address space into RAM, preventing that memory from being paged to the swap area.
(copied from the MLOCK(2) Linux man page)