How do I avoid "fatal: Out of memory, please increase size of physical memory." in gem5? - gem5

It looks like I can edit a config file to increase the physical memory being simulated. Which config file is this?

Editing Options.py and changing the default "--mem-size" value there will be enough; the configuration scripts are Python, so no recompilation is needed.
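If you use the standard example scripts, you can usually avoid editing anything and set the size per run instead. This invocation is only a sketch; the binary path and the workload are placeholders that depend on your build and ISA:

```
build/X86/gem5.opt configs/example/se.py --mem-size=8GB -c ./my_benchmark
```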

Related

Ignite start with "Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)"

My Ignite server has 128 GB of RAM, with -Xmx 10G and 70 GB off-heap. When it starts, the log shows:
[11:30:27,376][INFO][main][IgniteKernal] Performance suggestions for grid (fix if possible)
[11:30:27,377][INFO][main][IgniteKernal] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[11:30:27,377][INFO][main][IgniteKernal] ^-- Set max direct memory size if getting 'OOME: Direct buffer memory' (add '-XX:MaxDirectMemorySize=<size>[g|G|m|M|k|K]' to JVM options)
I have searched the web and found this article saying it is not necessary to configure MaxDirectMemorySize: http://apache-ignite-users.70518.x6.nabble.com/Do-we-require-to-set-MaxDirectMemorySize-JVM-parameter-td21200.html
Other articles say the default MaxDirectMemorySize will be the same as -Xmx. So what should I configure for this option? I am confused: if it is not useful, why does Ignite print that suggestion in the log?
This is not an indication of failure; you can just ignore this suggestion unless your node or cluster is failing due to OOM in direct buffer memory. The option gives you the ability to control how much direct memory can be allocated; otherwise the default direct memory policy of the JVM you are using applies. Ignite only checks whether it is set in the JVM options.
Do you experience any issues with OOME in direct buffer memory in your app?
Regards.
Direct buffer memory is used by some file operations (like read and write) when the program calls functions from the NIO library.
If its value is not specified but -Xmx is set, it defaults to the -Xmx value.
The default is 64 MB (if you set neither it nor -Xmx).
I suggest setting MaxDirectMemorySize to 64 or 256 MB.
With bigger values you will perhaps not see errors, but I doubt you get better performance.
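If you do set it, it is just one more JVM option on the node's startup line. The 256m below is an illustrative value in the range suggested above, not a tuned recommendation for any particular workload:

```
# JVM options for the Ignite node
-Xmx10g
-XX:MaxDirectMemorySize=256m
```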

Why does Nano with error.log open use 40% of RAM?

Edit: I should clarify, 40% of 2 GB of RAM.
I just happened to catch this on my server: I had used nano earlier to open an error log and it was still open, I don't know for how long. After killing that task, my RAM usage dropped from just over 1 GB to 250 MB.
I remember coming across this before somewhere, and I want to know how to prevent/avoid it in the future. I like nano for its simplicity, but I guess I should be sure to kill the process when I'm done.
I will have to look into remote status updates or something on the server's "livelihood" haha.
Maybe because error.log is a big file (you don't say how big it is).
Did you try using a pager like less on it?
less error.log
You probably don't want to edit (i.e. have the opportunity to change) that error.log file; you just want to look inside it, with a terminal pager like less, more, or most. A pager uses less memory than an editor because it does not enable you to change the file.
BTW, consider tuning your logrotate(8).
Notice that nano, like all editors, needs to keep the content of the edited file in complex data structures, organized so that modifications are efficient. This explains why it takes a lot of memory. Since nano is free software (and so is less), you could study its source code for more details.

How to configure namespace to keep partial data as cache in ram and the remaining in hard disk?

I am trying to write some data to a namespace in Aerospike, but I don't have enough RAM for the whole data set.
How can I configure Aerospike so that a portion of the data is kept in RAM as a cache and the rest is kept on the hard drive?
Can I reduce the number of copies of the data that Aerospike keeps in RAM?
I understand this can be done by modifying the contents of the aerospike.conf file, but how exactly do I achieve it?
You should check the namespace storage configuration page in the Aerospike documentation:
http://www.aerospike.com/docs/operations/configure/namespace/storage/
How can I configure Aerospike so that a portion of the data is kept in RAM as a cache and the rest is kept on the hard drive?
The post-write-queue parameter defines the amount of RAM used to keep recently written records in memory. As long as a record is still in the post-write-queue, Aerospike will read it directly from RAM rather than from disk. This lets you configure a cache for a namespace that has storage-engine device and data-in-memory false. Note, however, that the eviction policy is least recently updated (or created), not least recently used (read or written).
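A namespace stanza along those lines might look like the fragment below in aerospike.conf; the namespace name, device path, and sizes are placeholders to adapt, and post-write-queue is counted in write blocks per device:

```
namespace mydata {
    memory-size 4G                # primary index stays in RAM
    storage-engine device {
        device /dev/sdb           # placeholder device
        data-in-memory false      # records live on disk
        write-block-size 128K
        post-write-queue 512      # recently written blocks cached in RAM
    }
}
```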

Max file size for File.ReadAllLines

I need to read and process a text file. My processing would be easier if I could use the File.ReadAllLines method, but I'm not sure what the maximum size of a file is that can be read with this method without resorting to reading in chunks.
I understand that the limit depends on the machine's memory. But are there still any recommendations for an average machine?
On a 32-bit operating system, you'll get at most a contiguous chunk of memory of around 550 megabytes, allowing you to load a file of about half that size. That goes downhill quickly after your program has been running for a while and the virtual memory address space gets fragmented; 100 megabytes is about all you can hope for then.
This is not an issue on a 64-bit operating system.
Since reading a text file one line at a time is just as fast as reading all lines, this should never be a real problem.
I've done stuff like this with 1-2GB before, albeit in Python. I do not think .NET would have a problem, though. But I would only do this for one-off processing.
If you are doing this on a regular basis, you might want to go through the file line by line.
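The line-by-line approach, sketched here in C for illustration (in .NET the equivalent would be enumerating File.ReadLines); count_lines is a hypothetical helper, and memory use stays bounded by the longest line rather than the file size:

```c
#include <stdio.h>
#include <stdlib.h>

/* Stream a text file one line at a time; memory use is bounded by the
   longest line, not by the file size. Returns the line count, -1 on error. */
long count_lines(const char *path) {
    FILE *f = fopen(path, "r");
    if (!f)
        return -1;
    char *line = NULL;
    size_t cap = 0;
    long n = 0;
    while (getline(&line, &cap, f) != -1)
        n++;              /* process each line here instead of storing it */
    free(line);           /* getline reuses one buffer; freed exactly once */
    fclose(f);
    return n;
}
```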
It's bad design unless you know the file sizes relative to the memory that will be available to the running app.
A better solution would be to consider memory-mapped files; they use the file itself as their paging storage.

Keeping a file in the OS block buffer

I need to keep as much as I can of a large file in the operating system's block cache, even though it's bigger than I can fit in RAM, while I'm continuously reading another, very large file. At the moment, streaming reads from the other file evict large chunks of the important file from the system cache.
In a POSIX system like Linux or Solaris, try using posix_fadvise.
On the streaming file, do something like this:
posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
char buffer[64 * 1024];
off_t current_pos = 0;
ssize_t bytes;
do {
    bytes = pread(fd, buffer, sizeof buffer, current_pos);
    if (bytes > 0) {
        current_pos += bytes;
        /* drop the pages we have already consumed from the cache */
        posix_fadvise(fd, 0, current_pos, POSIX_FADV_DONTNEED);
    }
} while (bytes > 0);
And you can apply POSIX_FADV_WILLNEED to your other file, which should raise its memory priority.
Now, I know that Windows Vista and Server 2008 can also do nifty tricks with memory priorities. Probably older versions like XP can do more basic tricks as well. But I don't know the functions off the top of my head and don't have time to look them up.
Within linux, you can mount a filesystem as the type tmpfs, which uses available swap memory as backing if needed. You should be able to create a filesystem greater than your memory size and it will prioritize the contents of that filesystem in the system cache.
mount -t tmpfs none /mnt/point
See: http://lxr.linux.no/linux/Documentation/filesystems/tmpfs.txt
You may also benefit from tuning swappiness and drop_caches under /proc/sys/vm.
If you're using Windows, consider opening the file you're scanning through with the flag
FILE_FLAG_SEQUENTIAL_SCAN
You could also use
FILE_FLAG_NO_BUFFERING
for that file, but it imposes some restrictions on your read size and buffer alignment.
Some operating systems have ramdisks that you can use to set aside a segment of ram for storage and then mounting it as a file system.
What I don't understand, though, is why you want to keep the operating system from caching the file. Your full question doesn't really make sense to me.
Buy more RAM (it's relatively cheap!) or let the OS do its thing. I think you'll find that circumventing the OS is more trouble than it's worth. The OS will cache as much of the file as it can, until your application or any other one needs the memory.
I guess you could minimize the number of processes, but it's probably quicker to buy more memory.
mlock() and mlockall() respectively lock part or all of the calling process’s virtual address space into RAM, preventing that memory from being paged to the swap area.
(copied from the MLOCK(2) Linux man page)
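A minimal sketch of that in C (lock_buffer and unlock_buffer are hypothetical helpers; mlock can fail with EPERM or ENOMEM when RLIMIT_MEMLOCK is too low, so the return value must be checked):

```c
#include <stdio.h>
#include <sys/mman.h>

/* Pin `len` bytes at `buf` into RAM so those pages cannot be swapped out.
   Returns 0 on success, -1 on failure (mlock also faults the pages in). */
int lock_buffer(void *buf, size_t len) {
    if (mlock(buf, len) != 0) {
        perror("mlock");
        return -1;
    }
    return 0;
}

/* Release the lock once the data no longer has to stay resident. */
void unlock_buffer(void *buf, size_t len) {
    munlock(buf, len);
}
```

Note that locked memory counts against RLIMIT_MEMLOCK (see getrlimit(2)), so pinning most of a large file this way generally requires raising that limit or extra privileges.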