LRU: how can the operating system know when a valid memory frame is used?

In virtual memory, the LRU algorithm swaps out the page frame that has been least recently used. But how can the OS know when a frame is used, since accessing a valid page won't raise any exception?
Are there use counters of some sort in the MMU, or do OSes make regular checks by invalidating pages without really swapping them out, just to gather statistics from the resulting page faults?
I suppose it might be possible to find this with Google, but I didn't find the right way to ask it. :-/
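For what it's worth, both mechanisms the question guesses at exist. Most MMUs set a per-page "referenced" (accessed) bit in the page table entry on every access, and the OS periodically sweeps and clears those bits; the classic clock (second-chance) algorithm approximates LRU this way. On MMUs without such a bit, kernels do mark resident pages invalid and count the resulting soft faults. Below is a minimal sketch of a clock sweep in C; the pte_t layout is hypothetical, not any real kernel's.

    /* Minimal sketch of the clock (second-chance) approximation of LRU.
     * The MMU sets 'referenced' on each access; the OS clears it as the
     * clock hand sweeps. A page whose bit is still clear on the next
     * visit has not been touched recently and becomes the victim.
     * Assumes at least one resident page exists. */
    #include <stdbool.h>
    #include <stddef.h>

    typedef struct {
        bool valid;       /* page is resident in RAM */
        bool referenced;  /* set by the MMU on access */
    } pte_t;

    /* Advance the clock hand until an unreferenced resident page is found. */
    size_t clock_pick_victim(pte_t *table, size_t npages, size_t *hand)
    {
        for (;;) {
            size_t idx = *hand;
            *hand = (*hand + 1) % npages;
            if (!table[idx].valid)
                continue;
            if (table[idx].referenced)
                table[idx].referenced = false;  /* give a second chance */
            else
                return idx;                     /* evict this one */
        }
    }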


ImageResizer - best practices for security

We are currently testing out the ImageResizer library, and one of our questions is: how do we protect the site from malicious attacks if someone programmatically sends thousands of resizing requests for images with arbitrary sizes, overloading the server's CPU/RAM and potentially running out of disk space due to the tremendous number of cached files?
Is there any way to whitelist certain dimensions? Or what is the best practice to avoid this scenario?
Thanks!
Stephen
Neither CPU nor RAM can generally be overloaded during a (D)DoS attack on ImageResizer. Memory allocation is contiguous, meaning an image cannot be processed unless around 15-30% of RAM remains free. Under the default pipeline, only 2 cores are used for image processing, so a typical server will not see CPU saturation either.
In general, there are far more effective ways to attack an ASP.NET website than through ImageResizer. Any database-heavy page is more likely to be a weak point, as its memory allocations are smaller and easier to saturate the server with.
Disk space starvation can be mitigated by enabling autoClean="true".
If you're a high-profile site with lots of determined ill-wishers, you can also consider the following:
Use request signing - only URLs generated by your server will be accepted.
Use the Presets plugin to white-list defined permitted command combinations.
Both of these reduce development agility and limit your options for responsive web design, so unless you have actually been attacked in the past, I don't suggest them.
In practice, (D)DoS attacks against dynamic imaging software are rarely useful at bringing down anything except, temporarily, uncached images, even when everything runs under the same application pool. Since frequently visited images tend to be cached, the actual effect is rather laughable.

Is it meaningful to monitor physical memory usage on AIX?

Given AIX's special memory-management algorithm, is it meaningful to monitor physical memory usage in order to find memory bottlenecks during performance tuning?
If not, what kind of KPI should I keep an eye on to determine whether or not we need to enlarge the RAM capacity?
Thanks
If a program requires more memory than is available as RAM, the OS will start swapping memory sections to disk as it sees fit. You'll need to monitor the output of vmstat and look for paging activity (on AIX, the pi and po columns report pages read from and written to paging space). I don't have access to an AIX machine right now to illustrate with an example, but I recall the man page is pretty good at explaining what data is represented there.
Also, this looks to be a good writeup about another AIX-specific systems monitoring tool for watching your system's overall memory, svmon:
http://www.aixhealthcheck.com/blog.php?id=255
To track the size of your individual application instance(s), there are several options, the most common being ps. Again, you'll have to check the man page for information on which options to use. There are several columns reporting memory size per process. You can compare those values to the overall memory available on your machine and, by tracking them over time, see whether your application only ever grows its memory use or whether it releases memory when it is done with a task.
Finally, there's quite a body of information from IBM on performance tuning for AIX, but I was never able to find a road-map guide to reading it. A lot of it assumes you know facts and features that aren't explained in the current doc set, so you have to go find an explanation, which often leads to searching for yet another layer of explanations. :^/
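If you'd rather collect these numbers programmatically than eyeball vmstat, AIX exposes them through libperfstat. A minimal sketch, assuming the perfstat_memory_total() call and the field names as I recall them from libperfstat.h (verify them on your system; compile with -lperfstat):

    /* Poll AIX memory and paging-space activity via libperfstat.
     * Sustained growth in pgspins/pgspouts (paging-space page ins/outs)
     * is the classic sign that RAM, not AIX's file caching, is the
     * bottleneck. Counts are in 4 KB pages. */
    #include <libperfstat.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        perfstat_memory_total_t m;

        for (;;) {
            /* NULL name + count 1 requests the global totals */
            if (perfstat_memory_total(NULL, &m, sizeof m, 1) < 1) {
                perror("perfstat_memory_total");
                return 1;
            }
            printf("free %llu MB, paging-space ins %llu outs %llu\n",
                   (unsigned long long)m.real_free * 4 / 1024,
                   (unsigned long long)m.pgspins,
                   (unsigned long long)m.pgspouts);
            sleep(5);
        }
    }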
IHTH.

Retrieve video RAM usage on iPhone

I have seen this article about programmatically retrieving an iPhone app's memory usage: programmatically-retrieve-memory-usage-on-iphone. It's great!
In my project I want to retrieve the amount of free VRAM, because my app loads many textures, and I must preload these into video RAM for fast rendering.
But in vm_statistics I don't see these properties: vm_statistics man page
Thanks a lot for your help.
As you've seen so far, getting hard numbers for GL texture memory usage is quite difficult. It's complicated further by the fact that CoreAnimation will also use GL Texture memory without "consulting" you, including from processes other than yours.
Practically speaking, I suggest that you use the VM Tracker instrument in Instruments to watch changes in the VM pages your process maps under the IOKit tag. It's a bit crude, but it's the best approach I've found. In my experience, this process is largely guess and check.
You asked specifically for a way to determine the amount of free VRAM, but even if you could get that info, it's not really likely to be helpful. Even if your app is totally OpenGL and uses no UIViews or CoreAnimation layers, other processes, most importantly those more privileged than yours, can consume that memory at any time, either explicitly or implicitly through CoreAnimation. It's also probably safe to assume that if your app prevents those more-privileged apps from getting the texture memory they need, your process will be killed.
Put differently, even if you could ascertain the instantaneous state of the GL texture memory, you probably couldn't count on being the only consumer of that resource, so it's pretty useless.
At the end of the day, you should spend your effort designing your app to be a good citizen in terms of GL memory and manage (read: minimize) your own consumption of texture memory. iOS devices are not old-school game consoles -- you are not the only thing running -- so you need to be mindful and tolerant of that fact, lest your app be one of those where everyone has to reboot their phone every few minutes in order to use it.
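Along those lines, one practical option is to account for your own texture consumption instead of asking the system what's free. A minimal sketch, assuming an OpenGL ES 2.0 context; the wrapper name and the bytes-per-pixel bookkeeping are my own invention, and the total ignores driver padding and mipmap overhead, so treat it as a lower bound:

    /* Track an app's own GL texture footprint by wrapping the upload
     * call. iOS-only header; the running total is approximate. */
    #include <OpenGLES/ES2/gl.h>
    #include <stddef.h>

    static size_t g_texture_bytes;  /* running estimate of our texture use */

    /* Wrapper around glTexImage2D that records an approximate size.
     * bytes_per_pixel must match the format/type pair passed in. */
    void tracked_tex_image_2d(GLenum target, GLint level, GLint internalformat,
                              GLsizei width, GLsizei height, GLint border,
                              GLenum format, GLenum type, const void *pixels,
                              size_t bytes_per_pixel)
    {
        glTexImage2D(target, level, internalformat, width, height,
                     border, format, type, pixels);
        g_texture_bytes += (size_t)width * (size_t)height * bytes_per_pixel;
    }

    size_t estimated_texture_bytes(void) { return g_texture_bytes; }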

Determining failing sectors on portable flash memory

I'm trying to write a program that will detect signs of failure for portable flash memory devices (thumb drives, etc).
I have seen tools in the past that are able to detect failing sectors and other kinds of trouble on conventional mechanical hard drives, but I fear that flash memory does not have the same kind of predictable low-level access to the hardware due to the internal workings of the storage. Things like wear-leveling and other block-remapping techniques (to skip over 'dead' sectors?) lead me to believe that determining if a flash drive is failing will be difficult at best, if not impossible (short of having constant read failures and device unmounts).
Flash drives at their end-of-life should be easy to detect (constant CRC discrepancies during reads and all-out failure). But what about drives that might be failing early? Are there any tell-tale signs like slower throughput speeds that might indicate a flash drive is going to fail much sooner than normal?
Along the lines of detecting potentially bad blocks, I had considered attempting random reads/writes to a file close to, or exactly, the size of the entire volume, but even then, is it possible that the drive might report a size under its maximum capacity to account for 'dead' blocks?
In short, is there any way to circumvent or at least detect (algorithmically or otherwise) the use of block-remapping or other life extension techniques for flash memory?
Let me end this question by expressing my uncertainty as to whether or not this belongs on serverfault.com. This is definitely a hardware-related question, but I also desire a software solution, preferably one that I can program myself.
If this question is misplaced, I'll be happy to migrate it to serverfault, but I do need a programming solution. Please let me know if you need clarification :)
Thanks!
It would be interesting to see whether badblocks can help in this case.
AFAIK, wear leveling happens at the firmware level. The host does not learn about a bad block until the firmware detects one, and there is no known way to find these bad blocks beforehand. BTW, strictly speaking it is not bad sectors but bad blocks: once a sector goes bad, the whole block is marked as bad.
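As a rough badblocks-style experiment, you can do the whole-volume write/verify pass the question describes from user space. A minimal sketch, assuming a raw device node (the path is a placeholder) and POSIX O_SYNC semantics; it destroys the drive's contents, and because of wear leveling a mismatch only tells you the drive is failing somewhere, not which physical block is bad:

    /* Write a pattern to every block of a device, read it back, and
     * report mismatches. Run only against a device whose contents you
     * can afford to lose. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096

    int main(void)
    {
        const char *dev = "/dev/sdX";        /* placeholder device node */
        unsigned char wbuf[BLOCK_SIZE], rbuf[BLOCK_SIZE];
        long long block = 0, bad = 0;

        int fd = open(dev, O_RDWR | O_SYNC); /* O_SYNC to defeat caching */
        if (fd < 0) { perror("open"); return 1; }

        memset(wbuf, 0xA5, sizeof wbuf);     /* simple test pattern */

        for (;; block++) {
            off_t off = (off_t)block * BLOCK_SIZE;
            if (pwrite(fd, wbuf, BLOCK_SIZE, off) != BLOCK_SIZE)
                break;                        /* end of device or write error */
            if (pread(fd, rbuf, BLOCK_SIZE, off) != BLOCK_SIZE ||
                memcmp(wbuf, rbuf, BLOCK_SIZE) != 0) {
                printf("mismatch at block %lld\n", block);
                bad++;
            }
        }
        printf("%lld blocks scanned, %lld bad\n", block, bad);
        close(fd);
        return bad != 0;
    }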

How do you disable the processor cache on a PowerPC processor?

In our embedded system (using a PowerPC processor), we want to disable the processor cache. What steps do we need to take?
To clarify a bit, the application in question must have as constant a speed of execution as we can make it.
Variability in executing the same code path is not acceptable. This is the reason to turn off the cache.
I'm kind of late to the question, and it's been a while since I did all the low-level processor init code on PPCs, but I seem to remember the cache and MMU being pretty tightly coupled (one had to be enabled to enable the other), and I think you could define the cacheable attribute in the MMU page tables.
So my point is this: if there's a certain subset of code that must run in deterministic time, maybe you locate that code (via a linker command file) in a region of memory that is defined as non-cacheable in the page tables? That way all the code that can/should benefit from the cache does, and the (hopefully) subset of code that shouldn't, doesn't.
I'd handle it this way anyway, so that later, if you want to enable caching for part of the system, you just need to flip a few bits in the MMU page tables, instead of (re-)writing the init code to set up all the page tables & caching.
From the e600 reference manual:
The HID0 special-purpose register contains several bits that invalidate, disable, and lock the instruction and data caches.
Set HID0[DCE] = 0 to disable the data cache, and HID0[ICE] = 0 to disable the instruction cache.
Note that at power-up, both caches are disabled.
You will need to write this in assembly code.
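A minimal sketch of that HID0 sequence using GCC inline assembly; the SPR number and bit masks are from e600-class documentation as I recall them, so verify them against the manual for your exact core, and note that the data cache should be flushed before being disabled (omitted here):

    /* Disable the L1 instruction and data caches by clearing
     * HID0[ICE] and HID0[DCE]. Bit masks per e600-class cores;
     * verify against your core's reference manual. */
    #define SPR_HID0  1008         /* HID0 special-purpose register number */
    #define HID0_ICE  0x00008000u  /* instruction cache enable */
    #define HID0_DCE  0x00004000u  /* data cache enable */

    static inline unsigned long mfhid0(void)
    {
        unsigned long v;
        __asm__ volatile("mfspr %0, %1" : "=r"(v) : "i"(SPR_HID0));
        return v;
    }

    static inline void mthid0(unsigned long v)
    {
        /* sync before, isync after, per the usual HID0 update recipe */
        __asm__ volatile("sync; mtspr %1, %0; isync" : : "r"(v), "i"(SPR_HID0));
    }

    void disable_l1_caches(void)
    {
        /* Caution: flush the data cache to memory before this call,
         * or dirty lines will be lost. */
        mthid0(mfhid0() & ~(HID0_ICE | HID0_DCE));
    }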
Perhaps you don't want to globally disable cache, you only want to disable it for a particular address range?
On some processors you can configure TLB (translation lookaside buffer) entries for address ranges such that each range could have caching enabled or disabled. This way you can disable caching for memory mapped I/O, and still leave caching on for the main block of RAM.
The only PowerPC I've done this on was a PowerPC 440EP (from IBM, then AMCC), so I don't know if all PowerPCs work the same way.
What kind of PPC core is it? Cache control is very different between cores from different vendors... also, disabling the cache is generally considered a really bad thing to do to the machine. Performance becomes so crawlingly slow that you would do as well with an old 8-bit processor (exaggerating a bit). Some ARM variants have TCMs, tightly-coupled memories, that work instead of caches, but I am not aware of any PPC variant with that facility.
Maybe a better solution is to keep Level 1 caches active, and use the on-chip L2 caches as statically mapped RAM instead? That is common on modern PowerQUICC devices, at least.
Turning off the cache will do you no good at all. Your execution speed will drop by an order of magnitude. You would never ship a system like this, so its performance under these conditions is of no interest.
To achieve a steady execution speed, consider one of these approaches:
1) Lock some or all of the cache. All current PowerPC chips from Freescale, IBM, and AMCC offer this feature.
2) If it's a Freescale chip with L2 cache, consider mapping part of that cache as on-chip memory.