ImageResizer - Best practice for security

We are currently testing out the ImageResizer library, and one of our questions is: how do we prevent malicious attacks on the site if someone programmatically sends thousands of resize requests for images at arbitrary sizes, overloading the server's CPU/RAM and potentially exhausting disk space with an enormous number of cached files?
Is there any way to whitelist certain dimensions? Or what is the best practice to avoid this scenario?
Thanks!
Stephen

Neither CPU nor RAM can generally be overloaded during a (D)DOS attack against ImageResizer. Memory allocation is contiguous, meaning an image cannot be processed unless roughly 15-30% of RAM is still free. Under the default pipeline, only 2 cores are used for image processing, so a typical server will not see CPU saturation either.
In general, there are far more effective ways to attack an ASP.NET website than through ImageResizer. Any database-heavy page is more likely to be a weak point, as its memory allocations are smaller and it is easier to saturate the server with them.
Disk space starvation can be mitigated by enabling autoClean="true".
If you're a high-profile site with lots of determined ill-wishers, you can also consider the following:
Use request signing - only URLs generated by your server will be accepted.
Use the Presets plugin to whitelist a defined set of permitted command combinations (see the config sketch below).
Both of these reduce development agility and limit your options for responsive web design, so unless you have actually been attacked in the past, I don't suggest them.
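For reference, here is a rough sketch of what that could look like inside the <resizer> section of Web.config. The element and attribute names are recalled from the v3 DiskCache and Presets plugin documentation, so treat them as assumptions and verify against your ImageResizer version:

<diskCache autoClean="true" />
<!-- onlyAllowPresets rejects any resize querystring that is not a named preset -->
<presets onlyAllowPresets="true">
  <preset name="thumb" settings="width=100;height=100;mode=crop" />
  <preset name="hero" settings="width=1200" />
</presets>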
In practice, (D)DOS attacks against dynamic imaging software are rarely able to bring down anything except uncached images, and only temporarily, even when running under the same application pool. Since frequently visited images tend to be cached, the actual effect is rather laughable.


Is it recommended to use SPI flash to run code instead of internal flash, due to the memory limitations of the internal flash?

We are using an LPC546xx-family microcontroller in our project; currently, at the initial stage, we are finalizing the software and hardware requirements. The basic firmware (which contains the RTOS, third-party stacks, libraries, etc.) is currently 480 KB. Once the full application is developed, the size will exceed the internal flash size (512 KB), and in addition we need storage that can hold a firmware update image separately.
So we plan to use a 4 MB/8 MB SPI flash (S25LP064A-JBLE, http://www.issi.com/WW/pdf/IS25LP032-064-128.pdf, serial flash memory) to boot and run the firmware.
Is it recommended to run code from SPI flash? How can I map external flash memory directly into the CPU's memory space? Can anyone give an example that contains this memory mapping (linker script, etc.) or a demo application in which the LPC546xx uses SPI flash?
Generally speaking it's not recommended; or, put differently: the closer the code is to the CPU, the better. However, both the S25LP064A and the LPC546xx support XIP (execute in place), so it is viable.
This is not a trivial issue, as many aspects come into play; i.e., the issue is best avoided and should really have been ironed out in the planning stage. Embedded systems are more about compromise than anything else, and making the right/better choices takes skill and experience.
Same question with replies on the NXP forum: link
512 KB of NVRAM is huge. There is almost certainly room for optimisation, even if third-party libraries are used.
On a related note, this discussion concerning XIP should give valuable insight: link.
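To give a concrete flavour of the XIP route, here is a minimal sketch of pinning one routine into the memory-mapped external flash using GCC section attributes. The SPIFI window address, region lengths and section name are assumptions to verify against the LPC546xx user manual and your SDK, and the sketch presumes the boot ROM or SDK driver has already put the SPIFI interface into memory-mapped mode before this code runs:

/* Hypothetical GNU linker-script fragment to go with the C code below
 * (origins and lengths are placeholders; check the LPC546xx memory map):
 *
 *   MEMORY {
 *     FLASH (rx) : ORIGIN = 0x00000000, LENGTH = 512K
 *     SPIFI (rx) : ORIGIN = 0x10000000, LENGTH = 8M
 *   }
 *   SECTIONS {
 *     .text_spifi : { *(.text_spifi*) } > SPIFI
 *   }
 */
#include <stdint.h>

/* Place a non-time-critical routine in the external-flash section; it then
 * executes in place (XIP) from the SPIFI window instead of internal flash. */
__attribute__((section(".text_spifi"), noinline))
uint32_t table_checksum(const uint8_t *data, uint32_t len)
{
    uint32_t sum = 0;
    for (uint32_t i = 0; i < len; ++i) {
        sum += data[i];
    }
    return sum;
}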
I would strongly encourage the use of a file system if you are not using one already, and external storage is much better suited to that; the further from the computational unit, the more relevant it becomes. That is not XIP, and the penalty is a copy-to-RAM whichever way you do it, i.e. performance will be slower. But in my experience, the need for speed has often not been thoroughly considered and has been at least partially, if not greatly, overestimated.
Regarding your mentioning of RTOS and FW-upgrade:
Unless it's a poor RTOS, there is file-system awareness built in. Especially for FW upgrading (note: you'll need room for 3 images, factory reset included), unless it is already supported by the SoC vendor through some other means (OTA), a file system will make life much easier and less risky. If there is no FS awareness, it can be added.
A FW upgrade requires a lot of extra storage, and more so the simpler the scheme. Simpler is, however, also safer, which matters hugely for FW upgrades in particular. In the simplest case (a flat binary image), you'll need at least twice the amount of memory you're already consuming; a rough sketch of such a dual-slot scheme follows below.
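To illustrate the point, here is a minimal, hypothetical dual-slot sketch; the header layout, magic value and the crc32_calc() helper are made up for the example and not taken from any particular bootloader:

#include <stdint.h>

/* Hypothetical image header stored at the start of each firmware slot. */
typedef struct {
    uint32_t magic;     /* constant marker, e.g. 0x46574D31 ("FWM1") */
    uint32_t version;   /* monotonically increasing build number     */
    uint32_t length;    /* image length in bytes                     */
    uint32_t crc32;     /* CRC over the image body                   */
} fw_header_t;

/* Assumed to exist elsewhere (hardware CRC unit or software table). */
extern uint32_t crc32_calc(const uint8_t *data, uint32_t len);

/* Return 1 if the slot holds a valid image. */
static int slot_valid(const fw_header_t *hdr, const uint8_t *body)
{
    return hdr->magic == 0x46574D31u &&
           crc32_calc(body, hdr->length) == hdr->crc32;
}

/* Boot policy: prefer the newest valid slot, fall back to the other one. */
static const fw_header_t *pick_boot_slot(const fw_header_t *a, const uint8_t *a_body,
                                         const fw_header_t *b, const uint8_t *b_body)
{
    int a_ok = slot_valid(a, a_body);
    int b_ok = slot_valid(b, b_body);
    if (a_ok && b_ok) return (a->version >= b->version) ? a : b;
    if (a_ok) return a;
    if (b_ok) return b;
    return 0; /* nothing bootable: stay in the bootloader / factory image */
}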
All in all: I think the direction you're going in is viable and, depending on the actual situation, perhaps your only choice.

When to turn off Transparent Huge Pages for redis

According to the Redis docs, it's advisable to disable Transparent Huge Pages (THP).
Would the guidance be the same if the machine were shared between the Redis server and the application?
Moreover, for other technologies, I've also read guidance that THP should be disabled for all production environments when setting up the server. Is this kind of pre-emptiveness applicable to Redis as well, or must one first strictly monitor latency issues before deciding to turn off THP?
Turn it off. The problem lies in how THP shifts memory around while trying to keep or create contiguous pages. Some applications can tolerate this; most databases cannot, and it causes intermittent performance problems, some pretty bad. This is not unique to Redis by any means.
For your application, especially if it is Java, set up real HugePages and leave the transparent variety out of it. If you do that, just make sure you allocate memory correctly for the app and Redis. Though I have to say, I probably would not recommend running both the app and Redis on the same instance/server/VM.
Turning off transparent hugepages is a bad idea, and redis no longer recommends it.
What you should do instead is make sure transparent_hugepage is not set to always. (This is what recent versions of redis check for.) You can check the current value of the setting with:
$ cat /sys/kernel/mm/transparent_hugepage/enabled
And correct it like so:
# echo madvise >/sys/kernel/mm/transparent_hugepage/enabled
No action is likely to be necessary, though, since madvise is typically the default setting in recent Linux distros.
Some background:
transparent_hugepage = always: can force hugepages onto applications unless they opt out with madvise. This has a lot of problems and is rarely enabled.
transparent_hugepage = never: does not fulfill allocations with hugepages, even if the application requests them with madvise.
transparent_hugepage = madvise: allows applications to opt in to hugepages. This is normally a good idea, because hugepages can improve performance in some applications, but this setting doesn't force them onto applications that, like Redis, don't opt in.
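For completeness, this is roughly what opting in looks like from the application side under the madvise setting; a minimal sketch, with error handling trimmed to the essentials:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64 * 1024 * 1024;          /* 64 MiB working buffer */

    /* Anonymous mapping; under THP=madvise it starts out on 4 KiB pages. */
    void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    /* Explicit opt-in: ask the kernel to back this range with huge pages. */
    if (madvise(buf, len, MADV_HUGEPAGE) != 0)
        perror("madvise(MADV_HUGEPAGE)");

    /* ... use buf ... */
    munmap(buf, len);
    return 0;
}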
It is rather annoying that searching for "transparent huge pages" yields top results about how to disable them because Redis and some other databases cannot handle transparent huge pages without performance degradation.
These applications should do either of the following:
Use the prctl(PR_SET_THP_DISABLE, ...) call to opt out of using transparent huge pages.
Be started by a script which makes this call for them prior to fork/exec of the database process. PR_SET_THP_DISABLE gets inherited by child processes/threads precisely for this scenario, where an existing application cannot be modified.
prctl(PR_SET_THP_DISABLE, ...) has been available since Linux 3.15, circa 2014, so there is little excuse for those databases not to mention this solution, rather than giving their users the poor/panic advice to disable transparent huge pages for the entire system.
3 years after this question was asked, Redis got a disable-thp config option to make the prctl(PR_SET_THP_DISABLE, ...) call on its own, by default.
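For reference, the opt-out itself is a one-liner. A minimal sketch of a wrapper that disables THP for itself and for anything it goes on to exec (the PR_SET_THP_DISABLE fallback value is the one I recall from linux/prctl.h, so double-check it):

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SET_THP_DISABLE
#define PR_SET_THP_DISABLE 41   /* available since Linux 3.15 */
#endif

int main(void)
{
    /* Disable THP for this process and everything it forks/execs. */
    if (prctl(PR_SET_THP_DISABLE, 1, 0, 0, 0) != 0)
        perror("prctl(PR_SET_THP_DISABLE)");

    /* exec the real workload here, e.g. execv("/usr/bin/redis-server", ...) */
    return 0;
}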
My production memory-intensive processes run 5-15% faster with /sys/kernel/mm/transparent_hugepage/enabled set to always. Many popular desktop applications benefit immensely from always-on transparent huge pages.
This is why I cannot appreciate those search results for "transparent huge pages" being spammed with the Redis advice to disable them. That's panic advice from Redis, not best practice.
The overhead THP imposes occurs only during memory allocation, because of defragmentation costs.
If your Redis instance has a (near-)constant memory footprint, you can only benefit from THP. The same applies to Java or any other long-lived service that does its own memory management: pre-allocate memory once and benefit.
Why play such echo games when there is a kernel parameter you can boot with?
transparent_hugepage=never

High CPU with ImageResizer DiskCache plugin

We are noticing occasional periods of high CPU on a web server that happens to use ImageResizer. Here are the surprising results of a trace performed with NewRelic's thread profiler during such a spike:
It would appear that the cleanup routine associated with ImageResizer's DiskCache plugin is responsible for a significant percentage of the high CPU consumption associated with this application. We have autoClean on, but otherwise we're configured to use the defaults, which I understand are optimal for most typical situations:
<diskCache autoClean="true" />
Armed with this information, is there anything I can do to relieve the CPU spikes? I'm open to disabling autoClean and setting up a simple nightly cleanup routine, but my understanding is that this plugin is built to be smart about how it uses resources. Has anyone experienced this and had any luck simply changing the default configuration?
This is an ASP.NET MVC application running on Windows Server 2008 R2 with ImageResizer.Plugins.DiskCache 3.4.3.
Sampling, or why the profiling is unhelpful
New Relic's thread profiler uses a technique called sampling - it does not instrument the calls - and therefore cannot know if CPU usage is actually occurring.
Looking at the provided screenshot, we can see that the backtrace of the cleanup thread (there is only ever one) is frequently found at the WaitHandle.WaitAny and WaitHandle.WaitOne calls. These methods are low-level synchronization constructs that do not spin or consume CPU resources, but rather efficiently return CPU time back to other threads, and resume on a signal.
Correct profilers should be able to detect idle or waiting threads and eliminate them from their statistical analysis. Because New Relic's profiler failed to do that, there is no useful way to interpret the data it's giving you.
If you have more than 7,000 files in /imagecache, here is one way to improve performance
By default, in V3, DiskCache uses 32 subfolders with 400 items per folder (1000 hard limit). Due to imperfect hash distribution, this means that you may start seeing cleanup occur at as few as 7,000 images, and you will start thrashing the disk at ~12,000 active cache files.
This is explained in the DiskCache documentation; see the subfolders section.
I would suggest setting subfolders="8192" if you have a larger volume of images. A higher subfolder count increases overhead slightly, but also increases scalability.
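In other words, something along these lines, combining it with the autoClean setting already shown above (double-check the attribute names against the DiskCache docs for your version):

<diskCache autoClean="true" subfolders="8192" />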

Challenges in using flat memory model

The flat (linear) memory model provides maximum execution speed, occupies minimal CPU real estate, and gives direct access to memory without any segmentation/paging. It seems that the flat memory model is ideal for small real-time applications or single-threaded real-time applications.
However, is it possible to run a real-time application that is multi-threaded/multi-tasking, together with requirements for extensive resource allocation/protection, under a flat memory model?
Thanks
I don't think the memory model has much to do with it here, except for the (RT)OS itself, which is what you use to get multi-threading/multi-tasking done.
Paging or segmentation, if provided, is useful to the OS primarily for implementing memory protection features. Only this way can the OS protect itself and the running user-mode tasks against improperly written code in other tasks that would otherwise accidentally write to memory outside its intended domain. (You can't get memory protection without some kind of paging or segmentation, since you can't guard every single memory access.)
In 32-bit AVR processors there is even a distinction between the memory management unit (MMU) and the memory protection unit (MPU). The former is the more complex unit, supporting the kinds of paging features found in modern PC processors (for example, even making it possible to realize virtual memory), while the latter is a simpler subset that only gives you tools for realizing memory protection (for example by the OS, to protect itself and tasks against each other) and has no remapping capability (a given address always accesses the same cell of memory), unlike the MMU. (Why the distinction? Because some cheaper AVR32s, where that is sufficient, only have an MPU.)
So with a simple flat memory model, the important thing you won't get is the protection features. If you can get by without those, it should go just fine.
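To make the "protection without remapping" idea concrete, here is a rough sketch of programming a single MPU region on an ARMv7-M core, another flat-address-space case with an MPU. The register addresses and bit layouts are quoted from memory of the ARMv7-M architecture manual, so treat this as illustrative rather than reference code:

#include <stdint.h>

/* ARMv7-M MPU registers in the System Control Space. */
#define MPU_CTRL  (*(volatile uint32_t *)0xE000ED94u)
#define MPU_RNR   (*(volatile uint32_t *)0xE000ED98u)
#define MPU_RBAR  (*(volatile uint32_t *)0xE000ED9Cu)
#define MPU_RASR  (*(volatile uint32_t *)0xE000EDA0u)

/* Make a 32 KiB region at 0x20000000 read-only for everyone, e.g. to keep
 * tasks from scribbling over a shared table. No address translation occurs:
 * the region still appears at the same address, it is only access-checked. */
static void protect_shared_table(void)
{
    MPU_RNR  = 0u;                       /* select region 0                  */
    MPU_RBAR = 0x20000000u;              /* region base address              */
    MPU_RASR = (0x6u << 24)              /* AP: read-only, priv + unpriv     */
             | (14u << 1)                /* SIZE: 2^(14+1) = 32 KiB          */
             | 1u;                       /* region enable                    */
    MPU_CTRL = (1u << 2) | 1u;           /* PRIVDEFENA + MPU enable          */
    __asm__ volatile ("dsb\n\tisb");     /* make the new settings take effect */
}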

To compress or not to compress?

Enabling compression (gzip/deflate) in the Apache server will reduce the size of the response but will add more CPU cycles. I will run a stress test with various response sizes, but
I wanted to ask: in terms of server load, is there any suggestion on when I should turn compression on or off?
Thank you
In most cases web servers are limited by I/O (be it memory, network bandwidth, the database, the hard drive, ...) and have plenty of spare CPU cycles to use for compressing pages before serving them, especially since this isn't really that CPU-intensive, while it provides a huge usability boost for your users and saves you bandwidth.
I think that as long as the server has a powerful CPU, you should use compression. Speed is usually the most important feature a server should have, after security and stability.
It depends on what you want to achieve. Typically, turning deflate on won't add a very significant footprint to your CPU load, and if your website(s) include large text files (HTML, JS, CSS, etc.) it's likely to make an important difference in bandwidth usage and page loading times. Of course, if what you want is to reduce system load and you don't care much about bandwidth, this wouldn't be the right choice for you.
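For what it's worth, "turning deflate on" for the text formats usually amounts to a fragment like this (assuming Apache with mod_deflate loaded; the MIME type list is only an example, so adjust it to what your site actually serves):

<IfModule mod_deflate.c>
    # Compress text-heavy responses; already-compressed formats (images, archives) are left alone
    AddOutputFilterByType DEFLATE text/html text/plain text/css application/javascript application/json
</IfModule>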
Another option you might find useful is installing a lightweight web server/proxy like Nginx, lighttpd or Varnish (I personally prefer the first one) and serving compressed static content with that, leaving the heavier Apache processes to handle only the dynamic content. That would also be likely to result in better overall performance of your server. But, again, this all depends on your scenario, what your website/web application is like, and what you want to achieve.