HTTP requests and Apache modules: Creative attack vectors

Slightly unorthodox question here:
I'm currently trying to break an Apache with a handful of custom modules.
What spawned the testing is that Apache internally forwards requests it considers too large (e.g. 1 MB of trash) to any modules hooked in at the appropriate phase, forcing them to deal with the garbage data - and a lack of handling in the custom modules caused Apache in its entirety to go up in flames. Ouch, ouch, ouch.
That particular issue was fortunately fixed, but the question's arisen whether or not there may be other similar vulnerabilities.
Right now I have a tool at my disposal that lets me send a raw HTTP request to the server (or rather, raw data through an established TCP connection that could be interpreted as an HTTP request if it followed the form of one, e.g. "GET ...") and I'm trying to come up with other ideas. (TCP-level attacks like Slowloris and Nkiller2 are not my focus at the moment.)
Does anyone have a few nice ideas how to confuse the server's custom modules to the point of server-self-immolation?
Broken UTF-8? (Though I doubt Apache cares about encoding - I imagine it just juggles raw bytes.)
Stuff that is only barely too long, followed by a 0-byte, followed by junk?
et cetera
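For concreteness, here is a rough sketch of the kind of thing such a tool does, using nothing but Python's standard socket module; the host, port and payloads below are made-up placeholders, not a recipe:

import socket

TARGET = ("192.0.2.10", 80)  # hypothetical test server

payloads = [
    # header value just over a hypothetical 8 KB limit
    b"GET / HTTP/1.1\r\nHost: test\r\nX-Junk: " + b"A" * 8193 + b"\r\n\r\n",
    # broken / overlong UTF-8 in the request line
    b"GET /\xc0\xaf HTTP/1.1\r\nHost: test\r\n\r\n",
    # barely short enough, then a 0-byte, then junk
    b"GET / HTTP/1.1\r\nHost: test\r\nX-Junk: " + b"B" * 8190 + b"\x00junk\r\n\r\n",
]

for p in payloads:
    with socket.create_connection(TARGET, timeout=5) as s:
        s.sendall(p)
        try:
            print(s.recv(4096)[:200])   # whatever comes back, if anything
        except socket.timeout:
            print("no response (timed out)")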
I don't consider myself a very good tester (I'm doing this out of necessity and a lack of manpower; I unfortunately don't have more than a basic grasp of Apache internals to help me along), which is why I'm hoping for an insightful response or two or three. Maybe some of you have done similar testing for your own projects?
(If stackoverflow is not the right place for this question, I apologise. Not sure where else to put it.)

Apache is one of the most hardened software projects on the face of the planet. Finding a vulnerability in Apache's HTTPD would be no small feat, and I recommend cutting your teeth on some easier prey. By comparison, it is more common to see vulnerabilities in other HTTPDs, such as this one in Nginx that I saw today (no joke). There have been other source code disclosure vulnerabilities that are very similar; I would look at this, and here is another. lhttpd has been abandoned on sf.net for almost a decade, and there are known buffer overflows that affect it, which makes it a fun application to test.
When attacking a project you should look at what kinds of vulnerabilities have been found in the past. It's likely that programmers will make the same mistakes again and again, and often patterns emerge. By following these patterns you can find more flaws. You should try searching vulnerability databases such as NIST's CVE search. One thing you will see is that Apache modules are what most commonly gets compromised.
A project like Apache has been heavily fuzzed. There are fuzzing frameworks such as Peach. Peach helps with fuzzing in many ways; one way it can help you is by giving you some nasty test data to work with. Fuzzing is not a very good approach for mature projects; if you go this route, I would target Apache modules with as few downloads as possible. (Warning: projects with really low download counts might be broken or difficult to install.)
When a company is worried about security, they often pay a lot of money for an automated source analysis tool such as Coverity. The Department of Homeland Security gave Coverity a ton of money to test open source projects, and Apache is one of them. I can tell you first hand that I have found a buffer overflow with fuzzing that Coverity didn't pick up. Coverity and other source code analysis tools, like the open source RATS, will produce a lot of false positives and false negatives, but they do help narrow down the problems that affect a code base.
(When I first ran RATS on the Linux kernel I nearly fell out of my chair, because my screen listed thousands of calls to strcpy() and strcat(); but when I dug into the code, all of the calls were working with static text, which is safe.)
Vulnerability research and exploit development are a lot of fun. I recommend exploiting PHP/MySQL applications and exploring The Whitebox. This project is important because it shows that there are some real world vulnerabilities that cannot be found unless you read through the code line by line manually. It also has real world applications (a blog and a shop) that are very vulnerable to attack; in fact, both of these applications were abandoned due to security problems. A web application fuzzer like Wapiti or Acunetix will tear these applications, and ones like them, apart. There is a trick with the blog: a fresh install isn't vulnerable to much. You have to use the application a bit - try logging in as an admin, create a blog entry, and then scan it. When testing a web application for SQL injection, make sure that error reporting is turned on. In PHP you can set display_errors=On in your php.ini.
Good Luck!

Depending on what other modules you have hooked in, and what else activates them (or is it only too-large requests?), you might want to try some of the following:
Bad encodings - e.g. broken or overlong UTF-8 like you mentioned; there are scenarios where the modules depend on the encoding, for example for certain parameters.
Parameter manipulation - again, depending on what the modules do, certain parameters may mess with them, whether by changing values, removing expected parameters, or adding unexpected ones.
Contrary to your other suggestion, I would look at data that is just barely short enough, i.e. one or two bytes shorter than the maximum, but in different combinations - different parameters, headers, request body, etc.
Look into HTTP Request Smuggling (also here and here) - bad request headers or invalid combinations, such as multiple Content-Length headers or invalid terminators, might cause the module to misinterpret the request handed to it by Apache.
Also consider gzip, chunked encoding, etc. It is likely that the custom module implements the length check and the decoding out of order (see the sketch after this list).
What about partial requests? E.g. requests that trigger a 100 Continue response, or range requests?
The fuzzing tool Peach, recommended by @TheRook, is also a good direction, but don't expect great ROI the first time you use it.
If you have access to the source code, a focused security code review is a great idea. Or even an automated code scan, with a tool like Coverity (as @TheRook mentioned), or a better one...
Even if you don't have source code access, consider a security penetration test, either by an experienced consultant/pentester, or at least with an automated tool (there are many out there) - e.g. AppScan, WebInspect, Netsparker, Acunetix, etc.
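As a concrete illustration of the conflicting-framing idea mentioned in the smuggling and chunked-encoding items above, here is a sketch using only Python's standard library and a placeholder host/port: a request carrying contradictory Content-Length headers plus a chunked body with a broken terminator, sent raw so nothing normalises it on the way out. Whether anything interesting happens depends entirely on how Apache and the custom modules disagree about the body length.

import socket

TARGET = ("192.0.2.10", 80)  # hypothetical test server

smuggle = (
    b"POST /upload HTTP/1.1\r\n"
    b"Host: test\r\n"
    b"Content-Length: 4\r\n"
    b"Content-Length: 11\r\n"            # duplicate, contradictory lengths
    b"Transfer-Encoding: chunked\r\n"    # and a third opinion on framing
    b"\r\n"
    b"5\r\nhello\r\n"
    b"0\r\n"                             # chunked body missing its final CRLF
)

with socket.create_connection(TARGET, timeout=5) as s:
    s.sendall(smuggle)
    print(s.recv(4096))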


How to service HTTP requests on web server

Alright. I know this may draw some heat as "not a good question", etc., but I haven't found anything describing this process in particular (all the resources I've found describe the client-side requesting, not the server-side responses).
I'm going to be working on writing an iOS app in the coming months that necessitates the use of a web server. There are many resources on how to set these up, give them a static IP, etc., but I haven't found any clear ones (and by clear, I mean intelligible to someone not already experienced in it) on how to write a program for such a server that actually responds to the HTTP/client request.
Suppose I have a dummy app and web server combo where the app posts an HTTP request for the time. How would I write an app for the server to bounce the time back when the request comes in? Ideally, I'd like to write this in Objective-C as it's the language I've had the most experience in (whether forced or by choice).
Again, I apologize if it isn't a good question or very clear - I just haven't found any resources that are able to give me much of a place to start.
Your question could probably be described as 'too broad', but I will give it a shot anyway. Disclaimer: I haven't written much server-side code but I have been programming in objc for years now.
The reason you haven't found (m)any resources to help you do what you want to do is because Objective-C is rarely used for writing server-side code. Exactly why that is the case is no doubt a long story, but essentially the answer is that many of the dominant technologies out there (PHP, Python, C#, Java, to name only the prevailing languages) have features and associated frameworks that are better suited to that purpose.
In other words, although it can doubtless be done, you are probably better off using something other than Objective-C for the task, because:
You will have many more resources available to help you get your job done.
You will have a much larger community that you can query for assistance when (not if) you encounter an obstacle.
You will not have to do many things the hard way because there will be existing tools to make it easier.
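To make that concrete: here is roughly how small the "bounce the time back" server from the question is in a language whose standard library ships an HTTP server. This is a Python sketch purely for illustration; the PHP route recommended below is comparably short. The iOS app would then just issue an ordinary GET against this port and read the body.

from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime, timezone

class TimeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # reply to every GET with the current UTC time as plain text
        body = datetime.now(timezone.utc).isoformat().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TimeHandler).serve_forever()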
I would also recommend using PHP as the server-side programming language.
Some months ago I was in the same situation as you. We had planned to write an app (Android) which loads some data from a web server. I had never programmed server-side code before the start of the project, so it was quite interesting and new for me.
We chose PHP as the server-side language.
All I can say is that it was really easy to learn and to write your first scripts that respond to an HTTP request. Using MySQL as the database is also really easy, and it works fine with PHP.
PHP is a de facto standard, so you can find a huge amount of documentation and examples - and of course tutorials and good books ... ;)

What should I check before I release a web application?

I have nearly finished a web application. I need to test it and find the security issues before its release. Are there any methods/guidelines for doing this kind of testing? Or are there any tools to help me check whether my application is ready to go online? Thank you.
I would say:
Check that there are no warnings or errors, even in strict mode (error reporting).
In case you store any sensitive data (passwords, credit cards, etc.), be sure it is encrypted - and passwords hashed - using well-established algorithms rather than anything home-grown (see the sketch after this list). Use SSL and try to be somewhat paranoid about it.
Set up your database with specific access rights per action and per host, and do not use the root account.
Perform exhaustive testing (use unit tests when possible). Involve as many people as you can.
Test it under the main browsers (Firefox, Chrome, Opera, Safari, IE) and, if you have time, in others.
Validate all your HTML/CSS against standards (W3C). (Recommended.)
Depending on the platform you are using, there are profilers which can help you identify bottlenecks in your code (this can be done at a later stage).
Tune settings for your web server / script language.
Be sure it is search-engine friendly.
Pray once it is online :)
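As a minimal sketch of the password point above (Python's standard hashlib, used purely for illustration; the iteration count and salt size are plausible defaults, not a recommendation):

import hashlib, hmac, os

def hash_password(password, salt=None):
    # store both the salt and the digest; never store the password itself
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200000)
    return salt, digest

def verify_password(password, salt, expected_digest):
    # constant-time comparison to avoid leaking information via timing
    return hmac.compare_digest(hash_password(password, salt)[1], expected_digest)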
This is not a complete list, as it depends on:
which language/platform/web server you are using.
what kind of application you developed (social, financial, management, etc.)
who will use the application (the entire world, a specific company, your family, or just you).
Are you going to sell it? Then you must cover at least most of the previous points.
Is your application handling very sensitive information (such as credit cards)? If so, you should pay a professional (or a company) to check your code, settings and methods.
This is just my opinion, take it as it is. I would also like to hear what other people suggests.
Good Luck
As well as what's already been suggested, depending on what type of application it is, you can use a vulnerability scanner to scan your application for any vulnerabilities that could lead to hackers gaining entry.
There are quite a few good scanners out there, but note when using them that the results may not be 100% accurate or complete.
For a list of scanners, commercial and free, see: http://projects.webappsec.org/Web-Application-Security-Scanner-List
For more information on scanners: http://en.wikipedia.org/wiki/Web_Application_Security_Scanner
Good luck.
Here you can find a practical checklist to use before launching a website
http://launchlist.net/
And here is a list of all the stuff you forgot to test
http://www.thebraidytester.com/downloads/YouAreNotDoneYet.pdf

Language-Portable Example Programs

At the moment I am learning Objective-C 2. I'm aware that it's used heavily by Mac developers, but I'm more interested in learning the language at this point in time than the frameworks for developing on Mac OS X/iPhone (except for Foundation). In order to do this I want to write a few intermediate* console applications, but I'm stuck for ideas.
Most examples are something along the lines of "Write a Fraction class that has getters/setters and a print function", which isn't very challenging coming from a C++ background. I'd like some generic examples of programs, but I don't want them to include any Objective-C implementation details. I want to figure out the program structure/write my own interfaces and learn the language from there.
In summary: I am curious as to what example programs Objective-C programmers would recommend for exploring the language.
An example of an "intermediate" application would be something along the lines of: write a program that takes a URL and a word from the command line and returns the number of occurrences of that word in the data returned, e.g.
example -url www.google.com -word search
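(A language-neutral reference for that spec, sketched in Python only to pin down the intended behaviour; the flag names simply mirror the hypothetical command line above.)

import argparse, re, urllib.request

parser = argparse.ArgumentParser()
parser.add_argument("-url", required=True)
parser.add_argument("-word", required=True)
args = parser.parse_args()

# prepend a scheme if the user passed a bare host name
url = args.url if args.url.startswith("http") else "http://" + args.url
with urllib.request.urlopen(url) as resp:
    text = resp.read().decode("utf-8", errors="replace")

print(len(re.findall(re.escape(args.word), text, flags=re.IGNORECASE)))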
"Project Euler" is a standard response for this kind of thing, but I get the feeling that you're less interested in being told to implement algorithmic stuff (since that knowledge is easier to port between languages) and more interested in miniprojects that will familiarize you with core libraries. Is this fair?
If so, IMO, you ought to know the basics of how to do the following with the standard libraries of any language you hope to use for serious work:
Standard IO
Network IO
Disk IO and navigating the filesystem
Regexp utilities
Structured data (XML libraries and CSV libraries if they exist)
Programming problems I would recommend for those:
Standard IO - it sounds like you've already done this.
A very simple proxy - something like what you described in your post, but that listens on a port for a message containing a URL rather than taking it on the command line, and likewise returns the results to whatever contacted it over the network rather than writing to standard output. [Obviously you need to have the machine behind an appropriate firewall for this!]
Something which takes a directory path and recursively tallies the number of lines its children contain. (So: get the directory's listing, open each child file and count the number of line breaks; then open each of its child directories, get their listings, ...) Record any errors encountered (e.g. no read privileges) in a reasonable way. Write the final results out to a file in the directory supplied (see the sketch after this list).
Usually if I tool around in a language enough, I'll run across some problem which I just naturally find myself using regexps for. I'll assume the same is true for you and punt this element for now.
Fetch StackOverflow.com, and [by putting it into a DOM model and navigating that] determine whether this question is still on the front page.
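To pin down the intended behaviour of the line-tallying exercise above, here is a sketch in Python; the output file name is arbitrary, and the point of the exercise is of course to rebuild this with the target language's own file APIs.

import os, sys

def tally(root):
    total, errors = 0, []
    # walk the tree, counting line breaks in every regular file
    for dirpath, _dirnames, filenames in os.walk(root, onerror=errors.append):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    total += f.read().count(b"\n")
            except OSError as exc:        # e.g. no read privileges
                errors.append(exc)
    return total, errors

if __name__ == "__main__":
    total, errors = tally(sys.argv[1])
    # write the results into the directory that was supplied
    with open(os.path.join(sys.argv[1], "line_count.txt"), "w") as out:
        out.write("total lines: %d\nerrors: %d\n" % (total, len(errors)))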
I got the most out of Objective-C by exploring it with a testing framework. I have written a short blog post about it. You should also wrap your head around the memory management conventions employed by Objective-C; reference counting takes a little time to get used to, but works very well if responsibilities are clearly segregated (I have written about that on my blog too).
By getting my hands dirty with a testing framework (GHUnit, in my case), I was able to learn far more about the language than I could have in a "traditional" way. Of course you'll need a little pet project; otherwise this approach doesn't make sense.
I don't think your example is a very good idea, as it requires you to mess with HTTP connections, resources, etc., which is a little framework-specific after all. Parsing a text file would be a little easier in this regard. Using a unit testing framework has the following advantages for you:
learn about platform-specific build systems and deployment details
forced to develop components in a loosely coupled fashion from the ground up
thereby exploring unique mechanisms of the language that might call for new patterns or make familiar ones redundant (e.g. categories making some dependency-injection plumbing unnecessary)
fast compile-test cycle, less time spent in front of the debugger
combined with source control: painless experiments
You should also look into the testing framework's implementation, as testing frameworks always need to work with metadata to some extent. Testing frameworks are often used together with isolation frameworks, which basically create objects at runtime that comply with certain interfaces and act as stand-ins for concrete objects. Looking at their implementation will teach you about the runtime manipulations that can be done in Objective-C (keyword: method swizzling).
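To show the stand-in idea itself in miniature - in Python's unittest.mock only because it is compact; the Objective-C isolation frameworks build equivalent objects at runtime through the metadata facilities mentioned above (feed and fetch_titles are made-up names):

from unittest import mock

# a stand-in object that satisfies whatever interface the code under test expects
feed = mock.Mock()
feed.fetch_titles.return_value = ["first post", "second post"]

def newest_title(source):
    titles = source.fetch_titles()
    return titles[0] if titles else None

assert newest_title(feed) == "first post"
feed.fetch_titles.assert_called_once()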

Can everything be done programmatically in WCF, or are configuration files required for certain features?

I have a strong preference for working in code, leveraging IntelliSense and opening up all the power of the C# language to work with WCF, but I want to make sure that I'm not moving in a direction that will ultimately limit the WCF feature set I can access. My experience with WCF is so limited that I don't understand the benefits of using the configuration files, especially if you can do everything in code (?).
Note: I'm using .NET 3.5.
Can you do 'everything' with WCF programmatically or are configuration files required for the full WCF feature set?
You can do about 99.8% of things in code as well as config.
Some things can be done only in code - like setting user name and password on a call that requires those two for authentication.
And there appear to be a few things that can be done in config only - see this other recent SO question for one example.
But I think, if you prefer code, you should be fine for the vast majority of cases.
Marc
An overgrown comment...
Marc_s' answer and the question's perspective are both good (two +1s from me).
I have no doubt that the following will not be news to either of you, but wanted to point it out in case someone encounters this and isn't aware of the cons of a purely programmatic approach.
Moving to programmatic configuration from config-file based setup means
you lose the ability to adjust (read: hack!) things in the field -- your only avenue of recourse will be to recompile and redeploy binaries. For many scenarios (including one of mine) this is not an option.
you lose the ability to switch between multiple sets of configurations by juggling them in the config file.
I admit that both of the cited 'losses' are debatable - they can encourage bad habits and prevent you from reaching the most solid solution for your customers in the quickest manner possible.
UPDATE: I've implemented a mechanism where I use ChannelFactory<T> but pick up a customised config from the app.config if it's present, or provide a default if it isn't (my scenario is that I'm a guest in someone else's process and hence can't assume a config file is easy to update / has been updated, yet don't want to lose the option of tweaking settings after deployment).

How do you test your applications for reliability under badly behaving I/O?

Almost every application out there performs I/O operations, either with the disk or over the network.
My applications work fine in the development environment, but I want to be sure they will still work when the Internet connection is slow or unstable, or when the user attempts to read data from a badly written CD.
What tools would you recommend to simulate:
slow I/O (opening files, closing files, reading and writing, enumerating directory items)
occasional I/O errors
occasional 'access denied' responses
packet loss in TCP/IP
etc...
EDIT:
Windows:
The closest thing to doing the job as described seems to be Holodeck, a commercial product (>$900).
Linux:
No open-source solution has turned up so far, but the same effect can be achieved as described by smcameron and krosenvold.
The decorator pattern is a good idea.
It would require wrapping my I/O classes, but would result in a testing framework.
The only remaining untested code would be in third-party libraries.
Still, I decided not to go that way, but to leave my code as it is and simulate I/O errors from the outside.
I now know that what I need is called 'fault injection'.
I thought it was a common, off-the-shelf practice with plenty of solutions I just didn't know about.
(By the way, another similar good idea is 'fuzz testing', thanks to Lennart)
To my mind, the problem is still not worth $900.
I'm going to implement my own open-source tool based on hooks (targeting win32).
I'll update this post when I'm done with it. Come back in 3 or 4 weeks or so...
What you need is a fault injection testing system. James Whittaker's 'How to Break Software' is a good read on this subject and includes a CD with many of the tools needed.
If you're on Linux you can do tons of magic with iptables:
iptables -I OUTPUT -p tcp --dport 7991 -j DROP
You can simulate connections going up and down as well. There are lots of tutorials out there.
Check out "Fuzz testing": http://en.wikipedia.org/wiki/Fuzzing
At a programming level, many frameworks will let you wrap the I/O stream classes and delegate calls to the wrapped instance. I'd do this and add a couple of wait calls in the key methods (writing bytes, closing the stream, throwing I/O exceptions, etc.). You could write a few of these with different failure or issue types and use the decorator pattern to combine them as needed.
This should give you quite a lot of flexibility with tweaking which operations would be slowed down, inserting "random" errors every so often etc.
The other advantage is that you could develop it in the same codebase as your software, so maintenance wouldn't require any new skills.
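A minimal sketch of that wrap-and-delegate idea, in Python purely for illustration (the class and parameter names are made up):

import random, time

class FlakyStream:
    """Wraps a file-like object, slowing calls down and occasionally raising."""
    def __init__(self, inner, delay=0.2, error_rate=0.05):
        self._inner = inner
        self._delay = delay
        self._error_rate = error_rate

    def _misbehave(self):
        time.sleep(self._delay)                  # simulate a slow device
        if random.random() < self._error_rate:   # simulate an occasional failure
            raise IOError("injected fault")

    def read(self, *args):
        self._misbehave()
        return self._inner.read(*args)

    def write(self, data):
        self._misbehave()
        return self._inner.write(data)

    def __getattr__(self, name):                 # delegate everything else untouched
        return getattr(self._inner, name)

# usage: stream = FlakyStream(open("data.bin", "rb"), delay=0.5, error_rate=0.1)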
You don't say what OS, but if it's Linux or Unix-ish, you can wrap open(), read(), write(), or any other library or system call, with an LD_PRELOAD-able library to inject faults.
Along these lines:
http://scaryreasoner.wordpress.com/2007/11/17/using-ld_preload-libraries-and-glibc-backtrace-function-for-debugging/
I didn't end up writing my own file system filter, as I initially thought I would, because there's a simpler solution.
1. Network i/o
I've found at least 2 ways to simulate i/o errors here.
a) Running a virtual machine (such as VMware) lets you configure bandwidth and packet loss rates. VMware supports on-machine debugging.
b) Running a proxy on the local machine and tunneling all the traffic through it. For UDP/TCP communications a proxifier (e.g. WideCap) can be used.
2. File i/o
I've managed to reduce this scenario to the previous one by mapping a drive letter to a network share which resides inside the virtual machine. The file I/O will be slow.
A cheaper alternative exists: set up a local FTP server (e.g. FileZilla), configure its speed limits, and use Novell's NetDrive to access it.
You'll want to set up a test lab for this. What type of application are you building, anyway? Are you really expecting the application to be fed corrupt data?
A test technique I know the Microsoft Exchange Server people tried was sending noise to the server - basically feeding every possible input with seemingly random data. They managed to crash the server quite often this way.
But still, if you can't trust input that hasn't been signed then general rules apply. Track every operation which could potentially be untrusted (result of corrupt data) and you should be able to handle most problems gracefully.
Just test your application's behaviour on random input (see the sketch below); that should catch most problems, but you'll never be able to fully protect yourself from corrupt data. That's just not possible, as the data could be part of some internal buffer being handed off within the application itself.
Be mindful of when and how you decode data. That is all.
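A minimal sketch of the "feed it noise" idea, in Python for brevity; parse_message is a made-up placeholder for whatever routine decodes untrusted input in your application:

import os, random

def parse_message(data):
    # placeholder: substitute the real decoding routine under test
    return data.decode("utf-8").split(";")

for _ in range(10000):
    blob = os.urandom(random.randint(0, 4096))
    try:
        parse_message(blob)
    except ValueError:
        pass   # a clean, expected rejection is fine
    # anything else that escapes (crash, hang, unhandled exception) is a finding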
The first thing you'll need to do is define what "correct" means under these circumstances. You can only test against a definition of what behaviour is intended.
The tactics of testing will depend on technology. In the context of automated unit testing, I have found it very useful, in OO languages such as Java, to use various flavors of "mocking" or "stubbing" to pass e.g. misbehaving InputStreams to parts of my code that used file I/O.
Consider Holodeck for some of the fault injection. If you have access to spare hardware, you can simulate network impairment using netem, or a commercial product based on it, the Mini Maxwell, which is much more expensive than free but possibly easier to use.