Website move: how to block/detect all HTTP requests to old domain - migration

I'm moving a website from old.com to new.com/old, but I have to make sure it works before deleting old.com.
It's a very large legacy website that probably has links, images, scripts and other things hardcoded to old.com. The problem is that these references aren't obvious, because the site still loads perfectly while old.com remains up.
Is there a way to block all requests to old.com from my local machine only, or some other tool to make finding these references simpler?

The former is done by updating the hosts file on your local machine to point old.com somewhere else; this overrides what public DNS says. The latter very much depends on how your application is built, and there isn't enough info here to say.
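For example (a minimal sketch; the hostnames are whatever old.com actually serves from, and you may need to flush your DNS cache or restart the browser for it to take effect), pointing the old domain at loopback makes every hard-coded request to it fail on your machine, so broken images, scripts and links show up immediately in the browser's network tools. Use 0.0.0.0 instead of 127.0.0.1 if you run a local web server that might answer:
# /etc/hosts on Linux/macOS, or C:\Windows\System32\drivers\etc\hosts on Windows
127.0.0.1    old.com
127.0.0.1    www.old.com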

Related

Restarting only a portion of a rack/Sinatra app

The great thing about PHP is that if you have something like
clothes.com, clothes.com/men.php, clothes.com/women.php
then if you only edit the men's page, only that particular "app" will be restarted.
But on rack/Sinatra I have to touch the restart.txt file to restart the ENTIRE website.
Is there a way around this problem, so that users browsing other parts of the site won't have any problems while another part of the site is being edited?
(I'm using mod_passenger on Apache, not that it's important.)
This would be true in all cases anyway when editing (non-inline) views (not layouts).
Aside from that, if you're really worried about this then I'd suggest using versioned folders to hold the application code. When you do a deployment, change the proxy to point at the newer version. Users whose requests are already in flight stay on the Apache instance and application version that is already running for as long as those requests remain alive, and (unless you've broken something in the code) move seamlessly to the new code on their next request.
It's also a convenient way to roll back to a previous version quickly and easily.
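A rough sketch of that setup, assuming each release runs as its own app instance behind Apache (the paths, ports and version names below are made up):
# Hypothetical layout:
#   /var/www/myapp/releases/v1  -> app instance on 127.0.0.1:8081 (currently live)
#   /var/www/myapp/releases/v2  -> new code, its own instance on 127.0.0.1:8082
# To cut over, re-point the proxy at the new instance and reload gracefully:
ProxyPass        / http://127.0.0.1:8082/
ProxyPassReverse / http://127.0.0.1:8082/
# apachectl graceful   (in-flight requests finish before workers recycle)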
Check out the Sinatra reloader from sinatra-contrib.
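For a classic-style Sinatra app, the reloader is a one-line addition (development use: it reloads files you've modified on each request, so you don't have to touch restart.txt while coding):
require "sinatra"
require "sinatra/reloader" if development?   # from the sinatra-contrib gem

get "/" do
  "hello"
end
For a modular (Sinatra::Base) app you'd register Sinatra::Reloader inside a configure :development block instead.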

"Hack" in to localhost root directory from a VM web app

I have an Apache web app running locally in a VM. It's Red Hat.
It's PHP based, but the main page is index.html. I am able to go into sub-directories for images and such, and I wanted to know whether it is possible for me to gain access to the directories that contain the PHP code, probably just one level above the images directory. Because index.html forces the page to load, I am unable to see the server's files.
Yes, there used to be a trick called dot-dot (directory) traversal that could do this. Basically you put ../ sequences into a URL and climb above the web root, something like http://www.example.com/../../../../etc/shadow . Occasionally a new way to exploit the vulnerability is discovered, but it is rare these days. Unless you're running an old server, you should be pretty safe.

Authorization between Delphi app and web server

I have a Lazarus (quite a lot like Delphi) application which downloads a few files from https://example.com/UpdateFolder. I was wondering if anything can be done so that the app knows it is downloading files from the right website? Because, if I am right, there is a way for an attacker to trick the app into going to a different website and downloading the wrong files, and I think it is done somehow by editing the system32/drivers/etc/hosts file. I would appreciate any suggestions.
It depends entirely on the application that downloads the files. If it's able to handle SSL you have nothing to worry about, AFAIK, since it needs a trusted certificate before it will make the connection, and that is hard to fake with a Windows hosts file edit.
Alternatively, and only as a last resort (this is why we have domain names in the first place), you could hard-code the IP address of the server that contains the updates and do a lookup to make sure the IP of the website your application is connecting to is the same one you have on file.
However, this makes things very difficult if that IP ever changes, since you would then need to roll out a new update of your entire application (or the DLLs responsible) just for that, which makes the process that much harder to maintain.

Is it possible to debug mod_security2 issues without root access?

I've been developing an application in CakePHP recently, and all was well until it wasn't. On our development server (which I control) the application runs just fine. On the live server (which our university controls) most of my POST requests result in a 403 page. I've figured out that PHP is never even being called in these cases, and I'm 99% certain that the only real configuration difference between the two is mod_security2.
Here's my trouble. I cannot see the error_log file, because I am not root. I can't even list the directory that it's in. We have got to have the slowest admin on the planet, and I'm trying to get past this issue as quickly as possible. Is there any way to debug mod_security2 without simply throwing bits of POST data at it until it breaks, hoping to "guess" at what you might be doing wrong?
I've tried looking through the configuration files (which I do have read access to), but I've never used this mod before, and it's like wading through molasses. I don't even know where to begin.
Disabling the mod outright isn't an option; I'm simply going to have to work with it, I'm afraid. HELP.

Strategies for dealing with URIs when building an application that sits behind a reverse proxy

I'm building an application with a self-contained HTTP server which can be either accessed directly, or put behind a reverse proxy (like Apache mod_proxy).
So, let's say my application is running on port 8080 and you set up your Apache like this:
ProxyPass /myapp http://localhost:8080
ProxyPassReverse /myapp http://localhost:8080
This will cause HTTP requests coming into the main Apache server that go to /myapp/* to be proxied to my application. If a request comes in like GET /myapp/bar, my application will see GET /bar. This is as it should be.
The problem that arises is in generating URIs that have to be translated from my application's URI-space in order to work correctly via the proxy (i.e. prepending /myapp/).
The ProxyPassReverse directive takes care of handling this for URIs in HTTP headers (redirects and so forth.) But that doesn't handle URIs in the HTML generated by my application, or in static files and templates.
I'm aware of filters like mod_proxy_html, but this is a non-standard Apache module, and in any case, such filters may not be available for other front-end web servers which are capable of acting as a reverse proxy.
So I've come up with a few possible strategies:
Require an environment variable be set somewhere that contains the proxy path, and prepend this to all generated URIs. This seems inelegant; it breaks the encapsulation provided by the reverse proxy.
Put the proxy path in a configuration file for my application. Same objection as above.
Use only relative URIs in my application. This can get somewhat tricky; I would have to calculate the path difference between the current resource and where the link is going and add the appropriate number of ../'es. Seems messy. Another problem is that some things must generate absolute URIs, like RSS feeds and generated emails.
Use some hacky JavaScript on the front-end to munge URIs in the document text. This seems like a really horrible idea from an interoperability standpoint.
Use a single URI-generating function throughout my code, and require "static" files like JavaScript, CSS, etc. to be run through my templating system. This is the idea I'm leaning towards now.
This must be a fairly common problem. How have you approached it in the past? What has worked and what has made things more difficult?
Yep, common problem. How to solve this depends on the kind of app you have and the server platform and web framework you're working with. But there's a general way I've approached these problems which has worked pretty well so far.
My preference is to handle problems like this in application code, rather than relying on web server modules like mod_proxy_html to do it, because there are often too many special cases (e.g. client-side JavaScript assembling URLs on the fly) which the server module doesn't catch. That said, I've resorted to the server-module approach in a few cases, but I decided to revise the module code myself to handle the corner cases. Also keep performance in mind: fixing up URLs in your code at the time they're generated is usually faster than shoving the entire HTML through another server module.
Here's my recommendation of how to handle this in your code:
First, you'll need to figure out what kind of URLs to generate. My preference is for relative URLs. You are correct above that "add the appropriate number of ../'es" is messy, but at least it's your (the programmer's) mess. If you go with the config-file/environment-variable approach, then you'll be dependent on whoever deploys your app (e.g. an underpaid and grumpy IT operations engineer) to always set things up correctly. It also complicates release of your code, even if you're doing deployment yourself, since you can't simply copy your development files into production but need to add a per-deployment-environment custom step. I've found in the past that eliminating potential deployment problems is worth a lot of pre-emptive coding.
Next, you'll need to get those URLs into your code. How you do this varies based on type of content/code:
For server-side code (e.g. PHP, RoR, etc.) you'll want to make sure that server-side URL generation happens in as few places as possible in your code (ideally, one method!). If you're using any of the mainstream MVC web frameworks (e.g. RoR, Django, etc.), this should be trivial, since URL generation in an MVC framework generally already goes through a single codepath that you can override. If you're not using one of those frameworks, you likely have URL generation littered throughout your code. The approach you'll want to take is to route all URL generation through one method, and then extend that method to transform non-relative URLs into relative ones. You can usually search for patterns in your code (like "/, '/, "http://, 'http://) and do a manual search and replace (or, if you're really nerdy and have more patience than I do, craft a regex to replace each common case in your source code).
The key to making this work reliably is that, instead of manually replacing all absolute URLs with relative ones in your server-side code (which, even if you get each of them right, is fragile if files are moved), you leave the absolute URLs in place and simply wrap them with a call to your "relativizer" method. This is much more reliable and far less brittle.
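As an illustration (Ruby here as one example, but the idea is language-agnostic; the helper name and paths are made up), the "relativizer" takes the absolute path your code already produces and rewrites it against the path of the page currently being rendered:
require "pathname"

# Hypothetical helper: rewrite an absolute application path ("/images/logo.png")
# relative to the page being rendered, so the link still resolves correctly
# when the whole app is mounted under /myapp/ behind the proxy.
def relative_url(target, current_path)
  # current_path is the request path with no query string, e.g. "/articles/view"
  current_dir = current_path.end_with?("/") ? current_path : File.dirname(current_path)
  Pathname.new(target).relative_path_from(Pathname.new(current_dir)).to_s
end

relative_url("/images/logo.png", "/articles/view")  # => "../images/logo.png"
relative_url("/images/logo.png", "/")               # => "images/logo.png"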
For Javascript, I generally like to do the same thing as server code-- move all URL generation into a single method and ensure any URL generation calls this method. This can be hard on an app with lots of pre-existing javascript, but the search-and-replace method above seems to work well in JS too.
For CSS, URLs in CSS are relative to the location of the CSS file (not the calling HTML page) so using relative URLs is generally easy. Simply put your CSS into a folder and either put images into deeper folders beneath it, or put images into a parallel folder to your CSS and use a single ../ to get to the images relatively. This is a good best practice in general-- if you're not doing relative URLs in CSS already, you should consider doing it, regardless of reverse proxy.
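For instance (made-up layout), with the stylesheet at css/site.css and images in a sibling images/ folder, the URL below resolves against the CSS file's own location and works the same at / and behind /myapp/:
/* css/site.css */
.logo { background-image: url(../images/logo.png); }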
Finally, you'll need to figure out what to do with other oddball static files (legacy static HTML files, for example, have a way of creeping in). In general, I recommend the same practice as for CSS and images-- ideally, put static files into predictable directories and rely on relative URLs. Or (depending on your server platform) it may be easier to remap the file extensions of those static files so that they're processed by your web framework-- and then run your server-side URL generator for all URLs. Or, barring that, you can leave the files in place and manually fix up URLs to be relative-- knowing that this is brittle.
Coming full circle, sometimes there are just too many places where URLs are generated, and it's more effective to use a server module like mod_proxy_html. But I consider this a last resort-- especially if you won't be comfortable editing the source code if needed.
BTW, I realize I didn't mention anything about your idea #4 above (JavaScript link fixup). I wouldn't do that-- if the user has JavaScript turned off, or (more commonly) some network problem keeps that JavaScript from loading until well after the rest of the page, then your links won't work. Too risky.