Configuring Apache to forward requests to a custom (non-HTTP) process on host - apache

I'd appreciate advice on configuring Apache (2.4.4) to forward traffic (i.e. the entries from a web form) to an external process running on the same host as Apache, but listening on a port, say 4321.

Finally got an answer on the Apache forum. Thought I'd post it here for posterity's sake.
I don't think this is an apache/httpd issue, but rather a programming one. The form, when submitted, will go to a script that will process the values on the form (hidden as well as user-entered) and can do whatever is necessary to send these data on in whatever manner is needed. You may need to pick the programming language that back-ends the form with a little care to make certain it can do what is required, but the basic ones (things like Perl and PHP) should be able to handle this. You just need someone to do the necessary programming.
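For illustration, here is a minimal sketch of such a back-end script as a Python CGI program; the field handling and the simple key=value line protocol spoken to the process on port 4321 are assumptions for illustration, not part of the original answer:

#!/usr/bin/env python3
# Minimal CGI sketch: read the submitted form fields and relay them to a
# local process listening on TCP port 4321, then show its reply.
import cgi
import socket

form = cgi.FieldStorage()
# Serialize the fields as key=value lines (an assumed wire format).
payload = "\n".join(
    "%s=%s" % (key, form.getfirst(key, "")) for key in form.keys()
).encode("utf-8")

with socket.create_connection(("127.0.0.1", 4321), timeout=5) as conn:
    conn.sendall(payload)
    reply = conn.recv(4096)

print("Content-Type: text/plain")
print()
print("Backend replied:", reply.decode("utf-8", "replace"))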

Related

Domain URL masking

I am currently hosting the contents of a site with ProviderA. I have a domain registered with ProviderB. I want users to access the contents (www.providerA.com/sub/content) by visiting www.providerB.com. A domain forward is easy enough and works as intended; however, unless I embed the site in a frame (which is a big no-no), the actual URL reads www.providerA.com/sub/content despite the user inputting www.providerB.com.
I really need a solution for this: domain masking without the use of a frame. I'm sure this has been done before. An .htaccess domain rewrite?
Your help would be hugely appreciated! I'm going nuts trying to find a solution.
For Apache
Usual way: set up mod_proxy. The apache on providerB becomes a client to providerA's apache. It gets the content and sends it back to the client.
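A minimal sketch of what that usually looks like on providerB, assuming full access to the server configuration (the hostnames and path are placeholders taken from the question):

<VirtualHost *:80>
    ServerName www.providerB.com
    # mod_proxy and mod_proxy_http must be enabled
    ProxyRequests Off
    # Fetch the content from providerA and relay it under providerB's name
    ProxyPass        / http://www.providerA.com/sub/content/
    ProxyPassReverse / http://www.providerA.com/sub/content/
</VirtualHost>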
But it looks like you only have .htaccess. So no proxy; you need full configuration access for that.
So you cannot. See: How to set up proxy in .htaccess
If you have PHP on providerB
Set up a proxy written in PHP. All requests to providerB are intercepted by that PHP proxy. It gets the content from providerA and sends it back, so it does the same thing as the Apache module. However, depending on the quality of the implementation, it might fail on some requests, content types, sizes, timeouts, ...
Search for "php proxy" on the web; you will see a couple available on GitHub and elsewhere. YMMV as to how difficult it is to set up, and how reliable it is.
No PHP but some other server-side language
Obviously that could be done in another language; I chose PHP because that is what I use the most.
The best solution would be to transfer the content to providerB :-)

How to create a friendly url in Tomcat?

I want to change my application URL from //localhost:8080/monitor/index.html to just monitor, so that typing monitor into the browser opens my application. Is there a way to achieve this? Can someone suggest the configuration changes that would be required?
Can I map my short URL to the existing one, maybe somewhere in web.xml? I am not sure about the approach; any suggestions would be great.
Thanks and regards
Deb
You're mixing up several different protocol layers in your question.
If you just enter nothing but "monitor" in the browser URL bar, the browser is going to first look up "monitor" in DNS and, finding nothing, will then probably send a query to Google or your configured search engine. In the past browsers have taken other steps, such as appending ".com" and prepending "www.", but I don't think modern browsers do that any more.
So far, your server is not even remotely involved.
If you're a customer of a large ISP (Time Warner, Comcast) and use their DNS, it's also possible the ISP will intercept your failed DNS lookup and route the request to a "helpful" search page (i.e. spam) of their own.
At this point the request is still nowhere near your server.
I suppose you could mess with the /etc/hosts file on your local system to resolve "monitor" to the proper hostname, but that's an extremely brittle solution that has to be hard-coded on each machine you want to have this "shortcut" link (and which breaks when the hostname changes).
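For what it's worth, that brittle variant would look something like this on each client machine (shown only for illustration):

# /etc/hosts -- maps the bare name to the machine running Tomcat
127.0.0.1   monitor
# Tomcat would additionally have to listen on port 80 and serve the
# application at the root context for http://monitor/ to open it.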
You're much better off just setting up a web shortcut in your browser that points to the right place.

What was the evolution of interaction paradigm between web server program and content provider program?

In my opinion, a web server is responsible for delivering content to the client. If it is static content like pictures and static HTML documents, the web server just delivers it as a bitstream directly. If it is dynamic content that is generated while processing the client's request, the web server will not generate the content itself but will call some external program to generate it.
AFAIK, these dynamic content generation technologies include the following:
CGI
ISAPI
...
And from here, I noticed that:
...In IIS 7, modules replace ISAPI filters...
Are there any others? Could anyone help me complete the above list and elaborate on, or show some links about, their evolution? I think it would be very helpful for understanding applications such as IIS, Tomcat, and Apache.
I once wrote a small CGI program, and though it serves as a content generator, it is still nothing but a normal standalone program. I call it normal because the CGI program has a main() entry point. But with recent technologies like ASP.NET, I am not writing a complete program, but only some class library. Why did such a radical change happen?
Many thanks.
Well, the biggest missing piece in your question is that you can have the webserver generating the content dynamically as well. This is common on most platforms outside of PHP and Perl. You often set that website behind apache or nginx used as a proxy, but it doesn't "call an external program" in any reasonable sense; it forwards the HTTP request to the proxied server. This is mostly done so you can have multiple sites on the same server, and also so you can have apache/nginx protect you against incorrect requests.
But sure, we can, for the sake of the question, say that "proxying" is a way to call an external program. :-)
Another way to "call the external program" is Python's WSGI, where you do call a permanently running server. So again you don't start an external program; it's more like calling the module in ASP (although it's a separate program, not a module, you don't start it with every request; you use an API).
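As a minimal sketch of that model, a WSGI application is just a callable that the long-running server invokes for every request (this uses only the Python standard library):

# A WSGI app: the persistent server calls this function per request;
# no new process is started.
def application(environ, start_response):
    body = b"Hello from a long-running WSGI process\n"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    # wsgiref is the reference server from the standard library; real
    # deployments sit behind a production WSGI server and a proxy.
    from wsgiref.simple_server import make_server
    make_server("", 8000, application).serve_forever()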
The change from calling external programs as in CGI to calling modules as in ASP.NET, processes with WSGI, or proxying to another webserver happened because with CGI you have to start a new program for each request. The Perl/PHP interpreter needs to be loaded into memory, and all the modules they use as well. This quickly becomes very heavy and process/memory intensive.
Therefore, to be able to use bigger systems that are permanently running, other techniques have been developed. Most of them are platform/language dependent, and the only one that is platform independent is really to make a complete webserver and then use apache/nginx as a proxy in front (in which case the apache/nginx strictly isn't necessary any more).
I hope this cleared things up a bit.
FastCGI and WSGI are two more interfaces content generators can use to talk to a webserver -- the reason more recent interfaces aren't complete programs is that forking and executing things that expect to be executables is costly.
OTOH, writing your little generator in such a way that it doesn't leak anything between invocations is harder than having the liberty to just exit at the end (and rely on environment variables and command line arguments like a normal executable).
This is all for performance reasons, but then you have more complicated content generators and process management in the webservers.

exim configuration - accept all mail

I've just set up exim on my Ubuntu computer. At the moment it will only accept email for accounts that exist on that computer, but I would like it to accept all email (just because I'm interested). Unfortunately there seem to be a million exim-related config files, and I'm not having much success finding anything on Google.
Is there an introduction to exim for complete beginners?
Thanks.
There's a mailing list at http://www.exim.org/maillist.html. The problem you will face as an Ubuntu user is that there's always been a slight tension between Debian packagers/users and the main Exim user base because Debian chose to heavily customize their configuration. Their reasons for customizing it are sound, but it results in Debian users showing up on the main mailing list asking questions using terms that aren't recognizable to non-Debian users. Debian runs its own exim-dedicated help list (I don't have the address handy, but it's in the distro docs). Unfortunately this ends up causing you a problem because Ubuntu adopted all these packages from Debian, but doesn't support them in the same way as Debian does, and Debian packagers seem to feel put upon to be asked to support these Ubuntu users.
So, Ubuntu user goes to main Exim list and is told to ask their packager for help. So they go to the Debian lists and ask for help and may or may not be helped.
Now, to answer your original question, there are a ton of ways to do what you ask, and probably the best way for you is going to be specific to the Debian/Ubuntu configurations. However, to get you started, you could add something like this to your routers:
catchall:
  driver = redirect
  domains = +local_domains
  data = youraddress@example.com
If you place that after your general alias/local delivery routers and before any forced-failure routers, it will redirect all mail to any unhandled local_part at any domain in local_domains to youraddress@example.com.
local_domains is a domain list defined in the standard Exim config file. If you don't have it or an equivalent, you can replace it with a colon-delimited list of local domains, like "example.com:example.net:example.foo"
One of the reasons it's hard to get up to speed with Exim is that you can do literally anything with it (literally: someone on the list proved the expansion syntax is Turing-complete a few years ago, IIRC). So, for instance, you could use the above framework to look the domains up from a file, apply regular expressions against the local_parts to catch, save the mail to a file instead of redirecting to an address, put it in front of the routers and use "unseen" to save copies of all mail, etc. If you really want to administer an Exim install, I strongly recommend reading the documentation from cover to cover; it's really, really good once you get a toehold.
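For example, the "unseen" variation just mentioned might look roughly like this (a sketch with a placeholder archive address, placed before your other routers):

copyall:
  driver = redirect
  domains = +local_domains
  data = archive@example.com
  # "unseen" lets routing continue, so normal delivery still happens
  # and this router only takes a copy
  unseen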
Good luck!

Are there alternatives to CGI (and do I really need one)?

I am designing an application that is going to consist of 3-4 services that run as separate processes and are linked by a suitable IPC. The system is going to have a web interface and I want to use whatever webserver is there.
The web interface should be accessed under some URL that allows other URLs on the same webserver to do totally different things. I'm planning to use the path below that URL to specify what the web interface should do. It has facilities for use by other applications over the net and for humans to interact with in a browser.
Off the cuff, I'd work as follows:
make the webserver fire up a CGI process for every request it receives (like SetHandler in Apache)
let the CGI connect to the IPC
let it get whatever it needs from the backend services
let the CGI return HTML / XML and whatever HTTP Status based on the services' answers
Now, what I really want is to avoid the first two steps, or if I can't, avoid the second one, because I'm afraid I'm wasting performance on unnecessary overhead (the requests coming from other applications might be frequent).
PHP, for example, can open persistent connections to a MySQL database that survive the script's runtime and don't need to be recreated next time, though I don't know how they actually do it. Also, as I understand it, the Apache modules are loaded once when the server starts, so that might remove the first step but would tie me to Apache.
So, what are good ways to hook a handler for specific URLs into different webservers? I don't want to handle the HTTP myself; otherwise I might just use a proxy setup to a second server, but that seems like reinventing the wheel. If you think CGI is fine and have examples where it handles large numbers of requests of a similar structure, please let me know.
OK, I overlooked this previously. Explaining my question here led me to it:
Instead of creating a new process for every request, FastCGI can use a single persistent process which handles many requests over its lifetime. -- Wikipedia: FastCGI
Even under moderate loads, CGI is a pretty unscalable beast. FastCGI is an option, but you'll probably also find a mod_XXXX package where XXXX is the name of your language. There are mods for Ruby, Perl, and Python, for instance, and probably a fair few others.
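As a rough sketch of the FastCGI route in Python (this assumes the third-party flup package; the web-server wiring, e.g. mod_fcgid or mod_proxy_fcgi in Apache, varies by setup):

# One persistent worker process answers many requests instead of being
# re-executed per request, which is the whole point over plain CGI.
from flup.server.fcgi import WSGIServer

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"served by a persistent FastCGI worker\n"]

if __name__ == "__main__":
    # The front-end web server connects to this socket.
    WSGIServer(app, bindAddress=("127.0.0.1", 9000)).run()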