My objective is to create an Apache module that will provide RESTful services (i.e., we have some legacy code that controls/queries some networking equipment, and we would now like to expose that functionality as a RESTful service). I guess the flow might look something like this:
WebBrowser -- issues RESTful URI --> [Apache (my_module)] --> Interface to existing legacy code
I have been mucking around various wikis, blogs, forums, articles, etc., but I just can't seem to understand how those RESTful URLs will get to (my_module) in Apache [you can tell I have never worked with web-server internals, much less modules, before]. I mean, do I have to edit that httpd.conf file and say something like "send all URLs that look like http://baseurl/restservices/... to my_module"? If so, how do I do it?
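From what I've pieced together so far, I'm guessing the httpd.conf wiring would look roughly like this, where restservices-handler is a made-up handler name that my_module would have to register (please correct me if this is wrong):

    <Location /restservices>
        SetHandler restservices-handler
    </Location>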
Also, what will my_module actually get? Does it get the full HTTP request message that it has to parse itself, like a typical CGI program?
Further, what is the best way for my_module to interact with my legacy code? E.g., should it open a TCP connection and send messages to a wrapper written around the legacy code that interprets those messages? Or can my_module directly invoke the functions in my legacy code somehow, if I compile the entire legacy codebase into an Apache module?
Thanks for any hints. If you know of a good tutorial, please point me to it. I'm looking for a high-level overview that will give me the architecture (the developers under me can then follow up on the nitty-gritty details).
I'd write an extension for PHP or Python and use mod_php / mod_wsgi
I think you are approaching this in the wrong way:
Apache modules are not really how you want to handle a URL if your requirements are quite basic. Depending on the language your legacy code is in, I would advise:
Binding its API into a Python or PHP module, and having that script called by Apache through the normal means. It is also a lot simpler (in many cases) to glue a compiled language with C calling conventions to these scripting languages than to Apache itself.
This also has the advantage of adding an abstraction layer, which lets you put additional logic written in a scripting language on top of your core legacy code. You may also want to preprocess and validate request data before handing it to your legacy code.
Both PHP and Python also have RESTful frameworks and utilities.
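To make the binding idea concrete, here is a minimal sketch of a Python extension written in C. The legacy function legacy_query_device() is a hypothetical stand-in for whatever your legacy API actually exposes:

    /* legacy_bridge.c -- minimal sketch of a Python 3 extension in C.
     * Assumes a hypothetical legacy function:
     *     int legacy_query_device(const char *id, char *buf, size_t buflen);
     * Build (roughly):
     *     gcc -shared -fPIC $(python3-config --includes) \
     *         legacy_bridge.c legacy.o -o legacy_bridge.so
     */
    #include <Python.h>

    /* Hypothetical declaration from your legacy headers. */
    int legacy_query_device(const char *id, char *buf, size_t buflen);

    static PyObject *py_query_device(PyObject *self, PyObject *args)
    {
        const char *device_id;
        char result[1024];

        if (!PyArg_ParseTuple(args, "s", &device_id))
            return NULL;
        if (legacy_query_device(device_id, result, sizeof(result)) != 0) {
            PyErr_SetString(PyExc_RuntimeError, "legacy query failed");
            return NULL;
        }
        return PyUnicode_FromString(result);
    }

    static PyMethodDef methods[] = {
        {"query_device", py_query_device, METH_VARARGS,
         "Query a device through the legacy code."},
        {NULL, NULL, 0, NULL}
    };

    static struct PyModuleDef moduledef = {
        PyModuleDef_HEAD_INIT, "legacy_bridge", NULL, -1, methods
    };

    PyMODINIT_FUNC PyInit_legacy_bridge(void)
    {
        return PyModule_Create(&moduledef);
    }

A handler in any Python REST framework can then just call legacy_bridge.query_device("eth0") and serialize the result.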
If you do write an Apache module, then check out Writing Apache Modules with Perl and C
See:
Developing PHP Extensions in C, and Extending Python in C or C++. Also, if using Python, check out the WSGI stuff.
I'd agree with Aiden. Writing Apache modules is not for the faint-hearted, and you definitely don't want to go there unless you absolutely must. You would need to be prepared to become very conversant with how Apache works.
If you still think you need to, then look at:
http://httpd.apache.org/apreq/
This is a library built on the Apache Portable Runtime (APR) libraries which provides higher-level functionality for dealing with POST data, cookies, etc. from C code hooked into Apache via a custom module.
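If you do go down that road, a content handler is the piece that receives the routed requests. A minimal (untested) sketch for Apache 2.x follows; the handler name restservices-handler is an assumption and must match whatever you put in SetHandler. Note that Apache hands your handler an already-parsed request_rec, so there is no CGI-style parsing of the raw request:

    /* mod_restservices.c -- minimal sketch of an Apache 2.x content handler.
     * Wire it up in httpd.conf with:
     *     <Location /restservices>
     *         SetHandler restservices-handler
     *     </Location>
     */
    #include <string.h>
    #include "httpd.h"
    #include "http_config.h"
    #include "http_protocol.h"
    #include "ap_config.h"

    static int restservices_handler(request_rec *r)
    {
        /* Only handle requests explicitly routed to us; let Apache
         * keep looking otherwise. */
        if (!r->handler || strcmp(r->handler, "restservices-handler") != 0)
            return DECLINED;

        /* r->uri, r->method, r->args, headers, etc. arrive pre-parsed. */
        ap_set_content_type(r, "application/json");
        ap_rprintf(r, "{\"uri\": \"%s\", \"method\": \"%s\"}\n",
                   r->uri, r->method);
        return OK;
    }

    static void register_hooks(apr_pool_t *pool)
    {
        ap_hook_handler(restservices_handler, NULL, NULL, APR_HOOK_MIDDLE);
    }

    module AP_MODULE_DECLARE_DATA restservices_module = {
        STANDARD20_MODULE_STUFF,
        NULL, NULL, NULL, NULL, NULL,   /* no config handling in this sketch */
        register_hooks
    };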
The book Aiden mentions is a bit dated, though. You're better off getting:
The Apache Modules Book: Application Development with Apache
I am looking to configure G-WAN to act as a reverse-proxy cache for my Python web application.
I couldn't find any examples on the web.
Thanks a lot for your examples,
Laurent
If your goal is merely to accelerate your Python application then you should just run it from G-WAN (see the hello.py example).
Some (advanced) users have used G-WAN handlers to write their own custom proxy, but G-WAN will document its embedded proxy (and load-balancer) in the following weeks.
It still works without configuration files, so you will not have to learn anything new.
And competent users will like the ability to personalize the proxy with their own scripts.
There's always a long way between a solution that "runs" and a polished version ready for the general public.
Is there an Objective-C version of Artifice?
If not, how would I design/develop/create it?
I think I might be able to help you here.
I have a Ruby library called Mimic that is somewhat similar to Artifice, albeit more self-contained and built on top of Sinatra. I'm pretty happy with it, and one of my favourite features is that, as well as being configured using its Ruby DSL (or the Sinatra API directly), it can be configured remotely from any process that speaks HTTP. This means you can use it in your Objective-C tests and configure it from the tests too (rather than having, say, a set of external fixtures in a Ruby file).
In the name of eating my own dog food, I recently converted the acceptance tests for my Objective-C RestClient port, Resty, to use Mimic. The Mimic daemon is started up as part of the build process, and my stubs are configured directly in the tests using a thin Objective-C wrapper around the Mimic REST API.
As you can see, I strive very hard for test clarity!
Those tests use OCUnit but you can use this with Kiwi. In fact, the assertEventually macro in the above tests was the basis of the asynchronous testing support that I ported to Kiwi.
I've since extracted the Objective-C wrapper for Mimic from LRResty and moved it into the Mimic repository. You may want to check out the Resty project to see how my project and the tests are configured. If you have any questions, please ask.
One caveat: I haven't found a way of getting these tests to run successfully in Xcode 4 using the "Test" option, due to the way it runs tests. In Xcode 3, I rely on Run Script build phases to start and stop the Mimic daemon, but because Xcode 4 doesn't run the tests as part of the build process, this doesn't work. I've tried to accomplish something similar using pre/post test actions, but unfortunately these are woefully inadequate due to various bugs.
Bonus tip: I find the Charles debugging proxy a massive help when working with web services, and you can use it with Mimic too; the Objective-C wrapper can be proxied through Charles so you can see exactly what is happening, both in terms of stub configuration and actual HTTP requests (Mimic can even be configured to return some helpful debugging data in the response headers).
Do let me know if you have any questions.
OK, I had a look at the memcached module for Nginx, but it is clearly not for the faint-hearted.
Does anyone know of a way to load a library in memory and then use its functions from a web server like Nginx, Lighttpd, or Apache?
Examples of such libraries abound: JSON parsers, database client libraries, etc.
It's hard to find anything written about doing this, but these guys have made it pretty easy:
http://dearetc.com/
It takes only one line of code to import a library and all its functions.
Hope it will help others...
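For reference, the mechanism that tools like this build on is the POSIX dlopen/dlsym API, which loads a shared library at runtime and looks up symbols by name. A minimal sketch, where libjson.so and its json_parse() function are hypothetical placeholders:

    /* dlopen_sketch.c -- load a shared library at runtime (POSIX).
     * The library name libjson.so and the symbol json_parse are
     * hypothetical placeholders for whatever you actually load.
     * Build: gcc dlopen_sketch.c -ldl -o dlopen_sketch
     */
    #include <stdio.h>
    #include <dlfcn.h>

    int main(void)
    {
        void *lib = dlopen("libjson.so", RTLD_NOW);
        if (!lib) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Look up the function by name and cast to its real signature. */
        int (*json_parse)(const char *) =
            (int (*)(const char *))dlsym(lib, "json_parse");
        if (!json_parse) {
            fprintf(stderr, "dlsym: %s\n", dlerror());
            dlclose(lib);
            return 1;
        }

        printf("result: %d\n", json_parse("{\"ok\": true}"));
        dlclose(lib);
        return 0;
    }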
I'm looking for the equivalent of a URL-shortening service such as http://bit.ly/ for internal deployment in our organisation. Does anyone know of any open-source projects (especially Java ones) or commercial products which I can install internally, rather than using an external service?
Thanks!
Shorty: http://get-shorty.com/
But there are several other URL shorteners... most of them are in PHP/MySQL.
I don't know if a Java one exists.
http://monkeytooth.net/2010/12/htaccess-php-how-to-wordpress-slugs/
tells you the core basics of how to achieve the concept with PHP and .htaccess; building up from there would be on your own, but it's not all that hard a concept to build on if you know PHP/MySQL. That said, you're not likely to find anything built purely in JavaScript, since you need some kind of server-side script to talk to the database where you keep all your short-URL identifiers, and JavaScript, to my knowledge, doesn't support direct database connectivity. You can, however, use AJAX to communicate with a server-side script and then do what you want with the JavaScript.
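The rewrite half of that recipe is only a couple of lines. A rough sketch, where redirect.php is a hypothetical script that looks the code up in the database and issues the redirect:

    # .htaccess -- send short codes like /ab3x9 to a lookup script
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^([a-zA-Z0-9]+)$ redirect.php?code=$1 [L]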
I am a career .NET developer, but have recently been delving into the LAMP universe.
I have a Joomla/VirtueMart e-commerce site nearly ready to launch. The vendor's XML data feed must undergo a transformation before it can be imported into the products database.
I wrote a .NET console app that can easily perform the transform and upload the result to the site, but I am convinced there is a better way.
So I have been looking at CGI scripts. From what I have read, it seems the only way to execute a CGI script is through an HTTP request. Is there a way to schedule a CGI script to run at a specific time?
Also, which language works best for transforming XML: C, Perl, or Python?
CGI is basically just an interface defined to execute arbitrary programs on a server by way of an HTTP request, nothing more and nothing less.
How you can execute a program on a server at a specific time depends on the server; both Windows and Linux-based systems usually have a scheduler service. In Linux land it's called cron, and a scheduled job is called a cronjob. If there is no other mechanism provided by whoever hosts your application, you can define jobs using the crontab utility on the server (you'll need shell access for that); its documentation (somewhat inaccessible, but there you go) describes what a crontab should look like. There are tons of tutorials on the web, too.
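As an illustration, a crontab entry that runs a hypothetical transform script every night at 2am would look like this:

    # minute hour day-of-month month day-of-week  command
    0 2 * * * /usr/local/bin/transform_feed.sh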
An XSLT transformation is probably your best bet for turning XML files into something you can easily import. An XSLT transform can be processed from any language of your choice.
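For example, using the command-line xsltproc tool (part of libxslt), the cronjob above could run something like the following, where transform.xsl and feed.xml are placeholders for your stylesheet and the vendor feed:

    xsltproc transform.xsl feed.xml > products_import.xml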