I've got a little project that uses Perl CGI scripts. At the moment I use Apache to run these scripts, and I have an index.html file that redirects to a CGI file.
Now I want to make my project 'portable', meaning I want to be able to move it to another location without having to configure Apache (so no changing the cgi-bin directory in the configuration). The end product would be a script (or HTML file) that could be opened so that a browser pops up and the project runs just as it would under Apache. I don't really know where to start.
It's not completely clear what you're asking. But I suspect you're about to find out why PSGI/Plack is such a good idea.
If you write your application to the PSGI interface then you'll be able to just drop it into any PSGI-enabled web environment (or set up your own lightweight web server using something like Starman).
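For example, here is a minimal sketch of a PSGI application; the file name app.psgi and the greeting text are just placeholders for illustration.

#!/usr/bin/env perl
use strict;
use warnings;

# A PSGI application is simply a code reference that receives the
# request environment and returns [status, headers, body].
# The value of the last expression in a .psgi file is the app itself.
my $app = sub {
    my $env = shift;
    return [
        200,
        [ 'Content-Type' => 'text/plain' ],
        [ "Hello from a portable PSGI app\n" ],
    ];
};

You could then start it with the development server that ships with Plack (plackup app.psgi) or with Starman (starman --port 8080 app.psgi), without touching any Apache configuration.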
Related
I have some CGI scripts which are used internally within the organization. Due to some standards changes, the server/system team has commented out the line below in httpd.conf (the Apache config file) that supports the CGI scripts. Because of this change, the existing CGI scripts are affected and can no longer be executed in the browser.
### LoadModule cgid_module modules/mod_cgid.so
Is there a way to overcome this situation and make the scripts work as they are?
NOTE: The commented line above cannot be uncommented/enabled.
Ways of circumventing your company's policy, from best to worst:
Load mod_cgi anyway.
Load mod_fcgi, and convert your CGI script into a FastCGI daemon. It's a lot of work, but you can get faster code out of it! (See the sketch after this list.)
Load your own module that does exactly the same thing as mod_cgi. mod_cgi is open source, so it should be easy to just rename it.
Load mod_fcgi, and write a Fast CGI daemon that executes your script.
Install a second apache web server with mod_cgi enabled. Link to it directly or use mod_proxy on the original server.
Write your own web server. Link to it directly or use mod_proxy on the original server.
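As a rough illustration of the second option, a simple CGI script can often be rewritten as a FastCGI loop with CGI::Fast. This is only a sketch; the real work is making sure the script's globals and state survive running in a persistent process.

#!/usr/bin/perl
use strict;
use warnings;
use CGI::Fast;

# Each pass through this loop handles one request. The process stays
# alive between requests, which is where the speed gain over plain
# CGI comes from.
while ( my $q = CGI::Fast->new ) {
    print $q->header('text/plain');
    print "Hello from a FastCGI daemon\n";
}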
Last time you asked this question, you talked about using mod_perl instead. The standard way to run a CGI program unchanged (for some value of "unchanged") under mod_perl is to use ModPerl::Registry. Did you try that? How did it go?
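For reference, the usual ModPerl::Registry setup is only a few lines of Apache configuration (assuming mod_perl itself is loaded); the alias and directory here are placeholders.

Alias /perl/ /path/to/your/cgi-scripts/
<Location /perl/>
    SetHandler perl-script
    PerlResponseHandler ModPerl::Registry
    PerlOptions +ParseHeaders
    Options +ExecCGI
</Location>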
Another alternative would be to convert your programs to use PSGI. You could try using Plack::App::WrapCGI or CGI::Emulate::PSGI. Using Plack would free you from any deployment restrictions: you could run the code under mod_perl or even as a separate service behind a proxy server.
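As a sketch of that route, a .psgi wrapper around an existing script might look something like this (the script path is a placeholder):

use strict;
use warnings;
use Plack::App::WrapCGI;

# Wrap an existing, unmodified CGI script as a PSGI application.
my $app = Plack::App::WrapCGI->new(
    script => '/path/to/your/script.cgi',
)->to_app;

The resulting application can then be served with plackup, Starman, or any other PSGI-capable server.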
But I can't help marvelling at how ridiculous this whole situation is. Your company has CGI programs that it (presumably) relies on to run part of its business. And they've just decided to turn off support for them. You need to find out why this decision has been made and try to buy some time in order to convert to an alternative technology.
I have a PHP script, which runs as the Apache daemon user, that first removes any existing crontabs already installed by the daemon user and then adds the new crontab.
I wish to invoke this only once, when my PHP Slim web application loads for the first time, ideally as part of the Apache startup.
It seems that putting it anywhere in the index.php file (either directly or via a required file) doesn't prevent the crontab script from being run on every request. I've tried using $GLOBALS with an if statement, but unfortunately I haven't been able to formulate a working solution. Surely there must be something more rudimentary that allows me to invoke a PHP script just once after Apache has been restarted?
Ideally I'd rather not modify the Apache startup script since I wish to keep everything within the slim web application itself (for source control purposes).
I've been struggling for a couple of days with this problem, but can't seem to fix it. I think I'm almost there... but not quite :(
This is where I am at.
I'm on a headless Debian server, running Virtualmin/Webmin for creating my domains/users etc. I don't know if this will mess things up, but I'm happy to modify the config files manually (via Webmin or via ssh/vim).
I am attempting to run fossil as a cgi service over apache.
It's an internal site, named homeserver.net. I can reach the default pages just fine, and add and create links etc. as I want to.
Please note that the solution to my problem is at the end of the question.
The files are located on disk at the path below, which tallies with my Apache document root:
/home/homeserver/www
I would like to run Fossil to host both the internal site and, later on, any dev work that I practice on, in separate files. So I created a new directory for these repositories:
/home/homeserver/repos/web/site.fossil
/home/homeserver/repos/dev/ [no repository yet!]
Reading the instructions on the Fossil page, I have created a short CGI file called 'fos_repo.cgi' that reads as follows:
#!/usr/bin/fossil
directory: /home/homeserver/repos
notfound: http://www.homeserver.net/site404.htm
When I open the link to
www.homeserver.net/cgi-bin/fos_repo.cgi
I get redirected to the 404 page that I have written. So the script is clearly being read and working.
From reading the fossil pages I understand that I should be able to use the following link to open/access the repo.
www.homeserver.net/cgi-bin/repos/web/site
I'm not sure why this isn't working...
So far I have tried the following.
I opened the repository from the CLI, and had the server run in the background:
fossil server site.fossil &
I thought maybe the file should have been inside the main repo directory, not inside a subdirectory, so I moved it; it now lives in
/home/homeserver/repos/site.fossil
I tried creating an alias to the file in Apache:
Alias /home/homeserver/repos/web/site.fossil /home/homeserver/www/repos
When I browse to
www.homeserver.net/repos/site
I get nothing, but going to
www.homeserver.net/repos/site.fossil
will attempt to download the file (which is a binary).
So I think I'm getting somewhere, but I'm not sure what I'm missing.
I've used fossil before, but I ran it as a local server, and started it up as and when I needed it.
I'm running it like this so that I can eventually push the site out to a live VPS (maybe even finish up hosting the Fossil site on the VPS also).
PS: I really liked Fossil when I used it before, and loved the whole integrated wiki and bug tracker, and the fact that I could simply copy the file to my external drive to do a backup. Personally, I don't really want to change to something else, but if I have to...
thanks in advance.
David
Edit: trying other options.
So I thought I would try the single repository method shown on the Fossil page, so I adjusted my CGI script accordingly.
Now when I navigate to www.homeserver.net/cgi-bin/fos_repo.cgi I get the following message returned:
SQLITE_CANTOPEN: cannot open file at line 30276 of [f5b5a13f73]
SQLITE_CANTOPEN: os_unix.c:30276: (21) open(/home/homeserver/repos)
However, if I SSH to the server and start it manually with
fossil server site.fossil
I can get to the server at www.homeserver.net:8081.
So either I have a problem with my SQLite usage under Apache, or something else is wrong. Please help.
Solution
So for reasons of simplicity I've decided to go with a single CGI file for each repo.
My initial directory structure was as follows:
/home/homeserver/www
/home/homeserver/www/repos
/home/homeserver/www/repos/web # for web site development
/home/homeserver/www/repos/dev # for other development
I think part of my problem was that I was hoping that, by having directory: point to my repos/ location, Fossil would find the site.fossil file (located in repos/web) and the dev.fossil file (located in repos/dev).
Obviously this didn't work.
The reason I wanted it to look like this was for separation of the information on my system.
For some reason I had decided that pointing Fossil at repos/ would automatically give me a nice Fossil-style front page with links to my repositories. However, after using the directory: version and getting the following error message
Unable to find or open the project repository
I realised that I was still going to need to write my own front page linking to the repositories, and that my expectation was a little too much.
So I've decided to run with a single CGI file pointing to each repo that I need to make.
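For reference, such a per-repository CGI file is only a couple of lines; the path below follows the layout listed above and may need adjusting.

#!/usr/bin/fossil
repository: /home/homeserver/www/repos/web/site.fossil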
Instead of
www.homeserver.net/cgi-bin/repos/web/site
try
www.homeserver.net/cgi-bin/repos.cgi/index
Reading your (very long) question again, I suggest trying
www.homeserver.net/cgi-bin/fos_repos.cgi/index
(LAMP server configuration)
As a workaround for another problem, I need PHP to be able to access local files, but prevent these files from being served over http by Apache.
Normally, I would just use .htaccess to accomplish this; however, due to institutional restrictions, I cannot. I also can't touch php.ini, although I can use ini_set() within PHP.
As a creative solution, I thought that if PHP executes as its own Linux user (not as apache) I could use normal chowns and chmods to accomplish this.
Again, the goal is simply to have a directory of files that apache will not display, but php can access.
I'm open to any suggestions.
Put the files outside of your web accessible root (DocumentRoot), but keep them accessible via PHP.
Suggestion:
/sites
/sites/my.site.com
/sites/my.site.com/data // <-- data goes here
/sites/my.site.com/web // <-- web root is here
Here's a thought. Set the permissions on the files to be inaccessible even to the owner; then, when PHP needs them, chmod() them, read them, then chmod() them back to inaccessible.
I have an existing web application which I have been building with an ant script and deploying as a .war file to Tomcat.
I am trying to add Drupal to my current technology stack to provide CMS and general UI-related functionality so that I don't have to write my html pages by hand and rather use templates.
During the installation of Drupal 7, some of the instructions suggest that I go to this directory:
/etc/apache2/sites-available
and change the DocumentRoot to
/home/myuser/drupal/drupal7
If I make the docroot a basic directory on the file system, how will this impact how the application will work? In addition to Apache, I also have a Tomcat server. My goal is to get them all to play nicely together. How is this best accomplished?
If I make the docroot a basic directory on the filesystem
I'm not sure what you mean by this. There's no qualitative difference between /var/www and /home/myuser/drupal/drupal7. The latter is longer and in the user's home directory, but assuming this user will be administering the service anyway, that doesn't matter.
Next, the best way to make Tomcat and Apache get along is probably to run them on different subdomains. You could use the same domain, but that would mean running one of the daemons on a nonstandard port, which looks strange and might run into firewall trouble for some users.
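If you do want everything on one domain and standard ports, a common pattern is to let Apache reverse-proxy part of the URL space to Tomcat. Here is a rough sketch of an Apache virtual host along those lines; the server name, document root, and /app/ path are placeholders, and mod_proxy/mod_proxy_http are assumed to be loaded.

<VirtualHost *:80>
    # Placeholder server name.
    ServerName www.example.com

    # Drupal (and anything else Apache serves directly) lives here.
    DocumentRoot /home/myuser/drupal/drupal7

    # Hand everything under /app/ off to Tomcat, assumed to be
    # listening on its default HTTP connector port 8080.
    ProxyPass        /app/ http://localhost:8080/app/
    ProxyPassReverse /app/ http://localhost:8080/app/
</VirtualHost>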