Can anyone help me with an error_log file? If you already guessed that I am not an experienced user, that is true :-)
I have a VPS running CentOS 5 with 4 CPUs and 768 MB of memory, with 5 sites on it.
The problem I have is that, no matter which site is accessed, the system generates a file called "error_log" in the site's root, and in any other folder containing a PHP script; after running a PHP script, an error_log appears in that folder.
On every access the system writes new lines, and it is the same error message in every error file; only the timestamp differs.
This is part of an error_log file:
[13-Mar-2012 06:52:18] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/eaccelerator.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/eaccelerator.so: cannot open shared object file: No such file or directory in Unknown on line 0
[13-Mar-2012 06:52:20] PHP Warning: PHP Startup: Unable to load dynamic library '/usr/local/lib/php/extensions/no-debug-non-zts-20090626/eaccelerator.so' - /usr/local/lib/php/extensions/no-debug-non-zts-20090626/eaccelerator.so: cannot open shared object file: No such file or directory in Unknown on line 0
If I am right, it is something about eAccelerator. I tried to find out what that is, and it seems to be some caching mechanism. As far as I know, I never did anything with it.
My sites use a widely used static HTML cache: once a page is generated by PHP, it is stored in a text file on disk and afterwards served from disk. Something like this: http://www.theukwebdesigncompany.com/articles/php-caching.php
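For reference, the pattern described in that article looks roughly like this (a minimal sketch; the cache path and timeout are made up, not my actual code):

$cacheFile = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
if (file_exists($cacheFile) && time() - filemtime($cacheFile) < 3600) {
    readfile($cacheFile);   // serve the stored copy straight from disk
    exit;
}
ob_start();                 // capture everything the page prints
// ... generate the page with PHP here ...
file_put_contents($cacheFile, ob_get_contents());  // store it for next time
ob_end_flush();             // and send it to the browser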
Any help to find the problem and fix it would be nice. Also, if you have any questions, do not hesitate to ask; I will try to help as much as I can. Again, I am not experienced with WHM, so if you ask me something, please tell me exactly where to look for it :-).
Best regards.
You must install the eAccelerator extension for PHP 5.3.
You may have updated to PHP 5.3 and forgotten to install it (or cPanel auto-updated!).
eAccelerator is a free open source PHP accelerator, optimizer, encoder and dynamic content cache for PHP. It increases performance of PHP scripts by caching them in compiled state, so that the overhead of compiling is almost completely eliminated. Also it uses some optimizations to speed up execution of PHP scripts. eAccelerator typically reduces server load and increases the speed of your PHP code by 1-10 times.
Google how to install it, or use this link: http://www.dedicated-resources.com/guide/128/eAccelerator-for-PHP.html
If you cannot do that, or don't want to install it, you can edit /usr/local/lib/php.ini and remove eaccelerator.so, pdo.so, pdo_sqlite.so and sqlite.so in the extension section.
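For instance (a sketch; comment out whichever of these lines your php.ini actually contains):

;extension="eaccelerator.so"
;extension="pdo.so"
;extension="pdo_sqlite.so"
;extension="sqlite.so"

Commenting the lines out with a leading ; is safer than deleting them, since you can re-enable an extension later by removing the semicolon.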
Or go back to PHP 5.2.
I recommend that you install eAccelerator and boost your PHP performance ;)
I've written a CGI script that processes data generated by another program. The problem is that this file is located outside the cgi-bin. How can I make sure that my Perl script can read this file? I've already tried changing the permissions of this file, and I also tried to make a link in the cgi-bin folder, but Apache is too smart for that. I guess possible solutions are:
Edit the Apache config file in a way that Apache can read files outside the cgi-bin.
Run the CGI script with a 'portable' webserver, like you can do with Python (python -m http.server [port]). Unfortunately this does not execute the Perl CGI scripts.
I'm kind of stuck on how to do either of these solutions.
Your CGI script can access anything on your OS unless you run Apache under some sort of jail, in which case it can read anything inside the jail (provided, of course, that the Apache process has permission to read the file).
E.g. the following simple script will print out your password file:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;

my $q = CGI->new();         # CGI helper object
print $q->header();         # print the Content-Type header
print qx(cat /etc/passwd);  # run a shell command and print its output
About modern Perl web-app development, read the following:
PSGI: What is it and what's the fuss about?
Plack advent calendar: http://advent.plackperl.org/2009/12/day-1-getting-plack.html (buy the ebook if you can, here: http://handbook.plackperl.org)
https://github.com/plack/Plack
Get some modern web framework from CPAN. There are many (maybe too many); the best known are:
Dancer (Dancer2)
Mojolicious
Poet/Mason
and of course, the big-gun: Catalyst
I personally mostly use:
Poet/Mason
Mojolicious
EDIT
In your cgi-bin there should be a script called printenv.pl. Try:
chmod 755 printenv.pl
and point your browser to http://address/cgi-bin/printenv.pl. You will get the Apache environment. See, you must know the basics of operating system commands and how the web works to successfully run a web application. It is impossible to write down everything in one answer; you need to use Google, read answers to other questions here, and so on.
Also, in the above script, you can change the cat /etc/passwd to any other shell command, to test what your CGI script can or cannot do.
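For example, to check whether the script can see the data file from your question (the path is a placeholder, substitute your own):

print qx(id);                            # which user the script runs as
print qx(ls -l /path/to/your/datafile);  # can Apache's user see the file?

If ls shows the file but your script still cannot open it, check the permissions of every directory on the path, not just the file itself.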
I've solved this problem by using plackup in combination with PSGI.
use strict;
use warnings;
use CGI::Emulate::PSGI;
use CGI::Compile;

# compile the CGI script to a coderef and wrap it as a PSGI app
my $sub = CGI::Compile->compile("location/to/script.cgi");
my $app = CGI::Emulate::PSGI->handler($sub);
If you run plackup file.psgi, it sets up a local webserver that runs as the current user. Problem solved.
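In other words (the port is arbitrary):

plackup --port 5000 file.psgi
# then browse to http://localhost:5000/

Because the server runs as you rather than as Apache's user, it can read any file you can.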
I would like to run a local Monticello HTTP repository at work, so that we can share code easily among colleagues.
Is there a way to run something similar to SmalltalkHub privately?
EDIT:
I have tried all the options suggested here, and none of them seems to work smoothly. Let me recap the options:
1) WebDAV on Apache, following Stuart's suggestion. I tried it, using some online guides. My current dav.conf looks like this:
DavLockDB /tmp/DavLock
Alias /pharo /opt/data/pharo

<Location /pharo>
  Order Allow,Deny
  Allow from all
  Options Indexes MultiViews
  Dav On
  AuthType None
</Location>
It worked for a few days. Then suddenly I was not able to read new versions of a certain package: whenever I write a version in one image and read it in another one, I get a ZnInvalidUTF8 exception. I am not sure why; could it be that WebDAV has issues listing too many files?
2) Setting up my FTP server. It seems to work, but when I try to set this repository as a remote in the versioner, I get MCFtpRepository doesNotUnderstand: #koRemote.
3) SqueakSource3, following Tobias. I have tried running the two Gofer commands in both Pharo2 and Pharo3. In Pharo2 it does not load at all. In Pharo3, more or less everything works. I had to fix a few errors due to deprecated or removed messages, but in the end I am able to create projects and write to them.
The problem arises when I read. Apparently SS3 keeps some kind of internal cache. The result is that the list of packages I see on the project page differs from the list of packages that the client gets. The difference seems to be that the client requests a short version of the page, like http://localhost:8080/ss/MyProject/?C=M;O%3DD, and the results there are consistently fewer than in the full page http://localhost:8080/ss/MyProject.
Moreover, even on the project page, the list of versions remains cached until I navigate to a different project.
4) SmallTalkHub, following Sean. I have tried both using images from the INRIA server and images suggested by the Pharo-VM-loader (they may be the same).
I had to install Seaside again, since there was no ZnZincAdaptor in the downloaded image. I am now able to start SmallTalkHub, but as soon as I try to register a user, I get an error MessageNotUnderstood: receiver of "new" is nil. I am not able to track down where this error comes from (is there a way to open a server-side debugger instead of returning 500 in Seaside?).
After this error, I can see a user both in MongoDB and in the interface, but I am not able to log in.
5) Git using FileTree, as suggested by Kylon. This would prevent me from using Metacello to handle dependencies, and looks even more complex than the other options.
At this point I am at a loss. :-( If I want to use Pharo, I will need to be able to collaborate with my colleagues. Using open source repositories is not an option, at least right now.
Is there a simple, tried and tested way to set up such a repository?
SqueakSource3 or SmallTalkHub would be even better, thanks to their user interfaces, but I really need at least basic collaboration. Having an option that can run on a headless server would also be a big plus: if this becomes a tool we use, it will not be feasible to host the repository on my laptop.
Per this thread on the Pharo Dev mailing list:
Setting up the Server:
Download a SmalltalkHub image (https://ci.inria.fr/pharo-contribution/job/SmalltalkHub/)
Install mongodb on your computer (for Debian: apt-get install mongodb)
Launch the SmalltalkHub image
Evaluate: ZnZincServerAdaptor startOn: 8080
Visit http://localhost:8080/tools/hub, create an account and a project
In addition to Sean's answer: if you just want a Metacello repository, and don't necessarily need the full SmalltalkHub stuff, then you just need a WebDAV server. Apache will work fine, and I've even used Confluence's WebDAV support (with some tweaking) successfully in the past.
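A minimal Apache setup is not much more than the dav.conf in the question; adding basic authentication looks roughly like this (a sketch, with made-up paths and user file):

DavLockDB /var/lock/apache2/DavLock
Alias /mc /srv/monticello

<Location /mc>
  Dav On
  Options Indexes
  AuthType Basic
  AuthName "Monticello"
  AuthUserFile /etc/apache2/mc.htpasswd
  Require valid-user
</Location>

Create the user file with htpasswd -c /etc/apache2/mc.htpasswd yourname, and point Monticello at http://yourserver/mc as an HTTP repository.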
In addition to the other answers:
Just storing your versions in Dropbox works very well!
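In the image this is just an ordinary directory repository pointing into the shared folder (a sketch; the path is made up, and depending on your Pharo version the argument may need to be a FileDirectory rather than a FileReference):

"add the shared Dropbox folder as a Monticello repository"
MCDirectoryRepository new
    directory: '/Users/me/Dropbox/mc' asFileReference;
    yourself

Everyone who has the folder synced adds the same repository, and Dropbox does the transport.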
You can also install SqueakSource3 (which, unlike SmalltalkHub, doesn't need MongoDB):
Gofer new
    url: 'http://www.smalltalkhub.com/mc/Seaside/MetacelloConfigurations/main';
    package: 'ConfigurationOfSeaside3';
    load.
((Smalltalk at: #ConfigurationOfSeaside3) project version: #stable) load.

Gofer new
    url: 'http://www.squeaksource.com/MetacelloRepository';
    package: 'ConfigurationOfSqueakSource';
    load.
((Smalltalk at: #ConfigurationOfSqueakSource) project version: #bleedingEdge) load: #('All').
Then start your adaptor (e.g. ZnZincServerAdaptor startOn: 8080) and go to http://localhost:8080/instalSS
Another way is to go down the popular route of Git. I am using GitHub for my projects and it works great, and Git itself works very well locally too. So if you are already familiar with Git, then it's a very good choice.
You can find more information here: https://ci.inria.fr/pharo-contribution/job/PharoForTheEnterprise/lastSuccessfulBuild/artifact/GitAndPharo/GitAndPharo.pier.html
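For sharing inside an office you don't even need GitHub; a bare repository on any machine your colleagues can reach over SSH is enough (hostnames and paths here are placeholders):

# on the server: create a shared bare repository
git init --bare /srv/git/ourcode.git

# on each developer machine
git clone ssh://server/srv/git/ourcode.git
git push origin master   # publish your commits

The FileTree format mentioned in the question just writes each Monticello package as a directory tree, so the repository is plain files as far as Git is concerned.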
Sorry about the bad SmalltalkHub experience. I have made some fixes to the configuration, and need to check whether that is enough.
I have been using PHP for CGI scripting for some time now, and recently got interested in Lua.
I installed the latest version of LuaRocks (2.1.2) and the bundled version of Lua (5.1.4). I wanted to start from the basics, and hence installed CGILua (5.1.4-2) and all its dependencies using "luarocks install cgilua".
I am able to run simple Lua scripts using the shebang line to point to my Lua interpreter, but when I use it to point to the CGI launcher "cgilua.cgi.exe" to run .lp files, it just won't work. I edited my httpd configuration file to allow CGI execution in my htdocs and cgi-bin directories, and used the cgi-script handler for .lp pages. I am trying to run the login.lp example from the CGILua examples directory. I even added the line "Content-type: text/html", to no avail. Executing the cgilua.cgi.exe file from the command line without arguments just closes the application with the message ""cgilua.cgi.exe" stopped working".
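For reference, the relevant part of my httpd.conf looks roughly like this (a sketch from memory; the directory path is from my machine):

<Directory "C:/Apache/htdocs">
    Options +ExecCGI
    AddHandler cgi-script .lp .exe
</Directory>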
Could anyone tell me what I am missing? Maybe the launcher is supposed to be used in a different way?
I don't suppose permissions play a part in this, as in Windows all users have at least read and execute permissions.
The URL I'm trying to access is http://localhost/login.lp. My Apache error log shows "Premature end of script headers: login.lp" with a 500 Internal Server Error, and the same thing if I access http://localhost/cgilua.cgi.exe.
I don't know what your requirements are, but perhaps it would be easier to simply use Apache's mod_lua:
http://httpd.apache.org/docs/trunk/mod/mod_lua.html
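A minimal setup looks something like this (a sketch based on the docs above; note that mod_lua is documented as experimental):

# httpd.conf: load the module and map .lua files to it
LoadModule lua_module modules/mod_lua.so
AddHandler lua-script .lua

-- hello.lua: a handler script provides a handle() function
function handle(r)
    r.content_type = "text/plain"
    r:puts("Hello from mod_lua")
    return apache2.OK
end

Note that mod_lua serves .lua handler scripts; it will not run CGILua's .lp template pages as-is.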
It's been more than 2 weeks; I try to install but am still getting an error.
Firstly, I have indeed searched for similar errors, but I didn't find a solution at all. If you find one, please let me know.
Secondly, this is my state:
1. I have installed Python 2.7 and Django 1.5.1 (it works).
2. I have also installed MAMP.
3. I am trying to configure mod_wsgi and want it integrated with my MAMP Apache server.
4. I am using Mac OS X Mountain Lion 10.8.4.
My configuration files:
/etc/apache2/original/extra/httpd-userdir.conf, inside my apache2/original/extra/
/etc/apache2/users/akhyar.conf: pastebin.com/zcY58WTV (sorry about this, I am new on Stack Overflow)
/etc/apache2/httpd.conf: pastebin.com/je2D8zMz
Thirdly, this is my error:
When I run apachectl configtest, this error appears: my error
So, what is actually going on? Can someone tell me why, and show me the mistakes?
Once that is solved, what is the next step for configuring mod_wsgi on my MAMP?
Thanks in advance; any help is highly appreciated.
In this file, line 15, you're including the per-user conf files:
http://pastebin.com/7y7ibuqP
On line 473 of this one, one of those per-user conf files, you're including the above file again:
http://pastebin.com/zcY58WTV
This causes infinite recursion while trying to parse the conf files.
I think there are some other errors too, and to be honest the files are pretty messy, but the best way forward is to remove all Include directives from akhyar.conf. For the most part they're already duplicates; where they're not, inline the contents of those files instead of using Include. If there are other errors, you'll at least see useful line numbers to start tracking them down.
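After stripping the Includes, a per-user conf usually needs to contain nothing more than something like this (a sketch; the username and path are from the question, the directives are typical defaults):

<Directory "/Users/akhyar/Sites/">
    Options Indexes MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>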
Also note that the [warn] lines are just warnings; you should probably fix them, but the server will still run with them, and they're not what's causing the error.
I have access to a remote Solaris machine which crashes occasionally, and I have to ask someone with physical access to boot it up, which works fine. I would like to know which tools/files I should look at to find out the cause of the crash, so that I can make the necessary configuration changes and avoid it in the future.
What tools you can use will depend on what version of Solaris you have running and what the actual problem is. The first thing to do is check the system console (which it sounds like you don't have access to) and the /var/adm/messages file. This file is updated with system messages, and the newest appear at the end.
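For example, to see the most recent entries, or to watch them live:

tail -100 /var/adm/messages   # last 100 lines
tail -f /var/adm/messages     # follow new messages as they arrive

The messages logged just before the crash are usually the interesting ones.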
Next, you can look for a system core file. If a core file was created, it would be in /var/crash/hostname, where "hostname" is the name of the machine.
If you have an actual core file in the /var/crash/hostname directory, this set of commands will give you a good string to search Google with:
# cd /var/crash/hostname
Replace "hostname" with the hostname of your machine.
# mdb -k unix.0 vmcore.0
If you have multiple core files, select the most recent version.
> ::status
This should give you a panic message; cut and paste that into Google and see what you can find.
For more core file analysis read this:
http://cuddletech.com/blog/pivot/entry.php?id=965