Strange Apache behaviour when launching an external binary called by a Perl script

I am currently setting up a web service powered by Apache, running on CentOS 6.4.
This service uses Perl CGI scripts (cgi-bin) that launch, among other things, homemade compiled Fortran binaries.
Here is the issue: when I boot my server, everything goes well except that one of my binaries crashes systematically (with a segfault logged by the kernel) when called by my Perl scripts.
If I manually restart the httpd service (at the command line: service httpd restart), the issue is completely fixed.
I examined the Apache and system logs and found nothing suspicious.
It appears that the problem occurs only when httpd is launched by the /etc/rc[0-6].d startup directives. I tried changing the launch order of httpd (S85httpd by default) to various other positions, without success.
To summarize, my web service is only functional (with no external binary crash) when httpd is launched at the command line once the server has fully booted!
[EDIT] This issue is now resolved:
My Fortran binary handles very large arrays and complex functions, requiring an unlimited stack size.
Although the stack size limit was defined system-wide (in /etc/security/limits.conf), for some reason the apache/Perl/Fortran-binary chain was not aware of it, causing my binary to crash each time it was called.
By contrast, when I manually restarted Apache at the shell prompt, the stack size limit was correctly inherited (my .bashrc contains 'ulimit -S -s unlimited').
As a workaround, I used the BSD::Resource module (http://metacpan.org/pod/BSD::Resource) to set the stack size directly in my Perl script, e.g. setrlimit(RLIMIT_STACK, $softlimit, $hardlimit);
This new stack size limit is then passed directly from my Perl script to my binary.
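For reference, a minimal sketch of that workaround (the binary path is a placeholder; BSD::Resource exports setrlimit and the RLIMIT_*/RLIM_* constants by default):

#!/usr/bin/perl
use strict;
use warnings;
use BSD::Resource;

# Raise the stack limit before spawning the Fortran binary; the child
# process inherits it. This may fail if the hard limit is capped for
# the httpd user.
setrlimit(RLIMIT_STACK, RLIM_INFINITY, RLIM_INFINITY)
    or die "setrlimit failed: $!";

# Placeholder path for the Fortran binary.
system('/path/to/fortran_binary') == 0
    or die "binary exited abnormally: $?";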

I've run into similar problems before. One way to solve this might be to put the binary on a 'delayed start', so that it starts after everything else on your system is running. You can do this by putting an at job in your /etc/rc.local script to start the binary in X minutes.
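A sketch of that idea, assuming it is httpd itself you want to delay (the two-minute delay is arbitrary):

# In /etc/rc.local: schedule a delayed restart of httpd once the rest
# of the system has settled.
echo "service httpd restart" | at now + 2 minutes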

Related

How to run a script from startup on Raspbian 10 (buster)?

I have designed a GUI that I want to run as soon as I turn on my Raspberry Pi. It is currently set up to automatically log in as user on startup, but if that makes the process more difficult I can change that. My Raspi runs on Raspbian 10 (buster), which has made things difficult since I can only find tutorials for Raspbian 8 or so.
I have tried modifying the autostart folder, but it is not in the same location as in previous Raspbian versions and doesn't seem to work the way it used to. Tutorials say to create a .desktop file in /home/pi/.config/autostart, but I don't have a .config folder, or at least it's hidden. For me, autostart is in /etc/xdg/autostart, and when I try to create a new file there using nano in the terminal, I get the message [Directory '/etc/xdg/autostart' is not writable] and it doesn't save my file.
I have also tried calling my script in /etc/rc.local but it did nothing. Some have said it doesn't work for GUIs.
Here's what I type into the terminal:
$ nano /etc/xdg/autostart/gui.desktop
and a new file pops up, but at the bottom I get the warning [Directory '/etc/xdg/autostart' is not writable]
How can I get my GUI script to run on startup with Raspbian 10 (buster)?
There are a number of issues here. First, when you are looking at tutorials, recognize that Linux distros are built in layers; for simplicity, let's say your "layer stack" looks like this: kernel, systemd, X11, XDG, LXDE. The kernel boots, then starts systemd, which then starts X11 (and a lot of other stuff); X11 starts XDG (and some other things, I think); LXDE is started by either X11 or XDG, I'm not sure which.
You want to add something to this process. You can do it at the kernel level (bad idea), at the systemd level (probably not right unless it's a daemon), at the X11 level (still probably bad, as you don't have a user session yet), or at the XDG or LXDE level.
XDG is probably the right place, as it has all you need (a GUI, a user session) while being common (XDG autostart will probably still work if you switch window managers).
With that out of the way, why isn't your approach of modifying the XDG autostart directory working? It's because /etc/xdg/autostart is a system configuration directory: any changes made to it apply to all users. You may want this, but the system is trying to protect the other users on your machine, so only root is allowed to make changes that affect everyone. If that is really what you want, use "sudo" (documented elsewhere on Stack Exchange and the internet). If you want the change just for your own user, use ~/.config/autostart instead (https://wiki.archlinux.org/index.php/XDG_Autostart); you may need to create that directory first with "mkdir -p ~/.config/autostart" and then create a .desktop file inside it with your editor.
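For example, a minimal entry (the file name, application name, and script path are all placeholders) could look like this:

# ~/.config/autostart/gui.desktop
[Desktop Entry]
Type=Application
Name=My GUI
Exec=/usr/bin/python3 /home/pi/gui.py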
Would it be better to have the Python program run in a terminal window from startup? That way you would see what it is doing in case of errors.
If so, perhaps check this out: https://stackoverflow.com/a/61730679/7575617
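If you go that route, the Exec line of the autostart entry sketched above could wrap the script in a terminal emulator (lxterminal is an assumption; use whichever one is installed):

Exec=lxterminal -e /usr/bin/python3 /home/pi/gui.py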
By the way, in the file manager, hit CTRL+H to toggle viewing hidden files and folders.

Red Hat with httpd24 connecting to Informix using DBI

I'm at my wits' end on this. I have two RHEL 7 boxes on which I just installed httpd24 (v2.4.34). They were running httpd (v2.4.6) without any connection problems. Now when I try to run Perl scripts from the browser, they fail with...
install_driver(Informix) failed: Can't load '/usr/local/lib64/perl5/auto/DBD/Informix/Informix.so' for module DBD::Informix: libifsql.so: cannot open shared object file: No such file or directory at /usr/lib64/perl5/DynaLoader.pm line 190.
at (eval 5) line 3.
Compilation failed in require at (eval 5) line 3.
Perhaps a required shared library or dll isn't installed where expected
at /var/www/html/app/cgi-bin/test_informix_odbc.cgi line 35.
But when I run the same script from the command line as 'apache', it runs just fine. All the ENV vars are set correctly.
Anyone run into anything similar before?
Newer versions of httpd have stopped bringing the user environment in when the service is started; it would no longer use the LD_LIBRARY_PATH environment variable I was setting in httpd.conf. I found this little blurb in /opt/rh/httpd24/service-environment:
"Services are started in a fresh environment without any influence of user's environment (like environment variable values). As a consequence, information of all enabled collections will be lost during service start up."
grep -r "LD_LIBRARY_PATH" /opt/rh/httpd24/
/opt/rh/httpd24/enable:export LD_LIBRARY_PATH=/opt/rh/httpd24/root/usr/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
I prepended the standard Informix paths in /opt/rh/httpd24/enable.
export LD_LIBRARY_PATH=/opt/IBM/informix/lib:/opt/IBM/informix/lib/esql:/opt/rh/httpd24/root/usr/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
And everything is back to normal. Woohoo!
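As a sanity check, one option (a sketch; the .so path comes from the error message above) is to export the same Informix paths in a shell and confirm that the linker can now resolve the library:

export LD_LIBRARY_PATH=/opt/IBM/informix/lib:/opt/IBM/informix/lib/esql:$LD_LIBRARY_PATH
ldd /usr/local/lib64/perl5/auto/DBD/Informix/Informix.so | grep -i ifsql
# libifsql.so should now resolve to a path under /opt/IBM/informix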

Running a wrapper file continuously to use JFR to monitor ActiveMQ performance

I have an issue with continuously running Java Flight Recorder to monitor memory usage and other performance statistics of ActiveMQ.
The wrapper configuration file (wrapper.conf) is in this directory, alongside the wrapper, activemq, and libwrapper.so files:
../apache-activemq-5.12.1/bin/linux-x86-64/wrapper.conf
I added the lines below to run JFR:
wrapper.java.additional.13=-XX:+UnlockCommercialFeatures
wrapper.java.additional.14=-XX:+FlightRecorder
wrapper.java.additional.15=-XX:FlightRecorderOptions=defaultrecording=true,disk=true,repository=../jfr/jfrs_%WRAPPER_PID%,settings=profile
wrapper.java.additional.16=-XX:StartFlightRecording=filename=../jfr/jfrs_%WRAPPER_PID%/myrecording.jfr,dumponexit=true,compress=true
When I run the wrapper file by hand, the expected output 'myrecording.jfr' is generated under the path specified in wrapper.conf. The problem is that I also want this to happen automatically (without running the wrapper file manually).
What might be a possible solution for that?
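One possible approach (a sketch only; the install path is an assumption, and the bundled wrapper script must support running as an init script): register it as a boot-time service, so that ActiveMQ, and with it the JFR options above, starts automatically.

ln -s /opt/apache-activemq-5.12.1/bin/linux-x86-64/activemq /etc/init.d/activemq
chkconfig --add activemq    # RHEL/CentOS; use update-rc.d on Debian
chkconfig activemq on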

Serving Lua pages with Apache on Windows

I have been using PHP for CGI scripting for some time now and recently got interested in Lua.
I installed the latest version of LuaRocks (2.1.2) and the bundled version of Lua (5.1.4). I wanted to start from the basics, and hence installed CGILua (5.1.4-2) and all its dependencies using "luarocks install cgilua".
I am able to run simple Lua scripts using the shebang line to point to my Lua interpreter, but when I use it to point to the CGI launcher "cgilua.cgi.exe" to run .lp files, it just won't work. I edited my httpd configuration file to allow CGI execution in my htdocs and cgi-bin directories and used the cgi-script handler for .lp pages. I am trying to run the login.lp example in the CGILua examples directory. I even added the line "Content-type: text/html" to no avail. Executing the cgilua.cgi.exe file from the command line without arguments just closes the application with the message "cgilua.cgi.exe stopped working".
Could anyone tell me what I am missing? Maybe the launcher is supposed to be used in a different way?
I don't suppose permissions have a part to play in this, as on Windows all users have at least read and execute permissions.
The URL I'm trying to access is http://localhost/login.lp. My Apache error log shows "Premature end of script headers: login.lp" with a 500 internal server error, and the same thing if I access http://localhost/cgilua.cgi.exe.
I don't know what your requirements are, but perhaps it would be easier to simply use Apache's mod_lua.
http://httpd.apache.org/docs/trunk/mod/mod_lua.html
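For what it's worth, a minimal mod_lua setup (file names are placeholders; the handler API follows the documentation linked above) looks something like this:

# In httpd.conf: load the module and map .lua files to it.
LoadModule lua_module modules/mod_lua.so
AddHandler lua-script .lua

-- hello.lua: mod_lua calls handle() for each request.
function handle(r)
    r.content_type = "text/html"
    r:puts("Hello from mod_lua!")
    return apache2.OK
end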

Debugging Solaris OS crash

I have terminal access to a remote Solaris machine that crashes occasionally, and I have to ask someone with physical access to boot it back up, which works fine. I would like to know which tools/files I should look at to find out the cause of the crash, so that I can make the necessary configuration changes and avoid it in the future.
What tools you can use will depend on which version of Solaris you are running and what the actual problem is. The first thing to do is check the system console (which it sounds like you don't have access to) and the /var/adm/messages file. This file is updated with system messages, and the newest appear at the end.
Next, you can look for a system core file. If a core file was created, it will be in /var/crash/hostname, where "hostname" is the name of the machine.
If you have an actual core file in the /var/crash/hostname directory, this set of commands will give you a good string to search Google with:
# cd /var/crash/hostname
Replace "hostname" with the hostname of your machine.
# mdb -k unix.0 vmcore.0
If you have multiple core files, select the most recent version.
> ::status
This should give you a panic message, cut and paste that into google and see what you can find.
For more core file analysis, read this:
http://cuddletech.com/blog/pivot/entry.php?id=965