CGI: 502 Bad Gateway - The CGI was not CGI/1.1 compliant

I have a form in an HTML page that, after pressing the submit button, calls index.cgi:
#!/usr/bin/sh
# make the pysqlite2 libraries and modules visible, and pick the right python
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/nastools/pysqlite2/pysqlite2
export PYTHONPATH=$PYTHONPATH:/nastools/pysqlite2/:/path/impo
export PATH=/nastools/python64/bin/:$PATH
# run the real CGI program, folding stderr into stdout
python /remote/path/impo/manager.py 2>&1
I just want to run manager.py, which is stored in the folder /remote/path/impo!
I'm going crazy with this error and can't find a way to solve it...
Any suggestions?
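For what it's worth, a 502 with "The CGI was not CGI/1.1 compliant" usually means the server never received a valid CGI header block from the script. A sketch of the wrapper printing the header itself, in case manager.py does not (paths as in the question):
#!/usr/bin/sh
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/nastools/pysqlite2/pysqlite2
export PYTHONPATH=$PYTHONPATH:/nastools/pysqlite2/:/path/impo
export PATH=/nastools/python64/bin/:$PATH
# emit a minimal CGI header block first, in case manager.py does not
echo "Content-type: text/html"
echo
python /remote/path/impo/manager.py 2>&1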

Thanks NineBarry, but I found the problem. After testing with a known-good CGI script like this one:
#!/bin/sh
echo "Content-type: text/html"
echo    # the blank line terminates the CGI header block
echo "<HTML>"
echo "<HEAD>"
echo "</HEAD>"
echo "<BODY>"
echo "<H2>Users logged in are:</H2>"
echo "<PRE>"
who
echo "</PRE>"
echo "</BODY>"
echo "</HTML>"
I remembered by chance that I hadn't set the access rights!!!! I LOST 2 HOURS on it... I fixed it with just chmod 777 name.cgi (755 would have been enough, and is safer).
Sorry to all of you if I've wasted your time!
Bye

Related

udev: why does echo have no effect?

I'm trying to echo something when my USB device is plugged in, using a udev rule. But the echo seems to have no effect, while the other command works.
My script: autorun.sh
echo "-----------usb detect---------"
cp /home/root/data1 /home/root/data2
My rules file:
ACTION=="add", ATTRS{idVendor}=="16c3", ATTRS{idProduct}=="1536", RUN+="/home/root/autorun.sh"
What should I do now?
I solved the issue by redirecting the echo output to the serial console (a script run by udev has no terminal attached, so plain echo output simply goes nowhere):
echo "-----------usb detect---------" > /dev/ttymxc0

Strange behaviour with CGI

I have a small project that is almost finished. In it, I use some .cgi files to communicate between a microcontroller and a webserver. The problem is the following:
If I use this code, the project works fine:
#!/bin/bash
./moveRight
echo "Status: 204 No Content"
echo "Content-type: text/html"
echo ""
but when I use this code, nothing happens:
#!/bin/bash
echo "Status: 204 No Content"
echo "Content-type: text/plain"
echo ""
mkdir /tmp/stream
raspistill --nopreview -w 640 -h 480 -q 5 -o /tmp/stream/pic.jpg -tl 100 -t 9999999 -th 0:0:0 &
chmod 777 /dev/ttyAMA0
LD_LIBRARY_PATH=/usr/local/lib mjpg_streamer -i "input_file.so -f /tmp/stream -n pic.jpg" -o "output_http.so -p 8080 -w /var/www" &
What I want is to execute those bash commands when the .cgi is called. What do you think my problem could be? Or is there any workaround for this issue?
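One common cause here is that the backgrounded processes inherit the CGI script's stdout, so the web server keeps waiting on the open pipe even after the script exits; the web server's user may also lack permission to chmod /dev/ttyAMA0. A sketch with the background jobs fully detached (same commands as above, untested):
#!/bin/bash
echo "Status: 204 No Content"
echo "Content-type: text/plain"
echo ""
# -p: don't fail if the directory already exists
mkdir -p /tmp/stream
# redirect stdout/stderr so the web server doesn't wait on the open pipe
raspistill --nopreview -w 640 -h 480 -q 5 -o /tmp/stream/pic.jpg -tl 100 -t 9999999 -th 0:0:0 >/dev/null 2>&1 &
chmod 777 /dev/ttyAMA0
LD_LIBRARY_PATH=/usr/local/lib mjpg_streamer -i "input_file.so -f /tmp/stream -n pic.jpg" -o "output_http.so -p 8080 -w /var/www" >/dev/null 2>&1 &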

Bash script for deployment as a cgi

I have a bash script, running as a CGI, that copies and modifies a bunch of configuration files in my app, for fast deployment and testing.
The script is in the public_html folder of my hosting and it works perfectly, with permissions set to 755.
I am now making other CGI scripts in a subfolder, e.g. public_html/foo/bar.cgi, and giving them 755 permissions.
#!/bin/bash
echo "Content-type: text/html"
echo ""
# test if we are in the correct place
touch foo 2>&1
echo '<html>'
echo '<head>'
echo '<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">'
echo '</head>'
echo '<body>'
echo '<p>Hello World</p>'
echo '<p> </p>'
echo '<hr/>'
echo '</body>'
echo '</html>'
exit 0
This doesn't work; the server answers with a 403.
I have checked the folder permissions, and set them to 755 and 777, without results.
UPDATE: Changed the name from test.cgi to testscript.cgi and it's working now.
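For future readers: when a CGI in a subdirectory returns a 403, it helps to check the permissions of every directory on the path, not just the script itself; a quick way to do that, assuming util-linux is available (the path here is hypothetical):
# list owner and permissions for each component of the path
namei -l /home/user/public_html/foo/bar.cgi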

bash script to restart Apache automatically

I wrote a bash script to restart Apache when it hangs and send an email to the admin. The code is shown below; it restarts Apache if the number of Apache processes is zero. The problem: Apache sometimes hangs while its processes are still running, and in that case the script will not restart it.
What I need: how do I modify the code to restart Apache when it hangs even though the process count is not zero?
#!/bin/bash
if [ "$(pgrep -c apache2)" -eq 0 ]; then
    /etc/init.d/apache2 stop
    pkill -u www-data
    /etc/init.d/apache2 start
    echo "restarting....."
    SUBJECT="Apache auto restart"
    # Email recipient
    EMAIL="me@mydomain.com"
    # Email text/message
    EMAILMESSAGE="apache auto restart done"
    # send an email using /bin/mail; the message body goes on stdin, not as an argument
    echo "$EMAILMESSAGE" | /bin/mail -s "$SUBJECT" "$EMAIL"
fi
We used to have Apache segfaulting sometimes on a machine; here's the script we used trying to debug the problem while keeping Apache up. It ran from cron (as root) once every minute or so. It should be self-explanatory.
#!/bin/bash   # bash, not sh: the script uses [[ ... ]] below
# Script that checks whether apache is still up, and if not:
# - e-mail the last bit of log files
# - kick some life back into it
# -- Thomas, 20050606
PATH=/bin:/usr/bin
THEDIR=/tmp/apache-watchdog
EMAIL=yourself@example.com
mkdir -p $THEDIR
if ( wget --timeout=30 -q -P $THEDIR http://localhost/robots.txt )
then
# we are up
touch ~/.apache-was-up
else
# down! but if it was down already, don't keep spamming
if [[ -f ~/.apache-was-up ]]
then
# write a nice e-mail
echo -n "apache crashed at " > $THEDIR/mail
date >> $THEDIR/mail
echo >> $THEDIR/mail
echo "Access log:" >> $THEDIR/mail
tail -n 30 /var/log/apache2_access/current >> $THEDIR/mail
echo >> $THEDIR/mail
echo "Error log:" >> $THEDIR/mail
tail -n 30 /var/log/apache2_error/current >> $THEDIR/mail
echo >> $THEDIR/mail
# kick apache
echo "Now kicking apache..." >> $THEDIR/mail
/etc/init.d/apache2 stop >> $THEDIR/mail 2>&1
killall -9 apache2 >> $THEDIR/mail 2>&1
/etc/init.d/apache2 start >> $THEDIR/mail 2>&1
# send the mail
echo >> $THEDIR/mail
echo "Good luck troubleshooting!" >> $THEDIR/mail
mail -s "apache-watchdog: apache crashed" $EMAIL < $THEDIR/mail
rm ~/.apache-was-up
fi
fi
rm -rf $THEDIR
We never did figure out the problem...
Can the count of processes really be less than zero? Checking pgrep's exit status should be sufficient:
if ! pgrep -c apache2 > /dev/null; then
You could also try sending an HTTP request to Apache (e.g. using wget --timeout=10), and if that request times out or fails (exit status != 0), kill and restart Apache.
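A minimal sketch of that approach, reusing the watchdog's probe URL from above:
# probe Apache over HTTP; restart it if the request fails or times out
if ! wget --timeout=10 -q -O /dev/null http://localhost/robots.txt; then
    /etc/init.d/apache2 stop
    killall -9 apache2
    /etc/init.d/apache2 start
fi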
Why would Apache hang? Can you get to the cause?
There are a number of scripts and tools out there to 'daemonize' apps and watch over them. As you seem to be on Debian or Ubuntu, have a look at the packages daemon and daemontools. I am sure there are others too.

Is there a curl/wget option that prevents saving files in case of http errors?

I want to download a lot of URLs in a script, but I do not want to save the ones that lead to HTTP errors.
As far as I can tell from the man pages, neither curl nor wget provides such functionality.
Does anyone know of another downloader that does?
I think the -f option to curl does what you want:
-f, --fail
(HTTP) Fail silently (no output at all) on server errors. This is mostly done to better
enable scripts etc to better deal with failed attempts. In normal cases when an HTTP
server fails to deliver a document, it returns an HTML document stating so (which often
also describes why and more). This flag will prevent curl from outputting that and
return error 22. [...]
However, if the response was actually a 301 or 302 redirect, the redirect page itself still gets saved, even if its destination would result in an error:
$ curl -fO http://google.com/aoeu
$ cat aoeu
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
here.
</BODY></HTML>
To follow the redirect to its dead end, also give the -L option:
-L, --location
(HTTP/HTTPS) If the server reports that the requested page has moved to a different
location (indicated with a Location: header and a 3XX response code), this option will
make curl redo the request on the new place. [...]
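Putting the two together, a sketch (hypothetical URL); with -f the error page is not saved, even when it sits behind a redirect:
# fail on HTTP errors after following redirects; nothing useful is written on error
curl -fLO http://example.com/file.txt || echo "download failed"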
A one-liner I just set up for this very purpose (it works only with a single file, but might be useful for others):
A=$$; ( wget -q "http://foo.com/pipo.txt" -O $A.d && mv $A.d pipo.txt ) || ( rm -f $A.d; echo "Removing temp file" )
This attempts to download the file from the remote host. If there is an error, the file is not kept; in all other cases, it is kept and renamed.
Ancient thread... landed here looking for a solution... and ended up writing some shell code to do it.
if [ "$(curl -s -w "%{http_code}" --compressed -o /tmp/something \
      http://example.com/my/url/)" = "200" ]; then
    echo "yay"; cp /tmp/something /path/to/destination/filename
fi
This downloads the output to a temp file, and creates/overwrites the destination file only if the status was 200. My use case is slightly different: in my case the output takes more than 10 seconds to generate, and I did not want the destination file to sit empty for that long.
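A variant of the same idea using mktemp, so the temp path is unique and parallel runs don't clobber each other (same hypothetical URL):
tmpfile=$(mktemp)
if [ "$(curl -s -w "%{http_code}" --compressed -o "$tmpfile" http://example.com/my/url/)" = "200" ]; then
    mv "$tmpfile" /path/to/destination/filename
else
    rm -f "$tmpfile"
fi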
NOTE: I am aware that this is an older question, but I believe I have found a better solution for those using wget than any of the above answers provide.
wget -q $URL 2>/dev/null
will save the target file to the local directory if and only if the HTTP status code is within the 200 range (OK).
Additionally, if you want to print an error message whenever the request fails, you can check the wget exit code for non-zero values like so:
wget -q $URL 2>/dev/null
if [ $? -ne 0 ]; then
echo "There was an error!"
fi
I hope this is helpful to someone out there facing the same issues I was.
Update:
I just put this into a more script-able form for my own project, and thought I'd share:
function dl {
    pushd . > /dev/null
    cd "$(dirname "$1")"
    wget -q "$BASE_URL/$1" 2> /dev/null
    if [ $? -ne 0 ]; then
        echo ">> ERROR could not download file \"$1\"" 1>&2
        exit 1   # abort the whole script on failure
    fi
    popd > /dev/null
}
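Hypothetical usage; BASE_URL must be set first, and the local subdirectory must already exist for the cd to succeed:
BASE_URL="http://example.com/files"
dl "docs/readme.txt"   # fetches $BASE_URL/docs/readme.txt into docs/readme.txt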
I have a workaround to propose: it does download the file, but it also removes it if its size is 0 (which happens if a 404 occurs).
wget -O <filename> <url/to/file>
if [[ $(du <filename> | cut -f 1) == 0 ]]; then
    rm <filename>
fi
It works in zsh, but you can adapt it for other shells.
But it only saves the file in the first place if you provide the -O option.
As an alternative, you can create a temporary file and rotate it into place:
wget http://example.net/myfile.json -O myfile.json.tmp -t 3 -q && mv myfile.json.tmp myfile.json
The command always downloads to "myfile.json.tmp"; only when the wget exit status equals 0 is the file rotated into place as "myfile.json".
This prevents the final file from being overwritten when a network failure occurs.
The advantage of this method is that if something goes wrong, you can inspect the temporary file and see what error message was returned.
The "-t" parameter makes wget retry the download several times on error.
The "-q" flag enables quiet mode, which is important with cron, because cron reports any output wget produces.
The "-O" flag sets the output file path and name.
Remember that for cron schedules it is very important to always give the full path for every file, and in this case for the wget program itself as well.
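For example, a hypothetical crontab entry following that rule, with full paths everywhere:
*/5 * * * * /usr/bin/wget http://example.net/myfile.json -O /var/data/myfile.json.tmp -t 3 -q && /bin/mv /var/data/myfile.json.tmp /var/data/myfile.json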
You can also download the file without saving it, using the "-O -" option, which writes to stdout:
wget -O - http://jagor.srce.hr/
You can get more information at http://www.gnu.org/software/wget/manual/wget.html#Advanced-Usage