Run a binary CGI file on the command line with GET params

How can I run a binary CGI file on the command line and provide GET parameters to it?
I understand this task may be straightforward for Perl or PHP files, but I've got a binary CGI file and no documentation for it. I'd like to run it without a web server so that I can evaluate certain problems on some co-workers' machines.
I've tried the following, but to no avail:
QUERY_STRING="foo=bar" ./myfile.cgi
foo=bar ./myfile.cgi
./myfile.cgi foo=bar
./myfile.cgi <<< "foo=bar"
In each case, the script outputs Error in form found<br>Missing foo<br><b></b><br>. (When executed through Apache on our server, it returns no error message, only the intended results.)

Specifying the environment variable REQUEST_METHOD=GET in addition to QUERY_STRING=... makes the difference.
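For example, with both variables set, the binary runs directly from the shell:
REQUEST_METHOD=GET QUERY_STRING="foo=bar" ./myfile.cgi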
Among the tokens output by strings myfile.cgi were a number of CGI-related variables which the web server might be expected to set, such as REMOTE_ADDR, SERVER_SOFTWARE, and SERVER_PROTOCOL, but the two aforementioned variables proved enough to get this executable to produce non-error output.
(For a POST, I've read that the body/params are read from stdin, but I haven't substantiated that personally.)
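An untested sketch of the POST variant, based on the CGI convention that the body arrives on stdin and that the server normally sets CONTENT_TYPE and CONTENT_LENGTH (here 7, the byte length of foo=bar):
echo -n "foo=bar" | REQUEST_METHOD=POST CONTENT_TYPE=application/x-www-form-urlencoded CONTENT_LENGTH=7 ./myfile.cgi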

Related

Executing an additional command in the backend that takes the to-be-generated file

I'm currently looking for a way to execute iverilog from Yosys, to be more exact, at the write_verilog step.
I need to feed iverilog the file that will be generated by write_verilog (the reason is that I need to preserve the variable source information, which is kept in the Yosys attributes).
However, the execute() function only writes into the file when the function ends.
If I call iverilog testbench.v design.v, with design.v being the file generated through write_verilog, I get an error telling me it's missing modules.
Is it possible to carry out commands that depend on the file generated after execute() has run, while still being in the Verilog backend?
You could use a script instead, to run iverilog after write_verilog. Inside a Yosys script, a line beginning with ! is passed to the shell:
write_verilog design.v
!iverilog testbench.v design.v
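Assuming the two lines above are saved as run.ys (the name is illustrative), the script would be invoked as:
yosys run.ys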

Printing a variable in the server code

I have inherited some code that is written in Perl and makes HTTP requests between the server and the client. I want to print a few variables that are in the server code, but that raises errors when I use the print statement. The variables are scalars, arrays, or hashes. I want to print the output to the terminal, and only for debugging purposes. A few of the errors I get are:
malformed header from script 'get_config': Bad header: self=$VAR1 = bless( {
Response header name 'self=Bio' contains invalid characters, aborting request
A simple print 'test' raises an error like
malformed header from script 'get_config': Bad header: test
How do I print the variable values without any errors?
You haven't explained yourself very well at all. But, from the errors you're getting, I assume this is a CGI program.
A CGI program sends its output to STDOUT, where the web server catches it and processes it in various ways. In order for this to work, the data that your program prints to STDOUT needs to follow various rules. Probably the most important of those rules is that the first output from your program must be the CGI headers - and at the least, those headers should include a Content-type: header.
I assume that you're trying to display your debugging output before your program has sent the CGI headers. That's not going to work.
But do you really want to send your debugging output to STDOUT? That seems like a bad idea. If you use warn() instead of print() then your output will go to STDERR instead - and in most web servers, STDERR is connected to the web server's error log.
For more control over the output generated by warn(), see the CGI::Carp module.
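A minimal sketch of the warn() approach, using Data::Dumper (which, judging by the $VAR1 output in the error messages, is already in play; $self stands in for whatever variable you want to inspect):
use Data::Dumper;
warn 'self: ' . Dumper($self);    # goes to STDERR, i.e. the web server's error log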

How to redirect output of a running process to a file in Linux Shell

I am experimenting a bit with the airmon-ng script in Linux. Meanwhile, I want to redirect the output of a process, airodump-ng mon0, to a file. I can see the instantaneous output on the screen. The feature of this process is that it won't stop execution (actually it is a script to scan for APs and clients; it will keep on scanning) unless you press Ctrl+C.
Whenever I try
airodump-ng mon0 > file.txt
I don't get the output in the file.
My primary assumption is that the shell will write it to the file only after completing the execution. But in the above case I stopped the execution myself (as the execution won't complete on its own).
So, to generalize: I can't redirect the output of a running process to a file. How can I do that?
Or is there an alternative way to stop the execution of the process (for example, after 5 seconds) and redirect the current output to a file?
A process may send output to standard output or standard error to get it to the terminal. Generally, the former is for information and the latter is for errors, but in some cases a process may mix them up.
I'm supposing that in your case standard error is being used. To get both of these into the output file, you can use:
airodump-ng mon0 > file.txt 2>&1
This says to send standard output to file.txt and to redirect file descriptor 2 (standard error) into file descriptor 1 (standard output) so that it also goes to the file.
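For the "stop after 5 seconds" part of the question, the coreutils timeout command (assuming it is available on the machine) can bound the run while capturing both streams:
timeout 5 airodump-ng mon0 > file.txt 2>&1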

How to add a user-defined function in the QDB library?

QDB is a database provided with the QNX Neutrino package. I went through the QDB documentation to add a user-defined SQL function: http://www.qnx.com/developers/docs/6.5.0/topic/com.qnx.doc.qdb_en_dev_guide/writing_functions.html?cp=2_0_8
I created a source file containing my user-defined SQL function, written in C, and the qdb_function structure definition. I built it with a makefile to create libudf.so.
As suggested by QDB, I added Function = udftag#libudf.so in qdb.cfg. But when running qdb at the shell prompt, it gives the following error:
qdb -I basic -V -R set -v -c /etc/sql/qdb.cfg -s de_DE#cldr -o tempstore=/fs/tmpfs
QDB: No script registered for handling corrupt database.
qdb: processing [TempMainAddressBook]Function - Can't access shared library
and qdb exits immediately.
I have tried the following things:
made sure the sqlite3 library is added in the makefile
kept the source code strictly in C, using the extern "C" directive to avoid name mangling since the file extension is .cpp (I also tried with a .c extension)
gave the absolute path of libudf.so in qdb.cfg, as Function = udftag#/usr/lib/libudf.so
made sure the qdb_function struct is properly defined in the library's source code
tried without the static declaration of the function (mentioned in the QDB docs)
After all these trials, I am still getting the same error every time: Can't access shared library.
If anyone has any idea how to resolve this error, please share.
Suggestion 1: run qdb by setting LD_DEBUG=1, like in:
LD_DEBUG=1 qdb command line options
This will output a lot of debug information from the dynamic loader as it attempts to locate and then load the .so files. Check what path it prints before the "Can't access" message is displayed.
Suggestion 2: obvious, but make sure the permissions on the .so file are OK. Do you have the execute permission set?
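A quick way to check (and, if needed, fix) this from the shell, using the absolute path given in the question:
ls -l /usr/lib/libudf.so
chmod a+rx /usr/lib/libudf.so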
Suggestion 3: check whether the error message is identical if you completely remove the .so file from the system.
Suggestion 4: increase the number of lower-case 'v's. QDB likely supports more, with progressively more verbose information provided as you increase the number (6 should be enough for full verbosity).
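For instance, combining suggestions 1 and 4 with the command line from the question:
LD_DEBUG=1 qdb -I basic -V -R set -vvvvvv -c /etc/sql/qdb.cfg -s de_DE#cldr -o tempstore=/fs/tmpfs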

limitations of #! in scripts

It seems as if a script with a #! prefix can have the interpreter name and ONLY one argument. Thus:
#!/bin/ls -l
works, but
#!/usr/bin/env ls -l
doesn't
Do you agree? Any thoughts?
Francesc
Different Unixes interpret #! differently. Here's a comprehensive-looking writeup: http://www.in-ulm.de/~mascheck/various/shebang/
It seems that the lowest common denominator across platforms is "the interpreter (which must not itself be a script) and no more than one argument".
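One way to observe the single-argument behaviour on Linux (a throwaway test script; the file name is illustrative):
printf '#!/usr/bin/env ls -l\n' > shebang-test
chmod +x shebang-test
./shebang-test
On a typical Linux box this fails with something like /usr/bin/env: 'ls -l': No such file or directory, because the kernel handed "ls -l" to env as a single argument, so env looks for a program literally named "ls -l".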
Originally, we only had one shell on Unix. When you asked to run a command, the shell would attempt to invoke one of the exec() system calls on it. If the command was an executable, the exec would succeed and the command would run. If the exec() failed, the shell would not give up; instead it would try to interpret the command file as if it were a shell script.
Then Unix got more shells and the situation became confused. Most folks would write scripts in one shell and type commands in another. And each shell had differing rules for feeding scripts to an interpreter.
This is when the "#! /" trick was invented. The idea was to let the kernel's exec() system calls succeed with shell scripts. When the kernel tries to exec() a file, it looks at the first 4 bytes, which represent an integer called a magic number. This tells the kernel whether it should try to run the file or not. So "#! /" was added to the magic numbers the kernel knows, and the kernel was extended to actually be able to run shell scripts by itself. But some people could not type "#! /"; they kept leaving the space out. So the kernel was extended a bit again to allow "#!/" to work as a special 3-byte magic number.