I am trying to write a CGI script to display a PNG image in a browser. When the user clicks the 'Submit' button on the HTML page, the CGI program is called to display the image.
But this is not working for me. The browser tries to open the PNG image, but shows the following error message instead:
"The image “http://localhost/cgi-bin/image.sh?subbtn=Submit” cannot be displayed, because it contains errors."
The following is the relevant snippet of the CGI code:
#!/bin/bash
echo "Content-type: text/html"
echo "Content-type: image/png"
echo ""
echo "<html>"
echo "<body>"
echo "Hi"
echo "<img src="/home/zaman/ssdggraph/SSDGhistory.png" alt="DG-Reports">"
echo "</body>"
echo "</html>"
If I remove the line below from the above code, then the text "DG-Reports" from the img tag's alt attribute is displayed.
echo "Content-type: image/png"
Also, if I put the same markup in a plain HTML page, the PNG image is displayed fine.
Please suggest what I am missing in the code to display the image without errors.
You should specify a URL for the <IMG>'s src attribute that can be reached from the browser.
/home/zaman/ssdggraph/SSDGhistory.png is unlikely to satisfy that requirement. (You should probably have something like http://path_known_to_webserver/SSDGhistory.png.)
You should probably also read up on the concepts of web pages and CGI scripts. What I would call a CGI "that displays an image" is a program referenced by such a URL, one that dynamically streams out the image data when the web page refers to it and the browser asks for it. That might be what you mean, but it is somewhat unclear.
You have to output the content type, then a blank line:
#!/bin/bash
# Exactly one Content-Type header, matching the data that follows
echo "Content-type: image/png"
# The blank line terminates the CGI headers
echo
# Stream the raw PNG bytes as the response body
cat /var/www/img/your-image.png
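Putting the two answers together: serve the HTML and the image as two separate responses. Below is a minimal sketch of the HTML side, assuming the image script above is installed as /cgi-bin/image.sh (the paths here are assumptions, not your actual setup):
#!/bin/bash
# page.sh -- emits the HTML page; the browser then fetches the image in a second request
echo "Content-type: text/html"
echo
echo "<html><body>"
echo "Hi"
# src must be a URL the browser can resolve, not a server-side filesystem path
echo "<img src=\"/cgi-bin/image.sh\" alt=\"DG-Reports\">"
echo "</body></html>"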
I've managed to locate the correct element on the web page, and I can click it. I see the download prompt come up, asking whether I want to open the PDF, open it in Firefox, or save it.
My code looks like this:
my $profile = Selenium::Firefox::Profile->new;
$profile->set_preference(
"browser.download.dir" => "/Users/john/Downloads",
"browser.download.folderList" => "1",
"browser.helperapps.neverAsk.SaveToDisk" => "application/pdf"
);
my $driver = Selenium::Firefox->new(
'binary' => "/usr/local/bin/geckodriver",
'firefox_profile' => $profile
);
[...]
$driver->find_child_element($driver->find_element_by_id('secHead_1'), "./a[\@class='phoDirFont']")->click();
My understanding is that if I've set up the correct preferences, the file should save without the prompt. That's not happening, though. I've dug into it with dev tools, and the server does seem to be serving the PDF with "application/pdf" as the MIME type. Firefox certainly recognizes it as one (offering to open it in Firefox, not just with the registered app).
If there is another method (perhaps sending keystrokes to the prompt), that would be acceptable too. Though I've been using Firefox (I'm trying to get away from Google products in my personal life), I could switch to Chrome if that would make a difference.
Looking at about:config in the Selenium-opened window, the preference settings seem to have taken effect. However, it still prompts for the file.
According to https://www.selenium.dev/documentation/en/worst_practices/file_downloads/, file downloads are not supported in Selenium.
You could try using curl instead, e.g.:
system("curl -s -o output.pdf -L $URL");
where -s stands for silent, -o specifies where to save the file, and -L tells curl to follow redirects.
If you need cookies, you can obtain them with:
my @cookies = $driver->get_all_cookies();
Extract whichever cookies you need and then pass them to curl with the --cookie parameter, like so:
system("curl -s -o output.pdf -L --cookie $cookie_one --cookie $another_cookie $URL");
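Note that curl expects each cookie as a single name=value string, so build $cookie_one accordingly. A quick sketch; the cookie names, values, and URL below are hypothetical:
# Both cookies and the URL are made up for illustration
curl -s -o output.pdf -L \
  --cookie "JSESSIONID=abc123" \
  --cookie "authtoken=xyz789" \
  "https://example.com/directory/report.pdf"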
I am happily converting docx files to PDF via the command line (driven by C# process calls) from my service.
Unfortunately, I could not find any internet search results on how to set the options for the output PDF that the GUI offers me. I am specifically looking to generate PDF/A and tagged PDF via the command line.
Has anyone ever done this and knows how to do it?
EDIT:
Obviously, getting a PDF/A can be done by using unoconv instead.
On Windows, one would use the following command line in a checked-out unoconv repository:
python.exe .\unoconv -f pdf -eSelectPdfVersion=1 C:\temp\libre\renderingtest.docx
I did not find further information on how to select other things (tagged PDF etc.) or where to get a complete list of the available options.
EDIT: It seems one can try the different options in the GUI. The settings get saved to C:\Users\<userName>\AppData\Roaming\LibreOffice\4\user\registrymodifications.xcu. One can then look up the changed setting and pass it to unoconv like this:
python.exe .\unoconv -f pdf -eUseTaggedPDF=1 -eSelectPdfVersion=1 C:\temp\libre\renderingtest.docx
Still not sure if I am doing this correctly, though.
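One way to spot which key a GUI change wrote is to search that file for PDF export properties. A sketch, assuming registrymodifications.xcu stores property names in oor:name attributes (run in a shell with grep available, e.g. Git Bash):
# List the PDF export properties LibreOffice has persisted for this user
grep 'PDF/Export' registrymodifications.xcu | grep -o 'oor:name="[^"]*"'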
The gotenberg project shows how that can be done using unoconv.
$ curl --request POST 'http://localhost:3000/forms/libreoffice/convert' --form 'files=@"doc.docx"' --form 'nativePdfFormat="PDF/A-1a"' -o pdfA.pdf
I just started working with shell_exec in PHP and am stuck at this point.
Below is my PHP script, which runs correctly in a terminal but not in a browser.
<?php
echo shell_exec("ssh -tq root@192.168.31.5 \"whoami\"");
?>
And the output in the terminal is:
$ php /var/www/html/monitor/ssh.php
root
But in the browser, the output is empty.
Interestingly, just whoami works like a charm:
<?php
echo shell_exec("whoami");
?>
Any suggestion is appreciated. Thank you!
EDIT: using ob_start() and ob_get_contents()
<?php
ob_start();
echo shell_exec("ssh -tq root@192.168.31.5 \"whoami\"");
$out1 = ob_get_contents();
ob_end_clean();
var_dump($out1);
?>
Output in terminal:
php /var/www/html/monitor/ssh.php
string(6) "root"
Output in browser (Chrome):
string(0) ""
That's because in the CLI you're executing the script as your own user (one that can ssh to root@192.168.31.5 in your case), but in the browser the one executing the script is your web server (apache/nginx). To get root as output in the browser you might want to have a look at the ob_start, ob_get_contents, and ob_flush functions.
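A quick way to reproduce what the web server sees is to run the same command as the web server's user. The user name www-data is an assumption (it may be apache or nginx on your system), and 2>&1 captures ssh's error output:
# Key or known_hosts problems for the web-server user will show up here
sudo -u www-data ssh -tq root@192.168.31.5 "whoami" 2>&1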
I am trying to download all PDFs from http://www.fayette-pva.com/.
I believe the problem is that when hovering over a PDF link, Chrome shows a URL in the bottom-left corner that has no .pdf file extension. I saw and used another forum answer for a similar case, but there the URL did end in .pdf when hovering over the link. I have tried the same code from the answer linked below, but it doesn't pick up the PDF files.
Here is the code I have been testing with:
wget --no-directories -e robots=off -A.pdf -r -l1 \
http://www.fayette-pva.com/sales-reports/salesreport03-feb-09feb2015/
I am using this on a single page which I know has a PDF on it.
The complete command should be something like:
wget --no-directories -e robots=off -A.pdf -r http://www.fayette-pva.com/
Related answer: WGET problem downloading pdfs from website
I am not sure whether downloading the entire website would work, or whether it would take forever. How do I get around this and download only the PDFs?
Yes, the problem is precisely what you stated: the URLs do not contain regular or absolute filenames, but are calls to a script/servlet/... which hands out the actual files.
The solution is to use the --content-disposition option, which tells wget to honor the Content-Disposition field in the HTTP response, which carries the actual filename:
HTTP/1.1 200 OK
(...)
Content-Disposition: attachment; filename="SalesIndexThru09Feb2015.pdf"
(...)
Connection: close
This option has been supported in wget since at least version 1.11.4, which is already 7 years old.
So you would do the following:
wget --no-directories --content-disposition -e robots=off -A.pdf -r \
http://www.fayette-pva.com/
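Before crawling the whole site, you can confirm that a given link really sends that header by inspecting the response headers. The URL below is hypothetical; substitute one of the actual PDF download links from the page:
# -I fetches only the headers; grep picks out the Content-Disposition line
curl -sI "http://www.fayette-pva.com/some-download-script?id=123" | grep -i '^content-disposition'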
Update: edited to make the question more understandable.
I am creating a script which automatically parses an HTTP file upload and stores information about the uploaded file, such as its name and time of upload, in another data file. This information comes from the mod_security log file. mod_security has a rule that can hand an uploaded file to a Perl script; in my case that script is upload.pl, in which I scan the uploaded file with the ClamAV antivirus. However, mod_security only logs the upload information (name, time of upload) after upload.pl has finished. So from upload.pl I start a second script, execute.pl, which begins with a sleep(10) so that it does its work only after upload.pl has completed. The intention is that execute.pl runs as a background process, and upload.pl finishes without waiting for execute.pl's output.
My issue is that even though I start execute.pl in the background, the HTTP upload still waits for execute.pl to complete. The scripts run fine from the console: if I run perl upload.pl there, it finishes without waiting for execute.pl. But when I go through Apache, i.e. when I upload a sample file, the upload hangs until both upload.pl and execute.pl have completed.
The methods I have tried so far are:
system("cmd &")
my $pid = fork();
if (defined($pid) && $pid==0) {
# background process
my $exit_code = system( $command );
exit $exit_code >> 8;
}
my $pid = fork();
if (defined($pid) && $pid==0) {
exec( $command );
}
Rephrase of the question:
How do I start a Perl daemon process from a Perl web script?
Answer:
The key is to close the streams of the background job, since they are shared with the web server:
Web script:
#!/usr/bin/perl
#print html header
print <<HTML;
Content-Type: text/html

<!doctype html public "-//W3C//DTD HTML 4.01 Transitional//EN">
<html><head><title>Webscript</title>
<meta http-equiv="refresh" content="2; url=http://<your host>/webscript.pl" />
</head><body><pre>
HTML
#get the ps headers
print `ps -ef | grep STIME | grep -v grep`;
#get the running background jobs
print `ps -ef | grep background.pl | grep -v grep`;
#find any running process...
$output = `cat /tmp/background.txt`;
#start the background job if there is no output yet
`<path of the background job>/background.pl &` unless $output;
#print the output file
print "\n\n---\n\n$output</pre></body></html>";
Background job:
#!/usr/bin/perl
#close the shared streams!
close STDIN;
close STDOUT;
close STDERR;
#do something useful!
for ( $i = 0; $i < 20; $i++ ) {
open FILE, '>>/tmp/background.txt';
printf FILE "Still running! (%2d seconds)\n", $i;
close FILE;
#getting tired of all the hard work...
sleep 1;
}
#done
open FILE, '>>/tmp/background.txt';
printf FILE "Done! (%2d seconds)\n", $i;
close FILE;
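If you would rather fix this at the call site instead of inside the background job, the shell-level equivalent is to detach the child's streams yourself. This is the command string you would pass to system() from upload.pl; the path is an assumption:
# Redirect all three streams away from Apache's connection, then background the job;
# Apache stops waiting because the child no longer holds the request's pipes open
/path/to/execute.pl </dev/null >/dev/null 2>&1 &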