PHP running a shell script to scp

I want to use PHP to run a shell script that sends a file from server 1 to server 2. I have server 1's public key written to server 2's authorized_keys, and that works perfectly.
For some reason the following script doesn't actually send the file from server 1 to server 2:
This is the page at http://server1.com/sendfile.php:
<?php
if (isset($_POST['a']))
{
    echo '<pre>';
    passthru('./scp.sh'); // passthru() writes the command's output directly
    echo '</pre>';
}
?>
<form method="post">
<button name="a" value="Af">Send File</button>
</form>
This is the contents of scp.sh:
#!/bin/sh
scp ../dbexport/db.txt someuser@server2.net:
So when I execute scp.sh from a terminal, everything works fine - the file actually gets sent and received.
But when I go to http://server1.com/sendfile.php and press the button, the PHP file actually executes the shell file (I confirmed this by putting echo statements before and after the scp command), but the file is never successfully received by server2.net.
Does anyone know why this might be?

Marc B answered my question with a comment... posting it here:
did you add the key to the webserver's account's authorized_keys? Just because it works from a shell running under YOUR permissions means absolutely nothing to a shell running under the webserver's ID. – Marc B Jan 9 at 19:47
ooooh yeahhhhh....i forgot about that – John Jan 9 at 19:57
yup it worked. thanks Marc! – John Jan 9 at 20:42
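For anyone hitting the same wall: a quick way to see which account the web server actually runs your script as is to ask PHP itself. This is a minimal sketch (it assumes shell_exec is not disabled); the key fix then happens in that account's ~/.ssh, not your interactive login user's:
<?php
// Minimal sketch: print which account the web server executes PHP as.
// The public key that scp presents must belong to *this* account (often
// www-data, apache, or nobody), not to your interactive login user.
echo 'Running as: ' . trim(shell_exec('whoami')) . "\n";
$home = getenv('HOME');
echo 'Home dir: ' . ($home ? $home : '(not set)') . "\n";
?>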

Related

How do we send canvas image data as an attachment to a server in Pharo?

How do we send or upload a data file to a server in Pharo? I saw an example of sending a file from a directory on the machine.
It works fine.
ZnClient new
    url: MyUrl;
    uploadEntityfrom: FileLocator home / 'path to the file';
    put
In my case I don't want to send/upload a file stored on my machine; instead I want to send/upload a file hosted somewhere else, or data I retrieved over the network, attached to another server.
How can we do that?
Based on your previous questions I presume you are using Linux. The issue here is not within Smalltalk/Pharo, but in the network mapping.
FTP
If you want to use FTP, don't forget that it sends the password in plaintext; set it up in a way that lets you mount it. There are probably plenty of ways to do this, but you can try using curlftpfs. You need the kernel module fuse for that, so make sure you have it loaded. If it is not loaded, you can do so via modprobe fuse.
The usage would be:
curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other
where you fill in username/password. The option allow_other allows other users on the system to use your mount.
(For more details you can see the Arch wiki and its curlftpfs page.)
Webdav
For WebDAV I would use the same approach, this time using davfs2.
You would mount it manually via the mount command:
mount -t davfs https://yoursite.net:<port>/path /mnt/webdav
There are two reasonable ways to set it up: systemd or fstab. The information below is taken from the davfs2 Arch wiki:
For systemd:
/etc/systemd/system/mnt-webdav-service.mount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Mount]
What=http(s)://address:<port>/path
Where=/mnt/webdav/service
Options=uid=1000,file_mode=0664,dir_mode=2775,grpid
Type=davfs
TimeoutSec=15
[Install]
WantedBy=multi-user.target
You can create a systemd automount unit to set a timeout:
/etc/systemd/system/mnt-webdav-service.automount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Automount]
Where=/mnt/webdav
TimeoutIdleSec=300
[Install]
WantedBy=remote-fs.target
The fstab way is easy if you have edited fstab before (it behaves the same as any other fstab entry):
/etc/fstab
https://webdav.example/path /mnt/webdav davfs rw,user,uid=username,noauto 0 0
For WebDAV you can even store the credentials securely:
Create a secrets file to store credentials for a WebDAV service, using ~/.davfs2/secrets for user mounts and /etc/davfs2/secrets for root:
/etc/davfs2/secrets
https://webdav.example/path davusername davpassword
Make sure the secrets file has the correct permissions; for root mounting:
# chmod 600 /etc/davfs2/secrets
# chown root:root /etc/davfs2/secrets
And for user mounting:
$ chmod 600 ~/.davfs2/secrets
Back to your Pharo/Smalltalk code:
I presume you read the above and have either /mnt/ftp or /mnt/webdav mounted.
For FTP, for example, your code would simply take the file from the mounted directory:
ZnClient new
    url: MyUrl;
    uploadEntityfrom: '/mnt/ftp/your_file_to_upload' asFileReference;
    put
Edit: based on the comments.
The issue is that the ZnClient configuration lives in Pharo itself, and the JSON file is also generated there.
One quick and dirty solution would be to do the above-mentioned mounting with a shell command:
With ftp for example:
| commandOutput |
commandOutput := (PipeableOSProcess command: 'curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other') output.
Transcript show: commandOutput.
The other approach is more sensible: use Pharo's FTP or WebDAV support via FileSystemNetwork.
To load FTP support only:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'FTP'
To load WebDAV support only:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'Webdav'
To get everything including tests:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    loadStable.
With that you should be able to get a file, for example over FTP:
| ftpConnection wDir file |
"Open a connection"
ftpConnection := FileSystem ftp: 'ftp://ftp.sh.cvut.cz/'.
"Getting working directory"
wDir := ftpConnection workingDirectory.
file := '/Arch/lastsync' asFileReference.
"Close connection - do always!"
ftpConnection close.
Then your upload (via FTP) would look like this:
| ftpConnection wDir file |
"Open a connection"
ftpConnection := FileSystem ftp: 'ftp://your_ftp'.
"Get the working directory"
wDir := ftpConnection workingDirectory.
file := '/<your_file_path>' asFileReference.
ZnClient new
    url: MyUrl;
    uploadEntityfrom: file;
    put.
"Close the connection - always do this!"
ftpConnection close.
The WebDAV version would be similar.

How to run CGI script in web browser?

I am setting up environment for CGI script. For that I am following this link: http://help.cs.umn.edu/web/cgi-tutorial
I have written a sample CGI script in my Ubuntu environment, which is hosted on another server. I connect to this server using PuTTY. I have successfully set the permissions on the script file and directory. Now I want to test it, as mentioned in the above link.
Below is the CGI script I have used:
#!/usr/bin/perl -w
use CGI;
$cgi = new CGI();
print $cgi->header();
print '<?xml version="1.0" encoding="UTF-8"?>';
print '<!DOCTYPE html>
<html>
<head>
<title>Perl CGI test</title>
</head>
<body>';
print '
<p>
Hello world!
</p>';
print '
</body>
</html>
';
I am opening http://www-users.cselabs.umn.edu/~<your_username>/test-cgi.cgi on my own system in Chrome and Firefox.
The following error is displayed:
Object not found!
The requested URL was not found on this server. If you entered the URL manually please check your spelling and try again.
If you think this is a server error, please contact the webmaster.
Error 404
www-users.cselabs.umn.edu
Thu Jun 11 01:56:35 2015
Apache.
How can I execute it successfully?
Note: the CGI script is written on the server, which is accessed via PuTTY and WinSCP, and I am opening the test URL on my own system, i.e. Windows, in Chrome and Firefox.

"End of script output before headers" - CGI on Apache

I have the following CGI script:
#!c:\cygwin\bin\perl.exe
use CGI qw(:standard);
my $query = $CGI->new;
print header (
-type => "text/html",
-status => "404 File not found"
);
print "<b>File not found</b>";
This gives me an error:
Server error!
The server encountered an internal error and was unable to complete your request.
Error message:
End of script output before headers: test.cgi
If you think this is a server error, please contact the webmaster.
Error 500
127.0.0.1
Apache/2.4.10 (Win32) OpenSSL/1.0.1h PHP/5.4.31
I've looked at this (and other similar) question(s), but there the headers were not being printed, as opposed to mine.
I'm using the XAMPP Windows package with Cygwin Perl.
Can anyone help? Thanks.
I don't know why you are using $CGI instead of CGI. I think it should be:
my $query = CGI->new;
Tested on Linux, working perfectly.
So, as others have pointed out, your problem was using a variable ($CGI) where you actually needed a class name (CGI). But, in my mind, this raises two more questions.
1/ Why are you trying to create a CGI object in the first place? You are using the function-based interface to CGI (print header(...) for example) so there's no need for a CGI object.
2/ Why are you writing a CGI program in 2014? Perl web programming has moved on a long way this millennium and you seem to be stuck in the 1990s :-/

Code works in PHP 5.3.2; in PHP 5.2.17 I get "Invalid argument supplied for foreach()"

I am using this code:
<?php // Load and parse the XML document
$rss = simplexml_load_file('http://partners.userland.com/nytRss/nytHomepage.xml');
$title = $rss->channel->title;
?>
<html xml:lang="en" lang="en">
<head>
<title><?php echo $title; ?></title>
</head>
<body>
<h1><?php echo $title; ?></h1>
<?php
// Here we'll put a loop to include each item's title and description
foreach ($rss->channel->item as $item) {
    echo "<h2><a href='" . $item->link . "'>" . $item->title . "</a></h2>";
    echo "<p>" . $item->description . "</p>";
}
?>
</body>
</html>
I got it from this site: www.ibm.com/developerworks/library/x-simplexml.html
I have one puzzling issue.
When I run the code on my development server it works with no problem.
When I run it on my web host's server I get this error report:
Warning: Invalid argument supplied for foreach() in /web1/............../test3.php on line 15
My development server is a TurnKey Linux LAMP server with PHP 5.3.2.
My web host has PHP 5.2.17 running on Linux.
Looking up the error message on the web seems to indicate that the data read from the XML feed is not being treated as an array by PHP 5.2.17.
The solutions on here under 'Invalid argument....foreach()' that I have tried do not resolve the issue.
Any ideas as to how to get around this?
It looks as though Torstein's instincts are 'on the button'; my webhost seems to be blocking the downloading of files from the internet.
I used this code segment to try to copy the remote file to a local copy:
if (!@copy('http://partners.userland.com/nytRss/nytHomepage.xml', './buffer.xml'))
The resulting file buffer.xml contained this:-
The URL you requested has been blocked. URL = partners.userland.com/nytRss/nytHomepage.xml
This is not specific to this URL; I get the same with a BBC newsfeed URL.
So, indications are that it is not a PHP problem!
I have raised the issue with my webhost provider.
One thing this problem has reminded me of is that the PHP error message & line number may be far removed from the actual problem!
Thanks to Torstein, mario & GusDe Cool for helping me get my head around this problem.
BillP
It looks like your webhost has disabled opening files over the internet.
I'm not sure how simplexml_load_file() works, but if you run phpinfo() on the website and options like allow_url_fopen and allow_url_include are disabled, that's a good indication.
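If phpinfo() confirms that allow_url_fopen is off (and the host isn't simply blocking the URL outright, as it turned out above), one workaround is to fetch the feed with cURL and parse the string instead. A minimal sketch, assuming the cURL extension is available:
<?php
// Fetch the feed with cURL instead of relying on allow_url_fopen,
// then parse the XML from the returned string.
$url = 'http://partners.userland.com/nytRss/nytHomepage.xml';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body as a string
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
$xml = curl_exec($ch);
curl_close($ch);
$rss = ($xml !== false) ? simplexml_load_string($xml) : false;
if ($rss === false) {
    die('Could not fetch or parse the feed.');
}
?>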

Unexpected Connection Reset: A PHP or an Apache issue?

I have a PHP script that keeps stopping at the same place every time and my browser reports:
The connection to the server was reset
while the page was loading.
I have tested this on Firefox and IE, same thing happens. So, I am guessing this is an Apache/PHP config problem. Here are few things I have set.
php.ini
max_execution_time = 300000
max_input_time = 300000
memory_limit = 256M
Apache (httpd.conf)
Timeout 300000
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 0
Are the above correct? What can be causing this and what can I set?
I am running PHP (5.2.12.12) as a module on Apache (2.2) on Windows Server 2003.
It is very likely this is an Apache or PHP issue as all browsers do the same thing. I think the script runs for exactly 10 mins (600 seconds).
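Before anything else, it is worth confirming which limits are actually in effect for the Apache module, since the php.ini being edited is not always the one being loaded. A minimal sketch:
<?php
// Print the limits PHP actually runs with, which can differ from the
// edited php.ini if Apache loads a different configuration file.
echo 'Loaded php.ini: ' . php_ini_loaded_file() . "\n";
echo 'max_execution_time: ' . ini_get('max_execution_time') . "\n";
echo 'max_input_time: ' . ini_get('max_input_time') . "\n";
echo 'memory_limit: ' . ini_get('memory_limit') . "\n";
?>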
I had a similar issue - it turned out apache2 was segfaulting. The cause of the segfault was php5-xdebug for 5.3.2-1ubuntu4.14 on Ubuntu 10.04 LTS. Removing xdebug fixed the problem.
I also had this problem today, it turned out to be a stray break; statement in the PHP code (outside of any switch or any loop), in a function with a try...catch...finally block.
Looks like PHP crashes in this situation:
<?php
function a()
{
    break; // stray break outside any loop or switch
    try
    {
    }
    catch (Exception $e)
    {
    }
    finally
    {
    }
}
This was with PHP version 5.5.5.
Differences between 2 PHP configs were indeed the root cause of the issue on my end. My app is based on the NuSOAP library.
On config 1 with PHP 5.2, it was running fine as PHP's SOAP extension was off.
On config 2 with PHP 5.3, it was giving "Connection Reset" errors as PHP's SOAP extension was on.
Switching the extension off allowed me to get my app running on PHP 5.3 without having to rewrite everything.
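If you suspect the same clash, you can check at runtime whether the native extension is loaded before deciding what to disable; a minimal sketch:
<?php
// Detect a conflicting built-in SOAP extension at runtime.
if (extension_loaded('soap')) {
    // PHP's native SOAP extension is on; NuSOAP's classes may collide with it.
    echo "Native SOAP extension is enabled.\n";
}
// To turn it off, comment the extension out in php.ini, e.g.:
//   ;extension=php_soap.dll   (Windows)
//   ;extension=soap.so        (Linux)
?>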
I had an issue where, in certain cases, PHP 5.4 + eAccelerator = connection reset. There was no error output in any log files, and it only happened on certain URLs, which made it difficult to diagnose. It turned out it only happened for certain PHP code / certain PHP files, due to some incompatibilities between specific PHP code and eAccelerator. The easiest solution was to disable eAccelerator for that specific site, by adding the following to the .htaccess file:
php_flag eaccelerator.enable 0
php_flag eaccelerator.optimizer 0
(or equivalent lines in php.ini):
eaccelerator.enable="0"
eaccelerator.optimizer="0"
It's an old post, I know, but since I couldn't find the solution to my problem anywhere and I've fixed it, I'll share my experience.
The main cause of my problem was a file_exists() function call.
The file actually existed, but for some reason an extra forward slash in the file location ("//"), which normally works in a regular browser, does not seem to work in PHP. Maybe your problem is related to something similar. Hope this helps someone!
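If you suspect the same issue, collapsing repeated slashes before calling file_exists() is a cheap safeguard. A minimal sketch with a hypothetical path:
<?php
// Collapse accidental double slashes in a filesystem path before
// handing it to file_exists(). (Do not apply this to URLs, where the
// "//" after the scheme is significant.)
$path = '/var/www//uploads/report.txt';      // hypothetical example
$path = preg_replace('#/{2,}#', '/', $path); // -> /var/www/uploads/report.txt
var_dump(file_exists($path));
?>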
I'd try setting all of the error reporting options:
-b    on error batch abort
-V    severitylevel
-m    error_level
and sending all the output to the client:
<?php
echo "<div>starting sql batch</div>\n<pre>"; flush();
passthru('sqlcmd -b -m -1 -V 11 -l 3 -E -S TYHSY-01 -d newtest201 -i "E:\PHP_N\M_Create_Log_SP.sql"');
echo '</pre>done.'; flush();
My PHP was segfaulting without any additional information as to the cause of it as well. It turned out to be two classes calling each other's magic __call() method because both of them didn't have the method being called. PHP just loops until it's out of memory. But it didn't report the usual "Allowed memory size of * bytes exhausted" message, probably because the methods are "magic".
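For illustration, here is a minimal sketch (with hypothetical class names) of that mutual __call() trap; uncommenting the last line sends PHP into unbounded recursion with no method ever being found:
<?php
// Two classes whose __call() methods forward unknown method calls to each
// other. Neither actually implements foo(), so the forwarding ping-pongs
// until PHP runs out of memory.
class A
{
    public $peer;
    public function __call($name, $args)
    {
        // A has no $name method, so pass the call on to B
        return call_user_func_array(array($this->peer, $name), $args);
    }
}
class B
{
    public $peer;
    public function __call($name, $args)
    {
        // B has no $name method either, so pass it straight back to A
        return call_user_func_array(array($this->peer, $name), $args);
    }
}
$a = new A();
$b = new B();
$a->peer = $b;
$b->peer = $a;
// $a->foo(); // uncomment to reproduce the endless __call() loop
?>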
I thought I would add my own experience as well.
I was getting the same error message, which in my case was caused by a PHP error in an exception.
The culprit was a custom exception class that did some logging internally, and a fatal error occurred in that logging mechanism. This caused the exception not to be raised as expected, and no meaningful message to be displayed either.
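A minimal sketch of that failure mode, with hypothetical names: the exception's own constructor logs, the logger blows up, and the fatal error masks the original message:
<?php
// A custom exception that logs from its constructor. If the logging
// itself hits a fatal error, the request dies before the exception is
// ever thrown, and the original message is never displayed.
class LoggedException extends Exception
{
    public function __construct($message)
    {
        parent::__construct($message);
        $this->logToFile($message);
    }
    private function logToFile($message)
    {
        $logger = null; // imagine a misconfigured or missing logger here
        $logger->write($message); // fatal: method call on a non-object
    }
}
?>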